From noreply at buildbot.pypy.org  Sun Jan  1 09:22:31 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sun, 1 Jan 2012 09:22:31 +0100 (CET)
Subject: [pypy-commit] pypy default: Update the year here (fix tool/test/test_license.py)
Message-ID: <20120101082231.2537982111@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: 
Changeset: r50969:d0d0b1bbbee8
Date: 2012-01-01 09:21 +0100
http://bitbucket.org/pypy/pypy/changeset/d0d0b1bbbee8/

Log:	Update the year here (fix tool/test/test_license.py)

diff --git a/LICENSE b/LICENSE
--- a/LICENSE
+++ b/LICENSE
@@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE.
 
-PyPy Copyright holders 2003-2011
+PyPy Copyright holders 2003-2012
 -----------------------------------
 
 Except when otherwise stated (look for LICENSE files or information at

From noreply at buildbot.pypy.org  Sun Jan  1 13:15:31 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sun, 1 Jan 2012 13:15:31 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: Add a section to the documentation. Not implemented yet.
Message-ID: <20120101121531.56B7E82111@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r50970:ca9580042f31
Date: 2012-01-01 13:14 +0100
http://bitbucket.org/pypy/pypy/changeset/ca9580042f31/

Log:	Add a section to the documentation. Not implemented yet.

diff --git a/pypy/rpython/memory/gc/concurrentgen.txt b/pypy/rpython/memory/gc/concurrentgen.txt
--- a/pypy/rpython/memory/gc/concurrentgen.txt
+++ b/pypy/rpython/memory/gc/concurrentgen.txt
@@ -171,3 +171,130 @@ introduced by this step by doing the major collection just after the
 previous minor collection finished, when the quantity of new young
 objects should still be small.
+
+
+
+************************************************************
+ Global overview
+************************************************************
+
+The objects never physically move with this GC; in the pictures
+below, they "move" only in the sense that their age changes.
+
+Allocate new objects until 25% of the total RAM is reached:
+
+      25%         25%                50%
+ +-----------+-----------+-----------------------+
+ |           |           |                       |
+ |->new...   |   free    |     old objects       |
+ +-----------+-----------+-----------------------+
+
+When the 25% is full, the new objects become aging, and we do a minor
+collection on them.  In parallel we continue to allocate more new
+objects.
+
+      25%         25%                50%
+ +-----------+-----------+-----------------------+
+ |           |   aging   |                       |
+ |->new...   |collecting |     old objects       |
+ +-----------+-----------+-----------------------+
+
+When the minor collection finishes, the surviving objects (let's say 5%)
+have become old, and the non-surviving ones (let's say 20%) become free
+space again.
+
+      25%       20%                  55%
+ +-----------+---------+-------------------------+
+ |           |         |<                        |
+ |->new...   |  free   |<      old objects       |
+ +-----------+---------+-------------------------+
+
+The limit on new objects is still fixed at 25% of the RAM.  When it is
+full, we start the next minor collection with the aging objects using
+25% of the RAM, and we only have 20% of space for further new objects:
+
+      20%       25%                  55%
+ +---------+-----------+-------------------------+
+ |         |<          |                         |
+ |->new... |<  aging   |      old objects        |
+ +---------+-----------+-------------------------+
+
+It is still likely that we finish the minor collection before the 20%
+runs out.  If we do, then we expand the limit from 20% back to the
+original 25%, and continue running as above.
+
+      25%      15%                   60%
+ +-----------+-------+---------------------------+
+ |          >|       |<                          |
+ |->new...  >|  free |<       old objects        |
+ +-----------+-------+---------------------------+
+
+We continue until the free fraction becomes so small (let's say 5%)
+that during a minor collection we run out of these 5% for new objects
+before the minor collection manages to finish:
+
+  5%      25%                  70%
+ +--+-----------+--------------------------------+
+ |  |<          |                                |
+ |->|<  aging   |          old objects           |
+ +--+-----------+--------------------------------+
+
+At this point we want a full collection to occur.  In order to do that,
+we first have to wait for the minor collection to finish:
+
+  5%     20%                   75%
+ +--+---------+----------------------------------+
+ |wa|         |<                                 |
+ |it|  free   |<          old objects            |
+ +--+---------+----------------------------------+
+
+Then we do "Step 2+" above, forcing another synchronous minor
+collection.  (This is the only time during which the main thread has to
+wait for the collector to finish; hopefully it is a very minor issue,
+because it occurs only at the start of a major collection, and we wait
+for: (1) the end of the previous minor collection, which should ideally
+be almost done already; and (2) an extra complete minor collection, but
+that occurs on fewer objects than usual -- above, 5% instead of 25%.)
+
+      24%                     76%
+ +-----------+-----------------------------------+
+ |           |<                                  |
+ |   free    |<           old objects            |
+ +-----------+-----------------------------------+
+
+The old objects are transformed into aging objects, and we start the
+collection on them, while resuming the main thread:
+
+      24%                     76%
+ +-----------+-----------------------------------+
+ |           |                                   |
+ |->new...   |           aging objects           |
+ +-----------+-----------------------------------+
+
+Here, the limiting factor is that we would like the main thread not to
+run out of its 24% while the major collection is in progress.  If it
+does, then it has to be suspended and wait for the collector thread.
+But hopefully, possibly after some tuning of the numbers, the major
+collection finishes first.  It finds a proportion of old objects to be
+still alive.  A value above 50% means that the memory usage of the
+actual program has grown; a value below 50% means that it has shrunk.
+In any case we need to re-adjust the sizes of the other portions to keep
+the ratio 25/25/50 of the very first picture above, changing the total
+amount of RAM used by the process.  Say the surviving objects amount to
+60% of the old RAM size:
+
+      30%           30%                  60%
+ +-------------+-------------+---------------------------+
+ |            >|            >|        surviving          |
+ |->new...    >|    free    >|       old objects         |
+ +-------------+-------------+---------------------------+
+
+And we continue as per the first picture above.
+
+If the size of the surviving old objects shrinks instead of growing, we
+might need to give less than a quarter to the free area, because the new
+objects (24% above) might take more than a quarter of the new total RAM
+size.  To avoid too many issues, we constrain the total RAM size not to
+shrink too much at each major collection (something like: at most -30%).
+Additionally we fix an absolute minimum (at least 6 MB), to avoid doing
+a large number of tiny minor collections, ending up spending all of our
+time in Step 2 scanning the stack of the process.

From noreply at buildbot.pypy.org  Sun Jan  1 13:15:34 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sun, 1 Jan 2012 13:15:34 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: hg merge default
Message-ID: <20120101121534.3327A82111@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r50971:01950ba0f81a
Date: 2012-01-01 13:15 +0100
http://bitbucket.org/pypy/pypy/changeset/01950ba0f81a/

Log:	hg merge default

diff --git a/LICENSE b/LICENSE
--- a/LICENSE
+++ b/LICENSE
@@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE.
 
-PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -619,7 +619,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -655,7 +656,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -674,7 +676,8 @@ self.descr_reqcls, args.prepend(w_obj)) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -690,7 +693,8 @@ raise OperationError(space.w_SystemError, space.wrap("unexpected DescrMismatch error")) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -708,7 +712,8 @@ self.descr_reqcls, Arguments(space, [w1])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -726,7 +731,8 @@ self.descr_reqcls, Arguments(space, [w1, w2])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -744,7 +750,8 @@ self.descr_reqcls, Arguments(space, [w1, w2, w3])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ 
-763,7 +770,8 @@ Arguments(space, [w1, w2, w3, w4])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -59,7 +59,8 @@ self.is_guard_not_invalidated = is_guard_not_invalidated DEBUG_COUNTER = lltype.Struct('DEBUG_COUNTER', ('i', lltype.Signed), - ('bridge', lltype.Signed), # 0 or 1 + ('type', lltype.Char), # 'b'ridge, 'l'abel or + # 'e'ntry point ('number', lltype.Signed)) class Assembler386(object): @@ -150,10 +151,12 @@ debug_start('jit-backend-counts') for i in range(len(self.loop_run_counters)): struct = self.loop_run_counters[i] - if not struct.bridge: + if struct.type == 'l': prefix = 'TargetToken(%d)' % struct.number + elif struct.type == 'b': + prefix = 'bridge ' + str(struct.number) else: - prefix = 'bridge ' + str(struct.number) + prefix = 'entry ' + str(struct.number) debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') @@ -425,7 +428,7 @@ self.setup(looptoken) if log: operations = self._inject_debugging_code(looptoken, operations, - False, looptoken.number) + 'e', looptoken.number) regalloc = RegAlloc(self, self.cpu.translate_support_code) # @@ -492,7 +495,7 @@ self.setup(original_loop_token) if log: operations = self._inject_debugging_code(faildescr, operations, - True, descr_number) + 'b', descr_number) arglocs = self.rebuild_faillocs_from_descr(failure_recovery) if not we_are_translated(): @@ -599,15 +602,15 @@ return self.mc.materialize(self.cpu.asmmemmgr, allblocks, self.cpu.gc_ll_descr.gcrootmap) - def _register_counter(self, bridge, number, token): + def _register_counter(self, tp, number, token): # YYY very minor leak -- we need the counters to stay alive # forever, just because we want to report them at the end # of the process struct = 
lltype.malloc(DEBUG_COUNTER, flavor='raw', track_allocation=False) struct.i = 0 - struct.bridge = int(bridge) - if bridge: + struct.type = tp + if tp == 'b' or tp == 'e': struct.number = number else: assert token @@ -657,8 +660,8 @@ targettoken._x86_loop_code += rawstart self.target_tokens_currently_compiling = None - def _append_debugging_code(self, operations, bridge, number, token): - counter = self._register_counter(bridge, number, token) + def _append_debugging_code(self, operations, tp, number, token): + counter = self._register_counter(tp, number, token) c_adr = ConstInt(rffi.cast(lltype.Signed, counter)) box = BoxInt() box2 = BoxInt() @@ -670,7 +673,7 @@ operations.extend(ops) @specialize.argtype(1) - def _inject_debugging_code(self, looptoken, operations, bridge, number): + def _inject_debugging_code(self, looptoken, operations, tp, number): if self._debug: # before doing anything, let's increase a counter s = 0 @@ -679,13 +682,12 @@ looptoken._x86_debug_checksum = s newoperations = [] - if bridge: - self._append_debugging_code(newoperations, bridge, number, - None) + self._append_debugging_code(newoperations, tp, number, + None) for op in operations: newoperations.append(op) if op.getopnum() == rop.LABEL: - self._append_debugging_code(newoperations, bridge, number, + self._append_debugging_code(newoperations, 'l', number, op.getdescr()) operations = newoperations return operations diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -546,13 +546,16 @@ struct = self.cpu.assembler.loop_run_counters[0] assert struct.i == 1 struct = self.cpu.assembler.loop_run_counters[1] + assert struct.i == 1 + struct = self.cpu.assembler.loop_run_counters[2] assert struct.i == 9 self.cpu.finish_once() finally: debug._log = None + l0 = ('debug_print', 'entry -1:1') l1 = ('debug_print', preambletoken.repr_of_descr() + ':1') l2 = 
('debug_print', targettoken.repr_of_descr() + ':9') - assert ('jit-backend-counts', [l1, l2]) in dlog + assert ('jit-backend-counts', [l0, l1, l2]) in dlog def test_debugger_checksum(self): loop = """ diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -162,7 +162,6 @@ _ll_4_list_setslice = rlist.ll_listsetslice _ll_2_list_delslice_startonly = rlist.ll_listdelslice_startonly _ll_3_list_delslice_startstop = rlist.ll_listdelslice_startstop -_ll_1_list_list2fixed = lltypesystem_rlist.ll_list2fixed _ll_2_list_inplace_mul = rlist.ll_inplace_mul _ll_2_list_getitem_foldable = _ll_2_list_getitem diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -112,33 +112,26 @@ """ from pypy.jit.metainterp.optimizeopt import optimize_trace - history = metainterp.history metainterp_sd = metainterp.staticdata jitdriver_sd = metainterp.jitdriver_sd + history = metainterp.history - if False: - part = partial_trace - assert False - procedur_token = metainterp.get_procedure_token(greenkey) - assert procedure_token - all_target_tokens = [] - else: - jitcell_token = make_jitcell_token(jitdriver_sd) - part = create_empty_loop(metainterp) - part.inputargs = inputargs[:] - h_ops = history.operations - part.resume_at_jump_descr = resume_at_jump_descr - part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(jitcell_token))] + \ - [h_ops[i].clone() for i in range(start, len(h_ops))] + \ - [ResOperation(rop.LABEL, jumpargs, None, descr=jitcell_token)] + jitcell_token = make_jitcell_token(jitdriver_sd) + part = create_empty_loop(metainterp) + part.inputargs = inputargs[:] + h_ops = history.operations + part.resume_at_jump_descr = resume_at_jump_descr + part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(jitcell_token))] + \ + [h_ops[i].clone() for i in 
range(start, len(h_ops))] + \ + [ResOperation(rop.LABEL, jumpargs, None, descr=jitcell_token)] - try: - optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) - except InvalidLoop: - return None - target_token = part.operations[0].getdescr() - assert isinstance(target_token, TargetToken) - all_target_tokens = [target_token] + try: + optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) + except InvalidLoop: + return None + target_token = part.operations[0].getdescr() + assert isinstance(target_token, TargetToken) + all_target_tokens = [target_token] loop = create_empty_loop(metainterp) loop.inputargs = part.inputargs @@ -176,10 +169,10 @@ loop.original_jitcell_token = jitcell_token for label in all_target_tokens: assert isinstance(label, TargetToken) - label.original_jitcell_token = jitcell_token if label.virtual_state and label.short_preamble: metainterp_sd.logger_ops.log_short_preamble([], label.short_preamble) jitcell_token.target_tokens = all_target_tokens + propagate_original_jitcell_token(loop) send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop") record_loop_or_bridge(metainterp_sd, loop) return all_target_tokens[0] @@ -247,11 +240,11 @@ for box in loop.inputargs: assert isinstance(box, Box) - target_token = loop.operations[-1].getdescr() + target_token = loop.operations[-1].getdescr() resumekey.compile_and_attach(metainterp, loop) + target_token = label.getdescr() assert isinstance(target_token, TargetToken) - target_token.original_jitcell_token = loop.original_jitcell_token record_loop_or_bridge(metainterp_sd, loop) return target_token @@ -288,6 +281,15 @@ assert i == len(inputargs) loop.operations = extra_ops + loop.operations +def propagate_original_jitcell_token(trace): + for op in trace.operations: + if op.getopnum() == rop.LABEL: + token = op.getdescr() + assert isinstance(token, TargetToken) + assert token.original_jitcell_token is None + token.original_jitcell_token = 
trace.original_jitcell_token + + def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type): vinfo = jitdriver_sd.virtualizable_info if vinfo is not None: @@ -319,7 +321,10 @@ metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # - metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset) + loopname = jitdriver_sd.warmstate.get_location_str(greenkey) + metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, + type, ops_offset, + name=loopname) # if metainterp_sd.warmrunnerdesc is not None: # for tests metainterp_sd.warmrunnerdesc.memory_manager.keep_loop_alive(original_jitcell_token) @@ -558,6 +563,7 @@ inputargs = metainterp.history.inputargs if not we_are_translated(): self._debug_suboperations = new_loop.operations + propagate_original_jitcell_token(new_loop) send_bridge_to_backend(metainterp.jitdriver_sd, metainterp.staticdata, self, inputargs, new_loop.operations, new_loop.original_jitcell_token) @@ -744,6 +750,7 @@ jitdriver_sd = metainterp.jitdriver_sd redargs = new_loop.inputargs new_loop.original_jitcell_token = jitcell_token = make_jitcell_token(jitdriver_sd) + propagate_original_jitcell_token(new_loop) send_loop_to_backend(self.original_greenkey, metainterp.jitdriver_sd, metainterp_sd, new_loop, "entry bridge") # send the new_loop to warmspot.py, to be called directly the next time diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -79,9 +79,9 @@ opnum == rop.COPYSTRCONTENT or opnum == rop.COPYUNICODECONTENT): return - if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: - return - if rop._NOSIDEEFFECT_FIRST <= opnum <= rop._NOSIDEEFFECT_LAST: + if (rop._OVF_FIRST <= opnum <= rop._OVF_LAST or + rop._NOSIDEEFFECT_FIRST <= opnum <= rop._NOSIDEEFFECT_LAST or + rop._GUARD_FIRST <= opnum <= rop._GUARD_LAST): return if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: 
effectinfo = descr.get_extra_info() diff --git a/pypy/jit/metainterp/logger.py b/pypy/jit/metainterp/logger.py --- a/pypy/jit/metainterp/logger.py +++ b/pypy/jit/metainterp/logger.py @@ -13,14 +13,14 @@ self.metainterp_sd = metainterp_sd self.guard_number = guard_number - def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None): + def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None, name=''): if type is None: debug_start("jit-log-noopt-loop") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-loop") else: debug_start("jit-log-opt-loop") - debug_print("# Loop", number, ":", type, + debug_print("# Loop", number, '(%s)' % name , ":", type, "with", len(operations), "ops") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-opt-loop") diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -386,6 +386,17 @@ """ self.optimize_loop(ops, expected) + def test_virtual_as_field_of_forced_box(self): + ops = """ + [p0] + pv1 = new_with_vtable(ConstClass(node_vtable)) + label(pv1, p0) + pv2 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(pv2, pv1, descr=valuedescr) + jump(pv1, pv2) + """ + with raises(InvalidLoop): + self.optimize_loop(ops, ops) class OptRenameStrlen(Optimization): def propagate_forward(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -409,7 +409,13 @@ if self.level == LEVEL_CONSTANT: return assert 0 <= self.position_in_notvirtuals - boxes[self.position_in_notvirtuals] = value.force_box(optimizer) + if optimizer: + box = value.force_box(optimizer) + else: + 
if value.is_virtual(): + raise BadVirtualState + box = value.get_key_box() + boxes[self.position_in_notvirtuals] = box def _enum(self, virtual_state): if self.level == LEVEL_CONSTANT: @@ -471,8 +477,14 @@ optimizer = optimizer.optearlyforce assert len(values) == len(self.state) inputargs = [None] * len(self.notvirtuals) + + # We try twice. The first time around we allow boxes to be forced + # which might change the virtual state if the box appear in more + # than one place among the inputargs. for i in range(len(values)): self.state[i].enum_forced_boxes(inputargs, values[i], optimizer) + for i in range(len(values)): + self.state[i].enum_forced_boxes(inputargs, values[i], None) if keyboxes: for i in range(len(values)): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -976,10 +976,13 @@ self.verify_green_args(jitdriver_sd, greenboxes) self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.in_recursion, greenboxes) - + if self.metainterp.seen_loop_header_for_jdindex < 0: - if not jitdriver_sd.no_loop_header or not any_operation: + if not any_operation: return + if self.metainterp.in_recursion or not self.metainterp.get_procedure_token(greenboxes, True): + if not jitdriver_sd.no_loop_header: + return # automatically add a loop_header if there is none self.metainterp.seen_loop_header_for_jdindex = jdindex # @@ -2053,9 +2056,15 @@ from pypy.jit.metainterp.resoperation import opname raise NotImplementedError(opname[opnum]) - def get_procedure_token(self, greenkey): + def get_procedure_token(self, greenkey, with_compiled_targets=False): cell = self.jitdriver_sd.warmstate.jit_cell_at_key(greenkey) - return cell.get_procedure_token() + token = cell.get_procedure_token() + if with_compiled_targets: + if not token: + return None + if not token.target_tokens: + return None + return token def compile_loop(self, original_boxes, live_arg_boxes, start, 
resume_at_jump_descr): num_green_args = self.jitdriver_sd.num_green_args @@ -2088,11 +2097,9 @@ def compile_trace(self, live_arg_boxes, resume_at_jump_descr): num_green_args = self.jitdriver_sd.num_green_args greenkey = live_arg_boxes[:num_green_args] - target_jitcell_token = self.get_procedure_token(greenkey) + target_jitcell_token = self.get_procedure_token(greenkey, True) if not target_jitcell_token: return - if not target_jitcell_token.target_tokens: - return self.history.record(rop.JUMP, live_arg_boxes[num_green_args:], None, descr=target_jitcell_token) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2697,7 +2697,7 @@ # bridge back to the preamble of the first loop is produced. A guard in # this bridge is later traced resulting in a failed attempt of retracing # the second loop. - self.check_trace_count(8) + self.check_trace_count(9) # FIXME: Add a gloabl retrace counter and test that we are not trying more than 5 times. 
diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -18,7 +18,7 @@ self.seen.append((inputargs, operations, token)) class FakeLogger(object): - def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None): + def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None, name=''): pass def repr_of_resop(self, op): diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py --- a/pypy/jit/metainterp/test/test_heapcache.py +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -255,6 +255,11 @@ assert h.getarrayitem(box1, descr1, index1) is box2 assert h.getarrayitem(box1, descr1, index2) is box4 + h.invalidate_caches(rop.GUARD_TRUE, None, []) + assert h.getfield(box1, descr1) is box2 + assert h.getarrayitem(box1, descr1, index1) is box2 + assert h.getarrayitem(box1, descr1, index2) is box4 + h.invalidate_caches( rop.CALL_LOOPINVARIANT, FakeCallDescr(FakeEffektinfo.EF_LOOPINVARIANT), []) diff --git a/pypy/jit/metainterp/test/test_logger.py b/pypy/jit/metainterp/test/test_logger.py --- a/pypy/jit/metainterp/test/test_logger.py +++ b/pypy/jit/metainterp/test/test_logger.py @@ -180,7 +180,7 @@ def test_intro_loop(self): bare_logger = logger.Logger(self.make_metainterp_sd()) output = capturing(bare_logger.log_loop, [], [], 1, "foo") - assert output.splitlines()[0] == "# Loop 1 : foo with 0 ops" + assert output.splitlines()[0] == "# Loop 1 () : foo with 0 ops" pure_parse(output) def test_intro_bridge(self): diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -756,7 +756,7 @@ res = self.meta_interp(interpret, [1]) assert res == interpret(1) # XXX it's unsure how many loops should be there - self.check_trace_count(3) + 
self.check_trace_count(2) def test_path_with_operations_not_from_start(self): jitdriver = JitDriver(greens = ['k'], reds = ['n', 'z']) diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py --- a/pypy/jit/metainterp/test/test_virtual.py +++ b/pypy/jit/metainterp/test/test_virtual.py @@ -880,7 +880,7 @@ elif op == 'j': j = Int(0) elif op == '+': - sa += i.val * j.val + sa += (i.val + 2) * (j.val + 2) elif op == 'a': i = Int(i.val + 1) elif op == 'b': @@ -902,6 +902,7 @@ assert res == f(10) self.check_aborted_count(0) self.check_target_token_count(3) + self.check_resops(int_mul=2) def test_nested_loops_bridge(self): class Int(object): diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -4,6 +4,7 @@ class PyPyModule(MixedModule): interpleveldefs = { 'debug_repr': 'interp_extras.debug_repr', + 'remove_invalidates': 'interp_extras.remove_invalidates', } appleveldefs = {} diff --git a/pypy/module/micronumpy/interp_extras.py b/pypy/module/micronumpy/interp_extras.py --- a/pypy/module/micronumpy/interp_extras.py +++ b/pypy/module/micronumpy/interp_extras.py @@ -5,3 +5,11 @@ @unwrap_spec(array=BaseArray) def debug_repr(space, array): return space.wrap(array.find_sig().debug_repr()) + + at unwrap_spec(array=BaseArray) +def remove_invalidates(space, array): + """ Array modification will no longer invalidate any of it's + potential children. 
Use only for performance debugging + """ + del array.invalidates[:] + return space.w_None diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped +from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature from pypy.module.micronumpy.strides import calculate_slice_strides @@ -14,22 +14,26 @@ numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['result_size', 'frame', 'ri', 'self', 'result'] + reds=['result_size', 'frame', 'ri', 'self', 'result'], + get_printable_location=signature.new_printable_location('numpy'), ) all_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['frame', 'self', 'dtype'] + reds=['frame', 'self', 'dtype'], + get_printable_location=signature.new_printable_location('all'), ) any_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['frame', 'self', 'dtype'] + reds=['frame', 'self', 'dtype'], + get_printable_location=signature.new_printable_location('any'), ) slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self', 'frame', 'source', 'res_iter'] + reds=['self', 'frame', 'source', 'res_iter'], + get_printable_location=signature.new_printable_location('slice'), ) def _find_shape_and_elems(space, w_iterable): @@ -291,7 +295,8 @@ def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver( greens=['shapelen', 'sig'], - reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'] + reds=['result', 'idx', 
'frame', 'self', 'cur_best', 'dtype'], + get_printable_location=signature.new_printable_location(op_name), ) def loop(self): sig = self.find_sig() @@ -578,8 +583,8 @@ strides.append(concrete.strides[i]) backstrides.append(concrete.backstrides[i]) shape.append(concrete.shape[i]) - return space.wrap(W_NDimSlice(concrete.start, strides[:], - backstrides[:], shape[:], concrete)) + return space.wrap(W_NDimSlice(concrete.start, strides, + backstrides, shape, concrete)) def descr_get_flatiter(self, space): return space.wrap(W_FlatIterator(self)) @@ -820,8 +825,8 @@ if self.order == 'C': strides.reverse() backstrides.reverse() - self.strides = strides[:] - self.backstrides = backstrides[:] + self.strides = strides + self.backstrides = backstrides def array_sig(self, res_shape): if res_shape is not None and self.shape != res_shape: @@ -1025,9 +1030,9 @@ strides.reverse() backstrides.reverse() new_shape.reverse() - self.strides = strides[:] - self.backstrides = backstrides[:] - self.shape = new_shape[:] + self.strides = strides + self.backstrides = backstrides + self.shape = new_shape return new_strides = calc_new_strides(new_shape, self.shape, self.strides) if new_strides is None: @@ -1037,7 +1042,7 @@ for nd in range(len(new_shape)): new_backstrides[nd] = (new_shape[nd] - 1) * new_strides[nd] self.strides = new_strides[:] - self.backstrides = new_backstrides[:] + self.backstrides = new_backstrides self.shape = new_shape[:] class W_NDimArray(ConcreteArray): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -1,9 +1,10 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, unwrap_spec +from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from 
pypy.module.micronumpy import interp_boxes, interp_dtype, types -from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature, find_sig +from pypy.module.micronumpy import interp_boxes, interp_dtype +from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature,\ + find_sig, new_printable_location from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name @@ -11,7 +12,8 @@ reduce_driver = jit.JitDriver( greens = ['shapelen', "sig"], virtualizables = ["frame"], - reds = ["frame", "self", "dtype", "value", "obj"] + reds = ["frame", "self", "dtype", "value", "obj"], + get_printable_location=new_printable_location('reduce'), ) class W_Ufunc(Wrappable): diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -5,6 +5,11 @@ from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib.jit import hint, unroll_safe, promote +def new_printable_location(driver_name): + def get_printable_location(shapelen, sig): + return 'numpy ' + sig.debug_repr() + ' [%d dims,%s]' % (shapelen, driver_name) + return get_printable_location + def sigeq(one, two): return one.eq(two) diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -1,4 +1,9 @@ +from pypy.rlib import jit + + at jit.look_inside_iff(lambda shape, start, strides, backstrides, chunks: + jit.isconstant(len(chunks)) +) def calculate_slice_strides(shape, start, strides, backstrides, chunks): rstrides = [] rbackstrides = [] diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -898,6 +898,15 @@ b[0] = 3 assert debug_repr(b) == 'Array' + def 
test_remove_invalidates(self): + from numpypy import array + from numpypy.pypy import remove_invalidates + a = array([1, 2, 3]) + b = a + a + remove_invalidates(a) + a[0] = 14 + assert b[0] == 28 + def test_virtual_views(self): from numpypy import arange a = arange(15) diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -537,7 +537,7 @@ builder.append(by) builder.append_slice(input, upper, len(input)) else: - # An ok guess for the result size + # First compute the exact result size count = input.count(sub) if count > maxsplit and maxsplit > 0: count = maxsplit @@ -553,21 +553,16 @@ builder = StringBuilder(result_size) start = 0 sublen = len(sub) - first = True while maxsplit != 0: next = input.find(sub, start) if next < 0: break - if not first: - builder.append(by) - first = False builder.append_slice(input, start, next) + builder.append(by) start = next + sublen maxsplit -= 1 # NB. 
if it's already < 0, it stays < 0 - if not first: - builder.append(by) builder.append_slice(input, start, len(input)) return space.wrap(builder.build()) diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -528,6 +528,9 @@ set_param(driver, name1, int(value)) except ValueError: raise + break + else: + raise ValueError set_user_param._annspecialcase_ = 'specialize:arg(0)' diff --git a/pypy/rlib/rsre/rsre_jit.py b/pypy/rlib/rsre/rsre_jit.py --- a/pypy/rlib/rsre/rsre_jit.py +++ b/pypy/rlib/rsre/rsre_jit.py @@ -22,7 +22,7 @@ info = '%s/%d' % (info, args[debugprint[2]]) else: info = '' - return '%s%s %s' % (name, info, s) + return 're %s%s %s' % (name, info, s) # self.get_printable_location = get_printable_location diff --git a/pypy/rpython/lltypesystem/rlist.py b/pypy/rpython/lltypesystem/rlist.py --- a/pypy/rpython/lltypesystem/rlist.py +++ b/pypy/rpython/lltypesystem/rlist.py @@ -375,7 +375,6 @@ newitems = malloc(LIST.items.TO, n) rgc.ll_arraycopy(olditems, newitems, 0, 0, n) return newitems -ll_list2fixed.oopspec = 'list.list2fixed(l)' def ll_list2fixed_exact(l): ll_assert(l.length == len(l.items), "ll_list2fixed_exact: bad length") diff --git a/pypy/rpython/test/test_generator.py b/pypy/rpython/test/test_generator.py --- a/pypy/rpython/test/test_generator.py +++ b/pypy/rpython/test/test_generator.py @@ -54,6 +54,26 @@ res = self.interpret(f, [0]) assert res == 42 + def test_except_block(self): + def foo(): + raise ValueError + def g(a, b, c): + yield a + yield b + try: + foo() + except ValueError: + pass + yield c + def f(): + gen = g(3, 5, 8) + x = gen.next() * 100 + x += gen.next() * 10 + x += gen.next() + return x + res = self.interpret(f, []) + assert res == 358 + class TestLLtype(BaseTestGenerator, LLRtypeMixin): pass diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -24,27 +24,24 @@ self.failargs = 
failargs def getarg(self, i): - return self._getvar(self.args[i]) + return self.args[i] def getargs(self): - return [self._getvar(v) for v in self.args] + return self.args[:] def getres(self): - return self._getvar(self.res) + return self.res def getdescr(self): return self.descr - def _getvar(self, v): - return v - def is_guard(self): return self._is_guard def repr(self): args = self.getargs() if self.descr is not None: - args.append('descr=%s' % self.getdescr()) + args.append('descr=%s' % self.descr) arglist = ', '.join(args) if self.res is not None: return '%s = %s(%s)' % (self.getres(), self.name, arglist) @@ -53,8 +50,6 @@ def __repr__(self): return self.repr() - ## return '<%s (%s)>' % (self.name, ', '.join([repr(a) - ## for a in self.args])) class SimpleParser(OpParser): @@ -146,18 +141,27 @@ is_bytecode = True inline_level = None - def __init__(self, operations, storage): - if operations[0].name == 'debug_merge_point': - self.inline_level = int(operations[0].args[0]) - m = re.search('\w]+)\. file \'(.+?)\'\. 
line (\d+)> #(\d+) (\w+)', - operations[0].args[1]) - if m is None: - # a non-code loop, like StrLiteralSearch or something - self.bytecode_name = operations[0].args[1][1:-1] - else: - self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups() - self.startlineno = int(lineno) - self.bytecode_no = int(bytecode_no) + def parse_code_data(self, arg): + m = re.search('\w]+)[\.,] file \'(.+?)\'[\.,] line (\d+)> #(\d+) (\w+)', + arg) + if m is None: + # a non-code loop, like StrLiteralSearch or something + self.bytecode_name = arg + else: + self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups() + self.startlineno = int(lineno) + self.bytecode_no = int(bytecode_no) + + + def __init__(self, operations, storage, loopname): + for op in operations: + if op.name == 'debug_merge_point': + self.inline_level = int(op.args[0]) + self.parse_code_data(op.args[1][1:-1]) + break + else: + self.inline_level = 0 + self.parse_code_data(loopname) self.operations = operations self.storage = storage self.code = storage.disassemble_code(self.filename, self.startlineno, @@ -165,7 +169,7 @@ def repr(self): if self.filename is None: - return "Unknown" + return self.bytecode_name return "%s, file '%s', line %d" % (self.name, self.filename, self.startlineno) @@ -220,7 +224,8 @@ self.storage = storage @classmethod - def from_operations(cls, operations, storage, limit=None, inputargs=''): + def from_operations(cls, operations, storage, limit=None, inputargs='', + loopname=''): """ Slice given operation list into a chain of TraceForOpcode chunks. 
Also detect inlined functions and make them Function """ @@ -246,13 +251,13 @@ for op in operations: if op.name == 'debug_merge_point': if so_far: - append_to_res(cls.TraceForOpcode(so_far, storage)) + append_to_res(cls.TraceForOpcode(so_far, storage, loopname)) if limit: break so_far = [] so_far.append(op) if so_far: - append_to_res(cls.TraceForOpcode(so_far, storage)) + append_to_res(cls.TraceForOpcode(so_far, storage, loopname)) # wrap stack back up if not stack: # no ops whatsoever @@ -300,7 +305,7 @@ def repr(self): if self.filename is None: - return "Unknown" + return self.chunks[0].bytecode_name return "%s, file '%s', line %d" % (self.name, self.filename, self.startlineno) @@ -385,18 +390,27 @@ parser.postprocess(loop, backend_tp=bname, backend_dump=dump, dump_start=start_ofs)) - loops.append(loop) + loops += split_trace(loop) return log, loops def split_trace(trace): - labels = [i for i, op in enumerate(trace.operations) - if op.name == 'label'] - labels = [0] + labels + [len(trace.operations) - 1] + labels = [0] + if trace.comment and 'Guard' in trace.comment: + descrs = ['bridge ' + re.search('Guard (\d+)', trace.comment).group(1)] + else: + descrs = ['entry ' + re.search('Loop (\d+)', trace.comment).group(1)] + for i, op in enumerate(trace.operations): + if op.name == 'label': + labels.append(i) + descrs.append(op.descr) + labels.append(len(trace.operations) - 1) parts = [] for i in range(len(labels) - 1): start, stop = labels[i], labels[i+1] part = copy(trace) part.operations = trace.operations[start : stop + 1] + part.descr = descrs[i] + part.comment = trace.comment parts.append(part) return parts @@ -407,11 +421,7 @@ lines = input[-1].splitlines() mapping = {} for loop in loops: - com = loop.comment - if 'Loop' in com: - mapping['loop ' + re.search('Loop (\d+)', com).group(1)] = loop - else: - mapping['bridge ' + re.search('Guard (\d+)', com).group(1)] = loop + mapping[loop.descr] = loop for line in lines: if line: num, count = line.split(':', 2) diff 
--git a/pypy/tool/jitlogparser/test/test_parser.py b/pypy/tool/jitlogparser/test/test_parser.py --- a/pypy/tool/jitlogparser/test/test_parser.py +++ b/pypy/tool/jitlogparser/test/test_parser.py @@ -1,6 +1,7 @@ from pypy.tool.jitlogparser.parser import (SimpleParser, TraceForOpcode, Function, adjust_bridges, - import_log, split_trace, Op) + import_log, split_trace, Op, + parse_log_counts) from pypy.tool.jitlogparser.storage import LoopStorage import py, sys @@ -32,23 +33,26 @@ ''') res = Function.from_operations(ops.operations, LoopStorage()) assert len(res.chunks) == 1 - assert res.chunks[0].repr() + assert 'SomeRandomStuff' in res.chunks[0].repr() def test_split(): ops = parse(''' [i0] + label() debug_merge_point(0, " #10 ADD") debug_merge_point(0, " #11 SUB") i1 = int_add(i0, 1) debug_merge_point(0, " #11 SUB") i2 = int_add(i1, 1) ''') - res = Function.from_operations(ops.operations, LoopStorage()) - assert len(res.chunks) == 3 + res = Function.from_operations(ops.operations, LoopStorage(), loopname='') + assert len(res.chunks) == 4 assert len(res.chunks[0].operations) == 1 - assert len(res.chunks[1].operations) == 2 + assert len(res.chunks[1].operations) == 1 assert len(res.chunks[2].operations) == 2 - assert res.chunks[2].bytecode_no == 11 + assert len(res.chunks[3].operations) == 2 + assert res.chunks[3].bytecode_no == 11 + assert res.chunks[0].bytecode_name == '' def test_inlined_call(): ops = parse(""" @@ -236,16 +240,46 @@ loop = parse(''' [i7] i9 = int_lt(i7, 1003) - label(i9) + label(i9, descr=grrr) guard_true(i9, descr=) [] i13 = getfield_raw(151937600, descr=) - label(i13) + label(i13, descr=asb) i19 = int_lt(i13, 1003) guard_true(i19, descr=) [] i113 = getfield_raw(151937600, descr=) ''') + loop.comment = 'Loop 0' parts = split_trace(loop) assert len(parts) == 3 assert len(parts[0].operations) == 2 assert len(parts[1].operations) == 4 assert len(parts[2].operations) == 4 + assert parts[1].descr == 'grrr' + assert parts[2].descr == 'asb' + +def 
test_parse_log_counts(): + loop = parse(''' + [i7] + i9 = int_lt(i7, 1003) + label(i9, descr=grrr) + guard_true(i9, descr=) [] + i13 = getfield_raw(151937600, descr=) + label(i13, descr=asb) + i19 = int_lt(i13, 1003) + guard_true(i19, descr=) [] + i113 = getfield_raw(151937600, descr=) + ''') + bridge = parse(''' + # bridge out of Guard 2 with 1 ops + [] + i0 = int_lt(1, 2) + finish(i0) + ''') + bridge.comment = 'bridge out of Guard 2 with 1 ops' + loop.comment = 'Loop 0' + loops = split_trace(loop) + split_trace(bridge) + input = ['grrr:123\nasb:12\nbridge 2:1234'] + parse_log_counts(input, loops) + assert loops[-1].count == 1234 + assert loops[1].count == 123 + assert loops[2].count == 12 diff --git a/pypy/translator/generator.py b/pypy/translator/generator.py --- a/pypy/translator/generator.py +++ b/pypy/translator/generator.py @@ -2,7 +2,7 @@ from pypy.objspace.flow.model import Variable, Constant, FunctionGraph from pypy.translator.unsimplify import insert_empty_startblock from pypy.translator.unsimplify import split_block -from pypy.translator.simplify import eliminate_empty_blocks +from pypy.translator.simplify import eliminate_empty_blocks, simplify_graph from pypy.tool.sourcetools import func_with_new_name from pypy.interpreter.argument import Signature @@ -64,6 +64,7 @@ def next(self): entry = self.current self.current = None + assert entry is not None # else, recursive generator invocation (next_entry, return_value) = func(entry) self.current = next_entry return return_value @@ -91,6 +92,10 @@ block.inputargs = [v_entry1] def tweak_generator_body_graph(Entry, graph): + # First, always run simplify_graph in order to reduce the number of + # variables passed around + simplify_graph(graph) + # assert graph.startblock.operations[0].opname == 'generator_mark' graph.startblock.operations.pop(0) # @@ -100,12 +105,20 @@ # mappings = [Entry] # + stopblock = Block([]) + v0 = Variable(); v1 = Variable() + stopblock.operations = [ + SpaceOperation('simple_call', 
[Constant(StopIteration)], v0), + SpaceOperation('type', [v0], v1), + ] + stopblock.closeblock(Link([v1, v0], graph.exceptblock)) + # for block in list(graph.iterblocks()): for exit in block.exits: if exit.target is graph.returnblock: - exit.args = [Constant(StopIteration), - Constant(StopIteration())] - exit.target = graph.exceptblock + exit.args = [] + exit.target = stopblock + assert block is not stopblock for index in range(len(block.operations)-1, -1, -1): op = block.operations[index] if op.opname == 'yield': diff --git a/pypy/translator/sandbox/pypy_interact.py b/pypy/translator/sandbox/pypy_interact.py --- a/pypy/translator/sandbox/pypy_interact.py +++ b/pypy/translator/sandbox/pypy_interact.py @@ -13,7 +13,8 @@ ATM this only works with PyPy translated with Boehm or the semispace or generation GCs. --timeout=N limit execution time to N (real-time) seconds. - --log=FILE log all user input into the FILE + --log=FILE log all user input into the FILE. + --verbose log all proxied system calls. 
Note that you can get readline-like behavior with a tool like 'ledit', provided you use enough -u options: @@ -26,18 +27,19 @@ from pypy.translator.sandbox.sandlib import SimpleIOSandboxedProc from pypy.translator.sandbox.sandlib import VirtualizedSandboxedProc from pypy.translator.sandbox.vfs import Dir, RealDir, RealFile -from pypy.tool.lib_pypy import LIB_ROOT +import pypy +LIB_ROOT = os.path.dirname(os.path.dirname(pypy.__file__)) class PyPySandboxedProc(VirtualizedSandboxedProc, SimpleIOSandboxedProc): - debug = True argv0 = '/bin/pypy-c' virtual_cwd = '/tmp' virtual_env = {} virtual_console_isatty = True - def __init__(self, executable, arguments, tmpdir=None): + def __init__(self, executable, arguments, tmpdir=None, debug=True): self.executable = executable = os.path.abspath(executable) self.tmpdir = tmpdir + self.debug = debug super(PyPySandboxedProc, self).__init__([self.argv0] + arguments, executable=executable) @@ -67,12 +69,13 @@ if __name__ == '__main__': from getopt import getopt # and not gnu_getopt! - options, arguments = getopt(sys.argv[1:], 't:h', + options, arguments = getopt(sys.argv[1:], 't:hv', ['tmp=', 'heapsize=', 'timeout=', 'log=', - 'help']) + 'verbose', 'help']) tmpdir = None timeout = None logfile = None + debug = False extraoptions = [] def help(): @@ -104,6 +107,8 @@ timeout = int(value) elif option == '--log': logfile = value + elif option in ['-v', '--verbose']: + debug = True elif option in ['-h', '--help']: help() else: @@ -113,7 +118,7 @@ help() sandproc = PyPySandboxedProc(arguments[0], extraoptions + arguments[1:], - tmpdir=tmpdir) + tmpdir=tmpdir, debug=debug) if timeout is not None: sandproc.settimeout(timeout, interrupt_main=True) if logfile is not None: diff --git a/pypy/translator/sandbox/sandlib.py b/pypy/translator/sandbox/sandlib.py --- a/pypy/translator/sandbox/sandlib.py +++ b/pypy/translator/sandbox/sandlib.py @@ -4,25 +4,29 @@ for the outer process, which can run CPython or PyPy. 
""" -import py import sys, os, posixpath, errno, stat, time -from pypy.tool.ansi_print import AnsiLog import subprocess from pypy.tool.killsubprocess import killsubprocess from pypy.translator.sandbox.vfs import UID, GID +import py -class MyAnsiLog(AnsiLog): - KW_TO_COLOR = { - 'call': ((34,), False), - 'result': ((34,), False), - 'exception': ((34,), False), - 'vpath': ((35,), False), - 'timeout': ((1, 31), True), - } +def create_log(): + """Make and return a log for the sandbox to use, if needed.""" + # These imports are local to avoid importing pypy if we don't need to. + from pypy.tool.ansi_print import AnsiLog -log = py.log.Producer("sandlib") -py.log.setconsumer("sandlib", MyAnsiLog()) + class MyAnsiLog(AnsiLog): + KW_TO_COLOR = { + 'call': ((34,), False), + 'result': ((34,), False), + 'exception': ((34,), False), + 'vpath': ((35,), False), + 'timeout': ((1, 31), True), + } + log = py.log.Producer("sandlib") + py.log.setconsumer("sandlib", MyAnsiLog()) + return log # Note: we use lib_pypy/marshal.py instead of the built-in marshal # for two reasons. The built-in module could be made to segfault @@ -30,8 +34,9 @@ # load(). Also, marshal.load(f) blocks with the GIL held when # f is a pipe with no data immediately avaialble, preventing the # _waiting_thread to run. -from pypy.tool.lib_pypy import import_from_lib_pypy -marshal = import_from_lib_pypy('marshal') +import pypy +marshal = py.path.local(pypy.__file__).join('..', '..', 'lib_pypy', + 'marshal.py').pyimport() # Non-marshal result types RESULTTYPE_STATRESULT = object() @@ -126,6 +131,7 @@ for the external functions xxx that you want to support. 
""" debug = False + log = None os_level_sandboxing = False # Linux only: /proc/PID/seccomp def __init__(self, args, executable=None): @@ -142,6 +148,9 @@ self.currenttimeout = None self.currentlyidlefrom = None + if self.debug: + self.log = create_log() + def withlock(self, function, *args, **kwds): lock = self.popenlock if lock is not None: @@ -169,7 +178,8 @@ if delay <= 0.0: break # expired! time.sleep(min(delay*1.001, 1)) - log.timeout("timeout!") + if self.log: + self.log.timeout("timeout!") self.kill() #if interrupt_main: # if hasattr(os, 'kill'): @@ -246,22 +256,22 @@ args = read_message(child_stdout) except EOFError, e: break - if self.debug and not self.is_spam(fnname, *args): - log.call('%s(%s)' % (fnname, + if self.log and not self.is_spam(fnname, *args): + self.log.call('%s(%s)' % (fnname, ', '.join([shortrepr(x) for x in args]))) try: answer, resulttype = self.handle_message(fnname, *args) except Exception, e: tb = sys.exc_info()[2] write_exception(child_stdin, e, tb) - if self.debug: + if self.log: if str(e): - log.exception('%s: %s' % (e.__class__.__name__, e)) + self.log.exception('%s: %s' % (e.__class__.__name__, e)) else: - log.exception('%s' % (e.__class__.__name__,)) + self.log.exception('%s' % (e.__class__.__name__,)) else: - if self.debug and not self.is_spam(fnname, *args): - log.result(shortrepr(answer)) + if self.log and not self.is_spam(fnname, *args): + self.log.result(shortrepr(answer)) try: write_message(child_stdin, 0) # error code - 0 for ok write_message(child_stdin, answer, resulttype) @@ -440,7 +450,8 @@ node = dirnode.join(name) else: node = dirnode - log.vpath('%r => %r' % (vpath, node)) + if self.log: + self.log.vpath('%r => %r' % (vpath, node)) return node def do_ll_os__ll_os_stat(self, vpathname): From noreply at buildbot.pypy.org Mon Jan 2 11:53:53 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 2 Jan 2012 11:53:53 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: replace tabs with spaces Message-ID: 
<20120102105353.7BE5282111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r50972:0b164b7fc20c Date: 2012-01-02 11:53 +0100 http://bitbucket.org/pypy/pypy/changeset/0b164b7fc20c/ Log: replace tabs with spaces diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -713,13 +713,13 @@ def gen_64_bit_func_descrs(self): d0 = self.datablockwrapper.malloc_aligned(3*WORD, alignment=1) d1 = self.datablockwrapper.malloc_aligned(3*WORD, alignment=1) - return [d0, d1] + return [d0, d1] def write_64_bit_func_descr(self, descr, start_addr): data = rffi.cast(rffi.CArrayPtr(lltype.Signed), descr) - data[0] = start_addr - data[1] = 0 - data[2] = 0 + data[0] = start_addr + data[1] = 0 + data[2] = 0 def compute_frame_depth(self, regalloc): PARAMETER_AREA = self.max_stack_params * WORD From noreply at buildbot.pypy.org Mon Jan 2 11:58:25 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 2 Jan 2012 11:58:25 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: remove some more tabs Message-ID: <20120102105825.C1EA782111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r50973:5d2419018c35 Date: 2012-01-02 11:58 +0100 http://bitbucket.org/pypy/pypy/changeset/5d2419018c35/ Log: remove some more tabs diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -355,10 +355,10 @@ mc.mr(r.r5.value, r.SPP.value) self._restore_nonvolatiles(mc, r.r5) # load old backchain into r4 - if IS_PPC_32: - ofs = WORD - else: - ofs = WORD * 2 + if IS_PPC_32: + ofs = WORD + else: + ofs = WORD * 2 mc.load(r.r4.value, r.r5.value, self.OFFSET_SPP_TO_OLD_BACKCHAIN + ofs) mc.mtlr(r.r4.value) # restore LR # From SPP, we have a constant offset 
to the old backchain. We use the @@ -541,8 +541,8 @@ self.gen_direct_bootstrap_code(loophead, looptoken, inputargs, frame_depth) self.write_pending_failure_recoveries() - if IS_PPC_64: - fdescrs = self.gen_64_bit_func_descrs() + if IS_PPC_64: + fdescrs = self.gen_64_bit_func_descrs() loop_start = self.materialize_loop(looptoken, False) looptoken._ppc_bootstrap_code = loop_start From noreply at buildbot.pypy.org Mon Jan 2 13:01:31 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 2 Jan 2012 13:01:31 +0100 (CET) Subject: [pypy-commit] pypy default: fix test Message-ID: <20120102120131.3B19182111@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r50974:a58d0c303bce Date: 2012-01-02 13:01 +0100 http://bitbucket.org/pypy/pypy/changeset/a58d0c303bce/ Log: fix test diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -420,8 +420,8 @@ debug._log = None # assert ops_offset is looptoken._x86_ops_offset - # getfield_raw/int_add/setfield_raw + ops + None - assert len(ops_offset) == 3 + len(operations) + 1 + # 2*(getfield_raw/int_add/setfield_raw) + ops + None + assert len(ops_offset) == 2*3 + len(operations) + 1 assert (ops_offset[operations[0]] <= ops_offset[operations[1]] <= ops_offset[operations[2]] <= From noreply at buildbot.pypy.org Mon Jan 2 13:43:53 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 2 Jan 2012 13:43:53 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: factor out implementation of calls Message-ID: <20120102124353.4363A82111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r50975:5f4684c8251c Date: 2012-01-02 13:43 +0100 http://bitbucket.org/pypy/pypy/changeset/5f4684c8251c/ Log: factor out implementation of calls diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/ppcgen/codebuilder.py --- 
a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/ppcgen/codebuilder.py @@ -7,7 +7,7 @@ from pypy.jit.backend.ppc.ppcgen.assembler import Assembler from pypy.jit.backend.ppc.ppcgen.symbol_lookup import lookup from pypy.jit.backend.ppc.ppcgen.arch import (IS_PPC_32, WORD, NONVOLATILES, - GPR_SAVE_AREA) + GPR_SAVE_AREA, IS_PPC_64) from pypy.jit.backend.ppc.ppcgen.helper.assembler import gen_emit_cmp_op import pypy.jit.backend.ppc.ppcgen.register as r import pypy.jit.backend.ppc.ppcgen.condition as c @@ -1028,12 +1028,25 @@ self.trap() self.bctr() - def bl_abs(self, address): - self.alloc_scratch_reg(address) + def call(self, address): + """ do a call to an absolute address + """ + self.alloc_scratch_reg() + if IS_PPC_32: + self.load_imm(r.SCRATCH, address) + else: + self.store(r.TOC.value, r.SP.value, 5 * WORD) + self.load_imm(r.r11, address) + self.load(r.SCRATCH.value, r.r11.value, 0) + self.load(r.r2.value, r.r11.value, WORD) + self.load(r.r11.value, r.r11.value, 2 * WORD) self.mtctr(r.SCRATCH.value) self.free_scratch_reg() self.bctrl() + if IS_PPC_64: + self.load(t.TOC.value, r.SP.value, 5 * WORD) + def load(self, target_reg, base_reg, offset): if IS_PPC_32: self.lwz(target_reg, base_reg, offset) diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -411,17 +411,7 @@ remap_frame_layout(self, non_float_locs, non_float_regs, r.SCRATCH) # the actual call - if IS_PPC_32: - self.mc.bl_abs(adr) - else: - self.mc.std(r.TOC.value, r.SP.value, 5 * WORD) - self.mc.load_imm(r.r11, adr) - self.mc.load(r.SCRATCH.value, r.r11.value, 0) - self.mc.mtctr(r.SCRATCH.value) - self.mc.load(r.TOC.value, r.r11.value, WORD) - self.mc.load(r.r11.value, r.r11.value, 2 * WORD) - self.mc.bctrl() - self.mc.ld(r.TOC.value, r.SP.value, 5 * WORD) + self.mc.call(adr) self.mark_gc_roots(force_index) 
regalloc.possibly_free_vars(args) @@ -879,15 +869,7 @@ # # misaligned stack in the call, but it's ok because the write barrier # is not going to call anything more. - if IS_PPC_32: - self.mc.bl_abs(func) - else: - self.mc.load_imm(r.r11, func) - self.mc.load(r.SCRATCH.value, r.r11.value, 0) - self.mc.mtctr(r.SCRATCH.value) - self.mc.load(r.TOC.value, r.r11.value, WORD) - self.mc.load(r.r11.value, r.r11.value, 2 * WORD) - self.mc.bctrl() + self.mc.call(func) # patch the JZ above offset = self.mc.currpos() - jz_location @@ -952,15 +934,7 @@ # do call to helper function self.mov_loc_loc(arglocs[1], r.r4) - if IS_PPC_32: - self.mc.bl_abs(asm_helper_adr) - else: - self.mc.load_imm(r.r11, asm_helper_adr) - self.mc.load(r.SCRATCH.value, r.r11.value, 0) - self.mc.mtctr(r.SCRATCH.value) - self.mc.load(r.TOC.value, r.r11.value, WORD) - self.mc.load(r.r11.value, r.r11.value, 2 * WORD) - self.mc.bctrl() + self.mc.call(asm_helper_adr) if op.result: resloc = regalloc.after_call(op.result) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -277,15 +277,7 @@ mc = PPCBuilder() with Saved_Volatiles(mc): addr = self.cpu.get_on_leave_jitted_int(save_exception=True) - if IS_PPC_32: - mc.bl_abs(addr) - else: - mc.load_imm(r.r11, addr) - mc.load(r.SCRATCH.value, r.r11.value, 0) - mc.mtctr(r.SCRATCH.value) - mc.load(r.r2.value, r.r11.value, WORD) - mc.load(r.r11.value, r.r11.value, 2 * WORD) - mc.bctrl() + mc.call(addr) #mc.alloc_scratch_reg(self.cpu.propagate_exception_v) #mc.mr(r.RES.value, r.SCRATCH.value) #mc.free_scratch_reg() @@ -298,15 +290,7 @@ with Saved_Volatiles(mc): addr = self.cpu.get_on_leave_jitted_int(save_exception=save_exc) - if IS_PPC_32: - mc.bl_abs(addr) - else: - mc.load_imm(r.r11, addr) - mc.load(r.SCRATCH.value, r.r11.value, 0) - mc.mtctr(r.SCRATCH.value) - mc.load(r.r2.value, r.r11.value, WORD) - 
mc.load(r.r11.value, r.r11.value, 2 * WORD) - mc.bctrl() + mc.call(addr) mc.b_abs(self.exit_code_adr) mc.prepare_insts_blocks() @@ -333,23 +317,9 @@ mc.load(r.r3.value, r.SPP.value, self.ENCODING_AREA) # address of state encoding mc.mr(r.r4.value, r.SPP.value) # load spilling pointer # - # load address of decoding function into SCRATCH - if IS_PPC_32: - mc.alloc_scratch_reg(addr) - mc.mtctr(r.SCRATCH.value) - mc.free_scratch_reg() - # ... and branch there - mc.bctrl() - else: - mc.std(r.TOC.value, r.SP.value, 5 * WORD) - mc.load_imm(r.r11, addr) - mc.load(r.SCRATCH.value, r.r11.value, 0) - mc.mtctr(r.SCRATCH.value) - mc.load(r.TOC.value, r.r11.value, WORD) - mc.load(r.r11.value, r.r11.value, 2 * WORD) - mc.bctrl() - mc.ld(r.TOC.value, r.SP.value, 5 * WORD) - # + # call decoding function + mc.call(addr) + # save SPP in r5 # (assume that r5 has been written to failboxes) mc.mr(r.r5.value, r.SPP.value) From noreply at buildbot.pypy.org Mon Jan 2 13:51:25 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 2 Jan 2012 13:51:25 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: remove typo in codebuilder Message-ID: <20120102125125.E5ABD82111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r50976:25b8f969eadb Date: 2012-01-02 04:50 -0800 http://bitbucket.org/pypy/pypy/changeset/25b8f969eadb/ Log: remove typo in codebuilder diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/ppcgen/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/ppcgen/codebuilder.py @@ -1045,7 +1045,7 @@ self.bctrl() if IS_PPC_64: - self.load(t.TOC.value, r.SP.value, 5 * WORD) + self.load(r.TOC.value, r.SP.value, 5 * WORD) def load(self, target_reg, base_reg, offset): if IS_PPC_32: From noreply at buildbot.pypy.org Mon Jan 2 13:56:45 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 2 Jan 2012 13:56:45 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: skip test 
test_cond_call_gc_wb_array_card_marking_fast_path Message-ID: <20120102125645.45F5182111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r50977:defdbd4220ca Date: 2012-01-02 13:56 +0100 http://bitbucket.org/pypy/pypy/changeset/defdbd4220ca/ Log: skip test test_cond_call_gc_wb_array_card_marking_fast_path diff --git a/pypy/jit/backend/ppc/test/test_runner.py b/pypy/jit/backend/ppc/test/test_runner.py --- a/pypy/jit/backend/ppc/test/test_runner.py +++ b/pypy/jit/backend/ppc/test/test_runner.py @@ -1,5 +1,6 @@ from pypy.jit.backend.test.runner_test import LLtypeBackendTest from pypy.jit.backend.ppc.runner import PPC_64_CPU +import py class FakeStats(object): pass @@ -9,3 +10,6 @@ def setup_class(cls): cls.cpu = PPC_64_CPU(rtyper=None, stats=FakeStats()) cls.cpu.setup_once() + + def test_cond_call_gc_wb_array_card_marking_fast_path(self): + py.test.skip("unsure what to do here")
From noreply at buildbot.pypy.org Mon Jan 2 15:11:58 2012 From: noreply at buildbot.pypy.org (hager) Date: Mon, 2 Jan 2012 15:11:58 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): fix bug in prepare_guard_call_release_gil Message-ID: <20120102141158.E588A82111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r50978:fa8e4ebd847b Date: 2012-01-02 15:11 +0100 http://bitbucket.org/pypy/pypy/changeset/fa8e4ebd847b/ Log: (bivab, hager): fix bug in prepare_guard_call_release_gil diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -938,7 +938,6 @@ def _write_fail_index(self, fail_index): self.mc.alloc_scratch_reg(fail_index) - self.mc.load_imm(r.SCRATCH, fail_index) self.mc.store(r.SCRATCH.value, r.SPP.value, self.ENCODING_AREA) self.mc.free_scratch_reg() diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -422,6 +422,7 @@ # do the call faildescr = guard_op.getdescr() fail_index = self.cpu.get_fail_descr_number(faildescr) + self.assembler._write_fail_index(fail_index) args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] self.assembler.emit_call(op, args, self, fail_index) # then reopen the stack From noreply at buildbot.pypy.org Mon Jan 2 21:43:35 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 2 Jan 2012 21:43:35 +0100 (CET) Subject: [pypy-commit] pypy default: Update some copyright years. Message-ID: <20120102204335.6F80682C04@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r50979:6a589f1a038a Date: 2012-01-02 14:43 -0600 http://bitbucket.org/pypy/pypy/changeset/6a589f1a038a/ Log: Update some copyright years.
diff --git a/pypy/module/sys/app.py b/pypy/module/sys/app.py --- a/pypy/module/sys/app.py +++ b/pypy/module/sys/app.py @@ -66,11 +66,11 @@ return None copyright_str = """ -Copyright 2003-2011 PyPy development team. +Copyright 2003-2012 PyPy development team. All Rights Reserved. For further information, see -Portions Copyright (c) 2001-2008 Python Software Foundation. +Portions Copyright (c) 2001-2012 Python Software Foundation. All Rights Reserved. Portions Copyright (c) 2000 BeOpen.com. From noreply at buildbot.pypy.org Mon Jan 2 21:48:50 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 2 Jan 2012 21:48:50 +0100 (CET) Subject: [pypy-commit] pypy numpypy-repr-fix: add tests for issue 964 and more, make tests pass Message-ID: <20120102204850.7688E82C04@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-repr-fix Changeset: r50980:68dec1e17bd9 Date: 2012-01-02 22:46 +0200 http://bitbucket.org/pypy/pypy/changeset/68dec1e17bd9/ Log: add tests for issue 964 and more, make tests pass diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -409,7 +409,7 @@ def descr_repr(self, space): res = StringBuilder() res.append("array(") - concrete = self.get_concrete() + concrete = self.get_concrete_or_scalar() dtype = concrete.find_dtype() if not concrete.size: res.append('[]') @@ -421,9 +421,13 @@ res.append(')') else: concrete.to_str(space, 1, res, indent=' ') - if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \ - not self.size: + if (dtype is interp_dtype.get_dtype_cache(space).w_float64dtype or \ + dtype.kind == interp_dtype.SIGNEDLTR and \ + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) \ + and self.size: + # Do not print dtype + pass + else: res.append(", dtype=" + dtype.name) res.append(")") return 
space.wrap(res.build()) @@ -840,14 +844,17 @@ each line will begin with indent. ''' size = self.size + dtype = self.find_dtype() if size < 1: builder.append('[]') return + elif size == 1: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True - dtype = self.find_dtype() ndims = len(self.shape) i = 0 start = True @@ -863,10 +870,9 @@ builder.append('\n' + indent) else: builder.append(indent) - # create_slice requires len(chunks) > 1 in order to reduce - # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) builder.append('\n' + indent + '..., ') i = self.shape[0] - 3 while i < self.shape[0]: @@ -880,8 +886,9 @@ builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: spacer = ',' * comma + ' ' @@ -912,8 +919,6 @@ builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] i += 1 - else: - builder.append('[') builder.append(']') @jit.unroll_safe diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1347,6 +1347,7 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): from numpypy import array, zeros + intSize 
= array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -1354,11 +1355,23 @@ a = zeros(1001) assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" a = array(range(5), long) - assert repr(a) == "array([0, 1, 2, 3, 4])" + if a.dtype.itemsize == intSize: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" + a = array(range(5), 'int32') + if a.dtype.itemsize == intSize: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" a = array([], long) assert repr(a) == "array([], dtype=int64)" a = array([True, False, True, False], "?") assert repr(a) == "array([True, False, True, False], dtype=bool)" + a = zeros([]) + assert repr(a) == "array(0.0)" + a = array(0.2) + assert repr(a) == "array(0.2)" def test_repr_multi(self): from numpypy import array, zeros From noreply at buildbot.pypy.org Mon Jan 2 23:15:53 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 2 Jan 2012 23:15:53 +0100 (CET) Subject: [pypy-commit] pypy numpypy-repr-fix: additional tests, fixes to pass them Message-ID: <20120102221554.0156282C04@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-repr-fix Changeset: r50981:ee3e5819364d Date: 2012-01-03 00:11 +0200 http://bitbucket.org/pypy/pypy/changeset/ee3e5819364d/ Log: additional tests, fixes to pass them diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -422,8 +422,8 @@ else: concrete.to_str(space, 1, res, indent=' ') if (dtype is interp_dtype.get_dtype_cache(space).w_float64dtype or \ - dtype.kind == interp_dtype.SIGNEDLTR and \ - dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) \ + dtype.kind == interp_dtype.SIGNEDLTR and \ + dtype.itemtype.get_element_size() == 
rffi.sizeof(lltype.Signed)) \ and self.size: # Do not print dtype pass @@ -844,6 +844,8 @@ each line will begin with indent. ''' size = self.size + ccomma = ',' * comma + ncomma = ',' * (1 - comma) dtype = self.find_dtype() if size < 1: builder.append('[]') @@ -857,30 +859,28 @@ use_ellipsis = True ndims = len(self.shape) i = 0 - start = True builder.append('[') if ndims > 1: if use_ellipsis: - for i in range(3): - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + for i in range(min(3, self.shape[0])): + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) view = self.create_slice([(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) - builder.append('\n' + indent + '..., ') - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + builder.append(ccomma +'\n' + indent + '...' + ncomma) + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) @@ -891,30 +891,29 @@ use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: - spacer = ',' * comma + ' ' + spacer = ccomma + ' ' item = self.start # An iterator would be a nicer way to walk along the 1d array, but # how do I reset it if printing ellipsis? iterators have no # "set_offset()" i = 0 if use_ellipsis: - for i in range(3): - if start: - start = False - else: + for i in range(min(3, self.shape[0])): + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] - # Add a comma only if comma is False - this prevents adding two - # commas - builder.append(spacer + '...' + ',' * (1 - comma)) - # Ugly, but can this be done with an iterator? 
- item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + # Add a comma only if comma is False - this prevents adding + # two commas + builder.append(spacer + '...' + ncomma) + # Ugly, but can this be done with an iterator? + item = self.start + self.backstrides[0] - 2 * self.strides[0] + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -158,6 +158,7 @@ assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): from numpypy import ndarray, array, dtype @@ -725,19 +726,19 @@ a = identity(0) assert len(a) == 0 assert a.dtype == dtype('float64') - assert a.shape == (0,0) + assert a.shape == (0, 0) b = identity(1, dtype=int32) assert len(b) == 1 assert b[0][0] == 1 - assert b.shape == (1,1) + assert b.shape == (1, 1) assert b.dtype == dtype('int32') c = identity(2) - assert c.shape == (2,2) - assert (c == [[1,0],[0,1]]).all() + assert c.shape == (2, 2) + assert (c == [[1, 0], [0, 1]]).all() d = identity(3, dtype='int32') - assert d.shape == (3,3) + assert d.shape == (3, 3) assert d.dtype == dtype('int32') - assert (d == [[1,0,0],[0,1,0],[0,0,1]]).all() + assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all() def test_prod(self): from numpypy import array @@ -954,13 +955,13 @@ def test_tolist_view(self): from numpypy import array - a = array([[1,2],[3,4]]) + a = array([[1, 2], [3, 4]]) assert (a + a).tolist() == [[2, 4], [6, 8]] def test_tolist_slice(self): from numpypy import array a = array([[17.1, 27.2], 
[40.3, 50.3]]) - assert a[:,0].tolist() == [17.1, 40.3] + assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] @@ -1090,11 +1091,11 @@ from numpypy import zeros, ones a = zeros((3, 3)) b = ones((3, 3)) - a[:,1:3] = b[:,1:3] + a[:, 1:3] = b[:, 1:3] assert (a == [[0, 1, 1], [0, 1, 1], [0, 1, 1]]).all() a = zeros((3, 3)) b = ones((3, 3)) - a[:,::2] = b[:,::2] + a[:, ::2] = b[:, ::2] assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all() def test_broadcast_ufunc(self): @@ -1233,6 +1234,7 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct @@ -1275,17 +1277,17 @@ assert g[1] == 2 assert g[2] == 3 h = fromstring("1, , 2, 3", dtype=uint8, sep=",") - assert (h == [1,0,2,3]).all() + assert (h == [1, 0, 2, 3]).all() i = fromstring("1 2 3", dtype=uint8, sep=" ") - assert (i == [1,2,3]).all() + assert (i == [1, 2, 3]).all() j = fromstring("1\t\t\t\t2\t3", dtype=uint8, sep="\t") - assert (j == [1,2,3]).all() + assert (j == [1, 2, 3]).all() k = fromstring("1,x,2,3", dtype=uint8, sep=",") - assert (k == [1,0]).all() + assert (k == [1, 0]).all() l = fromstring("1,x,2,3", dtype='float32', sep=",") - assert (l == [1.0,-1.0]).all() + assert (l == [1.0, -1.0]).all() m = fromstring("1,,2,3", sep=",") - assert (m == [1.0,-1.0,2.0,3.0]).all() + assert (m == [1.0, -1.0, 2.0, 3.0]).all() n = fromstring("3.4 2.0 3.8 2.2", dtype=int32, sep=" ") assert (n == [3]).all() o = fromstring("1.0 2f.0f 3.8 2.2", dtype=float32, sep=" ") @@ -1333,7 +1335,6 @@ j = fromstring(self.ulongval, dtype='L') assert j[0] == 12 - def test_fromstring_invalid(self): from numpypy import fromstring, uint16, uint8, int32 #default dtype is 64-bit float, so 3 bytes should fail @@ -1374,7 +1375,7 @@ assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import array, zeros + from numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == 
'''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1387,6 +1388,16 @@ [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]])''' + a = arange(1002).reshape((2, 501)) + assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500], + [501, 502, 503, ..., 999, 1000, 1001]])''' + assert repr(a.T) == '''array([[0, 501], + [1, 502], + [2, 503], + ..., + [498, 999], + [499, 1000], + [500, 1001]])''' def test_repr_slice(self): from numpypy import array, zeros @@ -1430,7 +1441,7 @@ a = zeros((400, 400), dtype=int) assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \ + " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \ " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" a = zeros((2, 2, 2)) r = str(a) From noreply at buildbot.pypy.org Tue Jan 3 10:02:04 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 3 Jan 2012 10:02:04 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Add myself Message-ID: <20120103090204.D518C82B1C@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: extradoc Changeset: r3995:abf475e17259 Date: 2012-01-03 10:00 +0100 http://bitbucket.org/pypy/extradoc/changeset/abf475e17259/ Log: Add myself diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt --- a/sprintinfo/leysin-winter-2012/people.txt +++ b/sprintinfo/leysin-winter-2012/people.txt @@ -11,6 +11,7 @@ Name Arrive/Depart Accomodation ==================== ============== ======================= Armin Rigo private +David Schneider 17/22 ermina ==================== ============== ======================= From noreply at buildbot.pypy.org Tue Jan 3 10:50:07 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 3 Jan 2012 10:50:07 +0100 (CET) Subject: [pypy-commit] pypy default: minor simplifications and fixes that were laying around in my wc Message-ID: <20120103095007.307F882B1C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: 
r50982:dfdc41ac371c Date: 2012-01-03 11:48 +0200 http://bitbucket.org/pypy/pypy/changeset/dfdc41ac371c/ Log: minor simplifications and fixes that were laying around in my wc diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -8,10 +8,12 @@ from pypy.tool import logparser from pypy.jit.tool.jitoutput import parse_prof from pypy.module.pypyjit.test_pypy_c.model import (Log, find_ids_range, - find_ids, TraceWithIds, + find_ids, OpMatcher, InvalidMatch) class BaseTestPyPyC(object): + log_string = 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary' + def setup_class(cls): if '__pypy__' not in sys.builtin_module_names: py.test.skip("must run this test with pypy") @@ -52,8 +54,7 @@ cmdline += ['--jit', ','.join(jitcmdline)] cmdline.append(str(self.filepath)) # - print cmdline, logfile - env={'PYPYLOG': 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary:' + str(logfile)} + env={'PYPYLOG': self.log_string + ':' + str(logfile)} pipe = subprocess.Popen(cmdline, env=env, stdout=subprocess.PIPE, diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py --- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py +++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py @@ -98,7 +98,8 @@ end = time.time() return end - start # - log = self.run(main, [get_libc_name(), 200], threshold=150) + log = self.run(main, [get_libc_name(), 200], threshold=150, + import_site=True) assert 1 <= log.result <= 1.5 # at most 0.5 seconds of overhead loops = log.loops_by_id('sleep') assert len(loops) == 1 # make sure that we actually JITted the loop @@ -121,7 +122,7 @@ return fabs._ptr.getaddr(), x libm_name = get_libm_name(sys.platform) - log = self.run(main, [libm_name]) + log = self.run(main, [libm_name], import_site=True) fabs_addr, res = log.result assert res == -4.0 loop, = 
log.loops_by_filename(self.filepath) diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -15,7 +15,7 @@ i += letters[i % len(letters)] == uletters[i % len(letters)] return i - log = self.run(main, [300]) + log = self.run(main, [300], import_site=True) assert log.result == 300 loop, = log.loops_by_filename(self.filepath) assert loop.match(""" @@ -55,7 +55,7 @@ i += int(long(string.digits[i % len(string.digits)], 16)) return i - log = self.run(main, [1100]) + log = self.run(main, [1100], import_site=True) assert log.result == main(1100) loop, = log.loops_by_filename(self.filepath) assert loop.match(""" diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -185,7 +185,10 @@ return self.code.map[self.bytecode_no] def getlineno(self): - return self.getopcode().lineno + code = self.getopcode() + if code is None: + return None + return code.lineno lineno = property(getlineno) def getline_starts_here(self): diff --git a/pypy/tool/jitlogparser/storage.py b/pypy/tool/jitlogparser/storage.py --- a/pypy/tool/jitlogparser/storage.py +++ b/pypy/tool/jitlogparser/storage.py @@ -6,7 +6,6 @@ import py import os from lib_pypy.disassembler import dis -from pypy.tool.jitlogparser.parser import Function from pypy.tool.jitlogparser.module_finder import gather_all_code_objs class LoopStorage(object): From noreply at buildbot.pypy.org Tue Jan 3 10:50:08 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 3 Jan 2012 10:50:08 +0100 (CET) Subject: [pypy-commit] pypy default: (mikefc) add an ndim attribute Message-ID: <20120103095008.BD49682B1C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r50983:ac51513392d3 Date: 2012-01-03 11:49 +0200 
http://bitbucket.org/pypy/pypy/changeset/ac51513392d3/ Log: (mikefc) add an ndim attribute diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -380,6 +380,9 @@ def descr_get_dtype(self, space): return space.wrap(self.find_dtype()) + def descr_get_ndim(self, space): + return space.wrap(len(self.shape)) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -1185,6 +1188,7 @@ shape = GetSetProperty(BaseArray.descr_get_shape, BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), + ndim = GetSetProperty(BaseArray.descr_get_ndim), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -179,6 +179,19 @@ ar = array(range(5)) assert type(ar) is type(ar + ar) + def test_ndim(self): + from numpypy import array + x = array(0.2) + assert x.ndim == 0 + x = array([1,2]) + assert x.ndim == 1 + x = array([[1,2], [3,4]]) + assert x.ndim == 2 + x = array([[[1,2], [3,4]], [[5,6], [7,8]] ]) + assert x.ndim == 3 + # numpy actually raises an AttributeError, but numpypy raises an AttributeError + raises (TypeError, 'x.ndim=3') + def test_init(self): from numpypy import zeros a = zeros(15) From noreply at buildbot.pypy.org Tue Jan 3 12:16:05 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 3 Jan 2012 12:16:05 +0100 (CET) Subject: [pypy-commit] pypy jit-usable_retrace_2: Generalize a bit harder by killing intbounds and constants at end of bridges forcing retracing. 
Message-ID: <20120103111605.229D982B1C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-usable_retrace_2 Changeset: r50984:96d5bc2c492d Date: 2012-01-03 11:33 +0100 http://bitbucket.org/pypy/pypy/changeset/96d5bc2c492d/ Log: Generalize a bit harder by killing intbounds and constants at end of bridges forcing retracing. diff --git a/pypy/jit/metainterp/optimizeopt/generalize.py b/pypy/jit/metainterp/optimizeopt/generalize.py --- a/pypy/jit/metainterp/optimizeopt/generalize.py +++ b/pypy/jit/metainterp/optimizeopt/generalize.py @@ -5,15 +5,20 @@ self.optimizer = optimizer def apply(self): - raise NotImplementedError + for v in self.optimizer.values.values(): + self._apply(v) class KillHugeIntBounds(GeneralizationStrategy): - def apply(self): - for v in self.optimizer.values.values(): - if v.is_constant(): - continue + def _apply(self, v): + if not v.is_constant(): if v.intbound.lower < MININT/2: v.intbound.lower = MININT if v.intbound.upper > MAXINT/2: v.intbound.upper = MAXINT +class KillIntBounds(GeneralizationStrategy): + def _apply(self, v): + if not v.is_constant(): + v.intbound.lower = MININT + v.intbound.upper = MAXINT + diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -125,6 +125,12 @@ return self.box def force_at_end_of_preamble(self, already_forced, optforce): + if optforce.optimizer.kill_consts_at_end_of_preamble and self.is_constant(): + constbox = self.box + box = constbox.clonebox() + op = ResOperation(rop.SAME_AS, [constbox], box) + optforce.optimizer._newoperations.append(op) + return OptValue(box) return self def get_args_for_fail(self, modifier): @@ -352,6 +358,7 @@ self.optimizer = self self.optpure = None self.optearlyforce = None + self.kill_consts_at_end_of_preamble = False if loop is not None: self.call_pure_results = loop.call_pure_results diff --git 
a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -5,7 +5,7 @@ from pypy.jit.metainterp.jitexc import JitException from pypy.jit.metainterp.optimize import InvalidLoop from pypy.jit.metainterp.optimizeopt.optimizer import * -from pypy.jit.metainterp.optimizeopt.generalize import KillHugeIntBounds +from pypy.jit.metainterp.optimizeopt.generalize import KillHugeIntBounds, KillIntBounds from pypy.jit.metainterp.inliner import Inliner from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.metainterp.resume import Snapshot @@ -137,12 +137,21 @@ self.close_bridge(start_label) self.optimizer.flush() - KillHugeIntBounds(self.optimizer).apply() + self.generalize_state(start_label, stop_label) loop.operations = self.optimizer.get_newoperations() self.export_state(stop_label) loop.operations.append(stop_label) + def generalize_state(self, start_label, stop_label): + if self.jump_to_start_label(start_label, stop_label): + # At the end of the preamble, don't generalize much + KillHugeIntBounds(self.optimizer).apply() + else: + # At the end of a bridge about to force a retrcae + KillIntBounds(self.optimizer).apply() + self.optimizer.kill_consts_at_end_of_preamble = True + def jump_to_start_label(self, start_label, stop_label): if not start_label or not stop_label: return False @@ -170,14 +179,16 @@ assert self.optimizer.loop.resume_at_jump_descr resume_at_jump_descr = self.optimizer.loop.resume_at_jump_descr.clone_if_mutable() assert isinstance(resume_at_jump_descr, ResumeGuardDescr) - resume_at_jump_descr.rd_snapshot = self.fix_snapshot(jump_args, resume_at_jump_descr.rd_snapshot) + resume_at_jump_descr.rd_snapshot = self.fix_snapshot(jump_args, + resume_at_jump_descr.rd_snapshot) modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(jump_args) values = [self.getvalue(arg) for arg in 
jump_args] inputargs = virtual_state.make_inputargs(values, self.optimizer) - short_inputargs = virtual_state.make_inputargs(values, self.optimizer, keyboxes=True) + short_inputargs = virtual_state.make_inputargs(values, self.optimizer, + keyboxes=True) if self.boxes_created_this_iteration is not None: diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -172,7 +172,8 @@ def check_target_token_count(self, count): tokens = get_stats().get_all_jitcell_tokens() - n = sum ([len(t.target_tokens) for t in tokens]) + n = sum ([len(t.target_tokens) for t in tokens + if t.target_tokens]) assert n == count def check_enter_count(self, count): diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py --- a/pypy/jit/metainterp/test/test_virtual.py +++ b/pypy/jit/metainterp/test/test_virtual.py @@ -902,7 +902,8 @@ assert res == f(10) self.check_aborted_count(0) self.check_target_token_count(3) - self.check_resops(int_mul=2) + self.check_trace_count(3) + self.check_resops(int_mul=3) def test_nested_loops_bridge(self): class Int(object): From noreply at buildbot.pypy.org Tue Jan 3 12:16:07 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 3 Jan 2012 12:16:07 +0100 (CET) Subject: [pypy-commit] pypy jit-usable_retrace_2: fix test Message-ID: <20120103111607.2E95882C04@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-usable_retrace_2 Changeset: r50985:8fc4e9a20794 Date: 2012-01-03 12:13 +0100 http://bitbucket.org/pypy/pypy/changeset/8fc4e9a20794/ Log: fix test diff --git a/pypy/jit/metainterp/test/test_send.py b/pypy/jit/metainterp/test/test_send.py --- a/pypy/jit/metainterp/test/test_send.py +++ b/pypy/jit/metainterp/test/test_send.py @@ -445,7 +445,7 @@ myjitdriver = JitDriver(greens = [], reds = ['node']) def f(n): node = Node(n) - while node.x > 0: + while node.x > -10: 
myjitdriver.can_enter_jit(node=node) myjitdriver.jit_merge_point(node=node) if node.x < 40: @@ -456,7 +456,7 @@ return node.x res = self.meta_interp(f, [55]) assert res == f(55) - self.check_trace_count(3) + self.check_trace_count(4) def test_three_classes(self): class Base: From noreply at buildbot.pypy.org Tue Jan 3 12:18:24 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 3 Jan 2012 12:18:24 +0100 (CET) Subject: [pypy-commit] pypy default: (mattip) merge numpy-repr-fix Message-ID: <20120103111824.4F51E82B1C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r50986:9a896c04e85a Date: 2012-01-03 13:17 +0200 http://bitbucket.org/pypy/pypy/changeset/9a896c04e85a/ Log: (mattip) merge numpy-repr-fix diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -412,7 +412,7 @@ def descr_repr(self, space): res = StringBuilder() res.append("array(") - concrete = self.get_concrete() + concrete = self.get_concrete_or_scalar() dtype = concrete.find_dtype() if not concrete.size: res.append('[]') @@ -424,9 +424,13 @@ res.append(')') else: concrete.to_str(space, 1, res, indent=' ') - if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \ - not self.size: + if (dtype is interp_dtype.get_dtype_cache(space).w_float64dtype or \ + dtype.kind == interp_dtype.SIGNEDLTR and \ + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) \ + and self.size: + # Do not print dtype + pass + else: res.append(", dtype=" + dtype.name) res.append(")") return space.wrap(res.build()) @@ -843,80 +847,80 @@ each line will begin with indent. 
''' size = self.size + ccomma = ',' * comma + ncomma = ',' * (1 - comma) + dtype = self.find_dtype() if size < 1: builder.append('[]') return + elif size == 1: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True - dtype = self.find_dtype() ndims = len(self.shape) i = 0 - start = True builder.append('[') if ndims > 1: if use_ellipsis: - for i in range(3): - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + for i in range(min(3, self.shape[0])): + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) - # create_slice requires len(chunks) > 1 in order to reduce - # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) - builder.append('\n' + indent + '..., ') - i = self.shape[0] - 3 + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) + if i < self.shape[0] - 1: + builder.append(ccomma +'\n' + indent + '...' 
+ ncomma) + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: - spacer = ',' * comma + ' ' + spacer = ccomma + ' ' item = self.start # An iterator would be a nicer way to walk along the 1d array, but # how do I reset it if printing ellipsis? iterators have no # "set_offset()" i = 0 if use_ellipsis: - for i in range(3): - if start: - start = False - else: + for i in range(min(3, self.shape[0])): + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] - # Add a comma only if comma is False - this prevents adding two - # commas - builder.append(spacer + '...' + ',' * (1 - comma)) - # Ugly, but can this be done with an iterator? - item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + # Add a comma only if comma is False - this prevents adding + # two commas + builder.append(spacer + '...' + ncomma) + # Ugly, but can this be done with an iterator? 
+ item = self.start + self.backstrides[0] - 2 * self.strides[0] + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] i += 1 - else: - builder.append('[') builder.append(']') @jit.unroll_safe diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -158,6 +158,7 @@ assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): from numpypy import ndarray, array, dtype @@ -738,19 +739,19 @@ a = identity(0) assert len(a) == 0 assert a.dtype == dtype('float64') - assert a.shape == (0,0) + assert a.shape == (0, 0) b = identity(1, dtype=int32) assert len(b) == 1 assert b[0][0] == 1 - assert b.shape == (1,1) + assert b.shape == (1, 1) assert b.dtype == dtype('int32') c = identity(2) - assert c.shape == (2,2) - assert (c == [[1,0],[0,1]]).all() + assert c.shape == (2, 2) + assert (c == [[1, 0], [0, 1]]).all() d = identity(3, dtype='int32') - assert d.shape == (3,3) + assert d.shape == (3, 3) assert d.dtype == dtype('int32') - assert (d == [[1,0,0],[0,1,0],[0,0,1]]).all() + assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all() def test_prod(self): from numpypy import array @@ -967,13 +968,13 @@ def test_tolist_view(self): from numpypy import array - a = array([[1,2],[3,4]]) + a = array([[1, 2], [3, 4]]) assert (a + a).tolist() == [[2, 4], [6, 8]] def test_tolist_slice(self): from numpypy import array a = array([[17.1, 27.2], [40.3, 50.3]]) - assert a[:,0].tolist() == [17.1, 40.3] + assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] @@ -1103,11 +1104,11 @@ from numpypy import zeros, ones a = zeros((3, 3)) b = 
ones((3, 3))
-        a[:,1:3] = b[:,1:3]
+        a[:, 1:3] = b[:, 1:3]
         assert (a == [[0, 1, 1], [0, 1, 1], [0, 1, 1]]).all()
         a = zeros((3, 3))
         b = ones((3, 3))
-        a[:,::2] = b[:,::2]
+        a[:, ::2] = b[:, ::2]
         assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all()
 
     def test_broadcast_ufunc(self):
@@ -1246,6 +1247,7 @@
         assert isinstance(i['data'][0], int)
         raises(TypeError, getattr, array(3), '__array_interface__')
 
+
 class AppTestSupport(BaseNumpyAppTest):
     def setup_class(cls):
         import struct
@@ -1288,17 +1290,17 @@
         assert g[1] == 2
         assert g[2] == 3
         h = fromstring("1, , 2, 3", dtype=uint8, sep=",")
-        assert (h == [1,0,2,3]).all()
+        assert (h == [1, 0, 2, 3]).all()
         i = fromstring("1 2 3", dtype=uint8, sep=" ")
-        assert (i == [1,2,3]).all()
+        assert (i == [1, 2, 3]).all()
         j = fromstring("1\t\t\t\t2\t3", dtype=uint8, sep="\t")
-        assert (j == [1,2,3]).all()
+        assert (j == [1, 2, 3]).all()
         k = fromstring("1,x,2,3", dtype=uint8, sep=",")
-        assert (k == [1,0]).all()
+        assert (k == [1, 0]).all()
         l = fromstring("1,x,2,3", dtype='float32', sep=",")
-        assert (l == [1.0,-1.0]).all()
+        assert (l == [1.0, -1.0]).all()
         m = fromstring("1,,2,3", sep=",")
-        assert (m == [1.0,-1.0,2.0,3.0]).all()
+        assert (m == [1.0, -1.0, 2.0, 3.0]).all()
         n = fromstring("3.4 2.0 3.8 2.2", dtype=int32, sep=" ")
         assert (n == [3]).all()
         o = fromstring("1.0 2f.0f 3.8 2.2", dtype=float32, sep=" ")
@@ -1346,7 +1348,6 @@
         j = fromstring(self.ulongval, dtype='L')
         assert j[0] == 12
 
-
     def test_fromstring_invalid(self):
         from numpypy import fromstring, uint16, uint8, int32
         #default dtype is 64-bit float, so 3 bytes should fail
@@ -1360,6 +1361,7 @@
 class AppTestRepr(BaseNumpyAppTest):
     def test_repr(self):
         from numpypy import array, zeros
+        intSize = array(5).dtype.itemsize
         a = array(range(5), float)
         assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])"
         a = array([], float)
@@ -1367,14 +1369,26 @@
         a = zeros(1001)
         assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])"
         a = array(range(5), long)
-        assert repr(a) == "array([0, 1, 2, 3, 4])"
+        if a.dtype.itemsize == intSize:
+            assert repr(a) == "array([0, 1, 2, 3, 4])"
+        else:
+            assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)"
+        a = array(range(5), 'int32')
+        if a.dtype.itemsize == intSize:
+            assert repr(a) == "array([0, 1, 2, 3, 4])"
+        else:
+            assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)"
         a = array([], long)
         assert repr(a) == "array([], dtype=int64)"
         a = array([True, False, True, False], "?")
         assert repr(a) == "array([True, False, True, False], dtype=bool)"
+        a = zeros([])
+        assert repr(a) == "array(0.0)"
+        a = array(0.2)
+        assert repr(a) == "array(0.2)"
 
     def test_repr_multi(self):
-        from numpypy import array, zeros
+        from numpypy import arange, zeros
         a = zeros((3, 4))
         assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
@@ -1387,6 +1401,16 @@
        [[0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]])'''
+        a = arange(1002).reshape((2, 501))
+        assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500],
+       [501, 502, 503, ..., 999, 1000, 1001]])'''
+        assert repr(a.T) == '''array([[0, 501],
+       [1, 502],
+       [2, 503],
+       ...,
+       [498, 999],
+       [499, 1000],
+       [500, 1001]])'''
 
     def test_repr_slice(self):
         from numpypy import array, zeros
@@ -1430,7 +1454,7 @@
         a = zeros((400, 400), dtype=int)
         assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \
-               " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \
+               " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \
                " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]"
         a = zeros((2, 2, 2))
         r = str(a)

From noreply at buildbot.pypy.org Tue Jan 3 12:18:25 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 3 Jan 2012 12:18:25 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-repr-fix: close merged branch
Message-ID: <20120103111825.76E1D82B1C@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: numpypy-repr-fix
Changeset: r50987:cef73b42fc52
Date: 2012-01-03 13:17 +0200
http://bitbucket.org/pypy/pypy/changeset/cef73b42fc52/

Log: close merged branch

From noreply at buildbot.pypy.org Tue Jan 3 12:25:08 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 3 Jan 2012 12:25:08 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: fix broken offset in PPC64 direct bootstrap code
Message-ID: <20120103112508.9664082B1C@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r50988:2044da143a09
Date: 2012-01-03 03:23 -0800
http://bitbucket.org/pypy/pypy/changeset/2044da143a09/

Log: fix broken offset in PPC64 direct bootstrap code

diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
--- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
@@ -404,8 +404,12 @@
         self.mc.free_scratch_reg()
 
         # load values passed on the stack to the corresponding locations
-        stack_position = self.OFFSET_SPP_TO_OLD_BACKCHAIN\
-                         + BACKCHAIN_SIZE * WORD
+        if IS_PPC_32:
+            stack_position = self.OFFSET_SPP_TO_OLD_BACKCHAIN\
+                             + BACKCHAIN_SIZE * WORD
+        else:
+            stack_position = self.OFFSET_SPP_TO_OLD_BACKCHAIN\
+                             + (BACKCHAIN_SIZE + MAX_REG_PARAMS) * WORD
 
         count = 0
         for i in range(reg_args, len(inputargs)):

From noreply at buildbot.pypy.org Tue Jan 3 12:25:09 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 3 Jan 2012 12:25:09 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: fix wrong guard condition in CALL_ASSEMBLER
Message-ID: <20120103112509.C2E7482B1C@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r50989:4da195ede6a6
Date: 2012-01-03 03:24 -0800
http://bitbucket.org/pypy/pypy/changeset/4da195ede6a6/

Log: fix wrong guard condition in CALL_ASSEMBLER

diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py
--- a/pypy/jit/backend/ppc/ppcgen/opassembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py
@@ -1001,7 +1001,7 @@
             self.mc.cror(2, 1, 2)
             self.mc.free_scratch_reg()
 
-        self._emit_guard(guard_op, regalloc._prepare_guard(guard_op), c.EQ)
+        self._emit_guard(guard_op, regalloc._prepare_guard(guard_op), c.LT)
 
     def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc):
         ENCODING_AREA = len(r.MANAGED_REGS) * WORD

From noreply at buildbot.pypy.org Tue Jan 3 13:04:29 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 3 Jan 2012 13:04:29 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: merge default
Message-ID: <20120103120429.A551882B1C@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r50990:ec5dfe7d9d1a
Date: 2012-01-03 13:18 +0200
http://bitbucket.org/pypy/pypy/changeset/ec5dfe7d9d1a/

Log: merge default

diff --git a/LICENSE b/LICENSE
--- a/LICENSE
+++ b/LICENSE
@@ -27,7 +27,7 @@
 DEALINGS IN THE SOFTWARE.
 
 
-PyPy Copyright holders 2003-2011
+PyPy Copyright holders 2003-2012
 -----------------------------------
 
 Except when otherwise stated (look for LICENSE files or information at
diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py
--- a/pypy/interpreter/eval.py
+++ b/pypy/interpreter/eval.py
@@ -98,7 +98,6 @@
         "Abstract. Get the expected number of locals."
         raise TypeError, "abstract"
 
-    @jit.dont_look_inside
     def fast2locals(self):
         # Copy values from the fastlocals to self.w_locals
         if self.w_locals is None:
@@ -112,7 +111,6 @@
                 w_name = self.space.wrap(name)
                 self.space.setitem(self.w_locals, w_name, w_value)
 
-    @jit.dont_look_inside
     def locals2fast(self):
         # Copy values from self.w_locals to the fastlocals
         assert self.w_locals is not None
diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py
--- a/pypy/interpreter/gateway.py
+++ b/pypy/interpreter/gateway.py
@@ -619,7 +619,8 @@
                                           self.descr_reqcls,
                                           args)
         except Exception, e:
-            raise self.handle_exception(space, e)
+            self.handle_exception(space, e)
+            w_result = None
         if w_result is None:
             w_result = space.w_None
         return w_result
@@ -655,7 +656,8 @@
                                           self.descr_reqcls,
                                           args)
         except Exception, e:
-            raise self.handle_exception(space, e)
+            self.handle_exception(space, e)
+            w_result = None
         if w_result is None:
             w_result = space.w_None
         return w_result
@@ -674,7 +676,8 @@
                                           self.descr_reqcls,
                                           args.prepend(w_obj))
         except Exception, e:
-            raise self.handle_exception(space, e)
+            self.handle_exception(space, e)
+            w_result = None
         if w_result is None:
             w_result = space.w_None
         return w_result
@@ -690,7 +693,8 @@
                 raise OperationError(space.w_SystemError,
                                      space.wrap("unexpected DescrMismatch error"))
         except Exception, e:
-            raise self.handle_exception(space, e)
+            self.handle_exception(space, e)
+            w_result = None
         if w_result is None:
             w_result = space.w_None
         return w_result
@@ -708,7 +712,8 @@
                                           self.descr_reqcls,
                                           Arguments(space, [w1]))
         except Exception, e:
-            raise self.handle_exception(space, e)
+            self.handle_exception(space, e)
+            w_result = None
         if w_result is None:
             w_result = space.w_None
         return w_result
@@ -726,7 +731,8 @@
                                           self.descr_reqcls,
                                           Arguments(space, [w1, w2]))
         except Exception, e:
-            raise self.handle_exception(space, e)
+            self.handle_exception(space, e)
+            w_result = None
         if w_result is None:
             w_result = space.w_None
         return w_result
@@ -744,7 +750,8 @@
                                           self.descr_reqcls,
                                           Arguments(space, [w1, w2, w3]))
         except Exception, e:
-            raise self.handle_exception(space, e)
+            self.handle_exception(space, e)
+            w_result = None
         if w_result is None:
             w_result = space.w_None
         return w_result
@@ -763,7 +770,8 @@
                                           Arguments(space,
                                                     [w1, w2, w3, w4]))
         except Exception, e:
-            raise self.handle_exception(space, e)
+            self.handle_exception(space, e)
+            w_result = None
         if w_result is None:
             w_result = space.w_None
         return w_result
diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py
--- a/pypy/jit/backend/x86/assembler.py
+++ b/pypy/jit/backend/x86/assembler.py
@@ -59,7 +59,8 @@
         self.is_guard_not_invalidated = is_guard_not_invalidated
 
 DEBUG_COUNTER = lltype.Struct('DEBUG_COUNTER', ('i', lltype.Signed),
-                              ('bridge', lltype.Signed), # 0 or 1
+                              ('type', lltype.Char), # 'b'ridge, 'l'abel or
+                                                     # 'e'ntry point
                               ('number', lltype.Signed))
 
 class Assembler386(object):
@@ -150,10 +151,12 @@
         debug_start('jit-backend-counts')
         for i in range(len(self.loop_run_counters)):
             struct = self.loop_run_counters[i]
-            if not struct.bridge:
+            if struct.type == 'l':
                 prefix = 'TargetToken(%d)' % struct.number
+            elif struct.type == 'b':
+                prefix = 'bridge ' + str(struct.number)
             else:
-                prefix = 'bridge ' + str(struct.number)
+                prefix = 'entry ' + str(struct.number)
             debug_print(prefix + ':' + str(struct.i))
         debug_stop('jit-backend-counts')
 
@@ -425,7 +428,7 @@
         self.setup(looptoken)
         if log:
             operations = self._inject_debugging_code(looptoken, operations,
-                                                     False, looptoken.number)
+                                                     'e', looptoken.number)
 
         regalloc = RegAlloc(self, self.cpu.translate_support_code)
         #
@@ -492,7 +495,7 @@
         self.setup(original_loop_token)
         if log:
             operations = self._inject_debugging_code(faildescr, operations,
-                                                     True, descr_number)
+                                                     'b', descr_number)
 
         arglocs = self.rebuild_faillocs_from_descr(failure_recovery)
         if not we_are_translated():
@@ -599,15 +602,15 @@
         return self.mc.materialize(self.cpu.asmmemmgr, allblocks,
                                    self.cpu.gc_ll_descr.gcrootmap)
 
-    def _register_counter(self, bridge, number, token):
+    def _register_counter(self, tp, number, token):
         # YYY very minor leak -- we need the counters to stay alive
         # forever, just because we want to report them at the end
         # of the process
         struct = lltype.malloc(DEBUG_COUNTER, flavor='raw',
                                track_allocation=False)
         struct.i = 0
-        struct.bridge = int(bridge)
-        if bridge:
+        struct.type = tp
+        if tp == 'b' or tp == 'e':
             struct.number = number
         else:
             assert token
@@ -657,8 +660,8 @@
         targettoken._x86_loop_code += rawstart
         self.target_tokens_currently_compiling = None
 
-    def _append_debugging_code(self, operations, bridge, number, token):
-        counter = self._register_counter(bridge, number, token)
+    def _append_debugging_code(self, operations, tp, number, token):
+        counter = self._register_counter(tp, number, token)
         c_adr = ConstInt(rffi.cast(lltype.Signed, counter))
         box = BoxInt()
         box2 = BoxInt()
@@ -670,7 +673,7 @@
         operations.extend(ops)
 
     @specialize.argtype(1)
-    def _inject_debugging_code(self, looptoken, operations, bridge, number):
+    def _inject_debugging_code(self, looptoken, operations, tp, number):
         if self._debug:
             # before doing anything, let's increase a counter
             s = 0
@@ -679,13 +682,12 @@
                 looptoken._x86_debug_checksum = s
 
             newoperations = []
-            if bridge:
-                self._append_debugging_code(newoperations, bridge, number,
-                                            None)
+            self._append_debugging_code(newoperations, tp, number,
+                                        None)
             for op in operations:
                 newoperations.append(op)
                 if op.getopnum() == rop.LABEL:
-                    self._append_debugging_code(newoperations, bridge, number,
+                    self._append_debugging_code(newoperations, 'l', number,
                                                 op.getdescr())
             operations = newoperations
         return operations
diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py
--- a/pypy/jit/backend/x86/test/test_runner.py
+++ b/pypy/jit/backend/x86/test/test_runner.py
@@ -420,8 +420,8 @@
             debug._log = None
         #
         assert ops_offset is looptoken._x86_ops_offset
-        # getfield_raw/int_add/setfield_raw + ops + None
-        assert len(ops_offset) == 3 + len(operations) + 1
+        # 2*(getfield_raw/int_add/setfield_raw) + ops + None
+        assert len(ops_offset) == 2*3 + len(operations) + 1
         assert (ops_offset[operations[0]] <=
                 ops_offset[operations[1]] <=
                 ops_offset[operations[2]] <=
@@ -546,13 +546,16 @@
             struct = self.cpu.assembler.loop_run_counters[0]
             assert struct.i == 1
             struct = self.cpu.assembler.loop_run_counters[1]
+            assert struct.i == 1
+            struct = self.cpu.assembler.loop_run_counters[2]
             assert struct.i == 9
             self.cpu.finish_once()
         finally:
             debug._log = None
+        l0 = ('debug_print', 'entry -1:1')
         l1 = ('debug_print', preambletoken.repr_of_descr() + ':1')
         l2 = ('debug_print', targettoken.repr_of_descr() + ':9')
-        assert ('jit-backend-counts', [l1, l2]) in dlog
+        assert ('jit-backend-counts', [l0, l1, l2]) in dlog
 
     def test_debugger_checksum(self):
         loop = """
diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py
--- a/pypy/jit/codewriter/support.py
+++ b/pypy/jit/codewriter/support.py
@@ -162,7 +162,6 @@
 _ll_4_list_setslice = rlist.ll_listsetslice
 _ll_2_list_delslice_startonly = rlist.ll_listdelslice_startonly
 _ll_3_list_delslice_startstop = rlist.ll_listdelslice_startstop
-_ll_1_list_list2fixed = lltypesystem_rlist.ll_list2fixed
 _ll_2_list_inplace_mul = rlist.ll_inplace_mul
 
 _ll_2_list_getitem_foldable = _ll_2_list_getitem
diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py
--- a/pypy/jit/metainterp/compile.py
+++ b/pypy/jit/metainterp/compile.py
@@ -112,33 +112,26 @@
     """
     from pypy.jit.metainterp.optimizeopt import optimize_trace
 
-    history = metainterp.history
     metainterp_sd = metainterp.staticdata
     jitdriver_sd = metainterp.jitdriver_sd
+    history = metainterp.history
 
-    if False:
-        part = partial_trace
-        assert False
-        procedur_token = metainterp.get_procedure_token(greenkey)
-        assert procedure_token
-        all_target_tokens = []
-    else:
-        jitcell_token = make_jitcell_token(jitdriver_sd)
-        part = create_empty_loop(metainterp)
-        part.inputargs = inputargs[:]
-        h_ops = history.operations
-        part.resume_at_jump_descr = resume_at_jump_descr
-        part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(jitcell_token))] + \
-                          [h_ops[i].clone() for i in range(start, len(h_ops))] + \
-                          [ResOperation(rop.LABEL, jumpargs, None, descr=jitcell_token)]
+    jitcell_token = make_jitcell_token(jitdriver_sd)
+    part = create_empty_loop(metainterp)
+    part.inputargs = inputargs[:]
+    h_ops = history.operations
+    part.resume_at_jump_descr = resume_at_jump_descr
+    part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(jitcell_token))] + \
+                      [h_ops[i].clone() for i in range(start, len(h_ops))] + \
+                      [ResOperation(rop.LABEL, jumpargs, None, descr=jitcell_token)]
 
-        try:
-            optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts)
-        except InvalidLoop:
-            return None
-        target_token = part.operations[0].getdescr()
-        assert isinstance(target_token, TargetToken)
-        all_target_tokens = [target_token]
+    try:
+        optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts)
+    except InvalidLoop:
+        return None
+    target_token = part.operations[0].getdescr()
+    assert isinstance(target_token, TargetToken)
+    all_target_tokens = [target_token]
 
     loop = create_empty_loop(metainterp)
     loop.inputargs = part.inputargs
@@ -176,10 +169,10 @@
     loop.original_jitcell_token = jitcell_token
     for label in all_target_tokens:
         assert isinstance(label, TargetToken)
-        label.original_jitcell_token = jitcell_token
         if label.virtual_state and label.short_preamble:
             metainterp_sd.logger_ops.log_short_preamble([], label.short_preamble)
     jitcell_token.target_tokens = all_target_tokens
+    propagate_original_jitcell_token(loop)
     send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop")
     record_loop_or_bridge(metainterp_sd, loop)
     return all_target_tokens[0]
@@ -247,11 +240,11 @@
     for box in loop.inputargs:
         assert isinstance(box, Box)
 
-    target_token = loop.operations[-1].getdescr()
+    target_token = loop.operations[-1].getdescr()
     resumekey.compile_and_attach(metainterp, loop)
+    target_token = label.getdescr()
     assert isinstance(target_token, TargetToken)
-    target_token.original_jitcell_token = loop.original_jitcell_token
     record_loop_or_bridge(metainterp_sd, loop)
     return target_token
@@ -288,6 +281,15 @@
     assert i == len(inputargs)
     loop.operations = extra_ops + loop.operations
 
+def propagate_original_jitcell_token(trace):
+    for op in trace.operations:
+        if op.getopnum() == rop.LABEL:
+            token = op.getdescr()
+            assert isinstance(token, TargetToken)
+            assert token.original_jitcell_token is None
+            token.original_jitcell_token = trace.original_jitcell_token
+
+
 def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type):
     vinfo = jitdriver_sd.virtualizable_info
     if vinfo is not None:
@@ -319,7 +321,10 @@
     metainterp_sd.stats.compiled()
     metainterp_sd.log("compiled new " + type)
     #
-    metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset)
+    loopname = jitdriver_sd.warmstate.get_location_str(greenkey)
+    metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n,
+                                      type, ops_offset,
+                                      name=loopname)
     #
     if metainterp_sd.warmrunnerdesc is not None:    # for tests
         metainterp_sd.warmrunnerdesc.memory_manager.keep_loop_alive(original_jitcell_token)
@@ -558,6 +563,7 @@
         inputargs = metainterp.history.inputargs
         if not we_are_translated():
             self._debug_suboperations = new_loop.operations
+        propagate_original_jitcell_token(new_loop)
         send_bridge_to_backend(metainterp.jitdriver_sd, metainterp.staticdata,
                                self, inputargs, new_loop.operations,
                                new_loop.original_jitcell_token)
@@ -744,6 +750,7 @@
     jitdriver_sd = metainterp.jitdriver_sd
     redargs = new_loop.inputargs
     new_loop.original_jitcell_token = jitcell_token = make_jitcell_token(jitdriver_sd)
+    propagate_original_jitcell_token(new_loop)
     send_loop_to_backend(self.original_greenkey, metainterp.jitdriver_sd,
                          metainterp_sd, new_loop, "entry bridge")
     # send the new_loop to warmspot.py, to be called directly the next time
diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py
--- a/pypy/jit/metainterp/heapcache.py
+++ b/pypy/jit/metainterp/heapcache.py
@@ -79,9 +79,9 @@
             opnum == rop.COPYSTRCONTENT or
             opnum == rop.COPYUNICODECONTENT):
             return
-        if rop._OVF_FIRST <= opnum <= rop._OVF_LAST:
-            return
-        if rop._NOSIDEEFFECT_FIRST <= opnum <= rop._NOSIDEEFFECT_LAST:
+        if (rop._OVF_FIRST <= opnum <= rop._OVF_LAST or
+            rop._NOSIDEEFFECT_FIRST <= opnum <= rop._NOSIDEEFFECT_LAST or
+            rop._GUARD_FIRST <= opnum <= rop._GUARD_LAST):
             return
         if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT:
             effectinfo = descr.get_extra_info()
diff --git a/pypy/jit/metainterp/logger.py b/pypy/jit/metainterp/logger.py
--- a/pypy/jit/metainterp/logger.py
+++ b/pypy/jit/metainterp/logger.py
@@ -13,14 +13,14 @@
         self.metainterp_sd = metainterp_sd
         self.guard_number = guard_number
 
-    def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None):
+    def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None, name=''):
         if type is None:
             debug_start("jit-log-noopt-loop")
             logops = self._log_operations(inputargs, operations, ops_offset)
             debug_stop("jit-log-noopt-loop")
         else:
             debug_start("jit-log-opt-loop")
-            debug_print("# Loop", number, ":", type,
+            debug_print("# Loop", number, '(%s)' % name , ":", type,
                         "with", len(operations), "ops")
             logops = self._log_operations(inputargs, operations, ops_offset)
             debug_stop("jit-log-opt-loop")
diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py
--- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py
+++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py
@@ -1,10 +1,13 @@
 from __future__ import with_statement
 from pypy.jit.metainterp.optimizeopt.test.test_util import (
-    LLtypeMixin, BaseTest, Storage, _sortboxes, FakeDescrWithSnapshot)
+    LLtypeMixin, BaseTest, Storage, _sortboxes, FakeDescrWithSnapshot,
+    FakeMetaInterpStaticData)
 from pypy.jit.metainterp.history import TreeLoop, JitCellToken, TargetToken
 from pypy.jit.metainterp.resoperation import rop, opname, ResOperation
 from pypy.jit.metainterp.optimize import InvalidLoop
 from py.test import raises
+from pypy.jit.metainterp.optimizeopt.optimizer import Optimization
+from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method
 
 class BaseTestMultiLabel(BaseTest):
     enable_opts = "intbounds:rewrite:virtualize:string:earlyforce:pure:heap:unroll"
@@ -84,6 +87,8 @@
 
         return optimized
 
+class OptimizeoptTestMultiLabel(BaseTestMultiLabel):
+
     def test_simple(self):
         ops = """
         [i1]
@@ -381,6 +386,66 @@
         """
         self.optimize_loop(ops, expected)
 
-class TestLLtype(BaseTestMultiLabel, LLtypeMixin):
+    def test_virtual_as_field_of_forced_box(self):
+        ops = """
+        [p0]
+        pv1 = new_with_vtable(ConstClass(node_vtable))
+        label(pv1, p0)
+        pv2 = new_with_vtable(ConstClass(node_vtable))
+        setfield_gc(pv2, pv1, descr=valuedescr)
+        jump(pv1, pv2)
+        """
+        with raises(InvalidLoop):
+            self.optimize_loop(ops, ops)
+
+class OptRenameStrlen(Optimization):
+    def propagate_forward(self, op):
+        dispatch_opt(self, op)
+
+    def optimize_STRLEN(self, op):
+        newop = op.clone()
+        newop.result = op.result.clonebox()
+        self.emit_operation(newop)
+        self.make_equal_to(op.result, self.getvalue(newop.result))
+
+dispatch_opt = make_dispatcher_method(OptRenameStrlen, 'optimize_',
+                                      default=OptRenameStrlen.emit_operation)
+
+class BaseTestOptimizerRenamingBoxes(BaseTestMultiLabel):
+
+    def _do_optimize_loop(self, loop, call_pure_results):
+        from pypy.jit.metainterp.optimizeopt.unroll import optimize_unroll
+        from pypy.jit.metainterp.optimizeopt.util import args_dict
+        from pypy.jit.metainterp.optimizeopt.pure import OptPure
+
+        self.loop = loop
+        loop.call_pure_results = args_dict()
+        metainterp_sd = FakeMetaInterpStaticData(self.cpu)
+        optimize_unroll(metainterp_sd, loop, [OptRenameStrlen(), OptPure()], True)
+
+    def test_optimizer_renaming_boxes(self):
+        ops = """
+        [p1]
+        i1 = strlen(p1)
+        label(p1)
+        i2 = strlen(p1)
+        i3 = int_add(i2, 7)
+        jump(p1)
+        """
+        expected = """
+        [p1]
+        i1 = strlen(p1)
+        label(p1, i1)
+        i11 = same_as(i1)
+        i2 = int_add(i11, 7)
+        jump(p1, i11)
+        """
+        self.optimize_loop(ops, expected)
+
+
+
+class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin):
     pass
+class TestOptimizerRenamingBoxesLLtype(BaseTestOptimizerRenamingBoxes, LLtypeMixin):
+    pass
diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py
--- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py
+++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py
@@ -7759,7 +7759,7 @@
         jump(i0, p0, i2)
         """
         self.optimize_loop(ops, expected)
-        
+
 
 class TestLLtype(OptimizeOptTest, LLtypeMixin):
     pass
diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py
--- a/pypy/jit/metainterp/optimizeopt/unroll.py
+++ b/pypy/jit/metainterp/optimizeopt/unroll.py
@@ -269,10 +269,8 @@
             # in case it does, we would prefer to be suboptimal in asm
             # to a fatal RPython exception.
             if newresult is not op.result and not newvalue.is_constant():
-                # XXX fix me?
-                #self.short_boxes.alias(newresult, op.result)
                 op = ResOperation(rop.SAME_AS, [op.result], newresult)
-                self.optimizer._newoperations = [op] + self.optimizer._newoperations
+                self.optimizer._newoperations.append(op)
 
         self.optimizer.flush()
         self.optimizer.emitting_dissabled = False
diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py
--- a/pypy/jit/metainterp/optimizeopt/virtualstate.py
+++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py
@@ -409,7 +409,13 @@
         if self.level == LEVEL_CONSTANT:
             return
         assert 0 <= self.position_in_notvirtuals
-        boxes[self.position_in_notvirtuals] = value.force_box(optimizer)
+        if optimizer:
+            box = value.force_box(optimizer)
+        else:
+            if value.is_virtual():
+                raise BadVirtualState
+            box = value.get_key_box()
+        boxes[self.position_in_notvirtuals] = box
 
     def _enum(self, virtual_state):
         if self.level == LEVEL_CONSTANT:
@@ -471,8 +477,14 @@
             optimizer = optimizer.optearlyforce
         assert len(values) == len(self.state)
         inputargs = [None] * len(self.notvirtuals)
+
+        # We try twice. The first time around we allow boxes to be forced
+        # which might change the virtual state if the box appear in more
+        # than one place among the inputargs.
         for i in range(len(values)):
             self.state[i].enum_forced_boxes(inputargs, values[i], optimizer)
+        for i in range(len(values)):
+            self.state[i].enum_forced_boxes(inputargs, values[i], None)
 
         if keyboxes:
             for i in range(len(values)):
diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -976,10 +976,13 @@
         self.verify_green_args(jitdriver_sd, greenboxes)
         self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.in_recursion,
                                greenboxes)
-        
+
         if self.metainterp.seen_loop_header_for_jdindex < 0:
-            if not jitdriver_sd.no_loop_header or not any_operation:
+            if not any_operation:
                 return
+            if self.metainterp.in_recursion or not self.metainterp.get_procedure_token(greenboxes, True):
+                if not jitdriver_sd.no_loop_header:
+                    return
             # automatically add a loop_header if there is none
             self.metainterp.seen_loop_header_for_jdindex = jdindex
         #
@@ -2052,9 +2055,15 @@
             from pypy.jit.metainterp.resoperation import opname
             raise NotImplementedError(opname[opnum])
 
-    def get_procedure_token(self, greenkey):
+    def get_procedure_token(self, greenkey, with_compiled_targets=False):
         cell = self.jitdriver_sd.warmstate.jit_cell_at_key(greenkey)
-        return cell.get_procedure_token()
+        token = cell.get_procedure_token()
+        if with_compiled_targets:
+            if not token:
+                return None
+            if not token.target_tokens:
+                return None
+        return token
 
     def compile_loop(self, original_boxes, live_arg_boxes, start, resume_at_jump_descr):
         num_green_args = self.jitdriver_sd.num_green_args
@@ -2087,11 +2096,9 @@
     def compile_trace(self, live_arg_boxes, resume_at_jump_descr):
         num_green_args = self.jitdriver_sd.num_green_args
         greenkey = live_arg_boxes[:num_green_args]
-        target_jitcell_token = self.get_procedure_token(greenkey)
+        target_jitcell_token = self.get_procedure_token(greenkey, True)
        if not target_jitcell_token:
             return
-        if not target_jitcell_token.target_tokens:
-            return
         self.history.record(rop.JUMP, live_arg_boxes[num_green_args:], None,
                             descr=target_jitcell_token)
diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py
--- a/pypy/jit/metainterp/test/test_ajit.py
+++ b/pypy/jit/metainterp/test/test_ajit.py
@@ -2697,7 +2697,7 @@
         # bridge back to the preamble of the first loop is produced. A guard in
         # this bridge is later traced resulting in a failed attempt of retracing
        # the second loop.
-        self.check_trace_count(8)
+        self.check_trace_count(9)
 
         # FIXME: Add a gloabl retrace counter and test that we are not trying more than 5 times.
 
diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py
--- a/pypy/jit/metainterp/test/test_compile.py
+++ b/pypy/jit/metainterp/test/test_compile.py
@@ -18,7 +18,7 @@
         self.seen.append((inputargs, operations, token))
 
 class FakeLogger(object):
-    def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None):
+    def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None, name=''):
         pass
 
     def repr_of_resop(self, op):
diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py
--- a/pypy/jit/metainterp/test/test_heapcache.py
+++ b/pypy/jit/metainterp/test/test_heapcache.py
@@ -255,6 +255,11 @@
         assert h.getarrayitem(box1, descr1, index1) is box2
         assert h.getarrayitem(box1, descr1, index2) is box4
 
+        h.invalidate_caches(rop.GUARD_TRUE, None, [])
+        assert h.getfield(box1, descr1) is box2
+        assert h.getarrayitem(box1, descr1, index1) is box2
+        assert h.getarrayitem(box1, descr1, index2) is box4
+
         h.invalidate_caches(
             rop.CALL_LOOPINVARIANT, FakeCallDescr(FakeEffektinfo.EF_LOOPINVARIANT), [])
diff --git a/pypy/jit/metainterp/test/test_logger.py b/pypy/jit/metainterp/test/test_logger.py
--- a/pypy/jit/metainterp/test/test_logger.py
+++ b/pypy/jit/metainterp/test/test_logger.py
@@ -180,7 +180,7 @@
     def test_intro_loop(self):
         bare_logger = logger.Logger(self.make_metainterp_sd())
         output = capturing(bare_logger.log_loop, [], [], 1, "foo")
-        assert output.splitlines()[0] == "# Loop 1 : foo with 0 ops"
+        assert output.splitlines()[0] == "# Loop 1 () : foo with 0 ops"
         pure_parse(output)
 
     def test_intro_bridge(self):
diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py
--- a/pypy/jit/metainterp/test/test_loop.py
+++ b/pypy/jit/metainterp/test/test_loop.py
@@ -756,7 +756,7 @@
         res = self.meta_interp(interpret, [1])
         assert res == interpret(1)
         # XXX it's unsure how many loops should be there
-        self.check_trace_count(3)
+        self.check_trace_count(2)
 
     def test_path_with_operations_not_from_start(self):
         jitdriver = JitDriver(greens = ['k'], reds = ['n', 'z'])
diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py
--- a/pypy/jit/metainterp/test/test_virtual.py
+++ b/pypy/jit/metainterp/test/test_virtual.py
@@ -880,7 +880,7 @@
             elif op == 'j':
                 j = Int(0)
             elif op == '+':
-                sa += i.val * j.val
+                sa += (i.val + 2) * (j.val + 2)
             elif op == 'a':
                 i = Int(i.val + 1)
             elif op == 'b':
@@ -902,6 +902,7 @@
         assert res == f(10)
         self.check_aborted_count(0)
         self.check_target_token_count(3)
+        self.check_resops(int_mul=2)
 
     def test_nested_loops_bridge(self):
         class Int(object):
diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py
--- a/pypy/module/micronumpy/__init__.py
+++ b/pypy/module/micronumpy/__init__.py
@@ -4,6 +4,7 @@
 class PyPyModule(MixedModule):
     interpleveldefs = {
         'debug_repr': 'interp_extras.debug_repr',
+        'remove_invalidates': 'interp_extras.remove_invalidates',
     }
     appleveldefs = {}
 
diff --git a/pypy/module/micronumpy/interp_extras.py b/pypy/module/micronumpy/interp_extras.py
--- a/pypy/module/micronumpy/interp_extras.py
+++ b/pypy/module/micronumpy/interp_extras.py
@@ -5,3 +5,11 @@
 @unwrap_spec(array=BaseArray)
 def debug_repr(space, array):
     return space.wrap(array.find_sig().debug_repr())
+
+ at unwrap_spec(array=BaseArray)
+def remove_invalidates(space, array):
+    """ Array modification will no longer invalidate any of it's
+    potential children. Use only for performance debugging
+    """
+    del array.invalidates[:]
+    return space.w_None
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -1,6 +1,6 @@
 from pypy.interpreter.baseobjspace import Wrappable
 from pypy.interpreter.error import OperationError, operationerrfmt
-from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped
+from pypy.interpreter.gateway import interp2app, NoneNotWrapped
 from pypy.interpreter.typedef import TypeDef, GetSetProperty
 from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature
 from pypy.module.micronumpy.strides import calculate_slice_strides
@@ -14,22 +14,26 @@
 numpy_driver = jit.JitDriver(
     greens=['shapelen', 'sig'],
     virtualizables=['frame'],
-    reds=['result_size', 'frame', 'ri', 'self', 'result']
+    reds=['result_size', 'frame', 'ri', 'self', 'result'],
+    get_printable_location=signature.new_printable_location('numpy'),
 )
 all_driver = jit.JitDriver(
     greens=['shapelen', 'sig'],
     virtualizables=['frame'],
-    reds=['frame', 'self', 'dtype']
+    reds=['frame', 'self', 'dtype'],
+    get_printable_location=signature.new_printable_location('all'),
 )
 any_driver = jit.JitDriver(
     greens=['shapelen', 'sig'],
     virtualizables=['frame'],
-    reds=['frame', 'self', 'dtype']
+    reds=['frame', 'self', 'dtype'],
+    get_printable_location=signature.new_printable_location('any'),
 )
 slice_driver = jit.JitDriver(
     greens=['shapelen', 'sig'],
     virtualizables=['frame'],
-    reds=['self', 'frame', 'source', 'res_iter']
+    reds=['self', 'frame', 'source', 'res_iter'],
+    get_printable_location=signature.new_printable_location('slice'),
 )
 
 def _find_shape_and_elems(space, w_iterable):
@@ -291,7 +295,8 @@
     def _reduce_argmax_argmin_impl(op_name):
         reduce_driver = jit.JitDriver(
             greens=['shapelen', 'sig'],
-            reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype']
+            reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'],
+            get_printable_location=signature.new_printable_location(op_name),
         )
         def loop(self):
             sig = self.find_sig()
@@ -375,6 +380,9 @@
     def descr_get_dtype(self, space):
         return space.wrap(self.find_dtype())
 
+    def descr_get_ndim(self, space):
+        return space.wrap(len(self.shape))
+
     @jit.unroll_safe
     def descr_get_shape(self, space):
         return space.newtuple([space.wrap(i) for i in self.shape])
@@ -404,7 +412,7 @@
     def descr_repr(self, space):
         res = StringBuilder()
         res.append("array(")
-        concrete = self.get_concrete()
+        concrete = self.get_concrete_or_scalar()
         dtype = concrete.find_dtype()
         if not concrete.size:
             res.append('[]')
@@ -416,9 +424,13 @@
             res.append(')')
         else:
             concrete.to_str(space, 1, res, indent=' ')
-        if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and
-            dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \
-            not self.size:
+        if (dtype is interp_dtype.get_dtype_cache(space).w_float64dtype or \
+            dtype.kind == interp_dtype.SIGNEDLTR and \
+            dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) \
+            and self.size:
+            # Do not print dtype
+            pass
+        else:
             res.append(", dtype=" + dtype.name)
         res.append(")")
         return space.wrap(res.build())
@@ -578,8 +590,8 @@
             strides.append(concrete.strides[i])
             backstrides.append(concrete.backstrides[i])
             shape.append(concrete.shape[i])
-        return space.wrap(W_NDimSlice(concrete.start, strides[:],
-                                      backstrides[:], shape[:], concrete))
+        return space.wrap(W_NDimSlice(concrete.start, strides,
+                                      backstrides, shape, concrete))
 
     def descr_get_flatiter(self, space):
         return space.wrap(W_FlatIterator(self))
@@ -820,8 +832,8 @@
         if self.order == 'C':
             strides.reverse()
             backstrides.reverse()
-        self.strides = strides[:]
-        self.backstrides = backstrides[:]
+        self.strides = strides
+        self.backstrides = backstrides
 
     def array_sig(self, res_shape):
         if res_shape is not None and self.shape != res_shape:
@@ -835,80 +847,80 @@
         each line will begin with indent.
         '''
         size = self.size
+        ccomma = ',' * comma
+        ncomma = ',' * (1 - comma)
+        dtype = self.find_dtype()
         if size < 1:
             builder.append('[]')
             return
+        elif size == 1:
+            builder.append(dtype.itemtype.str_format(self.getitem(0)))
+            return
         if size > 1000:
             # Once this goes True it does not go back to False for recursive
             # calls
             use_ellipsis = True
-        dtype = self.find_dtype()
         ndims = len(self.shape)
         i = 0
-        start = True
         builder.append('[')
         if ndims > 1:
             if use_ellipsis:
-                for i in range(3):
-                    if start:
-                        start = False
-                    else:
-                        builder.append(',' * comma + '\n')
-                    if ndims == 3:
+                for i in range(min(3, self.shape[0])):
+                    if i > 0:
+                        builder.append(ccomma + '\n')
+                    if ndims >= 3:
                         builder.append('\n' + indent)
                     else:
                         builder.append(indent)
-                    # create_slice requires len(chunks) > 1 in order to reduce
-                    # shape
-                    view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete()
-                    view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis)
-                builder.append('\n' + indent + '..., ')
-                i = self.shape[0] - 3
+                    view = self.create_slice([(i, 0, 0, 1)]).get_concrete()
+                    view.to_str(space, comma, builder, indent=indent + ' ',
+                                use_ellipsis=use_ellipsis)
+                if i < self.shape[0] - 1:
+                    builder.append(ccomma +'\n' + indent + '...' + ncomma)
+                    i = self.shape[0] - 3
+                else:
+                    i += 1
             while i < self.shape[0]:
-                if start:
-                    start = False
-                else:
-                    builder.append(',' * comma + '\n')
-                if ndims == 3:
+                if i > 0:
+                    builder.append(ccomma + '\n')
+                if ndims >= 3:
                     builder.append('\n' + indent)
                 else:
                     builder.append(indent)
                 # create_slice requires len(chunks) > 1 in order to reduce
                 # shape
-                view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete()
-                view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis)
+                view = self.create_slice([(i, 0, 0, 1)]).get_concrete()
+                view.to_str(space, comma, builder, indent=indent + ' ',
+                            use_ellipsis=use_ellipsis)
                 i += 1
         elif ndims == 1:
-            spacer = ',' * comma + ' '
+            spacer = ccomma + ' '
             item = self.start
             # An iterator would be a nicer way to walk along the 1d array, but
             # how do I reset it if printing ellipsis? iterators have no
             # "set_offset()"
             i = 0
             if use_ellipsis:
-                for i in range(3):
-                    if start:
-                        start = False
-                    else:
+                for i in range(min(3, self.shape[0])):
+                    if i > 0:
                         builder.append(spacer)
                     builder.append(dtype.itemtype.str_format(self.getitem(item)))
                     item += self.strides[0]
-                # Add a comma only if comma is False - this prevents adding two
-                # commas
-                builder.append(spacer + '...' + ',' * (1 - comma))
-                # Ugly, but can this be done with an iterator?
-                item = self.start + self.backstrides[0] - 2 * self.strides[0]
-                i = self.shape[0] - 3
+                if i < self.shape[0] - 1:
+                    # Add a comma only if comma is False - this prevents adding
+                    # two commas
+                    builder.append(spacer + '...' + ncomma)
+                    # Ugly, but can this be done with an iterator?
+ item = self.start + self.backstrides[0] - 2 * self.strides[0] + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] i += 1 - else: - builder.append('[') builder.append(']') @jit.unroll_safe @@ -1025,9 +1037,9 @@ strides.reverse() backstrides.reverse() new_shape.reverse() - self.strides = strides[:] - self.backstrides = backstrides[:] - self.shape = new_shape[:] + self.strides = strides + self.backstrides = backstrides + self.shape = new_shape return new_strides = calc_new_strides(new_shape, self.shape, self.strides) if new_strides is None: @@ -1037,7 +1049,7 @@ for nd in range(len(new_shape)): new_backstrides[nd] = (new_shape[nd] - 1) * new_strides[nd] self.strides = new_strides[:] - self.backstrides = new_backstrides[:] + self.backstrides = new_backstrides self.shape = new_shape[:] class W_NDimArray(ConcreteArray): @@ -1180,6 +1192,7 @@ shape = GetSetProperty(BaseArray.descr_get_shape, BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), + ndim = GetSetProperty(BaseArray.descr_get_ndim), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -1,9 +1,10 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, unwrap_spec +from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype, types -from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature, find_sig +from 
pypy.module.micronumpy import interp_boxes, interp_dtype +from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature,\ + find_sig, new_printable_location from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name @@ -11,7 +12,8 @@ reduce_driver = jit.JitDriver( greens = ['shapelen', "sig"], virtualizables = ["frame"], - reds = ["frame", "self", "dtype", "value", "obj"] + reds = ["frame", "self", "dtype", "value", "obj"], + get_printable_location=new_printable_location('reduce'), ) class W_Ufunc(Wrappable): diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -5,6 +5,11 @@ from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib.jit import hint, unroll_safe, promote +def new_printable_location(driver_name): + def get_printable_location(shapelen, sig): + return 'numpy ' + sig.debug_repr() + ' [%d dims,%s]' % (shapelen, driver_name) + return get_printable_location + def sigeq(one, two): return one.eq(two) diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -1,4 +1,9 @@ +from pypy.rlib import jit + + at jit.look_inside_iff(lambda shape, start, strides, backstrides, chunks: + jit.isconstant(len(chunks)) +) def calculate_slice_strides(shape, start, strides, backstrides, chunks): rstrides = [] rbackstrides = [] diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -158,6 +158,7 @@ assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): from numpypy 
import ndarray, array, dtype
@@ -179,6 +180,19 @@
         ar = array(range(5))
         assert type(ar) is type(ar + ar)

+    def test_ndim(self):
+        from numpypy import array
+        x = array(0.2)
+        assert x.ndim == 0
+        x = array([1,2])
+        assert x.ndim == 1
+        x = array([[1,2], [3,4]])
+        assert x.ndim == 2
+        x = array([[[1,2], [3,4]], [[5,6], [7,8]] ])
+        assert x.ndim == 3
+        # numpy actually raises an AttributeError, but numpypy raises a TypeError
+        raises(TypeError, 'x.ndim=3')
+
     def test_init(self):
         from numpypy import zeros
         a = zeros(15)
@@ -725,19 +739,19 @@
         a = identity(0)
         assert len(a) == 0
         assert a.dtype == dtype('float64')
-        assert a.shape == (0,0)
+        assert a.shape == (0, 0)
         b = identity(1, dtype=int32)
         assert len(b) == 1
         assert b[0][0] == 1
-        assert b.shape == (1,1)
+        assert b.shape == (1, 1)
         assert b.dtype == dtype('int32')
         c = identity(2)
-        assert c.shape == (2,2)
-        assert (c == [[1,0],[0,1]]).all()
+        assert c.shape == (2, 2)
+        assert (c == [[1, 0], [0, 1]]).all()
         d = identity(3, dtype='int32')
-        assert d.shape == (3,3)
+        assert d.shape == (3, 3)
         assert d.dtype == dtype('int32')
-        assert (d == [[1,0,0],[0,1,0],[0,0,1]]).all()
+        assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all()

     def test_prod(self):
         from numpypy import array
@@ -898,6 +912,15 @@
         b[0] = 3
         assert debug_repr(b) == 'Array'

+    def test_remove_invalidates(self):
+        from numpypy import array
+        from numpypy.pypy import remove_invalidates
+        a = array([1, 2, 3])
+        b = a + a
+        remove_invalidates(a)
+        a[0] = 14
+        assert b[0] == 28
+
     def test_virtual_views(self):
         from numpypy import arange
         a = arange(15)
@@ -945,13 +968,13 @@

     def test_tolist_view(self):
         from numpypy import array
-        a = array([[1,2],[3,4]])
+        a = array([[1, 2], [3, 4]])
         assert (a + a).tolist() == [[2, 4], [6, 8]]

     def test_tolist_slice(self):
         from numpypy import array
         a = array([[17.1, 27.2], [40.3, 50.3]])
-        assert a[:,0].tolist() == [17.1, 40.3]
+        assert a[:, 0].tolist() == [17.1, 40.3]
         assert a[0].tolist() == [17.1, 27.2]

@@ -1081,11 +1104,11 @@
         from
numpypy import zeros, ones a = zeros((3, 3)) b = ones((3, 3)) - a[:,1:3] = b[:,1:3] + a[:, 1:3] = b[:, 1:3] assert (a == [[0, 1, 1], [0, 1, 1], [0, 1, 1]]).all() a = zeros((3, 3)) b = ones((3, 3)) - a[:,::2] = b[:,::2] + a[:, ::2] = b[:, ::2] assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all() def test_broadcast_ufunc(self): @@ -1224,6 +1247,7 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct @@ -1266,17 +1290,17 @@ assert g[1] == 2 assert g[2] == 3 h = fromstring("1, , 2, 3", dtype=uint8, sep=",") - assert (h == [1,0,2,3]).all() + assert (h == [1, 0, 2, 3]).all() i = fromstring("1 2 3", dtype=uint8, sep=" ") - assert (i == [1,2,3]).all() + assert (i == [1, 2, 3]).all() j = fromstring("1\t\t\t\t2\t3", dtype=uint8, sep="\t") - assert (j == [1,2,3]).all() + assert (j == [1, 2, 3]).all() k = fromstring("1,x,2,3", dtype=uint8, sep=",") - assert (k == [1,0]).all() + assert (k == [1, 0]).all() l = fromstring("1,x,2,3", dtype='float32', sep=",") - assert (l == [1.0,-1.0]).all() + assert (l == [1.0, -1.0]).all() m = fromstring("1,,2,3", sep=",") - assert (m == [1.0,-1.0,2.0,3.0]).all() + assert (m == [1.0, -1.0, 2.0, 3.0]).all() n = fromstring("3.4 2.0 3.8 2.2", dtype=int32, sep=" ") assert (n == [3]).all() o = fromstring("1.0 2f.0f 3.8 2.2", dtype=float32, sep=" ") @@ -1324,7 +1348,6 @@ j = fromstring(self.ulongval, dtype='L') assert j[0] == 12 - def test_fromstring_invalid(self): from numpypy import fromstring, uint16, uint8, int32 #default dtype is 64-bit float, so 3 bytes should fail @@ -1338,6 +1361,7 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): from numpypy import array, zeros + intSize = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -1345,14 +1369,26 @@ a = zeros(1001) assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" a = 
array(range(5), long) - assert repr(a) == "array([0, 1, 2, 3, 4])" + if a.dtype.itemsize == intSize: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" + a = array(range(5), 'int32') + if a.dtype.itemsize == intSize: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" a = array([], long) assert repr(a) == "array([], dtype=int64)" a = array([True, False, True, False], "?") assert repr(a) == "array([True, False, True, False], dtype=bool)" + a = zeros([]) + assert repr(a) == "array(0.0)" + a = array(0.2) + assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import array, zeros + from numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1365,6 +1401,16 @@ [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]])''' + a = arange(1002).reshape((2, 501)) + assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500], + [501, 502, 503, ..., 999, 1000, 1001]])''' + assert repr(a.T) == '''array([[0, 501], + [1, 502], + [2, 503], + ..., + [498, 999], + [499, 1000], + [500, 1001]])''' def test_repr_slice(self): from numpypy import array, zeros @@ -1408,7 +1454,7 @@ a = zeros((400, 400), dtype=int) assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \ + " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \ " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" a = zeros((2, 2, 2)) r = str(a) diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -8,10 +8,12 @@ from pypy.tool import logparser from pypy.jit.tool.jitoutput import parse_prof from pypy.module.pypyjit.test_pypy_c.model import (Log, find_ids_range, - find_ids, TraceWithIds, + 
find_ids, OpMatcher, InvalidMatch) class BaseTestPyPyC(object): + log_string = 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary' + def setup_class(cls): if '__pypy__' not in sys.builtin_module_names: py.test.skip("must run this test with pypy") @@ -52,8 +54,7 @@ cmdline += ['--jit', ','.join(jitcmdline)] cmdline.append(str(self.filepath)) # - print cmdline, logfile - env={'PYPYLOG': 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary:' + str(logfile)} + env={'PYPYLOG': self.log_string + ':' + str(logfile)} pipe = subprocess.Popen(cmdline, env=env, stdout=subprocess.PIPE, diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py --- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py +++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py @@ -98,7 +98,8 @@ end = time.time() return end - start # - log = self.run(main, [get_libc_name(), 200], threshold=150) + log = self.run(main, [get_libc_name(), 200], threshold=150, + import_site=True) assert 1 <= log.result <= 1.5 # at most 0.5 seconds of overhead loops = log.loops_by_id('sleep') assert len(loops) == 1 # make sure that we actually JITted the loop @@ -121,7 +122,7 @@ return fabs._ptr.getaddr(), x libm_name = get_libm_name(sys.platform) - log = self.run(main, [libm_name]) + log = self.run(main, [libm_name], import_site=True) fabs_addr, res = log.result assert res == -4.0 loop, = log.loops_by_filename(self.filepath) diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -15,7 +15,7 @@ i += letters[i % len(letters)] == uletters[i % len(letters)] return i - log = self.run(main, [300]) + log = self.run(main, [300], import_site=True) assert log.result == 300 loop, = log.loops_by_filename(self.filepath) assert loop.match(""" @@ -55,7 +55,7 @@ i += int(long(string.digits[i % len(string.digits)], 16)) return i 
- log = self.run(main, [1100]) + log = self.run(main, [1100], import_site=True) assert log.result == main(1100) loop, = log.loops_by_filename(self.filepath) assert loop.match(""" diff --git a/pypy/module/sys/app.py b/pypy/module/sys/app.py --- a/pypy/module/sys/app.py +++ b/pypy/module/sys/app.py @@ -66,11 +66,11 @@ return None copyright_str = """ -Copyright 2003-2011 PyPy development team. +Copyright 2003-2012 PyPy development team. All Rights Reserved. For further information, see -Portions Copyright (c) 2001-2008 Python Software Foundation. +Portions Copyright (c) 2001-2012 Python Software Foundation. All Rights Reserved. Portions Copyright (c) 2000 BeOpen.com. diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -537,7 +537,7 @@ builder.append(by) builder.append_slice(input, upper, len(input)) else: - # An ok guess for the result size + # First compute the exact result size count = input.count(sub) if count > maxsplit and maxsplit > 0: count = maxsplit @@ -553,21 +553,16 @@ builder = StringBuilder(result_size) start = 0 sublen = len(sub) - first = True while maxsplit != 0: next = input.find(sub, start) if next < 0: break - if not first: - builder.append(by) - first = False builder.append_slice(input, start, next) + builder.append(by) start = next + sublen maxsplit -= 1 # NB. 
if it's already < 0, it stays < 0 - if not first: - builder.append(by) builder.append_slice(input, start, len(input)) return space.wrap(builder.build()) diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -528,6 +528,9 @@ set_param(driver, name1, int(value)) except ValueError: raise + break + else: + raise ValueError set_user_param._annspecialcase_ = 'specialize:arg(0)' diff --git a/pypy/rlib/rsre/rsre_jit.py b/pypy/rlib/rsre/rsre_jit.py --- a/pypy/rlib/rsre/rsre_jit.py +++ b/pypy/rlib/rsre/rsre_jit.py @@ -22,7 +22,7 @@ info = '%s/%d' % (info, args[debugprint[2]]) else: info = '' - return '%s%s %s' % (name, info, s) + return 're %s%s %s' % (name, info, s) # self.get_printable_location = get_printable_location diff --git a/pypy/rpython/lltypesystem/rlist.py b/pypy/rpython/lltypesystem/rlist.py --- a/pypy/rpython/lltypesystem/rlist.py +++ b/pypy/rpython/lltypesystem/rlist.py @@ -375,7 +375,6 @@ newitems = malloc(LIST.items.TO, n) rgc.ll_arraycopy(olditems, newitems, 0, 0, n) return newitems -ll_list2fixed.oopspec = 'list.list2fixed(l)' def ll_list2fixed_exact(l): ll_assert(l.length == len(l.items), "ll_list2fixed_exact: bad length") diff --git a/pypy/rpython/test/test_generator.py b/pypy/rpython/test/test_generator.py --- a/pypy/rpython/test/test_generator.py +++ b/pypy/rpython/test/test_generator.py @@ -54,6 +54,26 @@ res = self.interpret(f, [0]) assert res == 42 + def test_except_block(self): + def foo(): + raise ValueError + def g(a, b, c): + yield a + yield b + try: + foo() + except ValueError: + pass + yield c + def f(): + gen = g(3, 5, 8) + x = gen.next() * 100 + x += gen.next() * 10 + x += gen.next() + return x + res = self.interpret(f, []) + assert res == 358 + class TestLLtype(BaseTestGenerator, LLRtypeMixin): pass diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -24,27 +24,24 @@ self.failargs = 
failargs

     def getarg(self, i):
-        return self._getvar(self.args[i])
+        return self.args[i]

     def getargs(self):
-        return [self._getvar(v) for v in self.args]
+        return self.args[:]

     def getres(self):
-        return self._getvar(self.res)
+        return self.res

     def getdescr(self):
         return self.descr

-    def _getvar(self, v):
-        return v
-
     def is_guard(self):
         return self._is_guard

     def repr(self):
         args = self.getargs()
         if self.descr is not None:
-            args.append('descr=%s' % self.getdescr())
+            args.append('descr=%s' % self.descr)
         arglist = ', '.join(args)
         if self.res is not None:
             return '%s = %s(%s)' % (self.getres(), self.name, arglist)
@@ -53,8 +50,6 @@

     def __repr__(self):
         return self.repr()
-        ## return '<%s (%s)>' % (self.name, ', '.join([repr(a)
-        ##                                             for a in self.args]))

 class SimpleParser(OpParser):
@@ -146,18 +141,27 @@
     is_bytecode = True
     inline_level = None

-    def __init__(self, operations, storage):
-        if operations[0].name == 'debug_merge_point':
-            self.inline_level = int(operations[0].args[0])
-            m = re.search('<code object ([<>\w]+)\. file \'(.+?)\'\. line (\d+)> #(\d+) (\w+)',
-                          operations[0].args[1])
-            if m is None:
-                # a non-code loop, like StrLiteralSearch or something
-                self.bytecode_name = operations[0].args[1][1:-1]
-            else:
-                self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups()
-                self.startlineno = int(lineno)
-                self.bytecode_no = int(bytecode_no)
+    def parse_code_data(self, arg):
+        m = re.search('<code object ([<>\w]+)[\.,] file \'(.+?)\'[\.,] line (\d+)> #(\d+) (\w+)',
+                      arg)
+        if m is None:
+            # a non-code loop, like StrLiteralSearch or something
+            self.bytecode_name = arg
+        else:
+            self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups()
+            self.startlineno = int(lineno)
+            self.bytecode_no = int(bytecode_no)
+
+    def __init__(self, operations, storage, loopname):
+        for op in operations:
+            if op.name == 'debug_merge_point':
+                self.inline_level = int(op.args[0])
+                self.parse_code_data(op.args[1][1:-1])
+                break
+        else:
+            self.inline_level = 0
+            self.parse_code_data(loopname)
         self.operations = operations
         self.storage = storage
         self.code = storage.disassemble_code(self.filename, self.startlineno,
@@ -165,7 +169,7 @@

     def repr(self):
         if self.filename is None:
-            return "Unknown"
+            return self.bytecode_name
         return "%s, file '%s', line %d" % (self.name, self.filename,
                                            self.startlineno)
@@ -181,7 +185,10 @@
         return self.code.map[self.bytecode_no]

     def getlineno(self):
-        return self.getopcode().lineno
+        code = self.getopcode()
+        if code is None:
+            return None
+        return code.lineno
     lineno = property(getlineno)

     def getline_starts_here(self):
@@ -220,7 +227,8 @@
         self.storage = storage

     @classmethod
-    def from_operations(cls, operations, storage, limit=None, inputargs=''):
+    def from_operations(cls, operations, storage, limit=None, inputargs='',
+                        loopname=''):
         """ Slice given operation list into a chain of TraceForOpcode chunks.
Also detect inlined functions and make them Function """ @@ -246,13 +254,13 @@ for op in operations: if op.name == 'debug_merge_point': if so_far: - append_to_res(cls.TraceForOpcode(so_far, storage)) + append_to_res(cls.TraceForOpcode(so_far, storage, loopname)) if limit: break so_far = [] so_far.append(op) if so_far: - append_to_res(cls.TraceForOpcode(so_far, storage)) + append_to_res(cls.TraceForOpcode(so_far, storage, loopname)) # wrap stack back up if not stack: # no ops whatsoever @@ -300,7 +308,7 @@ def repr(self): if self.filename is None: - return "Unknown" + return self.chunks[0].bytecode_name return "%s, file '%s', line %d" % (self.name, self.filename, self.startlineno) @@ -385,18 +393,27 @@ parser.postprocess(loop, backend_tp=bname, backend_dump=dump, dump_start=start_ofs)) - loops.append(loop) + loops += split_trace(loop) return log, loops def split_trace(trace): - labels = [i for i, op in enumerate(trace.operations) - if op.name == 'label'] - labels = [0] + labels + [len(trace.operations) - 1] + labels = [0] + if trace.comment and 'Guard' in trace.comment: + descrs = ['bridge ' + re.search('Guard (\d+)', trace.comment).group(1)] + else: + descrs = ['entry ' + re.search('Loop (\d+)', trace.comment).group(1)] + for i, op in enumerate(trace.operations): + if op.name == 'label': + labels.append(i) + descrs.append(op.descr) + labels.append(len(trace.operations) - 1) parts = [] for i in range(len(labels) - 1): start, stop = labels[i], labels[i+1] part = copy(trace) part.operations = trace.operations[start : stop + 1] + part.descr = descrs[i] + part.comment = trace.comment parts.append(part) return parts @@ -407,11 +424,7 @@ lines = input[-1].splitlines() mapping = {} for loop in loops: - com = loop.comment - if 'Loop' in com: - mapping['loop ' + re.search('Loop (\d+)', com).group(1)] = loop - else: - mapping['bridge ' + re.search('Guard (\d+)', com).group(1)] = loop + mapping[loop.descr] = loop for line in lines: if line: num, count = line.split(':', 2) diff 
--git a/pypy/tool/jitlogparser/storage.py b/pypy/tool/jitlogparser/storage.py --- a/pypy/tool/jitlogparser/storage.py +++ b/pypy/tool/jitlogparser/storage.py @@ -6,7 +6,6 @@ import py import os from lib_pypy.disassembler import dis -from pypy.tool.jitlogparser.parser import Function from pypy.tool.jitlogparser.module_finder import gather_all_code_objs class LoopStorage(object): diff --git a/pypy/tool/jitlogparser/test/test_parser.py b/pypy/tool/jitlogparser/test/test_parser.py --- a/pypy/tool/jitlogparser/test/test_parser.py +++ b/pypy/tool/jitlogparser/test/test_parser.py @@ -1,6 +1,7 @@ from pypy.tool.jitlogparser.parser import (SimpleParser, TraceForOpcode, Function, adjust_bridges, - import_log, split_trace, Op) + import_log, split_trace, Op, + parse_log_counts) from pypy.tool.jitlogparser.storage import LoopStorage import py, sys @@ -32,23 +33,26 @@ ''') res = Function.from_operations(ops.operations, LoopStorage()) assert len(res.chunks) == 1 - assert res.chunks[0].repr() + assert 'SomeRandomStuff' in res.chunks[0].repr() def test_split(): ops = parse(''' [i0] + label() debug_merge_point(0, " #10 ADD") debug_merge_point(0, " #11 SUB") i1 = int_add(i0, 1) debug_merge_point(0, " #11 SUB") i2 = int_add(i1, 1) ''') - res = Function.from_operations(ops.operations, LoopStorage()) - assert len(res.chunks) == 3 + res = Function.from_operations(ops.operations, LoopStorage(), loopname='') + assert len(res.chunks) == 4 assert len(res.chunks[0].operations) == 1 - assert len(res.chunks[1].operations) == 2 + assert len(res.chunks[1].operations) == 1 assert len(res.chunks[2].operations) == 2 - assert res.chunks[2].bytecode_no == 11 + assert len(res.chunks[3].operations) == 2 + assert res.chunks[3].bytecode_no == 11 + assert res.chunks[0].bytecode_name == '' def test_inlined_call(): ops = parse(""" @@ -236,16 +240,46 @@ loop = parse(''' [i7] i9 = int_lt(i7, 1003) - label(i9) + label(i9, descr=grrr) guard_true(i9, descr=) [] i13 = getfield_raw(151937600, descr=) - label(i13) + 
label(i13, descr=asb) i19 = int_lt(i13, 1003) guard_true(i19, descr=) [] i113 = getfield_raw(151937600, descr=) ''') + loop.comment = 'Loop 0' parts = split_trace(loop) assert len(parts) == 3 assert len(parts[0].operations) == 2 assert len(parts[1].operations) == 4 assert len(parts[2].operations) == 4 + assert parts[1].descr == 'grrr' + assert parts[2].descr == 'asb' + +def test_parse_log_counts(): + loop = parse(''' + [i7] + i9 = int_lt(i7, 1003) + label(i9, descr=grrr) + guard_true(i9, descr=) [] + i13 = getfield_raw(151937600, descr=) + label(i13, descr=asb) + i19 = int_lt(i13, 1003) + guard_true(i19, descr=) [] + i113 = getfield_raw(151937600, descr=) + ''') + bridge = parse(''' + # bridge out of Guard 2 with 1 ops + [] + i0 = int_lt(1, 2) + finish(i0) + ''') + bridge.comment = 'bridge out of Guard 2 with 1 ops' + loop.comment = 'Loop 0' + loops = split_trace(loop) + split_trace(bridge) + input = ['grrr:123\nasb:12\nbridge 2:1234'] + parse_log_counts(input, loops) + assert loops[-1].count == 1234 + assert loops[1].count == 123 + assert loops[2].count == 12 diff --git a/pypy/translator/generator.py b/pypy/translator/generator.py --- a/pypy/translator/generator.py +++ b/pypy/translator/generator.py @@ -2,7 +2,7 @@ from pypy.objspace.flow.model import Variable, Constant, FunctionGraph from pypy.translator.unsimplify import insert_empty_startblock from pypy.translator.unsimplify import split_block -from pypy.translator.simplify import eliminate_empty_blocks +from pypy.translator.simplify import eliminate_empty_blocks, simplify_graph from pypy.tool.sourcetools import func_with_new_name from pypy.interpreter.argument import Signature @@ -64,6 +64,7 @@ def next(self): entry = self.current self.current = None + assert entry is not None # else, recursive generator invocation (next_entry, return_value) = func(entry) self.current = next_entry return return_value @@ -91,6 +92,10 @@ block.inputargs = [v_entry1] def tweak_generator_body_graph(Entry, graph): + # First, always 
run simplify_graph in order to reduce the number of + # variables passed around + simplify_graph(graph) + # assert graph.startblock.operations[0].opname == 'generator_mark' graph.startblock.operations.pop(0) # @@ -100,12 +105,20 @@ # mappings = [Entry] # + stopblock = Block([]) + v0 = Variable(); v1 = Variable() + stopblock.operations = [ + SpaceOperation('simple_call', [Constant(StopIteration)], v0), + SpaceOperation('type', [v0], v1), + ] + stopblock.closeblock(Link([v1, v0], graph.exceptblock)) + # for block in list(graph.iterblocks()): for exit in block.exits: if exit.target is graph.returnblock: - exit.args = [Constant(StopIteration), - Constant(StopIteration())] - exit.target = graph.exceptblock + exit.args = [] + exit.target = stopblock + assert block is not stopblock for index in range(len(block.operations)-1, -1, -1): op = block.operations[index] if op.opname == 'yield': diff --git a/pypy/translator/sandbox/pypy_interact.py b/pypy/translator/sandbox/pypy_interact.py --- a/pypy/translator/sandbox/pypy_interact.py +++ b/pypy/translator/sandbox/pypy_interact.py @@ -13,7 +13,8 @@ ATM this only works with PyPy translated with Boehm or the semispace or generation GCs. --timeout=N limit execution time to N (real-time) seconds. - --log=FILE log all user input into the FILE + --log=FILE log all user input into the FILE. + --verbose log all proxied system calls. 
Note that you can get readline-like behavior with a tool like 'ledit', provided you use enough -u options: @@ -26,18 +27,19 @@ from pypy.translator.sandbox.sandlib import SimpleIOSandboxedProc from pypy.translator.sandbox.sandlib import VirtualizedSandboxedProc from pypy.translator.sandbox.vfs import Dir, RealDir, RealFile -from pypy.tool.lib_pypy import LIB_ROOT +import pypy +LIB_ROOT = os.path.dirname(os.path.dirname(pypy.__file__)) class PyPySandboxedProc(VirtualizedSandboxedProc, SimpleIOSandboxedProc): - debug = True argv0 = '/bin/pypy-c' virtual_cwd = '/tmp' virtual_env = {} virtual_console_isatty = True - def __init__(self, executable, arguments, tmpdir=None): + def __init__(self, executable, arguments, tmpdir=None, debug=True): self.executable = executable = os.path.abspath(executable) self.tmpdir = tmpdir + self.debug = debug super(PyPySandboxedProc, self).__init__([self.argv0] + arguments, executable=executable) @@ -67,12 +69,13 @@ if __name__ == '__main__': from getopt import getopt # and not gnu_getopt! - options, arguments = getopt(sys.argv[1:], 't:h', + options, arguments = getopt(sys.argv[1:], 't:hv', ['tmp=', 'heapsize=', 'timeout=', 'log=', - 'help']) + 'verbose', 'help']) tmpdir = None timeout = None logfile = None + debug = False extraoptions = [] def help(): @@ -104,6 +107,8 @@ timeout = int(value) elif option == '--log': logfile = value + elif option in ['-v', '--verbose']: + debug = True elif option in ['-h', '--help']: help() else: @@ -113,7 +118,7 @@ help() sandproc = PyPySandboxedProc(arguments[0], extraoptions + arguments[1:], - tmpdir=tmpdir) + tmpdir=tmpdir, debug=debug) if timeout is not None: sandproc.settimeout(timeout, interrupt_main=True) if logfile is not None: diff --git a/pypy/translator/sandbox/sandlib.py b/pypy/translator/sandbox/sandlib.py --- a/pypy/translator/sandbox/sandlib.py +++ b/pypy/translator/sandbox/sandlib.py @@ -4,25 +4,29 @@ for the outer process, which can run CPython or PyPy. 
""" -import py import sys, os, posixpath, errno, stat, time -from pypy.tool.ansi_print import AnsiLog import subprocess from pypy.tool.killsubprocess import killsubprocess from pypy.translator.sandbox.vfs import UID, GID +import py -class MyAnsiLog(AnsiLog): - KW_TO_COLOR = { - 'call': ((34,), False), - 'result': ((34,), False), - 'exception': ((34,), False), - 'vpath': ((35,), False), - 'timeout': ((1, 31), True), - } +def create_log(): + """Make and return a log for the sandbox to use, if needed.""" + # These imports are local to avoid importing pypy if we don't need to. + from pypy.tool.ansi_print import AnsiLog -log = py.log.Producer("sandlib") -py.log.setconsumer("sandlib", MyAnsiLog()) + class MyAnsiLog(AnsiLog): + KW_TO_COLOR = { + 'call': ((34,), False), + 'result': ((34,), False), + 'exception': ((34,), False), + 'vpath': ((35,), False), + 'timeout': ((1, 31), True), + } + log = py.log.Producer("sandlib") + py.log.setconsumer("sandlib", MyAnsiLog()) + return log # Note: we use lib_pypy/marshal.py instead of the built-in marshal # for two reasons. The built-in module could be made to segfault @@ -30,8 +34,9 @@ # load(). Also, marshal.load(f) blocks with the GIL held when # f is a pipe with no data immediately avaialble, preventing the # _waiting_thread to run. -from pypy.tool.lib_pypy import import_from_lib_pypy -marshal = import_from_lib_pypy('marshal') +import pypy +marshal = py.path.local(pypy.__file__).join('..', '..', 'lib_pypy', + 'marshal.py').pyimport() # Non-marshal result types RESULTTYPE_STATRESULT = object() @@ -126,6 +131,7 @@ for the external functions xxx that you want to support. 
""" debug = False + log = None os_level_sandboxing = False # Linux only: /proc/PID/seccomp def __init__(self, args, executable=None): @@ -142,6 +148,9 @@ self.currenttimeout = None self.currentlyidlefrom = None + if self.debug: + self.log = create_log() + def withlock(self, function, *args, **kwds): lock = self.popenlock if lock is not None: @@ -169,7 +178,8 @@ if delay <= 0.0: break # expired! time.sleep(min(delay*1.001, 1)) - log.timeout("timeout!") + if self.log: + self.log.timeout("timeout!") self.kill() #if interrupt_main: # if hasattr(os, 'kill'): @@ -246,22 +256,22 @@ args = read_message(child_stdout) except EOFError, e: break - if self.debug and not self.is_spam(fnname, *args): - log.call('%s(%s)' % (fnname, + if self.log and not self.is_spam(fnname, *args): + self.log.call('%s(%s)' % (fnname, ', '.join([shortrepr(x) for x in args]))) try: answer, resulttype = self.handle_message(fnname, *args) except Exception, e: tb = sys.exc_info()[2] write_exception(child_stdin, e, tb) - if self.debug: + if self.log: if str(e): - log.exception('%s: %s' % (e.__class__.__name__, e)) + self.log.exception('%s: %s' % (e.__class__.__name__, e)) else: - log.exception('%s' % (e.__class__.__name__,)) + self.log.exception('%s' % (e.__class__.__name__,)) else: - if self.debug and not self.is_spam(fnname, *args): - log.result(shortrepr(answer)) + if self.log and not self.is_spam(fnname, *args): + self.log.result(shortrepr(answer)) try: write_message(child_stdin, 0) # error code - 0 for ok write_message(child_stdin, answer, resulttype) @@ -440,7 +450,8 @@ node = dirnode.join(name) else: node = dirnode - log.vpath('%r => %r' % (vpath, node)) + if self.log: + self.log.vpath('%r => %r' % (vpath, node)) return node def do_ll_os__ll_os_stat(self, vpathname): From noreply at buildbot.pypy.org Tue Jan 3 13:04:30 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 3 Jan 2012 13:04:30 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: remove unnecessary imports Message-ID: 
<20120103120430.D5FF382B1C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r50991:7f549487e540 Date: 2012-01-03 14:03 +0200 http://bitbucket.org/pypy/pypy/changeset/7f549487e540/ Log: remove unnecessary imports diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -6,7 +6,6 @@ from pypy.rlib.objectmodel import CDefinedIntSymbolic, keepalive_until_here, specialize from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.extregistry import ExtRegistryEntry -from pypy.tool.sourcetools import func_with_new_name DEBUG_ELIDABLE_FUNCTIONS = False @@ -628,7 +627,6 @@ def specialize_call(self, hop, **kwds_i): # XXX to be complete, this could also check that the concretetype # of the variables are the same for each of the calls. - from pypy.rpython.error import TyperError from pypy.rpython.lltypesystem import lltype driver = self.instance.im_self greens_v = [] From noreply at buildbot.pypy.org Tue Jan 3 13:36:12 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 3 Jan 2012 13:36:12 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: fix another bug in CALL_ASSEMBLER Message-ID: <20120103123612.BE5C282B1C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r50992:997593f72e51 Date: 2012-01-03 13:35 +0100 http://bitbucket.org/pypy/pypy/changeset/997593f72e51/ Log: fix another bug in CALL_ASSEMBLER diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -996,9 +996,8 @@ pmc.overwrite() self.mc.alloc_scratch_reg() + self.mc.load(r.SCRATCH.value, r.SPP.value, 0) self.mc.cmp_op(0, r.SCRATCH.value, 0, imm=True) - self.mc.load(r.SCRATCH.value, r.SPP.value, 0) - self.mc.cror(2, 1, 2) self.mc.free_scratch_reg() self._emit_guard(guard_op, regalloc._prepare_guard(guard_op), c.LT) From noreply at 
buildbot.pypy.org Tue Jan 3 13:43:32 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 3 Jan 2012 13:43:32 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: work out the API Message-ID: <20120103124332.1059182B1C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r50993:a8dc9a3fd739 Date: 2012-01-03 14:42 +0200 http://bitbucket.org/pypy/pypy/changeset/a8dc9a3fd739/ Log: work out the API diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -449,23 +449,6 @@ # special-cased by ExtRegistryEntry pass - def on_compile(self, logger, looptoken, operations, type, *greenargs): - """ A hook called when loop is compiled. Overwrite - for your own jitdriver if you want to do something special, like - call applevel code - """ - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - """ A hook called when a bridge is compiled. Overwrite - for your own jitdriver if you want to do something special - """ - - # note: if you overwrite this functions with the above signature it'll - # work, but the *greenargs is different for each jitdriver, so we - # can't share the same methods - del on_compile - del on_compile_bridge - def _make_extregistryentries(self): # workaround: we cannot declare ExtRegistryEntries for functions # used as methods of a frozen object, but we can attach the @@ -753,6 +736,32 @@ An instance of this class might be returned by the policy.get_jit_portal method in order to function. """ + def on_abort(self, reason, jitdriver, greenkey): + """ A hook called each time a loop is aborted with jitdriver and + greenkey where it started, reason is a string why it got aborted + """ + + def on_compile(self, jitdriver, logger, looptoken, operations, greenkey, + asmaddr, asmlen): + """ A hook called when loop is compiled. Overwrite + for your own jitdriver if you want to do something special, like + call applevel code. 
+ + jitdriver - an instance of jitdriver where tracing started + logger - an instance of jit.metainterp.logger.LogOperations + asmaddr - (int) raw address of assembler block + asmlen - assembler block length + """ + + def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations, + fail_descr_no, asmaddr, asmlen): + """ A hook called when a bridge is compiled. Overwrite + for your own jitdriver if you want to do something special + """ + + def get_stats(self): + """ Returns various statistics + """ class Entry(ExtRegistryEntry): _about_ = record_known_class From noreply at buildbot.pypy.org Tue Jan 3 14:03:30 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 3 Jan 2012 14:03:30 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: progress - have some more information on on_abort hook, kill some unnecessary Message-ID: <20120103130330.1A60482B1C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r50994:f7c24c0067c5 Date: 2012-01-03 14:59 +0200 http://bitbucket.org/pypy/pypy/changeset/f7c24c0067c5/ Log: progress - have some more information on on_abort hook, kill some unnecessary clutter diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -13,6 +13,9 @@ self.supports_floats = False self.supports_longlong = False self.supports_singlefloats = False + if portal is None: + from pypy.rlib.jit import JitPortal + portal = JitPortal() self.portal = portal def set_supports_floats(self, flag): diff --git a/pypy/jit/metainterp/jitdriver.py b/pypy/jit/metainterp/jitdriver.py --- a/pypy/jit/metainterp/jitdriver.py +++ b/pypy/jit/metainterp/jitdriver.py @@ -21,7 +21,6 @@ # self.portal_finishtoken... pypy.jit.metainterp.pyjitpl # self.index ... pypy.jit.codewriter.call # self.mainjitcode ... pypy.jit.codewriter.call - # self.on_compile ... 
pypy.jit.metainterp.warmstate # These attributes are read by the backend in CALL_ASSEMBLER: # self.assembler_helper_adr diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1793,7 +1793,10 @@ def aborted_tracing(self, reason): self.staticdata.profiler.count(reason) debug_print('~~~ ABORTING TRACING') - self.staticdata.warmrunnerdesc.on_abort(reason) + jd_sd = self.jitdriver_sd + greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args] + self.staticdata.warmrunnerdesc.portal.on_abort(reason, jd_sd.jitdriver, + greenkey) self.staticdata.stats.aborted() def blackhole_if_trace_too_long(self): diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py --- a/pypy/jit/metainterp/test/test_jitportal.py +++ b/pypy/jit/metainterp/test/test_jitportal.py @@ -9,7 +9,9 @@ reasons = [] class MyJitPortal(JitPortal): - def on_abort(self, reason): + def on_abort(self, reason, jitdriver, greenkey): + assert jitdriver is myjitdriver + assert len(greenkey) == 1 reasons.append(reason) portal = MyJitPortal() diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -206,12 +206,13 @@ vrefinfo = VirtualRefInfo(self) self.codewriter.setup_vrefinfo(vrefinfo) # + self.portal = policy.portal self.make_virtualizable_infos() self.make_exception_classes() self.make_driverhook_graphs() self.make_enter_functions() self.rewrite_jit_merge_points(policy) - self.make_portal_callbacks(policy.portal) + self.portal = policy.portal verbose = False # not self.cpu.translate_support_code self.codewriter.make_jitcodes(verbose=verbose) @@ -425,15 +426,6 @@ for jd in self.jitdrivers_sd: self.make_enter_function(jd) - def make_portal_callbacks(self, portal): - if portal is not None: - def on_abort(reason): - portal.on_abort(reason) - else: - def on_abort(reason): 
- pass - self.on_abort = on_abort - def make_enter_function(self, jd): from pypy.jit.metainterp.warmstate import WarmEnterState state = WarmEnterState(self, jd) From noreply at buildbot.pypy.org Tue Jan 3 14:03:32 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 3 Jan 2012 14:03:32 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: adjust pypyjit module to use the new API Message-ID: <20120103130332.09A2382B1C@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r50995:fb9acc31d2e9 Date: 2012-01-03 15:02 +0200 http://bitbucket.org/pypy/pypy/changeset/fb9acc31d2e9/ Log: adjust pypyjit module to use the new API diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -5,7 +5,7 @@ from pypy.jit.metainterp.jitprof import counter_names class PyPyPortal(JitPortal): - def on_abort(self, reason): + def on_abort(self, reason, jitdriver, greenkey): space = self.space cache = space.fromcache(Cache) if cache.in_recursion: diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -3,7 +3,7 @@ from pypy.conftest import gettestobjspace, option from pypy.interpreter.pycode import PyCode from pypy.interpreter.gateway import interp2app -from pypy.jit.metainterp.history import JitCellToken +from pypy.jit.metainterp.history import JitCellToken, ConstInt, ConstPtr from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.metainterp.logger import Logger from pypy.rpython.annlowlevel import (cast_instance_to_base_ptr, @@ -41,6 +41,7 @@ debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0)) guard_true(i3) [] """, namespace={'ptr0': code_gcref}).operations + greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] def interp_on_compile(): pypyjitdriver.on_compile(logger, JitCellToken(), 
oplist, 'loop', @@ -50,7 +51,7 @@ pypyjitdriver.on_compile_bridge(logger, JitCellToken(), oplist, 0) def interp_on_abort(): - pypy_portal.on_abort(ABORT_TOO_LONG) + pypy_portal.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey) cls.w_on_compile = space.wrap(interp2app(interp_on_compile)) cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) From noreply at buildbot.pypy.org Tue Jan 3 14:28:34 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 3 Jan 2012 14:28:34 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: adjust mplayer hack to also work with more modern mplayer Message-ID: <20120103132834.94A0B82B1C@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: extradoc Changeset: r3996:8bce6bae2607 Date: 2012-01-03 14:28 +0100 http://bitbucket.org/pypy/extradoc/changeset/8bce6bae2607/ Log: adjust mplayer hack to also work with more modern mplayer diff --git a/talk/iwtc11/benchmarks/image/io.py b/talk/iwtc11/benchmarks/image/io.py --- a/talk/iwtc11/benchmarks/image/io.py +++ b/talk/iwtc11/benchmarks/image/io.py @@ -1,4 +1,6 @@ import os, re, array +from subprocess import Popen, PIPE, STDOUT + def mplayer(Image, fn='tv://', options=''): f = os.popen('mplayer -really-quiet -noframedrop ' + options + ' ' @@ -19,18 +21,18 @@ def view(self, img): assert img.typecode == 'B' if not self.width: - self.mplayer = os.popen('mplayer -really-quiet -noframedrop - ' + - '2> /dev/null ', 'w') - self.mplayer.write('YUV4MPEG2 W%d H%d F100:1 Ip A1:1\n' % - (img.width, img.height)) + w, h = img.width, img.height + self.mplayer = Popen(['mplayer', '-', '-benchmark', + '-demuxer', 'rawvideo', + '-rawvideo', 'w=%d:h=%d:format=y8' % (w, h), + '-really-quiet'], + stdin=PIPE, stdout=PIPE, stderr=PIPE) + self.width = img.width self.height = img.height - self.color_data = array.array('B', [127]) * (img.width * img.height / 2) assert self.width == img.width assert self.height == img.height - self.mplayer.write('FRAME\n') - img.tofile(self.mplayer) - 
self.color_data.tofile(self.mplayer) + img.tofile(self.mplayer.stdin) default_viewer = MplayerViewer() From noreply at buildbot.pypy.org Tue Jan 3 14:29:33 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 3 Jan 2012 14:29:33 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): make test_memoryerror pass, factor out return code Message-ID: <20120103132934.002EE82B1C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r50996:be5c3642001f Date: 2012-01-03 14:29 +0100 http://bitbucket.org/pypy/pypy/changeset/be5c3642001f/ Log: (bivab, hager): make test_memoryerror pass, factor out return code diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -278,10 +278,9 @@ with Saved_Volatiles(mc): addr = self.cpu.get_on_leave_jitted_int(save_exception=True) mc.call(addr) - #mc.alloc_scratch_reg(self.cpu.propagate_exception_v) - #mc.mr(r.RES.value, r.SCRATCH.value) - #mc.free_scratch_reg() + mc.load_imm(r.RES, self.cpu.propagate_exception_v) + self._gen_epilogue(mc) mc.prepare_insts_blocks() self.propagate_exception_path = mc.materialize(self.cpu.asmmemmgr, []) @@ -320,6 +319,14 @@ # call decoding function mc.call(addr) + # generate return and restore registers + self._gen_epilogue(mc) + + mc.prepare_insts_blocks() + return mc.materialize(self.cpu.asmmemmgr, [], + self.cpu.gc_ll_descr.gcrootmap) + + def _gen_epilogue(self, mc): # save SPP in r5 # (assume that r5 has been written to failboxes) mc.mr(r.r5.value, r.SPP.value) @@ -336,9 +343,6 @@ # generated before we know how much space the entire frame will need. 
mc.addi(r.SP.value, r.r5.value, self.OFFSET_SPP_TO_OLD_BACKCHAIN) # restore old SP mc.blr() - mc.prepare_insts_blocks() - return mc.materialize(self.cpu.asmmemmgr, [], - self.cpu.gc_ll_descr.gcrootmap) def _save_managed_regs(self, mc): """ store managed registers in ENCODING AREA From noreply at buildbot.pypy.org Tue Jan 3 14:30:34 2012 From: noreply at buildbot.pypy.org (cfbolz) Date: Tue, 3 Jan 2012 14:30:34 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: this was done by the list-strategies branch Message-ID: <20120103133034.AB8A682B1C@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: extradoc Changeset: r3997:806e6da58235 Date: 2011-12-19 19:45 +0100 http://bitbucket.org/pypy/extradoc/changeset/806e6da58235/ Log: this was done by the list-strategies branch diff --git a/planning/jit.txt b/planning/jit.txt --- a/planning/jit.txt +++ b/planning/jit.txt @@ -86,8 +86,6 @@ - ((turn max(x, y)/min(x, y) into MAXSD, MINSD instructions when x and y are floats.)) (a mess, MAXSD/MINSD have different semantics WRT nan) -- list.pop() (with no arguments) calls into delitem, rather than recognizing that - no items need to be moved BACKEND TASKS ------------- From noreply at buildbot.pypy.org Tue Jan 3 14:30:35 2012 From: noreply at buildbot.pypy.org (cfbolz) Date: Tue, 3 Jan 2012 14:30:35 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: typo in talk slides Message-ID: <20120103133035.D171B82B1C@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: extradoc Changeset: r3998:f5f38f08d90d Date: 2011-12-19 19:45 +0100 http://bitbucket.org/pypy/extradoc/changeset/f5f38f08d90d/ Log: typo in talk slides diff --git a/talk/icooolps2011/talk/talk.tex b/talk/icooolps2011/talk/talk.tex --- a/talk/icooolps2011/talk/talk.tex +++ b/talk/icooolps2011/talk/talk.tex @@ -437,7 +437,7 @@ |{\color{gray}$index_1$ = Map.getindex($map_1$, "a")}| |{\color{gray}guard($index_1$ != -1)}| $storage_1$ = $inst_1$.storage -$result_1$ = $storage_1$[$index_1$}] 
+$result_1$ = $storage_1$[$index_1$] # $inst_1$.getfield("b") |{\color{gray}$map_2$ = $inst_1$.map| From noreply at buildbot.pypy.org Tue Jan 3 14:30:37 2012 From: noreply at buildbot.pypy.org (cfbolz) Date: Tue, 3 Jan 2012 14:30:37 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge Message-ID: <20120103133037.9BA5B82B1C@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: extradoc Changeset: r3999:a88377852aa3 Date: 2012-01-03 14:30 +0100 http://bitbucket.org/pypy/extradoc/changeset/a88377852aa3/ Log: merge diff --git a/blog/draft/matplotlib.rst b/blog/draft/matplotlib.rst new file mode 100644 --- /dev/null +++ b/blog/draft/matplotlib.rst @@ -0,0 +1,86 @@ +=================================== +Plotting using matplotlib from PyPy +=================================== + +**Big fat warning** This is just a proof of concept. It barely works. There are +missing pieces left and right, which were replaced with hacks so I can get this +to run and prove it's possible. Don't try this at home, especially your home. +You have been warned. + +There has been a lot of talking about PyPy not integrating well with the +current scientific Python ecosystem, and ``numpypy`` (a NumPy reimplementation +on top of pypy) was dubbed "a fancy array library". I'm going to show that +integration with this ecosystem is possible with our design. + +First, `the demo`_:: + + #!/usr/bin/env pypy + + # numpy, pypy version + import numpypy as numpy + # DRAGONS LIVE THERE (fortunately hidden) + from embed.emb import import_mod + + pylab = import_mod('matplotlib.pylab') + + if __name__ == '__main__': + a = numpy.arange(100, dtype=int) + b = numpy.sin(a) + pylab.plot(a, b) + pylab.show() + +And you get: + + XXX pic + +Now, how to reproduce it: + +* You need a PyPy without cpyext, I did not find a linker that would support + overriding symbols. 
Right now there are no nightlies like this, so you have
+  to compile it yourself, like::
+
+    ./translate.py -Ojit targetpypystandalone.py --withoutmod-cpyext
+
+  That would give you a PyPy that's unable to load some libraries like PIL, but
+  perfectly working otherwise.
+
+* Speaking of which, you need a reasonably recent PyPy.
+
+* The approach is generally portable; however, the implementation has been
+  tested only on 64bit linux. Few tweaks might be required.
+
+* You need to install python2.6, the python2.6 development headers, and have
+  numpy and matplotlib installed on that python.
+
+* You need a checkout of my `hacks directory`_ and put embedded on your
+  ``PYTHONPATH``; your pypy checkout also has to be on the ``PYTHONPATH``.
+
+Er wait, what happened?
+-----------------------
+
+What didn't happen is we did not reimplement matplotlib on top of PyPy. What
+did happen is we embed CPython inside of PyPy using ctypes. We instantiate it
+and follow the `embedding`_ tutorial for CPython. Since numpy arrays are not
+movable, we're able to pass around an integer that represents the memory
+address of the array data and reconstruct it in the embedded interpreter. Hence,
+with relatively little effort, we managed to reuse the same array data on both
+sides to plot an array. Easy, no?
+This approach can be extended to support anything that's not too tied to
+python objects. SciPy and matplotlib both fall into the same category,
+but probably the same strategy can be applied to anything, like GTK or QT.
+It's just a matter of extending a hack into a working library.
+
+To summarize, while we're busy making numpypy better and faster, it seems
+that all external libraries on the C side can be done using an embedded Python
+interpreter with relatively little effort. To get to that point, I spent
+a day and a half to learn how to embed CPython, with very little prior
+experience in the CPython APIs.
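The pointer-passing trick described in this draft can be tried inside a single ordinary CPython process using nothing but the standard library. The following sketch only illustrates the mechanism; it is not the actual code from the hack2 repository, and its names are purely illustrative:

```python
# Illustrative sketch only (not the real embed/matplotwrapper code):
# because the buffer does not move, its address -- a plain integer --
# is enough to rebuild a view of the same data "on the other side".
import array
import ctypes

a = array.array('d', [0.0] * 8)
addr, length = a.buffer_info()   # raw buffer address plus item count

# "the other side": wrap the very same memory, no copy involved
view = (ctypes.c_double * length).from_address(addr)
view[0] = 3.5

print(a[0])   # 3.5 -- both names share one chunk of memory
```

In the demo itself the two sides are a PyPy process and an embedded CPython interpreter rather than one process, but the integer address plays exactly the same role.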
Of course you should still keep as much as
+possible in PyPy to make it nice and fast :)
+
+Cheers,
+fijal
+
+.. _`hacks directory`: https://bitbucket.org/fijal/hack2
+.. _`the demo`: https://bitbucket.org/fijal/hack2/src/default/embed/embed/matplotwrapper.py
+.. _`embedding`: http://docs.python.org/extending/embedding.html
diff --git a/blog/draft/pycon-2012-teaser.rst b/blog/draft/pycon-2012-teaser.rst
new file mode 100644
--- /dev/null
+++ b/blog/draft/pycon-2012-teaser.rst
@@ -0,0 +1,38 @@
+Come see us at PyCon 2012
+=========================
+
+`PyCon 2012`_ is coming up in just a few short months, and PyPy will be well
+represented there. We'll be delivering a tutorial, two talks, plus we'll be
+around for the sprints.
+
+Here are the abstracts for the tutorials and talks:
+
+* **How to get the most out of your PyPy**, by Maciej Fijalkowski, Alex Gaynor
+  and Armin Rigo: For many applications PyPy can provide performance benefits
+  right out of the box. However, little details can push your application to
+  perform much better. In this tutorial we'll give you insights on how to push
+  PyPy to its limits. We'll focus on understanding the performance
+  characteristics of PyPy, and learning the analysis tools in order to maximize
+  your application's performance. *This is the tutorial.*
+
+* **Why PyPy by example**, by Maciej Fijalkowski, Alex Gaynor and Armin Rigo:
+  One of the goals of PyPy is to make existing Python code faster; however, an
+  even broader goal was to make it possible to write things in Python that
+  previously would have needed to be written in C or another low-level language.
+  This talk will show examples of this, and describe how they represent the
+  tremendous progress PyPy has made, and what it means for people looking to
+  use PyPy.
+
+* **How the PyPy JIT works**, by Benjamin Peterson: The Python community is
+  abuzz about the major speed gains PyPy can offer pure Python code. But how
+  does the PyPy JIT actually work?
This talk will discuss how the PyPy JIT is + implemented. It will include descriptions of the tracing, optimization, and + assembly generation phases. I will demonstrate each step with a example loop. + +If you have any questions let us know! We look forward to seeing people at +PyCon and chatting about PyPy and the entire Python ecosystem. + +See you there, +Maciej Fijalkowski, Alex Gaynor, Benjamin Peterson, Armin Rigo, and the entire PyPy team + +.. _`PyCon 2012`: https://us.pycon.org/2012/ diff --git a/blog/draft/screen0.png b/blog/draft/screen0.png new file mode 100644 index 0000000000000000000000000000000000000000..0f6d5f2e8bdb961c6fc893e6d745910f98792684 GIT binary patch [cut] diff --git a/planning/micronumpy.txt b/planning/micronumpy.txt --- a/planning/micronumpy.txt +++ b/planning/micronumpy.txt @@ -1,10 +1,6 @@ NEW TASKS --------- -- add in numpy.generic and the various subclasses, use them in returning - instances from subscripting (and possibly internally), also make them valid - for the dtype arguments (numpy-dtype-refactor branch) - - astype - a good sort function @@ -13,16 +9,20 @@ - endianness -- scalar types like numpy.int8 (numpy-dtype-refacotr branch) - -- add multi-dim arrays (numpy-multidim-shards branch) - - - will need to refactor some functions - - frompyfunc to create ufuncs from python functions - more ufuncs -- arange/linspace/other ranges +- linspace/other ranges -- numpy.flatiter array.flat and friends +- more attributes/methods on numpy.flatiter + +- axis= parameter to various methods + +- expose ndarray.ctypes + +- subclassing ndarray (instantiating subcalsses curently returns the wrong type) + + * keep subclass type when slicing, __array_finalize__ + + * ndarray.view diff --git a/sprintinfo/leysin-winter-2011/announcement.txt b/sprintinfo/leysin-winter-2012/announcement.txt copy from sprintinfo/leysin-winter-2011/announcement.txt copy to sprintinfo/leysin-winter-2012/announcement.txt --- 
a/sprintinfo/leysin-winter-2011/announcement.txt +++ b/sprintinfo/leysin-winter-2012/announcement.txt @@ -1,30 +1,23 @@ ===================================================================== - PyPy Leysin Winter Sprint (16-22nd January 2011) + PyPy Leysin Winter Sprint (15-22nd January 2012) ===================================================================== The next PyPy sprint will be in Leysin, Switzerland, for the -seventh time. This is a fully public sprint: newcomers and topics +eighth time. This is a fully public sprint: newcomers and topics other than those proposed below are welcome. ------------------------------ Goals and topics of the sprint ------------------------------ -* Now that we have released 1.4, and plan to release 1.4.1 soon - (possibly before the sprint), the sprint itself is going to be - mainly working on fixing issues reported by various users. Of - course this does not prevent people from showing up with a more - precise interest in mind. If there are newcomers, we will gladly - give introduction talks. +* Py3k: work towards supporting Python 3 in PyPy -* We will also work on polishing and merging the long-standing - branches that are around, which could eventually lead to the - next PyPy release. These branches are notably: +* NumPyPy: work towards supporting the numpy module in PyPy - - fast-forward (Python 2.7 support, by Benjamin, Amaury, and others) - - jit-unroll-loops (improve JITting of smaller loops, by Hakan) - - arm-backend (a JIT backend for ARM, by David) - - jitypes2 (fast ctypes calls with the JIT, by Antonio). +* JIT backends: integrate tests for ARM; look at the PowerPC 64; + maybe try again to write an LLVM- or GCC-based one + +* STM and STM-related topics; or the Concurrent Mark-n-Sweep GC * And as usual, the main side goal is to have fun in winter sports :-) We can take a day off for ski. @@ -33,8 +26,9 @@ Exact times ----------- -The work days should be 16-22 January 2011. 
People may arrive on -the 15th already and/or leave on the 23rd. +The work days should be 15-21 January 2011 (Sunday-Saturday). The +official plans are for people to arrive on the 14th or the 15th, and to +leave on the 22nd. ----------------------- Location & Accomodation @@ -56,13 +50,14 @@ expensive) and maybe the possibility to get a single room if you really want to. -Please register by svn: +Please register by Mercurial:: - http://codespeak.net/svn/pypy/extradoc/sprintinfo/leysin-winter-2011/people.txt + https://bitbucket.org/pypy/extradoc/ + https://bitbucket.org/pypy/extradoc/raw/extradoc/sprintinfo/leysin-winter-2012 -or on the pypy-sprint mailing list if you do not yet have check-in rights: +or on the pypy-dev mailing list if you do not yet have check-in rights: - http://codespeak.net/mailman/listinfo/pypy-sprint + http://mail.python.org/mailman/listinfo/pypy-dev You need a Swiss-to-(insert country here) power adapter. There will be some Swiss-to-EU adapters around -- bring a EU-format power strip if you diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt new file mode 100644 --- /dev/null +++ b/sprintinfo/leysin-winter-2012/people.txt @@ -0,0 +1,58 @@ + +People coming to the Leysin sprint Winter 2011 +================================================== + +People who have a ``?`` in their arrive/depart or accomodation +column are known to be coming but there are no details +available yet from them. + + +==================== ============== ======================= + Name Arrive/Depart Accomodation +==================== ============== ======================= +Armin Rigo private +David Schneider 17/22 ermina +==================== ============== ======================= + + +People on the following list were present at previous sprints: + +==================== ============== ===================== + Name Arrive/Depart Accomodation +==================== ============== ===================== +Antonio Cuni ? ? 
+Michael Foord        ?              ?
+Maciej Fijalkowski   ?              ?
+David Schneider      ?              ?
+Jacob Hallen         ?              ?
+Laura Creighton      ?              ?
+Hakan Ardo           ?              ?
+Carl Friedrich Bolz  ?              ?
+Samuele Pedroni      ?              ?
+Anders Hammarquist   ?              ?
+Christian Tismer     ?              ?
+Niko Matsakis        ?              ?
+Toby Watson          ?              ?
+Paul deGrandis       ?              ?
+Michael Hudson       ?              ?
+Anders Lehmann       ?              ?
+Niklaus Haldimann    ?              ?
+Lene Wagner          ?              ?
+Amaury Forgeot d'Arc ?              ?
+Valentino Volonghi   ?              ?
+Boris Feigin         ?              ?
+Andrew Thompson      ?              ?
+Bert Freudenberg     ?              ?
+Beatrice Duering     ?              ?
+Richard Emslie       ?              ?
+Johan Hahn           ?              ?
+Stephan Diehl        ?              ?
+Alexander Schremmer  ?              ?
+Anders Chrigstroem   ?              ?
+Eric van Riet Paap   ?              ?
+Holger Krekel        ?              ?
+Guido Wesdorp        ?              ?
+Leonardo Santagada   ?              ?
+Alexandre Fayolle    ?              ?
+Sylvain Thénault     ?              ?
+==================== ============== =====================
diff --git a/talk/iwtc11/benchmarks/convolution/convolution.py b/talk/iwtc11/benchmarks/convolution/convolution.py
--- a/talk/iwtc11/benchmarks/convolution/convolution.py
+++ b/talk/iwtc11/benchmarks/convolution/convolution.py
@@ -57,6 +57,19 @@ self[x, y] = data[y][x] return self
+class NumpyArray(Array2D):
+    def __init__(self, w, h):
+        self.width = w
+        self.height = h
+        import numpypy
+        self.data = numpypy.zeros([h, w], 'd')
+
+    def __getitem__(self, (x, y)):
+        return self.data[y, x]
+
+    def __setitem__(self, (x, y), val):
+        self.data[y, x] = val
+
 def _conv3x3(a, b, k): assert k.width == k.height == 3 for y in xrange(1, a.height-1):
@@ -88,6 +101,13 @@ _conv3x3(a, b, Array2D(3,3)) return 'conv3x3(Array2D(%sx%s))' % tuple(args)
+def conv3x3_numpy(args):
+    a = NumpyArray(int(args[0]), int(args[1]))
+    b = NumpyArray(a.width, a.height)
+    for i in range(10):
+        _conv3x3(a, b, NumpyArray(3,3))
+    return 'conv3x3(NumpyArray(%sx%s))' % tuple(args)
+
 def dilate3x3(args): a = Array2D(int(args[0]), int(args[1])) b = Array2D(a.width, a.height)
diff --git a/talk/iwtc11/benchmarks/image/io.py b/talk/iwtc11/benchmarks/image/io.py
--- a/talk/iwtc11/benchmarks/image/io.py
+++ b/talk/iwtc11/benchmarks/image/io.py
@@ -1,4
+1,6 @@ import os, re, array +from subprocess import Popen, PIPE, STDOUT + def mplayer(Image, fn='tv://', options=''): f = os.popen('mplayer -really-quiet -noframedrop ' + options + ' ' @@ -19,18 +21,18 @@ def view(self, img): assert img.typecode == 'B' if not self.width: - self.mplayer = os.popen('mplayer -really-quiet -noframedrop - ' + - '2> /dev/null ', 'w') - self.mplayer.write('YUV4MPEG2 W%d H%d F100:1 Ip A1:1\n' % - (img.width, img.height)) + w, h = img.width, img.height + self.mplayer = Popen(['mplayer', '-', '-benchmark', + '-demuxer', 'rawvideo', + '-rawvideo', 'w=%d:h=%d:format=y8' % (w, h), + '-really-quiet'], + stdin=PIPE, stdout=PIPE, stderr=PIPE) + self.width = img.width self.height = img.height - self.color_data = array.array('B', [127]) * (img.width * img.height / 2) assert self.width == img.width assert self.height == img.height - self.mplayer.write('FRAME\n') - img.tofile(self.mplayer) - self.color_data.tofile(self.mplayer) + img.tofile(self.mplayer.stdin) default_viewer = MplayerViewer() diff --git a/talk/iwtc11/licm.pdf b/talk/iwtc11/licm.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ff2a7bf547f542771702ac86ea8531f8ba16cc28 GIT binary patch [cut] diff --git a/talk/sea2012/abstract.rst b/talk/sea2012/abstract.rst new file mode 100644 --- /dev/null +++ b/talk/sea2012/abstract.rst @@ -0,0 +1,18 @@ +Fast numeric in Python - NumPy and PyPy +======================================= + +Python increasingly is being utilized as a powerful scientific +processing language. It successfully has been used as a glue language +to drive simulations written in C, Fortran or the array +manipulation language provided by the NumPy package. Originally +Python only was used as a glue language because the original Python +implementation was relatively slow. 
With the recent progress in the +PyPy project, which is showing significant performance +improvements with each release, Python is nearing performance comparable +to native C implementations. In this talk I will +describe three stages: how to use it right now, what to expect in the +near future, and our plans (by "we" or "our" we mean the PyPy +developers) to provide a very robust infrastructure for implementing +numerical computations. I will also spend some time exploring how +dynamic compilation can eventually outperform static compilation +and how a high-level language helps accomplish this. From noreply at buildbot.pypy.org Tue Jan 3 14:42:06 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 3 Jan 2012 14:42:06 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: <20120103134206.E36C982B1C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r50997:0529ccad7c00 Date: 2012-01-03 14:41 +0100 http://bitbucket.org/pypy/pypy/changeset/0529ccad7c00/ Log: merge diff too long, truncating to 10000 out of 53476 lines diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,3 +1,4 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 +ff4af8f318821f7f5ca998613a60fca09aa137da release-1.7 diff --git a/lib-python/modified-2.7/ctypes/__init__.py b/lib-python/modified-2.7/ctypes/__init__.py --- a/lib-python/modified-2.7/ctypes/__init__.py +++ b/lib-python/modified-2.7/ctypes/__init__.py @@ -351,7 +351,7 @@ self._FuncPtr = _FuncPtr if handle is None: - self._handle = _ffi.CDLL(name) + self._handle = _ffi.CDLL(name, mode) else: self._handle = handle diff --git a/lib-python/modified-2.7/ctypes/test/test_callbacks.py b/lib-python/modified-2.7/ctypes/test/test_callbacks.py --- a/lib-python/modified-2.7/ctypes/test/test_callbacks.py +++
b/lib-python/modified-2.7/ctypes/test/test_callbacks.py @@ -1,5 +1,6 @@ import unittest from ctypes import * +from ctypes.test import xfail import _ctypes_test class Callbacks(unittest.TestCase): @@ -98,6 +99,7 @@ ## self.check_type(c_char_p, "abc") ## self.check_type(c_char_p, "def") + @xfail def test_pyobject(self): o = () from sys import getrefcount as grc diff --git a/lib-python/modified-2.7/ctypes/test/test_libc.py b/lib-python/modified-2.7/ctypes/test/test_libc.py --- a/lib-python/modified-2.7/ctypes/test/test_libc.py +++ b/lib-python/modified-2.7/ctypes/test/test_libc.py @@ -25,7 +25,10 @@ lib.my_qsort(chars, len(chars)-1, sizeof(c_char), comparefunc(sort)) self.assertEqual(chars.raw, " ,,aaaadmmmnpppsss\x00") - def test_no_more_xfail(self): + def SKIPPED_test_no_more_xfail(self): + # We decided to not explicitly support the whole ctypes-2.7 + # and instead go for a case-by-case, demand-driven approach. + # So this test is skipped instead of failing. import socket import ctypes.test self.assertTrue(not hasattr(ctypes.test, 'xfail'), diff --git a/lib_pypy/_collections.py b/lib_pypy/_collections.py --- a/lib_pypy/_collections.py +++ b/lib_pypy/_collections.py @@ -379,12 +379,14 @@ class defaultdict(dict): def __init__(self, *args, **kwds): - self.default_factory = None - if 'default_factory' in kwds: - self.default_factory = kwds.pop('default_factory') - elif len(args) > 0 and (callable(args[0]) or args[0] is None): - self.default_factory = args[0] + if len(args) > 0: + default_factory = args[0] args = args[1:] + if not callable(default_factory) and default_factory is not None: + raise TypeError("first argument must be callable") + else: + default_factory = None + self.default_factory = default_factory super(defaultdict, self).__init__(*args, **kwds) def __missing__(self, key): @@ -404,7 +406,7 @@ recurse.remove(id(self)) def copy(self): - return type(self)(self, default_factory=self.default_factory) + return type(self)(self.default_factory, self) def 
__copy__(self): return self.copy() diff --git a/lib_pypy/_sha.py b/lib_pypy/_sha.py --- a/lib_pypy/_sha.py +++ b/lib_pypy/_sha.py @@ -1,5 +1,5 @@ #!/usr/bin/env python -# -*- coding: iso-8859-1 +# -*- coding: iso-8859-1 -*- # Note that PyPy contains also a built-in module 'sha' which will hide # this one if compiled in. diff --git a/lib_pypy/_sqlite3.py b/lib_pypy/_sqlite3.py --- a/lib_pypy/_sqlite3.py +++ b/lib_pypy/_sqlite3.py @@ -231,6 +231,11 @@ sqlite.sqlite3_result_text.argtypes = [c_void_p, c_char_p, c_int, c_void_p] sqlite.sqlite3_result_text.restype = None +HAS_LOAD_EXTENSION = hasattr(sqlite, "sqlite3_enable_load_extension") +if HAS_LOAD_EXTENSION: + sqlite.sqlite3_enable_load_extension.argtypes = [c_void_p, c_int] + sqlite.sqlite3_enable_load_extension.restype = c_int + ########################################## # END Wrapped SQLite C API and constants ########################################## @@ -705,6 +710,15 @@ from sqlite3.dump import _iterdump return _iterdump(self) + if HAS_LOAD_EXTENSION: + def enable_load_extension(self, enabled): + self._check_thread() + self._check_closed() + + rc = sqlite.sqlite3_enable_load_extension(self.db, int(enabled)) + if rc != SQLITE_OK: + raise OperationalError("Error enabling load extension") + DML, DQL, DDL = range(3) class Cursor(object): diff --git a/lib_pypy/distributed/socklayer.py b/lib_pypy/distributed/socklayer.py --- a/lib_pypy/distributed/socklayer.py +++ b/lib_pypy/distributed/socklayer.py @@ -2,7 +2,7 @@ import py from socket import socket -XXX needs import adaptation as 'green' is removed from py lib for years +raise ImportError("XXX needs import adaptation as 'green' is removed from py lib for years") from py.impl.green.msgstruct import decodemessage, message from socket import socket, AF_INET, SOCK_STREAM import marshal diff --git a/lib_pypy/itertools.py b/lib_pypy/itertools.py --- a/lib_pypy/itertools.py +++ b/lib_pypy/itertools.py @@ -25,7 +25,7 @@ __all__ = ['chain', 'count', 'cycle', 'dropwhile', 
'groupby', 'ifilter', 'ifilterfalse', 'imap', 'islice', 'izip', 'repeat', 'starmap', - 'takewhile', 'tee'] + 'takewhile', 'tee', 'compress', 'product'] try: from __pypy__ import builtinify except ImportError: builtinify = lambda f: f diff --git a/lib_pypy/pyrepl/unix_console.py b/lib_pypy/pyrepl/unix_console.py --- a/lib_pypy/pyrepl/unix_console.py +++ b/lib_pypy/pyrepl/unix_console.py @@ -412,7 +412,12 @@ e.args[4] == 'unexpected end of data': pass else: - raise + # was: "raise". But it crashes pyrepl, and by extension the + # pypy currently running, in which we are e.g. in the middle + # of some debugging session. Argh. Instead just print an + # error message to stderr and continue running, for now. + self.partial_char = '' + sys.stderr.write('\n%s: %s\n' % (e.__class__.__name__, e)) else: self.partial_char = '' self.event_queue.push(c) diff --git a/lib_pypy/syslog.py b/lib_pypy/syslog.py --- a/lib_pypy/syslog.py +++ b/lib_pypy/syslog.py @@ -38,9 +38,27 @@ _setlogmask.argtypes = (c_int,) _setlogmask.restype = c_int +_S_log_open = False +_S_ident_o = None + +def _get_argv(): + try: + import sys + script = sys.argv[0] + if isinstance(script, str): + return script[script.rfind('/')+1:] or None + except Exception: + pass + return None + @builtinify -def openlog(ident, option, facility): - _openlog(ident, option, facility) +def openlog(ident=None, logoption=0, facility=LOG_USER): + global _S_ident_o, _S_log_open + if ident is None: + ident = _get_argv() + _S_ident_o = c_char_p(ident) # keepalive + _openlog(_S_ident_o, logoption, facility) + _S_log_open = True @builtinify def syslog(arg1, arg2=None): @@ -48,11 +66,18 @@ priority, message = arg1, arg2 else: priority, message = LOG_INFO, arg1 + # if log is not opened, open it now + if not _S_log_open: + openlog() _syslog(priority, "%s", message) @builtinify def closelog(): - _closelog() + global _S_log_open, S_ident_o + if _S_log_open: + _closelog() + _S_log_open = False + _S_ident_o = None @builtinify def 
setlogmask(mask): diff --git a/py/_code/code.py b/py/_code/code.py --- a/py/_code/code.py +++ b/py/_code/code.py @@ -164,6 +164,7 @@ # if something: # assume this causes a NameError # # _this_ lines and the one # below we don't want from entry.getsource() + end = min(end, len(source)) for i in range(self.lineno, end): if source[i].rstrip().endswith(':'): end = i + 1 @@ -307,7 +308,7 @@ self._striptext = 'AssertionError: ' self._excinfo = tup self.type, self.value, tb = self._excinfo - self.typename = self.type.__name__ + self.typename = getattr(self.type, "__name__", "???") self.traceback = py.code.Traceback(tb) def __repr__(self): diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -252,7 +252,26 @@ # unsignedness is considered a rare and contagious disease def union((int1, int2)): - knowntype = rarithmetic.compute_restype(int1.knowntype, int2.knowntype) + if int1.unsigned == int2.unsigned: + knowntype = rarithmetic.compute_restype(int1.knowntype, int2.knowntype) + else: + t1 = int1.knowntype + if t1 is bool: + t1 = int + t2 = int2.knowntype + if t2 is bool: + t2 = int + + if t2 is int: + if int2.nonneg == False: + raise UnionError, "Merging %s and a possibly negative int is not allowed" % t1 + knowntype = t1 + elif t1 is int: + if int1.nonneg == False: + raise UnionError, "Merging %s and a possibly negative int is not allowed" % t2 + knowntype = t2 + else: + raise UnionError, "Merging these types (%s, %s) is not supported" % (t1, t2) return SomeInteger(nonneg=int1.nonneg and int2.nonneg, knowntype=knowntype) diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -180,7 +180,12 @@ if name is None: name = pyobj.func_name if signature is None: - signature = cpython_code_signature(pyobj.func_code) + if hasattr(pyobj, '_generator_next_method_of_'): + from pypy.interpreter.argument 
import Signature + signature = Signature(['entry']) # haaaaaack + defaults = () + else: + signature = cpython_code_signature(pyobj.func_code) if defaults is None: defaults = pyobj.func_defaults self.name = name diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -591,13 +591,11 @@ immutable = True def __init__(self, method): self.method = method - -NUMBER = object() + annotation_to_ll_map = [ (SomeSingleFloat(), lltype.SingleFloat), (s_None, lltype.Void), # also matches SomeImpossibleValue() (s_Bool, lltype.Bool), - (SomeInteger(knowntype=r_ulonglong), NUMBER), (SomeFloat(), lltype.Float), (SomeLongFloat(), lltype.LongFloat), (SomeChar(), lltype.Char), @@ -623,10 +621,11 @@ return lltype.Ptr(p.PARENTTYPE) if isinstance(s_val, SomePtr): return s_val.ll_ptrtype + if type(s_val) is SomeInteger: + return lltype.build_number(None, s_val.knowntype) + for witness, T in annotation_to_ll_map: if witness.contains(s_val): - if T is NUMBER: - return lltype.build_number(None, s_val.knowntype) return T if info is None: info = '' @@ -635,7 +634,7 @@ raise ValueError("%sshould return a low-level type,\ngot instead %r" % ( info, s_val)) -ll_to_annotation_map = dict([(ll, ann) for ann, ll in annotation_to_ll_map if ll is not NUMBER]) +ll_to_annotation_map = dict([(ll, ann) for ann, ll in annotation_to_ll_map]) def lltype_to_annotation(T): try: diff --git a/pypy/annotation/specialize.py b/pypy/annotation/specialize.py --- a/pypy/annotation/specialize.py +++ b/pypy/annotation/specialize.py @@ -36,9 +36,7 @@ newtup = SpaceOperation('newtuple', starargs, argscopy[-1]) newstartblock.operations.append(newtup) newstartblock.closeblock(Link(argscopy, graph.startblock)) - graph.startblock.isstartblock = False graph.startblock = newstartblock - newstartblock.isstartblock = True argnames = argnames + ['.star%d' % i for i in range(nb_extra_args)] graph.signature = Signature(argnames) # note that we can mostly ignore 
defaults: if nb_extra_args > 0, diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -856,6 +856,46 @@ py.test.raises(Exception, a.build_types, f, []) # if you want to get a r_uint, you have to be explicit about it + def test_add_different_ints(self): + def f(a, b): + return a + b + a = self.RPythonAnnotator() + py.test.raises(Exception, a.build_types, f, [r_uint, int]) + + def test_merge_different_ints(self): + def f(a, b): + if a: + c = a + else: + c = b + return c + a = self.RPythonAnnotator() + py.test.raises(Exception, a.build_types, f, [r_uint, int]) + + def test_merge_ruint_zero(self): + def f(a): + if a: + c = a + else: + c = 0 + return c + a = self.RPythonAnnotator() + s = a.build_types(f, [r_uint]) + assert s == annmodel.SomeInteger(nonneg = True, unsigned = True) + + def test_merge_ruint_nonneg_signed(self): + def f(a, b): + if a: + c = a + else: + assert b >= 0 + c = b + return c + a = self.RPythonAnnotator() + s = a.build_types(f, [r_uint, int]) + assert s == annmodel.SomeInteger(nonneg = True, unsigned = True) + + def test_prebuilt_long_that_is_not_too_long(self): small_constant = 12L def f(): @@ -3029,7 +3069,7 @@ if g(x, y): g(x, r_uint(y)) a = self.RPythonAnnotator() - a.build_types(f, [int, int]) + py.test.raises(Exception, a.build_types, f, [int, int]) def test_compare_with_zero(self): def g(): diff --git a/pypy/bin/checkmodule.py b/pypy/bin/checkmodule.py --- a/pypy/bin/checkmodule.py +++ b/pypy/bin/checkmodule.py @@ -1,43 +1,45 @@ #! /usr/bin/env python """ -Usage: checkmodule.py [-b backend] +Usage: checkmodule.py -Compiles the PyPy extension module from pypy/module// -into a fake program which does nothing. Useful for testing whether a -modules compiles without doing a full translation. Default backend is cli. 
- -WARNING: this is still incomplete: there are chances that the -compilation fails with strange errors not due to the module. If a -module is known to compile during a translation but don't pass -checkmodule.py, please report the bug (or, better, correct it :-). +Check annotation and rtyping of the PyPy extension module from +pypy/module//. Useful for testing whether a +modules compiles without doing a full translation. """ import autopath -import sys +import sys, os from pypy.objspace.fake.checkmodule import checkmodule def main(argv): - try: - assert len(argv) in (2, 4) - if len(argv) == 2: - backend = 'cli' - modname = argv[1] - if modname in ('-h', '--help'): - print >> sys.stderr, __doc__ - sys.exit(0) - if modname.startswith('-'): - print >> sys.stderr, "Bad command line" - print >> sys.stderr, __doc__ - sys.exit(1) - else: - _, b, backend, modname = argv - assert b == '-b' - except AssertionError: + if len(argv) != 2: print >> sys.stderr, __doc__ sys.exit(2) + modname = argv[1] + if modname in ('-h', '--help'): + print >> sys.stderr, __doc__ + sys.exit(0) + if modname.startswith('-'): + print >> sys.stderr, "Bad command line" + print >> sys.stderr, __doc__ + sys.exit(1) + if os.path.sep in modname: + if os.path.basename(modname) == '': + modname = os.path.dirname(modname) + if os.path.basename(os.path.dirname(modname)) != 'module': + print >> sys.stderr, "Must give '../module/xxx', or just 'xxx'." + sys.exit(1) + modname = os.path.basename(modname) + try: + checkmodule(modname) + except Exception, e: + import traceback, pdb + traceback.print_exc() + pdb.post_mortem(sys.exc_info()[2]) + return 1 else: - checkmodule(modname, backend, interactive=True) - print 'Module compiled succesfully' + print 'Passed.' 
+ return 0 if __name__ == '__main__': - main(sys.argv) + sys.exit(main(sys.argv)) diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -252,6 +252,10 @@ "use small tuples", default=False), + BoolOption("withspecialisedtuple", + "use specialised tuples", + default=False), + BoolOption("withrope", "use ropes as the string implementation", default=False, requires=[("objspace.std.withstrslice", False), @@ -281,6 +285,9 @@ "actually create the full list until the resulting " "list is mutated", default=False), + BoolOption("withliststrategies", + "enable optimized ways to store lists of primitives ", + default=True), BoolOption("withtypeversion", "version type objects when changing them", @@ -362,6 +369,7 @@ config.objspace.std.suggest(optimized_list_getitem=True) config.objspace.std.suggest(getattributeshortcut=True) config.objspace.std.suggest(newshortcut=True) + config.objspace.std.suggest(withspecialisedtuple=True) #if not IS_64_BITS: # config.objspace.std.suggest(withsmalllong=True) diff --git a/pypy/config/test/test_translationoption.py b/pypy/config/test/test_translationoption.py new file mode 100644 --- /dev/null +++ b/pypy/config/test/test_translationoption.py @@ -0,0 +1,10 @@ +import py +from pypy.config.translationoption import get_combined_translation_config +from pypy.config.translationoption import set_opt_level +from pypy.config.config import ConflictConfigError + + +def test_no_gcrootfinder_with_boehm(): + config = get_combined_translation_config() + config.translation.gcrootfinder = "shadowstack" + py.test.raises(ConflictConfigError, set_opt_level, config, '0') diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -70,8 +70,8 @@ "statistics": [("translation.gctransformer", "framework")], "generation": [("translation.gctransformer", "framework")], "hybrid": 
[("translation.gctransformer", "framework")], - "boehm": [("translation.gctransformer", "boehm"), - ("translation.continuation", False)], # breaks + "boehm": [("translation.continuation", False), # breaks + ("translation.gctransformer", "boehm")], "markcompact": [("translation.gctransformer", "framework")], "minimark": [("translation.gctransformer", "framework")], }, @@ -399,6 +399,10 @@ # make_sure_not_resized often relies on it, so we always enable them config.translation.suggest(list_comprehension_operations=True) + # finally, make the choice of the gc definitive. This will fail + # if we have specified strange inconsistent settings. + config.translation.gc = config.translation.gc + # ---------------------------------------------------------------- def set_platform(config): diff --git a/pypy/conftest.py b/pypy/conftest.py --- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -496,6 +496,17 @@ def setup(self): super(AppClassCollector, self).setup() cls = self.obj + # + # + for name in dir(cls): + if name.startswith('test_'): + func = getattr(cls, name, None) + code = getattr(func, 'func_code', None) + if code and code.co_flags & 32: + raise AssertionError("unsupported: %r is a generator " + "app-level test method" % (name,)) + # + # space = cls.space clsname = cls.__name__ if self.config.option.runappdirect: diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -270,7 +270,12 @@ - *slicing*: the slice start must be within bounds. The stop doesn't need to, but it must not be smaller than the start. All negative indexes are disallowed, except for - the [:-1] special case. No step. + the [:-1] special case. No step. Slice deletion follows the same rules. + + - *slice assignment*: + only supports ``lst[x:y] = sublist``, if ``len(sublist) == y - x``. + In other words, slice assignment cannot change the total length of the list, + but just replace items. 
- *other operators*: ``+``, ``+=``, ``in``, ``*``, ``*=``, ``==``, ``!=`` work as expected. diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.6' +version = '1.7' # The full version, including alpha/beta/rc tags. -release = '1.6' +release = '1.7' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/pypy/doc/config/objspace.std.withliststrategies.txt b/pypy/doc/config/objspace.std.withliststrategies.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.std.withliststrategies.txt @@ -0,0 +1,2 @@ +Enable list strategies: Use specialized representations for lists of primitive +objects, such as ints. diff --git a/pypy/doc/config/objspace.std.withspecialisedtuple.txt b/pypy/doc/config/objspace.std.withspecialisedtuple.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.std.withspecialisedtuple.txt @@ -0,0 +1,3 @@ +Use "specialized tuples", a custom implementation for some common kinds +of tuples. Currently limited to tuples of length 2, in three variants: +(int, int), (float, float), and a generic (object, object). diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -262,6 +262,26 @@ documented as such (as e.g. for hasattr()), in most cases PyPy lets the exception propagate instead. +Object Identity of Primitive Values, ``is`` and ``id`` +------------------------------------------------------- + +Object identity of primitive values works by value equality, not by identity of +the wrapper. This means that ``x + 1 is x + 1`` is always true, for arbitrary +integers ``x``. The rule applies for the following types: + + - ``int`` + + - ``float`` + + - ``long`` + + - ``complex`` + +This change requires some changes to ``id`` as well. 
``id`` fulfills the +following condition: ``x is y <=> id(x) == id(y)``. Therefore ``id`` of the +above types will return a value that is computed from the argument, and can +thus be larger than ``sys.maxint`` (i.e. it can be an arbitrary long). + Miscellaneous ------------- @@ -284,14 +304,14 @@ never a dictionary as it sometimes is in CPython. Assigning to ``__builtins__`` has no effect. -* Do not compare immutable objects with ``is``. For example on CPython - it is true that ``x is 0`` works, i.e. does the same as ``type(x) is - int and x == 0``, but it is so by accident. If you do instead - ``x is 1000``, then it stops working, because 1000 is too large and - doesn't come from the internal cache. In PyPy it fails to work in - both cases, because we have no need for a cache at all. +* directly calling the internal magic methods of a few built-in types + with invalid arguments may have a slightly different result. For + example, ``[].__add__(None)`` and ``(2).__add__(None)`` both return + ``NotImplemented`` on PyPy; on CPython, only the latter does, and the + former raises ``TypeError``. (Of course, ``[]+None`` and ``2+None`` + both raise ``TypeError`` everywhere.) This difference is an + implementation detail that shows up because of internal C-level slots + that PyPy does not have. -* Also, object identity of immutable keys in dictionaries is not necessarily - preserved. .. include:: _ref.txt diff --git a/pypy/doc/faq.rst b/pypy/doc/faq.rst --- a/pypy/doc/faq.rst +++ b/pypy/doc/faq.rst @@ -112,10 +112,32 @@ You might be interested in our `benchmarking site`_ and our `jit documentation`_. +Note that the JIT has a very high warm-up cost, meaning that the +programs are slow at the beginning. If you want to compare the timings +with CPython, even relatively simple programs need to run *at least* one +second, preferably at least a few seconds. Large, complicated programs +need even more time to warm up the JIT. + .. _`benchmarking site`: http://speed.pypy.org ..
_`jit documentation`: jit/index.html +--------------------------------------------------------------- +Couldn't the JIT dump and reload already-compiled machine code? +--------------------------------------------------------------- + +No, we found no way of doing that. The JIT generates machine code +containing a large number of constant addresses --- constant at the time +the machine code is written. The vast majority are probably not +constants that you find in the executable with a nice link name. E.g. +the addresses of Python classes are used all the time, but Python +classes don't come statically from the executable; they are created anew +every time you restart your program. This makes saving and reloading +machine code completely impossible without some very advanced way of +mapping addresses in the old (now-dead) process to addresses in the new +process, including checking that all the previous assumptions about the +(now-dead) object are still true about the new object. + .. _`prolog and javascript`: diff --git a/pypy/doc/how-to-release.rst b/pypy/doc/how-to-release.rst --- a/pypy/doc/how-to-release.rst +++ b/pypy/doc/how-to-release.rst @@ -1,6 +1,3 @@ -.. include:: needswork.txt - -.. needs work, it talks about svn. also, it is not really user documentation Making a PyPy Release ======================= @@ -12,11 +9,8 @@ forgetting things. A set of todo files may also work. Check and prioritize all issues for the release, postpone some if necessary, -create new issues also as necessary. A meeting (or meetings) should be -organized to decide what things are priorities, should go in and work for -the release. - -An important thing is to get the documentation into an up-to-date state! +create new issues also as necessary. An important thing is to get +the documentation into an up-to-date state!
Release Steps ---------------- diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst --- a/pypy/doc/index.rst +++ b/pypy/doc/index.rst @@ -15,7 +15,7 @@ * `FAQ`_: some frequently asked questions. -* `Release 1.6`_: the latest official release +* `Release 1.7`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.6`: http://pypy.org/download.html +.. _`Release 1.7`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.6`__. +instead of the latest release, which is `1.7`__. -.. __: release-1.6.0.html +.. __: release-1.7.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix diff --git a/pypy/doc/project-ideas.rst b/pypy/doc/project-ideas.rst --- a/pypy/doc/project-ideas.rst +++ b/pypy/doc/project-ideas.rst @@ -23,17 +23,20 @@ PyPy's implementation of the Python ``long`` type is slower than CPython's. Find out why and optimize them. +Make bytearray type fast +------------------------ + +PyPy's bytearray type is very inefficient. It would be an interesting +task to look into possible optimizations on this. + Numpy improvements ------------------ -This is more of a project-container than a single project. Possible ideas: +NumPy is rapidly progressing in PyPy, so feel free to come to IRC and +ask for a proposed topic. A not necessarily up-to-date `list of topics`_ +is also available.
-* experiment with auto-vectorization using SSE or implement vectorization - without automatically detecting it for array operations. - -* improve numpy, for example implement memory views. - -* interface with fortran/C libraries. +.. _`list of topics`: https://bitbucket.org/pypy/extradoc/src/extradoc/planning/micronumpy.txt Improving the jitviewer ------------------------ diff --git a/pypy/doc/release-1.7.0.rst b/pypy/doc/release-1.7.0.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/release-1.7.0.rst @@ -0,0 +1,94 @@ +================================== +PyPy 1.7 - widening the sweet spot +================================== + +We're pleased to announce the 1.7 release of PyPy. As has become a habit, this +release brings a lot of bugfixes and performance improvements over the 1.6 +release. However, unlike the previous releases, the focus has been on widening +the "sweet spot" of PyPy. That is, the classes of Python code that PyPy can +greatly speed up have been vastly extended with this release. You can download +the 1.7 release here: + + http://pypy.org/download.html + What is PyPy? + ============= + +PyPy is a very compliant Python interpreter, almost a drop-in replacement for +CPython 2.7. It's fast (`pypy 1.7 and cpython 2.7.1`_ performance comparison) +due to its integrated tracing JIT compiler. + +This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or +Windows 32. Windows 64 work is ongoing, but not yet natively supported. + +The main topic of this release is widening the range of code which PyPy +can greatly speed up. On average on +our benchmark suite, PyPy 1.7 is around **30%** faster than PyPy 1.6 and up +to **20 times** faster on some benchmarks. + +.. _`pypy 1.7 and cpython 2.7.1`: http://speed.pypy.org + + +Highlights +========== + +* Numerous performance improvements. There are too many examples of Python + constructs that now behave faster to list them all. + +* Bugfixes and compatibility fixes with CPython. + +* Windows fixes.
+ +* PyPy now comes with stackless features enabled by default. However, + any loop using stackless features will interrupt the JIT for now, so no real + performance improvement for stackless-based programs. Contact pypy-dev for + information on how to help remove this restriction. + +* The NumPy effort in PyPy was renamed numpypy. In order to try using it, simply + write:: + + import numpypy as numpy + + at the beginning of your program. There has been huge progress on numpy in PyPy + since 1.6, the main feature being the implementation of dtypes. + +* The JSON encoder (but not decoder) has been replaced with a new one. This one + is written in pure Python, but is known to outperform CPython's C extension + up to **2 times** in some cases. It's about **20 times** faster than + the one that we had in 1.6. + +* The memory footprint of some of our RPython modules has been drastically + improved. This should impact any applications using, for example, cryptography, + like tornado. + +* There was some progress in exposing even more CPython C API via cpyext. + +Things that didn't make it, expect in 1.8 soon +============================================== + +There is ongoing work which, while it didn't make it into the release, is +probably worth mentioning here. This is what you should probably expect in +1.8 some time soon: + +* Specialized list implementation. There is a branch that implements lists of + integers/floats/strings as compactly as array.array. This should drastically + improve the performance/memory impact of some applications + +* The NumPy effort is progressing forward, with multi-dimensional arrays coming + soon. + +* There are two brand new JIT assembler backends, notably for the PowerPC and + ARM processors. + +Fundraising +=========== + +It's maybe worth mentioning that we're running fundraising campaigns for +the NumPy effort in PyPy and for Python 3 in PyPy. In case you want to see any +of those happen faster, we urge you to donate to the `numpy proposal`_ or +`py3k proposal`_.
In case you want PyPy to progress, but you trust us with +the general direction, you can always donate to the `general pot`_. + +.. _`numpy proposal`: http://pypy.org/numpydonate.html +.. _`py3k proposal`: http://pypy.org/py3donate.html +.. _`general pot`: http://pypy.org diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -51,6 +51,24 @@ space.setattr(self, w_name, space.getitem(w_state, w_name)) + def missing_field(self, space, required, host): + "Find which required field is missing." + state = self.initialization_state + for i in range(len(required)): + if (state >> i) & 1: + continue # field is present + missing = required[i] + if missing is None: + continue # field is optional + w_obj = self.getdictvalue(space, missing) + if w_obj is None: + err = "required field \"%s\" missing from %s" + raise operationerrfmt(space.w_TypeError, err, missing, host) + else: + err = "incorrect type for field \"%s\" in %s" + raise operationerrfmt(space.w_TypeError, err, missing, host) + raise AssertionError("should not reach here") + class NodeVisitorNotImplemented(Exception): pass @@ -94,17 +112,6 @@ ) -def missing_field(space, state, required, host): - "Find which required field is missing." 
-    for i in range(len(required)):
-        if not (state >> i) & 1:
-            missing = required[i]
-            if missing is not None:
-                err = "required field \"%s\" missing from %s"
-                err = err % (missing, host)
-                w_err = space.wrap(err)
-                raise OperationError(space.w_TypeError, w_err)
-    raise AssertionError("should not reach here")
 
 
 class mod(AST):
@@ -112,7 +119,6 @@
 
 class Module(mod):
 
-
     def __init__(self, body):
         self.body = body
         self.w_body = None
@@ -128,7 +134,7 @@
 
     def sync_app_attrs(self, space):
         if (self.initialization_state & ~0) ^ 1:
-            missing_field(space, self.initialization_state, ['body'], 'Module')
+            self.missing_field(space, ['body'], 'Module')
         else:
             pass
         w_list = self.w_body
@@ -145,7 +151,6 @@
 
 class Interactive(mod):
 
-
     def __init__(self, body):
         self.body = body
         self.w_body = None
@@ -161,7 +166,7 @@
 
     def sync_app_attrs(self, space):
         if (self.initialization_state & ~0) ^ 1:
-            missing_field(space, self.initialization_state, ['body'], 'Interactive')
+            self.missing_field(space, ['body'], 'Interactive')
        else:
             pass
         w_list = self.w_body
@@ -178,7 +183,6 @@
 
 class Expression(mod):
 
-
     def __init__(self, body):
         self.body = body
         self.initialization_state = 1
@@ -192,7 +196,7 @@
 
     def sync_app_attrs(self, space):
         if (self.initialization_state & ~0) ^ 1:
-            missing_field(space, self.initialization_state, ['body'], 'Expression')
+            self.missing_field(space, ['body'], 'Expression')
         else:
             pass
         self.body.sync_app_attrs(space)
@@ -200,7 +204,6 @@
 
 class Suite(mod):
 
-
     def __init__(self, body):
         self.body = body
         self.w_body = None
@@ -216,7 +219,7 @@
 
     def sync_app_attrs(self, space):
         if (self.initialization_state & ~0) ^ 1:
-            missing_field(space, self.initialization_state, ['body'], 'Suite')
+            self.missing_field(space, ['body'], 'Suite')
         else:
             pass
         w_list = self.w_body
@@ -232,15 +235,13 @@
 
 class stmt(AST):
+
     def __init__(self, lineno, col_offset):
         self.lineno = lineno
         self.col_offset = col_offset
 
 
 class FunctionDef(stmt):
 
-    _lineno_mask = 16
-    _col_offset_mask = 32
-
     def __init__(self, name, args, body,
decorator_list, lineno, col_offset): self.name = name self.args = args @@ -264,7 +265,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['name', 'args', 'body', 'decorator_list', 'lineno', 'col_offset'], 'FunctionDef') + self.missing_field(space, ['lineno', 'col_offset', 'name', 'args', 'body', 'decorator_list'], 'FunctionDef') else: pass self.args.sync_app_attrs(space) @@ -292,9 +293,6 @@ class ClassDef(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def __init__(self, name, bases, body, decorator_list, lineno, col_offset): self.name = name self.bases = bases @@ -320,7 +318,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['name', 'bases', 'body', 'decorator_list', 'lineno', 'col_offset'], 'ClassDef') + self.missing_field(space, ['lineno', 'col_offset', 'name', 'bases', 'body', 'decorator_list'], 'ClassDef') else: pass w_list = self.w_bases @@ -357,9 +355,6 @@ class Return(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value stmt.__init__(self, lineno, col_offset) @@ -374,10 +369,10 @@ return visitor.visit_Return(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 6: - missing_field(space, self.initialization_state, [None, 'lineno', 'col_offset'], 'Return') + if (self.initialization_state & ~4) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None], 'Return') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.value = None if self.value: self.value.sync_app_attrs(space) @@ -385,9 +380,6 @@ class Delete(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, targets, lineno, col_offset): self.targets = targets self.w_targets = None @@ -404,7 +396,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, 
self.initialization_state, ['targets', 'lineno', 'col_offset'], 'Delete') + self.missing_field(space, ['lineno', 'col_offset', 'targets'], 'Delete') else: pass w_list = self.w_targets @@ -421,9 +413,6 @@ class Assign(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, targets, value, lineno, col_offset): self.targets = targets self.w_targets = None @@ -442,7 +431,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['targets', 'value', 'lineno', 'col_offset'], 'Assign') + self.missing_field(space, ['lineno', 'col_offset', 'targets', 'value'], 'Assign') else: pass w_list = self.w_targets @@ -460,9 +449,6 @@ class AugAssign(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, target, op, value, lineno, col_offset): self.target = target self.op = op @@ -480,7 +466,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['target', 'op', 'value', 'lineno', 'col_offset'], 'AugAssign') + self.missing_field(space, ['lineno', 'col_offset', 'target', 'op', 'value'], 'AugAssign') else: pass self.target.sync_app_attrs(space) @@ -489,9 +475,6 @@ class Print(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, dest, values, nl, lineno, col_offset): self.dest = dest self.values = values @@ -511,10 +494,10 @@ return visitor.visit_Print(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 30: - missing_field(space, self.initialization_state, [None, 'values', 'nl', 'lineno', 'col_offset'], 'Print') + if (self.initialization_state & ~4) ^ 27: + self.missing_field(space, ['lineno', 'col_offset', None, 'values', 'nl'], 'Print') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.dest = None if self.dest: self.dest.sync_app_attrs(space) @@ -532,9 +515,6 @@ class For(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def 
__init__(self, target, iter, body, orelse, lineno, col_offset): self.target = target self.iter = iter @@ -559,7 +539,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['target', 'iter', 'body', 'orelse', 'lineno', 'col_offset'], 'For') + self.missing_field(space, ['lineno', 'col_offset', 'target', 'iter', 'body', 'orelse'], 'For') else: pass self.target.sync_app_attrs(space) @@ -588,9 +568,6 @@ class While(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -613,7 +590,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'While') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'While') else: pass self.test.sync_app_attrs(space) @@ -641,9 +618,6 @@ class If(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -666,7 +640,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'If') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'If') else: pass self.test.sync_app_attrs(space) @@ -694,9 +668,6 @@ class With(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, context_expr, optional_vars, body, lineno, col_offset): self.context_expr = context_expr self.optional_vars = optional_vars @@ -717,10 +688,10 @@ return visitor.visit_With(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~2) ^ 29: - missing_field(space, self.initialization_state, ['context_expr', None, 'body', 'lineno', 'col_offset'], 'With') + if (self.initialization_state & ~8) ^ 23: + 
self.missing_field(space, ['lineno', 'col_offset', 'context_expr', None, 'body'], 'With') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.optional_vars = None self.context_expr.sync_app_attrs(space) if self.optional_vars: @@ -739,9 +710,6 @@ class Raise(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, type, inst, tback, lineno, col_offset): self.type = type self.inst = inst @@ -762,14 +730,14 @@ return visitor.visit_Raise(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~7) ^ 24: - missing_field(space, self.initialization_state, [None, None, None, 'lineno', 'col_offset'], 'Raise') + if (self.initialization_state & ~28) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None, None, None], 'Raise') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.type = None - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.inst = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.tback = None if self.type: self.type.sync_app_attrs(space) @@ -781,9 +749,6 @@ class TryExcept(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, body, handlers, orelse, lineno, col_offset): self.body = body self.w_body = None @@ -808,7 +773,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['body', 'handlers', 'orelse', 'lineno', 'col_offset'], 'TryExcept') + self.missing_field(space, ['lineno', 'col_offset', 'body', 'handlers', 'orelse'], 'TryExcept') else: pass w_list = self.w_body @@ -845,9 +810,6 @@ class TryFinally(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, body, finalbody, lineno, col_offset): self.body = body self.w_body = None @@ -868,7 +830,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, 
self.initialization_state, ['body', 'finalbody', 'lineno', 'col_offset'], 'TryFinally') + self.missing_field(space, ['lineno', 'col_offset', 'body', 'finalbody'], 'TryFinally') else: pass w_list = self.w_body @@ -895,9 +857,6 @@ class Assert(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, test, msg, lineno, col_offset): self.test = test self.msg = msg @@ -914,10 +873,10 @@ return visitor.visit_Assert(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~2) ^ 13: - missing_field(space, self.initialization_state, ['test', None, 'lineno', 'col_offset'], 'Assert') + if (self.initialization_state & ~8) ^ 7: + self.missing_field(space, ['lineno', 'col_offset', 'test', None], 'Assert') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.msg = None self.test.sync_app_attrs(space) if self.msg: @@ -926,9 +885,6 @@ class Import(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, names, lineno, col_offset): self.names = names self.w_names = None @@ -945,7 +901,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['names', 'lineno', 'col_offset'], 'Import') + self.missing_field(space, ['lineno', 'col_offset', 'names'], 'Import') else: pass w_list = self.w_names @@ -962,9 +918,6 @@ class ImportFrom(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, module, names, level, lineno, col_offset): self.module = module self.names = names @@ -982,12 +935,12 @@ return visitor.visit_ImportFrom(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~5) ^ 26: - missing_field(space, self.initialization_state, [None, 'names', None, 'lineno', 'col_offset'], 'ImportFrom') + if (self.initialization_state & ~20) ^ 11: + self.missing_field(space, ['lineno', 'col_offset', None, 'names', None], 'ImportFrom') else: - if not self.initialization_state & 1: + if not self.initialization_state & 
4: self.module = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.level = 0 w_list = self.w_names if w_list is not None: @@ -1003,9 +956,6 @@ class Exec(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, body, globals, locals, lineno, col_offset): self.body = body self.globals = globals @@ -1025,12 +975,12 @@ return visitor.visit_Exec(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~6) ^ 25: - missing_field(space, self.initialization_state, ['body', None, None, 'lineno', 'col_offset'], 'Exec') + if (self.initialization_state & ~24) ^ 7: + self.missing_field(space, ['lineno', 'col_offset', 'body', None, None], 'Exec') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.globals = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.locals = None self.body.sync_app_attrs(space) if self.globals: @@ -1041,9 +991,6 @@ class Global(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, names, lineno, col_offset): self.names = names self.w_names = None @@ -1058,7 +1005,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['names', 'lineno', 'col_offset'], 'Global') + self.missing_field(space, ['lineno', 'col_offset', 'names'], 'Global') else: pass w_list = self.w_names @@ -1072,9 +1019,6 @@ class Expr(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value stmt.__init__(self, lineno, col_offset) @@ -1089,7 +1033,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Expr') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Expr') else: pass self.value.sync_app_attrs(space) @@ -1097,9 +1041,6 @@ class Pass(stmt): - _lineno_mask = 1 - 
_col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1112,16 +1053,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Pass') + self.missing_field(space, ['lineno', 'col_offset'], 'Pass') else: pass class Break(stmt): - _lineno_mask = 1 - _col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1134,16 +1072,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Break') + self.missing_field(space, ['lineno', 'col_offset'], 'Break') else: pass class Continue(stmt): - _lineno_mask = 1 - _col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1156,21 +1091,19 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Continue') + self.missing_field(space, ['lineno', 'col_offset'], 'Continue') else: pass class expr(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class BoolOp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, op, values, lineno, col_offset): self.op = op self.values = values @@ -1188,7 +1121,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['op', 'values', 'lineno', 'col_offset'], 'BoolOp') + self.missing_field(space, ['lineno', 'col_offset', 'op', 'values'], 'BoolOp') else: pass w_list = self.w_values @@ -1205,9 +1138,6 @@ class BinOp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, left, op, right, lineno, col_offset): self.left = left self.op = 
op @@ -1225,7 +1155,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['left', 'op', 'right', 'lineno', 'col_offset'], 'BinOp') + self.missing_field(space, ['lineno', 'col_offset', 'left', 'op', 'right'], 'BinOp') else: pass self.left.sync_app_attrs(space) @@ -1234,9 +1164,6 @@ class UnaryOp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, op, operand, lineno, col_offset): self.op = op self.operand = operand @@ -1252,7 +1179,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['op', 'operand', 'lineno', 'col_offset'], 'UnaryOp') + self.missing_field(space, ['lineno', 'col_offset', 'op', 'operand'], 'UnaryOp') else: pass self.operand.sync_app_attrs(space) @@ -1260,9 +1187,6 @@ class Lambda(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, args, body, lineno, col_offset): self.args = args self.body = body @@ -1279,7 +1203,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['args', 'body', 'lineno', 'col_offset'], 'Lambda') + self.missing_field(space, ['lineno', 'col_offset', 'args', 'body'], 'Lambda') else: pass self.args.sync_app_attrs(space) @@ -1288,9 +1212,6 @@ class IfExp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -1309,7 +1230,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'IfExp') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'IfExp') else: pass self.test.sync_app_attrs(space) @@ -1319,9 +1240,6 @@ class Dict(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, keys, values, lineno, col_offset): 
self.keys = keys self.w_keys = None @@ -1342,7 +1260,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['keys', 'values', 'lineno', 'col_offset'], 'Dict') + self.missing_field(space, ['lineno', 'col_offset', 'keys', 'values'], 'Dict') else: pass w_list = self.w_keys @@ -1369,9 +1287,6 @@ class Set(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, elts, lineno, col_offset): self.elts = elts self.w_elts = None @@ -1388,7 +1303,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['elts', 'lineno', 'col_offset'], 'Set') + self.missing_field(space, ['lineno', 'col_offset', 'elts'], 'Set') else: pass w_list = self.w_elts @@ -1405,9 +1320,6 @@ class ListComp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1426,7 +1338,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'ListComp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'ListComp') else: pass self.elt.sync_app_attrs(space) @@ -1444,9 +1356,6 @@ class SetComp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1465,7 +1374,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'SetComp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'SetComp') else: pass self.elt.sync_app_attrs(space) @@ -1483,9 +1392,6 @@ class DictComp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, key, value, generators, lineno, col_offset): 
self.key = key self.value = value @@ -1506,7 +1412,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['key', 'value', 'generators', 'lineno', 'col_offset'], 'DictComp') + self.missing_field(space, ['lineno', 'col_offset', 'key', 'value', 'generators'], 'DictComp') else: pass self.key.sync_app_attrs(space) @@ -1525,9 +1431,6 @@ class GeneratorExp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1546,7 +1449,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'GeneratorExp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'GeneratorExp') else: pass self.elt.sync_app_attrs(space) @@ -1564,9 +1467,6 @@ class Yield(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1581,10 +1481,10 @@ return visitor.visit_Yield(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 6: - missing_field(space, self.initialization_state, [None, 'lineno', 'col_offset'], 'Yield') + if (self.initialization_state & ~4) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None], 'Yield') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.value = None if self.value: self.value.sync_app_attrs(space) @@ -1592,9 +1492,6 @@ class Compare(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, left, ops, comparators, lineno, col_offset): self.left = left self.ops = ops @@ -1615,7 +1512,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['left', 'ops', 'comparators', 'lineno', 'col_offset'], 'Compare') 
+ self.missing_field(space, ['lineno', 'col_offset', 'left', 'ops', 'comparators'], 'Compare') else: pass self.left.sync_app_attrs(space) @@ -1640,9 +1537,6 @@ class Call(expr): - _lineno_mask = 32 - _col_offset_mask = 64 - def __init__(self, func, args, keywords, starargs, kwargs, lineno, col_offset): self.func = func self.args = args @@ -1670,12 +1564,12 @@ return visitor.visit_Call(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~24) ^ 103: - missing_field(space, self.initialization_state, ['func', 'args', 'keywords', None, None, 'lineno', 'col_offset'], 'Call') + if (self.initialization_state & ~96) ^ 31: + self.missing_field(space, ['lineno', 'col_offset', 'func', 'args', 'keywords', None, None], 'Call') else: - if not self.initialization_state & 8: + if not self.initialization_state & 32: self.starargs = None - if not self.initialization_state & 16: + if not self.initialization_state & 64: self.kwargs = None self.func.sync_app_attrs(space) w_list = self.w_args @@ -1706,9 +1600,6 @@ class Repr(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1723,7 +1614,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Repr') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Repr') else: pass self.value.sync_app_attrs(space) @@ -1731,9 +1622,6 @@ class Num(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, n, lineno, col_offset): self.n = n expr.__init__(self, lineno, col_offset) @@ -1747,16 +1635,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['n', 'lineno', 'col_offset'], 'Num') + self.missing_field(space, ['lineno', 'col_offset', 'n'], 'Num') else: pass class Str(expr): - _lineno_mask = 2 - _col_offset_mask = 4 
- def __init__(self, s, lineno, col_offset): self.s = s expr.__init__(self, lineno, col_offset) @@ -1770,16 +1655,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['s', 'lineno', 'col_offset'], 'Str') + self.missing_field(space, ['lineno', 'col_offset', 's'], 'Str') else: pass class Attribute(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, value, attr, ctx, lineno, col_offset): self.value = value self.attr = attr @@ -1796,7 +1678,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['value', 'attr', 'ctx', 'lineno', 'col_offset'], 'Attribute') + self.missing_field(space, ['lineno', 'col_offset', 'value', 'attr', 'ctx'], 'Attribute') else: pass self.value.sync_app_attrs(space) @@ -1804,9 +1686,6 @@ class Subscript(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, value, slice, ctx, lineno, col_offset): self.value = value self.slice = slice @@ -1824,7 +1703,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['value', 'slice', 'ctx', 'lineno', 'col_offset'], 'Subscript') + self.missing_field(space, ['lineno', 'col_offset', 'value', 'slice', 'ctx'], 'Subscript') else: pass self.value.sync_app_attrs(space) @@ -1833,9 +1712,6 @@ class Name(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, id, ctx, lineno, col_offset): self.id = id self.ctx = ctx @@ -1850,16 +1726,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['id', 'ctx', 'lineno', 'col_offset'], 'Name') + self.missing_field(space, ['lineno', 'col_offset', 'id', 'ctx'], 'Name') else: pass class List(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elts, ctx, lineno, col_offset): self.elts = elts self.w_elts = None @@ 
-1877,7 +1750,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elts', 'ctx', 'lineno', 'col_offset'], 'List') + self.missing_field(space, ['lineno', 'col_offset', 'elts', 'ctx'], 'List') else: pass w_list = self.w_elts @@ -1894,9 +1767,6 @@ class Tuple(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elts, ctx, lineno, col_offset): self.elts = elts self.w_elts = None @@ -1914,7 +1784,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elts', 'ctx', 'lineno', 'col_offset'], 'Tuple') + self.missing_field(space, ['lineno', 'col_offset', 'elts', 'ctx'], 'Tuple') else: pass w_list = self.w_elts @@ -1931,9 +1801,6 @@ class Const(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1947,7 +1814,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Const') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Const') else: pass @@ -2009,7 +1876,6 @@ class Ellipsis(slice): - def __init__(self): self.initialization_state = 0 @@ -2021,14 +1887,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 0: - missing_field(space, self.initialization_state, [], 'Ellipsis') + self.missing_field(space, [], 'Ellipsis') else: pass class Slice(slice): - def __init__(self, lower, upper, step): self.lower = lower self.upper = upper @@ -2049,7 +1914,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~7) ^ 0: - missing_field(space, self.initialization_state, [None, None, None], 'Slice') + self.missing_field(space, [None, None, None], 'Slice') else: if not self.initialization_state & 1: self.lower = None @@ -2067,7 +1932,6 @@ class ExtSlice(slice): 
- def __init__(self, dims): self.dims = dims self.w_dims = None @@ -2083,7 +1947,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['dims'], 'ExtSlice') + self.missing_field(space, ['dims'], 'ExtSlice') else: pass w_list = self.w_dims @@ -2100,7 +1964,6 @@ class Index(slice): - def __init__(self, value): self.value = value self.initialization_state = 1 @@ -2114,7 +1977,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['value'], 'Index') + self.missing_field(space, ['value'], 'Index') else: pass self.value.sync_app_attrs(space) @@ -2377,7 +2240,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['target', 'iter', 'ifs'], 'comprehension') + self.missing_field(space, ['target', 'iter', 'ifs'], 'comprehension') else: pass self.target.sync_app_attrs(space) @@ -2394,15 +2257,13 @@ node.sync_app_attrs(space) class excepthandler(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class ExceptHandler(excepthandler): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, type, name, body, lineno, col_offset): self.type = type self.name = name @@ -2424,12 +2285,12 @@ return visitor.visit_ExceptHandler(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~3) ^ 28: - missing_field(space, self.initialization_state, [None, None, 'body', 'lineno', 'col_offset'], 'ExceptHandler') + if (self.initialization_state & ~12) ^ 19: + self.missing_field(space, ['lineno', 'col_offset', None, None, 'body'], 'ExceptHandler') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.type = None - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.name = None if self.type: self.type.sync_app_attrs(space) @@ -2470,7 +2331,7 @@ 
def sync_app_attrs(self, space): if (self.initialization_state & ~6) ^ 9: - missing_field(space, self.initialization_state, ['args', None, None, 'defaults'], 'arguments') + self.missing_field(space, ['args', None, None, 'defaults'], 'arguments') else: if not self.initialization_state & 2: self.vararg = None @@ -2513,7 +2374,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['arg', 'value'], 'keyword') + self.missing_field(space, ['arg', 'value'], 'keyword') else: pass self.value.sync_app_attrs(space) @@ -2533,7 +2394,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~2) ^ 1: - missing_field(space, self.initialization_state, ['name', None], 'alias') + self.missing_field(space, ['name', None], 'alias') else: if not self.initialization_state & 2: self.asname = None @@ -3019,6 +2880,8 @@ def Expression_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -3098,7 +2961,7 @@ w_obj = w_self.getdictvalue(space, 'lineno') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._lineno_mask: + if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) @@ -3112,14 +2975,14 @@ w_self.setdictvalue(space, 'lineno', w_new_value) return w_self.deldictvalue(space, 'lineno') - w_self.initialization_state |= w_self._lineno_mask + w_self.initialization_state |= 1 def stmt_get_col_offset(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'col_offset') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._col_offset_mask: + if not 
w_self.initialization_state & 2: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) @@ -3133,7 +2996,7 @@ w_self.setdictvalue(space, 'col_offset', w_new_value) return w_self.deldictvalue(space, 'col_offset') - w_self.initialization_state |= w_self._col_offset_mask + w_self.initialization_state |= 2 stmt.typedef = typedef.TypeDef("stmt", AST.typedef, @@ -3149,7 +3012,7 @@ w_obj = w_self.getdictvalue(space, 'name') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) @@ -3163,14 +3026,14 @@ w_self.setdictvalue(space, 'name', w_new_value) return w_self.deldictvalue(space, 'name') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def FunctionDef_get_args(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'args') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) @@ -3184,10 +3047,10 @@ w_self.setdictvalue(space, 'args', w_new_value) return w_self.deldictvalue(space, 'args') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def FunctionDef_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -3201,10 +3064,10 @@ def FunctionDef_set_body(space, w_self, 
w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 def FunctionDef_get_decorator_list(space, w_self): - if not w_self.initialization_state & 8: + if not w_self.initialization_state & 32: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: @@ -3218,7 +3081,7 @@ def FunctionDef_set_decorator_list(space, w_self, w_new_value): w_self.w_decorator_list = w_new_value - w_self.initialization_state |= 8 + w_self.initialization_state |= 32 _FunctionDef_field_unroller = unrolling_iterable(['name', 'args', 'body', 'decorator_list']) def FunctionDef_init(space, w_self, __args__): @@ -3254,7 +3117,7 @@ w_obj = w_self.getdictvalue(space, 'name') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) @@ -3268,10 +3131,10 @@ w_self.setdictvalue(space, 'name', w_new_value) return w_self.deldictvalue(space, 'name') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def ClassDef_get_bases(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'bases') if w_self.w_bases is None: @@ -3285,10 +3148,10 @@ def ClassDef_set_bases(space, w_self, w_new_value): w_self.w_bases = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def ClassDef_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise 
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -3302,10 +3165,10 @@ def ClassDef_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 def ClassDef_get_decorator_list(space, w_self): - if not w_self.initialization_state & 8: + if not w_self.initialization_state & 32: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list') if w_self.w_decorator_list is None: @@ -3319,7 +3182,7 @@ def ClassDef_set_decorator_list(space, w_self, w_new_value): w_self.w_decorator_list = w_new_value - w_self.initialization_state |= 8 + w_self.initialization_state |= 32 _ClassDef_field_unroller = unrolling_iterable(['name', 'bases', 'body', 'decorator_list']) def ClassDef_init(space, w_self, __args__): @@ -3356,7 +3219,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -3364,13 +3227,15 @@ def Return_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, True) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Return_field_unroller = unrolling_iterable(['value']) def Return_init(space, w_self, __args__): @@ -3397,7 +3262,7 @@ ) def Delete_get_targets(space, w_self): - if not w_self.initialization_state & 1: + if not 
w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: @@ -3411,7 +3276,7 @@ def Delete_set_targets(space, w_self, w_new_value): w_self.w_targets = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Delete_field_unroller = unrolling_iterable(['targets']) def Delete_init(space, w_self, __args__): @@ -3439,7 +3304,7 @@ ) def Assign_get_targets(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: @@ -3453,14 +3318,14 @@ def Assign_set_targets(space, w_self, w_new_value): w_self.w_targets = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Assign_get_value(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -3468,13 +3333,15 @@ def Assign_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Assign_field_unroller = unrolling_iterable(['targets', 'value']) def Assign_init(space, w_self, __args__): @@ -3507,7 +3374,7 @@ w_obj = 
w_self.getdictvalue(space, 'target') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) @@ -3515,20 +3382,22 @@ def AugAssign_set_target(space, w_self, w_new_value): try: w_self.target = space.interp_w(expr, w_new_value, False) + if type(w_self.target) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'target', w_new_value) return w_self.deldictvalue(space, 'target') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def AugAssign_get_op(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'op') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() @@ -3544,14 +3413,14 @@ return # need to save the original object too w_self.setdictvalue(space, 'op', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def AugAssign_get_value(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -3559,13 +3428,15 @@ def AugAssign_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise 
OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _AugAssign_field_unroller = unrolling_iterable(['target', 'op', 'value']) def AugAssign_init(space, w_self, __args__): @@ -3598,7 +3469,7 @@ w_obj = w_self.getdictvalue(space, 'dest') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dest') return space.wrap(w_self.dest) @@ -3606,16 +3477,18 @@ def Print_set_dest(space, w_self, w_new_value): try: w_self.dest = space.interp_w(expr, w_new_value, True) + if type(w_self.dest) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'dest', w_new_value) return w_self.deldictvalue(space, 'dest') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Print_get_values(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: @@ -3629,14 +3502,14 @@ def Print_set_values(space, w_self, w_new_value): w_self.w_values = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Print_get_nl(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'nl') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise 
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'nl') return space.wrap(w_self.nl) @@ -3650,7 +3523,7 @@ w_self.setdictvalue(space, 'nl', w_new_value) return w_self.deldictvalue(space, 'nl') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Print_field_unroller = unrolling_iterable(['dest', 'values', 'nl']) def Print_init(space, w_self, __args__): @@ -3684,7 +3557,7 @@ w_obj = w_self.getdictvalue(space, 'target') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) @@ -3692,20 +3565,22 @@ def For_set_target(space, w_self, w_new_value): try: w_self.target = space.interp_w(expr, w_new_value, False) + if type(w_self.target) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'target', w_new_value) return w_self.deldictvalue(space, 'target') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def For_get_iter(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'iter') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) @@ -3713,16 +3588,18 @@ def For_set_iter(space, w_self, w_new_value): try: w_self.iter = space.interp_w(expr, w_new_value, False) + if type(w_self.iter) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'iter', w_new_value) return 
w_self.deldictvalue(space, 'iter') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def For_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -3736,10 +3613,10 @@ def For_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 def For_get_orelse(space, w_self): - if not w_self.initialization_state & 8: + if not w_self.initialization_state & 32: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: @@ -3753,7 +3630,7 @@ def For_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 8 + w_self.initialization_state |= 32 _For_field_unroller = unrolling_iterable(['target', 'iter', 'body', 'orelse']) def For_init(space, w_self, __args__): @@ -3789,7 +3666,7 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) @@ -3797,16 +3674,18 @@ def While_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def While_get_body(space, w_self): 
- if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -3820,10 +3699,10 @@ def While_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def While_get_orelse(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: @@ -3837,7 +3716,7 @@ def While_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _While_field_unroller = unrolling_iterable(['test', 'body', 'orelse']) def While_init(space, w_self, __args__): @@ -3872,7 +3751,7 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) @@ -3880,16 +3759,18 @@ def If_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def If_get_body(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise 
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -3903,10 +3784,10 @@ def If_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def If_get_orelse(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: @@ -3920,7 +3801,7 @@ def If_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _If_field_unroller = unrolling_iterable(['test', 'body', 'orelse']) def If_init(space, w_self, __args__): @@ -3955,7 +3836,7 @@ w_obj = w_self.getdictvalue(space, 'context_expr') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'context_expr') return space.wrap(w_self.context_expr) @@ -3963,20 +3844,22 @@ def With_set_context_expr(space, w_self, w_new_value): try: w_self.context_expr = space.interp_w(expr, w_new_value, False) + if type(w_self.context_expr) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'context_expr', w_new_value) return w_self.deldictvalue(space, 'context_expr') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def With_get_optional_vars(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'optional_vars') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = 
space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'optional_vars') return space.wrap(w_self.optional_vars) @@ -3984,16 +3867,18 @@ def With_set_optional_vars(space, w_self, w_new_value): try: w_self.optional_vars = space.interp_w(expr, w_new_value, True) + if type(w_self.optional_vars) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'optional_vars', w_new_value) return w_self.deldictvalue(space, 'optional_vars') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def With_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -4007,7 +3892,7 @@ def With_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _With_field_unroller = unrolling_iterable(['context_expr', 'optional_vars', 'body']) def With_init(space, w_self, __args__): @@ -4041,7 +3926,7 @@ w_obj = w_self.getdictvalue(space, 'type') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) @@ -4049,20 +3934,22 @@ def Raise_set_type(space, w_self, w_new_value): try: w_self.type = space.interp_w(expr, w_new_value, True) + if type(w_self.type) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'type', w_new_value) return 
w_self.deldictvalue(space, 'type') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Raise_get_inst(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'inst') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'inst') return space.wrap(w_self.inst) @@ -4070,20 +3957,22 @@ def Raise_set_inst(space, w_self, w_new_value): try: w_self.inst = space.interp_w(expr, w_new_value, True) + if type(w_self.inst) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'inst', w_new_value) return w_self.deldictvalue(space, 'inst') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Raise_get_tback(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'tback') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'tback') return space.wrap(w_self.tback) @@ -4091,13 +3980,15 @@ def Raise_set_tback(space, w_self, w_new_value): try: w_self.tback = space.interp_w(expr, w_new_value, True) + if type(w_self.tback) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'tback', w_new_value) return w_self.deldictvalue(space, 'tback') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Raise_field_unroller = unrolling_iterable(['type', 'inst', 'tback']) def Raise_init(space, w_self, __args__): @@ -4126,7 +4017,7 @@ ) def TryExcept_get_body(space, 
w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -4140,10 +4031,10 @@ def TryExcept_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def TryExcept_get_handlers(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'handlers') if w_self.w_handlers is None: @@ -4157,10 +4048,10 @@ def TryExcept_set_handlers(space, w_self, w_new_value): w_self.w_handlers = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def TryExcept_get_orelse(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: @@ -4174,7 +4065,7 @@ def TryExcept_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _TryExcept_field_unroller = unrolling_iterable(['body', 'handlers', 'orelse']) def TryExcept_init(space, w_self, __args__): @@ -4206,7 +4097,7 @@ ) def TryFinally_get_body(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -4220,10 +4111,10 @@ def TryFinally_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 1 + 
w_self.initialization_state |= 4 def TryFinally_get_finalbody(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'finalbody') if w_self.w_finalbody is None: @@ -4237,7 +4128,7 @@ def TryFinally_set_finalbody(space, w_self, w_new_value): w_self.w_finalbody = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _TryFinally_field_unroller = unrolling_iterable(['body', 'finalbody']) def TryFinally_init(space, w_self, __args__): @@ -4271,7 +4162,7 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) @@ -4279,20 +4170,22 @@ def Assert_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Assert_get_msg(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'msg') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'msg') return space.wrap(w_self.msg) @@ -4300,13 +4193,15 @@ def Assert_set_msg(space, w_self, w_new_value): try: w_self.msg = space.interp_w(expr, w_new_value, True) + if 
type(w_self.msg) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'msg', w_new_value) return w_self.deldictvalue(space, 'msg') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Assert_field_unroller = unrolling_iterable(['test', 'msg']) def Assert_init(space, w_self, __args__): @@ -4334,7 +4229,7 @@ ) def Import_get_names(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: @@ -4348,7 +4243,7 @@ def Import_set_names(space, w_self, w_new_value): w_self.w_names = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Import_field_unroller = unrolling_iterable(['names']) def Import_init(space, w_self, __args__): @@ -4380,7 +4275,7 @@ w_obj = w_self.getdictvalue(space, 'module') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'module') return space.wrap(w_self.module) @@ -4397,10 +4292,10 @@ w_self.setdictvalue(space, 'module', w_new_value) return w_self.deldictvalue(space, 'module') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def ImportFrom_get_names(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: @@ -4414,14 +4309,14 @@ def ImportFrom_set_names(space, w_self, w_new_value): w_self.w_names = w_new_value - w_self.initialization_state |= 2 
+ w_self.initialization_state |= 8 def ImportFrom_get_level(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'level') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'level') return space.wrap(w_self.level) @@ -4435,7 +4330,7 @@ w_self.setdictvalue(space, 'level', w_new_value) return w_self.deldictvalue(space, 'level') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _ImportFrom_field_unroller = unrolling_iterable(['module', 'names', 'level']) def ImportFrom_init(space, w_self, __args__): @@ -4469,7 +4364,7 @@ w_obj = w_self.getdictvalue(space, 'body') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) @@ -4477,20 +4372,22 @@ def Exec_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'body', w_new_value) return w_self.deldictvalue(space, 'body') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Exec_get_globals(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'globals') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'globals') return space.wrap(w_self.globals) @@ -4498,20 
+4395,22 @@ def Exec_set_globals(space, w_self, w_new_value): try: w_self.globals = space.interp_w(expr, w_new_value, True) + if type(w_self.globals) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'globals', w_new_value) return w_self.deldictvalue(space, 'globals') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Exec_get_locals(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'locals') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'locals') return space.wrap(w_self.locals) @@ -4519,13 +4418,15 @@ def Exec_set_locals(space, w_self, w_new_value): try: w_self.locals = space.interp_w(expr, w_new_value, True) + if type(w_self.locals) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'locals', w_new_value) return w_self.deldictvalue(space, 'locals') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Exec_field_unroller = unrolling_iterable(['body', 'globals', 'locals']) def Exec_init(space, w_self, __args__): @@ -4554,7 +4455,7 @@ ) def Global_get_names(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: @@ -4568,7 +4469,7 @@ def Global_set_names(space, w_self, w_new_value): w_self.w_names = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Global_field_unroller = unrolling_iterable(['names']) def 
Global_init(space, w_self, __args__): @@ -4600,7 +4501,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -4608,13 +4509,15 @@ def Expr_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Expr_field_unroller = unrolling_iterable(['value']) def Expr_init(space, w_self, __args__): @@ -4696,7 +4599,7 @@ w_obj = w_self.getdictvalue(space, 'lineno') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._lineno_mask: + if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) @@ -4710,14 +4613,14 @@ w_self.setdictvalue(space, 'lineno', w_new_value) return w_self.deldictvalue(space, 'lineno') - w_self.initialization_state |= w_self._lineno_mask + w_self.initialization_state |= 1 def expr_get_col_offset(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'col_offset') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._col_offset_mask: + if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) @@ 
-4731,7 +4634,7 @@ w_self.setdictvalue(space, 'col_offset', w_new_value) return w_self.deldictvalue(space, 'col_offset') - w_self.initialization_state |= w_self._col_offset_mask + w_self.initialization_state |= 2 expr.typedef = typedef.TypeDef("expr", AST.typedef, @@ -4747,7 +4650,7 @@ w_obj = w_self.getdictvalue(space, 'op') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return boolop_to_class[w_self.op - 1]() @@ -4763,10 +4666,10 @@ return # need to save the original object too w_self.setdictvalue(space, 'op', w_new_value) - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def BoolOp_get_values(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: @@ -4780,7 +4683,7 @@ def BoolOp_set_values(space, w_self, w_new_value): w_self.w_values = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _BoolOp_field_unroller = unrolling_iterable(['op', 'values']) def BoolOp_init(space, w_self, __args__): @@ -4813,7 +4716,7 @@ w_obj = w_self.getdictvalue(space, 'left') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) @@ -4821,20 +4724,22 @@ def BinOp_set_left(space, w_self, w_new_value): try: w_self.left = space.interp_w(expr, w_new_value, False) + if type(w_self.left) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: 
if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'left', w_new_value) return w_self.deldictvalue(space, 'left') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def BinOp_get_op(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'op') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() @@ -4850,14 +4755,14 @@ return # need to save the original object too w_self.setdictvalue(space, 'op', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def BinOp_get_right(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'right') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'right') return space.wrap(w_self.right) @@ -4865,13 +4770,15 @@ def BinOp_set_right(space, w_self, w_new_value): try: w_self.right = space.interp_w(expr, w_new_value, False) + if type(w_self.right) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'right', w_new_value) return w_self.deldictvalue(space, 'right') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _BinOp_field_unroller = unrolling_iterable(['left', 'op', 'right']) def BinOp_init(space, w_self, __args__): @@ -4904,7 +4811,7 @@ w_obj = w_self.getdictvalue(space, 'op') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = 
space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return unaryop_to_class[w_self.op - 1]() @@ -4920,14 +4827,14 @@ return # need to save the original object too w_self.setdictvalue(space, 'op', w_new_value) - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def UnaryOp_get_operand(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'operand') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'operand') return space.wrap(w_self.operand) @@ -4935,13 +4842,15 @@ def UnaryOp_set_operand(space, w_self, w_new_value): try: w_self.operand = space.interp_w(expr, w_new_value, False) + if type(w_self.operand) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'operand', w_new_value) return w_self.deldictvalue(space, 'operand') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _UnaryOp_field_unroller = unrolling_iterable(['op', 'operand']) def UnaryOp_init(space, w_self, __args__): @@ -4973,7 +4882,7 @@ w_obj = w_self.getdictvalue(space, 'args') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) @@ -4987,14 +4896,14 @@ w_self.setdictvalue(space, 'args', w_new_value) return w_self.deldictvalue(space, 'args') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Lambda_get_body(space, w_self): if w_self.w_dict is not None: w_obj = 
w_self.getdictvalue(space, 'body') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) @@ -5002,13 +4911,15 @@ def Lambda_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'body', w_new_value) return w_self.deldictvalue(space, 'body') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Lambda_field_unroller = unrolling_iterable(['args', 'body']) def Lambda_init(space, w_self, __args__): @@ -5040,7 +4951,7 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) @@ -5048,20 +4959,22 @@ def IfExp_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def IfExp_get_body(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'body') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise 
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) @@ -5069,20 +4982,22 @@ def IfExp_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'body', w_new_value) return w_self.deldictvalue(space, 'body') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def IfExp_get_orelse(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'orelse') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') return space.wrap(w_self.orelse) @@ -5090,13 +5005,15 @@ def IfExp_set_orelse(space, w_self, w_new_value): try: w_self.orelse = space.interp_w(expr, w_new_value, False) + if type(w_self.orelse) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'orelse', w_new_value) return w_self.deldictvalue(space, 'orelse') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _IfExp_field_unroller = unrolling_iterable(['test', 'body', 'orelse']) def IfExp_init(space, w_self, __args__): @@ -5125,7 +5042,7 @@ ) def Dict_get_keys(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keys') if w_self.w_keys is None: @@ -5139,10 +5056,10 @@ def Dict_set_keys(space, w_self, w_new_value): w_self.w_keys = w_new_value - 
w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Dict_get_values(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: @@ -5156,7 +5073,7 @@ def Dict_set_values(space, w_self, w_new_value): w_self.w_values = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Dict_field_unroller = unrolling_iterable(['keys', 'values']) def Dict_init(space, w_self, __args__): @@ -5186,7 +5103,7 @@ ) def Set_get_elts(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: @@ -5200,7 +5117,7 @@ def Set_set_elts(space, w_self, w_new_value): w_self.w_elts = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Set_field_unroller = unrolling_iterable(['elts']) def Set_init(space, w_self, __args__): @@ -5232,7 +5149,7 @@ w_obj = w_self.getdictvalue(space, 'elt') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) @@ -5240,16 +5157,18 @@ def ListComp_set_elt(space, w_self, w_new_value): try: w_self.elt = space.interp_w(expr, w_new_value, False) + if type(w_self.elt) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'elt', w_new_value) return w_self.deldictvalue(space, 'elt') - w_self.initialization_state |= 1 + 
w_self.initialization_state |= 4 def ListComp_get_generators(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: @@ -5263,7 +5182,7 @@ def ListComp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _ListComp_field_unroller = unrolling_iterable(['elt', 'generators']) def ListComp_init(space, w_self, __args__): @@ -5296,7 +5215,7 @@ w_obj = w_self.getdictvalue(space, 'elt') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) @@ -5304,16 +5223,18 @@ def SetComp_set_elt(space, w_self, w_new_value): try: w_self.elt = space.interp_w(expr, w_new_value, False) + if type(w_self.elt) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'elt', w_new_value) return w_self.deldictvalue(space, 'elt') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def SetComp_get_generators(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: @@ -5327,7 +5248,7 @@ def SetComp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _SetComp_field_unroller = unrolling_iterable(['elt', 
'generators']) def SetComp_init(space, w_self, __args__): @@ -5360,7 +5281,7 @@ w_obj = w_self.getdictvalue(space, 'key') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'key') return space.wrap(w_self.key) @@ -5368,20 +5289,22 @@ def DictComp_set_key(space, w_self, w_new_value): try: w_self.key = space.interp_w(expr, w_new_value, False) + if type(w_self.key) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'key', w_new_value) return w_self.deldictvalue(space, 'key') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def DictComp_get_value(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -5389,16 +5312,18 @@ def DictComp_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def DictComp_get_generators(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", 
typename, 'generators') if w_self.w_generators is None: @@ -5412,7 +5337,7 @@ def DictComp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _DictComp_field_unroller = unrolling_iterable(['key', 'value', 'generators']) def DictComp_init(space, w_self, __args__): @@ -5446,7 +5371,7 @@ w_obj = w_self.getdictvalue(space, 'elt') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) @@ -5454,16 +5379,18 @@ def GeneratorExp_set_elt(space, w_self, w_new_value): try: w_self.elt = space.interp_w(expr, w_new_value, False) + if type(w_self.elt) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'elt', w_new_value) return w_self.deldictvalue(space, 'elt') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def GeneratorExp_get_generators(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: @@ -5477,7 +5404,7 @@ def GeneratorExp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _GeneratorExp_field_unroller = unrolling_iterable(['elt', 'generators']) def GeneratorExp_init(space, w_self, __args__): @@ -5510,7 +5437,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = 
space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -5518,13 +5445,15 @@ def Yield_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, True) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Yield_field_unroller = unrolling_iterable(['value']) def Yield_init(space, w_self, __args__): @@ -5555,7 +5484,7 @@ w_obj = w_self.getdictvalue(space, 'left') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) @@ -5563,16 +5492,18 @@ def Compare_set_left(space, w_self, w_new_value): try: w_self.left = space.interp_w(expr, w_new_value, False) + if type(w_self.left) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'left', w_new_value) return w_self.deldictvalue(space, 'left') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Compare_get_ops(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ops') if w_self.w_ops is None: @@ -5586,10 +5517,10 @@ def Compare_set_ops(space, w_self, w_new_value): w_self.w_ops = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 
def Compare_get_comparators(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'comparators') if w_self.w_comparators is None: @@ -5603,7 +5534,7 @@ def Compare_set_comparators(space, w_self, w_new_value): w_self.w_comparators = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Compare_field_unroller = unrolling_iterable(['left', 'ops', 'comparators']) def Compare_init(space, w_self, __args__): @@ -5638,7 +5569,7 @@ w_obj = w_self.getdictvalue(space, 'func') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'func') return space.wrap(w_self.func) @@ -5646,16 +5577,18 @@ def Call_set_func(space, w_self, w_new_value): try: w_self.func = space.interp_w(expr, w_new_value, False) + if type(w_self.func) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'func', w_new_value) return w_self.deldictvalue(space, 'func') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Call_get_args(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: @@ -5669,10 +5602,10 @@ def Call_set_args(space, w_self, w_new_value): w_self.w_args = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Call_get_keywords(space, w_self): - if not w_self.initialization_state & 4: + if not 
w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keywords') if w_self.w_keywords is None: @@ -5686,14 +5619,14 @@ def Call_set_keywords(space, w_self, w_new_value): w_self.w_keywords = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 def Call_get_starargs(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'starargs') if w_obj is not None: return w_obj - if not w_self.initialization_state & 8: + if not w_self.initialization_state & 32: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'starargs') return space.wrap(w_self.starargs) @@ -5701,20 +5634,22 @@ def Call_set_starargs(space, w_self, w_new_value): try: w_self.starargs = space.interp_w(expr, w_new_value, True) + if type(w_self.starargs) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'starargs', w_new_value) return w_self.deldictvalue(space, 'starargs') - w_self.initialization_state |= 8 + w_self.initialization_state |= 32 def Call_get_kwargs(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'kwargs') if w_obj is not None: return w_obj - if not w_self.initialization_state & 16: + if not w_self.initialization_state & 64: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwargs') return space.wrap(w_self.kwargs) @@ -5722,13 +5657,15 @@ def Call_set_kwargs(space, w_self, w_new_value): try: w_self.kwargs = space.interp_w(expr, w_new_value, True) + if type(w_self.kwargs) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise 
w_self.setdictvalue(space, 'kwargs', w_new_value) return w_self.deldictvalue(space, 'kwargs') - w_self.initialization_state |= 16 + w_self.initialization_state |= 64 _Call_field_unroller = unrolling_iterable(['func', 'args', 'keywords', 'starargs', 'kwargs']) def Call_init(space, w_self, __args__): @@ -5765,7 +5702,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -5773,13 +5710,15 @@ def Repr_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Repr_field_unroller = unrolling_iterable(['value']) def Repr_init(space, w_self, __args__): @@ -5810,7 +5749,7 @@ w_obj = w_self.getdictvalue(space, 'n') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'n') return w_self.n @@ -5824,7 +5763,7 @@ w_self.setdictvalue(space, 'n', w_new_value) return w_self.deldictvalue(space, 'n') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Num_field_unroller = unrolling_iterable(['n']) def Num_init(space, w_self, __args__): @@ -5855,7 +5794,7 @@ w_obj = w_self.getdictvalue(space, 's') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: 
typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 's') return w_self.s @@ -5869,7 +5808,7 @@ w_self.setdictvalue(space, 's', w_new_value) return w_self.deldictvalue(space, 's') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Str_field_unroller = unrolling_iterable(['s']) def Str_init(space, w_self, __args__): @@ -5900,7 +5839,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -5908,20 +5847,22 @@ def Attribute_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Attribute_get_attr(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'attr') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'attr') return space.wrap(w_self.attr) @@ -5935,14 +5876,14 @@ w_self.setdictvalue(space, 'attr', w_new_value) return w_self.deldictvalue(space, 'attr') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Attribute_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not 
w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() @@ -5958,7 +5899,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Attribute_field_unroller = unrolling_iterable(['value', 'attr', 'ctx']) def Attribute_init(space, w_self, __args__): @@ -5991,7 +5932,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -5999,20 +5940,22 @@ def Subscript_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Subscript_get_slice(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'slice') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'slice') return space.wrap(w_self.slice) @@ -6020,20 +5963,22 @@ def Subscript_set_slice(space, w_self, w_new_value): try: w_self.slice = space.interp_w(slice, w_new_value, False) + if type(w_self.slice) is slice: + raise 
OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'slice', w_new_value) return w_self.deldictvalue(space, 'slice') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Subscript_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() @@ -6049,7 +5994,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Subscript_field_unroller = unrolling_iterable(['value', 'slice', 'ctx']) def Subscript_init(space, w_self, __args__): @@ -6082,7 +6027,7 @@ w_obj = w_self.getdictvalue(space, 'id') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'id') return space.wrap(w_self.id) @@ -6096,14 +6041,14 @@ w_self.setdictvalue(space, 'id', w_new_value) return w_self.deldictvalue(space, 'id') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Name_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() @@ -6119,7 +6064,7 @@ return # need 
to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Name_field_unroller = unrolling_iterable(['id', 'ctx']) def Name_init(space, w_self, __args__): @@ -6147,7 +6092,7 @@ ) def List_get_elts(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: @@ -6161,14 +6106,14 @@ def List_set_elts(space, w_self, w_new_value): w_self.w_elts = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def List_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() @@ -6184,7 +6129,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _List_field_unroller = unrolling_iterable(['elts', 'ctx']) def List_init(space, w_self, __args__): @@ -6213,7 +6158,7 @@ ) def Tuple_get_elts(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: @@ -6227,14 +6172,14 @@ def Tuple_set_elts(space, w_self, w_new_value): w_self.w_elts = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Tuple_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = 
w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() @@ -6250,7 +6195,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Tuple_field_unroller = unrolling_iterable(['elts', 'ctx']) def Tuple_init(space, w_self, __args__): @@ -6283,7 +6228,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return w_self.value @@ -6297,7 +6242,7 @@ w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Const_field_unroller = unrolling_iterable(['value']) def Const_init(space, w_self, __args__): @@ -6409,6 +6354,8 @@ def Slice_set_lower(space, w_self, w_new_value): try: w_self.lower = space.interp_w(expr, w_new_value, True) + if type(w_self.lower) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6430,6 +6377,8 @@ def Slice_set_upper(space, w_self, w_new_value): try: w_self.upper = space.interp_w(expr, w_new_value, True) + if type(w_self.upper) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6451,6 +6400,8 @@ def Slice_set_step(space, w_self, w_new_value): try: w_self.step = space.interp_w(expr, w_new_value, True) + if 
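The hunks above renumber the `initialization_state` flags (1, 2, 4 becoming 4, 8, 16): the shared `lineno`/`col_offset` attributes now always occupy the two lowest bits of every node class, instead of per-class `_lineno_mask` constants, so each node's own fields start at bit 2. A minimal sketch of the pattern (class and field names here are illustrative, not PyPy's generated code):

```python
# Sketch of the field-presence bitmask used by the generated AST code.
# Attributes shared by all nodes claim the low bits; each node's own
# fields continue from there, so masks are fixed constants per class.
ATTRIBUTES = ['lineno', 'col_offset']          # bits 0 and 1

def make_masks(fields):
    """Assign one bit per name: attributes first, then the fields."""
    names = ATTRIBUTES + fields
    return {name: 1 << i for i, name in enumerate(names)}

class Node:
    def __init__(self):
        self.initialization_state = 0

    def set_field(self, masks, name):
        self.initialization_state |= masks[name]

    def has_field(self, masks, name):
        return bool(self.initialization_state & masks[name])

# e.g. a Subscript-like node: 'value' -> 4, 'slice' -> 8, 'ctx' -> 16,
# matching the renumbered constants in the diff above
masks = make_masks(['value', 'slice', 'ctx'])
```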
type(w_self.step) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6540,6 +6491,8 @@ def Index_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6809,6 +6762,8 @@ def comprehension_set_target(space, w_self, w_new_value): try: w_self.target = space.interp_w(expr, w_new_value, False) + if type(w_self.target) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6830,6 +6785,8 @@ def comprehension_set_iter(space, w_self, w_new_value): try: w_self.iter = space.interp_w(expr, w_new_value, False) + if type(w_self.iter) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6887,7 +6844,7 @@ w_obj = w_self.getdictvalue(space, 'lineno') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._lineno_mask: + if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) @@ -6901,14 +6858,14 @@ w_self.setdictvalue(space, 'lineno', w_new_value) return w_self.deldictvalue(space, 'lineno') - w_self.initialization_state |= w_self._lineno_mask + w_self.initialization_state |= 1 def excepthandler_get_col_offset(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'col_offset') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._col_offset_mask: + if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) raise 
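The repeated `if type(w_self.X) is expr: raise OperationError(...)` additions above tighten the setters: `interp_w(expr, w_new_value, ...)` accepts any instance of the `expr` hierarchy, including a bare instance of the abstract base itself, and the extra check rejects exactly that case while still allowing all concrete subclasses. A standalone sketch of the idea (class names are illustrative):

```python
# Sketch: accept subclasses of an abstract AST base but reject direct
# instances of the base itself, mirroring the added `type(...) is expr`
# checks in the generated setters.
class expr:            # abstract base
    pass

class Name(expr):      # one concrete subclass
    pass

def check_expr(value, optional=False):
    if value is None:
        if optional:
            return None
        raise TypeError("field may not be None")
    # isinstance() alone would let a bare expr() instance through:
    if not isinstance(value, expr) or type(value) is expr:
        raise TypeError("expected a concrete expr subclass")
    return value
```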
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) @@ -6922,7 +6879,7 @@ w_self.setdictvalue(space, 'col_offset', w_new_value) return w_self.deldictvalue(space, 'col_offset') - w_self.initialization_state |= w_self._col_offset_mask + w_self.initialization_state |= 2 excepthandler.typedef = typedef.TypeDef("excepthandler", AST.typedef, @@ -6938,7 +6895,7 @@ w_obj = w_self.getdictvalue(space, 'type') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) @@ -6946,20 +6903,22 @@ def ExceptHandler_set_type(space, w_self, w_new_value): try: w_self.type = space.interp_w(expr, w_new_value, True) + if type(w_self.type) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'type', w_new_value) return w_self.deldictvalue(space, 'type') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def ExceptHandler_get_name(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'name') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) @@ -6967,16 +6926,18 @@ def ExceptHandler_set_name(space, w_self, w_new_value): try: w_self.name = space.interp_w(expr, w_new_value, True) + if type(w_self.name) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'name', 
w_new_value) return w_self.deldictvalue(space, 'name') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def ExceptHandler_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -6990,7 +6951,7 @@ def ExceptHandler_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _ExceptHandler_field_unroller = unrolling_iterable(['type', 'name', 'body']) def ExceptHandler_init(space, w_self, __args__): @@ -7164,6 +7125,8 @@ def keyword_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -1,6 +1,5 @@ """codegen helpers and AST constant folding.""" import sys -import itertools from pypy.interpreter.astcompiler import ast, consts, misc from pypy.tool import stdlib_opcode as ops @@ -146,8 +145,7 @@ } unrolling_unary_folders = unrolling_iterable(unary_folders.items()) -for folder in itertools.chain(binary_folders.itervalues(), - unary_folders.itervalues()): +for folder in binary_folders.values() + unary_folders.values(): folder._always_inline_ = True del folder diff --git a/pypy/interpreter/astcompiler/tools/asdl_py.py b/pypy/interpreter/astcompiler/tools/asdl_py.py --- a/pypy/interpreter/astcompiler/tools/asdl_py.py +++ b/pypy/interpreter/astcompiler/tools/asdl_py.py @@ -79,6 +79,7 @@ else: self.emit("class %s(AST):" % (base,)) if sum.attributes: + 
self.emit("") args = ", ".join(attr.name.value for attr in sum.attributes) self.emit("def __init__(self, %s):" % (args,), 1) for attr in sum.attributes: @@ -114,7 +115,7 @@ else: names.append(repr(field.name.value)) sub = (", ".join(names), name.value) - self.emit("missing_field(space, self.initialization_state, [%s], %r)" + self.emit("self.missing_field(space, [%s], %r)" % sub, 3) self.emit("else:", 2) # Fill in all the default fields. @@ -195,17 +196,13 @@ def visitConstructor(self, cons, base, extra_attributes): self.emit("class %s(%s):" % (cons.name, base)) self.emit("") - for field in self.data.cons_attributes[cons]: - subst = (field.name, self.data.field_masks[field]) - self.emit("_%s_mask = %i" % subst, 1) - self.emit("") self.make_constructor(cons.fields, cons, extra_attributes, base) self.emit("") self.emit("def walkabout(self, visitor):", 1) self.emit("visitor.visit_%s(self)" % (cons.name,), 2) self.emit("") self.make_mutate_over(cons, cons.name) - self.make_var_syncer(cons.fields + self.data.cons_attributes[cons], + self.make_var_syncer(self.data.cons_attributes[cons] + cons.fields, cons, cons.name) def visitField(self, field): @@ -324,7 +321,7 @@ def visitSum(self, sum, name): for field in sum.attributes: - self.make_property(field, name, True) + self.make_property(field, name) self.make_typedef(name, "AST", sum.attributes, fields_name="_attributes") if not is_simple_sum(sum): @@ -400,13 +397,10 @@ def visitField(self, field, name): self.make_property(field, name) - def make_property(self, field, name, different_masks=False): + def make_property(self, field, name): func = "def %s_get_%s(space, w_self):" % (name, field.name) self.emit(func) - if different_masks: - flag = "w_self._%s_mask" % (field.name,) - else: - flag = self.data.field_masks[field] + flag = self.data.field_masks[field] if not field.seq: self.emit("if w_self.w_dict is not None:", 1) self.emit(" w_obj = w_self.getdictvalue(space, '%s')" % (field.name,), 1) @@ -458,6 +452,11 @@ config = 
(field.name, field.type, repr(field.opt)) self.emit("w_self.%s = space.interp_w(%s, w_new_value, %s)" % config, 2) + if field.type.value not in self.data.prod_simple: + self.emit("if type(w_self.%s) is %s:" % ( + field.name, field.type), 2) + self.emit("raise OperationError(space.w_TypeError, " + "space.w_None)", 3) else: level = 2 if field.opt and field.type.value != "int": @@ -505,7 +504,10 @@ optional_mask = 0 for i, field in enumerate(fields): flag = 1 << i - field_masks[field] = flag + if field not in field_masks: + field_masks[field] = flag + else: + assert field_masks[field] == flag if field.opt: optional_mask |= flag else: @@ -518,9 +520,9 @@ if is_simple_sum(sum): simple_types.add(tp.name.value) else: + attrs = [field for field in sum.attributes] for cons in sum.types: - attrs = [copy_field(field) for field in sum.attributes] - add_masks(cons.fields + attrs, cons) + add_masks(attrs + cons.fields, cons) cons_attributes[cons] = attrs else: prod = tp.value @@ -588,6 +590,24 @@ space.setattr(self, w_name, space.getitem(w_state, w_name)) + def missing_field(self, space, required, host): + "Find which required field is missing." + state = self.initialization_state + for i in range(len(required)): + if (state >> i) & 1: + continue # field is present + missing = required[i] + if missing is None: + continue # field is optional + w_obj = self.getdictvalue(space, missing) + if w_obj is None: + err = "required field \\"%s\\" missing from %s" + raise operationerrfmt(space.w_TypeError, err, missing, host) + else: + err = "incorrect type for field \\"%s\\" in %s" + raise operationerrfmt(space.w_TypeError, err, missing, host) + raise AssertionError("should not reach here") + class NodeVisitorNotImplemented(Exception): pass @@ -631,15 +651,6 @@ ) -def missing_field(space, state, required, host): - "Find which required field is missing." 
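The new `missing_field` method above scans the initialization-state bits in field order and reports the first required field that was never set; by also probing the instance dict it distinguishes a field that is truly absent from one that was assigned an ill-typed value (stored in the dict but rejected by the typed slot). A self-contained sketch of that logic, with `instance_dict` standing in for the node's app-level `__dict__`:

```python
# Sketch of the missing-field diagnosis: walk the required-field list in
# bit order; the first unset bit names the culprit.
def missing_field(state, required, host, instance_dict):
    for i, name in enumerate(required):
        if (state >> i) & 1:
            continue            # field was initialized
        if name is None:
            continue            # optional field, no bit required
        if name not in instance_dict:
            raise TypeError('required field "%s" missing from %s'
                            % (name, host))
        raise TypeError('incorrect type for field "%s" in %s'
                        % (name, host))
    raise AssertionError("should not reach here")
```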
- for i in range(len(required)): - if not (state >> i) & 1: - missing = required[i] - if missing is not None: - err = "required field \\"%s\\" missing from %s" - raise operationerrfmt(space.w_TypeError, err, missing, host) - raise AssertionError("should not reach here") """ diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1,4 +1,3 @@ -import itertools import pypy from pypy.interpreter.executioncontext import ExecutionContext, ActionFlag from pypy.interpreter.executioncontext import UserDelAction, FrameTraceAction @@ -188,6 +187,12 @@ # ------------------------------------------------------------------- + def is_w(self, space, w_other): + return self is w_other + + def immutable_unique_id(self, space): + return None + def str_w(self, space): w_msg = typed_unwrap_error_msg(space, "string", self) raise OperationError(space.w_TypeError, w_msg) @@ -482,6 +487,16 @@ 'parser', 'fcntl', '_codecs', 'binascii' ] + # These modules are treated like CPython treats built-in modules, + # i.e. they always shadow any xx.py. The other modules are treated + # like CPython treats extension modules, and are loaded in sys.path + # order by the fake entry '.../lib_pypy/__extensions__'. + MODULES_THAT_ALWAYS_SHADOW = dict.fromkeys([ + '__builtin__', '__pypy__', '_ast', '_codecs', '_sre', '_warnings', + '_weakref', 'errno', 'exceptions', 'gc', 'imp', 'marshal', + 'posix', 'nt', 'pwd', 'signal', 'sys', 'thread', 'zipimport', + ], None) + def make_builtins(self): "NOT_RPYTHON: only for initializing the space." @@ -513,8 +528,8 @@ exception_types_w = self.export_builtin_exceptions() # initialize with "bootstrap types" from objspace (e.g. 
w_None) - types_w = itertools.chain(self.get_builtin_types().iteritems(), - exception_types_w.iteritems()) + types_w = (self.get_builtin_types().items() + + exception_types_w.items()) for name, w_type in types_w: self.setitem(self.builtin.w_dict, self.wrap(name), w_type) @@ -681,9 +696,20 @@ """shortcut for space.is_true(space.eq(w_obj1, w_obj2))""" return self.is_w(w_obj1, w_obj2) or self.is_true(self.eq(w_obj1, w_obj2)) - def is_w(self, w_obj1, w_obj2): - """shortcut for space.is_true(space.is_(w_obj1, w_obj2))""" - return self.is_true(self.is_(w_obj1, w_obj2)) + def is_(self, w_one, w_two): + return self.newbool(self.is_w(w_one, w_two)) + + def is_w(self, w_one, w_two): + # done by a method call on w_two (and not on w_one, because of the + # expected programming style where we say "if x is None" or + # "if x is object"). + return w_two.is_w(self, w_one) + + def id(self, w_obj): + w_result = w_obj.immutable_unique_id(self) + if w_result is None: + w_result = self.wrap(compute_unique_id(w_obj)) + return w_result def hash_w(self, w_obj): """shortcut for space.int_w(space.hash(w_obj))""" @@ -879,6 +905,16 @@ """ return self.unpackiterable(w_iterable, expected_length) + def listview_str(self, w_list): + """ Return a list of unwrapped strings out of a list of strings. If the + argument is not a list or does not contain only strings, return None. + May return None anyway. + """ + return None + + def newlist_str(self, list_s): + return self.newlist([self.wrap(s) for s in list_s]) + @jit.unroll_safe def exception_match(self, w_exc_type, w_check_class): """Checks if the given exception type matches 'w_check_class'.""" @@ -1013,9 +1049,6 @@ def isinstance_w(self, w_obj, w_type): return self.is_true(self.isinstance(w_obj, w_type)) - def id(self, w_obj): - return self.wrap(compute_unique_id(w_obj)) - # The code below only works # for the simple case (new-style instance). 
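The `baseobjspace` hunks above reroute `space.is_w(a, b)` through a method on the *second* object (so idioms like `x is None` dispatch on the constant operand), and make `space.id()` consult a new `immutable_unique_id()` hook so objects that may exist as several transparent copies can still report one identity. A toy model of the dispatch, with invented wrapper classes:

```python
# Sketch of identity dispatch: default identity is `is`, but a type with
# an immutable unique id (here, a toy cached-int wrapper) can make two
# distinct wrapper objects behave as the same object.
class W_Root:
    def is_w(self, other):
        return self is other
    def immutable_unique_id(self):
        return None

class W_SmallInt(W_Root):
    def __init__(self, value):
        self.value = value
    def is_w(self, other):
        return isinstance(other, W_SmallInt) and other.value == self.value
    def immutable_unique_id(self):
        return ('int', self.value)    # stable across wrapper copies

def space_is_w(w_one, w_two):
    # dispatch on the *second* argument, as in the patched object space
    return w_two.is_w(w_one)

def space_id(w_obj):
    uid = w_obj.immutable_unique_id()
    return uid if uid is not None else id(w_obj)
```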
# These methods are patched with the full logic by the __builtin__ @@ -1587,6 +1620,8 @@ 'UnicodeError', 'ValueError', 'ZeroDivisionError', + 'UnicodeEncodeError', + 'UnicodeDecodeError', ] ## Irregular part of the interface: diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py --- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -98,7 +98,6 @@ "Abstract. Get the expected number of locals." raise TypeError, "abstract" - @jit.dont_look_inside def fast2locals(self): # Copy values from the fastlocals to self.w_locals if self.w_locals is None: @@ -112,7 +111,6 @@ w_name = self.space.wrap(name) self.space.setitem(self.w_locals, w_name, w_value) - @jit.dont_look_inside def locals2fast(self): # Copy values from self.w_locals to the fastlocals assert self.w_locals is not None diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -619,7 +619,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -655,7 +656,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -674,7 +676,8 @@ self.descr_reqcls, args.prepend(w_obj)) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -690,7 +693,8 @@ raise OperationError(space.w_SystemError, space.wrap("unexpected DescrMismatch error")) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -708,7 +712,8 @@ self.descr_reqcls, Arguments(space, [w1])) except Exception, e: - raise 
self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -726,7 +731,8 @@ self.descr_reqcls, Arguments(space, [w1, w2])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -744,7 +750,8 @@ self.descr_reqcls, Arguments(space, [w1, w2, w3])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -763,7 +770,8 @@ Arguments(space, [w1, w2, w3, w4])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -1,8 +1,9 @@ +from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError -from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import NoneNotWrapped +from pypy.interpreter.pyopcode import LoopBlock from pypy.rlib import jit -from pypy.interpreter.pyopcode import LoopBlock +from pypy.rlib.objectmodel import specialize class GeneratorIterator(Wrappable): @@ -156,38 +157,43 @@ break block = block.previous - def unpack_into(self, results_w): - """This is a hack for performance: runs the generator and collects - all produced items in a list.""" - # XXX copied and simplified version of send_ex() - space = self.space - if self.running: - raise OperationError(space.w_ValueError, - space.wrap('generator already executing')) - frame = self.frame - if frame is None: # already finished - return - self.running = True - try: - pycode = self.pycode - while True: - jitdriver.jit_merge_point(self=self, frame=frame, - 
results_w=results_w, - pycode=pycode) - try: - w_result = frame.execute_frame(space.w_None) - except OperationError, e: - if not e.match(space, space.w_StopIteration): - raise - break - # if the frame is now marked as finished, it was RETURNed from - if frame.frame_finished_execution: - break - results_w.append(w_result) # YIELDed - finally: - frame.f_backref = jit.vref_None - self.running = False - self.frame = None - -jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results_w']) + # Results can be either an RPython list of W_Root, or it can be an + # app-level W_ListObject, which also has an append() method, that's why we + # generate 2 versions of the function and 2 jit drivers. + def _create_unpack_into(): + jitdriver = jit.JitDriver(greens=['pycode'], + reds=['self', 'frame', 'results']) + def unpack_into(self, results): + """This is a hack for performance: runs the generator and collects + all produced items in a list.""" + # XXX copied and simplified version of send_ex() + space = self.space + if self.running: + raise OperationError(space.w_ValueError, + space.wrap('generator already executing')) + frame = self.frame + if frame is None: # already finished + return + self.running = True + try: + pycode = self.pycode + while True: + jitdriver.jit_merge_point(self=self, frame=frame, + results=results, pycode=pycode) + try: + w_result = frame.execute_frame(space.w_None) + except OperationError, e: + if not e.match(space, space.w_StopIteration): + raise + break + # if the frame is now marked as finished, it was RETURNed from + if frame.frame_finished_execution: + break + results.append(w_result) # YIELDed + finally: + frame.f_backref = jit.vref_None + self.running = False + self.frame = None + return unpack_into + unpack_into = _create_unpack_into() + unpack_into_w = _create_unpack_into() \ No newline at end of file diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ 
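The generator hunk above replaces the single `unpack_into` with a factory called twice: the results container can be either an RPython list of `W_Root` or an app-level list object, and each variant needs its own function body and its own JIT driver. Calling the factory twice yields two independent specializations of otherwise identical code. A sketch of the pattern, with a plain object standing in for `jit.JitDriver`:

```python
# Sketch of specializing one function body per container type by calling
# a factory twice; `driver` stands in for a per-variant jit.JitDriver.
def _create_unpack_into():
    driver = object()            # one fresh driver per generated copy

    def unpack_into(source, results):
        """Drain `source`, appending every produced item to `results`."""
        for item in source:
            results.append(item)

    unpack_into.driver = driver  # exposed here only for illustration
    return unpack_into

unpack_into = _create_unpack_into()      # RPython-list variant
unpack_into_w = _create_unpack_into()    # app-level-list variant
```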
b/pypy/interpreter/pyframe.py @@ -10,7 +10,7 @@ from pypy.rlib.objectmodel import we_are_translated, instantiate from pypy.rlib.jit import hint from pypy.rlib.debug import make_sure_not_resized, check_nonneg -from pypy.rlib.rarithmetic import intmask +from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rlib import jit from pypy.tool import stdlib_opcode from pypy.tool.stdlib_opcode import host_bytecode_spec @@ -167,7 +167,7 @@ # Execution starts just after the last_instr. Initially, # last_instr is -1. After a generator suspends it points to # the YIELD_VALUE instruction. - next_instr = self.last_instr + 1 + next_instr = r_uint(self.last_instr + 1) if next_instr != 0: self.pushvalue(w_inputvalue) # @@ -691,6 +691,7 @@ handlerposition = space.int_w(w_handlerposition) valuestackdepth = space.int_w(w_valuestackdepth) assert valuestackdepth >= 0 + assert handlerposition >= 0 blk = instantiate(get_block_class(opname)) blk.handlerposition = handlerposition blk.valuestackdepth = valuestackdepth diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -837,6 +837,7 @@ raise Yield def jump_absolute(self, jumpto, next_instr, ec): + check_nonneg(jumpto) return jumpto def JUMP_FORWARD(self, jumpby, next_instr): @@ -1278,7 +1279,7 @@ def handle(self, frame, unroller): next_instr = self.really_handle(frame, unroller) # JIT hack - return next_instr + return r_uint(next_instr) def really_handle(self, frame, unroller): """ Purely abstract method diff --git a/pypy/interpreter/test/test_executioncontext.py b/pypy/interpreter/test/test_executioncontext.py --- a/pypy/interpreter/test/test_executioncontext.py +++ b/pypy/interpreter/test/test_executioncontext.py @@ -292,7 +292,7 @@ import os, sys print sys.executable, self.tmpfile if sys.platform == "win32": - cmdformat = '""%s" "%s""' # excellent! tons of "! 
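The `pyframe`/`pyopcode` hunks above wrap `next_instr` in `r_uint` (PyPy's fixed-width unsigned integer from `pypy.rlib.rarithmetic`) and add `check_nonneg(jumpto)`, so the JIT can rely on bytecode positions never being negative. A toy 32-bit model of that invariant (the real `r_uint` is word-sized, not hardcoded to 32 bits):

```python
# Toy 32-bit unsigned wrapper showing the r_uint invariant: values are
# reduced modulo 2**32, so the result is provably non-negative.
MASK32 = 2 ** 32 - 1

def r_uint32(x):
    return x & MASK32

# last_instr starts at -1, so the first next_instr is r_uint(-1 + 1) == 0
next_instr = r_uint32(-1 + 1)
```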
+ cmdformat = '"%s" "%s"' else: cmdformat = "'%s' '%s'" g = os.popen(cmdformat % (sys.executable, self.tmpfile), 'r') diff --git a/pypy/interpreter/test/test_function.py b/pypy/interpreter/test/test_function.py --- a/pypy/interpreter/test/test_function.py +++ b/pypy/interpreter/test/test_function.py @@ -587,7 +587,7 @@ assert isinstance(meth2, Method) assert meth2.call_args(args) == obj1 # Check method returned from unbound_method.__get__() - w_meth3 = descr_function_get(space, func, None, space.type(obj2)) + w_meth3 = descr_function_get(space, func, space.w_None, space.type(obj2)) meth3 = space.unwrap(w_meth3) w_meth4 = meth3.descr_method_get(obj2, space.w_None) meth4 = space.unwrap(w_meth4) diff --git a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -63,10 +63,13 @@ def test_unpackiterable(self): space = self.space w = space.wrap - l = [w(1), w(2), w(3), w(4)] + l = [space.newlist([]) for l in range(4)] w_l = space.newlist(l) - assert space.unpackiterable(w_l) == l - assert space.unpackiterable(w_l, 4) == l + l1 = space.unpackiterable(w_l) + l2 = space.unpackiterable(w_l, 4) + for i in range(4): + assert space.is_w(l1[i], l[i]) + assert space.is_w(l2[i], l[i]) err = raises(OperationError, space.unpackiterable, w_l, 3) assert err.value.match(space, space.w_ValueError) err = raises(OperationError, space.unpackiterable, w_l, 5) diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -54,7 +54,11 @@ # Hash support def default_identity_hash(space, w_obj): - return space.wrap(compute_identity_hash(w_obj)) + w_unique_id = w_obj.immutable_unique_id(space) + if w_unique_id is None: # common case + return space.wrap(compute_identity_hash(w_obj)) + else: + return space.hash(w_unique_id) # ____________________________________________________________ # diff --git 
a/pypy/jit/backend/arm/arch.py b/pypy/jit/backend/arm/arch.py --- a/pypy/jit/backend/arm/arch.py +++ b/pypy/jit/backend/arm/arch.py @@ -1,10 +1,9 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rarithmetic import r_uint -from pypy.rpython.lltypesystem import lltype -FUNC_ALIGN=8 -WORD=4 +FUNC_ALIGN = 8 +WORD = 4 # the number of registers that we need to save around malloc calls N_REGISTERS_SAVED_BY_MALLOC = 9 @@ -27,18 +26,22 @@ } """]) -arm_int_div_sign = lltype.Ptr(lltype.FuncType([lltype.Signed, lltype.Signed], lltype.Signed)) + def arm_int_div_emulator(a, b): - return int(a/float(b)) + return int(a / float(b)) +arm_int_div_sign = lltype.Ptr( + lltype.FuncType([lltype.Signed, lltype.Signed], lltype.Signed)) arm_int_div = rffi.llexternal( "pypy__arm_int_div", [lltype.Signed, lltype.Signed], lltype.Signed, _callable=arm_int_div_emulator, compilation_info=eci, _nowrapper=True, elidable_function=True) -arm_uint_div_sign = lltype.Ptr(lltype.FuncType([lltype.Unsigned, lltype.Unsigned], lltype.Unsigned)) + def arm_uint_div_emulator(a, b): - return r_uint(a)/r_uint(b) + return r_uint(a) / r_uint(b) +arm_uint_div_sign = lltype.Ptr( + lltype.FuncType([lltype.Unsigned, lltype.Unsigned], lltype.Unsigned)) arm_uint_div = rffi.llexternal( "pypy__arm_uint_div", [lltype.Unsigned, lltype.Unsigned], lltype.Unsigned, _callable=arm_uint_div_emulator, @@ -46,7 +49,6 @@ _nowrapper=True, elidable_function=True) -arm_int_mod_sign = arm_int_div_sign def arm_int_mod_emulator(a, b): sign = 1 if a < 0: @@ -56,9 +58,9 @@ b = -1 * b res = a % b return sign * res +arm_int_mod_sign = arm_int_div_sign arm_int_mod = rffi.llexternal( "pypy__arm_int_mod", [lltype.Signed, lltype.Signed], lltype.Signed, _callable=arm_int_mod_emulator, compilation_info=eci, _nowrapper=True, elidable_function=True) - diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -1,41 +1,36 @@ 
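The `arch.py` emulators reformatted above implement C (and ARM) signed division, which truncates toward zero, rather than Python's floor division: `int(a / float(b))` truncates, and the mod emulator strips both signs, takes the remainder, and reapplies the dividend's sign. The same arithmetic, extracted from the diff:

```python
# C-style signed division and remainder, as emulated for the ARM backend:
# division truncates toward zero; the remainder takes the dividend's sign.
def arm_int_div(a, b):
    return int(a / float(b))     # float divide, then int() truncates to 0

def arm_int_mod(a, b):
    sign = 1
    if a < 0:
        a = -1 * a
        sign = -1
    if b < 0:
        b = -1 * b
    return sign * (a % b)
```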
from __future__ import with_statement import os -from pypy.jit.backend.arm.helper.assembler import saved_registers, count_reg_args, \ +from pypy.jit.backend.arm.helper.assembler import saved_registers, \ + count_reg_args, \ decode32, encode32, \ - decode64, encode64 + decode64 from pypy.jit.backend.arm import conditions as c -from pypy.jit.backend.arm import locations from pypy.jit.backend.arm import registers as r -from pypy.jit.backend.arm.arch import WORD, FUNC_ALIGN, PC_OFFSET, N_REGISTERS_SAVED_BY_MALLOC +from pypy.jit.backend.arm.arch import WORD, FUNC_ALIGN, \ + PC_OFFSET, N_REGISTERS_SAVED_BY_MALLOC from pypy.jit.backend.arm.codebuilder import ARMv7Builder, OverwritingBuilder -from pypy.jit.backend.arm.regalloc import (Regalloc, ARMFrameManager, ARMv7RegisterMananger, - _check_imm_arg, TempInt, - TempPtr, - operations as regalloc_operations, - operations_with_guard as regalloc_operations_with_guard) +from pypy.jit.backend.arm.regalloc import (Regalloc, ARMFrameManager, + ARMv7RegisterManager, check_imm_arg, + operations as regalloc_operations, + operations_with_guard as regalloc_operations_with_guard) from pypy.jit.backend.arm.jump import remap_frame_layout -from pypy.jit.backend.llsupport.regalloc import compute_vars_longevity, TempBox from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.codewriter import longlong -from pypy.jit.metainterp.history import (Const, ConstInt, ConstPtr, - BoxInt, BoxPtr, AbstractFailDescr, - INT, REF, FLOAT) +from pypy.jit.metainterp.history import (AbstractFailDescr, INT, REF, FLOAT) from pypy.jit.metainterp.resoperation import rop from pypy.rlib import rgc from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.rarithmetic import r_uint, r_longlong -from pypy.rlib.longlong2float import float2longlong, longlong2float from pypy.rpython.annlowlevel import llhelper from pypy.rpython.lltypesystem import lltype, rffi, llmemory from 
pypy.rpython.lltypesystem.lloperation import llop -from pypy.jit.backend.arm.opassembler import ResOpAssembler, GuardToken -from pypy.rlib.debug import (debug_print, debug_start, debug_stop, - have_debug_prints) +from pypy.jit.backend.arm.opassembler import ResOpAssembler +from pypy.rlib.debug import debug_print, debug_start, debug_stop # XXX Move to llsupport from pypy.jit.backend.x86.support import values_array, memcpy_fn + class AssemblerARM(ResOpAssembler): """ Encoding for locations in memory @@ -52,8 +47,8 @@ \xFF = END_OF_LOCS """ FLOAT_TYPE = '\xED' - REF_TYPE = '\xEE' - INT_TYPE = '\xEF' + REF_TYPE = '\xEE' + INT_TYPE = '\xEF' STACK_LOC = '\xFC' IMM_LOC = '\xFD' @@ -62,20 +57,18 @@ END_OF_LOCS = '\xFF' + STACK_FIXED_AREA = -1 def __init__(self, cpu, failargs_limit=1000): self.cpu = cpu self.fail_boxes_int = values_array(lltype.Signed, failargs_limit) - self.fail_boxes_float = values_array(longlong.FLOATSTORAGE, failargs_limit) + self.fail_boxes_float = values_array(longlong.FLOATSTORAGE, + failargs_limit) self.fail_boxes_ptr = values_array(llmemory.GCREF, failargs_limit) self.fail_boxes_count = 0 self.fail_force_index = 0 self.setup_failure_recovery() self.mc = None - self.malloc_func_addr = 0 - self.malloc_array_func_addr = 0 - self.malloc_str_func_addr = 0 - self.malloc_unicode_func_addr = 0 self.memcpy_addr = 0 self.pending_guards = None self._exit_code_addr = 0 @@ -84,10 +77,22 @@ self._regalloc = None self.datablockwrapper = None self.propagate_exception_path = 0 + self._compute_stack_size() + + def _compute_stack_size(self): + self.STACK_FIXED_AREA = len(r.callee_saved_registers) * WORD + self.STACK_FIXED_AREA += WORD # FORCE_TOKEN + self.STACK_FIXED_AREA += N_REGISTERS_SAVED_BY_MALLOC * WORD + if self.cpu.supports_floats: + self.STACK_FIXED_AREA += (len(r.callee_saved_vfp_registers) + * 2 * WORD) + if self.STACK_FIXED_AREA % 8 != 0: + self.STACK_FIXED_AREA += WORD # Stack alignment + assert self.STACK_FIXED_AREA % 8 == 0 def setup(self, looptoken, 
operations): self.current_clt = looptoken.compiled_loop_token - operations = self.cpu.gc_ll_descr.rewrite_assembler(self.cpu, + operations = self.cpu.gc_ll_descr.rewrite_assembler(self.cpu, operations, self.current_clt.allgcrefs) assert self.memcpy_addr != 0, 'setup_once() not called?' self.mc = ARMv7Builder() @@ -96,6 +101,7 @@ allblocks = self.get_asmmemmgr_blocks(looptoken) self.datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, allblocks) + self.target_tokens_currently_compiling = {} return operations def teardown(self): @@ -109,28 +115,15 @@ # Addresses of functions called by new_xxx operations gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() - ll_new = gc_ll_descr.get_funcptr_for_new() - self.malloc_func_addr = rffi.cast(lltype.Signed, ll_new) self._build_propagate_exception_path() - if gc_ll_descr.get_funcptr_for_newarray is not None: - ll_new_array = gc_ll_descr.get_funcptr_for_newarray() - self.malloc_array_func_addr = rffi.cast(lltype.Signed, - ll_new_array) - if gc_ll_descr.get_funcptr_for_newstr is not None: - ll_new_str = gc_ll_descr.get_funcptr_for_newstr() - self.malloc_str_func_addr = rffi.cast(lltype.Signed, - ll_new_str) - if gc_ll_descr.get_funcptr_for_newunicode is not None: - ll_new_unicode = gc_ll_descr.get_funcptr_for_newunicode() - self.malloc_unicode_func_addr = rffi.cast(lltype.Signed, - ll_new_unicode) if gc_ll_descr.get_malloc_slowpath_addr is not None: self._build_malloc_slowpath() if gc_ll_descr.gcrootmap and gc_ll_descr.gcrootmap.is_shadow_stack: self._build_release_gil(gc_ll_descr.gcrootmap) self.memcpy_addr = self.cpu.cast_ptr_to_int(memcpy_fn) self._exit_code_addr = self._gen_exit_path() - self._leave_jitted_hook_save_exc = self._gen_leave_jitted_hook_code(True) + self._leave_jitted_hook_save_exc = \ + self._gen_leave_jitted_hook_code(True) self._leave_jitted_hook = self._gen_leave_jitted_hook_code(False) @staticmethod @@ -146,16 +139,17 @@ after() _NOARG_FUNC = lltype.Ptr(lltype.FuncType([], lltype.Void)) + 
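The new `_compute_stack_size` above derives the fixed stack area from the callee-saved core registers, one word for the FORCE_TOKEN, the registers saved around malloc calls, and (when floats are supported) the callee-saved VFP registers at two words each, then pads the total to the 8-byte alignment the ARM EABI requires. The same arithmetic as a standalone function; the register counts in the test are illustrative stand-ins, not the backend's real lists:

```python
# Sketch of the ARM fixed-stack-area computation.
WORD = 4

def compute_stack_fixed_area(n_callee_saved, n_saved_by_malloc,
                             n_callee_saved_vfp, supports_floats):
    area = n_callee_saved * WORD
    area += WORD                              # FORCE_TOKEN slot
    area += n_saved_by_malloc * WORD
    if supports_floats:
        area += n_callee_saved_vfp * 2 * WORD # double-width VFP regs
    if area % 8 != 0:
        area += WORD                          # pad to 8-byte alignment
    return area
```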
def _build_release_gil(self, gcrootmap): assert gcrootmap.is_shadow_stack releasegil_func = llhelper(self._NOARG_FUNC, self._release_gil_shadowstack) reacqgil_func = llhelper(self._NOARG_FUNC, self._reacquire_gil_shadowstack) - self.releasegil_addr = rffi.cast(lltype.Signed, releasegil_func) + self.releasegil_addr = rffi.cast(lltype.Signed, releasegil_func) self.reacqgil_addr = rffi.cast(lltype.Signed, reacqgil_func) - def _gen_leave_jitted_hook_code(self, save_exc=False): + def _gen_leave_jitted_hook_code(self, save_exc): mc = ARMv7Builder() # XXX add a check if cpu supports floats with saved_registers(mc, r.caller_resp + [r.ip], r.caller_vfp_resp): @@ -174,7 +168,8 @@ # call on_leave_jitted_save_exc() # XXX add a check if cpu supports floats with saved_registers(mc, r.caller_resp + [r.ip], r.caller_vfp_resp): - addr = self.cpu.get_on_leave_jitted_int(save_exception=True) + addr = self.cpu.get_on_leave_jitted_int(save_exception=True, + default_to_memoryerror=True) mc.BL(addr) mc.gen_load_int(r.ip.value, self.cpu.propagate_exception_v) mc.MOV_rr(r.r0.value, r.ip.value) @@ -186,34 +181,37 @@ @rgc.no_collect def failure_recovery_func(mem_loc, frame_pointer, stack_pointer): """mem_loc is a structure in memory describing where the values for - the failargs are stored. - frame loc is the address of the frame pointer for the frame to be - decoded frame """ - return self.decode_registers_and_descr(mem_loc, frame_pointer, stack_pointer) + the failargs are stored. 
frame loc is the address of the frame + pointer for the frame to be decoded frame """ + return self.decode_registers_and_descr(mem_loc, + frame_pointer, stack_pointer) self.failure_recovery_func = failure_recovery_func - recovery_func_sign = lltype.Ptr(lltype.FuncType([lltype.Signed, lltype.Signed, lltype.Signed], lltype.Signed)) + recovery_func_sign = lltype.Ptr(lltype.FuncType([lltype.Signed, + lltype.Signed, lltype.Signed], lltype.Signed)) @rgc.no_collect def decode_registers_and_descr(self, mem_loc, frame_loc, regs_loc): - """Decode locations encoded in memory at mem_loc and write the values to - the failboxes. - Values for spilled vars and registers are stored on stack at frame_loc - """ - #XXX check if units are correct here, when comparing words and bytes and stuff - # assert 0, 'check if units are correct here, when comparing words and bytes and stuff' + """Decode locations encoded in memory at mem_loc and write the values + to the failboxes. Values for spilled vars and registers are stored on + stack at frame_loc """ + # XXX check if units are correct here, when comparing words and bytes + # and stuff assert 0, 'check if units are correct here, when comparing + # words and bytes and stuff' enc = rffi.cast(rffi.CCHARP, mem_loc) - frame_depth = frame_loc - (regs_loc + len(r.all_regs)*WORD + len(r.all_vfp_regs)*2*WORD) + frame_depth = frame_loc - (regs_loc + len(r.all_regs) + * WORD + len(r.all_vfp_regs) * 2 * WORD) assert (frame_loc - frame_depth) % 4 == 0 stack = rffi.cast(rffi.CCHARP, frame_loc - frame_depth) assert regs_loc % 4 == 0 vfp_regs = rffi.cast(rffi.CCHARP, regs_loc) - assert (regs_loc + len(r.all_vfp_regs)*2*WORD) % 4 == 0 + assert (regs_loc + len(r.all_vfp_regs) * 2 * WORD) % 4 == 0 assert frame_depth >= 0 - regs = rffi.cast(rffi.CCHARP, regs_loc + len(r.all_vfp_regs)*2*WORD) + regs = rffi.cast(rffi.CCHARP, + regs_loc + len(r.all_vfp_regs) * 2 * WORD) i = -1 fail_index = -1 while(True): @@ -231,49 +229,52 @@ if res == self.IMM_LOC: # imm value 
if group == self.INT_TYPE or group == self.REF_TYPE: - value = decode32(enc, i+1) + value = decode32(enc, i + 1) i += 4 else: assert group == self.FLOAT_TYPE - adr = decode32(enc, i+1) - value = rffi.cast(rffi.CArrayPtr(longlong.FLOATSTORAGE), adr)[0] + adr = decode32(enc, i + 1) + tp = rffi.CArrayPtr(longlong.FLOATSTORAGE) + value = rffi.cast(tp, adr)[0] self.fail_boxes_float.setitem(fail_index, value) i += 4 continue elif res == self.STACK_LOC: - stack_loc = decode32(enc, i+1) + stack_loc = decode32(enc, i + 1) i += 4 if group == self.FLOAT_TYPE: - value = decode64(stack, frame_depth - stack_loc*WORD) + value = decode64(stack, + frame_depth - (stack_loc + 1) * WORD) + fvalue = rffi.cast(longlong.FLOATSTORAGE, value) + self.fail_boxes_float.setitem(fail_index, fvalue) + continue + else: + value = decode32(stack, frame_depth - stack_loc * WORD) + else: # REG_LOC + reg = ord(enc[i]) + if group == self.FLOAT_TYPE: + value = decode64(vfp_regs, reg * 2 * WORD) self.fail_boxes_float.setitem(fail_index, value) continue else: - value = decode32(stack, frame_depth - stack_loc*WORD) - else: # REG_LOC - reg = ord(enc[i]) - if group == self.FLOAT_TYPE: - value = decode64(vfp_regs, reg*2*WORD) - self.fail_boxes_float.setitem(fail_index, value) - continue - else: - value = decode32(regs, reg*WORD) + value = decode32(regs, reg * WORD) if group == self.INT_TYPE: self.fail_boxes_int.setitem(fail_index, value) elif group == self.REF_TYPE: + assert (value & 3) == 0, "misaligned pointer" tgt = self.fail_boxes_ptr.get_addr_for_num(fail_index) rffi.cast(rffi.LONGP, tgt)[0] = value else: assert 0, 'unknown type' - assert enc[i] == self.END_OF_LOCS - descr = decode32(enc, i+1) + descr = decode32(enc, i + 1) self.fail_boxes_count = fail_index self.fail_force_index = frame_loc return descr - def decode_inputargs(self, enc, regalloc): + def decode_inputargs(self, enc): locs = [] j = 0 while enc[j] != self.END_OF_LOCS: @@ -282,7 +283,8 @@ j += 1 continue - assert res in [self.FLOAT_TYPE, 
self.INT_TYPE, self.REF_TYPE], 'location type is not supported' + assert res in [self.FLOAT_TYPE, self.INT_TYPE, self.REF_TYPE], \ + 'location type is not supported' res_type = res j += 1 res = enc[j] @@ -296,10 +298,10 @@ t = INT else: t = REF - stack_loc = decode32(enc, j+1) - loc = regalloc.frame_manager.frame_pos(stack_loc, t) + stack_loc = decode32(enc, j + 1) + loc = ARMFrameManager.frame_pos(stack_loc, t) j += 4 - else: # REG_LOC + else: # REG_LOC if res_type == self.FLOAT_TYPE: loc = r.all_vfp_regs[ord(res)] else: @@ -309,7 +311,6 @@ return locs def _build_malloc_slowpath(self): - gcrootmap = self.cpu.gc_ll_descr.gcrootmap mc = ARMv7Builder() assert self.cpu.supports_floats # We need to push two registers here because we are going to make a @@ -321,10 +322,10 @@ mc.SUB_rr(r.r0.value, r.r1.value, r.r0.value) addr = self.cpu.gc_ll_descr.get_malloc_slowpath_addr() # XXX replace with an STMxx operation - for reg, ofs in ARMv7RegisterMananger.REGLOC_TO_COPY_AREA_OFS.items(): + for reg, ofs in ARMv7RegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): mc.STR_ri(reg.value, r.fp.value, imm=ofs) mc.BL(addr) - for reg, ofs in ARMv7RegisterMananger.REGLOC_TO_COPY_AREA_OFS.items(): + for reg, ofs in ARMv7RegisterManager.REGLOC_TO_COPY_AREA_OFS.items(): mc.LDR_ri(reg.value, r.fp.value, imm=ofs) mc.CMP_ri(r.r0.value, 0) @@ -349,13 +350,16 @@ def _gen_exit_path(self): mc = ARMv7Builder() - decode_registers_addr = llhelper(self.recovery_func_sign, self.failure_recovery_func) - + decode_registers_addr = llhelper(self.recovery_func_sign, + self.failure_recovery_func) self._insert_checks(mc) with saved_registers(mc, r.all_regs, r.all_vfp_regs): - mc.MOV_rr(r.r0.value, r.ip.value) # move mem block address, to r0 to pass as - mc.MOV_rr(r.r1.value, r.fp.value) # pass the current frame pointer as second param - mc.MOV_rr(r.r2.value, r.sp.value) # pass the current stack pointer as third param + # move mem block address, to r0 to pass as + mc.MOV_rr(r.r0.value, r.ip.value) + # pass the 
current frame pointer as second param + mc.MOV_rr(r.r1.value, r.fp.value) + # pass the current stack pointer as third param + mc.MOV_rr(r.r2.value, r.sp.value) self._insert_checks(mc) mc.BL(rffi.cast(lltype.Signed, decode_registers_addr)) mc.MOV_rr(r.ip.value, r.r0.value) @@ -374,15 +378,15 @@ # 1 separator byte # 4 bytes for the faildescr # const floats are stored in memory and the box contains the address - memsize = (len(arglocs)-1)*6+5 + memsize = (len(arglocs) - 1) * 6 + 5 memaddr = self.datablockwrapper.malloc_aligned(memsize, alignment=1) mem = rffi.cast(rffi.CArrayPtr(lltype.Char), memaddr) i = 0 j = 0 while i < len(args): - if arglocs[i+1]: + if arglocs[i + 1]: arg = args[i] - loc = arglocs[i+1] + loc = arglocs[i + 1] if arg.type == INT: mem[j] = self.INT_TYPE j += 1 @@ -402,12 +406,18 @@ assert (arg.type == INT or arg.type == REF or arg.type == FLOAT) mem[j] = self.IMM_LOC - encode32(mem, j+1, loc.getint()) + encode32(mem, j + 1, loc.getint()) j += 5 else: assert loc.is_stack() mem[j] = self.STACK_LOC - encode32(mem, j+1, loc.position) + if arg.type == FLOAT: + # Float locs store the location number with an offset + # of 1 -.- so we need to take this into account here + # when generating the encoding + encode32(mem, j + 1, loc.position - 1) + else: + encode32(mem, j + 1, loc.position) j += 5 else: mem[j] = self.EMPTY_LOC @@ -417,15 +427,18 @@ mem[j] = chr(0xFF) n = self.cpu.get_fail_descr_number(descr) - encode32(mem, j+1, n) + encode32(mem, j + 1, n) return memaddr - def _gen_path_to_exit_path(self, descr, args, arglocs, fcond=c.AL, save_exc=False): + def _gen_path_to_exit_path(self, descr, args, arglocs, + save_exc, fcond=c.AL): + assert isinstance(save_exc, bool) memaddr = self.gen_descr_encoding(descr, args, arglocs) - self.gen_exit_code(self.mc, memaddr, fcond, save_exc) + self.gen_exit_code(self.mc, memaddr, save_exc, fcond) return memaddr - def gen_exit_code(self, mc, memaddr, fcond=c.AL, save_exc=False): + def gen_exit_code(self, mc, memaddr, 
save_exc, fcond=c.AL): + assert isinstance(save_exc, bool) self.mc.gen_load_int(r.ip.value, memaddr) #mc.LDR_ri(r.ip.value, r.pc.value, imm=WORD) if save_exc: @@ -440,30 +453,36 @@ self.mc.writechar(chr(0)) def gen_func_epilog(self, mc=None, cond=c.AL): + stack_size = self.STACK_FIXED_AREA + stack_size -= len(r.callee_saved_registers) * WORD + if self.cpu.supports_floats: + stack_size -= len(r.callee_saved_vfp_registers) * 2 * WORD + gcrootmap = self.cpu.gc_ll_descr.gcrootmap if mc is None: mc = self.mc if gcrootmap and gcrootmap.is_shadow_stack: self.gen_footer_shadowstack(gcrootmap, mc) - offset = 1 + mc.MOV_rr(r.sp.value, r.fp.value, cond=cond) + mc.ADD_ri(r.sp.value, r.sp.value, stack_size, cond=cond) if self.cpu.supports_floats: - offset += 1 # to keep stack alignment - mc.MOV_rr(r.sp.value, r.fp.value, cond=cond) - mc.ADD_ri(r.sp.value, r.sp.value, (N_REGISTERS_SAVED_BY_MALLOC+offset)*WORD, cond=cond) - if self.cpu.supports_floats: - mc.VPOP([reg.value for reg in r.callee_saved_vfp_registers], cond=cond) + mc.VPOP([reg.value for reg in r.callee_saved_vfp_registers], + cond=cond) mc.POP([reg.value for reg in r.callee_restored_registers], cond=cond) def gen_func_prolog(self): + stack_size = self.STACK_FIXED_AREA + stack_size -= len(r.callee_saved_registers) * WORD + if self.cpu.supports_floats: + stack_size -= len(r.callee_saved_vfp_registers) * 2 * WORD + self.mc.PUSH([reg.value for reg in r.callee_saved_registers]) - offset = 1 if self.cpu.supports_floats: self.mc.VPUSH([reg.value for reg in r.callee_saved_vfp_registers]) - offset +=1 # to keep stack alignment # here we modify the stack pointer to leave room for the 9 registers # that are going to be saved here around malloc calls and one word to # store the force index - self.mc.SUB_ri(r.sp.value, r.sp.value, (N_REGISTERS_SAVED_BY_MALLOC+offset)*WORD) + self.mc.SUB_ri(r.sp.value, r.sp.value, stack_size) self.mc.MOV_rr(r.fp.value, r.sp.value) gcrootmap = self.cpu.gc_ll_descr.gcrootmap if gcrootmap and 
gcrootmap.is_shadow_stack: @@ -475,191 +494,86 @@ # XXX add some comments rst = gcrootmap.get_root_stack_top_addr() self.mc.gen_load_int(r.ip.value, rst) - self.mc.LDR_ri(r.r4.value, r.ip.value) # LDR r4, [rootstacktop] - self.mc.ADD_ri(r.r5.value, r.r4.value, imm=2*WORD) # ADD r5, r4 [2*WORD] + self.mc.LDR_ri(r.r4.value, r.ip.value) # LDR r4, [rootstacktop] + self.mc.ADD_ri(r.r5.value, r.r4.value, + imm=2 * WORD) # ADD r5, r4 [2*WORD] self.mc.gen_load_int(r.r6.value, gcrootmap.MARKER) self.mc.STR_ri(r.r6.value, r.r4.value) - self.mc.STR_ri(r.fp.value, r.r4.value, WORD) + self.mc.STR_ri(r.fp.value, r.r4.value, WORD) self.mc.STR_ri(r.r5.value, r.ip.value) def gen_footer_shadowstack(self, gcrootmap, mc): rst = gcrootmap.get_root_stack_top_addr() mc.gen_load_int(r.ip.value, rst) - mc.LDR_ri(r.r4.value, r.ip.value) # LDR r4, [rootstacktop] - mc.SUB_ri(r.r5.value, r.r4.value, imm=2*WORD) # ADD r5, r4 [2*WORD] + mc.LDR_ri(r.r4.value, r.ip.value) # LDR r4, [rootstacktop] + mc.SUB_ri(r.r5.value, r.r4.value, imm=2 * WORD) # ADD r5, r4 [2*WORD] mc.STR_ri(r.r5.value, r.ip.value) - def gen_bootstrap_code(self, nonfloatlocs, floatlocs, inputargs): - for i in range(len(nonfloatlocs)): - loc = nonfloatlocs[i] - if loc is None: - continue - arg = inputargs[i] - assert arg.type != FLOAT - if arg.type == REF: - addr = self.fail_boxes_ptr.get_addr_for_num(i) - elif arg.type == INT: - addr = self.fail_boxes_int.get_addr_for_num(i) - else: - assert 0 - if loc.is_reg(): - reg = loc - else: - reg = r.ip - self.mc.gen_load_int(reg.value, addr) - self.mc.LDR_ri(reg.value, reg.value) - if loc.is_stack(): - self.mov_loc_loc(r.ip, loc) - for i in range(len(floatlocs)): - loc = floatlocs[i] - if loc is None: - continue - arg = inputargs[i] - assert arg.type == FLOAT - addr = self.fail_boxes_float.get_addr_for_num(i) - self.mc.gen_load_int(r.ip.value, addr) - if loc.is_vfp_reg(): - self.mc.VLDR(loc.value, r.ip.value) - else: - self.mc.VLDR(r.vfp_ip.value, r.ip.value) - 
self.mov_loc_loc(r.vfp_ip, loc) - - def gen_direct_bootstrap_code(self, loop_head, looptoken, inputargs): - self.gen_func_prolog() - nonfloatlocs, floatlocs = looptoken._arm_arglocs - - reg_args = count_reg_args(inputargs) - - stack_locs = len(inputargs) - reg_args - - selected_reg = 0 - count = 0 - float_args = [] - nonfloat_args = [] - nonfloat_regs = [] - # load reg args - for i in range(reg_args): - arg = inputargs[i] - if arg.type == FLOAT and count % 2 != 0: - selected_reg += 1 - count = 0 - reg = r.all_regs[selected_reg] - - if arg.type == FLOAT: - float_args.append((reg, floatlocs[i])) - else: - nonfloat_args.append(reg) - nonfloat_regs.append(nonfloatlocs[i]) - - if arg.type == FLOAT: - selected_reg += 2 - else: - selected_reg += 1 - count += 1 - - # move float arguments to vfp regsiters - for loc, vfp_reg in float_args: - self.mov_to_vfp_loc(loc, r.all_regs[loc.value+1], vfp_reg) - - # remap values stored in core registers - remap_frame_layout(self, nonfloat_args, nonfloat_regs, r.ip) - - # load values passed on the stack to the corresponding locations - stack_position = len(r.callee_saved_registers)*WORD + \ - len(r.callee_saved_vfp_registers)*2*WORD + \ - N_REGISTERS_SAVED_BY_MALLOC * WORD + \ - 2 * WORD # for the FAIL INDEX and the stack padding - count = 0 - for i in range(reg_args, len(inputargs)): - arg = inputargs[i] - if arg.type == FLOAT: - loc = floatlocs[i] - else: - loc = nonfloatlocs[i] - if loc.is_reg(): - self.mc.LDR_ri(loc.value, r.fp.value, stack_position) - count += 1 - elif loc.is_vfp_reg(): - if count % 2 != 0: - stack_position += WORD - count = 0 - self.mc.VLDR(loc.value, r.fp.value, stack_position) - elif loc.is_stack(): - if loc.type == FLOAT: - if count % 2 != 0: - stack_position += WORD - count = 0 - self.mc.VLDR(r.vfp_ip.value, r.fp.value, stack_position) - self.mov_loc_loc(r.vfp_ip, loc) - elif loc.type == INT or loc.type == REF: - count += 1 - self.mc.LDR_ri(r.ip.value, r.fp.value, stack_position) - self.mov_loc_loc(r.ip, loc) 
- else: - assert 0, 'invalid location' - else: - assert 0, 'invalid location' - if loc.type == FLOAT: - size = 2 - else: - size = 1 - stack_position += size * WORD - - sp_patch_location = self._prepare_sp_patch_position() - self.mc.B_offs(loop_head) - self._patch_sp_offset(sp_patch_location, looptoken._arm_frame_depth) - def _dump(self, ops, type='loop'): debug_start('jit-backend-ops') debug_print(type) for op in ops: debug_print(op.repr()) debug_stop('jit-backend-ops') + + def _call_header(self): + self.align() + self.gen_func_prolog() + # cpu interface def assemble_loop(self, inputargs, operations, looptoken, log): - clt = CompiledLoopToken(self.cpu, looptoken.number) clt.allgcrefs = [] looptoken.compiled_loop_token = clt + clt._debug_nbargs = len(inputargs) + + if not we_are_translated(): + # Arguments should be unique + assert len(set(inputargs)) == len(inputargs) operations = self.setup(looptoken, operations) self._dump(operations) - longevity = compute_vars_longevity(inputargs, operations) - regalloc = Regalloc(longevity, assembler=self, frame_manager=ARMFrameManager()) + self._call_header() + sp_patch_location = self._prepare_sp_patch_position() - self.align() - self.gen_func_prolog() - sp_patch_location = self._prepare_sp_patch_position() - nonfloatlocs, floatlocs = regalloc.prepare_loop(inputargs, operations, looptoken) - self.gen_bootstrap_code(nonfloatlocs, floatlocs, inputargs) - looptoken._arm_arglocs = [nonfloatlocs, floatlocs] + regalloc = Regalloc(assembler=self, frame_manager=ARMFrameManager()) + regalloc.prepare_loop(inputargs, operations) + loop_head = self.mc.currpos() + looptoken._arm_loop_code = loop_head - looptoken._arm_loop_code = loop_head - looptoken._arm_bootstrap_code = 0 - - self._walk_operations(operations, regalloc) - - looptoken._arm_frame_depth = regalloc.frame_manager.frame_depth - self._patch_sp_offset(sp_patch_location, looptoken._arm_frame_depth) - - self.align() - - direct_bootstrap_code = self.mc.currpos() - 
self.gen_direct_bootstrap_code(loop_head, looptoken, inputargs) + clt.frame_depth = -1 + frame_depth = self._assemble(operations, regalloc) + clt.frame_depth = frame_depth + self._patch_sp_offset(sp_patch_location, frame_depth) self.write_pending_failure_recoveries() - loop_start = self.materialize_loop(looptoken) - looptoken._arm_bootstrap_code = loop_start - looptoken._arm_direct_bootstrap_code = loop_start + direct_bootstrap_code - self.process_pending_guards(loop_start) + + rawstart = self.materialize_loop(looptoken) + looptoken._arm_func_addr = rawstart + + self.process_pending_guards(rawstart) + self.fixup_target_tokens(rawstart) + if log and not we_are_translated(): print 'Loop', inputargs, operations - self.mc._dump_trace(loop_start, 'loop_%s.asm' % self.cpu.total_compiled_loops) + self.mc._dump_trace(rawstart, + 'loop_%s.asm' % self.cpu.total_compiled_loops) print 'Done assembling loop with token %r' % looptoken self.teardown() + def _assemble(self, operations, regalloc): + regalloc.compute_hint_frame_locations(operations) + #self.mc.BKPT() + self._walk_operations(operations, regalloc) + frame_depth = regalloc.frame_manager.get_frame_depth() + jump_target_descr = regalloc.jump_target_descr + if jump_target_descr is not None: + frame_depth = max(frame_depth, + jump_target_descr._arm_clt.frame_depth) + return frame_depth + def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): operations = self.setup(original_loop_token, operations) @@ -667,32 +581,47 @@ assert isinstance(faildescr, AbstractFailDescr) code = faildescr._failure_recovery_code enc = rffi.cast(rffi.CCHARP, code) - longevity = compute_vars_longevity(inputargs, operations) - regalloc = Regalloc(longevity, assembler=self, - frame_manager=ARMFrameManager()) + frame_depth = faildescr._arm_current_frame_depth + arglocs = self.decode_inputargs(enc) + if not we_are_translated(): + assert len(inputargs) == len(arglocs) + + regalloc = Regalloc(assembler=self, 
frame_manager=ARMFrameManager()) + regalloc.prepare_bridge(inputargs, arglocs, operations) sp_patch_location = self._prepare_sp_patch_position() - frame_depth = faildescr._arm_frame_depth - locs = self.decode_inputargs(enc, regalloc) - assert len(inputargs) == len(locs) - regalloc.update_bindings(locs, frame_depth, inputargs) - self._walk_operations(operations, regalloc) + frame_depth = self._assemble(operations, regalloc) - #original_loop_token._arm_frame_depth = regalloc.frame_manager.frame_depth - self._patch_sp_offset(sp_patch_location, regalloc.frame_manager.frame_depth) + self._patch_sp_offset(sp_patch_location, frame_depth) self.write_pending_failure_recoveries() - bridge_start = self.materialize_loop(original_loop_token) - self.process_pending_guards(bridge_start) + rawstart = self.materialize_loop(original_loop_token) + self.process_pending_guards(rawstart) - self.patch_trace(faildescr, original_loop_token, bridge_start, regalloc) - if log and not we_are_translated(): - print 'Bridge', inputargs, operations - self.mc._dump_trace(bridge_start, 'bridge_%d.asm' % - self.cpu.total_compiled_bridges) + self.patch_trace(faildescr, original_loop_token, + rawstart, regalloc) + self.fixup_target_tokens(rawstart) + + if not we_are_translated(): + # for the benefit of tests + faildescr._arm_bridge_frame_depth = frame_depth + if log: + print 'Bridge', inputargs, operations + self.mc._dump_trace(rawstart, 'bridge_%d.asm' % + self.cpu.total_compiled_bridges) + self.current_clt.frame_depth = max(self.current_clt.frame_depth, + frame_depth) self.teardown() + def fixup_target_tokens(self, rawstart): + for targettoken in self.target_tokens_currently_compiling: + targettoken._arm_loop_code += rawstart + self.target_tokens_currently_compiling = None + + def target_arglocs(self, loop_token): + return loop_token._arm_arglocs + def materialize_loop(self, looptoken): self.datablockwrapper.done() # finish using cpu.asmmemmgr self.datablockwrapper = None @@ -705,12 +634,12 @@ descr 
= tok.descr #generate the exit stub and the encoded representation pos = self.mc.currpos() - tok.pos_recovery_stub = pos + tok.pos_recovery_stub = pos memaddr = self._gen_path_to_exit_path(descr, tok.failargs, - tok.faillocs, save_exc=tok.save_exc) + tok.faillocs, save_exc=tok.save_exc) # store info on the descr - descr._arm_frame_depth = tok.faillocs[0].getint() + descr._arm_current_frame_depth = tok.faillocs[0].getint() descr._failure_recovery_code = memaddr descr._arm_guard_pos = pos @@ -725,13 +654,15 @@ if not tok.is_invalidate: #patch the guard jumpt to the stub - # overwrite the generate NOP with a B_offs to the pos of the stub + # overwrite the generate NOP with a B_offs to the pos of the + # stub mc = ARMv7Builder() - mc.B_offs(descr._arm_guard_pos - tok.offset, c.get_opposite_of(tok.fcond)) + mc.B_offs(descr._arm_guard_pos - tok.offset, + c.get_opposite_of(tok.fcond)) mc.copy_to_raw_memory(block_start + tok.offset) else: clt.invalidate_positions.append( - (block_start + tok.offset, descr._arm_guard_pos - tok.offset)) + (block_start + tok.offset, descr._arm_guard_pos - tok.offset)) def get_asmmemmgr_blocks(self, looptoken): clt = looptoken.compiled_loop_token @@ -740,21 +671,22 @@ return clt.asmmemmgr_blocks def _prepare_sp_patch_position(self): - """Generate NOPs as placeholder to patch the instruction(s) to update the - sp according to the number of spilled variables""" - size = (self.mc.size_of_gen_load_int+WORD) + """Generate NOPs as placeholder to patch the instruction(s) to update + the sp according to the number of spilled variables""" + size = (self.mc.size_of_gen_load_int + WORD) l = self.mc.currpos() - for _ in range(size//WORD): + for _ in range(size // WORD): self.mc.NOP() return l def _patch_sp_offset(self, pos, frame_depth): - cb = OverwritingBuilder(self.mc, pos, OverwritingBuilder.size_of_gen_load_int+WORD) + cb = OverwritingBuilder(self.mc, pos, + OverwritingBuilder.size_of_gen_load_int + WORD) # Note: the frame_depth is one less than the 
value stored in the frame # manager if frame_depth == 1: return - n = (frame_depth-1)*WORD + n = (frame_depth - 1) * WORD # ensure the sp is 8 byte aligned when patching it if n % 8 != 0: @@ -784,7 +716,7 @@ cb.SUB_rr(r.sp.value, base_reg.value, r.ip.value, cond=fcond) def _walk_operations(self, operations, regalloc): - fcond=c.AL + fcond = c.AL self._regalloc = regalloc while regalloc.position() < len(operations) - 1: regalloc.next_instruction() @@ -794,20 +726,28 @@ if op.has_no_side_effect() and op.result not in regalloc.longevity: regalloc.possibly_free_vars_for_op(op) elif self.can_merge_with_next_guard(op, i, operations): + guard = operations[i + 1] + assert guard.is_guard() + arglocs = regalloc_operations_with_guard[opnum](regalloc, op, + guard, fcond) + fcond = asm_operations_with_guard[opnum](self, op, + guard, arglocs, regalloc, fcond) regalloc.next_instruction() - arglocs = regalloc_operations_with_guard[opnum](regalloc, op, - operations[i+1], fcond) - fcond = asm_operations_with_guard[opnum](self, op, - operations[i+1], arglocs, regalloc, fcond) + regalloc.possibly_free_vars_for_op(guard) + regalloc.possibly_free_vars(guard.getfailargs()) elif not we_are_translated() and op.getopnum() == -124: regalloc.prepare_force_spill(op, fcond) else: arglocs = regalloc_operations[opnum](regalloc, op, fcond) if arglocs is not None: - fcond = asm_operations[opnum](self, op, arglocs, regalloc, fcond) + fcond = asm_operations[opnum](self, op, arglocs, + regalloc, fcond) + if op.is_guard(): + regalloc.possibly_free_vars(op.getfailargs()) if op.result: regalloc.possibly_free_var(op.result) regalloc.possibly_free_vars_for_op(op) + regalloc.free_temp_vars() regalloc._check_invariants() # from ../x86/regalloc.py @@ -847,7 +787,7 @@ if size == 4: return if size == 1: - if not signed: #unsigned char + if not signed: # unsigned char self.mc.AND_ri(resloc.value, resloc.value, 0xFF) else: self.mc.LSL_ri(resloc.value, resloc.value, 24) @@ -856,9 +796,6 @@ if not signed: 
self.mc.LSL_ri(resloc.value, resloc.value, 16) self.mc.LSR_ri(resloc.value, resloc.value, 16) - #self.mc.MOV_ri(r.ip.value, 0xFF) - #self.mc.ORR_ri(r.ip.value, 0xCFF) - #self.mc.AND_rr(resloc.value, resloc.value, r.ip.value) else: self.mc.LSL_ri(resloc.value, resloc.value, 16) self.mc.ASR_ri(resloc.value, resloc.value, 16) @@ -873,7 +810,7 @@ # regalloc support def load(self, loc, value): - assert (loc.is_reg() and value.is_imm() + assert (loc.is_reg() and value.is_imm() or loc.is_vfp_reg() and value.is_imm_float()) if value.is_imm(): self.mc.gen_load_int(loc.value, value.getint()) @@ -907,44 +844,49 @@ temp = r.lr else: temp = r.ip - offset = ConstInt(loc.position*WORD) - if not _check_imm_arg(offset, size=0xFFF): + offset = loc.position * WORD + if not check_imm_arg(offset, size=0xFFF): self.mc.PUSH([temp.value], cond=cond) - self.mc.gen_load_int(temp.value, -offset.value, cond=cond) - self.mc.STR_rr(prev_loc.value, r.fp.value, temp.value, cond=cond) + self.mc.gen_load_int(temp.value, -offset, cond=cond) + self.mc.STR_rr(prev_loc.value, r.fp.value, + temp.value, cond=cond) self.mc.POP([temp.value], cond=cond) else: - self.mc.STR_ri(prev_loc.value, r.fp.value, imm=-1*offset.value, cond=cond) + self.mc.STR_ri(prev_loc.value, r.fp.value, + imm=-offset, cond=cond) else: assert 0, 'unsupported case' def _mov_stack_to_loc(self, prev_loc, loc, cond=c.AL): pushed = False if loc.is_reg(): - assert prev_loc.type != FLOAT, 'trying to load from an incompatible location into a core register' - assert loc is not r.lr, 'lr is not supported as a target when moving from the stack' + assert prev_loc.type != FLOAT, 'trying to load from an \ + incompatible location into a core register' + assert loc is not r.lr, 'lr is not supported as a target \ + when moving from the stack' # unspill a core register - offset = ConstInt(prev_loc.position*WORD) - if not _check_imm_arg(offset, size=0xFFF): + offset = prev_loc.position * WORD + if not check_imm_arg(offset, size=0xFFF): 
self.mc.PUSH([r.lr.value], cond=cond) pushed = True - self.mc.gen_load_int(r.lr.value, -offset.value, cond=cond) + self.mc.gen_load_int(r.lr.value, -offset, cond=cond) self.mc.LDR_rr(loc.value, r.fp.value, r.lr.value, cond=cond) else: - self.mc.LDR_ri(loc.value, r.fp.value, imm=-offset.value, cond=cond) + self.mc.LDR_ri(loc.value, r.fp.value, imm=-offset, cond=cond) if pushed: self.mc.POP([r.lr.value], cond=cond) elif loc.is_vfp_reg(): - assert prev_loc.type == FLOAT, 'trying to load from an incompatible location into a float register' + assert prev_loc.type == FLOAT, 'trying to load from an \ + incompatible location into a float register' # load spilled value into vfp reg - offset = ConstInt(prev_loc.position*WORD) + offset = prev_loc.position * WORD self.mc.PUSH([r.ip.value], cond=cond) pushed = True - if not _check_imm_arg(offset): - self.mc.gen_load_int(r.ip.value, offset.value, cond=cond) + if not check_imm_arg(offset): + self.mc.gen_load_int(r.ip.value, offset, cond=cond) self.mc.SUB_rr(r.ip.value, r.fp.value, r.ip.value, cond=cond) else: - self.mc.SUB_ri(r.ip.value, r.fp.value, offset.value, cond=cond) + self.mc.SUB_ri(r.ip.value, r.fp.value, offset, cond=cond) self.mc.VLDR(loc.value, r.ip.value, cond=cond) if pushed: self.mc.POP([r.ip.value], cond=cond) @@ -963,15 +905,16 @@ if loc.is_vfp_reg(): self.mc.VMOV_cc(loc.value, prev_loc.value, cond=cond) elif loc.is_stack(): - assert loc.type == FLOAT, 'trying to store to an incompatible location from a float register' + assert loc.type == FLOAT, 'trying to store to an \ + incompatible location from a float register' # spill vfp register self.mc.PUSH([r.ip.value], cond=cond) - offset = ConstInt(loc.position*WORD) - if not _check_imm_arg(offset): - self.mc.gen_load_int(r.ip.value, offset.value, cond=cond) + offset = loc.position * WORD + if not check_imm_arg(offset): + self.mc.gen_load_int(r.ip.value, offset, cond=cond) self.mc.SUB_rr(r.ip.value, r.fp.value, r.ip.value, cond=cond) else: - 
self.mc.SUB_ri(r.ip.value, r.fp.value, offset.value, cond=cond)
+            self.mc.SUB_ri(r.ip.value, r.fp.value, offset, cond=cond)
             self.mc.VSTR(prev_loc.value, r.ip.value, cond=cond)
             self.mc.POP([r.ip.value], cond=cond)
         else:
@@ -1009,17 +952,18 @@
             self.mc.POP([r.ip.value], cond=cond)
         elif vfp_loc.is_stack() and vfp_loc.type == FLOAT:
             # load spilled vfp value into two core registers
-            offset = ConstInt((vfp_loc.position)*WORD)
-            if not _check_imm_arg(offset, size=0xFFF):
+            offset = vfp_loc.position * WORD
+            if not check_imm_arg(offset, size=0xFFF):
                 self.mc.PUSH([r.ip.value], cond=cond)
-                self.mc.gen_load_int(r.ip.value, -offset.value, cond=cond)
+                self.mc.gen_load_int(r.ip.value, -offset, cond=cond)
                 self.mc.LDR_rr(reg1.value, r.fp.value, r.ip.value, cond=cond)
                 self.mc.ADD_ri(r.ip.value, r.ip.value, imm=WORD, cond=cond)
                 self.mc.LDR_rr(reg2.value, r.fp.value, r.ip.value, cond=cond)
                 self.mc.POP([r.ip.value], cond=cond)
             else:
-                self.mc.LDR_ri(reg1.value, r.fp.value, imm=-offset.value, cond=cond)
-                self.mc.LDR_ri(reg2.value, r.fp.value, imm=-offset.value+WORD, cond=cond)
+                self.mc.LDR_ri(reg1.value, r.fp.value, imm=-offset, cond=cond)
+                self.mc.LDR_ri(reg2.value, r.fp.value,
+                               imm=-offset + WORD, cond=cond)
         else:
             assert 0, 'unsupported case'
@@ -1031,17 +975,18 @@
             self.mc.VMOV_cr(vfp_loc.value, reg1.value, reg2.value, cond=cond)
         elif vfp_loc.is_stack():
             # move from two core registers to a float stack location
-            offset = ConstInt((vfp_loc.position)*WORD)
-            if not _check_imm_arg(offset, size=0xFFF):
+            offset = vfp_loc.position * WORD
+            if not check_imm_arg(offset, size=0xFFF):
                 self.mc.PUSH([r.ip.value], cond=cond)
-                self.mc.gen_load_int(r.ip.value, -offset.value, cond=cond)
+                self.mc.gen_load_int(r.ip.value, -offset, cond=cond)
                 self.mc.STR_rr(reg1.value, r.fp.value, r.ip.value, cond=cond)
                 self.mc.ADD_ri(r.ip.value, r.ip.value, imm=WORD, cond=cond)
                 self.mc.STR_rr(reg2.value, r.fp.value, r.ip.value, cond=cond)
                 self.mc.POP([r.ip.value], cond=cond)
             else:
-                self.mc.STR_ri(reg1.value, r.fp.value, imm=-offset.value, cond=cond)
-                self.mc.STR_ri(reg2.value, r.fp.value, imm=-offset.value+WORD, cond=cond)
+                self.mc.STR_ri(reg1.value, r.fp.value, imm=-offset, cond=cond)
+                self.mc.STR_ri(reg2.value, r.fp.value,
+                               imm=-offset + WORD, cond=cond)
         else:
             assert 0, 'unsupported case'
@@ -1092,14 +1037,15 @@
             llop.gc_assume_young_pointers(lltype.Void,
                                           llmemory.cast_ptr_to_adr(ptrs))

-    def malloc_cond(self, nursery_free_adr, nursery_top_adr, size, tid):
+    def malloc_cond(self, nursery_free_adr, nursery_top_adr, size):
+        assert size & (WORD-1) == 0     # must be correctly aligned
         size = max(size, self.cpu.gc_ll_descr.minimal_size_in_nursery)
-        size = (size + WORD-1) & ~(WORD-1)     # round up
+        size = (size + WORD - 1) & ~(WORD - 1)  # round up
         self.mc.gen_load_int(r.r0.value, nursery_free_adr)
         self.mc.LDR_ri(r.r0.value, r.r0.value)

-        if _check_imm_arg(ConstInt(size)):
+        if check_imm_arg(size):
             self.mc.ADD_ri(r.r1.value, r.r0.value, size)
         else:
             self.mc.gen_load_int(r.r1.value, size)
@@ -1136,10 +1082,6 @@
         self.mc.gen_load_int(r.ip.value, nursery_free_adr)
         self.mc.STR_ri(r.r1.value, r.ip.value)

-        self.mc.gen_load_int(r.ip.value, tid)
-        self.mc.STR_ri(r.ip.value, r.r0.value)
-
-
     def mark_gc_roots(self, force_index, use_copy_area=False):
         if force_index < 0:
             return     # not needed
@@ -1163,14 +1105,18 @@
         else:
             return 0

+
 def not_implemented(msg):
     os.write(2, '[ARM/asm] %s\n' % msg)
     raise NotImplementedError(msg)

+
 def notimplemented_op(self, op, arglocs, regalloc, fcond):
-    raise NotImplementedError, op
+    raise NotImplementedError(op)
+
+
 def notimplemented_op_with_guard(self, op, guard_op, arglocs, regalloc, fcond):
-    raise NotImplementedError, op
+    raise NotImplementedError(op)

 asm_operations = [notimplemented_op] * (rop._LAST + 1)
 asm_operations_with_guard = [notimplemented_op_with_guard] * (rop._LAST + 1)
@@ -1192,4 +1138,3 @@
     if hasattr(AssemblerARM, methname):
         func = getattr(AssemblerARM, methname).im_func
         asm_operations_with_guard[value] = func
-
diff --git a/pypy/jit/backend/arm/codebuilder.py b/pypy/jit/backend/arm/codebuilder.py
--- a/pypy/jit/backend/arm/codebuilder.py
+++ b/pypy/jit/backend/arm/codebuilder.py
@@ -3,18 +3,22 @@
 from pypy.jit.backend.arm import registers as reg
 from pypy.jit.backend.arm.arch import (WORD, FUNC_ALIGN)
 from pypy.jit.backend.arm.instruction_builder import define_instructions
-
-from pypy.rlib.rmmap import alloc, PTR
-from pypy.rpython.annlowlevel import llhelper
-from pypy.rpython.lltypesystem import lltype, rffi
-from pypy.jit.metainterp.history import ConstInt, BoxInt, AbstractFailDescr
+from pypy.jit.backend.llsupport.asmmemmgr import BlockBuilderMixin
 from pypy.rlib.objectmodel import we_are_translated
-from pypy.jit.backend.llsupport.asmmemmgr import BlockBuilderMixin
+from pypy.rpython.lltypesystem import lltype, rffi, llmemory
 from pypy.tool.udir import udir

+clear_cache = rffi.llexternal(
+    "__clear_cache",
+    [llmemory.Address, llmemory.Address],
+    lltype.Void,
+    _nowrapper=True,
+    sandboxsafe=True)
+
+
 def binary_helper_call(name):
-    signature = getattr(arch, 'arm_%s_sign' % name)
     function = getattr(arch, 'arm_%s' % name)
+
     def f(self, c=cond.AL):
         """Generates a call to a helper function, takes its
         arguments in r0 and r1, result is placed in r0"""
@@ -24,9 +28,10 @@
         else:
             self.PUSH(range(2, 4), cond=c)
         self.BL(addr, c)
-        self.POP(range(2,4), cond=c)
+        self.POP(range(2, 4), cond=c)
     return f

+
 class AbstractARMv7Builder(object):

     def __init__(self):
@@ -35,6 +40,7 @@
     def align(self):
         while(self.currpos() % FUNC_ALIGN != 0):
             self.writechar(chr(0))
+
     def NOP(self):
         self.MOV_rr(0, 0)

@@ -72,7 +78,7 @@
                     | 0xB << 8
                     | nregs)
         self.write32(instr)
-
+
     def VMOV_rc(self, rt, rt2, dm, cond=cond.AL):
         """This instruction copies two words from two ARM core registers
         into a doubleword extension register, or from a doubleword
         extension register
@@ -109,7 +115,7 @@
         self.write32(instr)

     def VMOV_cc(self, dd, dm, cond=cond.AL):
-        sz = 1 # for 64-bit mode
+        sz = 1  # for 64-bit mode
         instr = (cond << 28
                 | 0xEB << 20
                 | (dd & 0xF) << 12
@@ -156,10 +162,8 @@
         self.write32(cond << 28 | 0xEF1FA10)

     def B(self, target, c=cond.AL):
-        #assert self._fits_in_24bits(target)
-        #return (c << 20 | 0xA << 24 | target & 0xFFFFFF)
         if c == cond.AL:
-            self.LDR_ri(reg.pc.value, reg.pc.value, -arch.PC_OFFSET/2)
+            self.LDR_ri(reg.pc.value, reg.pc.value, -arch.PC_OFFSET / 2)
             self.write32(target)
         else:
             self.gen_load_int(reg.ip.value, target, cond=c)
@@ -173,8 +177,8 @@

     def BL(self, target, c=cond.AL):
         if c == cond.AL:
-            self.ADD_ri(reg.lr.value, reg.pc.value, arch.PC_OFFSET/2)
-            self.LDR_ri(reg.pc.value, reg.pc.value, imm=-arch.PC_OFFSET/2)
+            self.ADD_ri(reg.lr.value, reg.pc.value, arch.PC_OFFSET / 2)
+            self.LDR_ri(reg.pc.value, reg.pc.value, imm=-arch.PC_OFFSET / 2)
             self.write32(target)
         else:
             self.gen_load_int(reg.ip.value, target, cond=c)
@@ -228,7 +232,6 @@
     def currpos(self):
         raise NotImplementedError

-    size_of_gen_load_int = 2 * WORD
     def gen_load_int(self, r, value, cond=cond.AL):
         """r is the register number, value is the value to be loaded to the
         register"""
@@ -237,6 +240,8 @@
             self.MOVW_ri(r, bottom, cond)
         if top:
             self.MOVT_ri(r, top, cond)
+    size_of_gen_load_int = 2 * WORD
+

 class OverwritingBuilder(AbstractARMv7Builder):
     def __init__(self, cb, start, size):
@@ -253,6 +258,7 @@
         self.cb.overwrite(self.index, char)
         self.index += 1

+
 class ARMv7Builder(BlockBuilderMixin, AbstractARMv7Builder):

     def __init__(self):
         AbstractARMv7Builder.__init__(self)
@@ -272,7 +278,7 @@
     # XXX remove and setup aligning in llsupport
     def materialize(self, asmmemmgr, allblocks, gcrootmap=None):
         size = self.get_relative_pos()
-        malloced = asmmemmgr.malloc(size, size+7)
+        malloced = asmmemmgr.malloc(size, size + 7)
         allblocks.append(malloced)
         rawstart = malloced[0]
         while(rawstart % FUNC_ALIGN != 0):
@@ -284,8 +290,16 @@
                 gcrootmap.put(rawstart + pos, mark)
         return rawstart

+    def clear_cache(self, addr):
+        if we_are_translated():
+            startaddr = rffi.cast(llmemory.Address, addr)
+            endaddr = rffi.cast(llmemory.Address,
+                                addr + self.get_relative_pos())
+            clear_cache(startaddr, endaddr)
+
     def copy_to_raw_memory(self, addr):
         self._copy_to_raw_memory(addr)
+        self.clear_cache(addr)
         self._dump(addr, "jit-backend-dump", 'arm')

     def currpos(self):
diff --git a/pypy/jit/backend/arm/conditions.py b/pypy/jit/backend/arm/conditions.py
--- a/pypy/jit/backend/arm/conditions.py
+++ b/pypy/jit/backend/arm/conditions.py
@@ -15,10 +15,12 @@
 AL = 0xE

 opposites = [NE, EQ, CC, CS, PL, MI, VC, VS, LS, HI, LT, GE, LE, GT, AL]
+
+
 def get_opposite_of(operation):
     return opposites[operation]

-# see mapping for floating poin according to
+# see mapping for floating poin according to
 # http://blogs.arm.com/software-enablement/405-condition-codes-4-floating-point-comparisons-using-vfp/
 VFP_LT = CC
 VFP_LE = LS
diff --git a/pypy/jit/backend/arm/helper/assembler.py b/pypy/jit/backend/arm/helper/assembler.py
--- a/pypy/jit/backend/arm/helper/assembler.py
+++ b/pypy/jit/backend/arm/helper/assembler.py
@@ -29,7 +29,7 @@
         guard_opnum = guard.getopnum()
         if guard_opnum == rop.GUARD_FALSE:
             cond = false_cond
-        return self._emit_guard(guard, arglocs[1:], cond)
+        return self._emit_guard(guard, arglocs[1:], cond, save_exc=False)
     f.__name__ = 'emit_guard_%s' % name
     return f
@@ -92,7 +92,7 @@
         cond = true_cond
         if guard_opnum == rop.GUARD_FALSE:
             cond = false_cond
-        return self._emit_guard(guard, arglocs[2:], cond)
+        return self._emit_guard(guard, arglocs[2:], cond, save_exc=False)
     f.__name__ = 'emit_guard_%s' % name
     return f
@@ -137,7 +137,7 @@
         guard_opnum = guard.getopnum()
         if guard_opnum == rop.GUARD_FALSE:
             cond = false_cond
-        return self._emit_guard(guard, arglocs[2:], cond)
+        return self._emit_guard(guard, arglocs[2:], cond, save_exc=False)
     f.__name__ = 'emit_guard_%s' % name
     return f
diff --git a/pypy/jit/backend/arm/helper/regalloc.py b/pypy/jit/backend/arm/helper/regalloc.py
--- a/pypy/jit/backend/arm/helper/regalloc.py
+++ b/pypy/jit/backend/arm/helper/regalloc.py
@@ -3,42 +3,46 @@
 from pypy.jit.backend.arm.codebuilder import AbstractARMv7Builder
 from pypy.jit.metainterp.history import ConstInt, BoxInt, Box, FLOAT
 from pypy.jit.metainterp.history import ConstInt
+from pypy.rlib.objectmodel import we_are_translated

-# XXX create a version that does not need a ConstInt
-def _check_imm_arg(arg, size=0xFF, allow_zero=True):
+def check_imm_arg(arg, size=0xFF, allow_zero=True):
+    assert not isinstance(arg, ConstInt)
+    if not we_are_translated():
+        if not isinstance(arg, int):
+            import pdb; pdb.set_trace()
+    i = arg
+    if allow_zero:
+        lower_bound = i >= 0
+    else:
+        lower_bound = i > 0
+    return i <= size and lower_bound
+
+def check_imm_box(arg, size=0xFF, allow_zero=True):
     if isinstance(arg, ConstInt):
-        i = arg.getint()
-        if allow_zero:
-            lower_bound = i >= 0
-        else:
-            lower_bound = i > 0
-        return i <= size and lower_bound
+        return check_imm_arg(arg.getint(), size, allow_zero)
     return False

+
 def prepare_op_ri(name=None, imm_size=0xFF, commutative=True, allow_zero=True):
     def f(self, op, fcond):
         assert fcond is not None
         a0 = op.getarg(0)
         a1 = op.getarg(1)
         boxes = list(op.getarglist())
-        imm_a0 = _check_imm_arg(a0, imm_size, allow_zero=allow_zero)
-        imm_a1 = _check_imm_arg(a1, imm_size, allow_zero=allow_zero)
+        imm_a0 = check_imm_box(a0, imm_size, allow_zero=allow_zero)
+        imm_a1 = check_imm_box(a1, imm_size, allow_zero=allow_zero)
         if not imm_a0 and imm_a1:
-            l0, box = self._ensure_value_is_boxed(a0)
-            boxes.append(box)
+            l0 = self._ensure_value_is_boxed(a0)
             l1 = self.make_sure_var_in_reg(a1, boxes)
         elif commutative and imm_a0 and not imm_a1:
             l1 = self.make_sure_var_in_reg(a0, boxes)
-            l0, box = self._ensure_value_is_boxed(a1, boxes)
-            boxes.append(box)
+            l0 = self._ensure_value_is_boxed(a1, boxes)
         else:
-            l0, box = self._ensure_value_is_boxed(a0, boxes)
-            boxes.append(box)
-            l1, box = self._ensure_value_is_boxed(a1, boxes)
-            boxes.append(box)
-        self.possibly_free_vars(boxes)
+            l0 = self._ensure_value_is_boxed(a0, boxes)
+            l1 = self._ensure_value_is_boxed(a1, boxes)
+        self.possibly_free_vars_for_op(op)
+        self.free_temp_vars()
         res = self.force_allocate_reg(op.result, boxes)
-        self.possibly_free_var(op.result)
         return [l0, l1, res]
     if name:
         f.__name__ = name
@@ -48,36 +52,33 @@
     if guard:
         def f(self, op, guard_op, fcond):
             locs = []
-            loc1, box1 = self._ensure_value_is_boxed(op.getarg(0))
+            loc1 = self._ensure_value_is_boxed(op.getarg(0))
             locs.append(loc1)
             if base:
-                loc2, box2 = self._ensure_value_is_boxed(op.getarg(1))
+                loc2 = self._ensure_value_is_boxed(op.getarg(1))
                 locs.append(loc2)
-                self.possibly_free_var(box2)
-            self.possibly_free_var(box1)
+            self.possibly_free_vars_for_op(op)
+            self.free_temp_vars()
             if guard_op is None:
                 res = self.force_allocate_reg(op.result)
                 assert float_result == (op.result.type == FLOAT)
-                self.possibly_free_var(op.result)
                 locs.append(res)
                 return locs
             else:
                 args = self._prepare_guard(guard_op, locs)
-                self.possibly_free_vars(guard_op.getfailargs())
                 return args
     else:
         def f(self, op, fcond):
             locs = []
-            loc1, box1 = self._ensure_value_is_boxed(op.getarg(0))
+            loc1 = self._ensure_value_is_boxed(op.getarg(0))
             locs.append(loc1)
             if base:
-                loc2, box2 = self._ensure_value_is_boxed(op.getarg(1))
+                loc2 = self._ensure_value_is_boxed(op.getarg(1))
                 locs.append(loc2)
-                self.possibly_free_var(box2)
-            self.possibly_free_var(box1)
+            self.possibly_free_vars_for_op(op)
+            self.free_temp_vars()
             res = self.force_allocate_reg(op.result)
             assert float_result == (op.result.type == FLOAT)
-            self.possibly_free_var(op.result)
             locs.append(res)
             return locs
     if name:
@@ -108,23 +109,21 @@
         assert fcond is not None
         boxes = list(op.getarglist())
         arg0, arg1 = boxes
-        imm_a1 = _check_imm_arg(arg1)
+        imm_a1 = check_imm_box(arg1)

-        l0, box = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes)
-        boxes.append(box)
+        l0 = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes)
         if imm_a1:
             l1 = self.make_sure_var_in_reg(arg1, boxes)
         else:
-            l1, box = self._ensure_value_is_boxed(arg1, forbidden_vars=boxes)
-            boxes.append(box)
-        self.possibly_free_vars(boxes)
+            l1 = self._ensure_value_is_boxed(arg1, forbidden_vars=boxes)
+
+        self.possibly_free_vars_for_op(op)
+        self.free_temp_vars()
         if guard_op is None:
             res = self.force_allocate_reg(op.result)
-            self.possibly_free_var(op.result)
             return [l0, l1, res]
         else:
             args = self._prepare_guard(guard_op, [l0, l1])
-            self.possibly_free_vars(guard_op.getfailargs())
             return args
     if name:
         f.__name__ = name
@@ -134,15 +133,14 @@
     def f(self, op, guard_op, fcond):
         assert fcond is not None
         a0 = op.getarg(0)
-        reg, box = self._ensure_value_is_boxed(a0)
+        assert isinstance(a0, Box)
+        reg = self.make_sure_var_in_reg(a0)
+        self.possibly_free_vars_for_op(op)
         if guard_op is None:
-            res = self.force_allocate_reg(op.result, [box])
-            self.possibly_free_vars([a0, box, op.result])
+            res = self.force_allocate_reg(op.result, [a0])
             return [reg, res]
         else:
-            args = self._prepare_guard(guard_op, [reg])
-            self.possibly_free_vars(guard_op.getfailargs())
-            return args
+            return self._prepare_guard(guard_op, [reg])
     if name:
         f.__name__ = name
     return f
diff --git a/pypy/jit/backend/arm/instruction_builder.py b/pypy/jit/backend/arm/instruction_builder.py
--- a/pypy/jit/backend/arm/instruction_builder.py
+++ b/pypy/jit/backend/arm/instruction_builder.py
@@ -1,5 +1,7 @@
 from pypy.jit.backend.arm import conditions as cond
 from pypy.jit.backend.arm import instructions
+
+
 # move table lookup out of generated functions
 def define_load_store_func(name, table):
     n = (0x1 << 26
@@ -13,6 +15,7 @@
     rncond = ('rn' in table and table['rn'] == '!0xF')
     if table['imm']:
         assert not b_zero
+
         def f(self, rt, rn, imm=0, cond=cond.AL):
             assert not (rncond and rn == 0xF)
             p = 1
@@ -20,7 +23,7 @@
             u, imm = self._encode_imm(imm)
             instr = (n
                     | cond << 28
-                    | (p & 0x1) << 24
+                    | (p & 0x1) << 24
                     | (u & 0x1) << 23
                     | (w & 0x1) << 21
                     | imm_operation(rt, rn, imm))
@@ -34,7 +37,7 @@
             u, imm = self._encode_imm(imm)
             instr = (n
                     | cond << 28
-                    | (p & 0x1) << 24
+                    | (p & 0x1) << 24
                    | (u & 0x1) << 23
                    | (w & 0x1) << 21
                    | reg_operation(rt, rn, rm, imm, s, shifttype))
@@ -44,6 +47,7 @@
         self.write32(instr)
     return f

+
 def define_extra_load_store_func(name, table):
     def check_registers(r1, r2):
         assert r1 % 2 == 0
@@ -57,7 +61,7 @@
     p = 1
     w = 0
     rncond = ('rn' in table and table['rn'] == '!0xF')
-    dual = (name[-4] == 'D')
+    dual = (name[-4] == 'D')

     if dual:
         if name[-2:] == 'rr':
@@ -114,6 +118,7 @@
                     | (imm & 0xF))
         return f

+
 def define_data_proc_imm_func(name, table):
     n = (0x1 << 25
         | (table['op'] & 0x1F) << 20)
@@ -139,6 +144,7 @@
                     | imm_operation(0, rn, imm))
     return imm_func

+
 def define_data_proc_func(name, table):
     n = ((table['op1'] & 0x1F) << 20
         | (table['op2'] & 0x1F) << 7
@@ -175,6 +181,7 @@
                     | reg_operation(rd, rn, rm, imm, s, shifttype))
     return f

+
 def define_data_proc_reg_shift_reg_func(name, table):
     n = ((0x1 << 4) | (table['op1'] & 0x1F) << 20 | (table['op2'] & 0x3) << 5)
     if 'result' in table and not table['result']:
@@ -211,8 +218,10 @@
                     | (rn & 0xF))
     return f

+
 def define_supervisor_and_coproc_func(name, table):
     n = (0x3 << 26 | (table['op1'] & 0x3F) << 20 | (table['op'] & 0x1) << 4)
+
     def f(self, coproc, opc1, rt, crn, crm, opc2=0, cond=cond.AL):
         self.write32(n
                     | cond << 28
@@ -224,6 +233,7 @@
                     | (crm & 0xF))
     return f

+
 def define_multiply_func(name, table):
     n = (table['op'] & 0xF) << 20 | 0x9 << 4
     if 'acc' in table and table['acc']:
@@ -246,14 +256,14 @@
                     | (rn & 0xF))

     elif 'long' in table and table['long']:
-        def f(self, rdlo, rdhi, rn, rm, cond=cond.AL):
+        def f(self, rdlo, rdhi, rn, rm, cond=cond.AL):
             assert rdhi != rdlo
             self.write32(n
-                        | cond << 28
-                        | (rdhi & 0xF) << 16
-                        | (rdlo & 0xF) << 12
-                        | (rm & 0xF) << 8
-                        | (rn & 0xF))
+                    | cond << 28
+                    | (rdhi & 0xF) << 16
+                    | (rdlo & 0xF) << 12
+                    | (rm & 0xF) << 8
+                    | (rn & 0xF))
     else:
         def f(self, rd, rn, rm, cond=cond.AL, s=0):
             self.write32(n
@@ -265,8 +275,10 @@
     return f

+
 def define_block_data_func(name, table):
     n = (table['op'] & 0x3F) << 20
+
     def f(self, rn, regs, w=0, cond=cond.AL):
         # no R bit for now at bit 15
         instr = (n
@@ -278,6 +290,8 @@
         self.write32(instr)
     return f

+
+
 def define_float_load_store_func(name, table):
     n = (0x3 << 26
         | (table['opcode'] & 0x1F) << 20
@@ -288,9 +302,9 @@
     # the value actually encoded is imm / 4
     def f(self, dd, rn, imm=0, cond=cond.AL):
         assert imm % 4 == 0
-        imm = imm/4
+        imm = imm / 4
         u, imm = self._encode_imm(imm)
-        instr = ( n
+        instr = (n
                 | (cond & 0xF) << 28
                 | (u & 0x1) << 23
                 | (rn & 0xF) << 16
@@ -299,10 +313,11 @@
         self.write32(instr)
     return f

+
 def define_float64_data_proc_instructions_func(name, table):
     n = (0xE << 24
         | 0x5 << 9
-        | 0x1 << 8 # 64 bit flag
+        | 0x1 << 8  # 64 bit flag
         | (table['opc3'] & 0x3) << 6)

     if 'opc1' in table:
@@ -335,11 +350,13 @@
         self.write32(instr)
     return f

+
 def imm_operation(rt, rn, imm):
     return ((rn & 0xFF) << 16
     | (rt & 0xFF) << 12
     | (imm & 0xFFF))

+
 def reg_operation(rt, rn, rm, imm, s, shifttype):
     return ((s & 0x1) << 20
     | (rn & 0xF) << 16
@@ -348,10 +365,12 @@
     | (shifttype & 0x3) << 5
     | (rm & 0xF))

+
 def define_instruction(builder, key, val, target):
     f = builder(key, val)
     setattr(target, key, f)

+
 def define_instructions(target):
     inss = [k for k in instructions.__dict__.keys() if not k.startswith('__')]
     for name in inss:
diff --git a/pypy/jit/backend/arm/instructions.py b/pypy/jit/backend/arm/instructions.py
--- a/pypy/jit/backend/arm/instructions.py
+++ b/pypy/jit/backend/arm/instructions.py
@@ -1,93 +1,94 @@
 load_store = {
-    'STR_ri': {'A':0, 'op1': 0x0, 'op1not': 0x2, 'imm': True},
-    'STR_rr': {'A':1, 'op1': 0x0, 'op1not': 0x2, 'B': 0, 'imm': False},
-    'LDR_ri': {'A':0, 'op1': 0x1, 'op1not': 0x3, 'imm': True},
-    'LDR_rr': {'A':1, 'op1': 0x1, 'op1not': 0x3, 'B': 0, 'imm': False},
-    'STRB_ri': {'A':0, 'op1': 0x4, 'op1not': 0x6, 'rn':'!0xF', 'imm': True},
-    'STRB_rr': {'A':1, 'op1': 0x4, 'op1not': 0x6, 'B': 0, 'imm': False},
-    'LDRB_ri': {'A':0, 'op1': 0x5, 'op1not': 0x7, 'rn':'!0xF', 'imm': True},
-    'LDRB_rr': {'A':1, 'op1': 0x5, 'op1not': 0x7, 'B': 0, 'imm': False},
+    'STR_ri': {'A': 0, 'op1': 0x0, 'op1not': 0x2, 'imm': True},
+    'STR_rr': {'A': 1, 'op1': 0x0, 'op1not': 0x2, 'B': 0, 'imm': False},
+    'LDR_ri': {'A': 0, 'op1': 0x1, 'op1not': 0x3, 'imm': True},
+    'LDR_rr': {'A': 1, 'op1': 0x1, 'op1not': 0x3, 'B': 0, 'imm': False},
+    'STRB_ri': {'A': 0, 'op1': 0x4, 'op1not': 0x6, 'rn': '!0xF', 'imm': True},
+    'STRB_rr': {'A': 1, 'op1': 0x4, 'op1not': 0x6, 'B': 0, 'imm': False},
+    'LDRB_ri': {'A': 0, 'op1': 0x5, 'op1not': 0x7, 'rn': '!0xF', 'imm': True},
+    'LDRB_rr': {'A': 1, 'op1': 0x5, 'op1not': 0x7, 'B': 0, 'imm': False},
 }
-extra_load_store = { #Section 5.2.8
+extra_load_store = {  # Section 5.2.8
     'STRH_rr': {'op2': 0x1, 'op1': 0x0},
     'LDRH_rr': {'op2': 0x1, 'op1': 0x1},
     'STRH_ri': {'op2': 0x1, 'op1': 0x4},
-    'LDRH_ri': {'op2': 0x1, 'op1': 0x5, 'rn':'!0xF'},
+    'LDRH_ri': {'op2': 0x1, 'op1': 0x5, 'rn': '!0xF'},
     'LDRD_rr': {'op2': 0x2, 'op1': 0x0},
     'LDRSB_rr': {'op2': 0x2, 'op1': 0x1},
     'LDRD_ri': {'op2': 0x2, 'op1': 0x4},
-    'LDRSB_ri': {'op2': 0x2, 'op1': 0x5, 'rn':'!0xF'},
+    'LDRSB_ri': {'op2': 0x2, 'op1': 0x5, 'rn': '!0xF'},
    'STRD_rr': {'op2': 0x3, 'op1': 0x0},
    'LDRSH_rr': {'op2': 0x3, 'op1': 0x1},
    'STRD_ri': {'op2': 0x3, 'op1': 0x4},
-    'LDRSH_ri': {'op2': 0x3, 'op1': 0x5, 'rn':'!0xF'},
+    'LDRSH_ri': {'op2': 0x3, 'op1': 0x5, 'rn': '!0xF'},
 }

 data_proc = {
-    'AND_rr': {'op1':0x0, 'op2':0, 'op3':0, 'result':True, 'base':True},
-    'EOR_rr': {'op1':0x2, 'op2':0, 'op3':0, 'result':True, 'base':True},
-    'SUB_rr': {'op1':0x4, 'op2':0, 'op3':0, 'result':True, 'base':True},
-    'RSB_rr': {'op1':0x6, 'op2':0, 'op3':0, 'result':True, 'base':True},
-    'ADD_rr': {'op1':0x8, 'op2':0, 'op3':0, 'result':True, 'base':True},
-    'ADC_rr': {'op1':0xA, 'op2':0, 'op3':0, 'result':True, 'base':True},
-    'SBC_rr': {'op1':0xC, 'op2':0, 'op3':0, 'result':True, 'base':True},
-    'RSC_rr': {'op1':0xE, 'op2':0, 'op3':0, 'result':True, 'base':True},
-    'TST_rr': {'op1':0x11, 'op2':0, 'op3':0, 'result':False, 'base':True},
-    'TEQ_rr': {'op1':0x13, 'op2':0, 'op3':0, 'result':False, 'base':True},
-    'CMP_rr': {'op1':0x15, 'op2':0, 'op3':0, 'result':False, 'base':True},
-    'CMN_rr': {'op1':0x17, 'op2':0, 'op3':0, 'result':False, 'base':True},
-    'ORR_rr': {'op1':0x18, 'op2':0, 'op3':0, 'result':True, 'base':True},
-    'MOV_rr': {'op1':0x1A, 'op2':0, 'op3':0, 'result':True, 'base':False},
-    'LSL_ri': {'op1':0x1A, 'op2':0x0, 'op3':0, 'op2cond':'!0', 'result':False, 'base':True},
-    'LSR_ri': {'op1':0x1A, 'op2':0, 'op3':0x1, 'op2cond':'', 'result':False, 'base':True},
-    'ASR_ri': {'op1':0x1A, 'op2':0, 'op3':0x2, 'op2cond':'', 'result':False, 'base':True},
-    #'RRX_ri': {'op1':0x1A, 'op2':0, 'op3':0x3, 'op2cond':'0', 'result':False, 'base':True},
-    'ROR_ri': {'op1':0x1A, 'op2':0x0, 'op3':0x3, 'op2cond':'!0', 'result':True, 'base':False},
-    #BIC
-    'MVN_rr': {'op1':0x1E, 'op2':0x0, 'op3':0x0, 'result':True, 'base':False},
+    'AND_rr': {'op1': 0x0, 'op2': 0, 'op3': 0, 'result': True, 'base': True},
+    'EOR_rr': {'op1': 0x2, 'op2': 0, 'op3': 0, 'result': True, 'base': True},
+    'SUB_rr': {'op1': 0x4, 'op2': 0, 'op3': 0, 'result': True, 'base': True},
+    'RSB_rr': {'op1': 0x6, 'op2': 0, 'op3': 0, 'result': True, 'base': True},
+    'ADD_rr': {'op1': 0x8, 'op2': 0, 'op3': 0, 'result': True, 'base': True},
+    'ADC_rr': {'op1': 0xA, 'op2': 0, 'op3': 0, 'result': True, 'base': True},
+    'SBC_rr': {'op1': 0xC, 'op2': 0, 'op3': 0, 'result': True, 'base': True},
+    'RSC_rr': {'op1': 0xE, 'op2': 0, 'op3': 0, 'result': True, 'base': True},
+    'TST_rr': {'op1': 0x11, 'op2': 0, 'op3': 0, 'result': False, 'base': True},
+    'TEQ_rr': {'op1': 0x13, 'op2': 0, 'op3': 0, 'result': False, 'base': True},
+    'CMP_rr': {'op1': 0x15, 'op2': 0, 'op3': 0, 'result': False, 'base': True},
+    'CMN_rr': {'op1': 0x17, 'op2': 0, 'op3': 0, 'result': False, 'base': True},
+    'ORR_rr': {'op1': 0x18, 'op2': 0, 'op3': 0, 'result': True, 'base': True},
+    'MOV_rr': {'op1': 0x1A, 'op2': 0, 'op3': 0, 'result': True, 'base': False},
+    'LSL_ri': {'op1': 0x1A, 'op2': 0x0, 'op3': 0, 'op2cond': '!0',
+               'result': False, 'base': True},
+    'LSR_ri': {'op1': 0x1A, 'op2': 0, 'op3': 0x1, 'op2cond': '',
+               'result': False, 'base': True},
+    'ASR_ri': {'op1': 0x1A, 'op2': 0, 'op3': 0x2, 'op2cond': '',
+               'result': False, 'base': True},
+    'ROR_ri': {'op1': 0x1A, 'op2': 0x0, 'op3': 0x3, 'op2cond': '!0',
+               'result': True, 'base': False},
+    'MVN_rr': {'op1': 0x1E, 'op2': 0x0, 'op3': 0x0, 'result': True,
+               'base': False},
 }

 data_proc_reg_shift_reg = {
-    'AND_rr_sr': {'op1':0x0, 'op2':0},
-    'EOR_rr_sr': {'op1':0x2, 'op2':0},
-    'SUB_rr_sr': {'op1':0x4, 'op2':0},
-    'RSB_rr_sr': {'op1':0x6, 'op2':0},
-    'ADD_rr_sr': {'op1':0x8, 'op2':0},
-    'ADC_rr_sr': {'op1':0xA, 'op2':0},
-    'SBC_rr_sr': {'op1':0xC, 'op2':0},
-    'RSC_rr_sr': {'op1':0xE, 'op2':0},
-    'TST_rr_sr': {'op1':0x11, 'op2':0, 'result': False},
-    'TEQ_rr_sr': {'op1':0x13, 'op2':0, 'result': False},
-    'CMP_rr_sr': {'op1':0x15, 'op2':0, 'result': False},
-    'CMN_rr_sr': {'op1':0x17, 'op2':0, 'result': False},
-    'ORR_rr_sr': {'op1':0x18, 'op2':0},
-    'LSL_rr': {'op1':0x1A, 'op2':0, },
-    'LSR_rr': {'op1':0x1A, 'op2':0x1},
-    'ASR_rr': {'op1':0x1A, 'op2':0x2},
-    #'RRX_rr': {'op1':0x1A, 'op2':0,},
-    'ROR_rr': {'op1':0x1A, 'op2':0x3},
-    # BIC, MVN
+    'AND_rr_sr': {'op1': 0x0, 'op2': 0},
+    'EOR_rr_sr': {'op1': 0x2, 'op2': 0},
+    'SUB_rr_sr': {'op1': 0x4, 'op2': 0},
+    'RSB_rr_sr': {'op1': 0x6, 'op2': 0},
+    'ADD_rr_sr': {'op1': 0x8, 'op2': 0},
+    'ADC_rr_sr': {'op1': 0xA, 'op2': 0},
+    'SBC_rr_sr': {'op1': 0xC, 'op2': 0},
+    'RSC_rr_sr': {'op1': 0xE, 'op2': 0},
+    'TST_rr_sr': {'op1': 0x11, 'op2': 0, 'result': False},
+    'TEQ_rr_sr': {'op1': 0x13, 'op2': 0, 'result': False},
+    'CMP_rr_sr': {'op1': 0x15, 'op2': 0, 'result': False},
+    'CMN_rr_sr': {'op1': 0x17, 'op2': 0, 'result': False},
+    'ORR_rr_sr': {'op1': 0x18, 'op2': 0},
+    'LSL_rr': {'op1': 0x1A, 'op2': 0, },
+    'LSR_rr': {'op1': 0x1A, 'op2': 0x1},
+    'ASR_rr': {'op1': 0x1A, 'op2': 0x2},
+    'ROR_rr': {'op1': 0x1A, 'op2': 0x3},
 }

 data_proc_imm = {
-    'AND_ri': {'op': 0, 'result':True, 'base':True},
-    'EOR_ri': {'op': 0x2, 'result':True, 'base':True},
-    'SUB_ri': {'op': 0x4, 'result':True, 'base':True},
-    'RSB_ri': {'op': 0x6, 'result':True, 'base':True},
-    'ADD_ri': {'op': 0x8, 'result':True, 'base':True},
-    'ADC_ri': {'op': 0xA, 'result':True, 'base':True},
-    'SBC_ri': {'op': 0xC, 'result':True, 'base':True},
-    'RSC_ri': {'op': 0xE, 'result':True, 'base':True},
-    'TST_ri': {'op': 0x11, 'result':False, 'base':True},
-    'TEQ_ri': {'op': 0x13, 'result':False, 'base':True},
-    'CMP_ri': {'op': 0x15, 'result':False, 'base':True},
-    'CMN_ri': {'op': 0x17, 'result':False, 'base':True},
-    'ORR_ri': {'op': 0x18, 'result':True, 'base':True},
-    'MOV_ri': {'op': 0x1A, 'result':True, 'base':False},
-    'BIC_ri': {'op': 0x1C, 'result':True, 'base':True},
-    'MVN_ri': {'op': 0x1E, 'result':True, 'base':False},
+    'AND_ri': {'op': 0, 'result': True, 'base': True},
+    'EOR_ri': {'op': 0x2, 'result': True, 'base': True},
+    'SUB_ri': {'op': 0x4, 'result': True, 'base': True},
+    'RSB_ri': {'op': 0x6, 'result': True, 'base': True},
+    'ADD_ri': {'op': 0x8, 'result': True, 'base': True},
+    'ADC_ri': {'op': 0xA, 'result': True, 'base': True},
+    'SBC_ri': {'op': 0xC, 'result': True, 'base': True},
+    'RSC_ri': {'op': 0xE, 'result': True, 'base': True},
+    'TST_ri': {'op': 0x11, 'result': False, 'base': True},
+    'TEQ_ri': {'op': 0x13, 'result': False, 'base': True},
+    'CMP_ri': {'op': 0x15, 'result': False, 'base': True},
+    'CMN_ri': {'op': 0x17, 'result': False, 'base': True},
+    'ORR_ri': {'op': 0x18, 'result': True, 'base': True},
+    'MOV_ri': {'op': 0x1A, 'result': True, 'base': False},
+    'BIC_ri': {'op': 0x1C, 'result': True, 'base': True},
+    'MVN_ri': {'op': 0x1E, 'result': True, 'base': False},
 }

 supervisor_and_coproc = {
diff --git a/pypy/jit/backend/arm/jump.py b/pypy/jit/backend/arm/jump.py
--- a/pypy/jit/backend/arm/jump.py
+++ b/pypy/jit/backend/arm/jump.py
@@ -1,7 +1,6 @@
 # ../x86/jump.py
 # XXX combine with ../x86/jump.py and move to llsupport
-import sys
-from pypy.tool.pairtype import extendabletype
+

 def remap_frame_layout(assembler, src_locations, dst_locations, tmpreg):
     pending_dests = len(dst_locations)
@@ -18,7 +17,10 @@
             key = src.as_key()
             if key in srccount:
                 if key == dst_locations[i].as_key():
-                    srccount[key] = -sys.maxint     # ignore a move "x = x"
+                    # ignore a move "x = x"
+                    # setting any "large enough" negative value is ok, but
+                    # be careful of overflows, don't use -sys.maxint
+                    srccount[key] = -len(dst_locations) - 1
                     pending_dests -= 1
                 else:
                     srccount[key] += 1
@@ -65,12 +67,14 @@
             assembler.regalloc_pop(dst)
     assert pending_dests == 0

+
 def _move(assembler, src, dst, tmpreg):
     if dst.is_stack() and src.is_stack():
         assembler.regalloc_mov(src, tmpreg)
         src = tmpreg
     assembler.regalloc_mov(src, dst)

+
 def remap_frame_layout_mixed(assembler,
                              src_locations1, dst_locations1, tmpreg1,
                              src_locations2, dst_locations2, tmpreg2):
@@ -84,7 +88,7 @@
     src_locations2red = []
     dst_locations2red = []
     for i in range(len(src_locations2)):
-        loc = src_locations2[i]
+        loc = src_locations2[i]
         dstloc = dst_locations2[i]
         if loc.is_stack():
             key = loc.as_key()
diff --git a/pypy/jit/backend/arm/locations.py b/pypy/jit/backend/arm/locations.py
--- a/pypy/jit/backend/arm/locations.py
+++ b/pypy/jit/backend/arm/locations.py
@@ -1,5 +1,7 @@
-from pypy.jit.metainterp.history import INT, FLOAT, REF
+from pypy.jit.metainterp.history import INT, FLOAT
 from pypy.jit.backend.arm.arch import WORD
+
+
 class AssemblerLocation(object):
     _immutable_ = True
     type = INT
@@ -22,6 +24,7 @@
     def as_key(self):
         raise NotImplementedError

+
 class RegisterLocation(AssemblerLocation):
     _immutable_ = True
     width = WORD
@@ -38,13 +41,15 @@
     def as_key(self):
         return self.value

+
 class VFPRegisterLocation(RegisterLocation):
     _immutable_ = True
-    type = FLOAT
-    width = 2*WORD
+    type = FLOAT
+    width = 2 * WORD

     def get_single_precision_regs(self):
-        return [VFPRegisterLocation(i) for i in [self.value*2, self.value*2+1]]
+        return [VFPRegisterLocation(i) for i in
+                [self.value * 2, self.value * 2 + 1]]
     def __repr__(self):
         return 'vfp%d' % self.value
@@ -58,11 +63,11 @@
     def as_key(self):
         return self.value + 20

+
 class ImmLocation(AssemblerLocation):
     _immutable_ = True
     width = WORD

-
     def __init__(self, value):
         self.value = value
@@ -78,11 +83,12 @@
     def as_key(self):
         return self.value + 40

+
 class ConstFloatLoc(AssemblerLocation):
     """This class represents an imm float value which is stored in memory at
     the address stored in the field value"""
     _immutable_ = True
-    width = 2*WORD
+    width = 2 * WORD
     type = FLOAT

     def __init__(self, value):
@@ -100,6 +106,7 @@
     def as_key(self):
         return -1 * self.value

+
 class StackLocation(AssemblerLocation):
     _immutable_ = True

@@ -123,5 +130,6 @@
     def as_key(self):
         return -self.position

+
 def imm(i):
     return ImmLocation(i)
diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py
--- a/pypy/jit/backend/arm/opassembler.py
+++ b/pypy/jit/backend/arm/opassembler.py
@@ -1,51 +1,48 @@
 from __future__ import with_statement
 from pypy.jit.backend.arm import conditions as c
-from pypy.jit.backend.arm import locations
 from pypy.jit.backend.arm import registers as r
 from pypy.jit.backend.arm import shift
-from pypy.jit.backend.arm.arch import (WORD, FUNC_ALIGN, arm_int_div,
-                                        arm_int_div_sign, arm_int_mod_sign,
-                                        arm_int_mod, PC_OFFSET)
+from pypy.jit.backend.arm.arch import WORD, PC_OFFSET
 from pypy.jit.backend.arm.helper.assembler import (gen_emit_op_by_helper_call,
-                                                    gen_emit_op_unary_cmp,
-                                                    gen_emit_guard_unary_cmp,
-                                                    gen_emit_op_ri,
-                                                    gen_emit_cmp_op,
-                                                    gen_emit_cmp_op_guard,
-                                                    gen_emit_float_op,
-                                                    gen_emit_float_cmp_op,
-                                                    gen_emit_float_cmp_op_guard,
-                                                    gen_emit_unary_float_op,
-                                                    saved_registers,
-                                                    count_reg_args)
+                                                gen_emit_op_unary_cmp,
+                                                gen_emit_guard_unary_cmp,
+                                                gen_emit_op_ri,
+                                                gen_emit_cmp_op,
+                                                gen_emit_cmp_op_guard,
+                                                gen_emit_float_op,
+                                                gen_emit_float_cmp_op,
+                                                gen_emit_float_cmp_op_guard,
+                                                gen_emit_unary_float_op,
+                                                saved_registers,
+                                                count_reg_args)
 from pypy.jit.backend.arm.codebuilder import ARMv7Builder, OverwritingBuilder
 from pypy.jit.backend.arm.jump import remap_frame_layout
-from pypy.jit.backend.arm.regalloc import Regalloc, TempInt, TempPtr
+from pypy.jit.backend.arm.regalloc import TempInt, TempPtr
 from pypy.jit.backend.arm.locations import imm
 from pypy.jit.backend.llsupport import symbolic
-from pypy.jit.backend.llsupport.descr import BaseFieldDescr, BaseArrayDescr
-from pypy.jit.backend.llsupport.regalloc import compute_vars_longevity
-from pypy.jit.metainterp.history import (Const, ConstInt, BoxInt, Box,
-                                        AbstractFailDescr, LoopToken, INT, FLOAT, REF)
+from pypy.jit.metainterp.history import (Box, AbstractFailDescr,
+                                        INT, FLOAT, REF)
+from pypy.jit.metainterp.history import JitCellToken, TargetToken
 from pypy.jit.metainterp.resoperation import rop
-from pypy.rlib import rgc
 from pypy.rlib.objectmodel import we_are_translated
-from pypy.rpython.annlowlevel import llhelper
-from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory
+from pypy.rpython.lltypesystem import lltype, rffi, rstr

 NO_FORCE_INDEX = -1

+
 class GuardToken(object):
-    def __init__(self, descr, failargs, faillocs, offset, fcond=c.AL,
-                 save_exc=False, is_invalidate=False):
+    def __init__(self, descr, failargs, faillocs, offset,
+                 save_exc, fcond=c.AL, is_invalidate=False):
+        assert isinstance(save_exc, bool)
         self.descr = descr
         self.offset = offset
         self.is_invalidate = is_invalidate
         self.failargs = failargs
         self.faillocs = faillocs
         self.save_exc = save_exc
-        self.fcond=fcond
+        self.fcond = fcond

+
 class IntOpAsslember(object):
@@ -95,15 +92,17 @@
     def emit_guard_int_mul_ovf(self, op, guard, arglocs, regalloc, fcond):
         reg1 = arglocs[0]
         reg2 = arglocs[1]
-        res = arglocs[2]
+        res = arglocs[2]
         failargs = arglocs[3:]
-        self.mc.SMULL(res.value, r.ip.value, reg1.value, reg2.value, cond=fcond)
-        self.mc.CMP_rr(r.ip.value, res.value, shifttype=shift.ASR, imm=31, cond=fcond)
+        self.mc.SMULL(res.value, r.ip.value, reg1.value, reg2.value,
+                                                            cond=fcond)
+        self.mc.CMP_rr(r.ip.value, res.value, shifttype=shift.ASR,
+                                                    imm=31, cond=fcond)
         if guard.getopnum() == rop.GUARD_OVERFLOW:
-            fcond = self._emit_guard(guard, failargs, c.NE)
+            fcond = self._emit_guard(guard, failargs, c.NE, save_exc=False)
         elif guard.getopnum() == rop.GUARD_NO_OVERFLOW:
-            fcond = self._emit_guard(guard, failargs, c.EQ)
+            fcond = self._emit_guard(guard, failargs, c.EQ, save_exc=False)
         else:
             assert 0
         return fcond
@@ -112,7 +111,7 @@
         self.emit_op_int_add(op, arglocs[0:3], regalloc, fcond, flags=True)
         self._emit_guard_overflow(guard, arglocs[3:], fcond)
         return fcond
-
+
     def emit_guard_int_sub_ovf(self, op, guard, arglocs, regalloc, fcond):
         self.emit_op_int_sub(op, arglocs[0:3], regalloc, fcond, flags=True)
         self._emit_guard_overflow(guard, arglocs[3:], fcond)
@@ -136,8 +135,6 @@
     emit_op_int_gt = gen_emit_cmp_op('int_gt', c.GT)
     emit_op_int_ge = gen_emit_cmp_op('int_ge', c.GE)

-
-
     emit_guard_int_lt = gen_emit_cmp_op_guard('int_lt', c.LT)
     emit_guard_int_le = gen_emit_cmp_op_guard('int_le', c.LE)
     emit_guard_int_eq = gen_emit_cmp_op_guard('int_eq', c.EQ)
@@ -155,16 +152,15 @@
     emit_guard_uint_lt = gen_emit_cmp_op_guard('uint_lt', c.LO)
     emit_guard_uint_ge = gen_emit_cmp_op_guard('uint_ge', c.HS)

-    emit_op_ptr_eq = emit_op_int_eq
-    emit_op_ptr_ne = emit_op_int_ne
-    emit_guard_ptr_eq = emit_guard_int_eq
-    emit_guard_ptr_ne = emit_guard_int_ne
+    emit_op_ptr_eq = emit_op_instance_ptr_eq = emit_op_int_eq
+    emit_op_ptr_ne = emit_op_instance_ptr_ne = emit_op_int_ne
+    emit_guard_ptr_eq = emit_guard_instance_ptr_eq = emit_guard_int_eq
+    emit_guard_ptr_ne = emit_guard_instance_ptr_ne = emit_guard_int_ne

     emit_op_int_add_ovf = emit_op_int_add
     emit_op_int_sub_ovf = emit_op_int_sub

-
 class UnaryIntOpAssembler(object):
     _mixin_ = True
@@ -186,34 +182,44 @@
         self.mc.RSB_ri(resloc.value, l0.value, imm=0)
         return fcond

+
 class GuardOpAssembler(object):
     _mixin_ = True

-    def _emit_guard(self, op, arglocs, fcond, save_exc=False, is_guard_not_ivalidated=False):
+    def _emit_guard(self, op, arglocs, fcond, save_exc,
+                    is_guard_not_invalidated=False):
+        assert isinstance(save_exc, bool)
+        assert isinstance(fcond, int)
         descr = op.getdescr()
         assert isinstance(descr, AbstractFailDescr)
-        if not we_are_translated() and hasattr(op, 'getfailargs'):
-            print 'Failargs: ', op.getfailargs()
+            print 'Failargs: ', op.getfailargs()
         pos = self.mc.currpos()
-        self.mc.NOP()
+        # For all guards that are not GUARD_NOT_INVALIDATED we emit a
+        # breakpoint to ensure the location is patched correctly. In the case
+        # of GUARD_NOT_INVALIDATED we use just a NOP, because it is only
+        # eventually patched at a later point.
+        if is_guard_not_invalidated:
+            self.mc.NOP()
+        else:
+            self.mc.BKPT()
         self.pending_guards.append(GuardToken(descr,
                                     failargs=op.getfailargs(),
                                     faillocs=arglocs,
                                     offset=pos,
-                                    fcond=fcond,
-                                    is_invalidate=is_guard_not_ivalidated,
-                                    save_exc=save_exc))
+                                    save_exc=save_exc,
+                                    is_invalidate=is_guard_not_invalidated,
+                                    fcond=fcond))
         return c.AL

     def _emit_guard_overflow(self, guard, failargs, fcond):
         if guard.getopnum() == rop.GUARD_OVERFLOW:
-            fcond = self._emit_guard(guard, failargs, c.VS)
+            fcond = self._emit_guard(guard, failargs, c.VS, save_exc=False)
         elif guard.getopnum() == rop.GUARD_NO_OVERFLOW:
-            fcond = self._emit_guard(guard, failargs, c.VC)
+            fcond = self._emit_guard(guard, failargs, c.VC, save_exc=False)
         else:
             assert 0
         return fcond
@@ -222,14 +228,14 @@
         l0 = arglocs[0]
         failargs = arglocs[1:]
         self.mc.CMP_ri(l0.value, 0)
-        fcond = self._emit_guard(op, failargs, c.NE)
+        fcond = self._emit_guard(op, failargs, c.NE, save_exc=False)
         return fcond

     def emit_op_guard_false(self, op, arglocs, regalloc, fcond):
         l0 = arglocs[0]
         failargs = arglocs[1:]
         self.mc.CMP_ri(l0.value, 0)
-        fcond = self._emit_guard(op, failargs, c.EQ)
+        fcond = self._emit_guard(op, failargs, c.EQ, save_exc=False)
         return fcond

     def emit_op_guard_value(self, op, arglocs, regalloc, fcond):
@@ -246,17 +252,17 @@
             assert l1.is_vfp_reg()
             self.mc.VCMP(l0.value, l1.value)
             self.mc.VMRS(cond=fcond)
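[Editor's aside, not part of the patch: the `_emit_guard` change above emits a BKPT placeholder (or a NOP for GUARD_NOT_INVALIDATED) and records the offset in `pending_guards`, so that an unpatched guard traps rather than falling through; the real branch is written in later. A toy sketch of that record-placeholder-then-patch scheme, with hypothetical names that are nothing like the real backend API:]

```python
# Toy model of placeholder-and-patch guard emission: the assembler appends a
# recognizable placeholder byte, remembers its offset, and overwrites it with
# the real branch afterwards.  A BKPT-style placeholder traps if it is ever
# executed before patching; a NOP placeholder falls through harmlessly.
BKPT = 0xBE    # placeholder that traps if reached unpatched
NOP = 0x00     # harmless placeholder (the GUARD_NOT_INVALIDATED case)
BRANCH = 0xEA  # stands in for the final branch instruction

class MiniAssembler:
    def __init__(self):
        self.code = bytearray()
        self.pending_guards = []  # offsets still waiting to be patched

    def emit_guard(self, invalidation_only=False):
        self.pending_guards.append(len(self.code))
        self.code.append(NOP if invalidation_only else BKPT)

    def patch_pending_guards(self):
        for offset in self.pending_guards:
            self.code[offset] = BRANCH  # overwrite the placeholder in place
        self.pending_guards = []

asm = MiniAssembler()
asm.emit_guard()
assert asm.code[0] == BKPT      # placeholder visible before patching
asm.patch_pending_guards()
assert asm.code[0] == BRANCH    # placeholder replaced by the real branch
```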
- fcond = self._emit_guard(op, failargs, c.EQ) + fcond = self._emit_guard(op, failargs, c.EQ, save_exc=False) return fcond emit_op_guard_nonnull = emit_op_guard_true emit_op_guard_isnull = emit_op_guard_false def emit_op_guard_no_overflow(self, op, arglocs, regalloc, fcond): - return self._emit_guard(op, arglocs, c.VC) + return self._emit_guard(op, arglocs, c.VC, save_exc=False) def emit_op_guard_overflow(self, op, arglocs, regalloc, fcond): - return self._emit_guard(op, arglocs, c.VS) + return self._emit_guard(op, arglocs, c.VS, save_exc=False) # from ../x86/assembler.py:1265 def emit_op_guard_class(self, op, arglocs, regalloc, fcond): @@ -268,14 +274,15 @@ self.mc.CMP_ri(arglocs[0].value, 0) if offset is not None: - self._emit_guard(op, arglocs[3:], c.NE) + self._emit_guard(op, arglocs[3:], c.NE, save_exc=False) else: raise NotImplementedError self._cmp_guard_class(op, arglocs, regalloc, fcond) return fcond def emit_op_guard_not_invalidated(self, op, locs, regalloc, fcond): - return self._emit_guard(op, locs, fcond, is_guard_not_ivalidated=True) + return self._emit_guard(op, locs, fcond, save_exc=False, + is_guard_not_invalidated=True) def _cmp_guard_class(self, op, locs, regalloc, fcond): offset = locs[2] @@ -290,7 +297,7 @@ raise NotImplementedError # XXX port from x86 backend once gc support is in place - return self._emit_guard(op, locs[3:], c.EQ) + return self._emit_guard(op, locs[3:], c.EQ, save_exc=False) class OpAssembler(object): @@ -298,37 +305,86 @@ _mixin_ = True def emit_op_jump(self, op, arglocs, regalloc, fcond): + # The backend's logic assumes that the target code is in a piece of + # assembler that was also called with the same number of arguments, + # so that the locations [ebp+8..] of the input arguments are valid + # stack locations both before and after the jump. 
+ # descr = op.getdescr() - assert isinstance(descr, LoopToken) + assert isinstance(descr, TargetToken) assert fcond == c.AL + my_nbargs = self.current_clt._debug_nbargs + target_nbargs = descr._arm_clt._debug_nbargs + assert my_nbargs == target_nbargs self._insert_checks() - if descr._arm_bootstrap_code == 0: + if descr in self.target_tokens_currently_compiling: self.mc.B_offs(descr._arm_loop_code, fcond) else: - target = descr._arm_bootstrap_code + descr._arm_loop_code - self.mc.B(target, fcond) - new_fd = max(regalloc.frame_manager.frame_depth, descr._arm_frame_depth) - regalloc.frame_manager.frame_depth = new_fd + self.mc.B(descr._arm_loop_code, fcond) return fcond def emit_op_finish(self, op, arglocs, regalloc, fcond): - self._gen_path_to_exit_path(op.getdescr(), op.getarglist(), arglocs, c.AL) + for i in range(len(arglocs) - 1): + loc = arglocs[i] + box = op.getarg(i) + if loc is None: + continue + if loc.is_reg(): + if box.type == REF: + adr = self.fail_boxes_ptr.get_addr_for_num(i) + elif box.type == INT: + adr = self.fail_boxes_int.get_addr_for_num(i) + else: + assert 0 + self.mc.gen_load_int(r.ip.value, adr) + self.mc.STR_ri(loc.value, r.ip.value) + elif loc.is_vfp_reg(): + assert box.type == FLOAT + adr = self.fail_boxes_float.get_addr_for_num(i) + self.mc.gen_load_int(r.ip.value, adr) + self.mc.VSTR(loc.value, r.ip.value) + elif loc.is_stack() or loc.is_imm() or loc.is_imm_float(): + if box.type == FLOAT: + adr = self.fail_boxes_float.get_addr_for_num(i) + self.mov_loc_loc(loc, r.vfp_ip) + self.mc.gen_load_int(r.ip.value, adr) + self.mc.VSTR(r.vfp_ip.value, r.ip.value) + elif box.type == REF or box.type == INT: + if box.type == REF: + adr = self.fail_boxes_ptr.get_addr_for_num(i) + elif box.type == INT: + adr = self.fail_boxes_int.get_addr_for_num(i) + else: + assert 0 + self.mov_loc_loc(loc, r.ip) + self.mc.gen_load_int(r.lr.value, adr) + self.mc.STR_ri(r.ip.value, r.lr.value) + else: + assert 0 + # note: no exception should currently be set in 
llop.get_exception_addr + # even if this finish may be an exit_frame_with_exception (in this case + # the exception instance is in arglocs[0]). + addr = self.cpu.get_on_leave_jitted_int(save_exception=False) + self.mc.BL(addr) + self.mc.gen_load_int(r.r0.value, arglocs[-1].value) + self.gen_func_epilog() return fcond - def emit_op_call(self, op, args, regalloc, fcond, force_index=-1): + def emit_op_call(self, op, args, regalloc, fcond, + force_index=NO_FORCE_INDEX): adr = args[0].value arglist = op.getarglist()[1:] - if force_index == -1: + if force_index == NO_FORCE_INDEX: force_index = self.write_new_force_index() - cond = self._emit_call(force_index, adr, arglist, + cond = self._emit_call(force_index, adr, arglist, regalloc, fcond, op.result) descr = op.getdescr() #XXX Hack, Hack, Hack - if op.result and not we_are_translated() and not isinstance(descr, LoopToken): + if (op.result and not we_are_translated()): #XXX check result type loc = regalloc.rm.call_result_location(op.result) - size = descr.get_result_size(False) + size = descr.get_result_size() signed = descr.is_result_signed() self._ensure_result_bit_extension(loc, size, signed) return cond @@ -336,11 +392,12 @@ # XXX improve this interface # emit_op_call_may_force # XXX improve freeing of stuff here - def _emit_call(self, force_index, adr, args, regalloc, fcond=c.AL, result=None): + # XXX add an interface that takes locations instead of boxes + def _emit_call(self, force_index, adr, args, regalloc, fcond=c.AL, + result=None): n_args = len(args) reg_args = count_reg_args(args) - # all arguments past the 4th go on the stack n = 0 # used to count the number of words pushed on the stack, so we #can later modify the SP back to its original value @@ -365,15 +422,15 @@ stack_args.append(None) #then we push every thing on the stack - for i in range(len(stack_args) -1, -1, -1): + for i in range(len(stack_args) - 1, -1, -1): arg = stack_args[i] if arg is None: self.mc.PUSH([r.ip.value]) else: 
self.regalloc_push(regalloc.loc(arg)) - # collect variables that need to go in registers - # and the registers they will be stored in + # collect variables that need to go in registers and the registers they + # will be stored in num = 0 count = 0 non_float_locs = [] @@ -405,7 +462,7 @@ remap_frame_layout(self, non_float_locs, non_float_regs, r.ip) for loc, reg in float_locs: - self.mov_from_vfp_loc(loc, reg, r.all_regs[reg.value+1]) + self.mov_from_vfp_loc(loc, reg, r.all_regs[reg.value + 1]) #the actual call self.mc.BL(adr) @@ -448,7 +505,7 @@ self.mc.CMP_rr(r.ip.value, loc.value) self._emit_guard(op, failargs, c.EQ, save_exc=True) - self.mc.gen_load_int(loc.value, pos_exc_value.value, fcond) + self.mc.gen_load_int(loc.value, pos_exc_value.value) if resloc: self.mc.LDR_ri(resloc.value, loc.value) self.mc.MOV_ri(r.ip.value, 0) @@ -494,7 +551,7 @@ self.mc.TST_ri(r.ip.value, imm=ofs) jz_location = self.mc.currpos() - self.mc.NOP() + self.mc.BKPT() # the following is supposed to be the slow path, so whenever possible # we choose the most compact encoding over the most efficient one. @@ -505,9 +562,8 @@ callargs = [r.r0, r.r1, r.r2] remap_frame_layout(self, arglocs, callargs, r.ip) func = rffi.cast(lltype.Signed, addr) - # - # misaligned stack in the call, but it's ok because the write barrier - # is not going to call anything more. + # misaligned stack in the call, but it's ok because the write + # barrier is not going to call anything more. 
self.mc.BL(func) # patch the JZ above @@ -518,6 +574,7 @@ emit_op_cond_call_gc_wb_array = emit_op_cond_call_gc_wb + class FieldOpAssembler(object): _mixin_ = True @@ -600,7 +657,8 @@ emit_op_getfield_gc_pure = emit_op_getfield_gc def emit_op_getinteriorfield_gc(self, op, arglocs, regalloc, fcond): - base_loc, index_loc, res_loc, ofs_loc, ofs, itemsize, fieldsize = arglocs + (base_loc, index_loc, res_loc, + ofs_loc, ofs, itemsize, fieldsize) = arglocs self.mc.gen_load_int(r.ip.value, itemsize.value) self.mc.MUL(r.ip.value, index_loc.value, r.ip.value) if ofs.value > 0: @@ -632,7 +690,8 @@ return fcond def emit_op_setinteriorfield_gc(self, op, arglocs, regalloc, fcond): - base_loc, index_loc, value_loc, ofs_loc, ofs, itemsize, fieldsize = arglocs + (base_loc, index_loc, value_loc, + ofs_loc, ofs, itemsize, fieldsize) = arglocs self.mc.gen_load_int(r.ip.value, itemsize.value) self.mc.MUL(r.ip.value, index_loc.value, r.ip.value) if ofs.value > 0: @@ -658,8 +717,6 @@ return fcond - - class ArrayOpAssember(object): _mixin_ = True @@ -678,7 +735,7 @@ else: scale_loc = ofs_loc - # add the base offset + # add the base offset if ofs.value > 0: self.mc.ADD_ri(r.ip.value, scale_loc.value, imm=ofs.value) scale_loc = r.ip @@ -689,11 +746,14 @@ self.mc.ADD_rr(r.ip.value, base_loc.value, scale_loc.value) self.mc.VSTR(value_loc.value, r.ip.value, cond=fcond) elif scale.value == 2: - self.mc.STR_rr(value_loc.value, base_loc.value, scale_loc.value, cond=fcond) + self.mc.STR_rr(value_loc.value, base_loc.value, scale_loc.value, + cond=fcond) elif scale.value == 1: - self.mc.STRH_rr(value_loc.value, base_loc.value, scale_loc.value, cond=fcond) + self.mc.STRH_rr(value_loc.value, base_loc.value, scale_loc.value, + cond=fcond) elif scale.value == 0: - self.mc.STRB_rr(value_loc.value, base_loc.value, scale_loc.value, cond=fcond) + self.mc.STRB_rr(value_loc.value, base_loc.value, scale_loc.value, + cond=fcond) else: assert 0 return fcond @@ -709,7 +769,7 @@ else: scale_loc = ofs_loc - # add 
the base offset + # add the base offset if ofs.value > 0: self.mc.ADD_ri(r.ip.value, scale_loc.value, imm=ofs.value) scale_loc = r.ip @@ -720,18 +780,21 @@ self.mc.ADD_rr(r.ip.value, base_loc.value, scale_loc.value) self.mc.VLDR(res.value, r.ip.value, cond=fcond) elif scale.value == 2: - self.mc.LDR_rr(res.value, base_loc.value, scale_loc.value, cond=fcond) + self.mc.LDR_rr(res.value, base_loc.value, scale_loc.value, + cond=fcond) elif scale.value == 1: - self.mc.LDRH_rr(res.value, base_loc.value, scale_loc.value, cond=fcond) + self.mc.LDRH_rr(res.value, base_loc.value, scale_loc.value, + cond=fcond) elif scale.value == 0: - self.mc.LDRB_rr(res.value, base_loc.value, scale_loc.value, cond=fcond) + self.mc.LDRB_rr(res.value, base_loc.value, scale_loc.value, + cond=fcond) else: assert 0 #XXX Hack, Hack, Hack if not we_are_translated(): descr = op.getdescr() - size = descr.get_item_size(False) + size = descr.itemsize signed = descr.is_item_signed() self._ensure_result_bit_extension(res, size, signed) return fcond @@ -755,9 +818,11 @@ def emit_op_strgetitem(self, op, arglocs, regalloc, fcond): res, base_loc, ofs_loc, basesize = arglocs if ofs_loc.is_imm(): - self.mc.ADD_ri(r.ip.value, base_loc.value, ofs_loc.getint(), cond=fcond) + self.mc.ADD_ri(r.ip.value, base_loc.value, ofs_loc.getint(), + cond=fcond) else: - self.mc.ADD_rr(r.ip.value, base_loc.value, ofs_loc.value, cond=fcond) + self.mc.ADD_rr(r.ip.value, base_loc.value, ofs_loc.value, + cond=fcond) self.mc.LDRB_ri(res.value, r.ip.value, basesize.value, cond=fcond) return fcond @@ -765,11 +830,14 @@ def emit_op_strsetitem(self, op, arglocs, regalloc, fcond): value_loc, base_loc, ofs_loc, basesize = arglocs if ofs_loc.is_imm(): - self.mc.ADD_ri(r.ip.value, base_loc.value, ofs_loc.getint(), cond=fcond) + self.mc.ADD_ri(r.ip.value, base_loc.value, ofs_loc.getint(), + cond=fcond) else: - self.mc.ADD_rr(r.ip.value, base_loc.value, ofs_loc.value, cond=fcond) + self.mc.ADD_rr(r.ip.value, base_loc.value, ofs_loc.value, + 
cond=fcond) - self.mc.STRB_ri(value_loc.value, r.ip.value, basesize.value, cond=fcond) + self.mc.STRB_ri(value_loc.value, r.ip.value, basesize.value, + cond=fcond) return fcond #from ../x86/regalloc.py:928 ff. @@ -785,71 +853,78 @@ def _emit_copystrcontent(self, op, regalloc, fcond, is_unicode): # compute the source address - args = list(op.getarglist()) - base_loc, box = regalloc._ensure_value_is_boxed(args[0], args) - args.append(box) - ofs_loc, box = regalloc._ensure_value_is_boxed(args[2], args) - args.append(box) + args = op.getarglist() + base_loc = regalloc._ensure_value_is_boxed(args[0], args) + ofs_loc = regalloc._ensure_value_is_boxed(args[2], args) assert args[0] is not args[1] # forbidden case of aliasing regalloc.possibly_free_var(args[0]) + regalloc.free_temp_vars() if args[3] is not args[2] is not args[4]: # MESS MESS MESS: don't free - regalloc.possibly_free_var(args[2]) # it if ==args[3] or args[4] + regalloc.possibly_free_var(args[2]) # it if ==args[3] or args[4] + regalloc.free_temp_vars() srcaddr_box = TempPtr() forbidden_vars = [args[1], args[3], args[4], srcaddr_box] - srcaddr_loc = regalloc.force_allocate_reg(srcaddr_box, selected_reg=r.r1) + srcaddr_loc = regalloc.force_allocate_reg(srcaddr_box, + selected_reg=r.r1) self._gen_address_inside_string(base_loc, ofs_loc, srcaddr_loc, is_unicode=is_unicode) # compute the destination address forbidden_vars = [args[4], args[3], srcaddr_box] dstaddr_box = TempPtr() - dstaddr_loc = regalloc.force_allocate_reg(dstaddr_box, selected_reg=r.r0) + dstaddr_loc = regalloc.force_allocate_reg(dstaddr_box, + selected_reg=r.r0) forbidden_vars.append(dstaddr_box) - base_loc, box = regalloc._ensure_value_is_boxed(args[1], forbidden_vars) - args.append(box) - forbidden_vars.append(box) - ofs_loc, box = regalloc._ensure_value_is_boxed(args[3], forbidden_vars) - args.append(box) + base_loc = regalloc._ensure_value_is_boxed(args[1], forbidden_vars) + ofs_loc = regalloc._ensure_value_is_boxed(args[3], forbidden_vars) 
assert base_loc.is_reg() assert ofs_loc.is_reg() regalloc.possibly_free_var(args[1]) if args[3] is not args[4]: # more of the MESS described above regalloc.possibly_free_var(args[3]) + regalloc.free_temp_vars() self._gen_address_inside_string(base_loc, ofs_loc, dstaddr_loc, is_unicode=is_unicode) # compute the length in bytes forbidden_vars = [srcaddr_box, dstaddr_box] - length_loc, length_box = regalloc._ensure_value_is_boxed(args[4], forbidden_vars) - args.append(length_box) + # XXX basically duplicates regalloc.ensure_value_is_boxed, but we + # need the box here + if isinstance(args[4], Box): + length_box = args[4] + length_loc = regalloc.make_sure_var_in_reg(args[4], forbidden_vars) + else: + length_box = TempInt() + length_loc = regalloc.force_allocate_reg(length_box, + forbidden_vars, selected_reg=r.r2) + imm = regalloc.convert_to_imm(args[4]) + self.load(length_loc, imm) if is_unicode: - forbidden_vars = [srcaddr_box, dstaddr_box] bytes_box = TempPtr() - bytes_loc = regalloc.force_allocate_reg(bytes_box, forbidden_vars) + bytes_loc = regalloc.force_allocate_reg(bytes_box, + forbidden_vars, selected_reg=r.r2) scale = self._get_unicode_item_scale() assert length_loc.is_reg() - self.mc.MOV_ri(r.ip.value, 1<= 0: - from pypy.jit.backend.llsupport.descr import BaseFieldDescr + from pypy.jit.backend.llsupport.descr import FieldDescr fielddescr = jd.vable_token_descr - assert isinstance(fielddescr, BaseFieldDescr) + assert isinstance(fielddescr, FieldDescr) ofs = fielddescr.offset resloc = regalloc.force_allocate_reg(resbox) self.mov_loc_loc(arglocs[1], r.ip) @@ -1024,26 +1105,31 @@ self.mc.LDR_ri(r.ip.value, r.fp.value) self.mc.CMP_ri(r.ip.value, 0) - self._emit_guard(guard_op, regalloc._prepare_guard(guard_op), c.GE) + self._emit_guard(guard_op, regalloc._prepare_guard(guard_op), + c.GE, save_exc=True) return fcond - # ../x86/assembler.py:668 def redirect_call_assembler(self, oldlooptoken, newlooptoken): - # we overwrite the instructions at the old 
_x86_direct_bootstrap_code - # to start with a JMP to the new _arm_direct_bootstrap_code. + # some minimal sanity checking + old_nbargs = oldlooptoken.compiled_loop_token._debug_nbargs + new_nbargs = newlooptoken.compiled_loop_token._debug_nbargs + assert old_nbargs == new_nbargs + # we overwrite the instructions at the old _arm_func_adddr + # to start with a JMP to the new _arm_func_addr. # Ideally we should rather patch all existing CALLs, but well. - oldadr = oldlooptoken._arm_direct_bootstrap_code - target = newlooptoken._arm_direct_bootstrap_code + oldadr = oldlooptoken._arm_func_addr + target = newlooptoken._arm_func_addr mc = ARMv7Builder() mc.B(target) mc.copy_to_raw_memory(oldadr) - def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc, fcond): + def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc, + fcond): self.mc.LDR_ri(r.ip.value, r.fp.value) self.mc.CMP_ri(r.ip.value, 0) - self._emit_guard(guard_op, arglocs, c.GE) + self._emit_guard(guard_op, arglocs, c.GE, save_exc=True) return fcond emit_guard_call_release_gil = emit_guard_call_may_force @@ -1058,21 +1144,26 @@ regs_to_save.append(reg) assert gcrootmap.is_shadow_stack with saved_registers(self.mc, regs_to_save): - self._emit_call(-1, self.releasegil_addr, [], self._regalloc, fcond) + self._emit_call(NO_FORCE_INDEX, self.releasegil_addr, [], + self._regalloc, fcond) def call_reacquire_gil(self, gcrootmap, save_loc, fcond): # save the previous result into the stack temporarily. # XXX like with call_release_gil(), we assume that we don't need - # to save vfp regs in this case. + # to save vfp regs in this case. 
Besides the result location regs_to_save = [] + vfp_regs_to_save = [] if save_loc.is_reg(): regs_to_save.append(save_loc) + if save_loc.is_vfp_reg(): + vfp_regs_to_save.append(save_loc) # call the reopenstack() function (also reacquiring the GIL) - if len(regs_to_save) == 1: - regs_to_save.append(r.ip) # for alingment + if len(regs_to_save) % 2 != 1: + regs_to_save.append(r.ip) # for alingment assert gcrootmap.is_shadow_stack - with saved_registers(self.mc, regs_to_save): - self._emit_call(-1, self.reacqgil_addr, [], self._regalloc, fcond) + with saved_registers(self.mc, regs_to_save, vfp_regs_to_save): + self._emit_call(NO_FORCE_INDEX, self.reacqgil_addr, [], + self._regalloc, fcond) def write_new_force_index(self): # for shadowstack only: get a new, unused force_index number and @@ -1092,54 +1183,16 @@ self.mc.gen_load_int(r.ip.value, fail_index) self.mc.STR_ri(r.ip.value, r.fp.value) + class AllocOpAssembler(object): _mixin_ = True - - # from: ../x86/regalloc.py:750 - # called from regalloc - # XXX kill this function at some point - def _regalloc_malloc_varsize(self, size, size_box, vloc, vbox, ofs_items_loc, regalloc, result): - self.mc.MUL(size.value, size.value, vloc.value) - if ofs_items_loc.is_imm(): - self.mc.ADD_ri(size.value, size.value, ofs_items_loc.value) - else: - self.mc.ADD_rr(size.value, size.value, ofs_items_loc.value) - force_index = self.write_new_force_index() - regalloc.force_spill_var(vbox) - self._emit_call(force_index, self.malloc_func_addr, [size_box], regalloc, - result=result) - - def emit_op_new(self, op, arglocs, regalloc, fcond): + def emit_op_call_malloc_gc(self, op, arglocs, regalloc, fcond): + self.emit_op_call(op, arglocs, regalloc, fcond) self.propagate_memoryerror_if_r0_is_null() return fcond - def emit_op_new_with_vtable(self, op, arglocs, regalloc, fcond): - classint = arglocs[0].value - self.set_vtable(op.result, classint) - return fcond - - def set_vtable(self, box, vtable): - if self.cpu.vtable_offset is not None: - adr = 
rffi.cast(lltype.Signed, vtable) - self.mc.gen_load_int(r.ip.value, adr) - self.mc.STR_ri(r.ip.value, r.r0.value, self.cpu.vtable_offset) - - def set_new_array_length(self, loc, ofs_length, loc_num_elem): - assert loc.is_reg() - self.mc.gen_load_int(r.ip.value, loc_num_elem) - self.mc.STR_ri(r.ip.value, loc.value, imm=ofs_length) - - def emit_op_new_array(self, op, arglocs, regalloc, fcond): - self.propagate_memoryerror_if_r0_is_null() - if len(arglocs) > 0: - value_loc, base_loc, ofs_length = arglocs - self.mc.STR_ri(value_loc.value, base_loc.value, ofs_length.value) - return fcond - - emit_op_newstr = emit_op_new_array - emit_op_newunicode = emit_op_new_array class FloatOpAssemlber(object): _mixin_ = True @@ -1182,6 +1235,7 @@ self.mc.VCVT_int_to_float(res.value, temp.value) return fcond + class ResOpAssembler(GuardOpAssembler, IntOpAsslember, OpAssembler, UnaryIntOpAssembler, FieldOpAssembler, ArrayOpAssember, @@ -1189,4 +1243,3 @@ ForceOpAssembler, AllocOpAssembler, FloatOpAssemlber): pass - diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -1,5 +1,5 @@ from pypy.jit.backend.llsupport.regalloc import FrameManager, \ - RegisterManager, compute_vars_longevity, TempBox, compute_loop_consts + RegisterManager, TempBox, compute_vars_longevity from pypy.jit.backend.arm import registers as r from pypy.jit.backend.arm import locations from pypy.jit.backend.arm.locations import imm @@ -8,22 +8,30 @@ prepare_op_ri, prepare_cmp_op, prepare_float_op, - _check_imm_arg) + check_imm_arg, + check_imm_box + ) from pypy.jit.backend.arm.jump import remap_frame_layout_mixed -from pypy.jit.backend.arm.arch import MY_COPY_OF_REGS, WORD +from pypy.jit.backend.arm.arch import MY_COPY_OF_REGS +from pypy.jit.backend.arm.arch import WORD, N_REGISTERS_SAVED_BY_MALLOC from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import (Const, ConstInt, ConstFloat, 
ConstPtr, - Box, BoxInt, BoxPtr, AbstractFailDescr, - INT, REF, FLOAT, LoopToken) + Box, BoxPtr, + INT, REF, FLOAT) +from pypy.jit.metainterp.history import JitCellToken, TargetToken from pypy.jit.metainterp.resoperation import rop -from pypy.jit.backend.llsupport.descr import BaseFieldDescr, BaseArrayDescr, \ - BaseCallDescr, BaseSizeDescr, \ - InteriorFieldDescr +from pypy.jit.backend.llsupport.descr import ArrayDescr from pypy.jit.backend.llsupport import symbolic -from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory -from pypy.jit.codewriter import heaptracker +from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.rlib.objectmodel import we_are_translated +from pypy.jit.backend.llsupport.descr import unpack_arraydescr +from pypy.jit.backend.llsupport.descr import unpack_fielddescr +from pypy.jit.backend.llsupport.descr import unpack_interiorfielddescr + + +# xxx hack: set a default value for TargetToken._arm_loop_code. If 0, we know +# that it is a LABEL that was not compiled yet. +TargetToken._arm_loop_code = 0 class TempInt(TempBox): type = INT @@ -31,27 +39,40 @@ def __repr__(self): return "" % (id(self),) + class TempPtr(TempBox): type = REF def __repr__(self): return "" % (id(self),) + class TempFloat(TempBox): type = FLOAT def __repr__(self): return "" % (id(self),) + class ARMFrameManager(FrameManager): + def __init__(self): FrameManager.__init__(self) - self.frame_depth = 1 + self.used = [True] # keep first slot free + # XXX refactor frame to avoid this issue of keeping the first slot + # reserved + @staticmethod def frame_pos(loc, type): num_words = ARMFrameManager.frame_size(type) if type == FLOAT: - return locations.StackLocation(loc+1, num_words=num_words, type=type) + if loc > 0: + # Make sure that loc is an even value + # the frame layout requires loc to be even if it is a spilled + # value!! 
+ assert (loc & 1) == 0 + return locations.StackLocation(loc + 1, + num_words=num_words, type=type) return locations.StackLocation(loc, num_words=num_words, type=type) @staticmethod @@ -60,9 +81,19 @@ return 2 return 1 + @staticmethod + def get_loc_index(loc): + assert loc.is_stack() + if loc.type == FLOAT: + return loc.position - 1 + else: + return loc.position + + def void(self, op, fcond): return [] + class VFPRegisterManager(RegisterManager): all_regs = r.all_vfp_regs box_types = [FLOAT] @@ -84,10 +115,33 @@ self._check_type(v) r = self.force_allocate_reg(v) return r -class ARMv7RegisterMananger(RegisterManager): - all_regs = r.all_regs - box_types = None # or a list of acceptable types - no_lower_byte_regs = all_regs + + def ensure_value_is_boxed(self, thing, forbidden_vars=[]): + loc = None + if isinstance(thing, Const): + assert isinstance(thing, ConstFloat) + loc = self.get_scratch_reg(FLOAT, self.temp_boxes + forbidden_vars) + immvalue = self.convert_to_imm(thing) + self.assembler.load(loc, immvalue) + else: + loc = self.make_sure_var_in_reg(thing, + forbidden_vars=self.temp_boxes + forbidden_vars) + return loc + + def get_scratch_reg(self, type=FLOAT, forbidden_vars=[], + selected_reg=None): + assert type == FLOAT # for now + box = TempFloat() + self.temp_boxes.append(box) + reg = self.force_allocate_reg(box, forbidden_vars=forbidden_vars, + selected_reg=selected_reg) + return reg + + +class ARMv7RegisterManager(RegisterManager): + all_regs = r.all_regs + box_types = None # or a list of acceptable types + no_lower_byte_regs = all_regs save_around_call_regs = r.caller_resp REGLOC_TO_COPY_AREA_OFS = { @@ -110,27 +164,53 @@ def convert_to_imm(self, c): if isinstance(c, ConstInt): - return locations.ImmLocation(c.value) + val = rffi.cast(rffi.INT, c.value) + return locations.ImmLocation(val) else: assert isinstance(c, ConstPtr) return locations.ImmLocation(rffi.cast(lltype.Signed, c.value)) - + assert 0 + + def ensure_value_is_boxed(self, thing, 
forbidden_vars=None): + loc = None + if isinstance(thing, Const): + if isinstance(thing, ConstPtr): + tp = REF + else: + tp = INT + loc = self.get_scratch_reg(tp, forbidden_vars=self.temp_boxes + + forbidden_vars) + immvalue = self.convert_to_imm(thing) + self.assembler.load(loc, immvalue) + else: + loc = self.make_sure_var_in_reg(thing, + forbidden_vars=forbidden_vars) + return loc + + def get_scratch_reg(self, type=INT, forbidden_vars=[], selected_reg=None): + assert type == INT or type == REF + box = TempBox() + self.temp_boxes.append(box) + reg = self.force_allocate_reg(box, forbidden_vars=forbidden_vars, + selected_reg=selected_reg) + return reg + + class Regalloc(object): - def __init__(self, longevity, frame_manager=None, assembler=None): + def __init__(self, frame_manager=None, assembler=None): self.cpu = assembler.cpu - self.longevity = longevity + self.assembler = assembler self.frame_manager = frame_manager - self.assembler = assembler - self.vfprm = VFPRegisterManager(longevity, frame_manager, assembler) - self.rm = ARMv7RegisterMananger(longevity, frame_manager, assembler) + self.jump_target_descr = None + self.final_jump_op = None def loc(self, var): if var.type == FLOAT: return self.vfprm.loc(var) else: return self.rm.loc(var) - + def position(self): return self.rm.position @@ -168,9 +248,11 @@ else: return self.rm.force_allocate_reg(var, forbidden_vars, selected_reg, need_lower_byte) + def try_allocate_reg(self, v, selected_reg=None, need_lower_byte=False): if v.type == FLOAT: - return self.vfprm.try_allocate_reg(v, selected_reg, need_lower_byte) + return self.vfprm.try_allocate_reg(v, selected_reg, + need_lower_byte) else: return self.rm.try_allocate_reg(v, selected_reg, need_lower_byte) @@ -183,14 +265,25 @@ def possibly_free_vars_for_op(self, op): for i in range(op.numargs()): var = op.getarg(i) - if var is not None: # xxx kludgy + if var is not None: # xxx kludgy self.possibly_free_var(var) def possibly_free_vars(self, vars): for var in vars: - 
if var is not None: # xxx kludgy + if var is not None: # xxx kludgy self.possibly_free_var(var) + def get_scratch_reg(self, type, forbidden_vars=[], selected_reg=None): + if type == FLOAT: + return self.vfprm.get_scratch_reg(type, forbidden_vars, + selected_reg) + else: + return self.rm.get_scratch_reg(type, forbidden_vars, selected_reg) + + def free_temp_vars(self): + self.rm.free_temp_vars() + self.vfprm.free_temp_vars() + def make_sure_var_in_reg(self, var, forbidden_vars=[], selected_reg=None, need_lower_byte=False): if var.type == FLOAT: @@ -207,31 +300,68 @@ assert isinstance(value, ConstFloat) return self.vfprm.convert_to_imm(value) - def prepare_loop(self, inputargs, operations, looptoken): - loop_consts = compute_loop_consts(inputargs, operations[-1], looptoken) - floatlocs = [None] * len(inputargs) - nonfloatlocs = [None] * len(inputargs) - for i in range(len(inputargs)): - arg = inputargs[i] - assert not isinstance(arg, Const) - reg = None - loc = inputargs[i] - if arg not in loop_consts and self.longevity[arg][1] > -1: - reg = self.try_allocate_reg(loc) + def _prepare(self, inputargs, operations): + longevity, last_real_usage = compute_vars_longevity( + inputargs, operations) + self.longevity = longevity + self.last_real_usage = last_real_usage + fm = self.frame_manager + asm = self.assembler + self.vfprm = VFPRegisterManager(longevity, fm, asm) + self.rm = ARMv7RegisterManager(longevity, fm, asm) - loc = self.loc(arg) - if arg.type == FLOAT: - floatlocs[i] = loc + def prepare_loop(self, inputargs, operations): + self._prepare(inputargs, operations) + self._set_initial_bindings(inputargs) + self.possibly_free_vars(list(inputargs)) + + def prepare_bridge(self, inputargs, arglocs, ops): + self._prepare(inputargs, ops) + self._update_bindings(arglocs, inputargs) + + def _set_initial_bindings(self, inputargs): + # The first inputargs are passed in registers r0-r3 + # we relly on the soft-float calling convention so we need to move + # float params to the 
coprocessor.
+
+        arg_index = 0
+        count = 0
+        n_register_args = len(r.argument_regs)
+        cur_frame_pos = - (self.assembler.STACK_FIXED_AREA / WORD) + 1
+        for box in inputargs:
+            assert isinstance(box, Box)
+            # handle inputargs in argument registers
+            if box.type == FLOAT and arg_index % 2 != 0:
+                arg_index += 1  # align argument index for float passed
+                                # in register
+            if arg_index < n_register_args:
+                if box.type == FLOAT:
+                    loc = r.argument_regs[arg_index]
+                    loc2 = r.argument_regs[arg_index + 1]
+                    vfpreg = self.try_allocate_reg(box)
+                    # move soft-float argument to vfp
+                    self.assembler.mov_to_vfp_loc(loc, loc2, vfpreg)
+                    arg_index += 2  # this argument used to argument registers
+                else:
+                    loc = r.argument_regs[arg_index]
+                    self.try_allocate_reg(box, selected_reg=loc)
+                    arg_index += 1
             else:
-                nonfloatlocs[i] = loc
-        self.possibly_free_vars(list(inputargs))
-
-        return nonfloatlocs, floatlocs
+                # treat stack args as stack locations with a negative offset
+                if box.type == FLOAT:
+                    cur_frame_pos -= 2
+                    if count % 2 != 0: # Stack argument alignment
+                        cur_frame_pos -= 1
+                        count = 0
+                else:
+                    cur_frame_pos -= 1
+                    count += 1
+                loc = self.frame_manager.frame_pos(cur_frame_pos, box.type)
+                self.frame_manager.set_binding(box, loc)

-    def update_bindings(self, locs, frame_depth, inputargs):
+    def _update_bindings(self, locs, inputargs):
         used = {}
         i = 0
-        self.frame_manager.frame_depth = frame_depth
         for loc in locs:
             arg = inputargs[i]
             i += 1
@@ -241,7 +371,7 @@
                 self.vfprm.reg_bindings[arg] = loc
             else:
                 assert loc.is_stack()
-                self.frame_manager.frame_bindings[arg] = loc
+                self.frame_manager.set_binding(arg, loc)
             used[loc] = None

         # XXX combine with x86 code and move to llsupport
@@ -257,7 +387,6 @@
         # is also used on op args, which is a non-resizable list
         self.possibly_free_vars(list(inputargs))

-
     def force_spill_var(self, var):
         if var.type == FLOAT:
             self.vfprm.force_spill_var(var)
@@ -267,28 +396,12 @@

     def before_call(self, force_store=[], save_all_regs=False):
         self.rm.before_call(force_store, save_all_regs)
         self.vfprm.before_call(force_store, save_all_regs)
+
     def _ensure_value_is_boxed(self, thing, forbidden_vars=[]):
-        box = None
-        loc = None
-        if isinstance(thing, Const):
-            if isinstance(thing, ConstPtr):
-                box = TempPtr()
-            elif isinstance(thing, ConstFloat):
-                box = TempFloat()
-            else:
-                box = TempInt()
-            loc = self.force_allocate_reg(box,
-                                          forbidden_vars=forbidden_vars)
-            if isinstance(thing, ConstFloat):
-                imm = self.vfprm.convert_to_imm(thing)
-            else:
-                imm = self.rm.convert_to_imm(thing)
-            self.assembler.load(loc, imm)
+        if thing.type == FLOAT:
+            return self.vfprm.ensure_value_is_boxed(thing, forbidden_vars)
         else:
-            loc = self.make_sure_var_in_reg(thing,
-                                            forbidden_vars=forbidden_vars)
-            box = thing
-        return loc, box
+            return self.rm.ensure_value_is_boxed(thing, forbidden_vars)

     def _sync_var(self, v):
         if v.type == FLOAT:
@@ -299,52 +412,45 @@

     def _prepare_op_int_add(self, op, fcond):
         boxes = list(op.getarglist())
         a0, a1 = boxes
-        imm_a0 = _check_imm_arg(a0)
-        imm_a1 = _check_imm_arg(a1)
+        imm_a0 = check_imm_box(a0)
+        imm_a1 = check_imm_box(a1)
         if not imm_a0 and imm_a1:
-            l0, box = self._ensure_value_is_boxed(a0)
-            l1 = self.make_sure_var_in_reg(a1, [a0])
-            boxes.append(box)
+            l0 = self._ensure_value_is_boxed(a0)
+            l1 = self.make_sure_var_in_reg(a1, boxes)
         elif imm_a0 and not imm_a1:
             l0 = self.make_sure_var_in_reg(a0)
-            l1, box = self._ensure_value_is_boxed(a1, [a0])
-            boxes.append(box)
+            l1 = self._ensure_value_is_boxed(a1, boxes)
         else:
-            l0, box = self._ensure_value_is_boxed(a0)
-            boxes.append(box)
-            l1, box = self._ensure_value_is_boxed(a1, [box])
-            boxes.append(box)
-        return [l0, l1], boxes
+            l0 = self._ensure_value_is_boxed(a0)
+            l1 = self._ensure_value_is_boxed(a1, boxes)
+        return [l0, l1]

     def prepare_op_int_add(self, op, fcond):
-        locs, boxes = self._prepare_op_int_add(op, fcond)
-        self.possibly_free_vars(boxes)
+        locs = self._prepare_op_int_add(op, fcond)
+        self.possibly_free_vars_for_op(op)
+        self.free_temp_vars()
         res = self.force_allocate_reg(op.result)
         return locs + [res]

     def _prepare_op_int_sub(self, op, fcond):
-        boxes = list(op.getarglist())
-        a0, a1 = boxes
-        imm_a0 = _check_imm_arg(a0)
-        imm_a1 = _check_imm_arg(a1)
+        a0, a1 = boxes = op.getarglist()
+        imm_a0 = check_imm_box(a0)
+        imm_a1 = check_imm_box(a1)
         if not imm_a0 and imm_a1:
-            l0, box = self._ensure_value_is_boxed(a0, boxes)
-            l1 = self.make_sure_var_in_reg(a1, [a0])
-            boxes.append(box)
+            l0 = self._ensure_value_is_boxed(a0, boxes)
+            l1 = self.make_sure_var_in_reg(a1, boxes)
         elif imm_a0 and not imm_a1:
-            l0 = self.make_sure_var_in_reg(a0)
-            l1, box = self._ensure_value_is_boxed(a1, boxes)
-            boxes.append(box)
+            l0 = self.make_sure_var_in_reg(a0, boxes)
+            l1 = self._ensure_value_is_boxed(a1, boxes)
         else:
-            l0, box = self._ensure_value_is_boxed(a0, boxes)
-            boxes.append(box)
-            l1, box = self._ensure_value_is_boxed(a1, boxes)
-            boxes.append(box)
-        return [l0, l1], boxes
+            l0 = self._ensure_value_is_boxed(a0, boxes)
+            l1 = self._ensure_value_is_boxed(a1, boxes)
+        return [l0, l1]

     def prepare_op_int_sub(self, op, fcond):
-        locs, boxes = self._prepare_op_int_sub(op, fcond)
-        self.possibly_free_vars(boxes)
+        locs = self._prepare_op_int_sub(op, fcond)
+        self.possibly_free_vars_for_op(op)
+        self.free_temp_vars()
         res = self.force_allocate_reg(op.result)
         return locs + [res]

@@ -352,10 +458,8 @@
         boxes = list(op.getarglist())
         a0, a1 = boxes
-        reg1, box = self._ensure_value_is_boxed(a0, forbidden_vars=boxes)
-        boxes.append(box)
-        reg2, box = self._ensure_value_is_boxed(a1, forbidden_vars=boxes)
-        boxes.append(box)
+        reg1 = self._ensure_value_is_boxed(a0, forbidden_vars=boxes)
+        reg2 = self._ensure_value_is_boxed(a1, forbidden_vars=boxes)

         self.possibly_free_vars(boxes)
         self.possibly_free_vars_for_op(op)
@@ -364,42 +468,23 @@
         return [reg1, reg2, res]

     def prepare_guard_int_mul_ovf(self, op, guard, fcond):
-        boxes = list(op.getarglist())
-        a0, a1 = boxes
-
-        reg1, box = self._ensure_value_is_boxed(a0, forbidden_vars=boxes)
-        boxes.append(box)
-        reg2, box = self._ensure_value_is_boxed(a1, forbidden_vars=boxes)
-        boxes.append(box)
+        boxes = op.getarglist()
+        reg1 = self._ensure_value_is_boxed(boxes[0], forbidden_vars=boxes)
+        reg2 = self._ensure_value_is_boxed(boxes[1], forbidden_vars=boxes)
         res = self.force_allocate_reg(op.result)
-        args = self._prepare_guard(guard, [reg1, reg2, res])
-
-        self.possibly_free_vars(boxes)
-        self.possibly_free_vars_for_op(op)
-        self.possibly_free_var(op.result)
-        self.possibly_free_vars(guard.getfailargs())
-        return args
-
+        return self._prepare_guard(guard, [reg1, reg2, res])

     def prepare_guard_int_add_ovf(self, op, guard, fcond):
-        locs, boxes = self._prepare_op_int_add(op, fcond)
+        locs = self._prepare_op_int_add(op, fcond)
         res = self.force_allocate_reg(op.result)
         locs.append(res)
-        locs = self._prepare_guard(guard, locs)
-        self.possibly_free_vars(boxes)
-        self.possibly_free_vars_for_op(op)
-        self.possibly_free_vars(guard.getfailargs())
-        return locs
+        return self._prepare_guard(guard, locs)

     def prepare_guard_int_sub_ovf(self, op, guard, fcond):
-        locs, boxes = self._prepare_op_int_sub(op, fcond)
+        locs = self._prepare_op_int_sub(op, fcond)
         res = self.force_allocate_reg(op.result)
         locs.append(res)
-        locs = self._prepare_guard(guard, locs)
-        self.possibly_free_vars(boxes)
-        self.possibly_free_vars_for_op(op)
-        self.possibly_free_vars(guard.getfailargs())
-        return locs
+        return self._prepare_guard(guard, locs)

     prepare_op_int_floordiv = prepare_op_by_helper_call('int_floordiv')
     prepare_op_int_mod = prepare_op_by_helper_call('int_mod')
@@ -408,9 +493,12 @@
     prepare_op_int_and = prepare_op_ri('int_and')
     prepare_op_int_or = prepare_op_ri('int_or')
     prepare_op_int_xor = prepare_op_ri('int_xor')
-    prepare_op_int_lshift = prepare_op_ri('int_lshift', imm_size=0x1F, allow_zero=False, commutative=False)
-    prepare_op_int_rshift = prepare_op_ri('int_rshift', imm_size=0x1F, allow_zero=False, commutative=False)
-    prepare_op_uint_rshift = prepare_op_ri('uint_rshift', imm_size=0x1F, allow_zero=False, commutative=False)
+    prepare_op_int_lshift = prepare_op_ri('int_lshift', imm_size=0x1F,
+                                          allow_zero=False, commutative=False)
+    prepare_op_int_rshift = prepare_op_ri('int_rshift', imm_size=0x1F,
+                                          allow_zero=False, commutative=False)
+    prepare_op_uint_rshift = prepare_op_ri('uint_rshift', imm_size=0x1F,
+                                           allow_zero=False, commutative=False)

     prepare_op_int_lt = prepare_cmp_op('int_lt')
     prepare_op_int_le = prepare_cmp_op('int_le')
@@ -425,8 +513,8 @@
     prepare_op_uint_lt = prepare_cmp_op('uint_lt')
     prepare_op_uint_ge = prepare_cmp_op('uint_ge')

-    prepare_op_ptr_eq = prepare_op_int_eq
-    prepare_op_ptr_ne = prepare_op_int_ne
+    prepare_op_ptr_eq = prepare_op_instance_ptr_eq = prepare_op_int_eq
+    prepare_op_ptr_ne = prepare_op_instance_ptr_ne = prepare_op_int_ne

     prepare_guard_int_lt = prepare_cmp_op('guard_int_lt')
     prepare_guard_int_le = prepare_cmp_op('guard_int_le')
@@ -441,13 +529,12 @@
     prepare_guard_uint_lt = prepare_cmp_op('guard_uint_lt')
     prepare_guard_uint_ge = prepare_cmp_op('guard_uint_ge')

-    prepare_guard_ptr_eq = prepare_guard_int_eq
-    prepare_guard_ptr_ne = prepare_guard_int_ne
+    prepare_guard_ptr_eq = prepare_guard_instance_ptr_eq = prepare_guard_int_eq
+    prepare_guard_ptr_ne = prepare_guard_instance_ptr_ne = prepare_guard_int_ne

     prepare_op_int_add_ovf = prepare_op_int_add
     prepare_op_int_sub_ovf = prepare_op_int_sub
-
     prepare_op_int_is_true = prepare_op_unary_cmp('int_is_true')
     prepare_op_int_is_zero = prepare_op_unary_cmp('int_is_zero')

@@ -455,10 +542,10 @@
     prepare_guard_int_is_zero = prepare_op_unary_cmp('int_is_zero')

     def prepare_op_int_neg(self, op, fcond):
-        l0, box = self._ensure_value_is_boxed(op.getarg(0))
-        self.possibly_free_var(box)
+        l0 = self._ensure_value_is_boxed(op.getarg(0))
+        self.possibly_free_vars_for_op(op)
+        self.free_temp_vars()
         resloc = self.force_allocate_reg(op.result)
-        self.possibly_free_var(op.result)
         return [l0, resloc]

     prepare_op_int_invert = prepare_op_int_neg

@@ -474,10 +561,14 @@
         args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))]
         return args

+    def prepare_op_call_malloc_gc(self, op, fcond):
+        args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))]
+        return args
+
     def _prepare_guard(self, op, args=None):
         if args is None:
             args = []
-        args.append(imm(self.frame_manager.frame_depth))
+        args.append(imm(self.frame_manager.get_frame_depth()))
         for arg in op.getfailargs():
             if arg:
                 args.append(self.loc(arg))
@@ -486,21 +577,19 @@
         return args

     def prepare_op_finish(self, op, fcond):
-        args = [imm(self.frame_manager.frame_depth)]
+        args = [None] * (op.numargs() + 1)
         for i in range(op.numargs()):
             arg = op.getarg(i)
             if arg:
-                args.append(self.loc(arg))
+                args[i] = self.loc(arg)
                 self.possibly_free_var(arg)
-            else:
-                args.append(None)
+        n = self.cpu.get_fail_descr_number(op.getdescr())
+        args[-1] = imm(n)
         return args

     def prepare_op_guard_true(self, op, fcond):
-        l0, box = self._ensure_value_is_boxed(op.getarg(0))
+        l0 = self._ensure_value_is_boxed(op.getarg(0))
         args = self._prepare_guard(op, [l0])
-        self.possibly_free_var(box)
-        self.possibly_free_vars(op.getfailargs())
         return args

     prepare_op_guard_false = prepare_op_guard_true
@@ -510,17 +599,15 @@
     def prepare_op_guard_value(self, op, fcond):
         boxes = list(op.getarglist())
         a0, a1 = boxes
-        imm_a1 = _check_imm_arg(a1)
-        l0, box = self._ensure_value_is_boxed(a0, boxes)
-        boxes.append(box)
+        imm_a1 = check_imm_box(a1)
+        l0 = self._ensure_value_is_boxed(a0, boxes)
         if not imm_a1:
-            l1, box = self._ensure_value_is_boxed(a1,boxes)
-            boxes.append(box)
+            l1 = self._ensure_value_is_boxed(a1, boxes)
         else:
-            l1 = self.make_sure_var_in_reg(a1)
+            l1 = self.make_sure_var_in_reg(a1, boxes)
         assert op.result is None
         arglocs = self._prepare_guard(op, [l0, l1])
-        self.possibly_free_vars(boxes)
+        self.possibly_free_vars(op.getarglist())
         self.possibly_free_vars(op.getfailargs())
         return arglocs

@@ -532,33 +619,26 @@
     prepare_op_guard_overflow = prepare_op_guard_no_overflow
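[Editorial note, not part of the commit: a recurring change in this diff is that `_ensure_value_is_boxed` no longer returns a `(loc, box)` pair that every caller must collect and free; it now returns only the location, tracks any temporary box it creates internally, and callers release everything with one `free_temp_vars()` call. The following is a hypothetical mini-model of that pattern, using invented classes rather than PyPy's real API:]

```python
# Illustrative sketch (not PyPy's actual classes) of the temp-var
# tracking pattern this branch switches to.

class TempBox(object):
    """Stand-in for PyPy's TempInt/TempPtr/TempFloat temporaries."""

class SketchRegisterManager(object):
    def __init__(self):
        self.free_regs = ['r0', 'r1', 'r2', 'r3']
        self.reg_bindings = {}   # box -> register name
        self.temp_boxes = []     # temporaries created for constants

    def force_allocate_reg(self, box):
        # Bind the box to the next free register.
        reg = self.free_regs.pop(0)
        self.reg_bindings[box] = reg
        return reg

    def ensure_value_is_boxed(self, thing):
        # In this sketch, plain ints play the role of Consts: they get
        # a temporary box allocated into a register.  Real boxes are
        # simply allocated.  Note: only the location is returned; the
        # temp box is remembered internally, not handed to the caller.
        if isinstance(thing, int):
            box = TempBox()
            self.temp_boxes.append(box)
            return self.force_allocate_reg(box)
        return self.force_allocate_reg(thing)

    def free_temp_vars(self):
        # Release every register held by a temporary in one call,
        # replacing the per-caller possibly_free_var(box) bookkeeping
        # the old (loc, box) API required.
        for box in self.temp_boxes:
            self.free_regs.append(self.reg_bindings.pop(box))
        self.temp_boxes = []
```

The payoff is visible throughout the diff: the `boxes.append(box)` / `possibly_free_vars(boxes)` choreography in every `prepare_op_*` collapses to a single `free_temp_vars()` after the operands are placed.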
prepare_op_guard_not_invalidated = prepare_op_guard_no_overflow - def prepare_op_guard_exception(self, op, fcond): boxes = list(op.getarglist()) arg0 = ConstInt(rffi.cast(lltype.Signed, op.getarg(0).getint())) - loc, box = self._ensure_value_is_boxed(arg0) - boxes.append(box) - box = TempInt() - loc1 = self.force_allocate_reg(box, boxes) - boxes.append(box) + loc = self._ensure_value_is_boxed(arg0) + loc1 = self.get_scratch_reg(INT, boxes) if op.result in self.longevity: resloc = self.force_allocate_reg(op.result, boxes) - boxes.append(op.result) + self.possibly_free_var(op.result) else: resloc = None pos_exc_value = imm(self.cpu.pos_exc_value()) pos_exception = imm(self.cpu.pos_exception()) - arglocs = self._prepare_guard(op, [loc, loc1, resloc, pos_exc_value, pos_exception]) - self.possibly_free_vars(boxes) - self.possibly_free_vars(op.getfailargs()) + arglocs = self._prepare_guard(op, + [loc, loc1, resloc, pos_exc_value, pos_exception]) return arglocs def prepare_op_guard_no_exception(self, op, fcond): - loc, box = self._ensure_value_is_boxed( + loc = self._ensure_value_is_boxed( ConstInt(self.cpu.pos_exception())) arglocs = self._prepare_guard(op, [loc]) - self.possibly_free_var(box) - self.possibly_free_vars(op.getfailargs()) return arglocs def prepare_op_guard_class(self, op, fcond): @@ -570,88 +650,111 @@ assert isinstance(op.getarg(0), Box) boxes = list(op.getarglist()) - x, x_box = self._ensure_value_is_boxed(boxes[0], boxes) - boxes.append(x_box) - - t = TempInt() - y = self.force_allocate_reg(t, boxes) - boxes.append(t) + x = self._ensure_value_is_boxed(boxes[0], boxes) + y = self.get_scratch_reg(REF, forbidden_vars=boxes) y_val = rffi.cast(lltype.Signed, op.getarg(1).getint()) self.assembler.load(y, imm(y_val)) offset = self.cpu.vtable_offset assert offset is not None - offset_loc, offset_box = self._ensure_value_is_boxed(ConstInt(offset), boxes) - boxes.append(offset_box) + offset_loc = self._ensure_value_is_boxed(ConstInt(offset), boxes) arglocs = 
self._prepare_guard(op, [x, y, offset_loc]) - self.possibly_free_vars(boxes) - self.possibly_free_vars(op.getfailargs()) return arglocs + def compute_hint_frame_locations(self, operations): + # optimization only: fill in the 'hint_frame_locations' dictionary + # of rm and xrm based on the JUMP at the end of the loop, by looking + # at where we would like the boxes to be after the jump. + op = operations[-1] + if op.getopnum() != rop.JUMP: + return + self.final_jump_op = op + descr = op.getdescr() + assert isinstance(descr, TargetToken) + if descr._arm_loop_code != 0: + # if the target LABEL was already compiled, i.e. if it belongs + # to some already-compiled piece of code + self._compute_hint_frame_locations_from_descr(descr) + #else: + # The loop ends in a JUMP going back to a LABEL in the same loop. + # We cannot fill 'hint_frame_locations' immediately, but we can + # wait until the corresponding prepare_op_label() to know where the + # we would like the boxes to be after the jump. + + def _compute_hint_frame_locations_from_descr(self, descr): + arglocs = self.assembler.target_arglocs(descr) + jump_op = self.final_jump_op + assert len(arglocs) == jump_op.numargs() + for i in range(jump_op.numargs()): + box = jump_op.getarg(i) + if isinstance(box, Box): + loc = arglocs[i] + if loc is not None and loc.is_stack(): + self.frame_manager.hint_frame_locations[box] = loc def prepare_op_jump(self, op, fcond): - assembler = self.assembler descr = op.getdescr() - assert isinstance(descr, LoopToken) - nonfloatlocs, floatlocs = descr._arm_arglocs + assert isinstance(descr, TargetToken) + self.jump_target_descr = descr + arglocs = self.assembler.target_arglocs(descr) # get temporary locs tmploc = r.ip - box = TempFloat() - # compute 'vfptmploc' to be all_regs[0] by spilling what is there - vfptmp = self.vfprm.all_regs[0] - vfptmploc = self.vfprm.force_allocate_reg(box, selected_reg=vfptmp) + vfptmploc = r.vfp_ip # Part about non-floats - # XXX we don't need a copy, we only 
just the original list - src_locations1 = [self.loc(op.getarg(i)) for i in range(op.numargs()) - if op.getarg(i).type != FLOAT] - assert tmploc not in nonfloatlocs - dst_locations1 = [loc for loc in nonfloatlocs if loc is not None] + src_locations1 = [] + dst_locations1 = [] # Part about floats - src_locations2 = [self.loc(op.getarg(i)) for i in range(op.numargs()) - if op.getarg(i).type == FLOAT] - dst_locations2 = [loc for loc in floatlocs if loc is not None] + src_locations2 = [] + dst_locations2 = [] + + # Build the four lists + for i in range(op.numargs()): + box = op.getarg(i) + src_loc = self.loc(box) + dst_loc = arglocs[i] + if box.type != FLOAT: + src_locations1.append(src_loc) + dst_locations1.append(dst_loc) + else: + src_locations2.append(src_loc) + dst_locations2.append(dst_loc) + remap_frame_layout_mixed(self.assembler, src_locations1, dst_locations1, tmploc, src_locations2, dst_locations2, vfptmploc) - self.possibly_free_var(box) return [] def prepare_op_setfield_gc(self, op, fcond): boxes = list(op.getarglist()) a0, a1 = boxes - ofs, size, ptr = self._unpack_fielddescr(op.getdescr()) - base_loc, base_box = self._ensure_value_is_boxed(a0, boxes) - boxes.append(base_box) - value_loc, value_box = self._ensure_value_is_boxed(a1, boxes) - boxes.append(value_box) - c_ofs = ConstInt(ofs) - if _check_imm_arg(c_ofs): + ofs, size, sign = unpack_fielddescr(op.getdescr()) + base_loc = self._ensure_value_is_boxed(a0, boxes) + value_loc = self._ensure_value_is_boxed(a1, boxes) + if check_imm_arg(ofs): ofs_loc = imm(ofs) else: - ofs_loc, ofs_box = self._ensure_value_is_boxed(c_ofs, boxes) - boxes.append(ofs_box) - self.possibly_free_vars(boxes) + ofs_loc = self.get_scratch_reg(INT, boxes) + self.assembler.load(ofs_loc, imm(ofs)) return [value_loc, base_loc, ofs_loc, imm(size)] prepare_op_setfield_raw = prepare_op_setfield_gc def prepare_op_getfield_gc(self, op, fcond): a0 = op.getarg(0) - ofs, size, ptr = self._unpack_fielddescr(op.getdescr()) - base_loc, base_box 
= self._ensure_value_is_boxed(a0) - c_ofs = ConstInt(ofs) - if _check_imm_arg(c_ofs): - ofs_loc = imm(ofs) + ofs, size, sign = unpack_fielddescr(op.getdescr()) + base_loc = self._ensure_value_is_boxed(a0) + immofs = imm(ofs) + if check_imm_arg(ofs): + ofs_loc = immofs else: - ofs_loc, ofs_box = self._ensure_value_is_boxed(c_ofs, [base_box]) - self.possibly_free_var(ofs_box) - self.possibly_free_var(a0) - self.possibly_free_var(base_box) + ofs_loc = self.get_scratch_reg(INT, [a0]) + self.assembler.load(ofs_loc, immofs) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) return [base_loc, ofs_loc, res, imm(size)] prepare_op_getfield_raw = prepare_op_getfield_gc @@ -659,126 +762,110 @@ prepare_op_getfield_gc_pure = prepare_op_getfield_gc def prepare_op_getinteriorfield_gc(self, op, fcond): - t = self._unpack_interiorfielddescr(op.getdescr()) + t = unpack_interiorfielddescr(op.getdescr()) ofs, itemsize, fieldsize, sign = t args = op.getarglist() - base_loc, base_box = self._ensure_value_is_boxed(op.getarg(0), args) - index_loc, index_box = self._ensure_value_is_boxed(op.getarg(1), args) - c_ofs = ConstInt(ofs) - if _check_imm_arg(c_ofs): - ofs_loc = imm(ofs) + base_loc = self._ensure_value_is_boxed(op.getarg(0), args) + index_loc = self._ensure_value_is_boxed(op.getarg(1), args) + immofs = imm(ofs) + if check_imm_arg(ofs): + ofs_loc = immofs else: - ofs_loc, ofs_box = self._ensure_value_is_boxed(c_ofs, [base_box, index_box]) - self.possibly_free_var(ofs_box) - self.possibly_free_vars(args) - self.possibly_free_var(base_box) - self.possibly_free_var(index_box) + ofs_loc = self.get_scratch_reg(INT, args) + self.assembler.load(ofs_loc, immofs) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() result_loc = self.force_allocate_reg(op.result) - return [base_loc, index_loc, result_loc, ofs_loc, imm(ofs), - imm(itemsize), imm(fieldsize)] + return [base_loc, index_loc, result_loc, 
ofs_loc, imm(ofs), + imm(itemsize), imm(fieldsize)] - def prepare_op_setinteriorfield_gc(self, op, fcond): - t = self._unpack_interiorfielddescr(op.getdescr()) + t = unpack_interiorfielddescr(op.getdescr()) ofs, itemsize, fieldsize, sign = t - boxes = [None]*3 - base_loc, base_box = self._ensure_value_is_boxed(op.getarg(0), boxes) - boxes[0] = base_box - index_loc, index_box = self._ensure_value_is_boxed(op.getarg(1), boxes) - boxes[1] = index_box - value_loc, value_box = self._ensure_value_is_boxed(op.getarg(2), boxes) - boxes[2] = value_box - c_ofs = ConstInt(ofs) - if _check_imm_arg(c_ofs): - ofs_loc = imm(ofs) + args = op.getarglist() + base_loc = self._ensure_value_is_boxed(op.getarg(0), args) + index_loc = self._ensure_value_is_boxed(op.getarg(1), args) + value_loc = self._ensure_value_is_boxed(op.getarg(2), args) + immofs = imm(ofs) + if check_imm_arg(ofs): + ofs_loc = immofs else: - ofs_loc, ofs_box = self._ensure_value_is_boxed(c_ofs, boxes) - self.possibly_free_var(ofs_box) - self.possibly_free_vars(boxes) + ofs_loc = self.get_scratch_reg(INT, args) + self.assembler.load(ofs_loc, immofs) return [base_loc, index_loc, value_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] def prepare_op_arraylen_gc(self, op, fcond): arraydescr = op.getdescr() - assert isinstance(arraydescr, BaseArrayDescr) - ofs = arraydescr.get_ofs_length(self.cpu.translate_support_code) + assert isinstance(arraydescr, ArrayDescr) + ofs = arraydescr.lendescr.offset arg = op.getarg(0) - base_loc, base_box = self._ensure_value_is_boxed(arg) - self.possibly_free_vars([arg, base_box]) - + base_loc = self._ensure_value_is_boxed(arg) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) return [res, base_loc, imm(ofs)] def prepare_op_setarrayitem_gc(self, op, fcond): - a0, a1, a2 = boxes = list(op.getarglist()) - _, scale, base_ofs, _, ptr = self._unpack_arraydescr(op.getdescr()) - - base_loc, base_box = 
self._ensure_value_is_boxed(a0, boxes) - boxes.append(base_box) - ofs_loc, ofs_box = self._ensure_value_is_boxed(a1, boxes) - boxes.append(ofs_box) - value_loc, value_box = self._ensure_value_is_boxed(a2, boxes) - boxes.append(value_box) - self.possibly_free_vars(boxes) - assert _check_imm_arg(ConstInt(base_ofs)) - return [value_loc, base_loc, ofs_loc, imm(scale), imm(base_ofs)] + a0, a1, a2 = list(op.getarglist()) + size, ofs, _ = unpack_arraydescr(op.getdescr()) + scale = get_scale(size) + args = op.getarglist() + base_loc = self._ensure_value_is_boxed(a0, args) + ofs_loc = self._ensure_value_is_boxed(a1, args) + value_loc = self._ensure_value_is_boxed(a2, args) + assert check_imm_arg(ofs) + return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs)] prepare_op_setarrayitem_raw = prepare_op_setarrayitem_gc def prepare_op_getarrayitem_gc(self, op, fcond): a0, a1 = boxes = list(op.getarglist()) - _, scale, base_ofs, _, ptr = self._unpack_arraydescr(op.getdescr()) - - base_loc, base_box = self._ensure_value_is_boxed(a0, boxes) - boxes.append(base_box) - ofs_loc, ofs_box = self._ensure_value_is_boxed(a1, boxes) - boxes.append(ofs_box) - self.possibly_free_vars(boxes) + size, ofs, _ = unpack_arraydescr(op.getdescr()) + scale = get_scale(size) + base_loc = self._ensure_value_is_boxed(a0, boxes) + ofs_loc = self._ensure_value_is_boxed(a1, boxes) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) - assert _check_imm_arg(ConstInt(base_ofs)) - return [res, base_loc, ofs_loc, imm(scale), imm(base_ofs)] + assert check_imm_arg(ofs) + return [res, base_loc, ofs_loc, imm(scale), imm(ofs)] prepare_op_getarrayitem_raw = prepare_op_getarrayitem_gc prepare_op_getarrayitem_gc_pure = prepare_op_getarrayitem_gc def prepare_op_strlen(self, op, fcond): - l0, box = self._ensure_value_is_boxed(op.getarg(0)) - boxes = [box] - - + args = op.getarglist() + l0 = self._ensure_value_is_boxed(op.getarg(0)) 
basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, self.cpu.translate_support_code) - ofs_box = ConstInt(ofs_length) - imm_ofs = _check_imm_arg(ofs_box) + immofs = imm(ofs_length) + if check_imm_arg(ofs_length): + l1 = immofs + else: + l1 = self.get_scratch_reg(INT, args) + self.assembler.load(l1, immofs) - if imm_ofs: - l1 = self.make_sure_var_in_reg(ofs_box, boxes) - else: - l1, box1 = self._ensure_value_is_boxed(ofs_box, boxes) - boxes.append(box1) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() - self.possibly_free_vars(boxes) res = self.force_allocate_reg(op.result) self.possibly_free_var(op.result) return [l0, l1, res] def prepare_op_strgetitem(self, op, fcond): boxes = list(op.getarglist()) - base_loc, box = self._ensure_value_is_boxed(boxes[0]) - boxes.append(box) + base_loc = self._ensure_value_is_boxed(boxes[0]) a1 = boxes[1] - imm_a1 = _check_imm_arg(a1) + imm_a1 = check_imm_box(a1) if imm_a1: ofs_loc = self.make_sure_var_in_reg(a1, boxes) else: - ofs_loc, box = self._ensure_value_is_boxed(a1, boxes) - boxes.append(box) + ofs_loc = self._ensure_value_is_boxed(a1, boxes) - self.possibly_free_vars(boxes) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.STR, self.cpu.translate_support_code) @@ -787,18 +874,9 @@ def prepare_op_strsetitem(self, op, fcond): boxes = list(op.getarglist()) - - base_loc, box = self._ensure_value_is_boxed(boxes[0], boxes) - boxes.append(box) - - ofs_loc, box = self._ensure_value_is_boxed(boxes[1], boxes) - boxes.append(box) - - value_loc, box = self._ensure_value_is_boxed(boxes[2], boxes) - boxes.append(box) - - self.possibly_free_vars(boxes) - + base_loc = self._ensure_value_is_boxed(boxes[0], boxes) + ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) + value_loc = self._ensure_value_is_boxed(boxes[2], boxes) basesize, itemsize, ofs_length = 
symbolic.get_array_token(rstr.STR, self.cpu.translate_support_code) assert itemsize == 1 @@ -808,156 +886,86 @@ prepare_op_copyunicodecontent = void def prepare_op_unicodelen(self, op, fcond): - l0, box = self._ensure_value_is_boxed(op.getarg(0)) - boxes = [box] + l0 = self._ensure_value_is_boxed(op.getarg(0)) basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, self.cpu.translate_support_code) - ofs_box = ConstInt(ofs_length) - imm_ofs = _check_imm_arg(ofs_box) + immofs = imm(ofs_length) + if check_imm_arg(ofs_length): + l1 = immofs + else: + l1 = self.get_scratch_reg(INT, [op.getarg(0)]) + self.assembler.load(l1, immofs) - if imm_ofs: - l1 = imm(ofs_length) - else: - l1, box1 = self._ensure_value_is_boxed(ofs_box, boxes) - boxes.append(box1) - - self.possibly_free_vars(boxes) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) return [l0, l1, res] def prepare_op_unicodegetitem(self, op, fcond): boxes = list(op.getarglist()) - base_loc, box = self._ensure_value_is_boxed(boxes[0], boxes) - boxes.append(box) - ofs_loc, box = self._ensure_value_is_boxed(boxes[1], boxes) - boxes.append(box) - self.possibly_free_vars(boxes) + base_loc = self._ensure_value_is_boxed(boxes[0], boxes) + ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, self.cpu.translate_support_code) - scale = itemsize/2 - return [res, base_loc, ofs_loc, imm(scale), imm(basesize), imm(itemsize)] + scale = itemsize / 2 + return [res, base_loc, ofs_loc, + imm(scale), imm(basesize), imm(itemsize)] def prepare_op_unicodesetitem(self, op, fcond): boxes = list(op.getarglist()) - base_loc, box = self._ensure_value_is_boxed(boxes[0], boxes) - boxes.append(box) - ofs_loc, box = 
self._ensure_value_is_boxed(boxes[1], boxes) - boxes.append(box) - value_loc, box = self._ensure_value_is_boxed(boxes[2], boxes) - boxes.append(box) - - self.possibly_free_vars(boxes) - + base_loc = self._ensure_value_is_boxed(boxes[0], boxes) + ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) + value_loc = self._ensure_value_is_boxed(boxes[2], boxes) basesize, itemsize, ofs_length = symbolic.get_array_token(rstr.UNICODE, self.cpu.translate_support_code) - scale = itemsize/2 - return [value_loc, base_loc, ofs_loc, imm(scale), imm(basesize), imm(itemsize)] + scale = itemsize / 2 + return [value_loc, base_loc, ofs_loc, + imm(scale), imm(basesize), imm(itemsize)] def prepare_op_same_as(self, op, fcond): arg = op.getarg(0) - imm_arg = _check_imm_arg(arg) + imm_arg = check_imm_box(arg) if imm_arg: argloc = self.make_sure_var_in_reg(arg) else: - argloc, box = self._ensure_value_is_boxed(arg) - self.possibly_free_var(box) + argloc = self._ensure_value_is_boxed(arg) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() + resloc = self.force_allocate_reg(op.result) + return [argloc, resloc] - resloc = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) - return [argloc, resloc] prepare_op_cast_ptr_to_int = prepare_op_same_as prepare_op_cast_int_to_ptr = prepare_op_same_as - def prepare_op_new(self, op, fcond): - gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.can_inline_malloc(op.getdescr()): - self.fastpath_malloc_fixedsize(op, op.getdescr()) - else: - arglocs = self._prepare_args_for_new_op(op.getdescr()) - force_index = self.assembler.write_new_force_index() - self.assembler._emit_call(force_index, self.assembler.malloc_func_addr, - arglocs, self, fcond, result=op.result) - self.possibly_free_vars(arglocs) - self.possibly_free_var(op.result) - return [] + def prepare_op_call_malloc_nursery(self, op, fcond): + size_box = op.getarg(0) + assert isinstance(size_box, ConstInt) + size = size_box.getint() - def 
prepare_op_new_with_vtable(self, op, fcond): - classint = op.getarg(0).getint() - descrsize = heaptracker.vtable2descr(self.cpu, classint) - if self.assembler.cpu.gc_ll_descr.can_inline_malloc(descrsize): - self.fastpath_malloc_fixedsize(op, descrsize) - else: - callargs = self._prepare_args_for_new_op(descrsize) - force_index = self.assembler.write_new_force_index() - self.assembler._emit_call(force_index, self.assembler.malloc_func_addr, - callargs, self, fcond, result=op.result) - self.possibly_free_vars(callargs) - self.possibly_free_var(op.result) - return [imm(classint)] - - def prepare_op_new_array(self, op, fcond): - gc_ll_descr = self.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newarray is not None: - # framework GC - box_num_elem = op.getarg(0) - if isinstance(box_num_elem, ConstInt): - num_elem = box_num_elem.value - if gc_ll_descr.can_inline_malloc_varsize(op.getdescr(), - num_elem): - self.fastpath_malloc_varsize(op, op.getdescr(), num_elem) - return [] - args = self.assembler.cpu.gc_ll_descr.args_for_new_array( - op.getdescr()) - argboxes = [ConstInt(x) for x in args] - argboxes.append(box_num_elem) - force_index = self.assembler.write_new_force_index() - self.assembler._emit_call(force_index, self.assembler.malloc_array_func_addr, - argboxes, self, fcond, result=op.result) - return [] - # boehm GC - itemsize, scale, basesize, ofs_length, _ = ( - self._unpack_arraydescr(op.getdescr())) - return self._malloc_varsize(basesize, ofs_length, itemsize, op) - - def fastpath_malloc_varsize(self, op, arraydescr, num_elem): - assert isinstance(arraydescr, BaseArrayDescr) - ofs_length = arraydescr.get_ofs_length(self.cpu.translate_support_code) - basesize = arraydescr.get_base_size(self.cpu.translate_support_code) - itemsize = arraydescr.get_item_size(self.cpu.translate_support_code) - size = basesize + itemsize * num_elem - self._do_fastpath_malloc(op, size, arraydescr.tid) - # we know the resullt of the malloc call is in r0 - 
self.assembler.set_new_array_length(r.r0, ofs_length, num_elem) - - def fastpath_malloc_fixedsize(self, op, descr): - assert isinstance(descr, BaseSizeDescr) - self._do_fastpath_malloc(op, descr.size, descr.tid) - - def _do_fastpath_malloc(self, op, size, tid): - gc_ll_descr = self.assembler.cpu.gc_ll_descr self.rm.force_allocate_reg(op.result, selected_reg=r.r0) t = TempInt() self.rm.force_allocate_reg(t, selected_reg=r.r1) self.possibly_free_var(op.result) self.possibly_free_var(t) + gc_ll_descr = self.assembler.cpu.gc_ll_descr self.assembler.malloc_cond( gc_ll_descr.get_nursery_free_addr(), gc_ll_descr.get_nursery_top_addr(), - size, tid, + size ) def get_mark_gc_roots(self, gcrootmap, use_copy_area=False): shape = gcrootmap.get_basic_shape(False) - for v, val in self.frame_manager.frame_bindings.items(): + for v, val in self.frame_manager.bindings.items(): if (isinstance(v, BoxPtr) and self.rm.stays_alive(v)): assert val.is_stack() - gcrootmap.add_frame_offset(shape, val.position*-WORD) + gcrootmap.add_frame_offset(shape, val.position * -WORD) for v, reg in self.rm.reg_bindings.items(): if reg is r.r0: continue @@ -970,59 +978,6 @@ assert 0, 'sure??' 
         return gcrootmap.compress_callshape(shape,
                                     self.assembler.datablockwrapper)
 
-    def prepare_op_newstr(self, op, fcond):
-        gc_ll_descr = self.cpu.gc_ll_descr
-        if gc_ll_descr.get_funcptr_for_newstr is not None:
-            force_index = self.assembler.write_new_force_index()
-            self.assembler._emit_call(force_index,
-                        self.assembler.malloc_str_func_addr, [op.getarg(0)],
-                        self, fcond, op.result)
-            return []
-        # boehm GC
-        ofs_items, itemsize, ofs = symbolic.get_array_token(rstr.STR,
-                                        self.cpu.translate_support_code)
-        assert itemsize == 1
-        return self._malloc_varsize(ofs_items, ofs, itemsize, op)
-
-    def prepare_op_newunicode(self, op, fcond):
-        gc_ll_descr = self.cpu.gc_ll_descr
-        if gc_ll_descr.get_funcptr_for_newunicode is not None:
-            force_index = self.assembler.write_new_force_index()
-            self.assembler._emit_call(force_index, self.assembler.malloc_unicode_func_addr,
-                                    [op.getarg(0)], self, fcond, op.result)
-            return []
-        # boehm GC
-        ofs_items, _, ofs = symbolic.get_array_token(rstr.UNICODE,
-                                        self.cpu.translate_support_code)
-        _, itemsize, _ = symbolic.get_array_token(rstr.UNICODE,
-                                        self.cpu.translate_support_code)
-        return self._malloc_varsize(ofs_items, ofs, itemsize, op)
-
-    def _malloc_varsize(self, ofs_items, ofs_length, itemsize, op):
-        v = op.getarg(0)
-        res_v = op.result
-        boxes = [v, res_v]
-        itemsize_box = ConstInt(itemsize)
-        ofs_items_box = ConstInt(ofs_items)
-        if _check_imm_arg(ofs_items_box):
-            ofs_items_loc = self.convert_to_imm(ofs_items_box)
-        else:
-            ofs_items_loc, ofs_items_box = self._ensure_value_is_boxed(ofs_items_box, boxes)
-            boxes.append(ofs_items_box)
-        vloc, vbox = self._ensure_value_is_boxed(v, [res_v])
-        boxes.append(vbox)
-        size, size_box = self._ensure_value_is_boxed(itemsize_box, boxes)
-        boxes.append(size_box)
-        self.assembler._regalloc_malloc_varsize(size, size_box,
-                                    vloc, vbox, ofs_items_loc, self, res_v)
-        base_loc = self.make_sure_var_in_reg(res_v)
-
-        value_loc, vbox = self._ensure_value_is_boxed(v, [res_v])
-        boxes.append(vbox)
-
-        self.possibly_free_vars(boxes)
-        assert value_loc.is_reg()
-        assert base_loc.is_reg()
-        return [value_loc, base_loc, imm(ofs_length)]
 
     prepare_op_debug_merge_point = void
     prepare_op_jit_debug = void
@@ -1034,12 +989,10 @@
         # because it will be needed anyway by the following setfield_gc
         # or setarrayitem_gc. It avoids loading it twice from the memory.
         arglocs = []
-        argboxes = []
+        args = op.getarglist()
         for i in range(N):
-            loc, box = self._ensure_value_is_boxed(op.getarg(i), argboxes)
+            loc = self._ensure_value_is_boxed(op.getarg(i), args)
             arglocs.append(loc)
-            argboxes.append(box)
-        self.rm.possibly_free_vars(argboxes)
         return arglocs
 
     prepare_op_cond_call_gc_wb_array = prepare_op_cond_call_gc_wb
@@ -1049,6 +1002,46 @@
         self.possibly_free_var(op.result)
         return [res_loc]
 
+    def prepare_op_label(self, op, fcond):
+        # XXX big refactoring needed?
+        descr = op.getdescr()
+        assert isinstance(descr, TargetToken)
+        inputargs = op.getarglist()
+        arglocs = [None] * len(inputargs)
+        #
+        # we use force_spill() on the boxes that are not going to be really
+        # used any more in the loop, but that are kept alive anyway
+        # by being in a next LABEL's or a JUMP's argument or fail_args
+        # of some guard
+        position = self.rm.position
+        for arg in inputargs:
+            assert isinstance(arg, Box)
+            if self.last_real_usage.get(arg, -1) <= position:
+                self.force_spill_var(arg)
+
+        #
+        for i in range(len(inputargs)):
+            arg = inputargs[i]
+            assert isinstance(arg, Box)
+            loc = self.loc(arg)
+            arglocs[i] = loc
+            if loc.is_reg():
+                self.frame_manager.mark_as_free(arg)
+        #
+        descr._arm_arglocs = arglocs
+        descr._arm_loop_code = self.assembler.mc.currpos()
+        descr._arm_clt = self.assembler.current_clt
+        self.assembler.target_tokens_currently_compiling[descr] = None
+        self.possibly_free_vars_for_op(op)
+        #
+        # if the LABEL's descr is precisely the target of the JUMP at the
+        # end of the same loop, i.e. if what we are compiling is a single
+        # loop that ends up jumping to this LABEL, then we can now provide
+        # the hints about the expected position of the spilled variables.
+        jump_op = self.final_jump_op
+        if jump_op is not None and jump_op.getdescr() is descr:
+            self._compute_hint_frame_locations_from_descr(descr)
+
     def prepare_guard_call_may_force(self, op, guard_op, fcond):
         faildescr = guard_op.getdescr()
         fail_index = self.cpu.get_fail_descr_number(faildescr)
@@ -1067,13 +1060,11 @@
         gcrootmap = self.cpu.gc_ll_descr.gcrootmap
         if gcrootmap:
             arglocs = []
-            argboxes = []
+            args = op.getarglist()
             for i in range(op.numargs()):
-                loc, box = self._ensure_value_is_boxed(op.getarg(i), argboxes)
+                loc = self._ensure_value_is_boxed(op.getarg(i), args)
                 arglocs.append(loc)
-                argboxes.append(box)
             self.assembler.call_release_gil(gcrootmap, arglocs, fcond)
-            self.possibly_free_vars(argboxes)
         # do the call
         faildescr = guard_op.getdescr()
         fail_index = self.cpu.get_fail_descr_number(faildescr)
@@ -1081,17 +1072,20 @@
         self.assembler.emit_op_call(op, args, self, fcond, fail_index)
         # then reopen the stack
         if gcrootmap:
-            self.assembler.call_reacquire_gil(gcrootmap, r.r0, fcond)
+            if op.result:
+                result_loc = self.call_result_location(op.result)
+            else:
+                result_loc = None
+            self.assembler.call_reacquire_gil(gcrootmap, result_loc, fcond)
         locs = self._prepare_guard(guard_op)
-        self.possibly_free_vars(guard_op.getfailargs())
         return locs
 
     def prepare_guard_call_assembler(self, op, guard_op, fcond):
         descr = op.getdescr()
-        assert isinstance(descr, LoopToken)
+        assert isinstance(descr, JitCellToken)
         jd = descr.outermost_jitdriver_sd
         assert jd is not None
-        size = jd.portal_calldescr.get_result_size(self.cpu.translate_support_code)
+        size = jd.portal_calldescr.get_result_size()
         vable_index = jd.index_of_virtualizable
         if vable_index >= 0:
             self._sync_var(op.getarg(vable_index))
@@ -1113,117 +1107,92 @@
             arglocs.append(t)
         return arglocs
 
-    # from ../x86/regalloc.py:791
-    def _unpack_fielddescr(self, fielddescr):
-        assert isinstance(fielddescr, BaseFieldDescr)
-        ofs = fielddescr.offset
-        size = fielddescr.get_field_size(self.cpu.translate_support_code)
-        ptr = fielddescr.is_pointer_field()
-        return ofs, size, ptr
-
-    # from ../x86/regalloc.py:779
-    def _unpack_arraydescr(self, arraydescr):
-        assert isinstance(arraydescr, BaseArrayDescr)
-        cpu = self.cpu
-        ofs_length = arraydescr.get_ofs_length(cpu.translate_support_code)
-        ofs = arraydescr.get_base_size(cpu.translate_support_code)
-        size = arraydescr.get_item_size(cpu.translate_support_code)
-        ptr = arraydescr.is_array_of_pointers()
-        scale = 0
-        while (1 << scale) < size:
-            scale += 1
-        assert (1 << scale) == size
-        return size, scale, ofs, ofs_length, ptr
-
-    # from ../x86/regalloc.py:965
-    def _unpack_interiorfielddescr(self, descr):
-        assert isinstance(descr, InteriorFieldDescr)
-        arraydescr = descr.arraydescr
-        ofs = arraydescr.get_base_size(self.cpu.translate_support_code)
-        itemsize = arraydescr.get_item_size(self.cpu.translate_support_code)
-        fieldsize = descr.fielddescr.get_field_size(self.cpu.translate_support_code)
-        sign = descr.fielddescr.is_field_signed()
-        ofs += descr.fielddescr.offset
-        return ofs, itemsize, fieldsize, sign
-
     prepare_op_float_add = prepare_float_op(name='prepare_op_float_add')
     prepare_op_float_sub = prepare_float_op(name='prepare_op_float_sub')
     prepare_op_float_mul = prepare_float_op(name='prepare_op_float_mul')
-    prepare_op_float_truediv = prepare_float_op(name='prepare_op_float_truediv')
-    prepare_op_float_lt = prepare_float_op(float_result=False, name='prepare_op_float_lt')
-    prepare_op_float_le = prepare_float_op(float_result=False, name='prepare_op_float_le')
-    prepare_op_float_eq = prepare_float_op(float_result=False, name='prepare_op_float_eq')
-    prepare_op_float_ne = prepare_float_op(float_result=False, name='prepare_op_float_ne')
-    prepare_op_float_gt = prepare_float_op(float_result=False, name='prepare_op_float_gt')
-    prepare_op_float_ge = prepare_float_op(float_result=False, name='prepare_op_float_ge')
-    prepare_op_float_neg = prepare_float_op(base=False, name='prepare_op_float_neg')
-    prepare_op_float_abs = prepare_float_op(base=False, name='prepare_op_float_abs')
+    prepare_op_float_truediv = prepare_float_op(
+                                        name='prepare_op_float_truediv')
+    prepare_op_float_lt = prepare_float_op(float_result=False,
+                                        name='prepare_op_float_lt')
+    prepare_op_float_le = prepare_float_op(float_result=False,
+                                        name='prepare_op_float_le')
+    prepare_op_float_eq = prepare_float_op(float_result=False,
+                                        name='prepare_op_float_eq')
+    prepare_op_float_ne = prepare_float_op(float_result=False,
+                                        name='prepare_op_float_ne')
+    prepare_op_float_gt = prepare_float_op(float_result=False,
+                                        name='prepare_op_float_gt')
+    prepare_op_float_ge = prepare_float_op(float_result=False,
+                                        name='prepare_op_float_ge')
+    prepare_op_float_neg = prepare_float_op(base=False,
+                                        name='prepare_op_float_neg')
+    prepare_op_float_abs = prepare_float_op(base=False,
+                                        name='prepare_op_float_abs')
 
-    prepare_guard_float_lt = prepare_float_op(guard=True, float_result=False, name='prepare_guard_float_lt')
-    prepare_guard_float_le = prepare_float_op(guard=True, float_result=False, name='prepare_guard_float_le')
-    prepare_guard_float_eq = prepare_float_op(guard=True, float_result=False, name='prepare_guard_float_eq')
-    prepare_guard_float_ne = prepare_float_op(guard=True, float_result=False, name='prepare_guard_float_ne')
-    prepare_guard_float_gt = prepare_float_op(guard=True, float_result=False, name='prepare_guard_float_gt')
-    prepare_guard_float_ge = prepare_float_op(guard=True, float_result=False, name='prepare_guard_float_ge')
+    prepare_guard_float_lt = prepare_float_op(guard=True,
+                            float_result=False, name='prepare_guard_float_lt')
+    prepare_guard_float_le = prepare_float_op(guard=True,
+                            float_result=False, name='prepare_guard_float_le')
+    prepare_guard_float_eq = prepare_float_op(guard=True,
+                            float_result=False, name='prepare_guard_float_eq')
+    prepare_guard_float_ne = prepare_float_op(guard=True,
+                            float_result=False, name='prepare_guard_float_ne')
+    prepare_guard_float_gt = prepare_float_op(guard=True,
+                            float_result=False, name='prepare_guard_float_gt')
+    prepare_guard_float_ge = prepare_float_op(guard=True,
+                            float_result=False, name='prepare_guard_float_ge')
 
     def prepare_op_math_sqrt(self, op, fcond):
-        loc, box = self._ensure_value_is_boxed(op.getarg(1))
-        self.possibly_free_var(box)
+        loc = self._ensure_value_is_boxed(op.getarg(1))
+        self.possibly_free_vars_for_op(op)
+        self.free_temp_vars()
         res = self.vfprm.force_allocate_reg(op.result)
         self.possibly_free_var(op.result)
         return [loc, res]
 
     def prepare_op_cast_float_to_int(self, op, fcond):
-        locs = []
-
-        loc1, box1 = self._ensure_value_is_boxed(op.getarg(0))
-        locs.append(loc1)
-        self.possibly_free_var(box1)
-
-        t = TempFloat()
-        temp_loc = self.vfprm.force_allocate_reg(t)
-        locs.append(temp_loc)
-        self.possibly_free_var(t)
-
-        res = self.rm.force_allocate_reg(op.result)
-        self.possibly_free_var(op.result)
-        locs.append(res)
-
-        return locs
+        loc1 = self._ensure_value_is_boxed(op.getarg(0))
+        temp_loc = self.get_scratch_reg(FLOAT)
+        self.possibly_free_vars_for_op(op)
+        self.free_temp_vars()
+        res = self.rm.force_allocate_reg(op.result)
+        return [loc1, temp_loc, res]
 
     def prepare_op_cast_int_to_float(self, op, fcond):
-        locs = []
-
-        loc1, box1 = self._ensure_value_is_boxed(op.getarg(0))
-        locs.append(loc1)
-        self.possibly_free_var(box1)
-
-        t = TempFloat()
-        temp_loc = self.vfprm.force_allocate_reg(t)
-        locs.append(temp_loc)
-        self.possibly_free_var(t)
-
-        res = self.vfprm.force_allocate_reg(op.result)
-        self.possibly_free_var(op.result)
-        locs.append(res)
-
-        return locs
+        loc1 = self._ensure_value_is_boxed(op.getarg(0))
+        temp_loc = self.get_scratch_reg(FLOAT)
+        self.possibly_free_vars_for_op(op)
+        self.free_temp_vars()
+        res = self.vfprm.force_allocate_reg(op.result)
+        return [loc1, temp_loc, res]
 
     def prepare_force_spill(self, op, fcond):
         self.force_spill_var(op.getarg(0))
         return []
 
+
 def add_none_argument(fn):
     return lambda self, op, fcond: fn(self, op, None, fcond)
 
+
 def notimplemented(self, op, fcond):
-    raise NotImplementedError, op
+    raise NotImplementedError(op)
+
+
 def notimplemented_with_guard(self, op, guard_op, fcond):
-    raise NotImplementedError, op
+    raise NotImplementedError(op)
 
 operations = [notimplemented] * (rop._LAST + 1)
 operations_with_guard = [notimplemented_with_guard] * (rop._LAST + 1)
+
+
+def get_scale(size):
+    scale = 0
+    while (1 << scale) < size:
+        scale += 1
+    assert (1 << scale) == size
+    return scale
+
 
 for key, value in rop.__dict__.items():
     key = key.lower()
     if key.startswith('_'):
diff --git a/pypy/jit/backend/arm/registers.py b/pypy/jit/backend/arm/registers.py
--- a/pypy/jit/backend/arm/registers.py
+++ b/pypy/jit/backend/arm/registers.py
@@ -1,11 +1,14 @@
-from pypy.jit.backend.arm.locations import RegisterLocation, VFPRegisterLocation
+from pypy.jit.backend.arm.locations import VFPRegisterLocation
+from pypy.jit.backend.arm.locations import RegisterLocation
 
 registers = [RegisterLocation(i) for i in range(16)]
 vfpregisters = [VFPRegisterLocation(i) for i in range(16)]
-r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, r13, r14, r15 = registers
+[r0, r1, r2, r3, r4, r5, r6, r7,
+    r8, r9, r10, r11, r12, r13, r14, r15] = registers
 
 #vfp registers interpreted as 64-bit registers
-d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10, d11, d12, d13, d14, d15 = vfpregisters
+[d0, d1, d2, d3, d4, d5, d6, d7,
+    d8, d9, d10, d11, d12, d13, d14, d15] = vfpregisters
 
 # aliases for registers
 fp = r11
@@ -18,13 +21,12 @@
 all_regs = [r0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10]
 all_vfp_regs = vfpregisters[:-1]
 
-caller_resp = [r0, r1, r2, r3]
+argument_regs = caller_resp = [r0, r1, r2, r3]
 callee_resp = [r4, r5, r6, r7, r8, r9, r10, fp]
-callee_saved_registers = callee_resp+[lr]
-callee_restored_registers = callee_resp+[pc]
+callee_saved_registers = callee_resp + [lr]
+callee_restored_registers = callee_resp + [pc]
 
 caller_vfp_resp = [d0, d1, d2, d3, d4, d5, d6, d7]
 callee_vfp_resp = [d8, d9, d10, d11, d12, d13, d14, d15]
 
 callee_saved_vfp_registers = callee_vfp_resp
-
diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py
--- a/pypy/jit/backend/arm/runner.py
+++ b/pypy/jit/backend/arm/runner.py
@@ -9,21 +9,24 @@
 
 class ArmCPU(AbstractLLCPU):
 
-    BOOTSTRAP_TP = lltype.FuncType([], lltype.Signed)
     supports_floats = True
 
     def __init__(self, rtyper, stats, opts=None, translate_support_code=False,
                  gcdescr=None):
         if gcdescr is not None:
             gcdescr.force_index_ofs = FORCE_INDEX_OFS
+        # XXX for now the arm backend does not support the gcremovetypeptr
+        # translation option
+        assert gcdescr.config.translation.gcremovetypeptr is False
         AbstractLLCPU.__init__(self, rtyper, stats, opts,
                                translate_support_code, gcdescr)
 
+
     def setup(self):
         if self.opts is not None:
             failargs_limit = self.opts.failargs_limit
         else:
             failargs_limit = 1000
 
-        self.assembler = AssemblerARM(self)
+        self.assembler = AssemblerARM(self, failargs_limit=failargs_limit)
 
     def setup_once(self):
         self.assembler.setup_once()
@@ -31,7 +34,8 @@
     def finish_once(self):
         pass
 
-    def compile_loop(self, inputargs, operations, looptoken, log=True, name=''):
+    def compile_loop(self, inputargs, operations, looptoken,
+                     log=True, name=''):
         self.assembler.assemble_loop(inputargs, operations,
                                      looptoken, log=log)
 
@@ -42,15 +46,6 @@
         self.assembler.assemble_bridge(faildescr, inputargs, operations,
                                        original_loop_token, log=log)
 
-    def set_future_value_float(self, index, floatvalue):
-        self.assembler.fail_boxes_float.setitem(index, floatvalue)
-
-    def set_future_value_int(self, index, intvalue):
-        self.assembler.fail_boxes_int.setitem(index, intvalue)
-
-    def set_future_value_ref(self, index, ptrvalue):
-        self.assembler.fail_boxes_ptr.setitem(index, ptrvalue)
-
     def get_latest_value_float(self, index):
         return self.assembler.fail_boxes_float.getitem(index)
 
@@ -63,9 +58,6 @@
     def get_latest_value_count(self):
         return self.assembler.fail_boxes_count
 
-    def get_latest_value_count(self):
-        return self.assembler.fail_boxes_count
-
    def get_latest_force_token(self):
         return self.assembler.fail_force_index
 
@@ -78,27 +70,29 @@
         for index in range(count):
             setitem(index, null)
 
-    def execute_token(self, executable_token):
-        #i = [self.get_latest_value_int(x) for x in range(10)]
-        #print 'Inputargs: %r for token %r' % (i, executable_token)
-        addr = executable_token._arm_bootstrap_code
-        assert addr % 8 == 0
-        func = rffi.cast(lltype.Ptr(self.BOOTSTRAP_TP), addr)
-        fail_index = self._execute_call(func)
-        return self.get_fail_descr_from_number(fail_index)
+    def make_execute_token(self, *ARGS):
+        FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, lltype.Signed))
 
-    def _execute_call(self, func):
-        prev_interpreter = None
-        if not self.translate_support_code:
-            prev_interpreter = LLInterpreter.current_interpreter
-            LLInterpreter.current_interpreter = self.debug_ll_interpreter
-        res = 0
-        try:
-            res = func()
-        finally:
+        def execute_token(executable_token, *args):
+            clt = executable_token.compiled_loop_token
+            assert len(args) == clt._debug_nbargs
+            #
+            addr = executable_token._arm_func_addr
+            assert addr % 8 == 0
+            func = rffi.cast(FUNCPTR, addr)
+            #llop.debug_print(lltype.Void, ">>>> Entering", addr)
+            prev_interpreter = None   # help flow space
             if not self.translate_support_code:
-                LLInterpreter.current_interpreter = prev_interpreter
-        return res
+                prev_interpreter = LLInterpreter.current_interpreter
+                LLInterpreter.current_interpreter = self.debug_ll_interpreter
+            try:
+                fail_index = func(*args)
+            finally:
+                if not self.translate_support_code:
+                    LLInterpreter.current_interpreter = prev_interpreter
+            #llop.debug_print(lltype.Void, "<<<< Back")
+            return self.get_fail_descr_from_number(fail_index)
+        return execute_token
 
 def cast_ptr_to_int(x):
     adr = llmemory.cast_ptr_to_adr(x)
@@ -111,13 +105,13 @@
         fail_index = rffi.cast(TP, addr_of_force_index)[0]
         assert fail_index >= 0, "already forced!"
         faildescr = self.get_fail_descr_from_number(fail_index)
-        rffi.cast(TP, addr_of_force_index)[0] = -1
+        rffi.cast(TP, addr_of_force_index)[0] = ~fail_index
         # start of "no gc operation!" block
-        frame_depth = faildescr._arm_frame_depth*WORD
+        frame_depth = faildescr._arm_current_frame_depth * WORD
         addr_end_of_frame = (addr_of_force_index -
                             (frame_depth +
-                            len(all_regs)*WORD +
-                            len(all_vfp_regs)*2*WORD))
+                             len(all_regs) * WORD +
+                             len(all_vfp_regs) * 2 * WORD))
         fail_index_2 = self.assembler.failure_recovery_func(
             faildescr._failure_recovery_code,
             addr_of_force_index,
diff --git a/pypy/jit/backend/arm/shift.py b/pypy/jit/backend/arm/shift.py
--- a/pypy/jit/backend/arm/shift.py
+++ b/pypy/jit/backend/arm/shift.py
@@ -3,4 +3,4 @@
 LSR = 0x1
 ASR = 0x2
 ROR = 0x3
-RRX = 0x3 # with imm = 0
+RRX = 0x3  # with imm = 0
diff --git a/pypy/jit/backend/arm/test/test_assembler.py b/pypy/jit/backend/arm/test/test_assembler.py
--- a/pypy/jit/backend/arm/test/test_assembler.py
+++ b/pypy/jit/backend/arm/test/test_assembler.py
@@ -1,8 +1,6 @@
-from pypy.jit.backend.arm import arch
 from pypy.jit.backend.arm import conditions as c
 from pypy.jit.backend.arm import registers as r
-from pypy.jit.backend.arm.arch import WORD
-from pypy.jit.backend.arm.arch import arm_int_div, arm_int_div_sign
+from pypy.jit.backend.arm.arch import arm_int_div
 from pypy.jit.backend.arm.assembler import AssemblerARM
 from pypy.jit.backend.arm.locations import imm
 from pypy.jit.backend.arm.test.support import skip_unless_arm, run_asm
@@ -10,21 +8,21 @@
 from pypy.jit.metainterp.resoperation import rop
 
 from pypy.rpython.annlowlevel import llhelper
-from pypy.rpython.lltypesystem import lltype, rffi, llmemory
-from pypy.jit.metainterp.history import LoopToken
+from pypy.rpython.lltypesystem import lltype, rffi
+from pypy.jit.metainterp.history import JitCellToken
 from pypy.jit.backend.model import CompiledLoopToken
 
 skip_unless_arm()
 
 CPU = getcpuclass()
 
+
 class TestRunningAssembler(object):
     def setup_method(self, method):
         cpu = CPU(None, None)
-        #lp = LoopToken()
-        #lp.compiled_loop_token = CompiledLoopToken(cpu, None)
         self.a = AssemblerARM(cpu)
         self.a.setup_once()
-        token = LoopToken()
+        token = JitCellToken()
         clt = CompiledLoopToken(cpu, 0)
         clt.allgcrefs = []
         token.compiled_loop_token = clt
@@ -33,7 +31,8 @@
     def test_make_operation_list(self):
         i = rop.INT_ADD
         from pypy.jit.backend.arm import assembler
-        assert assembler.asm_operations[i] is AssemblerARM.emit_op_int_add.im_func
+        assert assembler.asm_operations[i] \
+            is AssemblerARM.emit_op_int_add.im_func
 
     def test_load_small_int_to_reg(self):
         self.a.gen_func_prolog()
@@ -77,7 +76,6 @@
         self.a.gen_func_epilog()
         assert run_asm(self.a) == 464
 
-
     def test_or(self):
         self.a.gen_func_prolog()
         self.a.mc.MOV_ri(r.r1.value, 8)
@@ -115,7 +113,7 @@
         self.a.gen_func_prolog()
         self.a.mc.MOV_ri(r.r1.value, 1)
         loop_head = self.a.mc.currpos()
-        self.a.mc.CMP_ri(r.r1.value, 0) # z=0, z=1
+        self.a.mc.CMP_ri(r.r1.value, 0)  # z=0, z=1
         self.a.mc.MOV_ri(r.r1.value, 0, cond=c.NE)
         self.a.mc.MOV_ri(r.r1.value, 7, cond=c.EQ)
         self.a.mc.B_offs(loop_head, c.NE)
@@ -143,7 +141,8 @@
         self.a.mc.MOV_ri(r.r0.value, 123, cond=c.NE)
 
         for x in range(15):
-            self.a.mc.POP([reg.value for reg in r.callee_restored_registers], cond=c.NE)
+            self.a.mc.POP(
+                [reg.value for reg in r.callee_restored_registers], cond=c.NE)
 
         self.a.mc.MOV_ri(r.r1.value, 33)
         self.a.mc.MOV_ri(r.r0.value, 23)
@@ -160,7 +159,8 @@
         self.a.mc.MOV_ri(r.r0.value, 123, cond=c.NE)
 
         for x in range(100):
-            self.a.mc.POP([reg.value for reg in r.callee_restored_registers], cond=c.NE)
+            self.a.mc.POP(
+                [reg.value for reg in r.callee_restored_registers], cond=c.NE)
 
         self.a.mc.MOV_ri(r.r1.value, 33)
         self.a.mc.MOV_ri(r.r0.value, 23)
@@ -216,7 +216,6 @@
         self.a.gen_func_epilog()
         assert run_asm(self.a) == -36
 
-
    def test_bl_with_conditional_exec(self):
         functype = lltype.Ptr(lltype.FuncType([lltype.Signed], lltype.Signed))
         call_addr = rffi.cast(lltype.Signed, llhelper(functype, callme))
@@ -240,7 +239,7 @@
         assert run_asm(self.a) == 2478
 
     def test_load_store(self):
-        x =  0x60002224
+        x = 0x60002224
         self.a.gen_func_prolog()
         self.a.mc.gen_load_int(r.r1.value, x)
         self.a.mc.MOV_ri(r.r3.value, 8)
@@ -249,7 +248,7 @@
         self.a.gen_func_epilog()
         assert run_asm(self.a) == x
 
+
 def callme(inp):
     i = inp + 10
     return i
-
diff --git a/pypy/jit/backend/arm/test/test_calling_convention.py b/pypy/jit/backend/arm/test/test_calling_convention.py
--- a/pypy/jit/backend/arm/test/test_calling_convention.py
+++ b/pypy/jit/backend/arm/test/test_calling_convention.py
@@ -1,13 +1,15 @@
 from pypy.rpython.annlowlevel import llhelper
-from pypy.jit.metainterp.history import LoopToken
+from pypy.jit.metainterp.history import JitCellToken
 from pypy.jit.backend.test.calling_convention_test import TestCallingConv, parse
 from pypy.rpython.lltypesystem import lltype
 from pypy.jit.codewriter.effectinfo import EffectInfo
 from pypy.jit.backend.arm.test.support import skip_unless_arm
 skip_unless_arm()
 
-# ../../test/calling_convention_test.py
+
 class TestARMCallingConvention(TestCallingConv):
+    # ../../test/calling_convention_test.py
+
     def test_call_argument_spilling(self):
         # bug when we have a value in r0, that is overwritten by an argument
         # and needed after the call, so that the register gets spilled after it
@@ -28,11 +30,10 @@
             i99 = call(ConstClass(func_ptr), 22, descr=calldescr)
             finish(%s, i99)""" % (args, args)
         loop = parse(ops, namespace=locals())
-        looptoken = LoopToken()
+        looptoken = JitCellToken()
         self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken)
-        for x in range(11):
-            self.cpu.set_future_value_int(x, x)
-        self.cpu.execute_token(looptoken)
+        args = [x for x in range(11)]
+        self.cpu.execute_token(looptoken, *args)
         for x in range(11):
             assert self.cpu.get_latest_value_int(x) == x
         assert self.cpu.get_latest_value_int(11) == 38
diff --git a/pypy/jit/backend/arm/test/test_gc_integration.py b/pypy/jit/backend/arm/test/test_gc_integration.py
--- a/pypy/jit/backend/arm/test/test_gc_integration.py
+++ b/pypy/jit/backend/arm/test/test_gc_integration.py
@@ -3,63 +3,74 @@
 """
 
 import py
-from pypy.jit.metainterp.history import BoxInt, ConstInt,\
-     BoxPtr, ConstPtr, TreeLoop
+from pypy.jit.metainterp.history import BoxInt, \
+     BoxPtr, TreeLoop, TargetToken
 from pypy.jit.metainterp.resoperation import rop, ResOperation
 from pypy.jit.codewriter import heaptracker
 from pypy.jit.backend.llsupport.descr import GcCache
 from pypy.jit.backend.llsupport.gc import GcLLDescription
 from pypy.jit.backend.detect_cpu import getcpuclass
-from pypy.jit.backend.arm.regalloc import Regalloc
 from pypy.jit.backend.arm.arch import WORD
-from pypy.jit.tool.oparser import parse
 from pypy.rpython.lltypesystem import lltype, llmemory, rffi
 from pypy.rpython.annlowlevel import llhelper
-from pypy.rpython.lltypesystem import rclass, rstr
-from pypy.jit.backend.llsupport.gc import GcLLDescr_framework, GcPtrFieldDescr
+from pypy.rpython.lltypesystem import rclass
+from pypy.jit.backend.llsupport.gc import GcLLDescr_framework
 
 from pypy.jit.backend.arm.test.test_regalloc import MockAssembler
 from pypy.jit.backend.arm.test.test_regalloc import BaseTestRegalloc
-from pypy.jit.backend.arm.regalloc import ARMv7RegisterMananger, ARMFrameManager,\
-     VFPRegisterManager
+from pypy.jit.backend.arm.regalloc import ARMFrameManager, VFPRegisterManager
 from pypy.jit.codewriter.effectinfo import EffectInfo
 from pypy.jit.backend.arm.test.support import skip_unless_arm
+from pypy.jit.backend.arm.regalloc import Regalloc, ARMv7RegisterManager
 skip_unless_arm()
 
 CPU = getcpuclass()
 
+
 class MockGcRootMap(object):
     is_shadow_stack = False
+
     def get_basic_shape(self, is_64_bit):
         return ['shape']
+
     def add_frame_offset(self, shape, offset):
         shape.append(offset)
+
     def add_callee_save_reg(self, shape, reg_index):
-        index_to_name = { 1: 'ebx', 2: 'esi', 3: 'edi' }
+        index_to_name = {1: 'ebx', 2: 'esi', 3: 'edi'}
         shape.append(index_to_name[reg_index])
+
     def compress_callshape(self, shape, datablockwrapper):
         assert datablockwrapper == 'fakedatablockwrapper'
         assert shape[0] == 'shape'
         return ['compressed'] + shape[1:]
 
+
 class MockGcRootMap2(object):
     is_shadow_stack = False
+
     def get_basic_shape(self, is_64_bit):
         return ['shape']
+
     def add_frame_offset(self, shape, offset):
         shape.append(offset)
+
     def add_callee_save_reg(self, shape, reg_index):
-        index_to_name = { 1: 'ebx', 2: 'esi', 3: 'edi' }
+        index_to_name = {1: 'ebx', 2: 'esi', 3: 'edi'}
         shape.append(index_to_name[reg_index])
+
     def compress_callshape(self, shape, datablockwrapper):
         assert datablockwrapper == 'fakedatablockwrapper'
         assert shape[0] == 'shape'
         return ['compressed'] + shape[1:]
 
+
 class MockGcDescr(GcCache):
     is_shadow_stack = False
+
     def get_funcptr_for_new(self):
         return 123
+
     get_funcptr_for_newarray = get_funcptr_for_new
     get_funcptr_for_newstr = get_funcptr_for_new
     get_funcptr_for_newunicode = get_funcptr_for_new
@@ -74,13 +85,14 @@
     record_constptrs = GcLLDescr_framework.record_constptrs.im_func
     rewrite_assembler = GcLLDescr_framework.rewrite_assembler.im_func
 
+
 class TestRegallocDirectGcIntegration(object):
 
     def test_mark_gc_roots(self):
         py.test.skip('roots')
         cpu = CPU(None, None)
         cpu.setup_once()
-        regalloc = RegAlloc(MockAssembler(cpu, MockGcDescr(False)))
+        regalloc = Regalloc(MockAssembler(cpu, MockGcDescr(False)))
         regalloc.assembler.datablockwrapper = 'fakedatablockwrapper'
         boxes = [BoxPtr() for i in range(len(ARMv7RegisterManager.all_regs))]
         longevity = {}
@@ -95,7 +107,8 @@
         for box in boxes:
             regalloc.rm.try_allocate_reg(box)
         TP = lltype.FuncType([], lltype.Signed)
-        calldescr = cpu.calldescrof(TP, TP.ARGS, TP.RESULT, EffectInfo.MOST_GENERAL)
+        calldescr = cpu.calldescrof(TP, TP.ARGS, TP.RESULT,
+                                    EffectInfo.MOST_GENERAL)
         regalloc.rm._check_invariants()
         box = boxes[0]
         regalloc.position = 0
@@ -129,6 +142,7 @@
     descr0 = cpu.fielddescrof(S, 'int')
     ptr0 = struct_ref
 
+    targettoken = TargetToken()
     namespace = locals().copy()
 
@@ -153,6 +167,7 @@
     def test_bug_0(self):
         ops = '''
         [i0, i1, i2, i3, i4, i5, i6, i7, i8]
+        label(i0, i1, i2, i3, i4, i5, i6, i7, i8, descr=targettoken)
         guard_value(i2, 1) [i2, i3, i4, i5, i6, i7, i0, i1, i8]
         guard_class(i4, 138998336) [i4, i5, i6, i7, i0, i1, i8]
         i11 = getfield_gc(i4, descr=descr0)
@@ -180,7 +195,7 @@
         guard_false(i32) [i4, i6, i7, i0, i1, i24]
         i33 = getfield_gc(i0, descr=descr0)
         guard_value(i33, ConstPtr(ptr0)) [i4, i6, i7, i0, i1, i33, i24]
-        jump(i0, i1, 1, 17, i4, ConstPtr(ptr0), i6, i7, i24)
+        jump(i0, i1, 1, 17, i4, ConstPtr(ptr0), i6, i7, i24, descr=targettoken)
         '''
         self.interpret(ops, [0, 0, 0, 0, 0, 0, 0, 0, 0], run=False)
 
@@ -322,17 +337,22 @@
 
 class Seen(Exception):
     pass
 
+
 class GCDescrFastpathMallocVarsize(GCDescrFastpathMalloc):
     def can_inline_malloc_varsize(self, arraydescr, num_elem):
         return num_elem < 5
+
     def get_funcptr_for_newarray(self):
         return 52
+
     def init_array_descr(self, A, descr):
         descr.tid = self._counter
         self._counter += 1
+
     def args_for_new_array(self, descr):
         raise Seen("args_for_new_array")
 
+
 class TestMallocVarsizeFastpath(BaseTestRegalloc):
     def setup_method(self, method):
         cpu = CPU(None, None)
diff --git a/pypy/jit/backend/arm/test/test_generated.py b/pypy/jit/backend/arm/test/test_generated.py
--- a/pypy/jit/backend/arm/test/test_generated.py
+++ b/pypy/jit/backend/arm/test/test_generated.py
@@ -3,10 +3,10 @@
                                          AbstractDescr,
                                          BasicFailDescr,
                                          BoxInt, Box, BoxPtr,
-                                         LoopToken,
                                          ConstInt, ConstPtr,
                                          BoxObj, Const,
                                          ConstObj, BoxFloat, ConstFloat)
+from pypy.jit.metainterp.history import JitCellToken
 from pypy.jit.metainterp.resoperation import ResOperation, rop
 from pypy.rpython.test.test_llinterp import interpret
 from pypy.jit.backend.detect_cpu import getcpuclass
@@ -40,20 +40,11 @@
         ResOperation(rop.GUARD_TRUE, [v12], None, descr=faildescr1),
         ResOperation(rop.FINISH, [v9, v6, v10, v2, v8, v5, v1, v4], None, descr=faildescr2),
         ]
-    looptoken = LoopToken()
+    looptoken = JitCellToken()
     operations[2].setfailargs([v12, v8, v3, v2, v1, v11])
     cpu.compile_loop(inputargs, operations, looptoken)
-    cpu.set_future_value_int(0, -12)
-    cpu.set_future_value_int(1, -26)
-    cpu.set_future_value_int(2, -19)
-    cpu.set_future_value_int(3, 7)
-    cpu.set_future_value_int(4, -5)
-    cpu.set_future_value_int(5, -24)
-    cpu.set_future_value_int(6, -37)
-    cpu.set_future_value_int(7, 62)
-    cpu.set_future_value_int(8, 9)
-    cpu.set_future_value_int(9, 12)
-    op = cpu.execute_token(looptoken)
+    args = [-12 , -26 , -19 , 7 , -5 , -24 , -37 , 62 , 9 , 12]
+    op = cpu.execute_token(looptoken, *args)
     assert cpu.get_latest_value_int(0) == 0
     assert cpu.get_latest_value_int(1) == 62
     assert cpu.get_latest_value_int(2) == -19
@@ -101,19 +92,10 @@
         ]
     operations[2].setfailargs([v10, v6])
     operations[9].setfailargs([v15, v7, v10, v18, v4, v17, v1])
-    looptoken = LoopToken()
+    looptoken = JitCellToken()
     cpu.compile_loop(inputargs, operations, looptoken)
-    cpu.set_future_value_int(0, 16)
-    cpu.set_future_value_int(1, 5)
-    cpu.set_future_value_int(2, 5)
-    cpu.set_future_value_int(3, 16)
-    cpu.set_future_value_int(4, 46)
-    cpu.set_future_value_int(5, 6)
-    cpu.set_future_value_int(6, 63)
-    cpu.set_future_value_int(7, 39)
-    cpu.set_future_value_int(8, 78)
-    cpu.set_future_value_int(9, 0)
-    op = cpu.execute_token(looptoken)
+    args = [16 , 5 , 5 , 16 , 46 , 6 , 63 , 39 , 78 , 0]
+    op = cpu.execute_token(looptoken, *args)
     assert cpu.get_latest_value_int(0) == 105
     assert cpu.get_latest_value_int(1) == 63
     assert cpu.get_latest_value_int(2) == 0
@@ -152,18 +134,9 @@
         ]
     operations[2].setfailargs([v8, v3])
     operations[4].setfailargs([v2, v12, v1, v3, v4])
-    looptoken = LoopToken()
+    looptoken = JitCellToken()
     cpu.compile_loop(inputargs, operations, looptoken)
-    cpu.set_future_value_int(0, -5)
-    cpu.set_future_value_int(1, 24)
-    cpu.set_future_value_int(2, 46)
-    cpu.set_future_value_int(3, -15)
-    cpu.set_future_value_int(4, 13)
-    cpu.set_future_value_int(5, -8)
-    cpu.set_future_value_int(6, 0)
-    cpu.set_future_value_int(7, -6)
-    cpu.set_future_value_int(8, 6)
-    cpu.set_future_value_int(9, 6)
+    args = [-5 , 24 , 46 , -15 , 13 , -8 , 0 , -6 , 6 , 6]
     op = cpu.execute_token(looptoken)
     assert op.identifier == 2
     assert cpu.get_latest_value_int(0) == 24
@@ -203,19 +176,10 @@
         ResOperation(rop.FINISH, [v8, v2, v6, v5, v7, v1, v10], None, descr=faildescr2),
         ]
     operations[5].setfailargs([])
-    looptoken = LoopToken()
+    looptoken = JitCellToken()
     cpu.compile_loop(inputargs, operations, looptoken)
-    cpu.set_future_value_int(0, 19)
-    cpu.set_future_value_int(1, -3)
-    cpu.set_future_value_int(2, -58)
-    cpu.set_future_value_int(3, -7)
-    cpu.set_future_value_int(4, 12)
-    cpu.set_future_value_int(5, 22)
-    cpu.set_future_value_int(6, -54)
-    cpu.set_future_value_int(7, -29)
-    cpu.set_future_value_int(8, -19)
-    cpu.set_future_value_int(9, -64)
-    op = cpu.execute_token(looptoken)
+    args = [19 , -3 , -58 , -7 , 12 , 22 , -54 , -29 , -19 , -64]
+    op = cpu.execute_token(looptoken, *args)
     assert cpu.get_latest_value_int(0) == -29
     assert cpu.get_latest_value_int(1) == -3
    assert cpu.get_latest_value_int(2) == 22
@@ -254,20 +218,11 @@
         ResOperation(rop.GUARD_NO_OVERFLOW, [], None, descr=faildescr1),
         ResOperation(rop.FINISH, [v1, v4, v10, v8, v7, v3], None, descr=faildescr2),
         ]
-    looptoken = LoopToken()
+    looptoken = JitCellToken()
     operations[5].setfailargs([])
     cpu.compile_loop(inputargs, operations, looptoken)
-    cpu.set_future_value_int(0, 1073741824)
-    cpu.set_future_value_int(1, 95)
-    cpu.set_future_value_int(2, -16)
-    cpu.set_future_value_int(3, 5)
-    cpu.set_future_value_int(4, 92)
-    cpu.set_future_value_int(5, 12)
-    cpu.set_future_value_int(6, 32)
-    cpu.set_future_value_int(7, 17)
-    cpu.set_future_value_int(8, 37)

From noreply at buildbot.pypy.org  Tue Jan  3 15:03:57 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 3 Jan 2012 15:03:57 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: changes due to merge
Message-ID: <20120103140357.39AD182B1C@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r50998:d3d90edc432a
Date: 2012-01-03 15:03 +0100
http://bitbucket.org/pypy/pypy/changeset/d3d90edc432a/

Log:	changes due to merge

diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/ppcgen/codebuilder.py
--- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py
+++ b/pypy/jit/backend/ppc/ppcgen/codebuilder.py
@@ -11,8 +11,8 @@
 from pypy.jit.backend.ppc.ppcgen.helper.assembler import gen_emit_cmp_op
 import pypy.jit.backend.ppc.ppcgen.register as r
 import pypy.jit.backend.ppc.ppcgen.condition as c
-from pypy.jit.metainterp.history import (Const, ConstPtr, LoopToken,
-                                         AbstractFailDescr)
+from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken,
+                                         TargetToken, AbstractFailDescr)
 from pypy.jit.backend.llsupport.asmmemmgr import (BlockBuilderMixin,
                                                   AsmMemoryManager,
                                                   MachineDataBlockWrapper)
 from pypy.jit.backend.llsupport.regalloc import (RegisterManager,
                                                  compute_vars_longevity)
diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py
--- a/pypy/jit/backend/ppc/ppcgen/opassembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py
@@ -6,8 +6,8 @@
                                                   GPR_SAVE_AREA,
                                                   BACKCHAIN_SIZE,
                                                   MAX_REG_PARAMS)
-from pypy.jit.metainterp.history import (LoopToken, AbstractFailDescr, FLOAT,
-                                         INT)
+from pypy.jit.metainterp.history import (JitCellToken, TargetToken,
+                                         AbstractFailDescr, FLOAT, INT)
 from pypy.rlib.objectmodel import we_are_translated
 from pypy.jit.backend.ppc.ppcgen.helper.assembler import (count_reg_args,
                                                           Saved_Volatiles)
diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
--- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
@@ -21,8 +21,8 @@
                                                           Saved_Volatiles)
 import pypy.jit.backend.ppc.ppcgen.register as r
 import pypy.jit.backend.ppc.ppcgen.condition as c
-from pypy.jit.metainterp.history import (Const, ConstPtr, LoopToken,
-                                         AbstractFailDescr)
+from pypy.jit.metainterp.history import (Const, ConstPtr, JitCellToken,
+                                         TargetToken, AbstractFailDescr)
 from pypy.jit.backend.llsupport.asmmemmgr import (BlockBuilderMixin,
                                                   AsmMemoryManager,
                                                   MachineDataBlockWrapper)
@@ -93,10 +93,6 @@
         self.fail_boxes_int = values_array(lltype.Signed, failargs_limit)
         self.fail_boxes_ptr = values_array(llmemory.GCREF, failargs_limit)
         self.mc = None
-        self.malloc_func_addr = 0
-        self.malloc_array_func_addr = 0
-        self.malloc_str_func_addr = 0
-        self.malloc_unicode_func_addr = 0
         self.datablockwrapper = None
         self.memcpy_addr = 0
         self.fail_boxes_count = 0
@@ -466,21 +462,7 @@
     def setup_once(self):
         gc_ll_descr = self.cpu.gc_ll_descr
         gc_ll_descr.initialize()
-        ll_new = gc_ll_descr.get_funcptr_for_new()
-        self.malloc_func_addr = rffi.cast(lltype.Signed, ll_new)
         self._build_propagate_exception_path()
-        if gc_ll_descr.get_funcptr_for_newarray is not None:
-            ll_new_array = gc_ll_descr.get_funcptr_for_newarray()
-            self.malloc_array_func_addr = rffi.cast(lltype.Signed,
-                                                    ll_new_array)
-        if gc_ll_descr.get_funcptr_for_newstr is not None:
-            ll_new_str = gc_ll_descr.get_funcptr_for_newstr()
-            self.malloc_str_func_addr = rffi.cast(lltype.Signed,
-                                                  ll_new_str)
-        if gc_ll_descr.get_funcptr_for_newunicode is not None:
-            ll_new_unicode = gc_ll_descr.get_funcptr_for_newunicode()
-            self.malloc_unicode_func_addr = rffi.cast(lltype.Signed,
-                                                      ll_new_unicode)
         self.memcpy_addr = self.cpu.cast_ptr_to_int(memcpy_fn)
         self.setup_failure_recovery()
         self.exit_code_adr = self._gen_exit_path()
diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py
--- a/pypy/jit/backend/ppc/ppcgen/regalloc.py
+++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py
@@ -1,6 +1,5 @@
 from pypy.jit.backend.llsupport.regalloc import (RegisterManager, FrameManager,
-                                                 TempBox, compute_vars_longevity,
-                                                 compute_loop_consts)
+                                                 TempBox, compute_vars_longevity)
 from pypy.jit.backend.ppc.ppcgen.arch import (WORD, MY_COPY_OF_REGS)
 from pypy.jit.backend.ppc.ppcgen.jump import remap_frame_layout_mixed
 from pypy.jit.backend.ppc.ppcgen.locations import imm
@@ -11,9 +10,8 @@
                                                          prepare_binary_int_op_with_imm,
                                                          prepare_unary_cmp)
 from pypy.jit.metainterp.history import (INT, REF, FLOAT, Const, ConstInt,
-                                         ConstPtr, LoopToken, Box)
-from pypy.jit.backend.llsupport.descr import BaseFieldDescr, BaseArrayDescr, \
-                                             BaseCallDescr, BaseSizeDescr
+                                         ConstPtr, Box)
+from pypy.jit.metainterp.history import JitCellToken, TargetToken
 from pypy.jit.metainterp.resoperation import rop
 from pypy.jit.backend.ppc.ppcgen import locations
 from pypy.rpython.lltypesystem import rffi, lltype, rstr
diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py
--- a/pypy/jit/backend/test/runner_test.py
+++ b/pypy/jit/backend/test/runner_test.py
@@ -150,8 +150,6 @@
         i0 = BoxInt()
         i1 = BoxInt()
         i2 = BoxInt()
-        looptoken = LoopToken()
-        operations = [
         looptoken = JitCellToken()
         targettoken = TargetToken()
         operations = [

From noreply at buildbot.pypy.org  Tue Jan  3 17:06:42 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 3 Jan 2012 17:06:42 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: (edelsohn, hager): corrected diagram of stackframes
Message-ID: <20120103160642.7C2C882B1C@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r50999:59f9ad63fcbc
Date: 2012-01-03 17:05 +0100
http://bitbucket.org/pypy/pypy/changeset/59f9ad63fcbc/

Log:	(edelsohn, hager): corrected diagram of stackframes

diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py b/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py
--- a/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py
+++ b/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py
@@ -32,7 +32,11 @@
                 |                           |
                 ---------------------------  --
     (64 Bit)    |        TOC POINTER        | WORD |
-                ---------------------------  |
+                ---------------------------  --
+                |                           |
+    (64 Bit)    |   RESERVED FOR COMPILER   | |>> 2 * WORD
+                |        AND LINKER         | |
+                ---------------------------  --
                 |          SAVED LR         | WORD |
                 ---------------------------  |>> 4 WORDS
(64 Bit) (64 Bit) | SAVED CR | WORD | 2 WORDS (32 Bit) From noreply at buildbot.pypy.org Tue Jan 3 17:28:59 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 3 Jan 2012 17:28:59 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: fixed typo in diagram Message-ID: <20120103162859.08DA082B1C@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51000:c0bb80a287e9 Date: 2012-01-03 17:28 +0100 http://bitbucket.org/pypy/pypy/changeset/c0bb80a287e9/ Log: fixed typo in diagram diff --git a/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py b/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py --- a/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py +++ b/pypy/jit/backend/ppc/ppcgen/test/test_stackframe.py @@ -38,7 +38,7 @@ | AND LINKER | | --------------------------- -- | SAVED LR | WORD | - --------------------------- |>> 4 WORDS (64 Bit) + --------------------------- |>> 3 WORDS (64 Bit) (64 Bit) | SAVED CR | WORD | 2 WORDS (32 Bit) --------------------------- | | BACK CHAIN | WORD | From noreply at buildbot.pypy.org Tue Jan 3 17:36:22 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 3 Jan 2012 17:36:22 +0100 (CET) Subject: [pypy-commit] pypy default: small beautification Message-ID: <20120103163622.46E2582B1C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51001:1946d8c1d887 Date: 2012-01-03 10:36 -0600 http://bitbucket.org/pypy/pypy/changeset/1946d8c1d887/ Log: small beautification diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -424,13 +424,10 @@ res.append(')') else: concrete.to_str(space, 1, res, indent=' ') - if (dtype is interp_dtype.get_dtype_cache(space).w_float64dtype or \ - dtype.kind == interp_dtype.SIGNEDLTR and \ - dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) \ - and self.size: - # Do not print dtype - pass - 
else: + if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and + not (dtype.kind == interp_dtype.SIGNEDLTR and + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or + not self.size): res.append(", dtype=" + dtype.name) res.append(")") return space.wrap(res.build()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -184,15 +184,16 @@ from numpypy import array x = array(0.2) assert x.ndim == 0 - x = array([1,2]) + x = array([1, 2]) assert x.ndim == 1 - x = array([[1,2], [3,4]]) + x = array([[1, 2], [3, 4]]) assert x.ndim == 2 - x = array([[[1,2], [3,4]], [[5,6], [7,8]] ]) + x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) assert x.ndim == 3 - # numpy actually raises an AttributeError, but numpypy raises an AttributeError - raises (TypeError, 'x.ndim=3') - + # numpy actually raises an AttributeError, but numpypy raises an + # AttributeError + raises (TypeError, 'x.ndim = 3') + def test_init(self): from numpypy import zeros a = zeros(15) @@ -1361,7 +1362,7 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): from numpypy import array, zeros - intSize = array(5).dtype.itemsize + int_size = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -1369,12 +1370,12 @@ a = zeros(1001) assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" a = array(range(5), long) - if a.dtype.itemsize == intSize: + if a.dtype.itemsize == int_size: assert repr(a) == "array([0, 1, 2, 3, 4])" else: assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" a = array(range(5), 'int32') - if a.dtype.itemsize == intSize: + if a.dtype.itemsize == int_size: assert repr(a) == "array([0, 1, 2, 3, 4])" else: assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" From noreply at buildbot.pypy.org Tue Jan 3 22:19:30 2012 From: 
noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 3 Jan 2012 22:19:30 +0100 (CET) Subject: [pypy-commit] pypy default: fix comment Message-ID: <20120103211930.50F6182B1C@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51002:ece227c225ab Date: 2012-01-03 15:19 -0600 http://bitbucket.org/pypy/pypy/changeset/ece227c225ab/ Log: fix comment diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -191,8 +191,8 @@ x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) assert x.ndim == 3 # numpy actually raises an AttributeError, but numpypy raises an - # AttributeError - raises (TypeError, 'x.ndim = 3') + # TypeError + raises(TypeError, 'x.ndim = 3') def test_init(self): from numpypy import zeros From pullrequests-noreply at bitbucket.org Tue Jan 3 23:03:23 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Tue, 03 Jan 2012 22:03:23 -0000 Subject: [pypy-commit] [ACCEPTED] Pull request #1 for pypy/benchmarks: Added GZip benchmark In-Reply-To: References: Message-ID: <20120103220323.27781.38317@bitbucket05.managed.contegix.com> Pull request #1 has been accepted by Alex Gaynor. Changes in jonashaag/benchmarks have been pulled into pypy/benchmarks. https://bitbucket.org/pypy/benchmarks/pull-request/1/added-gzip-benchmark -- This is an issue notification from bitbucket.org. You are receiving this either because you are participating in a pull request, or you are following it.
From noreply at buildbot.pypy.org Tue Jan 3 23:03:29 2012 From: noreply at buildbot.pypy.org (jonashaag) Date: Tue, 3 Jan 2012 23:03:29 +0100 (CET) Subject: [pypy-commit] benchmarks default: Added GZip benchmark Message-ID: <20120103220329.00B3382B1C@wyvern.cs.uni-duesseldorf.de> Author: Jonas Haag Branch: Changeset: r155:783f9a3c79e7 Date: 2011-11-24 10:43 +0100 http://bitbucket.org/pypy/benchmarks/changeset/783f9a3c79e7/ Log: Added GZip benchmark diff --git a/own/bm_gzip.py b/own/bm_gzip.py new file mode 100644 --- /dev/null +++ b/own/bm_gzip.py @@ -0,0 +1,45 @@ +import os +import time +import shutil +import tempfile +import tarfile + +DJANGO_DIR = os.path.join(os.path.dirname(__file__), os.pardir, + 'unladen_swallow', 'lib', 'django') + +def _bootstrap(): + fd, archive = tempfile.mkstemp() + os.close(fd) + with tarfile.open(archive, 'w:gz') as targz: + targz.add(DJANGO_DIR) + return archive + +def bench(archive): + dest = tempfile.mkdtemp() + try: + with tarfile.open(archive) as targz: + targz.extractall(dest) + finally: + shutil.rmtree(dest) + +def main(n): + archive = _bootstrap() + try: + times = [] + for k in range(n): + t0 = time.time() + bench(archive) + times.append(time.time() - t0) + return times + finally: + os.remove(archive) + +if __name__ == '__main__': + import util, optparse + parser = optparse.OptionParser( + usage="%prog [options]", + description="Test the performance of the GZip decompression benchmark") + util.add_standard_options_to(parser) + options, args = parser.parse_args() + + util.run_benchmark(options, options.num_runs, main) From noreply at buildbot.pypy.org Tue Jan 3 23:04:51 2012 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 3 Jan 2012 23:04:51 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: merge with default Message-ID: <20120103220451.549E182B1C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51003:20bacb9bb86a Date: 2012-01-03 23:36 +0200 
http://bitbucket.org/pypy/pypy/changeset/20bacb9bb86a/ Log: merge with default diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. -PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -420,8 +420,8 @@ debug._log = None # assert ops_offset is looptoken._x86_ops_offset - # getfield_raw/int_add/setfield_raw + ops + None - assert len(ops_offset) == 3 + len(operations) + 1 + # 2*(getfield_raw/int_add/setfield_raw) + ops + None + assert len(ops_offset) == 2*3 + len(operations) + 1 assert (ops_offset[operations[0]] <= ops_offset[operations[1]] <= ops_offset[operations[2]] <= diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -169,10 +169,10 @@ loop.original_jitcell_token = jitcell_token for label in all_target_tokens: assert isinstance(label, TargetToken) - label.original_jitcell_token = jitcell_token if label.virtual_state and label.short_preamble: metainterp_sd.logger_ops.log_short_preamble([], label.short_preamble) jitcell_token.target_tokens = all_target_tokens + propagate_original_jitcell_token(loop) send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop") record_loop_or_bridge(metainterp_sd, loop) return all_target_tokens[0] @@ -240,11 +240,11 @@ for box in loop.inputargs: assert isinstance(box, Box) - target_token = loop.operations[-1].getdescr() + target_token = loop.operations[-1].getdescr() resumekey.compile_and_attach(metainterp, loop) + target_token = label.getdescr() assert isinstance(target_token, TargetToken) - target_token.original_jitcell_token = loop.original_jitcell_token 
record_loop_or_bridge(metainterp_sd, loop) return target_token @@ -281,6 +281,15 @@ assert i == len(inputargs) loop.operations = extra_ops + loop.operations +def propagate_original_jitcell_token(trace): + for op in trace.operations: + if op.getopnum() == rop.LABEL: + token = op.getdescr() + assert isinstance(token, TargetToken) + assert token.original_jitcell_token is None + token.original_jitcell_token = trace.original_jitcell_token + + def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type): vinfo = jitdriver_sd.virtualizable_info if vinfo is not None: @@ -554,6 +563,7 @@ inputargs = metainterp.history.inputargs if not we_are_translated(): self._debug_suboperations = new_loop.operations + propagate_original_jitcell_token(new_loop) send_bridge_to_backend(metainterp.jitdriver_sd, metainterp.staticdata, self, inputargs, new_loop.operations, new_loop.original_jitcell_token) @@ -740,6 +750,7 @@ jitdriver_sd = metainterp.jitdriver_sd redargs = new_loop.inputargs new_loop.original_jitcell_token = jitcell_token = make_jitcell_token(jitdriver_sd) + propagate_original_jitcell_token(new_loop) send_loop_to_backend(self.original_greenkey, metainterp.jitdriver_sd, metainterp_sd, new_loop, "entry bridge") # send the new_loop to warmspot.py, to be called directly the next time diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -386,6 +386,17 @@ """ self.optimize_loop(ops, expected) + def test_virtual_as_field_of_forced_box(self): + ops = """ + [p0] + pv1 = new_with_vtable(ConstClass(node_vtable)) + label(pv1, p0) + pv2 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(pv2, pv1, descr=valuedescr) + jump(pv1, pv2) + """ + with raises(InvalidLoop): + self.optimize_loop(ops, ops) class OptRenameStrlen(Optimization): def propagate_forward(self, op): diff 
--git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -409,7 +409,13 @@ if self.level == LEVEL_CONSTANT: return assert 0 <= self.position_in_notvirtuals - boxes[self.position_in_notvirtuals] = value.force_box(optimizer) + if optimizer: + box = value.force_box(optimizer) + else: + if value.is_virtual(): + raise BadVirtualState + box = value.get_key_box() + boxes[self.position_in_notvirtuals] = box def _enum(self, virtual_state): if self.level == LEVEL_CONSTANT: @@ -471,8 +477,14 @@ optimizer = optimizer.optearlyforce assert len(values) == len(self.state) inputargs = [None] * len(self.notvirtuals) + + # We try twice. The first time around we allow boxes to be forced + # which might change the virtual state if the box appear in more + # than one place among the inputargs. for i in range(len(values)): self.state[i].enum_forced_boxes(inputargs, values[i], optimizer) + for i in range(len(values)): + self.state[i].enum_forced_boxes(inputargs, values[i], None) if keyboxes: for i in range(len(values)): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -976,10 +976,13 @@ self.verify_green_args(jitdriver_sd, greenboxes) self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.in_recursion, greenboxes) - + if self.metainterp.seen_loop_header_for_jdindex < 0: - if not jitdriver_sd.no_loop_header or not any_operation: + if not any_operation: return + if self.metainterp.in_recursion or not self.metainterp.get_procedure_token(greenboxes, True): + if not jitdriver_sd.no_loop_header: + return # automatically add a loop_header if there is none self.metainterp.seen_loop_header_for_jdindex = jdindex # @@ -2053,9 +2056,15 @@ from pypy.jit.metainterp.resoperation import opname raise NotImplementedError(opname[opnum]) - 
def get_procedure_token(self, greenkey): + def get_procedure_token(self, greenkey, with_compiled_targets=False): cell = self.jitdriver_sd.warmstate.jit_cell_at_key(greenkey) - return cell.get_procedure_token() + token = cell.get_procedure_token() + if with_compiled_targets: + if not token: + return None + if not token.target_tokens: + return None + return token def compile_loop(self, original_boxes, live_arg_boxes, start, resume_at_jump_descr): num_green_args = self.jitdriver_sd.num_green_args @@ -2088,11 +2097,9 @@ def compile_trace(self, live_arg_boxes, resume_at_jump_descr): num_green_args = self.jitdriver_sd.num_green_args greenkey = live_arg_boxes[:num_green_args] - target_jitcell_token = self.get_procedure_token(greenkey) + target_jitcell_token = self.get_procedure_token(greenkey, True) if not target_jitcell_token: return - if not target_jitcell_token.target_tokens: - return self.history.record(rop.JUMP, live_arg_boxes[num_green_args:], None, descr=target_jitcell_token) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2697,7 +2697,7 @@ # bridge back to the preamble of the first loop is produced. A guard in # this bridge is later traced resulting in a failed attempt of retracing # the second loop. - self.check_trace_count(8) + self.check_trace_count(9) # FIXME: Add a gloabl retrace counter and test that we are not trying more than 5 times. 
diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -756,7 +756,7 @@ res = self.meta_interp(interpret, [1]) assert res == interpret(1) # XXX it's unsure how many loops should be there - self.check_trace_count(3) + self.check_trace_count(2) def test_path_with_operations_not_from_start(self): jitdriver = JitDriver(greens = ['k'], reds = ['n', 'z']) diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py --- a/pypy/jit/metainterp/test/test_virtual.py +++ b/pypy/jit/metainterp/test/test_virtual.py @@ -880,7 +880,7 @@ elif op == 'j': j = Int(0) elif op == '+': - sa += i.val * j.val + sa += (i.val + 2) * (j.val + 2) elif op == 'a': i = Int(i.val + 1) elif op == 'b': @@ -902,6 +902,7 @@ assert res == f(10) self.check_aborted_count(0) self.check_target_token_count(3) + self.check_resops(int_mul=2) def test_nested_loops_bridge(self): class Int(object): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -384,6 +384,9 @@ def descr_get_dtype(self, space): return space.wrap(self.find_dtype()) + def descr_get_ndim(self, space): + return space.wrap(len(self.shape)) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -413,7 +416,7 @@ def descr_repr(self, space): res = StringBuilder() res.append("array(") - concrete = self.get_concrete() + concrete = self.get_concrete_or_scalar() dtype = concrete.find_dtype() if not concrete.size: res.append('[]') @@ -426,8 +429,9 @@ else: concrete.to_str(space, 1, res, indent=' ') if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \ - not self.size: + not (dtype.kind == 
interp_dtype.SIGNEDLTR and + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or + not self.size): res.append(", dtype=" + dtype.name) res.append(")") return space.wrap(res.build()) @@ -910,78 +914,80 @@ each line will begin with indent. ''' size = self.size + ccomma = ',' * comma + ncomma = ',' * (1 - comma) + dtype = self.find_dtype() if size < 1: builder.append('[]') return + elif size == 1: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True - dtype = self.find_dtype() ndims = len(self.shape) i = 0 - start = True builder.append('[') if ndims > 1: if use_ellipsis: - for i in range(3): - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + for i in range(min(3, self.shape[0])): + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) view = self.create_slice([(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) - builder.append('\n' + indent + '..., ') - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + builder.append(ccomma +'\n' + indent + '...' + ncomma) + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) + # create_slice requires len(chunks) > 1 in order to reduce + # shape view = self.create_slice([(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: - spacer = ',' * comma + ' ' + spacer = ccomma + ' ' item = self.start # An iterator would be a nicer way to walk along the 1d array, but # how do I reset it if printing ellipsis? 
iterators have no # "set_offset()" i = 0 if use_ellipsis: - for i in range(3): - if start: - start = False - else: + for i in range(min(3, self.shape[0])): + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] - # Add a comma only if comma is False - this prevents adding two - # commas - builder.append(spacer + '...' + ',' * (1 - comma)) - # Ugly, but can this be done with an iterator? - item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + # Add a comma only if comma is False - this prevents adding + # two commas + builder.append(spacer + '...' + ncomma) + # Ugly, but can this be done with an iterator? + item = self.start + self.backstrides[0] - 2 * self.strides[0] + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] i += 1 - else: - builder.append('[') builder.append(']') @jit.unroll_safe @@ -1254,6 +1260,7 @@ shape = GetSetProperty(BaseArray.descr_get_shape, BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), + ndim = GetSetProperty(BaseArray.descr_get_ndim), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -158,6 +158,7 @@ assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): from numpypy import ndarray, array, dtype @@ -179,6 +180,20 @@ ar = array(range(5)) assert type(ar) is type(ar + ar) + def test_ndim(self): + from numpypy import array + x = 
array(0.2) + assert x.ndim == 0 + x = array([1, 2]) + assert x.ndim == 1 + x = array([[1, 2], [3, 4]]) + assert x.ndim == 2 + x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert x.ndim == 3 + # numpy actually raises an AttributeError, but numpypy raises an + # TypeError + raises(TypeError, 'x.ndim = 3') + def test_init(self): from numpypy import zeros a = zeros(15) @@ -1241,6 +1256,7 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct @@ -1354,6 +1370,7 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): from numpypy import array, zeros + int_size = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -1361,14 +1378,26 @@ a = zeros(1001) assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" a = array(range(5), long) - assert repr(a) == "array([0, 1, 2, 3, 4])" + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" + a = array(range(5), 'int32') + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" a = array([], long) assert repr(a) == "array([], dtype=int64)" a = array([True, False, True, False], "?") assert repr(a) == "array([True, False, True, False], dtype=bool)" + a = zeros([]) + assert repr(a) == "array(0.0)" + a = array(0.2) + assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import array, zeros + from numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1381,6 +1410,16 @@ [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]])''' + a = arange(1002).reshape((2, 501)) + assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500], + [501, 502, 503, ..., 999, 
1000, 1001]])''' + assert repr(a.T) == '''array([[0, 501], + [1, 502], + [2, 503], + ..., + [498, 999], + [499, 1000], + [500, 1001]])''' def test_repr_slice(self): from numpypy import array, zeros @@ -1424,7 +1463,7 @@ a = zeros((400, 400), dtype=int) assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \ + " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \ " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" a = zeros((2, 2, 2)) r = str(a) diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -8,10 +8,12 @@ from pypy.tool import logparser from pypy.jit.tool.jitoutput import parse_prof from pypy.module.pypyjit.test_pypy_c.model import (Log, find_ids_range, - find_ids, TraceWithIds, + find_ids, OpMatcher, InvalidMatch) class BaseTestPyPyC(object): + log_string = 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary' + def setup_class(cls): if '__pypy__' not in sys.builtin_module_names: py.test.skip("must run this test with pypy") @@ -52,8 +54,7 @@ cmdline += ['--jit', ','.join(jitcmdline)] cmdline.append(str(self.filepath)) # - print cmdline, logfile - env={'PYPYLOG': 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary:' + str(logfile)} + env={'PYPYLOG': self.log_string + ':' + str(logfile)} pipe = subprocess.Popen(cmdline, env=env, stdout=subprocess.PIPE, diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py --- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py +++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py @@ -98,7 +98,8 @@ end = time.time() return end - start # - log = self.run(main, [get_libc_name(), 200], threshold=150) + log = self.run(main, [get_libc_name(), 200], threshold=150, + import_site=True) assert 1 <= log.result <= 1.5 # at most 0.5 seconds of overhead 
loops = log.loops_by_id('sleep') assert len(loops) == 1 # make sure that we actually JITted the loop @@ -121,7 +122,7 @@ return fabs._ptr.getaddr(), x libm_name = get_libm_name(sys.platform) - log = self.run(main, [libm_name]) + log = self.run(main, [libm_name], import_site=True) fabs_addr, res = log.result assert res == -4.0 loop, = log.loops_by_filename(self.filepath) diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -15,7 +15,7 @@ i += letters[i % len(letters)] == uletters[i % len(letters)] return i - log = self.run(main, [300]) + log = self.run(main, [300], import_site=True) assert log.result == 300 loop, = log.loops_by_filename(self.filepath) assert loop.match(""" @@ -55,7 +55,7 @@ i += int(long(string.digits[i % len(string.digits)], 16)) return i - log = self.run(main, [1100]) + log = self.run(main, [1100], import_site=True) assert log.result == main(1100) loop, = log.loops_by_filename(self.filepath) assert loop.match(""" diff --git a/pypy/module/sys/app.py b/pypy/module/sys/app.py --- a/pypy/module/sys/app.py +++ b/pypy/module/sys/app.py @@ -66,11 +66,11 @@ return None copyright_str = """ -Copyright 2003-2011 PyPy development team. +Copyright 2003-2012 PyPy development team. All Rights Reserved. For further information, see -Portions Copyright (c) 2001-2008 Python Software Foundation. +Portions Copyright (c) 2001-2012 Python Software Foundation. All Rights Reserved. Portions Copyright (c) 2000 BeOpen.com. 
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -528,6 +528,9 @@ set_param(driver, name1, int(value)) except ValueError: raise + break + else: + raise ValueError set_user_param._annspecialcase_ = 'specialize:arg(0)' diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -32,6 +32,9 @@ def getres(self): return self.res + def getdescr(self): + return self.descr + def is_guard(self): return self._is_guard @@ -182,7 +185,10 @@ return self.code.map[self.bytecode_no] def getlineno(self): - return self.getopcode().lineno + code = self.getopcode() + if code is None: + return None + return code.lineno lineno = property(getlineno) def getline_starts_here(self): diff --git a/pypy/tool/jitlogparser/storage.py b/pypy/tool/jitlogparser/storage.py --- a/pypy/tool/jitlogparser/storage.py +++ b/pypy/tool/jitlogparser/storage.py @@ -6,7 +6,6 @@ import py import os from lib_pypy.disassembler import dis -from pypy.tool.jitlogparser.parser import Function from pypy.tool.jitlogparser.module_finder import gather_all_code_objs class LoopStorage(object): diff --git a/pypy/translator/sandbox/pypy_interact.py b/pypy/translator/sandbox/pypy_interact.py --- a/pypy/translator/sandbox/pypy_interact.py +++ b/pypy/translator/sandbox/pypy_interact.py @@ -13,7 +13,8 @@ ATM this only works with PyPy translated with Boehm or the semispace or generation GCs. --timeout=N limit execution time to N (real-time) seconds. - --log=FILE log all user input into the FILE + --log=FILE log all user input into the FILE. + --verbose log all proxied system calls. 
Note that you can get readline-like behavior with a tool like 'ledit', provided you use enough -u options: @@ -30,15 +31,15 @@ LIB_ROOT = os.path.dirname(os.path.dirname(pypy.__file__)) class PyPySandboxedProc(VirtualizedSandboxedProc, SimpleIOSandboxedProc): - debug = True argv0 = '/bin/pypy-c' virtual_cwd = '/tmp' virtual_env = {} virtual_console_isatty = True - def __init__(self, executable, arguments, tmpdir=None): + def __init__(self, executable, arguments, tmpdir=None, debug=True): self.executable = executable = os.path.abspath(executable) self.tmpdir = tmpdir + self.debug = debug super(PyPySandboxedProc, self).__init__([self.argv0] + arguments, executable=executable) @@ -68,12 +69,13 @@ if __name__ == '__main__': from getopt import getopt # and not gnu_getopt! - options, arguments = getopt(sys.argv[1:], 't:h', + options, arguments = getopt(sys.argv[1:], 't:hv', ['tmp=', 'heapsize=', 'timeout=', 'log=', - 'help']) + 'verbose', 'help']) tmpdir = None timeout = None logfile = None + debug = False extraoptions = [] def help(): @@ -105,6 +107,8 @@ timeout = int(value) elif option == '--log': logfile = value + elif option in ['-v', '--verbose']: + debug = True elif option in ['-h', '--help']: help() else: @@ -114,7 +118,7 @@ help() sandproc = PyPySandboxedProc(arguments[0], extraoptions + arguments[1:], - tmpdir=tmpdir) + tmpdir=tmpdir, debug=debug) if timeout is not None: sandproc.settimeout(timeout, interrupt_main=True) if logfile is not None: diff --git a/pypy/translator/sandbox/sandlib.py b/pypy/translator/sandbox/sandlib.py --- a/pypy/translator/sandbox/sandlib.py +++ b/pypy/translator/sandbox/sandlib.py @@ -4,25 +4,29 @@ for the outer process, which can run CPython or PyPy. 
""" -import py import sys, os, posixpath, errno, stat, time -from pypy.tool.ansi_print import AnsiLog import subprocess from pypy.tool.killsubprocess import killsubprocess from pypy.translator.sandbox.vfs import UID, GID +import py -class MyAnsiLog(AnsiLog): - KW_TO_COLOR = { - 'call': ((34,), False), - 'result': ((34,), False), - 'exception': ((34,), False), - 'vpath': ((35,), False), - 'timeout': ((1, 31), True), - } +def create_log(): + """Make and return a log for the sandbox to use, if needed.""" + # These imports are local to avoid importing pypy if we don't need to. + from pypy.tool.ansi_print import AnsiLog -log = py.log.Producer("sandlib") -py.log.setconsumer("sandlib", MyAnsiLog()) + class MyAnsiLog(AnsiLog): + KW_TO_COLOR = { + 'call': ((34,), False), + 'result': ((34,), False), + 'exception': ((34,), False), + 'vpath': ((35,), False), + 'timeout': ((1, 31), True), + } + log = py.log.Producer("sandlib") + py.log.setconsumer("sandlib", MyAnsiLog()) + return log # Note: we use lib_pypy/marshal.py instead of the built-in marshal # for two reasons. The built-in module could be made to segfault @@ -127,6 +131,7 @@ for the external functions xxx that you want to support. """ debug = False + log = None os_level_sandboxing = False # Linux only: /proc/PID/seccomp def __init__(self, args, executable=None): @@ -143,6 +148,9 @@ self.currenttimeout = None self.currentlyidlefrom = None + if self.debug: + self.log = create_log() + def withlock(self, function, *args, **kwds): lock = self.popenlock if lock is not None: @@ -170,7 +178,8 @@ if delay <= 0.0: break # expired! 
time.sleep(min(delay*1.001, 1)) - log.timeout("timeout!") + if self.log: + self.log.timeout("timeout!") self.kill() #if interrupt_main: # if hasattr(os, 'kill'): @@ -247,22 +256,22 @@ args = read_message(child_stdout) except EOFError, e: break - if self.debug and not self.is_spam(fnname, *args): - log.call('%s(%s)' % (fnname, + if self.log and not self.is_spam(fnname, *args): + self.log.call('%s(%s)' % (fnname, ', '.join([shortrepr(x) for x in args]))) try: answer, resulttype = self.handle_message(fnname, *args) except Exception, e: tb = sys.exc_info()[2] write_exception(child_stdin, e, tb) - if self.debug: + if self.log: if str(e): - log.exception('%s: %s' % (e.__class__.__name__, e)) + self.log.exception('%s: %s' % (e.__class__.__name__, e)) else: - log.exception('%s' % (e.__class__.__name__,)) + self.log.exception('%s' % (e.__class__.__name__,)) else: - if self.debug and not self.is_spam(fnname, *args): - log.result(shortrepr(answer)) + if self.log and not self.is_spam(fnname, *args): + self.log.result(shortrepr(answer)) try: write_message(child_stdin, 0) # error code - 0 for ok write_message(child_stdin, answer, resulttype) @@ -441,7 +450,8 @@ node = dirnode.join(name) else: node = dirnode - log.vpath('%r => %r' % (vpath, node)) + if self.log: + self.log.vpath('%r => %r' % (vpath, node)) return node def do_ll_os__ll_os_stat(self, vpathname): From noreply at buildbot.pypy.org Tue Jan 3 23:45:32 2012 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 3 Jan 2012 23:45:32 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: add test to cover no identity value in reduce Message-ID: <20120103224532.AEA5C82B1C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51004:6c64bd2290f9 Date: 2012-01-04 00:44 +0200 http://bitbucket.org/pypy/pypy/changeset/6c64bd2290f9/ Log: add test to cover no identity value in reduce diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- 
a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -736,11 +736,13 @@
         raises(TypeError, 'a.sum(2, 3)')

-    def test_sumND(self):
+    def test_reduceND(self):
         from numpypy import arange
         a = arange(15).reshape(5, 3)
         assert (a.sum(0) == [30, 35, 40]).all()
         assert (a.sum(1) == [3, 12, 21, 30, 39]).all()
+        assert (a.max(0) == [12, 13, 14]).all()
+        assert (a.max(1) == [2, 5, 8, 11, 14]).all()

     def test_identity(self):
         from numpypy import identity, array

From noreply at buildbot.pypy.org Wed Jan 4 00:06:00 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 4 Jan 2012 00:06:00 +0100 (CET)
Subject: [pypy-commit] pypy.org extradoc: update individual donations
Message-ID: <20120103230600.0865082B1C@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r300:0236eec67b94
Date: 2012-01-04 01:04 +0200
http://bitbucket.org/pypy/pypy.org/changeset/0236eec67b94/

Log: update individual donations

diff --git a/don1.html b/don1.html
--- a/don1.html
+++ b/don1.html
@@ -13,7 +13,7 @@
 });
- $4369 of $105000 (4.2%)
+ $4882 of $105000 (4.6%)
diff --git a/don3.html b/don3.html
--- a/don3.html
+++ b/don3.html
@@ -13,7 +13,7 @@
 });
- $2567 of $60000 (4.2%)
+ $5820 of $60000 (9.7%)
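The numpypy-axisops changesets in this archive add and exercise reductions along a given axis (note the log message "add test to cover no identity value in reduce"). As an illustrative sketch only — this is not pypy code and not part of any changeset — the semantics asserted in test_reduceND can be expressed for a 2-D list-of-lists in plain Python:

```python
# Illustrative sketch (not part of any changeset): what sum()/max() along an
# axis mean for a 2-D array, matching the values asserted in test_reduceND.
import operator

def reduce_row(values, op):
    # Seed the accumulator from the first element, so no identity value
    # (like 0 for sum) is required -- the case the new test covers for max().
    acc = values[0]
    for v in values[1:]:
        acc = op(acc, v)
    return acc

def reduce_axis(rows, op, axis):
    """Reduce a list-of-lists 'rows' with binary 'op' along 'axis' (0 or 1)."""
    if axis == 1:                      # reduce across each row
        return [reduce_row(row, op) for row in rows]
    # axis == 0: reduce down each column
    return [reduce_row(col, op) for col in zip(*rows)]

# Equivalent of arange(15).reshape(5, 3)
a = [[3 * i + j for j in range(3)] for i in range(5)]

assert reduce_axis(a, operator.add, 0) == [30, 35, 40]
assert reduce_axis(a, operator.add, 1) == [3, 12, 21, 30, 39]
assert reduce_axis(a, max, 0) == [12, 13, 14]
assert reduce_axis(a, max, 1) == [2, 5, 8, 11, 14]
```

The real implementation of course iterates over flat storage with axis-aware iterators rather than nested lists, but the reduced values are the same.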
From noreply at buildbot.pypy.org Wed Jan 4 00:06:01 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 4 Jan 2012 00:06:01 +0100 (CET)
Subject: [pypy-commit] pypy.org extradoc: merge
Message-ID: <20120103230601.724CA82B1C@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r301:ce65b7d4c181
Date: 2012-01-04 01:05 +0200
http://bitbucket.org/pypy/pypy.org/changeset/ce65b7d4c181/

Log: merge

diff --git a/compat.html b/compat.html
--- a/compat.html
+++ b/compat.html
@@ -52,7 +52,7 @@
PyPy has alpha/beta-level support for the CPython C API, however, as of 1.7 release this feature is not yet complete. Many libraries will require a bit of effort to work, but there are known success stories. Check out -PyPy blog for updates.
+PyPy blog for updates, as well as the Compatibility Wiki.
C extensions need to be recompiled for PyPy in order to work. Depending on your build system, it might work out of the box or will be slightly harder.
Standard library modules supported by PyPy, in alphabetical order:
diff --git a/source/compat.txt b/source/compat.txt --- a/source/compat.txt +++ b/source/compat.txt @@ -11,7 +11,9 @@ PyPy has **alpha/beta-level** support for the `CPython C API`_, however, as of 1.7 release this feature is not yet complete. Many libraries will require a bit of effort to work, but there are known success stories. Check out -PyPy blog for updates. +PyPy blog for updates, as well as the `Compatibility Wiki`__. + +.. __: https://bitbucket.org/pypy/compatibility/wiki/Home C extensions need to be recompiled for PyPy in order to work. Depending on your build system, it might work out of the box or will be slightly harder. From noreply at buildbot.pypy.org Wed Jan 4 00:12:10 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 4 Jan 2012 00:12:10 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: kill dead code, with an assert and a comment Message-ID: <20120103231210.127FC82B1C@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51005:e27a9330285a Date: 2012-01-04 01:11 +0200 http://bitbucket.org/pypy/pypy/changeset/e27a9330285a/ Log: kill dead code, with an assert and a comment diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -780,6 +780,8 @@ target_len = self.values.shape[self.dim] #sig = self.find_sig(result.shape) ##Don't do this, infinite recursion sig = self.create_sig(result.shape) + if not isinstance(self.values, W_NDimSlice): + abc=kil ri = ArrayIterator(result.size) si = axis_iter_from_arr(self.values, self.dim) while not ri.done(): diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -155,7 +155,9 @@ def allocate_iter(self, arr, res_shape, chunklist): if chunklist: - return self.allocate_view_iter(arr, res_shape, chunklist) + #How did we 
get here? + assert NotImplemented + #return self.allocate_view_iter(arr, res_shape, chunklist) return ArrayIterator(arr.size) def eval(self, frame, arr): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -743,6 +743,9 @@ assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() + b = a.copy() + #b should be an array, not a view + assert (b.sum(1) == [3, 12, 21, 30, 39]).all() def test_identity(self): from numpypy import identity, array From noreply at buildbot.pypy.org Wed Jan 4 11:24:05 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 4 Jan 2012 11:24:05 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: remove debug cruft Message-ID: <20120104102405.4CFA282111@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51006:e318c72388d9 Date: 2012-01-04 12:23 +0200 http://bitbucket.org/pypy/pypy/changeset/e318c72388d9/ Log: remove debug cruft diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -780,8 +780,6 @@ target_len = self.values.shape[self.dim] #sig = self.find_sig(result.shape) ##Don't do this, infinite recursion sig = self.create_sig(result.shape) - if not isinstance(self.values, W_NDimSlice): - abc=kil ri = ArrayIterator(result.size) si = axis_iter_from_arr(self.values, self.dim) while not ri.done(): From noreply at buildbot.pypy.org Wed Jan 4 14:02:41 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 4 Jan 2012 14:02:41 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): adjusted code to meet the latest refactoring, made first test pass Message-ID: <20120104130241.956A382111@wyvern.cs.uni-duesseldorf.de> Author: 
hager Branch: ppc-jit-backend Changeset: r51007:634bd8357b6e Date: 2012-01-04 14:02 +0100 http://bitbucket.org/pypy/pypy/changeset/634bd8357b6e/ Log: (bivab, hager): adjusted code to meet the latest refactoring, made first test pass diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/ppcgen/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/ppcgen/codebuilder.py @@ -997,11 +997,11 @@ self.stdx(source_reg.value, 0, r.SCRATCH.value) self.free_scratch_reg() - def b_offset(self, offset): + def b_offset(self, target): curpos = self.currpos() - target_ofs = offset - curpos - assert target_ofs < (1 << 24) - self.b(target_ofs) + offset = target - curpos + assert offset < (1 << 24) + self.b(offset) def b_cond_offset(self, offset, condition): BI = condition[0] diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -7,7 +7,7 @@ MAX_REG_PARAMS) from pypy.jit.metainterp.history import (JitCellToken, TargetToken, - AbstractFailDescr, FLOAT, INT) + AbstractFailDescr, FLOAT, INT, REF) from pypy.rlib.objectmodel import we_are_translated from pypy.jit.backend.ppc.ppcgen.helper.assembler import (count_reg_args, Saved_Volatiles) @@ -273,7 +273,52 @@ _mixin_ = True def emit_finish(self, op, arglocs, regalloc): - self.gen_exit_stub(op.getdescr(), op.getarglist(), arglocs) + for i in range(len(arglocs) - 1): + loc = arglocs[i] + box = op.getarg(i) + if loc is None: + continue + if loc.is_reg(): + if box.type == REF: + adr = self.fail_boxes_ptr.get_addr_for_num(i) + elif box.type == INT: + adr = self.fail_boxes_int.get_addr_for_num(i) + else: + assert 0 + self.mc.alloc_scratch_reg(adr) + self.mc.storex(loc.value, 0, r.SCRATCH.value) + self.mc.free_scratch_reg() + elif loc.is_vfp_reg(): + assert box.type == FLOAT + assert 0, "not implemented yet" + elif loc.is_stack() or 
loc.is_imm() or loc.is_imm_float(): + if box.type == FLOAT: + assert 0, "not implemented yet" + elif box.type == REF or box.type == INT: + if box.type == REF: + adr = self.fail_boxes_ptr.get_addr_for_num(i) + elif box.type == INT: + adr = self.fail_boxes_int.get_addr_for_num(i) + else: + assert 0 + self.mc.alloc_scratch_reg() + self.mov_loc_loc(loc, r.SCRATCH) + # store content of r5 temporary in ENCODING AREA + self.mc.store(r.r5.value, r.SPP.value, 0) + self.mc.load_imm(r.r5, adr) + self.mc.store(r.SCRATCH.value, r.r5.value, 0) + self.mc.free_scratch_reg() + # restore r5 + self.mc.load(r.r5.value, r.SPP.value, 0) + else: + assert 0 + # note: no exception should currently be set in llop.get_exception_addr + # even if this finish may be an exit_frame_with_exception (in this case + # the exception instance is in arglocs[0]). + addr = self.cpu.get_on_leave_jitted_int(save_exception=False) + self.mc.call(addr) + self.mc.load_imm(r.RES, arglocs[-1].value) + self._gen_epilogue(self.mc) def emit_jump(self, op, arglocs, regalloc): descr = op.getdescr() diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -88,6 +88,10 @@ OFFSET_SPP_TO_OLD_BACKCHAIN = (OFFSET_SPP_TO_GPR_SAVE_AREA + GPR_SAVE_AREA + FPR_SAVE_AREA) + OFFSET_STACK_ARGS = OFFSET_SPP_TO_OLD_BACKCHAIN + BACKCHAIN_SIZE * WORD + if IS_PPC_64: + OFFSET_STACK_ARGS += MAX_REG_PARAMS * WORD + def __init__(self, cpu, failargs_limit=1000): self.cpu = cpu self.fail_boxes_int = values_array(lltype.Signed, failargs_limit) @@ -118,12 +122,6 @@ mc.load(reg.value, spp_reg.value, self.OFFSET_SPP_TO_GPR_SAVE_AREA + WORD * i) - def _make_prologue(self, target_pos, frame_depth): - self._make_frame(frame_depth) - curpos = self.mc.currpos() - offset = target_pos - curpos - self.mc.b(offset) - # The code generated here allocates a new stackframe # and is the first machine code to 
be executed. def _make_frame(self, frame_depth): @@ -143,7 +141,10 @@ # compute spilling pointer (SPP) self.mc.addi(r.SPP.value, r.SP.value, frame_depth - self.OFFSET_SPP_TO_OLD_BACKCHAIN) + + # save nonvolatile registers self._save_nonvolatiles() + # save r31, use r30 as scratch register # this is safe because r30 has been saved already assert NONVOLATILES[-1] == r.SPP @@ -180,6 +181,7 @@ regs = rffi.cast(rffi.CCHARP, spp_loc) i = -1 fail_index = -1 + import pdb; pdb.set_trace() while(True): i += 1 fail_index += 1 @@ -347,117 +349,23 @@ reg = r.MANAGED_REGS[i] mc.store(reg.value, r.SPP.value, i * WORD) - # Load parameters from fail args into locations (stack or registers) - def gen_bootstrap_code(self, nonfloatlocs, inputargs): - for i in range(len(nonfloatlocs)): - loc = nonfloatlocs[i] - arg = inputargs[i] - assert arg.type != FLOAT - if arg.type == INT: - addr = self.fail_boxes_int.get_addr_for_num(i) - elif arg.type == REF: - addr = self.fail_boxes_ptr.get_addr_for_num(i) - else: - assert 0, "%s not supported" % arg.type - if loc.is_reg(): - reg = loc - else: - reg = r.SCRATCH - self.mc.load_from_addr(reg, addr) - if loc.is_stack(): - self.regalloc_mov(r.SCRATCH, loc) - - def gen_direct_bootstrap_code(self, loophead, looptoken, inputargs, frame_depth): - self._make_frame(frame_depth) - nonfloatlocs = looptoken._ppc_arglocs[0] - - reg_args = count_reg_args(inputargs) - - stack_locs = len(inputargs) - reg_args - - selected_reg = 0 - count = 0 - nonfloat_args = [] - nonfloat_regs = [] - # load reg args - for i in range(reg_args): - arg = inputargs[i] - if arg.type == FLOAT and count % 2 != 0: - assert 0, "not implemented yet" - reg = r.PARAM_REGS[selected_reg] - - if arg.type == FLOAT: - assert 0, "not implemented yet" - else: - nonfloat_args.append(reg) - nonfloat_regs.append(nonfloatlocs[i]) - - if arg.type == FLOAT: - assert 0, "not implemented yet" - else: - selected_reg += 1 - count += 1 - - # remap values stored in core registers - 
self.mc.alloc_scratch_reg() - remap_frame_layout(self, nonfloat_args, nonfloat_regs, r.SCRATCH) - self.mc.free_scratch_reg() - - # load values passed on the stack to the corresponding locations - if IS_PPC_32: - stack_position = self.OFFSET_SPP_TO_OLD_BACKCHAIN\ - + BACKCHAIN_SIZE * WORD - else: - stack_position = self.OFFSET_SPP_TO_OLD_BACKCHAIN\ - + (BACKCHAIN_SIZE + MAX_REG_PARAMS) * WORD - - count = 0 - for i in range(reg_args, len(inputargs)): - arg = inputargs[i] - if arg.type == FLOAT: - assert 0, "not implemented yet" - else: - loc = nonfloatlocs[i] - if loc.is_reg(): - self.mc.load(loc.value, r.SPP.value, stack_position) - count += 1 - elif loc.is_vfp_reg(): - assert 0, "not implemented yet" - elif loc.is_stack(): - if loc.type == FLOAT: - assert 0, "not implemented yet" - elif loc.type == INT or loc.type == REF: - count += 1 - self.mc.alloc_scratch_reg() - self.mc.load(r.SCRATCH.value, r.SPP.value, stack_position) - self.mov_loc_loc(r.SCRATCH, loc) - self.mc.free_scratch_reg() - else: - assert 0, 'invalid location' - else: - assert 0, 'invalid location' - if loc.type == FLOAT: - assert 0, "not implemented yet" - else: - size = 1 - stack_position += size * WORD - - #sp_patch_location = self._prepare_sp_patch_position() + def gen_bootstrap_code(self, loophead, spilling_area): + self._make_frame(spilling_area) self.mc.b_offset(loophead) - #self._patch_sp_offset(sp_patch_location, looptoken._ppc_frame_depth) def setup(self, looptoken, operations): - assert self.memcpy_addr != 0 self.current_clt = looptoken.compiled_loop_token operations = self.cpu.gc_ll_descr.rewrite_assembler(self.cpu, operations, self.current_clt.allgcrefs) + assert self.memcpy_addr != 0 self.mc = PPCBuilder() self.pending_guards = [] allblocks = self.get_asmmemmgr_blocks(looptoken) self.datablockwrapper = MachineDataBlockWrapper(self.cpu.asmmemmgr, allblocks) - self.stack_in_use = False self.max_stack_params = 0 + self.target_tokens_currently_compiling = {} + return operations def 
setup_once(self): gc_ll_descr = self.cpu.gc_ll_descr @@ -470,62 +378,62 @@ self._leave_jitted_hook = self._gen_leave_jitted_hook_code(False) def assemble_loop(self, inputargs, operations, looptoken, log): - clt = CompiledLoopToken(self.cpu, looptoken.number) clt.allgcrefs = [] looptoken.compiled_loop_token = clt + clt._debug_nbargs = len(inputargs) - self.setup(looptoken, operations) + if not we_are_translated(): + assert len(set(inputargs)) == len(inputargs) + + operations = self.setup(looptoken, operations) self.startpos = self.mc.currpos() longevity = compute_vars_longevity(inputargs, operations) regalloc = Regalloc(longevity, assembler=self, frame_manager=PPCFrameManager()) - nonfloatlocs = regalloc.prepare_loop(inputargs, operations, looptoken) + regalloc.prepare_loop(inputargs, operations) regalloc_head = self.mc.currpos() - self.gen_bootstrap_code(nonfloatlocs, inputargs) - - loophead = self.mc.currpos() # address of actual loop - looptoken._ppc_loop_code = loophead - looptoken._ppc_arglocs = [nonfloatlocs] - looptoken._ppc_bootstrap_code = 0 - - self._walk_operations(operations, regalloc) start_pos = self.mc.currpos() - self.framesize = frame_depth = self.compute_frame_depth(regalloc) - looptoken._ppc_frame_manager_depth = regalloc.frame_manager.frame_depth - self._make_prologue(regalloc_head, frame_depth) + clt.frame_depth = -1 + spilling_area = self._assemble(operations, regalloc) + clt.frame_depth = spilling_area direct_bootstrap_code = self.mc.currpos() - self.gen_direct_bootstrap_code(loophead, looptoken, inputargs, frame_depth) + frame_depth = self.compute_frame_depth(spilling_area) + self.gen_bootstrap_code(start_pos, frame_depth) self.write_pending_failure_recoveries() if IS_PPC_64: - fdescrs = self.gen_64_bit_func_descrs() - loop_start = self.materialize_loop(looptoken, False) - looptoken._ppc_bootstrap_code = loop_start + fdescr = self.gen_64_bit_func_descr() + + # write instructions to memory + loop_start = self.materialize_loop(looptoken, True) 
real_start = loop_start + direct_bootstrap_code if IS_PPC_32: - looptoken._ppc_direct_bootstrap_code = real_start + looptoken._ppc_func_addr = real_start else: - self.write_64_bit_func_descr(fdescrs[0], real_start) - looptoken._ppc_direct_bootstrap_code = fdescrs[0] + self.write_64_bit_func_descr(fdescr, real_start) + looptoken._ppc_func_addr = fdescr - real_start = loop_start + start_pos - if IS_PPC_32: - looptoken.ppc_code = real_start - else: - self.write_64_bit_func_descr(fdescrs[1], real_start) - looptoken.ppc_code = fdescrs[1] self.process_pending_guards(loop_start) if not we_are_translated(): print 'Loop', inputargs, operations self.mc._dump_trace(loop_start, 'loop_%s.asm' % self.cpu.total_compiled_loops) print 'Done assembling loop with token %r' % looptoken + self._teardown() - self._teardown() + def _assemble(self, operations, regalloc): + regalloc.compute_hint_frame_locations(operations) + self._walk_operations(operations, regalloc) + frame_depth = regalloc.frame_manager.get_frame_depth() + jump_target_descr = regalloc.jump_target_descr + if jump_target_descr is not None: + frame_depth = max(frame_depth, + jump_target_descr._ppc_clt.frame_depth) + return frame_depth def assemble_bridge(self, faildescr, inputargs, operations, looptoken, log): self.setup(looptoken, operations) @@ -598,7 +506,6 @@ i += 1 mem[j] = chr(0xFF) - n = self.cpu.get_fail_descr_number(descr) encode32(mem, j+1, n) return memaddr @@ -671,9 +578,7 @@ return True def gen_64_bit_func_descrs(self): - d0 = self.datablockwrapper.malloc_aligned(3*WORD, alignment=1) - d1 = self.datablockwrapper.malloc_aligned(3*WORD, alignment=1) - return [d0, d1] + return self.datablockwrapper.malloc_aligned(3*WORD, alignment=1) def write_64_bit_func_descr(self, descr, start_addr): data = rffi.cast(rffi.CArrayPtr(lltype.Signed), descr) @@ -681,11 +586,11 @@ data[1] = 0 data[2] = 0 - def compute_frame_depth(self, regalloc): + def compute_frame_depth(self, spilling_area): PARAMETER_AREA = self.max_stack_params 
* WORD if IS_PPC_64: PARAMETER_AREA += MAX_REG_PARAMS * WORD - SPILLING_AREA = regalloc.frame_manager.frame_depth * WORD + SPILLING_AREA = spilling_area * WORD frame_depth = ( GPR_SAVE_AREA + FPR_SAVE_AREA diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -98,11 +98,13 @@ class PPCFrameManager(FrameManager): def __init__(self): FrameManager.__init__(self) - self.frame_depth = 0 + self.used = [] @staticmethod def frame_pos(loc, type): num_words = PPCFrameManager.frame_size(type) + if type == FLOAT: + assert 0, "not implemented yet" return locations.StackLocation(loc, num_words=num_words, type=type) @staticmethod @@ -112,31 +114,63 @@ return 1 class Regalloc(object): + def __init__(self, longevity, frame_manager=None, assembler=None): self.cpu = assembler.cpu - self.longevity = longevity self.frame_manager = frame_manager self.assembler = assembler self.rm = PPCRegisterManager(longevity, frame_manager, assembler) + self.jump_target_descr = None - def prepare_loop(self, inputargs, operations, looptoken): - loop_consts = compute_loop_consts(inputargs, operations[-1], looptoken) - inputlen = len(inputargs) - nonfloatlocs = [None] * len(inputargs) - for i in range(inputlen): - arg = inputargs[i] - assert not isinstance(arg, Const) - if arg not in loop_consts and self.longevity[arg][1] > -1: - self.try_allocate_reg(arg) - loc = self.loc(arg) - nonfloatlocs[i] = loc - self.possibly_free_vars(inputargs) - return nonfloatlocs + def _prepare(self, inputargs, operations): + longevity, last_real_usage = compute_vars_longevity( + inputargs, operations) + self.longevity = longevity + self.last_real_usage = last_real_usage + fm = self.frame_manager + asm = self.assembler + self.rm = PPCRegisterManager(longevity, fm, asm) + + def prepare_loop(self, inputargs, operations): + self._prepare(inputargs, operations) + 
self._set_initial_bindings(inputargs) + self.possibly_free_vars(list(inputargs)) + + def prepare_bridge(self, inputargs, arglocs, ops): + self._prepare(inputargs, ops) + self._update_bindings(arglocs, inputargs) + + def _set_initial_bindings(self, inputargs): + arg_index = 0 + count = 0 + n_register_args = len(r.PARAM_REGS) + cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD + 1 + for box in inputargs: + assert isinstance(box, Box) + # handle inputargs in argument registers + if box.type == FLOAT and arg_index % 2 != 0: + assert 0, "not implemented yet" + if arg_index < n_register_args: + if box.type == FLOAT: + assert 0, "not implemented yet" + else: + loc = r.PARAM_REGS[arg_index] + self.try_allocate_reg(box, selected_reg=loc) + arg_index += 1 + else: + # treat stack args as stack locations with a negative offset + if box.type == FLOAT: + assert 0, "not implemented yet" + else: + cur_frame_pos -= 1 + count += 1 + loc = self.frame_manager.frame_pos(cur_frame_pos, box.type) + self.frame_manager.set_binding(box, loc) def update_bindings(self, locs, frame_depth, inputargs): used = {} i = 0 - self.frame_manager.frame_depth = frame_depth + #self.frame_manager.frame_depth = frame_depth for loc in locs: arg = inputargs[i] i += 1 @@ -296,20 +330,20 @@ prepare_int_is_zero = prepare_unary_cmp() def prepare_finish(self, op): - args = [locations.imm(self.frame_manager.frame_depth)] + args = [None] * (op.numargs() + 1) for i in range(op.numargs()): arg = op.getarg(i) if arg: - args.append(self.loc(arg)) + args[i] = self.loc(arg) self.possibly_free_var(arg) - else: - args.append(None) + n = self.cpu.get_fail_descr_number(op.getdescr()) + args[-1] = imm(n) return args def _prepare_guard(self, op, args=None): if args is None: args = [] - args.append(imm(self.frame_manager.frame_depth)) + args.append(imm(len(self.frame_manager.used))) for arg in op.getfailargs(): if arg: args.append(self.loc(arg)) @@ -405,6 +439,65 @@ prepare_guard_nonnull_class = prepare_guard_class + def 
compute_hint_frame_locations(self, operations): + # optimization only: fill in the 'hint_frame_locations' dictionary + # of rm and xrm based on the JUMP at the end of the loop, by looking + # at where we would like the boxes to be after the jump. + op = operations[-1] + if op.getopnum() != rop.JUMP: + return + self.final_jump_op = op + descr = op.getdescr() + assert isinstance(descr, TargetToken) + if descr._ppc_loop_code != 0: + # if the target LABEL was already compiled, i.e. if it belongs + # to some already-compiled piece of code + self._compute_hint_frame_locations_from_descr(descr) + #else: + # The loop ends in a JUMP going back to a LABEL in the same loop. + # We cannot fill 'hint_frame_locations' immediately, but we can + # wait until the corresponding prepare_op_label() to know where the + # we would like the boxes to be after the jump. + + def _compute_hint_frame_locations_from_descr(self, descr): + arglocs = self.assembler.target_arglocs(descr) + jump_op = self.final_jump_op + assert len(arglocs) == jump_op.numargs() + for i in range(jump_op.numargs()): + box = jump_op.getarg(i) + if isinstance(box, Box): + loc = arglocs[i] + if loc is not None and loc.is_stack(): + self.frame_manager.hint_frame_locations[box] = loc + + def prepare_op_jump(self, op): + descr = op.getdescr() + assert isinstance(descr, TargetToken) + self.jump_target_descr = descr + arglocs = self.assembler.target_arglocs(descr) + + # get temporary locs + tmploc = r.SCRATCH + + # Part about non-floats + src_locations1 = [] + dst_locations1 = [] + + # Build the two lists + for i in range(op.numargs()): + box = op.getarg(i) + src_loc = self.loc(box) + dst_loc = arglocs[i] + if box.type != FLOAT: + src_locations1.append(src_loc) + dst_locations1.append(dst_loc) + else: + assert 0, "not implemented yet" + + remap_frame_layout(self.assembler, src_locations1, + dst_locations1, tmploc) + return [] + def prepare_guard_call_release_gil(self, op, guard_op): # first, close the stack in the sense of 
the asmgcc GC root tracker gcrootmap = self.cpu.gc_ll_descr.gcrootmap diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -43,7 +43,6 @@ self.asm.setup_once() def compile_loop(self, inputargs, operations, looptoken, log=False): - self.saved_descr = {} self.asm.assemble_loop(inputargs, operations, looptoken, log) def compile_bridge(self, faildescr, inputargs, operations, @@ -66,24 +65,26 @@ self.asm.fail_boxes_ptr.setitem(index, null) # executes the stored machine code in the token - def execute_token(self, looptoken): - addr = looptoken.ppc_code - func = rffi.cast(lltype.Ptr(self.BOOTSTRAP_TP), addr) - fail_index = self._execute_call(func) - return self.get_fail_descr_from_number(fail_index) + def make_execute_token(self, *ARGS): + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, lltype.Signed)) - def _execute_call(self, func): - prev_interpreter = None - if not self.translate_support_code: - prev_interpreter = LLInterpreter.current_interpreter - LLInterpreter.current_interpreter = self.debug_ll_interpreter - res = 0 - try: - res = func() - finally: + def execute_token(executable_token, *args): + clt = executable_token.compiled_loop_token + assert len(args) == clt._debug_nbargs + # + addr = executable_token._ppc_func_addr + func = rffi.cast(FUNCPTR, addr) + prev_interpreter = None # help flow space if not self.translate_support_code: - LLInterpreter.current_interpreter = prev_interpreter - return res + prev_interpreter = LLInterpreter.current_interpreter + LLInterpreter.current_interpreter = self.debug_ll_interpreter + try: + fail_index = func(*args) + finally: + if not self.translate_support_code: + LLInterpreter.current_interpreter = prev_interpreter + return self.get_fail_descr_from_number(fail_index) + return execute_token @staticmethod def cast_ptr_to_int(x): From noreply at buildbot.pypy.org Wed Jan 4 14:13:29 2012 From: noreply at buildbot.pypy.org (hager) Date: 
Wed, 4 Jan 2012 14:13:29 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: renamed function Message-ID: <20120104131329.C0E8C82111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51008:bd859a60caa8 Date: 2012-01-04 05:12 -0800 http://bitbucket.org/pypy/pypy/changeset/bd859a60caa8/ Log: renamed function diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -577,7 +577,7 @@ return False return True - def gen_64_bit_func_descrs(self): + def gen_64_bit_func_descr(self): return self.datablockwrapper.malloc_aligned(3*WORD, alignment=1) def write_64_bit_func_descr(self, descr, start_addr): From noreply at buildbot.pypy.org Wed Jan 4 14:23:39 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 4 Jan 2012 14:23:39 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: don't print debug output Message-ID: <20120104132339.25F5E82111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51009:f95845ede0d3 Date: 2012-01-04 14:20 +0100 http://bitbucket.org/pypy/pypy/changeset/f95845ede0d3/ Log: don't print debug output diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -409,7 +409,7 @@ fdescr = self.gen_64_bit_func_descr() # write instructions to memory - loop_start = self.materialize_loop(looptoken, True) + loop_start = self.materialize_loop(looptoken, False) real_start = loop_start + direct_bootstrap_code if IS_PPC_32: From noreply at buildbot.pypy.org Wed Jan 4 14:23:40 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 4 Jan 2012 14:23:40 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: replace LoopToken with JitCellToken and kill unused functions from runner Message-ID: 
<20120104132340.780C682111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51010:45e11554d5ad Date: 2012-01-04 14:21 +0100 http://bitbucket.org/pypy/pypy/changeset/45e11554d5ad/ Log: replace LoopToken with JitCellToken and kill unused functions from runner diff --git a/pypy/jit/backend/ppc/runner.py b/pypy/jit/backend/ppc/runner.py --- a/pypy/jit/backend/ppc/runner.py +++ b/pypy/jit/backend/ppc/runner.py @@ -52,13 +52,6 @@ self.asm.assemble_bridge(faildescr, inputargs, operations, original_loop_token, log=log) - # set value in fail_boxes_int - def set_future_value_int(self, index, value_int): - self.asm.fail_boxes_int.setitem(index, value_int) - - def set_future_value_ref(self, index, pointer): - self.asm.fail_boxes_ptr.setitem(index, pointer) - def clear_latest_values(self, count): null = lltype.nullptr(llmemory.GCREF.TO) for index in range(count): diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -121,10 +121,9 @@ prev_box = next_box operations.append(ResOperation(rop.FINISH, [prev_box], None, descr=BasicFailDescr(1))) inputargs = [i0] - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) - self.cpu.set_future_value_int(0, 20) - fail = self.cpu.execute_token(looptoken) + fail = self.cpu.execute_token(looptoken, 20) res = self.cpu.get_latest_value_int(0) assert res == 520 assert fail.identifier == 1 @@ -197,7 +196,7 @@ i0_1 = BoxInt() i1_1 = BoxInt() i2_1 = BoxInt() - looptoken1 = LoopToken() + looptoken1 = JitCellToken() operations1 = [ ResOperation(rop.INT_ADD, [i0_1, ConstInt(1)], i1_1), ResOperation(rop.INT_LE, [i1_1, ConstInt(9)], i2_1), @@ -218,7 +217,7 @@ i0_2 = BoxInt() i1_2 = BoxInt() i2_2 = BoxInt() - looptoken2 = LoopToken() + looptoken2 = JitCellToken() operations2 = [ ResOperation(rop.INT_ADD, [i0_2, ConstInt(1)], i1_2), 
ResOperation(rop.INT_LE, [i1_2, ConstInt(19)], i2_2), @@ -533,7 +532,7 @@ ResOperation(rop.FINISH, [ptr], None, descr=BasicFailDescr(1)) ] inputargs = [i0, ptr, i1] - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(inputargs, operations, looptoken) self.cpu.set_future_value_int(0, 10) self.cpu.set_future_value_ref(1, u_box.value) @@ -549,7 +548,7 @@ finish(i0) ''' loop = parse(ops, namespace=locals()) - looptoken = LoopToken() + looptoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) self.cpu.set_future_value_int(0, 42) From noreply at buildbot.pypy.org Wed Jan 4 14:23:42 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 4 Jan 2012 14:23:42 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge Message-ID: <20120104132342.5413282111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51011:cc871ff2daed Date: 2012-01-04 14:23 +0100 http://bitbucket.org/pypy/pypy/changeset/cc871ff2daed/ Log: merge diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -577,7 +577,7 @@ return False return True - def gen_64_bit_func_descrs(self): + def gen_64_bit_func_descr(self): return self.datablockwrapper.malloc_aligned(3*WORD, alignment=1) def write_64_bit_func_descr(self, descr, start_addr): From noreply at buildbot.pypy.org Wed Jan 4 14:59:35 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 4 Jan 2012 14:59:35 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): adjust implementation of JUMP, implement LABEL Message-ID: <20120104135935.AC05782111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51012:96c640b286a3 Date: 2012-01-04 14:58 +0100 http://bitbucket.org/pypy/pypy/changeset/96c640b286a3/ Log: (bivab, hager): adjust implementation of JUMP, implement 
LABEL diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -321,16 +321,21 @@ self._gen_epilogue(self.mc) def emit_jump(self, op, arglocs, regalloc): + # The backend's logic assumes that the target code is in a piece of + # assembler that was also called with the same number of arguments, + # so that the locations [ebp+8..] of the input arguments are valid + # stack locations both before and after the jump. + # descr = op.getdescr() - assert isinstance(descr, LoopToken) - if descr._ppc_bootstrap_code == 0: + assert isinstance(descr, TargetToken) + my_nbargs = self.current_clt._debug_nbargs + target_nbargs = descr._ppc_clt._debug_nbargs + assert my_nbargs == target_nbargs + + if descr in self.target_tokens_currently_compiling: self.mc.b_offset(descr._ppc_loop_code) else: - target = descr._ppc_bootstrap_code + descr._ppc_loop_code - self.mc.b_abs(target) - new_fd = max(regalloc.frame_manager.frame_depth, - descr._ppc_frame_manager_depth) - regalloc.frame_manager.frame_depth = new_fd + self.mc.b_abs(descr._ppc_loop_code) def emit_same_as(self, op, arglocs, regalloc): argloc, resloc = arglocs diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -1,7 +1,8 @@ from pypy.jit.backend.llsupport.regalloc import (RegisterManager, FrameManager, TempBox, compute_vars_longevity) from pypy.jit.backend.ppc.ppcgen.arch import (WORD, MY_COPY_OF_REGS) -from pypy.jit.backend.ppc.ppcgen.jump import remap_frame_layout_mixed +from pypy.jit.backend.ppc.ppcgen.jump import (remap_frame_layout_mixed, + remap_frame_layout) from pypy.jit.backend.ppc.ppcgen.locations import imm from pypy.jit.backend.ppc.ppcgen.helper.regalloc import (_check_imm_arg, prepare_cmp_op, @@ -20,6 +21,10 @@ import 
pypy.jit.backend.ppc.ppcgen.register as r from pypy.jit.codewriter import heaptracker +# xxx hack: set a default value for TargetToken._ppc_loop_code. If 0, we know +# that it is a LABEL that was not compiled yet. +TargetToken._ppc_loop_code = 0 + class TempInt(TempBox): type = INT @@ -525,17 +530,30 @@ def prepare_jump(self, op): descr = op.getdescr() - assert isinstance(descr, LoopToken) - nonfloatlocs = descr._ppc_arglocs[0] + assert isinstance(descr, TargetToken) + self.jump_target_descr = descr + arglocs = self.assembler.target_arglocs(descr) - tmploc = r.r0 - src_locs1 = [self.loc(op.getarg(i)) for i in range(op.numargs()) - if op.getarg(i).type != FLOAT] - assert tmploc not in nonfloatlocs - dst_locs1 = [loc for loc in nonfloatlocs if loc is not None] - remap_frame_layout_mixed(self.assembler, - src_locs1, dst_locs1, tmploc, - [], [], None) + # get temporary locs + tmploc = r.SCRATCH + + # Part about non-floats + src_locations1 = [] + dst_locations1 = [] + + # Build the four lists + for i in range(op.numargs()): + box = op.getarg(i) + src_loc = self.loc(box) + dst_loc = arglocs[i] + if box.type != FLOAT: + src_locations1.append(src_loc) + dst_locations1.append(dst_loc) + else: + assert 0, "not implemented yet" + + remap_frame_layout(self.assembler, src_locations1, + dst_locations1, tmploc) return [] def prepare_setfield_gc(self, op): @@ -870,6 +888,46 @@ self.possibly_free_var(op.result) return [res_loc] + def prepare_label(self, op): + # XXX big refactoring needed?
+ descr = op.getdescr() + assert isinstance(descr, TargetToken) + inputargs = op.getarglist() + arglocs = [None] * len(inputargs) + # + # we use force_spill() on the boxes that are not going to be really + # used any more in the loop, but that are kept alive anyway + # by being in a next LABEL's or a JUMP's argument or fail_args + # of some guard + position = self.rm.position + for arg in inputargs: + assert isinstance(arg, Box) + if self.last_real_usage.get(arg, -1) <= position: + self.force_spill_var(arg) + + # + for i in range(len(inputargs)): + arg = inputargs[i] + assert isinstance(arg, Box) + loc = self.loc(arg) + arglocs[i] = loc + if loc.is_reg(): + self.frame_manager.mark_as_free(arg) + # + descr._ppc_arglocs = arglocs + descr._ppc_loop_code = self.assembler.mc.currpos() + descr._ppc_clt = self.assembler.current_clt + self.assembler.target_tokens_currently_compiling[descr] = None + self.possibly_free_vars_for_op(op) + # + # if the LABEL's descr is precisely the target of the JUMP at the + # end of the same loop, i.e. if what we are compiling is a single + # loop that ends up jumping to this LABEL, then we can now provide + # the hints about the expected position of the spilled variables. 
+ jump_op = self.final_jump_op + if jump_op is not None and jump_op.getdescr() is descr: + self._compute_hint_frame_locations_from_descr(descr) + def prepare_guard_call_may_force(self, op, guard_op): faildescr = guard_op.getdescr() fail_index = self.cpu.get_fail_descr_number(faildescr) From noreply at buildbot.pypy.org Wed Jan 4 14:59:37 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 4 Jan 2012 14:59:37 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add some methods and fix little bugs Message-ID: <20120104135937.3739082111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51013:fe53dc8f2aea Date: 2012-01-04 14:59 +0100 http://bitbucket.org/pypy/pypy/changeset/fe53dc8f2aea/ Log: add some methods and fix little bugs diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -181,7 +181,6 @@ regs = rffi.cast(rffi.CCHARP, spp_loc) i = -1 fail_index = -1 - import pdb; pdb.set_trace() while(True): i += 1 fail_index += 1 @@ -396,6 +395,7 @@ regalloc_head = self.mc.currpos() start_pos = self.mc.currpos() + looptoken._ppc_loop_code = start_pos clt.frame_depth = -1 spilling_area = self._assemble(operations, regalloc) clt.frame_depth = spilling_area @@ -607,6 +607,14 @@ return frame_depth + def fixup_target_tokens(self, rawstart): + for targettoken in self.target_tokens_currently_compiling: + targettoken._ppc_loop_code += rawstart + self.target_tokens_currently_compiling = None + + def target_arglocs(self, looptoken): + return looptoken._ppc_arglocs + def materialize_loop(self, looptoken, show): self.mc.prepare_insts_blocks(show) self.datablockwrapper.done() From noreply at buildbot.pypy.org Wed Jan 4 15:15:51 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 4 Jan 2012 15:15:51 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): fix test 
Message-ID: <20120104141551.B37AC82111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51014:e719f82117db Date: 2012-01-04 15:15 +0100 http://bitbucket.org/pypy/pypy/changeset/e719f82117db/ Log: (bivab, hager): fix test diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -197,18 +197,19 @@ i1_1 = BoxInt() i2_1 = BoxInt() looptoken1 = JitCellToken() + targettoken1 = TargetToken() operations1 = [ + ResOperation(rop.LABEL, [i0_1], None, descr=targettoken1), ResOperation(rop.INT_ADD, [i0_1, ConstInt(1)], i1_1), ResOperation(rop.INT_LE, [i1_1, ConstInt(9)], i2_1), ResOperation(rop.GUARD_TRUE, [i2_1], None, descr=BasicFailDescr(2)), - ResOperation(rop.JUMP, [i1_1], None, descr=looptoken1), + ResOperation(rop.JUMP, [i1_1], None, descr=targettoken1), ] inputargs1 = [i0_1] - operations1[2].setfailargs([i1_1]) + operations1[3].setfailargs([i1_1]) self.cpu.compile_loop(inputargs1, operations1, looptoken1) - self.cpu.set_future_value_int(0, 2) - fail1 = self.cpu.execute_token(looptoken1) + fail1 = self.cpu.execute_token(looptoken1, 2) assert fail1.identifier == 2 res1 = self.cpu.get_latest_value_int(0) assert res1 == 10 @@ -218,18 +219,19 @@ i1_2 = BoxInt() i2_2 = BoxInt() looptoken2 = JitCellToken() + targettoken2 = TargetToken() operations2 = [ + ResOperation(rop.LABEL, [i0_2], None, descr=targettoken2), ResOperation(rop.INT_ADD, [i0_2, ConstInt(1)], i1_2), ResOperation(rop.INT_LE, [i1_2, ConstInt(19)], i2_2), ResOperation(rop.GUARD_TRUE, [i2_2], None, descr=BasicFailDescr(2)), - ResOperation(rop.JUMP, [i1_2], None, descr=looptoken2), + ResOperation(rop.JUMP, [i1_2], None, descr=targettoken2), ] inputargs2 = [i0_2] - operations2[2].setfailargs([i1_2]) + operations2[3].setfailargs([i1_2]) self.cpu.compile_loop(inputargs2, operations2, looptoken2) - self.cpu.set_future_value_int(0, 2) - fail2 = 
self.cpu.execute_token(looptoken2) + fail2 = self.cpu.execute_token(looptoken2, 2) assert fail2.identifier == 2 res2 = self.cpu.get_latest_value_int(0) assert res2 == 20 From noreply at buildbot.pypy.org Wed Jan 4 15:48:38 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 4 Jan 2012 15:48:38 +0100 (CET) Subject: [pypy-commit] pypy jit-usable_retrace_2: debugging Message-ID: <20120104144838.33DAA82111@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-usable_retrace_2 Changeset: r51015:5bf961464624 Date: 2012-01-04 11:16 +0100 http://bitbucket.org/pypy/pypy/changeset/5bf961464624/ Log: debugging diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -146,9 +146,11 @@ def generalize_state(self, start_label, stop_label): if self.jump_to_start_label(start_label, stop_label): # At the end of the preamble, don't generalize much + debug_print('Generalize preamble') KillHugeIntBounds(self.optimizer).apply() else: # At the end of a bridge about to force a retrace + debug_print('Generalize for retrace') KillIntBounds(self.optimizer).apply() self.optimizer.kill_consts_at_end_of_preamble = True @@ -184,6 +186,9 @@ modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(jump_args) + debug_start('jit-log-virtualstate') + virtual_state.debug_print('Exporting ') + debug_stop('jit-log-virtualstate') values = [self.getvalue(arg) for arg in jump_args] inputargs = virtual_state.make_inputargs(values, self.optimizer) diff --git a/pypy/module/pypyjit/test_pypy_c/test_misc.py b/pypy/module/pypyjit/test_pypy_c/test_misc.py --- a/pypy/module/pypyjit/test_pypy_c/test_misc.py +++ b/pypy/module/pypyjit/test_pypy_c/test_misc.py @@ -22,6 +22,7 @@ # "i" is virtual.
However, in this specific case the two loops happen # to contain the very same operations loop0, loop1 = log.loops_by_filename(self.filepath) + expected = """ i9 = int_le(i7, i8) guard_true(i9, descr=...) @@ -37,7 +38,7 @@ # XXX: The retracing fails to form a loop since j # becomes constant 0 after the bridge and constant 1 at the end of the # loop. A bridge back to the preamble is produced instead. - #assert loop1.match(expected) + assert loop1.match(expected) def test_factorial(self): def fact(n): From noreply at buildbot.pypy.org Wed Jan 4 15:48:39 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 4 Jan 2012 15:48:39 +0100 (CET) Subject: [pypy-commit] pypy default: passing test Message-ID: <20120104144839.5D8EF82111@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r51016:656cfd21d520 Date: 2012-01-04 11:27 +0100 http://bitbucket.org/pypy/pypy/changeset/656cfd21d520/ Log: passing test diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -442,6 +442,22 @@ """ self.optimize_loop(ops, expected) + def test_optimizer_renaming_boxes_not_imported(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + jump(p1, i11) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): From noreply at buildbot.pypy.org Wed Jan 4 15:48:40 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 4 Jan 2012 15:48:40 +0100 (CET) Subject: [pypy-commit] pypy default: some more debug prints Message-ID: <20120104144840.D322482111@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r51017:83a92dcf51e3 Date: 2012-01-04 11:41 +0100 http://bitbucket.org/pypy/pypy/changeset/83a92dcf51e3/ Log: some more debug
prints diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -271,6 +271,10 @@ if newresult is not op.result and not newvalue.is_constant(): op = ResOperation(rop.SAME_AS, [op.result], newresult) self.optimizer._newoperations.append(op) + if self.optimizer.loop.logops: + debug_print(' Falling back to add extra: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + self.optimizer.flush() self.optimizer.emitting_dissabled = False @@ -435,7 +439,13 @@ return for a in op.getarglist(): if not isinstance(a, Const) and a not in seen: - self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, seen) + self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, + seen) + + if self.optimizer.loop.logops: + debug_print(' Emitting short op: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + optimizer.send_extra_operation(op) seen[op.result] = True if op.is_ovf(): From noreply at buildbot.pypy.org Wed Jan 4 15:48:42 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 4 Jan 2012 15:48:42 +0100 (CET) Subject: [pypy-commit] pypy default: hg merge Message-ID: <20120104144842.4767A82111@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r51018:5afb4fd1f372 Date: 2012-01-04 11:42 +0100 http://bitbucket.org/pypy/pypy/changeset/5afb4fd1f372/ Log: hg merge diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -380,6 +380,9 @@ def descr_get_dtype(self, space): return space.wrap(self.find_dtype()) + def descr_get_ndim(self, space): + return space.wrap(len(self.shape)) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -409,7 +412,7 @@ def descr_repr(self, space): res = StringBuilder() 
res.append("array(") - concrete = self.get_concrete() + concrete = self.get_concrete_or_scalar() dtype = concrete.find_dtype() if not concrete.size: res.append('[]') @@ -422,8 +425,9 @@ else: concrete.to_str(space, 1, res, indent=' ') if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \ - not self.size: + not (dtype.kind == interp_dtype.SIGNEDLTR and + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or + not self.size): res.append(", dtype=" + dtype.name) res.append(")") return space.wrap(res.build()) @@ -840,80 +844,80 @@ each line will begin with indent. ''' size = self.size + ccomma = ',' * comma + ncomma = ',' * (1 - comma) + dtype = self.find_dtype() if size < 1: builder.append('[]') return + elif size == 1: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True - dtype = self.find_dtype() ndims = len(self.shape) i = 0 - start = True builder.append('[') if ndims > 1: if use_ellipsis: - for i in range(3): - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + for i in range(min(3, self.shape[0])): + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) - # create_slice requires len(chunks) > 1 in order to reduce - # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) - builder.append('\n' + indent + '..., ') - i = self.shape[0] - 3 + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) + if i < self.shape[0] - 1: + builder.append(ccomma +'\n' + indent + '...' 
+ ncomma) + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: - spacer = ',' * comma + ' ' + spacer = ccomma + ' ' item = self.start # An iterator would be a nicer way to walk along the 1d array, but # how do I reset it if printing ellipsis? iterators have no # "set_offset()" i = 0 if use_ellipsis: - for i in range(3): - if start: - start = False - else: + for i in range(min(3, self.shape[0])): + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] - # Add a comma only if comma is False - this prevents adding two - # commas - builder.append(spacer + '...' + ',' * (1 - comma)) - # Ugly, but can this be done with an iterator? - item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + # Add a comma only if comma is False - this prevents adding + # two commas + builder.append(spacer + '...' + ncomma) + # Ugly, but can this be done with an iterator? 
+ item = self.start + self.backstrides[0] - 2 * self.strides[0] + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] i += 1 - else: - builder.append('[') builder.append(']') @jit.unroll_safe @@ -1185,6 +1189,7 @@ shape = GetSetProperty(BaseArray.descr_get_shape, BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), + ndim = GetSetProperty(BaseArray.descr_get_ndim), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -158,6 +158,7 @@ assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): from numpypy import ndarray, array, dtype @@ -179,6 +180,20 @@ ar = array(range(5)) assert type(ar) is type(ar + ar) + def test_ndim(self): + from numpypy import array + x = array(0.2) + assert x.ndim == 0 + x = array([1, 2]) + assert x.ndim == 1 + x = array([[1, 2], [3, 4]]) + assert x.ndim == 2 + x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert x.ndim == 3 + # numpy actually raises an AttributeError, but numpypy raises an + # TypeError + raises(TypeError, 'x.ndim = 3') + def test_init(self): from numpypy import zeros a = zeros(15) @@ -725,19 +740,19 @@ a = identity(0) assert len(a) == 0 assert a.dtype == dtype('float64') - assert a.shape == (0,0) + assert a.shape == (0, 0) b = identity(1, dtype=int32) assert len(b) == 1 assert b[0][0] == 1 - assert b.shape == (1,1) + assert b.shape == (1, 1) assert b.dtype == dtype('int32') c = identity(2) - assert c.shape == (2,2) - assert (c == 
[[1,0],[0,1]]).all() + assert c.shape == (2, 2) + assert (c == [[1, 0], [0, 1]]).all() d = identity(3, dtype='int32') - assert d.shape == (3,3) + assert d.shape == (3, 3) assert d.dtype == dtype('int32') - assert (d == [[1,0,0],[0,1,0],[0,0,1]]).all() + assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all() def test_prod(self): from numpypy import array @@ -954,13 +969,13 @@ def test_tolist_view(self): from numpypy import array - a = array([[1,2],[3,4]]) + a = array([[1, 2], [3, 4]]) assert (a + a).tolist() == [[2, 4], [6, 8]] def test_tolist_slice(self): from numpypy import array a = array([[17.1, 27.2], [40.3, 50.3]]) - assert a[:,0].tolist() == [17.1, 40.3] + assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] @@ -1090,11 +1105,11 @@ from numpypy import zeros, ones a = zeros((3, 3)) b = ones((3, 3)) - a[:,1:3] = b[:,1:3] + a[:, 1:3] = b[:, 1:3] assert (a == [[0, 1, 1], [0, 1, 1], [0, 1, 1]]).all() a = zeros((3, 3)) b = ones((3, 3)) - a[:,::2] = b[:,::2] + a[:, ::2] = b[:, ::2] assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all() def test_broadcast_ufunc(self): @@ -1233,6 +1248,7 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct @@ -1275,17 +1291,17 @@ assert g[1] == 2 assert g[2] == 3 h = fromstring("1, , 2, 3", dtype=uint8, sep=",") - assert (h == [1,0,2,3]).all() + assert (h == [1, 0, 2, 3]).all() i = fromstring("1 2 3", dtype=uint8, sep=" ") - assert (i == [1,2,3]).all() + assert (i == [1, 2, 3]).all() j = fromstring("1\t\t\t\t2\t3", dtype=uint8, sep="\t") - assert (j == [1,2,3]).all() + assert (j == [1, 2, 3]).all() k = fromstring("1,x,2,3", dtype=uint8, sep=",") - assert (k == [1,0]).all() + assert (k == [1, 0]).all() l = fromstring("1,x,2,3", dtype='float32', sep=",") - assert (l == [1.0,-1.0]).all() + assert (l == [1.0, -1.0]).all() m = fromstring("1,,2,3", sep=",") - assert (m == 
[1.0,-1.0,2.0,3.0]).all() + assert (m == [1.0, -1.0, 2.0, 3.0]).all() n = fromstring("3.4 2.0 3.8 2.2", dtype=int32, sep=" ") assert (n == [3]).all() o = fromstring("1.0 2f.0f 3.8 2.2", dtype=float32, sep=" ") @@ -1333,7 +1349,6 @@ j = fromstring(self.ulongval, dtype='L') assert j[0] == 12 - def test_fromstring_invalid(self): from numpypy import fromstring, uint16, uint8, int32 #default dtype is 64-bit float, so 3 bytes should fail @@ -1347,6 +1362,7 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): from numpypy import array, zeros + int_size = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -1354,14 +1370,26 @@ a = zeros(1001) assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" a = array(range(5), long) - assert repr(a) == "array([0, 1, 2, 3, 4])" + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" + a = array(range(5), 'int32') + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" a = array([], long) assert repr(a) == "array([], dtype=int64)" a = array([True, False, True, False], "?") assert repr(a) == "array([True, False, True, False], dtype=bool)" + a = zeros([]) + assert repr(a) == "array(0.0)" + a = array(0.2) + assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import array, zeros + from numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1374,6 +1402,16 @@ [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]])''' + a = arange(1002).reshape((2, 501)) + assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500], + [501, 502, 503, ..., 999, 1000, 1001]])''' + assert repr(a.T) == '''array([[0, 501], + [1, 502], + [2, 503], + ..., + [498, 999], + [499, 1000], + [500, 
1001]])''' def test_repr_slice(self): from numpypy import array, zeros @@ -1417,7 +1455,7 @@ a = zeros((400, 400), dtype=int) assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \ + " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \ " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" a = zeros((2, 2, 2)) r = str(a) diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -8,10 +8,12 @@ from pypy.tool import logparser from pypy.jit.tool.jitoutput import parse_prof from pypy.module.pypyjit.test_pypy_c.model import (Log, find_ids_range, - find_ids, TraceWithIds, + find_ids, OpMatcher, InvalidMatch) class BaseTestPyPyC(object): + log_string = 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary' + def setup_class(cls): if '__pypy__' not in sys.builtin_module_names: py.test.skip("must run this test with pypy") @@ -52,8 +54,7 @@ cmdline += ['--jit', ','.join(jitcmdline)] cmdline.append(str(self.filepath)) # - print cmdline, logfile - env={'PYPYLOG': 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary:' + str(logfile)} + env={'PYPYLOG': self.log_string + ':' + str(logfile)} pipe = subprocess.Popen(cmdline, env=env, stdout=subprocess.PIPE, diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py --- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py +++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py @@ -98,7 +98,8 @@ end = time.time() return end - start # - log = self.run(main, [get_libc_name(), 200], threshold=150) + log = self.run(main, [get_libc_name(), 200], threshold=150, + import_site=True) assert 1 <= log.result <= 1.5 # at most 0.5 seconds of overhead loops = log.loops_by_id('sleep') assert len(loops) == 1 # make sure that we actually JITted the loop @@ -121,7 +122,7 @@ return 
fabs._ptr.getaddr(), x libm_name = get_libm_name(sys.platform) - log = self.run(main, [libm_name]) + log = self.run(main, [libm_name], import_site=True) fabs_addr, res = log.result assert res == -4.0 loop, = log.loops_by_filename(self.filepath) diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -15,7 +15,7 @@ i += letters[i % len(letters)] == uletters[i % len(letters)] return i - log = self.run(main, [300]) + log = self.run(main, [300], import_site=True) assert log.result == 300 loop, = log.loops_by_filename(self.filepath) assert loop.match(""" @@ -55,7 +55,7 @@ i += int(long(string.digits[i % len(string.digits)], 16)) return i - log = self.run(main, [1100]) + log = self.run(main, [1100], import_site=True) assert log.result == main(1100) loop, = log.loops_by_filename(self.filepath) assert loop.match(""" diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -185,7 +185,10 @@ return self.code.map[self.bytecode_no] def getlineno(self): - return self.getopcode().lineno + code = self.getopcode() + if code is None: + return None + return code.lineno lineno = property(getlineno) def getline_starts_here(self): diff --git a/pypy/tool/jitlogparser/storage.py b/pypy/tool/jitlogparser/storage.py --- a/pypy/tool/jitlogparser/storage.py +++ b/pypy/tool/jitlogparser/storage.py @@ -6,7 +6,6 @@ import py import os from lib_pypy.disassembler import dis -from pypy.tool.jitlogparser.parser import Function from pypy.tool.jitlogparser.module_finder import gather_all_code_objs class LoopStorage(object): From noreply at buildbot.pypy.org Wed Jan 4 15:48:43 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 4 Jan 2012 15:48:43 +0100 (CET) Subject: [pypy-commit] pypy jit-usable_retrace_2: hg merge default 
Message-ID: <20120104144843.8FA2482111@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-usable_retrace_2 Changeset: r51019:ca3d9cd45ea5 Date: 2012-01-04 13:07 +0100 http://bitbucket.org/pypy/pypy/changeset/ca3d9cd45ea5/ Log: hg merge default diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -442,6 +442,22 @@ """ self.optimize_loop(ops, expected) + def test_optimizer_renaming_boxes_not_imported(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + jump(p1, i11) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -287,6 +287,10 @@ if newresult is not op.result and not newvalue.is_constant(): op = ResOperation(rop.SAME_AS, [op.result], newresult) self.optimizer._newoperations.append(op) + if self.optimizer.loop.logops: + debug_print(' Falling back to add extra: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + self.optimizer.flush() self.optimizer.emitting_dissabled = False @@ -451,7 +455,13 @@ return for a in op.getarglist(): if not isinstance(a, Const) and a not in seen: - self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, seen) + self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, + seen) + + if self.optimizer.loop.logops: + debug_print(' Emitting short op: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + optimizer.send_extra_operation(op) seen[op.result] = True if op.is_ovf(): diff --git a/pypy/module/micronumpy/interp_numarray.py 
b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -380,6 +380,9 @@ def descr_get_dtype(self, space): return space.wrap(self.find_dtype()) + def descr_get_ndim(self, space): + return space.wrap(len(self.shape)) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -409,7 +412,7 @@ def descr_repr(self, space): res = StringBuilder() res.append("array(") - concrete = self.get_concrete() + concrete = self.get_concrete_or_scalar() dtype = concrete.find_dtype() if not concrete.size: res.append('[]') @@ -422,8 +425,9 @@ else: concrete.to_str(space, 1, res, indent=' ') if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \ - not self.size: + not (dtype.kind == interp_dtype.SIGNEDLTR and + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or + not self.size): res.append(", dtype=" + dtype.name) res.append(")") return space.wrap(res.build()) @@ -840,80 +844,80 @@ each line will begin with indent. 
''' size = self.size + ccomma = ',' * comma + ncomma = ',' * (1 - comma) + dtype = self.find_dtype() if size < 1: builder.append('[]') return + elif size == 1: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True - dtype = self.find_dtype() ndims = len(self.shape) i = 0 - start = True builder.append('[') if ndims > 1: if use_ellipsis: - for i in range(3): - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + for i in range(min(3, self.shape[0])): + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) - # create_slice requires len(chunks) > 1 in order to reduce - # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) - builder.append('\n' + indent + '..., ') - i = self.shape[0] - 3 + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) + if i < self.shape[0] - 1: + builder.append(ccomma +'\n' + indent + '...' 
+ ncomma) + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: - spacer = ',' * comma + ' ' + spacer = ccomma + ' ' item = self.start # An iterator would be a nicer way to walk along the 1d array, but # how do I reset it if printing ellipsis? iterators have no # "set_offset()" i = 0 if use_ellipsis: - for i in range(3): - if start: - start = False - else: + for i in range(min(3, self.shape[0])): + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] - # Add a comma only if comma is False - this prevents adding two - # commas - builder.append(spacer + '...' + ',' * (1 - comma)) - # Ugly, but can this be done with an iterator? - item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + # Add a comma only if comma is False - this prevents adding + # two commas + builder.append(spacer + '...' + ncomma) + # Ugly, but can this be done with an iterator? 
+ item = self.start + self.backstrides[0] - 2 * self.strides[0] + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] i += 1 - else: - builder.append('[') builder.append(']') @jit.unroll_safe @@ -1185,6 +1189,7 @@ shape = GetSetProperty(BaseArray.descr_get_shape, BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), + ndim = GetSetProperty(BaseArray.descr_get_ndim), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -158,6 +158,7 @@ assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): from numpypy import ndarray, array, dtype @@ -179,6 +180,20 @@ ar = array(range(5)) assert type(ar) is type(ar + ar) + def test_ndim(self): + from numpypy import array + x = array(0.2) + assert x.ndim == 0 + x = array([1, 2]) + assert x.ndim == 1 + x = array([[1, 2], [3, 4]]) + assert x.ndim == 2 + x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert x.ndim == 3 + # numpy actually raises an AttributeError, but numpypy raises an + # TypeError + raises(TypeError, 'x.ndim = 3') + def test_init(self): from numpypy import zeros a = zeros(15) @@ -725,19 +740,19 @@ a = identity(0) assert len(a) == 0 assert a.dtype == dtype('float64') - assert a.shape == (0,0) + assert a.shape == (0, 0) b = identity(1, dtype=int32) assert len(b) == 1 assert b[0][0] == 1 - assert b.shape == (1,1) + assert b.shape == (1, 1) assert b.dtype == dtype('int32') c = identity(2) - assert c.shape == (2,2) - assert (c == 
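The `ndim` property added in this changeset is simply the length of the array's shape, which is why `array(0.2).ndim == 0` and a doubly nested list gives 2. A rough pure-Python sketch of how a shape (and hence `ndim`) falls out of nested sequences — the helper name here is made up for illustration and is not part of micronumpy:

```python
def infer_shape(obj):
    # Mimic how array(...) derives a shape from nested lists:
    # one shape entry per nesting level, so ndim == len(shape).
    shape = []
    while isinstance(obj, list):
        shape.append(len(obj))
        if not obj:          # an empty list still contributes a dimension
            break
        obj = obj[0]
    return tuple(shape)

assert len(infer_shape(0.2)) == 0                   # 0-d scalar array
assert len(infer_shape([1, 2])) == 1                # flat array
assert len(infer_shape([[1, 2], [3, 4]])) == 2      # 2-d array
assert len(infer_shape([[[1, 2]], [[3, 4]]])) == 3  # 3-d array
```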
[[1,0],[0,1]]).all() + assert c.shape == (2, 2) + assert (c == [[1, 0], [0, 1]]).all() d = identity(3, dtype='int32') - assert d.shape == (3,3) + assert d.shape == (3, 3) assert d.dtype == dtype('int32') - assert (d == [[1,0,0],[0,1,0],[0,0,1]]).all() + assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all() def test_prod(self): from numpypy import array @@ -954,13 +969,13 @@ def test_tolist_view(self): from numpypy import array - a = array([[1,2],[3,4]]) + a = array([[1, 2], [3, 4]]) assert (a + a).tolist() == [[2, 4], [6, 8]] def test_tolist_slice(self): from numpypy import array a = array([[17.1, 27.2], [40.3, 50.3]]) - assert a[:,0].tolist() == [17.1, 40.3] + assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] @@ -1090,11 +1105,11 @@ from numpypy import zeros, ones a = zeros((3, 3)) b = ones((3, 3)) - a[:,1:3] = b[:,1:3] + a[:, 1:3] = b[:, 1:3] assert (a == [[0, 1, 1], [0, 1, 1], [0, 1, 1]]).all() a = zeros((3, 3)) b = ones((3, 3)) - a[:,::2] = b[:,::2] + a[:, ::2] = b[:, ::2] assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all() def test_broadcast_ufunc(self): @@ -1233,6 +1248,7 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct @@ -1275,17 +1291,17 @@ assert g[1] == 2 assert g[2] == 3 h = fromstring("1, , 2, 3", dtype=uint8, sep=",") - assert (h == [1,0,2,3]).all() + assert (h == [1, 0, 2, 3]).all() i = fromstring("1 2 3", dtype=uint8, sep=" ") - assert (i == [1,2,3]).all() + assert (i == [1, 2, 3]).all() j = fromstring("1\t\t\t\t2\t3", dtype=uint8, sep="\t") - assert (j == [1,2,3]).all() + assert (j == [1, 2, 3]).all() k = fromstring("1,x,2,3", dtype=uint8, sep=",") - assert (k == [1,0]).all() + assert (k == [1, 0]).all() l = fromstring("1,x,2,3", dtype='float32', sep=",") - assert (l == [1.0,-1.0]).all() + assert (l == [1.0, -1.0]).all() m = fromstring("1,,2,3", sep=",") - assert (m == 
[1.0,-1.0,2.0,3.0]).all() + assert (m == [1.0, -1.0, 2.0, 3.0]).all() n = fromstring("3.4 2.0 3.8 2.2", dtype=int32, sep=" ") assert (n == [3]).all() o = fromstring("1.0 2f.0f 3.8 2.2", dtype=float32, sep=" ") @@ -1333,7 +1349,6 @@ j = fromstring(self.ulongval, dtype='L') assert j[0] == 12 - def test_fromstring_invalid(self): from numpypy import fromstring, uint16, uint8, int32 #default dtype is 64-bit float, so 3 bytes should fail @@ -1347,6 +1362,7 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): from numpypy import array, zeros + int_size = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -1354,14 +1370,26 @@ a = zeros(1001) assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" a = array(range(5), long) - assert repr(a) == "array([0, 1, 2, 3, 4])" + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" + a = array(range(5), 'int32') + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" a = array([], long) assert repr(a) == "array([], dtype=int64)" a = array([True, False, True, False], "?") assert repr(a) == "array([True, False, True, False], dtype=bool)" + a = zeros([]) + assert repr(a) == "array(0.0)" + a = array(0.2) + assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import array, zeros + from numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1374,6 +1402,16 @@ [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]])''' + a = arange(1002).reshape((2, 501)) + assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500], + [501, 502, 503, ..., 999, 1000, 1001]])''' + assert repr(a.T) == '''array([[0, 501], + [1, 502], + [2, 503], + ..., + [498, 999], + [499, 1000], + [500, 
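The `fromstring(..., sep=...)` behaviour exercised by the tests above can be approximated in plain Python. The semantics sketched here — an empty field parses as 0, and an invalid token also yields 0 and ends the parse — are inferred from the integer-dtype assertions (`"1, , 2, 3"` → `[1, 0, 2, 3]`, `"1,x,2,3"` → `[1, 0]`), not taken from the numpypy implementation; float dtypes behave differently in the tests:

```python
def fromstring_ints(s, sep=","):
    # Empty fields parse as 0; an invalid token also yields 0 and
    # ends the parse, matching fromstring("1,x,2,3", ...) == [1, 0].
    out = []
    for tok in s.split(sep):
        tok = tok.strip()
        if not tok:
            out.append(0)
            continue
        try:
            out.append(int(tok))
        except ValueError:
            out.append(0)
            break
    return out

assert fromstring_ints("1, , 2, 3") == [1, 0, 2, 3]
assert fromstring_ints("1 2 3", sep=" ") == [1, 2, 3]
assert fromstring_ints("1,x,2,3") == [1, 0]
```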
1001]])''' def test_repr_slice(self): from numpypy import array, zeros @@ -1417,7 +1455,7 @@ a = zeros((400, 400), dtype=int) assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \ + " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \ " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" a = zeros((2, 2, 2)) r = str(a) diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -8,10 +8,12 @@ from pypy.tool import logparser from pypy.jit.tool.jitoutput import parse_prof from pypy.module.pypyjit.test_pypy_c.model import (Log, find_ids_range, - find_ids, TraceWithIds, + find_ids, OpMatcher, InvalidMatch) class BaseTestPyPyC(object): + log_string = 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary' + def setup_class(cls): if '__pypy__' not in sys.builtin_module_names: py.test.skip("must run this test with pypy") @@ -52,8 +54,7 @@ cmdline += ['--jit', ','.join(jitcmdline)] cmdline.append(str(self.filepath)) # - print cmdline, logfile - env={'PYPYLOG': 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary:' + str(logfile)} + env={'PYPYLOG': self.log_string + ':' + str(logfile)} pipe = subprocess.Popen(cmdline, env=env, stdout=subprocess.PIPE, diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py --- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py +++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py @@ -98,7 +98,8 @@ end = time.time() return end - start # - log = self.run(main, [get_libc_name(), 200], threshold=150) + log = self.run(main, [get_libc_name(), 200], threshold=150, + import_site=True) assert 1 <= log.result <= 1.5 # at most 0.5 seconds of overhead loops = log.loops_by_id('sleep') assert len(loops) == 1 # make sure that we actually JITted the loop @@ -121,7 +122,7 @@ return 
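The repr tests above depend on `to_str()` switching to an ellipsis once an array holds more than 1000 elements, keeping three items at each end of every axis. A minimal standalone sketch of that 1-d formatting rule (an illustration of the behaviour, not the micronumpy code itself):

```python
def summarize(items, edge=3, threshold=1000):
    # Short arrays are printed in full; long ones keep `edge`
    # items on each side of an ellipsis.
    if len(items) <= threshold:
        return "[" + ", ".join(map(str, items)) + "]"
    head = ", ".join(map(str, items[:edge]))
    tail = ", ".join(map(str, items[-edge:]))
    return "[%s, ..., %s]" % (head, tail)

assert summarize([0, 1, 2, 3, 4]) == "[0, 1, 2, 3, 4]"
assert summarize(list(range(1002))) == "[0, 1, 2, ..., 999, 1000, 1001]"
```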
fabs._ptr.getaddr(), x libm_name = get_libm_name(sys.platform) - log = self.run(main, [libm_name]) + log = self.run(main, [libm_name], import_site=True) fabs_addr, res = log.result assert res == -4.0 loop, = log.loops_by_filename(self.filepath) diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -15,7 +15,7 @@ i += letters[i % len(letters)] == uletters[i % len(letters)] return i - log = self.run(main, [300]) + log = self.run(main, [300], import_site=True) assert log.result == 300 loop, = log.loops_by_filename(self.filepath) assert loop.match(""" @@ -55,7 +55,7 @@ i += int(long(string.digits[i % len(string.digits)], 16)) return i - log = self.run(main, [1100]) + log = self.run(main, [1100], import_site=True) assert log.result == main(1100) loop, = log.loops_by_filename(self.filepath) assert loop.match(""" diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -185,7 +185,10 @@ return self.code.map[self.bytecode_no] def getlineno(self): - return self.getopcode().lineno + code = self.getopcode() + if code is None: + return None + return code.lineno lineno = property(getlineno) def getline_starts_here(self): diff --git a/pypy/tool/jitlogparser/storage.py b/pypy/tool/jitlogparser/storage.py --- a/pypy/tool/jitlogparser/storage.py +++ b/pypy/tool/jitlogparser/storage.py @@ -6,7 +6,6 @@ import py import os from lib_pypy.disassembler import dis -from pypy.tool.jitlogparser.parser import Function from pypy.tool.jitlogparser.module_finder import gather_all_code_objs class LoopStorage(object): From noreply at buildbot.pypy.org Wed Jan 4 17:51:20 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 4 Jan 2012 17:51:20 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): implement 
bridges Message-ID: <20120104165120.D524A82111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51020:51d8610b324a Date: 2012-01-04 17:51 +0100 http://bitbucket.org/pypy/pypy/changeset/51d8610b324a/ Log: (bivab, hager): implement bridges diff --git a/pypy/jit/backend/ppc/ppcgen/codebuilder.py b/pypy/jit/backend/ppc/ppcgen/codebuilder.py --- a/pypy/jit/backend/ppc/ppcgen/codebuilder.py +++ b/pypy/jit/backend/ppc/ppcgen/codebuilder.py @@ -1158,6 +1158,17 @@ assert self.r0_in_use self.r0_in_use = False + def _dump_trace(self, addr, name, formatter=-1): + if not we_are_translated(): + if formatter != -1: + name = name % formatter + dir = udir.ensure('asm', dir=True) + f = dir.join(name).open('wb') + data = rffi.cast(rffi.CCHARP, addr) + for i in range(self.currpos()): + f.write(data[i]) + f.close() + class BranchUpdater(PPCAssembler): def __init__(self): PPCAssembler.__init__(self) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -237,16 +237,14 @@ self.fail_force_index = spp_loc return descr - def decode_inputargs(self, enc, inputargs, regalloc): + def decode_inputargs(self, enc): locs = [] j = 0 - for i in range(len(inputargs)): + while enc[j] != self.END_OF_LOCS: res = enc[j] - if res == self.END_OF_LOCS: - assert 0, 'reached end of encoded area' - while res == self.EMPTY_LOC: + if res == self.EMPTY_LOC: j += 1 - res = enc[j] + continue assert res in [self.INT_TYPE, self.REF_TYPE],\ 'location type is not supported' @@ -257,12 +255,22 @@ # XXX decode imm if necessary assert 0, 'Imm Locations are not supported' elif res == self.STACK_LOC: + if res_type == self.FLOAT_TYPE: + t = FLOAT + elif res_type == self.INT_TYPE: + t = INT + else: + t = REF + assert t != FLOAT stack_loc = decode32(enc, j+1) - loc = regalloc.frame_manager.frame_pos(stack_loc, INT) + 
PPCFrameManager.frame_pos(stack_loc, t) j += 4 else: # REG_LOC - reg = ord(res) - loc = r.MANAGED_REGS[r.get_managed_reg_index(reg)] + if res_type == self.FLOAT_TYPE: + assert 0, "not implemented yet" + else: + reg = ord(res) + loc = r.MANAGED_REGS[r.get_managed_reg_index(reg)] j += 1 locs.append(loc) return locs @@ -388,8 +396,7 @@ operations = self.setup(looptoken, operations) self.startpos = self.mc.currpos() longevity = compute_vars_longevity(inputargs, operations) - regalloc = Regalloc(longevity, assembler=self, - frame_manager=PPCFrameManager()) + regalloc = Regalloc(assembler=self, frame_manager=PPCFrameManager()) regalloc.prepare_loop(inputargs, operations) regalloc_head = self.mc.currpos() @@ -409,7 +416,8 @@ fdescr = self.gen_64_bit_func_descr() # write instructions to memory - loop_start = self.materialize_loop(looptoken, False) + loop_start = self.materialize_loop(looptoken, True) + self.fixup_target_tokens(loop_start) real_start = loop_start + direct_bootstrap_code if IS_PPC_32: @@ -435,28 +443,31 @@ jump_target_descr._ppc_clt.frame_depth) return frame_depth + + # XXX stack needs to be moved if bridge needs to much space def assemble_bridge(self, faildescr, inputargs, operations, looptoken, log): - self.setup(looptoken, operations) + operations = self.setup(looptoken, operations) assert isinstance(faildescr, AbstractFailDescr) code = faildescr._failure_recovery_code enc = rffi.cast(rffi.CCHARP, code) - longevity = compute_vars_longevity(inputargs, operations) - regalloc = Regalloc(longevity, assembler=self, - frame_manager=PPCFrameManager()) + frame_depth = faildescr._ppc_frame_depth + arglocs = self.decode_inputargs(enc) + if not we_are_translated(): + assert len(inputargs) == len(arglocs) - #sp_patch_location = self._prepare_sp_patch_position() - frame_depth = faildescr._ppc_frame_depth - locs = self.decode_inputargs(enc, inputargs, regalloc) - regalloc.update_bindings(locs, frame_depth, inputargs) + regalloc = Regalloc(assembler=self, 
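The reworked `decode_inputargs()` above walks an encoded byte area until an END_OF_LOCS marker, skipping EMPTY_LOC holes and reading either a register byte or a 32-bit stack index per entry. A simplified decoder in the same spirit — the marker values, the dropped type byte, and `decode32` here are invented for illustration and do not match the PPC backend's actual constants:

```python
END_OF_LOCS, EMPTY_LOC = 0xFF, 0xFE   # hypothetical marker values
STACK_LOC = 0x02                      # hypothetical location kind

def decode32(enc, i):
    # Little-endian 32-bit read, standing in for the real decode32().
    return enc[i] | (enc[i+1] << 8) | (enc[i+2] << 16) | (enc[i+3] << 24)

def decode_locs(enc):
    locs = []
    j = 0
    while enc[j] != END_OF_LOCS:
        b = enc[j]
        if b == EMPTY_LOC:            # hole left by a dead variable
            j += 1
            continue
        if b == STACK_LOC:            # followed by a 4-byte frame index
            locs.append(("stack", decode32(enc, j + 1)))
            j += 5
        else:                         # a single register byte
            locs.append(("reg", b))
            j += 1
    return locs

data = bytes([3, EMPTY_LOC, STACK_LOC, 7, 0, 0, 0, END_OF_LOCS])
assert decode_locs(data) == [("reg", 3), ("stack", 7)]
```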
frame_manager=PPCFrameManager()) + regalloc.prepare_bridge(inputargs, arglocs, operations) - self._walk_operations(operations, regalloc) + spilling_area = self._assemble(operations, regalloc) + self.write_pending_failure_recoveries() - #self._patch_sp_offset(sp_patch_location, - # regalloc.frame_manager.frame_depth) - self.write_pending_failure_recoveries() - bridge_start = self.materialize_loop(looptoken, False) - self.process_pending_guards(bridge_start) - self.patch_trace(faildescr, looptoken, bridge_start, regalloc) + rawstart = self.materialize_loop(looptoken, True) + self.process_pending_guards(rawstart) + self.patch_trace(faildescr, looptoken, rawstart, regalloc) + + self.fixup_target_tokens(rawstart) + self.current_clt.frame_depth = max(self.current_clt.frame_depth, + spilling_area) self._teardown() # For an explanation of the encoding, see @@ -615,7 +626,7 @@ def target_arglocs(self, looptoken): return looptoken._ppc_arglocs - def materialize_loop(self, looptoken, show): + def materialize_loop(self, looptoken, show=False): self.mc.prepare_insts_blocks(show) self.datablockwrapper.done() self.datablockwrapper = None diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -120,12 +120,12 @@ class Regalloc(object): - def __init__(self, longevity, frame_manager=None, assembler=None): + def __init__(self, frame_manager=None, assembler=None): self.cpu = assembler.cpu self.frame_manager = frame_manager self.assembler = assembler - self.rm = PPCRegisterManager(longevity, frame_manager, assembler) self.jump_target_descr = None + self.final_jump_op = None def _prepare(self, inputargs, operations): longevity, last_real_usage = compute_vars_longevity( @@ -172,20 +172,19 @@ loc = self.frame_manager.frame_pos(cur_frame_pos, box.type) self.frame_manager.set_binding(box, loc) - def update_bindings(self, locs, frame_depth, inputargs): + def 
_update_bindings(self, locs, inputargs): used = {} i = 0 - #self.frame_manager.frame_depth = frame_depth for loc in locs: arg = inputargs[i] i += 1 if loc.is_reg(): self.rm.reg_bindings[arg] = loc elif loc.is_vfp_reg(): - assert 0, "not implemented yet" + assert 0, "not supported" else: assert loc.is_stack() - self.frame_manager.frame_bindings[arg] = loc + self.frame_manager.set_binding(arg, loc) used[loc] = None # XXX combine with x86 code and move to llsupport From noreply at buildbot.pypy.org Wed Jan 4 20:25:26 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 4 Jan 2012 20:25:26 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: change create_sig to find_sig Message-ID: <20120104192526.6B66082111@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51021:ca13cff50c3a Date: 2012-01-04 21:23 +0200 http://bitbucket.org/pypy/pypy/changeset/ca13cff50c3a/ Log: change create_sig to find_sig diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -748,14 +748,14 @@ class Reduce(VirtualArray): - def __init__(self, ufunc, name, dim, res_dtype, values, identity=None): + def __init__(self, binfunc, name, dim, res_dtype, values, identity=None): shape = values.shape[0:dim] + values.shape[dim+1:len(values.shape)] VirtualArray.__init__(self, name, shape, res_dtype) self.values = values self.size = 1 for s in shape: self.size *= s - self.ufunc = ufunc + self.binfunc = binfunc self.res_dtype = res_dtype self.dim = dim self.identity = identity @@ -767,7 +767,7 @@ def create_sig(self, res_shape): if self.forced_result is not None: return self.forced_result.create_sig(res_shape) - return signature.ReduceSignature(self.ufunc, self.name, self.res_dtype, + return signature.ReduceSignature(self.binfunc, self.name, self.res_dtype, signature.ViewSignature(self.res_dtype), 
self.values.create_sig(res_shape)) @@ -778,13 +778,15 @@ shapelen = len(result.shape) objlen = len(self.values.shape) target_len = self.values.shape[self.dim] - #sig = self.find_sig(result.shape) ##Don't do this, infinite recursion - sig = self.create_sig(result.shape) + sig = self.values.find_sig(result.shape) + #sig = self.create_sig(result.shape) ri = ArrayIterator(result.size) si = axis_iter_from_arr(self.values, self.dim) while not ri.done(): + # explanation: we want to start the frame at the beginning of + # an axis: use si.indices to create a chunk (slice) + # in self.values chunks = [] - #for i in range(objlen - 1, -1, -1): for i in range(objlen): if i == self.dim: chunks.append((0, target_len, 1, target_len)) @@ -798,9 +800,9 @@ else: value = self.identity.convert_to(dtype) while not frame.done(): - assert isinstance(sig, signature.ReduceSignature) + assert isinstance(sig, signature.ViewSignature) nextval = sig.eval(frame, self.values).convert_to(dtype) - value = sig.binfunc(dtype, value, nextval) + value = self.binfunc(dtype, value, nextval) frame.next(shapelen) result.dtype.setitem(result.storage, ri.offset, value) ri = ri.next(shapelen) From noreply at buildbot.pypy.org Wed Jan 4 20:31:55 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 4 Jan 2012 20:31:55 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: remove some code that's not necessary any more - in-progress Message-ID: <20120104193155.DA75282111@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51022:88295e485a01 Date: 2012-01-03 23:17 +0200 http://bitbucket.org/pypy/pypy/changeset/88295e485a01/ Log: remove some code that's not necessary any more - in-progress diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -596,20 +596,6 @@ return fn(*greenargs) self.should_unroll_one_iteration = should_unroll_one_iteration - if 
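Changeset r51021's `Reduce` constructor builds the output shape of an axis reduction by dropping the reduced dimension: `values.shape[0:dim] + values.shape[dim+1:len(values.shape)]`. That rule, pulled out as a small standalone check:

```python
def reduced_shape(shape, dim):
    # The reduced axis is simply removed from the shape, exactly as
    # in Reduce.__init__ above.
    return shape[0:dim] + shape[dim + 1:]

assert reduced_shape((2, 3, 4), 0) == (3, 4)
assert reduced_shape((2, 3, 4), 1) == (2, 4)
assert reduced_shape((2, 3, 4), 2) == (2, 3)
```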
hasattr(jd.jitdriver, 'on_compile'): - def on_compile(logger, token, operations, type, greenkey): - greenargs = unwrap_greenkey(greenkey) - return jd.jitdriver.on_compile(logger, token, operations, type, - *greenargs) - def on_compile_bridge(logger, orig_token, operations, n): - return jd.jitdriver.on_compile_bridge(logger, orig_token, - operations, n) - jd.on_compile = on_compile - jd.on_compile_bridge = on_compile_bridge - else: - jd.on_compile = lambda *args: None - jd.on_compile_bridge = lambda *args: None - redargtypes = ''.join([kind[0] for kind in jd.red_args_types]) def get_assembler_token(greenkey): From noreply at buildbot.pypy.org Wed Jan 4 20:31:57 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 4 Jan 2012 20:31:57 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: remove cruft Message-ID: <20120104193157.1375582111@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51023:e38fface989e Date: 2012-01-04 21:31 +0200 http://bitbucket.org/pypy/pypy/changeset/e38fface989e/ Log: remove cruft diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -762,7 +762,6 @@ def _del_sources(self): self.values = None - pass def create_sig(self, res_shape): if self.forced_result is not None: @@ -779,7 +778,6 @@ objlen = len(self.values.shape) target_len = self.values.shape[self.dim] sig = self.values.find_sig(result.shape) - #sig = self.create_sig(result.shape) ri = ArrayIterator(result.size) si = axis_iter_from_arr(self.values, self.dim) while not ri.done(): From noreply at buildbot.pypy.org Thu Jan 5 00:00:36 2012 From: noreply at buildbot.pypy.org (hager) Date: Thu, 5 Jan 2012 00:00:36 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: fix test_spilling Message-ID: <20120104230036.8E0D682111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend 
Changeset: r51024:c7d964550b10 Date: 2012-01-04 14:40 -0800 http://bitbucket.org/pypy/pypy/changeset/c7d964550b10/ Log: fix test_spilling diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -553,8 +553,7 @@ looptoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) - self.cpu.set_future_value_int(0, 42) - fail = self.cpu.execute_token(looptoken) + fail = self.cpu.execute_token(looptoken, 42) result = self.cpu.get_latest_value_int(0) assert result == 42 From noreply at buildbot.pypy.org Thu Jan 5 00:00:37 2012 From: noreply at buildbot.pypy.org (hager) Date: Thu, 5 Jan 2012 00:00:37 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add get_loc_index Message-ID: <20120104230037.BA98182111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51025:2116589f3b9e Date: 2012-01-04 14:59 -0800 http://bitbucket.org/pypy/pypy/changeset/2116589f3b9e/ Log: add get_loc_index diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -118,6 +118,13 @@ assert 0, "TODO" return 1 + @staticmethod + def get_loc_index(loc): + assert loc.is_stack() + if loc.type == FLOAT: + assert 0, "not implemented yet" + return loc.position + class Regalloc(object): def __init__(self, frame_manager=None, assembler=None): From noreply at buildbot.pypy.org Thu Jan 5 01:38:48 2012 From: noreply at buildbot.pypy.org (hager) Date: Thu, 5 Jan 2012 01:38:48 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: fix basic field operations Message-ID: <20120105003848.6FF9182111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51026:b3a8f9928a2e Date: 2012-01-04 16:38 -0800 http://bitbucket.org/pypy/pypy/changeset/b3a8f9928a2e/ Log: fix basic field 
operations diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -382,11 +382,10 @@ self._emit_call(force_index, adr, arglist, regalloc, op.result) descr = op.getdescr() #XXX Hack, Hack, Hack - if op.result and not we_are_translated() and not isinstance(descr, - LoopToken): + if op.result and not we_are_translated(): #XXX check result type loc = regalloc.rm.call_result_location(op.result) - size = descr.get_result_size(False) + size = descr.get_result_size() signed = descr.is_result_signed() self._ensure_result_bit_extension(loc, size, signed) @@ -529,10 +528,8 @@ #XXX Hack, Hack, Hack if not we_are_translated(): - descr = op.getdescr() - size = descr.get_field_size(False) - signed = descr.is_field_signed() - self._ensure_result_bit_extension(res, size, signed) + signed = op.getdescr().is_field_signed() + self._ensure_result_bit_extension(res, size.value, signed) emit_getfield_raw = emit_getfield_gc emit_getfield_raw_pure = emit_getfield_gc diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -20,6 +20,9 @@ from pypy.jit.codewriter.effectinfo import EffectInfo import pypy.jit.backend.ppc.ppcgen.register as r from pypy.jit.codewriter import heaptracker +from pypy.jit.backend.llsupport.descr import unpack_arraydescr +from pypy.jit.backend.llsupport.descr import unpack_fielddescr +from pypy.jit.backend.llsupport.descr import unpack_interiorfielddescr # xxx hack: set a default value for TargetToken._arm_loop_code. If 0, we know # that it is a LABEL that was not compiled yet. 
@@ -565,7 +568,7 @@ def prepare_setfield_gc(self, op): boxes = list(op.getarglist()) b0, b1 = boxes - ofs, size, ptr = self._unpack_fielddescr(op.getdescr()) + ofs, size, ptr = unpack_fielddescr(op.getdescr()) base_loc, base_box = self._ensure_value_is_boxed(b0, boxes) boxes.append(base_box) value_loc, value_box = self._ensure_value_is_boxed(b1, boxes) @@ -583,7 +586,7 @@ def prepare_getfield_gc(self, op): a0 = op.getarg(0) - ofs, size, ptr = self._unpack_fielddescr(op.getdescr()) + ofs, size, ptr = unpack_fielddescr(op.getdescr()) base_loc, base_box = self._ensure_value_is_boxed(a0) c_ofs = ConstInt(ofs) if _check_imm_arg(c_ofs): From noreply at buildbot.pypy.org Thu Jan 5 02:46:55 2012 From: noreply at buildbot.pypy.org (hager) Date: Thu, 5 Jan 2012 02:46:55 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: made test_array_basic pass Message-ID: <20120105014655.ECD0B82111@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51027:81584461b2f7 Date: 2012-01-04 17:46 -0800 http://bitbucket.org/pypy/pypy/changeset/81584461b2f7/ Log: made test_array_basic pass diff --git a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py b/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py @@ -1,14 +1,17 @@ from pypy.jit.metainterp.history import ConstInt +from pypy.rlib.objectmodel import we_are_translated def _check_imm_arg(arg, size=0xFF, allow_zero=True): - if isinstance(arg, ConstInt): - i = arg.getint() - if allow_zero: - lower_bound = i >= 0 - else: - lower_bound = i > 0 - return i <= size and lower_bound - return False + #assert not isinstance(arg, ConstInt) + #if not we_are_translated(): + # if not isinstance(arg, int): + # import pdb; pdb.set_trace() + i = arg + if allow_zero: + lower_bound = i >= 0 + else: + lower_bound = i > 0 + return i <= size and lower_bound def prepare_cmp_op(): def f(self, op): diff --git 
a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -545,9 +545,10 @@ self.mc.load(res.value, base_loc.value, ofs.value) def emit_setarrayitem_gc(self, op, arglocs, regalloc): - value_loc, base_loc, ofs_loc, scale, ofs, scratch_reg = arglocs + value_loc, base_loc, ofs_loc, scale, ofs = arglocs + assert ofs_loc.is_reg() if scale.value > 0: - scale_loc = scratch_reg + scale_loc = r.SCRATCH if IS_PPC_32: self.mc.slwi(scale_loc.value, ofs_loc.value, scale.value) else: @@ -557,9 +558,8 @@ # add the base offset if ofs.value > 0: - assert scale_loc is not r.r0 - self.mc.addi(r.r0.value, scale_loc.value, ofs.value) - scale_loc = r.r0 + self.mc.addi(r.SCRATCH.value, scale_loc.value, ofs.value) + scale_loc = r.SCRATCH if scale.value == 3: self.mc.stdx(value_loc.value, base_loc.value, scale_loc.value) @@ -575,9 +575,10 @@ emit_setarrayitem_raw = emit_setarrayitem_gc def emit_getarrayitem_gc(self, op, arglocs, regalloc): - res, base_loc, ofs_loc, scale, ofs, scratch_reg = arglocs + res, base_loc, ofs_loc, scale, ofs = arglocs + assert ofs_loc.is_reg() if scale.value > 0: - scale_loc = scratch_reg + scale_loc = r.SCRATCH if IS_PPC_32: self.mc.slwi(scale_loc.value, ofs_loc.value, scale.value) else: @@ -587,9 +588,8 @@ # add the base offset if ofs.value > 0: - assert scale_loc is not r.r0 - self.mc.addi(r.r0.value, scale_loc.value, ofs.value) - scale_loc = r.r0 + self.mc.addi(r.SCRATCH.value, scale_loc.value, ofs.value) + scale_loc = r.SCRATCH if scale.value == 3: self.mc.ldx(res.value, base_loc.value, scale_loc.value) @@ -605,7 +605,7 @@ #XXX Hack, Hack, Hack if not we_are_translated(): descr = op.getdescr() - size = descr.get_item_size(False) + size = descr.itemsize signed = descr.is_item_signed() self._ensure_result_bit_extension(res, size, signed) diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py 
b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -416,7 +416,7 @@ fdescr = self.gen_64_bit_func_descr() # write instructions to memory - loop_start = self.materialize_loop(looptoken, True) + loop_start = self.materialize_loop(looptoken, False) self.fixup_target_tokens(loop_start) real_start = loop_start + direct_bootstrap_code @@ -461,7 +461,7 @@ spilling_area = self._assemble(operations, regalloc) self.write_pending_failure_recoveries() - rawstart = self.materialize_loop(looptoken, True) + rawstart = self.materialize_loop(looptoken, False) self.process_pending_guards(rawstart) self.patch_trace(faildescr, looptoken, rawstart, regalloc) diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -17,6 +17,7 @@ from pypy.jit.backend.ppc.ppcgen import locations from pypy.rpython.lltypesystem import rffi, lltype, rstr from pypy.jit.backend.llsupport import symbolic +from pypy.jit.backend.llsupport.descr import ArrayDescr from pypy.jit.codewriter.effectinfo import EffectInfo import pypy.jit.backend.ppc.ppcgen.register as r from pypy.jit.codewriter import heaptracker @@ -606,8 +607,8 @@ def prepare_arraylen_gc(self, op): arraydescr = op.getdescr() - assert isinstance(arraydescr, BaseArrayDescr) - ofs = arraydescr.get_ofs_length(self.cpu.translate_support_code) + assert isinstance(arraydescr, ArrayDescr) + ofs = arraydescr.lendescr.offset arg = op.getarg(0) base_loc, base_box = self._ensure_value_is_boxed(arg) self.possibly_free_vars([arg, base_box]) @@ -617,42 +618,31 @@ return [res, base_loc, imm(ofs)] def prepare_setarrayitem_gc(self, op): - b0, b1, b2 = boxes = list(op.getarglist()) - _, scale, ofs, _, ptr = self._unpack_arraydescr(op.getdescr()) - - base_loc, base_box = self._ensure_value_is_boxed(b0, boxes) - boxes.append(base_box) - 
ofs_loc, ofs_box = self._ensure_value_is_boxed(b1, boxes) - boxes.append(ofs_box) - #XXX check if imm would be fine here - value_loc, value_box = self._ensure_value_is_boxed(b2, boxes) - boxes.append(value_box) - if scale > 0: - tmp, box = self.allocate_scratch_reg(forbidden_vars=boxes) - boxes.append(box) - else: - tmp = None - self.possibly_free_vars(boxes) - return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs), tmp] + a0, a1, a2 = list(op.getarglist()) + size, ofs, _ = unpack_arraydescr(op.getdescr()) + scale = get_scale(size) + args = op.getarglist() + base_loc, _ = self._ensure_value_is_boxed(a0, args) + ofs_loc, _ = self._ensure_value_is_boxed(a1, args) + value_loc, _ = self._ensure_value_is_boxed(a2, args) + assert _check_imm_arg(ofs) + return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs)] prepare_setarrayitem_raw = prepare_setarrayitem_gc def prepare_getarrayitem_gc(self, op): a0, a1 = boxes = list(op.getarglist()) - _, scale, ofs, _, ptr = self._unpack_arraydescr(op.getdescr()) + size, ofs, _ = unpack_arraydescr(op.getdescr()) + scale = get_scale(size) base_loc, base_box = self._ensure_value_is_boxed(a0, boxes) boxes.append(base_box) ofs_loc, ofs_box = self._ensure_value_is_boxed(a1, boxes) boxes.append(ofs_box) - if scale > 0: - tmp, box = self.allocate_scratch_reg(forbidden_vars=boxes) - boxes.append(box) - else: - tmp = None self.possibly_free_vars(boxes) res = self.force_allocate_reg(op.result) self.possibly_free_var(op.result) - return [res, base_loc, ofs_loc, imm(scale), imm(ofs), tmp] + assert _check_imm_arg(ofs) + return [res, base_loc, ofs_loc, imm(scale), imm(ofs)] prepare_getarrayitem_raw = prepare_getarrayitem_gc prepare_getarrayitem_gc_pure = prepare_getarrayitem_gc @@ -1042,6 +1032,13 @@ operations = [notimplemented] * (rop._LAST + 1) operations_with_guard = [notimplemented_with_guard] * (rop._LAST + 1) +def get_scale(size): + scale = 0 + while (1 << scale) < size: + scale += 1 + assert (1 << scale) == size + return scale + for 
key, value in rop.__dict__.items(): key = key.lower() if key.startswith('_'): From noreply at buildbot.pypy.org Thu Jan 5 09:50:13 2012 From: noreply at buildbot.pypy.org (cfbolz) Date: Thu, 5 Jan 2012 09:50:13 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add the talk I gave at dagstuhl Message-ID: <20120105085013.76633821F8@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: extradoc Changeset: r4000:165f8a7523d2 Date: 2012-01-05 09:48 +0100 http://bitbucket.org/pypy/extradoc/changeset/165f8a7523d2/ Log: add the talk I gave at dagstuhl diff --git a/talk/dagstuhl2012/figures/all_numbers.png b/talk/dagstuhl2012/figures/all_numbers.png new file mode 100644 index 0000000000000000000000000000000000000000..9076ac193fc9ba1954e24e2ae372ec7e1e1f44e6 GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/metatrace01.pdf b/talk/dagstuhl2012/figures/metatrace01.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0b7181b5a476093c16ff1233f37535378ef7bf8a GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/telco.png b/talk/dagstuhl2012/figures/telco.png new file mode 100644 index 0000000000000000000000000000000000000000..56033389dab8bfe8211ffd4de5bfcb23cdc94b0f GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/trace-levels-metatracing.svg b/talk/dagstuhl2012/figures/trace-levels-metatracing.svg new file mode 100644 --- /dev/null +++ b/talk/dagstuhl2012/figures/trace-levels-metatracing.svg @@ -0,0 +1,833 @@ [SVG markup garbled in the archive; recoverable figure labels: CPU, Interpreter, User Program, f1, f2, main_loop, BINARY_ADD, JUMP_IF_FALSE, Tracer; box "Trace for f1": ops from main_loop ... ops from BINARY_ADD ... more ops from main_loop ... ops from JUMP_IF_FALSE, guard(...), jump to start] diff --git a/talk/dagstuhl2012/figures/trace-levels-tracing.svg b/talk/dagstuhl2012/figures/trace-levels-tracing.svg new file mode 100644 --- /dev/null +++ b/talk/dagstuhl2012/figures/trace-levels-tracing.svg @@ -0,0 +1,991 @@ [SVG markup garbled in the archive; recoverable figure labels: CPU, Interpreter, main_loop, BINARY_ADD, JUMP_IF_FALSE, Tracer, User Program, f1, f2] diff --git a/talk/dagstuhl2012/figures/trace01.pdf b/talk/dagstuhl2012/figures/trace01.pdf new file mode 100644 index 0000000000000000000000000000000000000000..252b5089e72d3626e636cd02397204a464c7ca22 GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/trace02.pdf b/talk/dagstuhl2012/figures/trace02.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ece12fe0c3f96856afea26c49d92ade630db9328 GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/trace03.pdf b/talk/dagstuhl2012/figures/trace03.pdf new file mode 100644 index 0000000000000000000000000000000000000000..04b38b8996eb2c297214c017bbe1cce1f8f64bdb GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/trace04.pdf b/talk/dagstuhl2012/figures/trace04.pdf new file mode 100644 index 0000000000000000000000000000000000000000..472b798aeae005652fc0d749ed6571117c5819d9 GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/trace05.pdf b/talk/dagstuhl2012/figures/trace05.pdf new file mode 100644 index 0000000000000000000000000000000000000000..977e3bbda8d4d349f27f06fcdaa3bd100e95e1a7 GIT binary patch [cut] diff --git
a/talk/dagstuhl2012/meta-tracing-pypy.pdf b/talk/dagstuhl2012/meta-tracing-pypy.pdf new file mode 100644 index 0000000000000000000000000000000000000000..910ebeedd1a286e8104d7f59848d1bfa80eed050 GIT binary patch [cut] diff --git a/talk/dagstuhl2012/talk.tex b/talk/dagstuhl2012/talk.tex new file mode 100644 --- /dev/null +++ b/talk/dagstuhl2012/talk.tex @@ -0,0 +1,417 @@ +\documentclass[utf8x]{beamer} + +% This file is a solution template for: + +% - Talk at a conference/colloquium. +% - Talk length is about 20min. +% - Style is ornate. + +\mode +{ + \usetheme{Warsaw} + % or ... + + %\setbeamercovered{transparent} + % or whatever (possibly just delete it) +} + + +\usepackage[english]{babel} +\usepackage{listings} +\usepackage{fancyvrb} +\usepackage{ulem} +\usepackage{color} +\usepackage{alltt} +\usepackage{hyperref} + +\usepackage[utf8x]{inputenc} + + +\newcommand\redsout[1]{{\color{red}\sout{\hbox{\color{black}{#1}}}}} +\newcommand{\noop}{} + +% or whatever + +% Or whatever. Note that the encoding and the font should match. If T1 +% does not look nice, try deleting the line with the fontenc. + + +\title{Meta-Tracing in the PyPy Project} + +\author[Carl Friedrich Bolz et al.]{\emph{Carl Friedrich Bolz}\inst{1} \and Antonio Cuni\inst{1} \and Maciej Fijałkowski\inst{2} \and Michael Leuschel\inst{1} \and Samuele Pedroni\inst{3} \and Armin Rigo\inst{1} \and many~more} +% - Give the names in the same order as they appear in the paper. +% - Use the \inst{?} command only if the authors have different +% affiliation. + +\institute[Heinrich-Heine-Universität Düsseldorf] +{$^1$Heinrich-Heine-Universität Düsseldorf, STUPS Group, Germany \and + + $^2$merlinux GmbH, Hildesheim, Germany \and + + $^3$Canonical +} + +\date{Foundations of Scripting Languages, Dagstuhl, 5th January 2012} +% - Either use conference name or its abbreviation.
+% - Not really informative to the audience, more for people (including +% yourself) who are reading the slides online + + +% If you have a file called "university-logo-filename.xxx", where xxx +% is a graphic format that can be processed by latex or pdflatex, +% resp., then you can add a logo as follows: + + + + +% Delete this, if you do not want the table of contents to pop up at +% the beginning of each subsection: +%\AtBeginSubsection[] +%{ +% \begin{frame} +% \frametitle{Outline} +% \tableofcontents[currentsection,currentsubsection] +% \end{frame} +%} + + +% If you wish to uncover everything in a step-wise fashion, uncomment +% the following command: + +%\beamerdefaultoverlayspecification{<+->} + + +\begin{document} + +\begin{frame} + \titlepage +\end{frame} + +\begin{frame} + \frametitle{Good JIT Compilers for Scripting Languages are Hard} + \begin{itemize} + \item recent languages like Python, Ruby, JS, PHP have complex core semantics + \item many corner cases, even hard to interpret correctly + \item particularly in contexts where you have limited resources (like + academic, Open Source) + \end{itemize} + \pause + \begin{block}{Problems} + \begin{enumerate} + \item implement all corner-cases of semantics correctly + \item ... and the common cases efficiently + \item while maintaining reasonable simplicity in the implementation + \end{enumerate} + \end{block} +\end{frame} + +\begin{frame} + \frametitle{Example: Attribute Reads in Python} + What happens when an attribute \texttt{x.m} is read?
(simplified) + \pause + \begin{itemize} + \item check for \texttt{x.\_\_getattribute\_\_}, if there, call it + \pause + \item look for the attribute in the object's dictionary, if it's there, return it + \pause + \item walk up the MRO and look in each class' dictionary for the attribute + \pause + \item if the attribute is found, call its \texttt{\_\_get\_\_} attribute and return the result + \pause + \item if the attribute is not found, look for \texttt{x.\_\_getattr\_\_}, if there, call it + \pause + \item raise an \texttt{AttributeError} + \end{itemize} +\end{frame} + +\begin{frame} + \frametitle{An Interpreter} + \includegraphics[scale=0.5]{figures/trace01.pdf} +\end{frame} + +\begin{frame} + \frametitle{A Tracing JIT} + \includegraphics[scale=0.5]{figures/trace02.pdf} +\end{frame} + +\begin{frame} + \frametitle{A Tracing JIT} + \includegraphics[scale=0.5]{figures/trace03.pdf} +\end{frame} + +\begin{frame} + \frametitle{A Tracing JIT} + \includegraphics[scale=0.5]{figures/trace04.pdf} +\end{frame} + +\begin{frame} + \frametitle{Tracing JITs} + Advantages: + \begin{itemize} + \item can be added to existing VM + \item interpreter does a lot of work + \item can fall back to interpreter for uncommon paths + \end{itemize} + \pause + \begin{block}{Problems} + \begin{itemize} + \item traces typically contain bytecodes + \item many scripting languages have bytecodes that contain complex logic + \item need to expand the bytecode in the trace into something more explicit + \item this duplicates the language semantics in the tracer/optimizer + \end{itemize} + \end{block} +\end{frame} + +\begin{frame} + \frametitle{Idea of Meta-Tracing} + \includegraphics[scale=0.5]{figures/trace05.pdf} +\end{frame} + +\begin{frame} + \frametitle{Meta-Tracing} + \includegraphics[scale=0.5]{figures/metatrace01.pdf} +\end{frame} + +\begin{frame} + \frametitle{Meta-Tracing JITs} + \begin{block}{Advantages:} + \begin{itemize} + \item semantics are always like that of the interpreter + \item 
trace fully contains language semantics + \item meta-tracers can be reused for various interpreters + \end{itemize} + \end{block} + \pause + a few meta-tracing systems have been built: + \begin{itemize} + \item Sullivan et al. describe a meta-tracer using the Dynamo RIO system + \item Yermolovich et al. run a Lua implementation on top of a tracing JS implementation + \item SPUR is a tracing JIT for CLR bytecodes, which is used to speed up a JS implementation in C\# + \end{itemize} +\end{frame} + +\begin{frame} + \frametitle{PyPy} + A general environment for implementing scripting languages + \pause + \begin{block}{Approach} + \begin{itemize} + \item write an interpreter for the language in RPython + \item compilable to an efficient C-based VM + \pause + \item (RPython is a restricted subset of Python) + \end{itemize} + \end{block} + \pause +\end{frame} + +\begin{frame} + \frametitle{PyPy's Meta-Tracing JIT} + \begin{itemize} + \item PyPy contains a meta-tracing JIT for interpreters in RPython + \item needs a few source-code hints (or annotations) \emph{in the interpreter} + \item allows interpreter-author to express language-specific type feedback + \item contains powerful general optimizations + \pause + \item general techniques to deal with reified frames + \end{itemize} +\end{frame} + + + +\begin{frame} + \frametitle{Language Implementations Done with PyPy} + \begin{itemize} + \item Most complete language implemented: Python + \item regular expression matcher of Python standard library + \item A reasonably complete Prolog + \item Converge (previous talk) + \item lots of experiments (Squeak, Gameboy emulator, JS, start of a PHP, Haskell, ...)
+ \end{itemize} +\end{frame} + + +\begin{frame} + \frametitle{Some Benchmarks for Python} + \begin{itemize} + \item benchmarks done using PyPy's Python interpreter + \item about 30'000 lines of code + \end{itemize} +\end{frame} + +\begin{frame} + \includegraphics[scale=0.3]{figures/all_numbers.png} +\end{frame} + +\begin{frame} + \frametitle{Telco Benchmark} + \includegraphics[scale=0.3]{figures/telco.png} +\end{frame} + +\begin{frame} + \frametitle{Conclusion} + \begin{itemize} + \item writing good JITs for recent scripting languages is too hard! + \item only reasonable if the language is exceptionally simple + \item or if somebody has a lot of money + \item PyPy is one point in a large design space of meta-solutions + \item uses tracing on the level of the interpreter (meta-tracing) to get speed + \pause + \item \textbf{In a way, the exact approach is not too important: let's write more meta-tools!} + \end{itemize} +\end{frame} + +\begin{frame} + \frametitle{Thank you! Questions?} + \begin{itemize} + \item writing good JITs for recent scripting languages is too hard! + \item only reasonable if the language is exceptionally simple + \item or if somebody has a lot of money + \item PyPy is one point in a large design space of meta-solutions + \item uses tracing on the level of the interpreter (meta-tracing) to get speed + \item \textbf{In a way, the exact approach is not too important: let's write more meta-tools!} + \end{itemize} +\end{frame} + +\begin{frame} + \frametitle{Possible Further Slides} + \hyperlink{necessary-hints}{\beamergotobutton{}} Getting Meta-Tracing to Work + + \hyperlink{feedback}{\beamergotobutton{}} Language-Specific Runtime Feedback + + \hyperlink{optimizations}{\beamergotobutton{}} Powerful General Optimizations + + \hyperlink{virtualizables}{\beamergotobutton{}} Optimizing Reified Frames + + \hyperlink{which-langs}{\beamergotobutton{}} Which Languages Can Meta-Tracing be Used With? 
+ + \hyperlink{OOVM}{\beamergotobutton{}} Using OO VMs as an implementation substrate + + \hyperlink{PE}{\beamergotobutton{}} Comparison with Partial Evaluation + +\end{frame} + +\begin{frame}[label=necessary-hints] + \frametitle{Getting Meta-Tracing to Work} + \begin{itemize} + \item Interpreter author needs to add some hints to the interpreter + \item one hint to identify the bytecode dispatch loop + \item one hint to identify the jump bytecode + \item with these in place, meta-tracing works + \item but produces non-optimal code + \end{itemize} +\end{frame} + + +\begin{frame}[label=feedback] + \frametitle{Language-Specific Runtime Feedback} + Problems of Naive Meta-Tracing: + \begin{itemize} + \item user-level types are normal instances on the implementation level + \item thus no runtime feedback of user-level types + \item tracer does not know about invariants in the interpreter + \end{itemize} + \pause + \begin{block}{Solution in PyPy} + \begin{itemize} + \item introduce more hints that the interpreter-author can use + \item hints are annotations in the interpreter + \item they give information to the meta-tracer + \pause + \item one to induce runtime feedback of arbitrary information (typically types) + \item the second one to influence constant folding + \end{itemize} + \end{block} +\end{frame} + + +\begin{frame}[label=optimizations] + \frametitle{Powerful General Optimizations} + \begin{itemize} + \item Very powerful general optimizations on traces + \pause + \begin{block}{Heap Optimizations} + \begin{itemize} + \item escape analysis/allocation removal + \item remove short-lived objects + \item gets rid of the overhead of boxing primitive types + \item also reduces overhead of constant heap accesses + \end{itemize} + \end{block} + \end{itemize} +\end{frame} + +\begin{frame}[label=virtualizables] + \frametitle{Optimizing Reified Frames} + \begin{itemize} + \item Common problem in scripting languages + \item frames are reified in the language, i.e.
can be accessed via reflection + \item used to implement the debugger in the language itself + \item or for more advanced usecases (backtracking in Smalltalk) + \item when using a JIT, quite expensive to keep them up-to-date + \pause + \begin{block}{Solution in PyPy} + \begin{itemize} + \item General mechanism for updating reified frames lazily + \item use deoptimization when frame objects are accessed by the program + \item interpreter just needs to mark the frame class + \end{itemize} + \end{block} + \end{itemize} +\end{frame} + + +\begin{frame}[label=which-langs] + \frametitle{Bonus: Which Languages Can Meta-Tracing be Used With?} + \begin{itemize} + \item To make meta-tracing useful, there needs to be some kind of runtime variability + \item that means it definitely works for all dynamically typed languages + \item ... but also for other languages with polymorphism that is not resolvable at compile time + \item most languages that have any kind of runtime work + \end{itemize} +\end{frame} + +\begin{frame}[label=OOVM] + \frametitle{Bonus: Using OO VMs as an implementation substrate} + \begin{block}{Benefits} + \begin{itemize} + \item higher level of implementation + \item the VM supplies a GC and mostly a JIT + \item better interoperability than what the C level provides + \item \texttt{invokedynamic} should make it possible to get language-specific runtime feedback + \end{itemize} + \end{block} + \pause + \begin{block}{Problems} + \begin{itemize} + \item can be hard to map concepts of the scripting language to + the host OO VM + \item performance is often not improved, and can be very bad, because of this + semantic mismatch + \item getting good performance needs a huge amount of tweaking + \item tools not really prepared to deal with people that care about + the shape of the generated assembler + \end{itemize} + \end{block} + \pause +\end{frame} + +\begin{frame}[label=PE] + \frametitle{Bonus: Comparison with Partial Evaluation} + \begin{itemize} + \pause + 
\item the only difference between meta-tracing and partial evaluation is that meta-tracing works + \pause + \item ... mostly kidding + \pause + \item very similar from the motivation and ideas + \item PE was never scaled up to perform well on large interpreters + \item classical PE mostly ahead of time + \item PE tried very carefully to select the right paths to inline and optimize + \item quite often this fails and inlines too much or too little + \item tracing is much more pragmatic: simply look what happens + \end{itemize} +\end{frame} + +\end{document} From noreply at buildbot.pypy.org Thu Jan 5 18:20:41 2012 From: noreply at buildbot.pypy.org (hager) Date: Thu, 5 Jan 2012 18:20:41 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: add INSTANCE_PTR_EQ, INSTANCE_PTR_NE Message-ID: <20120105172041.9DFC682247@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51028:8690df3243d0 Date: 2012-01-05 02:23 -0800 http://bitbucket.org/pypy/pypy/changeset/8690df3243d0/ Log: add INSTANCE_PTR_EQ, INSTANCE_PTR_NE diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -161,6 +161,9 @@ emit_ptr_eq = emit_int_eq emit_ptr_ne = emit_int_ne + emit_instance_ptr_eq = emit_ptr_eq + emit_instance_ptr_ne = emit_ptr_ne + def emit_int_neg(self, op, arglocs, regalloc): l0, res = arglocs self.mc.neg(res.value, l0.value) diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -336,6 +336,9 @@ prepare_ptr_eq = prepare_int_eq prepare_ptr_ne = prepare_int_ne + prepare_instance_ptr_eq = prepare_ptr_eq + prepare_instance_ptr_ne = prepare_ptr_ne + prepare_uint_lt = prepare_cmp_op() prepare_uint_le = prepare_cmp_op() prepare_uint_gt = prepare_cmp_op() From noreply at buildbot.pypy.org Thu 
Jan 5 18:20:42 2012 From: noreply at buildbot.pypy.org (hager) Date: Thu, 5 Jan 2012 18:20:42 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: implemented CALL_MALLOC_GC and removed old new_XXX ops Message-ID: <20120105172042.CB59A8224E@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51029:e6f908f85e4c Date: 2012-01-05 02:50 -0800 http://bitbucket.org/pypy/pypy/changeset/e6f908f85e4c/ Log: implemented CALL_MALLOC_GC and removed old new_XXX ops diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -799,6 +799,10 @@ _mixin_ = True + def emit_call_malloc_gc(self, op, arglocs, regalloc): + self.emit_call(op, arglocs, regalloc) + self.propagate_memoryerror_if_r3_is_null() + # from: ../x86/regalloc.py:750 # called from regalloc # XXX kill this function at some point @@ -817,14 +821,6 @@ self._emit_call(force_index, self.malloc_func_addr, [size_box], regalloc, result=result) - def emit_new(self, op, arglocs, regalloc): - # XXX do exception handling here! - pass - - def emit_new_with_vtable(self, op, arglocs, regalloc): - classint = arglocs[0].value - self.set_vtable(op.result, classint) - def set_vtable(self, box, vtable): if self.cpu.vtable_offset is not None: adr = rffi.cast(lltype.Signed, vtable) @@ -832,15 +828,6 @@ self.mc.store(r.SCRATCH.value, r.RES.value, self.cpu.vtable_offset) self.mc.free_scratch_reg() - def emit_new_array(self, op, arglocs, regalloc): - self.propagate_memoryerror_if_r3_is_null() - if len(arglocs) > 0: - value_loc, base_loc, ofs_length = arglocs - self.mc.store(value_loc.value, base_loc.value, ofs_length.value) - - emit_newstr = emit_new_array - emit_newunicode = emit_new_array - def write_new_force_index(self): # for shadowstack only: get a new, unused force_index number and # write it to FORCE_INDEX_OFS. 
Used to record the call shape diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -358,6 +358,10 @@ args[-1] = imm(n) return args + def prepare_call_malloc_gc(self, op): + args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] + return args + def _prepare_guard(self, op, args=None): if args is None: args = [] @@ -781,79 +785,6 @@ prepare_cast_ptr_to_int = prepare_same_as prepare_cast_int_to_ptr = prepare_same_as - def prepare_new(self, op): - gc_ll_descr = self.assembler.cpu.gc_ll_descr - # XXX introduce the fastpath for malloc - arglocs = self._prepare_args_for_new_op(op.getdescr()) - force_index = self.assembler.write_new_force_index() - self.assembler._emit_call(force_index, self.assembler.malloc_func_addr, - arglocs, self, result=op.result) - self.possibly_free_vars(arglocs) - self.possibly_free_var(op.result) - return [] - - def prepare_new_with_vtable(self, op): - classint = op.getarg(0).getint() - descrsize = heaptracker.vtable2descr(self.cpu, classint) - # XXX add fastpath for allocation - callargs = self._prepare_args_for_new_op(descrsize) - force_index = self.assembler.write_new_force_index() - self.assembler._emit_call(force_index, self.assembler.malloc_func_addr, - callargs, self, result=op.result) - self.possibly_free_vars(callargs) - self.possibly_free_var(op.result) - return [imm(classint)] - - def prepare_new_array(self, op): - gc_ll_descr = self.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newarray is not None: - # framework GC - box_num_elem = op.getarg(0) - if isinstance(box_num_elem, ConstInt): - num_elem = box_num_elem.value - # XXX implement fastpath for malloc - args = self.assembler.cpu.gc_ll_descr.args_for_new_array( - op.getdescr()) - argboxes = [ConstInt(x) for x in args] - argboxes.append(box_num_elem) - force_index = self.assembler.write_new_force_index() - 
self.assembler._emit_call(force_index, self.assembler.malloc_array_func_addr, - argboxes, self, result=op.result) - return [] - # boehm GC - itemsize, scale, basesize, ofs_length, _ = ( - self._unpack_arraydescr(op.getdescr())) - return self._malloc_varsize(basesize, ofs_length, itemsize, op) - - def prepare_newstr(self, op): - gc_ll_descr = self.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newstr is not None: - force_index = self.assembler.write_new_force_index() - self.assembler._emit_call(force_index, - self.assembler.malloc_str_func_addr, [op.getarg(0)], - self, op.result) - return [] - # boehm GC - ofs_items, itemsize, ofs = symbolic.get_array_token(rstr.STR, - self.cpu.translate_support_code) - assert itemsize == 1 - return self._malloc_varsize(ofs_items, ofs, itemsize, op) - - def prepare_newunicode(self, op): - gc_ll_descr = self.cpu.gc_ll_descr - if gc_ll_descr.get_funcptr_for_newunicode is not None: - force_index = self.assembler.write_new_force_index() - self.assembler._emit_call(force_index, - self.assembler.malloc_unicode_func_addr, - [op.getarg(0)], self, op.result) - return [] - # boehm GC - ofs_items, _, ofs = symbolic.get_array_token(rstr.UNICODE, - self.cpu.translate_support_code) - _, itemsize, _ = symbolic.get_array_token(rstr.UNICODE, - self.cpu.translate_support_code) - return self._malloc_varsize(ofs_items, ofs, itemsize, op) - def prepare_call(self, op): effectinfo = op.getdescr().get_extra_info() if effectinfo is not None: From noreply at buildbot.pypy.org Thu Jan 5 18:26:32 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 5 Jan 2012 18:26:32 +0100 (CET) Subject: [pypy-commit] pypy jit-usable_retrace_2: Make kill_consts separate from force_at_end_of_preamble to be able to call it earlier and thereby allow the jumpargs to be updated which is needed for non-virtal constants among the original jumpargs Message-ID: <20120105172632.ADA0582247@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-usable_retrace_2 
Changeset: r51030:75ff444ada2d Date: 2012-01-04 19:25 +0100 http://bitbucket.org/pypy/pypy/changeset/75ff444ada2d/ Log: Make kill_consts separate from force_at_end_of_preamble to be able to call it earlier and thereby allow the jumpargs to be updated which is needed for non-virtal constants among the original jumpargs diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -124,15 +124,18 @@ def get_key_box(self): return self.box - def force_at_end_of_preamble(self, already_forced, optforce): - if optforce.optimizer.kill_consts_at_end_of_preamble and self.is_constant(): + def kill_consts(self, already_killed, opt): + if self.is_constant(): constbox = self.box box = constbox.clonebox() op = ResOperation(rop.SAME_AS, [constbox], box) - optforce.optimizer._newoperations.append(op) + opt.optimizer._newoperations.append(op) return OptValue(box) return self + def force_at_end_of_preamble(self, already_forced, optforce): + return self + def get_args_for_fail(self, modifier): pass @@ -358,7 +361,6 @@ self.optimizer = self self.optpure = None self.optearlyforce = None - self.kill_consts_at_end_of_preamble = False if loop is not None: self.call_pure_results = loop.call_pure_results diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -152,7 +152,12 @@ # At the end of a bridge about to force a retrcae debug_print('Generalize for retrace') KillIntBounds(self.optimizer).apply() - self.optimizer.kill_consts_at_end_of_preamble = True + + jump_args = stop_label.getarglist() + already_killed = {} + values = [self.getvalue(box).kill_consts(already_killed, self.optimizer) + for box in jump_args] + stop_label.initarglist([v.get_key_box() for v in values]) def jump_to_start_label(self, 
start_label, stop_label): if not start_label or not stop_label: diff --git a/pypy/jit/metainterp/optimizeopt/virtualize.py b/pypy/jit/metainterp/optimizeopt/virtualize.py --- a/pypy/jit/metainterp/optimizeopt/virtualize.py +++ b/pypy/jit/metainterp/optimizeopt/virtualize.py @@ -40,6 +40,9 @@ return value return OptValue(self.force_box(optforce)) + def kill_consts(self, already_killed, opt): + return self + def make_virtual_info(self, modifier, fieldnums): if fieldnums is None: return self._make_virtual(modifier) @@ -128,6 +131,15 @@ self._fields[ofs] = self._fields[ofs].force_at_end_of_preamble(already_forced, optforce) return self + def kill_consts(self, already_killed, opt): + if self in already_killed: + return self + already_killed[self] = self + if self._fields: + for ofs in self._fields.keys(): + self._fields[ofs] = self._fields[ofs].kill_consts(already_killed, opt) + return self + def _really_force(self, optforce): op = self.source_op assert op is not None @@ -262,6 +274,14 @@ self._items[index] = self._items[index].force_at_end_of_preamble(already_forced, optforce) return self + def kill_consts(self, already_killed, opt): + if self in already_killed: + return self + already_killed[self] = self + for index in range(len(self._items)): + self._items[index] = self._items[index].kill_consts(already_killed, opt) + return self + def _really_force(self, optforce): assert self.source_op is not None if not we_are_translated(): @@ -357,6 +377,15 @@ self._items[index][descr] = self._items[index][descr].force_at_end_of_preamble(already_forced, optforce) return self + def kill_consts(self, already_killed, opt): + if self in already_killed: + return self + already_killed[self] = self + for index in range(len(self._items)): + for descr in self._items[index].keys(): + self._items[index][descr] = self._items[index][descr].kill_consts(already_killed, opt) + return self + def _make_virtual(self, modifier): return modifier.make_varraystruct(self.arraydescr, 
self._get_list_of_descrs()) diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -520,8 +520,7 @@ def getvalue(self, box): return self.optimizer.getvalue(box) - def state(self, box): - value = self.getvalue(box) + def state(self, value): box = value.get_key_box() try: info = self.info[box] @@ -529,7 +528,7 @@ if value.is_virtual(): self.info[box] = info = value.make_virtual_info(self, None) flds = self.fieldboxes[box] - info.fieldstate = [self.state(b) for b in flds] + info.fieldstate = [self.state(self.getvalue(b)) for b in flds] else: self.info[box] = info = self.make_not_virtual(value) return info @@ -550,7 +549,7 @@ value.get_args_for_fail(self) else: self.make_not_virtual(value) - return VirtualState([self.state(box) for box in jump_args]) + return VirtualState([self.state(value) for value in values]) def make_not_virtual(self, value): return NotVirtualStateInfo(value) diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py --- a/pypy/jit/metainterp/test/test_virtual.py +++ b/pypy/jit/metainterp/test/test_virtual.py @@ -949,6 +949,151 @@ self.check_aborted_count(0) self.check_target_token_count(3) + def test_nested_loops_const(self): + class Int(object): + def __init__(self, val): + self.val = val + bytecode = "iajb+JI" + def get_printable_location(i): + return "%d: %s" % (i, bytecode[i]) + myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'c', 'sa', 'i', 'j'], + get_printable_location=get_printable_location) + def f(n): + pc = sa = c = 0 + i = j = Int(0) + while pc < len(bytecode): + myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i, j=j, c=c) + op = bytecode[pc] + if op == 'i': + i = Int(0) + elif op == 'j': + j = Int(0) + elif op == '+': + sa += (i.val + 2) * (j.val + 2) + c + elif op == 'a': + i = Int(i.val + 1) + c = 42 + elif op == 'b': + j = 
Int(j.val + 1) + c = 7 + elif op == 'J': + if j.val < n: + pc -= 2 + myjitdriver.can_enter_jit(pc=pc, n=n, sa=sa, i=i, j=j, c=c) + continue + elif op == 'I': + if i.val < n: + pc -= 5 + myjitdriver.can_enter_jit(pc=pc, n=n, sa=sa, i=i, j=j, c=c) + continue + pc += 1 + return sa + + res = self.meta_interp(f, [10]) + assert res == f(10) + self.check_aborted_count(0) + self.check_target_token_count(3) + self.check_trace_count(3) + self.check_resops(int_mul=3) + + def test_nested_loops_const_array(self): + class Int(object): + def __init__(self, val): + self.val = val + bytecode = "iajb+JI" + def get_printable_location(i): + return "%d: %s" % (i, bytecode[i]) + myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'sa', 'i', 'j', 'c'], + get_printable_location=get_printable_location) + def f(n): + pc = sa = 0 + c = [0] + i = j = Int(0) + while pc < len(bytecode): + myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i, j=j, c=c) + op = bytecode[pc] + if op == 'i': + i = Int(0) + elif op == 'j': + j = Int(0) + elif op == '+': + sa += (i.val + 2) * (j.val + 2) + c[0] + elif op == 'a': + i = Int(i.val + 1) + c = [42, 2] + elif op == 'b': + j = Int(j.val + 1) + c = [7, 3] + elif op == 'J': + if j.val < n: + pc -= 2 + myjitdriver.can_enter_jit(pc=pc, n=n, sa=sa, i=i, j=j, c=c) + continue + elif op == 'I': + if i.val < n: + pc -= 5 + myjitdriver.can_enter_jit(pc=pc, n=n, sa=sa, i=i, j=j, c=c) + continue + pc += 1 + return sa + + res = self.meta_interp(f, [10]) + assert res == f(10) + self.check_aborted_count(0) + self.check_target_token_count(3) + self.check_trace_count(3) + self.check_resops(int_mul=3) + + def test_nested_loops_const_dict(self): + class Int(object): + def __init__(self, val): + self.val = val + bytecode = "iajb+JI" + def get_printable_location(i): + return "%d: %s" % (i, bytecode[i]) + myjitdriver = JitDriver(greens = ['pc'], reds = ['n', 'sa', 'i', 'j', 'c'], + get_printable_location=get_printable_location) + def f(n): + pc = sa = 0 + c = {} + i = j = Int(0) 
+ while pc < len(bytecode): + myjitdriver.jit_merge_point(pc=pc, n=n, sa=sa, i=i, j=j, c=c) + op = bytecode[pc] + if op == 'i': + i = Int(0) + elif op == 'j': + j = Int(0) + elif op == '+': + sa += (i.val + 2) * (j.val + 2) + c[0] + elif op == 'a': + i = Int(i.val + 1) + c = {} + c[0] = 42 + elif op == 'b': + j = Int(j.val + 1) + c = {} + c[0] = 7 + elif op == 'J': + if j.val < n: + pc -= 2 + myjitdriver.can_enter_jit(pc=pc, n=n, sa=sa, i=i, j=j, c=c) + continue + elif op == 'I': + if i.val < n: + pc -= 5 + myjitdriver.can_enter_jit(pc=pc, n=n, sa=sa, i=i, j=j, c=c) + continue + pc += 1 + return sa + + res = self.meta_interp(f, [10]) + assert res == f(10) + self.check_aborted_count(0) + self.check_target_token_count(3) + self.check_trace_count(3) + self.check_resops(int_mul=3) + class VirtualMiscTests: def test_multiple_equal_virtuals(self): From noreply at buildbot.pypy.org Thu Jan 5 18:26:34 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 5 Jan 2012 18:26:34 +0100 (CET) Subject: [pypy-commit] pypy jit-usable_retrace_2: Don't rename boxes in the label, only their bindings to values Message-ID: <20120105172634.9B3F682247@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-usable_retrace_2 Changeset: r51031:4d40b1df8e90 Date: 2012-01-04 20:07 +0100 http://bitbucket.org/pypy/pypy/changeset/4d40b1df8e90/ Log: Don't rename boxes in the label, only their bindings to values diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -157,7 +157,8 @@ already_killed = {} values = [self.getvalue(box).kill_consts(already_killed, self.optimizer) for box in jump_args] - stop_label.initarglist([v.get_key_box() for v in values]) + for i in range(len(jump_args)): + self.optimizer.make_equal_to(jump_args[i], values[i], replace=True) def jump_to_start_label(self, start_label, stop_label): if not start_label or not
stop_label: From noreply at buildbot.pypy.org Thu Jan 5 18:26:36 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 5 Jan 2012 18:26:36 +0100 (CET) Subject: [pypy-commit] pypy jit-usable_retrace_2: fix tests, we no longer specialize retraces on constants Message-ID: <20120105172636.AE2BA82247@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-usable_retrace_2 Changeset: r51032:76eace813ed4 Date: 2012-01-04 20:08 +0100 http://bitbucket.org/pypy/pypy/changeset/76eace813ed4/ Log: fix tests, we no longer specialize retraces on constants diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2593,12 +2593,19 @@ def f(n, limit): set_param(myjitdriver, 'retrace_limit', limit) - sa = i = a = 0 + sa = i = 0 + a = [] while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) - a = i/4 - a = hint(a, promote=True) - sa += a + if i/4 == 0: + a = [1, 2, 3] + elif i/4 == 1: + a = [1, 2, 3, 4] + elif i/4 == 2: + a = [1, 2, 3, 4, 5] + elif i/4 == 3: + a = [1, 2, 3, 4, 5, 6] + sa += len(a) i += 1 return sa assert self.meta_interp(f, [20, 2]) == f(20, 2) @@ -2614,12 +2621,19 @@ def f(n, limit): set_param(myjitdriver, 'retrace_limit', 3) set_param(myjitdriver, 'max_retrace_guards', limit) - sa = i = a = 0 + sa = i = 0 + a = [] while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a) - a = i/4 - a = hint(a, promote=True) - sa += a + if i/4 == 0: + a = [1, 2, 3] + elif i/4 == 1: + a = [1, 2, 3, 4] + elif i/4 == 2: + a = [1, 2, 3, 4, 5] + elif i/4 == 3: + a = [1, 2, 3, 4, 5, 6] + sa += len(a) i += 1 return sa assert self.meta_interp(f, [20, 1]) == f(20, 1) @@ -2634,16 +2648,23 @@ 'node']) def f(n, limit): set_param(myjitdriver, 'retrace_limit', limit) - sa = i = a = 0 + sa = i = 0 + a = [] node = [1, 2, 3] node[1] = n while i < n: myjitdriver.jit_merge_point(n=n, i=i, sa=sa, a=a, node=node) - a = i/4 - a = hint(a, promote=True) +
if i/4 == 0: + a = [1, 2, 3] + elif i/4 == 1: + a = [1, 2, 3, 4] + elif i/4 == 2: + a = [1, 2, 3, 4, 5] + elif i/4 == 3: + a = [1, 2, 3, 4, 5, 6] if i&1 == 0: sa += node[i%3] - sa += a + sa += len(a) i += 1 return sa assert self.meta_interp(f, [20, 2]) == f(20, 2) From noreply at buildbot.pypy.org Thu Jan 5 18:26:37 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 5 Jan 2012 18:26:37 +0100 (CET) Subject: [pypy-commit] pypy jit-usable_retrace_2: fix test (see comment) Message-ID: <20120105172637.DBADB82247@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-usable_retrace_2 Changeset: r51033:ed9ad0e9eacc Date: 2012-01-04 20:25 +0100 http://bitbucket.org/pypy/pypy/changeset/ed9ad0e9eacc/ Log: fix test (see comment) diff --git a/pypy/jit/metainterp/test/test_send.py b/pypy/jit/metainterp/test/test_send.py --- a/pypy/jit/metainterp/test/test_send.py +++ b/pypy/jit/metainterp/test/test_send.py @@ -377,10 +377,11 @@ res = self.meta_interp(f, [198], policy=StopAtXPolicy(State.externfn.im_func)) assert res == f(198) - # we get four TargetTokens: one for each of the 3 getvalue functions, - # and one entering from the interpreter (the preamble) + # We get three TargetTokens: preamble, first loop specialized to z=25 + # and retraced loop not specialized. The retraced loop can be used for + # both z=2 and z=1001 as retraces are not specialized to constants. self.check_jitcell_token_count(1) - self.check_target_token_count(4) + self.check_target_token_count(3) def test_two_behaviors(self): py.test.skip("XXX fix me!!!!!!!
problem in optimize.py") From noreply at buildbot.pypy.org Thu Jan 5 18:26:42 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Thu, 5 Jan 2012 18:26:42 +0100 (CET) Subject: [pypy-commit] pypy jit-usable_retrace_2: extract loops at end of bridges too Message-ID: <20120105172642.7A60782247@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: jit-usable_retrace_2 Changeset: r51034:ba02b282da2e Date: 2012-01-05 10:41 +0100 http://bitbucket.org/pypy/pypy/changeset/ba02b282da2e/ Log: extract loops at end of bridges too diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -44,6 +44,7 @@ # # run a child pypy-c with logging enabled logfile = self.filepath.new(ext='.log') + print logfile # cmdline = [sys.executable] if not import_site: @@ -53,6 +54,7 @@ for key, value in jitopts.items()] cmdline += ['--jit', ','.join(jitcmdline)] cmdline.append(str(self.filepath)) + print ' '.join(cmdline) # env={'PYPYLOG': self.log_string + ':' + str(logfile)} pipe = subprocess.Popen(cmdline, diff --git a/pypy/module/pypyjit/test_pypy_c/test_misc.py b/pypy/module/pypyjit/test_pypy_c/test_misc.py --- a/pypy/module/pypyjit/test_pypy_c/test_misc.py +++ b/pypy/module/pypyjit/test_pypy_c/test_misc.py @@ -21,7 +21,7 @@ # not virtual, then "i" is written and thus we get a new loop where # "i" is virtual. However, in this specific case the two loops happen # to contain the very same operations - loop0, loop1 = log.loops_by_filename(self.filepath) + loop0, loop1, loop2 = log.loops_by_filename(self.filepath) expected = """ i9 = int_le(i7, i8) @@ -39,6 +39,7 @@ # becomes constant 0 after the bridge and constant 1 at the end of the # loop. A bridge back to the preamble is produced instead.
assert loop1.match(expected) + assert loop2.match(expected) def test_factorial(self): def fact(n): diff --git a/pypy/tool/jitlogparser/storage.py b/pypy/tool/jitlogparser/storage.py --- a/pypy/tool/jitlogparser/storage.py +++ b/pypy/tool/jitlogparser/storage.py @@ -76,7 +76,12 @@ op.percentage = ((getattr(loop, 'count', 1) * 100) / max(getattr(parent, 'count', 1), 1)) loop.no = no - continue + + labels = [op for op in loop.operations if op.name == 'label'] + jumpop = loop.operations[-1] + if not (labels and jumpop.name == 'jump' and + jumpop.getdescr() == labels[-1].getdescr()): + continue res.append(loop) self.loops = res return res From noreply at buildbot.pypy.org Thu Jan 5 18:54:56 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 5 Jan 2012 18:54:56 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype-strings: close merged branch Message-ID: <20120105175456.1B7C282247@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype-strings Changeset: r51035:2adf19881a7c Date: 2012-01-05 11:50 -0600 http://bitbucket.org/pypy/pypy/changeset/2adf19881a7c/ Log: close merged branch From noreply at buildbot.pypy.org Thu Jan 5 18:54:57 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 5 Jan 2012 18:54:57 +0100 (CET) Subject: [pypy-commit] pypy numpy-ndim-size: close this branch, all its features were added elsewhere as far as I can tell Message-ID: <20120105175457.5400582247@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-ndim-size Changeset: r51036:51e67e28230a Date: 2012-01-05 11:52 -0600 http://bitbucket.org/pypy/pypy/changeset/51e67e28230a/ Log: close this branch, all its features were added elsewhere as far as I can tell From noreply at buildbot.pypy.org Thu Jan 5 18:54:58 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 5 Jan 2012 18:54:58 +0100 (CET) Subject: [pypy-commit] pypy numpy-sort: close branch, it's been totally invalidated, create a fresh branch to work on it Message-ID: 
<20120105175458.79DE582247@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-sort Changeset: r51037:aaab53d723c0 Date: 2012-01-05 11:52 -0600 http://bitbucket.org/pypy/pypy/changeset/aaab53d723c0/ Log: close branch, it's been totally invalidated, create a fresh branch to work on it From noreply at buildbot.pypy.org Thu Jan 5 18:54:59 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 5 Jan 2012 18:54:59 +0100 (CET) Subject: [pypy-commit] pypy numpy-dtype: close branch, a different approach was taken Message-ID: <20120105175459.B4AF082247@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-dtype Changeset: r51038:e1b50a7fd007 Date: 2012-01-05 11:52 -0600 http://bitbucket.org/pypy/pypy/changeset/e1b50a7fd007/ Log: close branch, a different approach was taken From noreply at buildbot.pypy.org Thu Jan 5 18:55:00 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 5 Jan 2012 18:55:00 +0100 (CET) Subject: [pypy-commit] pypy numpy-complex: close branch, different approach taken Message-ID: <20120105175500.EC8D282247@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-complex Changeset: r51039:1436740d3b9b Date: 2012-01-05 11:53 -0600 http://bitbucket.org/pypy/pypy/changeset/1436740d3b9b/ Log: close branch, different approach taken From noreply at buildbot.pypy.org Thu Jan 5 18:55:02 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Thu, 5 Jan 2012 18:55:02 +0100 (CET) Subject: [pypy-commit] pypy jit-raw-array-of-struct: close, never went anywhere and not needed anymore Message-ID: <20120105175502.1E2AE82247@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: jit-raw-array-of-struct Changeset: r51040:c260a0d96e73 Date: 2012-01-05 11:54 -0600 http://bitbucket.org/pypy/pypy/changeset/c260a0d96e73/ Log: close, never went anywhere and not needed anymore From noreply at buildbot.pypy.org Thu Jan 5 21:03:46 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 21:03:46 +0100 (CET) 
Subject: [pypy-commit] pypy better-jit-hooks: Implement inventing descrs for jumps and labels (and everything else) Message-ID: <20120105200346.C6E3582247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51041:6ee1610cf4a4 Date: 2012-01-05 21:58 +0200 http://bitbucket.org/pypy/pypy/changeset/6ee1610cf4a4/ Log: Implement inventing descrs for jumps and labels (and everything else) diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -92,8 +92,14 @@ def get_descr(self, poss_descr): if poss_descr.startswith('<'): return None - else: + try: return self._consts[poss_descr] + except KeyError: + int(poss_descr) + token = self.model.JitCellToken() + tt = self.model.TargetToken(token) + self._consts[poss_descr] = tt + return tt def box_for_var(self, elem): try: diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py +++ b/pypy/jit/tool/oparser_model.py @@ -6,7 +6,7 @@ from pypy.jit.metainterp.history import TreeLoop, JitCellToken from pypy.jit.metainterp.history import Box, BoxInt, BoxFloat from pypy.jit.metainterp.history import ConstInt, ConstObj, ConstPtr, ConstFloat - from pypy.jit.metainterp.history import BasicFailDescr + from pypy.jit.metainterp.history import BasicFailDescr, TargetToken from pypy.jit.metainterp.typesystem import llhelper from pypy.jit.metainterp.history import get_const_ptr_for_string diff --git a/pypy/jit/tool/test/test_oparser.py b/pypy/jit/tool/test/test_oparser.py --- a/pypy/jit/tool/test/test_oparser.py +++ b/pypy/jit/tool/test/test_oparser.py @@ -4,7 +4,8 @@ from pypy.jit.tool.oparser import parse, OpParser from pypy.jit.metainterp.resoperation import rop -from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken +from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken,\ + TargetToken class BaseTestOparser(object): @@ -243,6 
+244,16 @@ b = loop.getboxes() assert isinstance(b.sum0, BoxInt) + def test_label(self): + x = """ + [i0] + label(i0, descr=1) + jump(i0, descr=1) + """ + loop = self.parse(x) + assert loop.operations[0].getdescr() is loop.operations[1].getdescr() + assert isinstance(loop.operations[0].getdescr(), TargetToken) + class ForbiddenModule(object): def __init__(self, name, old_mod): From noreply at buildbot.pypy.org Thu Jan 5 21:03:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 21:03:47 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: invent new descrs only for labels Message-ID: <20120105200347.F212982247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51042:97c7263a2d0d Date: 2012-01-05 22:03 +0200 http://bitbucket.org/pypy/pypy/changeset/97c7263a2d0d/ Log: invent new descrs only for labels diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -89,17 +89,18 @@ assert typ == 'class' return self.model.ConstObj(ootype.cast_to_object(obj)) - def get_descr(self, poss_descr): + def get_descr(self, poss_descr, allow_invent): if poss_descr.startswith('<'): return None try: return self._consts[poss_descr] except KeyError: - int(poss_descr) - token = self.model.JitCellToken() - tt = self.model.TargetToken(token) - self._consts[poss_descr] = tt - return tt + if allow_invent: + int(poss_descr) + token = self.model.JitCellToken() + tt = self.model.TargetToken(token) + self._consts[poss_descr] = tt + return tt def box_for_var(self, elem): try: @@ -192,7 +193,8 @@ poss_descr = allargs[-1].strip() if poss_descr.startswith('descr='): - descr = self.get_descr(poss_descr[len('descr='):]) + descr = self.get_descr(poss_descr[len('descr='):], + opname == 'label') allargs = allargs[:-1] for arg in allargs: arg = arg.strip() diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py 
+++ b/pypy/jit/tool/oparser_model.py @@ -42,6 +42,10 @@ class JitCellToken(object): I_am_a_descr = True + class TargetToken(object): + def __init__(self, jct): + pass + class BasicFailDescr(object): I_am_a_descr = True From noreply at buildbot.pypy.org Thu Jan 5 21:50:48 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 21:50:48 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: add extra return values from assemble_loop/assemble_bridge Message-ID: <20120105205048.CB2D282247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51043:62b1ef8e5cd7 Date: 2012-01-05 22:41 +0200 http://bitbucket.org/pypy/pypy/changeset/62b1ef8e5cd7/ Log: add extra return values from assemble_loop/assemble_bridge diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -17,6 +17,7 @@ from pypy.rpython.llinterp import LLException from pypy.jit.codewriter import heaptracker, longlong from pypy.rlib.rarithmetic import intmask +from pypy.jit.backend.detect_cpu import autodetect_main_model_and_size def boxfloat(x): return BoxFloat(longlong.getfloatstorage(x)) @@ -2974,6 +2975,53 @@ res = self.cpu.get_latest_value_int(0) assert res == -10 + def test_compile_asmlen(self): + from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + import ctypes + ops = """ + [i0] + label(i0, descr=1) + i1 = int_add(i0, i0) + guard_true(i1, descr=faildesr) [i1] + jump(i1, descr=1) + """ + faildescr = BasicFailDescr(2) + loop = parse(ops, self.cpu, namespace=locals()) + faildescr = loop.operations[-2].getdescr() + jumpdescr = loop.operations[-1].getdescr() + bridge_ops = """ + [i0] + jump(i0, descr=jumpdescr) + """ + bridge = parse(bridge_ops, self.cpu, namespace=locals()) + looptoken = JitCellToken() + self.cpu.assembler.set_debug(False) + _, asm, asmlen = self.cpu.compile_loop(loop.inputargs, loop.operations, 
looptoken) + _, basm, basmlen = self.cpu.compile_bridge(faildescr, bridge.inputargs, + bridge.operations, + looptoken) + self.cpu.assembler.set_debug(True) # always on untranslated + assert asmlen != 0 + cpuname = autodetect_main_model_and_size() + if 'x86' in cpuname: + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + def checkops(mc, startline, ops): + for i in range(startline, len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i - startline]) + + data = ctypes.string_at(asm, asmlen) + mc = list(machine_code_dump(data, asm, cpuname)) + assert len(mc) == 5 + checkops(mc, 1, ['add', 'test', 'je', 'jmp']) + data = ctypes.string_at(basm, basmlen) + mc = list(machine_code_dump(data, basm, cpuname)) + assert len(mc) == 4 + checkops(mc, 1, ['lea', 'mov', 'jmp']) + else: + raise Exception("Implement this test for your CPU") + + def test_compile_bridge_with_target(self): # This test creates a loopy piece of code in a bridge, and builds another # unrelated loop that ends in a jump directly to this loopy bit of code. 
diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -411,6 +411,7 @@ '''adds the following attributes to looptoken: _x86_function_addr (address of the generated func, as an int) _x86_loop_code (debug: addr of the start of the ResOps) + _x86_fullsize (debug: full size including failure) _x86_debug_checksum ''' # XXX this function is too longish and contains some code @@ -476,7 +477,8 @@ name = "Loop # %s: %s" % (looptoken.number, loopname) self.cpu.profile_agent.native_code_written(name, rawstart, full_size) - return ops_offset + return (ops_offset, rawstart + looppos, + size_excluding_failure_stuff - looppos) def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): @@ -503,6 +505,7 @@ [loc.assembler() for loc in faildescr._x86_debug_faillocs]) regalloc = RegAlloc(self, self.cpu.translate_support_code) fail_depths = faildescr._x86_current_depths + startpos = self.mc.get_relative_pos() operations = regalloc.prepare_bridge(fail_depths, inputargs, arglocs, operations, self.current_clt.allgcrefs) @@ -537,7 +540,7 @@ name = "Bridge # %s" % (descr_number,) self.cpu.profile_agent.native_code_written(name, rawstart, fullsize) - return ops_offset + return ops_offset, startpos + rawstart, codeendpos - startpos def write_pending_failure_recoveries(self): # for each pending guard, generate the code of the recovery stub diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -416,7 +416,7 @@ ] inputargs = [i0] debug._log = dlog = debug.DebugLog() - ops_offset = self.cpu.compile_loop(inputargs, operations, looptoken) + ops_offset = self.cpu.compile_loop(inputargs, operations, looptoken)[0] debug._log = None # assert ops_offset is looptoken._x86_ops_offset From noreply at buildbot.pypy.org Thu Jan 5 
21:50:53 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 21:50:53 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: finish refactoring - move on_compile/on_compile hooks to jitportal, probably Message-ID: <20120105205053.0E1D182247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51044:74cc4b1b667e Date: 2012-01-05 22:50 +0200 http://bitbucket.org/pypy/pypy/changeset/74cc4b1b667e/ Log: finish refactoring - move on_compile/on_compile hooks to jitportal, probably breaks pypy diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -141,6 +141,7 @@ self._compile_loop_or_bridge(c, inputargs, operations, clt) old, oldindex = faildescr._compiled_fail llimpl.compile_redirect_fail(old, oldindex, c) + return None, 0, 0 def compile_loop(self, inputargs, operations, jitcell_token, log=True, name=''): @@ -155,6 +156,7 @@ clt.compiled_version = c jitcell_token.compiled_loop_token = clt self._compile_loop_or_bridge(c, inputargs, operations, clt) + return None, 0, 0 def free_loop_and_bridges(self, compiled_loop_token): for c in compiled_loop_token.loop_and_bridges: diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -296,8 +296,7 @@ patch_new_loop_to_load_virtualizable_fields(loop, jitdriver_sd) original_jitcell_token = loop.original_jitcell_token - jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, - loop.operations, type, greenkey) + portal = metainterp_sd.warmrunnerdesc.portal loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata original_jitcell_token.number = n = globaldata.loopnumbering @@ -311,11 +310,16 @@ metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - ops_offset = 
metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, name=loopname) + tp = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, + original_jitcell_token, + name=loopname) + ops_offset, asmstart, asmlen = tp finally: debug_stop("jit-backend") - metainterp_sd.profiler.end_backend() + metainterp_sd.profiler.end_backend() + portal.on_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, type, greenkey, + asmstart, asmlen) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() @@ -332,8 +336,7 @@ def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): n = metainterp_sd.cpu.get_fail_descr_number(faildescr) - jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, - original_loop_token, operations, n) + portal = metainterp_sd.warmrunnerdesc.portal if not we_are_translated(): show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) @@ -342,11 +345,15 @@ operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, - original_loop_token) + tp = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, + original_loop_token) + ops_offset, asmstart, asmlen = tp finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + portal.on_compile_bridge(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, + original_loop_token, operations, n, asmstart, + asmlen) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -117,7 +117,7 @@ def optimize_loop(self, ops, optops, 
call_pure_results=None): loop = self.parse(ops) - token = JitCellToken() + token = JitCellToken() loop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=TargetToken(token))] + \ loop.operations if loop.operations[-1].getopnum() == rop.JUMP: diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -56,8 +56,6 @@ greenfield_info = None result_type = result_kind portal_runner_ptr = "???" - on_compile = lambda *args: None - on_compile_bridge = lambda *args: None stats = history.Stats() cpu = CPUClass(rtyper, stats, None, False) diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -53,8 +53,6 @@ call_pure_results = {} class jitdriver_sd: warmstate = FakeState() - on_compile = staticmethod(lambda *args: None) - on_compile_bridge = staticmethod(lambda *args: None) virtualizable_info = None def test_compile_loop(): diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -10,57 +10,6 @@ def getloc2(g): return "in jitdriver2, with g=%d" % g -class JitDriverTests(object): - def test_on_compile(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = looptoken - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - i += 1 - - self.meta_interp(loop, [1, 4]) - assert sorted(called.keys()) == [(4, 1, "loop")] - self.meta_interp(loop, [2, 4]) - assert sorted(called.keys()) == [(4, 1, "loop"), - (4, 2, "loop")] - - def test_on_compile_bridge(self): - 
called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = loop - def on_compile_bridge(self, logger, orig_token, operations, n): - assert 'bridge' not in called - called['bridge'] = orig_token - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - if i >= 4: - i += 2 - i += 1 - - self.meta_interp(loop, [1, 10]) - assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] - - -class TestLLtypeSingle(JitDriverTests, LLJitMixin): - pass - class MultipleJitDriversTests(object): def test_simple(self): diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py --- a/pypy/jit/metainterp/test/test_jitportal.py +++ b/pypy/jit/metainterp/test/test_jitportal.py @@ -37,3 +37,60 @@ res = self.meta_interp(f, [100, 7], policy=JitPolicy(portal)) assert res == 721 assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 + + def test_on_compile(self): + called = {} + + class MyJitPortal(JitPortal): + def on_compile(self, jitdriver, logger, looptoken, operations, + type, greenkey, asmaddr, asmlen): + assert asmaddr == 0 + assert asmlen == 0 + called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken + + portal = MyJitPortal() + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + i += 1 + + self.meta_interp(loop, [1, 4], policy=JitPolicy(portal)) + assert sorted(called.keys()) == [(4, 1, "loop")] + self.meta_interp(loop, [2, 4], policy=JitPolicy(portal)) + assert sorted(called.keys()) == [(4, 1, "loop"), + (4, 2, "loop")] + + def test_on_compile_bridge(self): + called = {} + + class MyJitPortal(JitPortal): + def on_compile(self, jitdriver, logger, looptoken, operations, + type, greenkey, asmaddr, 
asmlen): + assert asmaddr == 0 + assert asmlen == 0 + called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken + + def on_compile_bridge(self, jitdriver, logger, orig_token, + operations, n, asmstart, asmlen): + assert 'bridge' not in called + called['bridge'] = orig_token + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + if i >= 4: + i += 2 + i += 1 + + self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitPortal())) + assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] + diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -741,8 +741,8 @@ greenkey where it started, reason is a string why it got aborted """ - def on_compile(self, jitdriver, logger, looptoken, operations, greenkey, - asmaddr, asmlen): + def on_compile(self, jitdriver, logger, looptoken, operations, type, + greenkey, asmaddr, asmlen): """ A hook called when loop is compiled. Overwrite for your own jitdriver if you want to do something special, like call applevel code. 
@@ -751,6 +751,7 @@ logger - an instance of jit.metainterp.logger.LogOperations asmaddr - (int) raw address of assembler block asmlen - assembler block length + type - either 'loop' or 'entry bridge' """ def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations, From noreply at buildbot.pypy.org Thu Jan 5 21:54:03 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 21:54:03 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: skip pointless test on llgraph Message-ID: <20120105205403.7179C82247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51045:404a51debbaa Date: 2012-01-05 22:53 +0200 http://bitbucket.org/pypy/pypy/changeset/404a51debbaa/ Log: skip pointless test on llgraph diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2976,6 +2976,9 @@ assert res == -10 def test_compile_asmlen(self): + from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU + if not isinstance(self.cpu, AbstractLLCPU): + py.test.skip("pointless test on non-asm") from pypy.jit.backend.x86.tool.viewcode import machine_code_dump import ctypes ops = """ From noreply at buildbot.pypy.org Thu Jan 5 22:00:35 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 22:00:35 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: fix a test Message-ID: <20120105210035.4F44C82247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51046:d329360b0a1c Date: 2012-01-05 23:00 +0200 http://bitbucket.org/pypy/pypy/changeset/d329360b0a1c/ Log: fix a test diff --git a/pypy/jit/backend/llsupport/test/test_runner.py b/pypy/jit/backend/llsupport/test/test_runner.py --- a/pypy/jit/backend/llsupport/test/test_runner.py +++ b/pypy/jit/backend/llsupport/test/test_runner.py @@ -8,6 +8,12 @@ class MyLLCPU(AbstractLLCPU): supports_floats = True + 
+ class assembler(object): + @staticmethod + def set_debug(flag): + pass + def compile_loop(self, inputargs, operations, looptoken): py.test.skip("llsupport test: cannot compile operations") From noreply at buildbot.pypy.org Thu Jan 5 22:30:51 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 22:30:51 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: add name attribute to jitdrivers. start shifting code around in module/pypyjit Message-ID: <20120105213051.1856582247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51047:032bbe1b32c3 Date: 2012-01-05 23:30 +0200 http://bitbucket.org/pypy/pypy/changeset/032bbe1b32c3/ Log: add name attribute to jitdrivers. start shifting code around in module/pypyjit diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -16,24 +16,28 @@ virtualizables=['frame'], reds=['result_size', 'frame', 'ri', 'self', 'result'], get_printable_location=signature.new_printable_location('numpy'), + name='numpy', ) all_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('all'), + name='numpy_all', ) any_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('any'), + name='numpy_any', ) slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['self', 'frame', 'source', 'res_iter'], get_printable_location=signature.new_printable_location('slice'), + name='numpy_slice', ) def _find_shape_and_elems(space, w_iterable): @@ -297,6 +301,7 @@ greens=['shapelen', 'sig'], reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'], 
get_printable_location=signature.new_printable_location(op_name), + name='numpy_' + op_name, ) def loop(self): sig = self.find_sig() diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -14,6 +14,7 @@ virtualizables = ["frame"], reds = ["frame", "self", "dtype", "value", "obj"], get_printable_location=new_printable_location('reduce'), + name='numpy_reduce', ) class W_Ufunc(Wrappable): diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -13,11 +13,8 @@ from pypy.interpreter.pycode import PyCode, CO_GENERATOR from pypy.interpreter.pyframe import PyFrame from pypy.interpreter.pyopcode import ExitFrame -from pypy.interpreter.gateway import unwrap_spec from opcode import opmap from pypy.rlib.nonconst import NonConstant -from pypy.jit.metainterp.resoperation import rop -from pypy.module.pypyjit.interp_resop import debug_merge_point_from_boxes PyFrame._virtualizable2_ = ['last_instr', 'pycode', 'valuestackdepth', 'locals_stack_w[*]', @@ -51,72 +48,19 @@ def should_unroll_one_iteration(next_instr, is_being_profiled, bytecode): return (bytecode.co_flags & CO_GENERATOR) != 0 -def wrap_oplist(space, logops, operations): - list_w = [] - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - list_w.append(space.wrap(debug_merge_point_from_boxes( - op.getarglist()))) - else: - list_w.append(space.wrap(logops.repr_of_resop(op))) - return list_w - class PyPyJitDriver(JitDriver): reds = ['frame', 'ec'] greens = ['next_instr', 'is_being_profiled', 'pycode'] virtualizables = ['frame'] - def on_compile(self, logger, looptoken, operations, type, next_instr, - is_being_profiled, ll_pycode): - from pypy.rpython.annlowlevel import cast_base_ptr_to_instance - - space = self.space - cache = space.fromcache(Cache) - if 
cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - pycode = cast_base_ptr_to_instance(PyCode, ll_pycode) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap(type), - space.newtuple([pycode, - space.wrap(next_instr), - space.wrap(is_being_profiled)]), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap('bridge'), - space.wrap(n), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - pypyjitdriver = PyPyJitDriver(get_printable_location = get_printable_location, get_jitcell_at = get_jitcell_at, set_jitcell_at = set_jitcell_at, confirm_enter_jit = confirm_enter_jit, can_never_inline = can_never_inline, should_unroll_one_iteration = - should_unroll_one_iteration) + should_unroll_one_iteration, + name='pypyjit') class __extend__(PyFrame): diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -6,6 +6,17 @@ from pypy.rpython.lltypesystem import lltype from pypy.rpython.annlowlevel import cast_base_ptr_to_instance from pypy.rpython.lltypesystem.rclass import OBJECT +from pypy.jit.metainterp.resoperation import rop + +def wrap_oplist(space, logops, operations): + list_w = [] + for op in operations: + if 
op.getopnum() == rop.DEBUG_MERGE_POINT: + list_w.append(space.wrap(debug_merge_point_from_boxes( + op.getarglist()))) + else: + list_w.append(space.wrap(logops.repr_of_resop(op))) + return list_w class W_DebugMergePoint(Wrappable): """ A class representing debug_merge_point JIT operation diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -3,6 +3,8 @@ from pypy.module.pypyjit.interp_jit import Cache from pypy.interpreter.error import OperationError from pypy.jit.metainterp.jitprof import counter_names +from pypy.module.pypyjit.interp_resop import wrap_oplist +from pypy.interpreter.pycode import PyCode class PyPyPortal(JitPortal): def on_abort(self, reason, jitdriver, greenkey): @@ -19,6 +21,51 @@ e.write_unraisable(space, "jit hook ", cache.w_abort_hook) cache.in_recursion = False + def on_compile(self, jitdriver, logger, looptoken, operations, type, + greenkey, asmstart, asmlen): + from pypy.rpython.annlowlevel import cast_base_ptr_to_instance + + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_compile_hook): + logops = logger._make_log_operations() + list_w = wrap_oplist(space, logops, operations) + pycode = cast_base_ptr_to_instance(PyCode, ll_pycode) + cache.in_recursion = True + try: + space.call_function(cache.w_compile_hook, + space.wrap('main'), + space.wrap(type), + space.newtuple([pycode, + space.wrap(next_instr), + space.wrap(is_being_profiled)]), + space.newlist(list_w)) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + cache.in_recursion = False + + def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations, + n, asm, asmlen): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_compile_hook): + logops = logger._make_log_operations() + list_w = wrap_oplist(space, logops, 
operations) + cache.in_recursion = True + try: + space.call_function(cache.w_compile_hook, + space.wrap('main'), + space.wrap('bridge'), + space.wrap(n), + space.newlist(list_w)) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + cache.in_recursion = False + pypy_portal = PyPyPortal() class PyPyJitPolicy(JitPolicy): diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -409,13 +409,16 @@ active = True # if set to False, this JitDriver is ignored virtualizables = [] + name = 'jitdriver' def __init__(self, greens=None, reds=None, virtualizables=None, get_jitcell_at=None, set_jitcell_at=None, get_printable_location=None, confirm_enter_jit=None, - can_never_inline=None, should_unroll_one_iteration=None): + can_never_inline=None, should_unroll_one_iteration=None, + name='jitdriver'): if greens is not None: self.greens = greens + self.name = name if reds is not None: self.reds = reds if not hasattr(self, 'greens') or not hasattr(self, 'reds'): diff --git a/pypy/rlib/rsre/rsre_jit.py b/pypy/rlib/rsre/rsre_jit.py --- a/pypy/rlib/rsre/rsre_jit.py +++ b/pypy/rlib/rsre/rsre_jit.py @@ -5,7 +5,7 @@ active = True def __init__(self, name, debugprint, **kwds): - JitDriver.__init__(self, **kwds) + JitDriver.__init__(self, name='rsre_' + name, **kwds) # def get_printable_location(*args): # we print based on indices in 'args'. 
We first print From noreply at buildbot.pypy.org Thu Jan 5 22:58:27 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 22:58:27 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: pass also ops_offset, for good measure Message-ID: <20120105215827.1CFBC82247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51048:9f4f6c879538 Date: 2012-01-05 23:34 +0200 http://bitbucket.org/pypy/pypy/changeset/9f4f6c879538/ Log: pass also ops_offset, for good measure diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -319,7 +319,7 @@ metainterp_sd.profiler.end_backend() portal.on_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, original_jitcell_token, loop.operations, type, greenkey, - asmstart, asmlen) + ops_offset, asmstart, asmlen) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() @@ -352,8 +352,8 @@ debug_stop("jit-backend") metainterp_sd.profiler.end_backend() portal.on_compile_bridge(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, - original_loop_token, operations, n, asmstart, - asmlen) + original_loop_token, operations, n, ops_offset, + asmstart, asmlen) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py --- a/pypy/jit/metainterp/test/test_jitportal.py +++ b/pypy/jit/metainterp/test/test_jitportal.py @@ -43,7 +43,7 @@ class MyJitPortal(JitPortal): def on_compile(self, jitdriver, logger, looptoken, operations, - type, greenkey, asmaddr, asmlen): + type, greenkey, ops_offset, asmaddr, asmlen): assert asmaddr == 0 assert asmlen == 0 called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken @@ -70,13 +70,13 @@ class MyJitPortal(JitPortal): def on_compile(self, jitdriver, logger, looptoken, 
operations, - type, greenkey, asmaddr, asmlen): + type, greenkey, ops_offset, asmaddr, asmlen): assert asmaddr == 0 assert asmlen == 0 called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken def on_compile_bridge(self, jitdriver, logger, orig_token, - operations, n, asmstart, asmlen): + operations, n, ops_offset, asmstart, asmlen): assert 'bridge' not in called called['bridge'] = orig_token From noreply at buildbot.pypy.org Thu Jan 5 22:58:28 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 22:58:28 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: oops Message-ID: <20120105215828.52D2582247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51049:9e3906158e08 Date: 2012-01-05 23:57 +0200 http://bitbucket.org/pypy/pypy/changeset/9e3906158e08/ Log: oops diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -745,7 +745,7 @@ """ def on_compile(self, jitdriver, logger, looptoken, operations, type, - greenkey, asmaddr, asmlen): + greenkey, ops_offset, asmaddr, asmlen): """ A hook called when loop is compiled. Overwrite for your own jitdriver if you want to do something special, like call applevel code. @@ -758,7 +758,7 @@ """ def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations, - fail_descr_no, asmaddr, asmlen): + fail_descr_no, ops_offset, asmaddr, asmlen): """ A hook called when a bridge is compiled. 
Overwrite for your own jitdriver if you want to do something special """ From noreply at buildbot.pypy.org Thu Jan 5 23:44:34 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 23:44:34 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: fix tests Message-ID: <20120105224434.BAAD682247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51050:ad94daae774d Date: 2012-01-06 00:13 +0200 http://bitbucket.org/pypy/pypy/changeset/ad94daae774d/ Log: fix tests diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -296,7 +296,6 @@ patch_new_loop_to_load_virtualizable_fields(loop, jitdriver_sd) original_jitcell_token = loop.original_jitcell_token - portal = metainterp_sd.warmrunnerdesc.portal loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata original_jitcell_token.number = n = globaldata.loopnumbering @@ -316,10 +315,12 @@ ops_offset, asmstart, asmlen = tp finally: debug_stop("jit-backend") - metainterp_sd.profiler.end_backend() - portal.on_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, - original_jitcell_token, loop.operations, type, greenkey, - ops_offset, asmstart, asmlen) + metainterp_sd.profiler.end_backend() + if metainterp_sd.warmrunnerdesc is not None: + portal = metainterp_sd.warmrunnerdesc.portal + portal.on_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, type, + greenkey, ops_offset, asmstart, asmlen) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() @@ -336,7 +337,6 @@ def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): n = metainterp_sd.cpu.get_fail_descr_number(faildescr) - portal = metainterp_sd.warmrunnerdesc.portal if not we_are_translated(): show_procedures(metainterp_sd) seen 
= dict.fromkeys(inputargs) @@ -351,9 +351,12 @@ finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() - portal.on_compile_bridge(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, - original_loop_token, operations, n, ops_offset, - asmstart, asmlen) + if metainterp_sd.warmrunnerdesc is not None: + portal = metainterp_sd.warmrunnerdesc.portal + portal.on_compile_bridge(jitdriver_sd.jitdriver, + metainterp_sd.logger_ops, + original_loop_token, operations, n, ops_offset, + asmstart, asmlen) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") From noreply at buildbot.pypy.org Thu Jan 5 23:44:35 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 5 Jan 2012 23:44:35 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: fix pypyjit module, more work tomorrow Message-ID: <20120105224435.E42C182247@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51051:e7ce390271a7 Date: 2012-01-06 00:43 +0200 http://bitbucket.org/pypy/pypy/changeset/e7ce390271a7/ Log: fix pypyjit module, more work tomorrow diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -7,8 +7,8 @@ interpleveldefs = { 'set_param': 'interp_jit.set_param', 'residual_call': 'interp_jit.residual_call', - 'set_compile_hook': 'interp_jit.set_compile_hook', - 'set_abort_hook': 'interp_jit.set_abort_hook', + 'set_compile_hook': 'interp_resop.set_compile_hook', + 'set_abort_hook': 'interp_resop.set_abort_hook', 'DebugMergePoint': 'interp_resop.W_DebugMergePoint', } diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -14,7 +14,6 @@ from pypy.interpreter.pyframe import PyFrame from pypy.interpreter.pyopcode import ExitFrame from opcode import opmap -from pypy.rlib.nonconst import NonConstant 
PyFrame._virtualizable2_ = ['last_instr', 'pycode', 'valuestackdepth', 'locals_stack_w[*]', @@ -167,48 +166,3 @@ '''For testing. Invokes callable(...), but without letting the JIT follow the call.''' return space.call_args(w_callable, __args__) - -class Cache(object): - in_recursion = False - - def __init__(self, space): - self.w_compile_hook = space.w_None - self.w_abort_hook = space.w_None - -def set_compile_hook(space, w_hook): - """ set_compile_hook(hook) - - Set a compiling hook that will be called each time a loop is compiled. - The hook will be called with the following signature: - hook(merge_point_type, loop_type, greenkey or guard_number, operations) - - for now merge point type is always `main` - - loop_type can be either `loop` `entry_bridge` or `bridge` - in case loop is not `bridge`, greenkey will be a set of constants - for jit merge point. in case it's `main` it'll be a tuple - (code, offset, is_being_profiled) - - Note that jit hook is not reentrant. It means that if the code - inside the jit hook is itself jitted, it will get compiled, but the - jit hook won't be called for that. - - XXX write down what else - """ - cache = space.fromcache(Cache) - cache.w_compile_hook = w_hook - cache.in_recursion = NonConstant(False) - return space.w_None - -def set_abort_hook(space, w_hook): - """ set_abort_hook(hook) - - Set a hook (callable) that will be called each time there is tracing - aborted due to some reason. 
The hook will be called with string describing - the reason as an argument - """ - cache = space.fromcache(Cache) - cache.w_abort_hook = w_hook - cache.in_recursion = NonConstant(False) - return space.w_None - diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -7,8 +7,76 @@ from pypy.rpython.annlowlevel import cast_base_ptr_to_instance from pypy.rpython.lltypesystem.rclass import OBJECT from pypy.jit.metainterp.resoperation import rop +from pypy.rlib.nonconst import NonConstant -def wrap_oplist(space, logops, operations): + +class Cache(object): + in_recursion = False + + def __init__(self, space): + self.w_compile_hook = space.w_None + self.w_abort_hook = space.w_None + +def wrap_greenkey(space, jitdriver, greenkey): + if jitdriver.name == 'pypyjit': + next_instr = greenkey[0].getint() + is_being_profiled = greenkey[1].getint() + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + greenkey[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, ll_code) + return space.newtuple([space.wrap(pycode), space.wrap(next_instr), + space.newbool(is_being_profiled)]) + else: + return space.wrap('who knows?') + +def set_compile_hook(space, w_hook): + """ set_compile_hook(hook) + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. 
+ + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where assembler starts, + can be accessed via ctypes, assembler_length is the length of compiled + asm + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + """ + cache = space.fromcache(Cache) + cache.w_compile_hook = w_hook + cache.in_recursion = NonConstant(False) + return space.w_None + +def set_abort_hook(space, w_hook): + """ set_abort_hook(hook) + + Set a hook (callable) that will be called each time there is tracing + aborted due to some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for abort, see documentation for set_compile_hook + for descriptions of other arguments. + """ + cache = space.fromcache(Cache) + cache.w_abort_hook = w_hook + cache.in_recursion = NonConstant(False) + return space.w_None + +def wrap_oplist(space, logops, operations, ops_offset): list_w = [] for op in operations: if op.getopnum() == rop.DEBUG_MERGE_POINT: diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -1,10 +1,8 @@ from pypy.jit.codewriter.policy import JitPolicy from pypy.rlib.jit import JitPortal -from pypy.module.pypyjit.interp_jit import Cache from pypy.interpreter.error import OperationError from pypy.jit.metainterp.jitprof import counter_names -from pypy.module.pypyjit.interp_resop import wrap_oplist -from pypy.interpreter.pycode import PyCode +from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey class PyPyPortal(JitPortal): def on_abort(self, reason, jitdriver, greenkey): @@ -16,52 +14,43 @@ cache.in_recursion = True try: space.call_function(cache.w_abort_hook, + space.wrap(jitdriver.name), + wrap_greenkey(space, jitdriver, greenkey),
space.wrap(counter_names[reason])) except OperationError, e: e.write_unraisable(space, "jit hook ", cache.w_abort_hook) cache.in_recursion = False def on_compile(self, jitdriver, logger, looptoken, operations, type, - greenkey, asmstart, asmlen): - from pypy.rpython.annlowlevel import cast_base_ptr_to_instance + greenkey, ops_offset, asmstart, asmlen): + self._compile_hook(jitdriver, logger, operations, type, + ops_offset, asmstart, asmlen, + wrap_greenkey(self.space, jitdriver, greenkey)) + def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations, + n, ops_offset, asmstart, asmlen): + self._compile_hook(jitdriver, logger, operations, 'bridge', + ops_offset, asmstart, asmlen, + self.space.wrap(n)) + + def _compile_hook(self, jitdriver, logger, operations, type, + ops_offset, asmstart, asmlen, w_arg): space = self.space cache = space.fromcache(Cache) if cache.in_recursion: return if space.is_true(cache.w_compile_hook): logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - pycode = cast_base_ptr_to_instance(PyCode, ll_pycode) + list_w = wrap_oplist(space, logops, operations, ops_offset) cache.in_recursion = True try: space.call_function(cache.w_compile_hook, - space.wrap('main'), + space.wrap(jitdriver.name), space.wrap(type), - space.newtuple([pycode, - space.wrap(next_instr), - space.wrap(is_being_profiled)]), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - - def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations, - n, asm, asmlen): - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap('bridge'), - space.wrap(n), - 
space.newlist(list_w)) + w_arg, + space.newlist(list_w), + space.wrap(asmstart), + space.wrap(asmlen)) except OperationError, e: e.write_unraisable(space, "jit hook ", cache.w_compile_hook) cache.in_recursion = False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -44,11 +44,13 @@ greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] def interp_on_compile(): - pypyjitdriver.on_compile(logger, JitCellToken(), oplist, 'loop', - 0, False, ll_code) + pypy_portal.on_compile(pypyjitdriver, logger, JitCellToken(), + oplist, 'loop', greenkey, {}, 0, 0) def interp_on_compile_bridge(): - pypyjitdriver.on_compile_bridge(logger, JitCellToken(), oplist, 0) + pypy_portal.on_compile_bridge(pypyjitdriver, logger, + JitCellToken(), oplist, 0, + {}, 0, 0) def interp_on_abort(): pypy_portal.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey) @@ -61,21 +63,21 @@ import pypyjit all = [] - def hook(*args): - assert args[0] == 'main' - assert args[1] in ['loop', 'bridge'] - all.append(args[2:]) + def hook(name, looptype, tuple_or_guard_no, ops, asmstart, asmlen): + all.append((name, looptype, tuple_or_guard_no, ops)) self.on_compile() pypyjit.set_compile_hook(hook) assert not all self.on_compile() assert len(all) == 1 - assert all[0][0][0].co_name == 'f' - assert all[0][0][1] == 0 - assert all[0][0][2] == False - assert len(all[0][1]) == 3 - assert 'int_add' in all[0][1][0] + elem = all[0] + assert elem[0] == 'pypyjit' + assert elem[2][0].co_name == 'f' + assert elem[2][1] == 0 + assert elem[2][2] == False + assert len(elem[3]) == 3 + assert 'int_add' in elem[3][0] self.on_compile_bridge() assert len(all) == 2 pypyjit.set_compile_hook(None) @@ -136,9 +138,9 @@ import pypyjit l = [] - def hook(reason): - l.append(reason) + def hook(jitdriver_name, greenkey, reason): + l.append((jitdriver_name, reason)) pypyjit.set_abort_hook(hook) 
self.on_abort() - assert l == ['ABORT_TOO_LONG'] + assert l == [('pypyjit', 'ABORT_TOO_LONG')] From noreply at buildbot.pypy.org Fri Jan 6 09:57:04 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 6 Jan 2012 09:57:04 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: work in progress on improving the JITted ops representation Message-ID: <20120106085704.40C3782007@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51052:b7cde06c0fc6 Date: 2012-01-06 10:56 +0200 http://bitbucket.org/pypy/pypy/changeset/b7cde06c0fc6/ Log: work in progress on improving the JITted ops representation diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -9,7 +9,7 @@ 'residual_call': 'interp_jit.residual_call', 'set_compile_hook': 'interp_resop.set_compile_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', - 'DebugMergePoint': 'interp_resop.W_DebugMergePoint', + 'ResOperation': 'interp_resop.WrappedOp', } def setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -77,44 +77,21 @@ return space.w_None def wrap_oplist(space, logops, operations, ops_offset): - list_w = [] - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - list_w.append(space.wrap(debug_merge_point_from_boxes( - op.getarglist()))) - else: - list_w.append(space.wrap(logops.repr_of_resop(op))) - return list_w + return [WrappedOp(op, ops_offset, logops) for op in operations] -class W_DebugMergePoint(Wrappable): - """ A class representing debug_merge_point JIT operation +class WrappedOp(Wrappable): + """ A class representing a single ResOperation, wrapped nicely """ - - def __init__(self, mp_no, offset, pycode): - self.mp_no = mp_no - self.offset = offset - self.pycode = pycode + def 
__init__(self, op, ops_offset, logops): + self.op = op + self.offset = ops_offset[op] + self.logops = logops # for __repr__ def descr_repr(self, space): - return space.wrap('DebugMergePoint()') + return space.wrap(self.logops.repr_of_resop(self.op)) - at unwrap_spec(mp_no=int, offset=int, pycode=PyCode) -def new_debug_merge_point(space, w_tp, mp_no, offset, pycode): - return W_DebugMergePoint(mp_no, offset, pycode) - -def debug_merge_point_from_boxes(boxes): - mp_no = boxes[0].getint() - offset = boxes[2].getint() - llcode = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), - boxes[4].getref_base()) - pycode = cast_base_ptr_to_instance(PyCode, llcode) - assert pycode is not None - return W_DebugMergePoint(mp_no, offset, pycode) - -W_DebugMergePoint.typedef = TypeDef( - 'DebugMergePoint', - __new__ = interp2app(new_debug_merge_point), - __doc__ = W_DebugMergePoint.__doc__, - __repr__ = interp2app(W_DebugMergePoint.descr_repr), - code = interp_attrproperty('pycode', W_DebugMergePoint), +WrappedOp.typedef = TypeDef( + 'ResOperation', + __doc__ = WrappedOp.__doc__, + __repr__ = interp2app(WrappedOp.descr_repr), ) diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -26,9 +26,9 @@ space = gettestobjspace(usemodules=('pypyjit',)) cls.space = space w_f = space.appexec([], """(): - def f(): + def function(): pass - return f + return function """) cls.w_f = w_f ll_code = cast_instance_to_base_ptr(w_f.code) @@ -42,15 +42,19 @@ guard_true(i3) [] """, namespace={'ptr0': code_gcref}).operations greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] + offset = {} + for i, op in enumerate(oplist): + offset[op] = i def interp_on_compile(): pypy_portal.on_compile(pypyjitdriver, logger, JitCellToken(), - oplist, 'loop', greenkey, {}, 0, 0) + oplist, 'loop', greenkey, offset, + 0, 0) def interp_on_compile_bridge(): 
             pypy_portal.on_compile_bridge(pypyjitdriver, logger,
-                                          JitCellToken(), oplist, 0,
-                                          {}, 0, 0)
+                                          JitCellToken(), oplist, 0,
+                                          offset, 0, 0)

         def interp_on_abort():
             pypy_portal.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey)
@@ -73,7 +77,7 @@
         assert len(all) == 1
         elem = all[0]
         assert elem[0] == 'pypyjit'
-        assert elem[2][0].co_name == 'f'
+        assert elem[2][0].co_name == 'function'
         assert elem[2][1] == 0
         assert elem[2][2] == False
         assert len(elem[3]) == 3
@@ -125,14 +129,9 @@
         pypyjit.set_compile_hook(hook)
         self.on_compile()
-        dmp = l[0][3][1]
-        assert isinstance(dmp, pypyjit.DebugMergePoint)
-        assert dmp.code is self.f.func_code
-
-    def test_creation(self):
-        import pypyjit
-        dmp = pypyjit.DebugMergePoint(0, 0, self.f.func_code)
-        assert dmp.code is self.f.func_code
+        op = l[0][3][1]
+        assert isinstance(op, pypyjit.ResOperation)
+        assert 'function' in repr(op)

     def test_on_abort(self):
         import pypyjit

From noreply at buildbot.pypy.org  Fri Jan 6 09:57:33 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 09:57:33 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: bah, fix translation
Message-ID: <20120106085733.9281982007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51053:45d0a0377adb
Date: 2012-01-06 10:57 +0200
http://bitbucket.org/pypy/pypy/changeset/45d0a0377adb/

Log: bah, fix translation

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -25,7 +25,7 @@
                                             greenkey[2].getref_base())
         pycode = cast_base_ptr_to_instance(PyCode, ll_code)
         return space.newtuple([space.wrap(pycode), space.wrap(next_instr),
-                               space.newbool(is_being_profiled)])
+                               space.newbool(bool(is_being_profiled))])
     else:
         return space.wrap('who knows?')

From noreply at buildbot.pypy.org  Fri Jan 6 10:06:45 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 10:06:45 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: expose a bit more and improve
	tests
Message-ID: <20120106090645.8EC9E82007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51054:967f730f60fb
Date: 2012-01-06 11:06 +0200
http://bitbucket.org/pypy/pypy/changeset/967f730f60fb/

Log: expose a bit more and improve tests

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -1,5 +1,5 @@

-from pypy.interpreter.typedef import TypeDef, interp_attrproperty
+from pypy.interpreter.typedef import TypeDef, GetSetProperty
 from pypy.interpreter.baseobjspace import Wrappable
 from pypy.interpreter.gateway import unwrap_spec, interp2app
 from pypy.interpreter.pycode import PyCode
@@ -90,8 +90,16 @@
     def descr_repr(self, space):
         return space.wrap(self.logops.repr_of_resop(self.op))

+    def descr_num(self, space):
+        return space.wrap(self.op.getopnum())
+
+    def descr_name(self, space):
+        return space.wrap(self.op.getopname())
+
 WrappedOp.typedef = TypeDef(
     'ResOperation',
     __doc__ = WrappedOp.__doc__,
     __repr__ = interp2app(WrappedOp.descr_repr),
+    name = GetSetProperty(WrappedOp.descr_name),
+    num = GetSetProperty(WrappedOp.descr_num),
 )
diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py
--- a/pypy/module/pypyjit/test/test_jit_hook.py
+++ b/pypy/module/pypyjit/test/test_jit_hook.py
@@ -1,24 +1,36 @@

 import py
 from pypy.conftest import gettestobjspace, option
+from pypy.interpreter.gateway import interp2app
 from pypy.interpreter.pycode import PyCode
-from pypy.interpreter.gateway import interp2app
 from pypy.jit.metainterp.history import JitCellToken, ConstInt, ConstPtr
-from pypy.jit.metainterp.resoperation import ResOperation, rop
+from pypy.jit.metainterp.resoperation import rop
 from pypy.jit.metainterp.logger import Logger
 from pypy.rpython.annlowlevel import (cast_instance_to_base_ptr,
                                       cast_base_ptr_to_instance)
 from pypy.rpython.lltypesystem import lltype, llmemory
+from pypy.rpython.lltypesystem.rclass import OBJECT
 from pypy.module.pypyjit.interp_jit import pypyjitdriver
 from pypy.module.pypyjit.policy import pypy_portal
 from pypy.jit.tool.oparser import parse
 from pypy.jit.metainterp.typesystem import llhelper
 from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG

+class MockJitDriverSD(object):
+    class warmstate(object):
+        @staticmethod
+        def get_location_str(boxes):
+            ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT),
+                                             boxes[2].getref_base())
+            pycode = cast_base_ptr_to_instance(PyCode, ll_code)
+            return pycode.co_name
+
 class MockSD(object):
     class cpu(object):
         ts = llhelper

+    jitdrivers_sd = [MockJitDriverSD]
+
 class AppTestJitHook(object):
     def setup_class(cls):
         if option.runappdirect:
@@ -62,6 +74,7 @@
         cls.w_on_compile = space.wrap(interp2app(interp_on_compile))
         cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge))
         cls.w_on_abort = space.wrap(interp2app(interp_on_abort))
+        cls.w_int_add_num = space.wrap(rop.INT_ADD)

     def test_on_compile(self):
         import pypyjit
@@ -81,7 +94,9 @@
         assert elem[2][1] == 0
         assert elem[2][2] == False
         assert len(elem[3]) == 3
-        assert 'int_add' in elem[3][0]
+        int_add = elem[3][0]
+        assert int_add.name == 'int_add'
+        assert int_add.num == self.int_add_num
         self.on_compile_bridge()
         assert len(all) == 2
         pypyjit.set_compile_hook(None)

From noreply at buildbot.pypy.org  Fri Jan 6 10:09:51 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 10:09:51 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: an attempt to fix translation
	(likely not working)
Message-ID: <20120106090951.B459582007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51055:37c20c819ee1
Date: 2012-01-06 11:09 +0200
http://bitbucket.org/pypy/pypy/changeset/37c20c819ee1/

Log: an attempt to fix translation (likely not working)

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -79,16 +79,23 @@
 def wrap_oplist(space, logops, operations, ops_offset):
     return [WrappedOp(op, ops_offset, logops) for op in operations]

+# annotation hint
+
+def dummy_repr_for_resop(op):
+    return NonConstant('stuff')
+
 class WrappedOp(Wrappable):
     """ A class representing a single ResOperation, wrapped nicely
     """
+    repr_for_resop = dummy_repr_for_resop
+
     def __init__(self, op, ops_offset, logops):
         self.op = op
         self.offset = ops_offset[op]
-        self.logops = logops # for __repr__
+        self.repr_of_resop = logops.repr_of_resop

     def descr_repr(self, space):
-        return space.wrap(self.logops.repr_of_resop(self.op))
+        return space.wrap(self.repr_of_resop(self.op))

     def descr_num(self, space):
         return space.wrap(self.op.getopnum())

From noreply at buildbot.pypy.org  Fri Jan 6 10:26:55 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 10:26:55 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: hopefully (ugly) fix
	translation.
Message-ID: <20120106092655.2405182007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51056:0002a1e634e4
Date: 2012-01-06 11:26 +0200
http://bitbucket.org/pypy/pypy/changeset/0002a1e634e4/

Log: hopefully (ugly) fix translation.
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -3,13 +3,13 @@
 from pypy.interpreter.baseobjspace import Wrappable
 from pypy.interpreter.gateway import unwrap_spec, interp2app
 from pypy.interpreter.pycode import PyCode
+from pypy.interpreter.error import OperationError
 from pypy.rpython.lltypesystem import lltype
 from pypy.rpython.annlowlevel import cast_base_ptr_to_instance
 from pypy.rpython.lltypesystem.rclass import OBJECT
-from pypy.jit.metainterp.resoperation import rop
+from pypy.jit.metainterp.resoperation import rop, ResOperation
 from pypy.rlib.nonconst import NonConstant

-
 class Cache(object):
     in_recursion = False
@@ -79,23 +79,27 @@
 def wrap_oplist(space, logops, operations, ops_offset):
     return [WrappedOp(op, ops_offset, logops) for op in operations]

-# annotation hint
-
-def dummy_repr_for_resop(op):
-    return NonConstant('stuff')
+@unwrap_spec(no=int)
+def new_resop(space, w_tp, no):
+    # this is mostly an annotation hint
+    if NonConstant(True):
+        raise OperationError(space.w_ValueError,
+                             space.wrap("for annotation only"))
+    return space.wrap(WrappedOp(ResOperation(no, [], None), -1))

 class WrappedOp(Wrappable):
     """ A class representing a single ResOperation, wrapped nicely
     """
-    repr_for_resop = dummy_repr_for_resop
-
-    def __init__(self, op, ops_offset, logops):
+    def __init__(self, op, ops_offset, logops=None):
         self.op = op
         self.offset = ops_offset[op]
-        self.repr_of_resop = logops.repr_of_resop
+        if logops is not None:
+            self.repr_of_resop = logops.repr_of_resop(op) # XXX fix this, horribly inefficient
+        else:
+            self.repr_of_resop = "repr"

     def descr_repr(self, space):
-        return space.wrap(self.repr_of_resop(self.op))
+        return space.wrap(self.repr_of_resop)

     def descr_num(self, space):
         return space.wrap(self.op.getopnum())
@@ -105,8 +109,10 @@

 WrappedOp.typedef = TypeDef(
     'ResOperation',
+    __new__ = interp2app(new_resop),
     __doc__ = WrappedOp.__doc__,
     __repr__ = interp2app(WrappedOp.descr_repr),
     name = GetSetProperty(WrappedOp.descr_name),
     num = GetSetProperty(WrappedOp.descr_num),
 )
+WrappedOp.acceptable_as_base_class = False

From noreply at buildbot.pypy.org  Fri Jan 6 10:33:34 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 10:33:34 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: another attempt
Message-ID: <20120106093335.022FB82007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51057:4e8f6daccede
Date: 2012-01-06 11:33 +0200
http://bitbucket.org/pypy/pypy/changeset/4e8f6daccede/

Log: another attempt

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -77,7 +77,7 @@
         return space.w_None

 def wrap_oplist(space, logops, operations, ops_offset):
-    return [WrappedOp(op, ops_offset, logops) for op in operations]
+    return [WrappedOp(op, ops_offset[op], logops) for op in operations]

 @unwrap_spec(no=int)
 def new_resop(space, w_tp, no):
@@ -85,14 +85,15 @@
     if NonConstant(True):
         raise OperationError(space.w_ValueError,
                              space.wrap("for annotation only"))
-    return space.wrap(WrappedOp(ResOperation(no, [], None), -1))
+    op = ResOperation(no, [], None)
+    return space.wrap(WrappedOp(op, 13))

 class WrappedOp(Wrappable):
     """ A class representing a single ResOperation, wrapped nicely
     """
-    def __init__(self, op, ops_offset, logops=None):
+    def __init__(self, op, offset, logops=None):
         self.op = op
-        self.offset = ops_offset[op]
+        self.offset = offset
         if logops is not None:
             self.repr_of_resop = logops.repr_of_resop(op) # XXX fix this, horribly inefficient
         else:

From noreply at buildbot.pypy.org  Fri Jan 6 10:36:12 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 10:36:12 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: yet another go
Message-ID: <20120106093612.1536882007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51058:d8f5fca2919d
Date: 2012-01-06 11:35 +0200
http://bitbucket.org/pypy/pypy/changeset/d8f5fca2919d/

Log: yet another go

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -77,7 +77,7 @@
         return space.w_None

 def wrap_oplist(space, logops, operations, ops_offset):
-    return [WrappedOp(op, ops_offset[op], logops) for op in operations]
+    return [WrappedOp(op, ops_offset[op], logops.repr_of_resop(op)) for op in operations]

 @unwrap_spec(no=int)
 def new_resop(space, w_tp, no):
@@ -86,18 +86,15 @@
         raise OperationError(space.w_ValueError,
                              space.wrap("for annotation only"))
     op = ResOperation(no, [], None)
-    return space.wrap(WrappedOp(op, 13))
+    return space.wrap(WrappedOp(op, NonConstant(13), NonConstant('repr')))

 class WrappedOp(Wrappable):
     """ A class representing a single ResOperation, wrapped nicely
     """
-    def __init__(self, op, offset, logops=None):
+    def __init__(self, op, offset, repr_of_resop):
         self.op = op
         self.offset = offset
-        if logops is not None:
-            self.repr_of_resop = logops.repr_of_resop(op) # XXX fix this, horribly inefficient
-        else:
-            self.repr_of_resop = "repr"
+        self.repr_of_resop = repr_of_resop

     def descr_repr(self, space):
         return space.wrap(self.repr_of_resop)

From noreply at buildbot.pypy.org  Fri Jan 6 10:53:11 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 10:53:11 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: be slightly more abstract
Message-ID: <20120106095311.6E04582007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51059:6895bfac2d92
Date: 2012-01-06 11:52 +0200
http://bitbucket.org/pypy/pypy/changeset/6895bfac2d92/

Log: be slightly more abstract

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -81,11 +81,13 @@

 @unwrap_spec(no=int)
 def new_resop(space, w_tp, no):
+    from pypy.jit.metainterp.history import AbstractValue, AbstractDescr
     # this is mostly an annotation hint
     if NonConstant(True):
         raise OperationError(space.w_ValueError,
                              space.wrap("for annotation only"))
-    op = ResOperation(no, [], None)
+    op = ResOperation(no, [AbstractValue()], AbstractValue(),
+                      descr=AbstractDescr())
     return space.wrap(WrappedOp(op, NonConstant(13), NonConstant('repr')))

 class WrappedOp(Wrappable):

From noreply at buildbot.pypy.org  Fri Jan 6 11:07:03 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 11:07:03 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: bah, can be none
Message-ID: <20120106100703.9A72682007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51060:1d703b15bd80
Date: 2012-01-06 12:06 +0200
http://bitbucket.org/pypy/pypy/changeset/1d703b15bd80/

Log: bah, can be none

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -86,8 +86,11 @@
     if NonConstant(True):
         raise OperationError(space.w_ValueError,
                              space.wrap("for annotation only"))
-    op = ResOperation(no, [AbstractValue()], AbstractValue(),
-                      descr=AbstractDescr())
+    if no:
+        op = ResOperation(no, [AbstractValue()], AbstractValue(),
+                          descr=AbstractDescr())
+    else:
+        op = ResOperation(no, [], None, descr=None)
     return space.wrap(WrappedOp(op, NonConstant(13), NonConstant('repr')))

 class WrappedOp(Wrappable):

From noreply at buildbot.pypy.org  Fri Jan 6 11:31:51 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 11:31:51 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: wtf
Message-ID: <20120106103151.8FA9182007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51061:307dbbf5067d
Date: 2012-01-06 12:31 +0200
http://bitbucket.org/pypy/pypy/changeset/307dbbf5067d/

Log: wtf

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -90,7 +90,7 @@
         op = ResOperation(no, [AbstractValue()], AbstractValue(),
                           descr=AbstractDescr())
     else:
-        op = ResOperation(no, [], None, descr=None)
+        op = ResOperation(no, [None], None, descr=None)
     return space.wrap(WrappedOp(op, NonConstant(13), NonConstant('repr')))

 class WrappedOp(Wrappable):

From noreply at buildbot.pypy.org  Fri Jan 6 11:50:13 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 11:50:13 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: grrrr;
Message-ID: <20120106105013.5275482007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51062:7de1c76c02c4
Date: 2012-01-06 12:49 +0200
http://bitbucket.org/pypy/pypy/changeset/7de1c76c02c4/

Log: grrrr;

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -89,6 +89,7 @@
     if no:
         op = ResOperation(no, [AbstractValue()], AbstractValue(),
                           descr=AbstractDescr())
+        op.setdescr(None)
     else:
         op = ResOperation(no, [None], None, descr=None)
     return space.wrap(WrappedOp(op, NonConstant(13), NonConstant('repr')))

From noreply at buildbot.pypy.org  Fri Jan 6 12:08:42 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 12:08:42 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: make sure we can modify the
	list
Message-ID: <20120106110842.60FE482007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51063:3826773898b1
Date: 2012-01-06 13:07 +0200
http://bitbucket.org/pypy/pypy/changeset/3826773898b1/

Log: make sure we can modify the list

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -92,6 +92,7 @@
         op.setdescr(None)
     else:
         op = ResOperation(no, [None], None, descr=None)
+    op.setarg(0, AbstractValue()) # list is mutated
     return space.wrap(WrappedOp(op, NonConstant(13), NonConstant('repr')))

 class WrappedOp(Wrappable):

From noreply at buildbot.pypy.org  Fri Jan 6 12:22:38 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 12:22:38 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: another hack...
Message-ID: <20120106112238.C7BAA82007@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51064:18807d528b49
Date: 2012-01-06 13:22 +0200
http://bitbucket.org/pypy/pypy/changeset/18807d528b49/

Log: another hack...

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -92,7 +92,7 @@
         op.setdescr(None)
     else:
         op = ResOperation(no, [None], None, descr=None)
-    op.setarg(0, AbstractValue()) # list is mutated
+    op.setarg(NonConstant(0), AbstractValue()) # list is mutated
     return space.wrap(WrappedOp(op, NonConstant(13), NonConstant('repr')))

 class WrappedOp(Wrappable):

From noreply at buildbot.pypy.org  Fri Jan 6 12:23:58 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Fri, 6 Jan 2012 12:23:58 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: turn AxisIterator inside out
Message-ID: <20120106112358.8D52582007@wyvern.cs.uni-duesseldorf.de>

Author: mattip
Branch: numpypy-axisops
Changeset: r51065:44fd891a3a1c
Date: 2012-01-06 13:01 +0200
http://bitbucket.org/pypy/pypy/changeset/44fd891a3a1c/

Log: turn AxisIterator inside out

diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py
--- a/pypy/module/micronumpy/interp_iter.py
+++ b/pypy/module/micronumpy/interp_iter.py
@@ -103,60 +103,87 @@
     def next(self, shapelen):
         return self

-def axis_iter_from_arr(arr, dim=-1, start=None):
-    if start is None:
-        start = []
+def axis_iter_from_arr(arr, dim=-1):
     # The assert is needed for zjit tests
     from pypy.module.micronumpy.interp_numarray import ConcreteArray
     assert isinstance(arr, ConcreteArray)
     return AxisIterator(arr.start, arr.strides, arr.backstrides, arr.shape,
-                        dim, start)
+                        dim)

 class AxisIterator(object):
-    """ This object will return offsets of each start of a stride on the
-        desired dimension, starting at "start" which is an index along
-        each axis
+    """ Accept an addition argument dim
+        Redorder the dimensions to iterate over dim most often.
+        Set a flag at the end of each run over dim.
     """
-    def __init__(self, arr_start, strides, backstrides, shape, dim, start):
+    def __init__(self, arr_start, strides, backstrides, shape, dim):
         self.shape = shape
         self.shapelen = len(shape)
         self.indices = [0] * len(shape)
-        self.done = False
+        self._done = False
+        self.axis_done = False
         self.offset = arr_start
-        self.dim = len(shape) - 1
+        self.dim = dim
+        self.dim_order = []
+        if self.dim >=0:
+            self.dim_order.append(self.dim)
+        for i in range(self.shapelen - 1, -1, -1):
+            if i == self.dim:
+                continue
+            self.dim_order.append(i)
         self.strides = strides
         self.backstrides = backstrides
-        if dim >= 0:
-            self.dim = dim
-        if len(start) == len(shape):
-            for i in range(len(start)):
-                self.offset += strides[i] * start[i]
+
+    def done(self):
+        return self._done

     def next(self, shapelen):
         #shapelen will always be one less than self.shapelen
         offset = self.offset
+        axis_done = False
         indices = [0] * self.shapelen
         for i in range(self.shapelen):
            indices[i] = self.indices[i]
-        for i in range(self.shapelen - 1, -1, -1):
-            if i == self.dim:
-                continue
+        for i in self.dim_order:
             if indices[i] < self.shape[i] - 1:
                 indices[i] += 1
                 offset += self.strides[i]
                 break
             else:
+                if i == self.dim:
+                    axis_done = True
                 indices[i] = 0
                 offset -= self.backstrides[i]
         else:
-            self.done = True
+            self._done = True
         res = instantiate(AxisIterator)
+        res.axis_done = axis_done
         res.offset = offset
         res.indices = indices
         res.strides = self.strides
+        res.dim_order = self.dim_order
        res.backstrides = self.backstrides
         res.shape = self.shape
         res.shapelen = self.shapelen
         res.dim = self.dim
-        res.done = self.done
+        res._done = self._done
         return res
+
+# ------ other iterators that are not part of the computation frame ----------
+class SkipLastAxisIterator(object):
+    def __init__(self, arr):
+        self.arr = arr
+        self.indices = [0] * (len(arr.shape) - 1)
+        self.done = False
+        self.offset = arr.start
+    def next(self):
+        for i in range(len(self.arr.shape) - 2, -1, -1):
+            if self.indices[i] < self.arr.shape[i] - 1:
+                self.indices[i] += 1
+                self.offset += self.arr.strides[i]
+                break
+            else:
+                self.indices[i] = 0
+                self.offset -= self.arr.backstrides[i]
+        else:
+            self.done = True
+
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -8,8 +8,8 @@
 from pypy.rpython.lltypesystem import lltype, rffi
 from pypy.tool.sourcetools import func_with_new_name
 from pypy.rlib.rstring import StringBuilder
-from pypy.module.micronumpy.interp_iter import ArrayIterator,\
-     view_iter_from_arr, OneDimIterator, axis_iter_from_arr
+from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\
+     view_iter_from_arr, SkipLastAxisIterator

 numpy_driver = jit.JitDriver(
     greens=['shapelen', 'sig'],
@@ -603,11 +603,12 @@
     def getitem(self, item):
         raise NotImplementedError

-    def find_sig(self, res_shape=None):
+    def find_sig(self, res_shape=None, arr=None):
         """ find a correct signature for the array
         """
         res_shape = res_shape or self.shape
-        return signature.find_sig(self.create_sig(res_shape), self)
+        arr = arr or self
+        return signature.find_sig(self.create_sig(res_shape), arr)

     def descr_array_iface(self, space):
         if not self.shape:
@@ -756,9 +757,10 @@
         for s in shape:
             self.size *= s
         self.binfunc = binfunc
-        self.res_dtype = res_dtype
+        self.dtype = res_dtype
         self.dim = dim
         self.identity = identity
+        self.computing = False

     def _del_sources(self):
         self.values = None
@@ -767,46 +769,42 @@
     def create_sig(self, res_shape):
         if self.forced_result is not None:
             return self.forced_result.create_sig(res_shape)
-        return signature.ReduceSignature(self.binfunc, self.name, self.res_dtype,
-                                         signature.ViewSignature(self.res_dtype),
-                                         self.values.create_sig(res_shape))
+        return signature.ReduceSignature(self.binfunc, self.name, self.dtype,
+                                         signature.ScalarSignature(self.dtype),
+                                         self.values.create_sig(res_shape))
+
+    def get_identity(self, sig, frame, shapelen):
+        #XXX does this allocate? Yes :(
+        #XXX is this inlinable? Yes :)
+        if self.identity is None:
+            value = sig.eval(frame, self.values).convert_to(self.dtype)
+            frame.next(shapelen)
+        else:
+            value = self.identity.convert_to(self.dtype)
+        return value

     def compute(self):
-        dtype = self.res_dtype
+        self.computing = True
+        dtype = self.dtype
         result = W_NDimArray(self.size, self.shape, dtype)
         self.values = self.values.get_concrete()
         shapelen = len(result.shape)
         objlen = len(self.values.shape)
-        target_len = self.values.shape[self.dim]
-        sig = self.values.find_sig(result.shape)
-        #sig = self.create_sig(result.shape)
+        sig = self.find_sig(res_shape=result.shape,arr=self.values)
         ri = ArrayIterator(result.size)
-        si = axis_iter_from_arr(self.values, self.dim)
-        while not ri.done():
-            # explanation: we want to start the frame at the beginning of
-            # an axis: use si.indices to create a chunk (slice)
-            # in self.values
-            chunks = []
-            for i in range(objlen):
-                if i == self.dim:
-                    chunks.append((0, target_len, 1, target_len))
-                else:
-                    chunks.append((si.indices[i], 0, 0, 1))
-            frame = sig.create_frame(self.values,
-                                     res_shape=[target_len], chunks = [chunks, ])
-            if self.identity is None:
-                value = sig.eval(frame, self.values).convert_to(dtype)
-                frame.next(shapelen)
-            else:
-                value = self.identity.convert_to(dtype)
-            while not frame.done():
-                assert isinstance(sig, signature.ViewSignature)
-                nextval = sig.eval(frame, self.values).convert_to(dtype)
-                value = self.binfunc(dtype, value, nextval)
-                frame.next(shapelen)
+        frame = sig.create_frame(self.values, dim=self.dim)
+        value = self.get_identity(sig, frame, shapelen)
+        while not frame.done():
+            #XXX add jit_merge_point ?
+            if frame.iterators[0].axis_done:
+                value = self.get_identity(sig, frame, shapelen)
+                ri = ri.next(shapelen)
+            assert isinstance(sig, signature.ReduceSignature)
+            nextval = sig.eval(frame, self.values).convert_to(dtype)
+            value = self.binfunc(dtype, value, nextval)
             result.dtype.setitem(result.storage, ri.offset, value)
-            ri = ri.next(shapelen)
-            si = si.next(shapelen)
+            frame.next(shapelen)
+        assert ri.done
         return result
@@ -1036,19 +1034,19 @@
                 self.size * itemsize
             )
         else:
-            dest = axis_iter_from_arr(self)
-            source = axis_iter_from_arr(w_value)
+            dest = SkipLastAxisIterator(self)
+            source = SkipLastAxisIterator(w_value)
             while not dest.done:
                 rffi.c_memcpy(
                     rffi.ptradd(self.storage, dest.offset * itemsize),
                     rffi.ptradd(w_value.storage, source.offset * itemsize),
                     self.shape[-1] * itemsize
                 )
-                source = source.next(shapelen)
-                dest = dest.next(shapelen)
+                source.next()
+                dest.next()

     def _sliceloop(self, source, res_shape):
-        sig = source.find_sig(res_shape)
+        sig = source.find_sig(res_shape=res_shape)
         frame = sig.create_frame(source, res_shape)
         res_iter = view_iter_from_arr(self)
         shapelen = len(res_shape)
diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py
--- a/pypy/module/micronumpy/interp_ufuncs.py
+++ b/pypy/module/micronumpy/interp_ufuncs.py
@@ -135,7 +135,7 @@
         sig = find_sig(ReduceSignature(self.func, self.name, dtype,
                                        ScalarSignature(dtype),
                                        obj.create_sig(obj.shape)), obj)
-        frame = sig.create_frame(obj)
+        frame = sig.create_frame(obj,dim=-1)
         if self.identity is None:
             value = sig.eval(frame, obj).convert_to(dtype)
             frame.next(shapelen)
diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py
--- a/pypy/module/micronumpy/signature.py
+++ b/pypy/module/micronumpy/signature.py
@@ -1,7 +1,7 @@
 from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash
 from pypy.rlib.rarithmetic import intmask
 from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \
-     OneDimIterator, ConstantIterator
+     OneDimIterator, ConstantIterator, axis_iter_from_arr
 from pypy.module.micronumpy.strides import calculate_slice_strides
 from pypy.rlib.jit import hint, unroll_safe, promote
@@ -95,13 +95,13 @@
         allnumbers.append(no)
         self.iter_no = no

-    def create_frame(self, arr, res_shape=None, chunks=None):
+    def create_frame(self, arr, res_shape=None, chunks=None, dim=-1):
         if chunks is None:
             chunks = []
         res_shape = res_shape or arr.shape
         iterlist = []
         arraylist = []
-        self._create_iter(iterlist, arraylist, arr, res_shape, chunks)
+        self._create_iter(iterlist, arraylist, arr, res_shape, chunks, dim)
         return NumpyEvalFrame(iterlist, arraylist)
@@ -143,7 +143,7 @@
         assert isinstance(concr, ConcreteArray)
         self.array_no = _add_ptr_to_cache(concr.storage, cache)

-    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist):
+    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim):
         from pypy.module.micronumpy.interp_numarray import ConcreteArray
         concr = arr.get_concrete()
         assert isinstance(concr, ConcreteArray)
@@ -171,7 +171,7 @@
     def _invent_array_numbering(self, arr, cache):
         pass

-    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist):
+    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim):
         if self.iter_no >= len(iterlist):
             iter = ConstantIterator()
             iterlist.append(iter)
@@ -212,12 +212,12 @@
         assert isinstance(other, VirtualSliceSignature)
         return self.child.eq(other.child, compare_array_no)

-    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist):
+    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim):
         from pypy.module.micronumpy.interp_numarray import VirtualSlice
         assert isinstance(arr, VirtualSlice)
         chunklist.append(arr.chunks)
         self.child._create_iter(iterlist, arraylist, arr.child, res_shape,
-                                chunklist)
+                                chunklist, dim)

     def eval(self, frame, arr):
         from pypy.module.micronumpy.interp_numarray import VirtualSlice
@@ -253,11 +253,11 @@
         assert isinstance(arr, Call1)
         self.child._invent_array_numbering(arr.values, cache)

-    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist):
+    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim):
         from pypy.module.micronumpy.interp_numarray import Call1
         assert isinstance(arr, Call1)
         self.child._create_iter(iterlist, arraylist, arr.values, res_shape,
-                                chunklist)
+                                chunklist, dim)

     def eval(self, frame, arr):
         from pypy.module.micronumpy.interp_numarray import Call1
@@ -298,14 +298,14 @@
         self.left._invent_numbering(cache, allnumbers)
         self.right._invent_numbering(cache, allnumbers)

-    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist):
+    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim):
         from pypy.module.micronumpy.interp_numarray import Call2
         assert isinstance(arr, Call2)
         self.left._create_iter(iterlist, arraylist, arr.left, res_shape,
-                               chunklist)
+                               chunklist, dim)
         self.right._create_iter(iterlist, arraylist, arr.right, res_shape,
-                                chunklist)
+                                chunklist, dim)

     def eval(self, frame, arr):
         from pypy.module.micronumpy.interp_numarray import Call2
@@ -319,8 +319,21 @@
                                            self.right.debug_repr())

 class ReduceSignature(Call2):
-    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist):
-        self.right._create_iter(iterlist, arraylist, arr, res_shape, chunklist)
+    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim):
+        if dim<0:
+            self.right._create_iter(iterlist, arraylist, arr, res_shape, chunklist, dim)
+        else:
+            from pypy.module.micronumpy.interp_numarray import ConcreteArray
+            concr = arr.get_concrete()
+            assert isinstance(concr, ConcreteArray)
+            storage = concr.storage
+            if self.iter_no >= len(iterlist):
+                _iter = axis_iter_from_arr(concr, dim)
+                from interp_iter import AxisIterator
+                assert isinstance(_iter, AxisIterator)
+                iterlist.append(_iter)
+            if self.array_no >= len(arraylist):
+                arraylist.append(storage)

     def _invent_numbering(self, cache, allnumbers):
         self.right._invent_numbering(cache, allnumbers)
diff --git a/pypy/module/micronumpy/test/test_iterators.py b/pypy/module/micronumpy/test/test_iterators.py
deleted file mode 100644
--- a/pypy/module/micronumpy/test/test_iterators.py
+++ /dev/null
@@ -1,61 +0,0 @@
-
-from pypy.module.micronumpy.interp_iter import axis_iter_from_arr
-from pypy.module.micronumpy.interp_numarray import W_NDimArray
-
-
-class MockDtype(object):
-    def malloc(self, size):
-        return None
-
-
-class TestAxisIteratorDirect(object):
-    def test_axis_iterator(self):
-        a = W_NDimArray(5 * 3, [5, 3], MockDtype(), 'C')
-        i = axis_iter_from_arr(a)
-        ret = []
-        while not i.done:
-            ret.append(i.offset)
-            i = i.next(1)
-        assert ret == [0, 3, 6, 9, 12]
-        a = W_NDimArray(7 * 5 * 3, [7, 5, 3], MockDtype(), 'C')
-        i = axis_iter_from_arr(a)
-        ret = []
-        while not i.done:
-            ret.append(i.offset)
-            i = i.next(1)
-        assert ret == [3 * v for v in range(7 * 5)]
-        i = axis_iter_from_arr(a, 2)
-        ret = []
-        while not i.done:
-            ret.append(i.offset)
-            i = i.next(1)
-        assert ret == [3 * v for v in range(7 * 5)]
-        i = axis_iter_from_arr(a, 1)
-        ret = []
-        while not i.done:
-            ret.append(i.offset)
-            i = i.next(1)
-        assert ret == [ 0,  1,  2, 15, 16, 17, 30, 31, 32, 45, 46, 47,
-                       60, 61, 62, 75, 76, 77, 90, 91, 92]
-
-    def test_axis_iterator_with_start(self):
-        a = W_NDimArray(7 * 5 * 3, [7, 5, 3], MockDtype(), 'C')
-        i = axis_iter_from_arr(a, start=[0, 0, 0])
-        ret = []
-        while not i.done:
-            ret.append(i.offset)
-            i = i.next(2)
-        assert ret == [3 * v for v in range(7 * 5)]
-        i = axis_iter_from_arr(a, start=[1, 1, 0])
-        ret = []
-        while not i.done:
-            ret.append(i.offset)
-            i = i.next(2)
-        assert ret == [3 * v + 18 for v in range(7 * 5)]
-        i = axis_iter_from_arr(a, 1, [2, 0, 2])
-        ret = []
-        while not i.done:
-            ret.append(i.offset)
-            i = i.next(2)
-        assert ret == [v + 32 for v in [ 0,  1,  2, 15, 16, 17, 30, 31, 32,
-                       45, 46, 47, 60, 61, 62, 75, 76, 77, 90, 91, 92]]
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -246,6 +246,10 @@
         c = b.copy()
         assert (c == b).all()

+        a = arange(15).reshape(5,3)
+        b = a.copy()
+        assert (b == a).all()
+
     def test_iterator_init(self):
         from numpypy import array
         a = array(range(5))

From noreply at buildbot.pypy.org  Fri Jan 6 12:24:00 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Fri, 6 Jan 2012 12:24:00 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: merge
Message-ID: <20120106112400.03CB182007@wyvern.cs.uni-duesseldorf.de>

Author: mattip
Branch: numpypy-axisops
Changeset: r51066:188fd8e68904
Date: 2012-01-06 13:02 +0200
http://bitbucket.org/pypy/pypy/changeset/188fd8e68904/

Log: merge

diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -764,7 +764,6 @@

     def _del_sources(self):
         self.values = None
-        pass

     def create_sig(self, res_shape):
         if self.forced_result is not None:

From noreply at buildbot.pypy.org  Fri Jan 6 12:24:01 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Fri, 6 Jan 2012 12:24:01 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: cleanup
Message-ID: <20120106112401.9718C82007@wyvern.cs.uni-duesseldorf.de>

Author: mattip
Branch: numpypy-axisops
Changeset: r51067:5e2f7e1b6648
Date: 2012-01-06 13:22 +0200
http://bitbucket.org/pypy/pypy/changeset/5e2f7e1b6648/

Log: cleanup

diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py
--- a/pypy/module/micronumpy/interp_iter.py
+++ b/pypy/module/micronumpy/interp_iter.py
@@ -124,7 +124,7 @@
         self.offset = arr_start
         self.dim = dim
         self.dim_order = []
-        if self.dim >=0:
+        if self.dim >= 0:
             self.dim_order.append(self.dim)
         for i in range(self.shapelen - 1, -1, -1):
             if i == self.dim:
@@ -132,7 +132,7 @@
             self.dim_order.append(i)
         self.strides = strides
         self.backstrides = backstrides
-        
+
     def done(self):
         return self._done
@@ -175,6 +175,7 @@
         self.indices = [0] * (len(arr.shape) - 1)
         self.done = False
         self.offset = arr.start
+
     def next(self):
         for i in range(len(self.arr.shape) - 2, -1, -1):
             if self.indices[i] < self.arr.shape[i] - 1:
@@ -186,4 +187,3 @@
                 self.offset -= self.arr.backstrides[i]
         else:
             self.done = True
-
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -567,7 +567,7 @@
     def descr_mean(self, space, w_dim=None):
         if space.is_w(w_dim, space.w_None):
             w_dim = space.wrap(-1)
-        return space.div(self.descr_sum_promote(space, w_dim), 
+        return space.div(self.descr_sum_promote(space, w_dim),
                          space.wrap(self.size))

     def descr_nonzero(self, space):
@@ -750,7 +750,7 @@
 class Reduce(VirtualArray):
     def __init__(self, binfunc, name, dim, res_dtype, values, identity=None):
-        shape = values.shape[0:dim] + values.shape[dim+1:len(values.shape)]
+        shape = values.shape[0:dim] + values.shape[dim + 1:len(values.shape)]
         VirtualArray.__init__(self, name, shape, res_dtype)
         self.values = values
         self.size = 1
@@ -789,12 +789,12 @@
         self.values = self.values.get_concrete()
         shapelen = len(result.shape)
         objlen = len(self.values.shape)
-        sig = self.find_sig(res_shape=result.shape,arr=self.values)
+        sig = self.find_sig(res_shape=result.shape, arr=self.values)
         ri = ArrayIterator(result.size)
         frame = sig.create_frame(self.values, dim=self.dim)
         value
= self.get_identity(sig, frame, shapelen) while not frame.done(): - #XXX add jit_merge_point ? + #XXX add jit_merge_point if frame.iterators[0].axis_done: value = self.get_identity(sig, frame, shapelen) ri = ri.next(shapelen) @@ -942,7 +942,7 @@ view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: - builder.append(ccomma +'\n' + indent + '...' + ncomma) + builder.append(ccomma + '\n' + indent + '...' + ncomma) i = self.shape[0] - 3 else: i += 1 diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -135,7 +135,7 @@ sig = find_sig(ReduceSignature(self.func, self.name, dtype, ScalarSignature(dtype), obj.create_sig(obj.shape)), obj) - frame = sig.create_frame(obj,dim=-1) + frame = sig.create_frame(obj, dim=-1) if self.identity is None: value = sig.eval(frame, obj).convert_to(dtype) frame.next(shapelen) @@ -408,7 +408,8 @@ identity = extra_kwargs.get("identity") if identity is not None: - identity = interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) + identity = \ + interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -320,8 +320,9 @@ class ReduceSignature(Call2): def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): - if dim<0: - self.right._create_iter(iterlist, arraylist, arr, res_shape, chunklist, dim) + if dim < 0: + self.right._create_iter(iterlist, arraylist, arr, res_shape, + chunklist, dim) else: from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() @@ -330,7 +331,7 @@ if self.iter_no >= len(iterlist): _iter = 
axis_iter_from_arr(concr, dim) from interp_iter import AxisIterator - assert isinstance(_iter, AxisIterator) + assert isinstance(_iter, AxisIterator) iterlist.append(_iter) if self.array_no >= len(arraylist): arraylist.append(storage) From noreply at buildbot.pypy.org Fri Jan 6 12:35:24 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 6 Jan 2012 12:35:24 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: translation error Message-ID: <20120106113524.9955882007@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51068:83403a06fc8c Date: 2012-01-06 13:34 +0200 http://bitbucket.org/pypy/pypy/changeset/83403a06fc8c/ Log: translation error diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -110,7 +110,7 @@ return AxisIterator(arr.start, arr.strides, arr.backstrides, arr.shape, dim) -class AxisIterator(object): +class AxisIterator(BaseIterator): """ Accept an addition argument dim Redorder the dimensions to iterate over dim most often. Set a flag at the end of each run over dim. 
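The deleted `test_iterators.py` above pins down the offset sequences that `axis_iter_from_arr` is expected to produce for a C-contiguous array. As a rough pure-Python sketch of that behaviour (not the RPython implementation; `axis_base_offsets` is a hypothetical helper name), the iterator visits the start offset of every 1-D lane along the chosen axis, in C order over the remaining axes:

```python
def axis_base_offsets(shape, dim):
    """Return the start offset of every 1-D lane along axis `dim`
    of a C-contiguous array, walking the other axes in C order."""
    # C-order strides: last axis is contiguous
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]

    # enumerate every index tuple with index[dim] held at 0
    def rec(axis, offset):
        if axis == len(shape):
            yield offset
        elif axis == dim:
            yield from rec(axis + 1, offset)
        else:
            for i in range(shape[axis]):
                yield from rec(axis + 1, offset + i * strides[axis])

    return list(rec(0, 0))

# the expectations recorded in the deleted test file:
assert axis_base_offsets([5, 3], 1) == [0, 3, 6, 9, 12]
assert axis_base_offsets([7, 5, 3], 2) == [3 * v for v in range(7 * 5)]
assert axis_base_offsets([7, 5, 3], 1) == [0, 1, 2, 15, 16, 17, 30, 31, 32,
                                           45, 46, 47, 60, 61, 62,
                                           75, 76, 77, 90, 91, 92]
```

For a reduction along `dim`, each such offset is the base of one output element's lane, which is why `ReduceSignature._create_iter` only builds an `AxisIterator` when `dim >= 0`.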
From noreply at buildbot.pypy.org Fri Jan 6 12:43:18 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 6 Jan 2012 12:43:18 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: I'm getting sick of this slowly Message-ID: <20120106114318.CBAAE82007@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51069:e1915f5fd176 Date: 2012-01-06 13:42 +0200 http://bitbucket.org/pypy/pypy/changeset/e1915f5fd176/ Log: I'm getting sick of this slowly diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -93,6 +93,7 @@ else: op = ResOperation(no, [None], None, descr=None) op.setarg(NonConstant(0), AbstractValue()) # list is mutated + op.setarg(NonConstant(0), None) # setarg arg can be None return space.wrap(WrappedOp(op, NonConstant(13), NonConstant('repr'))) class WrappedOp(Wrappable): From noreply at buildbot.pypy.org Fri Jan 6 13:15:12 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 6 Jan 2012 13:15:12 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: screw hacks, provide annotation by hand. I wonder if this is enough (very likely Message-ID: <20120106121512.594A682007@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51070:a469f09278ce Date: 2012-01-06 14:14 +0200 http://bitbucket.org/pypy/pypy/changeset/a469f09278ce/ Log: screw hacks, provide annotation by hand. 
I wonder if this is enough (very likely not) diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -7,7 +7,7 @@ from pypy.rpython.lltypesystem import lltype from pypy.rpython.annlowlevel import cast_base_ptr_to_instance from pypy.rpython.lltypesystem.rclass import OBJECT -from pypy.jit.metainterp.resoperation import rop, ResOperation +from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant class Cache(object): @@ -79,23 +79,6 @@ def wrap_oplist(space, logops, operations, ops_offset): return [WrappedOp(op, ops_offset[op], logops.repr_of_resop(op)) for op in operations] - at unwrap_spec(no=int) -def new_resop(space, w_tp, no): - from pypy.jit.metainterp.history import AbstractValue, AbstractDescr - # this is mostly an annotation hint - if NonConstant(True): - raise OperationError(space.w_ValueError, - space.wrap("for annotation only")) - if no: - op = ResOperation(no, [AbstractValue()], AbstractValue(), - descr=AbstractDescr()) - op.setdescr(None) - else: - op = ResOperation(no, [None], None, descr=None) - op.setarg(NonConstant(0), AbstractValue()) # list is mutated - op.setarg(NonConstant(0), None) # setarg arg can be None - return space.wrap(WrappedOp(op, NonConstant(13), NonConstant('repr'))) - class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely """ @@ -115,10 +98,26 @@ WrappedOp.typedef = TypeDef( 'ResOperation', - __new__ = interp2app(new_resop), __doc__ = WrappedOp.__doc__, __repr__ = interp2app(WrappedOp.descr_repr), name = GetSetProperty(WrappedOp.descr_name), num = GetSetProperty(WrappedOp.descr_num), ) WrappedOp.acceptable_as_base_class = False + +from pypy.rpython.extregistry import ExtRegistryEntry + +class WrappedOpRegistry(ExtRegistryEntry): + _type_ = WrappedOp + + def compute_annotation(self): + from pypy.annotation import model as annmodel + 
clsdef = self.bookkeeper.getuniqueclassdef(WrappedOp) + if not clsdef.attrs: + resopclsdef = self.bookkeeper.getuniqueclassdef(AbstractResOp) + attrs = {'offset': annmodel.SomeInteger(), + 'repr_of_resop': annmodel.SomeString(can_be_None=False), + 'op': annmodel.SomeInstance(resopclsdef)} + for attrname, s_v in attrs.iteritems(): + clsdef.generalize_attr(attrname, s_v) + return annmodel.SomeInstance(clsdef, can_be_None=True) From noreply at buildbot.pypy.org Fri Jan 6 13:16:23 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 6 Jan 2012 13:16:23 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: better than nothing Message-ID: <20120106121623.E226282007@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51071:5b4f622c57aa Date: 2012-01-06 14:15 +0200 http://bitbucket.org/pypy/pypy/changeset/5b4f622c57aa/ Log: better than nothing diff --git a/pypy/module/pypyjit/test/test_ztranslation.py b/pypy/module/pypyjit/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/pypyjit/test/test_ztranslation.py @@ -0,0 +1,5 @@ + +from pypy.objspace.fake.checkmodule import checkmodule + +def test_pypyjit_translates(): + checkmodule('pypyjit') From noreply at buildbot.pypy.org Fri Jan 6 17:23:19 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 6 Jan 2012 17:23:19 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: shuffle stuff a bit Message-ID: <20120106162319.E93C682007@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51072:d498e3a9f2e2 Date: 2012-01-06 18:22 +0200 http://bitbucket.org/pypy/pypy/changeset/d498e3a9f2e2/ Log: shuffle stuff a bit diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -725,13 +725,6 @@ return hop.genop('jit_marker', vlist, resulttype=lltype.Void) -def record_known_class(value, cls): - """ - Assure the JIT that value is an instance of cls. 
This is not a precise - class check, unlike a guard_class. - """ - assert isinstance(value, cls) - class JitPortal(object): """ This is the main connector between the JIT and the interpreter. Several methods on portal will be invoked at various stages of JIT running @@ -767,6 +760,13 @@ """ Returns various statistics """ +def record_known_class(value, cls): + """ + Assure the JIT that value is an instance of cls. This is not a precise + class check, unlike a guard_class. + """ + assert isinstance(value, cls) + class Entry(ExtRegistryEntry): _about_ = record_known_class From noreply at buildbot.pypy.org Fri Jan 6 18:15:25 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Fri, 6 Jan 2012 18:15:25 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Cast value in convert_to_imm, which fixes test_new_with_vtable. Message-ID: <20120106171525.112A082CAB@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r51073:5227411080a8 Date: 2012-01-06 12:15 -0500 http://bitbucket.org/pypy/pypy/changeset/5227411080a8/ Log: Cast value in convert_to_imm, which fixes test_new_with_vtable. diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -89,7 +89,8 @@ def convert_to_imm(self, c): if isinstance(c, ConstInt): - return locations.ImmLocation(c.value) + val = rffi.cast(lltype.Signed, c.value) + return locations.ImmLocation(val) else: assert isinstance(c, ConstPtr) return locations.ImmLocation(rffi.cast(lltype.Signed, c.value)) From noreply at buildbot.pypy.org Fri Jan 6 18:59:47 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 6 Jan 2012 18:59:47 +0100 (CET) Subject: [pypy-commit] pypy concurrent-marksweep: Start implementing the new section. 
Message-ID: <20120106175947.B239982CAB@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: concurrent-marksweep Changeset: r51074:719f30faa809 Date: 2012-01-02 22:42 +0100 http://bitbucket.org/pypy/pypy/changeset/719f30faa809/ Log: Start implementing the new section. diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py --- a/pypy/rpython/memory/gc/concurrentgen.py +++ b/pypy/rpython/memory/gc/concurrentgen.py @@ -39,7 +39,7 @@ # let us know if the 'tid' is valid or is just a word-aligned address): MARK_BYTE_1 = 0x6D # 'm', 109 MARK_BYTE_2 = 0x4B # 'K', 75 -MARK_BYTE_3 = 0x23 # '#', 35 +MARK_BYTE_3 = 0x25 # '%', 37 MARK_BYTE_STATIC = 0x53 # 'S', 83 # Next lower byte: a combination of flags. FL_WITHHASH = 0x0100 @@ -75,24 +75,17 @@ # The default size of the nursery: use 6 MB by default. # Environment variable: PYPY_GC_NURSERY "nursery_size": 6*1024*1024, - - # Trigger another major collection when 'N+(F-1)*P' bytes survived - # minor collections, where N = nursery_size, P = bytes surviving - # the previous major collection, and F is the fill_factor here. - # Environment variable: PYPY_GC_MAJOR_COLLECT - "fill_factor": 1.75, } def __init__(self, config, read_from_env=False, nursery_size=32*WORD, - fill_factor=2.0, + fill_factor=2.0, # xxx kill **kwds): GCBase.__init__(self, config, **kwds) self.read_from_env = read_from_env self.nursery_size = nursery_size - self.fill_factor = fill_factor # self.main_thread_ident = ll_thread.get_ident() # non-transl. debug only # @@ -110,12 +103,12 @@ # that was not scanned yet. self._init_writebarrier_logic() # - def trigger_collection_now(): + def _nursery_full(additional_size): # a hack to reduce the code size in _account_for_nursery(): - # avoids both 'self' and the default argument value to be passed - self.trigger_next_collection() - trigger_collection_now._dont_inline_ = True - self.trigger_collection_now = trigger_collection_now + # avoids the 'self' argument. 
+ self.nursery_full(additional_size) + _nursery_full._dont_inline_ = True + self._nursery_full = _nursery_full def _initialize(self): # Initialize the GC. In normal translated program, this function @@ -163,25 +156,25 @@ # self.collector.setup() # - self.set_nursery_size(self.nursery_size) + self.set_minimal_nursery_size(self.nursery_size) if self.read_from_env: # newsize = env.read_from_env('PYPY_GC_NURSERY') if newsize > 0: - self.set_nursery_size(newsize) - # - fillfact = env.read_float_from_env('PYPY_GC_MAJOR_COLLECT') - if fillfact > 1.0: - self.fill_factor = fillfact + self.set_minimal_nursery_size(newsize) # - debug_print("nursery size:", self.nursery_size) - debug_print("fill factor: ", self.fill_factor) + debug_print("minimal nursery size:", self.minimal_nursery_size) debug_stop("gc-startup") - def set_nursery_size(self, newsize): - self.nursery_size = newsize - self.nursery_size_still_available = newsize - self.size_still_available_before_major = newsize + def set_minimal_nursery_size(self, newsize): + # See concurrentgen.txt. At the start of the process, 'newsize' is + # a quarter of the total memory size. + newsize = min(newsize, (sys.maxint - 65535) // 4) + self.minimal_nursery_size = r_uint(newsize) + self.total_memory_size = r_uint(4 * newsize) # total size + self.nursery_size = r_uint(newsize) # size of the '->new...' box + self.old_objects_size = r_uint(0) # size of the 'old objs' box + self.nursery_size_still_available = intmask(self.nursery_size) def _teardown(self): "Stop the collector thread after tests have run." 
@@ -208,7 +201,9 @@ hdr.tid = self.combine(typeid, MARK_BYTE_STATIC, 0) def malloc_fixedsize_clear(self, typeid, size, - needs_finalizer=False, contains_weakptr=False): + needs_finalizer=False, + finalizer_is_light=False, + contains_weakptr=False): # # Case of finalizers (test constant-folded) if needs_finalizer: @@ -267,7 +262,7 @@ def _account_for_nursery(self, additional_size): self.nursery_size_still_available -= additional_size if self.nursery_size_still_available < 0: - self.trigger_collection_now() + self._nursery_full(additional_size) _account_for_nursery._always_inline_ = True # ---------- @@ -379,15 +374,63 @@ self.get_mark(obj) self.extra_objects_to_mark.append(obj) + # ---------- - def wait_for_the_end_of_collection(self): + def nursery_full(self, additional_size): + # See concurrentgen.txt. + # + assert self.nursery_size_still_available < 0 + # + # Handle big allocations specially + if additional_size > intmask(self.total_memory_size >> 4): + xxxxxxxxxxxx + self.handle_big_allocation(additional_size) + return + # + if self.collector.running <= 0: + # + # The previous collection finished. If necessary, synchronize + # the main thread with it. + self.sync_end_of_collection() + # + # Expand the nursery if we can, up to 25% of total_memory_size. + # In some cases, the limiting factor is that the nursery size + # plus the old objects size must not be larger than + # total_memory_size. + expand_to = self.total_memory_size >> 2 + expand_to = min(expand_to, self.total_memory_size - + self.old_objects_size) + self.nursery_size_still_available += intmask(expand_to - + self.nursery_size) + self.nursery_size = expand_to + # + # If 'nursery_size_still_available' has been increased to a + # nonnegative number, then we are done: we can just continue + # filling the nursery. + if self.nursery_size_still_available >= 0: + return + # + # Else, we trigger the next minor collection now. + self._start_minor_collection() + # + # Now there is no new object left. 
Reset the nursery size + # to be 3/4*total_memory_size - old_objects_size, and no + # more than 25% of total_memory_size. + newsize = (self.total_memory_size >> 2) * 3 - self.old_objects_size + newsize = min(newsize, self.total_memory_size >> 2) + self.nursery_size = newsize + self.nursery_size_still_available = newsize + return + + xxx + + + def sync_end_of_collection(self): """In the mutator thread: wait for the minor collection currently - running (if any) to finish.""" + running (if any) to finish, and synchronize the two threads.""" if self.collector.running != 0: debug_start("gc-stop") self._stop_collection() - debug_print("size_still_available_before_major =", - self.size_still_available_before_major) debug_stop("gc-stop") # # We must *not* run execute_finalizers_ll() here, because it @@ -397,6 +440,7 @@ ll_assert(self.collector.running == 0, "collector thread not paused?") + def _stop_collection(self): self.acquire(self.finished_lock) self.collector.running = 0 @@ -435,6 +479,7 @@ def collect(self, gen=4): + return """ gen=0: Trigger a minor collection if none is running. Never blocks, except if it happens to start a major collection. diff --git a/pypy/rpython/memory/gc/concurrentgen.txt b/pypy/rpython/memory/gc/concurrentgen.txt --- a/pypy/rpython/memory/gc/concurrentgen.txt +++ b/pypy/rpython/memory/gc/concurrentgen.txt @@ -5,173 +5,6 @@ Goal: reduce the total real time by moving a part of the GC to its own thread that can run in parallel with the main execution thread. -On current modern hardware with at least two cores, the two cores can -read the same area of memory concurrently. If one of the cores writes -to this area, then I believe that the core doing the writing works at -full speed, whereas the core doing the reading suffers from waiting for -the data to move to it; but it's still ok because the data usually moves -in a cache-to-cache bus, not via the main memory. 
Also, if an area of -memory is written to by one core, and then read and written to by the -other core only, then performance is fine. The bad case is the one in -which both cores continously read and write the same area of memory. - -So, assuming that the main thread reads and writes to random objects all -the time, it means that the GC thread should *only read* from the -objects. Conversely, the data structures built by the GC thread should -only be *read* from the main thread. In particular: when the GC thread -does marking, it should use off-objects bits; and sweeping should be -done by adding free objects to lists that are not chained lists. In -this way the GC thread never writes to the object's memory. Similarly, -for the same reason, the GC thread should not reset areas of memory to -zero in the background. - -This goal is not reached so far: both threads read and write the object -mark byte; there are no off-objects bits. - - -************************************************************ - Minor collection cycles of the "concurrentgen" collector -************************************************************ - -Objects mark byte: - - cym: young objs (and all flagged objs) - cam: aging objs - com: old objs - 'S': static prebuilt objs with no heap pointer - -cym = current_young_marker -cam = current_aging_marker -com = current_old_marker - -The write barrier activates when writing into an object whose -mark byte is different from 'cym'. - - ------------------------------------------------------------- - -Step 1. Only the mutator runs. - - old obj flagged obj old obj - | - | - v - young obj... - -Write barrier: change "old obj" to "flagged obj" - (if mark != cym: - mark = cym (used to be com or 'S') - record the object in the "flagged" list) - - note that we consider that flagged old objs are again young objects - ------------------------------------------------------------- - -Step 2. Preparation for running the collector. (Still single-threaded.) 
- - - young objs -> aging objs - (exchange the values of 'cam' and 'cym'. - there was no 'cam' object, so now there is no 'cym' object) - - - collect roots; add roots and flagged objs to the "gray objs" list - - - unflag objs (i.e. empty the "flagged" list) - ------------------------------------------------------------- - -Step 3. Parallel execution of the collector, mark phase - - old obj old obj old obj - - aging obj aging obj - - new young obj... - - -Collector thread: - - for each gray obj: - skip obj if not an aging obj (i.e. if mark != cam: continue) - for each obj found by tracing: - add to gray objs (if not an aging obj, will be skipped later) - gray obj -> black obj (i.e. mark = com) - -Write barrier: - - - perform as a "deletion barrier", detecting changes done to aging objs - (i.e. if mark == cam, - mark = com - trace and add to gray objs) - - also flag old-or-aging objs that point to new young objs - (if mark != cym: - mark = cym (used to be com or 'S') - record the object in the "flagged" list) - -Threading issues: - - - it's possible that both threads will trace the same object, if we're - unlucky, but it does not have buggy effects - - the "mark = com" in the collector thread can conflict with the - "mark = cym" in the mutator write barrier, but again, it should not - have buggy effects beyond occasionally triggering the write barrier - twice on the same object, adding it twice in "flagged" (and never more) - - it is essential to have "mark = com" _after_ tracing in the collector - thread; otherwise, the write barrier in the mutator thread would be - ignored in case it occurs between the two, and then the tracing done - by the collector thread doesn't see the original values any more. - - the detection of "we are done" in the collector thread needs to - account for the write barrier currently tracing and adding more - objects to "gray objs". - ------------------------------------------------------------- - -Step 4. 
Parallel execution of the collector, sweep phase - - for obj in previous nursery: - if obj is "black": (i.e. if mark != cam) - make the obj old ( nothing to do here, mark already ok) - else: - return the object to the available list - after this there are no more aging objects - -Write barrier: - - - flag old objs that point to new young objs - (should not see any 'cam' object any more here) - - - -************************************************************ - MAJOR collection cycles of the "concurrentgen" collector -************************************************************ - -Works mostly like a minor collection cycle. The only difference -is in step 2, which is replaced with: - - -Step 2+. Preparation for running a major collection. (Still single-threaded.) - - - force a minor collection's marking step to occur sequentially - (steps 2 and 3), to get rid of 'cym' objects. Objects are left - either 'cam' (non-marked) or 'com' (marked). - - - empty the "flagged" list - - - collect roots; add roots to the "gray objs" list - - - com <-> cam - (exchange the values of 'com' and 'cam'. - there are no 'cym' object right now. - the newly 'com' objects are the ones marked unreachable above.) - - -Major collections only worry about old objects. To avoid serializing -the complete major collection, we serialize the minor collection's -marking step that occurs first; the goal is to be sure that all objects -are in the 'com' state. We can minimize the non-parallelized delay -introduced by this step by doing the major collection just after the -previous minor collection finished, when the quantity of new young -objects should still be small. - ************************************************************ @@ -181,7 +14,13 @@ The objects are never physically moving with this GC; in the pictures below, they "move" only in the sense that their age changes. 
-Allocate new objects until 25% of the total RAM is reached: +Objects have 4 possible ages: "new" when they are newly allocated; +"aging" when they are in the process of being marked by the GC thread; +"old" when they survived a minor collection; and "static" is used to +mark the static prebuilt GC objects, at least until they grow a pointer +to a dynamic GC object. + +We allocate new objects until 25% of the total RAM is reached: 25% 25% 50% +-----------+-----------+-----------------------+ @@ -298,3 +137,176 @@ Additionally we fix an absolute minimum (at least 6 MB), to avoid doing a large number of tiny minor collections, ending up spending all of our time in Step 2 scanning the stack of the process. + + + +************************************************************ + Notes about running two threads +************************************************************ + +On current modern hardware with at least two cores, the two cores can +read the same area of memory concurrently. If one of the cores writes +to this area, then I believe that the core doing the writing works at +full speed, whereas the core doing the reading suffers from waiting for +the data to move to it; but it's still ok because the data usually moves +in a cache-to-cache bus, not via the main memory. Also, if an area of +memory is written to by one core, and then read and written to by the +other core only, then performance is fine. The bad case is the one in +which both cores continously read and write the same area of memory. + +So, assuming that the main thread reads and writes to random objects all +the time, it means that the GC thread should *only read* from the +objects. Conversely, the data structures built by the GC thread should +only be *read* from the main thread. In particular: when the GC thread +does marking, it should use off-objects bits; and sweeping should be +done by adding free objects to lists that are not chained lists. 
In this way the GC thread never writes to the object's memory. Similarly,
for the same reason, the GC thread should not reset areas of memory to
zero in the background.

This goal is not reached so far: both threads read and write the object
mark byte; there are no off-object bits.


************************************************************
  Minor collection cycles of the "concurrentgen" collector
************************************************************

Object mark byte:

   cym: young objs (and all flagged objs)
   cam: aging objs
   com: old objs
   'S': static prebuilt objs with no heap pointer

cym = current_young_marker
cam = current_aging_marker
com = current_old_marker

The write barrier activates when writing into an object whose
mark byte is different from 'cym'.


------------------------------------------------------------

Step 1.  Only the mutator runs.

   old obj     flagged obj     old obj
                    |
                    |
                    v
               young obj...

Write barrier: change "old obj" to "flagged obj"
    (if mark != cym:
         mark = cym        (used to be com or 'S')
         record the object in the "flagged" list)
    - note that we consider that flagged old objs are again young objects

------------------------------------------------------------

Step 2.  Preparation for running the collector.  (Still single-threaded.)

   - young objs -> aging objs
         (exchange the values of 'cam' and 'cym'.
          there was no 'cam' object, so now there is no 'cym' object)

   - collect roots; add roots and flagged objs to the "gray objs" list

   - unflag objs (i.e. empty the "flagged" list)

------------------------------------------------------------

Step 3.  Parallel execution of the collector, mark phase

   old obj     old obj     old obj

      aging obj      aging obj

   new young obj...


Collector thread:

    for each gray obj:
        skip obj if not an aging obj (i.e. if mark != cam: continue)
        for each obj found by tracing:
            add to gray objs (if not an aging obj, it will be skipped later)
        gray obj -> black obj (i.e. mark = com)

Write barrier:

    - perform as a "deletion barrier", detecting changes done to aging objs
        (i.e. if mark == cam,
                  mark = com
                  trace and add to gray objs)
    - also flag old-or-aging objs that point to new young objs
        (if mark != cym:
             mark = cym        (used to be com or 'S')
             record the object in the "flagged" list)

Threading issues:

    - it's possible that both threads will trace the same object, if we're
      unlucky, but it does not have buggy effects
    - the "mark = com" in the collector thread can conflict with the
      "mark = cym" in the mutator write barrier, but again, it should not
      have buggy effects beyond occasionally triggering the write barrier
      twice on the same object, adding it twice to "flagged" (and never more)
    - it is essential to have "mark = com" _after_ tracing in the collector
      thread; otherwise, the write barrier in the mutator thread would be
      ignored if it occurs between the two, and then the tracing done
      by the collector thread would not see the original values any more
    - the detection of "we are done" in the collector thread needs to
      account for the write barrier currently tracing and adding more
      objects to "gray objs"

------------------------------------------------------------

Step 4.  Parallel execution of the collector, sweep phase

    for obj in previous nursery:
        if obj is "black":     (i.e. if mark != cam)
            make the obj old (nothing to do here, the mark is already ok)
        else:
            return the object to the available list
    after this there are no more aging objects

Write barrier:

    - flag old objs that point to new young objs
      (should not see any 'cam' object any more here)



************************************************************
  MAJOR collection cycles of the "concurrentgen" collector
************************************************************

Works mostly like a minor collection cycle.  The only difference
is in step 2, which is replaced with:


Step 2+.  Preparation for running a major collection.  (Still single-threaded.)

   - force a minor collection's marking step to occur sequentially
     (steps 2 and 3), to get rid of 'cym' objects.  Objects are left
     either 'cam' (non-marked) or 'com' (marked).

   - empty the "flagged" list

   - collect roots; add roots to the "gray objs" list

   - com <-> cam
         (exchange the values of 'com' and 'cam'.
          there are no 'cym' objects right now.
          the newly 'com' objects are the ones marked unreachable above.)


Major collections only worry about old objects.  To avoid serializing
the complete major collection, we serialize the minor collection's
marking step that occurs first; the goal is to be sure that all objects
are in the 'com' state.  We can minimize the non-parallelized delay
introduced by this step by doing the major collection just after the
previous minor collection finished, when the quantity of new young
objects should still be small.
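As a rough illustration, the marker bookkeeping of Steps 1-4 above can be
modelled sequentially in plain Python.  This is only a sketch under stated
assumptions: the `Obj` class, the list-based nursery and the single-threaded
driver are inventions for illustration; the real collector stores the mark in
a byte of the object header and runs the mark and sweep phases concurrently
in a separate thread, which this model deliberately ignores.

```python
# Sequential toy model of the "concurrentgen" mark-byte protocol.
# Hypothetical names: Obj and MiniGC are not part of the real GC.

class Obj:
    def __init__(self, mark, children=()):
        self.mark = mark               # one of cym / cam / com / 'S'
        self.children = list(children)

class MiniGC:
    def __init__(self):
        # current_young_marker, current_aging_marker, current_old_marker
        self.cym, self.cam, self.com = 'y', 'a', 'o'
        self.flagged = []              # old objs written to since last collection

    def write_barrier(self, obj):
        # Step 1: fires on any store into an obj whose mark is not cym.
        if obj.mark != self.cym:
            obj.mark = self.cym        # used to be com (or 'S')
            self.flagged.append(obj)   # flagged objs count as young again

    def minor_collection(self, roots, nursery):
        # Step 2: young objs become aging simply by swapping the two markers.
        self.cym, self.cam = self.cam, self.cym
        gray = list(roots) + self.flagged
        self.flagged = []
        # Step 3 (mark): blacken reachable aging objs, tracing before marking.
        while gray:
            obj = gray.pop()
            if obj.mark != self.cam:
                continue               # not an aging obj: skip
            gray.extend(obj.children)  # trace first...
            obj.mark = self.com        # ...then gray obj -> black obj
        # Step 4 (sweep): non-black objs from the old nursery are freed.
        return [o for o in nursery if o.mark == self.com]
```

Running a minor collection on a tiny object graph shows the intended
behaviour: a young object reachable from a root comes out old, an
unreachable one is swept, and an old object caught by the write barrier
is re-traced exactly as the text requires.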
From noreply at buildbot.pypy.org Fri Jan 6 18:59:48 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 6 Jan 2012 18:59:48 +0100 (CET) Subject: [pypy-commit] pypy concurrent-marksweep: test_many_objects passes again :-) Message-ID: <20120106175948.E25B982CAC@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: concurrent-marksweep Changeset: r51075:4f6d92354eb2 Date: 2012-01-06 18:59 +0100 http://bitbucket.org/pypy/pypy/changeset/4f6d92354eb2/ Log: test_many_objects passes again :-) diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py --- a/pypy/rpython/memory/gc/concurrentgen.py +++ b/pypy/rpython/memory/gc/concurrentgen.py @@ -172,8 +172,8 @@ newsize = min(newsize, (sys.maxint - 65535) // 4) self.minimal_nursery_size = r_uint(newsize) self.total_memory_size = r_uint(4 * newsize) # total size - self.nursery_size = r_uint(newsize) # size of the '->new...' box - self.old_objects_size = r_uint(0) # size of the 'old objs' box + self.nursery_size = r_uint(newsize) # size of the '->new...' box + self.old_objects_size = r_uint(0) # approx size of 'old objs' box self.nursery_size_still_available = intmask(self.nursery_size) def _teardown(self): @@ -387,11 +387,11 @@ self.handle_big_allocation(additional_size) return # - if self.collector.running <= 0: - # - # The previous collection finished. If necessary, synchronize - # the main thread with it. - self.sync_end_of_collection() + waiting_for_major_collection = self.collector.major_collection_phase != 0 + # + if (self.collector.running == 0 or + self.stop_collection(wait=waiting_for_major_collection)): + # The previous collection finished. # # Expand the nursery if we can, up to 25% of total_memory_size. # In some cases, the limiting factor is that the nursery size @@ -413,16 +413,34 @@ # Else, we trigger the next minor collection now. self._start_minor_collection() # - # Now there is no new object left. 
Reset the nursery size - # to be 3/4*total_memory_size - old_objects_size, and no - # more than 25% of total_memory_size. - newsize = (self.total_memory_size >> 2) * 3 - self.old_objects_size - newsize = min(newsize, self.total_memory_size >> 2) - self.nursery_size = newsize - self.nursery_size_still_available = newsize - return + # Now there is no new object left. Reset the nursery size to + # be at most 25% of total_memory_size, and initially no more than + # 3/4*total_memory_size - old_objects_size. If that value is not + # positive, then we immediately go into major collection mode. + three_quarters = (self.total_memory_size >> 2) * 3 + if self.old_objects_size < three_quarters: + newsize = three_quarters - self.old_objects_size + newsize = min(newsize, self.total_memory_size >> 2) + self.nursery_size = newsize + self.nursery_size_still_available = intmask(newsize) + return - xxx + yyy + + else: + # The previous collection is not finished yet. + # At this point we want a full collection to occur. + debug_start("gc-major") + # + # We have to first wait for the previous minor collection to finish: + debug_start("gc-major-wait") + self.stop_collection(wait=True) + debug_stop("gc-major-wait") + # + # Start the major collection. 
+ self._start_major_collection() + # + debug_stop("gc-major") def sync_end_of_collection(self): @@ -441,8 +459,12 @@ "collector thread not paused?") - def _stop_collection(self): - self.acquire(self.finished_lock) + def stop_collection(self, wait): + if wait: + self.acquire(self.finished_lock) + else: + if not self.try_acquire(self.finished_lock): + return False self.collector.running = 0 #debug_print("collector.running = 0") # @@ -454,6 +476,8 @@ # if self.DEBUG: self.debug_check_lists() + # + return True def execute_finalizers_ll(self): @@ -576,7 +600,7 @@ self._start_minor_collection(major_collection_phase=1) # # Wait for it to finish - self._stop_collection() + self.stop_collection(wait=True) # # Assert that this list is still empty (cleared by the call to # _start_minor_collection) @@ -676,13 +700,16 @@ ll_thread.c_thread_acquirelock(lock, 1) else: assert ll_thread.get_ident() == self.main_thread_ident - while rffi.cast(lltype.Signed, - ll_thread.c_thread_acquirelock(lock, 0)) == 0: + while not self.try_acquire(lock): time.sleep(0.05) # ---------- EXCEPTION FROM THE COLLECTOR THREAD ---------- if hasattr(self.collector, '_exc_info'): self._reraise_from_collector_thread() + def try_acquire(self, lock): + res = ll_thread.c_thread_acquirelock(lock, 0) + return rffi.cast(lltype.Signed, res) != 0 + def release(self, lock): ll_thread.c_thread_releaselock(lock) @@ -793,6 +820,7 @@ # * collector.running == -1: Done. # The mutex_lock is acquired to go from 1 to 2, and from 2 to 3. self.running = 0 + self.major_collection_phase = 0 # # when the collection starts, we make all young objects aging and # move 'new_young_objects' into 'aging_objects' @@ -903,24 +931,22 @@ # Else release mutex_lock and try again. self.release(self.mutex_lock) # - # When we sweep during minor collections, we subtract the size of + # When we sweep during minor collections, we add the size of # the surviving now-old objects to the following field. 
Note # that the write barrier may make objects young again, without # decreasing the value here. During the following minor - # collection this variable will be decreased *again*. When the + # collection this variable will be increased *again*. When the # write barrier triggers on an aging object, it is random whether # its size ends up being accounted here or not --- but it will # be at the following minor collection, because the object is - # young again. So, careful about underflows. - sz = self.gc.size_still_available_before_major - if sz < 0 or r_uint(sz) < surviving_size: - sz = -1 # trigger major collect the next time + # young again. So, careful about overflows. + ll_assert(surviving_size <= self.gc.total_memory_size, + "surviving_size too large") + limit = self.gc.total_memory_size - surviving_size + if self.gc.old_objects_size <= limit: + self.gc.old_objects_size += surviving_size else: - # do the difference, which results in a non-negative number - # (in particular, surviving_size <= sys.maxint here) - sz -= intmask(surviving_size) - # - self.gc.size_still_available_before_major = sz + self.gc.old_objects_size = self.gc.total_memory_size # self.running = 2 #debug_print("collection_running = 2") diff --git a/pypy/rpython/memory/gc/concurrentgen.txt b/pypy/rpython/memory/gc/concurrentgen.txt --- a/pypy/rpython/memory/gc/concurrentgen.txt +++ b/pypy/rpython/memory/gc/concurrentgen.txt @@ -87,7 +87,7 @@ |it| free |< old objects | +--+---------+----------------------------------+ -Then we do "Step 2+" above, forcing another synchronous minor +Then we do "Step 2+" below, forcing another synchronous minor collection. 
(This is the only time during which the main thread has to wait for the collector to finish; hopefully it is a very minor issue, because it occurs only at the start of a major collection, and we wait From noreply at buildbot.pypy.org Fri Jan 6 19:01:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 6 Jan 2012 19:01:45 +0100 (CET) Subject: [pypy-commit] pypy separate-applevel-numpy: close the branch will be redone Message-ID: <20120106180145.D7CCA82CAB@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: separate-applevel-numpy Changeset: r51076:16ea77edcb5e Date: 2012-01-05 09:24 +0200 http://bitbucket.org/pypy/pypy/changeset/16ea77edcb5e/ Log: close the branch will be redone From noreply at buildbot.pypy.org Fri Jan 6 19:01:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 6 Jan 2012 19:01:47 +0100 (CET) Subject: [pypy-commit] pypy import-numpy: rename the interplevel module Message-ID: <20120106180147.0831882CAB@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: import-numpy Changeset: r51077:be18c0104736 Date: 2012-01-05 15:25 +0200 http://bitbucket.org/pypy/pypy/changeset/be18c0104736/ Log: rename the interplevel module diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -9,7 +9,7 @@ appleveldefs = {} class Module(MixedModule): - applevel_name = 'numpypy' + applevel_name = '_numpypy' submodules = { 'pypy': PyPyModule From noreply at buildbot.pypy.org Fri Jan 6 19:01:48 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 6 Jan 2012 19:01:48 +0100 (CET) Subject: [pypy-commit] pypy default: make coding guide a bit more up to date when it comes to RPython definition Message-ID: <20120106180148.2C83382CAB@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51078:b67e65d709e1 Date: 2012-01-06 20:00 +0200 http://bitbucket.org/pypy/pypy/changeset/b67e65d709e1/ Log: make 
coding guide a bit more up to date when it comes to RPython definition diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -175,15 +175,15 @@ RPython ================= -RPython Definition, not ------------------------ +RPython Definition +------------------ -The list and exact details of the "RPython" restrictions are a somewhat -evolving topic. In particular, we have no formal language definition -as we find it more practical to discuss and evolve the set of -restrictions while working on the whole program analysis. If you -have any questions about the restrictions below then please feel -free to mail us at pypy-dev at codespeak net. +RPython is a restricted subset of Python that is amenable to static analysis. +Although there are additions to the language and some things might surprisingly +work, this is a rough list of restrictions that should be considered. Note +that there are tons of special cased restrictions that you'll encounter +as you go. The exact definition is "RPython is everything that our translation +toolchain can accept" :) .. _`wrapped object`: coding-guide.html#wrapping-rules @@ -198,7 +198,7 @@ contain both a string and a int must be avoided. It is allowed to mix None (basically with the role of a null pointer) with many other types: `wrapped objects`, class instances, lists, dicts, strings, etc. - but *not* with int and floats. + but *not* with int, floats or tuples. **constants** @@ -209,9 +209,12 @@ have this restriction, so if you need mutable global state, store it in the attributes of some prebuilt singleton instance. + + **control structures** - all allowed but yield, ``for`` loops restricted to builtin types + all allowed, ``for`` loops restricted to builtin types, generators + very restricted. **range** @@ -226,7 +229,8 @@ **generators** - generators are not supported. + generators are supported, but their exact scope is very limited. 
you can't
+ merge two different generators in one control point.

 **exceptions**

@@ -245,22 +249,27 @@

 **strings**

-   a lot of, but not all string methods are supported. Indexes can be
+   a lot of, but not all string methods are supported, and those that are
+   supported do not necessarily accept all arguments. Indexes can be
    negative. In case they are not, then you get slightly more efficient
    code if the translator can prove that they are non-negative. When
    slicing a string it is necessary to prove that the slice start and
-   stop indexes are non-negative.
+   stop indexes are non-negative. There is no implicit str-to-unicode cast
+   anywhere.

 **tuples**

   no variable-length tuples; use them to store or return pairs or n-tuples of
-  values. Each combination of types for elements and length constitute a separate
-  and not mixable type.
+  values. Each combination of types for elements and length constitutes
+  a separate and not mixable type.

 **lists**

   lists are used as an allocated array. Lists are over-allocated, so list.append()
-  is reasonably fast. Negative or out-of-bound indexes are only allowed for the
+  is reasonably fast. However, if you use a fixed-size list, the code
+  is more efficient. The annotator can figure out most of the time that your
+  list is fixed-size, even when you use a list comprehension.
+  Negative or out-of-bound indexes are only allowed for the
   most common operations, as follows:

   - *indexing*:

@@ -287,16 +296,14 @@

 **dicts**

-  dicts with a unique key type only, provided it is hashable.
-  String keys have been the only allowed key types for a while, but this was generalized.
-  After some re-optimization,
-  the implementation could safely decide that all string dict keys should be interned.
+  dicts with a unique key type only, provided it is hashable. Custom
+  hash functions and custom equality will not be honored.
+  Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions.

 **list comprehensions**

-  may be used to create allocated, initialized arrays.
- After list over-allocation was introduced, there is no longer any restriction. + May be used to create allocated, initialized arrays. **functions** @@ -334,9 +341,7 @@ **objects** - in PyPy, wrapped objects are borrowed from the object space. Just like - in CPython, code that needs e.g. a dictionary can use a wrapped dict - and the object space operations on it. + Normal rules apply. This layout makes the number of types to take care about quite limited. From noreply at buildbot.pypy.org Fri Jan 6 20:50:55 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 6 Jan 2012 20:50:55 +0100 (CET) Subject: [pypy-commit] pypy import-numpy: fix tests Message-ID: <20120106195055.EA01382CAB@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: import-numpy Changeset: r51079:4fa4cf3c8f35 Date: 2012-01-06 21:50 +0200 http://bitbucket.org/pypy/pypy/changeset/4fa4cf3c8f35/ Log: fix tests diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpypy import dtype + from _numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpypy import dtype + from _numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,7 +36,7 @@ assert str(d) == "bool" def test_bool_array(self): - from numpypy import array, False_, True_ + from _numpypy import array, False_, True_ a = array([0, 1, 2, 2.5], dtype='?') assert 
a[0] is False_ @@ -44,7 +44,7 @@ assert a[i] is True_ def test_copy_array_with_dtype(self): - from numpypy import array, False_, True_, int64 + from _numpypy import array, False_, True_, int64 a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit @@ -58,35 +58,35 @@ assert b[0] is False_ def test_zeros_bool(self): - from numpypy import zeros, False_ + from _numpypy import zeros, False_ a = zeros(10, dtype=bool) for i in range(10): assert a[i] is False_ def test_ones_bool(self): - from numpypy import ones, True_ + from _numpypy import ones, True_ a = ones(10, dtype=bool) for i in range(10): assert a[i] is True_ def test_zeros_long(self): - from numpypy import zeros, int64 + from _numpypy import zeros, int64 a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 0 def test_ones_long(self): - from numpypy import ones, int64 + from _numpypy import ones, int64 a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 1 def test_overflow(self): - from numpypy import array, dtype + from _numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = 
array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,19 +156,19 @@ assert b[i] == i * 2 def test_shape(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpypy import dtype + from _numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): - import numpypy as numpy + import _numpypy as numpy raises(TypeError, numpy.generic, 0) raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) @@ -181,7 +181,7 @@ raises(TypeError, numpy.inexact, 0) def test_bool(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object] assert numpy.bool_(3) is numpy.True_ @@ -196,7 +196,7 @@ assert numpy.bool_("False") is numpy.True_ def test_int8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -218,7 +218,7 @@ assert numpy.int8('128') == -128 def test_uint8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -241,7 +241,7 @@ assert numpy.uint8('256') == 0 def test_int16(self): - import numpypy as numpy + import _numpypy as numpy x = numpy.int16(3) assert x == 3 @@ -251,7 +251,7 @@ assert numpy.int16('32768') == -32768 def test_uint16(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint16(65535) == 65535 
assert numpy.uint16(65536) == 0 @@ -260,7 +260,7 @@ def test_int32(self): import sys - import numpypy as numpy + import _numpypy as numpy x = numpy.int32(23) assert x == 23 @@ -275,7 +275,7 @@ def test_uint32(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint32(10) == 10 @@ -286,14 +286,14 @@ assert numpy.uint32('4294967296') == 0 def test_int_(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int_ is numpy.dtype(int).type assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] def test_int64(self): import sys - import numpypy as numpy + import _numpypy as numpy if sys.maxint == 2 ** 63 -1: assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] @@ -315,7 +315,7 @@ def test_uint64(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -330,7 +330,7 @@ raises(OverflowError, numpy.uint64(18446744073709551616)) def test_float32(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, numpy.generic, object] @@ -339,7 +339,7 @@ raises(ValueError, numpy.float32, '23.2df') def test_float64(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] @@ -352,7 +352,7 @@ raises(ValueError, numpy.float64, '23.2df') def test_subclass_type(self): - import numpypy as numpy + import _numpypy as numpy class X(numpy.float64): def m(self): diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -3,33 +3,33 @@ 
class AppTestNumPyModule(BaseNumpyAppTest): def test_mean(self): - from numpypy import array, mean + from _numpypy import array, mean assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 def test_average(self): - from numpypy import array, average + from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 def test_sum(self): - from numpypy import array, sum + from _numpypy import array, sum assert sum(range(10)) == 45 assert sum(array(range(10))) == 45 def test_min(self): - from numpypy import array, min + from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 def test_max(self): - from numpypy import array, max + from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 def test_constants(self): import math - from numpypy import inf, e, pi + from _numpypy import inf, e, pi assert type(inf) is float assert inf == float("inf") assert e == math.e diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -161,7 +161,7 @@ class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): - from numpypy import ndarray, array, dtype + from _numpypy import ndarray, array, dtype assert type(ndarray) is type assert type(array) is not type @@ -176,12 +176,12 @@ assert a.dtype is dtype(int) def test_type(self): - from numpypy import array + from _numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) def test_ndim(self): - from numpypy import array + from _numpypy import array x = array(0.2) assert x.ndim == 0 x = array([1, 2]) @@ -190,12 +190,12 @@ assert x.ndim == 2 x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) assert x.ndim == 3 - # numpy actually raises an AttributeError, but numpypy raises an + # numpy actually raises an AttributeError, but _numpypy raises an # 
TypeError raises(TypeError, 'x.ndim = 3') def test_init(self): - from numpypy import zeros + from _numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. assert a[10] == 0.0 @@ -204,7 +204,7 @@ assert a[13] == 5.3 def test_size(self): - from numpypy import array + from _numpypy import array assert array(3).size == 1 a = array([1, 2, 3]) assert a.size == 3 @@ -215,13 +215,13 @@ Test that empty() works. """ - from numpypy import empty + from _numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpypy import ones + from _numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -230,7 +230,7 @@ assert a[2] == 4 def test_copy(self): - from numpypy import arange, array + from _numpypy import arange, array a = arange(5) b = a.copy() for i in xrange(5): @@ -247,12 +247,12 @@ assert (c == b).all() def test_iterator_init(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a[3] == 3 def test_getitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -261,7 +261,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -271,7 +271,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -279,7 +279,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -290,7 +290,7 @@ assert a[i] == i def test_setslice_array(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -301,7 +301,7 @@ assert b[1] == 0. 
def test_setslice_of_slice_array(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -320,7 +320,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = [0., 1.] a[1:4:2] = b @@ -328,14 +328,14 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. def test_scalar(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(3) raises(IndexError, "a[0]") raises(IndexError, "a[0] = 5") @@ -344,13 +344,13 @@ assert a.dtype is dtype(int) def test_len(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -359,7 +359,7 @@ assert c.shape == (3,) def test_set_shape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array([]) a.shape = [] a = array(range(12)) @@ -379,7 +379,7 @@ a.shape = (1,) def test_reshape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(12)) exc = raises(ValueError, "b = a.reshape((3, 10))") assert str(exc.value) == "total size of new array must be unchanged" @@ -392,7 +392,7 @@ a.shape = (12, 2) def test_slice_reshape(self): - from numpypy import zeros, arange + from _numpypy import zeros, arange a = zeros((4, 2, 3)) b = a[::2, :, :] b.shape = (2, 6) @@ -428,13 +428,13 @@ raises(ValueError, arange(10).reshape, (5, -1, -1)) def test_reshape_varargs(self): - from numpypy import arange + from _numpypy import arange z = arange(96).reshape(12, -1) y = z.reshape(4, 3, 8) assert y.shape == (4, 3, 8) def test_add(self): - from numpypy import array + 
from _numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -447,7 +447,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([i for i in reversed(range(5))]) c = a + b @@ -455,20 +455,20 @@ assert c[i] == 4 def test_add_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpypy import array + from _numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpypy import array, ndarray + from _numpypy import array, ndarray a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -477,14 +477,14 @@ assert c[i] == 4 def test_subtract(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -492,34 +492,34 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_scalar_subtract(self): - from numpypy import int32 + from _numpypy import int32 assert int32(2) - 1 == 1 assert 1 - int32(2) == -1 def test_mul(self): - import numpypy + import _numpypy - a = numpypy.array(range(5)) + a = _numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = numpypy.array(range(5), dtype=bool) + a = _numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpypy.dtype(bool) - assert b[0] is numpypy.False_ + assert b.dtype is _numpypy.dtype(bool) + assert b[0] is _numpypy.False_ for i in range(1, 5): - assert b[i] is numpypy.True_ + assert b[i] is _numpypy.True_ def test_mul_constant(self): - from numpypy import 
array + from _numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -527,7 +527,7 @@ def test_div(self): from math import isnan - from numpypy import array, dtype, inf + from _numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -559,7 +559,7 @@ assert c[2] == -inf def test_div_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -567,14 +567,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -584,7 +584,7 @@ assert (a ** 2 == a * a).all() def test_pow_other(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -592,14 +592,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) b = a % a for i in range(5): @@ -612,7 +612,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -620,14 +620,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = +a for i in range(5): @@ -638,7 +638,7 @@ assert a[i] == i def test_neg(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = -a for i 
in range(5): @@ -649,7 +649,7 @@ assert a[i] == -i def test_abs(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = abs(a) for i in range(5): @@ -660,7 +660,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -674,7 +674,7 @@ assert c[1] == 4 def test_getslice(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -688,7 +688,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpypy import array + from _numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -696,7 +696,7 @@ assert s[i] == a[2 * i + 1] def test_slice_update(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -706,7 +706,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:2] b = array([10, 11]) @@ -720,13 +720,13 @@ assert d[1] == 12 def test_mean(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 def test_sum(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.sum() == 10.0 assert a[:4].sum() == 6.0 @@ -735,8 +735,8 @@ assert a.sum() == 5 def test_identity(self): - from numpypy import identity, array - from numpypy import int32, float64, dtype + from _numpypy import identity, array + from _numpypy import int32, float64, dtype a = identity(0) assert len(a) == 0 assert a.dtype == dtype('float64') @@ -755,32 +755,32 @@ assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all() def test_prod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpypy import 
array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a + a).max() == 11.4 def test_min(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) r = a.argmax() assert r == 2 @@ -801,14 +801,14 @@ assert a.argmax() == 2 def test_argmin(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -817,7 +817,7 @@ assert b.all() == True def test_any(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -826,7 +826,7 @@ assert c.any() == False def test_dot(self): - from numpypy import array, dot + from _numpypy import array, dot a = array(range(5)) assert a.dot(a) == 30.0 @@ -836,14 +836,14 @@ assert (dot(5, [1, 2, 3]) == [5, 10, 15]).all() def test_dot_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpypy import array, dtype, float64, int8, bool_ + from _numpypy import array, dtype, float64, int8, bool_ assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -860,7 +860,7 @@ def test_comparison(self): import operator - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5)) b = array(range(5), 
float) @@ -879,7 +879,7 @@ assert c[i] == func(b[i], 3) def test_nonzero(self): - from numpypy import array + from _numpypy import array a = array([1, 2]) raises(ValueError, bool, a) raises(ValueError, bool, a == a) @@ -889,7 +889,7 @@ assert not bool(array([0])) def test_slice_assignment(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[::-1] = a assert (a == [0, 1, 2, 1, 0]).all() @@ -899,8 +899,8 @@ assert (a == [8, 6, 4, 2, 0]).all() def test_debug_repr(self): - from numpypy import zeros, sin - from numpypy.pypy import debug_repr + from _numpypy import zeros, sin + from _numpypy.pypy import debug_repr a = zeros(1) assert debug_repr(a) == 'Array' assert debug_repr(a + a) == 'Call2(add, Array, Array)' @@ -914,8 +914,8 @@ assert debug_repr(b) == 'Array' def test_remove_invalidates(self): - from numpypy import array - from numpypy.pypy import remove_invalidates + from _numpypy import array + from _numpypy.pypy import remove_invalidates a = array([1, 2, 3]) b = a + a remove_invalidates(a) @@ -923,7 +923,7 @@ assert b[0] == 28 def test_virtual_views(self): - from numpypy import arange + from _numpypy import arange a = arange(15) c = (a + a) d = c[::2] @@ -941,7 +941,7 @@ assert b[1] == 2 def test_tolist_scalar(self): - from numpypy import int32, bool_ + from _numpypy import int32, bool_ x = int32(23) assert x.tolist() == 23 assert type(x.tolist()) is int @@ -949,13 +949,13 @@ assert y.tolist() is True def test_tolist_zerodim(self): - from numpypy import array + from _numpypy import array x = array(3) assert x.tolist() == 3 assert type(x.tolist()) is int def test_tolist_singledim(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.tolist() == [0, 1, 2, 3, 4] assert type(a.tolist()[0]) is int @@ -963,17 +963,17 @@ assert b.tolist() == [0.2, 0.4, 0.6] def test_tolist_multidim(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4]]) assert a.tolist() == [[1, 
2], [3, 4]] def test_tolist_view(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4]]) assert (a + a).tolist() == [[2, 4], [6, 8]] def test_tolist_slice(self): - from numpypy import array + from _numpypy import array a = array([[17.1, 27.2], [40.3, 50.3]]) assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] @@ -981,23 +981,23 @@ class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): - import numpypy - a = numpypy.zeros((2, 2)) + import _numpypy + a = _numpypy.zeros((2, 2)) assert len(a) == 2 def test_shape(self): - import numpypy - assert numpypy.zeros(1).shape == (1,) - assert numpypy.zeros((2, 2)).shape == (2, 2) - assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) - assert numpypy.array([[1], [2], [3]]).shape == (3, 1) - assert len(numpypy.zeros((3, 1, 2))) == 3 - raises(TypeError, len, numpypy.zeros(())) - raises(ValueError, numpypy.array, [[1, 2], 3]) + import _numpypy + assert _numpypy.zeros(1).shape == (1,) + assert _numpypy.zeros((2, 2)).shape == (2, 2) + assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) + assert _numpypy.array([[1], [2], [3]]).shape == (3, 1) + assert len(_numpypy.zeros((3, 1, 2))) == 3 + raises(TypeError, len, _numpypy.zeros(())) + raises(ValueError, _numpypy.array, [[1, 2], 3]) def test_getsetitem(self): - import numpypy - a = numpypy.zeros((2, 3, 1)) + import _numpypy + a = _numpypy.zeros((2, 3, 1)) raises(IndexError, a.__getitem__, (2, 0, 0)) raises(IndexError, a.__getitem__, (0, 3, 0)) raises(IndexError, a.__getitem__, (0, 0, 1)) @@ -1008,8 +1008,8 @@ assert a[1, -1, 0] == 3 def test_slices(self): - import numpypy - a = numpypy.zeros((4, 3, 2)) + import _numpypy + a = _numpypy.zeros((4, 3, 2)) raises(IndexError, a.__getitem__, (4,)) raises(IndexError, a.__getitem__, (3, 3)) raises(IndexError, a.__getitem__, (slice(None), 3)) @@ -1042,51 +1042,51 @@ assert a[1][2][1] == 15 def test_init_2(self): - import numpypy - raises(ValueError, numpypy.array, [[1], 2]) - 
raises(ValueError, numpypy.array, [[1, 2], [3]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]]) - a = numpypy.array([[1, 2], [4, 5]]) + import _numpypy + raises(ValueError, _numpypy.array, [[1], 2]) + raises(ValueError, _numpypy.array, [[1, 2], [3]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]]) + a = _numpypy.array([[1, 2], [4, 5]]) assert a[0, 1] == 2 assert a[0][1] == 2 - a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) + a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) assert (a[0, 1] == [3, 4]).all() def test_setitem_slice(self): - import numpypy - a = numpypy.zeros((3, 4)) + import _numpypy + a = _numpypy.zeros((3, 4)) a[1] = [1, 2, 3, 4] assert a[1, 2] == 3 raises(TypeError, a[1].__setitem__, [1, 2, 3]) - a = numpypy.array([[1, 2], [3, 4]]) + a = _numpypy.array([[1, 2], [3, 4]]) assert (a == [[1, 2], [3, 4]]).all() - a[1] = numpypy.array([5, 6]) + a[1] = _numpypy.array([5, 6]) assert (a == [[1, 2], [5, 6]]).all() - a[:, 1] = numpypy.array([8, 10]) + a[:, 1] = _numpypy.array([8, 10]) assert (a == [[1, 8], [5, 10]]).all() - a[0, :: -1] = numpypy.array([11, 12]) + a[0, :: -1] = _numpypy.array([11, 12]) assert (a == [[12, 11], [5, 10]]).all() def test_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert ((a + a) == \ array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all() def test_getitem_add(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) assert (a + a)[1, 1] == 8 def test_ufunc_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([[1, 2], [3, 4]]) b = negative(a + a) assert (b == [[-2, -4], [-6, -8]]).all() def test_getitem_3(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], 
[9, 10], [11, 12], [13, 14]]) b = a[::2] @@ -1097,12 +1097,12 @@ assert c[1][1] == 12 def test_multidim_ones(self): - from numpypy import ones + from _numpypy import ones a = ones((1, 2, 3)) assert a[0, 1, 2] == 1.0 def test_multidim_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((3, 3)) b = ones((3, 3)) a[:, 1:3] = b[:, 1:3] @@ -1113,21 +1113,21 @@ assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all() def test_broadcast_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) b = array([5, 6]) c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]]) assert c.all() def test_broadcast_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((10, 10)) b = ones(10) a[:, :] = b assert a[3, 5] == 1 def test_broadcast_shape_agreement(self): - from numpypy import zeros, array + from _numpypy import zeros, array a = zeros((3, 1, 3)) b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32))) c = ((a + b) == [b, b, b]) @@ -1141,7 +1141,7 @@ assert c.all() def test_broadcast_scalar(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((4, 5), 'd') a[:, 1] = 3 assert a[2, 1] == 3 @@ -1152,14 +1152,14 @@ assert a[3, 2] == 0 def test_broadcast_call2(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((4, 1, 5)) b = ones((4, 3, 5)) b[:] = (a + a) assert (b == zeros((4, 3, 5))).all() def test_broadcast_virtualview(self): - from numpypy import arange, zeros + from _numpypy import arange, zeros a = arange(8).reshape([2, 2, 2]) b = (a + a)[1, 1] c = zeros((2, 2, 2)) @@ -1167,13 +1167,13 @@ assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all() def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert a.argmax() == 5 assert a[:2, ].argmax() == 3 def test_broadcast_wrong_shapes(self): - from numpypy import zeros + from 
_numpypy import zeros a = zeros((4, 3, 2)) b = zeros((4, 2)) exc = raises(ValueError, lambda: a + b) @@ -1181,7 +1181,7 @@ " together with shapes (4,3,2) (4,2)" def test_reduce(self): - from numpypy import array + from _numpypy import array a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) assert a.sum() == (13 * 12) / 2 b = a[1:, 1::2] @@ -1189,7 +1189,7 @@ assert c.sum() == (6 + 8 + 10 + 12) * 2 def test_transpose(self): - from numpypy import array + from _numpypy import array a = array(((range(3), range(3, 6)), (range(6, 9), range(9, 12)), (range(12, 15), range(15, 18)), @@ -1208,7 +1208,7 @@ assert(b[:, 0] == a[0, :]).all() def test_flatiter(self): - from numpypy import array, flatiter + from _numpypy import array, flatiter a = array([[10, 30], [40, 60]]) f_iter = a.flat assert f_iter.next() == 10 @@ -1223,23 +1223,23 @@ assert s == 140 def test_flatiter_array_conv(self): - from numpypy import array, dot + from _numpypy import array, dot a = array([1, 2, 3]) assert dot(a.flat, a.flat) == 14 def test_flatiter_varray(self): - from numpypy import ones + from _numpypy import ones a = ones((2, 2)) assert list(((a + a).flat)) == [2, 2, 2, 2] def test_slice_copy(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((10, 10)) b = a[0].copy() assert (b == zeros(10)).all() def test_array_interface(self): - from numpypy import array + from _numpypy import array a = array([1, 2, 3]) i = a.__array_interface__ assert isinstance(i['data'][0], int) @@ -1261,7 +1261,7 @@ def test_fromstring(self): import sys - from numpypy import fromstring, array, uint8, float32, int32 + from _numpypy import fromstring, array, uint8, float32, int32 a = fromstring(self.data) for i in range(4): @@ -1325,7 +1325,7 @@ assert (u == [1, 0]).all() def test_fromstring_types(self): - from numpypy import (fromstring, int8, int16, int32, int64, uint8, + from _numpypy import (fromstring, int8, int16, int32, int64, uint8, uint16, uint32, float32, float64) a = fromstring('\xFF', 
dtype=int8) @@ -1350,7 +1350,7 @@ assert j[0] == 12 def test_fromstring_invalid(self): - from numpypy import fromstring, uint16, uint8, int32 + from _numpypy import fromstring, uint16, uint8, int32 #default dtype is 64-bit float, so 3 bytes should fail raises(ValueError, fromstring, "\x01\x02\x03") #3 bytes is not modulo 2 bytes (int16) @@ -1361,7 +1361,7 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): - from numpypy import array, zeros + from _numpypy import array, zeros int_size = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" @@ -1389,7 +1389,7 @@ assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import arange, zeros + from _numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1414,7 +1414,7 @@ [500, 1001]])''' def test_repr_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert repr(b) == "array([1.0, 3.0])" @@ -1429,7 +1429,7 @@ assert repr(b) == "array([], shape=(0, 5), dtype=int16)" def test_str(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -1462,7 +1462,7 @@ assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]' def test_str_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" @@ -1478,7 +1478,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_arange(self): - from numpypy import arange, array, dtype + from _numpypy import arange, array, dtype a = arange(3) assert (a == [0, 1, 2]).all() assert a.dtype is dtype(int) @@ -1500,7 +1500,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_app_reshape(self): - from numpypy import arange, array, dtype, reshape + from 
_numpypy import arange, array, dtype, reshape
        a = arange(12)
        b = reshape(a, (3, 4))
        assert b.shape == (3, 4)
diff --git a/pypy/module/micronumpy/test/test_ztranslation.py b/pypy/module/micronumpy/test/test_ztranslation.py
--- a/pypy/module/micronumpy/test/test_ztranslation.py
+++ b/pypy/module/micronumpy/test/test_ztranslation.py
@@ -5,4 +5,4 @@
     # XXX: If there are signatures floating around this might explode. This fix
     # is ugly.
     signature.known_sigs.clear()
-    checkmodule('micronumpy')
+    checkmodule('_numpypy')

From noreply at buildbot.pypy.org Fri Jan 6 20:50:57 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 20:50:57 +0100 (CET)
Subject: [pypy-commit] pypy import-numpy: fix test_ztranslation
Message-ID: <20120106195057.1AE9382CAB@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: import-numpy
Changeset: r51080:d06444d3b317
Date: 2012-01-06 21:50 +0200
http://bitbucket.org/pypy/pypy/changeset/d06444d3b317/

Log: fix test_ztranslation

diff --git a/pypy/module/micronumpy/test/test_ztranslation.py b/pypy/module/micronumpy/test/test_ztranslation.py
--- a/pypy/module/micronumpy/test/test_ztranslation.py
+++ b/pypy/module/micronumpy/test/test_ztranslation.py
@@ -5,4 +5,4 @@
     # XXX: If there are signatures floating around this might explode. This fix
     # is ugly.
    signature.known_sigs.clear()
-    checkmodule('_numpypy')
+    checkmodule('micronumpy')

From noreply at buildbot.pypy.org Fri Jan 6 20:57:50 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 20:57:50 +0100 (CET)
Subject: [pypy-commit] pypy import-numpy: fix more tests, thanks alex
Message-ID: <20120106195750.8169C82CAB@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: import-numpy
Changeset: r51081:9b39d93ee949
Date: 2012-01-06 21:57 +0200
http://bitbucket.org/pypy/pypy/changeset/9b39d93ee949/

Log: fix more tests, thanks alex

diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py
--- a/pypy/module/micronumpy/app_numpy.py
+++ b/pypy/module/micronumpy/app_numpy.py
@@ -1,6 +1,6 @@
 import math
 
-import numpypy
+import _numpypy
 
 inf = float("inf")
@@ -14,29 +14,29 @@
     return mean(a)
 
 def identity(n, dtype=None):
-    a = numpypy.zeros((n,n), dtype=dtype)
+    a = _numpypy.zeros((n,n), dtype=dtype)
     for i in range(n):
         a[i][i] = 1
     return a
 
 def mean(a):
     if not hasattr(a, "mean"):
-        a = numpypy.array(a)
+        a = _numpypy.array(a)
     return a.mean()
 
 def sum(a):
     if not hasattr(a, "sum"):
-        a = numpypy.array(a)
+        a = _numpypy.array(a)
     return a.sum()
 
 def min(a):
     if not hasattr(a, "min"):
-        a = numpypy.array(a)
+        a = _numpypy.array(a)
     return a.min()
 
 def max(a):
     if not hasattr(a, "max"):
-        a = numpypy.array(a)
+        a = _numpypy.array(a)
     return a.max()
 
 def arange(start, stop=None, step=1, dtype=None):
@@ -47,9 +47,9 @@
         stop = start
         start = 0
     if dtype is None:
-        test = numpypy.array([start, stop, step, 0])
+        test = _numpypy.array([start, stop, step, 0])
         dtype = test.dtype
-    arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype)
+    arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype)
     i = start
     for j in range(arr.size):
         arr[j] = i
@@ -90,5 +90,5 @@
     you should assign the new shape to the shape attribute of the array
     '''
     if not hasattr(a, 'reshape'):
-        a = numpypy.array(a)
+        a = _numpypy.array(a)
     return
a.reshape(shape) From noreply at buildbot.pypy.org Fri Jan 6 20:59:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 6 Jan 2012 20:59:45 +0100 (CET) Subject: [pypy-commit] pypy import-numpy: fix more tests Message-ID: <20120106195945.D984682CAB@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: import-numpy Changeset: r51082:31e655394304 Date: 2012-01-06 21:59 +0200 http://bitbucket.org/pypy/pypy/changeset/31e655394304/ Log: fix more tests diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpypy import add, ufunc + from _numpypy import add, ufunc assert isinstance(add, ufunc) assert repr(add) == "" assert repr(ufunc) == "" def test_ufunc_attrs(self): - from numpypy import add, multiply, sin + from _numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,7 +22,7 @@ assert sin.nin == 1 def test_wrong_arguments(self): - from numpypy import add, sin + from _numpypy import add, sin raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) @@ -30,14 +30,14 @@ raises(ValueError, sin) def test_single_item(self): - from numpypy import negative, sign, minimum + from _numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpypy import array, ndarray, negative, minimum + from _numpypy import array, ndarray, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpypy import array, 
absolute + from _numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from numpypy import array, add + from _numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpypy import array, divide + from _numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -114,7 +114,7 @@ assert (divide(array([-10]), array([2])) == array([-5])).all() def test_fabs(self): - from numpypy import array, fabs + from _numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -123,7 +123,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpypy import array, minimum + from _numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -132,7 +132,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpypy import array, maximum + from _numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -145,7 +145,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpypy import array, multiply + from _numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -154,7 +154,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpypy import array, sign, dtype + from _numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -173,7 +173,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpypy import array, reciprocal + from _numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -182,7 +182,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpypy import array, subtract + from _numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, 
-2.0,-3.0]) @@ -191,7 +191,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpypy import array, floor + from _numpypy import array, floor reference = [-2.0, -1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -200,7 +200,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpypy import array, copysign + from _numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -216,7 +216,7 @@ def test_exp(self): import math - from numpypy import array, exp + from _numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -230,7 +230,7 @@ def test_sin(self): import math - from numpypy import array, sin + from _numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -243,7 +243,7 @@ def test_cos(self): import math - from numpypy import array, cos + from _numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -252,7 +252,7 @@ def test_tan(self): import math - from numpypy import array, tan + from _numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -262,7 +262,7 @@ def test_arcsin(self): import math - from numpypy import array, arcsin + from _numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -276,7 +276,7 @@ def test_arccos(self): import math - from numpypy import array, arccos + from _numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -291,7 +291,7 @@ def test_arctan(self): import math - from numpypy import array, arctan + from _numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) @@ -304,7 +304,7 @@ def test_arcsinh(self): import math - from numpypy import arcsinh, inf + from _numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ 
-312,7 +312,7 @@ def test_arctanh(self): import math - from numpypy import arctanh + from _numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == arctanh(v) @@ -323,7 +323,7 @@ def test_sqrt(self): import math - from numpypy import sqrt + from _numpypy import sqrt nan, inf = float("nan"), float("inf") data = [1, 2, 3, inf] @@ -333,13 +333,13 @@ assert math.isnan(sqrt(nan)) def test_reduce_errors(self): - from numpypy import sin, add + from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) raises(TypeError, add.reduce, 1) def test_reduce(self): - from numpypy import add, maximum + from _numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 @@ -348,7 +348,7 @@ def test_comparisons(self): import operator - from numpypy import equal, not_equal, less, less_equal, greater, greater_equal + from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), From noreply at buildbot.pypy.org Fri Jan 6 21:14:45 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 6 Jan 2012 21:14:45 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: Add hashes. Message-ID: <20120106201445.80C8B82CAB@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r302:3b4a745b2e65 Date: 2012-01-06 21:13 +0100 http://bitbucket.org/pypy/pypy.org/changeset/3b4a745b2e65/ Log: Add hashes. 
diff --git a/download.html b/download.html
--- a/download.html
+++ b/download.html
@@ -183,9 +183,11 @@
 ceb8dfe7d9d1aeb558553b91b381a1a8  pypy-1.7-linux64.tar.bz2
 8a6e2583902bc6f2661eb3c96b45f4e3  pypy-1.7-linux.tar.bz2
 ff979054fc8e17b4973ffebb9844b159  pypy-1.7-osx64.tar.bz2
+fd0ad58b92ca0933c087bb93a82fda9e  release-1.7.tar.bz2
 d364e3aa0dd5e0e1ad7f1800a0bfa7e87250c8bb  pypy-1.7-linux64.tar.bz2
 68554c4cbcc20b03ff56b6a1495a6ecf8f24b23a  pypy-1.7-linux.tar.bz2
 cedeb1d6bf0431589f62e8c95b71fbfe6c4e7b96  pypy-1.7-osx64.tar.bz2
+b4be3a8dc69cd838a49382867db3c41864b9e8d9  release-1.7.tar.bz2
diff --git a/source/download.txt b/source/download.txt
--- a/source/download.txt
+++ b/source/download.txt
@@ -182,7 +182,9 @@
    ceb8dfe7d9d1aeb558553b91b381a1a8  pypy-1.7-linux64.tar.bz2
    8a6e2583902bc6f2661eb3c96b45f4e3  pypy-1.7-linux.tar.bz2
    ff979054fc8e17b4973ffebb9844b159  pypy-1.7-osx64.tar.bz2
+   fd0ad58b92ca0933c087bb93a82fda9e  release-1.7.tar.bz2
    d364e3aa0dd5e0e1ad7f1800a0bfa7e87250c8bb  pypy-1.7-linux64.tar.bz2
    68554c4cbcc20b03ff56b6a1495a6ecf8f24b23a  pypy-1.7-linux.tar.bz2
    cedeb1d6bf0431589f62e8c95b71fbfe6c4e7b96  pypy-1.7-osx64.tar.bz2
+   b4be3a8dc69cd838a49382867db3c41864b9e8d9  release-1.7.tar.bz2

From noreply at buildbot.pypy.org Fri Jan 6 21:14:46 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Fri, 6 Jan 2012 21:14:46 +0100 (CET)
Subject: [pypy-commit] pypy.org extradoc: merge heads
Message-ID: <20120106201446.981E982CAB@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: extradoc
Changeset: r303:c809887b1d6e
Date: 2012-01-06 21:14 +0100
http://bitbucket.org/pypy/pypy.org/changeset/c809887b1d6e/

Log: merge heads

diff --git a/don1.html b/don1.html
--- a/don1.html
+++ b/don1.html
@@ -13,7 +13,7 @@
    });
-   $4369 of $105000 (4.2%)
+   $4882 of $105000 (4.6%)
diff --git a/don3.html b/don3.html
--- a/don3.html
+++ b/don3.html
@@ -13,7 +13,7 @@
    });
-   $2567 of $60000 (4.2%)
+   $5820 of $60000 (9.7%)
From noreply at buildbot.pypy.org Fri Jan 6 23:18:44 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Fri, 6 Jan 2012 23:18:44 +0100 (CET)
Subject: [pypy-commit] pypy import-numpy: abandon this approach
Message-ID: <20120106221844.BB87A82BFF@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: import-numpy
Changeset: r51083:eb12a969ddf7
Date: 2012-01-07 00:18 +0200
http://bitbucket.org/pypy/pypy/changeset/eb12a969ddf7/

Log: abandon this approach

From noreply at buildbot.pypy.org Sat Jan 7 02:37:51 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Sat, 7 Jan 2012 02:37:51 +0100 (CET)
Subject: [pypy-commit] pypy numpy-concatenate: closed branch that went nowhere
Message-ID: <20120107013751.2B93C82BFF@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: numpy-concatenate
Changeset: r51084:c62c1d1837b7
Date: 2012-01-07 03:37 +0200
http://bitbucket.org/pypy/pypy/changeset/c62c1d1837b7/

Log: closed branch that went nowhere

From noreply at buildbot.pypy.org Sat Jan 7 11:06:18 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:18 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: Redo the explicit collect(), at least the most useful case.
Message-ID: <20120107100618.84F3D82BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51085:735c5da06f6c
Date: 2012-01-06 19:05 +0100
http://bitbucket.org/pypy/pypy/changeset/735c5da06f6c/

Log: Redo the explicit collect(), at least the most useful case.

diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py
--- a/pypy/rpython/memory/gc/concurrentgen.py
+++ b/pypy/rpython/memory/gc/concurrentgen.py
@@ -433,9 +433,7 @@
         debug_start("gc-major")
         #
         # We have to first wait for the previous minor collection to finish:
-        debug_start("gc-major-wait")
         self.stop_collection(wait=True)
-        debug_stop("gc-major-wait")
         #
         # Start the major collection.
         self._start_major_collection()
@@ -443,13 +441,11 @@
         debug_stop("gc-major")
 
-    def sync_end_of_collection(self):
+    def wait_for_the_end_of_collection(self):
         """In the mutator thread: wait for the minor collection currently
         running (if any) to finish, and synchronize the two threads."""
         if self.collector.running != 0:
-            debug_start("gc-stop")
-            self._stop_collection()
-            debug_stop("gc-stop")
+            self.stop_collection(wait=True)
         #
         # We must *not* run execute_finalizers_ll() here, because it
         # can start the next collection, and then this function returns
@@ -461,7 +457,9 @@
 
     def stop_collection(self, wait):
         if wait:
+            debug_start("gc-stop")
             self.acquire(self.finished_lock)
+            debug_stop("gc-stop")
         else:
             if not self.try_acquire(self.finished_lock):
                 return False
@@ -503,7 +501,13 @@
 
     def collect(self, gen=4):
+        debug_start("gc-forced-collect")
+        self.trigger_next_collection(force_major_collection=True)
+        self.wait_for_the_end_of_collection()
+        self.execute_finalizers_ll()
+        debug_stop("gc-forced-collect")
         return
+        # XXX reimplement this:
         """
         gen=0: Trigger a minor collection if none is running.  Never blocks,
         except if it happens to start a major collection.
@@ -532,15 +536,14 @@
         self.execute_finalizers_ll()
         debug_stop("gc-forced-collect")
 
-    def trigger_next_collection(self, force_major_collection=False):
-        """In the mutator thread: triggers the next minor collection."""
+    def trigger_next_collection(self, force_major_collection):
+        """In the mutator thread: triggers the next minor or major collection."""
         #
         # In case the previous collection is not over yet, wait for it
         self.wait_for_the_end_of_collection()
         #
         # Choose between a minor and a major collection
-        if (force_major_collection or
-            self.size_still_available_before_major < 0):
+        if force_major_collection:
             self._start_major_collection()
         else:
             self._start_minor_collection()

From noreply at buildbot.pypy.org Sat Jan 7 11:06:19 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:19 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 0fe83ac4f0da on branch numpy-full-fromstring
Message-ID: <20120107100619.DB3A882BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: closed-branches
Changeset: r51086:cf8c8221023a
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/cf8c8221023a/

Log: Merge closed head 0fe83ac4f0da on branch numpy-full-fromstring

From noreply at buildbot.pypy.org Sat Jan 7 11:06:21 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:21 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 10e52e09cda7 on branch windows-no-err-dlg
Message-ID: <20120107100621.0286F82BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: closed-branches
Changeset: r51087:fc8babbb0d49
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/fc8babbb0d49/

Log: Merge closed head 10e52e09cda7 on branch windows-no-err-dlg

From noreply at buildbot.pypy.org Sat Jan 7 11:06:22 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:22 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head bae684cd82fb on branch counter-decay
Message-ID: <20120107100622.1B69882BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: closed-branches
Changeset: r51088:274493f9237a
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/274493f9237a/

Log: Merge closed head bae684cd82fb on branch counter-decay

From noreply at buildbot.pypy.org Sat Jan 7 11:06:23 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:23 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 6b116d5dea60 on branch numpy-faster-setslice
Message-ID: <20120107100623.296A682BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: closed-branches
Changeset: r51089:907165accd25
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/907165accd25/

Log: Merge closed head 6b116d5dea60 on branch numpy-faster-setslice

From noreply at buildbot.pypy.org Sat Jan 7 11:06:24 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:24 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 93bb4d305fdb on branch nedbat-sandbox-2
Message-ID: <20120107100624.3C68E82BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: closed-branches
Changeset: r51090:57ce7dbc2991
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/57ce7dbc2991/

Log: Merge closed head 93bb4d305fdb on branch nedbat-sandbox-2

From noreply at buildbot.pypy.org Sat Jan 7 11:06:25 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:25 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head cef73b42fc52 on branch numpypy-repr-fix
Message-ID: <20120107100625.49F3782BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: closed-branches
Changeset: r51091:6ea46dc2c7b0
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/6ea46dc2c7b0/

Log: Merge closed head cef73b42fc52 on branch numpypy-repr-fix

From noreply at buildbot.pypy.org Sat Jan 7 11:06:26 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:26 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 2adf19881a7c on branch numpy-dtype-strings
Message-ID: <20120107100626.64D1F82BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: closed-branches
Changeset: r51092:5bfacfad4b18
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/5bfacfad4b18/

Log: Merge closed head 2adf19881a7c on branch numpy-dtype-strings

From noreply at buildbot.pypy.org Sat Jan 7 11:06:28 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:28 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head 51e67e28230a on branch numpy-ndim-size
Message-ID: <20120107100628.4B14982BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: closed-branches
Changeset: r51093:4c2484433848
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/4c2484433848/

Log: Merge closed head 51e67e28230a on branch numpy-ndim-size

From noreply at buildbot.pypy.org Sat Jan 7 11:06:29 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:29 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head aaab53d723c0 on branch numpy-sort
Message-ID: <20120107100629.5758382BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: closed-branches
Changeset: r51094:1b7c79d96aa3
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/1b7c79d96aa3/

Log: Merge closed head aaab53d723c0 on branch numpy-sort

From noreply at buildbot.pypy.org Sat Jan 7 11:06:30 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 11:06:30 +0100 (CET)
Subject: [pypy-commit] pypy closed-branches: Merge closed head e1b50a7fd007 on branch numpy-dtype
Message-ID: <20120107100630.669E882BFF@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: closed-branches
Changeset: r51095:cb83722c2596
Date: 2012-01-07 11:04 +0100
http://bitbucket.org/pypy/pypy/changeset/cb83722c2596/ Log: Merge closed head e1b50a7fd007 on branch numpy-dtype From noreply at buildbot.pypy.org Sat Jan 7 11:06:31 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 7 Jan 2012 11:06:31 +0100 (CET) Subject: [pypy-commit] pypy closed-branches: Merge closed head 1436740d3b9b on branch numpy-complex Message-ID: <20120107100631.B2AFD82BFF@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: closed-branches Changeset: r51096:3efb35fc9cd7 Date: 2012-01-07 11:04 +0100 http://bitbucket.org/pypy/pypy/changeset/3efb35fc9cd7/ Log: Merge closed head 1436740d3b9b on branch numpy-complex From noreply at buildbot.pypy.org Sat Jan 7 11:06:32 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 7 Jan 2012 11:06:32 +0100 (CET) Subject: [pypy-commit] pypy closed-branches: Merge closed head c260a0d96e73 on branch jit-raw-array-of-struct Message-ID: <20120107100632.D1C6182BFF@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: closed-branches Changeset: r51097:6da401a761cc Date: 2012-01-07 11:04 +0100 http://bitbucket.org/pypy/pypy/changeset/6da401a761cc/ Log: Merge closed head c260a0d96e73 on branch jit-raw-array-of-struct From noreply at buildbot.pypy.org Sat Jan 7 11:06:33 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 7 Jan 2012 11:06:33 +0100 (CET) Subject: [pypy-commit] pypy closed-branches: Merge closed head 16ea77edcb5e on branch separate-applevel-numpy Message-ID: <20120107100633.E112A82BFF@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: closed-branches Changeset: r51098:7aae8a854792 Date: 2012-01-07 11:04 +0100 http://bitbucket.org/pypy/pypy/changeset/7aae8a854792/ Log: Merge closed head 16ea77edcb5e on branch separate-applevel-numpy From noreply at buildbot.pypy.org Sat Jan 7 11:06:35 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 7 Jan 2012 11:06:35 +0100 (CET) Subject: [pypy-commit] pypy closed-branches: Merge closed head eb12a969ddf7 on branch import-numpy 
Message-ID: <20120107100635.177D682BFF@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: closed-branches Changeset: r51099:8947a5d05606 Date: 2012-01-07 11:04 +0100 http://bitbucket.org/pypy/pypy/changeset/8947a5d05606/ Log: Merge closed head eb12a969ddf7 on branch import-numpy From noreply at buildbot.pypy.org Sat Jan 7 11:06:36 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 7 Jan 2012 11:06:36 +0100 (CET) Subject: [pypy-commit] pypy closed-branches: Merge closed head c62c1d1837b7 on branch numpy-concatenate Message-ID: <20120107100636.44CE882BFF@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: closed-branches Changeset: r51100:84207b40e275 Date: 2012-01-07 11:04 +0100 http://bitbucket.org/pypy/pypy/changeset/84207b40e275/ Log: Merge closed head c62c1d1837b7 on branch numpy-concatenate From noreply at buildbot.pypy.org Sat Jan 7 11:06:37 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 7 Jan 2012 11:06:37 +0100 (CET) Subject: [pypy-commit] pypy closed-branches: re-close this branch Message-ID: <20120107100637.5C5FF82BFF@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: closed-branches Changeset: r51101:77e727fe4df1 Date: 2012-01-07 11:04 +0100 http://bitbucket.org/pypy/pypy/changeset/77e727fe4df1/ Log: re-close this branch From noreply at buildbot.pypy.org Sat Jan 7 12:01:54 2012 From: noreply at buildbot.pypy.org (hager) Date: Sat, 7 Jan 2012 12:01:54 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: remove test_return_pointer because it is obsolete now Message-ID: <20120107110154.726C382BFF@wyvern.cs.uni-duesseldorf.de> Author: Sven Hager Branch: ppc-jit-backend Changeset: r51102:5152aab1cfbb Date: 2012-01-07 12:01 +0100 http://bitbucket.org/pypy/pypy/changeset/5152aab1cfbb/ Log: remove test_return_pointer because it is obsolete now diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ 
-523,25 +523,6 @@ def test_ovf_operations_reversed(self): self.test_ovf_operations(reversed=True) - - def test_return_pointer(self): - u_box, U_box = self.alloc_instance(self.U) - i0 = BoxInt() - i1 = BoxInt() - ptr = BoxPtr() - - operations = [ - ResOperation(rop.FINISH, [ptr], None, descr=BasicFailDescr(1)) - ] - inputargs = [i0, ptr, i1] - looptoken = JitCellToken() - self.cpu.compile_loop(inputargs, operations, looptoken) - self.cpu.set_future_value_int(0, 10) - self.cpu.set_future_value_ref(1, u_box.value) - self.cpu.set_future_value_int(2, 20) - fail = self.cpu.execute_token(looptoken) - result = self.cpu.get_latest_value_ref(0) - assert result == u_box.value def test_spilling(self): ops = ''' From noreply at buildbot.pypy.org Sat Jan 7 12:15:59 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 12:15:59 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: add some jit hooks, a bit ugly but works Message-ID: <20120107111559.D357A82BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51103:4c1b9df9819d Date: 2012-01-07 13:15 +0200 http://bitbucket.org/pypy/pypy/changeset/4c1b9df9819d/ Log: add some jit hooks, a bit ugly but works diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py --- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -2,7 +2,6 @@ This module defines the abstract base classes that support execution: Code and Frame. 
""" -from pypy.rlib import jit from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import Wrappable diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py --- a/pypy/jit/metainterp/test/test_jitportal.py +++ b/pypy/jit/metainterp/test/test_jitportal.py @@ -1,8 +1,10 @@ from pypy.rlib.jit import JitDriver, JitPortal +from pypy.rlib import jit_hooks from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import JitPolicy from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT +from pypy.jit.metainterp.resoperation import rop class TestJitPortal(LLJitMixin): def test_abort_quasi_immut(self): @@ -94,3 +96,21 @@ self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitPortal())) assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] + def test_resop_interface(self): + driver = JitDriver(greens = [], reds = ['i']) + + def loop(i): + while i > 0: + driver.jit_merge_point(i=i) + i -= 1 + + def main(): + loop(1) + op = jit_hooks.resop_new(rop.INT_ADD, + [jit_hooks.boxint_new(3), + jit_hooks.boxint_new(4)], + jit_hooks.boxint_new(1)) + return jit_hooks.resop_opnum(op) + + res = self.meta_interp(main, []) + assert res == rop.INT_ADD diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -112,7 +112,7 @@ return ll_meta_interp(function, args, backendopt=backendopt, translate_support_code=True, **kwds) -def _find_jit_marker(graphs, marker_name): +def _find_jit_marker(graphs, marker_name, check_driver=True): results = [] for graph in graphs: for block in graph.iterblocks(): @@ -120,8 +120,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - (op.args[1].value is None or - op.args[1].value.active)): # the jitdriver + (not check_driver or op.args[1].value is None or + op.args[1].value.active)): # the jitdriver 
results.append((graph, block, i)) return results @@ -140,6 +140,9 @@ "found several jit_merge_points in the same graph") return results +def find_access_helpers(graphs): + return _find_jit_marker(graphs, 'access_helper', False) + def locate_jit_merge_point(graph): [(graph, block, pos)] = find_jit_merge_points([graph]) return block, pos, block.operations[pos] @@ -217,6 +220,7 @@ verbose = False # not self.cpu.translate_support_code self.codewriter.make_jitcodes(verbose=verbose) self.rewrite_can_enter_jits() + self.rewrite_access_helpers() self.rewrite_set_param() self.rewrite_force_virtual(vrefinfo) self.rewrite_force_quasi_immutable() @@ -621,6 +625,20 @@ graph = self.annhelper.getgraph(func, args_s, s_result) return self.annhelper.graph2delayed(graph, FUNC) + def rewrite_access_helpers(self): + ah = find_access_helpers(self.translator.graphs) + for graph, block, index in ah: + op = block.operations[index] + self.rewrite_access_helper(op) + + def rewrite_access_helper(self, op): + ARGS = [arg.concretetype for arg in op.args[2:]] + RESULT = op.result.concretetype + ptr = self.helper_func(lltype.Ptr(lltype.FuncType(ARGS, RESULT)), + op.args[1].value) + op.opname = 'direct_call' + op.args = [Constant(ptr, lltype.Void)] + op.args[2:] + def rewrite_jit_merge_points(self, policy): for jd in self.jitdrivers_sd: self.rewrite_jit_merge_point(jd, policy) diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -777,7 +777,8 @@ assert isinstance(s_inst, annmodel.SomeInstance) def specialize_call(self, hop): - from pypy.rpython.lltypesystem import lltype, rclass + from pypy.rpython.lltypesystem import rclass, lltype + classrepr = rclass.get_type_repr(hop.rtyper) hop.exception_cannot_occur() diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/jit_hooks.py @@ -0,0 +1,58 @@ + +from pypy.rpython.extregistry import ExtRegistryEntry +from pypy.annotation import model as annmodel 
+from pypy.rpython.lltypesystem import llmemory, lltype +from pypy.rpython.lltypesystem import rclass +from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\ + cast_base_ptr_to_instance + +def register_helper(helper, s_result): + + class Entry(ExtRegistryEntry): + _about_ = helper + + def compute_result_annotation(self, *args): + return s_result + + def specialize_call(self, hop): + from pypy.rpython.lltypesystem import lltype + + c_func = hop.inputconst(lltype.Void, helper) + c_name = hop.inputconst(lltype.Void, 'access_helper') + args_v = [hop.inputarg(arg, arg=i) + for i, arg in enumerate(hop.args_r)] + return hop.genop('jit_marker', [c_name, c_func] + args_v, + resulttype=hop.r_result) + +def _cast_to_box(llref): + from pypy.jit.metainterp.history import AbstractValue + + ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref) + return cast_base_ptr_to_instance(AbstractValue, ptr) + +def resop_new(no, llargs, llres): + from pypy.jit.metainterp.history import ResOperation + + args = [_cast_to_box(llarg) for llarg in llargs] + res = _cast_to_box(llres) + rop = ResOperation(no, args, res) + return lltype.cast_opaque_ptr(llmemory.GCREF, + cast_instance_to_base_ptr(rop)) + +register_helper(resop_new, annmodel.SomePtr(llmemory.GCREF)) + +def boxint_new(no): + from pypy.jit.metainterp.history import BoxInt + return lltype.cast_opaque_ptr(llmemory.GCREF, + cast_instance_to_base_ptr(BoxInt(no))) + +register_helper(boxint_new, annmodel.SomePtr(llmemory.GCREF)) + +def resop_opnum(llop): + from pypy.jit.metainterp.resoperation import AbstractResOp + + opptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llop) + op = cast_base_ptr_to_instance(AbstractResOp, opptr) + return op.getopnum() + +register_helper(resop_opnum, annmodel.SomeInteger()) From noreply at buildbot.pypy.org Sat Jan 7 12:23:22 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 12:23:22 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: simplify and remove dead code Message-ID: 
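The `register_helper`/`rewrite_access_helpers` machinery in the patches above follows a two-phase pattern: helpers are recorded in a registry at import time, and a later translation pass rewrites each marked `jit_marker` call site into a direct call to the registered function. Outside RPython the same idea can be sketched with a plain decorator-based registry (all names here are hypothetical, not PyPy's actual API):

```python
HELPERS = {}

def register_helper(func):
    # Phase 1: record the helper under its name at import time.
    HELPERS[func.__name__] = func
    return func

@register_helper
def resop_opnum(op):
    return op["opnum"]

def rewrite_marker(marker_name, *args):
    # Phase 2: stand-in for warmspot's rewrite_access_helper, which
    # replaces a marker operation with a direct call to the helper.
    return HELPERS[marker_name](*args)

print(rewrite_marker("resop_opnum", {"opnum": 7}))  # 7
```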
<20120107112322.AB10D82BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51104:6c698de2d866 Date: 2012-01-07 13:22 +0200 http://bitbucket.org/pypy/pypy/changeset/6c698de2d866/ Log: simplify and remove dead code diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -104,20 +104,3 @@ num = GetSetProperty(WrappedOp.descr_num), ) WrappedOp.acceptable_as_base_class = False - -from pypy.rpython.extregistry import ExtRegistryEntry - -class WrappedOpRegistry(ExtRegistryEntry): - _type_ = WrappedOp - - def compute_annotation(self): - from pypy.annotation import model as annmodel - clsdef = self.bookkeeper.getuniqueclassdef(WrappedOp) - if not clsdef.attrs: - resopclsdef = self.bookkeeper.getuniqueclassdef(AbstractResOp) - attrs = {'offset': annmodel.SomeInteger(), - 'repr_of_resop': annmodel.SomeString(can_be_None=False), - 'op': annmodel.SomeInstance(resopclsdef)} - for attrname, s_v in attrs.iteritems(): - clsdef.generalize_attr(attrname, s_v) - return annmodel.SomeInstance(clsdef, can_be_None=True) diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -30,29 +30,32 @@ ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref) return cast_base_ptr_to_instance(AbstractValue, ptr) +def _cast_to_resop(llref): + from pypy.jit.metainterp.resoperation import AbstractResOp + + ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref) + return cast_base_ptr_to_instance(AbstractResOp, ptr) + +def _cast_to_gcref(obj): + return lltype.cast_opaque_ptr(llmemory.GCREF, + cast_instance_to_base_ptr(obj)) + def resop_new(no, llargs, llres): from pypy.jit.metainterp.history import ResOperation args = [_cast_to_box(llarg) for llarg in llargs] res = _cast_to_box(llres) - rop = ResOperation(no, args, res) - return lltype.cast_opaque_ptr(llmemory.GCREF, - 
cast_instance_to_base_ptr(rop)) + return _cast_to_gcref(ResOperation(no, args, res)) register_helper(resop_new, annmodel.SomePtr(llmemory.GCREF)) def boxint_new(no): from pypy.jit.metainterp.history import BoxInt - return lltype.cast_opaque_ptr(llmemory.GCREF, - cast_instance_to_base_ptr(BoxInt(no))) + return _cast_to_gcref(BoxInt(no)) register_helper(boxint_new, annmodel.SomePtr(llmemory.GCREF)) def resop_opnum(llop): - from pypy.jit.metainterp.resoperation import AbstractResOp - - opptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llop) - op = cast_base_ptr_to_instance(AbstractResOp, opptr) - return op.getopnum() + return _cast_to_resop(llop).getopnum() register_helper(resop_opnum, annmodel.SomeInteger()) From noreply at buildbot.pypy.org Sat Jan 7 12:49:26 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 12:49:26 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: add stuff to test_ztranslation and make it pass Message-ID: <20120107114926.DE4F482BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51105:76c53cca9b18 Date: 2012-01-07 13:48 +0200 http://bitbucket.org/pypy/pypy/changeset/76c53cca9b18/ Log: add stuff to test_ztranslation and make it pass diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -3,7 +3,9 @@ from pypy.jit.backend.llgraph import runner from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint +from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_opnum from pypy.jit.metainterp.jitprof import Profiler +from pypy.jit.metainterp.resoperation import rop from pypy.rpython.lltypesystem import lltype, llmemory class TranslationTest: @@ -22,6 +24,7 @@ # - jitdriver hooks # - two JITs # - string concatenation, slicing and comparison + # - jit 
hooks interface class Frame(object): _virtualizable2_ = ['l[*]'] @@ -91,7 +94,9 @@ return f.i # def main(i, j): - return f(i) - f2(i+j, i, j) + op = resop_new(rop.INT_ADD, [boxint_new(3), boxint_new(5)], + boxint_new(8)) + return f(i) - f2(i+j, i, j) + resop_opnum(op) res = ll_meta_interp(main, [40, 5], CPUClass=self.CPUClass, type_system=self.type_system, listops=True) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -1,4 +1,5 @@ import sys, py +from pypy.tool.sourcetools import func_with_new_name from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.annlowlevel import llhelper, MixLevelHelperAnnotator,\ cast_base_ptr_to_instance, hlstr @@ -634,10 +635,14 @@ def rewrite_access_helper(self, op): ARGS = [arg.concretetype for arg in op.args[2:]] RESULT = op.result.concretetype - ptr = self.helper_func(lltype.Ptr(lltype.FuncType(ARGS, RESULT)), - op.args[1].value) + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT)) + # make sure we make a copy of function so it no longer belongs + # to extregistry + func = op.args[1].value + func = func_with_new_name(func, func.func_name + '_compiled') + ptr = self.helper_func(FUNCPTR, func) op.opname = 'direct_call' - op.args = [Constant(ptr, lltype.Void)] + op.args[2:] + op.args = [Constant(ptr, FUNCPTR)] + op.args[2:] def rewrite_jit_merge_points(self, policy): for jd in self.jitdrivers_sd: diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -5,6 +5,7 @@ from pypy.rpython.lltypesystem import rclass from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\ cast_base_ptr_to_instance +from pypy.rlib.objectmodel import specialize def register_helper(helper, s_result): @@ -36,6 +37,7 @@ ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref) return cast_base_ptr_to_instance(AbstractResOp, ptr) + at specialize.argtype(0) def 
_cast_to_gcref(obj): return lltype.cast_opaque_ptr(llmemory.GCREF, cast_instance_to_base_ptr(obj)) @@ -43,7 +45,7 @@ def resop_new(no, llargs, llres): from pypy.jit.metainterp.history import ResOperation - args = [_cast_to_box(llarg) for llarg in llargs] + args = [_cast_to_box(llargs[i]) for i in range(len(llargs))] res = _cast_to_box(llres) return _cast_to_gcref(ResOperation(no, args, res)) From noreply at buildbot.pypy.org Sat Jan 7 12:53:54 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 12:53:54 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: an attempt to use the new interface Message-ID: <20120107115354.283A382BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51106:51d2eea00745 Date: 2012-01-07 13:53 +0200 http://bitbucket.org/pypy/pypy/changeset/51d2eea00745/ Log: an attempt to use the new interface diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,14 +1,15 @@ from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import unwrap_spec, interp2app +from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode from pypy.interpreter.error import OperationError -from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.annlowlevel import cast_base_ptr_to_instance from pypy.rpython.lltypesystem.rclass import OBJECT from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant +from pypy.rlib import jit_hooks class Cache(object): in_recursion = False @@ -77,7 +78,19 @@ return space.w_None def wrap_oplist(space, logops, operations, ops_offset): - return [WrappedOp(op, ops_offset[op], logops.repr_of_resop(op)) for op 
in operations] + return [WrappedOp(jit_hooks._cast_to_gcref(op), + ops_offset[op], + logops.repr_of_resop(op)) for op in operations] + + at unwrap_spec(num=int, offset=int, repr=str) +def descr_new_resop(space, num, w_args, w_res=NoneNotWrapped, offset=-1, + repr=''): + args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in w_args] + if w_res is None: + llres = lltype.nullptr(llmemory.GCREF) + else: + llres = jit_hooks.boxint_new(space.int_w(w_res)) + return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely @@ -91,16 +104,13 @@ return space.wrap(self.repr_of_resop) def descr_num(self, space): - return space.wrap(self.op.getopnum()) - - def descr_name(self, space): - return space.wrap(self.op.getopname()) + return space.wrap(jit_hooks.resop_opnum(self.op)) WrappedOp.typedef = TypeDef( 'ResOperation', __doc__ = WrappedOp.__doc__, + __new__ = interp2app(descr_new_resop), __repr__ = interp2app(WrappedOp.descr_repr), - name = GetSetProperty(WrappedOp.descr_name), num = GetSetProperty(WrappedOp.descr_num), ) WrappedOp.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -95,7 +95,7 @@ assert elem[2][2] == False assert len(elem[3]) == 3 int_add = elem[3][0] - assert int_add.name == 'int_add' + #assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() assert len(all) == 2 From noreply at buildbot.pypy.org Sat Jan 7 13:02:43 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 13:02:43 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: what's untested is broken Message-ID: <20120107120243.7C60582BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51107:80d78ac9430f Date: 2012-01-07 
14:02 +0200 http://bitbucket.org/pypy/pypy/changeset/80d78ac9430f/ Log: what's untested is broken diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -83,9 +83,10 @@ logops.repr_of_resop(op)) for op in operations] @unwrap_spec(num=int, offset=int, repr=str) -def descr_new_resop(space, num, w_args, w_res=NoneNotWrapped, offset=-1, +def descr_new_resop(space, w_tp, num, w_args, w_res=NoneNotWrapped, offset=-1, repr=''): - args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in w_args] + args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in + space.listview(w_args)] if w_res is None: llres = lltype.nullptr(llmemory.GCREF) else: diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -158,3 +158,9 @@ pypyjit.set_abort_hook(hook) self.on_abort() assert l == [('pypyjit', 'ABORT_TOO_LONG')] + + def test_creation(self): + import pypyjit + + op = pypyjit.ResOperation(self.int_add_num, [1, 3], 4) + assert op.num == self.int_add_num From noreply at buildbot.pypy.org Sat Jan 7 13:07:39 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 13:07:39 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: bah Message-ID: <20120107120739.8D5FB82BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51108:ac3f9f68d10e Date: 2012-01-07 14:07 +0200 http://bitbucket.org/pypy/pypy/changeset/ac3f9f68d10e/ Log: bah diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -88,7 +88,7 @@ args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in space.listview(w_args)] if w_res is None: - llres = lltype.nullptr(llmemory.GCREF) + llres = 
lltype.nullptr(llmemory.GCREF.TO) else: llres = jit_hooks.boxint_new(space.int_w(w_res)) return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) From noreply at buildbot.pypy.org Sat Jan 7 13:30:03 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 13:30:03 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: run those in a slightly different order, so we rewrite them before jitcodes Message-ID: <20120107123003.9963682BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51109:4e244fe6f3f8 Date: 2012-01-07 14:29 +0200 http://bitbucket.org/pypy/pypy/changeset/4e244fe6f3f8/ Log: run those in a slightly different order, so we rewrite them before jitcodes diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -219,9 +219,9 @@ self.portal = policy.portal verbose = False # not self.cpu.translate_support_code + self.rewrite_access_helpers() self.codewriter.make_jitcodes(verbose=verbose) self.rewrite_can_enter_jits() - self.rewrite_access_helpers() self.rewrite_set_param() self.rewrite_force_virtual(vrefinfo) self.rewrite_force_quasi_immutable() From noreply at buildbot.pypy.org Sat Jan 7 13:45:22 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 13:45:22 +0100 (CET) Subject: [pypy-commit] pypy translation-time-measurments: add some measurments Message-ID: <20120107124522.D3B8582BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: translation-time-measurments Changeset: r51110:6c5f73bd4ec9 Date: 2012-01-07 14:44 +0200 http://bitbucket.org/pypy/pypy/changeset/6c5f73bd4ec9/ Log: add some measurments diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -1,5 +1,6 @@ import sys import types +import time from pypy.tool.ansi_print import ansi_log, 
raise_nicer_exception from pypy.tool.pairtype import pair from pypy.tool.error import (format_blocked_annotation_error, @@ -25,6 +26,7 @@ import pypy.rpython.extfuncregistry # has side effects import pypy.rlib.nonconst # has side effects + self.counter = {} if translator is None: # interface for tests from pypy.translator.translator import TranslationContext @@ -247,9 +249,17 @@ block, graph = self.pendingblocks.popitem() if annmodel.DEBUG: self.flowin_block = block # we need to keep track of block + t0 = time.time() self.processblock(graph, block) + tk = time.time() + self.counter[graph] = self.counter.get(graph, 0) + tk - t0 self.policy.no_more_blocks_to_annotate(self) if not self.pendingblocks: + import os + f = open('/tmp/annotator%d' % os.getpid(), 'w') + for k, v in self.counter.iteritems(): + f.write('%s: %d' % (k, v)) + f.close() break # finished # make sure that the return variables of all graphs is annotated if self.added_blocks is not None: From noreply at buildbot.pypy.org Sat Jan 7 13:53:57 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 13:53:57 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: for what is worth, don't look into interp_resop for now. It's hard enough to Message-ID: <20120107125357.D6A4B82BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51111:41fea5d51df6 Date: 2012-01-07 14:53 +0200 http://bitbucket.org/pypy/pypy/changeset/41fea5d51df6/ Log: for what is worth, don't look into interp_resop for now. It's hard enough to get this working. diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -69,12 +69,16 @@ modname == 'thread.os_thread'): return True if '.' 
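The measurement patch above times each `processblock` call and accumulates the total per graph with `dict.get`. The accounting idiom on its own, with hypothetical names (the annotator keys on flow graphs and later dumps the totals to a file, as the diff shows):

```python
import time

counter = {}

def timed(key, func, *args):
    # Accumulate wall-clock time spent in func under key, mirroring
    # counter[graph] = counter.get(graph, 0) + tk - t0 in the patch.
    t0 = time.time()
    result = func(*args)
    counter[key] = counter.get(key, 0.0) + (time.time() - t0)
    return result

timed("square", lambda x: x * x, 3)
timed("square", lambda x: x * x, 4)
print(sorted(counter))  # ['square']
```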
in modname: - modname, _ = modname.split('.', 1) + modname, rest = modname.split('.', 1) + else: + rest = '' if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions', 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', 'mmap', 'marshal']: + if modname == 'pypyjit' and 'interp_resop' in rest: + return False return True return False diff --git a/pypy/module/pypyjit/test/test_policy.py b/pypy/module/pypyjit/test/test_policy.py --- a/pypy/module/pypyjit/test/test_policy.py +++ b/pypy/module/pypyjit/test/test_policy.py @@ -52,6 +52,7 @@ for modname in 'pypyjit', 'signal', 'micronumpy', 'math', 'imp': assert pypypolicy.look_inside_pypy_module(modname) assert pypypolicy.look_inside_pypy_module(modname + '.foo') + assert not pypypolicy.look_inside_pypy_module('pypyjit.interp_resop') def test_see_jit_module(): assert pypypolicy.look_inside_pypy_module('pypyjit.interp_jit') From noreply at buildbot.pypy.org Sat Jan 7 14:07:43 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 14:07:43 +0100 (CET) Subject: [pypy-commit] pypy translation-time-measurments: style Message-ID: <20120107130743.6E59682BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: translation-time-measurments Changeset: r51112:cd2fd844ad80 Date: 2012-01-07 15:07 +0200 http://bitbucket.org/pypy/pypy/changeset/cd2fd844ad80/ Log: style diff --git a/pypy/annotation/annrpython.py b/pypy/annotation/annrpython.py --- a/pypy/annotation/annrpython.py +++ b/pypy/annotation/annrpython.py @@ -258,7 +258,7 @@ import os f = open('/tmp/annotator%d' % os.getpid(), 'w') for k, v in self.counter.iteritems(): - f.write('%s: %d' % (k, v)) + f.write('%s: %f\n' % (k, v)) f.close() break # finished # make sure that the return variables of all graphs is annotated From noreply at buildbot.pypy.org Sat Jan 7 14:12:11 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 
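The policy change above splits `modname` once on `'.'` and keeps the remainder, so that `pypyjit.interp_resop` can be excluded while the rest of the `pypyjit` module stays visible to the JIT. The selection logic reads roughly like this stand-alone sketch (the allowed-module list is abbreviated here):

```python
ALLOWED = {'pypyjit', 'signal', 'micronumpy', 'math', 'imp'}

def look_inside_pypy_module(modname):
    # Split off the submodule part, as the patched policy does.
    if '.' in modname:
        modname, rest = modname.split('.', 1)
    else:
        rest = ''
    if modname in ALLOWED:
        # Special case from the diff: hide the jit-hook glue itself.
        if modname == 'pypyjit' and 'interp_resop' in rest:
            return False
        return True
    return False

print(look_inside_pypy_module('pypyjit.interp_jit'))    # True
print(look_inside_pypy_module('pypyjit.interp_resop'))  # False
```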
2012 14:12:11 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: I believe this is an actual problem Message-ID: <20120107131211.6423B82BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51113:5ca484324ef7 Date: 2012-01-07 15:11 +0200 http://bitbucket.org/pypy/pypy/changeset/5ca484324ef7/ Log: I believe this is an actual problem diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -492,7 +492,7 @@ except ValueError: debug_print("Bridge out of guard", descr_number, "was already compiled!") - return + raise self.setup(original_loop_token) if log: From noreply at buildbot.pypy.org Sat Jan 7 14:51:52 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 7 Jan 2012 14:51:52 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: Add support for PyInterpreterState.next. Message-ID: <20120107135152.B602682BFF@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r51114:416009084c6f Date: 2012-01-07 12:10 +0100 http://bitbucket.org/pypy/pypy/changeset/416009084c6f/ Log: cpyext: Add support for PyInterpreterState.next. Always NULL, since there is only one interpreter... 
diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -5,7 +5,7 @@ struct _is; /* Forward */ typedef struct _is { - int _foo; + struct _is *next; } PyInterpreterState; typedef struct _ts { diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -2,7 +2,10 @@ cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) from pypy.rpython.lltypesystem import rffi, lltype -PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ())) +PyInterpreterStateStruct = lltype.ForwardReference() +PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) +cpython_struct( + "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) @@ -54,7 +57,8 @@ class InterpreterState(object): def __init__(self, space): - self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True) + self.interpreter_state = lltype.malloc( + PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) def new_thread_state(self): capsule = ThreadStateCapsule() diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -37,6 +37,7 @@ def test_thread_state_interp(self, space, api): ts = api.PyThreadState_Get() assert ts.c_interp == api.PyInterpreterState_Head() + assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO) def test_basic_threadstate_dance(self, space, api): # Let extension modules call these functions, From noreply at buildbot.pypy.org Sat Jan 7 14:51:53 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 7 Jan 2012 14:51:53 +0100 
(CET) Subject: [pypy-commit] pypy default: Fix checkmodule.py for almost all modules Message-ID: <20120107135153.E0B7682BFF@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r51115:e8a394c064fd Date: 2012-01-07 13:08 +0100 http://bitbucket.org/pypy/pypy/changeset/e8a394c064fd/ Log: Fix checkmodule.py for almost all modules diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1591,12 +1591,15 @@ 'ArithmeticError', 'AssertionError', 'AttributeError', + 'BaseException', + 'DeprecationWarning', 'EOFError', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', + 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', @@ -1617,7 +1620,10 @@ 'TabError', 'TypeError', 'UnboundLocalError', + 'UnicodeDecodeError', 'UnicodeError', + 'UnicodeEncodeError', + 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', 'UnicodeEncodeError', diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py --- a/pypy/module/sys/__init__.py +++ b/pypy/module/sys/__init__.py @@ -42,7 +42,7 @@ 'argv' : 'state.get(space).w_argv', 'py3kwarning' : 'space.w_False', 'warnoptions' : 'state.get(space).w_warnoptions', - 'builtin_module_names' : 'state.w_None', + 'builtin_module_names' : 'space.w_None', 'pypy_getudir' : 'state.pypy_getudir', # not translated 'pypy_initial_path' : 'state.pypy_initial_path', diff --git a/pypy/objspace/fake/checkmodule.py b/pypy/objspace/fake/checkmodule.py --- a/pypy/objspace/fake/checkmodule.py +++ b/pypy/objspace/fake/checkmodule.py @@ -1,8 +1,10 @@ from pypy.objspace.fake.objspace import FakeObjSpace, W_Root +from pypy.config.pypyoption import get_pypy_config def checkmodule(modname): - space = FakeObjSpace() + config = get_pypy_config(translating=True) + space = FakeObjSpace(config) mod = __import__('pypy.module.%s' % modname, None, None, ['__doc__']) # force computation and record 
what we wrap module = mod.Module(space, W_Root()) diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -93,9 +93,9 @@ class FakeObjSpace(ObjSpace): - def __init__(self): + def __init__(self, config=None): self._seen_extras = [] - ObjSpace.__init__(self) + ObjSpace.__init__(self, config=config) def float_w(self, w_obj): is_root(w_obj) @@ -135,6 +135,9 @@ def newfloat(self, x): return w_some_obj() + def newcomplex(self, x, y): + return w_some_obj() + def marshal_w(self, w_obj): "NOT_RPYTHON" raise NotImplementedError @@ -215,6 +218,10 @@ expected_length = 3 return [w_some_obj()] * expected_length + def unpackcomplex(self, w_complex): + is_root(w_complex) + return 1.1, 2.2 + def allocate_instance(self, cls, w_subtype): is_root(w_subtype) return instantiate(cls) @@ -232,6 +239,11 @@ def exec_(self, *args, **kwds): pass + def createexecutioncontext(self): + ec = ObjSpace.createexecutioncontext(self) + ec._py_repr = None + return ec + # ---------- def translates(self, func=None, argtypes=None, **kwds): @@ -267,18 +279,21 @@ ObjSpace.ExceptionTable + ['int', 'str', 'float', 'long', 'tuple', 'list', 'dict', 'unicode', 'complex', 'slice', 'bool', - 'type', 'basestring']): + 'type', 'basestring', 'object']): setattr(FakeObjSpace, 'w_' + name, w_some_obj()) # for (name, _, arity, _) in ObjSpace.MethodTable: args = ['w_%d' % i for i in range(arity)] + params = args[:] d = {'is_root': is_root, 'w_some_obj': w_some_obj} + if name in ('get',): + params[-1] += '=None' exec compile2("""\ def meth(self, %s): %s return w_some_obj() - """ % (', '.join(args), + """ % (', '.join(params), '; '.join(['is_root(%s)' % arg for arg in args]))) in d meth = func_with_new_name(d['meth'], name) setattr(FakeObjSpace, name, meth) @@ -301,9 +316,12 @@ pass FakeObjSpace.default_compiler = FakeCompiler() -class FakeModule(object): +class FakeModule(Wrappable): + def __init__(self): + self.w_dict = 
w_some_obj() def get(self, name): name + "xx" # check that it's a string return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/fake/test/test_objspace.py b/pypy/objspace/fake/test/test_objspace.py --- a/pypy/objspace/fake/test/test_objspace.py +++ b/pypy/objspace/fake/test/test_objspace.py @@ -40,7 +40,7 @@ def test_constants(self): space = self.space space.translates(lambda: (space.w_None, space.w_True, space.w_False, - space.w_int, space.w_str, + space.w_int, space.w_str, space.w_object, space.w_TypeError)) def test_wrap(self): @@ -72,3 +72,9 @@ def test_newlist(self): self.space.newlist([W_Root(), W_Root()]) + + def test_default_values(self): + # the __get__ method takes either 2 or 3 arguments + space = self.space + space.translates(lambda: (space.get(W_Root(), W_Root()), + space.get(W_Root(), W_Root(), W_Root()))) From noreply at buildbot.pypy.org Sat Jan 7 14:51:55 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 7 Jan 2012 14:51:55 +0100 (CET) Subject: [pypy-commit] pypy default: cpyext: export Py_ByteArrayType and Py_MemoryViewType Message-ID: <20120107135155.118FF82BFF@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r51116:4bace20eef15 Date: 2012-01-07 13:15 +0100 http://bitbucket.org/pypy/pypy/changeset/4bace20eef15/ Log: cpyext: export Py_ByteArrayType and Py_MemoryViewType diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable from pypy.rlib.objectmodel import specialize @@ -387,6 +388,8 @@ 
"Float": "space.w_float", "Long": "space.w_long", "Complex": "space.w_complex", + "ByteArray": "space.w_bytearray", + "MemoryView": "space.gettypeobject(W_MemoryView.typedef)", "BaseObject": "space.w_object", 'None': 'space.type(space.w_None)', 'NotImplemented': 'space.type(space.w_NotImplemented)', From noreply at buildbot.pypy.org Sat Jan 7 14:51:56 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 7 Jan 2012 14:51:56 +0100 (CET) Subject: [pypy-commit] pypy default: Add stubs for PyObject_GetBuffer: pypy does not yet implement Message-ID: <20120107135156.3BC1382BFF@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r51117:75b3dbc7d326 Date: 2012-01-07 13:41 +0100 http://bitbucket.org/pypy/pypy/changeset/75b3dbc7d326/ Log: Add stubs for PyObject_GetBuffer: pypy does not yet implement the new buffer interface. diff --git a/pypy/module/cpyext/buffer.py b/pypy/module/cpyext/buffer.py --- a/pypy/module/cpyext/buffer.py +++ b/pypy/module/cpyext/buffer.py @@ -1,6 +1,36 @@ +from pypy.interpreter.error import OperationError from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, Py_buffer) +from pypy.module.cpyext.pyobject import PyObject + + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyObject_CheckBuffer(space, w_obj): + """Return 1 if obj supports the buffer interface otherwise 0.""" + return 0 # the bf_getbuffer field is never filled by cpyext + + at cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real], + rffi.INT_real, error=-1) +def PyObject_GetBuffer(space, w_obj, view, flags): + """Export obj into a Py_buffer, view. These arguments must + never be NULL. The flags argument is a bit field indicating what + kind of buffer the caller is prepared to deal with and therefore what + kind of buffer the exporter is allowed to return. 
The buffer interface + allows for complicated memory sharing possibilities, but some caller may + not be able to handle all the complexity but may want to see if the + exporter will let them take a simpler view to its memory. + + Some exporters may not be able to share memory in every possible way and + may need to raise errors to signal to some consumers that something is + just not possible. These errors should be a BufferError unless + there is another error that is actually causing the problem. The + exporter can use flags information to simplify how much of the + Py_buffer structure is filled in with non-default values and/or + raise an error if the object can't support a simpler view of its memory. + + 0 is returned on success and -1 on error.""" + raise OperationError(space.w_TypeError, space.wrap( + 'PyPy does not yet implement the new buffer interface')) @cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real, error=CANNOT_FAIL) def PyBuffer_IsContiguous(space, view, fortran): diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -123,10 +123,6 @@ typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *); typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **); -typedef int (*objobjproc)(PyObject *, PyObject *); -typedef int (*visitproc)(PyObject *, void *); -typedef int (*traverseproc)(PyObject *, visitproc, void *); - /* Py3k buffer interface */ typedef struct bufferinfo { void *buf; @@ -153,6 +149,41 @@ typedef int (*getbufferproc)(PyObject *, Py_buffer *, int); typedef void (*releasebufferproc)(PyObject *, Py_buffer *); + /* Flags for getting buffers */ +#define PyBUF_SIMPLE 0 +#define PyBUF_WRITABLE 0x0001 +/* we used to include an E, backwards compatible alias */ +#define PyBUF_WRITEABLE PyBUF_WRITABLE +#define PyBUF_FORMAT 0x0004 +#define PyBUF_ND 0x0008 +#define PyBUF_STRIDES (0x0010 | PyBUF_ND) +#define 
PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) +#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) +#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) +#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE) +#define PyBUF_CONTIG_RO (PyBUF_ND) + +#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE) +#define PyBUF_STRIDED_RO (PyBUF_STRIDES) + +#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT) + +#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT) + + +#define PyBUF_READ 0x100 +#define PyBUF_WRITE 0x200 +#define PyBUF_SHADOW 0x400 +/* end Py3k buffer interface */ + +typedef int (*objobjproc)(PyObject *, PyObject *); +typedef int (*visitproc)(PyObject *, void *); +typedef int (*traverseproc)(PyObject *, visitproc, void *); + typedef struct { /* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all arguments are guaranteed to be of the object's type (modulo diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -34,141 +34,6 @@ @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyObject_CheckBuffer(space, obj): - """Return 1 if obj supports the buffer interface otherwise 0.""" - raise NotImplementedError - - at cpython_api([PyObject, Py_buffer, rffi.INT_real], rffi.INT_real, error=-1) -def PyObject_GetBuffer(space, obj, view, flags): - """Export obj into a Py_buffer, view. These arguments must - never be NULL. The flags argument is a bit field indicating what - kind of buffer the caller is prepared to deal with and therefore what - kind of buffer the exporter is allowed to return. 
The buffer interface - allows for complicated memory sharing possibilities, but some caller may - not be able to handle all the complexity but may want to see if the - exporter will let them take a simpler view to its memory. - - Some exporters may not be able to share memory in every possible way and - may need to raise errors to signal to some consumers that something is - just not possible. These errors should be a BufferError unless - there is another error that is actually causing the problem. The - exporter can use flags information to simplify how much of the - Py_buffer structure is filled in with non-default values and/or - raise an error if the object can't support a simpler view of its memory. - - 0 is returned on success and -1 on error. - - The following table gives possible values to the flags arguments. - - Flag - - Description - - PyBUF_SIMPLE - - This is the default flag state. The returned - buffer may or may not have writable memory. The - format of the data will be assumed to be unsigned - bytes. This is a "stand-alone" flag constant. It - never needs to be '|'d to the others. The exporter - will raise an error if it cannot provide such a - contiguous buffer of bytes. - - PyBUF_WRITABLE - - The returned buffer must be writable. If it is - not writable, then raise an error. - - PyBUF_STRIDES - - This implies PyBUF_ND. The returned - buffer must provide strides information (i.e. the - strides cannot be NULL). This would be used when - the consumer can handle strided, discontiguous - arrays. Handling strides automatically assumes - you can handle shape. The exporter can raise an - error if a strided representation of the data is - not possible (i.e. without the suboffsets). - - PyBUF_ND - - The returned buffer must provide shape - information. The memory will be assumed C-style - contiguous (last dimension varies the - fastest). The exporter may raise an error if it - cannot provide this kind of contiguous buffer. 
If - this is not given then shape will be NULL. - - PyBUF_C_CONTIGUOUS - PyBUF_F_CONTIGUOUS - PyBUF_ANY_CONTIGUOUS - - These flags indicate that the contiguity returned - buffer must be respectively, C-contiguous (last - dimension varies the fastest), Fortran contiguous - (first dimension varies the fastest) or either - one. All of these flags imply - PyBUF_STRIDES and guarantee that the - strides buffer info structure will be filled in - correctly. - - PyBUF_INDIRECT - - This flag indicates the returned buffer must have - suboffsets information (which can be NULL if no - suboffsets are needed). This can be used when - the consumer can handle indirect array - referencing implied by these suboffsets. This - implies PyBUF_STRIDES. - - PyBUF_FORMAT - - The returned buffer must have true format - information if this flag is provided. This would - be used when the consumer is going to be checking - for what 'kind' of data is actually stored. An - exporter should always be able to provide this - information if requested. If format is not - explicitly requested then the format must be - returned as NULL (which means 'B', or - unsigned bytes) - - PyBUF_STRIDED - - This is equivalent to (PyBUF_STRIDES | - PyBUF_WRITABLE). - - PyBUF_STRIDED_RO - - This is equivalent to (PyBUF_STRIDES). - - PyBUF_RECORDS - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_RECORDS_RO - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT). - - PyBUF_FULL - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_FULL_RO - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT). - - PyBUF_CONTIG - - This is equivalent to (PyBUF_ND | - PyBUF_WRITABLE). 
- - PyBUF_CONTIG_RO - - This is equivalent to (PyBUF_ND).""" raise NotImplementedError @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) From noreply at buildbot.pypy.org Sat Jan 7 14:58:01 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 14:58:01 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: oops, a test and a fix Message-ID: <20120107135801.797CC82BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51118:6249a65d583e Date: 2012-01-07 15:57 +0200 http://bitbucket.org/pypy/pypy/changeset/6249a65d583e/ Log: oops, a test and a fix diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -79,7 +79,7 @@ def wrap_oplist(space, logops, operations, ops_offset): return [WrappedOp(jit_hooks._cast_to_gcref(op), - ops_offset[op], + ops_offset.get(op, 0), logops.repr_of_resop(op)) for op in operations] @unwrap_spec(num=int, offset=int, repr=str) diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -56,7 +56,8 @@ greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] offset = {} for i, op in enumerate(oplist): - offset[op] = i + if i != 1: + offset[op] = i def interp_on_compile(): pypy_portal.on_compile(pypyjitdriver, logger, JitCellToken(), From noreply at buildbot.pypy.org Sat Jan 7 18:01:14 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Sat, 7 Jan 2012 18:01:14 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: my dates Message-ID: <20120107170114.832F782BFF@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: extradoc Changeset: r4001:218b3a396820 Date: 2012-01-07 18:01 +0100 http://bitbucket.org/pypy/extradoc/changeset/218b3a396820/ Log: my dates diff --git a/sprintinfo/leysin-winter-2012/people.txt 
b/sprintinfo/leysin-winter-2012/people.txt --- a/sprintinfo/leysin-winter-2012/people.txt +++ b/sprintinfo/leysin-winter-2012/people.txt @@ -12,6 +12,7 @@ ==================== ============== ======================= Armin Rigo private David Schneider 17/22 ermina +Antonio Cuni 16/22 ermina ==================== ============== ======================= From noreply at buildbot.pypy.org Sat Jan 7 18:15:27 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Sat, 7 Jan 2012 18:15:27 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add a note about my dates Message-ID: <20120107171527.852A382BFF@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: extradoc Changeset: r4002:4f9fc086064f Date: 2012-01-07 18:15 +0100 http://bitbucket.org/pypy/extradoc/changeset/4f9fc086064f/ Log: add a note about my dates diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt --- a/sprintinfo/leysin-winter-2012/people.txt +++ b/sprintinfo/leysin-winter-2012/people.txt @@ -12,7 +12,7 @@ ==================== ============== ======================= Armin Rigo private David Schneider 17/22 ermina -Antonio Cuni 16/22 ermina +Antonio Cuni 16/22 ermina, might arrive on the 15th ==================== ============== ======================= From noreply at buildbot.pypy.org Sat Jan 7 18:28:24 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sat, 7 Jan 2012 18:28:24 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Add myself Message-ID: <20120107172824.47D8782BFF@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: extradoc Changeset: r4003:09006ec5a359 Date: 2012-01-07 18:26 +0100 http://bitbucket.org/pypy/extradoc/changeset/09006ec5a359/ Log: Add myself diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt --- a/sprintinfo/leysin-winter-2012/people.txt +++ b/sprintinfo/leysin-winter-2012/people.txt @@ -12,6 +12,7 @@ ==================== ============== ======================= 
Armin Rigo private David Schneider 17/22 ermina +Romain Guillebert 15/22 ermina ==================== ============== ======================= From noreply at buildbot.pypy.org Sat Jan 7 18:28:25 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sat, 7 Jan 2012 18:28:25 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Merge heads Message-ID: <20120107172825.6489682BFF@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: extradoc Changeset: r4004:9601f2597df0 Date: 2012-01-07 18:27 +0100 http://bitbucket.org/pypy/extradoc/changeset/9601f2597df0/ Log: Merge heads diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt --- a/sprintinfo/leysin-winter-2012/people.txt +++ b/sprintinfo/leysin-winter-2012/people.txt @@ -12,6 +12,7 @@ ==================== ============== ======================= Armin Rigo private David Schneider 17/22 ermina +Antonio Cuni 16/22 ermina, might arrive on the 15th Romain Guillebert 15/22 ermina ==================== ============== ======================= From noreply at buildbot.pypy.org Sat Jan 7 19:08:39 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 7 Jan 2012 19:08:39 +0100 (CET) Subject: [pypy-commit] pypy concurrent-marksweep: Random progress. Message-ID: <20120107180839.34B0C82BFF@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: concurrent-marksweep Changeset: r51119:38b03b6eef08 Date: 2012-01-07 14:46 +0100 http://bitbucket.org/pypy/pypy/changeset/38b03b6eef08/ Log: Random progress. diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py --- a/pypy/rpython/memory/gc/concurrentgen.py +++ b/pypy/rpython/memory/gc/concurrentgen.py @@ -16,22 +16,13 @@ # # A concurrent generational mark&sweep GC. # -# This uses a separate thread to run the minor collections in parallel. -# See concurrentgen.txt for some details. 
-# -# Based on observations of the timing of collections with "minimark" -# (on translate.py): about 15% of the time in minor collections -# (including 2% in walk_roots), and about 7% in major collections. -# So out of a total of 22% this should parallelize 20%. -# +# This uses a separate thread to run the collections in parallel. # This is an entirely non-moving collector, with a generational write # barrier adapted to the concurrent marking done by the collector thread. +# See concurrentgen.txt for some details. # WORD = LONG_BIT // 8 -WORD_POWER_2 = {32: 2, 64: 3}[LONG_BIT] -assert 1 << WORD_POWER_2 == WORD -FLOAT_ALMOST_MAXINT = float(sys.maxint) * 0.9999 # Objects start with an integer 'tid', which is decomposed as follows. @@ -49,8 +40,8 @@ class ConcurrentGenGC(GCBase): _alloc_flavor_ = "raw" - inline_simple_malloc = True - inline_simple_malloc_varsize = True + #inline_simple_malloc = True + #inline_simple_malloc_varsize = True needs_deletion_barrier = True needs_weakref_read_barrier = True prebuilt_gc_objects_are_static_roots = False @@ -59,7 +50,7 @@ HDRPTR = lltype.Ptr(lltype.ForwardReference()) HDR = lltype.Struct('header', ('tid', lltype.Signed), - ('next', HDRPTR)) # <-- kill me later + ('next', HDRPTR)) # <-- kill me later XXX HDRPTR.TO.become(HDR) HDRSIZE = llmemory.sizeof(HDR) NULL = lltype.nullptr(HDR) @@ -85,7 +76,7 @@ **kwds): GCBase.__init__(self, config, **kwds) self.read_from_env = read_from_env - self.nursery_size = nursery_size + self.minimal_nursery_size = nursery_size # self.main_thread_ident = ll_thread.get_ident() # non-transl. debug only # @@ -106,6 +97,7 @@ def _nursery_full(additional_size): # a hack to reduce the code size in _account_for_nursery(): # avoids the 'self' argument. 
+ assert self.nursery_size_still_available < 0 self.nursery_full(additional_size) _nursery_full._dont_inline_ = True self._nursery_full = _nursery_full @@ -156,7 +148,7 @@ # self.collector.setup() # - self.set_minimal_nursery_size(self.nursery_size) + self.set_minimal_nursery_size(self.minimal_nursery_size) if self.read_from_env: # newsize = env.read_from_env('PYPY_GC_NURSERY') @@ -176,6 +168,21 @@ self.old_objects_size = r_uint(0) # approx size of 'old objs' box self.nursery_size_still_available = intmask(self.nursery_size) + def update_total_memory_size(self): + # compute the new value for 'total_memory_size': it should be + # twice old_objects_size, but never less than 2/3rd of the old value, + # and at least 4 * minimal_nursery_size. + absolute_maximum = r_uint(-1) + if self.old_objects_size < absolute_maximum // 2: + tms = self.old_objects_size * 2 + else: + tms = absolute_maximum + tms = max(tms, self.total_memory_size // 3 * 2) + tms = max(tms, 4 * self.minimal_nursery_size) + self.total_memory_size = tms + debug_print("total memory size:", tms) + + def _teardown(self): "Stop the collector thread after tests have run." self.wait_for_the_end_of_collection() @@ -261,6 +268,8 @@ def _account_for_nursery(self, additional_size): self.nursery_size_still_available -= additional_size + debug_print("malloc:", additional_size, + "still_available:", self.nursery_size_still_available) if self.nursery_size_still_available < 0: self._nursery_full(additional_size) _account_for_nursery._always_inline_ = True @@ -379,19 +388,14 @@ def nursery_full(self, additional_size): # See concurrentgen.txt. 
# - assert self.nursery_size_still_available < 0 - # # Handle big allocations specially if additional_size > intmask(self.total_memory_size >> 4): xxxxxxxxxxxx self.handle_big_allocation(additional_size) return # - waiting_for_major_collection = self.collector.major_collection_phase != 0 - # - if (self.collector.running == 0 or - self.stop_collection(wait=waiting_for_major_collection)): - # The previous collection finished. + if self.collector.running == 0 or self.stop_collection(): + # The previous collection finished; no collection is running now. # # Expand the nursery if we can, up to 25% of total_memory_size. # In some cases, the limiting factor is that the nursery size @@ -400,15 +404,17 @@ expand_to = self.total_memory_size >> 2 expand_to = min(expand_to, self.total_memory_size - self.old_objects_size) - self.nursery_size_still_available += intmask(expand_to - - self.nursery_size) - self.nursery_size = expand_to - # - # If 'nursery_size_still_available' has been increased to a - # nonnegative number, then we are done: we can just continue - # filling the nursery. - if self.nursery_size_still_available >= 0: - return + if expand_to > self.nursery_size: + debug_print("expanded nursery size:", expand_to) + self.nursery_size_still_available += intmask(expand_to - + self.nursery_size) + self.nursery_size = expand_to + # + # If 'nursery_size_still_available' has been increased to a + # nonnegative number, then we are done: we can just continue + # filling the nursery. + if self.nursery_size_still_available >= 0: + return # # Else, we trigger the next minor collection now. self._start_minor_collection() @@ -423,46 +429,45 @@ newsize = min(newsize, self.total_memory_size >> 2) self.nursery_size = newsize self.nursery_size_still_available = intmask(newsize) + debug_print("nursery size:", self.nursery_size) + debug_print("total memory size:", self.total_memory_size) return - yyy - - else: - # The previous collection is not finished yet. 
- # At this point we want a full collection to occur. - debug_start("gc-major") - # - # We have to first wait for the previous minor collection to finish: - self.stop_collection(wait=True) - # - # Start the major collection. - self._start_major_collection() - # - debug_stop("gc-major") + # The previous collection is likely not finished yet. + # At this point we want a full collection to occur. + debug_start("gc-major") + # + # We have to first wait for the previous minor collection to finish: + self.wait_for_the_end_of_collection() + # + # Start the major collection. + self._start_major_collection() + # + debug_stop("gc-major") def wait_for_the_end_of_collection(self): - """In the mutator thread: wait for the minor collection currently - running (if any) to finish, and synchronize the two threads.""" if self.collector.running != 0: self.stop_collection(wait=True) - # - # We must *not* run execute_finalizers_ll() here, because it - # can start the next collection, and then this function returns - # with a collection in progress, which it should not. Be careful - # to call execute_finalizers_ll() in the caller somewhere. 
- ll_assert(self.collector.running == 0, - "collector thread not paused?") - def stop_collection(self, wait): - if wait: - debug_start("gc-stop") - self.acquire(self.finished_lock) + def stop_collection(self, wait=False): + ll_assert(self.collector.running != 0, "stop_collection: running == 0") + # + major_collection = (self.collector.major_collection_phase == 2) + debug_start("gc-stop") + try: + debug_print("wait:", int(wait)) + if major_collection: + debug_print("ending a major collection") + if wait or major_collection: + self.acquire(self.finished_lock) + else: + if not self.try_acquire(self.finished_lock): + return False + finally: + debug_print("old objects size:", self.old_objects_size) debug_stop("gc-stop") - else: - if not self.try_acquire(self.finished_lock): - return False self.collector.running = 0 #debug_print("collector.running = 0") # @@ -475,6 +480,11 @@ if self.DEBUG: self.debug_check_lists() # + if major_collection: + self.collector.major_collection_phase = 0 + # Update the total memory usage to 2 times the old objects' size + self.update_total_memory_size() + # return True @@ -502,8 +512,9 @@ def collect(self, gen=4): debug_start("gc-forced-collect") - self.trigger_next_collection(force_major_collection=True) self.wait_for_the_end_of_collection() + self._start_major_collection() + self.nursery_full(0) self.execute_finalizers_ll() debug_stop("gc-forced-collect") return @@ -527,29 +538,6 @@ gen>=4: Do a full synchronous major collection. 
""" - debug_start("gc-forced-collect") - debug_print("collect, gen =", gen) - if gen >= 1 or self.collector.running <= 0: - self.trigger_next_collection(gen >= 3) - if gen == 2 or gen >= 4: - self.wait_for_the_end_of_collection() - self.execute_finalizers_ll() - debug_stop("gc-forced-collect") - - def trigger_next_collection(self, force_major_collection): - """In the mutator thread: triggers the next minor or major collection.""" - # - # In case the previous collection is not over yet, wait for it - self.wait_for_the_end_of_collection() - # - # Choose between a minor and a major collection - if force_major_collection: - self._start_major_collection() - else: - self._start_minor_collection() - # - self.execute_finalizers_ll() - def _start_minor_collection(self, major_collection_phase=0): # @@ -633,10 +621,12 @@ self.collector.delayed_aging_objects = self.collector.aging_objects self.collector.aging_objects = self.old_objects self.old_objects = self.NULL - #self.collect_weakref_pages = self.weakref_pages #self.collect_finalizer_pages = self.finalizer_pages # + # Now there are no more old objects + self.old_objects_size = r_uint(0) + # # Start again the collector thread self._start_collection_common(major_collection_phase=2) # @@ -652,7 +642,6 @@ self.collector.running = 1 #debug_print("collector.running = 1") self.release(self.ready_to_start_lock) - self.nursery_size_still_available = self.nursery_size def _add_stack_root(self, root): # NB. it's ok to edit 'gray_objects' from the mutator thread here, @@ -943,8 +932,10 @@ # its size ends up being accounted here or not --- but it will # be at the following minor collection, because the object is # young again. So, careful about overflows. 
-            ll_assert(surviving_size <= self.gc.total_memory_size,
-                      "surviving_size too large")
+            if surviving_size > self.gc.total_memory_size:
+                debug_print("surviving_size too large!",
+                            surviving_size, self.gc.total_memory_size)
+                ll_assert(False, "surviving_size too large")
             limit = self.gc.total_memory_size - surviving_size
             if self.gc.old_objects_size <= limit:
                 self.gc.old_objects_size += surviving_size

From noreply at buildbot.pypy.org Sat Jan 7 19:08:40 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 7 Jan 2012 19:08:40 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: Tweak tweak tweak.
Message-ID: <20120107180840.5DA9882C00@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51120:73514c0443a5
Date: 2012-01-07 19:01 +0100
http://bitbucket.org/pypy/pypy/changeset/73514c0443a5/

Log:	Tweak tweak tweak.

diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py
--- a/pypy/rpython/memory/gc/concurrentgen.py
+++ b/pypy/rpython/memory/gc/concurrentgen.py
@@ -63,20 +63,19 @@
         # Automatically adjust the remaining parameters from the environment.
         "read_from_env": True,

-        # The default size of the nursery: use 6 MB by default.
-        # Environment variable: PYPY_GC_NURSERY
-        "nursery_size": 6*1024*1024,
+        # The minimal RAM usage: use 24 MB by default.
+        # Environment variable: PYPY_GC_MIN
+        "min_heap_size": 6*1024*1024,
     }

     def __init__(self, config,
                  read_from_env=False,
-                 nursery_size=32*WORD,
-                 fill_factor=2.0, # xxx kill
+                 min_heap_size=128*WORD,
                  **kwds):
         GCBase.__init__(self, config, **kwds)
         self.read_from_env = read_from_env
-        self.minimal_nursery_size = nursery_size
+        self.min_heap_size = r_uint(min_heap_size)
         #
         self.main_thread_ident = ll_thread.get_ident() # non-transl. debug only
         #
@@ -93,14 +92,6 @@
         # is a collection running and the mutator tries to change an object
         # that was not scanned yet.
self._init_writebarrier_logic() - # - def _nursery_full(additional_size): - # a hack to reduce the code size in _account_for_nursery(): - # avoids the 'self' argument. - assert self.nursery_size_still_available < 0 - self.nursery_full(additional_size) - _nursery_full._dont_inline_ = True - self._nursery_full = _nursery_full def _initialize(self): # Initialize the GC. In normal translated program, this function @@ -118,7 +109,9 @@ # contains the objects that the write barrier re-marked as young # (so they are "old young objects"). self.new_young_objects = self.NULL + self.new_young_objects_size = r_uint(0) self.old_objects = self.NULL + self.old_objects_size = r_uint(0) # total size of self.old_objects # # See concurrentgen.txt for more information about these fields. self.current_young_marker = MARK_BYTE_1 @@ -148,37 +141,48 @@ # self.collector.setup() # - self.set_minimal_nursery_size(self.minimal_nursery_size) + self.set_min_heap_size(self.min_heap_size) if self.read_from_env: # - newsize = env.read_from_env('PYPY_GC_NURSERY') + newsize = env.read_from_env('PYPY_GC_MIN') if newsize > 0: - self.set_minimal_nursery_size(newsize) + self.set_min_heap_size(r_uint(newsize)) # - debug_print("minimal nursery size:", self.minimal_nursery_size) + debug_print("minimal heap size:", self.min_heap_size) debug_stop("gc-startup") - def set_minimal_nursery_size(self, newsize): - # See concurrentgen.txt. At the start of the process, 'newsize' is - # a quarter of the total memory size. - newsize = min(newsize, (sys.maxint - 65535) // 4) - self.minimal_nursery_size = r_uint(newsize) - self.total_memory_size = r_uint(4 * newsize) # total size - self.nursery_size = r_uint(newsize) # size of the '->new...' box - self.old_objects_size = r_uint(0) # approx size of 'old objs' box - self.nursery_size_still_available = intmask(self.nursery_size) + def set_min_heap_size(self, newsize): + # See concurrentgen.txt. 
+ self.min_heap_size = newsize + self.total_memory_size = newsize # total heap size + self.nursery_limit = newsize >> 2 # total size of the '->new...' box + # + # The in-use portion of the '->new...' box contains the objs + # that are in the 'new_young_objects' list. The total of their + # size is 'new_young_objects_size'. + # + # The 'old objects' box contains the objs that are in the + # 'old_objects' list. The total of their size is 'old_objects_size'. + # + # The write barrier occasionally resets the mark byte of objects + # to 'young'. This is done without adding or removing objects + # to the above lists, and consequently without correcting the + # '*_size' variables. Because of that, the 'old_objects' lists + # may contain a few objects that are not marked 'old' any more, + # and conversely, prebuilt objects may end up marked 'old' but + # are never added to the 'old_objects' list. def update_total_memory_size(self): # compute the new value for 'total_memory_size': it should be # twice old_objects_size, but never less than 2/3rd of the old value, - # and at least 4 * minimal_nursery_size. 
+ # and at least 'min_heap_size' absolute_maximum = r_uint(-1) if self.old_objects_size < absolute_maximum // 2: tms = self.old_objects_size * 2 else: tms = absolute_maximum tms = max(tms, self.total_memory_size // 3 * 2) - tms = max(tms, 4 * self.minimal_nursery_size) + tms = max(tms, self.min_heap_size) self.total_memory_size = tms debug_print("total memory size:", tms) @@ -228,7 +232,6 @@ size_gc_header = self.gcheaderbuilder.size_gc_header totalsize = size_gc_header + size rawtotalsize = raw_malloc_usage(totalsize) - self._account_for_nursery(rawtotalsize) adr = llarena.arena_malloc(rawtotalsize, 2) if adr == llmemory.NULL: raise MemoryError @@ -237,7 +240,11 @@ hdr = self.header(obj) hdr.tid = self.combine(typeid, self.current_young_marker, 0) hdr.next = self.new_young_objects + debug_print("malloc:", rawtotalsize, obj) self.new_young_objects = hdr + self.new_young_objects_size += r_uint(rawtotalsize) + if self.new_young_objects_size > self.nursery_limit: + self.nursery_overflowed(obj) return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF) def malloc_varsize_clear(self, typeid, length, size, itemsize, @@ -253,7 +260,6 @@ raise MemoryError # rawtotalsize = raw_malloc_usage(totalsize) - self._account_for_nursery(rawtotalsize) adr = llarena.arena_malloc(rawtotalsize, 2) if adr == llmemory.NULL: raise MemoryError @@ -263,17 +269,13 @@ hdr = self.header(obj) hdr.tid = self.combine(typeid, self.current_young_marker, 0) hdr.next = self.new_young_objects + debug_print("malloc:", rawtotalsize, obj) self.new_young_objects = hdr + self.new_young_objects_size += r_uint(rawtotalsize) + if self.new_young_objects_size > self.nursery_limit: + self.nursery_overflowed(obj) return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF) - def _account_for_nursery(self, additional_size): - self.nursery_size_still_available -= additional_size - debug_print("malloc:", additional_size, - "still_available:", self.nursery_size_still_available) - if self.nursery_size_still_available < 0: - 
self._nursery_full(additional_size) - _account_for_nursery._always_inline_ = True - # ---------- # Other functions in the GC API @@ -322,7 +324,7 @@ cym = self.current_young_marker com = self.current_old_marker mark = self.get_mark(obj) - #debug_print("deletion_barrier:", mark, obj) + debug_print("deletion_barrier:", mark, obj) # if mark == com: # most common case, make it fast # @@ -385,16 +387,12 @@ # ---------- - def nursery_full(self, additional_size): - # See concurrentgen.txt. + def nursery_overflowed(self, newest_obj): + # See concurrentgen.txt. Called after the nursery overflowed. # - # Handle big allocations specially - if additional_size > intmask(self.total_memory_size >> 4): - xxxxxxxxxxxx - self.handle_big_allocation(additional_size) - return + debug_start("gc-nursery-full") # - if self.collector.running == 0 or self.stop_collection(): + if self.previous_collection_finished(): # The previous collection finished; no collection is running now. # # Expand the nursery if we can, up to 25% of total_memory_size. @@ -404,46 +402,51 @@ expand_to = self.total_memory_size >> 2 expand_to = min(expand_to, self.total_memory_size - self.old_objects_size) - if expand_to > self.nursery_size: - debug_print("expanded nursery size:", expand_to) - self.nursery_size_still_available += intmask(expand_to - - self.nursery_size) - self.nursery_size = expand_to + if expand_to > self.nursery_limit: + debug_print("expanding nursery limit to:", expand_to) + self.nursery_limit = expand_to # - # If 'nursery_size_still_available' has been increased to a - # nonnegative number, then we are done: we can just continue - # filling the nursery. - if self.nursery_size_still_available >= 0: + # If 'new_young_objects_size' is not greater than this + # expanded 'nursery_size', then we are done: we can just + # continue filling the nursery. + if self.new_young_objects_size <= self.nursery_limit: + debug_stop("gc-nursery-full") return # # Else, we trigger the next minor collection now. 
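The decision just sketched in the diff — grow the nursery up to 25% of `total_memory_size` (but never past what the old objects leave free), and only trigger a minor collection if even the expanded limit is exceeded — can be condensed into a few lines of plain Python. The function below is a hypothetical illustration of that logic, not code from the changeset:

```python
# Model of the nursery-expansion check in nursery_overflowed():
# the nursery limit may grow to 25% of the total memory size, capped
# by the space the old objects leave free; a minor collection starts
# only if the new-objects total still exceeds the expanded limit.
def should_start_minor_collection(new_young_size, nursery_limit,
                                  old_objects_size, total_memory_size):
    expand_to = total_memory_size >> 2                        # 25% of total
    expand_to = min(expand_to, total_memory_size - old_objects_size)
    nursery_limit = max(nursery_limit, expand_to)             # only ever grows
    return new_young_size > nursery_limit

print(should_start_minor_collection(30, 20, 0, 200))    # False: limit grows to 50
print(should_start_minor_collection(60, 20, 160, 200))  # True: only 40 left free
```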
+ self.flagged_objects.append(newest_obj) self._start_minor_collection() # - # Now there is no new object left. Reset the nursery size to - # be at most 25% of total_memory_size, and initially no more than - # 3/4*total_memory_size - old_objects_size. If that value is not - # positive, then we immediately go into major collection mode. + # Now there is no new object left. + ll_assert(self.new_young_objects_size == r_uint(0), + "new object left behind?") + # + # Reset the nursery size to be at most 25% of + # total_memory_size, and initially no more than + # 3/4*total_memory_size - old_objects_size. If that value + # is not positive, then we immediately go into major + # collection mode. three_quarters = (self.total_memory_size >> 2) * 3 if self.old_objects_size < three_quarters: newsize = three_quarters - self.old_objects_size newsize = min(newsize, self.total_memory_size >> 2) - self.nursery_size = newsize - self.nursery_size_still_available = intmask(newsize) - debug_print("nursery size:", self.nursery_size) + self.nursery_limit = newsize debug_print("total memory size:", self.total_memory_size) + debug_print("initial nursery limit:", self.nursery_limit) + debug_stop("gc-nursery-full") return # The previous collection is likely not finished yet. # At this point we want a full collection to occur. - debug_start("gc-major") + debug_print("starting a major collection") # # We have to first wait for the previous minor collection to finish: self.wait_for_the_end_of_collection() # # Start the major collection. 
- self._start_major_collection() + self._start_major_collection(newest_obj) # - debug_stop("gc-major") + debug_stop("gc-nursery-full") def wait_for_the_end_of_collection(self): @@ -451,23 +454,27 @@ self.stop_collection(wait=True) - def stop_collection(self, wait=False): + def previous_collection_finished(self): + return self.collector.running == 0 or self.stop_collection(wait=False) + + + def stop_collection(self, wait): ll_assert(self.collector.running != 0, "stop_collection: running == 0") # + debug_start("gc-stop") major_collection = (self.collector.major_collection_phase == 2) - debug_start("gc-stop") - try: - debug_print("wait:", int(wait)) - if major_collection: - debug_print("ending a major collection") - if wait or major_collection: - self.acquire(self.finished_lock) - else: - if not self.try_acquire(self.finished_lock): - return False - finally: - debug_print("old objects size:", self.old_objects_size) - debug_stop("gc-stop") + if major_collection or wait: + debug_print("waiting for the end of collection, major =", + int(major_collection)) + self.acquire(self.finished_lock) + else: + if not self.try_acquire(self.finished_lock): + debug_print("minor collection not finished!") + debug_stop("gc-stop") + return False + # + debug_print("old objects size:", self.old_objects_size) + debug_stop("gc-stop") self.collector.running = 0 #debug_print("collector.running = 0") # @@ -513,8 +520,8 @@ def collect(self, gen=4): debug_start("gc-forced-collect") self.wait_for_the_end_of_collection() - self._start_major_collection() - self.nursery_full(0) + self._start_major_collection(llmemory.NULL) + self.wait_for_the_end_of_collection() self.execute_finalizers_ll() debug_stop("gc-forced-collect") return @@ -541,7 +548,7 @@ def _start_minor_collection(self, major_collection_phase=0): # - debug_start("gc-start") + debug_start("gc-minor-start") # # Scan the stack roots and the refs in non-GC objects self.root_walker.walk_roots( @@ -575,19 +582,22 @@ # Copy a few 'mutator' 
fields to 'collector' fields self.collector.aging_objects = self.new_young_objects self.new_young_objects = self.NULL + self.new_young_objects_size = r_uint(0) #self.collect_weakref_pages = self.weakref_pages #self.collect_finalizer_pages = self.finalizer_pages # # Start the collector thread self._start_collection_common(major_collection_phase) # - debug_stop("gc-start") + debug_stop("gc-minor-start") - def _start_major_collection(self): + def _start_major_collection(self, newest_obj): # debug_start("gc-major-collection") # # Force a minor collection's marking step to occur now + if newest_obj: + self.flagged_objects.append(newest_obj) self._start_minor_collection(major_collection_phase=1) # # Wait for it to finish @@ -600,6 +610,10 @@ ll_assert(self.new_young_objects == self.NULL, "new_young_obejcts should be empty here") # + # Keep this newest_obj alive + if newest_obj: + self.collector.gray_objects.append(newest_obj) + # # Scan again the stack roots and the refs in non-GC objects self.root_walker.walk_roots( ConcurrentGenGC._add_stack_root, # stack roots @@ -621,12 +635,10 @@ self.collector.delayed_aging_objects = self.collector.aging_objects self.collector.aging_objects = self.old_objects self.old_objects = self.NULL + self.old_objects_size = r_uint(0) #self.collect_weakref_pages = self.weakref_pages #self.collect_finalizer_pages = self.finalizer_pages # - # Now there are no more old objects - self.old_objects_size = r_uint(0) - # # Start again the collector thread self._start_collection_common(major_collection_phase=2) # @@ -647,7 +659,8 @@ # NB. 
it's ok to edit 'gray_objects' from the mutator thread here, # because the collector thread is not running yet obj = root.address[0] - #debug_print("_add_stack_root", obj) + debug_print("_add_stack_root", obj) + assert 'DEAD' not in repr(obj) self.get_mark(obj) self.collector.gray_objects.append(obj) @@ -672,19 +685,28 @@ def debug_check_lists(self): # just check that they are correct, non-infinite linked lists - self.debug_check_list(self.new_young_objects) - self.debug_check_list(self.old_objects) + self.debug_check_list(self.new_young_objects, + self.new_young_objects_size) + self.debug_check_list(self.old_objects, self.old_objects_size) - def debug_check_list(self, list): + def debug_check_list(self, list, totalsize): previous = self.NULL count = 0 + size = r_uint(0) + size_gc_header = self.gcheaderbuilder.size_gc_header while list != self.NULL: - # prevent constant-folding, and detects loops + obj = llmemory.cast_ptr_to_adr(list) + size_gc_header + size1 = size_gc_header + self.get_size(obj) + print "debug:", llmemory.raw_malloc_usage(size1) + size += llmemory.raw_malloc_usage(size1) + # detect loops ll_assert(list != previous, "loop!") count += 1 if count & (count-1) == 0: # only on powers of two, to previous = list # detect loops of any size list = list.next + print "\tTOTAL:", size + ll_assert(size == totalsize, "bogus total size in linked list") return count def acquire(self, lock): @@ -890,14 +912,12 @@ def collector_mark(self): - surviving_size = r_uint(0) - # while True: # # Do marking. The following function call is interrupted # if the mutator's write barrier adds new objects to # 'extra_objects_to_mark'. - surviving_size += self._collect_mark() + self._collect_mark() # # Move the objects from 'extra_objects_to_mark' to # 'gray_objects'. This requires the mutex lock. @@ -923,31 +943,11 @@ # Else release mutex_lock and try again. 
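The `debug_check_list` helper above detects cycles with O(1) extra memory by re-sampling a `previous` pointer only when the count reaches a power of two, a variant of Brent's cycle-detection scheme. A standalone sketch of the same trick, using plain dicts in place of GC header structs (illustrative only, and without the size accounting):

```python
# Loop detection as in debug_check_list(): compare each node against a
# saved 'previous' node, and advance 'previous' only at powers of two,
# so a cycle of any length is eventually caught.
def check_list(head):
    previous = None
    count = 0
    node = head
    while node is not None:
        assert node is not previous, "loop!"
        count += 1
        if count & (count - 1) == 0:    # count is a power of two
            previous = node
        node = node["next"]
    return count

a = {"next": None}
b = {"next": a}
print(check_list(b))        # 2: a well-formed two-element list
a["next"] = b               # now the list is circular
try:
    check_list(b)
except AssertionError as e:
    print(e)                # loop!
```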
self.release(self.mutex_lock) # - # When we sweep during minor collections, we add the size of - # the surviving now-old objects to the following field. Note - # that the write barrier may make objects young again, without - # decreasing the value here. During the following minor - # collection this variable will be increased *again*. When the - # write barrier triggers on an aging object, it is random whether - # its size ends up being accounted here or not --- but it will - # be at the following minor collection, because the object is - # young again. So, careful about overflows. - if surviving_size > self.gc.total_memory_size: - debug_print("surviving_size too large!", - surviving_size, self.gc.total_memory_size) - ll_assert(False, "surviving_size too large") - limit = self.gc.total_memory_size - surviving_size - if self.gc.old_objects_size <= limit: - self.gc.old_objects_size += surviving_size - else: - self.gc.old_objects_size = self.gc.total_memory_size - # self.running = 2 #debug_print("collection_running = 2") self.release(self.mutex_lock) def _collect_mark(self): - surviving_size = r_uint(0) extra_objects_to_mark = self.gc.extra_objects_to_mark cam = self.current_aging_marker com = self.current_old_marker @@ -956,9 +956,6 @@ if self.get_mark(obj) != cam: continue # - # Record the object's size - surviving_size += raw_malloc_usage(self.gc.get_size(obj)) - # # Scan the content of 'obj'. We use a snapshot-at-the- # beginning order, meaning that we want to scan the state # of the object as it was at the beginning of the current @@ -980,6 +977,7 @@ # we scan a modified content --- and the original content # is never scanned. # + debug_print("mark:", obj) self.gc.trace(obj, self._collect_add_pending, None) self.set_mark(obj, com) # @@ -991,8 +989,6 @@ # reference further objects that will soon be accessed too. 
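The comments above describe the collector's core handshake: mark until the gray list drains, then re-check `extra_objects_to_mark` under the mutex, because the mutator's write barrier may have refilled it in the meantime — and those freshly modified objects are scanned first since they likely reference objects about to be accessed. A heavily simplified single-threaded sketch of that loop (plain lists stand in for the GC's AddressStacks; not the RPython code):

```python
# Sketch of the collector's "mark, then re-check under the mutex" loop:
# marking restarts as long as the write barrier keeps adding objects
# to extra_objects_to_mark.
import threading

mutex_lock = threading.Lock()
extra_objects_to_mark = [1, 2, 3]   # pretend the write barrier filled this
gray_objects = []

def mark_step():
    # drain whatever is gray; the real code traces each object here
    gray_objects.clear()

while True:
    mark_step()
    with mutex_lock:
        if not extra_objects_to_mark:
            break               # marking really is finished
        # move the extra objects back onto the gray list and loop again
        gray_objects.extend(extra_objects_to_mark)
        extra_objects_to_mark.clear()

print(gray_objects, extra_objects_to_mark)  # [] []
```

In the real GC the re-check and the break both happen while holding `mutex_lock`, which is exactly what makes the "try again" path in the comment above race-free.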
if extra_objects_to_mark.non_empty(): break - # - return surviving_size def _collect_add_pending(self, root, ignored): obj = root.address[0] @@ -1004,10 +1000,12 @@ def collector_sweep(self): if self.major_collection_phase != 1: # no sweeping during phase 1 + self.update_size = self.gc.old_objects_size lst = self._collect_do_sweep(self.aging_objects, self.current_aging_marker, self.gc.old_objects) self.gc.old_objects = lst + self.gc.old_objects_size = self.update_size # self.running = -1 #debug_print("collection_running = -1") @@ -1017,6 +1015,7 @@ # Finish the delayed sweep from the previous minor collection. # The objects left unmarked were left with 'cam', which is # now 'com' because we switched their values. + self.update_size = r_uint(0) lst = self._collect_do_sweep(self.delayed_aging_objects, self.current_old_marker, self.aging_objects) @@ -1024,6 +1023,7 @@ self.delayed_aging_objects = self.NULL def _collect_do_sweep(self, hdr, still_not_marked, linked_list): + size_gc_header = self.gc.gcheaderbuilder.size_gc_header # while hdr != self.NULL: nexthdr = hdr.next @@ -1031,6 +1031,7 @@ if mark == still_not_marked: # the object is still not marked. Free it. blockadr = llmemory.cast_ptr_to_adr(hdr) + debug_print("free:", blockadr + size_gc_header) blockadr = llarena.getfakearenaaddress(blockadr) llarena.arena_free(blockadr) # @@ -1043,6 +1044,11 @@ hdr.next = linked_list linked_list = hdr # + # count its size + obj = llmemory.cast_ptr_to_adr(hdr) + size_gc_header + size1 = size_gc_header + self.gc.get_size(obj) + self.update_size += llmemory.raw_malloc_usage(size1) + # hdr = nexthdr # return linked_list From noreply at buildbot.pypy.org Sat Jan 7 19:08:41 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 7 Jan 2012 19:08:41 +0100 (CET) Subject: [pypy-commit] pypy concurrent-marksweep: fix. 
now test_direct passes Message-ID: <20120107180841.8135582CAA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: concurrent-marksweep Changeset: r51121:2a11fde484ba Date: 2012-01-07 19:08 +0100 http://bitbucket.org/pypy/pypy/changeset/2a11fde484ba/ Log: fix. now test_direct passes diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py --- a/pypy/rpython/memory/gc/concurrentgen.py +++ b/pypy/rpython/memory/gc/concurrentgen.py @@ -269,6 +269,8 @@ hdr = self.header(obj) hdr.tid = self.combine(typeid, self.current_young_marker, 0) hdr.next = self.new_young_objects + totalsize = llarena.round_up_for_allocation(totalsize) + rawtotalsize = raw_malloc_usage(totalsize) debug_print("malloc:", rawtotalsize, obj) self.new_young_objects = hdr self.new_young_objects_size += r_uint(rawtotalsize) From noreply at buildbot.pypy.org Sat Jan 7 21:01:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 21:01:47 +0100 (CET) Subject: [pypy-commit] pypy default: merge import-numpy, rename numpypy to _numpypy Message-ID: <20120107200147.D22E282BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51122:cc8f110fc52d Date: 2012-01-07 22:00 +0200 http://bitbucket.org/pypy/pypy/changeset/cc8f110fc52d/ Log: merge import-numpy, rename numpypy to _numpypy diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -9,7 +9,7 @@ appleveldefs = {} class Module(MixedModule): - applevel_name = 'numpypy' + applevel_name = '_numpypy' submodules = { 'pypy': PyPyModule diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpypy +import _numpypy inf = float("inf") @@ -14,29 +14,29 @@ return mean(a) def identity(n, dtype=None): - a = numpypy.zeros((n,n), 
dtype=dtype) + a = _numpypy.zeros((n,n), dtype=dtype) for i in range(n): a[i][i] = 1 return a def mean(a): if not hasattr(a, "mean"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.mean() def sum(a): if not hasattr(a, "sum"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.sum() def min(a): if not hasattr(a, "min"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.min() def max(a): if not hasattr(a, "max"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.max() def arange(start, stop=None, step=1, dtype=None): @@ -47,9 +47,9 @@ stop = start start = 0 if dtype is None: - test = numpypy.array([start, stop, step, 0]) + test = _numpypy.array([start, stop, step, 0]) dtype = test.dtype - arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) + arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) i = start for j in range(arr.size): arr[j] = i @@ -90,5 +90,5 @@ you should assign the new shape to the shape attribute of the array ''' if not hasattr(a, 'reshape'): - a = numpypy.array(a) + a = _numpypy.array(a) return a.reshape(shape) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpypy import dtype + from _numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpypy import dtype + from _numpypy import dtype assert repr(dtype) == "" d = dtype('?') 
@@ -36,7 +36,7 @@ assert str(d) == "bool" def test_bool_array(self): - from numpypy import array, False_, True_ + from _numpypy import array, False_, True_ a = array([0, 1, 2, 2.5], dtype='?') assert a[0] is False_ @@ -44,7 +44,7 @@ assert a[i] is True_ def test_copy_array_with_dtype(self): - from numpypy import array, False_, True_, int64 + from _numpypy import array, False_, True_, int64 a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit @@ -58,35 +58,35 @@ assert b[0] is False_ def test_zeros_bool(self): - from numpypy import zeros, False_ + from _numpypy import zeros, False_ a = zeros(10, dtype=bool) for i in range(10): assert a[i] is False_ def test_ones_bool(self): - from numpypy import ones, True_ + from _numpypy import ones, True_ a = ones(10, dtype=bool) for i in range(10): assert a[i] is True_ def test_zeros_long(self): - from numpypy import zeros, int64 + from _numpypy import zeros, int64 a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 0 def test_ones_long(self): - from numpypy import ones, int64 + from _numpypy import ones, int64 a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 1 def test_overflow(self): - from numpypy import array, dtype + from _numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), 
('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,19 +156,19 @@ assert b[i] == i * 2 def test_shape(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpypy import dtype + from _numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): - import numpypy as numpy + import _numpypy as numpy raises(TypeError, numpy.generic, 0) raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) @@ -181,7 +181,7 @@ raises(TypeError, numpy.inexact, 0) def test_bool(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object] assert numpy.bool_(3) is numpy.True_ @@ -196,7 +196,7 @@ assert numpy.bool_("False") is numpy.True_ def test_int8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -218,7 +218,7 @@ assert numpy.int8('128') == -128 def test_uint8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -241,7 +241,7 @@ assert numpy.uint8('256') == 0 def test_int16(self): - import numpypy as numpy + import _numpypy as numpy x = 
numpy.int16(3) assert x == 3 @@ -251,7 +251,7 @@ assert numpy.int16('32768') == -32768 def test_uint16(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint16(65535) == 65535 assert numpy.uint16(65536) == 0 @@ -260,7 +260,7 @@ def test_int32(self): import sys - import numpypy as numpy + import _numpypy as numpy x = numpy.int32(23) assert x == 23 @@ -275,7 +275,7 @@ def test_uint32(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint32(10) == 10 @@ -286,14 +286,14 @@ assert numpy.uint32('4294967296') == 0 def test_int_(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int_ is numpy.dtype(int).type assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] def test_int64(self): import sys - import numpypy as numpy + import _numpypy as numpy if sys.maxint == 2 ** 63 -1: assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] @@ -315,7 +315,7 @@ def test_uint64(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -330,7 +330,7 @@ raises(OverflowError, numpy.uint64(18446744073709551616)) def test_float32(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, numpy.generic, object] @@ -339,7 +339,7 @@ raises(ValueError, numpy.float32, '23.2df') def test_float64(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] @@ -352,7 +352,7 @@ raises(ValueError, numpy.float64, '23.2df') def test_subclass_type(self): - import numpypy as numpy + import _numpypy as numpy class X(numpy.float64): def m(self): diff --git 
a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -3,33 +3,33 @@ class AppTestNumPyModule(BaseNumpyAppTest): def test_mean(self): - from numpypy import array, mean + from _numpypy import array, mean assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 def test_average(self): - from numpypy import array, average + from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 def test_sum(self): - from numpypy import array, sum + from _numpypy import array, sum assert sum(range(10)) == 45 assert sum(array(range(10))) == 45 def test_min(self): - from numpypy import array, min + from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 def test_max(self): - from numpypy import array, max + from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 def test_constants(self): import math - from numpypy import inf, e, pi + from _numpypy import inf, e, pi assert type(inf) is float assert inf == float("inf") assert e == math.e diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -161,7 +161,7 @@ class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): - from numpypy import ndarray, array, dtype + from _numpypy import ndarray, array, dtype assert type(ndarray) is type assert type(array) is not type @@ -176,12 +176,12 @@ assert a.dtype is dtype(int) def test_type(self): - from numpypy import array + from _numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) def test_ndim(self): - from numpypy import array + from _numpypy import array x = array(0.2) assert x.ndim == 0 x = array([1, 2]) @@ -190,12 +190,12 @@ assert x.ndim == 2 x = 
array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) assert x.ndim == 3 - # numpy actually raises an AttributeError, but numpypy raises an + # numpy actually raises an AttributeError, but _numpypy raises an # TypeError raises(TypeError, 'x.ndim = 3') def test_init(self): - from numpypy import zeros + from _numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. assert a[10] == 0.0 @@ -204,7 +204,7 @@ assert a[13] == 5.3 def test_size(self): - from numpypy import array + from _numpypy import array assert array(3).size == 1 a = array([1, 2, 3]) assert a.size == 3 @@ -215,13 +215,13 @@ Test that empty() works. """ - from numpypy import empty + from _numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpypy import ones + from _numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -230,7 +230,7 @@ assert a[2] == 4 def test_copy(self): - from numpypy import arange, array + from _numpypy import arange, array a = arange(5) b = a.copy() for i in xrange(5): @@ -247,12 +247,12 @@ assert (c == b).all() def test_iterator_init(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a[3] == 3 def test_getitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -261,7 +261,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -271,7 +271,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -279,7 +279,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -290,7 +290,7 @@ assert a[i] == i def 
test_setslice_array(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -301,7 +301,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -320,7 +320,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = [0., 1.] a[1:4:2] = b @@ -328,14 +328,14 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. def test_scalar(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(3) raises(IndexError, "a[0]") raises(IndexError, "a[0] = 5") @@ -344,13 +344,13 @@ assert a.dtype is dtype(int) def test_len(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -359,7 +359,7 @@ assert c.shape == (3,) def test_set_shape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array([]) a.shape = [] a = array(range(12)) @@ -379,7 +379,7 @@ a.shape = (1,) def test_reshape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(12)) exc = raises(ValueError, "b = a.reshape((3, 10))") assert str(exc.value) == "total size of new array must be unchanged" @@ -392,7 +392,7 @@ a.shape = (12, 2) def test_slice_reshape(self): - from numpypy import zeros, arange + from _numpypy import zeros, arange a = zeros((4, 2, 3)) b = a[::2, :, :] b.shape = (2, 6) @@ -428,13 +428,13 @@ raises(ValueError, arange(10).reshape, (5, -1, -1)) def test_reshape_varargs(self): - from numpypy 
import arange + from _numpypy import arange z = arange(96).reshape(12, -1) y = z.reshape(4, 3, 8) assert y.shape == (4, 3, 8) def test_add(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -447,7 +447,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([i for i in reversed(range(5))]) c = a + b @@ -455,20 +455,20 @@ assert c[i] == 4 def test_add_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpypy import array + from _numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpypy import array, ndarray + from _numpypy import array, ndarray a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -477,14 +477,14 @@ assert c[i] == 4 def test_subtract(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -492,34 +492,34 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_scalar_subtract(self): - from numpypy import int32 + from _numpypy import int32 assert int32(2) - 1 == 1 assert 1 - int32(2) == -1 def test_mul(self): - import numpypy + import _numpypy - a = numpypy.array(range(5)) + a = _numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = numpypy.array(range(5), dtype=bool) + a = _numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpypy.dtype(bool) - assert b[0] is numpypy.False_ + assert b.dtype is 
_numpypy.dtype(bool) + assert b[0] is _numpypy.False_ for i in range(1, 5): - assert b[i] is numpypy.True_ + assert b[i] is _numpypy.True_ def test_mul_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -527,7 +527,7 @@ def test_div(self): from math import isnan - from numpypy import array, dtype, inf + from _numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -559,7 +559,7 @@ assert c[2] == -inf def test_div_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -567,14 +567,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -584,7 +584,7 @@ assert (a ** 2 == a * a).all() def test_pow_other(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -592,14 +592,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) b = a % a for i in range(5): @@ -612,7 +612,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -620,14 +620,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., 
-5.]) b = +a for i in range(5): @@ -638,7 +638,7 @@ assert a[i] == i def test_neg(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = -a for i in range(5): @@ -649,7 +649,7 @@ assert a[i] == -i def test_abs(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = abs(a) for i in range(5): @@ -660,7 +660,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -674,7 +674,7 @@ assert c[1] == 4 def test_getslice(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -688,7 +688,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpypy import array + from _numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -696,7 +696,7 @@ assert s[i] == a[2 * i + 1] def test_slice_update(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -706,7 +706,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:2] b = array([10, 11]) @@ -720,13 +720,13 @@ assert d[1] == 12 def test_mean(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 def test_sum(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.sum() == 10.0 assert a[:4].sum() == 6.0 @@ -735,8 +735,8 @@ assert a.sum() == 5 def test_identity(self): - from numpypy import identity, array - from numpypy import int32, float64, dtype + from _numpypy import identity, array + from _numpypy import int32, float64, dtype a = identity(0) assert len(a) == 0 assert a.dtype == dtype('float64') @@ -755,32 +755,32 @@ assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all() 
def test_prod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a + a).max() == 11.4 def test_min(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) r = a.argmax() assert r == 2 @@ -801,14 +801,14 @@ assert a.argmax() == 2 def test_argmin(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -817,7 +817,7 @@ assert b.all() == True def test_any(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -826,7 +826,7 @@ assert c.any() == False def test_dot(self): - from numpypy import array, dot + from _numpypy import array, dot a = array(range(5)) assert a.dot(a) == 30.0 @@ -836,14 +836,14 @@ assert (dot(5, [1, 2, 3]) == [5, 10, 15]).all() def test_dot_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpypy import array, dtype, float64, int8, bool_ + from _numpypy import array, dtype, float64, int8, bool_ assert array([True]).dtype is dtype(bool) assert array([True, 
False]).dtype is dtype(bool) @@ -860,7 +860,7 @@ def test_comparison(self): import operator - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5)) b = array(range(5), float) @@ -879,7 +879,7 @@ assert c[i] == func(b[i], 3) def test_nonzero(self): - from numpypy import array + from _numpypy import array a = array([1, 2]) raises(ValueError, bool, a) raises(ValueError, bool, a == a) @@ -889,7 +889,7 @@ assert not bool(array([0])) def test_slice_assignment(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[::-1] = a assert (a == [0, 1, 2, 1, 0]).all() @@ -899,8 +899,8 @@ assert (a == [8, 6, 4, 2, 0]).all() def test_debug_repr(self): - from numpypy import zeros, sin - from numpypy.pypy import debug_repr + from _numpypy import zeros, sin + from _numpypy.pypy import debug_repr a = zeros(1) assert debug_repr(a) == 'Array' assert debug_repr(a + a) == 'Call2(add, Array, Array)' @@ -914,8 +914,8 @@ assert debug_repr(b) == 'Array' def test_remove_invalidates(self): - from numpypy import array - from numpypy.pypy import remove_invalidates + from _numpypy import array + from _numpypy.pypy import remove_invalidates a = array([1, 2, 3]) b = a + a remove_invalidates(a) @@ -923,7 +923,7 @@ assert b[0] == 28 def test_virtual_views(self): - from numpypy import arange + from _numpypy import arange a = arange(15) c = (a + a) d = c[::2] @@ -941,7 +941,7 @@ assert b[1] == 2 def test_tolist_scalar(self): - from numpypy import int32, bool_ + from _numpypy import int32, bool_ x = int32(23) assert x.tolist() == 23 assert type(x.tolist()) is int @@ -949,13 +949,13 @@ assert y.tolist() is True def test_tolist_zerodim(self): - from numpypy import array + from _numpypy import array x = array(3) assert x.tolist() == 3 assert type(x.tolist()) is int def test_tolist_singledim(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.tolist() == [0, 1, 2, 3, 4] assert type(a.tolist()[0]) is int 
@@ -963,17 +963,17 @@ assert b.tolist() == [0.2, 0.4, 0.6] def test_tolist_multidim(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4]]) assert a.tolist() == [[1, 2], [3, 4]] def test_tolist_view(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4]]) assert (a + a).tolist() == [[2, 4], [6, 8]] def test_tolist_slice(self): - from numpypy import array + from _numpypy import array a = array([[17.1, 27.2], [40.3, 50.3]]) assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] @@ -981,23 +981,23 @@ class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): - import numpypy - a = numpypy.zeros((2, 2)) + import _numpypy + a = _numpypy.zeros((2, 2)) assert len(a) == 2 def test_shape(self): - import numpypy - assert numpypy.zeros(1).shape == (1,) - assert numpypy.zeros((2, 2)).shape == (2, 2) - assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) - assert numpypy.array([[1], [2], [3]]).shape == (3, 1) - assert len(numpypy.zeros((3, 1, 2))) == 3 - raises(TypeError, len, numpypy.zeros(())) - raises(ValueError, numpypy.array, [[1, 2], 3]) + import _numpypy + assert _numpypy.zeros(1).shape == (1,) + assert _numpypy.zeros((2, 2)).shape == (2, 2) + assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) + assert _numpypy.array([[1], [2], [3]]).shape == (3, 1) + assert len(_numpypy.zeros((3, 1, 2))) == 3 + raises(TypeError, len, _numpypy.zeros(())) + raises(ValueError, _numpypy.array, [[1, 2], 3]) def test_getsetitem(self): - import numpypy - a = numpypy.zeros((2, 3, 1)) + import _numpypy + a = _numpypy.zeros((2, 3, 1)) raises(IndexError, a.__getitem__, (2, 0, 0)) raises(IndexError, a.__getitem__, (0, 3, 0)) raises(IndexError, a.__getitem__, (0, 0, 1)) @@ -1008,8 +1008,8 @@ assert a[1, -1, 0] == 3 def test_slices(self): - import numpypy - a = numpypy.zeros((4, 3, 2)) + import _numpypy + a = _numpypy.zeros((4, 3, 2)) raises(IndexError, a.__getitem__, (4,)) raises(IndexError, a.__getitem__, 
(3, 3)) raises(IndexError, a.__getitem__, (slice(None), 3)) @@ -1042,51 +1042,51 @@ assert a[1][2][1] == 15 def test_init_2(self): - import numpypy - raises(ValueError, numpypy.array, [[1], 2]) - raises(ValueError, numpypy.array, [[1, 2], [3]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]]) - a = numpypy.array([[1, 2], [4, 5]]) + import _numpypy + raises(ValueError, _numpypy.array, [[1], 2]) + raises(ValueError, _numpypy.array, [[1, 2], [3]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]]) + a = _numpypy.array([[1, 2], [4, 5]]) assert a[0, 1] == 2 assert a[0][1] == 2 - a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) + a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) assert (a[0, 1] == [3, 4]).all() def test_setitem_slice(self): - import numpypy - a = numpypy.zeros((3, 4)) + import _numpypy + a = _numpypy.zeros((3, 4)) a[1] = [1, 2, 3, 4] assert a[1, 2] == 3 raises(TypeError, a[1].__setitem__, [1, 2, 3]) - a = numpypy.array([[1, 2], [3, 4]]) + a = _numpypy.array([[1, 2], [3, 4]]) assert (a == [[1, 2], [3, 4]]).all() - a[1] = numpypy.array([5, 6]) + a[1] = _numpypy.array([5, 6]) assert (a == [[1, 2], [5, 6]]).all() - a[:, 1] = numpypy.array([8, 10]) + a[:, 1] = _numpypy.array([8, 10]) assert (a == [[1, 8], [5, 10]]).all() - a[0, :: -1] = numpypy.array([11, 12]) + a[0, :: -1] = _numpypy.array([11, 12]) assert (a == [[12, 11], [5, 10]]).all() def test_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert ((a + a) == \ array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all() def test_getitem_add(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) assert (a + a)[1, 1] == 8 def test_ufunc_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([[1, 2], 
[3, 4]]) b = negative(a + a) assert (b == [[-2, -4], [-6, -8]]).all() def test_getitem_3(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]) b = a[::2] @@ -1097,12 +1097,12 @@ assert c[1][1] == 12 def test_multidim_ones(self): - from numpypy import ones + from _numpypy import ones a = ones((1, 2, 3)) assert a[0, 1, 2] == 1.0 def test_multidim_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((3, 3)) b = ones((3, 3)) a[:, 1:3] = b[:, 1:3] @@ -1113,21 +1113,21 @@ assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all() def test_broadcast_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) b = array([5, 6]) c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]]) assert c.all() def test_broadcast_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((10, 10)) b = ones(10) a[:, :] = b assert a[3, 5] == 1 def test_broadcast_shape_agreement(self): - from numpypy import zeros, array + from _numpypy import zeros, array a = zeros((3, 1, 3)) b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32))) c = ((a + b) == [b, b, b]) @@ -1141,7 +1141,7 @@ assert c.all() def test_broadcast_scalar(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((4, 5), 'd') a[:, 1] = 3 assert a[2, 1] == 3 @@ -1152,14 +1152,14 @@ assert a[3, 2] == 0 def test_broadcast_call2(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((4, 1, 5)) b = ones((4, 3, 5)) b[:] = (a + a) assert (b == zeros((4, 3, 5))).all() def test_broadcast_virtualview(self): - from numpypy import arange, zeros + from _numpypy import arange, zeros a = arange(8).reshape([2, 2, 2]) b = (a + a)[1, 1] c = zeros((2, 2, 2)) @@ -1167,13 +1167,13 @@ assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all() def test_argmax(self): - from numpypy import 
array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert a.argmax() == 5 assert a[:2, ].argmax() == 3 def test_broadcast_wrong_shapes(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((4, 3, 2)) b = zeros((4, 2)) exc = raises(ValueError, lambda: a + b) @@ -1181,7 +1181,7 @@ " together with shapes (4,3,2) (4,2)" def test_reduce(self): - from numpypy import array + from _numpypy import array a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) assert a.sum() == (13 * 12) / 2 b = a[1:, 1::2] @@ -1189,7 +1189,7 @@ assert c.sum() == (6 + 8 + 10 + 12) * 2 def test_transpose(self): - from numpypy import array + from _numpypy import array a = array(((range(3), range(3, 6)), (range(6, 9), range(9, 12)), (range(12, 15), range(15, 18)), @@ -1208,7 +1208,7 @@ assert(b[:, 0] == a[0, :]).all() def test_flatiter(self): - from numpypy import array, flatiter + from _numpypy import array, flatiter a = array([[10, 30], [40, 60]]) f_iter = a.flat assert f_iter.next() == 10 @@ -1223,23 +1223,23 @@ assert s == 140 def test_flatiter_array_conv(self): - from numpypy import array, dot + from _numpypy import array, dot a = array([1, 2, 3]) assert dot(a.flat, a.flat) == 14 def test_flatiter_varray(self): - from numpypy import ones + from _numpypy import ones a = ones((2, 2)) assert list(((a + a).flat)) == [2, 2, 2, 2] def test_slice_copy(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((10, 10)) b = a[0].copy() assert (b == zeros(10)).all() def test_array_interface(self): - from numpypy import array + from _numpypy import array a = array([1, 2, 3]) i = a.__array_interface__ assert isinstance(i['data'][0], int) @@ -1261,7 +1261,7 @@ def test_fromstring(self): import sys - from numpypy import fromstring, array, uint8, float32, int32 + from _numpypy import fromstring, array, uint8, float32, int32 a = fromstring(self.data) for i in range(4): @@ -1325,7 +1325,7 @@ assert (u == [1, 0]).all() def test_fromstring_types(self): 
- from numpypy import (fromstring, int8, int16, int32, int64, uint8, + from _numpypy import (fromstring, int8, int16, int32, int64, uint8, uint16, uint32, float32, float64) a = fromstring('\xFF', dtype=int8) @@ -1350,7 +1350,7 @@ assert j[0] == 12 def test_fromstring_invalid(self): - from numpypy import fromstring, uint16, uint8, int32 + from _numpypy import fromstring, uint16, uint8, int32 #default dtype is 64-bit float, so 3 bytes should fail raises(ValueError, fromstring, "\x01\x02\x03") #3 bytes is not modulo 2 bytes (int16) @@ -1361,7 +1361,7 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): - from numpypy import array, zeros + from _numpypy import array, zeros int_size = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" @@ -1389,7 +1389,7 @@ assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import arange, zeros + from _numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1414,7 +1414,7 @@ [500, 1001]])''' def test_repr_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert repr(b) == "array([1.0, 3.0])" @@ -1429,7 +1429,7 @@ assert repr(b) == "array([], shape=(0, 5), dtype=int16)" def test_str(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -1462,7 +1462,7 @@ assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]' def test_str_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" @@ -1478,7 +1478,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_arange(self): - from numpypy import arange, array, dtype + from _numpypy import arange, array, dtype a = arange(3) assert (a == 
[0, 1, 2]).all() assert a.dtype is dtype(int) @@ -1500,7 +1500,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_app_reshape(self): - from numpypy import arange, array, dtype, reshape + from _numpypy import arange, array, dtype, reshape a = arange(12) b = reshape(a, (3, 4)) assert b.shape == (3, 4) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpypy import add, ufunc + from _numpypy import add, ufunc assert isinstance(add, ufunc) assert repr(add) == "" assert repr(ufunc) == "" def test_ufunc_attrs(self): - from numpypy import add, multiply, sin + from _numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,7 +22,7 @@ assert sin.nin == 1 def test_wrong_arguments(self): - from numpypy import add, sin + from _numpypy import add, sin raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) @@ -30,14 +30,14 @@ raises(ValueError, sin) def test_single_item(self): - from numpypy import negative, sign, minimum + from _numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpypy import array, ndarray, negative, minimum + from _numpypy import array, ndarray, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpypy import array, absolute + from _numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from numpypy 
import array, add + from _numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpypy import array, divide + from _numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -114,7 +114,7 @@ assert (divide(array([-10]), array([2])) == array([-5])).all() def test_fabs(self): - from numpypy import array, fabs + from _numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -123,7 +123,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpypy import array, minimum + from _numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -132,7 +132,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpypy import array, maximum + from _numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -145,7 +145,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpypy import array, multiply + from _numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -154,7 +154,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpypy import array, sign, dtype + from _numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -173,7 +173,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpypy import array, reciprocal + from _numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -182,7 +182,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpypy import array, subtract + from _numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -191,7 +191,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpypy import array, floor + from _numpypy import array, floor reference = [-2.0, 
-1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -200,7 +200,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpypy import array, copysign + from _numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -216,7 +216,7 @@ def test_exp(self): import math - from numpypy import array, exp + from _numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -230,7 +230,7 @@ def test_sin(self): import math - from numpypy import array, sin + from _numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -243,7 +243,7 @@ def test_cos(self): import math - from numpypy import array, cos + from _numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -252,7 +252,7 @@ def test_tan(self): import math - from numpypy import array, tan + from _numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -262,7 +262,7 @@ def test_arcsin(self): import math - from numpypy import array, arcsin + from _numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -276,7 +276,7 @@ def test_arccos(self): import math - from numpypy import array, arccos + from _numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -291,7 +291,7 @@ def test_arctan(self): import math - from numpypy import array, arctan + from _numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) @@ -304,7 +304,7 @@ def test_arcsinh(self): import math - from numpypy import arcsinh, inf + from _numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -312,7 +312,7 @@ def test_arctanh(self): import math - from numpypy import arctanh + from _numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == 
arctanh(v) @@ -323,7 +323,7 @@ def test_sqrt(self): import math - from numpypy import sqrt + from _numpypy import sqrt nan, inf = float("nan"), float("inf") data = [1, 2, 3, inf] @@ -333,13 +333,13 @@ assert math.isnan(sqrt(nan)) def test_reduce_errors(self): - from numpypy import sin, add + from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) raises(TypeError, add.reduce, 1) def test_reduce(self): - from numpypy import add, maximum + from _numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 @@ -348,7 +348,7 @@ def test_comparisons(self): import operator - from numpypy import equal, not_equal, less, less_equal, greater, greater_equal + from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), From noreply at buildbot.pypy.org Sat Jan 7 21:01:49 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 21:01:49 +0100 (CET) Subject: [pypy-commit] pypy default: create applevel part here Message-ID: <20120107200149.0891082BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51123:7bb8b38d8563 Date: 2012-01-07 22:01 +0200 http://bitbucket.org/pypy/pypy/changeset/7bb8b38d8563/ Log: create applevel part here diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/__init__.py @@ -0,0 +1,1 @@ +from _numpypy import * From noreply at buildbot.pypy.org Sat Jan 7 21:21:31 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 7 Jan 2012 21:21:31 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: merged default in Message-ID: <20120107202131.9BE6682BFF@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: better-jit-hooks Changeset: r51124:ae6912658a2f Date: 2012-01-07 14:02 -0600 http://bitbucket.org/pypy/pypy/changeset/ae6912658a2f/ Log: merged default in diff --git a/lib_pypy/numpypy/__init__.py 
b/lib_pypy/numpypy/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/__init__.py @@ -0,0 +1,1 @@ +from _numpypy import * diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -175,15 +175,15 @@ RPython ================= -RPython Definition, not ------------------------ +RPython Definition +------------------ -The list and exact details of the "RPython" restrictions are a somewhat -evolving topic. In particular, we have no formal language definition -as we find it more practical to discuss and evolve the set of -restrictions while working on the whole program analysis. If you -have any questions about the restrictions below then please feel -free to mail us at pypy-dev at codespeak net. +RPython is a restricted subset of Python that is amenable to static analysis. +Although there are additions to the language and some things might surprisingly +work, this is a rough list of restrictions that should be considered. Note +that there are tons of special cased restrictions that you'll encounter +as you go. The exact definition is "RPython is everything that our translation +toolchain can accept" :) .. _`wrapped object`: coding-guide.html#wrapping-rules @@ -198,7 +198,7 @@ contain both a string and a int must be avoided. It is allowed to mix None (basically with the role of a null pointer) with many other types: `wrapped objects`, class instances, lists, dicts, strings, etc. - but *not* with int and floats. + but *not* with int, floats or tuples. **constants** @@ -209,9 +209,12 @@ have this restriction, so if you need mutable global state, store it in the attributes of some prebuilt singleton instance. + + **control structures** - all allowed but yield, ``for`` loops restricted to builtin types + all allowed, ``for`` loops restricted to builtin types, generators + very restricted. **range** @@ -226,7 +229,8 @@ **generators** - generators are not supported. 
+  generators are supported, but their exact scope is very limited. You can't
+  merge two different generators in one control point.
 
 **exceptions**
 
@@ -245,22 +249,27 @@
 
 **strings**
 
-  a lot of, but not all string methods are supported.  Indexes can be
+  a lot of, but not all string methods are supported, and those that are
+  supported do not necessarily accept all arguments.  Indexes can be
   negative.  In case they are not, then you get slightly more efficient
   code if the translator can prove that they are non-negative.  When
   slicing a string it is necessary to prove that the slice start and
-  stop indexes are non-negative.
+  stop indexes are non-negative.  There is no implicit str-to-unicode cast
+  anywhere.
 
 **tuples**
 
   no variable-length tuples; use them to store or return pairs or n-tuples of
-  values. Each combination of types for elements and length constitute a separate
-  and not mixable type.
+  values.  Each combination of types for elements and length constitutes
+  a separate and not mixable type.
 
 **lists**
 
   lists are used as an allocated array.  Lists are over-allocated, so list.append()
-  is reasonably fast. Negative or out-of-bound indexes are only allowed for the
+  is reasonably fast.  However, if you use a fixed-size list, the code
+  is more efficient.  The annotator can figure out most of the time that your
+  list is fixed-size, even when you use list comprehension.
+  Negative or out-of-bound indexes are only allowed for the
   most common operations, as follows:
 
   - *indexing*:
 
@@ -287,16 +296,14 @@
 
 **dicts**
 
-  dicts with a unique key type only, provided it is hashable.
-  String keys have been the only allowed key types for a while, but this was generalized.
-  After some re-optimization,
-  the implementation could safely decide that all string dict keys should be interned.
+  dicts with a unique key type only, provided it is hashable.  Custom
+  hash functions and custom equality will not be honored.
+  Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions.
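[Editorial aside, not part of the changeset above: the **dicts** paragraph says that RPython dicts ignore custom ``__hash__``/``__eq__`` and that ``pypy.rlib.objectmodel.r_dict`` must be used instead. The following plain-Python sketch emulates the r_dict idea (caller-supplied hash and equality functions); it is an illustration only, not the actual RPython implementation, and the class and helper names are invented.]

```python
# Plain-Python emulation of the r_dict idea: a mapping that uses
# caller-supplied equality and hash functions instead of the keys'
# own __eq__/__hash__ (which RPython dicts would not honor).

class RDict(object):
    def __init__(self, key_eq, key_hash):
        self._key_eq = key_eq
        self._key_hash = key_hash
        self._buckets = {}  # custom hash value -> list of (key, value) pairs

    def __setitem__(self, key, value):
        bucket = self._buckets.setdefault(self._key_hash(key), [])
        for i, (k, _) in enumerate(bucket):
            if self._key_eq(k, key):
                bucket[i] = (key, value)  # replace existing entry
                return
        bucket.append((key, value))

    def __getitem__(self, key):
        for k, v in self._buckets.get(self._key_hash(key), []):
            if self._key_eq(k, key):
                return v
        raise KeyError(key)

# Example: case-insensitive string keys, impossible with a plain dict
# without wrapping every key.
def eq(a, b):
    return a.lower() == b.lower()

def hsh(s):
    return hash(s.lower())

d = RDict(eq, hsh)
d["Foo"] = 1
assert d["FOO"] == 1
```

The real ``r_dict`` takes the same two functions as constructor arguments; the point of the restriction is that the annotator can then see exactly which hash/equality code runs, instead of dispatching through arbitrary ``__hash__`` methods.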
**list comprehensions** - may be used to create allocated, initialized arrays. - After list over-allocation was introduced, there is no longer any restriction. + May be used to create allocated, initialized arrays. **functions** @@ -334,9 +341,7 @@ **objects** - in PyPy, wrapped objects are borrowed from the object space. Just like - in CPython, code that needs e.g. a dictionary can use a wrapped dict - and the object space operations on it. + Normal rules apply. This layout makes the number of types to take care about quite limited. diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1591,12 +1591,15 @@ 'ArithmeticError', 'AssertionError', 'AttributeError', + 'BaseException', + 'DeprecationWarning', 'EOFError', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', + 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', @@ -1617,7 +1620,10 @@ 'TabError', 'TypeError', 'UnboundLocalError', + 'UnicodeDecodeError', 'UnicodeError', + 'UnicodeEncodeError', + 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', 'UnicodeEncodeError', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -442,6 +442,22 @@ """ self.optimize_loop(ops, expected) + def test_optimizer_renaming_boxes_not_imported(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + jump(p1, i11) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ 
b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -271,6 +271,10 @@ if newresult is not op.result and not newvalue.is_constant(): op = ResOperation(rop.SAME_AS, [op.result], newresult) self.optimizer._newoperations.append(op) + if self.optimizer.loop.logops: + debug_print(' Falling back to add extra: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + self.optimizer.flush() self.optimizer.emitting_dissabled = False @@ -435,7 +439,13 @@ return for a in op.getarglist(): if not isinstance(a, Const) and a not in seen: - self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, seen) + self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, + seen) + + if self.optimizer.loop.logops: + debug_print(' Emitting short op: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + optimizer.send_extra_operation(op) seen[op.result] = True if op.is_ovf(): diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable from pypy.rlib.objectmodel import specialize @@ -387,6 +388,8 @@ "Float": "space.w_float", "Long": "space.w_long", "Complex": "space.w_complex", + "ByteArray": "space.w_bytearray", + "MemoryView": "space.gettypeobject(W_MemoryView.typedef)", "BaseObject": "space.w_object", 'None': 'space.type(space.w_None)', 'NotImplemented': 'space.type(space.w_NotImplemented)', diff --git a/pypy/module/cpyext/buffer.py b/pypy/module/cpyext/buffer.py --- a/pypy/module/cpyext/buffer.py +++ b/pypy/module/cpyext/buffer.py @@ -1,6 +1,36 @@ +from pypy.interpreter.error import OperationError from pypy.rpython.lltypesystem import rffi, lltype from 
pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, Py_buffer) +from pypy.module.cpyext.pyobject import PyObject + + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyObject_CheckBuffer(space, w_obj): + """Return 1 if obj supports the buffer interface otherwise 0.""" + return 0 # the bf_getbuffer field is never filled by cpyext + + at cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real], + rffi.INT_real, error=-1) +def PyObject_GetBuffer(space, w_obj, view, flags): + """Export obj into a Py_buffer, view. These arguments must + never be NULL. The flags argument is a bit field indicating what + kind of buffer the caller is prepared to deal with and therefore what + kind of buffer the exporter is allowed to return. The buffer interface + allows for complicated memory sharing possibilities, but some caller may + not be able to handle all the complexity but may want to see if the + exporter will let them take a simpler view to its memory. + + Some exporters may not be able to share memory in every possible way and + may need to raise errors to signal to some consumers that something is + just not possible. These errors should be a BufferError unless + there is another error that is actually causing the problem. The + exporter can use flags information to simplify how much of the + Py_buffer structure is filled in with non-default values and/or + raise an error if the object can't support a simpler view of its memory. 
+ + 0 is returned on success and -1 on error.""" + raise OperationError(space.w_TypeError, space.wrap( + 'PyPy does not yet implement the new buffer interface')) @cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real, error=CANNOT_FAIL) def PyBuffer_IsContiguous(space, view, fortran): diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -123,10 +123,6 @@ typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *); typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **); -typedef int (*objobjproc)(PyObject *, PyObject *); -typedef int (*visitproc)(PyObject *, void *); -typedef int (*traverseproc)(PyObject *, visitproc, void *); - /* Py3k buffer interface */ typedef struct bufferinfo { void *buf; @@ -153,6 +149,41 @@ typedef int (*getbufferproc)(PyObject *, Py_buffer *, int); typedef void (*releasebufferproc)(PyObject *, Py_buffer *); + /* Flags for getting buffers */ +#define PyBUF_SIMPLE 0 +#define PyBUF_WRITABLE 0x0001 +/* we used to include an E, backwards compatible alias */ +#define PyBUF_WRITEABLE PyBUF_WRITABLE +#define PyBUF_FORMAT 0x0004 +#define PyBUF_ND 0x0008 +#define PyBUF_STRIDES (0x0010 | PyBUF_ND) +#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) +#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) +#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) +#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE) +#define PyBUF_CONTIG_RO (PyBUF_ND) + +#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE) +#define PyBUF_STRIDED_RO (PyBUF_STRIDES) + +#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT) + +#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT) + + +#define PyBUF_READ 0x100 +#define PyBUF_WRITE 0x200 +#define 
PyBUF_SHADOW 0x400 +/* end Py3k buffer interface */ + +typedef int (*objobjproc)(PyObject *, PyObject *); +typedef int (*visitproc)(PyObject *, void *); +typedef int (*traverseproc)(PyObject *, visitproc, void *); + typedef struct { /* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all arguments are guaranteed to be of the object's type (modulo diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -5,7 +5,7 @@ struct _is; /* Forward */ typedef struct _is { - int _foo; + struct _is *next; } PyInterpreterState; typedef struct _ts { diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -2,7 +2,10 @@ cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) from pypy.rpython.lltypesystem import rffi, lltype -PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ())) +PyInterpreterStateStruct = lltype.ForwardReference() +PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) +cpython_struct( + "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) @@ -54,7 +57,8 @@ class InterpreterState(object): def __init__(self, space): - self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True) + self.interpreter_state = lltype.malloc( + PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) def new_thread_state(self): capsule = ThreadStateCapsule() diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -34,141 +34,6 @@ @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyObject_CheckBuffer(space, obj): - """Return 1 if obj 
supports the buffer interface otherwise 0.""" - raise NotImplementedError - - at cpython_api([PyObject, Py_buffer, rffi.INT_real], rffi.INT_real, error=-1) -def PyObject_GetBuffer(space, obj, view, flags): - """Export obj into a Py_buffer, view. These arguments must - never be NULL. The flags argument is a bit field indicating what - kind of buffer the caller is prepared to deal with and therefore what - kind of buffer the exporter is allowed to return. The buffer interface - allows for complicated memory sharing possibilities, but some caller may - not be able to handle all the complexity but may want to see if the - exporter will let them take a simpler view to its memory. - - Some exporters may not be able to share memory in every possible way and - may need to raise errors to signal to some consumers that something is - just not possible. These errors should be a BufferError unless - there is another error that is actually causing the problem. The - exporter can use flags information to simplify how much of the - Py_buffer structure is filled in with non-default values and/or - raise an error if the object can't support a simpler view of its memory. - - 0 is returned on success and -1 on error. - - The following table gives possible values to the flags arguments. - - Flag - - Description - - PyBUF_SIMPLE - - This is the default flag state. The returned - buffer may or may not have writable memory. The - format of the data will be assumed to be unsigned - bytes. This is a "stand-alone" flag constant. It - never needs to be '|'d to the others. The exporter - will raise an error if it cannot provide such a - contiguous buffer of bytes. - - PyBUF_WRITABLE - - The returned buffer must be writable. If it is - not writable, then raise an error. - - PyBUF_STRIDES - - This implies PyBUF_ND. The returned - buffer must provide strides information (i.e. the - strides cannot be NULL). This would be used when - the consumer can handle strided, discontiguous - arrays. 
Handling strides automatically assumes - you can handle shape. The exporter can raise an - error if a strided representation of the data is - not possible (i.e. without the suboffsets). - - PyBUF_ND - - The returned buffer must provide shape - information. The memory will be assumed C-style - contiguous (last dimension varies the - fastest). The exporter may raise an error if it - cannot provide this kind of contiguous buffer. If - this is not given then shape will be NULL. - - PyBUF_C_CONTIGUOUS - PyBUF_F_CONTIGUOUS - PyBUF_ANY_CONTIGUOUS - - These flags indicate that the contiguity returned - buffer must be respectively, C-contiguous (last - dimension varies the fastest), Fortran contiguous - (first dimension varies the fastest) or either - one. All of these flags imply - PyBUF_STRIDES and guarantee that the - strides buffer info structure will be filled in - correctly. - - PyBUF_INDIRECT - - This flag indicates the returned buffer must have - suboffsets information (which can be NULL if no - suboffsets are needed). This can be used when - the consumer can handle indirect array - referencing implied by these suboffsets. This - implies PyBUF_STRIDES. - - PyBUF_FORMAT - - The returned buffer must have true format - information if this flag is provided. This would - be used when the consumer is going to be checking - for what 'kind' of data is actually stored. An - exporter should always be able to provide this - information if requested. If format is not - explicitly requested then the format must be - returned as NULL (which means 'B', or - unsigned bytes) - - PyBUF_STRIDED - - This is equivalent to (PyBUF_STRIDES | - PyBUF_WRITABLE). - - PyBUF_STRIDED_RO - - This is equivalent to (PyBUF_STRIDES). - - PyBUF_RECORDS - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_RECORDS_RO - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT). - - PyBUF_FULL - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT | PyBUF_WRITABLE). 
- - PyBUF_FULL_RO - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT). - - PyBUF_CONTIG - - This is equivalent to (PyBUF_ND | - PyBUF_WRITABLE). - - PyBUF_CONTIG_RO - - This is equivalent to (PyBUF_ND).""" raise NotImplementedError @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -37,6 +37,7 @@ def test_thread_state_interp(self, space, api): ts = api.PyThreadState_Get() assert ts.c_interp == api.PyInterpreterState_Head() + assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO) def test_basic_threadstate_dance(self, space, api): # Let extension modules call these functions, diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -9,7 +9,7 @@ appleveldefs = {} class Module(MixedModule): - applevel_name = 'numpypy' + applevel_name = '_numpypy' submodules = { 'pypy': PyPyModule diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpypy +import _numpypy inf = float("inf") @@ -14,29 +14,29 @@ return mean(a) def identity(n, dtype=None): - a = numpypy.zeros((n,n), dtype=dtype) + a = _numpypy.zeros((n,n), dtype=dtype) for i in range(n): a[i][i] = 1 return a def mean(a): if not hasattr(a, "mean"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.mean() def sum(a): if not hasattr(a, "sum"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.sum() def min(a): if not hasattr(a, "min"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.min() def max(a): if not hasattr(a, "max"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.max() def arange(start, stop=None, step=1, 
dtype=None): @@ -47,9 +47,9 @@ stop = start start = 0 if dtype is None: - test = numpypy.array([start, stop, step, 0]) + test = _numpypy.array([start, stop, step, 0]) dtype = test.dtype - arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) + arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) i = start for j in range(arr.size): arr[j] = i @@ -90,5 +90,5 @@ you should assign the new shape to the shape attribute of the array ''' if not hasattr(a, 'reshape'): - a = numpypy.array(a) + a = _numpypy.array(a) return a.reshape(shape) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -429,13 +429,10 @@ res.append(')') else: concrete.to_str(space, 1, res, indent=' ') - if (dtype is interp_dtype.get_dtype_cache(space).w_float64dtype or \ - dtype.kind == interp_dtype.SIGNEDLTR and \ - dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) \ - and self.size: - # Do not print dtype - pass - else: + if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and + not (dtype.kind == interp_dtype.SIGNEDLTR and + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or + not self.size): res.append(", dtype=" + dtype.name) res.append(")") return space.wrap(res.build()) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpypy import dtype + from _numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 
def test_array_dtype_attr(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpypy import dtype + from _numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,7 +36,7 @@ assert str(d) == "bool" def test_bool_array(self): - from numpypy import array, False_, True_ + from _numpypy import array, False_, True_ a = array([0, 1, 2, 2.5], dtype='?') assert a[0] is False_ @@ -44,7 +44,7 @@ assert a[i] is True_ def test_copy_array_with_dtype(self): - from numpypy import array, False_, True_, int64 + from _numpypy import array, False_, True_, int64 a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit @@ -58,35 +58,35 @@ assert b[0] is False_ def test_zeros_bool(self): - from numpypy import zeros, False_ + from _numpypy import zeros, False_ a = zeros(10, dtype=bool) for i in range(10): assert a[i] is False_ def test_ones_bool(self): - from numpypy import ones, True_ + from _numpypy import ones, True_ a = ones(10, dtype=bool) for i in range(10): assert a[i] is True_ def test_zeros_long(self): - from numpypy import zeros, int64 + from _numpypy import zeros, int64 a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 0 def test_ones_long(self): - from numpypy import ones, int64 + from _numpypy import ones, int64 a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 1 def test_overflow(self): - from numpypy import array, dtype + from _numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + array([0], t)).dtype is 
dtype(t) def test_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,19 +156,19 @@ assert b[i] == i * 2 def test_shape(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpypy import dtype + from _numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): - import numpypy as numpy + import _numpypy as numpy raises(TypeError, numpy.generic, 0) raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) @@ -181,7 +181,7 @@ raises(TypeError, numpy.inexact, 0) def test_bool(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object] assert numpy.bool_(3) is numpy.True_ @@ -196,7 +196,7 @@ assert numpy.bool_("False") is numpy.True_ def test_int8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -218,7 +218,7 @@ assert numpy.int8('128') == -128 def test_uint8(self): - import numpypy as 
numpy + import _numpypy as numpy assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -241,7 +241,7 @@ assert numpy.uint8('256') == 0 def test_int16(self): - import numpypy as numpy + import _numpypy as numpy x = numpy.int16(3) assert x == 3 @@ -251,7 +251,7 @@ assert numpy.int16('32768') == -32768 def test_uint16(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint16(65535) == 65535 assert numpy.uint16(65536) == 0 @@ -260,7 +260,7 @@ def test_int32(self): import sys - import numpypy as numpy + import _numpypy as numpy x = numpy.int32(23) assert x == 23 @@ -275,7 +275,7 @@ def test_uint32(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint32(10) == 10 @@ -286,14 +286,14 @@ assert numpy.uint32('4294967296') == 0 def test_int_(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int_ is numpy.dtype(int).type assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] def test_int64(self): import sys - import numpypy as numpy + import _numpypy as numpy if sys.maxint == 2 ** 63 -1: assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] @@ -315,7 +315,7 @@ def test_uint64(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -330,7 +330,7 @@ raises(OverflowError, numpy.uint64(18446744073709551616)) def test_float32(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, numpy.generic, object] @@ -339,7 +339,7 @@ raises(ValueError, numpy.float32, '23.2df') def test_float64(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float64.mro() == 
[numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] @@ -352,7 +352,7 @@ raises(ValueError, numpy.float64, '23.2df') def test_subclass_type(self): - import numpypy as numpy + import _numpypy as numpy class X(numpy.float64): def m(self): diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -3,33 +3,33 @@ class AppTestNumPyModule(BaseNumpyAppTest): def test_mean(self): - from numpypy import array, mean + from _numpypy import array, mean assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 def test_average(self): - from numpypy import array, average + from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 def test_sum(self): - from numpypy import array, sum + from _numpypy import array, sum assert sum(range(10)) == 45 assert sum(array(range(10))) == 45 def test_min(self): - from numpypy import array, min + from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 def test_max(self): - from numpypy import array, max + from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 def test_constants(self): import math - from numpypy import inf, e, pi + from _numpypy import inf, e, pi assert type(inf) is float assert inf == float("inf") assert e == math.e diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -161,7 +161,7 @@ class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): - from numpypy import ndarray, array, dtype + from _numpypy import ndarray, array, dtype assert type(ndarray) is type assert type(array) is not type @@ -176,25 +176,26 @@ assert a.dtype is dtype(int) def test_type(self): 
- from numpypy import array + from _numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) def test_ndim(self): - from numpypy import array + from _numpypy import array x = array(0.2) assert x.ndim == 0 - x = array([1,2]) + x = array([1, 2]) assert x.ndim == 1 - x = array([[1,2], [3,4]]) + x = array([[1, 2], [3, 4]]) assert x.ndim == 2 - x = array([[[1,2], [3,4]], [[5,6], [7,8]] ]) + x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) assert x.ndim == 3 - # numpy actually raises an AttributeError, but numpypy raises an AttributeError - raises (TypeError, 'x.ndim=3') - + # numpy actually raises an AttributeError, but _numpypy raises an + # TypeError + raises(TypeError, 'x.ndim = 3') + def test_init(self): - from numpypy import zeros + from _numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. assert a[10] == 0.0 @@ -203,7 +204,7 @@ assert a[13] == 5.3 def test_size(self): - from numpypy import array + from _numpypy import array assert array(3).size == 1 a = array([1, 2, 3]) assert a.size == 3 @@ -214,13 +215,13 @@ Test that empty() works. 
""" - from numpypy import empty + from _numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpypy import ones + from _numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -229,7 +230,7 @@ assert a[2] == 4 def test_copy(self): - from numpypy import arange, array + from _numpypy import arange, array a = arange(5) b = a.copy() for i in xrange(5): @@ -246,12 +247,12 @@ assert (c == b).all() def test_iterator_init(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a[3] == 3 def test_getitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -260,7 +261,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -270,7 +271,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -278,7 +279,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -289,7 +290,7 @@ assert a[i] == i def test_setslice_array(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -300,7 +301,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -319,7 +320,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = [0., 1.] a[1:4:2] = b @@ -327,14 +328,14 @@ assert a[3] == 1. 
def test_setslice_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. def test_scalar(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(3) raises(IndexError, "a[0]") raises(IndexError, "a[0] = 5") @@ -343,13 +344,13 @@ assert a.dtype is dtype(int) def test_len(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -358,7 +359,7 @@ assert c.shape == (3,) def test_set_shape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array([]) a.shape = [] a = array(range(12)) @@ -378,7 +379,7 @@ a.shape = (1,) def test_reshape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(12)) exc = raises(ValueError, "b = a.reshape((3, 10))") assert str(exc.value) == "total size of new array must be unchanged" @@ -391,7 +392,7 @@ a.shape = (12, 2) def test_slice_reshape(self): - from numpypy import zeros, arange + from _numpypy import zeros, arange a = zeros((4, 2, 3)) b = a[::2, :, :] b.shape = (2, 6) @@ -427,13 +428,13 @@ raises(ValueError, arange(10).reshape, (5, -1, -1)) def test_reshape_varargs(self): - from numpypy import arange + from _numpypy import arange z = arange(96).reshape(12, -1) y = z.reshape(4, 3, 8) assert y.shape == (4, 3, 8) def test_add(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -446,7 +447,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([i for i in reversed(range(5))]) c = a + b @@ -454,20 +455,20 @@ assert c[i] == 4 def test_add_constant(self): - from numpypy import array + from 
_numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpypy import array + from _numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpypy import array, ndarray + from _numpypy import array, ndarray a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -476,14 +477,14 @@ assert c[i] == 4 def test_subtract(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -491,34 +492,34 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_scalar_subtract(self): - from numpypy import int32 + from _numpypy import int32 assert int32(2) - 1 == 1 assert 1 - int32(2) == -1 def test_mul(self): - import numpypy + import _numpypy - a = numpypy.array(range(5)) + a = _numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = numpypy.array(range(5), dtype=bool) + a = _numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpypy.dtype(bool) - assert b[0] is numpypy.False_ + assert b.dtype is _numpypy.dtype(bool) + assert b[0] is _numpypy.False_ for i in range(1, 5): - assert b[i] is numpypy.True_ + assert b[i] is _numpypy.True_ def test_mul_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -526,7 +527,7 @@ def test_div(self): from math import isnan - from numpypy import array, dtype, inf + from _numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -558,7 +559,7 @@ assert c[2] == -inf def test_div_other(self): - from numpypy import array + from _numpypy import array a = 
array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -566,14 +567,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -583,7 +584,7 @@ assert (a ** 2 == a * a).all() def test_pow_other(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -591,14 +592,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) b = a % a for i in range(5): @@ -611,7 +612,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -619,14 +620,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = +a for i in range(5): @@ -637,7 +638,7 @@ assert a[i] == i def test_neg(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = -a for i in range(5): @@ -648,7 +649,7 @@ assert a[i] == -i def test_abs(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = abs(a) for i in range(5): @@ -659,7 +660,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -673,7 +674,7 @@ 
assert c[1] == 4 def test_getslice(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -687,7 +688,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpypy import array + from _numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -695,7 +696,7 @@ assert s[i] == a[2 * i + 1] def test_slice_update(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -705,7 +706,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:2] b = array([10, 11]) @@ -719,13 +720,13 @@ assert d[1] == 12 def test_mean(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 def test_sum(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.sum() == 10.0 assert a[:4].sum() == 6.0 @@ -734,8 +735,8 @@ assert a.sum() == 5 def test_identity(self): - from numpypy import identity, array - from numpypy import int32, float64, dtype + from _numpypy import identity, array + from _numpypy import int32, float64, dtype a = identity(0) assert len(a) == 0 assert a.dtype == dtype('float64') @@ -754,32 +755,32 @@ assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all() def test_prod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a + a).max() == 11.4 def test_min(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 
3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) r = a.argmax() assert r == 2 @@ -800,14 +801,14 @@ assert a.argmax() == 2 def test_argmin(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -816,7 +817,7 @@ assert b.all() == True def test_any(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -825,7 +826,7 @@ assert c.any() == False def test_dot(self): - from numpypy import array, dot + from _numpypy import array, dot a = array(range(5)) assert a.dot(a) == 30.0 @@ -835,14 +836,14 @@ assert (dot(5, [1, 2, 3]) == [5, 10, 15]).all() def test_dot_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpypy import array, dtype, float64, int8, bool_ + from _numpypy import array, dtype, float64, int8, bool_ assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -859,7 +860,7 @@ def test_comparison(self): import operator - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5)) b = array(range(5), float) @@ -878,7 +879,7 @@ assert c[i] == func(b[i], 3) def test_nonzero(self): - from numpypy import array + from _numpypy import array a = array([1, 2]) raises(ValueError, bool, a) raises(ValueError, bool, a == a) @@ -888,7 +889,7 @@ assert not bool(array([0])) def test_slice_assignment(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[::-1] = 
a assert (a == [0, 1, 2, 1, 0]).all() @@ -898,8 +899,8 @@ assert (a == [8, 6, 4, 2, 0]).all() def test_debug_repr(self): - from numpypy import zeros, sin - from numpypy.pypy import debug_repr + from _numpypy import zeros, sin + from _numpypy.pypy import debug_repr a = zeros(1) assert debug_repr(a) == 'Array' assert debug_repr(a + a) == 'Call2(add, Array, Array)' @@ -913,8 +914,8 @@ assert debug_repr(b) == 'Array' def test_remove_invalidates(self): - from numpypy import array - from numpypy.pypy import remove_invalidates + from _numpypy import array + from _numpypy.pypy import remove_invalidates a = array([1, 2, 3]) b = a + a remove_invalidates(a) @@ -922,7 +923,7 @@ assert b[0] == 28 def test_virtual_views(self): - from numpypy import arange + from _numpypy import arange a = arange(15) c = (a + a) d = c[::2] @@ -940,7 +941,7 @@ assert b[1] == 2 def test_tolist_scalar(self): - from numpypy import int32, bool_ + from _numpypy import int32, bool_ x = int32(23) assert x.tolist() == 23 assert type(x.tolist()) is int @@ -948,13 +949,13 @@ assert y.tolist() is True def test_tolist_zerodim(self): - from numpypy import array + from _numpypy import array x = array(3) assert x.tolist() == 3 assert type(x.tolist()) is int def test_tolist_singledim(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.tolist() == [0, 1, 2, 3, 4] assert type(a.tolist()[0]) is int @@ -962,17 +963,17 @@ assert b.tolist() == [0.2, 0.4, 0.6] def test_tolist_multidim(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4]]) assert a.tolist() == [[1, 2], [3, 4]] def test_tolist_view(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4]]) assert (a + a).tolist() == [[2, 4], [6, 8]] def test_tolist_slice(self): - from numpypy import array + from _numpypy import array a = array([[17.1, 27.2], [40.3, 50.3]]) assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] @@ -980,23 
+981,23 @@ class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): - import numpypy - a = numpypy.zeros((2, 2)) + import _numpypy + a = _numpypy.zeros((2, 2)) assert len(a) == 2 def test_shape(self): - import numpypy - assert numpypy.zeros(1).shape == (1,) - assert numpypy.zeros((2, 2)).shape == (2, 2) - assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) - assert numpypy.array([[1], [2], [3]]).shape == (3, 1) - assert len(numpypy.zeros((3, 1, 2))) == 3 - raises(TypeError, len, numpypy.zeros(())) - raises(ValueError, numpypy.array, [[1, 2], 3]) + import _numpypy + assert _numpypy.zeros(1).shape == (1,) + assert _numpypy.zeros((2, 2)).shape == (2, 2) + assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) + assert _numpypy.array([[1], [2], [3]]).shape == (3, 1) + assert len(_numpypy.zeros((3, 1, 2))) == 3 + raises(TypeError, len, _numpypy.zeros(())) + raises(ValueError, _numpypy.array, [[1, 2], 3]) def test_getsetitem(self): - import numpypy - a = numpypy.zeros((2, 3, 1)) + import _numpypy + a = _numpypy.zeros((2, 3, 1)) raises(IndexError, a.__getitem__, (2, 0, 0)) raises(IndexError, a.__getitem__, (0, 3, 0)) raises(IndexError, a.__getitem__, (0, 0, 1)) @@ -1007,8 +1008,8 @@ assert a[1, -1, 0] == 3 def test_slices(self): - import numpypy - a = numpypy.zeros((4, 3, 2)) + import _numpypy + a = _numpypy.zeros((4, 3, 2)) raises(IndexError, a.__getitem__, (4,)) raises(IndexError, a.__getitem__, (3, 3)) raises(IndexError, a.__getitem__, (slice(None), 3)) @@ -1041,51 +1042,51 @@ assert a[1][2][1] == 15 def test_init_2(self): - import numpypy - raises(ValueError, numpypy.array, [[1], 2]) - raises(ValueError, numpypy.array, [[1, 2], [3]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]]) - a = numpypy.array([[1, 2], [4, 5]]) + import _numpypy + raises(ValueError, _numpypy.array, [[1], 2]) + raises(ValueError, _numpypy.array, [[1, 2], [3]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]]) + 
raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]]) + a = _numpypy.array([[1, 2], [4, 5]]) assert a[0, 1] == 2 assert a[0][1] == 2 - a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) + a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) assert (a[0, 1] == [3, 4]).all() def test_setitem_slice(self): - import numpypy - a = numpypy.zeros((3, 4)) + import _numpypy + a = _numpypy.zeros((3, 4)) a[1] = [1, 2, 3, 4] assert a[1, 2] == 3 raises(TypeError, a[1].__setitem__, [1, 2, 3]) - a = numpypy.array([[1, 2], [3, 4]]) + a = _numpypy.array([[1, 2], [3, 4]]) assert (a == [[1, 2], [3, 4]]).all() - a[1] = numpypy.array([5, 6]) + a[1] = _numpypy.array([5, 6]) assert (a == [[1, 2], [5, 6]]).all() - a[:, 1] = numpypy.array([8, 10]) + a[:, 1] = _numpypy.array([8, 10]) assert (a == [[1, 8], [5, 10]]).all() - a[0, :: -1] = numpypy.array([11, 12]) + a[0, :: -1] = _numpypy.array([11, 12]) assert (a == [[12, 11], [5, 10]]).all() def test_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert ((a + a) == \ array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all() def test_getitem_add(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) assert (a + a)[1, 1] == 8 def test_ufunc_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([[1, 2], [3, 4]]) b = negative(a + a) assert (b == [[-2, -4], [-6, -8]]).all() def test_getitem_3(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]) b = a[::2] @@ -1096,12 +1097,12 @@ assert c[1][1] == 12 def test_multidim_ones(self): - from numpypy import ones + from _numpypy import ones a = ones((1, 2, 3)) assert a[0, 1, 2] == 1.0 def test_multidim_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((3, 3)) b = ones((3, 3)) a[:, 1:3] = b[:, 1:3] @@ -1112,21 
+1113,21 @@ assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all() def test_broadcast_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) b = array([5, 6]) c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]]) assert c.all() def test_broadcast_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((10, 10)) b = ones(10) a[:, :] = b assert a[3, 5] == 1 def test_broadcast_shape_agreement(self): - from numpypy import zeros, array + from _numpypy import zeros, array a = zeros((3, 1, 3)) b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32))) c = ((a + b) == [b, b, b]) @@ -1140,7 +1141,7 @@ assert c.all() def test_broadcast_scalar(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((4, 5), 'd') a[:, 1] = 3 assert a[2, 1] == 3 @@ -1151,14 +1152,14 @@ assert a[3, 2] == 0 def test_broadcast_call2(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((4, 1, 5)) b = ones((4, 3, 5)) b[:] = (a + a) assert (b == zeros((4, 3, 5))).all() def test_broadcast_virtualview(self): - from numpypy import arange, zeros + from _numpypy import arange, zeros a = arange(8).reshape([2, 2, 2]) b = (a + a)[1, 1] c = zeros((2, 2, 2)) @@ -1166,13 +1167,13 @@ assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all() def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert a.argmax() == 5 assert a[:2, ].argmax() == 3 def test_broadcast_wrong_shapes(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((4, 3, 2)) b = zeros((4, 2)) exc = raises(ValueError, lambda: a + b) @@ -1180,7 +1181,7 @@ " together with shapes (4,3,2) (4,2)" def test_reduce(self): - from numpypy import array + from _numpypy import array a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) assert a.sum() == (13 * 12) / 2 b = a[1:, 1::2] @@ -1188,7 +1189,7 @@ assert c.sum() == (6 + 
8 + 10 + 12) * 2 def test_transpose(self): - from numpypy import array + from _numpypy import array a = array(((range(3), range(3, 6)), (range(6, 9), range(9, 12)), (range(12, 15), range(15, 18)), @@ -1207,7 +1208,7 @@ assert(b[:, 0] == a[0, :]).all() def test_flatiter(self): - from numpypy import array, flatiter + from _numpypy import array, flatiter a = array([[10, 30], [40, 60]]) f_iter = a.flat assert f_iter.next() == 10 @@ -1222,23 +1223,23 @@ assert s == 140 def test_flatiter_array_conv(self): - from numpypy import array, dot + from _numpypy import array, dot a = array([1, 2, 3]) assert dot(a.flat, a.flat) == 14 def test_flatiter_varray(self): - from numpypy import ones + from _numpypy import ones a = ones((2, 2)) assert list(((a + a).flat)) == [2, 2, 2, 2] def test_slice_copy(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((10, 10)) b = a[0].copy() assert (b == zeros(10)).all() def test_array_interface(self): - from numpypy import array + from _numpypy import array a = array([1, 2, 3]) i = a.__array_interface__ assert isinstance(i['data'][0], int) @@ -1260,7 +1261,7 @@ def test_fromstring(self): import sys - from numpypy import fromstring, array, uint8, float32, int32 + from _numpypy import fromstring, array, uint8, float32, int32 a = fromstring(self.data) for i in range(4): @@ -1324,7 +1325,7 @@ assert (u == [1, 0]).all() def test_fromstring_types(self): - from numpypy import (fromstring, int8, int16, int32, int64, uint8, + from _numpypy import (fromstring, int8, int16, int32, int64, uint8, uint16, uint32, float32, float64) a = fromstring('\xFF', dtype=int8) @@ -1349,7 +1350,7 @@ assert j[0] == 12 def test_fromstring_invalid(self): - from numpypy import fromstring, uint16, uint8, int32 + from _numpypy import fromstring, uint16, uint8, int32 #default dtype is 64-bit float, so 3 bytes should fail raises(ValueError, fromstring, "\x01\x02\x03") #3 bytes is not modulo 2 bytes (int16) @@ -1360,8 +1361,8 @@ class 
AppTestRepr(BaseNumpyAppTest): def test_repr(self): - from numpypy import array, zeros - intSize = array(5).dtype.itemsize + from _numpypy import array, zeros + int_size = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -1369,12 +1370,12 @@ a = zeros(1001) assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" a = array(range(5), long) - if a.dtype.itemsize == intSize: + if a.dtype.itemsize == int_size: assert repr(a) == "array([0, 1, 2, 3, 4])" else: assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" a = array(range(5), 'int32') - if a.dtype.itemsize == intSize: + if a.dtype.itemsize == int_size: assert repr(a) == "array([0, 1, 2, 3, 4])" else: assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" @@ -1388,7 +1389,7 @@ assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import arange, zeros + from _numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1413,7 +1414,7 @@ [500, 1001]])''' def test_repr_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert repr(b) == "array([1.0, 3.0])" @@ -1428,7 +1429,7 @@ assert repr(b) == "array([], shape=(0, 5), dtype=int16)" def test_str(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -1461,7 +1462,7 @@ assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]' def test_str_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" @@ -1477,7 +1478,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_arange(self): - from numpypy import arange, array, dtype + from _numpypy import arange, array, dtype a = arange(3) assert (a 
== [0, 1, 2]).all() assert a.dtype is dtype(int) @@ -1499,7 +1500,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_app_reshape(self): - from numpypy import arange, array, dtype, reshape + from _numpypy import arange, array, dtype, reshape a = arange(12) b = reshape(a, (3, 4)) assert b.shape == (3, 4) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpypy import add, ufunc + from _numpypy import add, ufunc assert isinstance(add, ufunc) assert repr(add) == "" assert repr(ufunc) == "" def test_ufunc_attrs(self): - from numpypy import add, multiply, sin + from _numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,7 +22,7 @@ assert sin.nin == 1 def test_wrong_arguments(self): - from numpypy import add, sin + from _numpypy import add, sin raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) @@ -30,14 +30,14 @@ raises(ValueError, sin) def test_single_item(self): - from numpypy import negative, sign, minimum + from _numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpypy import array, ndarray, negative, minimum + from _numpypy import array, ndarray, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpypy import array, absolute + from _numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from 
numpypy import array, add + from _numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpypy import array, divide + from _numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -114,7 +114,7 @@ assert (divide(array([-10]), array([2])) == array([-5])).all() def test_fabs(self): - from numpypy import array, fabs + from _numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -123,7 +123,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpypy import array, minimum + from _numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -132,7 +132,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpypy import array, maximum + from _numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -145,7 +145,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpypy import array, multiply + from _numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -154,7 +154,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpypy import array, sign, dtype + from _numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -173,7 +173,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpypy import array, reciprocal + from _numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -182,7 +182,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpypy import array, subtract + from _numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -191,7 +191,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpypy import array, floor + from _numpypy import array, floor reference = 
[-2.0, -1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -200,7 +200,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpypy import array, copysign + from _numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -216,7 +216,7 @@ def test_exp(self): import math - from numpypy import array, exp + from _numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -230,7 +230,7 @@ def test_sin(self): import math - from numpypy import array, sin + from _numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -243,7 +243,7 @@ def test_cos(self): import math - from numpypy import array, cos + from _numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -252,7 +252,7 @@ def test_tan(self): import math - from numpypy import array, tan + from _numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -262,7 +262,7 @@ def test_arcsin(self): import math - from numpypy import array, arcsin + from _numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -276,7 +276,7 @@ def test_arccos(self): import math - from numpypy import array, arccos + from _numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -291,7 +291,7 @@ def test_arctan(self): import math - from numpypy import array, arctan + from _numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) @@ -304,7 +304,7 @@ def test_arcsinh(self): import math - from numpypy import arcsinh, inf + from _numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -312,7 +312,7 @@ def test_arctanh(self): import math - from numpypy import arctanh + from _numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) 
== arctanh(v) @@ -323,7 +323,7 @@ def test_sqrt(self): import math - from numpypy import sqrt + from _numpypy import sqrt nan, inf = float("nan"), float("inf") data = [1, 2, 3, inf] @@ -333,13 +333,13 @@ assert math.isnan(sqrt(nan)) def test_reduce_errors(self): - from numpypy import sin, add + from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) raises(TypeError, add.reduce, 1) def test_reduce(self): - from numpypy import add, maximum + from _numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 @@ -348,7 +348,7 @@ def test_comparisons(self): import operator - from numpypy import equal, not_equal, less, less_equal, greater, greater_equal + from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py --- a/pypy/module/sys/__init__.py +++ b/pypy/module/sys/__init__.py @@ -42,7 +42,7 @@ 'argv' : 'state.get(space).w_argv', 'py3kwarning' : 'space.w_False', 'warnoptions' : 'state.get(space).w_warnoptions', - 'builtin_module_names' : 'state.w_None', + 'builtin_module_names' : 'space.w_None', 'pypy_getudir' : 'state.pypy_getudir', # not translated 'pypy_initial_path' : 'state.pypy_initial_path', diff --git a/pypy/objspace/fake/checkmodule.py b/pypy/objspace/fake/checkmodule.py --- a/pypy/objspace/fake/checkmodule.py +++ b/pypy/objspace/fake/checkmodule.py @@ -1,8 +1,10 @@ from pypy.objspace.fake.objspace import FakeObjSpace, W_Root +from pypy.config.pypyoption import get_pypy_config def checkmodule(modname): - space = FakeObjSpace() + config = get_pypy_config(translating=True) + space = FakeObjSpace(config) mod = __import__('pypy.module.%s' % modname, None, None, ['__doc__']) # force computation and record what we wrap module = mod.Module(space, W_Root()) diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ 
b/pypy/objspace/fake/objspace.py
@@ -93,9 +93,9 @@
 
 class FakeObjSpace(ObjSpace):
 
-    def __init__(self):
+    def __init__(self, config=None):
         self._seen_extras = []
-        ObjSpace.__init__(self)
+        ObjSpace.__init__(self, config=config)
 
     def float_w(self, w_obj):
         is_root(w_obj)
@@ -135,6 +135,9 @@
     def newfloat(self, x):
         return w_some_obj()
 
+    def newcomplex(self, x, y):
+        return w_some_obj()
+
     def marshal_w(self, w_obj):
         "NOT_RPYTHON"
         raise NotImplementedError
@@ -215,6 +218,10 @@
         expected_length = 3
         return [w_some_obj()] * expected_length
 
+    def unpackcomplex(self, w_complex):
+        is_root(w_complex)
+        return 1.1, 2.2
+
     def allocate_instance(self, cls, w_subtype):
         is_root(w_subtype)
         return instantiate(cls)
@@ -232,6 +239,11 @@
     def exec_(self, *args, **kwds):
         pass
 
+    def createexecutioncontext(self):
+        ec = ObjSpace.createexecutioncontext(self)
+        ec._py_repr = None
+        return ec
+
     # ----------
 
     def translates(self, func=None, argtypes=None, **kwds):
@@ -267,18 +279,21 @@
              ObjSpace.ExceptionTable +
              ['int', 'str', 'float', 'long', 'tuple', 'list',
               'dict', 'unicode', 'complex', 'slice', 'bool',
-              'type', 'basestring']):
+              'type', 'basestring', 'object']):
     setattr(FakeObjSpace, 'w_' + name, w_some_obj())
 #
 for (name, _, arity, _) in ObjSpace.MethodTable:
     args = ['w_%d' % i for i in range(arity)]
+    params = args[:]
     d = {'is_root': is_root, 'w_some_obj': w_some_obj}
+    if name in ('get',):
+        params[-1] += '=None'
     exec compile2("""\
 def meth(self, %s):
     %s
     return w_some_obj()
-""" % (', '.join(args),
+""" % (', '.join(params),
        '; '.join(['is_root(%s)' % arg for arg in args]))) in d
     meth = func_with_new_name(d['meth'], name)
     setattr(FakeObjSpace, name, meth)
@@ -301,9 +316,12 @@
         pass
 FakeObjSpace.default_compiler = FakeCompiler()
 
-class FakeModule(object):
+class FakeModule(Wrappable):
+    def __init__(self):
+        self.w_dict = w_some_obj()
     def get(self, name):
         name + "xx"   # check that it's a string
         return w_some_obj()
 FakeObjSpace.sys = FakeModule()
 FakeObjSpace.sys.filesystemencoding = 'foobar'
+FakeObjSpace.builtin = FakeModule()
diff --git a/pypy/objspace/fake/test/test_objspace.py b/pypy/objspace/fake/test/test_objspace.py
--- a/pypy/objspace/fake/test/test_objspace.py
+++ b/pypy/objspace/fake/test/test_objspace.py
@@ -40,7 +40,7 @@
     def test_constants(self):
         space = self.space
         space.translates(lambda: (space.w_None, space.w_True, space.w_False,
-                                  space.w_int, space.w_str,
+                                  space.w_int, space.w_str, space.w_object,
                                   space.w_TypeError))
 
     def test_wrap(self):
@@ -72,3 +72,9 @@
 
     def test_newlist(self):
         self.space.newlist([W_Root(), W_Root()])
+
+    def test_default_values(self):
+        # the __get__ method takes either 2 or 3 arguments
+        space = self.space
+        space.translates(lambda: (space.get(W_Root(), W_Root()),
+                                  space.get(W_Root(), W_Root(), W_Root())))

From noreply at buildbot.pypy.org Sat Jan 7 21:21:32 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Sat, 7 Jan 2012 21:21:32 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: review notes
Message-ID: <20120107202132.C14A182BFF@wyvern.cs.uni-duesseldorf.de>

Author: Alex Gaynor
Branch: better-jit-hooks
Changeset: r51125:1c83e7759323
Date: 2012-01-07 14:21 -0600
http://bitbucket.org/pypy/pypy/changeset/1c83e7759323/

Log: review notes

diff --git a/REVIEW.rst b/REVIEW.rst
new file mode 100644
--- /dev/null
+++ b/REVIEW.rst
@@ -0,0 +1,12 @@
+REVIEW NOTES
+============
+
+* ``namespace=locals()``: can we please not use ``locals()``, even in tests? I find it super hard to read, and it's bad for the JIT.
+* Don't we already have a thing named portal (portal call maybe?); is the name confusing?
+* ``interp_resop.py:wrap_greenkey()`` should do something useful on non-pypyjit jds.
+* The ``WrappedOp`` constructor doesn't make much sense; it can only create an op with integer args?
+* Let's at least expose ``name`` on ``WrappedOp``.
+* DebugMergePoints don't appear to get their metadata.
+* Someone else should review the annotator magic.
+* Are entry_bridges compiled separately anymore? (``set_compile_hook`` docstring)
+

From noreply at buildbot.pypy.org Sat Jan 7 22:04:40 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Sat, 7 Jan 2012 22:04:40 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: add jit_merge_point
Message-ID: <20120107210440.3B49682BFF@wyvern.cs.uni-duesseldorf.de>

Author: mattip
Branch: numpypy-axisops
Changeset: r51126:f3a9a6a5871d
Date: 2012-01-06 16:32 +0200
http://bitbucket.org/pypy/pypy/changeset/f3a9a6a5871d/

Log: add jit_merge_point

diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -297,7 +297,7 @@
     descr_min = _reduce_ufunc_impl("minimum")
 
     def _reduce_argmax_argmin_impl(op_name):
-        reduce_driver = jit.JitDriver(
+        axisreduce_driver = jit.JitDriver(
             greens=['shapelen', 'sig'],
             reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'],
             get_printable_location=signature.new_printable_location(op_name),
@@ -312,7 +312,7 @@
             result = 0
             idx = 1
             while not frame.done():
-                reduce_driver.jit_merge_point(sig=sig,
+                axisreduce_driver.jit_merge_point(sig=sig,
                                           shapelen=shapelen,
                                           self=self, dtype=dtype,
                                           frame=frame, result=result,
@@ -783,18 +783,28 @@
             return value
 
     def compute(self):
+        reduce_driver = jit.JitDriver(
+            greens=['shapelen', 'sig', 'self'],
+            reds=['result', 'ri', 'frame', 'nextval', 'dtype', 'value'],
+            get_printable_location=\
+                signature.new_printable_location(self.binfunc),
+        )
         self.computing = True
         dtype = self.dtype
         result = W_NDimArray(self.size, self.shape, dtype)
         self.values = self.values.get_concrete()
         shapelen = len(result.shape)
-        objlen = len(self.values.shape)
         sig = self.find_sig(res_shape=result.shape, arr=self.values)
         ri = ArrayIterator(result.size)
         frame = sig.create_frame(self.values, dim=self.dim)
         value = self.get_identity(sig, frame, shapelen)
+        nextval = 0.
while not frame.done(): - #XXX add jit_merge_point + reduce_driver.jit_merge_point(frame=frame, self=self, + value=value, sig=sig, + shapelen=shapelen, ri=ri, + nextval=nextval, dtype=dtype, + result=result) if frame.iterators[0].axis_done: value = self.get_identity(sig, frame, shapelen) ri = ri.next(shapelen) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -115,6 +115,21 @@ "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) + def define_sum2d(): + return """ + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] + b = sum(a,0) + b -> 1 + """ + + def test_axissum(self): + py.test.skip("2dsum") + result = self.run("sum2d") + assert result == 30 + self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 2, + "int_add": 1, "int_ge": 1, "guard_false": 1, + "jump": 1, 'arraylen_gc': 1}) + def define_prod(): return """ a = |30| From noreply at buildbot.pypy.org Sat Jan 7 22:04:41 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sat, 7 Jan 2012 22:04:41 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: jit_merge_point translates, zjit test for sum() of 2d array fails Message-ID: <20120107210441.637C182BFF@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51127:579c843af22b Date: 2012-01-07 22:57 +0200 http://bitbucket.org/pypy/pypy/changeset/579c843af22b/ Log: jit_merge_point translates, zjit test for sum() of 2d array fails diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -36,6 +36,14 @@ get_printable_location=signature.new_printable_location('slice'), ) +axisreduce_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['self','result', 'ri', 'frame', 'nextval', 'dtype', 'value'], 
+ get_printable_location=signature.new_printable_location('reduce'), +) + + def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] batch = space.listview(w_iterable) @@ -297,7 +305,7 @@ descr_min = _reduce_ufunc_impl("minimum") def _reduce_argmax_argmin_impl(op_name): - axisreduce_driver = jit.JitDriver( + reduce_driver = jit.JitDriver( greens=['shapelen', 'sig'], reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'], get_printable_location=signature.new_printable_location(op_name), @@ -312,7 +320,7 @@ result = 0 idx = 1 while not frame.done(): - axisreduce_driver.jit_merge_point(sig=sig, + reduce_driver.jit_merge_point(sig=sig, shapelen=shapelen, self=self, dtype=dtype, frame=frame, result=result, @@ -760,7 +768,6 @@ self.dtype = res_dtype self.dim = dim self.identity = identity - self.computing = False def _del_sources(self): self.values = None @@ -783,13 +790,6 @@ return value def compute(self): - reduce_driver = jit.JitDriver( - greens=['shapelen', 'sig', 'self'], - reds=['result', 'ri', 'frame', 'nextval', 'dtype', 'value'], - get_printable_location=\ - signature.new_printable_location(self.binfunc), - ) - self.computing = True dtype = self.dtype result = W_NDimArray(self.size, self.shape, dtype) self.values = self.values.get_concrete() @@ -798,9 +798,9 @@ ri = ArrayIterator(result.size) frame = sig.create_frame(self.values, dim=self.dim) value = self.get_identity(sig, frame, shapelen) - nextval = 0. 
+ nextval = sig.eval(frame, self.values).convert_to(dtype) while not frame.done(): - reduce_driver.jit_merge_point(frame=frame, self=self, + axisreduce_driver.jit_merge_point(frame=frame, self=self, value=value, sig=sig, shapelen=shapelen, ri=ri, nextval=nextval, dtype=dtype, diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -344,3 +344,7 @@ def eval(self, frame, arr): return self.right.eval(frame, arr) + + def debug_repr(self): + return 'ReduceSig(%s, %s, %s)' % (self.name, self.left.debug_repr(), + self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -743,6 +743,7 @@ def test_reduceND(self): from numpypy import arange a = arange(15).reshape(5, 3) + assert a.sum() == 105 assert (a.sum(0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -115,16 +115,15 @@ "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) - def define_sum2d(): + def define_axissum(): return """ a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] - b = sum(a,0) - b -> 1 + b = sum(a) #,0) + #b -> 1 """ def test_axissum(self): - py.test.skip("2dsum") - result = self.run("sum2d") + result = self.run("axissum") assert result == 30 self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 2, "int_add": 1, "int_ge": 1, "guard_false": 1, From noreply at buildbot.pypy.org Sat Jan 7 22:49:11 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 7 Jan 2012 22:49:11 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: improve 
the error message Message-ID: <20120107214911.2B15C82BFF@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51128:a65f5ec8c18b Date: 2012-01-07 23:48 +0200 http://bitbucket.org/pypy/pypy/changeset/a65f5ec8c18b/ Log: improve the error message diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -47,6 +47,8 @@ def f(i): interp = InterpreterState(codes[i]) interp.run(space) + if not len(interp.results): + raise Exception("need results") w_res = interp.results[-1] if isinstance(w_res, BaseArray): concr = w_res.get_concrete_or_scalar() From noreply at buildbot.pypy.org Sat Jan 7 23:26:01 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sat, 7 Jan 2012 23:26:01 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: add optional arguments to sum in compile, axissum test now runs in test_zjit Message-ID: <20120107222601.A282D82BFF@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51129:834eda1cb2d7 Date: 2012-01-08 00:24 +0200 http://bitbucket.org/pypy/pypy/changeset/834eda1cb2d7/ Log: add optional arguments to sum in compile, axissum test now runs in test_zjit diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -372,13 +372,17 @@ def execute(self, interp): if self.name in SINGLE_ARG_FUNCTIONS: - if len(self.args) != 1: + if len(self.args) != 1 and self.name != 'sum': raise ArgumentMismatch arr = self.args[0].execute(interp) if not isinstance(arr, BaseArray): raise ArgumentNotAnArray if self.name == "sum": - w_res = arr.descr_sum(interp.space) + if len(self.args)>1: + w_res = arr.descr_sum(interp.space, + self.args[1].execute(interp)) + else: + w_res = arr.descr_sum(interp.space) elif self.name == "prod": w_res = arr.descr_prod(interp.space) 
elif self.name == "max": @@ -416,7 +420,7 @@ ('\]', 'array_right'), ('(->)|[\+\-\*\/]', 'operator'), ('=', 'assign'), - (',', 'coma'), + (',', 'comma'), ('\|', 'pipe'), ('\(', 'paren_left'), ('\)', 'paren_right'), @@ -504,7 +508,7 @@ return SliceConstant(start, stop, step) - def parse_expression(self, tokens): + def parse_expression(self, tokens, accept_comma=False): stack = [] while tokens.remaining(): token = tokens.pop() @@ -524,9 +528,13 @@ stack.append(RangeConstant(tokens.pop().v)) end = tokens.pop() assert end.name == 'pipe' + elif accept_comma and token.name == 'comma': + continue else: tokens.push() break + if accept_comma: + return stack stack.reverse() lhs = stack.pop() while stack: @@ -540,7 +548,7 @@ args = [] tokens.pop() # lparen while tokens.get(0).name != 'paren_right': - args.append(self.parse_expression(tokens)) + args += self.parse_expression(tokens, accept_comma=True) return FunctionCall(name, args) def parse_array_const(self, tokens): @@ -556,7 +564,7 @@ token = tokens.pop() if token.name == 'array_right': return elems - assert token.name == 'coma' + assert token.name == 'comma' def parse_statement(self, tokens): if (tokens.get(0).name == 'identifier' and diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -120,8 +120,8 @@ def define_axissum(): return """ a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] - b = sum(a) #,0) - #b -> 1 + b = sum(a,0) + b -> 1 """ def test_axissum(self): From noreply at buildbot.pypy.org Sun Jan 8 11:56:36 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 11:56:36 +0100 (CET) Subject: [pypy-commit] pypy default: (mikefc) implementation of var and std Message-ID: <20120108105636.D518F82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51130:da8d76b03c38 Date: 2012-01-08 12:56 +0200 
http://bitbucket.org/pypy/pypy/changeset/da8d76b03c38/ Log: (mikefc) implementation of var and std diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -563,6 +563,18 @@ def descr_mean(self, space): return space.div(self.descr_sum(space), space.wrap(self.size)) + def descr_var(self, space): + ''' var = mean( (values - mean(values))**2 ) ''' + w_res = self.descr_sub(space, self.descr_mean(space)) + assert isinstance(w_res, BaseArray) + w_res = w_res.descr_pow(space, space.wrap(2)) + assert isinstance(w_res, BaseArray) + return w_res.descr_mean(space) + + def descr_std(self, space): + ''' std(v) = sqrt(var(v)) ''' + return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)] ) + def descr_nonzero(self, space): if self.size > 1: raise OperationError(space.w_ValueError, space.wrap( @@ -1204,6 +1216,8 @@ all = interp2app(BaseArray.descr_all), any = interp2app(BaseArray.descr_any), dot = interp2app(BaseArray.descr_dot), + var = interp2app(BaseArray.descr_var), + std = interp2app(BaseArray.descr_std), copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -978,6 +978,20 @@ assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] + def test_var(self): + from _numpypy import array + a = array(range(10)) + assert a.var() == 8.25 + a = array([5.0]) + assert a.var() == 0.0 + + def test_std(self): + from _numpypy import array + a = array(range(10)) + assert a.std() == 2.8722813232690143 + a = array([5.0]) + assert a.std() == 0.0 + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): From noreply at buildbot.pypy.org Sun Jan 8 11:59:40 2012 From: noreply at 
buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 11:59:40 +0100 (CET) Subject: [pypy-commit] pypy default: (mikefc) partially import fromnumeric stuff Message-ID: <20120108105940.841BA82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51131:52ed6dd082e1 Date: 2012-01-08 12:59 +0200 http://bitbucket.org/pypy/pypy/changeset/52ed6dd082e1/ Log: (mikefc) partially import fromnumeric stuff diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,1 +1,2 @@ from _numpypy import * +from fromnumeric import * From noreply at buildbot.pypy.org Sun Jan 8 13:13:09 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 13:13:09 +0100 (CET) Subject: [pypy-commit] pypy default: there are asserts that say "this must be in reg". Force it Message-ID: <20120108121309.E274C82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51132:882458b48b05 Date: 2012-01-08 14:12 +0200 http://bitbucket.org/pypy/pypy/changeset/882458b48b05/ Log: there are asserts that say "this must be in reg".
Force it diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -741,7 +741,7 @@ self.xrm.possibly_free_var(op.getarg(0)) def consider_cast_int_to_float(self, op): - loc0 = self.rm.loc(op.getarg(0)) + loc0 = self.rm.force_allocate_reg(op.getarg(0)) loc1 = self.xrm.force_allocate_reg(op.result) self.Perform(op, [loc0], loc1) self.rm.possibly_free_var(op.getarg(0)) From noreply at buildbot.pypy.org Sun Jan 8 13:18:38 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 13:18:38 +0100 (CET) Subject: [pypy-commit] pypy default: missing files Message-ID: <20120108121838.4B6BC82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51133:6bed35212c06 Date: 2012-01-08 14:18 +0200 http://bitbucket.org/pypy/pypy/changeset/6bed35212c06/ Log: missing files diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/fromnumeric.py @@ -0,0 +1,2400 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. +# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) +# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. 
+__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. + + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. + indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. + out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. + + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. 
+ + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplemented('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. + order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. + + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. + + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raised if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose makes the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modifying the + # initial object.
+ >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties. Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. 
Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. 
+ + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... ) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. 
+ axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. + + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. 
+ axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. + + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. 
+ + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. + lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. 
+ + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + Sort by age, then height if ages are equal: + + >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argsort(a, axis=-1, kind='quicksort', order=None): + """ + Returns the indices that would sort an array. + + Examples + -------- + One dimensional array: + + >>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')]) + >>> x + array([(1, 0), (0, 1)], + dtype=[('x', '<i4'), ('y', '<i4')]) + + >>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed.
+ + See Also + -------- + ndarray.argmax, argmin + amax : The maximum value along a given axis. + unravel_index : Convert a flat index into an index tuple. + + Notes + ----- + In case of multiple occurrences of the maximum values, the indices + corresponding to the first occurrence are returned. + + Examples + -------- + >>> a = np.arange(6).reshape(2,3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> np.argmax(a) + 5 + >>> np.argmax(a, axis=0) + array([1, 1, 1]) + >>> np.argmax(a, axis=1) + array([2, 2]) + + >>> b = np.arange(6) + >>> b[1] = 5 + >>> b + array([0, 5, 2, 3, 4, 5]) + >>> np.argmax(b) # Only the first occurrence is returned. + 1 + + """ + if not hasattr(a, 'argmax'): + a = numpypy.array(a) + return a.argmax() + + +def argmin(a, axis=None): + """ + Return the indices of the minimum values along an axis. + + See Also + -------- + argmax : Similar function. Please refer to `numpy.argmax` for detailed + documentation. + + """ + if not hasattr(a, 'argmin'): + a = numpypy.array(a) + return a.argmin() + + +def searchsorted(a, v, side='left'): + """ + Find indices where elements should be inserted to maintain order. + + Find the indices into a sorted array `a` such that, if the corresponding + elements in `v` were inserted before the indices, the order of `a` would + be preserved. + + Parameters + ---------- + a : 1-D array_like + Input array, sorted in ascending order. + v : array_like + Values to insert into `a`. + side : {'left', 'right'}, optional + If 'left', the index of the first suitable location found is given. If + 'right', return the last such index. If there is no suitable + index, return either 0 or N (where N is the length of `a`). + + Returns + ------- + indices : array of ints + Array of insertion points with the same shape as `v`. + + See Also + -------- + sort : Return a sorted copy of an array. + histogram : Produce histogram from 1-D data. + + Notes + ----- + Binary search is used to find the required insertion points. 
+
+    As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing
+    `nan` values. The enhanced sort order is documented in `sort`.
+
+    Examples
+    --------
+    >>> np.searchsorted([1,2,3,4,5], 3)
+    2
+    >>> np.searchsorted([1,2,3,4,5], 3, side='right')
+    3
+    >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3])
+    array([0, 5, 1, 2])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def resize(a, new_shape):
+    """
+    Return a new array with the specified shape.
+
+    If the new array is larger than the original array, then the new
+    array is filled with repeated copies of `a`.  Note that this behavior
+    is different from a.resize(new_shape) which fills with zeros instead
+    of repeated copies of `a`.
+
+    Parameters
+    ----------
+    a : array_like
+        Array to be resized.
+
+    new_shape : int or tuple of int
+        Shape of resized array.
+
+    Returns
+    -------
+    reshaped_array : ndarray
+        The new array is formed from the data in the old array, repeated
+        if necessary to fill out the required number of elements.  The
+        data are repeated in the order that they are stored in memory.
+
+    See Also
+    --------
+    ndarray.resize : resize an array in-place.
+
+    Examples
+    --------
+    >>> a=np.array([[0,1],[2,3]])
+    >>> np.resize(a,(1,4))
+    array([[0, 1, 2, 3]])
+    >>> np.resize(a,(2,4))
+    array([[0, 1, 2, 3],
+           [0, 1, 2, 3]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def squeeze(a):
+    """
+    Remove single-dimensional entries from the shape of an array.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+
+    Returns
+    -------
+    squeezed : ndarray
+        The input array, but with all dimensions of length 1 removed.
+        Whenever possible, a view on `a` is returned.
+
+    Examples
+    --------
+    >>> x = np.array([[[0], [1], [2]]])
+    >>> x.shape
+    (1, 3, 1)
+    >>> np.squeeze(x).shape
+    (3,)
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def diagonal(a, offset=0, axis1=0, axis2=1):
+    """
+    Return specified diagonals.
+
+    If `a` is 2-D, returns the diagonal of `a` with the given offset,
+    i.e., the collection of elements of the form ``a[i, i+offset]``.  If
+    `a` has more than two dimensions, then the axes specified by `axis1`
+    and `axis2` are used to determine the 2-D sub-array whose diagonal is
+    returned.  The shape of the resulting array can be determined by
+    removing `axis1` and `axis2` and appending an index to the right equal
+    to the size of the resulting diagonals.
+
+    Parameters
+    ----------
+    a : array_like
+        Array from which the diagonals are taken.
+    offset : int, optional
+        Offset of the diagonal from the main diagonal.  Can be positive or
+        negative.  Defaults to main diagonal (0).
+    axis1 : int, optional
+        Axis to be used as the first axis of the 2-D sub-arrays from which
+        the diagonals should be taken.  Defaults to first axis (0).
+    axis2 : int, optional
+        Axis to be used as the second axis of the 2-D sub-arrays from
+        which the diagonals should be taken.  Defaults to second axis (1).
+
+    Returns
+    -------
+    array_of_diagonals : ndarray
+        If `a` is 2-D, a 1-D array containing the diagonal is returned.
+        If the dimension of `a` is larger, then an array of diagonals is
+        returned, "packed" from left-most dimension to right-most (e.g.,
+        if `a` is 3-D, then the diagonals are "packed" along rows).
+
+    Raises
+    ------
+    ValueError
+        If the dimension of `a` is less than 2.
+
+    See Also
+    --------
+    diag : MATLAB work-a-like for 1-D and 2-D arrays.
+    diagflat : Create diagonal arrays.
+    trace : Sum along diagonals.
+
+    Examples
+    --------
+    >>> a = np.arange(4).reshape(2,2)
+    >>> a
+    array([[0, 1],
+           [2, 3]])
+    >>> a.diagonal()
+    array([0, 3])
+    >>> a.diagonal(1)
+    array([1])
+
+    A 3-D example:
+
+    >>> a = np.arange(8).reshape(2,2,2); a
+    array([[[0, 1],
+            [2, 3]],
+           [[4, 5],
+            [6, 7]]])
+    >>> a.diagonal(0, # Main diagonals of two arrays created by skipping
+    ...            0, # across the outer(left)-most axis last and
+    ...            1) # the "middle" (row) axis first.
+    array([[0, 6],
+           [1, 7]])
+
+    The sub-arrays whose main diagonals we just obtained; note that each
+    corresponds to fixing the right-most (column) axis, and that the
+    diagonals are "packed" in rows.
+
+    >>> a[:,:,0] # main diagonal is [0 6]
+    array([[0, 2],
+           [4, 6]])
+    >>> a[:,:,1] # main diagonal is [1 7]
+    array([[1, 3],
+           [5, 7]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None):
+    """
+    Return the sum along diagonals of the array.
+
+    If `a` is 2-D, the sum along its diagonal with the given offset
+    is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i.
+
+    If `a` has more than two dimensions, then the axes specified by axis1 and
+    axis2 are used to determine the 2-D sub-arrays whose traces are returned.
+    The shape of the resulting array is the same as that of `a` with `axis1`
+    and `axis2` removed.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array, from which the diagonals are taken.
+    offset : int, optional
+        Offset of the diagonal from the main diagonal.  Can be both positive
+        and negative.  Defaults to 0.
+    axis1, axis2 : int, optional
+        Axes to be used as the first and second axis of the 2-D sub-arrays
+        from which the diagonals should be taken.  Defaults are the first two
+        axes of `a`.
+    dtype : dtype, optional
+        Determines the data-type of the returned array and of the accumulator
+        where the elements are summed.  If dtype has the value None and `a` is
+        of integer type of precision less than the default integer
+        precision, then the default integer precision is used.  Otherwise,
+        the precision is the same as that of `a`.
+    out : ndarray, optional
+        Array into which the output is placed.  Its type is preserved and
+        it must be of the right shape to hold the output.
+
+    Returns
+    -------
+    sum_along_diagonals : ndarray
+        If `a` is 2-D, the sum along the diagonal is returned.  If `a` has
+        larger dimensions, then an array of sums along diagonals is returned.
+
+    See Also
+    --------
+    diag, diagonal, diagflat
+
+    Examples
+    --------
+    >>> np.trace(np.eye(3))
+    3.0
+    >>> a = np.arange(8).reshape((2,2,2))
+    >>> np.trace(a)
+    array([6, 8])
+
+    >>> a = np.arange(24).reshape((2,2,2,3))
+    >>> np.trace(a).shape
+    (2, 3)
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+def ravel(a, order='C'):
+    """
+    Return a flattened array.
+
+    A 1-D array, containing the elements of the input, is returned.  A copy is
+    made only if needed.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.  The elements in ``a`` are read in the order specified by
+        `order`, and packed as a 1-D array.
+    order : {'C','F', 'A', 'K'}, optional
+        The elements of ``a`` are read in this order. 'C' means to view
+        the elements in C (row-major) order. 'F' means to view the elements
+        in Fortran (column-major) order. 'A' means to view the elements
+        in 'F' order if a is Fortran contiguous, 'C' order otherwise.
+        'K' means to view the elements in the order they occur in memory,
+        except for reversing the data when strides are negative.
+        By default, 'C' order is used.
+
+    Returns
+    -------
+    1d_array : ndarray
+        Output of the same dtype as `a`, and of shape ``(a.size(),)``.
+
+    See Also
+    --------
+    ndarray.flat : 1-D iterator over an array.
+    ndarray.flatten : 1-D array copy of the elements of an array
+                      in row-major order.
+
+    Notes
+    -----
+    In row-major order, the row index varies the slowest, and the column
+    index the quickest.  This can be generalized to multiple dimensions,
+    where row-major order implies that the index along the first axis
+    varies slowest, and the index along the last quickest.  The opposite holds
+    for Fortran-, or column-major, mode.
+
+    Examples
+    --------
+    It is equivalent to ``reshape(-1, order=order)``.
+
+    >>> x = np.array([[1, 2, 3], [4, 5, 6]])
+    >>> print np.ravel(x)
+    [1 2 3 4 5 6]
+
+    >>> print x.reshape(-1)
+    [1 2 3 4 5 6]
+
+    >>> print np.ravel(x, order='F')
+    [1 4 2 5 3 6]
+
+    When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering:
+
+    >>> print np.ravel(x.T)
+    [1 4 2 5 3 6]
+    >>> print np.ravel(x.T, order='A')
+    [1 2 3 4 5 6]
+
+    When ``order`` is 'K', it will preserve orderings that are neither 'C'
+    nor 'F', but won't reverse axes:
+
+    >>> a = np.arange(3)[::-1]; a
+    array([2, 1, 0])
+    >>> a.ravel(order='C')
+    array([2, 1, 0])
+    >>> a.ravel(order='K')
+    array([2, 1, 0])
+
+    >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a
+    array([[[ 0,  2,  4],
+            [ 1,  3,  5]],
+           [[ 6,  8, 10],
+            [ 7,  9, 11]]])
+    >>> a.ravel(order='C')
+    array([ 0,  2,  4,  1,  3,  5,  6,  8, 10,  7,  9, 11])
+    >>> a.ravel(order='K')
+    array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def nonzero(a):
+    """
+    Return the indices of the elements that are non-zero.
+
+    Returns a tuple of arrays, one for each dimension of `a`, containing
+    the indices of the non-zero elements in that dimension. The
+    corresponding non-zero values can be obtained with::
+
+        a[nonzero(a)]
+
+    To group the indices by element, rather than dimension, use::
+
+        transpose(nonzero(a))
+
+    The result of this is always a 2-D array, with a row for
+    each non-zero element.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.
+
+    Returns
+    -------
+    tuple_of_arrays : tuple
+        Indices of elements that are non-zero.
+
+    See Also
+    --------
+    flatnonzero :
+        Return indices that are non-zero in the flattened version of the input
+        array.
+    ndarray.nonzero :
+        Equivalent ndarray method.
+    count_nonzero :
+        Counts the number of non-zero elements in the input array.
+
+    Examples
+    --------
+    >>> x = np.eye(3)
+    >>> x
+    array([[ 1.,  0.,  0.],
+           [ 0.,  1.,  0.],
+           [ 0.,  0.,  1.]])
+    >>> np.nonzero(x)
+    (array([0, 1, 2]), array([0, 1, 2]))
+
+    >>> x[np.nonzero(x)]
+    array([ 1.,  1.,  1.])
+    >>> np.transpose(np.nonzero(x))
+    array([[0, 0],
+           [1, 1],
+           [2, 2]])
+
+    A common use for ``nonzero`` is to find the indices of an array, where
+    a condition is True.  Given an array `a`, the condition `a` > 3 is a
+    boolean array and since False is interpreted as 0, np.nonzero(a > 3)
+    yields the indices of the `a` where the condition is true.
+
+    >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]])
+    >>> a > 3
+    array([[False, False, False],
+           [ True,  True,  True],
+           [ True,  True,  True]], dtype=bool)
+    >>> np.nonzero(a > 3)
+    (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+    The ``nonzero`` method of the boolean array can also be called.
+
+    >>> (a > 3).nonzero()
+    (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def shape(a):
+    """
+    Return the shape of an array.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.
+
+    Returns
+    -------
+    shape : tuple of ints
+        The elements of the shape tuple give the lengths of the
+        corresponding array dimensions.
+
+    See Also
+    --------
+    alen
+    ndarray.shape : Equivalent array method.
+
+    Examples
+    --------
+    >>> np.shape(np.eye(3))
+    (3, 3)
+    >>> np.shape([[1, 2]])
+    (1, 2)
+    >>> np.shape([0])
+    (1,)
+    >>> np.shape(0)
+    ()
+
+    >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
+    >>> np.shape(a)
+    (2,)
+    >>> a.shape
+    (2,)
+
+    """
+    if not hasattr(a, 'shape'):
+        a = numpypy.array(a)
+    return a.shape
+
+
+def compress(condition, a, axis=None, out=None):
+    """
+    Return selected slices of an array along given axis.
+
+    When working along a given axis, a slice along that axis is returned in
+    `output` for each index where `condition` evaluates to True.  When
+    working on a 1-D array, `compress` is equivalent to `extract`.
+
+    Parameters
+    ----------
+    condition : 1-D array of bools
+        Array that selects which entries to return. If len(condition)
+        is less than the size of `a` along the given axis, then output is
+        truncated to the length of the condition array.
+    a : array_like
+        Array from which to extract a part.
+    axis : int, optional
+        Axis along which to take slices. If None (default), work on the
+        flattened array.
+    out : ndarray, optional
+        Output array.  Its type is preserved and it must be of the right
+        shape to hold the output.
+
+    Returns
+    -------
+    compressed_array : ndarray
+        A copy of `a` without the slices along axis for which `condition`
+        is false.
+
+    See Also
+    --------
+    take, choose, diag, diagonal, select
+    ndarray.compress : Equivalent method.
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Examples
+    --------
+    >>> a = np.array([[1, 2], [3, 4], [5, 6]])
+    >>> a
+    array([[1, 2],
+           [3, 4],
+           [5, 6]])
+    >>> np.compress([0, 1], a, axis=0)
+    array([[3, 4]])
+    >>> np.compress([False, True, True], a, axis=0)
+    array([[3, 4],
+           [5, 6]])
+    >>> np.compress([False, True], a, axis=1)
+    array([[2],
+           [4],
+           [6]])
+
+    Working on the flattened array does not return slices along an axis but
+    selects elements.
+
+    >>> np.compress([False, True], a)
+    array([2])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def clip(a, a_min, a_max, out=None):
+    """
+    Clip (limit) the values in an array.
+
+    Given an interval, values outside the interval are clipped to
+    the interval edges.  For example, if an interval of ``[0, 1]``
+    is specified, values smaller than 0 become 0, and values larger
+    than 1 become 1.
+
+    Parameters
+    ----------
+    a : array_like
+        Array containing elements to clip.
+    a_min : scalar or array_like
+        Minimum value.
+    a_max : scalar or array_like
+        Maximum value.  If `a_min` or `a_max` are array_like, then they will
+        be broadcasted to the shape of `a`.
+    out : ndarray, optional
+        The results will be placed in this array. It may be the input
+        array for in-place clipping.  `out` must be of the right shape
+        to hold the output.  Its type is preserved.
+
+    Returns
+    -------
+    clipped_array : ndarray
+        An array with the elements of `a`, but where values
+        < `a_min` are replaced with `a_min`, and those > `a_max`
+        with `a_max`.
+
+    See Also
+    --------
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Examples
+    --------
+    >>> a = np.arange(10)
+    >>> np.clip(a, 1, 8)
+    array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8])
+    >>> a
+    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+    >>> np.clip(a, 3, 6, out=a)
+    array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
+    >>> a = np.arange(10)
+    >>> a
+    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+    >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8)
+    array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def sum(a, axis=None, dtype=None, out=None):
+    """
+    Sum of array elements over a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Elements to sum.
+    axis : integer, optional
+        Axis over which the sum is taken. By default `axis` is None,
+        and all elements are summed.
+    dtype : dtype, optional
+        The type of the returned array and of the accumulator in which
+        the elements are summed.  By default, the dtype of `a` is used.
+        An exception is when `a` has an integer type with less precision
+        than the default platform integer.  In that case, the default
+        platform integer is used instead.
+    out : ndarray, optional
+        Array into which the output is placed.  By default, a new array is
+        created.  If `out` is given, it must be of the appropriate shape
+        (the shape of `a` with `axis` removed, i.e.,
+        ``numpy.delete(a.shape, axis)``).  Its type is preserved. See
+        `doc.ufuncs` (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    sum_along_axis : ndarray
+        An array with the same shape as `a`, with the specified
+        axis removed.  If `a` is a 0-d array, or if `axis` is None, a scalar
+        is returned.  If an output array is specified, a reference to
+        `out` is returned.
+
+    See Also
+    --------
+    ndarray.sum : Equivalent method.
+
+    cumsum : Cumulative sum of array elements.
+
+    trapz : Integration of array values using the composite trapezoidal rule.
+
+    mean, average
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow.
+
+    Examples
+    --------
+    >>> np.sum([0.5, 1.5])
+    2.0
+    >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
+    1
+    >>> np.sum([[0, 1], [0, 5]])
+    6
+    >>> np.sum([[0, 1], [0, 5]], axis=0)
+    array([0, 6])
+    >>> np.sum([[0, 1], [0, 5]], axis=1)
+    array([1, 5])
+
+    If the accumulator is too small, overflow occurs:
+
+    >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
+    -128
+
+    """
+    if not hasattr(a, "sum"):
+        a = numpypy.array(a)
+    # note: the 'axis', 'dtype' and 'out' arguments are not supported yet
+    return a.sum()
+
+
+def product(a, axis=None, dtype=None, out=None):
+    """
+    Return the product of array elements over a given axis.
+
+    See Also
+    --------
+    prod : equivalent function; see for details.
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def sometrue(a, axis=None, out=None):
+    """
+    Check whether some values are true.
+
+    Refer to `any` for full documentation.
+
+    See Also
+    --------
+    any : equivalent function
+
+    """
+    if not hasattr(a, 'any'):
+        a = numpypy.array(a)
+    # note: the 'axis' and 'out' arguments are not supported yet
+    return a.any()
+
+
+def alltrue(a, axis=None, out=None):
+    """
+    Check if all elements of input array are true.
+
+    See Also
+    --------
+    numpy.all : Equivalent function; see for details.
+
+    """
+    if not hasattr(a, 'all'):
+        a = numpypy.array(a)
+    # note: the 'axis' and 'out' arguments are not supported yet
+    return a.all()
+
+
+def any(a, axis=None, out=None):
+    """
+    Test whether any array element along a given axis evaluates to True.
+
+    Returns single boolean unless `axis` is not ``None``
+
+    Parameters
+    ----------
+    a : array_like
+        Input array or object that can be converted to an array.
+    axis : int, optional
+        Axis along which a logical OR is performed.  The default
+        (`axis` = `None`) is to perform a logical OR over a flattened
+        input array. `axis` may be negative, in which case it counts
+        from the last to the first axis.
+    out : ndarray, optional
+        Alternate output array in which to place the result.  It must have
+        the same shape as the expected output and its type is preserved
+        (e.g., if it is of type float, then it will remain so, returning
+        1.0 for True and 0.0 for False, regardless of the type of `a`).
+        See `doc.ufuncs` (Section "Output arguments") for details.
+
+    Returns
+    -------
+    any : bool or ndarray
+        A new boolean or `ndarray` is returned unless `out` is specified,
+        in which case a reference to `out` is returned.
+
+    See Also
+    --------
+    ndarray.any : equivalent method
+
+    all : Test whether all elements along a given axis evaluate to True.
+
+    Notes
+    -----
+    Not a Number (NaN), positive infinity and negative infinity evaluate
+    to `True` because these are not equal to zero.
+
+    Examples
+    --------
+    >>> np.any([[True, False], [True, True]])
+    True
+
+    >>> np.any([[True, False], [False, False]], axis=0)
+    array([ True, False], dtype=bool)
+
+    >>> np.any([-1, 0, 5])
+    True
+
+    >>> np.any(np.nan)
+    True
+
+    >>> o=np.array([False])
+    >>> z=np.any([-1, 4, 5], out=o)
+    >>> z, o
+    (array([ True], dtype=bool), array([ True], dtype=bool))
+    >>> # Check now that z is a reference to o
+    >>> z is o
+    True
+    >>> id(z), id(o) # identity of z and o       # doctest: +SKIP
+    (191614240, 191614240)
+
+    """
+    if not hasattr(a, 'any'):
+        a = numpypy.array(a)
+    # note: the 'axis' and 'out' arguments are not supported yet
+    return a.any()
+
+
+def all(a, axis=None, out=None):
+    """
+    Test whether all array elements along a given axis evaluate to True.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array or object that can be converted to an array.
+    axis : int, optional
+        Axis along which a logical AND is performed.
+        The default (`axis` = `None`) is to perform a logical AND
+        over a flattened input array.  `axis` may be negative, in which
+        case it counts from the last to the first axis.
+    out : ndarray, optional
+        Alternate output array in which to place the result.
+        It must have the same shape as the expected output and its
+        type is preserved (e.g., if ``dtype(out)`` is float, the result
+        will consist of 0.0's and 1.0's).  See `doc.ufuncs` (Section
+        "Output arguments") for more details.
+
+    Returns
+    -------
+    all : ndarray, bool
+        A new boolean or array is returned unless `out` is specified,
+        in which case a reference to `out` is returned.
+
+    See Also
+    --------
+    ndarray.all : equivalent method
+
+    any : Test whether any element along a given axis evaluates to True.
+
+    Notes
+    -----
+    Not a Number (NaN), positive infinity and negative infinity
+    evaluate to `True` because these are not equal to zero.
+
+    Examples
+    --------
+    >>> np.all([[True,False],[True,True]])
+    False
+
+    >>> np.all([[True,False],[True,True]], axis=0)
+    array([ True, False], dtype=bool)
+
+    >>> np.all([-1, 4, 5])
+    True
+
+    >>> np.all([1.0, np.nan])
+    True
+
+    >>> o=np.array([False])
+    >>> z=np.all([-1, 4, 5], out=o)
+    >>> id(z), id(o), z                             # doctest: +SKIP
+    (28293632, 28293632, array([ True], dtype=bool))
+
+    """
+    if not hasattr(a, 'all'):
+        a = numpypy.array(a)
+    # note: the 'axis' and 'out' arguments are not supported yet
+    return a.all()
+
+
+def cumsum(a, axis=None, dtype=None, out=None):
+    """
+    Return the cumulative sum of the elements along a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.
+    axis : int, optional
+        Axis along which the cumulative sum is computed. The default
+        (None) is to compute the cumsum over the flattened array.
+    dtype : dtype, optional
+        Type of the returned array and of the accumulator in which the
+        elements are summed.  If `dtype` is not specified, it defaults
+        to the dtype of `a`, unless `a` has an integer dtype with a
+        precision less than that of the default platform integer.  In
+        that case, the default platform integer is used.
+    out : ndarray, optional
+        Alternative output array in which to place the result. It must
+        have the same shape and buffer length as the expected output
+        but the type will be cast if necessary. See `doc.ufuncs`
+        (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    cumsum_along_axis : ndarray.
+        A new array holding the result is returned unless `out` is
+        specified, in which case a reference to `out` is returned. The
+        result has the same size as `a`, and the same shape as `a` if
+        `axis` is not None or `a` is a 1-d array.
+
+
+    See Also
+    --------
+    sum : Sum array elements.
+
+    trapz : Integration of array values using the composite trapezoidal rule.
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow.
+
+    Examples
+    --------
+    >>> a = np.array([[1,2,3], [4,5,6]])
+    >>> a
+    array([[1, 2, 3],
+           [4, 5, 6]])
+    >>> np.cumsum(a)
+    array([ 1,  3,  6, 10, 15, 21])
+    >>> np.cumsum(a, dtype=float)    # specifies type of output value(s)
+    array([  1.,   3.,   6.,  10.,  15.,  21.])
+
+    >>> np.cumsum(a,axis=0)     # sum over rows for each of the 3 columns
+    array([[1, 2, 3],
+           [5, 7, 9]])
+    >>> np.cumsum(a,axis=1)     # sum over columns for each of the 2 rows
+    array([[ 1,  3,  6],
+           [ 4,  9, 15]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def cumproduct(a, axis=None, dtype=None, out=None):
+    """
+    Return the cumulative product over the given axis.
+
+
+    See Also
+    --------
+    cumprod : equivalent function; see for details.
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def ptp(a, axis=None, out=None):
+    """
+    Range of values (maximum - minimum) along an axis.
+
+    The name of the function comes from the acronym for 'peak to peak'.
+
+    Parameters
+    ----------
+    a : array_like
+        Input values.
+    axis : int, optional
+        Axis along which to find the peaks.  By default, flatten the
+        array.
+    out : array_like
+        Alternative output array in which to place the result.  It must
+        have the same shape and buffer length as the expected output,
+        but the type of the output values will be cast if necessary.
+
+    Returns
+    -------
+    ptp : ndarray
+        A new array holding the result, unless `out` was
+        specified, in which case a reference to `out` is returned.
+
+    Examples
+    --------
+    >>> x = np.arange(4).reshape((2,2))
+    >>> x
+    array([[0, 1],
+           [2, 3]])
+
+    >>> np.ptp(x, axis=0)
+    array([2, 2])
+
+    >>> np.ptp(x, axis=1)
+    array([1, 1])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def amax(a, axis=None, out=None):
+    """
+    Return the maximum of an array or maximum along an axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    axis : int, optional
+        Axis along which to operate.  By default flattened input is used.
+    out : ndarray, optional
+        Alternate output array in which to place the result.  Must be of
+        the same shape and buffer length as the expected output.  See
+        `doc.ufuncs` (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    amax : ndarray or scalar
+        Maximum of `a`. If `axis` is None, the result is a scalar value.
+        If `axis` is given, the result is an array of dimension
+        ``a.ndim - 1``.
+
+    See Also
+    --------
+    nanmax : NaN values are ignored instead of being propagated.
+    fmax : same behavior as the C99 fmax function.
+    argmax : indices of the maximum values.
+
+    Notes
+    -----
+    NaN values are propagated, that is if at least one item is NaN, the
+    corresponding max value will be NaN as well.  To ignore NaN values
+    (MATLAB behavior), please use nanmax.
+
+    Examples
+    --------
+    >>> a = np.arange(4).reshape((2,2))
+    >>> a
+    array([[0, 1],
+           [2, 3]])
+    >>> np.amax(a)
+    3
+    >>> np.amax(a, axis=0)
+    array([2, 3])
+    >>> np.amax(a, axis=1)
+    array([1, 3])
+
+    >>> b = np.arange(5, dtype=np.float)
+    >>> b[2] = np.NaN
+    >>> np.amax(b)
+    nan
+    >>> np.nanmax(b)
+    4.0
+
+    """
+    if not hasattr(a, "max"):
+        a = numpypy.array(a)
+    # note: the 'axis' and 'out' arguments are not supported yet
+    return a.max()
+
+
+def amin(a, axis=None, out=None):
+    """
+    Return the minimum of an array or minimum along an axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    axis : int, optional
+        Axis along which to operate.  By default a flattened input is used.
+    out : ndarray, optional
+        Alternative output array in which to place the result.  Must
+        be of the same shape and buffer length as the expected output.
+        See `doc.ufuncs` (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    amin : ndarray
+        A new array or a scalar array with the result.
+
+    See Also
+    --------
+    nanmin: nan values are ignored instead of being propagated
+    fmin: same behavior as the C99 fmin function
+    argmin: Return the indices of the minimum values.
+
+    amax, nanmax, fmax
+
+    Notes
+    -----
+    NaN values are propagated, that is if at least one item is nan, the
+    corresponding min value will be nan as well. To ignore NaN values (matlab
+    behavior), please use nanmin.
+
+    Examples
+    --------
+    >>> a = np.arange(4).reshape((2,2))
+    >>> a
+    array([[0, 1],
+           [2, 3]])
+    >>> np.amin(a)           # Minimum of the flattened array
+    0
+    >>> np.amin(a, axis=0)         # Minima along the first axis
+    array([0, 1])
+    >>> np.amin(a, axis=1)         # Minima along the second axis
+    array([0, 2])
+
+    >>> b = np.arange(5, dtype=np.float)
+    >>> b[2] = np.NaN
+    >>> np.amin(b)
+    nan
+    >>> np.nanmin(b)
+    0.0
+
+    """
+    # amin() is equivalent to min()
+    if not hasattr(a, 'min'):
+        a = numpypy.array(a)
+    # note: the 'axis' and 'out' arguments are not supported yet
+    return a.min()
+
+def alen(a):
+    """
+    Return the length of the first dimension of the input array.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.
+
+    Returns
+    -------
+    l : int
+        Length of the first dimension of `a`.
+
+    See Also
+    --------
+    shape, size
+
+    Examples
+    --------
+    >>> a = np.zeros((7,4,5))
+    >>> a.shape[0]
+    7
+    >>> np.alen(a)
+    7
+
+    """
+    if not hasattr(a, 'shape'):
+        a = numpypy.array(a)
+    return a.shape[0]
+
+
+def prod(a, axis=None, dtype=None, out=None):
+    """
+    Return the product of array elements over a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    axis : int, optional
+        Axis over which the product is taken.  By default, the product
+        of all elements is calculated.
+    dtype : data-type, optional
+        The data-type of the returned array, as well as of the accumulator
+        in which the elements are multiplied.  By default, if `a` is of
+        integer type, `dtype` is the default platform integer. (Note: if
+        the type of `a` is unsigned, then so is `dtype`.)  Otherwise,
+        the dtype is the same as that of `a`.
+    out : ndarray, optional
+        Alternative output array in which to place the result. It must have
+        the same shape as the expected output, but the type of the
+        output values will be cast if necessary.
+
+    Returns
+    -------
+    product_along_axis : ndarray, see `dtype` parameter above.
+        An array shaped as `a` but with the specified axis removed.
+        Returns a reference to `out` if specified.
+
+    See Also
+    --------
+    ndarray.prod : equivalent method
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow.  That means that, on a 32-bit platform:
+
+    >>> x = np.array([536870910, 536870910, 536870910, 536870910])
+    >>> np.prod(x) #random
+    16
+
+    Examples
+    --------
+    By default, calculate the product of all elements:
+
+    >>> np.prod([1.,2.])
+    2.0
+
+    Even when the input array is two-dimensional:
+
+    >>> np.prod([[1.,2.],[3.,4.]])
+    24.0
+
+    But we can also specify the axis over which to multiply:
+
+    >>> np.prod([[1.,2.],[3.,4.]], axis=1)
+    array([  2.,  12.])
+
+    If the type of `x` is unsigned, then the output type is
+    the unsigned platform integer:
+
+    >>> x = np.array([1, 2, 3], dtype=np.uint8)
+    >>> np.prod(x).dtype == np.uint
+    True
+
+    If `x` is of a signed integer type, then the output type
+    is the default platform integer:
+
+    >>> x = np.array([1, 2, 3], dtype=np.int8)
+    >>> np.prod(x).dtype == np.int
+    True
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def cumprod(a, axis=None, dtype=None, out=None):
+    """
+    Return the cumulative product of elements along a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.
+    axis : int, optional
+        Axis along which the cumulative product is computed.  By default
+        the input is flattened.
+    dtype : dtype, optional
+        Type of the returned array, as well as of the accumulator in which
+        the elements are multiplied.  If *dtype* is not specified, it
+        defaults to the dtype of `a`, unless `a` has an integer dtype with
+        a precision less than that of the default platform integer.  In
+        that case, the default platform integer is used instead.
+    out : ndarray, optional
+        Alternative output array in which to place the result. It must
+        have the same shape and buffer length as the expected output
+        but the type of the resulting values will be cast if necessary.
+
+    Returns
+    -------
+    cumprod : ndarray
+        A new array holding the result is returned unless `out` is
+        specified, in which case a reference to out is returned.
+
+    See Also
+    --------
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow.
+
+    Examples
+    --------
+    >>> a = np.array([1,2,3])
+    >>> np.cumprod(a) # intermediate results 1, 1*2
+    ...               # total product 1*2*3 = 6
+    array([1, 2, 6])
+    >>> a = np.array([[1, 2, 3], [4, 5, 6]])
+    >>> np.cumprod(a, dtype=float) # specify type of output
+    array([   1.,    2.,    6.,   24.,  120.,  720.])
+
+    The cumulative product for each column (i.e., over the rows) of `a`:
+
+    >>> np.cumprod(a, axis=0)
+    array([[ 1,  2,  3],
+           [ 4, 10, 18]])
+
+    The cumulative product for each row (i.e. over the columns) of `a`:
+
+    >>> np.cumprod(a,axis=1)
+    array([[  1,   2,   6],
+           [  4,  20, 120]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def ndim(a):
+    """
+    Return the number of dimensions of an array.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.  If it is not already an ndarray, a conversion is
+        attempted.
+
+    Returns
+    -------
+    number_of_dimensions : int
+        The number of dimensions in `a`.  Scalars are zero-dimensional.
+
+    See Also
+    --------
+    ndarray.ndim : equivalent method
+    shape : dimensions of array
+    ndarray.shape : dimensions of array
+
+    Examples
+    --------
+    >>> np.ndim([[1,2,3],[4,5,6]])
+    2
+    >>> np.ndim(np.array([[1,2,3],[4,5,6]]))
+    2
+    >>> np.ndim(1)
+    0
+
+    """
+    if not hasattr(a, 'ndim'):
+        a = numpypy.array(a)
+    return a.ndim
+
+
+def rank(a):
+    """
+    Return the number of dimensions of an array.
+
+    If `a` is not already an array, a conversion is attempted.
+    Scalars are zero dimensional.
+
+    Parameters
+    ----------
+    a : array_like
+        Array whose number of dimensions is desired. If `a` is not an array,
+        a conversion is attempted.
+
+    Returns
+    -------
+    number_of_dimensions : int
+        The number of dimensions in the array.
+ + See Also + -------- + ndim : equivalent function + ndarray.ndim : equivalent property + shape : dimensions of array + ndarray.shape : dimensions of array + + Notes + ----- + In the old Numeric package, `rank` was the term used for the number of + dimensions, but in Numpy `ndim` is used instead. + + Examples + -------- + >>> np.rank([1,2,3]) + 1 + >>> np.rank(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.rank(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def size(a, axis=None): + """ + Return the number of elements along a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which the elements are counted. By default, give + the total number of elements. + + Returns + ------- + element_count : int + Number of elements along the specified axis. + + See Also + -------- + shape : dimensions of array + ndarray.shape : dimensions of array + ndarray.size : number of elements in array + + Examples + -------- + >>> a = np.array([[1,2,3],[4,5,6]]) + >>> np.size(a) + 6 + >>> np.size(a,1) + 3 + >>> np.size(a,0) + 2 + + """ + raise NotImplementedError('Waiting on interp level method') + + +def around(a, decimals=0, out=None): + """ + Evenly round to the given number of decimals. + + Parameters + ---------- + a : array_like + Input data. + decimals : int, optional + Number of decimal places to round to (default: 0). If + decimals is negative, it specifies the number of positions to + the left of the decimal point. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the output + values will be cast if necessary. See `doc.ufuncs` (Section + "Output arguments") for details. + + Returns + ------- + rounded_array : ndarray + An array of the same type as `a`, containing the rounded values. + Unless `out` was specified, a new array is created. A reference to + the result is returned. 
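The `decimals` behaviour described above, including negative values, can be previewed with Python 3's built-in round(), which uses the same round-half-to-even ("banker's") rule numpy documents for around:

```python
# Python 3's round() rounds ties to the even neighbour, like np.around,
# and a negative ndigits rounds to the left of the decimal point.
halves = [0.5, 1.5, 2.5, 3.5, 4.5]
rounded = [round(v) for v in halves]   # ties go to the even neighbour
shifted = round(1234, -2)              # two positions left of the point
```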
+ + The real and imaginary parts of complex numbers are rounded + separately. The result of rounding a float is a float. + + See Also + -------- + ndarray.round : equivalent method + + ceil, fix, floor, rint, trunc + + + Notes + ----- + For values exactly halfway between rounded decimal values, Numpy + rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, + -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due + to the inexact representation of decimal fractions in the IEEE + floating point standard [1]_ and errors introduced when scaling + by powers of ten. + + References + ---------- + .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan, + http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF + .. [2] "How Futile are Mindless Assessments of + Roundoff in Floating-Point Computation?", William Kahan, + http://www.cs.berkeley.edu/~wkahan/Mindless.pdf + + Examples + -------- + >>> np.around([0.37, 1.64]) + array([ 0., 2.]) + >>> np.around([0.37, 1.64], decimals=1) + array([ 0.4, 1.6]) + >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value + array([ 0., 2., 2., 4., 4.]) + >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned + array([ 1, 2, 3, 11]) + >>> np.around([1,2,3,11], decimals=-1) + array([ 0, 0, 0, 10]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def round_(a, decimals=0, out=None): + """ + Round an array to the given number of decimals. + + Refer to `around` for full documentation. + + See Also + -------- + around : equivalent function + + """ + raise NotImplementedError('Waiting on interp level method') + + +def mean(a, axis=None, dtype=None, out=None): + """ + Compute the arithmetic mean along the specified axis. + + Returns the average of the array elements. The average is taken over + the flattened array by default, otherwise over the specified axis. + `float64` intermediate and return values are used for integer inputs. 
+ + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. 
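The float32 drift warned about above can be reproduced without numpy by forcing every intermediate sum through IEEE single precision. A stdlib-only sketch; naive left-to-right accumulation is assumed, so it models the docstring's 512*512 example rather than any particular numpy version:

```python
import struct

def to_f32(x):
    # Round a Python float (a double) to the nearest IEEE-754 float32.
    return struct.unpack('f', struct.pack('f', x))[0]

def mean_f32(values):
    # Naive accumulation entirely in float32 precision.
    acc = 0.0
    for v in values:
        acc = to_f32(acc + to_f32(v))
    return to_f32(acc / len(values))

data = [1.0] * (512 * 512) + [0.1] * (512 * 512)
lossy = mean_f32(data)           # noticeably below the true mean of 0.55
exact = sum(data) / len(data)    # float64 accumulator stays accurate
```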
+ + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean() + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. 
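A stdlib-only sketch of the computation, including the ``N - ddof`` divisor described above; it handles one-dimensional input only and is not the numpypy implementation:

```python
import math

def std(values, ddof=0):
    # Square root of the mean squared deviation; the divisor is N - ddof.
    n = len(values)
    m = sum(values) / n
    return math.sqrt(sum((x - m) ** 2 for x in values) / (n - ddof))

std([1.0, 2.0, 3.0, 4.0])          # population std, sqrt(1.25)
std([1.0, 2.0, 3.0, 4.0], ddof=1)  # sample ("unbiased-variance") variant
```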
+ + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. 
+ + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Array containing numbers whose variance is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the variance is computed. The default is to compute + the variance of the flattened array. + dtype : data-type, optional + Type to use in computing the variance. For arrays of integer type + the default is `float32`; for arrays of float types it is the same as + the array type. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output, but the type is cast if + necessary. + ddof : int, optional + "Delta Degrees of Freedom": the divisor used in the calculation is + ``N - ddof``, where ``N`` represents the number of elements. By + default `ddof` is zero. + + Returns + ------- + variance : ndarray, see dtype parameter above + If ``out=None``, returns a new array containing the variance; + otherwise, a reference to the output array is returned. + + See Also + -------- + std : Standard deviation + mean : Average + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e., ``var = mean(abs(x - x.mean())**2)``. + + The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. + If, however, `ddof` is specified, the divisor ``N - ddof`` is used + instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of a hypothetical infinite population. + ``ddof=0`` provides a maximum likelihood estimate of the variance for + normally distributed variables. + + Note that for complex numbers, the absolute value is taken before + squaring, so that the result is always real and nonnegative. 
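The formulas in the Notes translate directly into a few lines of Python. A hedged one-dimensional sketch that also shows the absolute-value step for complex input; the helper is illustrative, not the numpypy implementation:

```python
def var(values, ddof=0):
    # mean(|x - mean|**2), with an N - ddof divisor when ddof is given;
    # abs() keeps the result real and nonnegative for complex input.
    n = len(values)
    m = sum(values) / n
    return sum(abs(x - m) ** 2 for x in values) / (n - ddof)

var([1.0, 2.0, 3.0, 4.0])          # 1.25
var([1.0, 2.0, 3.0, 4.0], ddof=1)  # 5/3, the unbiased estimate
var([1 + 1j, 1 - 1j])              # real result for complex input
```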
+ + For floating-point input, the variance is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for `float32` (see example + below). Specifying a higher-accuracy accumulator using the ``dtype`` + keyword can alleviate this issue. + + Examples + -------- + >>> a = np.array([[1,2],[3,4]]) + >>> np.var(a) + 1.25 + >>> np.var(a,0) + array([ 1., 1.]) + >>> np.var(a,1) + array([ 0.25, 0.25]) + + In single precision, var() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.var(a) + 0.20405951142311096 + + Computing the standard deviation in float64 is more accurate: + + >>> np.var(a, dtype=np.float64) + 0.20249999932997387 + >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 + 0.20250000000000001 + + """ + if not hasattr(a, "var"): + a = numpypy.array(a) + return a.var() diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/test/test_fromnumeric.py @@ -0,0 +1,109 @@ + +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + +class AppTestFromNumeric(BaseNumpyAppTest): + def test_argmax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, argmax + a = arange(6).reshape((2,3)) + assert argmax(a) == 5 + # assert (argmax(a, axis=0) == array([1, 1, 1])).all() + # assert (argmax(a, axis=1) == array([2, 2])).all() + b = arange(6) + b[1] = 5 + assert argmax(b) == 1 + + def test_argmin(self): + # tests adapted from test_argmax + from numpypy import array, arange, argmin + a = arange(6).reshape((2,3)) + assert argmin(a) == 0 + # assert (argmax(a, axis=0) == array([0, 0, 0])).all() + # assert (argmax(a, axis=1) == array([0, 0])).all() + b = arange(6) + b[1] = 0 + assert argmin(b) == 0 + + def test_shape(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy 
import array, identity, shape + assert shape(identity(3)) == (3, 3) + assert shape([[1, 2]]) == (1, 2) + assert shape([0]) == (1,) + assert shape(0) == () + # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + # assert shape(a) == (2,) + + def test_sum(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, sum, ones + assert sum([0.5, 1.5])== 2.0 + assert sum([[0, 1], [0, 5]]) == 6 + # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 + # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + # If the accumulator is too small, overflow occurs: + # assert ones(128, dtype=int8).sum(dtype=int8) == -128 + + def test_amin(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amin + a = arange(4).reshape((2,2)) + assert amin(a) == 0 + # # Minima along the first axis + # assert (amin(a, axis=0) == array([0, 1])).all() + # # Minima along the second axis + # assert (amin(a, axis=1) == array([0, 2])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amin(b) == nan + # assert nanmin(b) == 0.0 + + def test_amax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amax + a = arange(4).reshape((2,2)) + assert amax(a) == 3 + # assert (amax(a, axis=0) == array([2, 3])).all() + # assert (amax(a, axis=1) == array([1, 3])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amax(b) == nan + # assert nanmax(b) == 4.0 + + def test_alen(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, zeros, alen + a = zeros((7,4,5)) + assert a.shape[0] == 7 + assert alen(a) == 7 + + def test_ndim(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, ndim + assert ndim([[1,2,3],[4,5,6]]) == 2 + assert ndim(array([[1,2,3],[4,5,6]])) == 2 + 
assert ndim(1) == 0 + + def test_rank(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, rank + assert rank([[1,2,3],[4,5,6]]) == 2 + assert rank(array([[1,2,3],[4,5,6]])) == 2 + assert rank(1) == 0 + + def test_var(self): + from numpypy import array, var + a = array([[1,2],[3,4]]) + assert var(a) == 1.25 + # assert (np.var(a,0) == array([ 1., 1.])).all() + # assert (np.var(a,1) == array([ 0.25, 0.25])).all() + + def test_std(self): + from numpypy import array, std + a = array([[1, 2], [3, 4]]) + assert std(a) == 1.1180339887498949 + # assert (std(a, axis=0) == array([ 1., 1.])).all() + # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() From noreply at buildbot.pypy.org Sun Jan 8 14:33:03 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 14:33:03 +0100 (CET) Subject: [pypy-commit] pypy default: minor tests and fixes Message-ID: <20120108133303.5F74D82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51134:c14c5276c0e1 Date: 2012-01-08 15:31 +0200 http://bitbucket.org/pypy/pypy/changeset/c14c5276c0e1/ Log: minor tests and fixes diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -48,6 +48,7 @@ 'int_': 'interp_boxes.W_LongBox', 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', + 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', } diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -78,6 +78,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_pow = _binop_impl("power") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = 
_binop_impl("less") @@ -103,7 +104,7 @@ _attrs_ = () class W_IntegerBox(W_NumberBox): - pass + descr__new__, get_dtype = new_dtype_getter("long") class W_SignedIntegerBox(W_IntegerBox): pass @@ -170,6 +171,7 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __pow__ = interp2app(W_GenericBox.descr_pow), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), @@ -198,6 +200,7 @@ ) W_IntegerBox.typedef = TypeDef("integer", W_NumberBox.typedef, + __new__ = interp2app(W_IntegerBox.descr__new__.im_func), __module__ = "numpypy", ) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,6 +166,15 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): import _numpypy as numpy From noreply at buildbot.pypy.org Sun Jan 8 14:33:04 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 14:33:04 +0100 (CET) Subject: [pypy-commit] pypy default: simplification. We're not java Message-ID: <20120108133304.8D84782110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51135:014afe8c57ac Date: 2012-01-08 15:32 +0200 http://bitbucket.org/pypy/pypy/changeset/014afe8c57ac/ Log: simplification. 
We're not java diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -16,15 +16,13 @@ # debug name = "" pc = 0 + opnum = 0 def __init__(self, result): self.result = result - # methods implemented by each concrete class - # ------------------------------------------ - def getopnum(self): - raise NotImplementedError + return self.opnum # methods implemented by the arity mixins # --------------------------------------- @@ -590,12 +588,9 @@ baseclass = PlainResOp mixin = arity2mixin.get(arity, N_aryOp) - def getopnum(self): - return opnum - cls_name = '%s_OP' % name bases = (get_base_class(mixin, baseclass),) - dic = {'getopnum': getopnum} + dic = {'opnum': opnum} return type(cls_name, bases, dic) setup(__name__ == '__main__') # print out the table when run directly From noreply at buildbot.pypy.org Sun Jan 8 19:03:28 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 19:03:28 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: improve the hooks to be called before and after optimization Message-ID: <20120108180328.06C2382110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51136:6521c5b63450 Date: 2012-01-08 20:02 +0200 http://bitbucket.org/pypy/pypy/changeset/6521c5b63450/ Log: improve the hooks to be called before and after optimization diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -305,6 +305,13 @@ show_procedures(metainterp_sd, loop) loop.check_consistency() + if metainterp_sd.warmrunnerdesc is not None: + portal = metainterp_sd.warmrunnerdesc.portal + portal.before_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, type, + greenkey) + else: + portal = None operations = get_deep_immutable_oplist(loop.operations) 
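The resoperation.py change above replaces a generated per-class getopnum() method with a plain class attribute read once in the base class. The pattern in miniature, with illustrative names only:

```python
class ResOp(object):
    # Base class: opnum is a class attribute, shadowed per subclass,
    # so a single base-level accessor suffices.
    opnum = 0

    def getopnum(self):
        return self.opnum

def make_op_class(name, opnum):
    # Mirrors the type(cls_name, bases, dic) call in resoperation.py:
    # the generated dict carries an attribute, not a closure-based method.
    return type('%s_OP' % name, (ResOp,), {'opnum': opnum})

INT_ADD_OP = make_op_class('INT_ADD', 1)
```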
metainterp_sd.profiler.start_backend() debug_start("jit-backend") @@ -316,11 +323,10 @@ finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() - if metainterp_sd.warmrunnerdesc is not None: - portal = metainterp_sd.warmrunnerdesc.portal - portal.on_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, - original_jitcell_token, loop.operations, type, - greenkey, ops_offset, asmstart, asmlen) + if portal is not None: + portal.after_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, type, + greenkey, ops_offset, asmstart, asmlen) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() @@ -341,8 +347,15 @@ show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) + if metainterp_sd.warmrunnerdesc is not None: + portal = metainterp_sd.warmrunnerdesc.portal + portal.before_compile_bridge(jitdriver_sd.jitdriver, + metainterp_sd.logger_ops, + original_loop_token, operations, n) + else: + portal = None + operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() - operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: tp = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, @@ -351,12 +364,12 @@ finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() - if metainterp_sd.warmrunnerdesc is not None: - portal = metainterp_sd.warmrunnerdesc.portal - portal.on_compile_bridge(jitdriver_sd.jitdriver, - metainterp_sd.logger_ops, - original_loop_token, operations, n, ops_offset, - asmstart, asmlen) + if portal is not None: + portal.after_compile_bridge(jitdriver_sd.jitdriver, + metainterp_sd.logger_ops, + original_loop_token, operations, n, + ops_offset, + asmstart, asmlen) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") diff --git 
a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py --- a/pypy/jit/metainterp/test/test_jitportal.py +++ b/pypy/jit/metainterp/test/test_jitportal.py @@ -41,14 +41,25 @@ assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 def test_on_compile(self): - called = {} + called = [] class MyJitPortal(JitPortal): - def on_compile(self, jitdriver, logger, looptoken, operations, - type, greenkey, ops_offset, asmaddr, asmlen): + def after_compile(self, jitdriver, logger, looptoken, operations, + type, greenkey, ops_offset, asmaddr, asmlen): assert asmaddr == 0 assert asmlen == 0 - called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken + called.append(("compile", greenkey[1].getint(), + greenkey[0].getint(), type)) + + def before_compile(self, jitdriver, logger, looptoken, operations, + type, greenkey): + called.append(("optimize", greenkey[1].getint(), + greenkey[0].getint(), type)) + + def before_optimize(self, jitdriver, logger, looptoken, operations, + type, greenkey): + called.append(("trace", greenkey[1].getint(), + greenkey[0].getint(), type)) portal = MyJitPortal() @@ -62,26 +73,35 @@ i += 1 self.meta_interp(loop, [1, 4], policy=JitPolicy(portal)) - assert sorted(called.keys()) == [(4, 1, "loop")] + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop")] self.meta_interp(loop, [2, 4], policy=JitPolicy(portal)) - assert sorted(called.keys()) == [(4, 1, "loop"), - (4, 2, "loop")] + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop"), + #("trace", 4, 2, "loop"), + ("optimize", 4, 2, "loop"), + ("compile", 4, 2, "loop")] def test_on_compile_bridge(self): - called = {} + called = [] class MyJitPortal(JitPortal): - def on_compile(self, jitdriver, logger, looptoken, operations, + def after_compile(self, jitdriver, logger, looptoken, operations, type, greenkey, ops_offset, asmaddr, asmlen): assert asmaddr == 0 assert asmlen == 0 - 
called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken + called.append("compile") - def on_compile_bridge(self, jitdriver, logger, orig_token, - operations, n, ops_offset, asmstart, asmlen): - assert 'bridge' not in called - called['bridge'] = orig_token + def after_compile_bridge(self, jitdriver, logger, orig_token, + operations, n, ops_offset, asmstart, asmlen): + called.append("compile_bridge") + def before_compile_bridge(self, jitdriver, logger, orig_token, + operations, n): + called.append("before_compile_bridge") + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) def loop(n, m): @@ -94,7 +114,7 @@ i += 1 self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitPortal())) - assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] + assert called == ["compile", "before_compile_bridge", "compile_bridge"] def test_resop_interface(self): driver = JitDriver(greens = [], reds = ['i']) diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -731,29 +731,62 @@ like JIT loops compiled, aborts etc. An instance of this class might be returned by the policy.get_jit_portal method in order to function. + + each hook will accept some of the following args: + + + greenkey - a list of green boxes + jitdriver - an instance of jitdriver where tracing started + logger - an instance of jit.metainterp.logger.LogOperations + ops_offset + asmaddr - (int) raw address of assembler block + asmlen - assembler block length + type - either 'loop' or 'entry bridge' """ def on_abort(self, reason, jitdriver, greenkey): """ A hook called each time a loop is aborted with jitdriver and greenkey where it started, reason is a string why it got aborted """ - def on_compile(self, jitdriver, logger, looptoken, operations, type, - greenkey, ops_offset, asmaddr, asmlen): - """ A hook called when loop is compiled. Overwrite - for your own jitdriver if you want to do something special, like - call applevel code. 
+ + #def before_optimize(self, jitdriver, logger, looptoken, operations, + # type, greenkey): + # """ A hook called before optimizer is run, args described in class + # docstring. Override for custom behavior + # """ + # DISABLED - jitdriver - an instance of jitdriver where tracing started - logger - an instance of jit.metainterp.logger.LogOperations - asmaddr - (int) raw address of assembler block - asmlen - assembler block length - type - either 'loop' or 'entry bridge' + def before_compile(self, jitdriver, logger, looptoken, operations, type, + greenkey): + """ A hook called after a loop is optimized, before compiling assembler, + args described in class docstring. Override for custom behavior """ - def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations, - fail_descr_no, ops_offset, asmaddr, asmlen): - """ A hook called when a bridge is compiled. Overwrite - for your own jitdriver if you want to do something special + def after_compile(self, jitdriver, logger, looptoken, operations, type, + greenkey, ops_offset, asmaddr, asmlen): + """ A hook called after a loop has compiled assembler, + args described in class docstring. Override for custom behavior + """ + + #def before_optimize_bridge(self, jitdriver, logger, orig_looptoken, + # operations, fail_descr_no): + # """ A hook called before a bridge is optimized. + # Args described in class docstring. Override for + # custom behavior + # """ + # DISABLED + + def before_compile_bridge(self, jitdriver, logger, orig_looptoken, + operations, fail_descr_no): + """ A hook called before a bridge is compiled, but after optimizations + are performed. 
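The before/after split above can be exercised without any PyPy internals. A toy pipeline (hook names and signatures taken from the diff above; the driver itself is hypothetical) showing the order in which the hooks now fire:

```python
class RecordingPortal:
    # Minimal stand-in for a JitPortal subclass: records which hooks fire.
    def __init__(self):
        self.events = []

    def before_compile(self, jitdriver, logger, looptoken, operations,
                       type, greenkey):
        self.events.append(('before_compile', type))

    def after_compile(self, jitdriver, logger, looptoken, operations,
                      type, greenkey, ops_offset, asmaddr, asmlen):
        self.events.append(('after_compile', type))

def compile_loop(portal, operations):
    # Toy driver: optimize -> before_compile -> backend -> after_compile,
    # mirroring the ordering this branch introduces in compile.py.
    portal.before_compile(None, None, None, operations, 'loop', ())
    asmlen = len(operations)  # pretend backend output
    portal.after_compile(None, None, None, operations, 'loop', (),
                         {}, 0, asmlen)
    return asmlen

portal = RecordingPortal()
compile_loop(portal, ['int_add', 'guard_true'])
```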
Args described in class docstring. Override for + custom behavior + """ + + def after_compile_bridge(self, jitdriver, logger, orig_looptoken, + operations, fail_descr_no, ops_offset, asmaddr, + asmlen): + """ A hook called after a bridge is compiled, args described in class + docstring. Override for custom behavior """ def get_stats(self): From noreply at buildbot.pypy.org Sun Jan 8 19:19:30 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 19:19:30 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: update and improve the hooks Message-ID: <20120108181930.642DC82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51137:5ed435c1abb6 Date: 2012-01-08 20:18 +0200 http://bitbucket.org/pypy/pypy/changeset/5ed435c1abb6/ Log: update and improve the hooks diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -8,6 +8,7 @@ 'set_param': 'interp_jit.set_param', 'residual_call': 'interp_jit.residual_call', 'set_compile_hook': 'interp_resop.set_compile_hook', + 'set_optimize_hook': 'interp_resop.set_optimize_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 'interp_resop.WrappedOp', } diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -61,6 +61,37 @@ cache.in_recursion = NonConstant(False) return space.w_None +def set_optimize_hook(space, w_hook): + """ set_optimize_hook(hook) + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows adding additional + optimizations at the Python level. 
+ + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop`, `entry_bridge` or `bridge`. + In case the loop is not a `bridge`, greenkey will be a tuple of constants + or a string describing it. + + For the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + Note that the jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + The result value will be the resulting list of operations, or None + """ + cache = space.fromcache(Cache) + cache.w_optimize_hook = w_hook + cache.in_recursion = NonConstant(False) + return space.w_None + def set_abort_hook(space, w_hook): """ set_abort_hook(hook) diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -1,8 +1,10 @@ from pypy.jit.codewriter.policy import JitPolicy from pypy.rlib.jit import JitPortal +from pypy.rlib import jit_hooks from pypy.interpreter.error import OperationError from pypy.jit.metainterp.jitprof import counter_names -from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey +from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\ + WrappedOp class PyPyPortal(JitPortal): def on_abort(self, reason, jitdriver, greenkey): @@ -21,18 +23,28 @@ e.write_unraisable(space, "jit hook ", cache.w_abort_hook) cache.in_recursion = False - def on_compile(self, jitdriver, logger, looptoken, operations, type, - greenkey, ops_offset, asmstart, asmlen): + def after_compile(self, jitdriver, logger, looptoken, operations, type, + greenkey, ops_offset, asmstart, asmlen): self._compile_hook(jitdriver, logger, operations, type, ops_offset, asmstart, asmlen, 
wrap_greenkey(self.space, jitdriver, greenkey)) - def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations, - n, ops_offset, asmstart, asmlen): + def after_compile_bridge(self, jitdriver, logger, orig_looptoken, + operations, n, ops_offset, asmstart, asmlen): self._compile_hook(jitdriver, logger, operations, 'bridge', ops_offset, asmstart, asmlen, self.space.wrap(n)) + def before_compile(self, jitdriver, logger, looptoken, operations, type, + greenkey): + self._optimize_hook(jitdriver, logger, operations, type, + wrap_greenkey(self.space, jitdriver, greenkey)) + + def before_compile_bridge(self, jitdriver, logger, orig_looptoken, + operations, n): + self._optimize_hook(jitdriver, logger, operations, 'bridge', + self.space.wrap(n)) + def _compile_hook(self, jitdriver, logger, operations, type, ops_offset, asmstart, asmlen, w_arg): space = self.space @@ -55,6 +67,34 @@ e.write_unraisable(space, "jit hook ", cache.w_compile_hook) cache.in_recursion = False + def _optimize_hook(self, jitdriver, logger, operations, type, w_arg): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_optimize_hook): + logops = logger._make_log_operations() + list_w = wrap_oplist(space, logops, operations, {}) + cache.in_recursion = True + try: + w_res = space.call_function(cache.w_optimize_hook, + space.wrap(jitdriver.name), + space.wrap(type), + w_arg, + space.newlist(list_w)) + if space.is_w(w_res, space.w_None): + return + l = [] + for w_item in space.listview(w_res): + item = space.interp_w(WrappedOp, w_item) + l.append(jit_hooks._cast_to_resop(item.op)) + operations[:] = l # modifying operations above is probably not + # a great idea since types may not work and we'll end up with + # half-working list and a segfault/fatal RPython error + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + cache.in_recursion = False + pypy_portal = PyPyPortal() class 
PyPyJitPolicy(JitPolicy): diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -47,7 +47,7 @@ code_gcref = lltype.cast_opaque_ptr(llmemory.GCREF, ll_code) logger = Logger(MockSD()) - oplist = parse(""" + cls.origoplist = parse(""" [i1, i2] i3 = int_add(i1, i2) debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0)) @@ -55,19 +55,23 @@ """, namespace={'ptr0': code_gcref}).operations greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] offset = {} - for i, op in enumerate(oplist): + for i, op in enumerate(cls.origoplist): if i != 1: offset[op] = i def interp_on_compile(): - pypy_portal.on_compile(pypyjitdriver, logger, JitCellToken(), - oplist, 'loop', greenkey, offset, - 0, 0) + pypy_portal.after_compile(pypyjitdriver, logger, JitCellToken(), + cls.oplist, 'loop', greenkey, offset, + 0, 0) def interp_on_compile_bridge(): - pypy_portal.on_compile_bridge(pypyjitdriver, logger, - JitCellToken(), oplist, 0, - offset, 0, 0) + pypy_portal.after_compile_bridge(pypyjitdriver, logger, + JitCellToken(), cls.oplist, 0, + offset, 0, 0) + + def interp_on_optimize(): + pypy_portal.before_compile(pypyjitdriver, logger, JitCellToken(), + cls.oplist, 'loop', greenkey) def interp_on_abort(): pypy_portal.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey) @@ -76,6 +80,10 @@ cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) + + def setup_method(self, meth): + self.__class__.oplist = self.origoplist def test_on_compile(self): import pypyjit @@ -160,6 +168,22 @@ self.on_abort() assert l == [('pypyjit', 'ABORT_TOO_LONG')] + def test_on_optimize(self): + import pypyjit + l = [] + + def hook(name, looptype, tuple_or_guard_no, ops, *args): + l.append(ops) + + def 
optimize_hook(name, looptype, tuple_or_guard_no, ops): + return [] + + pypyjit.set_compile_hook(hook) + pypyjit.set_optimize_hook(optimize_hook) + self.on_optimize() + self.on_compile() + assert l == [[]] + def test_creation(self): import pypyjit From noreply at buildbot.pypy.org Sun Jan 8 19:29:02 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 19:29:02 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: improve a bit how to get to items Message-ID: <20120108182902.51DB282110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51138:f4129eca042d Date: 2012-01-08 20:28 +0200 http://bitbucket.org/pypy/pypy/changeset/f4129eca042d/ Log: improve a bit how to get to items diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py --- a/pypy/jit/metainterp/test/test_jitportal.py +++ b/pypy/jit/metainterp/test/test_jitportal.py @@ -5,6 +5,7 @@ from pypy.jit.codewriter.policy import JitPolicy from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT from pypy.jit.metainterp.resoperation import rop +from pypy.rpython.annlowlevel import hlstr class TestJitPortal(LLJitMixin): def test_abort_quasi_immut(self): @@ -130,7 +131,9 @@ [jit_hooks.boxint_new(3), jit_hooks.boxint_new(4)], jit_hooks.boxint_new(1)) - return jit_hooks.resop_opnum(op) + assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add' + assert jit_hooks.resop_getopnum(op) == rop.INT_ADD + box = jit_hooks.resop_getarg(op, 0) + assert jit_hooks.box_getint(box) == 3 - res = self.meta_interp(main, []) - assert res == rop.INT_ADD + self.meta_interp(main, []) diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -4,26 +4,28 @@ from pypy.rpython.lltypesystem import llmemory, lltype from pypy.rpython.lltypesystem import rclass from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\ - cast_base_ptr_to_instance + 
cast_base_ptr_to_instance, llstr, hlstr from pypy.rlib.objectmodel import specialize -def register_helper(helper, s_result): - - class Entry(ExtRegistryEntry): - _about_ = helper +def register_helper(s_result): + def wrapper(helper): + class Entry(ExtRegistryEntry): + _about_ = helper - def compute_result_annotation(self, *args): - return s_result + def compute_result_annotation(self, *args): + return s_result - def specialize_call(self, hop): - from pypy.rpython.lltypesystem import lltype + def specialize_call(self, hop): + from pypy.rpython.lltypesystem import lltype - c_func = hop.inputconst(lltype.Void, helper) - c_name = hop.inputconst(lltype.Void, 'access_helper') - args_v = [hop.inputarg(arg, arg=i) - for i, arg in enumerate(hop.args_r)] - return hop.genop('jit_marker', [c_name, c_func] + args_v, - resulttype=hop.r_result) + c_func = hop.inputconst(lltype.Void, helper) + c_name = hop.inputconst(lltype.Void, 'access_helper') + args_v = [hop.inputarg(arg, arg=i) + for i, arg in enumerate(hop.args_r)] + return hop.genop('jit_marker', [c_name, c_func] + args_v, + resulttype=hop.r_result) + return helper + return wrapper def _cast_to_box(llref): from pypy.jit.metainterp.history import AbstractValue @@ -42,6 +44,7 @@ return lltype.cast_opaque_ptr(llmemory.GCREF, cast_instance_to_base_ptr(obj)) + at register_helper(annmodel.SomePtr(llmemory.GCREF)) def resop_new(no, llargs, llres): from pypy.jit.metainterp.history import ResOperation @@ -49,15 +52,23 @@ res = _cast_to_box(llres) return _cast_to_gcref(ResOperation(no, args, res)) -register_helper(resop_new, annmodel.SomePtr(llmemory.GCREF)) - + at register_helper(annmodel.SomePtr(llmemory.GCREF)) def boxint_new(no): from pypy.jit.metainterp.history import BoxInt return _cast_to_gcref(BoxInt(no)) -register_helper(boxint_new, annmodel.SomePtr(llmemory.GCREF)) - -def resop_opnum(llop): + at register_helper(annmodel.SomeInteger()) +def resop_getopnum(llop): return _cast_to_resop(llop).getopnum() 
-register_helper(resop_opnum, annmodel.SomeInteger()) + at register_helper(annmodel.SomeString(can_be_None=True)) +def resop_getopname(llop): + return llstr(_cast_to_resop(llop).getopname()) + + at register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_getarg(llop, no): + return _cast_to_gcref(_cast_to_resop(llop).getarg(no)) + + at register_helper(annmodel.SomeInteger()) +def box_getint(llbox): + return _cast_to_box(llbox).getint() From noreply at buildbot.pypy.org Sun Jan 8 19:39:53 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 19:39:53 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: expose some more Message-ID: <20120108183953.EE59282110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51139:143e2aef1cb6 Date: 2012-01-08 20:39 +0200 http://bitbucket.org/pypy/pypy/changeset/143e2aef1cb6/ Log: expose some more diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py --- a/pypy/jit/metainterp/test/test_jitportal.py +++ b/pypy/jit/metainterp/test/test_jitportal.py @@ -135,5 +135,14 @@ assert jit_hooks.resop_getopnum(op) == rop.INT_ADD box = jit_hooks.resop_getarg(op, 0) assert jit_hooks.box_getint(box) == 3 + box2 = jit_hooks.box_clone(box) + assert box2 != box + assert jit_hooks.box_getint(box2) == 3 + assert not jit_hooks.box_isconst(box2) + box3 = jit_hooks.box_constbox(box) + assert jit_hooks.box_getint(box) == 3 + assert jit_hooks.box_isconst(box3) + box4 = jit_hooks.box_nonconstbox(box) + assert not jit_hooks.box_isconst(box4) self.meta_interp(main, []) diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -72,3 +72,20 @@ @register_helper(annmodel.SomeInteger()) def box_getint(llbox): return _cast_to_box(llbox).getint() + + at register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_clone(llbox): + return _cast_to_gcref(_cast_to_box(llbox).clonebox()) + + at 
register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_constbox(llbox): + return _cast_to_gcref(_cast_to_box(llbox).constbox()) + + at register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_nonconstbox(llbox): + return _cast_to_gcref(_cast_to_box(llbox).nonconstbox()) + + at register_helper(annmodel.SomeBool()) +def box_isconst(llbox): + from pypy.jit.metainterp.history import Const + return isinstance(_cast_to_box(llbox), Const) From noreply at buildbot.pypy.org Sun Jan 8 19:53:30 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 19:53:30 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: increasingly boring exercise of exposing more and more Message-ID: <20120108185330.2C56582110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51140:f59c4f53adf9 Date: 2012-01-08 20:53 +0200 http://bitbucket.org/pypy/pypy/changeset/f59c4f53adf9/ Log: increasingly boring exercise of exposing more and more diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -11,6 +11,7 @@ 'set_optimize_hook': 'interp_resop.set_optimize_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 'interp_resop.WrappedOp', + 'Box': 'interp_resop.WrappedBox', } def setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -5,7 +5,7 @@ from pypy.interpreter.pycode import PyCode from pypy.interpreter.error import OperationError from pypy.rpython.lltypesystem import lltype, llmemory -from pypy.rpython.annlowlevel import cast_base_ptr_to_instance +from pypy.rpython.annlowlevel import cast_base_ptr_to_instance, hlstr from pypy.rpython.lltypesystem.rclass import OBJECT from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import 
NonConstant @@ -114,14 +114,12 @@ logops.repr_of_resop(op)) for op in operations] @unwrap_spec(num=int, offset=int, repr=str) -def descr_new_resop(space, w_tp, num, w_args, w_res=NoneNotWrapped, offset=-1, +def descr_new_resop(space, w_tp, num, w_args, w_res=None, offset=-1, repr=''): - args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in space.listview(w_args)] - if w_res is None: - llres = lltype.nullptr(llmemory.GCREF.TO) - else: - llres = jit_hooks.boxint_new(space.int_w(w_res)) + llres = space.interp_w(WrappedBox, w_res).llbox + # XXX None case return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) class WrappedOp(Wrappable): @@ -136,7 +134,14 @@ return space.wrap(self.repr_of_resop) def descr_num(self, space): - return space.wrap(jit_hooks.resop_opnum(self.op)) + return space.wrap(jit_hooks.resop_getopnum(self.op)) + + def descr_name(self, space): + return space.wrap(hlstr(jit_hooks.resop_getopname(self.op))) + + @unwrap_spec(no=int) + def descr_getarg(self, space, no): + return WrappedBox(jit_hooks.resop_getarg(self.op, no)) WrappedOp.typedef = TypeDef( 'ResOperation', @@ -144,5 +149,26 @@ __new__ = interp2app(descr_new_resop), __repr__ = interp2app(WrappedOp.descr_repr), num = GetSetProperty(WrappedOp.descr_num), + name = GetSetProperty(WrappedOp.descr_name), + getarg = interp2app(WrappedOp.descr_getarg), ) WrappedOp.acceptable_as_base_class = False + +class WrappedBox(Wrappable): + """ A class representing a single box + """ + def __init__(self, llbox): + self.llbox = llbox + + def descr_getint(self, space): + return space.wrap(jit_hooks.box_getint(self.llbox)) + + at unwrap_spec(no=int) +def descr_new_box(space, w_tp, no): + return WrappedBox(jit_hooks.boxint_new(no)) + +WrappedBox.typedef = TypeDef( + 'Box', + __new__ = interp2app(descr_new_box), + getint = interp2app(WrappedBox.descr_getint), +) diff --git a/pypy/module/pypyjit/test/test_jit_hook.py 
b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -48,9 +48,10 @@ logger = Logger(MockSD()) cls.origoplist = parse(""" - [i1, i2] + [i1, i2, p2] i3 = int_add(i1, i2) debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0)) + guard_nonnull(p2) [] guard_true(i3) [] """, namespace={'ptr0': code_gcref}).operations greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] @@ -102,7 +103,7 @@ assert elem[2][0].co_name == 'function' assert elem[2][1] == 0 assert elem[2][2] == False - assert len(elem[3]) == 3 + assert len(elem[3]) == 4 int_add = elem[3][0] #assert int_add.name == 'int_add' assert int_add.num == self.int_add_num @@ -185,7 +186,10 @@ assert l == [[]] def test_creation(self): - import pypyjit + from pypyjit import Box, ResOperation - op = pypyjit.ResOperation(self.int_add_num, [1, 3], 4) + op = ResOperation(self.int_add_num, [Box(1), Box(3)], Box(4)) assert op.num == self.int_add_num + assert op.name == 'int_add' + box = op.getarg(0) + assert box.getint() == 1 From noreply at buildbot.pypy.org Sun Jan 8 19:54:32 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 8 Jan 2012 19:54:32 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: added support for float getinteriorfield_raws Message-ID: <20120108185432.073AF82110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: better-jit-hooks Changeset: r51141:e44952da636d Date: 2012-01-08 12:53 -0600 http://bitbucket.org/pypy/pypy/changeset/e44952da636d/ Log: added support for float getinteriorfield_raws diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -234,11 +234,11 @@ # longlongs are treated as floats, see # e.g. 
llsupport/descr.py:getDescrClass is_float = True - elif kind == 'u': + elif kind == 'u' or kind == 's': # they're all False pass else: - assert False, "unsupported ffitype or kind" + raise NotImplementedError("unsupported ffitype or kind: %s" % kind) # fieldsize = rffi.getintfield(ffitype, 'c_size') return self.optimizer.cpu.interiorfielddescrof_dynamic( diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -148,28 +148,38 @@ self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4, 'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2}) - def test_array_getitem_uint8(self): + def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE): + reds = ["n", "i", "s", "data"] + if COMPUTE_TYPE is lltype.Float: + # Move the float var to the back. + reds.remove("s") + reds.append("s") myjitdriver = JitDriver( greens = [], - reds = ["n", "i", "s", "data"], + reds = reds, ) def f(data, n): - i = s = 0 + i = 0 + s = rffi.cast(COMPUTE_TYPE, 0) while i < n: myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data) - s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0)) + s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0)) i += 1 return s + def main(n): + with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data: + data[0] = rffi.cast(TYPE, 200) + return f(data, n) + assert self.meta_interp(main, [10]) == 2000 - def main(n): - with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data: - data[0] = rffi.cast(rffi.UCHAR, 200) - return f(data, n) - - assert self.meta_interp(main, [10]) == 2000 + def test_array_getitem_uint8(self): + self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed) self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2, 'guard_true': 2, 'int_add': 4}) + def test_array_getitem_float(self): + self._test_getitem_type(rffi.FLOAT, types.float, 
lltype.Float) + class TestFfiCall(FfiCallTests, LLJitMixin): supports_all = False From noreply at buildbot.pypy.org Sun Jan 8 19:54:33 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 8 Jan 2012 19:54:33 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: merged upstream Message-ID: <20120108185433.3604E82110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: better-jit-hooks Changeset: r51142:ade5f6c6f404 Date: 2012-01-08 12:54 -0600 http://bitbucket.org/pypy/pypy/changeset/ade5f6c6f404/ Log: merged upstream diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -305,6 +305,13 @@ show_procedures(metainterp_sd, loop) loop.check_consistency() + if metainterp_sd.warmrunnerdesc is not None: + portal = metainterp_sd.warmrunnerdesc.portal + portal.before_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, type, + greenkey) + else: + portal = None operations = get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") @@ -316,11 +323,10 @@ finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() - if metainterp_sd.warmrunnerdesc is not None: - portal = metainterp_sd.warmrunnerdesc.portal - portal.on_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, - original_jitcell_token, loop.operations, type, - greenkey, ops_offset, asmstart, asmlen) + if portal is not None: + portal.after_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, type, + greenkey, ops_offset, asmstart, asmlen) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() @@ -341,8 +347,15 @@ show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) + if metainterp_sd.warmrunnerdesc is not None: + portal = 
metainterp_sd.warmrunnerdesc.portal + portal.before_compile_bridge(jitdriver_sd.jitdriver, + metainterp_sd.logger_ops, + original_loop_token, operations, n) + else: + portal = None + operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() - operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: tp = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, @@ -351,12 +364,12 @@ finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() - if metainterp_sd.warmrunnerdesc is not None: - portal = metainterp_sd.warmrunnerdesc.portal - portal.on_compile_bridge(jitdriver_sd.jitdriver, - metainterp_sd.logger_ops, - original_loop_token, operations, n, ops_offset, - asmstart, asmlen) + if portal is not None: + portal.after_compile_bridge(jitdriver_sd.jitdriver, + metainterp_sd.logger_ops, + original_loop_token, operations, n, + ops_offset, + asmstart, asmlen) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py --- a/pypy/jit/metainterp/test/test_jitportal.py +++ b/pypy/jit/metainterp/test/test_jitportal.py @@ -5,6 +5,7 @@ from pypy.jit.codewriter.policy import JitPolicy from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT from pypy.jit.metainterp.resoperation import rop +from pypy.rpython.annlowlevel import hlstr class TestJitPortal(LLJitMixin): def test_abort_quasi_immut(self): @@ -41,14 +42,25 @@ assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 def test_on_compile(self): - called = {} + called = [] class MyJitPortal(JitPortal): - def on_compile(self, jitdriver, logger, looptoken, operations, - type, greenkey, ops_offset, asmaddr, asmlen): + def after_compile(self, jitdriver, logger, looptoken, operations, + type, greenkey, ops_offset, asmaddr, asmlen): assert asmaddr == 0 assert asmlen == 0 - called[(greenkey[1].getint(), 
greenkey[0].getint(), type)] = looptoken + called.append(("compile", greenkey[1].getint(), + greenkey[0].getint(), type)) + + def before_compile(self, jitdriver, logger, looptoken, operations, + type, greenkey): + called.append(("optimize", greenkey[1].getint(), + greenkey[0].getint(), type)) + + def before_optimize(self, jitdriver, logger, looptoken, operations, + type, greenkey): + called.append(("trace", greenkey[1].getint(), + greenkey[0].getint(), type)) portal = MyJitPortal() @@ -62,26 +74,35 @@ i += 1 self.meta_interp(loop, [1, 4], policy=JitPolicy(portal)) - assert sorted(called.keys()) == [(4, 1, "loop")] + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop")] self.meta_interp(loop, [2, 4], policy=JitPolicy(portal)) - assert sorted(called.keys()) == [(4, 1, "loop"), - (4, 2, "loop")] + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop"), + #("trace", 4, 2, "loop"), + ("optimize", 4, 2, "loop"), + ("compile", 4, 2, "loop")] def test_on_compile_bridge(self): - called = {} + called = [] class MyJitPortal(JitPortal): - def on_compile(self, jitdriver, logger, looptoken, operations, + def after_compile(self, jitdriver, logger, looptoken, operations, type, greenkey, ops_offset, asmaddr, asmlen): assert asmaddr == 0 assert asmlen == 0 - called[(greenkey[1].getint(), greenkey[0].getint(), type)] = looptoken + called.append("compile") - def on_compile_bridge(self, jitdriver, logger, orig_token, - operations, n, ops_offset, asmstart, asmlen): - assert 'bridge' not in called - called['bridge'] = orig_token + def after_compile_bridge(self, jitdriver, logger, orig_token, - operations, n, ops_offset, asmstart, asmlen): + called.append("compile_bridge") + def before_compile_bridge(self, jitdriver, logger, orig_token, + operations, n): + called.append("before_compile_bridge") + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) def loop(n, m): @@ -94,7 +115,7 @@ i += 1 
self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitPortal())) - assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] + assert called == ["compile", "before_compile_bridge", "compile_bridge"] def test_resop_interface(self): driver = JitDriver(greens = [], reds = ['i']) @@ -110,7 +131,18 @@ [jit_hooks.boxint_new(3), jit_hooks.boxint_new(4)], jit_hooks.boxint_new(1)) - return jit_hooks.resop_opnum(op) + assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add' + assert jit_hooks.resop_getopnum(op) == rop.INT_ADD + box = jit_hooks.resop_getarg(op, 0) + assert jit_hooks.box_getint(box) == 3 + box2 = jit_hooks.box_clone(box) + assert box2 != box + assert jit_hooks.box_getint(box2) == 3 + assert not jit_hooks.box_isconst(box2) + box3 = jit_hooks.box_constbox(box) + assert jit_hooks.box_getint(box) == 3 + assert jit_hooks.box_isconst(box3) + box4 = jit_hooks.box_nonconstbox(box) + assert not jit_hooks.box_isconst(box4) - res = self.meta_interp(main, []) - assert res == rop.INT_ADD + self.meta_interp(main, []) diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -8,8 +8,10 @@ 'set_param': 'interp_jit.set_param', 'residual_call': 'interp_jit.residual_call', 'set_compile_hook': 'interp_resop.set_compile_hook', + 'set_optimize_hook': 'interp_resop.set_optimize_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 'interp_resop.WrappedOp', + 'Box': 'interp_resop.WrappedBox', } def setup_after_space_initialization(self): diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -5,7 +5,7 @@ from pypy.interpreter.pycode import PyCode from pypy.interpreter.error import OperationError from pypy.rpython.lltypesystem import lltype, llmemory -from pypy.rpython.annlowlevel import cast_base_ptr_to_instance +from pypy.rpython.annlowlevel import 
cast_base_ptr_to_instance, hlstr from pypy.rpython.lltypesystem.rclass import OBJECT from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant @@ -61,6 +61,37 @@ cache.in_recursion = NonConstant(False) return space.w_None +def set_optimize_hook(space, w_hook): + """ set_optimize_hook(hook) + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows adding additional + optimizations on Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop`, `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + Note that the jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that.
+ + Result value will be the resulting list of operations, or None + """ + cache = space.fromcache(Cache) + cache.w_optimize_hook = w_hook + cache.in_recursion = NonConstant(False) + return space.w_None + def set_abort_hook(space, w_hook): """ set_abort_hook(hook) @@ -83,14 +114,12 @@ logops.repr_of_resop(op)) for op in operations] @unwrap_spec(num=int, offset=int, repr=str) -def descr_new_resop(space, w_tp, num, w_args, w_res=NoneNotWrapped, offset=-1, +def descr_new_resop(space, w_tp, num, w_args, w_res=None, offset=-1, repr=''): - args = [jit_hooks.boxint_new(space.int_w(w_arg)) for w_arg in + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in space.listview(w_args)] - if w_res is None: - llres = lltype.nullptr(llmemory.GCREF.TO) - else: - llres = jit_hooks.boxint_new(space.int_w(w_res)) + llres = space.interp_w(WrappedBox, w_res).llbox + # XXX None case return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) class WrappedOp(Wrappable): @@ -105,7 +134,14 @@ return space.wrap(self.repr_of_resop) def descr_num(self, space): - return space.wrap(jit_hooks.resop_opnum(self.op)) + return space.wrap(jit_hooks.resop_getopnum(self.op)) + + def descr_name(self, space): + return space.wrap(hlstr(jit_hooks.resop_getopname(self.op))) + + @unwrap_spec(no=int) + def descr_getarg(self, space, no): + return WrappedBox(jit_hooks.resop_getarg(self.op, no)) WrappedOp.typedef = TypeDef( 'ResOperation', @@ -113,5 +149,26 @@ __new__ = interp2app(descr_new_resop), __repr__ = interp2app(WrappedOp.descr_repr), num = GetSetProperty(WrappedOp.descr_num), + name = GetSetProperty(WrappedOp.descr_name), + getarg = interp2app(WrappedOp.descr_getarg), ) WrappedOp.acceptable_as_base_class = False + +class WrappedBox(Wrappable): + """ A class representing a single box + """ + def __init__(self, llbox): + self.llbox = llbox + + def descr_getint(self, space): + return space.wrap(jit_hooks.box_getint(self.llbox)) + + at unwrap_spec(no=int) +def descr_new_box(space, w_tp, no): 
+ return WrappedBox(jit_hooks.boxint_new(no)) + +WrappedBox.typedef = TypeDef( + 'Box', + __new__ = interp2app(descr_new_box), + getint = interp2app(WrappedBox.descr_getint), +) diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -1,8 +1,10 @@ from pypy.jit.codewriter.policy import JitPolicy from pypy.rlib.jit import JitPortal +from pypy.rlib import jit_hooks from pypy.interpreter.error import OperationError from pypy.jit.metainterp.jitprof import counter_names -from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey +from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\ + WrappedOp class PyPyPortal(JitPortal): def on_abort(self, reason, jitdriver, greenkey): @@ -21,18 +23,28 @@ e.write_unraisable(space, "jit hook ", cache.w_abort_hook) cache.in_recursion = False - def on_compile(self, jitdriver, logger, looptoken, operations, type, - greenkey, ops_offset, asmstart, asmlen): + def after_compile(self, jitdriver, logger, looptoken, operations, type, + greenkey, ops_offset, asmstart, asmlen): self._compile_hook(jitdriver, logger, operations, type, ops_offset, asmstart, asmlen, wrap_greenkey(self.space, jitdriver, greenkey)) - def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations, - n, ops_offset, asmstart, asmlen): + def after_compile_bridge(self, jitdriver, logger, orig_looptoken, + operations, n, ops_offset, asmstart, asmlen): self._compile_hook(jitdriver, logger, operations, 'bridge', ops_offset, asmstart, asmlen, self.space.wrap(n)) + def before_compile(self, jitdriver, logger, looptoken, operations, type, + greenkey): + self._optimize_hook(jitdriver, logger, operations, type, + wrap_greenkey(self.space, jitdriver, greenkey)) + + def before_compile_bridge(self, jitdriver, logger, orig_looptoken, + operations, n): + self._optimize_hook(jitdriver, logger, operations, 'bridge', + self.space.wrap(n)) + 
def _compile_hook(self, jitdriver, logger, operations, type, ops_offset, asmstart, asmlen, w_arg): space = self.space @@ -55,6 +67,34 @@ e.write_unraisable(space, "jit hook ", cache.w_compile_hook) cache.in_recursion = False + def _optimize_hook(self, jitdriver, logger, operations, type, w_arg): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_optimize_hook): + logops = logger._make_log_operations() + list_w = wrap_oplist(space, logops, operations, {}) + cache.in_recursion = True + try: + w_res = space.call_function(cache.w_optimize_hook, + space.wrap(jitdriver.name), + space.wrap(type), + w_arg, + space.newlist(list_w)) + if space.is_w(w_res, space.w_None): + return + l = [] + for w_item in space.listview(w_res): + item = space.interp_w(WrappedOp, w_item) + l.append(jit_hooks._cast_to_resop(item.op)) + operations[:] = l # modifying operations above is probably not + # a great idea since types may not work and we'll end up with + # half-working list and a segfault/fatal RPython error + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + cache.in_recursion = False + pypy_portal = PyPyPortal() class PyPyJitPolicy(JitPolicy): diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -47,27 +47,32 @@ code_gcref = lltype.cast_opaque_ptr(llmemory.GCREF, ll_code) logger = Logger(MockSD()) - oplist = parse(""" - [i1, i2] + cls.origoplist = parse(""" + [i1, i2, p2] i3 = int_add(i1, i2) debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0)) + guard_nonnull(p2) [] guard_true(i3) [] """, namespace={'ptr0': code_gcref}).operations greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] offset = {} - for i, op in enumerate(oplist): + for i, op in enumerate(cls.origoplist): if i != 1: offset[op] = i def interp_on_compile(): - 
pypy_portal.on_compile(pypyjitdriver, logger, JitCellToken(), - oplist, 'loop', greenkey, offset, - 0, 0) + pypy_portal.after_compile(pypyjitdriver, logger, JitCellToken(), + cls.oplist, 'loop', greenkey, offset, + 0, 0) def interp_on_compile_bridge(): - pypy_portal.on_compile_bridge(pypyjitdriver, logger, - JitCellToken(), oplist, 0, - offset, 0, 0) + pypy_portal.after_compile_bridge(pypyjitdriver, logger, + JitCellToken(), cls.oplist, 0, + offset, 0, 0) + + def interp_on_optimize(): + pypy_portal.before_compile(pypyjitdriver, logger, JitCellToken(), + cls.oplist, 'loop', greenkey) def interp_on_abort(): pypy_portal.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey) @@ -76,6 +81,10 @@ cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) + + def setup_method(self, meth): + self.__class__.oplist = self.origoplist def test_on_compile(self): import pypyjit @@ -94,7 +103,7 @@ assert elem[2][0].co_name == 'function' assert elem[2][1] == 0 assert elem[2][2] == False - assert len(elem[3]) == 3 + assert len(elem[3]) == 4 int_add = elem[3][0] #assert int_add.name == 'int_add' assert int_add.num == self.int_add_num @@ -160,8 +169,27 @@ self.on_abort() assert l == [('pypyjit', 'ABORT_TOO_LONG')] + def test_on_optimize(self): + import pypyjit + l = [] + + def hook(name, looptype, tuple_or_guard_no, ops, *args): + l.append(ops) + + def optimize_hook(name, looptype, tuple_or_guard_no, ops): + return [] + + pypyjit.set_compile_hook(hook) + pypyjit.set_optimize_hook(optimize_hook) + self.on_optimize() + self.on_compile() + assert l == [[]] + def test_creation(self): - import pypyjit + from pypyjit import Box, ResOperation - op = pypyjit.ResOperation(self.int_add_num, [1, 3], 4) + op = ResOperation(self.int_add_num, [Box(1), Box(3)], Box(4)) assert op.num == self.int_add_num + assert op.name == 
'int_add' + box = op.getarg(0) + assert box.getint() == 1 diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -731,29 +731,62 @@ like JIT loops compiled, aborts etc. An instance of this class might be returned by the policy.get_jit_portal method in order to function. + + each hook will accept some of the following args: + + + greenkey - a list of green boxes + jitdriver - an instance of jitdriver where tracing started + logger - an instance of jit.metainterp.logger.LogOperations + ops_offset + asmaddr - (int) raw address of assembler block + asmlen - assembler block length + type - either 'loop' or 'entry bridge' """ def on_abort(self, reason, jitdriver, greenkey): """ A hook called each time a loop is aborted with jitdriver and greenkey where it started, reason is a string why it got aborted """ - def on_compile(self, jitdriver, logger, looptoken, operations, type, - greenkey, ops_offset, asmaddr, asmlen): - """ A hook called when loop is compiled. Overwrite - for your own jitdriver if you want to do something special, like - call applevel code. + #def before_optimize(self, jitdriver, logger, looptoken, operations, + # type, greenkey): + # """ A hook called before optimizer is run, args described in class + # docstring. Overwrite for custom behavior + # """ + # DISABLED - jitdriver - an instance of jitdriver where tracing started - logger - an instance of jit.metainterp.logger.LogOperations - asmaddr - (int) raw address of assembler block - asmlen - assembler block length - type - either 'loop' or 'entry bridge' + def before_compile(self, jitdriver, logger, looptoken, operations, type, + greenkey): + """ A hook called after a loop is optimized, before compiling assembler, + args described in class docstring. Overwrite for custom behavior """ - def on_compile_bridge(self, jitdriver, logger, orig_looptoken, operations, - fail_descr_no, ops_offset, asmaddr, asmlen): - """ A hook called when a bridge is compiled. 
Overwrite - for your own jitdriver if you want to do something special + def after_compile(self, jitdriver, logger, looptoken, operations, type, + greenkey, ops_offset, asmaddr, asmlen): + """ A hook called after a loop has compiled assembler, + args described in class docstring. Overwrite for custom behavior + """ + + #def before_optimize_bridge(self, jitdriver, logger, orig_looptoken, + # operations, fail_descr_no): + # """ A hook called before a bridge is optimized. + # Args described in class docstring, Overwrite for + # custom behavior + # """ + # DISABLED + + def before_compile_bridge(self, jitdriver, logger, orig_looptoken, + operations, fail_descr_no): + """ A hook called before a bridge is compiled, but after optimizations + are performed. Args described in class docstring, Overwrite for + custom behavior + """ + + def after_compile_bridge(self, jitdriver, logger, orig_looptoken, + operations, fail_descr_no, ops_offset, asmaddr, + asmlen): + """ A hook called after a bridge is compiled, args described in class + docstring, Overwrite for custom behavior """ def get_stats(self): diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -4,26 +4,28 @@ from pypy.rpython.lltypesystem import llmemory, lltype from pypy.rpython.lltypesystem import rclass from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\ - cast_base_ptr_to_instance + cast_base_ptr_to_instance, llstr, hlstr from pypy.rlib.objectmodel import specialize -def register_helper(helper, s_result): - - class Entry(ExtRegistryEntry): - _about_ = helper +def register_helper(s_result): + def wrapper(helper): + class Entry(ExtRegistryEntry): + _about_ = helper - def compute_result_annotation(self, *args): - return s_result + def compute_result_annotation(self, *args): + return s_result - def specialize_call(self, hop): - from pypy.rpython.lltypesystem import lltype + def specialize_call(self, hop): + from pypy.rpython.lltypesystem 
import lltype - c_func = hop.inputconst(lltype.Void, helper) - c_name = hop.inputconst(lltype.Void, 'access_helper') - args_v = [hop.inputarg(arg, arg=i) - for i, arg in enumerate(hop.args_r)] - return hop.genop('jit_marker', [c_name, c_func] + args_v, - resulttype=hop.r_result) + c_func = hop.inputconst(lltype.Void, helper) + c_name = hop.inputconst(lltype.Void, 'access_helper') + args_v = [hop.inputarg(arg, arg=i) + for i, arg in enumerate(hop.args_r)] + return hop.genop('jit_marker', [c_name, c_func] + args_v, + resulttype=hop.r_result) + return helper + return wrapper def _cast_to_box(llref): from pypy.jit.metainterp.history import AbstractValue @@ -42,6 +44,7 @@ return lltype.cast_opaque_ptr(llmemory.GCREF, cast_instance_to_base_ptr(obj)) +@register_helper(annmodel.SomePtr(llmemory.GCREF)) def resop_new(no, llargs, llres): from pypy.jit.metainterp.history import ResOperation @@ -49,15 +52,40 @@ res = _cast_to_box(llres) return _cast_to_gcref(ResOperation(no, args, res)) -register_helper(resop_new, annmodel.SomePtr(llmemory.GCREF)) - +@register_helper(annmodel.SomePtr(llmemory.GCREF)) def boxint_new(no): from pypy.jit.metainterp.history import BoxInt return _cast_to_gcref(BoxInt(no)) -register_helper(boxint_new, annmodel.SomePtr(llmemory.GCREF)) - -def resop_opnum(llop): +@register_helper(annmodel.SomeInteger()) +def resop_getopnum(llop): return _cast_to_resop(llop).getopnum() -register_helper(resop_opnum, annmodel.SomeInteger()) +@register_helper(annmodel.SomeString(can_be_None=True)) +def resop_getopname(llop): + return llstr(_cast_to_resop(llop).getopname()) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_getarg(llop, no): + return _cast_to_gcref(_cast_to_resop(llop).getarg(no)) + +@register_helper(annmodel.SomeInteger()) +def box_getint(llbox): + return _cast_to_box(llbox).getint() + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_clone(llbox): + return _cast_to_gcref(_cast_to_box(llbox).clonebox()) + +@
register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_constbox(llbox): + return _cast_to_gcref(_cast_to_box(llbox).constbox()) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_nonconstbox(llbox): + return _cast_to_gcref(_cast_to_box(llbox).nonconstbox()) + +@register_helper(annmodel.SomeBool()) +def box_isconst(llbox): + from pypy.jit.metainterp.history import Const + return isinstance(_cast_to_box(llbox), Const) From noreply at buildbot.pypy.org Sun Jan 8 19:55:12 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 8 Jan 2012 19:55:12 +0100 (CET) Subject: [pypy-commit] pypy default: added support for float getinteriorfield_raws Message-ID: <20120108185512.5CFB282110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51143:e89672d5d28f Date: 2012-01-08 12:53 -0600 http://bitbucket.org/pypy/pypy/changeset/e89672d5d28f/ Log: added support for float getinteriorfield_raws diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -234,11 +234,11 @@ # longlongs are treated as floats, see # e.g. 
llsupport/descr.py:getDescrClass is_float = True - elif kind == 'u': + elif kind == 'u' or kind == 's': # they're all False pass else: - assert False, "unsupported ffitype or kind" + raise NotImplementedError("unsupported ffitype or kind: %s" % kind) # fieldsize = rffi.getintfield(ffitype, 'c_size') return self.optimizer.cpu.interiorfielddescrof_dynamic( diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -148,28 +148,38 @@ self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4, 'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2}) - def test_array_getitem_uint8(self): + def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE): + reds = ["n", "i", "s", "data"] + if COMPUTE_TYPE is lltype.Float: + # Move the float var to the back. + reds.remove("s") + reds.append("s") myjitdriver = JitDriver( greens = [], - reds = ["n", "i", "s", "data"], + reds = reds, ) def f(data, n): - i = s = 0 + i = 0 + s = rffi.cast(COMPUTE_TYPE, 0) while i < n: myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data) - s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0)) + s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0)) i += 1 return s + def main(n): + with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data: + data[0] = rffi.cast(TYPE, 200) + return f(data, n) + assert self.meta_interp(main, [10]) == 2000 - def main(n): - with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data: - data[0] = rffi.cast(rffi.UCHAR, 200) - return f(data, n) - - assert self.meta_interp(main, [10]) == 2000 + def test_array_getitem_uint8(self): + self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed) self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2, 'guard_true': 2, 'int_add': 4}) + def test_array_getitem_float(self): + self._test_getitem_type(rffi.FLOAT, types.float, 
lltype.Float) + class TestFfiCall(FfiCallTests, LLJitMixin): supports_all = False From noreply at buildbot.pypy.org Sun Jan 8 20:46:52 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 20:46:52 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: expose some more of API Message-ID: <20120108194652.5061082110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51144:4628182fa0e4 Date: 2012-01-08 21:46 +0200 http://bitbucket.org/pypy/pypy/changeset/4628182fa0e4/ Log: expose some more of API diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitportal.py --- a/pypy/jit/metainterp/test/test_jitportal.py +++ b/pypy/jit/metainterp/test/test_jitportal.py @@ -144,5 +144,12 @@ assert jit_hooks.box_isconst(box3) box4 = jit_hooks.box_nonconstbox(box) assert not jit_hooks.box_isconst(box4) + box5 = jit_hooks.boxint_new(18) + jit_hooks.resop_setarg(op, 0, box5) + assert jit_hooks.resop_getarg(op, 0) == box5 + box6 = jit_hooks.resop_getresult(op) + assert jit_hooks.box_getint(box6) == 1 + jit_hooks.resop_setresult(op, box5) + assert jit_hooks.resop_getresult(op) == box5 self.meta_interp(main, []) diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -113,13 +113,34 @@ ops_offset.get(op, 0), logops.repr_of_resop(op)) for op in operations] -@unwrap_spec(num=int, offset=int, repr=str) -def descr_new_resop(space, w_tp, num, w_args, w_res=None, offset=-1, +class WrappedBox(Wrappable): + """ A class representing a single box + """ + def __init__(self, llbox): + self.llbox = llbox + + def descr_getint(self, space): + return space.wrap(jit_hooks.box_getint(self.llbox)) + +@unwrap_spec(no=int) +def descr_new_box(space, w_tp, no): + return WrappedBox(jit_hooks.boxint_new(no)) + +WrappedBox.typedef = TypeDef( + 'Box', + __new__ = interp2app(descr_new_box), 
getint = interp2app(WrappedBox.descr_getint), +) + +@unwrap_spec(num=int, offset=int, repr=str, res=WrappedBox) +def descr_new_resop(space, w_tp, num, w_args, res, offset=-1, repr=''): args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in space.listview(w_args)] - llres = space.interp_w(WrappedBox, w_res).llbox - # XXX None case + if res is None: + llres = jit_hooks.emptyval() + else: + llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) class WrappedOp(Wrappable): @@ -143,6 +164,19 @@ def descr_getarg(self, space, no): return WrappedBox(jit_hooks.resop_getarg(self.op, no)) + @unwrap_spec(no=int, box=WrappedBox) + def descr_setarg(self, space, no, box): + jit_hooks.resop_setarg(self.op, no, box.llbox) + return space.w_None + + def descr_getresult(self, space): + return WrappedBox(jit_hooks.resop_getresult(self.op)) + + @unwrap_spec(box=WrappedBox) + def descr_setresult(self, space, box): + jit_hooks.resop_setresult(self.op, box.llbox) + return space.w_None + WrappedOp.typedef = TypeDef( 'ResOperation', __doc__ = WrappedOp.__doc__, @@ -151,24 +185,8 @@ num = GetSetProperty(WrappedOp.descr_num), name = GetSetProperty(WrappedOp.descr_name), getarg = interp2app(WrappedOp.descr_getarg), + setarg = interp2app(WrappedOp.descr_setarg), + result = GetSetProperty(WrappedOp.descr_getresult, + WrappedOp.descr_setresult) ) WrappedOp.acceptable_as_base_class = False - -class WrappedBox(Wrappable): - """ A class representing a single box - """ - def __init__(self, llbox): - self.llbox = llbox - - def descr_getint(self, space): - return space.wrap(jit_hooks.box_getint(self.llbox)) - -@unwrap_spec(no=int) -def descr_new_box(space, w_tp, no): - return WrappedBox(jit_hooks.boxint_new(no)) - -WrappedBox.typedef = TypeDef( - 'Box', - __new__ = interp2app(descr_new_box), - getint = interp2app(WrappedBox.descr_getint), -) diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ 
b/pypy/module/pypyjit/policy.py @@ -88,9 +88,11 @@ for w_item in space.listview(w_res): item = space.interp_w(WrappedOp, w_item) l.append(jit_hooks._cast_to_resop(item.op)) - operations[:] = l # modifying operations above is probably not + del operations[:] # modifying operations above is probably not # a great idea since types may not work and we'll end up with # half-working list and a segfault/fatal RPython error + for elem in l: + operations.append(elem) except OperationError, e: e.write_unraisable(space, "jit hook ", cache.w_compile_hook) cache.in_recursion = False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -193,3 +193,9 @@ assert op.name == 'int_add' box = op.getarg(0) assert box.getint() == 1 + box2 = op.result + assert box2.getint() == 4 + op.setarg(0, box2) + assert op.getarg(0).getint() == 4 + op.result = box + assert op.result.getint() == 1 diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -44,6 +44,9 @@ return lltype.cast_opaque_ptr(llmemory.GCREF, cast_instance_to_base_ptr(obj)) +def emptyval(): + return lltype.nullptr(llmemory.GCREF.TO) + @register_helper(annmodel.SomePtr(llmemory.GCREF)) def resop_new(no, llargs, llres): from pypy.jit.metainterp.history import ResOperation @@ -69,6 +72,18 @@ def resop_getarg(llop, no): return _cast_to_gcref(_cast_to_resop(llop).getarg(no)) +@register_helper(annmodel.s_None) +def resop_setarg(llop, no, llbox): + _cast_to_resop(llop).setarg(no, _cast_to_box(llbox)) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_getresult(llop): + return _cast_to_gcref(_cast_to_resop(llop).result) + +@register_helper(annmodel.s_None) +def resop_setresult(llop, llbox): + _cast_to_resop(llop).result = _cast_to_box(llbox) + @register_helper(annmodel.SomeInteger()) def box_getint(llbox): 
return _cast_to_box(llbox).getint() From noreply at buildbot.pypy.org Sun Jan 8 20:57:34 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 20:57:34 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: obscure translation fix and a real fix Message-ID: <20120108195734.5909182110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51145:6014710c801a Date: 2012-01-08 21:57 +0200 http://bitbucket.org/pypy/pypy/changeset/6014710c801a/ Log: obscure translation fix and a real fix diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -17,6 +17,7 @@ def __init__(self, space): self.w_compile_hook = space.w_None self.w_abort_hook = space.w_None + self.w_optimize_hook = space.w_None def wrap_greenkey(space, jitdriver, greenkey): if jitdriver.name == 'pypyjit': @@ -174,6 +175,7 @@ @unwrap_spec(box=WrappedBox) def descr_setresult(self, space, box): + assert isinstance(box, WrappedBox) jit_hooks.resop_setresult(self.op, box.llbox) return space.w_None From noreply at buildbot.pypy.org Sun Jan 8 21:03:51 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 21:03:51 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: remove nonworking stuff and return space.w_None Message-ID: <20120108200351.F222682110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51146:22a0d8fd2ca8 Date: 2012-01-08 22:03 +0200 http://bitbucket.org/pypy/pypy/changeset/22a0d8fd2ca8/ Log: remove nonworking stuff and return space.w_None diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -60,7 +60,6 @@ cache = space.fromcache(Cache) cache.w_compile_hook = w_hook cache.in_recursion = NonConstant(False) - return space.w_None def 
set_optimize_hook(space, w_hook): """ set_compile_hook(hook) @@ -91,7 +90,6 @@ cache = space.fromcache(Cache) cache.w_optimize_hook = w_hook cache.in_recursion = NonConstant(False) - return space.w_None def set_abort_hook(space, w_hook): """ set_abort_hook(hook) @@ -107,7 +105,6 @@ cache = space.fromcache(Cache) cache.w_abort_hook = w_hook cache.in_recursion = NonConstant(False) - return space.w_None def wrap_oplist(space, logops, operations, ops_offset): return [WrappedOp(jit_hooks._cast_to_gcref(op), @@ -168,16 +165,13 @@ @unwrap_spec(no=int, box=WrappedBox) def descr_setarg(self, space, no, box): jit_hooks.resop_setarg(self.op, no, box.llbox) - return space.w_None def descr_getresult(self, space): return WrappedBox(jit_hooks.resop_getresult(self.op)) - @unwrap_spec(box=WrappedBox) - def descr_setresult(self, space, box): - assert isinstance(box, WrappedBox) + def descr_setresult(self, space, w_box): + box = space.interp_w(WrappedBox, w_box) jit_hooks.resop_setresult(self.op, box.llbox) - return space.w_None WrappedOp.typedef = TypeDef( 'ResOperation', From noreply at buildbot.pypy.org Sun Jan 8 21:17:03 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 8 Jan 2012 21:17:03 +0100 (CET) Subject: [pypy-commit] pypy default: issue900: Implement processor pinning on win32, Message-ID: <20120108201703.D4C0482110@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r51147:a7e8e37cbf30 Date: 2012-01-08 21:16 +0100 http://bitbucket.org/pypy/pypy/changeset/a7e8e37cbf30/ Log: issue900: Implement processor pinning on win32, should fix inconsistent figures with cProfile. 
diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py --- a/pypy/module/_lsprof/interp_lsprof.py +++ b/pypy/module/_lsprof/interp_lsprof.py @@ -19,8 +19,9 @@ # cpu affinity settings srcdir = py.path.local(pypydir).join('translator', 'c', 'src') -eci = ExternalCompilationInfo(separate_module_files= - [srcdir.join('profiling.c')]) +eci = ExternalCompilationInfo( + separate_module_files=[srcdir.join('profiling.c')], + export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling']) c_setup_profiling = rffi.llexternal('pypy_setup_profiling', [], lltype.Void, diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c --- a/pypy/translator/c/src/profiling.c +++ b/pypy/translator/c/src/profiling.c @@ -29,6 +29,35 @@ profiling_setup = 0; } } + +#elif defined(_WIN32) +#include + +DWORD_PTR base_affinity_mask; +int profiling_setup = 0; + +void pypy_setup_profiling() { + if (!profiling_setup) { + DWORD_PTR affinity_mask, system_affinity_mask; + GetProcessAffinityMask(GetCurrentProcess(), + &base_affinity_mask, &system_affinity_mask); + affinity_mask = 1; + /* Pick one cpu allowed by the system */ + if (system_affinity_mask) + while ((affinity_mask & system_affinity_mask) == 0) + affinity_mask <<= 1; + SetProcessAffinityMask(GetCurrentProcess(), affinity_mask); + profiling_setup = 1; + } +} + +void pypy_teardown_profiling() { + if (profiling_setup) { + SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask); + profiling_setup = 0; + } +} + #else void pypy_setup_profiling() { } void pypy_teardown_profiling() { } From noreply at buildbot.pypy.org Sun Jan 8 21:57:05 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 21:57:05 +0100 (CET) Subject: [pypy-commit] pypy default: fix test_resoperaion? 
Message-ID: <20120108205705.9017482110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51148:58b011f973ba Date: 2012-01-08 22:48 +0200 http://bitbucket.org/pypy/pypy/changeset/58b011f973ba/ Log: fix test_resoperaion? diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py --- a/pypy/jit/metainterp/test/test_resoperation.py +++ b/pypy/jit/metainterp/test/test_resoperation.py @@ -30,17 +30,17 @@ cls = rop.opclasses[rop.rop.INT_ADD] assert issubclass(cls, rop.PlainResOp) assert issubclass(cls, rop.BinaryOp) - assert cls.getopnum.im_func(None) == rop.rop.INT_ADD + assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD cls = rop.opclasses[rop.rop.CALL] assert issubclass(cls, rop.ResOpWithDescr) assert issubclass(cls, rop.N_aryOp) - assert cls.getopnum.im_func(None) == rop.rop.CALL + assert cls.getopnum.im_func(cls) == rop.rop.CALL cls = rop.opclasses[rop.rop.GUARD_TRUE] assert issubclass(cls, rop.GuardResOp) assert issubclass(cls, rop.UnaryOp) - assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE + assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE def test_mixins_in_common_base(): INT_ADD = rop.opclasses[rop.rop.INT_ADD] From noreply at buildbot.pypy.org Sun Jan 8 21:57:06 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 21:57:06 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120108205706.B74DB82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51149:9835710fde04 Date: 2012-01-08 22:56 +0200 http://bitbucket.org/pypy/pypy/changeset/9835710fde04/ Log: merge diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -234,11 +234,11 @@ # longlongs are treated as floats, see # e.g. 
llsupport/descr.py:getDescrClass is_float = True - elif kind == 'u': + elif kind == 'u' or kind == 's': # they're all False pass else: - assert False, "unsupported ffitype or kind" + raise NotImplementedError("unsupported ffitype or kind: %s" % kind) # fieldsize = rffi.getintfield(ffitype, 'c_size') return self.optimizer.cpu.interiorfielddescrof_dynamic( diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -148,28 +148,38 @@ self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4, 'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2}) - def test_array_getitem_uint8(self): + def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE): + reds = ["n", "i", "s", "data"] + if COMPUTE_TYPE is lltype.Float: + # Move the float var to the back. + reds.remove("s") + reds.append("s") myjitdriver = JitDriver( greens = [], - reds = ["n", "i", "s", "data"], + reds = reds, ) def f(data, n): - i = s = 0 + i = 0 + s = rffi.cast(COMPUTE_TYPE, 0) while i < n: myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data) - s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0)) + s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0)) i += 1 return s + def main(n): + with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data: + data[0] = rffi.cast(TYPE, 200) + return f(data, n) + assert self.meta_interp(main, [10]) == 2000 - def main(n): - with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data: - data[0] = rffi.cast(rffi.UCHAR, 200) - return f(data, n) - - assert self.meta_interp(main, [10]) == 2000 + def test_array_getitem_uint8(self): + self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed) self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2, 'guard_true': 2, 'int_add': 4}) + def test_array_getitem_float(self): + self._test_getitem_type(rffi.FLOAT, types.float, 
lltype.Float) + class TestFfiCall(FfiCallTests, LLJitMixin): supports_all = False diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py --- a/pypy/module/_lsprof/interp_lsprof.py +++ b/pypy/module/_lsprof/interp_lsprof.py @@ -19,8 +19,9 @@ # cpu affinity settings srcdir = py.path.local(pypydir).join('translator', 'c', 'src') -eci = ExternalCompilationInfo(separate_module_files= - [srcdir.join('profiling.c')]) +eci = ExternalCompilationInfo( + separate_module_files=[srcdir.join('profiling.c')], + export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling']) c_setup_profiling = rffi.llexternal('pypy_setup_profiling', [], lltype.Void, diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c --- a/pypy/translator/c/src/profiling.c +++ b/pypy/translator/c/src/profiling.c @@ -29,6 +29,35 @@ profiling_setup = 0; } } + +#elif defined(_WIN32) +#include + +DWORD_PTR base_affinity_mask; +int profiling_setup = 0; + +void pypy_setup_profiling() { + if (!profiling_setup) { + DWORD_PTR affinity_mask, system_affinity_mask; + GetProcessAffinityMask(GetCurrentProcess(), + &base_affinity_mask, &system_affinity_mask); + affinity_mask = 1; + /* Pick one cpu allowed by the system */ + if (system_affinity_mask) + while ((affinity_mask & system_affinity_mask) == 0) + affinity_mask <<= 1; + SetProcessAffinityMask(GetCurrentProcess(), affinity_mask); + profiling_setup = 1; + } +} + +void pypy_teardown_profiling() { + if (profiling_setup) { + SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask); + profiling_setup = 0; + } +} + #else void pypy_setup_profiling() { } void pypy_teardown_profiling() { } From noreply at buildbot.pypy.org Sun Jan 8 22:37:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 8 Jan 2012 22:37:47 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: oops Message-ID: <20120108213747.40F0382110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks 
Changeset: r51150:03976db091c4 Date: 2012-01-08 23:37 +0200 http://bitbucket.org/pypy/pypy/changeset/03976db091c4/ Log: oops diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -83,6 +83,7 @@ w_arg, space.newlist(list_w)) if space.is_w(w_res, space.w_None): + cache.in_recursion = False return l = [] for w_item in space.listview(w_res): From noreply at buildbot.pypy.org Sun Jan 8 23:22:03 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 8 Jan 2012 23:22:03 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: cleanup but no real progress Message-ID: <20120108222203.A15A582110@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51151:de99533d42d0 Date: 2012-01-09 00:20 +0200 http://bitbucket.org/pypy/pypy/changeset/de99533d42d0/ Log: cleanup but no real progress diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -58,10 +58,10 @@ a = numpypy.array(a) return a.min() -def max(a): +def max(a, axis=None): if not hasattr(a, "max"): a = numpypy.array(a) - return a.max() + return a.max(axis) def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -39,8 +39,8 @@ axisreduce_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self','result', 'ri', 'frame', 'nextval', 'dtype', 'value'], - get_printable_location=signature.new_printable_location('reduce'), + reds=['identity', 'self','result', 'ri', 'frame', 'nextval', 'dtype', 'value'], + get_printable_location=signature.new_printable_location('axisreduce'), ) @@ -692,6 +692,7 @@ # to allow 
garbage-collecting them raise NotImplementedError + @jit.unroll_safe def compute(self): result = W_NDimArray(self.size, self.shape, self.find_dtype()) shapelen = len(self.shape) @@ -757,6 +758,8 @@ class Reduce(VirtualArray): + _immutable_fields_ = ['dim', 'binfunc', 'dtype', 'identity'] + def __init__(self, binfunc, name, dim, res_dtype, values, identity=None): shape = values.shape[0:dim] + values.shape[dim + 1:len(values.shape)] VirtualArray.__init__(self, name, shape, res_dtype) @@ -789,11 +792,13 @@ value = self.identity.convert_to(self.dtype) return value + @jit.unroll_safe def compute(self): dtype = self.dtype result = W_NDimArray(self.size, self.shape, dtype) self.values = self.values.get_concrete() shapelen = len(result.shape) + identity = self.identity sig = self.find_sig(res_shape=result.shape, arr=self.values) ri = ArrayIterator(result.size) frame = sig.create_frame(self.values, dim=self.dim) @@ -804,9 +809,14 @@ value=value, sig=sig, shapelen=shapelen, ri=ri, nextval=nextval, dtype=dtype, + identity=identity, result=result) if frame.iterators[0].axis_done: - value = self.get_identity(sig, frame, shapelen) + if identity is None: + value = sig.eval(frame, self.values).convert_to(dtype) + frame.next(shapelen) + else: + value = identity.convert_to(dtype) ri = ri.next(shapelen) assert isinstance(sig, signature.ReduceSignature) nextval = sig.eval(frame, self.values).convert_to(dtype) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -744,13 +744,11 @@ from numpypy import arange a = arange(15).reshape(5, 3) assert a.sum() == 105 + assert a.max() == 14 assert (a.sum(0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() - b = a.copy() - #b should be an array, not a view - assert (b.sum(1) == [3, 12, 21, 
30, 39]).all() def test_identity(self): from numpypy import identity, array diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -127,9 +127,17 @@ def test_axissum(self): result = self.run("axissum") assert result == 30 - self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 2, - "int_add": 1, "int_ge": 1, "guard_false": 1, - "jump": 1, 'arraylen_gc': 1}) + self.check_simple_loop({'arraylen_gc': 1, + 'call': 1, + 'getfield_gc': 3, + "getinteriorfield_raw": 1, + "guard_class": 1, + "guard_false": 2, + 'guard_no_exception': 1, + "float_add": 1, + "jump": 1, + 'setinteriorfield_raw': 1, + }) def define_prod(): return """ From noreply at buildbot.pypy.org Sun Jan 8 23:38:53 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 8 Jan 2012 23:38:53 +0100 (CET) Subject: [pypy-commit] pypy default: ArgErr.getmsg() does not include the function name anymore. Message-ID: <20120108223853.3BB7F82110@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r51152:62df4f51cdc8 Date: 2012-01-08 20:35 +0100 http://bitbucket.org/pypy/pypy/changeset/62df4f51cdc8/ Log: ArgErr.getmsg() does not include the function name anymore. This will make it easier to support Python3 and its unicode names. 
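The refactoring in the patch below can be sketched roughly as follows. The class name and message strings mirror the diff; the `raise_type_error()` helper is invented here purely for illustration and is not PyPy's actual code path (the real callers go through `operationerrfmt`):

```python
# Sketch of the refactoring: ArgErr subclasses stop embedding the
# function name in their messages; the caller prepends "<name>() ".
# Names follow the diff below, but this is an illustrative sketch,
# not PyPy's real implementation.

class ArgErr(Exception):
    def getmsg(self):
        raise NotImplementedError

class ArgErrMultipleValues(ArgErr):
    def __init__(self, argname):
        self.argname = argname

    def getmsg(self):
        # No function name here any more -- only the bare message.
        return "got multiple values for keyword argument '%s'" % self.argname

def raise_type_error(fnname, err):
    # The caller formats "name() message", so a unicode function name
    # (as needed for Python 3) never has to flow through the error class.
    raise TypeError("%s() %s" % (fnname, err.getmsg()))
```

With this split, name formatting is centralized at the call site, which is why the tests below now expect messages like `"foo() msg"` to be assembled by the caller rather than by `getmsg()`.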
diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -257,7 +257,8 @@ try: inputcells = args.match_signature(signature, defs_s) except ArgErr, e: - raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) + raise TypeError("signature mismatch: %s() %s" % + (self.name, e.getmsg())) return inputcells def specialize(self, inputcells, op=None): diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, 
msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - msg = "%s() got an unexpected keyword argument '%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = 
err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + 
assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): From noreply at buildbot.pypy.org Mon Jan 9 11:38:33 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 9 Jan 2012 11:38:33 +0100 (CET) Subject: [pypy-commit] pypy concurrent-marksweep: Remove the extra debug prints. Message-ID: <20120109103833.B8DBE82110@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: concurrent-marksweep Changeset: r51153:75ce27172ee1 Date: 2012-01-09 11:38 +0100 http://bitbucket.org/pypy/pypy/changeset/75ce27172ee1/ Log: Remove the extra debug prints. 
diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py --- a/pypy/rpython/memory/gc/concurrentgen.py +++ b/pypy/rpython/memory/gc/concurrentgen.py @@ -240,7 +240,7 @@ hdr = self.header(obj) hdr.tid = self.combine(typeid, self.current_young_marker, 0) hdr.next = self.new_young_objects - debug_print("malloc:", rawtotalsize, obj) + #debug_print("malloc:", rawtotalsize, obj) self.new_young_objects = hdr self.new_young_objects_size += r_uint(rawtotalsize) if self.new_young_objects_size > self.nursery_limit: @@ -271,7 +271,7 @@ hdr.next = self.new_young_objects totalsize = llarena.round_up_for_allocation(totalsize) rawtotalsize = raw_malloc_usage(totalsize) - debug_print("malloc:", rawtotalsize, obj) + #debug_print("malloc:", rawtotalsize, obj) self.new_young_objects = hdr self.new_young_objects_size += r_uint(rawtotalsize) if self.new_young_objects_size > self.nursery_limit: @@ -326,7 +326,7 @@ cym = self.current_young_marker com = self.current_old_marker mark = self.get_mark(obj) - debug_print("deletion_barrier:", mark, obj) + #debug_print("deletion_barrier:", mark, obj) # if mark == com: # most common case, make it fast # @@ -661,8 +661,8 @@ # NB. 
it's ok to edit 'gray_objects' from the mutator thread here, # because the collector thread is not running yet obj = root.address[0] - debug_print("_add_stack_root", obj) - assert 'DEAD' not in repr(obj) + #debug_print("_add_stack_root", obj) + #assert 'DEAD' not in repr(obj) self.get_mark(obj) self.collector.gray_objects.append(obj) @@ -699,7 +699,7 @@ while list != self.NULL: obj = llmemory.cast_ptr_to_adr(list) + size_gc_header size1 = size_gc_header + self.get_size(obj) - print "debug:", llmemory.raw_malloc_usage(size1) + #print "debug:", llmemory.raw_malloc_usage(size1) size += llmemory.raw_malloc_usage(size1) # detect loops ll_assert(list != previous, "loop!") @@ -707,7 +707,7 @@ if count & (count-1) == 0: # only on powers of two, to previous = list # detect loops of any size list = list.next - print "\tTOTAL:", size + #print "\tTOTAL:", size ll_assert(size == totalsize, "bogus total size in linked list") return count @@ -979,7 +979,7 @@ # we scan a modified content --- and the original content # is never scanned. # - debug_print("mark:", obj) + #debug_print("mark:", obj) self.gc.trace(obj, self._collect_add_pending, None) self.set_mark(obj, com) # @@ -1033,7 +1033,7 @@ if mark == still_not_marked: # the object is still not marked. Free it. 
blockadr = llmemory.cast_ptr_to_adr(hdr) - debug_print("free:", blockadr + size_gc_header) + #debug_print("free:", blockadr + size_gc_header) blockadr = llarena.getfakearenaaddress(blockadr) llarena.arena_free(blockadr) # From noreply at buildbot.pypy.org Mon Jan 9 11:56:36 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:36 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: update some more tests Message-ID: <20120109105636.B55AA82110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51154:9688a1080b2b Date: 2011-12-30 20:28 +0100 http://bitbucket.org/pypy/pypy/changeset/9688a1080b2b/ Log: update some more tests diff --git a/pypy/jit/backend/arm/test/test_gc_integration.py b/pypy/jit/backend/arm/test/test_gc_integration.py --- a/pypy/jit/backend/arm/test/test_gc_integration.py +++ b/pypy/jit/backend/arm/test/test_gc_integration.py @@ -46,82 +46,19 @@ return ['compressed'] + shape[1:] -class MockGcRootMap2(object): - is_shadow_stack = False - - def get_basic_shape(self, is_64_bit): - return ['shape'] - - def add_frame_offset(self, shape, offset): - shape.append(offset) - - def add_callee_save_reg(self, shape, reg_index): - index_to_name = {1: 'ebx', 2: 'esi', 3: 'edi'} - shape.append(index_to_name[reg_index]) - - def compress_callshape(self, shape, datablockwrapper): - assert datablockwrapper == 'fakedatablockwrapper' - assert shape[0] == 'shape' - return ['compressed'] + shape[1:] - - class MockGcDescr(GcCache): - is_shadow_stack = False - - def get_funcptr_for_new(self): - return 123 - - get_funcptr_for_newarray = get_funcptr_for_new - get_funcptr_for_newstr = get_funcptr_for_new - get_funcptr_for_newunicode = get_funcptr_for_new get_malloc_slowpath_addr = None - + write_barrier_descr = None moving_gc = True gcrootmap = MockGcRootMap() def initialize(self): pass - record_constptrs = GcLLDescr_framework.record_constptrs.im_func + _record_constptrs = GcLLDescr_framework._record_constptrs.im_func 
rewrite_assembler = GcLLDescr_framework.rewrite_assembler.im_func -class TestRegallocDirectGcIntegration(object): - - def test_mark_gc_roots(self): - py.test.skip('roots') - cpu = CPU(None, None) - cpu.setup_once() - regalloc = Regalloc(MockAssembler(cpu, MockGcDescr(False))) - regalloc.assembler.datablockwrapper = 'fakedatablockwrapper' - boxes = [BoxPtr() for i in range(len(ARMv7RegisterManager.all_regs))] - longevity = {} - for box in boxes: - longevity[box] = (0, 1) - regalloc.fm = ARMFrameManager() - regalloc.rm = ARMv7RegisterManager(longevity, regalloc.fm, - assembler=regalloc.assembler) - regalloc.xrm = VFPRegisterManager(longevity, regalloc.fm, - assembler=regalloc.assembler) - cpu = regalloc.assembler.cpu - for box in boxes: - regalloc.rm.try_allocate_reg(box) - TP = lltype.FuncType([], lltype.Signed) - calldescr = cpu.calldescrof(TP, TP.ARGS, TP.RESULT, - EffectInfo.MOST_GENERAL) - regalloc.rm._check_invariants() - box = boxes[0] - regalloc.position = 0 - regalloc.consider_call(ResOperation(rop.CALL, [box], BoxInt(), - calldescr)) - assert len(regalloc.assembler.movs) == 3 - # - mark = regalloc.get_mark_gc_roots(cpu.gc_ll_descr.gcrootmap) - assert mark[0] == 'compressed' - base = -WORD * FRAME_FIXED_SIZE - expected = ['ebx', 'esi', 'edi', base, base-WORD, base-WORD*2] - assert dict.fromkeys(mark[1:]) == dict.fromkeys(expected) - class TestRegallocGcIntegration(BaseTestRegalloc): cpu = CPU(None, None) @@ -199,42 +136,32 @@ ''' self.interpret(ops, [0, 0, 0, 0, 0, 0, 0, 0, 0], run=False) +NOT_INITIALIZED = chr(0xdd) + class GCDescrFastpathMalloc(GcLLDescription): gcrootmap = None - expected_malloc_slowpath_size = WORD*2 + write_barrier_descr = None def __init__(self): - GcCache.__init__(self, False) + GcLLDescription.__init__(self, None) # create a nursery - NTP = rffi.CArray(lltype.Signed) - self.nursery = lltype.malloc(NTP, 16, flavor='raw') - self.addrs = lltype.malloc(rffi.CArray(lltype.Signed), 3, + NTP = rffi.CArray(lltype.Char) + self.nursery = 
lltype.malloc(NTP, 64, flavor='raw') + for i in range(64): + self.nursery[i] = NOT_INITIALIZED + self.addrs = lltype.malloc(rffi.CArray(lltype.Signed), 2, flavor='raw') self.addrs[0] = rffi.cast(lltype.Signed, self.nursery) - self.addrs[1] = self.addrs[0] + 16*WORD - self.addrs[2] = 0 - # 16 WORDs + self.addrs[1] = self.addrs[0] + 64 + self.calls = [] def malloc_slowpath(size): - assert size == self.expected_malloc_slowpath_size + self.calls.append(size) + # reset the nursery nadr = rffi.cast(lltype.Signed, self.nursery) self.addrs[0] = nadr + size - self.addrs[2] += 1 return nadr - self.malloc_slowpath = malloc_slowpath - self.MALLOC_SLOWPATH = lltype.FuncType([lltype.Signed], - lltype.Signed) - self._counter = 123000 - - def can_inline_malloc(self, descr): - return True - - def get_funcptr_for_new(self): - return 42 -# return llhelper(lltype.Ptr(self.NEW_TP), self.new) - - def init_size_descr(self, S, descr): - descr.tid = self._counter - self._counter += 1 + self.generate_function('malloc_nursery', malloc_slowpath, + [lltype.Signed], lltype.Signed) def get_nursery_free_addr(self): return rffi.cast(lltype.Signed, self.addrs) @@ -243,209 +170,61 @@ return rffi.cast(lltype.Signed, self.addrs) + WORD def get_malloc_slowpath_addr(self): - fptr = llhelper(lltype.Ptr(self.MALLOC_SLOWPATH), self.malloc_slowpath) - return rffi.cast(lltype.Signed, fptr) + return self.get_malloc_fn_addr('malloc_nursery') - get_funcptr_for_newarray = None - get_funcptr_for_newstr = None - get_funcptr_for_newunicode = None + def check_nothing_in_nursery(self): + # CALL_MALLOC_NURSERY should not write anything in the nursery + for i in range(64): + assert self.nursery[i] == NOT_INITIALIZED class TestMallocFastpath(BaseTestRegalloc): def setup_method(self, method): cpu = CPU(None, None) - cpu.vtable_offset = WORD cpu.gc_ll_descr = GCDescrFastpathMalloc() cpu.setup_once() + self.cpu = cpu - # hack: specify 'tid' explicitly, because this test is not running - # with the gc transformer - NODE = 
lltype.GcStruct('node', ('tid', lltype.Signed), - ('value', lltype.Signed)) - nodedescr = cpu.sizeof(NODE) - valuedescr = cpu.fielddescrof(NODE, 'value') - - self.cpu = cpu - self.nodedescr = nodedescr - vtable = lltype.malloc(rclass.OBJECT_VTABLE, immortal=True) - vtable_int = cpu.cast_adr_to_int(llmemory.cast_ptr_to_adr(vtable)) - NODE2 = lltype.GcStruct('node2', - ('parent', rclass.OBJECT), - ('tid', lltype.Signed), - ('vtable', lltype.Ptr(rclass.OBJECT_VTABLE))) - descrsize = cpu.sizeof(NODE2) - heaptracker.register_known_gctype(cpu, vtable, NODE2) - self.descrsize = descrsize - self.vtable_int = vtable_int - - self.namespace = locals().copy() - def test_malloc_fastpath(self): ops = ''' - [i0] - p0 = new(descr=nodedescr) - setfield_gc(p0, i0, descr=valuedescr) - finish(p0) + [] + p0 = call_malloc_nursery(16) + p1 = call_malloc_nursery(32) + p2 = call_malloc_nursery(16) + finish(p0, p1, p2) ''' - self.interpret(ops, [42]) - # check the nursery + self.interpret(ops, []) + # check the returned pointers gc_ll_descr = self.cpu.gc_ll_descr - assert gc_ll_descr.nursery[0] == self.nodedescr.tid - assert gc_ll_descr.nursery[1] == 42 nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) - assert gc_ll_descr.addrs[0] == nurs_adr + (WORD*2) - assert gc_ll_descr.addrs[2] == 0 # slowpath never called + ref = self.cpu.get_latest_value_ref + assert rffi.cast(lltype.Signed, ref(0)) == nurs_adr + 0 + assert rffi.cast(lltype.Signed, ref(1)) == nurs_adr + 16 + assert rffi.cast(lltype.Signed, ref(2)) == nurs_adr + 48 + # check the nursery content and state + gc_ll_descr.check_nothing_in_nursery() + assert gc_ll_descr.addrs[0] == nurs_adr + 64 + # slowpath never called + assert gc_ll_descr.calls == [] def test_malloc_slowpath(self): ops = ''' [] - p0 = new(descr=nodedescr) - p1 = new(descr=nodedescr) - p2 = new(descr=nodedescr) - p3 = new(descr=nodedescr) - p4 = new(descr=nodedescr) - p5 = new(descr=nodedescr) - p6 = new(descr=nodedescr) - p7 = new(descr=nodedescr) - p8 = 
new(descr=nodedescr) - finish(p0, p1, p2, p3, p4, p5, p6, p7, p8) + p0 = call_malloc_nursery(16) + p1 = call_malloc_nursery(32) + p2 = call_malloc_nursery(24) # overflow + finish(p0, p1, p2) ''' self.interpret(ops, []) + # check the returned pointers + gc_ll_descr = self.cpu.gc_ll_descr + nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) + ref = self.cpu.get_latest_value_ref + assert rffi.cast(lltype.Signed, ref(0)) == nurs_adr + 0 + assert rffi.cast(lltype.Signed, ref(1)) == nurs_adr + 16 + assert rffi.cast(lltype.Signed, ref(2)) == nurs_adr + 0 + # check the nursery content and state + gc_ll_descr.check_nothing_in_nursery() + assert gc_ll_descr.addrs[0] == nurs_adr + 24 # this should call slow path once - gc_ll_descr = self.cpu.gc_ll_descr - nadr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) - assert gc_ll_descr.addrs[0] == nadr + (WORD*2) - assert gc_ll_descr.addrs[2] == 1 # slowpath called once - - def test_new_with_vtable(self): - ops = ''' - [i0, i1] - p0 = new_with_vtable(ConstClass(vtable)) - guard_class(p0, ConstClass(vtable)) [i0] - finish(i1) - ''' - self.interpret(ops, [0, 1]) - assert self.getint(0) == 1 - gc_ll_descr = self.cpu.gc_ll_descr - assert gc_ll_descr.nursery[0] == self.descrsize.tid - assert gc_ll_descr.nursery[1] == self.vtable_int - nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) - assert gc_ll_descr.addrs[0] == nurs_adr + (WORD*3) - assert gc_ll_descr.addrs[2] == 0 # slowpath never called - - -class Seen(Exception): - pass - - -class GCDescrFastpathMallocVarsize(GCDescrFastpathMalloc): - def can_inline_malloc_varsize(self, arraydescr, num_elem): - return num_elem < 5 - - def get_funcptr_for_newarray(self): - return 52 - - def init_array_descr(self, A, descr): - descr.tid = self._counter - self._counter += 1 - - def args_for_new_array(self, descr): - raise Seen("args_for_new_array") - - -class TestMallocVarsizeFastpath(BaseTestRegalloc): - def setup_method(self, method): - cpu = CPU(None, None) - cpu.vtable_offset = WORD 
- cpu.gc_ll_descr = GCDescrFastpathMallocVarsize() - cpu.setup_once() - self.cpu = cpu - - ARRAY = lltype.GcArray(lltype.Signed) - arraydescr = cpu.arraydescrof(ARRAY) - self.arraydescr = arraydescr - ARRAYCHAR = lltype.GcArray(lltype.Char) - arraychardescr = cpu.arraydescrof(ARRAYCHAR) - - self.namespace = locals().copy() - - def test_malloc_varsize_fastpath(self): - # Hack. Running the GcLLDescr_framework without really having - # a complete GC means that we end up with both the tid and the - # length being at offset 0. In this case, so the length overwrites - # the tid. This is of course only the case in this test class. - ops = ''' - [] - p0 = new_array(4, descr=arraydescr) - setarrayitem_gc(p0, 0, 142, descr=arraydescr) - setarrayitem_gc(p0, 3, 143, descr=arraydescr) - finish(p0) - ''' - self.interpret(ops, []) - # check the nursery - gc_ll_descr = self.cpu.gc_ll_descr - assert gc_ll_descr.nursery[0] == 4 - assert gc_ll_descr.nursery[1] == 142 - assert gc_ll_descr.nursery[4] == 143 - nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) - assert gc_ll_descr.addrs[0] == nurs_adr + (WORD*5) - assert gc_ll_descr.addrs[2] == 0 # slowpath never called - - def test_malloc_varsize_slowpath(self): - ops = ''' - [] - p0 = new_array(4, descr=arraydescr) - setarrayitem_gc(p0, 0, 420, descr=arraydescr) - setarrayitem_gc(p0, 3, 430, descr=arraydescr) - p1 = new_array(4, descr=arraydescr) - setarrayitem_gc(p1, 0, 421, descr=arraydescr) - setarrayitem_gc(p1, 3, 431, descr=arraydescr) - p2 = new_array(4, descr=arraydescr) - setarrayitem_gc(p2, 0, 422, descr=arraydescr) - setarrayitem_gc(p2, 3, 432, descr=arraydescr) - p3 = new_array(4, descr=arraydescr) - setarrayitem_gc(p3, 0, 423, descr=arraydescr) - setarrayitem_gc(p3, 3, 433, descr=arraydescr) - finish(p0, p1, p2, p3) - ''' - gc_ll_descr = self.cpu.gc_ll_descr - gc_ll_descr.expected_malloc_slowpath_size = 5*WORD - self.interpret(ops, []) - assert gc_ll_descr.addrs[2] == 1 # slowpath called once - - def 
test_malloc_varsize_too_big(self): - ops = ''' - [] - p0 = new_array(5, descr=arraydescr) - finish(p0) - ''' - py.test.raises(Seen, self.interpret, ops, []) - - def test_malloc_varsize_variable(self): - ops = ''' - [i0] - p0 = new_array(i0, descr=arraydescr) - finish(p0) - ''' - py.test.raises(Seen, self.interpret, ops, []) - - def test_malloc_array_of_char(self): - # check that fastpath_malloc_varsize() respects the alignment - # of the pointer in the nursery - ops = ''' - [] - p1 = new_array(1, descr=arraychardescr) - p2 = new_array(2, descr=arraychardescr) - p3 = new_array(3, descr=arraychardescr) - p4 = new_array(4, descr=arraychardescr) - finish(p1, p2, p3, p4) - ''' - self.interpret(ops, []) - p1 = self.getptr(0, llmemory.GCREF) - p2 = self.getptr(1, llmemory.GCREF) - p3 = self.getptr(2, llmemory.GCREF) - p4 = self.getptr(3, llmemory.GCREF) - assert p1._obj.intval & (WORD-1) == 0 # aligned - assert p2._obj.intval & (WORD-1) == 0 # aligned - assert p3._obj.intval & (WORD-1) == 0 # aligned - assert p4._obj.intval & (WORD-1) == 0 # aligned + assert gc_ll_descr.calls == [24] diff --git a/pypy/jit/backend/arm/test/test_generated.py b/pypy/jit/backend/arm/test/test_generated.py --- a/pypy/jit/backend/arm/test/test_generated.py +++ b/pypy/jit/backend/arm/test/test_generated.py @@ -137,7 +137,7 @@ looptoken = JitCellToken() cpu.compile_loop(inputargs, operations, looptoken) args = [-5 , 24 , 46 , -15 , 13 , -8 , 0 , -6 , 6 , 6] - op = cpu.execute_token(looptoken) + op = cpu.execute_token(looptoken, *args) assert op.identifier == 2 assert cpu.get_latest_value_int(0) == 24 assert cpu.get_latest_value_int(1) == -32 diff --git a/pypy/jit/backend/arm/test/test_regalloc.py b/pypy/jit/backend/arm/test/test_regalloc.py --- a/pypy/jit/backend/arm/test/test_regalloc.py +++ b/pypy/jit/backend/arm/test/test_regalloc.py @@ -151,20 +151,20 @@ loop = self.parse(ops) looptoken = JitCellToken() self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) - args = [] + arguments 
= [] for arg in args: if isinstance(arg, int): - args.append(arg) + arguments.append(arg) elif isinstance(arg, float): arg = longlong.getfloatstorage(arg) - args.append(arg) + arguments.append(arg) else: assert isinstance(lltype.typeOf(arg), lltype.Ptr) llgcref = lltype.cast_opaque_ptr(llmemory.GCREF, arg) - args.append(llgcref) + arguments.append(llgcref) loop._jitcelltoken = looptoken if run: - self.cpu.execute_token(looptoken, *args) + self.cpu.execute_token(looptoken, *arguments) return loop def prepare_loop(self, ops): From noreply at buildbot.pypy.org Mon Jan 9 11:56:37 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:37 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: remove assertion, that does not work anymore Message-ID: <20120109105637.DEB3082110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51155:72a4e791d5e5 Date: 2011-12-30 20:29 +0100 http://bitbucket.org/pypy/pypy/changeset/72a4e791d5e5/ Log: remove assertion, that does not work anymore diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -1039,8 +1039,6 @@ def malloc_cond(self, nursery_free_adr, nursery_top_adr, size): assert size & (WORD-1) == 0 # must be correctly aligned - size = max(size, self.cpu.gc_ll_descr.minimal_size_in_nursery) - size = (size + WORD - 1) & ~(WORD - 1) # round up self.mc.gen_load_int(r.r0.value, nursery_free_adr) self.mc.LDR_ri(r.r0.value, r.r0.value) From noreply at buildbot.pypy.org Mon Jan 9 11:56:39 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:39 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: make sure we get an int here Message-ID: <20120109105639.0D68F82110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51156:a796398e72b0 Date: 2011-12-30 20:29 +0100 http://bitbucket.org/pypy/pypy/changeset/a796398e72b0/ 
Log: make sure we get an int here diff --git a/pypy/jit/backend/arm/codebuilder.py b/pypy/jit/backend/arm/codebuilder.py --- a/pypy/jit/backend/arm/codebuilder.py +++ b/pypy/jit/backend/arm/codebuilder.py @@ -175,7 +175,8 @@ assert target_ofs & 0x3 == 0 self.write32(c << 28 | 0xA << 24 | (target_ofs >> 2) & 0xFFFFFF) - def BL(self, target, c=cond.AL): + def BL(self, addr, c=cond.AL): + target = rffi.cast(rffi.INT, addr) if c == cond.AL: self.ADD_ri(reg.lr.value, reg.pc.value, arch.PC_OFFSET / 2) self.LDR_ri(reg.pc.value, reg.pc.value, imm=-arch.PC_OFFSET / 2) From noreply at buildbot.pypy.org Mon Jan 9 11:56:40 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:40 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: fix tests Message-ID: <20120109105640.3492182110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51157:b7e4239284ca Date: 2011-12-31 16:14 +0100 http://bitbucket.org/pypy/pypy/changeset/b7e4239284ca/ Log: fix tests diff --git a/pypy/jit/backend/arm/test/test_recompilation.py b/pypy/jit/backend/arm/test/test_recompilation.py --- a/pypy/jit/backend/arm/test/test_recompilation.py +++ b/pypy/jit/backend/arm/test/test_recompilation.py @@ -71,19 +71,19 @@ i2 = int_lt(i1, 20) guard_true(i2, descr=fdescr1) [i1] jump(i1, i10, i11, i12, i13, i14, i15, i16, descr=targettoken) - ''', [0]) + ''', [0, 0, 0, 0, 0, 0, 0, 0]) other_loop = self.interpret(''' - [i3] + [i3, i10, i11, i12, i13, i14, i15, i16] label(i3, descr=targettoken2) guard_false(i3, descr=fdescr2) [i3] jump(i3, descr=targettoken2) - ''', [1]) + ''', [1, 0, 0, 0, 0, 0, 0, 0]) ops = ''' [i3] jump(i3, 1, 2, 3, 4, 5, 6, 7, descr=targettoken) ''' bridge = self.attach_bridge(ops, other_loop, 1) - fail = self.run(other_loop, 1) + fail = self.run(other_loop, 1, 0, 0, 0, 0, 0, 0, 0) assert fail.identifier == 1 def test_bridge_jumps_to_self_deeper(self): @@ -99,7 +99,7 @@ i5 = int_lt(i3, 20) guard_true(i5) [i99, i3] jump(i3, i30, 1, i30, i30, 
i30, descr=targettoken) - ''', [0]) + ''', [0, 0, 0, 0, 0, 0]) assert self.getint(0) == 0 assert self.getint(1) == 1 ops = ''' @@ -120,9 +120,9 @@ guard_op = loop.operations[6] #assert loop._jitcelltoken.compiled_loop_token.param_depth == 0 # the force_spill() forces the stack to grow - assert guard_op.getdescr()._arm_bridge_frame_depth > loop_frame_depth + #assert guard_op.getdescr()._x86_bridge_frame_depth > loop_frame_depth #assert guard_op.getdescr()._x86_bridge_param_depth == 0 - self.run(loop, 0, 0, 0) + self.run(loop, 0, 0, 0, 0, 0, 0) assert self.getint(0) == 1 assert self.getint(1) == 20 @@ -138,7 +138,7 @@ i5 = int_lt(i3, 20) guard_true(i5) [i99, i3] jump(i3, i1, i2, descr=targettoken) - ''', [0]) + ''', [0, 0, 0]) assert self.getint(0) == 0 assert self.getint(1) == 1 ops = ''' @@ -149,4 +149,4 @@ self.run(loop, 0, 0, 0) assert self.getint(0) == 1 assert self.getint(1) == 20 - + diff --git a/pypy/jit/backend/arm/test/test_regalloc.py b/pypy/jit/backend/arm/test/test_regalloc.py --- a/pypy/jit/backend/arm/test/test_regalloc.py +++ b/pypy/jit/backend/arm/test/test_regalloc.py @@ -178,14 +178,15 @@ return self.cpu.get_latest_value_int(index) def getfloat(self, index): - return self.cpu.get_latest_value_float(index) + v = self.cpu.get_latest_value_float(index) + return longlong.getrealfloat(v) def getints(self, end): return [self.cpu.get_latest_value_int(index) for index in range(0, end)] def getfloats(self, end): - return [self.cpu.get_latest_value_float(index) for + return [self.getfloat(index) for index in range(0, end)] def getptr(self, index, T): @@ -229,9 +230,9 @@ guard_true(i5) [i4, i1, i2, i3] jump(i4, i1, i2, i3, descr=targettoken) ''' - self.interpret(ops, [0, 0, 0, 0]) + loop = self.interpret(ops, [0, 0, 0, 0]) ops2 = ''' - [i5] + [i5, i6, i7, i8] label(i5, descr=targettoken2) i1 = int_add(i5, 1) i3 = int_add(i1, 1) @@ -240,13 +241,13 @@ guard_true(i2) [i4] jump(i4, descr=targettoken2) ''' - loop2 = self.interpret(ops2, [0]) + loop2 = 
self.interpret(ops2, [0, 0, 0, 0]) bridge_ops = ''' [i4] jump(i4, i4, i4, i4, descr=targettoken) ''' - self.attach_bridge(bridge_ops, loop2, 5) - self.run(loop2, 0) + bridge = self.attach_bridge(bridge_ops, loop2, 5) + self.run(loop2, 0, 0, 0, 0) assert self.getint(0) == 31 assert self.getint(1) == 30 assert self.getint(2) == 30 @@ -283,7 +284,7 @@ ''' loop = self.interpret(ops, [0]) assert self.getint(0) == 1 - self.attach_bridge(bridge_ops, loop, 2) + bridge = self.attach_bridge(bridge_ops, loop, 2) self.run(loop, 0) assert self.getint(0) == 1 @@ -309,8 +310,8 @@ loop = self.interpret(ops, [0, 10]) assert self.getint(0) == 0 assert self.getint(1) == 10 - self.attach_bridge(bridge_ops, loop, 0) - relf.run(loop, 0, 10) + bridge = self.attach_bridge(bridge_ops, loop, 0) + self.run(loop, 0, 10) assert self.getint(0) == 0 assert self.getint(1) == 10 @@ -352,7 +353,7 @@ jump(i4, 3, i5, 4, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) - assert self.getints(4) == [1 << 29, 30, 3, 4] + assert self.getints(4) == [1<<29, 30, 3, 4] ops = ''' [i0, i1, i2, i3] label(i0, i1, i2, i3, descr=targettoken) @@ -363,7 +364,7 @@ jump(i4, i5, 3, 4, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) - assert self.getints(4) == [1 << 29, 30, 3, 4] + assert self.getints(4) == [1<<29, 30, 3, 4] ops = ''' [i0, i3, i1, i2] label(i0, i3, i1, i2, descr=targettoken) @@ -374,7 +375,7 @@ jump(i4, 4, i5, 3, descr=targettoken) ''' self.interpret(ops, [0, 0, 0, 0]) - assert self.getints(4) == [1 << 29, 30, 3, 4] + assert self.getints(4) == [1<<29, 30, 3, 4] def test_result_selected_reg_via_neg(self): ops = ''' @@ -388,7 +389,7 @@ ''' self.interpret(ops, [0, 0, 3, 0]) assert self.getints(3) == [1, -3, 10] - + def test_compare_memory_result_survives(self): ops = ''' [i0, i1, i2, i3] @@ -411,7 +412,7 @@ guard_true(i5) [i2, i1] jump(i0, i18, i15, i16, i2, i1, i4, descr=targettoken) ''' - self.interpret(ops, [0, 1, 2, 3]) + self.interpret(ops, [0, 1, 2, 3, 0, 0, 0]) def 
test_op_result_unused(self): ops = ''' @@ -445,8 +446,7 @@ finish(i0, i1, i2, i3, i4, i5, i6, i7, i8) ''' self.attach_bridge(bridge_ops, loop, 1) - args = [i for i in range(9)] - self.run(loop, *args) + self.run(loop, 0, 1, 2, 3, 4, 5, 6, 7, 8) assert self.getints(9) == range(9) def test_loopargs(self): @@ -456,7 +456,8 @@ jump(i4, i1, i2, i3) """ regalloc = self.prepare_loop(ops) - assert len(regalloc.rm.reg_bindings) == 2 + assert len(regalloc.rm.reg_bindings) == 4 + assert len(regalloc.frame_manager.bindings) == 0 def test_loopargs_2(self): ops = """ @@ -465,7 +466,7 @@ finish(i4, i1, i2, i3) """ regalloc = self.prepare_loop(ops) - assert len(regalloc.rm.reg_bindings) == 2 + assert len(regalloc.rm.reg_bindings) == 4 def test_loopargs_3(self): ops = """ @@ -475,7 +476,7 @@ jump(i4, i1, i2, i3) """ regalloc = self.prepare_loop(ops) - assert len(regalloc.rm.reg_bindings) == 2 + assert len(regalloc.rm.reg_bindings) == 4 class TestRegallocCompOps(BaseTestRegalloc): @@ -617,7 +618,8 @@ class TestRegallocFloats(BaseTestRegalloc): def test_float_add(self): - py.test.skip('need floats') + if not self.cpu.supports_floats: + py.test.skip("requires floats") ops = ''' [f0, f1] f2 = float_add(f0, f1) @@ -627,7 +629,8 @@ assert self.getfloats(3) == [4.5, 3.0, 1.5] def test_float_adds_stack(self): - py.test.skip('need floats') + if not self.cpu.supports_floats: + py.test.skip("requires floats") ops = ''' [f0, f1, f2, f3, f4, f5, f6, f7, f8] f9 = float_add(f0, f1) @@ -639,7 +642,8 @@ .4, .5, .6, .7, .8, .9] def test_lt_const(self): - py.test.skip('need floats') + if not self.cpu.supports_floats: + py.test.skip("requires floats") ops = ''' [f0] i1 = float_lt(3.5, f0) @@ -649,7 +653,8 @@ assert self.getint(0) == 0 def test_bug_float_is_true_stack(self): - py.test.skip('need floats') + if not self.cpu.supports_floats: + py.test.skip("requires floats") # NB. float_is_true no longer exists. Unsure if keeping this test # makes sense any more. 
ops = ''' @@ -681,8 +686,8 @@ i10 = call(ConstClass(f1ptr), i0, descr=f1_calldescr) finish(i10, i1, i2, i3, i4, i5, i6, i7, i8, i9) ''' - self.interpret(ops, [4, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9]) - assert self.getints(11) == [5, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9] + self.interpret(ops, [4, 7, 9, 9, 9, 9, 9, 9, 9, 9]) + assert self.getints(10) == [5, 7, 9, 9, 9, 9, 9, 9, 9, 9] def test_two_calls(self): ops = ''' @@ -691,8 +696,8 @@ i11 = call(ConstClass(f2ptr), i10, i1, descr=f2_calldescr) finish(i11, i1, i2, i3, i4, i5, i6, i7, i8, i9) ''' - self.interpret(ops, [4, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9]) - assert self.getints(11) == [5 * 7, 7, 9, 9, 9, 9, 9, 9, 9, 9, 9] + self.interpret(ops, [4, 7, 9, 9, 9, 9, 9, 9, 9, 9]) + assert self.getints(10) == [5 * 7, 7, 9, 9, 9, 9, 9, 9, 9, 9] def test_call_many_arguments(self): ops = ''' @@ -747,7 +752,7 @@ loop = """ [i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, i12, i13, i14] label(i0, i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, i12, i13, i14, descr=targettoken) - jump(i1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, descr=targettoken) + jump(i1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, descr=targettoken) """ self.interpret(loop, range(15), run=False) # ensure compiling this loop works From noreply at buildbot.pypy.org Mon Jan 9 11:56:41 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:41 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: add a DOUBLEWORD constant to replace all the 2 * WORD Message-ID: <20120109105641.644A882110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51158:0a79c804ce94 Date: 2012-01-03 11:08 +0100 http://bitbucket.org/pypy/pypy/changeset/0a79c804ce94/ Log: add a DOUBLEWORD constant to replace all the 2 * WORD diff --git a/pypy/jit/backend/arm/arch.py b/pypy/jit/backend/arm/arch.py --- a/pypy/jit/backend/arm/arch.py +++ b/pypy/jit/backend/arm/arch.py @@ -4,6 +4,7 @@ FUNC_ALIGN = 8 WORD = 4 +DOUBLE_WORD = 8 # the number of 
registers that we need to save around malloc calls N_REGISTERS_SAVED_BY_MALLOC = 9 diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -6,7 +6,7 @@ decode64 from pypy.jit.backend.arm import conditions as c from pypy.jit.backend.arm import registers as r -from pypy.jit.backend.arm.arch import WORD, FUNC_ALIGN, \ +from pypy.jit.backend.arm.arch import WORD, DOUBLE_WORD, FUNC_ALIGN, \ PC_OFFSET, N_REGISTERS_SAVED_BY_MALLOC from pypy.jit.backend.arm.codebuilder import ARMv7Builder, OverwritingBuilder from pypy.jit.backend.arm.regalloc import (Regalloc, ARMFrameManager, @@ -85,7 +85,7 @@ self.STACK_FIXED_AREA += N_REGISTERS_SAVED_BY_MALLOC * WORD if self.cpu.supports_floats: self.STACK_FIXED_AREA += (len(r.callee_saved_vfp_registers) - * 2 * WORD) + * DOUBLE_WORD) if self.STACK_FIXED_AREA % 8 != 0: self.STACK_FIXED_AREA += WORD # Stack alignment assert self.STACK_FIXED_AREA % 8 == 0 @@ -202,16 +202,16 @@ enc = rffi.cast(rffi.CCHARP, mem_loc) frame_depth = frame_loc - (regs_loc + len(r.all_regs) - * WORD + len(r.all_vfp_regs) * 2 * WORD) + * WORD + len(r.all_vfp_regs) * DOUBLE_WORD) assert (frame_loc - frame_depth) % 4 == 0 stack = rffi.cast(rffi.CCHARP, frame_loc - frame_depth) assert regs_loc % 4 == 0 vfp_regs = rffi.cast(rffi.CCHARP, regs_loc) - assert (regs_loc + len(r.all_vfp_regs) * 2 * WORD) % 4 == 0 + assert (regs_loc + len(r.all_vfp_regs) * DOUBLE_WORD) % 4 == 0 assert frame_depth >= 0 regs = rffi.cast(rffi.CCHARP, - regs_loc + len(r.all_vfp_regs) * 2 * WORD) + regs_loc + len(r.all_vfp_regs) * DOUBLE_WORD) i = -1 fail_index = -1 while(True): @@ -253,7 +253,7 @@ else: # REG_LOC reg = ord(enc[i]) if group == self.FLOAT_TYPE: - value = decode64(vfp_regs, reg * 2 * WORD) + value = decode64(vfp_regs, reg * DOUBLE_WORD) self.fail_boxes_float.setitem(fail_index, value) continue else: diff --git a/pypy/jit/backend/arm/runner.py 
b/pypy/jit/backend/arm/runner.py --- a/pypy/jit/backend/arm/runner.py +++ b/pypy/jit/backend/arm/runner.py @@ -1,5 +1,5 @@ from pypy.jit.backend.arm.assembler import AssemblerARM -from pypy.jit.backend.arm.arch import WORD +from pypy.jit.backend.arm.arch import WORD, DOUBLE_WORD from pypy.jit.backend.arm.registers import all_regs, all_vfp_regs from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU from pypy.rpython.llinterp import LLInterpreter @@ -111,7 +111,7 @@ addr_end_of_frame = (addr_of_force_index - (frame_depth + len(all_regs) * WORD + - len(all_vfp_regs) * 2 * WORD)) + len(all_vfp_regs) * DOUBLE_WORD)) fail_index_2 = self.assembler.failure_recovery_func( faildescr._failure_recovery_code, addr_of_force_index, From noreply at buildbot.pypy.org Mon Jan 9 11:56:42 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:42 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: remove unused imports Message-ID: <20120109105642.8D80D82110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51159:2c283e1293a8 Date: 2012-01-03 11:09 +0100 http://bitbucket.org/pypy/pypy/changeset/2c283e1293a8/ Log: remove unused imports diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -1,7 +1,6 @@ from __future__ import with_statement import os from pypy.jit.backend.arm.helper.assembler import saved_registers, \ - count_reg_args, \ decode32, encode32, \ decode64 from pypy.jit.backend.arm import conditions as c @@ -13,7 +12,6 @@ ARMv7RegisterManager, check_imm_arg, operations as regalloc_operations, operations_with_guard as regalloc_operations_with_guard) -from pypy.jit.backend.arm.jump import remap_frame_layout from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.codewriter import longlong diff --git 
a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -13,7 +13,7 @@ ) from pypy.jit.backend.arm.jump import remap_frame_layout_mixed from pypy.jit.backend.arm.arch import MY_COPY_OF_REGS -from pypy.jit.backend.arm.arch import WORD, N_REGISTERS_SAVED_BY_MALLOC +from pypy.jit.backend.arm.arch import WORD from pypy.jit.codewriter import longlong from pypy.jit.metainterp.history import (Const, ConstInt, ConstFloat, ConstPtr, Box, BoxPtr, From noreply at buildbot.pypy.org Mon Jan 9 11:56:43 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:43 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: rename field Message-ID: <20120109105643.B363182110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51160:cad3c03c5ac1 Date: 2012-01-03 11:10 +0100 http://bitbucket.org/pypy/pypy/changeset/cad3c03c5ac1/ Log: rename field diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -577,7 +577,7 @@ operations = self.setup(original_loop_token, operations) self._dump(operations, 'bridge') assert isinstance(faildescr, AbstractFailDescr) - code = faildescr._failure_recovery_code + code = faildescr._arm_failure_recovery_code enc = rffi.cast(rffi.CCHARP, code) frame_depth = faildescr._arm_current_frame_depth arglocs = self.decode_inputargs(enc) @@ -638,7 +638,7 @@ tok.faillocs, save_exc=tok.save_exc) # store info on the descr descr._arm_current_frame_depth = tok.faillocs[0].getint() - descr._failure_recovery_code = memaddr + descr._arm_failure_recovery_code = memaddr descr._arm_guard_pos = pos def process_pending_guards(self, block_start): diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py --- a/pypy/jit/backend/arm/runner.py +++ b/pypy/jit/backend/arm/runner.py @@ -113,7 +113,7 @@ len(all_regs) 
* WORD + len(all_vfp_regs) * DOUBLE_WORD)) fail_index_2 = self.assembler.failure_recovery_func( - faildescr._failure_recovery_code, + faildescr._arm_failure_recovery_code, addr_of_force_index, addr_end_of_frame) self.assembler.leave_jitted_hook() From noreply at buildbot.pypy.org Mon Jan 9 11:56:44 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:44 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: write the fail index here Message-ID: <20120109105644.DA6D382110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51161:5767eb76b3f3 Date: 2012-01-03 11:11 +0100 http://bitbucket.org/pypy/pypy/changeset/5767eb76b3f3/ Log: write the fail index here diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -1068,6 +1068,7 @@ # do the call faildescr = guard_op.getdescr() fail_index = self.cpu.get_fail_descr_number(faildescr) + self.assembler._write_fail_index(fail_index) args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] self.assembler.emit_op_call(op, args, self, fcond, fail_index) # then reopen the stack From noreply at buildbot.pypy.org Mon Jan 9 11:56:46 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:46 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: move the actual call to assembler.py Message-ID: <20120109105646.17F7F82110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51162:96d252d2a2e6 Date: 2012-01-03 11:13 +0100 http://bitbucket.org/pypy/pypy/changeset/96d252d2a2e6/ Log: move the actual call to assembler.py diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -1193,6 +1193,18 @@ self.propagate_memoryerror_if_r0_is_null() return fcond + def emit_op_call_malloc_nursery(self, op, arglocs, 
regalloc, fcond): + # registers r0 and r1 are allocated for this call + assert len(arglocs) == 1 + size = arglocs[0].value + gc_ll_descr = self.cpu.gc_ll_descr + self.malloc_cond( + gc_ll_descr.get_nursery_free_addr(), + gc_ll_descr.get_nursery_top_addr(), + size + ) + return fcond + class FloatOpAssemlber(object): _mixin_ = True diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -953,13 +953,7 @@ self.possibly_free_var(op.result) self.possibly_free_var(t) - gc_ll_descr = self.assembler.cpu.gc_ll_descr - self.assembler.malloc_cond( - gc_ll_descr.get_nursery_free_addr(), - gc_ll_descr.get_nursery_top_addr(), - size - ) - + return [imm(size)] def get_mark_gc_roots(self, gcrootmap, use_copy_area=False): shape = gcrootmap.get_basic_shape(False) for v, val in self.frame_manager.bindings.items(): From noreply at buildbot.pypy.org Mon Jan 9 11:56:47 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:47 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: remove the condition flag from BKPT, which is an unconditional instruction Message-ID: <20120109105647.3EC1E82110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51163:7fe04da61940 Date: 2012-01-03 11:14 +0100 http://bitbucket.org/pypy/pypy/changeset/7fe04da61940/ Log: remove the condition flag from BKPT, which is an unconditional instruction diff --git a/pypy/jit/backend/arm/codebuilder.py b/pypy/jit/backend/arm/codebuilder.py --- a/pypy/jit/backend/arm/codebuilder.py +++ b/pypy/jit/backend/arm/codebuilder.py @@ -154,8 +154,9 @@ instr = self._encode_reg_list(cond << 28 | 0x8BD << 16, regs) self.write32(instr) - def BKPT(self, cond=cond.AL): - self.write32(cond << 28 | 0x1200070) + def BKPT(self): + """Unconditional breakpoint""" + self.write32(0x1200070) # corresponds to the instruction vmrs APSR_nzcv, fpscr def VMRS(self, cond=cond.AL): From noreply
at buildbot.pypy.org Mon Jan 9 11:56:48 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:48 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: add an alignment check after malloc calls for debugging Message-ID: <20120109105648.67A3982110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51164:f02dc5c4e43c Date: 2012-01-03 12:50 +0100 http://bitbucket.org/pypy/pypy/changeset/f02dc5c4e43c/ Log: add an alignment check after malloc calls for debugging diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -57,6 +57,8 @@ STACK_FIXED_AREA = -1 + debug = True + def __init__(self, cpu, failargs_limit=1000): self.cpu = cpu self.fail_boxes_int = values_array(lltype.Signed, failargs_limit) diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -1191,6 +1191,7 @@ def emit_op_call_malloc_gc(self, op, arglocs, regalloc, fcond): self.emit_op_call(op, arglocs, regalloc, fcond) self.propagate_memoryerror_if_r0_is_null() + self._alignment_check() return fcond def emit_op_call_malloc_nursery(self, op, arglocs, regalloc, fcond): @@ -1203,8 +1204,19 @@ gc_ll_descr.get_nursery_top_addr(), size ) + self._alignment_check() return fcond + def _alignment_check(self): + if not self.debug: + return + self.mc.MOV_rr(r.ip.value, r.r0.value) + self.mc.AND_ri(r.ip.value, r.ip.value, 3) + self.mc.CMP_ri(r.ip.value, 0) + self.mc.MOV_rr(r.pc.value, r.pc.value, cond=c.EQ) + self.mc.BKPT() + self.mc.NOP() + class FloatOpAssemlber(object): _mixin_ = True From noreply at buildbot.pypy.org Mon Jan 9 11:56:49 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:49 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: simplify some conditional paths in the generated code Message-ID: 
<20120109105649.94DA482110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51165:895cbdd61311 Date: 2012-01-03 12:51 +0100 http://bitbucket.org/pypy/pypy/changeset/895cbdd61311/ Log: simplify some conditional paths in the generated code diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -329,17 +329,13 @@ mc.LDR_ri(reg.value, r.fp.value, imm=ofs) mc.CMP_ri(r.r0.value, 0) - jmp_pos = mc.currpos() - mc.NOP() + mc.B(self.propagate_exception_path, c=c.EQ) nursery_free_adr = self.cpu.gc_ll_descr.get_nursery_free_addr() mc.gen_load_int(r.r1.value, nursery_free_adr) mc.LDR_ri(r.r1.value, r.r1.value) # see above mc.POP([r.ip.value, r.pc.value]) - pmc = OverwritingBuilder(mc, jmp_pos, WORD) - pmc.B_offs(jmp_pos, c=c.EQ) - mc.B(self.propagate_exception_path) rawstart = mc.materialize(self.cpu.asmmemmgr, []) self.malloc_slowpath = rawstart @@ -1055,9 +1051,6 @@ self.mc.CMP_rr(r.r1.value, r.ip.value) - fast_jmp_pos = self.mc.currpos() - self.mc.NOP() - # XXX update # See comments in _build_malloc_slowpath for the # details of the two helper functions that we are calling below. @@ -1071,11 +1064,7 @@ # a no-op. self.mark_gc_roots(self.write_new_force_index(), use_copy_area=True) - self.mc.BL(self.malloc_slowpath) - - offset = self.mc.currpos() - fast_jmp_pos - pmc = OverwritingBuilder(self.mc, fast_jmp_pos, WORD) - pmc.ADD_ri(r.pc.value, r.pc.value, offset - PC_OFFSET, cond=c.LS) + self.mc.BL(self.malloc_slowpath, c=c.HI) self.mc.gen_load_int(r.ip.value, nursery_free_adr) self.mc.STR_ri(r.r1.value, r.ip.value) From noreply at buildbot.pypy.org Mon Jan 9 11:56:50 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:50 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: modify stack_locations store position and the offset to the FP. 
Get rid of the special case for the first slot in the spilling area currently used for the FORCE_TOKEN Message-ID: <20120109105650.C096782110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51166:5e9aadf0b867 Date: 2012-01-04 15:57 +0100 http://bitbucket.org/pypy/pypy/changeset/5e9aadf0b867/ Log: modify stack_locations store position and the offset to the FP. Get rid of the special case for the first slot in the spilling area currently used for the FORCE_TOKEN diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -680,9 +680,7 @@ OverwritingBuilder.size_of_gen_load_int + WORD) # Note: the frame_depth is one less than the value stored in the frame # manager - if frame_depth == 1: - return - n = (frame_depth - 1) * WORD + n = frame_depth * WORD # ensure the sp is 8 byte aligned when patching it if n % 8 != 0: @@ -840,7 +838,7 @@ temp = r.lr else: temp = r.ip - offset = loc.position * WORD + offset = loc.value if not check_imm_arg(offset, size=0xFFF): self.mc.PUSH([temp.value], cond=cond) self.mc.gen_load_int(temp.value, -offset, cond=cond) @@ -861,7 +859,7 @@ assert loc is not r.lr, 'lr is not supported as a target \ when moving from the stack' # unspill a core register - offset = prev_loc.position * WORD + offset = prev_loc.value if not check_imm_arg(offset, size=0xFFF): self.mc.PUSH([r.lr.value], cond=cond) pushed = True @@ -875,7 +873,7 @@ assert prev_loc.type == FLOAT, 'trying to load from an \ incompatible location into a float register' # load spilled value into vfp reg - offset = prev_loc.position * WORD + offset = prev_loc.value self.mc.PUSH([r.ip.value], cond=cond) pushed = True if not check_imm_arg(offset): @@ -905,7 +903,7 @@ incompatible location from a float register' # spill vfp register self.mc.PUSH([r.ip.value], cond=cond) - offset = loc.position * WORD + offset = loc.value if not 
check_imm_arg(offset): self.mc.gen_load_int(r.ip.value, offset, cond=cond) self.mc.SUB_rr(r.ip.value, r.fp.value, r.ip.value, cond=cond) @@ -948,7 +946,7 @@ self.mc.POP([r.ip.value], cond=cond) elif vfp_loc.is_stack() and vfp_loc.type == FLOAT: # load spilled vfp value into two core registers - offset = vfp_loc.position * WORD + offset = vfp_loc.value if not check_imm_arg(offset, size=0xFFF): self.mc.PUSH([r.ip.value], cond=cond) self.mc.gen_load_int(r.ip.value, -offset, cond=cond) @@ -971,7 +969,7 @@ self.mc.VMOV_cr(vfp_loc.value, reg1.value, reg2.value, cond=cond) elif vfp_loc.is_stack(): # move from two core registers to a float stack location - offset = vfp_loc.position * WORD + offset = vfp_loc.value if not check_imm_arg(offset, size=0xFFF): self.mc.PUSH([r.ip.value], cond=cond) self.mc.gen_load_int(r.ip.value, -offset, cond=cond) diff --git a/pypy/jit/backend/arm/locations.py b/pypy/jit/backend/arm/locations.py --- a/pypy/jit/backend/arm/locations.py +++ b/pypy/jit/backend/arm/locations.py @@ -1,5 +1,5 @@ from pypy.jit.metainterp.history import INT, FLOAT -from pypy.jit.backend.arm.arch import WORD +from pypy.jit.backend.arm.arch import WORD, DOUBLE_WORD class AssemblerLocation(object): @@ -110,9 +110,13 @@ class StackLocation(AssemblerLocation): _immutable_ = True - def __init__(self, position, num_words=1, type=INT): + def __init__(self, position, fp_offset, type=INT): + if type == FLOAT: + self.width = DOUBLE_WORD + else: + self.width = WORD self.position = position - self.width = num_words * WORD + self.value = fp_offset self.type = type def __repr__(self): diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -54,26 +54,28 @@ return "" % (id(self),) +def get_fp_offset(i): + if i >= 0: + # Take the FORCE_TOKEN into account + return (1 + i) * WORD + else: + return i * WORD + + class ARMFrameManager(FrameManager): def __init__(self): 
FrameManager.__init__(self) - self.used = [True] # keep first slot free + #self.used = [True] # keep first slot free # XXX refactor frame to avoid this issue of keeping the first slot # reserved @staticmethod - def frame_pos(loc, type): - num_words = ARMFrameManager.frame_size(type) - if type == FLOAT: - if loc > 0: - # Make sure that loc is an even value - # the frame layout requires loc to be even if it is a spilled - # value!! - assert (loc & 1) == 0 - return locations.StackLocation(loc + 1, - num_words=num_words, type=type) - return locations.StackLocation(loc, num_words=num_words, type=type) + def frame_pos(i, box_type): + if box_type == FLOAT: + return locations.StackLocation(i, get_fp_offset(i + 1), box_type) + else: + return locations.StackLocation(i, get_fp_offset(i), box_type) @staticmethod def frame_size(type): @@ -84,10 +86,7 @@ @staticmethod def get_loc_index(loc): assert loc.is_stack() - if loc.type == FLOAT: - return loc.position - 1 - else: - return loc.position + return loc.position def void(self, op, fcond): @@ -721,7 +720,6 @@ else: src_locations2.append(src_loc) dst_locations2.append(dst_loc) - remap_frame_layout_mixed(self.assembler, src_locations1, dst_locations1, tmploc, src_locations2, dst_locations2, vfptmploc) @@ -960,6 +958,7 @@ if (isinstance(v, BoxPtr) and self.rm.stays_alive(v)): assert val.is_stack() gcrootmap.add_frame_offset(shape, val.position * -WORD) + gcrootmap.add_frame_offset(shape, -val.value) for v, reg in self.rm.reg_bindings.items(): if reg is r.r0: continue From noreply at buildbot.pypy.org Mon Jan 9 11:56:51 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:51 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: port encoding of locations used for guards from the x86 backend Message-ID: <20120109105651.F16C482110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51167:ffbd6f34a8c3 Date: 2012-01-04 15:58 +0100 
http://bitbucket.org/pypy/pypy/changeset/ffbd6f34a8c3/ Log: port encoding of locations used for guards from the x86 backend diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -11,6 +11,7 @@ from pypy.jit.backend.arm.regalloc import (Regalloc, ARMFrameManager, ARMv7RegisterManager, check_imm_arg, operations as regalloc_operations, + get_fp_offset, operations_with_guard as regalloc_operations_with_guard) from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper from pypy.jit.backend.model import CompiledLoopToken @@ -30,30 +31,6 @@ class AssemblerARM(ResOpAssembler): - """ - Encoding for locations in memory - types: - \xED = FLOAT - \xEE = REF - \xEF = INT - location: - \xFC = stack location - \xFD = imm location - emtpy = reg location - \xFE = Empty loc - - \xFF = END_OF_LOCS - """ - FLOAT_TYPE = '\xED' - REF_TYPE = '\xEE' - INT_TYPE = '\xEF' - - STACK_LOC = '\xFC' - IMM_LOC = '\xFD' - # REG_LOC is empty - EMPTY_LOC = '\xFE' - - END_OF_LOCS = '\xFF' STACK_FIXED_AREA = -1 @@ -183,132 +160,138 @@ """mem_loc is a structure in memory describing where the values for the failargs are stored. 
frame loc is the address of the frame pointer for the frame to be decoded frame """ - return self.decode_registers_and_descr(mem_loc, - frame_pointer, stack_pointer) + vfp_registers = rffi.cast(rffi.LONGLONGP, stack_pointer) + registers = rffi.ptradd(vfp_registers, len(r.all_vfp_regs)) + registers = rffi.cast(rffi.LONGP, registers) + return self.decode_registers_and_descr(mem_loc, frame_pointer, + registers, vfp_registers) self.failure_recovery_func = failure_recovery_func - recovery_func_sign = lltype.Ptr(lltype.FuncType([lltype.Signed, - lltype.Signed, lltype.Signed], lltype.Signed)) + recovery_func_sign = lltype.Ptr(lltype.FuncType([lltype.Signed] * 3, + lltype.Signed)) @rgc.no_collect - def decode_registers_and_descr(self, mem_loc, frame_loc, regs_loc): + def decode_registers_and_descr(self, mem_loc, frame_pointer, + registers, vfp_registers): """Decode locations encoded in memory at mem_loc and write the values to the failboxes. Values for spilled vars and registers are stored on stack at frame_loc """ - # XXX check if units are correct here, when comparing words and bytes - # and stuff assert 0, 'check if units are correct here, when comparing - # words and bytes and stuff' + assert frame_pointer & 1 == 0 + bytecode = rffi.cast(rffi.UCHARP, mem_loc) + num = 0 + value = 0 + fvalue = 0 + code_inputarg = False + while True: + code = bytecode[0] + bytecode = rffi.ptradd(bytecode, 1) + if code >= self.CODE_FROMSTACK: + if code > 0x7F: + shift = 7 + code &= 0x7F + while True: + nextcode = rffi.cast(lltype.Signed, bytecode[0]) + bytecode = rffi.ptradd(bytecode, 1) + code |= (nextcode & 0x7F) << shift + shift += 7 + if nextcode <= 0x7F: + break + # load the value from the stack + kind = code & 3 + code = int((code - self.CODE_FROMSTACK) >> 2) + if code_inputarg: + code = ~code + code_inputarg = False + if kind == self.DESCR_FLOAT: + # we use code + 1 to get the hi word of the double worded float + stackloc = frame_pointer - get_fp_offset(int(code) + 1) + assert 
stackloc & 3 == 0 + fvalue = rffi.cast(rffi.LONGLONGP, stackloc)[0] + else: + stackloc = frame_pointer - get_fp_offset(int(code)) + assert stackloc & 1 == 0 + value = rffi.cast(rffi.LONGP, stackloc)[0] + else: + # 'code' identifies a register: load its value + kind = code & 3 + if kind == self.DESCR_SPECIAL: + if code == self.CODE_HOLE: + num += 1 + continue + if code == self.CODE_INPUTARG: + code_inputarg = True + continue + assert code == self.CODE_STOP + break + code >>= 2 + if kind == self.DESCR_FLOAT: + fvalue = vfp_registers[code] + else: + value = registers[code] + # store the loaded value into fail_boxes_ + if kind == self.DESCR_FLOAT: + tgt = self.fail_boxes_float.get_addr_for_num(num) + rffi.cast(rffi.LONGLONGP, tgt)[0] = fvalue + else: + if kind == self.DESCR_INT: + tgt = self.fail_boxes_int.get_addr_for_num(num) + elif kind == self.DESCR_REF: + assert (value & 3) == 0, "misaligned pointer" + tgt = self.fail_boxes_ptr.get_addr_for_num(num) + else: + assert 0, "bogus kind" + rffi.cast(rffi.LONGP, tgt)[0] = value + num += 1 + self.fail_boxes_count = num + fail_index = rffi.cast(rffi.INTP, bytecode)[0] + fail_index = rffi.cast(lltype.Signed, fail_index) + return fail_index - enc = rffi.cast(rffi.CCHARP, mem_loc) - frame_depth = frame_loc - (regs_loc + len(r.all_regs) - * WORD + len(r.all_vfp_regs) * DOUBLE_WORD) - assert (frame_loc - frame_depth) % 4 == 0 - stack = rffi.cast(rffi.CCHARP, frame_loc - frame_depth) - assert regs_loc % 4 == 0 - vfp_regs = rffi.cast(rffi.CCHARP, regs_loc) - assert (regs_loc + len(r.all_vfp_regs) * DOUBLE_WORD) % 4 == 0 - assert frame_depth >= 0 - - regs = rffi.cast(rffi.CCHARP, - regs_loc + len(r.all_vfp_regs) * DOUBLE_WORD) - i = -1 - fail_index = -1 - while(True): - i += 1 - fail_index += 1 - res = enc[i] - if res == self.END_OF_LOCS: + def decode_inputargs(self, code): + descr_to_box_type = [REF, INT, FLOAT] + bytecode = rffi.cast(rffi.UCHARP, code) + arglocs = [] + code_inputarg = False + while 1: + # decode the next 
instruction from the bytecode + code = rffi.cast(lltype.Signed, bytecode[0]) + bytecode = rffi.ptradd(bytecode, 1) + if code >= self.CODE_FROMSTACK: + # 'code' identifies a stack location + if code > 0x7F: + shift = 7 + code &= 0x7F + while True: + nextcode = rffi.cast(lltype.Signed, bytecode[0]) + bytecode = rffi.ptradd(bytecode, 1) + code |= (nextcode & 0x7F) << shift + shift += 7 + if nextcode <= 0x7F: + break + kind = code & 3 + code = (code - self.CODE_FROMSTACK) >> 2 + if code_inputarg: + code = ~code + code_inputarg = False + loc = ARMFrameManager.frame_pos(code, descr_to_box_type[kind]) + elif code == self.CODE_STOP: break - if res == self.EMPTY_LOC: + elif code == self.CODE_HOLE: continue - - group = res - i += 1 - res = enc[i] - if res == self.IMM_LOC: - # imm value - if group == self.INT_TYPE or group == self.REF_TYPE: - value = decode32(enc, i + 1) - i += 4 + elif code == self.CODE_INPUTARG: + code_inputarg = True + continue + else: + # 'code' identifies a register + kind = code & 3 + code >>= 2 + if kind == self.DESCR_FLOAT: + loc = r.all_vfp_regs[code] else: - assert group == self.FLOAT_TYPE - adr = decode32(enc, i + 1) - tp = rffi.CArrayPtr(longlong.FLOATSTORAGE) - value = rffi.cast(tp, adr)[0] - self.fail_boxes_float.setitem(fail_index, value) - i += 4 - continue - elif res == self.STACK_LOC: - stack_loc = decode32(enc, i + 1) - i += 4 - if group == self.FLOAT_TYPE: - value = decode64(stack, - frame_depth - (stack_loc + 1) * WORD) - fvalue = rffi.cast(longlong.FLOATSTORAGE, value) - self.fail_boxes_float.setitem(fail_index, fvalue) - continue - else: - value = decode32(stack, frame_depth - stack_loc * WORD) - else: # REG_LOC - reg = ord(enc[i]) - if group == self.FLOAT_TYPE: - value = decode64(vfp_regs, reg * DOUBLE_WORD) - self.fail_boxes_float.setitem(fail_index, value) - continue - else: - value = decode32(regs, reg * WORD) - - if group == self.INT_TYPE: - self.fail_boxes_int.setitem(fail_index, value) - elif group == self.REF_TYPE: - assert 
(value & 3) == 0, "misaligned pointer" - tgt = self.fail_boxes_ptr.get_addr_for_num(fail_index) - rffi.cast(rffi.LONGP, tgt)[0] = value - else: - assert 0, 'unknown type' - - assert enc[i] == self.END_OF_LOCS - descr = decode32(enc, i + 1) - self.fail_boxes_count = fail_index - self.fail_force_index = frame_loc - return descr - - def decode_inputargs(self, enc): - locs = [] - j = 0 - while enc[j] != self.END_OF_LOCS: - res = enc[j] - if res == self.EMPTY_LOC: - j += 1 - continue - - assert res in [self.FLOAT_TYPE, self.INT_TYPE, self.REF_TYPE], \ - 'location type is not supported' - res_type = res - j += 1 - res = enc[j] - if res == self.IMM_LOC: - # XXX decode imm if necessary - assert 0, 'Imm Locations are not supported' - elif res == self.STACK_LOC: - if res_type == self.FLOAT_TYPE: - t = FLOAT - elif res_type == self.INT_TYPE: - t = INT - else: - t = REF - stack_loc = decode32(enc, j + 1) - loc = ARMFrameManager.frame_pos(stack_loc, t) - j += 4 - else: # REG_LOC - if res_type == self.FLOAT_TYPE: - loc = r.all_vfp_regs[ord(res)] - else: - loc = r.all_regs[ord(res)] - j += 1 - locs.append(loc) - return locs + loc = r.all_regs[code] + arglocs.append(loc) + return arglocs[:] def _build_malloc_slowpath(self): mc = ARMv7Builder() @@ -364,85 +347,78 @@ return mc.materialize(self.cpu.asmmemmgr, [], self.cpu.gc_ll_descr.gcrootmap) - def gen_descr_encoding(self, descr, args, arglocs): - # The size of the allocated memory is based on the following sizes - # first argloc is the frame depth and not considered for the memory - # allocation - # 4 bytes for the value - # 1 byte for the type - # 1 byte for the location - # 1 separator byte - # 4 bytes for the faildescr - # const floats are stored in memory and the box contains the address - memsize = (len(arglocs) - 1) * 6 + 5 + DESCR_REF = 0x00 + DESCR_INT = 0x01 + DESCR_FLOAT = 0x02 + DESCR_SPECIAL = 0x03 + CODE_FROMSTACK = 64 + CODE_STOP = 0 | DESCR_SPECIAL + CODE_HOLE = 4 | DESCR_SPECIAL + CODE_INPUTARG = 8 | DESCR_SPECIAL 
+ + def gen_descr_encoding(self, descr, failargs, locs): + buf = [] + for i in range(len(failargs)): + arg = failargs[i] + if arg is not None: + if arg.type == REF: + kind = self.DESCR_REF + elif arg.type == INT: + kind = self.DESCR_INT + elif arg.type == FLOAT: + kind = self.DESCR_FLOAT + else: + raise AssertionError("bogus kind") + loc = locs[i] + if loc.is_stack(): + pos = loc.position + if pos < 0: + buf.append(chr(self.CODE_INPUTARG)) + pos = ~pos + n = self.CODE_FROMSTACK // 4 + pos + else: + assert loc.is_reg() or loc.is_vfp_reg() + n = loc.value + n = kind + 4 * n + while n > 0x7F: + buf.append(chr((n & 0x7F) | 0x80)) + n >>= 7 + else: + n = self.CODE_HOLE + buf.append(chr(n)) + buf.append(chr(self.CODE_STOP)) + + fdescr = self.cpu.get_fail_descr_number(descr) + buf.append(chr(fdescr & 0xFF)) + buf.append(chr(fdescr >> 8 & 0xFF)) + buf.append(chr(fdescr >> 16 & 0xFF)) + buf.append(chr(fdescr >> 24 & 0xFF)) + + # assert that the fail_boxes lists are big enough + assert len(failargs) <= self.fail_boxes_int.SIZE + + memsize = len(buf) memaddr = self.datablockwrapper.malloc_aligned(memsize, alignment=1) mem = rffi.cast(rffi.CArrayPtr(lltype.Char), memaddr) - i = 0 - j = 0 - while i < len(args): - if arglocs[i + 1]: - arg = args[i] - loc = arglocs[i + 1] - if arg.type == INT: - mem[j] = self.INT_TYPE - j += 1 - elif arg.type == REF: - mem[j] = self.REF_TYPE - j += 1 - elif arg.type == FLOAT: - mem[j] = self.FLOAT_TYPE - j += 1 - else: - assert 0, 'unknown type' - - if loc.is_reg() or loc.is_vfp_reg(): - mem[j] = chr(loc.value) - j += 1 - elif loc.is_imm() or loc.is_imm_float(): - assert (arg.type == INT or arg.type == REF - or arg.type == FLOAT) - mem[j] = self.IMM_LOC - encode32(mem, j + 1, loc.getint()) - j += 5 - else: - assert loc.is_stack() - mem[j] = self.STACK_LOC - if arg.type == FLOAT: - # Float locs store the location number with an offset - # of 1 -.- so we need to take this into account here - # when generating the encoding - encode32(mem, j + 1, 
loc.position - 1) - else: - encode32(mem, j + 1, loc.position) - j += 5 - else: - mem[j] = self.EMPTY_LOC - j += 1 - i += 1 - - mem[j] = chr(0xFF) - - n = self.cpu.get_fail_descr_number(descr) - encode32(mem, j + 1, n) + for i in range(memsize): + mem[i] = buf[i] return memaddr def _gen_path_to_exit_path(self, descr, args, arglocs, save_exc, fcond=c.AL): assert isinstance(save_exc, bool) - memaddr = self.gen_descr_encoding(descr, args, arglocs) + memaddr = self.gen_descr_encoding(descr, args, arglocs[1:]) self.gen_exit_code(self.mc, memaddr, save_exc, fcond) return memaddr def gen_exit_code(self, mc, memaddr, save_exc, fcond=c.AL): assert isinstance(save_exc, bool) self.mc.gen_load_int(r.ip.value, memaddr) - #mc.LDR_ri(r.ip.value, r.pc.value, imm=WORD) if save_exc: path = self._leave_jitted_hook_save_exc else: path = self._leave_jitted_hook mc.B(path) - #mc.write32(memaddr) def align(self): while(self.mc.currpos() % FUNC_ALIGN != 0): @@ -576,9 +552,8 @@ self._dump(operations, 'bridge') assert isinstance(faildescr, AbstractFailDescr) code = faildescr._arm_failure_recovery_code - enc = rffi.cast(rffi.CCHARP, code) frame_depth = faildescr._arm_current_frame_depth - arglocs = self.decode_inputargs(enc) + arglocs = self.decode_inputargs(code) if not we_are_translated(): assert len(inputargs) == len(arglocs) diff --git a/pypy/jit/backend/arm/locations.py b/pypy/jit/backend/arm/locations.py --- a/pypy/jit/backend/arm/locations.py +++ b/pypy/jit/backend/arm/locations.py @@ -80,9 +80,6 @@ def is_imm(self): return True - def as_key(self): - return self.value + 40 - class ConstFloatLoc(AssemblerLocation): """This class represents an imm float value which is stored in memory at @@ -103,9 +100,6 @@ def is_imm_float(self): return True - def as_key(self): - return -1 * self.value - class StackLocation(AssemblerLocation): _immutable_ = True @@ -132,7 +126,7 @@ return True def as_key(self): - return -self.position + return self.position + 10000 def imm(i): diff --git 
a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -327,6 +327,7 @@ count = 0 n_register_args = len(r.argument_regs) cur_frame_pos = - (self.assembler.STACK_FIXED_AREA / WORD) + 1 + cur_frame_pos = 1 - (self.assembler.STACK_FIXED_AREA // WORD) for box in inputargs: assert isinstance(box, Box) # handle inputargs in argument registers From noreply at buildbot.pypy.org Mon Jan 9 11:56:53 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:53 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Add the condition code for always here Message-ID: <20120109105653.22E4282110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51168:120e4541efaf Date: 2012-01-04 15:59 +0100 http://bitbucket.org/pypy/pypy/changeset/120e4541efaf/ Log: Add the condition code for always here diff --git a/pypy/jit/backend/arm/codebuilder.py b/pypy/jit/backend/arm/codebuilder.py --- a/pypy/jit/backend/arm/codebuilder.py +++ b/pypy/jit/backend/arm/codebuilder.py @@ -156,7 +156,7 @@ def BKPT(self): """Unconditional breakpoint""" - self.write32(0x1200070) + self.write32(cond.AL << 28 | 0x1200070) # corresponds to the instruction vmrs APSR_nzcv, fpscr def VMRS(self, cond=cond.AL): From noreply at buildbot.pypy.org Mon Jan 9 11:56:54 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 9 Jan 2012 11:56:54 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Use the codebuilder to write the bytecode used to describe the failarg locations for a guard. Also abuse the link register to pass the location of the encoding around. 
Message-ID: <20120109105654.4A8A882110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51169:10eab3fbb965 Date: 2012-01-09 11:49 +0100 http://bitbucket.org/pypy/pypy/changeset/10eab3fbb965/ Log: Use the codebuilder to write the bytecode used to describe the failarg locations for a guard. Also abuse the link register to pass the location of the encoding around. diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -129,7 +129,7 @@ def _gen_leave_jitted_hook_code(self, save_exc): mc = ARMv7Builder() # XXX add a check if cpu supports floats - with saved_registers(mc, r.caller_resp + [r.ip], r.caller_vfp_resp): + with saved_registers(mc, r.caller_resp + [r.lr], r.caller_vfp_resp): addr = self.cpu.get_on_leave_jitted_int(save_exception=save_exc) mc.BL(addr) assert self._exit_code_addr != 0 @@ -334,7 +334,7 @@ self._insert_checks(mc) with saved_registers(mc, r.all_regs, r.all_vfp_regs): # move mem block address, to r0 to pass as - mc.MOV_rr(r.r0.value, r.ip.value) + mc.MOV_rr(r.r0.value, r.lr.value) # pass the current frame pointer as second param mc.MOV_rr(r.r1.value, r.fp.value) # pass the current stack pointer as third param @@ -357,7 +357,7 @@ CODE_INPUTARG = 8 | DESCR_SPECIAL def gen_descr_encoding(self, descr, failargs, locs): - buf = [] + assert self.mc is not None for i in range(len(failargs)): arg = failargs[i] if arg is not None: @@ -373,7 +373,7 @@ if loc.is_stack(): pos = loc.position if pos < 0: - buf.append(chr(self.CODE_INPUTARG)) + self.mc.writechar(chr(self.CODE_INPUTARG)) pos = ~pos n = self.CODE_FROMSTACK // 4 + pos else: @@ -381,44 +381,33 @@ n = loc.value n = kind + 4 * n while n > 0x7F: - buf.append(chr((n & 0x7F) | 0x80)) + self.mc.writechar(chr((n & 0x7F) | 0x80)) n >>= 7 else: n = self.CODE_HOLE - buf.append(chr(n)) - buf.append(chr(self.CODE_STOP)) + self.mc.writechar(chr(n)) + 
self.mc.writechar(chr(self.CODE_STOP)) fdescr = self.cpu.get_fail_descr_number(descr) - buf.append(chr(fdescr & 0xFF)) - buf.append(chr(fdescr >> 8 & 0xFF)) - buf.append(chr(fdescr >> 16 & 0xFF)) - buf.append(chr(fdescr >> 24 & 0xFF)) + self.mc.write32(fdescr) + self.align() # assert that the fail_boxes lists are big enough assert len(failargs) <= self.fail_boxes_int.SIZE - memsize = len(buf) - memaddr = self.datablockwrapper.malloc_aligned(memsize, alignment=1) - mem = rffi.cast(rffi.CArrayPtr(lltype.Char), memaddr) - for i in range(memsize): - mem[i] = buf[i] - return memaddr - def _gen_path_to_exit_path(self, descr, args, arglocs, save_exc, fcond=c.AL): assert isinstance(save_exc, bool) - memaddr = self.gen_descr_encoding(descr, args, arglocs[1:]) - self.gen_exit_code(self.mc, memaddr, save_exc, fcond) - return memaddr + self.gen_exit_code(self.mc, save_exc, fcond) + self.gen_descr_encoding(descr, args, arglocs[1:]) - def gen_exit_code(self, mc, memaddr, save_exc, fcond=c.AL): + def gen_exit_code(self, mc, save_exc, fcond=c.AL): assert isinstance(save_exc, bool) - self.mc.gen_load_int(r.ip.value, memaddr) if save_exc: path = self._leave_jitted_hook_save_exc else: path = self._leave_jitted_hook - mc.B(path) + mc.BL(path) def align(self): while(self.mc.currpos() % FUNC_ALIGN != 0): @@ -551,7 +540,7 @@ operations = self.setup(original_loop_token, operations) self._dump(operations, 'bridge') assert isinstance(faildescr, AbstractFailDescr) - code = faildescr._arm_failure_recovery_code + code = self._find_failure_recovery_bytecode(faildescr) frame_depth = faildescr._arm_current_frame_depth arglocs = self.decode_inputargs(code) if not we_are_translated(): @@ -585,6 +574,11 @@ frame_depth) self.teardown() + def _find_failure_recovery_bytecode(self, faildescr): + guard_addr = faildescr._arm_block_start + faildescr._arm_guard_pos + # a guard requires 3 words to encode the jump to the exit code. 
+ return guard_addr + 3 * WORD + def fixup_target_tokens(self, rawstart): for targettoken in self.target_tokens_currently_compiling: targettoken._arm_loop_code += rawstart @@ -607,11 +601,10 @@ pos = self.mc.currpos() tok.pos_recovery_stub = pos - memaddr = self._gen_path_to_exit_path(descr, tok.failargs, + self._gen_path_to_exit_path(descr, tok.failargs, tok.faillocs, save_exc=tok.save_exc) # store info on the descr descr._arm_current_frame_depth = tok.faillocs[0].getint() - descr._arm_failure_recovery_code = memaddr descr._arm_guard_pos = pos def process_pending_guards(self, block_start): diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py --- a/pypy/jit/backend/arm/runner.py +++ b/pypy/jit/backend/arm/runner.py @@ -106,6 +106,7 @@ assert fail_index >= 0, "already forced!" faildescr = self.get_fail_descr_from_number(fail_index) rffi.cast(TP, addr_of_force_index)[0] = ~fail_index + bytecode = self.assembler._find_failure_recovery_bytecode(faildescr) # start of "no gc operation!" 
block frame_depth = faildescr._arm_current_frame_depth * WORD addr_end_of_frame = (addr_of_force_index - @@ -113,7 +114,7 @@ len(all_regs) * WORD + len(all_vfp_regs) * DOUBLE_WORD)) fail_index_2 = self.assembler.failure_recovery_func( - faildescr._arm_failure_recovery_code, + bytecode, addr_of_force_index, addr_end_of_frame) self.assembler.leave_jitted_hook() From noreply at buildbot.pypy.org Mon Jan 9 12:46:10 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 9 Jan 2012 12:46:10 +0100 (CET) Subject: [pypy-commit] pypy concurrent-marksweep: Fix: I corrected the comment but not the actual value Message-ID: <20120109114610.ECF5182110@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: concurrent-marksweep Changeset: r51170:9ec48159f6e4 Date: 2012-01-09 12:45 +0100 http://bitbucket.org/pypy/pypy/changeset/9ec48159f6e4/ Log: Fix: I corrected the comment but not the actual value diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py --- a/pypy/rpython/memory/gc/concurrentgen.py +++ b/pypy/rpython/memory/gc/concurrentgen.py @@ -65,7 +65,7 @@ # The minimal RAM usage: use 24 MB by default. 
# Environment variable: PYPY_GC_MIN - "min_heap_size": 6*1024*1024, + "min_heap_size": 24*1024*1024, } From noreply at buildbot.pypy.org Mon Jan 9 16:06:22 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 16:06:22 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: make sure there are no more attrs on base class Message-ID: <20120109150622.4615F82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51171:04ce4efd6ee6 Date: 2012-01-09 17:05 +0200 http://bitbucket.org/pypy/pypy/changeset/04ce4efd6ee6/ Log: make sure there are no more attrs on base class diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -17,6 +17,8 @@ name = "" pc = 0 + _attrs_ = ('result',) + def __init__(self, result): self.result = result From noreply at buildbot.pypy.org Mon Jan 9 17:30:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 17:30:47 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: (fijal, arigo) improve the assembler check (hopefully) usable for other Message-ID: <20120109163047.508A582110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51172:941c2be81863 Date: 2012-01-09 18:30 +0200 http://bitbucket.org/pypy/pypy/changeset/941c2be81863/ Log: (fijal, arigo) improve the assembler check (hopefully) usable for other processors diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -28,6 +28,9 @@ class Runner(object): + add_loop_instruction = ['overload for a specific cpu'] + bridge_loop_instruction = ['overload for a specific cpu'] + def execute_operation(self, opname, valueboxes, result_type, descr=None): inputargs, operations = self._get_single_operation_list(opname, result_type, @@ -3006,23 
+3009,21 @@ self.cpu.assembler.set_debug(True) # always on untranslated assert asmlen != 0 cpuname = autodetect_main_model_and_size() - if 'x86' in cpuname: - # XXX we have to check the precise assembler, otherwise - # we don't quite know if borders are correct - def checkops(mc, startline, ops): - for i in range(startline, len(mc)): - assert mc[i].split("\t")[-1].startswith(ops[i - startline]) + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + + def checkops(mc, startline, ops): + for i in range(startline, len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i - startline]) - data = ctypes.string_at(asm, asmlen) - mc = list(machine_code_dump(data, asm, cpuname)) - assert len(mc) == 5 - checkops(mc, 1, ['add', 'test', 'je', 'jmp']) - data = ctypes.string_at(basm, basmlen) - mc = list(machine_code_dump(data, basm, cpuname)) - assert len(mc) == 4 - checkops(mc, 1, ['lea', 'mov', 'jmp']) - else: - raise Exception("Implement this test for your CPU") + data = ctypes.string_at(asm, asmlen) + mc = list(machine_code_dump(data, asm, cpuname)) + assert len(mc) == 5 + checkops(mc, 1, self.add_loop_instructions) + data = ctypes.string_at(basm, basmlen) + mc = list(machine_code_dump(data, basm, cpuname)) + assert len(mc) == 4 + checkops(mc, 1, self.bridge_loop_instructions) def test_compile_bridge_with_target(self): diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -33,6 +33,9 @@ # for the individual tests see # ====> ../../test/runner_test.py + add_loop_instructions = ['add', 'test', 'je', 'jmp'] + bridge_loop_instructions = ['lea', 'mov', 'jmp'] + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() From noreply at buildbot.pypy.org Mon Jan 9 17:32:01 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 17:32:01 
+0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: remove nonsense method, update the docstring Message-ID: <20120109163201.AAF5D82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51173:c39f96d8c69b Date: 2012-01-09 18:31 +0200 http://bitbucket.org/pypy/pypy/changeset/c39f96d8c69b/ Log: remove nonsense method, update the docstring diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -88,14 +88,6 @@ raise ValueError("access_directly on a function which we don't see %s" % graph) return res - def get_jit_portal(self): - """ Returns a None or an instance of pypy.rlib.jit.JitPortal - The portal methods are called for various special cases in the JIT - as a mean to give feedback to the user. Read JitPortal's docstring - for details. - """ - return None - def contains_unsupported_variable_type(graph, supports_floats, supports_longlong, supports_singlefloats): diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -729,8 +729,7 @@ """ This is the main connector between the JIT and the interpreter. Several methods on portal will be invoked at various stages of JIT running like JIT loops compiled, aborts etc. - An instance of this class might be returned by the policy.get_jit_portal - method in order to function. + An instance of this class will be available as policy.portal. 
each hook will accept some of the following args: From noreply at buildbot.pypy.org Mon Jan 9 18:08:50 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 18:08:50 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: use try: finally: for cache.in_recursion Message-ID: <20120109170850.2BE3B82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51174:d142d1bd4aa9 Date: 2012-01-09 19:07 +0200 http://bitbucket.org/pypy/pypy/changeset/d142d1bd4aa9/ Log: use try: finally: for cache.in_recursion diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -15,13 +15,15 @@ if space.is_true(cache.w_abort_hook): cache.in_recursion = True try: - space.call_function(cache.w_abort_hook, - space.wrap(jitdriver.name), - wrap_greenkey(space, jitdriver, greenkey), - space.wrap(counter_names[reason])) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_abort_hook) - cache.in_recursion = False + try: + space.call_function(cache.w_abort_hook, + space.wrap(jitdriver.name), + wrap_greenkey(space, jitdriver, greenkey), + space.wrap(counter_names[reason])) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_abort_hook) + finally: + cache.in_recursion = False def after_compile(self, jitdriver, logger, looptoken, operations, type, greenkey, ops_offset, asmstart, asmlen): @@ -56,16 +58,18 @@ list_w = wrap_oplist(space, logops, operations, ops_offset) cache.in_recursion = True try: - space.call_function(cache.w_compile_hook, - space.wrap(jitdriver.name), - space.wrap(type), - w_arg, - space.newlist(list_w), - space.wrap(asmstart), - space.wrap(asmlen)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False + try: + space.call_function(cache.w_compile_hook, + space.wrap(jitdriver.name), + space.wrap(type), + w_arg, + 
space.newlist(list_w), + space.wrap(asmstart), + space.wrap(asmlen)) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + cache.in_recursion = False def _optimize_hook(self, jitdriver, logger, operations, type, w_arg): space = self.space @@ -77,26 +81,28 @@ list_w = wrap_oplist(space, logops, operations, {}) cache.in_recursion = True try: - w_res = space.call_function(cache.w_optimize_hook, - space.wrap(jitdriver.name), - space.wrap(type), - w_arg, - space.newlist(list_w)) - if space.is_w(w_res, space.w_None): - cache.in_recursion = False - return - l = [] - for w_item in space.listview(w_res): - item = space.interp_w(WrappedOp, w_item) - l.append(jit_hooks._cast_to_resop(item.op)) - del operations[:] # modifying operations above is probably not - # a great idea since types may not work and we'll end up with - # half-working list and a segfault/fatal RPython error - for elem in l: - operations.append(elem) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False + try: + w_res = space.call_function(cache.w_optimize_hook, + space.wrap(jitdriver.name), + space.wrap(type), + w_arg, + space.newlist(list_w)) + if space.is_w(w_res, space.w_None): + return + l = [] + for w_item in space.listview(w_res): + item = space.interp_w(WrappedOp, w_item) + l.append(jit_hooks._cast_to_resop(item.op)) + del operations[:] # modifying operations above is + # probably not a great idea since types may not work + # and we'll end up with half-working list and + # a segfault/fatal RPython error + for elem in l: + operations.append(elem) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + cache.in_recursion = False pypy_portal = PyPyPortal() From noreply at buildbot.pypy.org Mon Jan 9 18:08:51 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 18:08:51 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: 
add a name to another jitdriver Message-ID: <20120109170851.5050182110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51175:1b168f836dde Date: 2012-01-09 19:08 +0200 http://bitbucket.org/pypy/pypy/changeset/1b168f836dde/ Log: add a name to another jitdriver diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -162,7 +162,8 @@ # generate 2 versions of the function and 2 jit drivers. def _create_unpack_into(): jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results']) + reds=['self', 'frame', 'results'], + name='unpack_into') def unpack_into(self, results): """This is a hack for performance: runs the generator and collects all produced items in a list.""" @@ -196,4 +197,4 @@ self.frame = None return unpack_into unpack_into = _create_unpack_into() - unpack_into_w = _create_unpack_into() \ No newline at end of file + unpack_into_w = _create_unpack_into() From noreply at buildbot.pypy.org Mon Jan 9 18:17:54 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 18:17:54 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: rename JitPortal to JitHookInterface Message-ID: <20120109171754.D6C0582110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51176:8fdbf83e4cce Date: 2012-01-09 19:17 +0200 http://bitbucket.org/pypy/pypy/changeset/8fdbf83e4cce/ Log: rename JitPortal to JitHookInterface diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -8,15 +8,15 @@ class JitPolicy(object): - def __init__(self, portal=None): + def __init__(self, jithookiface=None): self.unsafe_loopy_graphs = set() self.supports_floats = False self.supports_longlong = False self.supports_singlefloats = False - if portal is None: - from pypy.rlib.jit import JitPortal - portal = 
JitPortal() - self.portal = portal + if jithookiface is None: + from pypy.rlib.jit import JitHookInterface + jithookiface = JitHookInterface() + self.jithookiface = jithookiface def set_supports_floats(self, flag): self.supports_floats = flag diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -306,12 +306,12 @@ loop.check_consistency() if metainterp_sd.warmrunnerdesc is not None: - portal = metainterp_sd.warmrunnerdesc.portal - portal.before_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, - original_jitcell_token, loop.operations, type, - greenkey) + hooks = metainterp_sd.warmrunnerdesc.hooks + hooks.before_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, type, + greenkey) else: - portal = None + hooks = None operations = get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") @@ -323,8 +323,8 @@ finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() - if portal is not None: - portal.after_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, + if hooks is not None: + hooks.after_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, original_jitcell_token, loop.operations, type, greenkey, ops_offset, asmstart, asmlen) metainterp_sd.stats.add_new_loop(loop) @@ -348,12 +348,12 @@ seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) if metainterp_sd.warmrunnerdesc is not None: - portal = metainterp_sd.warmrunnerdesc.portal - portal.before_compile_bridge(jitdriver_sd.jitdriver, + hooks = metainterp_sd.warmrunnerdesc.hooks + hooks.before_compile_bridge(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, original_loop_token, operations, n) else: - portal = None + hooks = None operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") @@ -364,12 
+364,12 @@ finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() - if portal is not None: - portal.after_compile_bridge(jitdriver_sd.jitdriver, - metainterp_sd.logger_ops, - original_loop_token, operations, n, - ops_offset, - asmstart, asmlen) + if hooks is not None: + hooks.after_compile_bridge(jitdriver_sd.jitdriver, + metainterp_sd.logger_ops, + original_loop_token, operations, n, + ops_offset, + asmstart, asmlen) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1795,8 +1795,8 @@ debug_print('~~~ ABORTING TRACING') jd_sd = self.jitdriver_sd greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args] - self.staticdata.warmrunnerdesc.portal.on_abort(reason, jd_sd.jitdriver, - greenkey) + self.staticdata.warmrunnerdesc.hooks.on_abort(reason, jd_sd.jitdriver, + greenkey) self.staticdata.stats.aborted() def blackhole_if_trace_too_long(self): diff --git a/pypy/jit/metainterp/test/test_jitportal.py b/pypy/jit/metainterp/test/test_jitiface.py rename from pypy/jit/metainterp/test/test_jitportal.py rename to pypy/jit/metainterp/test/test_jitiface.py --- a/pypy/jit/metainterp/test/test_jitportal.py +++ b/pypy/jit/metainterp/test/test_jitiface.py @@ -1,5 +1,5 @@ -from pypy.rlib.jit import JitDriver, JitPortal +from pypy.rlib.jit import JitDriver, JitHookInterface from pypy.rlib import jit_hooks from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import JitPolicy @@ -7,17 +7,17 @@ from pypy.jit.metainterp.resoperation import rop from pypy.rpython.annlowlevel import hlstr -class TestJitPortal(LLJitMixin): +class TestJitHookInterface(LLJitMixin): def test_abort_quasi_immut(self): reasons = [] - class MyJitPortal(JitPortal): + class MyJitIface(JitHookInterface): def on_abort(self, reason, jitdriver, greenkey): assert 
jitdriver is myjitdriver assert len(greenkey) == 1 reasons.append(reason) - portal = MyJitPortal() + iface = MyJitIface() myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) @@ -37,14 +37,14 @@ return total # assert f(100, 7) == 721 - res = self.meta_interp(f, [100, 7], policy=JitPolicy(portal)) + res = self.meta_interp(f, [100, 7], policy=JitPolicy(iface)) assert res == 721 assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 def test_on_compile(self): called = [] - class MyJitPortal(JitPortal): + class MyJitIface(JitHookInterface): def after_compile(self, jitdriver, logger, looptoken, operations, type, greenkey, ops_offset, asmaddr, asmlen): assert asmaddr == 0 @@ -62,7 +62,7 @@ called.append(("trace", greenkey[1].getint(), greenkey[0].getint(), type)) - portal = MyJitPortal() + iface = MyJitIface() driver = JitDriver(greens = ['n', 'm'], reds = ['i']) @@ -73,11 +73,11 @@ driver.jit_merge_point(n=n, m=m, i=i) i += 1 - self.meta_interp(loop, [1, 4], policy=JitPolicy(portal)) + self.meta_interp(loop, [1, 4], policy=JitPolicy(iface)) assert called == [#("trace", 4, 1, "loop"), ("optimize", 4, 1, "loop"), ("compile", 4, 1, "loop")] - self.meta_interp(loop, [2, 4], policy=JitPolicy(portal)) + self.meta_interp(loop, [2, 4], policy=JitPolicy(iface)) assert called == [#("trace", 4, 1, "loop"), ("optimize", 4, 1, "loop"), ("compile", 4, 1, "loop"), @@ -88,7 +88,7 @@ def test_on_compile_bridge(self): called = [] - class MyJitPortal(JitPortal): + class MyJitIface(JitHookInterface): def after_compile(self, jitdriver, logger, looptoken, operations, type, greenkey, ops_offset, asmaddr, asmlen): assert asmaddr == 0 @@ -114,7 +114,7 @@ i += 2 i += 1 - self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitPortal())) + self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitIface())) assert called == ["compile", "before_compile_bridge", "compile_bridge"] def test_resop_interface(self): diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- 
a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -210,13 +210,12 @@ vrefinfo = VirtualRefInfo(self) self.codewriter.setup_vrefinfo(vrefinfo) # - self.portal = policy.portal + self.hooks = policy.jithookiface self.make_virtualizable_infos() self.make_exception_classes() self.make_driverhook_graphs() self.make_enter_functions() self.rewrite_jit_merge_points(policy) - self.portal = policy.portal verbose = False # not self.cpu.translate_support_code self.rewrite_access_helpers() diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -725,11 +725,11 @@ return hop.genop('jit_marker', vlist, resulttype=lltype.Void) -class JitPortal(object): +class JitHookInterface(object): """ This is the main connector between the JIT and the interpreter. - Several methods on portal will be invoked at various stages of JIT running - like JIT loops compiled, aborts etc. - An instance of this class will be available as policy.portal. + Several methods on this class will be invoked at various stages + of JIT running like JIT loops compiled, aborts etc. + An instance of this class will be available as policy.jithookiface. 
each hook will accept some of the following args: From noreply at buildbot.pypy.org Mon Jan 9 18:20:44 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 18:20:44 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: update the pypyjit module as well Message-ID: <20120109172044.A198A82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51177:666eb3524b3c Date: 2012-01-09 19:20 +0200 http://bitbucket.org/pypy/pypy/changeset/666eb3524b3c/ Log: update the pypyjit module as well diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -17,11 +17,11 @@ def setup_after_space_initialization(self): # force the __extend__ hacks to occur early from pypy.module.pypyjit.interp_jit import pypyjitdriver - from pypy.module.pypyjit.policy import pypy_portal + from pypy.module.pypyjit.policy import pypy_hooks # add the 'defaults' attribute from pypy.rlib.jit import PARAMETERS space = self.space pypyjitdriver.space = space w_obj = space.wrap(PARAMETERS) space.setattr(space.wrap(self), space.wrap('defaults'), w_obj) - pypy_portal.space = space + pypy_hooks.space = space diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -1,12 +1,12 @@ from pypy.jit.codewriter.policy import JitPolicy -from pypy.rlib.jit import JitPortal +from pypy.rlib.jit import JitHookInterface from pypy.rlib import jit_hooks from pypy.interpreter.error import OperationError from pypy.jit.metainterp.jitprof import counter_names from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\ WrappedOp -class PyPyPortal(JitPortal): +class PyPyJitIface(JitHookInterface): def on_abort(self, reason, jitdriver, greenkey): space = self.space cache = space.fromcache(Cache) @@ -104,7 +104,7 @@ finally: cache.in_recursion = False -pypy_portal = 
PyPyPortal() +pypy_hooks = PyPyJitIface() class PyPyJitPolicy(JitPolicy): diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -11,7 +11,7 @@ from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.lltypesystem.rclass import OBJECT from pypy.module.pypyjit.interp_jit import pypyjitdriver -from pypy.module.pypyjit.policy import pypy_portal +from pypy.module.pypyjit.policy import pypy_hooks from pypy.jit.tool.oparser import parse from pypy.jit.metainterp.typesystem import llhelper from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG @@ -61,21 +61,21 @@ offset[op] = i def interp_on_compile(): - pypy_portal.after_compile(pypyjitdriver, logger, JitCellToken(), + pypy_hooks.after_compile(pypyjitdriver, logger, JitCellToken(), cls.oplist, 'loop', greenkey, offset, 0, 0) def interp_on_compile_bridge(): - pypy_portal.after_compile_bridge(pypyjitdriver, logger, + pypy_hooks.after_compile_bridge(pypyjitdriver, logger, JitCellToken(), cls.oplist, 0, offset, 0, 0) def interp_on_optimize(): - pypy_portal.before_compile(pypyjitdriver, logger, JitCellToken(), + pypy_hooks.before_compile(pypyjitdriver, logger, JitCellToken(), cls.oplist, 'loop', greenkey) def interp_on_abort(): - pypy_portal.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey) + pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey) cls.w_on_compile = space.wrap(interp2app(interp_on_compile)) cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -226,8 +226,8 @@ return self.get_entry_point(config) def jitpolicy(self, driver): - from pypy.module.pypyjit.policy import PyPyJitPolicy, pypy_portal - return 
PyPyJitPolicy(pypy_portal) + from pypy.module.pypyjit.policy import PyPyJitPolicy, pypy_hooks + return PyPyJitPolicy(pypy_hooks) def get_entry_point(self, config): from pypy.tool.lib_pypy import import_from_lib_pypy From noreply at buildbot.pypy.org Mon Jan 9 18:49:56 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 18:49:56 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: improve the situation with arguments of the hooks Message-ID: <20120109174956.E165382110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51178:b3dd81a62153 Date: 2012-01-09 19:49 +0200 http://bitbucket.org/pypy/pypy/changeset/b3dd81a62153/ Log: improve the situation with arguments of the hooks diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -141,7 +141,6 @@ self._compile_loop_or_bridge(c, inputargs, operations, clt) old, oldindex = faildescr._compiled_fail llimpl.compile_redirect_fail(old, oldindex, c) - return None, 0, 0 def compile_loop(self, inputargs, operations, jitcell_token, log=True, name=''): @@ -156,7 +155,6 @@ clt.compiled_version = c jitcell_token.compiled_loop_token = clt self._compile_loop_or_bridge(c, inputargs, operations, clt) - return None, 0, 0 def free_loop_and_bridges(self, compiled_loop_token): for c in compiled_loop_token.loop_and_bridges: diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -7,6 +7,7 @@ from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper +from pypy.rlib.jit import AsmInfo from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, gpr_reg_mgr_cls, 
_valid_addressing_size) @@ -477,8 +478,8 @@ name = "Loop # %s: %s" % (looptoken.number, loopname) self.cpu.profile_agent.native_code_written(name, rawstart, full_size) - return (ops_offset, rawstart + looppos, - size_excluding_failure_stuff - looppos) + return AsmInfo(ops_offset, rawstart + looppos, + size_excluding_failure_stuff - looppos) def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): @@ -492,7 +493,7 @@ except ValueError: debug_print("Bridge out of guard", descr_number, "was already compiled!") - raise + return self.setup(original_loop_token) if log: @@ -540,7 +541,7 @@ name = "Bridge # %s" % (descr_number,) self.cpu.profile_agent.native_code_written(name, rawstart, fullsize) - return ops_offset, startpos + rawstart, codeendpos - startpos + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def write_pending_failure_recoveries(self): # for each pending guard, generate the code of the recovery stub diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -5,6 +5,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import debug_start, debug_stop, debug_print from pypy.rlib import rstack +from pypy.rlib.jit import JitDebugInfo from pypy.conftest import option from pypy.tool.sourcetools import func_with_new_name @@ -307,32 +308,36 @@ if metainterp_sd.warmrunnerdesc is not None: hooks = metainterp_sd.warmrunnerdesc.hooks - hooks.before_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, - original_jitcell_token, loop.operations, type, - greenkey) + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, + type, greenkey) + hooks.before_compile(debug_info) else: + debug_info = None hooks = None operations = get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - tp = 
metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, - name=loopname) - ops_offset, asmstart, asmlen = tp + asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, + original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() if hooks is not None: - hooks.after_compile(jitdriver_sd.jitdriver, metainterp_sd.logger_ops, - original_jitcell_token, loop.operations, type, - greenkey, ops_offset, asmstart, asmlen) + debug_info.asminfo = asminfo + hooks.after_compile(debug_info) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # loopname = jitdriver_sd.warmstate.get_location_str(greenkey) + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset, name=loopname) @@ -349,31 +354,34 @@ TreeLoop.check_consistency_of_branch(operations, seen) if metainterp_sd.warmrunnerdesc is not None: hooks = metainterp_sd.warmrunnerdesc.hooks - hooks.before_compile_bridge(jitdriver_sd.jitdriver, - metainterp_sd.logger_ops, - original_loop_token, operations, n) + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_loop_token, operations, 'bridge', + fail_descr_no=n) + hooks.before_compile_bridge(debug_info) else: hooks = None + debug_info = None operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - tp = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, - original_loop_token) - ops_offset, asmstart, asmlen = tp + asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() if hooks is not None: - hooks.after_compile_bridge(jitdriver_sd.jitdriver, - metainterp_sd.logger_ops, 
- original_loop_token, operations, n, - ops_offset, - asmstart, asmlen) + debug_info.asminfo = asminfo + hooks.after_compile_bridge(debug_info) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") # + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset) # #if metainterp_sd.warmrunnerdesc is not None: # for tests diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py --- a/pypy/jit/metainterp/test/test_jitiface.py +++ b/pypy/jit/metainterp/test/test_jitiface.py @@ -45,22 +45,18 @@ called = [] class MyJitIface(JitHookInterface): - def after_compile(self, jitdriver, logger, looptoken, operations, - type, greenkey, ops_offset, asmaddr, asmlen): - assert asmaddr == 0 - assert asmlen == 0 - called.append(("compile", greenkey[1].getint(), - greenkey[0].getint(), type)) + def after_compile(self, di): + called.append(("compile", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) - def before_compile(self, jitdriver, logger, looptoken, oeprations, - type, greenkey): - called.append(("optimize", greenkey[1].getint(), - greenkey[0].getint(), type)) + def before_compile(self, di): + called.append(("optimize", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) - def before_optimize(self, jitdriver, logger, looptoken, oeprations, - type, greenkey): - called.append(("trace", greenkey[1].getint(), - greenkey[0].getint(), type)) + #def before_optimize(self, jitdriver, logger, looptoken, oeprations, + # type, greenkey): + # called.append(("trace", greenkey[1].getint(), + # greenkey[0].getint(), type)) iface = MyJitIface() @@ -89,18 +85,13 @@ called = [] class MyJitIface(JitHookInterface): - def after_compile(self, jitdriver, logger, looptoken, operations, - type, greenkey, ops_offset, asmaddr, asmlen): - assert asmaddr == 0 - assert asmlen == 0 + def 
after_compile(self, di): called.append("compile") - def after_compile_bridge(self, jitdriver, logger, orig_token, - operations, n, ops_offset, asmstart, asmlen): + def after_compile_bridge(self, di): called.append("compile_bridge") - def before_compile_bridge(self, jitdriver, logger, orig_token, - operations, n): + def before_compile_bridge(self, di): called.append("before_compile_bridge") driver = JitDriver(greens = ['n', 'm'], reds = ['i']) diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -107,9 +107,15 @@ cache.in_recursion = NonConstant(False) def wrap_oplist(space, logops, operations, ops_offset): - return [WrappedOp(jit_hooks._cast_to_gcref(op), - ops_offset.get(op, 0), - logops.repr_of_resop(op)) for op in operations] + l_w = [] + for op in operations: + if ops_offset is None: + ofs = -1 + else: + ofs = ops_offset.get(op, 0) + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) + return l_w class WrappedBox(Wrappable): """ A class representing a single box diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -725,72 +725,104 @@ return hop.genop('jit_marker', vlist, resulttype=lltype.Void) +class AsmInfo(object): + """ An addition to JitDebugInfo concerning assembler. Attributes: + + ops_offset - dict of offsets of operations or None + asmaddr - (int) raw address of assembler block + asmlen - assembler block length + """ + def __init__(self, ops_offset, asmaddr, asmlen): + self.ops_offset = ops_offset + self.asmaddr = asmaddr + self.asmlen = asmlen + +class JitDebugInfo(object): + """ An object representing debug info. 
Attributes meanings: + + greenkey - a list of green boxes or None for bridge + logger - an instance of jit.metainterp.logger.LogOperations + type - either 'loop', 'entry bridge' or 'bridge' + looptoken - description of a loop + fail_descr_no - number of failing descr for bridges, -1 otherwise + asminfo - extra assembler information + """ + + asminfo = None + def __init__(self, jitdriver_sd, logger, looptoken, operations, type, + greenkey=None, fail_descr_no=-1): + self.jitdriver_sd = jitdriver_sd + self.logger = logger + self.looptoken = looptoken + self.operations = operations + self.type = type + if type == 'bridge': + assert fail_descr_no != -1 + else: + assert greenkey is not None + self.greenkey = greenkey + self.fail_descr_no = fail_descr_no + + def get_jitdriver(self): + """ Return where the jitdriver on which the jitting started + """ + return self.jitdriver_sd.jitdriver + + def get_greenkey_repr(self): + """ Return the string repr of a greenkey + """ + return self.jitdriver_sd.warmstate.get_location_str(self.greenkey) + class JitHookInterface(object): """ This is the main connector between the JIT and the interpreter. Several methods on this class will be invoked at various stages of JIT running like JIT loops compiled, aborts etc. An instance of this class will be available as policy.jithookiface. 
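[Editorial aside: the diff above replaces long positional hook signatures with a single `JitDebugInfo` argument carrying an optional `AsmInfo`. The following is a simplified, self-contained sketch of that refactoring pattern — these stand-in classes imitate but are not PyPy's real `AsmInfo`/`JitDebugInfo` from `pypy/rlib/jit.py`.]

```python
# Standalone sketch of the refactoring in this commit: many positional
# hook arguments are bundled into one "debug info" object.  These are
# simplified stand-ins, not PyPy's actual classes.

class AsmInfo(object):
    """Assembler details: offsets dict (or None), block address, length."""
    def __init__(self, ops_offset, asmaddr, asmlen):
        self.ops_offset = ops_offset
        self.asmaddr = asmaddr
        self.asmlen = asmlen

class JitDebugInfo(object):
    """Bundles everything a compile hook needs; asminfo is filled in later."""
    asminfo = None
    def __init__(self, looptoken, operations, type, greenkey=None,
                 fail_descr_no=-1):
        # bridges carry a fail-descr number, loops carry a greenkey
        if type == 'bridge':
            assert fail_descr_no != -1
        else:
            assert greenkey is not None
        self.looptoken = looptoken
        self.operations = operations
        self.type = type
        self.greenkey = greenkey
        self.fail_descr_no = fail_descr_no

# Old style: every piece of state is a separate positional argument,
# so adding a field means touching every caller and every override.
def after_compile_old(looptoken, operations, type, greenkey,
                      ops_offset, asmaddr, asmlen):
    return (type, asmaddr)

# New style: one object; new fields can be added without breaking callers.
def after_compile_new(debug_info):
    return (debug_info.type, debug_info.asminfo.asmaddr)

di = JitDebugInfo('token', ['op1', 'op2'], 'loop', greenkey=[42])
di.asminfo = AsmInfo({}, 0x1000, 128)
assert after_compile_new(di) == ('loop', 0x1000)
```

The design choice mirrored here is the one visible in `compile.py` above: the caller builds the debug-info object once, attaches `asminfo` after the backend returns, and passes the same object to both `before_compile` and `after_compile`.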
- - each hook will accept some of the following args: - - - greenkey - a list of green boxes - jitdriver - an instance of jitdriver where tracing started - logger - an instance of jit.metainterp.logger.LogOperations - ops_offset - asmaddr - (int) raw address of assembler block - asmlen - assembler block length - type - either 'loop' or 'entry bridge' """ def on_abort(self, reason, jitdriver, greenkey): """ A hook called each time a loop is aborted with jitdriver and greenkey where it started, reason is a string why it got aborted """ - #def before_optimize(self, jitdriver, logger, looptoken, operations, - # type, greenkey): - # """ A hook called before optimizer is run, args described in class - # docstring. Overwrite for custom behavior + #def before_optimize(self, debug_info): + # """ A hook called before optimizer is run, called with instance of + # JitDebugInfo. Overwrite for custom behavior # """ # DISABLED - def before_compile(self, jitdriver, logger, looptoken, operations, type, - greenkey): + def before_compile(self, debug_info): """ A hook called after a loop is optimized, before compiling assembler, - args described ni class docstring. Overwrite for custom behavior + called with JitDebugInfo instance. Overwrite for custom behavior """ - def after_compile(self, jitdriver, logger, looptoken, operations, type, - greenkey, ops_offset, asmaddr, asmlen): + def after_compile(self, debug_info): """ A hook called after a loop has compiled assembler, - args described in class docstring. Overwrite for custom behavior + called with JitDebugInfo instance. Overwrite for custom behavior """ - #def before_optimize_bridge(self, jitdriver, logger, orig_looptoken, + #def before_optimize_bridge(self, debug_info): # operations, fail_descr_no): # """ A hook called before a bridge is optimized. 
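[Editorial aside: the hooks in this diff default to no-ops, so a subclass overrides only the events it cares about — roughly as the `MyJitIface` test class earlier in this thread does. A simplified, self-contained imitation of that protocol (not PyPy's actual `JitHookInterface`):]

```python
# Simplified imitation of the hook-interface protocol on this branch:
# every hook takes one debug-info object and defaults to doing nothing,
# so subclasses override only what they need.  Not PyPy's real code.

class JitHookInterface(object):
    def on_abort(self, reason, jitdriver, greenkey):
        pass
    def before_compile(self, debug_info):
        pass
    def after_compile(self, debug_info):
        pass

class RecordingIface(JitHookInterface):
    """Collects (event, detail) tuples, like MyJitIface in the tests."""
    def __init__(self):
        self.events = []
    def after_compile(self, debug_info):
        self.events.append(('compile', debug_info.type))
    def on_abort(self, reason, jitdriver, greenkey):
        self.events.append(('abort', reason))

class FakeDebugInfo(object):
    type = 'loop'

iface = RecordingIface()
iface.after_compile(FakeDebugInfo())   # overridden: recorded
iface.before_compile(FakeDebugInfo())  # inherited no-op: ignored
iface.on_abort('ABORT_TOO_LONG', None, [])
assert iface.events == [('compile', 'loop'), ('abort', 'ABORT_TOO_LONG')]
```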
- # Args described in class docstring, Overwrite for + # Called with JitDebugInfo instance, overwrite for # custom behavior # """ # DISABLED - def before_compile_bridge(self, jitdriver, logger, orig_looptoken, - operations, fail_descr_no): + def before_compile_bridge(self, debug_info): """ A hook called before a bridge is compiled, but after optimizations - are performed. Args described in class docstring, Overwrite for + are performed. Called with instance of debug_info, overwrite for custom behavior """ - def after_compile_bridge(self, jitdriver, logger, orig_looptoken, - operations, fail_descr_no, ops_offset, asmaddr, - asmlen): - """ A hook called after a bridge is compiled, args described in class - docstring, Overwrite for custom behavior + def after_compile_bridge(self, debug_info): + """ A hook called after a bridge is compiled, called with JitDebugInfo + instance, overwrite for custom behavior """ def get_stats(self): """ Returns various statistics """ + raise NotImplementedError def record_known_class(value, cls): """ From noreply at buildbot.pypy.org Mon Jan 9 21:38:29 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 9 Jan 2012 21:38:29 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: zjit improvement Message-ID: <20120109203829.BB3B682110@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51179:0722e568f060 Date: 2012-01-09 22:36 +0200 http://bitbucket.org/pypy/pypy/changeset/0722e568f060/ Log: zjit improvement diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -22,6 +22,9 @@ def done(self): raise NotImplementedError + def axis_done(self): + raise NotImplementedError + class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 @@ -120,7 +123,7 @@ self.shapelen = len(shape) self.indices = [0] * len(shape) self._done = False - self.axis_done = False + 
self._axis_done = False self.offset = arr_start self.dim = dim self.dim_order = [] @@ -136,13 +139,19 @@ def done(self): return self._done + def axis_done(self): + return self._axis_done + + @jit.unroll_safe def next(self, shapelen): #shapelen will always be one less than self.shapelen offset = self.offset - axis_done = False - indices = [0] * self.shapelen - for i in range(self.shapelen): - indices[i] = self.indices[i] + _axis_done = False + done = False + #indices = [0] * self.shapelen + #for i in range(self.shapelen): + # indices[i] = self.indices[i] + indices = self.indices for i in self.dim_order: if indices[i] < self.shape[i] - 1: indices[i] += 1 @@ -150,13 +159,13 @@ break else: if i == self.dim: - axis_done = True + _axis_done = True indices[i] = 0 offset -= self.backstrides[i] else: - self._done = True + done = True res = instantiate(AxisIterator) - res.axis_done = axis_done + res._axis_done = _axis_done res.offset = offset res.indices = indices res.strides = self.strides @@ -165,7 +174,7 @@ res.shape = self.shape res.shapelen = self.shapelen res.dim = self.dim - res._done = self._done + res._done = done return res # ------ other iterators that are not part of the computation frame ---------- diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -39,7 +39,7 @@ axisreduce_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['identity', 'self','result', 'ri', 'frame', 'nextval', 'dtype', 'value'], + reds=['identity', 'self','result', 'ri', 'frame', 'dtype', 'value'], get_printable_location=signature.new_printable_location('axisreduce'), ) @@ -758,8 +758,6 @@ class Reduce(VirtualArray): - _immutable_fields_ = ['dim', 'binfunc', 'dtype', 'identity'] - def __init__(self, binfunc, name, dim, res_dtype, values, identity=None): shape = values.shape[0:dim] + values.shape[dim + 
1:len(values.shape)] VirtualArray.__init__(self, name, shape, res_dtype) @@ -803,27 +801,27 @@ ri = ArrayIterator(result.size) frame = sig.create_frame(self.values, dim=self.dim) value = self.get_identity(sig, frame, shapelen) - nextval = sig.eval(frame, self.values).convert_to(dtype) + assert isinstance(sig, signature.ReduceSignature) while not frame.done(): axisreduce_driver.jit_merge_point(frame=frame, self=self, value=value, sig=sig, shapelen=shapelen, ri=ri, - nextval=nextval, dtype=dtype, + dtype=dtype, identity=identity, result=result) - if frame.iterators[0].axis_done: + if frame.axis_done(): + result.dtype.setitem(result.storage, ri.offset, value) if identity is None: value = sig.eval(frame, self.values).convert_to(dtype) frame.next(shapelen) else: value = identity.convert_to(dtype) ri = ri.next(shapelen) - assert isinstance(sig, signature.ReduceSignature) - nextval = sig.eval(frame, self.values).convert_to(dtype) - value = self.binfunc(dtype, value, nextval) - result.dtype.setitem(result.storage, ri.offset, value) + value = self.binfunc(dtype, value, + sig.eval(frame, self.values).convert_to(dtype)) frame.next(shapelen) assert ri.done + result.dtype.setitem(result.storage, ri.offset, value) return result diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -59,6 +59,12 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + def axis_done(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + return False + return self.iterators[final_iter].axis_done() + def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -127,16 +127,17 @@ def test_axissum(self): result = self.run("axissum") assert result 
== 30 - self.check_simple_loop({'arraylen_gc': 1, - 'call': 1, - 'getfield_gc': 3, - "getinteriorfield_raw": 1, - "guard_class": 1, - "guard_false": 2, - 'guard_no_exception': 1, - "float_add": 1, - "jump": 1, - 'setinteriorfield_raw': 1, + self.check_simple_loop({\ + 'setarrayitem_gc': 1, + 'getarrayitem_gc': 5, + 'getinteriorfield_raw': 1, + 'arraylen_gc': 2, + 'guard_true': 1, + 'int_sub': 1, + 'int_lt': 1, + 'jump': 1, + 'float_add': 1, + 'int_add': 2, }) def define_prod(): @@ -236,7 +237,8 @@ def test_ufunc(self): result = self.run("ufunc") assert result == -6 - self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, + self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, + "float_neg": 1, "setinteriorfield_raw": 1, "int_add": 2, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -346,7 +348,7 @@ result = self.run("setslice") assert result == 11.0 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 3, 'int_lt': 1, 'guard_true': 1, 'jump': 1, 'arraylen_gc': 3}) @@ -363,11 +365,12 @@ result = self.run("virtual_slice") assert result == 4 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) + class TestNumpyOld(LLJitMixin): def setup_class(cls): py.test.skip("old") @@ -401,4 +404,3 @@ result = self.meta_interp(f, [5], listops=True, backendopt=True) assert result == f(5) - From noreply at buildbot.pypy.org Mon Jan 9 22:06:57 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 22:06:57 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: update the interface on the pypyjit side Message-ID: 
<20120109210657.C373382110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51180:aed03c7eb163 Date: 2012-01-09 23:06 +0200 http://bitbucket.org/pypy/pypy/changeset/aed03c7eb163/ Log: update the interface on the pypyjit side diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1796,7 +1796,7 @@ jd_sd = self.jitdriver_sd greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args] self.staticdata.warmrunnerdesc.hooks.on_abort(reason, jd_sd.jitdriver, - greenkey) + greenkey, jd_sd.warmstate.get_location_str(greenkey)) self.staticdata.stats.aborted() def blackhole_if_trace_too_long(self): diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py --- a/pypy/jit/metainterp/test/test_jitiface.py +++ b/pypy/jit/metainterp/test/test_jitiface.py @@ -12,14 +12,16 @@ reasons = [] class MyJitIface(JitHookInterface): - def on_abort(self, reason, jitdriver, greenkey): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): assert jitdriver is myjitdriver assert len(greenkey) == 1 reasons.append(reason) + assert greenkey_repr == 'blah' iface = MyJitIface() - myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total']) + myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'], + get_printable_location=lambda *args: 'blah') class Foo: _immutable_fields_ = ['a?'] diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -19,8 +19,9 @@ self.w_abort_hook = space.w_None self.w_optimize_hook = space.w_None -def wrap_greenkey(space, jitdriver, greenkey): - if jitdriver.name == 'pypyjit': +def wrap_greenkey(space, jitdriver, greenkey, greenkey_repr): + jitdriver_name = jitdriver.name + if jitdriver_name == 'pypyjit': next_instr = greenkey[0].getint() is_being_profiled = 
greenkey[1].getint() ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), @@ -29,7 +30,7 @@ return space.newtuple([space.wrap(pycode), space.wrap(next_instr), space.newbool(bool(is_being_profiled))]) else: - return space.wrap('who knows?') + return space.wrap(greenkey_repr) def set_compile_hook(space, w_hook): """ set_compile_hook(hook) @@ -106,7 +107,7 @@ cache.w_abort_hook = w_hook cache.in_recursion = NonConstant(False) -def wrap_oplist(space, logops, operations, ops_offset): +def wrap_oplist(space, logops, operations, ops_offset=None): l_w = [] for op in operations: if ops_offset is None: diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -7,7 +7,7 @@ WrappedOp class PyPyJitIface(JitHookInterface): - def on_abort(self, reason, jitdriver, greenkey): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): space = self.space cache = space.fromcache(Cache) if cache.in_recursion: @@ -18,73 +18,75 @@ try: space.call_function(cache.w_abort_hook, space.wrap(jitdriver.name), - wrap_greenkey(space, jitdriver, greenkey), + wrap_greenkey(space, jitdriver, + greenkey, greenkey_repr), space.wrap(counter_names[reason])) except OperationError, e: e.write_unraisable(space, "jit hook ", cache.w_abort_hook) finally: cache.in_recursion = False - def after_compile(self, jitdriver, logger, looptoken, operations, type, - greenkey, ops_offset, asmstart, asmlen): - self._compile_hook(jitdriver, logger, operations, type, - ops_offset, asmstart, asmlen, - wrap_greenkey(self.space, jitdriver, greenkey)) + def after_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._compile_hook(debug_info, w_greenkey) - def after_compile_bridge(self, jitdriver, logger, orig_looptoken, - operations, n, ops_offset, asmstart, asmlen): - self._compile_hook(jitdriver, logger, operations, 
'bridge', - ops_offset, asmstart, asmlen, - self.space.wrap(n)) + def after_compile_bridge(self, debug_info): + self._compile_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) - def before_compile(self, jitdriver, logger, looptoken, operations, type, - greenkey): - self._optimize_hook(jitdriver, logger, operations, type, - wrap_greenkey(self.space, jitdriver, greenkey)) + def before_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._optimize_hook(debug_info, w_greenkey) - def before_compile_bridge(self, jitdriver, logger, orig_looptoken, - operations, n): - self._optimize_hook(jitdriver, logger, operations, 'bridge', - self.space.wrap(n)) + def before_compile_bridge(self, debug_info): + self._optimize_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) - def _compile_hook(self, jitdriver, logger, operations, type, - ops_offset, asmstart, asmlen, w_arg): + def _compile_hook(self, debug_info, w_arg): space = self.space cache = space.fromcache(Cache) if cache.in_recursion: return if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations, ops_offset) + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations, + debug_info.asminfo.ops_offset) cache.in_recursion = True try: try: + jd_name = debug_info.get_jitdriver().name + asminfo = debug_info.asminfo space.call_function(cache.w_compile_hook, - space.wrap(jitdriver.name), - space.wrap(type), + space.wrap(jd_name), + space.wrap(debug_info.type), w_arg, space.newlist(list_w), - space.wrap(asmstart), - space.wrap(asmlen)) + space.wrap(asminfo.asmaddr), + space.wrap(asminfo.asmlen)) except OperationError, e: e.write_unraisable(space, "jit hook ", cache.w_compile_hook) finally: cache.in_recursion = False - def _optimize_hook(self, jitdriver, logger, operations, type, 
w_arg): + def _optimize_hook(self, debug_info, w_arg): space = self.space cache = space.fromcache(Cache) if cache.in_recursion: return if space.is_true(cache.w_optimize_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations, {}) + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations) cache.in_recursion = True try: try: + jd_name = debug_info.get_jitdriver().name w_res = space.call_function(cache.w_optimize_hook, - space.wrap(jitdriver.name), - space.wrap(type), + space.wrap(jd_name), + space.wrap(debug_info.type), w_arg, space.newlist(list_w)) if space.is_w(w_res, space.w_None): @@ -93,12 +95,12 @@ for w_item in space.listview(w_res): item = space.interp_w(WrappedOp, w_item) l.append(jit_hooks._cast_to_resop(item.op)) - del operations[:] # modifying operations above is + del debug_info.operations[:] # modifying operations above is # probably not a great idea since types may not work # and we'll end up with half-working list and # a segfault/fatal RPython error for elem in l: - operations.append(elem) + debug_info.operations.append(elem) except OperationError, e: e.write_unraisable(space, "jit hook ", cache.w_compile_hook) finally: diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -15,6 +15,7 @@ from pypy.jit.tool.oparser import parse from pypy.jit.metainterp.typesystem import llhelper from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG +from pypy.rlib.jit import JitDebugInfo, AsmInfo class MockJitDriverSD(object): class warmstate(object): @@ -25,6 +26,9 @@ pycode = cast_base_ptr_to_instance(PyCode, ll_code) return pycode.co_name + jitdriver = pypyjitdriver + + class MockSD(object): class cpu(object): ts = llhelper @@ -47,7 +51,7 @@ code_gcref = lltype.cast_opaque_ptr(llmemory.GCREF, ll_code) logger = Logger(MockSD()) - 
cls.origoplist = parse(""" + oplist = parse(""" [i1, i2, p2] i3 = int_add(i1, i2) debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0)) @@ -56,35 +60,43 @@ """, namespace={'ptr0': code_gcref}).operations greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] offset = {} - for i, op in enumerate(cls.origoplist): + for i, op in enumerate(oplist): if i != 1: offset[op] = i + di_loop = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop_optimize = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop.asminfo = AsmInfo(offset, 0, 0) + di_bridge = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'bridge', fail_descr_no=0) + di_bridge.asminfo = AsmInfo(offset, 0, 0) + def interp_on_compile(): - pypy_hooks.after_compile(pypyjitdriver, logger, JitCellToken(), - cls.oplist, 'loop', greenkey, offset, - 0, 0) + di_loop.oplist = cls.oplist + pypy_hooks.after_compile(di_loop) def interp_on_compile_bridge(): - pypy_hooks.after_compile_bridge(pypyjitdriver, logger, - JitCellToken(), cls.oplist, 0, - offset, 0, 0) + pypy_hooks.after_compile_bridge(di_bridge) def interp_on_optimize(): - pypy_hooks.before_compile(pypyjitdriver, logger, JitCellToken(), - cls.oplist, 'loop', greenkey) + di_loop_optimize.oplist = cls.oplist + pypy_hooks.before_compile(di_loop_optimize) def interp_on_abort(): - pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey) + pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey, + 'blah') cls.w_on_compile = space.wrap(interp2app(interp_on_compile)) cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) + cls.orig_oplist = oplist def setup_method(self, meth): - self.__class__.oplist = self.origoplist + self.__class__.oplist = self.orig_oplist[:] def test_on_compile(self): import 
pypyjit diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -779,7 +779,7 @@ of JIT running like JIT loops compiled, aborts etc. An instance of this class will be available as policy.jithookiface. """ - def on_abort(self, reason, jitdriver, greenkey): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): """ A hook called each time a loop is aborted with jitdriver and greenkey where it started, reason is a string why it got aborted """ From noreply at buildbot.pypy.org Mon Jan 9 23:15:58 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 23:15:58 +0100 (CET) Subject: [pypy-commit] pypy look-into-thread: a test - look into thread module Message-ID: <20120109221558.8E13882110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: look-into-thread Changeset: r51181:283df4b51997 Date: 2012-01-10 00:13 +0200 http://bitbucket.org/pypy/pypy/changeset/283df4b51997/ Log: a test - look into thread module diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -17,7 +17,7 @@ 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', 'thread']: return True return False From noreply at buildbot.pypy.org Mon Jan 9 23:22:50 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Mon, 9 Jan 2012 23:22:50 +0100 (CET) Subject: [pypy-commit] lang-scheme default: Implement not function Message-ID: <20120109222250.9A8CE82110@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r32:2f84a6d52477 Date: 2011-12-29 22:05 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/2f84a6d52477/ Log: Implement not function diff --git a/scheme/procedure.py b/scheme/procedure.py --- a/scheme/procedure.py +++ b/scheme/procedure.py @@ -3,7 +3,7 @@ W_Number, 
W_Real, W_Integer, W_List, W_Character, W_Vector, \ Body, W_Procedure, W_String, W_Promise, plst2lst, w_undefined, \ SchemeSyntaxError, SchemeQuit, WrongArgType, WrongArgsNumber, \ - w_nil + w_nil, w_true, w_false ## # operations @@ -534,6 +534,19 @@ def predicate(self, w_obj): return w_obj is w_nil +class Not(W_Procedure): + _symbol_name = "not" + + def procedure(self, ctx, lst): + if len(lst) != 1: + raise WrongArgsNumber + + w_bool = lst[0] + if w_bool.to_boolean(): + return w_false + else: + return w_true + ## # Input/Output procedures From noreply at buildbot.pypy.org Mon Jan 9 23:22:51 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Mon, 9 Jan 2012 23:22:51 +0100 (CET) Subject: [pypy-commit] lang-scheme default: Implement all numerical comparisions (< <= > >=) Message-ID: <20120109222251.A295F82110@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r33:82753c10ee59 Date: 2011-12-29 22:37 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/82753c10ee59/ Log: Implement all numerical comparisions (< <= > >=) diff --git a/scheme/procedure.py b/scheme/procedure.py --- a/scheme/procedure.py +++ b/scheme/procedure.py @@ -94,9 +94,7 @@ Mul = create_op_class('*', '', "Mul", 1) Div = create_op_class('/', '1 /', "Div") -class Equal(W_Procedure): - _symbol_name = "=" - +class NumberComparison(W_Procedure): def procedure(self, ctx, lst): if len(lst) < 2: return W_Boolean(True) @@ -109,12 +107,43 @@ if not isinstance(arg, W_Number): raise WrongArgType(arg, "Number") - if prev.to_number() != arg.to_number(): + if not self.relation(prev.to_number(), arg.to_number()): return W_Boolean(False) prev = arg return W_Boolean(True) +class Equal(NumberComparison): + _symbol_name = "=" + + def relation(self, a, b): + return a == b + +class LessThen(NumberComparison): + _symbol_name = "<" + + def relation(self, a, b): + return a < b + +class LessEqual(NumberComparison): + _symbol_name = "<=" + + def relation(self, a, b): + return a <= b + +class 
GreaterThen(NumberComparison): + _symbol_name = ">" + + def relation(self, a, b): + return a > b + +class GreaterEqual(NumberComparison): + _symbol_name = ">=" + + def relation(self, a, b): + return a >= b + + class List(W_Procedure): _symbol_name = "list" diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py --- a/scheme/test/test_eval.py +++ b/scheme/test/test_eval.py @@ -199,6 +199,21 @@ py.test.raises(WrongArgType, eval_noctx, "(= 'a 1)") + w_bool = eval_noctx("(< 1 2 3)") + assert w_bool.to_boolean() is True + + w_bool = eval_noctx("(< 1 3 2)") + assert w_bool.to_boolean() is False + + w_bool = eval_noctx("(> 3 2 1)") + assert w_bool.to_boolean() is True + + w_bool = eval_noctx("(<= 1 1 2 2 3)") + assert w_bool.to_boolean() is True + + w_bool = eval_noctx("(>= 3 3 1)") + assert w_bool.to_boolean() is True + def test_comparison_heteronums(): w_bool = eval_noctx("(= 1 1.0 1.1)") assert w_bool.to_boolean() is False @@ -839,4 +854,10 @@ py.test.raises(WrongArgType, eval_, ctx, "(append 'a '())") py.test.raises(WrongArgType, eval_, ctx, "(append 1 2 3)") - py.test.raises(WrongArgType, eval_, ctx, "(append! (cons 1 2) '(3 4))") \ No newline at end of file + py.test.raises(WrongArgType, eval_, ctx, "(append! 
(cons 1 2) '(3 4))") + +def test_not(): + assert not eval_noctx("(not #t)").to_boolean() + assert eval_noctx("(not #f)").to_boolean() + assert not eval_noctx("(not '())").to_boolean() + assert not eval_noctx("(not 0)").to_boolean() From noreply at buildbot.pypy.org Mon Jan 9 23:22:52 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Mon, 9 Jan 2012 23:22:52 +0100 (CET) Subject: [pypy-commit] lang-scheme default: Implement Assoc*-functions Message-ID: <20120109222252.AA20382110@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r34:19a17e0790e6 Date: 2011-12-29 23:37 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/19a17e0790e6/ Log: Implement Assoc*-functions diff --git a/scheme/procedure.py b/scheme/procedure.py --- a/scheme/procedure.py +++ b/scheme/procedure.py @@ -380,6 +380,52 @@ return W_String(w_char.to_string() * w_number.to_fixnum()) ## +# Association lists +## +class AssocX(W_Procedure): + def procedure(self, ctx, lst): + if len(lst) != 2: + raise WrongArgsNumber + + (w_obj, w_alst) = lst + + w_iter = w_alst + while w_iter is not w_nil: + if not isinstance(w_iter, W_Pair): + raise WrongArgType(w_alst, "AList") + + w_item = w_iter.car + + if not isinstance(w_item, W_Pair): + raise WrongArgType(w_alst, "AList") + + if self.compare(w_obj, w_item.car): + return w_item + + w_iter = w_iter.cdr + + return w_false + +class Assq(AssocX): + _symbol_name = "assq" + + def compare(self, w_obj1, w_obj2): + return w_obj1.eq(w_obj2) + +class Assv(AssocX): + _symbol_name = "assv" + + def compare(self, w_obj1, w_obj2): + return w_obj1.eqv(w_obj2) + +class Assoc(AssocX): + _symbol_name = "assoc" + + def compare(self, w_obj1, w_obj2): + return w_obj1.equal(w_obj2) + + +## # Equivalnece Predicates ## class EquivalnecePredicate(W_Procedure): diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py --- a/scheme/test/test_eval.py +++ b/scheme/test/test_eval.py @@ -861,3 +861,30 @@ assert eval_noctx("(not #f)").to_boolean() assert not 
eval_noctx("(not '())").to_boolean() assert not eval_noctx("(not 0)").to_boolean() + +def test_assoc(): + w_res = eval_noctx("(assq 'b '((a 1) (b 2) (c 3)))") + assert isinstance(w_res, W_Pair) + assert w_res.equal(parse_("(b 2)")) + + w_res = eval_noctx("(assq 'x '((a 1) (b 2) (c 3)))") + assert w_res is w_false + + w_res = eval_noctx("(assv (+ 1 2) '((1 a) (2 b) (3 c)))") + assert isinstance(w_res, W_Pair) + assert w_res.equal(parse_("(3 c)")) + + w_res = eval_noctx("(assq (list 'a) '(((a)) ((b)) ((c))))") + assert w_res is w_false + + w_res = eval_noctx("(assoc (list 'a) '(((a)) ((b)) ((c))))") + assert isinstance(w_res, W_Pair) + assert w_res.equal(parse_("((a))")) + + w_res = eval_noctx("(assq 'a '())") + assert w_res is w_false + + py.test.raises(WrongArgType, eval_noctx, "(assq 'a '(a b c))") + py.test.raises(WrongArgType, eval_noctx, "(assq 1 2)") + py.test.raises(WrongArgsNumber, eval_noctx, "(assq 1 '(1 2) '(3 4))") + From noreply at buildbot.pypy.org Mon Jan 9 23:22:53 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Mon, 9 Jan 2012 23:22:53 +0100 (CET) Subject: [pypy-commit] lang-scheme default: Implement 'cadr' and friends (all 28 versions) Message-ID: <20120109222253.B452382110@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r35:d254d5ae04cf Date: 2012-01-09 00:47 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/d254d5ae04cf/ Log: Implement 'cadr' and friends (all 28 versions) diff --git a/scheme/procedure.py b/scheme/procedure.py --- a/scheme/procedure.py +++ b/scheme/procedure.py @@ -177,6 +177,89 @@ raise WrongArgType(w_pair, "Pair") return w_pair.cdr +class CarCdrCombination(W_Procedure): + def procedure(self, ctx, lst): + if len(lst) != 1: + raise WrongArgsNumber + w_pair = lst[0] + return self.do_oper(w_pair) + + def do_oper(self, w_pair): + raise NotImplementedError + +def gen_cxxxr_class(proc_name, oper_lst): + class Cxxxr(CarCdrCombination): + pass + + src_block = """ + w_iter = w_pair + """ + 
oper_lst.reverse() + for oper in oper_lst: + src_block += """ + if not isinstance(w_iter, W_Pair): + raise WrongArgType(w_iter, "Pair") + """ + if oper == "car": + src_block += """ + w_iter = w_iter.car + """ + elif oper == "cdr": + src_block += """ + w_iter = w_iter.cdr + """ + else: + raise ValueError("oper must 'car' or 'cdr'") + + src_block += """ + return w_iter + """ + + local_locals = {} + attr_name = "do_oper" + + code = py.code.Source((""" + def %s(self, w_pair): + from scheme.object import W_Pair, WrongArgType + """ % attr_name) + src_block) + + exec code.compile() in local_locals + local_locals[attr_name]._annspecialcase_ = 'specialize:argtype(1)' + setattr(Cxxxr, attr_name, local_locals[attr_name]) + + Cxxxr._symbol_name = proc_name + Cxxxr.__name__ = proc_name.capitalize() + return Cxxxr + +Caar = gen_cxxxr_class("caar", ['car', 'car']) +Cadr = gen_cxxxr_class("cadr", ['car', 'cdr']) +Cdar = gen_cxxxr_class("cdar", ['cdr', 'car']) +Cddr = gen_cxxxr_class("cddr", ['cdr', 'cdr']) +Caaar = gen_cxxxr_class("caaar", ['car', 'car', 'car']) +Caadr = gen_cxxxr_class("caadr", ['car', 'car', 'cdr']) +Cadar = gen_cxxxr_class("cadar", ['car', 'cdr', 'car']) +Caddr = gen_cxxxr_class("caddr", ['car', 'cdr', 'cdr']) +Cdaar = gen_cxxxr_class("cdaar", ['cdr', 'car', 'car']) +Cdadr = gen_cxxxr_class("cdadr", ['cdr', 'car', 'cdr']) +Cddar = gen_cxxxr_class("cddar", ['cdr', 'cdr', 'car']) +Cdddr = gen_cxxxr_class("cdddr", ['cdr', 'cdr', 'cdr']) +Caaaar = gen_cxxxr_class("caaaar", ['car', 'car', 'car', 'car']) +Caaadr = gen_cxxxr_class("caaadr", ['car', 'car', 'car', 'cdr']) +Caadar = gen_cxxxr_class("caadar", ['car', 'car', 'cdr', 'car']) +Caaddr = gen_cxxxr_class("caaddr", ['car', 'car', 'cdr', 'cdr']) +Cadaar = gen_cxxxr_class("cadaar", ['car', 'cdr', 'car', 'car']) +Cadadr = gen_cxxxr_class("cadadr", ['car', 'cdr', 'car', 'cdr']) +Caddar = gen_cxxxr_class("caddar", ['car', 'cdr', 'cdr', 'car']) +Cadddr = gen_cxxxr_class("cadddr", ['car', 'cdr', 'cdr', 'cdr']) +Cdaaar = 
gen_cxxxr_class("cdaaar", ['cdr', 'car', 'car', 'car']) +Cdaadr = gen_cxxxr_class("cdaadr", ['cdr', 'car', 'car', 'cdr']) +Cdadar = gen_cxxxr_class("cdadar", ['cdr', 'car', 'cdr', 'car']) +Cdaddr = gen_cxxxr_class("cdaddr", ['cdr', 'car', 'cdr', 'cdr']) +Cddaar = gen_cxxxr_class("cddaar", ['cdr', 'cdr', 'car', 'car']) +Cddadr = gen_cxxxr_class("cddadr", ['cdr', 'cdr', 'car', 'cdr']) +Cdddar = gen_cxxxr_class("cdddar", ['cdr', 'cdr', 'cdr', 'car']) +Cddddr = gen_cxxxr_class("cddddr", ['cdr', 'cdr', 'cdr', 'cdr']) + class SetCar(W_Procedure): _symbol_name = "set-car!" @@ -270,9 +353,11 @@ (w_procedure, w_lst) = lst if not isinstance(w_procedure, W_Procedure): + #print w_procedure.to_repr(), "is not a procedure" raise WrongArgType(w_procedure, "Procedure") if not isinstance(w_lst, W_List): + #print w_lst.to_repr(), "is not a list" raise WrongArgType(w_lst, "List") return w_procedure.call_tr(ctx, w_lst) diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py --- a/scheme/test/test_eval.py +++ b/scheme/test/test_eval.py @@ -888,3 +888,100 @@ py.test.raises(WrongArgType, eval_noctx, "(assq 1 2)") py.test.raises(WrongArgsNumber, eval_noctx, "(assq 1 '(1 2) '(3 4))") +def test_cxxxr(): + w_res = eval_noctx("(caar '((a b) c d))") + assert w_res.equal(parse_("a")) + + w_res = eval_noctx("(cadr '((a b) c d))") + assert w_res.equal(parse_("c")) + + w_res = eval_noctx("(cdar '((a b) c d))") + assert w_res.equal(parse_("(b)")) + + w_res = eval_noctx("(cddr '((a b) c d))") + assert w_res.equal(parse_("(d)")) + + w_res = eval_noctx("(caaar '(((a b) c d) (e f) g h))") + assert w_res.equal(parse_("a")) + + w_res = eval_noctx("(caadr '(((a b) c d) (e f) g h))") + assert w_res.equal(parse_("e")) + + w_res = eval_noctx("(cadar '(((a b) c d) (e f) g h))") + assert w_res.equal(parse_("c")) + + w_res = eval_noctx("(caddr '(((a b) c d) (e f) g h))") + assert w_res.equal(parse_("g")) + + w_res = eval_noctx("(cdaar '(((a b) c d) (e f) g h))") + assert w_res.equal(parse_("(b)")) + + 
w_res = eval_noctx("(cdadr '(((a b) c d) (e f) g h))") + assert w_res.equal(parse_("(f)")) + + w_res = eval_noctx("(cddar '(((a b) c d) (e f) g h))") + assert w_res.equal(parse_("(d)")) + + w_res = eval_noctx("""(caaaar '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("a")) + + w_res = eval_noctx("""(caaadr '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("i")) + + w_res = eval_noctx("""(caadar '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("e")) + + w_res = eval_noctx("""(caaddr '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("m")) + + w_res = eval_noctx("""(cadaar '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("c")) + + w_res = eval_noctx("""(cadadr '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("k")) + + w_res = eval_noctx("""(caddar '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("g")) + + w_res = eval_noctx("""(cadddr '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("o")) + + w_res = eval_noctx("""(cdaaar '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("(b)")) + + w_res = eval_noctx("""(cdaadr '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("(j)")) + + w_res = eval_noctx("""(cdadar '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("(f)")) + + w_res = eval_noctx("""(cdaddr '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("(n)")) + + w_res = eval_noctx("""(cddaar '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("(d)")) + + w_res = eval_noctx("""(cddadr '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("(l)")) + + w_res = eval_noctx("""(cdddar '((((a b) c d) (e f) g h) + ((i 
j) k l) (m n) o p))""") + assert w_res.equal(parse_("(h)")) + + w_res = eval_noctx("""(cddddr '((((a b) c d) (e f) g h) + ((i j) k l) (m n) o p))""") + assert w_res.equal(parse_("(p)")) From noreply at buildbot.pypy.org Mon Jan 9 23:22:54 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Mon, 9 Jan 2012 23:22:54 +0100 (CET) Subject: [pypy-commit] lang-scheme default: Implement member & friends Message-ID: <20120109222254.BD14482110@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r36:a93db4dbd6b0 Date: 2012-01-09 21:33 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/a93db4dbd6b0/ Log: Implement member & friends diff --git a/scheme/procedure.py b/scheme/procedure.py --- a/scheme/procedure.py +++ b/scheme/procedure.py @@ -491,6 +491,9 @@ return w_false + def compare(self, w_obj1, w_obj2): + raise NotImplementedError + class Assq(AssocX): _symbol_name = "assq" @@ -511,6 +514,49 @@ ## +# Member function +## +class MemX(W_Procedure): + def procedure(self, ctx, lst): + if len(lst) != 2: + raise WrongArgsNumber + + (w_obj, w_lst) = lst + + w_iter = w_lst + while w_iter is not w_nil: + if not isinstance(w_iter, W_Pair): + raise WrongArgType(w_lst, "List") + + if self.compare(w_obj, w_iter.car): + return w_iter + + w_iter = w_iter.cdr + + return w_false + + def compare(self, w_obj1, w_obj2): + raise NotImplementedError + +class Memq(MemX): + _symbol_name = "memq" + + def compare(self, w_obj1, w_obj2): + return w_obj1.eq(w_obj2) + +class Memv(MemX): + _symbol_name = "memv" + + def compare(self, w_obj1, w_obj2): + return w_obj1.eqv(w_obj2) + +class Member(MemX): + _symbol_name = "member" + + def compare(self, w_obj1, w_obj2): + return w_obj1.equal(w_obj2) + +## # Equivalnece Predicates ## class EquivalnecePredicate(W_Procedure): diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py --- a/scheme/test/test_eval.py +++ b/scheme/test/test_eval.py @@ -888,7 +888,27 @@ py.test.raises(WrongArgType, eval_noctx, "(assq 1 2)") 
py.test.raises(WrongArgsNumber, eval_noctx, "(assq 1 '(1 2) '(3 4))") -def test_cxxxr(): +def test_member(): + w_res = eval_noctx("(memq 'a '(a b c))") + assert w_res.equal(parse_("(a b c)")) + + w_res = eval_noctx("(memq 'b '(a b c))") + assert w_res.equal(parse_("(b c)")) + + w_res = eval_noctx("(memq 'd '(a b c))") + assert w_res.eq(w_false) + + w_res = eval_noctx("(memv 10 (list 11 10 9))") + assert w_res.equal(parse_("(10 9)")) + + w_res = eval_noctx("(member '(c d) '((a b) (c d) (e f)))") + assert w_res.equal(parse_("((c d) (e f))")) + + py.test.raises(WrongArgType, eval_noctx, "(member 1 2)") + py.test.raises(WrongArgsNumber, eval_noctx, "(memq 1)") + py.test.raises(WrongArgsNumber, eval_noctx, "(memq 1 2 3)") + +def test_cadadr(): w_res = eval_noctx("(caar '((a b) c d))") assert w_res.equal(parse_("a")) From noreply at buildbot.pypy.org Mon Jan 9 23:22:55 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Mon, 9 Jan 2012 23:22:55 +0100 (CET) Subject: [pypy-commit] lang-scheme default: Stubbing of case, basic tests work Message-ID: <20120109222255.C5E8782110@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r37:046b82d2ef4c Date: 2012-01-09 22:07 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/046b82d2ef4c/ Log: Stubbing of case, basic tests work diff --git a/scheme/r5rs_derived_expr.ss b/scheme/r5rs_derived_expr.ss --- a/scheme/r5rs_derived_expr.ss +++ b/scheme/r5rs_derived_expr.ss @@ -39,3 +39,18 @@ (let ((x test1)) (if x x (or test2 ...)))))) +(define-syntax case + (syntax-rules (else) +;;; XXX this check does not work yet +; ((case (key ...) clauses ...) +; (let ((atom-key (key ...))) +; (case atom-key clauses ...))) + ((case key (else expr1 expr2 ...)) + (begin expr1 expr2 ...)) + ((case key ((atoms ...) expr1 expr2 ...)) + (if (memv key '(atoms ...)) + (begin expr1 expr2 ...))) + ((case key ((atoms ...) expr1 expr2 ...) clause2 clause3 ...) + (if (memv key '(atoms ...)) + (begin expr1 expr2 ...) 
+ (case key clause2 clause3 ...))))) diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py --- a/scheme/test/test_eval.py +++ b/scheme/test/test_eval.py @@ -1005,3 +1005,20 @@ w_res = eval_noctx("""(cddddr '((((a b) c d) (e f) g h) ((i j) k l) (m n) o p))""") assert w_res.equal(parse_("(p)")) + +def test_case(): + w_res = eval_noctx(""" + (case (* 2 3) + ((2 3 5 7) 'prime) + ((1 4 6 8 9) 'composite)) + """) + assert w_res.eq(symbol("composite")) + + w_res = eval_noctx(""" + (case (car '(c d)) + ((a e i o u) 'vowel) + ((w y) 'semivowel) + (else 'consonant)) + """) + assert w_res.eq(symbol("consonant")) + From noreply at buildbot.pypy.org Mon Jan 9 23:22:56 2012 From: noreply at buildbot.pypy.org (boemmels) Date: Mon, 9 Jan 2012 23:22:56 +0100 (CET) Subject: [pypy-commit] lang-scheme default: Bugfix, apply evaluated it's argument twice. Message-ID: <20120109222256.CEC7082110@wyvern.cs.uni-duesseldorf.de> Author: Juergen Boemmels Branch: Changeset: r38:2f31b68cba35 Date: 2012-01-09 22:39 +0100 http://bitbucket.org/pypy/lang-scheme/changeset/2f31b68cba35/ Log: Bugfix, apply evaluated it's argument twice. 
diff --git a/scheme/object.py b/scheme/object.py --- a/scheme/object.py +++ b/scheme/object.py @@ -625,7 +625,7 @@ w_iter = w_list while w_iter is not w_nil: if not isinstance(w_iter, W_Pair): - raise WrongArg(w_list, "List") + raise WrongArgType(w_list, "List") lst.append(w_iter.car) w_iter = w_iter.cdr diff --git a/scheme/procedure.py b/scheme/procedure.py --- a/scheme/procedure.py +++ b/scheme/procedure.py @@ -3,7 +3,7 @@ W_Number, W_Real, W_Integer, W_List, W_Character, W_Vector, \ Body, W_Procedure, W_String, W_Promise, plst2lst, w_undefined, \ SchemeSyntaxError, SchemeQuit, WrongArgType, WrongArgsNumber, \ - w_nil, w_true, w_false + w_nil, w_true, w_false, lst2plst ## # operations @@ -360,7 +360,7 @@ #print w_lst.to_repr(), "is not a list" raise WrongArgType(w_lst, "List") - return w_procedure.call_tr(ctx, w_lst) + return w_procedure.procedure_tr(ctx, lst2plst(w_lst)) class Quit(W_Procedure): _symbol_name = "quit" diff --git a/scheme/test/test_eval.py b/scheme/test/test_eval.py --- a/scheme/test/test_eval.py +++ b/scheme/test/test_eval.py @@ -817,6 +817,10 @@ assert w_result.to_number() == 64 assert eval_(ctx, "(apply + '())").to_number() == 0 + + w_result = eval_(ctx, "(apply list '((+ 2 3) (* 3 4)))") + assert w_result.equal(parse_("((+ 2 3) (* 3 4))")) + py.test.raises(WrongArgsNumber, eval_, ctx, "(apply 1)") py.test.raises(WrongArgType, eval_, ctx, "(apply 1 '(1))") py.test.raises(WrongArgType, eval_, ctx, "(apply + 42)") From noreply at buildbot.pypy.org Mon Jan 9 23:30:30 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 23:30:30 +0100 (CET) Subject: [pypy-commit] pypy default: document JIT parameters Message-ID: <20120109223030.C809082110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51182:3c58c0bd8803 Date: 2012-01-10 00:28 +0200 http://bitbucket.org/pypy/pypy/changeset/3c58c0bd8803/ Log: document JIT parameters diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ 
b/pypy/rlib/jit.py @@ -386,6 +386,18 @@ class JitHintError(Exception): """Inconsistency in the JIT hints.""" +PARAMETER_DOCS = { + 'threshold': 'number of times a loop has to run for it to become hot', + 'function_threshold': 'number of times a function must run for it to become traced from start', + 'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge', + 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TRACE_TOO_LONG', + 'inlining': 'inline python functions or not (1/0)', + 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate', + 'retrace_limit': 'how many times we can try retracing before giving up', + 'max_retrace_guards': 'number of extra guards a retrace can cause', + 'enable_opts': 'optimizations to enabled or all, INTERNAL USE ONLY' + } + PARAMETERS = {'threshold': 1039, # just above 1024, prime 'function_threshold': 1619, # slightly more than one above, also prime 'trace_eagerness': 200, From noreply at buildbot.pypy.org Mon Jan 9 23:30:32 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 23:30:32 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120109223032.04F7F82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51183:5f4b16c8ec98 Date: 2012-01-10 00:29 +0200 http://bitbucket.org/pypy/pypy/changeset/5f4b16c8ec98/ Log: merge diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -257,7 +257,8 @@ try: inputcells = args.match_signature(signature, defs_s) except ArgErr, e: - raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) + raise TypeError("signature mismatch: %s() %s" % + (self.name, e.getmsg())) return inputcells def specialize(self, inputcells, op=None): diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- 
a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - msg = "%s() got an unexpected keyword argument '%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( 
self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = 
ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() 
got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): From noreply at buildbot.pypy.org Mon Jan 9 23:35:15 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 9 Jan 2012 23:35:15 +0100 (CET) Subject: [pypy-commit] pypy default: include some actually useful info in --help Message-ID: <20120109223515.9A57982110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51184:d16e4f017733 Date: 2012-01-10 00:34 +0200 http://bitbucket.org/pypy/pypy/changeset/d16e4f017733/ Log: include some actually useful info in --help diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,8 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %slow-level JIT parameter (default %s)' % ( - key, ' '*(18-len(key)), value) + print ' --jit %s=N %s%s (default %s)' % ( + key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) print ' --jit off turn off the JIT' def print_version(*args): From noreply at buildbot.pypy.org Mon Jan 9 23:38:43 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 9 Jan 2012 23:38:43 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: 
merged default Message-ID: <20120109223843.9A61E82110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: better-jit-hooks Changeset: r51185:9e69f381ba7e Date: 2012-01-09 16:34 -0600 http://bitbucket.org/pypy/pypy/changeset/9e69f381ba7e/ Log: merged default diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,1 +1,2 @@ from _numpypy import * +from fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/fromnumeric.py @@ -0,0 +1,2400 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. +# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) +# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. 
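[Editor's note: the header comment in the patch above sketches the delegation pattern used throughout fromnumeric.py (its pseudocode drops the colon after `if not hasattr(a, 'func')`). Here is a minimal, self-contained illustration of the same pattern; `MiniArray` and the standalone `array` helper are hypothetical stand-ins for numpypy's types, not part of the real module.]

```python
# Sketch of the module-level delegation pattern: coerce the argument
# to an array only when it lacks the matching method, then delegate.

class MiniArray(object):
    """Hypothetical stand-in for numpypy's array type."""
    def __init__(self, data):
        self.data = list(data)

    def sum(self):
        total = 0
        for item in self.data:
            total += item
        return total

def array(obj):
    """Coerce a plain sequence to MiniArray (plays the numpypy.array role)."""
    if isinstance(obj, MiniArray):
        return obj
    return MiniArray(obj)

def sum(a):
    """Module-level sum() that delegates to the array method."""
    if not hasattr(a, 'sum'):
        a = array(a)
    return a.sum()
```

Calling `sum([1, 2, 3])` wraps the list first; calling it on a `MiniArray` delegates directly, so both kinds of input take the same path out.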
+__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. + + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. + indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. + out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. + + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. 
+ + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplemented('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. + order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. + + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. + + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raise if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose make the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modiying the + # initial object. 
+ >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties. Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. 
Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. 
+ + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... ) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. 
+ axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. + + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. 
+ axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. + + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. 
+ + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. + lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. 
+ + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + Sort by age, then height if ages are equal: + + >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argsort(a, axis=-1, kind='quicksort', order=None): + """ + Returns the indices that would sort an array. + + Perform an indirect sort along the given axis using the algorithm specified + by the `kind` keyword. It returns an array of indices of the same shape as + `a` that index data along the given axis in sorted order. + + Parameters + ---------- + a : array_like + Array to sort. + axis : int or None, optional + Axis along which to sort. The default is -1 (the last axis). If None, + the flattened array is used. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. + order : list, optional + When `a` is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. + + Returns + ------- + index_array : ndarray, int + Array of indices that sort `a` along the specified axis. + In other words, ``a[index_array]`` yields a sorted `a`. + + See Also + -------- + sort : Describes sorting algorithms used. + lexsort : Indirect stable sort with multiple keys. + ndarray.sort : Inplace sort. + + Notes + ----- + See `sort` for notes on the different sorting algorithms. + + As of NumPy 1.4.0 `argsort` works with real/complex arrays containing + nan values. The enhanced sort order is documented in `sort`. + + Examples + -------- + One dimensional array: + + >>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')]) + >>> x + array([(1, 0), (0, 1)], + dtype=[('x', '<i4'), ('y', '<i4')]) + + >>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed.
+ + See Also + -------- + ndarray.argmax, argmin + amax : The maximum value along a given axis. + unravel_index : Convert a flat index into an index tuple. + + Notes + ----- + In case of multiple occurrences of the maximum values, the indices + corresponding to the first occurrence are returned. + + Examples + -------- + >>> a = np.arange(6).reshape(2,3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> np.argmax(a) + 5 + >>> np.argmax(a, axis=0) + array([1, 1, 1]) + >>> np.argmax(a, axis=1) + array([2, 2]) + + >>> b = np.arange(6) + >>> b[1] = 5 + >>> b + array([0, 5, 2, 3, 4, 5]) + >>> np.argmax(b) # Only the first occurrence is returned. + 1 + + """ + if not hasattr(a, 'argmax'): + a = numpypy.array(a) + return a.argmax() + + +def argmin(a, axis=None): + """ + Return the indices of the minimum values along an axis. + + See Also + -------- + argmax : Similar function. Please refer to `numpy.argmax` for detailed + documentation. + + """ + if not hasattr(a, 'argmin'): + a = numpypy.array(a) + return a.argmin() + + +def searchsorted(a, v, side='left'): + """ + Find indices where elements should be inserted to maintain order. + + Find the indices into a sorted array `a` such that, if the corresponding + elements in `v` were inserted before the indices, the order of `a` would + be preserved. + + Parameters + ---------- + a : 1-D array_like + Input array, sorted in ascending order. + v : array_like + Values to insert into `a`. + side : {'left', 'right'}, optional + If 'left', the index of the first suitable location found is given. If + 'right', return the last such index. If there is no suitable + index, return either 0 or N (where N is the length of `a`). + + Returns + ------- + indices : array of ints + Array of insertion points with the same shape as `v`. + + See Also + -------- + sort : Return a sorted copy of an array. + histogram : Produce histogram from 1-D data. + + Notes + ----- + Binary search is used to find the required insertion points. 
+ + As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing + `nan` values. The enhanced sort order is documented in `sort`. + + Examples + -------- + >>> np.searchsorted([1,2,3,4,5], 3) + 2 + >>> np.searchsorted([1,2,3,4,5], 3, side='right') + 3 + >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]) + array([0, 5, 1, 2]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def resize(a, new_shape): + """ + Return a new array with the specified shape. + + If the new array is larger than the original array, then the new + array is filled with repeated copies of `a`. Note that this behavior + is different from a.resize(new_shape) which fills with zeros instead + of repeated copies of `a`. + + Parameters + ---------- + a : array_like + Array to be resized. + + new_shape : int or tuple of int + Shape of resized array. + + Returns + ------- + reshaped_array : ndarray + The new array is formed from the data in the old array, repeated + if necessary to fill out the required number of elements. The + data are repeated in the order that they are stored in memory. + + See Also + -------- + ndarray.resize : resize an array in-place. + + Examples + -------- + >>> a=np.array([[0,1],[2,3]]) + >>> np.resize(a,(1,4)) + array([[0, 1, 2, 3]]) + >>> np.resize(a,(2,4)) + array([[0, 1, 2, 3], + [0, 1, 2, 3]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def squeeze(a): + """ + Remove single-dimensional entries from the shape of an array. + + Parameters + ---------- + a : array_like + Input data. + + Returns + ------- + squeezed : ndarray + The input array, but with with all dimensions of length 1 + removed. Whenever possible, a view on `a` is returned. + + Examples + -------- + >>> x = np.array([[[0], [1], [2]]]) + >>> x.shape + (1, 3, 1) + >>> np.squeeze(x).shape + (3,) + + """ + raise NotImplemented('Waiting on interp level method') + + +def diagonal(a, offset=0, axis1=0, axis2=1): + """ + Return specified diagonals. 
+ + If `a` is 2-D, returns the diagonal of `a` with the given offset, + i.e., the collection of elements of the form ``a[i, i+offset]``. If + `a` has more than two dimensions, then the axes specified by `axis1` + and `axis2` are used to determine the 2-D sub-array whose diagonal is + returned. The shape of the resulting array can be determined by + removing `axis1` and `axis2` and appending an index to the right equal + to the size of the resulting diagonals. + + Parameters + ---------- + a : array_like + Array from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be positive or + negative. Defaults to main diagonal (0). + axis1 : int, optional + Axis to be used as the first axis of the 2-D sub-arrays from which + the diagonals should be taken. Defaults to first axis (0). + axis2 : int, optional + Axis to be used as the second axis of the 2-D sub-arrays from + which the diagonals should be taken. Defaults to second axis (1). + + Returns + ------- + array_of_diagonals : ndarray + If `a` is 2-D, a 1-D array containing the diagonal is returned. + If the dimension of `a` is larger, then an array of diagonals is + returned, "packed" from left-most dimension to right-most (e.g., + if `a` is 3-D, then the diagonals are "packed" along rows). + + Raises + ------ + ValueError + If the dimension of `a` is less than 2. + + See Also + -------- + diag : MATLAB work-a-like for 1-D and 2-D arrays. + diagflat : Create diagonal arrays. + trace : Sum along diagonals. + + Examples + -------- + >>> a = np.arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + A 3-D example: + + >>> a = np.arange(8).reshape(2,2,2); a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0, # Main diagonals of two arrays created by skipping + ... 0, # across the outer(left)-most axis last and + ... 1) # the "middle" (row) axis first. 
+ array([[0, 6], + [1, 7]]) + + The sub-arrays whose main diagonals we just obtained; note that each + corresponds to fixing the right-most (column) axis, and that the + diagonals are "packed" in rows. + + >>> a[:,:,0] # main diagonal is [0 6] + array([[0, 2], + [4, 6]]) + >>> a[:,:,1] # main diagonal is [1 7] + array([[1, 3], + [5, 7]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): + """ + Return the sum along diagonals of the array. + + If `a` is 2-D, the sum along its diagonal with the given offset + is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i. + + If `a` has more than two dimensions, then the axes specified by axis1 and + axis2 are used to determine the 2-D sub-arrays whose traces are returned. + The shape of the resulting array is the same as that of `a` with `axis1` + and `axis2` removed. + + Parameters + ---------- + a : array_like + Input array, from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to 0. + axis1, axis2 : int, optional + Axes to be used as the first and second axis of the 2-D sub-arrays + from which the diagonals should be taken. Defaults are the first two + axes of `a`. + dtype : dtype, optional + Determines the data-type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and `a` is + of integer type of precision less than the default integer + precision, then the default integer precision is used. Otherwise, + the precision is the same as that of `a`. + out : ndarray, optional + Array into which the output is placed. Its type is preserved and + it must be of the right shape to hold the output. + + Returns + ------- + sum_along_diagonals : ndarray + If `a` is 2-D, the sum along the diagonal is returned. 
If `a` has + larger dimensions, then an array of sums along diagonals is returned. + + See Also + -------- + diag, diagonal, diagflat + + Examples + -------- + >>> np.trace(np.eye(3)) + 3.0 + >>> a = np.arange(8).reshape((2,2,2)) + >>> np.trace(a) + array([6, 8]) + + >>> a = np.arange(24).reshape((2,2,2,3)) + >>> np.trace(a).shape + (2, 3) + + """ + raise NotImplemented('Waiting on interp level method') + +def ravel(a, order='C'): + """ + Return a flattened array. + + A 1-D array, containing the elements of the input, is returned. A copy is + made only if needed. + + Parameters + ---------- + a : array_like + Input array. The elements in ``a`` are read in the order specified by + `order`, and packed as a 1-D array. + order : {'C','F', 'A', 'K'}, optional + The elements of ``a`` are read in this order. 'C' means to view + the elements in C (row-major) order. 'F' means to view the elements + in Fortran (column-major) order. 'A' means to view the elements + in 'F' order if a is Fortran contiguous, 'C' order otherwise. + 'K' means to view the elements in the order they occur in memory, + except for reversing the data when strides are negative. + By default, 'C' order is used. + + Returns + ------- + 1d_array : ndarray + Output of the same dtype as `a`, and of shape ``(a.size(),)``. + + See Also + -------- + ndarray.flat : 1-D iterator over an array. + ndarray.flatten : 1-D array copy of the elements of an array + in row-major order. + + Notes + ----- + In row-major order, the row index varies the slowest, and the column + index the quickest. This can be generalized to multiple dimensions, + where row-major order implies that the index along the first axis + varies slowest, and the index along the last quickest. The opposite holds + for Fortran-, or column-major, mode. + + Examples + -------- + It is equivalent to ``reshape(-1, order=order)``. 
+ + >>> x = np.array([[1, 2, 3], [4, 5, 6]]) + >>> print np.ravel(x) + [1 2 3 4 5 6] + + >>> print x.reshape(-1) + [1 2 3 4 5 6] + + >>> print np.ravel(x, order='F') + [1 4 2 5 3 6] + + When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering: + + >>> print np.ravel(x.T) + [1 4 2 5 3 6] + >>> print np.ravel(x.T, order='A') + [1 2 3 4 5 6] + + When ``order`` is 'K', it will preserve orderings that are neither 'C' + nor 'F', but won't reverse axes: + + >>> a = np.arange(3)[::-1]; a + array([2, 1, 0]) + >>> a.ravel(order='C') + array([2, 1, 0]) + >>> a.ravel(order='K') + array([2, 1, 0]) + + >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a + array([[[ 0, 2, 4], + [ 1, 3, 5]], + [[ 6, 8, 10], + [ 7, 9, 11]]]) + >>> a.ravel(order='C') + array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11]) + >>> a.ravel(order='K') + array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def nonzero(a): + """ + Return the indices of the elements that are non-zero. + + Returns a tuple of arrays, one for each dimension of `a`, containing + the indices of the non-zero elements in that dimension. The + corresponding non-zero values can be obtained with:: + + a[nonzero(a)] + + To group the indices by element, rather than dimension, use:: + + transpose(nonzero(a)) + + The result of this is always a 2-D array, with a row for + each non-zero element. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + tuple_of_arrays : tuple + Indices of elements that are non-zero. + + See Also + -------- + flatnonzero : + Return indices that are non-zero in the flattened version of the input + array. + ndarray.nonzero : + Equivalent ndarray method. + count_nonzero : + Counts the number of non-zero elements in the input array. 
+ + Examples + -------- + >>> x = np.eye(3) + >>> x + array([[ 1., 0., 0.], + [ 0., 1., 0.], + [ 0., 0., 1.]]) + >>> np.nonzero(x) + (array([0, 1, 2]), array([0, 1, 2])) + + >>> x[np.nonzero(x)] + array([ 1., 1., 1.]) + >>> np.transpose(np.nonzero(x)) + array([[0, 0], + [1, 1], + [2, 2]]) + + A common use for ``nonzero`` is to find the indices of an array, where + a condition is True. Given an array `a`, the condition `a` > 3 is a + boolean array and since False is interpreted as 0, np.nonzero(a > 3) + yields the indices of the `a` where the condition is true. + + >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]]) + >>> a > 3 + array([[False, False, False], + [ True, True, True], + [ True, True, True]], dtype=bool) + >>> np.nonzero(a > 3) + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + The ``nonzero`` method of the boolean array can also be called. + + >>> (a > 3).nonzero() + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + """ + raise NotImplemented('Waiting on interp level method') + + +def shape(a): + """ + Return the shape of an array. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + shape : tuple of ints + The elements of the shape tuple give the lengths of the + corresponding array dimensions. + + See Also + -------- + alen + ndarray.shape : Equivalent array method. + + Examples + -------- + >>> np.shape(np.eye(3)) + (3, 3) + >>> np.shape([[1, 2]]) + (1, 2) + >>> np.shape([0]) + (1,) + >>> np.shape(0) + () + + >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + >>> np.shape(a) + (2,) + >>> a.shape + (2,) + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape + + +def compress(condition, a, axis=None, out=None): + """ + Return selected slices of an array along given axis. + + When working along a given axis, a slice along that axis is returned in + `output` for each index where `condition` evaluates to True. 
When + working on a 1-D array, `compress` is equivalent to `extract`. + + Parameters + ---------- + condition : 1-D array of bools + Array that selects which entries to return. If len(condition) + is less than the size of `a` along the given axis, then output is + truncated to the length of the condition array. + a : array_like + Array from which to extract a part. + axis : int, optional + Axis along which to take slices. If None (default), work on the + flattened array. + out : ndarray, optional + Output array. Its type is preserved and it must be of the right + shape to hold the output. + + Returns + ------- + compressed_array : ndarray + A copy of `a` without the slices along axis for which `condition` + is false. + + See Also + -------- + take, choose, diag, diagonal, select + ndarray.compress : Equivalent method. + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4], [5, 6]]) + >>> a + array([[1, 2], + [3, 4], + [5, 6]]) + >>> np.compress([0, 1], a, axis=0) + array([[3, 4]]) + >>> np.compress([False, True, True], a, axis=0) + array([[3, 4], + [5, 6]]) + >>> np.compress([False, True], a, axis=1) + array([[2], + [4], + [6]]) + + Working on the flattened array does not return slices along an axis but + selects elements. + + >>> np.compress([False, True], a) + array([2]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def clip(a, a_min, a_max, out=None): + """ + Clip (limit) the values in an array. + + Given an interval, values outside the interval are clipped to + the interval edges. For example, if an interval of ``[0, 1]`` + is specified, values smaller than 0 become 0, and values larger + than 1 become 1. + + Parameters + ---------- + a : array_like + Array containing elements to clip. + a_min : scalar or array_like + Minimum value. + a_max : scalar or array_like + Maximum value. If `a_min` or `a_max` are array_like, then they will + be broadcasted to the shape of `a`. 
+ out : ndarray, optional + The results will be placed in this array. It may be the input + array for in-place clipping. `out` must be of the right shape + to hold the output. Its type is preserved. + + Returns + ------- + clipped_array : ndarray + An array with the elements of `a`, but where values + < `a_min` are replaced with `a_min`, and those > `a_max` + with `a_max`. + + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.arange(10) + >>> np.clip(a, 1, 8) + array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, 3, 6, out=a) + array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) + >>> a = np.arange(10) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8) + array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sum(a, axis=None, dtype=None, out=None): + """ + Sum of array elements over a given axis. + + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + dtype : dtype, optional + The type of the returned array and of the accumulator in which + the elements are summed. By default, the dtype of `a` is used. + An exception is when `a` has an integer type with less precision + than the default platform integer. In that case, the default + platform integer is used instead. + out : ndarray, optional + Array into which the output is placed. By default, a new array is + created. If `out` is given, it must be of the appropriate shape + (the shape of `a` with `axis` removed, i.e., + ``numpy.delete(a.shape, axis)``). Its type is preserved. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. 
If `a` is a 0-d array, or if `axis` is None, a scalar + is returned. If an output array is specified, a reference to + `out` is returned. + + See Also + -------- + ndarray.sum : Equivalent method. + + cumsum : Cumulative sum of array elements. + + trapz : Integration of array values using the composite trapezoidal rule. + + mean, average + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> np.sum([0.5, 1.5]) + 2.0 + >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32) + 1 + >>> np.sum([[0, 1], [0, 5]]) + 6 + >>> np.sum([[0, 1], [0, 5]], axis=0) + array([0, 6]) + >>> np.sum([[0, 1], [0, 5]], axis=1) + array([1, 5]) + + If the accumulator is too small, overflow occurs: + + >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8) + -128 + + """ + if not hasattr(a, "sum"): + a = numpypy.array(a) + return a.sum() + + +def product (a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + See Also + -------- + prod : equivalent function; see for details. + + """ + raise NotImplemented('Waiting on interp level method') + + +def sometrue(a, axis=None, out=None): + """ + Check whether some values are true. + + Refer to `any` for full documentation. + + See Also + -------- + any : equivalent function + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def alltrue (a, axis=None, out=None): + """ + Check if all elements of input array are true. + + See Also + -------- + numpy.all : Equivalent function; see for details. + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + +def any(a,axis=None, out=None): + """ + Test whether any array element along a given axis evaluates to True. + + Returns single boolean unless `axis` is not ``None`` + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical OR is performed. 
The default + (`axis` = `None`) is to perform a logical OR over a flattened + input array. `axis` may be negative, in which case it counts + from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output and its type is preserved + (e.g., if it is of type float, then it will remain so, returning + 1.0 for True and 0.0 for False, regardless of the type of `a`). + See `doc.ufuncs` (Section "Output arguments") for details. + + Returns + ------- + any : bool or ndarray + A new boolean or `ndarray` is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.any : equivalent method + + all : Test whether all elements along a given axis evaluate to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity evaluate + to `True` because these are not equal to zero. + + Examples + -------- + >>> np.any([[True, False], [True, True]]) + True + + >>> np.any([[True, False], [False, False]], axis=0) + array([ True, False], dtype=bool) + + >>> np.any([-1, 0, 5]) + True + + >>> np.any(np.nan) + True + + >>> o=np.array([False]) + >>> z=np.any([-1, 4, 5], out=o) + >>> z, o + (array([ True], dtype=bool), array([ True], dtype=bool)) + >>> # Check now that z is a reference to o + >>> z is o + True + >>> id(z), id(o) # identity of z and o # doctest: +SKIP + (191614240, 191614240) + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def all(a,axis=None, out=None): + """ + Test whether all array elements along a given axis evaluate to True. + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical AND is performed. + The default (`axis` = `None`) is to perform a logical AND + over a flattened input array. 
`axis` may be negative, in which + case it counts from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. + It must have the same shape as the expected output and its + type is preserved (e.g., if ``dtype(out)`` is float, the result + will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section + "Output arguments") for more details. + + Returns + ------- + all : ndarray, bool + A new boolean or array is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.all : equivalent method + + any : Test whether any element along a given axis evaluates to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity + evaluate to `True` because these are not equal to zero. + + Examples + -------- + >>> np.all([[True,False],[True,True]]) + False + + >>> np.all([[True,False],[True,True]], axis=0) + array([ True, False], dtype=bool) + + >>> np.all([-1, 4, 5]) + True + + >>> np.all([1.0, np.nan]) + True + + >>> o=np.array([False]) + >>> z=np.all([-1, 4, 5], out=o) + >>> id(z), id(o), z # doctest: +SKIP + (28293632, 28293632, array([ True], dtype=bool)) + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + + +def cumsum (a, axis=None, dtype=None, out=None): + """ + Return the cumulative sum of the elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative sum is computed. The default + (None) is to compute the cumsum over the flattened array. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults + to the dtype of `a`, unless `a` has an integer dtype with a + precision less than that of the default platform integer. In + that case, the default platform integer is used. 
+ out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. See `doc.ufuncs` + (Section "Output arguments") for more details. + + Returns + ------- + cumsum_along_axis : ndarray. + A new array holding the result is returned unless `out` is + specified, in which case a reference to `out` is returned. The + result has the same size as `a`, and the same shape as `a` if + `axis` is not None or `a` is a 1-d array. + + + See Also + -------- + sum : Sum array elements. + + trapz : Integration of array values using the composite trapezoidal rule. + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> a + array([[1, 2, 3], + [4, 5, 6]]) + >>> np.cumsum(a) + array([ 1, 3, 6, 10, 15, 21]) + >>> np.cumsum(a, dtype=float) # specifies type of output value(s) + array([ 1., 3., 6., 10., 15., 21.]) + + >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns + array([[1, 2, 3], + [5, 7, 9]]) + >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows + array([[ 1, 3, 6], + [ 4, 9, 15]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def cumproduct(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product over the given axis. + + + See Also + -------- + cumprod : equivalent function; see for details. + + """ + raise NotImplemented('Waiting on interp level method') + + +def ptp(a, axis=None, out=None): + """ + Range of values (maximum - minimum) along an axis. + + The name of the function comes from the acronym for 'peak to peak'. + + Parameters + ---------- + a : array_like + Input values. + axis : int, optional + Axis along which to find the peaks. By default, flatten the + array. + out : array_like + Alternative output array in which to place the result. 
It must + have the same shape and buffer length as the expected output, + but the type of the output values will be cast if necessary. + + Returns + ------- + ptp : ndarray + A new array holding the result, unless `out` was + specified, in which case a reference to `out` is returned. + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.ptp(x, axis=0) + array([2, 2]) + + >>> np.ptp(x, axis=1) + array([1, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def amax(a, axis=None, out=None): + """ + Return the maximum of an array or maximum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default flattened input is used. + out : ndarray, optional + Alternate output array in which to place the result. Must be of + the same shape and buffer length as the expected output. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amax : ndarray or scalar + Maximum of `a`. If `axis` is None, the result is a scalar value. + If `axis` is given, the result is an array of dimension + ``a.ndim - 1``. + + See Also + -------- + nanmax : NaN values are ignored instead of being propagated. + fmax : same behavior as the C99 fmax function. + argmax : indices of the maximum values. + + Notes + ----- + NaN values are propagated, that is if at least one item is NaN, the + corresponding max value will be NaN as well. To ignore NaN values + (MATLAB behavior), please use nanmax. 
+ + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amax(a) + 3 + >>> np.amax(a, axis=0) + array([2, 3]) + >>> np.amax(a, axis=1) + array([1, 3]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amax(b) + nan + >>> np.nanmax(b) + 4.0 + + """ + if not hasattr(a, "max"): + a = numpypy.array(a) + return a.max() + + +def amin(a, axis=None, out=None): + """ + Return the minimum of an array or minimum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default a flattened input is used. + out : ndarray, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + See `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amin : ndarray + A new array or a scalar array with the result. + + See Also + -------- + nanmin: nan values are ignored instead of being propagated + fmin: same behavior as the C99 fmin function + argmin: Return the indices of the minimum values. + + amax, nanmax, fmax + + Notes + ----- + NaN values are propagated, that is if at least one item is nan, the + corresponding min value will be nan as well. To ignore NaN values (matlab + behavior), please use nanmin. + + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amin(a) # Minimum of the flattened array + 0 + >>> np.amin(a, axis=0) # Minima along the first axis + array([0, 1]) + >>> np.amin(a, axis=1) # Minima along the second axis + array([0, 2]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amin(b) + nan + >>> np.nanmin(b) + 0.0 + + """ + # amin() is equivalent to min() + if not hasattr(a, 'min'): + a = numpypy.array(a) + return a.min() + +def alen(a): + """ + Return the length of the first dimension of the input array. 
+ + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + l : int + Length of the first dimension of `a`. + + See Also + -------- + shape, size + + Examples + -------- + >>> a = np.zeros((7,4,5)) + >>> a.shape[0] + 7 + >>> np.alen(a) + 7 + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape[0] + + +def prod(a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis over which the product is taken. By default, the product + of all elements is calculated. + dtype : data-type, optional + The data-type of the returned array, as well as of the accumulator + in which the elements are multiplied. By default, if `a` is of + integer type, `dtype` is the default platform integer. (Note: if + the type of `a` is unsigned, then so is `dtype`.) Otherwise, + the dtype is the same as that of `a`. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the + output values will be cast if necessary. + + Returns + ------- + product_along_axis : ndarray, see `dtype` parameter above. + An array shaped as `a` but with the specified axis removed. + Returns a reference to `out` if specified. + + See Also + -------- + ndarray.prod : equivalent method + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. 
That means that, on a 32-bit platform: + + >>> x = np.array([536870910, 536870910, 536870910, 536870910]) + >>> np.prod(x) #random + 16 + + Examples + -------- + By default, calculate the product of all elements: + + >>> np.prod([1.,2.]) + 2.0 + + Even when the input array is two-dimensional: + + >>> np.prod([[1.,2.],[3.,4.]]) + 24.0 + + But we can also specify the axis over which to multiply: + + >>> np.prod([[1.,2.],[3.,4.]], axis=1) + array([ 2., 12.]) + + If the type of `x` is unsigned, then the output type is + the unsigned platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.uint8) + >>> np.prod(x).dtype == np.uint + True + + If `x` is of a signed integer type, then the output type + is the default platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.int8) + >>> np.prod(x).dtype == np.int + True + + """ + raise NotImplemented('Waiting on interp level method') + + +def cumprod(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product of elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative product is computed. By default + the input is flattened. + dtype : dtype, optional + Type of the returned array, as well as of the accumulator in which + the elements are multiplied. If *dtype* is not specified, it + defaults to the dtype of `a`, unless `a` has an integer dtype with + a precision less than that of the default platform integer. In + that case, the default platform integer is used instead. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type of the resulting values will be cast if necessary. + + Returns + ------- + cumprod : ndarray + A new array holding the result is returned unless `out` is + specified, in which case a reference to out is returned. 
+ + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> a = np.array([1,2,3]) + >>> np.cumprod(a) # intermediate results 1, 1*2 + ... # total product 1*2*3 = 6 + array([1, 2, 6]) + >>> a = np.array([[1, 2, 3], [4, 5, 6]]) + >>> np.cumprod(a, dtype=float) # specify type of output + array([ 1., 2., 6., 24., 120., 720.]) + + The cumulative product for each column (i.e., over the rows) of `a`: + + >>> np.cumprod(a, axis=0) + array([[ 1, 2, 3], + [ 4, 10, 18]]) + + The cumulative product for each row (i.e. over the columns) of `a`: + + >>> np.cumprod(a,axis=1) + array([[ 1, 2, 6], + [ 4, 20, 120]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def ndim(a): + """ + Return the number of dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. If it is not already an ndarray, a conversion is + attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in `a`. Scalars are zero-dimensional. + + See Also + -------- + ndarray.ndim : equivalent method + shape : dimensions of array + ndarray.shape : dimensions of array + + Examples + -------- + >>> np.ndim([[1,2,3],[4,5,6]]) + 2 + >>> np.ndim(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.ndim(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def rank(a): + """ + Return the number of dimensions of an array. + + If `a` is not already an array, a conversion is attempted. + Scalars are zero dimensional. + + Parameters + ---------- + a : array_like + Array whose number of dimensions is desired. If `a` is not an array, + a conversion is attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in the array. 
+ + See Also + -------- + ndim : equivalent function + ndarray.ndim : equivalent property + shape : dimensions of array + ndarray.shape : dimensions of array + + Notes + ----- + In the old Numeric package, `rank` was the term used for the number of + dimensions, but in Numpy `ndim` is used instead. + + Examples + -------- + >>> np.rank([1,2,3]) + 1 + >>> np.rank(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.rank(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def size(a, axis=None): + """ + Return the number of elements along a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which the elements are counted. By default, give + the total number of elements. + + Returns + ------- + element_count : int + Number of elements along the specified axis. + + See Also + -------- + shape : dimensions of array + ndarray.shape : dimensions of array + ndarray.size : number of elements in array + + Examples + -------- + >>> a = np.array([[1,2,3],[4,5,6]]) + >>> np.size(a) + 6 + >>> np.size(a,1) + 3 + >>> np.size(a,0) + 2 + + """ + raise NotImplemented('Waiting on interp level method') + + +def around(a, decimals=0, out=None): + """ + Evenly round to the given number of decimals. + + Parameters + ---------- + a : array_like + Input data. + decimals : int, optional + Number of decimal places to round to (default: 0). If + decimals is negative, it specifies the number of positions to + the left of the decimal point. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the output + values will be cast if necessary. See `doc.ufuncs` (Section + "Output arguments") for details. + + Returns + ------- + rounded_array : ndarray + An array of the same type as `a`, containing the rounded values. + Unless `out` was specified, a new array is created. A reference to + the result is returned. 
+ + The real and imaginary parts of complex numbers are rounded + separately. The result of rounding a float is a float. + + See Also + -------- + ndarray.round : equivalent method + + ceil, fix, floor, rint, trunc + + + Notes + ----- + For values exactly halfway between rounded decimal values, Numpy + rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, + -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due + to the inexact representation of decimal fractions in the IEEE + floating point standard [1]_ and errors introduced when scaling + by powers of ten. + + References + ---------- + .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan, + http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF + .. [2] "How Futile are Mindless Assessments of + Roundoff in Floating-Point Computation?", William Kahan, + http://www.cs.berkeley.edu/~wkahan/Mindless.pdf + + Examples + -------- + >>> np.around([0.37, 1.64]) + array([ 0., 2.]) + >>> np.around([0.37, 1.64], decimals=1) + array([ 0.4, 1.6]) + >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value + array([ 0., 2., 2., 4., 4.]) + >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned + array([ 1, 2, 3, 11]) + >>> np.around([1,2,3,11], decimals=-1) + array([ 0, 0, 0, 10]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def round_(a, decimals=0, out=None): + """ + Round an array to the given number of decimals. + + Refer to `around` for full documentation. + + See Also + -------- + around : equivalent function + + """ + raise NotImplemented('Waiting on interp level method') + + +def mean(a, axis=None, dtype=None, out=None): + """ + Compute the arithmetic mean along the specified axis. + + Returns the average of the array elements. The average is taken over + the flattened array by default, otherwise over the specified axis. + `float64` intermediate and return values are used for integer inputs. 
+ + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. 
+ + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean() + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. 
+ + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. 
+ + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Array containing numbers whose variance is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the variance is computed. The default is to compute + the variance of the flattened array. + dtype : data-type, optional + Type to use in computing the variance. For arrays of integer type + the default is `float32`; for arrays of float types it is the same as + the array type. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output, but the type is cast if + necessary. + ddof : int, optional + "Delta Degrees of Freedom": the divisor used in the calculation is + ``N - ddof``, where ``N`` represents the number of elements. By + default `ddof` is zero. + + Returns + ------- + variance : ndarray, see dtype parameter above + If ``out=None``, returns a new array containing the variance; + otherwise, a reference to the output array is returned. + + See Also + -------- + std : Standard deviation + mean : Average + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e., ``var = mean(abs(x - x.mean())**2)``. + + The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. + If, however, `ddof` is specified, the divisor ``N - ddof`` is used + instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of a hypothetical infinite population. + ``ddof=0`` provides a maximum likelihood estimate of the variance for + normally distributed variables. + + Note that for complex numbers, the absolute value is taken before + squaring, so that the result is always real and nonnegative. 
+ + For floating-point input, the variance is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for `float32` (see example + below). Specifying a higher-accuracy accumulator using the ``dtype`` + keyword can alleviate this issue. + + Examples + -------- + >>> a = np.array([[1,2],[3,4]]) + >>> np.var(a) + 1.25 + >>> np.var(a,0) + array([ 1., 1.]) + >>> np.var(a,1) + array([ 0.25, 0.25]) + + In single precision, var() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.var(a) + 0.20405951142311096 + + Computing the standard deviation in float64 is more accurate: + + >>> np.var(a, dtype=np.float64) + 0.20249999932997387 + >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 + 0.20250000000000001 + + """ + if not hasattr(a, "var"): + a = numpypy.array(a) + return a.var() diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/test/test_fromnumeric.py @@ -0,0 +1,109 @@ + +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + +class AppTestFromNumeric(BaseNumpyAppTest): + def test_argmax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, argmax + a = arange(6).reshape((2,3)) + assert argmax(a) == 5 + # assert (argmax(a, axis=0) == array([1, 1, 1])).all() + # assert (argmax(a, axis=1) == array([2, 2])).all() + b = arange(6) + b[1] = 5 + assert argmax(b) == 1 + + def test_argmin(self): + # tests adapted from test_argmax + from numpypy import array, arange, argmin + a = arange(6).reshape((2,3)) + assert argmin(a) == 0 + # assert (argmax(a, axis=0) == array([0, 0, 0])).all() + # assert (argmax(a, axis=1) == array([0, 0])).all() + b = arange(6) + b[1] = 0 + assert argmin(b) == 0 + + def test_shape(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy 
import array, identity, shape + assert shape(identity(3)) == (3, 3) + assert shape([[1, 2]]) == (1, 2) + assert shape([0]) == (1,) + assert shape(0) == () + # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + # assert shape(a) == (2,) + + def test_sum(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, sum, ones + assert sum([0.5, 1.5])== 2.0 + assert sum([[0, 1], [0, 5]]) == 6 + # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 + # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + # If the accumulator is too small, overflow occurs: + # assert ones(128, dtype=int8).sum(dtype=int8) == -128 + + def test_amin(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amin + a = arange(4).reshape((2,2)) + assert amin(a) == 0 + # # Minima along the first axis + # assert (amin(a, axis=0) == array([0, 1])).all() + # # Minima along the second axis + # assert (amin(a, axis=1) == array([0, 2])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amin(b) == nan + # assert nanmin(b) == 0.0 + + def test_amax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amax + a = arange(4).reshape((2,2)) + assert amax(a) == 3 + # assert (amax(a, axis=0) == array([2, 3])).all() + # assert (amax(a, axis=1) == array([1, 3])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amax(b) == nan + # assert nanmax(b) == 4.0 + + def test_alen(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, zeros, alen + a = zeros((7,4,5)) + assert a.shape[0] == 7 + assert alen(a) == 7 + + def test_ndim(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, ndim + assert ndim([[1,2,3],[4,5,6]]) == 2 + assert ndim(array([[1,2,3],[4,5,6]])) == 2 + 
+        assert ndim(1) == 0
+
+    def test_rank(self):
+        # tests taken from numpy/core/fromnumeric.py docstring
+        from numpypy import array, rank
+        assert rank([[1,2,3],[4,5,6]]) == 2
+        assert rank(array([[1,2,3],[4,5,6]])) == 2
+        assert rank(1) == 0
+
+    def test_var(self):
+        from numpypy import array, var
+        a = array([[1,2],[3,4]])
+        assert var(a) == 1.25
+        # assert (np.var(a,0) == array([ 1.,  1.])).all()
+        # assert (np.var(a,1) == array([ 0.25,  0.25])).all()
+
+    def test_std(self):
+        from numpypy import array, std
+        a = array([[1, 2], [3, 4]])
+        assert std(a) == 1.1180339887498949
+        # assert (std(a, axis=0) == array([ 1.,  1.])).all()
+        # assert (std(a, axis=1) == array([ 0.5,  0.5]).all()
diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py
--- a/pypy/annotation/description.py
+++ b/pypy/annotation/description.py
@@ -257,7 +257,8 @@
         try:
             inputcells = args.match_signature(signature, defs_s)
         except ArgErr, e:
-            raise TypeError, "signature mismatch: %s" % e.getmsg(self.name)
+            raise TypeError("signature mismatch: %s() %s" %
+                            (self.name, e.getmsg()))
         return inputcells

     def specialize(self, inputcells, op=None):
diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py
--- a/pypy/interpreter/argument.py
+++ b/pypy/interpreter/argument.py
@@ -428,8 +428,8 @@
             return self._match_signature(w_firstarg,
                                          scope_w, signature, defaults_w, 0)
         except ArgErr, e:
-            raise OperationError(self.space.w_TypeError,
-                                 self.space.wrap(e.getmsg(fnname)))
+            raise operationerrfmt(self.space.w_TypeError,
+                                  "%s() %s", fnname, e.getmsg())

     def _parse(self, w_firstarg, signature, defaults_w, blindargs=0):
         """Parse args and kwargs according to the signature of a code object,
@@ -450,8 +450,8 @@
         try:
             return self._parse(w_firstarg, signature, defaults_w, blindargs)
         except ArgErr, e:
-            raise OperationError(self.space.w_TypeError,
-                                 self.space.wrap(e.getmsg(fnname)))
+            raise operationerrfmt(self.space.w_TypeError,
+                                  "%s() %s", fnname, e.getmsg())

     @staticmethod
     def frompacked(space, w_args=None, w_kwds=None):
@@ -626,7 +626,7 @@

 class ArgErr(Exception):

-    def getmsg(self, fnname):
+    def getmsg(self):
         raise NotImplementedError

 class ArgErrCount(ArgErr):
@@ -642,11 +642,10 @@
         self.num_args = got_nargs
         self.num_kwds = nkwds

-    def getmsg(self, fnname):
+    def getmsg(self):
         n = self.expected_nargs
         if n == 0:
-            msg = "%s() takes no arguments (%d given)" % (
-                fnname,
+            msg = "takes no arguments (%d given)" % (
                 self.num_args + self.num_kwds)
         else:
             defcount = self.num_defaults
@@ -672,8 +671,7 @@
                 msg2 = " non-keyword"
             else:
                 msg2 = ""
-            msg = "%s() takes %s %d%s argument%s (%d given)" % (
-                fnname,
+            msg = "takes %s %d%s argument%s (%d given)" % (
                 msg1,
                 n,
                 msg2,
@@ -686,9 +684,8 @@
     def __init__(self, argname):
         self.argname = argname

-    def getmsg(self, fnname):
-        msg = "%s() got multiple values for keyword argument '%s'" % (
-            fnname,
+    def getmsg(self):
+        msg = "got multiple values for keyword argument '%s'" % (
             self.argname)
         return msg

@@ -722,13 +719,11 @@
                 break
         self.kwd_name = name

-    def getmsg(self, fnname):
+    def getmsg(self):
         if self.num_kwds == 1:
-            msg = "%s() got an unexpected keyword argument '%s'" % (
-                fnname,
+            msg = "got an unexpected keyword argument '%s'" % (
                 self.kwd_name)
         else:
-            msg = "%s() got %d unexpected keyword arguments" % (
-                fnname,
+            msg = "got %d unexpected keyword arguments" % (
                 self.num_kwds)
         return msg
diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py
--- a/pypy/interpreter/test/test_argument.py
+++ b/pypy/interpreter/test/test_argument.py
@@ -393,8 +393,8 @@

         class FakeArgErr(ArgErr):

-            def getmsg(self, fname):
-                return "msg "+fname
+            def getmsg(self):
+                return "msg"

         def _match_signature(*args):
             raise FakeArgErr()
@@ -404,7 +404,7 @@
         excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo",
                                  Signature(["a", "b"], None, None))
         assert excinfo.value.w_type is TypeError
-        assert excinfo.value._w_value == "msg foo"
+        assert excinfo.value.get_w_value(space) == "foo() msg"

     def test_args_parsing_into_scope(self):
@@ -448,8 +448,8 @@

         class FakeArgErr(ArgErr):

-            def getmsg(self, fname):
-                return "msg "+fname
+            def getmsg(self):
+                return "msg"

         def _match_signature(*args):
             raise FakeArgErr()
@@ -460,7 +460,7 @@
             "obj", [None, None], "foo",
             Signature(["a", "b"], None, None))
         assert excinfo.value.w_type is TypeError
-        assert excinfo.value._w_value == "msg foo"
+        assert excinfo.value.get_w_value(space) == "foo() msg"

     def test_topacked_frompacked(self):
         space = DummySpace()
@@ -493,35 +493,35 @@
         # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg,
         # defaults_w, missing_args
         err = ArgErrCount(1, 0, 0, False, False, None, 0)
-        s = err.getmsg('foo')
-        assert s == "foo() takes no arguments (1 given)"
+        s = err.getmsg()
+        assert s == "takes no arguments (1 given)"
         err = ArgErrCount(0, 0, 1, False, False, [], 1)
-        s = err.getmsg('foo')
-        assert s == "foo() takes exactly 1 argument (0 given)"
+        s = err.getmsg()
+        assert s == "takes exactly 1 argument (0 given)"
         err = ArgErrCount(3, 0, 2, False, False, [], 0)
-        s = err.getmsg('foo')
-        assert s == "foo() takes exactly 2 arguments (3 given)"
+        s = err.getmsg()
+        assert s == "takes exactly 2 arguments (3 given)"
         err = ArgErrCount(3, 0, 2, False, False, ['a'], 0)
-        s = err.getmsg('foo')
-        assert s == "foo() takes at most 2 arguments (3 given)"
+        s = err.getmsg()
+        assert s == "takes at most 2 arguments (3 given)"
         err = ArgErrCount(1, 0, 2, True, False, [], 1)
-        s = err.getmsg('foo')
-        assert s == "foo() takes at least 2 arguments (1 given)"
+        s = err.getmsg()
+        assert s == "takes at least 2 arguments (1 given)"
         err = ArgErrCount(0, 1, 2, True, False, ['a'], 1)
-        s = err.getmsg('foo')
-        assert s == "foo() takes at least 1 non-keyword argument (0 given)"
+        s = err.getmsg()
+        assert s == "takes at least 1 non-keyword argument (0 given)"
         err = ArgErrCount(2, 1, 1, False, True, [], 0)
-        s = err.getmsg('foo')
-        assert s == "foo() takes exactly 1 non-keyword argument (2 given)"
+        s = err.getmsg()
+        assert s == "takes exactly 1 non-keyword argument (2 given)"
         err = ArgErrCount(0, 1, 1, False, True, [], 1)
-        s = err.getmsg('foo')
-        assert s == "foo() takes exactly 1 non-keyword argument (0 given)"
+        s = err.getmsg()
+        assert s == "takes exactly 1 non-keyword argument (0 given)"
         err = ArgErrCount(0, 1, 1, True, True, [], 1)
-        s = err.getmsg('foo')
-        assert s == "foo() takes at least 1 non-keyword argument (0 given)"
+        s = err.getmsg()
+        assert s == "takes at least 1 non-keyword argument (0 given)"
         err = ArgErrCount(2, 1, 1, False, True, ['a'], 0)
-        s = err.getmsg('foo')
-        assert s == "foo() takes at most 1 non-keyword argument (2 given)"
+        s = err.getmsg()
+        assert s == "takes at most 1 non-keyword argument (2 given)"

     def test_bad_type_for_star(self):
         space = self.space
@@ -543,12 +543,12 @@
     def test_unknown_keywords(self):
         space = DummySpace()
         err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None)
-        s = err.getmsg('foo')
-        assert s == "foo() got an unexpected keyword argument 'b'"
+        s = err.getmsg()
+        assert s == "got an unexpected keyword argument 'b'"
         err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'],
                                 [True, False, False], None)
-        s = err.getmsg('foo')
-        assert s == "foo() got 2 unexpected keyword arguments"
+        s = err.getmsg()
+        assert s == "got 2 unexpected keyword arguments"

     def test_unknown_unicode_keyword(self):
         class DummySpaceUnicode(DummySpace):
@@ -558,13 +558,13 @@
         err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'],
                                 [True, False, True, True],
                                 [unichr(0x1234), u'b', u'c'])
-        s = err.getmsg('foo')
-        assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'"
+        s = err.getmsg()
+        assert s == "got an unexpected keyword argument '\xe1\x88\xb4'"

     def test_multiple_values(self):
         err = ArgErrMultipleValues('bla')
-        s = err.getmsg('foo')
-        assert s == "foo() got multiple values for keyword argument 'bla'"
+        s = err.getmsg()
+        assert s == "got multiple values for keyword argument 'bla'"

 class AppTestArgument:
     def test_error_message(self):
diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py
--- a/pypy/jit/backend/x86/regalloc.py
+++ b/pypy/jit/backend/x86/regalloc.py
@@ -741,7 +741,7 @@
         self.xrm.possibly_free_var(op.getarg(0))

     def consider_cast_int_to_float(self, op):
-        loc0 = self.rm.loc(op.getarg(0))
+        loc0 = self.rm.force_allocate_reg(op.getarg(0))
         loc1 = self.xrm.force_allocate_reg(op.result)
         self.Perform(op, [loc0], loc1)
         self.rm.possibly_free_var(op.getarg(0))
diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py
--- a/pypy/jit/metainterp/resoperation.py
+++ b/pypy/jit/metainterp/resoperation.py
@@ -16,17 +16,15 @@
     # debug
     name = ""
     pc = 0
+    opnum = 0

     _attrs_ = ('result',)

     def __init__(self, result):
         self.result = result

-    # methods implemented by each concrete class
-    # ------------------------------------------
-
     def getopnum(self):
-        raise NotImplementedError
+        return self.opnum

     # methods implemented by the arity mixins
     # ---------------------------------------
@@ -592,12 +590,9 @@
         baseclass = PlainResOp
     mixin = arity2mixin.get(arity, N_aryOp)

-    def getopnum(self):
-        return opnum
-
     cls_name = '%s_OP' % name
     bases = (get_base_class(mixin, baseclass),)
-    dic = {'getopnum': getopnum}
+    dic = {'opnum': opnum}
     return type(cls_name, bases, dic)

 setup(__name__ == '__main__')   # print out the table when run directly
diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py
--- a/pypy/jit/metainterp/test/test_resoperation.py
+++ b/pypy/jit/metainterp/test/test_resoperation.py
@@ -30,17 +30,17 @@
     cls = rop.opclasses[rop.rop.INT_ADD]
     assert issubclass(cls, rop.PlainResOp)
     assert issubclass(cls, rop.BinaryOp)
-    assert cls.getopnum.im_func(None) == rop.rop.INT_ADD
+    assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD

     cls = rop.opclasses[rop.rop.CALL]
     assert issubclass(cls, rop.ResOpWithDescr)
     assert issubclass(cls, rop.N_aryOp)
-    assert cls.getopnum.im_func(None) == rop.rop.CALL
+    assert cls.getopnum.im_func(cls) == rop.rop.CALL

     cls = rop.opclasses[rop.rop.GUARD_TRUE]
     assert issubclass(cls, rop.GuardResOp)
     assert issubclass(cls, rop.UnaryOp)
-    assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE
+    assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE

 def test_mixins_in_common_base():
     INT_ADD = rop.opclasses[rop.rop.INT_ADD]
diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py
--- a/pypy/module/_lsprof/interp_lsprof.py
+++ b/pypy/module/_lsprof/interp_lsprof.py
@@ -19,8 +19,9 @@
 # cpu affinity settings

 srcdir = py.path.local(pypydir).join('translator', 'c', 'src')
-eci = ExternalCompilationInfo(separate_module_files=
-                              [srcdir.join('profiling.c')])
+eci = ExternalCompilationInfo(
+    separate_module_files=[srcdir.join('profiling.c')],
+    export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling'])

 c_setup_profiling = rffi.llexternal('pypy_setup_profiling',
                                     [], lltype.Void,
diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py
--- a/pypy/module/micronumpy/__init__.py
+++ b/pypy/module/micronumpy/__init__.py
@@ -48,6 +48,7 @@
         'int_': 'interp_boxes.W_LongBox',
         'inexact': 'interp_boxes.W_InexactBox',
         'floating': 'interp_boxes.W_FloatingBox',
+        'float_': 'interp_boxes.W_Float64Box',
         'float32': 'interp_boxes.W_Float32Box',
         'float64': 'interp_boxes.W_Float64Box',
     }
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py
--- a/pypy/module/micronumpy/interp_boxes.py
+++ b/pypy/module/micronumpy/interp_boxes.py
@@ -78,6 +78,7 @@
     descr_sub = _binop_impl("subtract")
     descr_mul = _binop_impl("multiply")
     descr_div = _binop_impl("divide")
+    descr_pow = _binop_impl("power")
     descr_eq = _binop_impl("equal")
     descr_ne = _binop_impl("not_equal")
     descr_lt = _binop_impl("less")
@@ -103,7 +104,7 @@
     _attrs_ = ()

 class W_IntegerBox(W_NumberBox):
-    pass
+    descr__new__, get_dtype = new_dtype_getter("long")

 class W_SignedIntegerBox(W_IntegerBox):
     pass
@@ -170,6 +171,7 @@
     __sub__ = interp2app(W_GenericBox.descr_sub),
     __mul__ = interp2app(W_GenericBox.descr_mul),
     __div__ = interp2app(W_GenericBox.descr_div),
+    __pow__ = interp2app(W_GenericBox.descr_pow),

     __radd__ = interp2app(W_GenericBox.descr_radd),
     __rsub__ = interp2app(W_GenericBox.descr_rsub),
@@ -198,6 +200,7 @@
 )

 W_IntegerBox.typedef = TypeDef("integer", W_NumberBox.typedef,
+    __new__ = interp2app(W_IntegerBox.descr__new__.im_func),
     __module__ = "numpypy",
 )
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -568,6 +568,18 @@
     def descr_mean(self, space):
         return space.div(self.descr_sum(space), space.wrap(self.size))

+    def descr_var(self, space):
+        ''' var = mean( (values - mean(values))**2 ) '''
+        w_res = self.descr_sub(space, self.descr_mean(space))
+        assert isinstance(w_res, BaseArray)
+        w_res = w_res.descr_pow(space, space.wrap(2))
+        assert isinstance(w_res, BaseArray)
+        return w_res.descr_mean(space)
+
+    def descr_std(self, space):
+        ''' std(v) = sqrt(var(v)) '''
+        return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)] )
+
     def descr_nonzero(self, space):
         if self.size > 1:
             raise OperationError(space.w_ValueError, space.wrap(
@@ -1209,6 +1221,8 @@
     all = interp2app(BaseArray.descr_all),
     any = interp2app(BaseArray.descr_any),
     dot = interp2app(BaseArray.descr_dot),
+    var = interp2app(BaseArray.descr_var),
+    std = interp2app(BaseArray.descr_std),

     copy = interp2app(BaseArray.descr_copy),
     reshape = interp2app(BaseArray.descr_reshape),
diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py
--- a/pypy/module/micronumpy/test/test_dtypes.py
+++ b/pypy/module/micronumpy/test/test_dtypes.py
@@ -166,6 +166,15 @@
         # You can't subclass dtype
         raises(TypeError, type, "Foo", (dtype,), {})

+    def test_new(self):
+        import _numpypy as np
+        assert np.int_(4) == 4
+        assert np.float_(3.4) == 3.4
+
+    def test_pow(self):
+        from _numpypy import int_
+        assert int_(4) ** 2 == 16
+
 class AppTestTypes(BaseNumpyAppTest):
     def test_abstract_types(self):
         import _numpypy as numpy
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -978,6 +978,20 @@
         assert a[:, 0].tolist() == [17.1, 40.3]
         assert a[0].tolist() == [17.1, 27.2]

+    def test_var(self):
+        from _numpypy import array
+        a = array(range(10))
+        assert a.var() == 8.25
+        a = array([5.0])
+        assert a.var() == 0.0
+
+    def test_std(self):
+        from _numpypy import array
+        a = array(range(10))
+        assert a.std() == 2.8722813232690143
+        a = array([5.0])
+        assert a.std() == 0.0
+
 class AppTestMultiDim(BaseNumpyAppTest):
     def test_init(self):
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -385,6 +385,18 @@
 class JitHintError(Exception):
     """Inconsistency in the JIT hints."""

+PARAMETER_DOCS = {
+    'threshold': 'number of times a loop has to run for it to become hot',
+    'function_threshold': 'number of times a function must run for it to become traced from start',
+    'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge',
+    'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TRACE_TOO_LONG',
+    'inlining': 'inline python functions or not (1/0)',
+    'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate',
+    'retrace_limit': 'how many times we can try retracing before giving up',
+    'max_retrace_guards': 'number of extra guards a retrace can cause',
+    'enable_opts': 'optimizations to enabled or all, INTERNAL USE ONLY'
+    }
+
 PARAMETERS = {'threshold': 1039, # just above 1024, prime
               'function_threshold': 1619, # slightly more than one above, also prime
              'trace_eagerness': 200,
diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c
--- a/pypy/translator/c/src/profiling.c
+++ b/pypy/translator/c/src/profiling.c
@@ -29,6 +29,35 @@
     profiling_setup = 0;
   }
 }
+
+#elif defined(_WIN32)
+#include <windows.h>
+
+DWORD_PTR base_affinity_mask;
+int profiling_setup = 0;
+
+void pypy_setup_profiling() {
+  if (!profiling_setup) {
+    DWORD_PTR affinity_mask, system_affinity_mask;
+    GetProcessAffinityMask(GetCurrentProcess(),
+        &base_affinity_mask, &system_affinity_mask);
+    affinity_mask = 1;
+    /* Pick one cpu allowed by the system */
+    if (system_affinity_mask)
+      while ((affinity_mask & system_affinity_mask) == 0)
+        affinity_mask <<= 1;
+    SetProcessAffinityMask(GetCurrentProcess(), affinity_mask);
+    profiling_setup = 1;
+  }
+}
+
+void pypy_teardown_profiling() {
+  if (profiling_setup) {
+    SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask);
+    profiling_setup = 0;
+  }
+}
+
 #else
 void pypy_setup_profiling() { }
 void pypy_teardown_profiling() { }

From noreply at buildbot.pypy.org  Mon Jan  9 23:38:44 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Mon, 9 Jan 2012 23:38:44 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: typo fix
Message-ID: <20120109223844.C098682110@wyvern.cs.uni-duesseldorf.de>

Author: Alex Gaynor
Branch: better-jit-hooks
Changeset: r51186:66f1a9fb79c9
Date: 2012-01-09 16:38 -0600
http://bitbucket.org/pypy/pypy/changeset/66f1a9fb79c9/

Log:	typo fix

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -63,7 +63,7 @@
     cache.in_recursion = NonConstant(False)

 def set_optimize_hook(space, w_hook):
-    """ set_compile_hook(hook)
+    """ set_optimize_hook(hook)

     Set a compiling hook that will be called each time a loop is optimized,
     but before assembler compilation. This allows to add additional

From noreply at buildbot.pypy.org  Mon Jan  9 23:39:44 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Mon, 9 Jan 2012 23:39:44 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: remove dead file
Message-ID: <20120109223944.6462482110@wyvern.cs.uni-duesseldorf.de>

Author: Alex Gaynor
Branch: better-jit-hooks
Changeset: r51187:6a26fcde567b
Date: 2012-01-09 16:39 -0600
http://bitbucket.org/pypy/pypy/changeset/6a26fcde567b/

Log:	remove dead file

diff --git a/REVIEW.rst b/REVIEW.rst
deleted file mode 100644
--- a/REVIEW.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-REVIEW NOTES
-============
-
-* ``namespace=locals()``, can we please not use ``locals()``, even in tests? I find it super hard to read, and it's bad for the JIT.
-* Don't we already have a thing named portal (portal call maybe?) is the name confusing?
-* ``interp_reso.pyp:wrap_greenkey()`` should do something useful on non-pypyjit jds.
-* The ``WrappedOp`` constructor doesn't make much sense, it can only create an op with integer args?
-* Let's at least expose ``name`` on ``WrappedOp``.
-* DebugMergePoints don't appears to get their metadata.
-* Someone else should review the annotator magic.
-* Are entry_bridge's compiled seperately anymore? (``set_compile_hook`` docstring)
-

From noreply at buildbot.pypy.org  Mon Jan  9 23:46:59 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Mon, 9 Jan 2012 23:46:59 +0100 (CET)
Subject: [pypy-commit] pypy look-into-thread: don't look into those llops
Message-ID: <20120109224659.638E082110@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: look-into-thread
Changeset: r51188:d2fe92d73a1f
Date: 2012-01-10 00:46 +0200
http://bitbucket.org/pypy/pypy/changeset/d2fe92d73a1f/

Log:	don't look into those llops

diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py
--- a/pypy/module/thread/ll_thread.py
+++ b/pypy/module/thread/ll_thread.py
@@ -192,6 +192,7 @@
 # Thread integration.
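[Editor's aside: the ``look-into-thread`` commits here and below sprinkle ``@jit.dont_look_inside`` over thread/GC helpers. The decorator itself does almost nothing at decoration time: it only tags the function so PyPy's tracing JIT emits a residual call instead of tracing into the body. A minimal stand-in is sketched below; the attribute name is an assumption about the ``pypy.rlib.jit`` of this era, where the real decorator lives.]

```python
def dont_look_inside(func):
    # stand-in for pypy.rlib.jit.dont_look_inside: tag the function so the
    # JIT tracer treats it as opaque and emits a call instead of inlining it
    # (attribute name assumed from pypy.rlib.jit of this era)
    func._jit_look_inside_ = False
    return func

@dont_look_inside
def gc_thread_run():
    # the tracer would see a plain call to this helper, never its body
    pass

print(gc_thread_run._jit_look_inside_)  # False
```

The real machinery consults this flag when deciding whether to follow a call during tracing; the decorated function still runs normally when called directly.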
 # These are six completely ad-hoc operations at the moment.

+@jit.dont_look_inside
 def gc_thread_prepare():
     """To call just before thread.start_new_thread().  This
     allocates a new shadow stack to be used by the future
@@ -202,6 +203,7 @@
     if we_are_translated():
         llop.gc_thread_prepare(lltype.Void)

+@jit.dont_look_inside
 def gc_thread_run():
     """To call whenever the current thread (re-)acquired the GIL.
     """
@@ -209,12 +211,14 @@
         llop.gc_thread_run(lltype.Void)
 gc_thread_run._always_inline_ = True

+@jit.dont_look_inside
 def gc_thread_start():
     """To call at the beginning of a new thread.
     """
     if we_are_translated():
         llop.gc_thread_start(lltype.Void)

+@jit.dont_look_inside
 def gc_thread_die():
     """To call just before the final GIL release done by a dying
     thread.  After a thread_die(), no more gc operation should
@@ -224,6 +228,7 @@
         llop.gc_thread_die(lltype.Void)
 gc_thread_die._always_inline_ = True

+@jit.dont_look_inside
 def gc_thread_before_fork():
     """To call just before fork().  Prepares for forking, after
     which only the current thread will be alive.
@@ -233,6 +238,7 @@
     else:
         return llmemory.NULL

+@jit.dont_look_inside
 def gc_thread_after_fork(result_of_fork, opaqueaddr):
     """To call just after fork().
""" From noreply at buildbot.pypy.org Mon Jan 9 23:55:32 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 9 Jan 2012 23:55:32 +0100 (CET) Subject: [pypy-commit] pypy default: stylistic cleanups Message-ID: <20120109225532.0505D82110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51189:409a8b279f54 Date: 2012-01-09 16:55 -0600 http://bitbucket.org/pypy/pypy/changeset/409a8b279f54/ Log: stylistic cleanups diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -104,7 +104,7 @@ _attrs_ = () class W_IntegerBox(W_NumberBox): - descr__new__, get_dtype = new_dtype_getter("long") + pass class W_SignedIntegerBox(W_IntegerBox): pass @@ -200,7 +200,6 @@ ) W_IntegerBox.typedef = TypeDef("integer", W_NumberBox.typedef, - __new__ = interp2app(W_IntegerBox.descr__new__.im_func), __module__ = "numpypy", ) @@ -248,6 +247,7 @@ long_name = "int64" W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,), __module__ = "numpypy", + __new__ = interp2app(W_LongBox.descr__new__.im_func), ) W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef, diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -564,7 +564,7 @@ return space.div(self.descr_sum(space), space.wrap(self.size)) def descr_var(self, space): - ''' var = mean( (values - mean(values))**2 ) ''' + # var = mean((values - mean(values)) ** 2) w_res = self.descr_sub(space, self.descr_mean(space)) assert isinstance(w_res, BaseArray) w_res = w_res.descr_pow(space, space.wrap(2)) @@ -572,8 +572,8 @@ return w_res.descr_mean(space) def descr_std(self, space): - ''' std(v) = sqrt(var(v)) ''' - return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)] ) + # std(v) = 
sqrt(var(v)) + return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) def descr_nonzero(self, space): if self.size > 1: From noreply at buildbot.pypy.org Tue Jan 10 00:05:48 2012 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 10 Jan 2012 00:05:48 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: test for sum_promote fails miserably, signature.dtype is not arr.dtype Message-ID: <20120109230548.C411E82110@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51190:e00f14813b9e Date: 2012-01-10 01:04 +0200 http://bitbucket.org/pypy/pypy/changeset/e00f14813b9e/ Log: test for sum_promote fails miserably, signature.dtype is not arr.dtype diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -19,12 +19,12 @@ a[i][i] = 1 return a -def mean(a): +def mean(a, axis=None): if not hasattr(a, "mean"): a = numpypy.array(a) - return a.mean() + return a.mean(axis) -def sum(a): +def sum(a,axis=None): '''sum(a, axis=None) Sum of array elements over a given axis. @@ -51,12 +51,12 @@ # TODO: add to doc (once it's implemented): cumsum : Cumulative sum of array elements. 
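[Editor's aside: the ``descr_var``/``descr_std`` methods added in the changesets above compute variance as the mean of squared deviations and the standard deviation as its square root. A pure-Python sketch of that formula (the helper names below are illustrative, not part of numpypy) reproduces the values asserted in ``test_var``/``test_std``: ``8.25`` and ``2.8722813232690143`` for ``array(range(10))``.]

```python
import math

def var(values):
    # mirrors BaseArray.descr_var: mean((values - mean(values)) ** 2)
    values = list(values)
    m = sum(values) / float(len(values))
    return sum((v - m) ** 2 for v in values) / float(len(values))

def std(values):
    # mirrors BaseArray.descr_std: sqrt(var(values))
    return math.sqrt(var(values))

print(var(range(10)))  # 8.25
print(std(range(10)))  # 2.8722813232690143
```

Note this is the population variance (divide by ``len(values)``, no ``ddof``), matching what the in-place computation above does.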
     if not hasattr(a, "sum"):
         a = numpypy.array(a)
-    return a.sum()
+    return a.sum(axis)

-def min(a):
+def min(a, axis=None):
     if not hasattr(a, "min"):
         a = numpypy.array(a)
-    return a.min()
+    return a.min(axis)

 def max(a, axis=None):
     if not hasattr(a, "max"):
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -724,10 +724,13 @@
         assert d[1] == 12

     def test_mean(self):
-        from numpypy import array
+        from numpypy import array,mean
         a = array(range(5))
         assert a.mean() == 2.0
         assert a[:4].mean() == 1.5
+        a = array(range(105)).reshape(3, 5, 7)
+        assert (mean(a, axis=0) == array(range(35, 70)).reshape(5, 7)).all()
+        assert (mean(a, 2) == array(range(0, 15)).reshape(3, 5) * 7 + 3).all()

     def test_sum(self):
         from numpypy import array, arange

From noreply at buildbot.pypy.org  Tue Jan 10 10:47:05 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 10:47:05 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: add test to ensure that arguments are passed correctly
Message-ID: <20120110094705.9B5F082110@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r51191:dd765153417e
Date: 2012-01-10 10:46 +0100
http://bitbucket.org/pypy/pypy/changeset/dd765153417e/

Log:	add test to ensure that arguments are passed correctly

diff --git a/pypy/jit/backend/ppc/test/test_runner.py b/pypy/jit/backend/ppc/test/test_runner.py
--- a/pypy/jit/backend/ppc/test/test_runner.py
+++ b/pypy/jit/backend/ppc/test/test_runner.py
@@ -1,5 +1,16 @@
 from pypy.jit.backend.test.runner_test import LLtypeBackendTest
 from pypy.jit.backend.ppc.runner import PPC_64_CPU
+from pypy.jit.tool.oparser import parse
+from pypy.jit.metainterp.history import (AbstractFailDescr,
+                                         AbstractDescr,
+                                         BasicFailDescr,
+                                         BoxInt, Box, BoxPtr,
+                                         JitCellToken, TargetToken,
+                                         ConstInt, ConstPtr,
+                                         BoxObj, Const,
+                                         ConstObj, BoxFloat, ConstFloat)
+from pypy.rpython.lltypesystem import lltype, llmemory, rstr, rffi, rclass
+from pypy.jit.codewriter.effectinfo import EffectInfo
 import py

 class FakeStats(object):
@@ -13,3 +24,34 @@

     def test_cond_call_gc_wb_array_card_marking_fast_path(self):
         py.test.skip("unsure what to do here")
+
+    def test_compile_loop_many_int_args(self):
+        for numargs in range(1, 16):
+            for _ in range(numargs):
+                self.cpu.reserve_some_free_fail_descr_number()
+            ops = []
+            arglist = "[%s]\n" % ", ".join(["i%d" % i for i in range(numargs)])
+            ops.append(arglist)
+
+            arg1 = 0
+            arg2 = 1
+            res = numargs
+            for i in range(numargs - 1):
+                op = "i%d = int_add(i%d, i%d)\n" % (res, arg1, arg2)
+                arg1 = res
+                res += 1
+                arg2 += 1
+                ops.append(op)
+            ops.append("finish(i%d)" % (res - 1))
+
+            ops = "".join(ops)
+            loop = parse(ops)
+            looptoken = JitCellToken()
+            done_number = self.cpu.get_fail_descr_number(loop.operations[-1].getdescr())
+            self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken)
+            ARGS = [lltype.Signed] * numargs
+            RES = lltype.Signed
+            args = [i+1 for i in range(numargs)]
+            res = self.cpu.execute_token(looptoken, *args)
+            assert self.cpu.get_latest_value_int(0) == sum(args)
+

From noreply at buildbot.pypy.org  Tue Jan 10 11:24:41 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 10 Jan 2012 11:24:41 +0100 (CET)
Subject: [pypy-commit] pypy default: argh, I'm stupid, use the correct API
Message-ID: <20120110102441.28E0782110@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch:
Changeset: r51192:e6f379da6e7c
Date: 2012-01-10 12:24 +0200
http://bitbucket.org/pypy/pypy/changeset/e6f379da6e7c/

Log:	argh, I'm stupid, use the correct API

diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py
--- a/pypy/jit/backend/x86/regalloc.py
+++ b/pypy/jit/backend/x86/regalloc.py
@@ -741,7 +741,7 @@
         self.xrm.possibly_free_var(op.getarg(0))

     def consider_cast_int_to_float(self, op):
-        loc0 = self.rm.force_allocate_reg(op.getarg(0))
+        loc0 = self.rm.make_sure_var_in_reg(op.getarg(0))
         loc1 = self.xrm.force_allocate_reg(op.result)
         self.Perform(op, [loc0], loc1)
         self.rm.possibly_free_var(op.getarg(0))

From noreply at buildbot.pypy.org  Tue Jan 10 11:37:40 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 11:37:40 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: rename test, start with 2 arguments
Message-ID: <20120110103740.7AA8182110@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r51193:768f640c18b7
Date: 2012-01-10 11:36 +0100
http://bitbucket.org/pypy/pypy/changeset/768f640c18b7/

Log:	rename test, start with 2 arguments

diff --git a/pypy/jit/backend/ppc/test/test_runner.py b/pypy/jit/backend/ppc/test/test_runner.py
--- a/pypy/jit/backend/ppc/test/test_runner.py
+++ b/pypy/jit/backend/ppc/test/test_runner.py
@@ -26,7 +26,7 @@
         py.test.skip("unsure what to do here")

     def test_compile_loop_many_int_args(self):
-        for numargs in range(1, 16):
+        for numargs in range(2, 16):
             for _ in range(numargs):
                 self.cpu.reserve_some_free_fail_descr_number()
             ops = []

From noreply at buildbot.pypy.org  Tue Jan 10 11:37:41 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 11:37:41 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): fix offset to stack parameters
Message-ID: <20120110103741.B9B5882110@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r51194:d1b7f8e3b929
Date: 2012-01-10 11:37 +0100
http://bitbucket.org/pypy/pypy/changeset/d1b7f8e3b929/

Log:	(bivab, hager): fix offset to stack parameters

diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py
--- a/pypy/jit/backend/ppc/ppcgen/regalloc.py
+++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py
@@ -161,7 +161,7 @@
         arg_index = 0
         count = 0
         n_register_args = len(r.PARAM_REGS)
-        cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD + 1
+        cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD
         for box in inputargs:
             assert isinstance(box, Box)
             # handle inputargs in argument registers

From noreply at buildbot.pypy.org  Tue Jan 10 11:41:48 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Tue, 10 Jan 2012 11:41:48 +0100 (CET)
Subject: [pypy-commit] pypy default: Fix the docstrings.
Message-ID: <20120110104148.B446082110@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch:
Changeset: r51195:799b4c3164db
Date: 2012-01-10 11:41 +0100
http://bitbucket.org/pypy/pypy/changeset/799b4c3164db/

Log:	Fix the docstrings.

diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -390,12 +390,12 @@
     'threshold': 'number of times a loop has to run for it to become hot',
     'function_threshold': 'number of times a function must run for it to become traced from start',
     'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge',
-    'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TRACE_TOO_LONG',
+    'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TOO_LONG',
     'inlining': 'inline python functions or not (1/0)',
     'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate',
     'retrace_limit': 'how many times we can try retracing before giving up',
     'max_retrace_guards': 'number of extra guards a retrace can cause',
-    'enable_opts': 'optimizations to enabled or all, INTERNAL USE ONLY'
+    'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY'
     }

 PARAMETERS = {'threshold': 1039, # just above 1024, prime

From noreply at buildbot.pypy.org  Tue Jan 10 11:49:40 2012
From: noreply at buildbot.pypy.org (timo_jbo)
Date: Tue, 10 Jan 2012 11:49:40 +0100 (CET)
Subject: [pypy-commit] pypy strbuf_by_default: turn on the strbuf (strjoin v2) objspace optimisation by default
Message-ID: <20120110104940.1F84582110@wyvern.cs.uni-duesseldorf.de>

Author: Timo Paulssen
Branch: strbuf_by_default
Changeset: r51196:211606889b44
Date: 2012-01-10 11:47 +0100
http://bitbucket.org/pypy/pypy/changeset/211606889b44/

Log:	turn on the strbuf (strjoin v2) objspace optimisation by default

diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py
--- a/pypy/config/pypyoption.py
+++ b/pypy/config/pypyoption.py
@@ -237,7 +237,7 @@
                    default=False),
         BoolOption("withstrbuf", "use strings optimized for addition (ver 2)",
-                   default=False),
+                   default=True),

         BoolOption("withprebuiltchar",
                    "use prebuilt single-character string objects",

From noreply at buildbot.pypy.org  Tue Jan 10 11:52:50 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 10 Jan 2012 11:52:50 +0100 (CET)
Subject: [pypy-commit] pypy look-into-thread: don't look into a function that does add_memory_pressure. We should fix it
Message-ID: <20120110105250.27CDD82110@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: look-into-thread
Changeset: r51197:a72a6f955660
Date: 2012-01-10 12:52 +0200
http://bitbucket.org/pypy/pypy/changeset/a72a6f955660/

Log:	don't look into a function that does add_memory_pressure. We should
	fix it one day

diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py
--- a/pypy/module/thread/ll_thread.py
+++ b/pypy/module/thread/ll_thread.py
@@ -156,6 +156,7 @@

 null_ll_lock = lltype.nullptr(TLOCKP.TO)

+@jit.dont_look_inside
 def allocate_ll_lock():
     # track_allocation=False here; be careful to lltype.free() it.  The
     # reason it is set to False is that we get it from all app-level

From noreply at buildbot.pypy.org  Tue Jan 10 12:19:11 2012
From: noreply at buildbot.pypy.org (hager)
Date: Tue, 10 Jan 2012 12:19:11 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): fix off-by-one bug in computation of offset to stack locations
Message-ID: <20120110111911.86B9382110@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r51198:308dd2d5e89f
Date: 2012-01-10 12:18 +0100
http://bitbucket.org/pypy/pypy/changeset/308dd2d5e89f/

Log:	(bivab, hager): fix off-by-one bug in computation of offset to
	stack locations

diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
--- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
@@ -709,14 +709,14 @@
         # move immediate value to memory
         elif loc.is_stack():
             self.mc.alloc_scratch_reg()
-            offset = loc.as_key() * WORD - WORD
+            offset = loc.as_key() * WORD
             self.mc.load_imm(r.SCRATCH.value, value)
             self.mc.store(r.SCRATCH.value, r.SPP.value, offset)
             self.mc.free_scratch_reg()
             return
         assert 0, "not supported location"
     elif prev_loc.is_stack():
-        offset = prev_loc.as_key() * WORD - WORD
+        offset = prev_loc.as_key() * WORD
         # move from memory to register
         if loc.is_reg():
             reg = loc.as_key()
@@ -724,7 +724,7 @@
             return
         # move in memory
         elif loc.is_stack():
-            target_offset = loc.as_key() * WORD - WORD
+            target_offset = loc.as_key() * WORD
             self.mc.alloc_scratch_reg()
             self.mc.load(r.SCRATCH.value, r.SPP.value, offset)
             self.mc.store(r.SCRATCH.value, r.SPP.value, target_offset)
@@ -740,7 +740,7 @@
             return
         # move to memory
         elif loc.is_stack():
-            offset = loc.as_key() * WORD - WORD
+            offset = loc.as_key() * WORD
             self.mc.store(reg, r.SPP.value, offset)
             return
         assert 0, "not supported location"
diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py
---
a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -161,7 +161,7 @@ arg_index = 0 count = 0 n_register_args = len(r.PARAM_REGS) - cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD + cur_frame_pos = -self.assembler.OFFSET_STACK_ARGS // WORD + 1 for box in inputargs: assert isinstance(box, Box) # handle inputargs in argument registers From noreply at buildbot.pypy.org Tue Jan 10 12:37:53 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 10 Jan 2012 12:37:53 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: fix wrong initialisation of StackLocation in regalloc_push/regalloc_pop Message-ID: <20120110113753.6FA9782110@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51199:e1dea1c15227 Date: 2012-01-10 12:37 +0100 http://bitbucket.org/pypy/pypy/changeset/e1dea1c15227/ Log: fix wrong initialisation of StackLocation in regalloc_push/regalloc_pop diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -39,6 +39,7 @@ from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop +from pypy.jit.backend.ppc.ppcgen.locations import StackLocation memcpy_fn = rffi.llexternal('memcpy', [llmemory.Address, llmemory.Address, rffi.SIZE_T], lltype.Void, @@ -757,7 +758,7 @@ assert 0, "not implemented yet" # XXX this code has to be verified assert not self.stack_in_use - target = StackLocation(self.ENCODING_AREA) # write to force index field + target = StackLocation(self.ENCODING_AREA // WORD) # write to ENCODING AREA self.regalloc_mov(loc, target) self.stack_in_use = True elif loc.is_reg(): @@ -782,7 +783,7 @@ assert 0, "not implemented yet" # XXX this code has to be verified assert self.stack_in_use - from_loc = StackLocation(self.ENCODING_AREA) + 
from_loc = StackLocation(self.ENCODING_AREA // WORD) # read from ENCODING AREA self.regalloc_mov(from_loc, loc) self.stack_in_use = False elif loc.is_reg(): From noreply at buildbot.pypy.org Tue Jan 10 13:04:35 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 10 Jan 2012 13:04:35 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: adjust _build_propagate_exception_path to new interface Message-ID: <20120110120435.072BD82110@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51200:46750704d638 Date: 2012-01-10 13:03 +0100 http://bitbucket.org/pypy/pypy/changeset/46750704d638/ Log: adjust _build_propagate_exception_path to new interface diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -282,7 +282,8 @@ mc = PPCBuilder() with Saved_Volatiles(mc): - addr = self.cpu.get_on_leave_jitted_int(save_exception=True) + addr = self.cpu.get_on_leave_jitted_int(save_exception=True, + default_to_memoryerror=True) mc.call(addr) mc.load_imm(r.RES, self.cpu.propagate_exception_v) From noreply at buildbot.pypy.org Tue Jan 10 13:59:42 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 10 Jan 2012 13:59:42 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add a draft Message-ID: <20120110125942.4A43A82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4005:07cb0fa35b28 Date: 2012-01-10 14:56 +0200 http://bitbucket.org/pypy/extradoc/changeset/07cb0fa35b28/ Log: add a draft diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst new file mode 100644 --- /dev/null +++ b/blog/draft/laplace.rst @@ -0,0 +1,165 @@ +NumPyPy progress report - running benchmarks +============================================ + +Hello. + +I'm pleased to inform about progress we made on NumPyPy both in terms of +completeness and performance. 
This post mostly deals with the performance +side and how far we got by now. **Word of warning:** It's worth noting that +the performance work on the numpy side is not done - we're maybe half way +through and there are trivial and not so trivial optimizations to be performed. +In fact we didn't even start to implement some optimizations like vectorization. + +Benchmark +--------- + +We choose a laplace transform, which is also used on scipy's +`PerformancePython`_ wiki. The problem with the implementation on the +performance python wiki page is that there are two algorithms used which +has different convergence, but also very different performance characteristics +on modern machines. Instead we implemented our own versions in C and a set +of various Python versions using numpy or not. The full source is available +on `fijal's hack`_ repo and the exact revision used is 18502dbbcdb3. + +Let me describe various algorithms used. Note that some of them contain +pypy-specific hacks to work around current limitations in the implementation. +Those hacks will go away eventually and the performance should improve and +not decrease. It's worth noting that while numerically the algorithms used +are identical, the exact data layout is not and differs between methods. + +**Note on all the benchmarks:** they're all run once, but the performance +is very stable across runs. + +So, starting from the C version, it implements dead simple laplace transform +using two loops and a double-reference memory (array of ``int**``). The double +reference does not matter for performance and two algorithms are implemented +in ``inline-laplace.c`` and ``laplace.c``. They're both compiled with +``gcc 4.4.5`` and ``-O3``. + +A straightforward version of those in python +is implemented in ``laplace.py`` using respectively ``inline_slow_time_step`` +and ``slow_time_step``. ``slow_2_time_step`` does the same thing, except +it copies arrays in-place instead of creating new copies. 
+ ++-----------------------+----------------------+--------------------+ +| bench | number of iterations | time per iteration | ++-----------------------+----------------------+--------------------+ +| laplace C | 219 | 6.3ms | ++-----------------------+----------------------+--------------------+ +| inline-laplace C | 278 | 20ms | ++-----------------------+----------------------+--------------------+ +| slow python | 219 | 17ms | ++-----------------------+----------------------+--------------------+ +| slow 2 python | 219 | 14ms | ++-----------------------+----------------------+--------------------+ +| inline_slow python | 278 | 23.7ms | ++-----------------------+----------------------+--------------------+ + +The important thing to notice here is that the data dependency in the inline version +causes a huge slowdown. Note that this is already **not too bad**: +yes, the braindead Python version of the same algorithm takes longer, +and pypy is not able to use as much information about the data being independent, but it +stays within the same ballpark - **15% - 170%** slower than C - and it definitely +matters more which algorithm you choose than which language. For comparison, +the slow versions take about **5.75s** each on CPython 2.6 **per iteration**, +so, estimating, they're about **200x** slower than the PyPy equivalent. +I didn't measure a full run though :) + +The next step is to use numpy expressions. The first problem we run into is that +computing the error walks the entire array again. This is fairly inefficient +in terms of cache access, so I took the liberty of computing errors only every 15 +steps. This rounds convergence to the nearest 15 iterations, but +speeds things up anyway.
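The "compute the error only every 15 steps" driver can be sketched as follows; ``run_until_converged``, ``step`` and ``error`` are hypothetical names for this illustration, not identifiers from the benchmark source:

```python
def run_until_converged(step, error, tol, check_every=15, max_iter=10000):
    """Drive time steps, paying for the (cache-unfriendly) error
    computation only every `check_every` iterations.  As a result the
    reported iteration count is rounded up to a multiple of
    `check_every`.  Sketch only."""
    n = 0
    while n < max_iter:
        step()
        n += 1
        if n % check_every == 0 and error() < tol:
            break
    return n
```

For example, a run that actually converges at iteration 20 is only detected at the next check, so 30 iterations are reported.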
``numeric_time_step`` takes the most braindead +approach of replacing the array with itself, like this:: + + u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 + + (u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv + +We need 3 arrays here - one for an intermediate (pypy does not automatically +create intermediates for expressions), one for a copy to compute error and +one for the result. This works a bit by chance, since numpy ``+`` or +``*`` creates an intermediate and pypy simulates the behavior if necessary. + +``numeric_2_time_step`` works pretty much the same:: + + src = self.u + self.u = src.copy() + self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + + (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv + +except the copy is now explicit rather than implicit. + +``numeric_3_time_step`` does the same thing, but notices you don't have to copy +the entire array, it's enough to copy border pieces and fill rest with zeros:: + + src = self.u + self.u = numpy.zeros((self.nx, self.ny), 'd') + self.u[0] = src[0] + self.u[-1] = src[-1] + self.u[:, 0] = src[:, 0] + self.u[:, -1] = src[:, -1] + self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + + (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv + +``numeric_4_time_step`` is the one that tries to resemble the C version more. +Instead of doing an array copy, it actually notices that you can alternate +between two arrays. This is exactly what C version does. 
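The two-array alternation that ``numeric_4_time_step`` and the C version rely on can be sketched in plain Python. This is a hedged illustration of the buffer-swapping idea; ``alternating_steps`` and ``stencil`` are made-up names, not code from the benchmarks:

```python
def alternating_steps(u, old_u, n_steps, stencil):
    """Instead of copying the whole grid each step, keep two grids and
    swap their roles: read from one, write into the other.  Both grids
    must start out with identical boundary values, because boundaries
    are never rewritten.  Sketch only."""
    for _ in range(n_steps):
        u, old_u = old_u, u  # the stale grid becomes the write target
        for i in range(1, len(u) - 1):
            for j in range(1, len(u[0]) - 1):
                u[i][j] = stencil(old_u, i, j)
    return u
```

The point of the trick is that no per-step allocation or full-array copy happens at all; only the interior cells are ever written.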
+Note the ``remove_invalidates`` call that's a pypy specific hack - we hope +to remove this call in the near future, but in short it promises "I don't +have any unbuilt intermediates that depend on the value of the argument", +which means you don't have to compute expressions you're not actually using:: + + remove_invalidates(self.old_u) + remove_invalidates(self.u) + self.old_u[:,:] = self.u + src = self.old_u + self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + + (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv + +This one is the most equivalent to the C version. + +``numeric_5_time_step`` does the same thing, but notices you don't have to +copy the entire array, it's enough to just copy edges. This is an optimization +that was not done in the C version:: + + remove_invalidates(self.old_u) + remove_invalidates(self.u) + src = self.u + self.old_u, self.u = self.u, self.old_u + self.u[0] = src[0] + self.u[-1] = src[-1] + self.u[:, 0] = src[:, 0] + self.u[:, -1] = src[:, -1] + self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + + (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv + +Let's look at the table of runs. As above, ``gcc 4.4.5``, compiled with +``-O3``, pypy nightly 7bb8b38d8563, 64bit platform. All of the numeric methods +run 226 steps each, slightly more than 219, rounding to the next 15 when +the error is computed. 
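The edge-only copy used by ``numeric_3`` and ``numeric_5`` amounts to something like the following sketch; ``fresh_grid_with_borders`` is a hypothetical helper written for this illustration, not code from the benchmarks:

```python
def fresh_grid_with_borders(src):
    """Allocate a zeroed grid and copy only the four border slices of
    `src`.  The interior is fully overwritten by the next time step
    anyway, so copying it (as a full-array copy would) is wasted work.
    Sketch only."""
    nx, ny = len(src), len(src[0])
    dst = [[0.0] * ny for _ in range(nx)]
    dst[0] = src[0][:]                # top row
    dst[-1] = src[-1][:]              # bottom row
    for i in range(nx):
        dst[i][0] = src[i][0]         # left column
        dst[i][-1] = src[i][-1]       # right column
    return dst
```

Only O(nx + ny) values are copied instead of O(nx * ny), which is where the extra speedup over the plain copy-based versions comes from.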
Comparison for PyPy and CPython: + ++-----------------------+-------------+----------------+ +| benchmark | PyPy | CPython | ++-----------------------+-------------+----------------+ +| numeric | 21ms | 35ms | ++-----------------------+-------------+----------------+ +| numeric 2 | 14ms | 37ms | ++-----------------------+-------------+----------------+ +| numeric 3 | 13ms | 29ms | ++-----------------------+-------------+----------------+ +| numeric 4 | 11ms | 31ms | ++-----------------------+-------------+----------------+ +| numeric 5 | 9.3ms | 21ms | ++-----------------------+-------------+----------------+ + +So, I can say that these preliminary results are pretty OK. They're not as +fast as the C version, but we're already much faster than CPython - almost +always by more than 2x on this relatively real-world example. This is not the +end though. As we continue working, we hope to use the much better high-level +information that we have about the operations to eventually outperform C, hopefully +in 2012. Stay tuned. + +Cheers, +fijal + +.. _`PerformancePython`: http://www.scipy.org/PerformancePython From noreply at buildbot.pypy.org Tue Jan 10 13:59:44 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 10 Jan 2012 13:59:44 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge Message-ID: <20120110125944.2983082110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4006:642dcd49d458 Date: 2012-01-10 14:59 +0200 http://bitbucket.org/pypy/extradoc/changeset/642dcd49d458/ Log: merge diff --git a/blog/draft/pycon-2012-teaser.rst b/blog/draft/pycon-2012-teaser.rst --- a/blog/draft/pycon-2012-teaser.rst +++ b/blog/draft/pycon-2012-teaser.rst @@ -13,7 +13,7 @@ perform much better. In this tutorial we'll give you insights on how to push PyPy to it's limits. We'll focus on understanding the performance characteristics of PyPy, and learning the analysis tools in order to maximize - your applications performance.
+ your applications performance. *This is the tutorial.* * **Why PyPy by example**, by Maciej Fijalkowski, Alex Gaynor and Armin Rigo: One of the goals of PyPy is to make existing Python code faster, however an diff --git a/planning/jit.txt b/planning/jit.txt --- a/planning/jit.txt +++ b/planning/jit.txt @@ -86,8 +86,6 @@ - ((turn max(x, y)/min(x, y) into MAXSD, MINSD instructions when x and y are floats.)) (a mess, MAXSD/MINSD have different semantics WRT nan) -- list.pop() (with no arguments) calls into delitem, rather than recognizing that - no items need to be moved BACKEND TASKS ------------- diff --git a/sprintinfo/leysin-winter-2011/announcement.txt b/sprintinfo/leysin-winter-2012/announcement.txt copy from sprintinfo/leysin-winter-2011/announcement.txt copy to sprintinfo/leysin-winter-2012/announcement.txt --- a/sprintinfo/leysin-winter-2011/announcement.txt +++ b/sprintinfo/leysin-winter-2012/announcement.txt @@ -1,30 +1,23 @@ ===================================================================== - PyPy Leysin Winter Sprint (16-22nd January 2011) + PyPy Leysin Winter Sprint (15-22nd January 2012) ===================================================================== The next PyPy sprint will be in Leysin, Switzerland, for the -seventh time. This is a fully public sprint: newcomers and topics +eighth time. This is a fully public sprint: newcomers and topics other than those proposed below are welcome. ------------------------------ Goals and topics of the sprint ------------------------------ -* Now that we have released 1.4, and plan to release 1.4.1 soon - (possibly before the sprint), the sprint itself is going to be - mainly working on fixing issues reported by various users. Of - course this does not prevent people from showing up with a more - precise interest in mind. If there are newcomers, we will gladly - give introduction talks. 
+* Py3k: work towards supporting Python 3 in PyPy -* We will also work on polishing and merging the long-standing - branches that are around, which could eventually lead to the - next PyPy release. These branches are notably: +* NumPyPy: work towards supporting the numpy module in PyPy - - fast-forward (Python 2.7 support, by Benjamin, Amaury, and others) - - jit-unroll-loops (improve JITting of smaller loops, by Hakan) - - arm-backend (a JIT backend for ARM, by David) - - jitypes2 (fast ctypes calls with the JIT, by Antonio). +* JIT backends: integrate tests for ARM; look at the PowerPC 64; + maybe try again to write an LLVM- or GCC-based one + +* STM and STM-related topics; or the Concurrent Mark-n-Sweep GC * And as usual, the main side goal is to have fun in winter sports :-) We can take a day off for ski. @@ -33,8 +26,9 @@ Exact times ----------- -The work days should be 16-22 January 2011. People may arrive on -the 15th already and/or leave on the 23rd. +The work days should be 15-21 January 2012 (Sunday-Saturday). The +official plans are for people to arrive on the 14th or the 15th, and to +leave on the 22nd. ----------------------- Location & Accomodation @@ -56,13 +50,14 @@ expensive) and maybe the possibility to get a single room if you really want to. -Please register by svn:
There will be some Swiss-to-EU adapters around -- bring a EU-format power strip if you diff --git a/sprintinfo/leysin-winter-2012/people.txt b/sprintinfo/leysin-winter-2012/people.txt new file mode 100644 --- /dev/null +++ b/sprintinfo/leysin-winter-2012/people.txt @@ -0,0 +1,60 @@ + +People coming to the Leysin sprint Winter 2011 +================================================== + +People who have a ``?`` in their arrive/depart or accomodation +column are known to be coming but there are no details +available yet from them. + + +==================== ============== ======================= + Name Arrive/Depart Accomodation +==================== ============== ======================= +Armin Rigo private +David Schneider 17/22 ermina +Antonio Cuni 16/22 ermina, might arrive on the 15th +Romain Guillebert 15/22 ermina +==================== ============== ======================= + + +People on the following list were present at previous sprints: + +==================== ============== ===================== + Name Arrive/Depart Accomodation +==================== ============== ===================== +Antonio Cuni ? ? +Michael Foord ? ? +Maciej Fijalkowski ? ? +David Schneider ? ? +Jacob Hallen ? ? +Laura Creighton ? ? +Hakan Ardo ? ? +Carl Friedrich Bolz ? ? +Samuele Pedroni ? ? +Anders Hammarquist ? ? +Christian Tismer ? ? +Niko Matsakis ? ? +Toby Watson ? ? +Paul deGrandis ? ? +Michael Hudson ? ? +Anders Lehmann ? ? +Niklaus Haldimann ? ? +Lene Wagner ? ? +Amaury Forgeot d'Arc ? ? +Valentino Volonghi ? ? +Boris Feigin ? ? +Andrew Thompson ? ? +Bert Freudenberg ? ? +Beatrice Duering ? ? +Richard Emslie ? ? +Johan Hahn ? ? +Stephan Diehl ? ? +Alexander Schremmer ? ? +Anders Chrigstroem ? ? +Eric van Riet Paap ? ? +Holger Krekel ? ? +Guido Wesdorp ? ? +Leonardo Santagada ? ? +Alexandre Fayolle ? ? +Sylvain Th�nault ? ? 
+==================== ============== ===================== diff --git a/talk/dagstuhl2012/figures/all_numbers.png b/talk/dagstuhl2012/figures/all_numbers.png new file mode 100644 index 0000000000000000000000000000000000000000..9076ac193fc9ba1954e24e2ae372ec7e1e1f44e6 GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/metatrace01.pdf b/talk/dagstuhl2012/figures/metatrace01.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0b7181b5a476093c16ff1233f37535378ef7bf8a GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/telco.png b/talk/dagstuhl2012/figures/telco.png new file mode 100644 index 0000000000000000000000000000000000000000..56033389dab8bfe8211ffd4de5bfcb23cdc94b0f GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/trace-levels-metatracing.svg b/talk/dagstuhl2012/figures/trace-levels-metatracing.svg new file mode 100644 --- /dev/null +++ b/talk/dagstuhl2012/figures/trace-levels-metatracing.svg @@ -0,0 +1,833 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + image/svg+xml + + + + + + + + CPU + + Interpreter + + User Program + + + f1 + + + + + f2 + + + + + main_loop + + + + + + BINARY_ADD + + + + JUMP_IF_FALSE + + + + + + + ... + ... 
+ + + + + + + + + + Trace for f1 + + ops frommain_loop...ops fromBINARY_ADD...more ops frommain_loop...ops_fromJUMP_IF_FALSEguard(...)jump to start + + + Tracer + + + CPU + + + diff --git a/talk/dagstuhl2012/figures/trace-levels-tracing.svg b/talk/dagstuhl2012/figures/trace-levels-tracing.svg new file mode 100644 --- /dev/null +++ b/talk/dagstuhl2012/figures/trace-levels-tracing.svg @@ -0,0 +1,991 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + image/svg+xml + + + + + + + + CPU + + Interpreter + + + + + main_loop + + + + + + BINARY_ADD + + + + JUMP_IF_FALSE + + + + + + + ... + ... + + + + + + Tracer + + + CPU + + + + + + + CPU + User Program + + + f1 + + + + + f2 + + + + diff --git a/talk/dagstuhl2012/figures/trace01.pdf b/talk/dagstuhl2012/figures/trace01.pdf new file mode 100644 index 0000000000000000000000000000000000000000..252b5089e72d3626e636cd02397204a464c7ca22 GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/trace02.pdf b/talk/dagstuhl2012/figures/trace02.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ece12fe0c3f96856afea26c49d92ade630db9328 GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/trace03.pdf b/talk/dagstuhl2012/figures/trace03.pdf new file mode 100644 index 0000000000000000000000000000000000000000..04b38b8996eb2c297214c017bbe1cce1f8f64bdb GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/trace04.pdf b/talk/dagstuhl2012/figures/trace04.pdf new file mode 100644 index 0000000000000000000000000000000000000000..472b798aeae005652fc0d749ed6571117c5819d9 GIT binary patch [cut] diff --git a/talk/dagstuhl2012/figures/trace05.pdf b/talk/dagstuhl2012/figures/trace05.pdf new file mode 100644 index 0000000000000000000000000000000000000000..977e3bbda8d4d349f27f06fcdaa3bd100e95e1a7 GIT binary patch [cut] diff --git 
a/talk/dagstuhl2012/meta-tracing-pypy.pdf b/talk/dagstuhl2012/meta-tracing-pypy.pdf new file mode 100644 index 0000000000000000000000000000000000000000..910ebeedd1a286e8104d7f59848d1bfa80eed050 GIT binary patch [cut] diff --git a/talk/dagstuhl2012/talk.tex b/talk/dagstuhl2012/talk.tex new file mode 100644 --- /dev/null +++ b/talk/dagstuhl2012/talk.tex @@ -0,0 +1,417 @@ +\documentclass[utf8x]{beamer} + +% This file is a solution template for: + +% - Talk at a conference/colloquium. +% - Talk length is about 20min. +% - Style is ornate. + +\mode +{ + \usetheme{Warsaw} + % or ... + + %\setbeamercovered{transparent} + % or whatever (possibly just delete it) +} + + +\usepackage[english]{babel} +\usepackage{listings} +\usepackage{fancyvrb} +\usepackage{ulem} +\usepackage{color} +\usepackage{alltt} +\usepackage{hyperref} + +\usepackage[utf8x]{inputenc} + + +\newcommand\redsout[1]{{\color{red}\sout{\hbox{\color{black}{#1}}}}} +\newcommand{\noop}{} + +% or whatever + +% Or whatever. Note that the encoding and the font should match. If T1 +% does not look nice, try deleting the line with the fontenc. + + +\title{Meta-Tracing in the PyPy Project} + +\author[Carl Friedrich Bolz et al.]{\emph{Carl Friedrich Bolz}\inst{1} \and Antonio Cuni\inst{1} \and Maciej Fijałkowski\inst{2} \and Michael Leuschel\inst{1} \and Samuele Pedroni\inst{3} \and Armin Rigo\inst{1} \and many~more} +% - Give the names in the same order as they appear in the paper. +% - Use the \inst{?} command only if the authors have different +% affiliation. + +\institute[Heinrich-Heine-Universität Düsseldorf] +{$^1$Heinrich-Heine-Universität Düsseldorf, STUPS Group, Germany \and + + $^2$merlinux GmbH, Hildesheim, Germany \and + + $^3$Canonical +} + +\date{Foundations of Scripting Languages, Dagstuhl, 5th January 2012} +% - Either use conference name or its abbreviation.
+% - Not really informative to the audience, more for people (including +% yourself) who are reading the slides online + + +% If you have a file called "university-logo-filename.xxx", where xxx +% is a graphic format that can be processed by latex or pdflatex, +% resp., then you can add a logo as follows: + + + + +% Delete this, if you do not want the table of contents to pop up at +% the beginning of each subsection: +%\AtBeginSubsection[] +%{ +% \begin{frame} +% \frametitle{Outline} +% \tableofcontents[currentsection,currentsubsection] +% \end{frame} +%} + + +% If you wish to uncover everything in a step-wise fashion, uncomment +% the following command: + +%\beamerdefaultoverlayspecification{<+->} + + +\begin{document} + +\begin{frame} + \titlepage +\end{frame} + +\begin{frame} + \frametitle{Good JIT Compilers for Scripting Languages are Hard} + \begin{itemize} + \item recent languages like Python, Ruby, JS, PHP have complex core semantics + \item many corner cases, even hard to interpret correctly + \item particularly in contexts where you have limited resources (like + academic, Open Source) + \end{itemize} + \pause + \begin{block}{Problems} + \begin{enumerate} + \item implement all corner-cases of semantics correctly + \item ... and the common cases efficiently + \item while maintaining reasonable simplicity in the implementation + \end{enumerate} + \end{block} +\end{frame} + +\begin{frame} + \frametitle{Example: Attribute Reads in Python} + What happens when an attribute \texttt{x.m} is read?
(simplified) + \pause + \begin{itemize} + \item check for \texttt{x.\_\_getattribute\_\_}, if there, call it + \pause + \item look for the attribute in the object's dictionary, if it's there, return it + \pause + \item walk up the MRO and look in each class' dictionary for the attribute + \pause + \item if the attribute is found, call its \texttt{\_\_get\_\_} attribute and return the result + \pause + \item if the attribute is not found, look for \texttt{x.\_\_getattr\_\_}, if there, call it + \pause + \item raise an \texttt{AttributeError} + \end{itemize} +\end{frame} + +\begin{frame} + \frametitle{An Interpreter} + \includegraphics[scale=0.5]{figures/trace01.pdf} +\end{frame} + +\begin{frame} + \frametitle{A Tracing JIT} + \includegraphics[scale=0.5]{figures/trace02.pdf} +\end{frame} + +\begin{frame} + \frametitle{A Tracing JIT} + \includegraphics[scale=0.5]{figures/trace03.pdf} +\end{frame} + +\begin{frame} + \frametitle{A Tracing JIT} + \includegraphics[scale=0.5]{figures/trace04.pdf} +\end{frame} + +\begin{frame} + \frametitle{Tracing JITs} + Advantages: + \begin{itemize} + \item can be added to existing VM + \item interpreter does a lot of work + \item can fall back to interpreter for uncommon paths + \end{itemize} + \pause + \begin{block}{Problems} + \begin{itemize} + \item traces typically contain bytecodes + \item many scripting languages have bytecodes that contain complex logic + \item need to expand the bytecode in the trace into something more explicit + \item this duplicates the language semantics in the tracer/optimizer + \end{itemize} + \end{block} +\end{frame} + +\begin{frame} + \frametitle{Idea of Meta-Tracing} + \includegraphics[scale=0.5]{figures/trace05.pdf} +\end{frame} + +\begin{frame} + \frametitle{Meta-Tracing} + \includegraphics[scale=0.5]{figures/metatrace01.pdf} +\end{frame} + +\begin{frame} + \frametitle{Meta-Tracing JITs} + \begin{block}{Advantages:} + \begin{itemize} + \item semantics are always like that of the interpreter + \item 
trace fully contains language semantics + \item meta-tracers can be reused for various interpreters + \end{itemize} + \end{block} + \pause + a few meta-tracing systems have been built: + \begin{itemize} + \item Sullivan et.al. describe a meta-tracer using the Dynamo RIO system + \item Yermolovich et.al. run a Lua implementation on top of a tracing JS implementation + \item SPUR is a tracing JIT for CLR bytecodes, which is used to speed up a JS implementation in C\# + \end{itemize} +\end{frame} + +\begin{frame} + \frametitle{PyPy} + A general environment for implementing scripting languages + \pause + \begin{block}{Approach} + \begin{itemize} + \item write an interpreter for the language in RPython + \item compilable to an efficient C-based VM + \pause + \item (RPython is a restricted subset of Python) + \end{itemize} + \end{block} + \pause +\end{frame} + +\begin{frame} + \frametitle{PyPy's Meta-Tracing JIT} + \begin{itemize} + \item PyPy contains a meta-tracing JIT for interpreters in RPython + \item needs a few source-code hints (or annotations) \emph{in the interpreter} + \item allows interpreter-author to express language specific type feedback + \item contains powerful general optimizations + \pause + \item general techniques to deal with reified frames + \end{itemize} +\end{frame} + + + +\begin{frame} + \frametitle{Language Implementations Done with PyPy} + \begin{itemize} + \item Most complete language implemented: Python + \item regular expression matcher of Python standard library + \item A reasonably complete Prolog + \item Converge (previous talk) + \item lots of experiments (Squeak, Gameboy emulator, JS, start of a PHP, Haskell, ...) 
+ \end{itemize} +\end{frame} + + +\begin{frame} + \frametitle{Some Benchmarks for Python} + \begin{itemize} + \item benchmarks done using PyPy's Python interpreter + \item about 30'000 lines of code + \end{itemize} +\end{frame} + +\begin{frame} + \includegraphics[scale=0.3]{figures/all_numbers.png} +\end{frame} + +\begin{frame} + \frametitle{Telco Benchmark} + \includegraphics[scale=0.3]{figures/telco.png} +\end{frame} + +\begin{frame} + \frametitle{Conclusion} + \begin{itemize} + \item writing good JITs for recent scripting languages is too hard! + \item only reasonable if the language is exceptionally simple + \item or if somebody has a lot of money + \item PyPy is one point in a large design space of meta-solutions + \item uses tracing on the level of the interpreter (meta-tracing) to get speed + \pause + \item \textbf{In a way, the exact approach is not too important: let's write more meta-tools!} + \end{itemize} +\end{frame} + +\begin{frame} + \frametitle{Thank you! Questions?} + \begin{itemize} + \item writing good JITs for recent scripting languages is too hard! + \item only reasonable if the language is exceptionally simple + \item or if somebody has a lot of money + \item PyPy is one point in a large design space of meta-solutions + \item uses tracing on the level of the interpreter (meta-tracing) to get speed + \item \textbf{In a way, the exact approach is not too important: let's write more meta-tools!} + \end{itemize} +\end{frame} + +\begin{frame} + \frametitle{Possible Further Slides} + \hyperlink{necessary-hints}{\beamergotobutton{}} Getting Meta-Tracing to Work + + \hyperlink{feedback}{\beamergotobutton{}} Language-Specific Runtime Feedback + + \hyperlink{optimizations}{\beamergotobutton{}} Powerful General Optimizations + + \hyperlink{virtualizables}{\beamergotobutton{}} Optimizing Reified Frames + + \hyperlink{which-langs}{\beamergotobutton{}} Which Languages Can Meta-Tracing be Used With? 
+ + \hyperlink{OOVM}{\beamergotobutton{}} Using OO VMs as an implementation substrate + + \hyperlink{PE}{\beamergotobutton{}} Comparison with Partial Evaluation + +\end{frame} + +\begin{frame}[label=necessary-hints] + \frametitle{Getting Meta-Tracing to Work} + \begin{itemize} + \item Interpreter author needs add some hints to the interpreter + \item one hint to identify the bytecode dispatch loop + \item one hint to identify the jump bytecode + \item with these in place, meta-tracing works + \item but produces non-optimal code + \end{itemize} +\end{frame} + + +\begin{frame}[label=feedback] + \frametitle{Language-Specific Runtime Feedback} + Problems of Naive Meta-Tracing: + \begin{itemize} + \item user-level types are normal instances on the implementation level + \item thus no runtime feedback of user-level types + \item tracer does not know about invariants in the interpreter + \end{itemize} + \pause + \begin{block}{Solution in PyPy} + \begin{itemize} + \item introduce more hints that the interpreter-author can use + \item hints are annotation in the interpreter + \item they give information to the meta-tracer + \pause + \item one to induce runtime feedback of arbitrary information (typically types) + \item the second one to influence constant folding + \end{itemize} + \end{block} +\end{frame} + + +\begin{frame}[label=optimizations] + \frametitle{Powerful General Optimizations} + \begin{itemize} + \item Very powerful general optimizations on traces + \pause + \begin{block}{Heap Optimizations} + \begin{itemize} + \item escape analysis/allocation removal + \item remove short-lived objects + \item gets rid of the overhead of boxing primitive types + \item also reduces overhead of constant heap accesses + \end{itemize} + \end{block} + \end{itemize} +\end{frame} + +\begin{frame}[label=virtualizables] + \frametitle{Optimizing Reified Frames} + \begin{itemize} + \item Common problem in scripting languages + \item frames are reified in the language, i.e. 
can be accessed via reflection + \item used to implement the debugger in the language itself + \item or for more advanced usecases (backtracking in Smalltalk) + \item when using a JIT, quite expensive to keep them up-to-date + \pause + \begin{block}{Solution in PyPy} + \begin{itemize} + \item General mechanism for updating reified frames lazily + \item use deoptimization when frame objects are accessed by the program + \item interpreter just needs to mark the frame class + \end{itemize} + \end{block} + \end{itemize} +\end{frame} + + +\begin{frame}[label=which-langs] + \frametitle{Bonus: Which Languages Can Meta-Tracing be Used With?} + \begin{itemize} + \item To make meta-tracing useful, there needs to be some kind of runtime variability + \item that means it definitely works for all dynamically typed languages + \item ... but also for other languages with polymorphism that is not resolvable at compile time + \item most languages that have any kind of runtime work + \end{itemize} +\end{frame} + +\begin{frame}[label=OOVM] + \frametitle{Bonus: Using OO VMs as an implementation substrate} + \begin{block}{Benefits} + \begin{itemize} + \item higher level of implementation + \item the VM supplies a GC and mostly a JIT + \item better interoperability than what the C level provides + \item \texttt{invokedynamic} should make it possible to get language-specific runtime feedback + \end{itemize} + \end{block} + \pause + \begin{block}{Problems} + \begin{itemize} + \item can be hard to map concepts of the scripting language to + the host OO VM + \item performance is often not improved, and can be very bad, because of this + semantic mismatch + \item getting good performance needs a huge amount of tweaking + \item tools not really prepared to deal with people that care about + the shape of the generated assembler + \end{itemize} + \end{block} + \pause +\end{frame} + +\begin{frame}[label=PE] + \frametitle{Bonus: Comparison with Partial Evaluation} + \begin{itemize} + \pause + 
\item the only difference between meta-tracing and partial evaluation is that meta-tracing works + \pause + \item ... mostly kidding + \pause + \item very similar from the motivation and ideas + \item PE was never scaled up to perform well on large interpreters + \item classical PE mostly ahead of time + \item PE tried very carefully to select the right paths to inline and optimize + \item quite often this fails and inlines too much or too little + \item tracing is much more pragmatic: simply look what happens + \end{itemize} +\end{frame} + +\end{document} diff --git a/talk/icooolps2011/talk/talk.tex b/talk/icooolps2011/talk/talk.tex --- a/talk/icooolps2011/talk/talk.tex +++ b/talk/icooolps2011/talk/talk.tex @@ -437,7 +437,7 @@ |{\color{gray}$index_1$ = Map.getindex($map_1$, "a")}| |{\color{gray}guard($index_1$ != -1)}| $storage_1$ = $inst_1$.storage -$result_1$ = $storage_1$[$index_1$}] +$result_1$ = $storage_1$[$index_1$] # $inst_1$.getfield("b") |{\color{gray}$map_2$ = $inst_1$.map| diff --git a/talk/iwtc11/benchmarks/image/io.py b/talk/iwtc11/benchmarks/image/io.py --- a/talk/iwtc11/benchmarks/image/io.py +++ b/talk/iwtc11/benchmarks/image/io.py @@ -1,4 +1,6 @@ import os, re, array +from subprocess import Popen, PIPE, STDOUT + def mplayer(Image, fn='tv://', options=''): f = os.popen('mplayer -really-quiet -noframedrop ' + options + ' ' @@ -19,18 +21,18 @@ def view(self, img): assert img.typecode == 'B' if not self.width: - self.mplayer = os.popen('mplayer -really-quiet -noframedrop - ' + - '2> /dev/null ', 'w') - self.mplayer.write('YUV4MPEG2 W%d H%d F100:1 Ip A1:1\n' % - (img.width, img.height)) + w, h = img.width, img.height + self.mplayer = Popen(['mplayer', '-', '-benchmark', + '-demuxer', 'rawvideo', + '-rawvideo', 'w=%d:h=%d:format=y8' % (w, h), + '-really-quiet'], + stdin=PIPE, stdout=PIPE, stderr=PIPE) + self.width = img.width self.height = img.height - self.color_data = array.array('B', [127]) * (img.width * img.height / 2) assert self.width == 
img.width assert self.height == img.height - self.mplayer.write('FRAME\n') - img.tofile(self.mplayer) - self.color_data.tofile(self.mplayer) + img.tofile(self.mplayer.stdin) default_viewer = MplayerViewer() From noreply at buildbot.pypy.org Tue Jan 10 14:03:54 2012 From: noreply at buildbot.pypy.org (timo_jbo) Date: Tue, 10 Jan 2012 14:03:54 +0100 (CET) Subject: [pypy-commit] pypy strbuf_by_default: fix whitebox test that checks for W_StringObject, rather than W_AbstractStringObject. Message-ID: <20120110130354.8F39682110@wyvern.cs.uni-duesseldorf.de> Author: Timo Paulssen Branch: strbuf_by_default Changeset: r51201:9014cd34145f Date: 2012-01-10 13:56 +0100 http://bitbucket.org/pypy/pypy/changeset/9014cd34145f/ Log: fix whitebox test that checks for W_StringObject, rather than W_AbstractStringObject. diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -48,13 +48,13 @@ assert space.sliceindices(w_obj, w(3)) == (1,2,3) def test_fastpath_isinstance(self): - from pypy.objspace.std.stringobject import W_StringObject + from pypy.objspace.std.stringobject import W_AbstractStringObject, W_StringObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.iterobject import W_AbstractSeqIterObject from pypy.objspace.std.iterobject import W_SeqIterObject space = self.space - assert space._get_interplevel_cls(space.w_str) is W_StringObject + assert space._get_interplevel_cls(space.w_str) is W_AbstractStringObject assert space._get_interplevel_cls(space.w_int) is W_IntObject class X(W_StringObject): def __init__(self): From noreply at buildbot.pypy.org Tue Jan 10 14:11:31 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 10 Jan 2012 14:11:31 +0100 (CET) Subject: [pypy-commit] pypy default: reintroduce changes done in b6390a34f261 to push_arg_as_ffiptr in clibffi.py, somehow lost in a731ffd298b4 
Message-ID: <20120110131131.20B4482110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: Changeset: r51202:e2f82a5d9f5e Date: 2012-01-10 14:09 +0100 http://bitbucket.org/pypy/pypy/changeset/e2f82a5d9f5e/ Log: reintroduce changes done in b6390a34f261 to push_arg_as_ffiptr in clibffi.py, somehow lost in a731ffd298b4 diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -360,12 +363,36 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. 
+ assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' From noreply at buildbot.pypy.org Tue Jan 10 14:13:53 2012 From: noreply at buildbot.pypy.org (bivab) Date: Tue, 10 Jan 2012 14:13:53 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge default Message-ID: <20120110131353.889EE82110@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r51203:d094b25960ad Date: 2012-01-10 14:13 +0100 http://bitbucket.org/pypy/pypy/changeset/d094b25960ad/ Log: merge default diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. -PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/__init__.py @@ -0,0 +1,2 @@ +from _numpypy import * +from fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/fromnumeric.py @@ -0,0 +1,2400 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. 
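The clibffi commit above writes an integer argument into a foreign buffer one byte at a time when the Python-side and C-side sizes disagree, choosing the byte order from the platform's endianness. A minimal pure-Python sketch of that copy loop (a simplified illustration only, not the RPython code; `pack_int_bytes` is a hypothetical name):

```python
import sys

def pack_int_bytes(value, size, little_endian=(sys.byteorder == 'little')):
    # Emulate the byte-by-byte copy in push_arg_as_ffiptr: write 'value'
    # into a buffer of 'size' bytes, least-significant byte first on
    # little-endian platforms, most-significant byte first on big-endian.
    buf = bytearray(size)
    if little_endian:
        for i in range(size):
            buf[i] = value & 0xFF
            value >>= 8
    else:
        for i in range(size - 1, -1, -1):
            buf[i] = value & 0xFF
            value >>= 8
    return bytes(buf)
```

The same truncate-and-shift loop works for both orders; only the direction of the index changes, which is exactly the `_LITTLE_ENDIAN`/`_BIG_ENDIAN` split in the patch.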
+# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) +# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. +__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. + + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. + indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. + out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. 
Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. + + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. + + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplemented('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. + order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. + + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. + + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raise if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose make the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modiying the + # initial object. 
+ >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties. Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. 
Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. 
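The general rule quoted in the `choose` docstring above — ``c[a[I]][I]`` for each index ``I`` — reduces, for 1-D inputs with the default ``mode='raise'``, to a short loop. A sketch of that semantics (an illustration of the documented rule, not the NumPy implementation):

```python
def choose_1d(index_array, choices):
    # For each position i, pick choices[index_array[i]][i], mirroring
    # np.choose's documented rule c[a[I]][I] with mode='raise'.
    n = len(choices)
    result = []
    for i, k in enumerate(index_array):
        if not 0 <= k < n:
            raise ValueError("invalid entry in choice array")
        result.append(choices[k][i])
    return result
```

Running it on the docstring's own example data reproduces ``[20, 31, 12, 3]``.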
+ + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... ) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. 
+ axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. + + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. 
+ axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. + + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. 
+ + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. + lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. 
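The extended sort order described above (NumPy >= 1.4.0 sorts ``nan`` values to the end of real arrays) can be sketched for plain Python floats with a two-part sort key (a simplified sketch for 1-D real data only):

```python
import math

def nan_last_sort(values):
    # Sort real values ascending, with nan entries moved to the end,
    # mirroring the extended sort order in the docstring above.
    return sorted(values, key=lambda v: (math.isnan(v), v))
```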
+ + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '>> x + array([(1, 0), (0, 1)], + dtype=[('x', '>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed. 
+ + See Also + -------- + ndarray.argmax, argmin + amax : The maximum value along a given axis. + unravel_index : Convert a flat index into an index tuple. + + Notes + ----- + In case of multiple occurrences of the maximum values, the indices + corresponding to the first occurrence are returned. + + Examples + -------- + >>> a = np.arange(6).reshape(2,3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> np.argmax(a) + 5 + >>> np.argmax(a, axis=0) + array([1, 1, 1]) + >>> np.argmax(a, axis=1) + array([2, 2]) + + >>> b = np.arange(6) + >>> b[1] = 5 + >>> b + array([0, 5, 2, 3, 4, 5]) + >>> np.argmax(b) # Only the first occurrence is returned. + 1 + + """ + if not hasattr(a, 'argmax'): + a = numpypy.array(a) + return a.argmax() + + +def argmin(a, axis=None): + """ + Return the indices of the minimum values along an axis. + + See Also + -------- + argmax : Similar function. Please refer to `numpy.argmax` for detailed + documentation. + + """ + if not hasattr(a, 'argmin'): + a = numpypy.array(a) + return a.argmin() + + +def searchsorted(a, v, side='left'): + """ + Find indices where elements should be inserted to maintain order. + + Find the indices into a sorted array `a` such that, if the corresponding + elements in `v` were inserted before the indices, the order of `a` would + be preserved. + + Parameters + ---------- + a : 1-D array_like + Input array, sorted in ascending order. + v : array_like + Values to insert into `a`. + side : {'left', 'right'}, optional + If 'left', the index of the first suitable location found is given. If + 'right', return the last such index. If there is no suitable + index, return either 0 or N (where N is the length of `a`). + + Returns + ------- + indices : array of ints + Array of insertion points with the same shape as `v`. + + See Also + -------- + sort : Return a sorted copy of an array. + histogram : Produce histogram from 1-D data. + + Notes + ----- + Binary search is used to find the required insertion points. 
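The ``searchsorted`` behaviour documented above maps directly onto the standard library's ``bisect`` module: ``side='left'`` is ``bisect_left`` and ``side='right'`` is ``bisect_right``. A sketch for list inputs (an illustration, not the interp-level method the stub is waiting on):

```python
from bisect import bisect_left, bisect_right

def searchsorted_sketch(a, v, side='left'):
    # Return insertion points that keep the sorted sequence 'a' sorted,
    # one per element of 'v', like np.searchsorted.
    insert = bisect_left if side == 'left' else bisect_right
    return [insert(a, x) for x in v]
```

The docstring's examples come out the same way: inserting 3 into ``[1,2,3,4,5]`` gives index 2 on the left and 3 on the right.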
+ + As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing + `nan` values. The enhanced sort order is documented in `sort`. + + Examples + -------- + >>> np.searchsorted([1,2,3,4,5], 3) + 2 + >>> np.searchsorted([1,2,3,4,5], 3, side='right') + 3 + >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]) + array([0, 5, 1, 2]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def resize(a, new_shape): + """ + Return a new array with the specified shape. + + If the new array is larger than the original array, then the new + array is filled with repeated copies of `a`. Note that this behavior + is different from a.resize(new_shape) which fills with zeros instead + of repeated copies of `a`. + + Parameters + ---------- + a : array_like + Array to be resized. + + new_shape : int or tuple of int + Shape of resized array. + + Returns + ------- + reshaped_array : ndarray + The new array is formed from the data in the old array, repeated + if necessary to fill out the required number of elements. The + data are repeated in the order that they are stored in memory. + + See Also + -------- + ndarray.resize : resize an array in-place. + + Examples + -------- + >>> a=np.array([[0,1],[2,3]]) + >>> np.resize(a,(1,4)) + array([[0, 1, 2, 3]]) + >>> np.resize(a,(2,4)) + array([[0, 1, 2, 3], + [0, 1, 2, 3]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def squeeze(a): + """ + Remove single-dimensional entries from the shape of an array. + + Parameters + ---------- + a : array_like + Input data. + + Returns + ------- + squeezed : ndarray + The input array, but with with all dimensions of length 1 + removed. Whenever possible, a view on `a` is returned. + + Examples + -------- + >>> x = np.array([[[0], [1], [2]]]) + >>> x.shape + (1, 3, 1) + >>> np.squeeze(x).shape + (3,) + + """ + raise NotImplemented('Waiting on interp level method') + + +def diagonal(a, offset=0, axis1=0, axis2=1): + """ + Return specified diagonals. 
+
+    If `a` is 2-D, returns the diagonal of `a` with the given offset,
+    i.e., the collection of elements of the form ``a[i, i+offset]``. If
+    `a` has more than two dimensions, then the axes specified by `axis1`
+    and `axis2` are used to determine the 2-D sub-array whose diagonal is
+    returned. The shape of the resulting array can be determined by
+    removing `axis1` and `axis2` and appending an index to the right equal
+    to the size of the resulting diagonals.
+
+    Parameters
+    ----------
+    a : array_like
+        Array from which the diagonals are taken.
+    offset : int, optional
+        Offset of the diagonal from the main diagonal. Can be positive or
+        negative. Defaults to main diagonal (0).
+    axis1 : int, optional
+        Axis to be used as the first axis of the 2-D sub-arrays from which
+        the diagonals should be taken. Defaults to first axis (0).
+    axis2 : int, optional
+        Axis to be used as the second axis of the 2-D sub-arrays from
+        which the diagonals should be taken. Defaults to second axis (1).
+
+    Returns
+    -------
+    array_of_diagonals : ndarray
+        If `a` is 2-D, a 1-D array containing the diagonal is returned.
+        If the dimension of `a` is larger, then an array of diagonals is
+        returned, "packed" from left-most dimension to right-most (e.g.,
+        if `a` is 3-D, then the diagonals are "packed" along rows).
+
+    Raises
+    ------
+    ValueError
+        If the dimension of `a` is less than 2.
+
+    See Also
+    --------
+    diag : MATLAB work-a-like for 1-D and 2-D arrays.
+    diagflat : Create diagonal arrays.
+    trace : Sum along diagonals.
+
+    Examples
+    --------
+    >>> a = np.arange(4).reshape(2,2)
+    >>> a
+    array([[0, 1],
+           [2, 3]])
+    >>> a.diagonal()
+    array([0, 3])
+    >>> a.diagonal(1)
+    array([1])
+
+    A 3-D example:
+
+    >>> a = np.arange(8).reshape(2,2,2); a
+    array([[[0, 1],
+            [2, 3]],
+           [[4, 5],
+            [6, 7]]])
+    >>> a.diagonal(0, # Main diagonals of two arrays created by skipping
+    ...            0, # across the outer(left)-most axis last and
+    ...            1) # the "middle" (row) axis first.
+    array([[0, 6],
+           [1, 7]])
+
+    The sub-arrays whose main diagonals we just obtained; note that each
+    corresponds to fixing the right-most (column) axis, and that the
+    diagonals are "packed" in rows.
+
+    >>> a[:,:,0] # main diagonal is [0 6]
+    array([[0, 2],
+           [4, 6]])
+    >>> a[:,:,1] # main diagonal is [1 7]
+    array([[1, 3],
+           [5, 7]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None):
+    """
+    Return the sum along diagonals of the array.
+
+    If `a` is 2-D, the sum along its diagonal with the given offset
+    is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i.
+
+    If `a` has more than two dimensions, then the axes specified by axis1 and
+    axis2 are used to determine the 2-D sub-arrays whose traces are returned.
+    The shape of the resulting array is the same as that of `a` with `axis1`
+    and `axis2` removed.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array, from which the diagonals are taken.
+    offset : int, optional
+        Offset of the diagonal from the main diagonal. Can be both positive
+        and negative. Defaults to 0.
+    axis1, axis2 : int, optional
+        Axes to be used as the first and second axis of the 2-D sub-arrays
+        from which the diagonals should be taken. Defaults are the first two
+        axes of `a`.
+    dtype : dtype, optional
+        Determines the data-type of the returned array and of the accumulator
+        where the elements are summed. If dtype has the value None and `a` is
+        of integer type of precision less than the default integer
+        precision, then the default integer precision is used. Otherwise,
+        the precision is the same as that of `a`.
+    out : ndarray, optional
+        Array into which the output is placed. Its type is preserved and
+        it must be of the right shape to hold the output.
+
+    Returns
+    -------
+    sum_along_diagonals : ndarray
+        If `a` is 2-D, the sum along the diagonal is returned. If `a` has
+        larger dimensions, then an array of sums along diagonals is returned.
+
+    See Also
+    --------
+    diag, diagonal, diagflat
+
+    Examples
+    --------
+    >>> np.trace(np.eye(3))
+    3.0
+    >>> a = np.arange(8).reshape((2,2,2))
+    >>> np.trace(a)
+    array([6, 8])
+
+    >>> a = np.arange(24).reshape((2,2,2,3))
+    >>> np.trace(a).shape
+    (2, 3)
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+def ravel(a, order='C'):
+    """
+    Return a flattened array.
+
+    A 1-D array, containing the elements of the input, is returned. A copy is
+    made only if needed.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array. The elements in ``a`` are read in the order specified by
+        `order`, and packed as a 1-D array.
+    order : {'C','F', 'A', 'K'}, optional
+        The elements of ``a`` are read in this order. 'C' means to view
+        the elements in C (row-major) order. 'F' means to view the elements
+        in Fortran (column-major) order. 'A' means to view the elements
+        in 'F' order if a is Fortran contiguous, 'C' order otherwise.
+        'K' means to view the elements in the order they occur in memory,
+        except for reversing the data when strides are negative.
+        By default, 'C' order is used.
+
+    Returns
+    -------
+    1d_array : ndarray
+        Output of the same dtype as `a`, and of shape ``(a.size,)``.
+
+    See Also
+    --------
+    ndarray.flat : 1-D iterator over an array.
+    ndarray.flatten : 1-D array copy of the elements of an array
+        in row-major order.
+
+    Notes
+    -----
+    In row-major order, the row index varies the slowest, and the column
+    index the quickest. This can be generalized to multiple dimensions,
+    where row-major order implies that the index along the first axis
+    varies slowest, and the index along the last quickest. The opposite holds
+    for Fortran-, or column-major, mode.
+
+    Examples
+    --------
+    It is equivalent to ``reshape(-1, order=order)``.
+
+    >>> x = np.array([[1, 2, 3], [4, 5, 6]])
+    >>> print np.ravel(x)
+    [1 2 3 4 5 6]
+
+    >>> print x.reshape(-1)
+    [1 2 3 4 5 6]
+
+    >>> print np.ravel(x, order='F')
+    [1 4 2 5 3 6]
+
+    When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering:
+
+    >>> print np.ravel(x.T)
+    [1 4 2 5 3 6]
+    >>> print np.ravel(x.T, order='A')
+    [1 2 3 4 5 6]
+
+    When ``order`` is 'K', it will preserve orderings that are neither 'C'
+    nor 'F', but won't reverse axes:
+
+    >>> a = np.arange(3)[::-1]; a
+    array([2, 1, 0])
+    >>> a.ravel(order='C')
+    array([2, 1, 0])
+    >>> a.ravel(order='K')
+    array([2, 1, 0])
+
+    >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a
+    array([[[ 0,  2,  4],
+            [ 1,  3,  5]],
+           [[ 6,  8, 10],
+            [ 7,  9, 11]]])
+    >>> a.ravel(order='C')
+    array([ 0,  2,  4,  1,  3,  5,  6,  8, 10,  7,  9, 11])
+    >>> a.ravel(order='K')
+    array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def nonzero(a):
+    """
+    Return the indices of the elements that are non-zero.
+
+    Returns a tuple of arrays, one for each dimension of `a`, containing
+    the indices of the non-zero elements in that dimension. The
+    corresponding non-zero values can be obtained with::
+
+        a[nonzero(a)]
+
+    To group the indices by element, rather than dimension, use::
+
+        transpose(nonzero(a))
+
+    The result of this is always a 2-D array, with a row for
+    each non-zero element.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.
+
+    Returns
+    -------
+    tuple_of_arrays : tuple
+        Indices of elements that are non-zero.
+
+    See Also
+    --------
+    flatnonzero :
+        Return indices that are non-zero in the flattened version of the input
+        array.
+    ndarray.nonzero :
+        Equivalent ndarray method.
+    count_nonzero :
+        Counts the number of non-zero elements in the input array.
+
+    Examples
+    --------
+    >>> x = np.eye(3)
+    >>> x
+    array([[ 1.,  0.,  0.],
+           [ 0.,  1.,  0.],
+           [ 0.,  0.,  1.]])
+    >>> np.nonzero(x)
+    (array([0, 1, 2]), array([0, 1, 2]))
+
+    >>> x[np.nonzero(x)]
+    array([ 1.,  1.,  1.])
+    >>> np.transpose(np.nonzero(x))
+    array([[0, 0],
+           [1, 1],
+           [2, 2]])
+
+    A common use for ``nonzero`` is to find the indices of an array, where
+    a condition is True. Given an array `a`, the condition `a` > 3 is a
+    boolean array and since False is interpreted as 0, np.nonzero(a > 3)
+    yields the indices of the `a` where the condition is true.
+
+    >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]])
+    >>> a > 3
+    array([[False, False, False],
+           [ True,  True,  True],
+           [ True,  True,  True]], dtype=bool)
+    >>> np.nonzero(a > 3)
+    (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+    The ``nonzero`` method of the boolean array can also be called.
+
+    >>> (a > 3).nonzero()
+    (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def shape(a):
+    """
+    Return the shape of an array.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.
+
+    Returns
+    -------
+    shape : tuple of ints
+        The elements of the shape tuple give the lengths of the
+        corresponding array dimensions.
+
+    See Also
+    --------
+    alen
+    ndarray.shape : Equivalent array method.
+
+    Examples
+    --------
+    >>> np.shape(np.eye(3))
+    (3, 3)
+    >>> np.shape([[1, 2]])
+    (1, 2)
+    >>> np.shape([0])
+    (1,)
+    >>> np.shape(0)
+    ()
+
+    >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
+    >>> np.shape(a)
+    (2,)
+    >>> a.shape
+    (2,)
+
+    """
+    if not hasattr(a, 'shape'):
+        a = numpypy.array(a)
+    return a.shape
+
+
+def compress(condition, a, axis=None, out=None):
+    """
+    Return selected slices of an array along given axis.
+
+    When working along a given axis, a slice along that axis is returned in
+    `output` for each index where `condition` evaluates to True. When
+    working on a 1-D array, `compress` is equivalent to `extract`.
+
+    Parameters
+    ----------
+    condition : 1-D array of bools
+        Array that selects which entries to return. If len(condition)
+        is less than the size of `a` along the given axis, then output is
+        truncated to the length of the condition array.
+    a : array_like
+        Array from which to extract a part.
+    axis : int, optional
+        Axis along which to take slices. If None (default), work on the
+        flattened array.
+    out : ndarray, optional
+        Output array. Its type is preserved and it must be of the right
+        shape to hold the output.
+
+    Returns
+    -------
+    compressed_array : ndarray
+        A copy of `a` without the slices along axis for which `condition`
+        is false.
+
+    See Also
+    --------
+    take, choose, diag, diagonal, select
+    ndarray.compress : Equivalent method.
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Examples
+    --------
+    >>> a = np.array([[1, 2], [3, 4], [5, 6]])
+    >>> a
+    array([[1, 2],
+           [3, 4],
+           [5, 6]])
+    >>> np.compress([0, 1], a, axis=0)
+    array([[3, 4]])
+    >>> np.compress([False, True, True], a, axis=0)
+    array([[3, 4],
+           [5, 6]])
+    >>> np.compress([False, True], a, axis=1)
+    array([[2],
+           [4],
+           [6]])
+
+    Working on the flattened array does not return slices along an axis but
+    selects elements.
+
+    >>> np.compress([False, True], a)
+    array([2])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def clip(a, a_min, a_max, out=None):
+    """
+    Clip (limit) the values in an array.
+
+    Given an interval, values outside the interval are clipped to
+    the interval edges. For example, if an interval of ``[0, 1]``
+    is specified, values smaller than 0 become 0, and values larger
+    than 1 become 1.
+
+    Parameters
+    ----------
+    a : array_like
+        Array containing elements to clip.
+    a_min : scalar or array_like
+        Minimum value.
+    a_max : scalar or array_like
+        Maximum value. If `a_min` or `a_max` are array_like, then they will
+        be broadcasted to the shape of `a`.
+    out : ndarray, optional
+        The results will be placed in this array. It may be the input
+        array for in-place clipping. `out` must be of the right shape
+        to hold the output. Its type is preserved.
+
+    Returns
+    -------
+    clipped_array : ndarray
+        An array with the elements of `a`, but where values
+        < `a_min` are replaced with `a_min`, and those > `a_max`
+        with `a_max`.
+
+    See Also
+    --------
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Examples
+    --------
+    >>> a = np.arange(10)
+    >>> np.clip(a, 1, 8)
+    array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8])
+    >>> a
+    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+    >>> np.clip(a, 3, 6, out=a)
+    array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
+    >>> a = np.arange(10)
+    >>> a
+    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+    >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8)
+    array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def sum(a, axis=None, dtype=None, out=None):
+    """
+    Sum of array elements over a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Elements to sum.
+    axis : integer, optional
+        Axis over which the sum is taken. By default `axis` is None,
+        and all elements are summed.
+    dtype : dtype, optional
+        The type of the returned array and of the accumulator in which
+        the elements are summed. By default, the dtype of `a` is used.
+        An exception is when `a` has an integer type with less precision
+        than the default platform integer. In that case, the default
+        platform integer is used instead.
+    out : ndarray, optional
+        Array into which the output is placed. By default, a new array is
+        created. If `out` is given, it must be of the appropriate shape
+        (the shape of `a` with `axis` removed, i.e.,
+        ``numpy.delete(a.shape, axis)``). Its type is preserved. See
+        `doc.ufuncs` (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    sum_along_axis : ndarray
+        An array with the same shape as `a`, with the specified
+        axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar
+        is returned. If an output array is specified, a reference to
+        `out` is returned.
+
+    See Also
+    --------
+    ndarray.sum : Equivalent method.
+
+    cumsum : Cumulative sum of array elements.
+
+    trapz : Integration of array values using the composite trapezoidal rule.
+
+    mean, average
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow.
+
+    Examples
+    --------
+    >>> np.sum([0.5, 1.5])
+    2.0
+    >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
+    1
+    >>> np.sum([[0, 1], [0, 5]])
+    6
+    >>> np.sum([[0, 1], [0, 5]], axis=0)
+    array([0, 6])
+    >>> np.sum([[0, 1], [0, 5]], axis=1)
+    array([1, 5])
+
+    If the accumulator is too small, overflow occurs:
+
+    >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
+    -128
+
+    """
+    if not hasattr(a, "sum"):
+        a = numpypy.array(a)
+    return a.sum()
+
+
+def product(a, axis=None, dtype=None, out=None):
+    """
+    Return the product of array elements over a given axis.
+
+    See Also
+    --------
+    prod : equivalent function; see for details.
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def sometrue(a, axis=None, out=None):
+    """
+    Check whether some values are true.
+
+    Refer to `any` for full documentation.
+
+    See Also
+    --------
+    any : equivalent function
+
+    """
+    if not hasattr(a, 'any'):
+        a = numpypy.array(a)
+    return a.any()
+
+
+def alltrue(a, axis=None, out=None):
+    """
+    Check if all elements of input array are true.
+
+    See Also
+    --------
+    numpy.all : Equivalent function; see for details.
+
+    """
+    if not hasattr(a, 'all'):
+        a = numpypy.array(a)
+    return a.all()
+
+def any(a, axis=None, out=None):
+    """
+    Test whether any array element along a given axis evaluates to True.
+
+    Returns a single boolean unless `axis` is not ``None``.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array or object that can be converted to an array.
+    axis : int, optional
+        Axis along which a logical OR is performed.
+        The default (`axis` = `None`) is to perform a logical OR over a
+        flattened input array. `axis` may be negative, in which case it
+        counts from the last to the first axis.
+    out : ndarray, optional
+        Alternate output array in which to place the result. It must have
+        the same shape as the expected output and its type is preserved
+        (e.g., if it is of type float, then it will remain so, returning
+        1.0 for True and 0.0 for False, regardless of the type of `a`).
+        See `doc.ufuncs` (Section "Output arguments") for details.
+
+    Returns
+    -------
+    any : bool or ndarray
+        A new boolean or `ndarray` is returned unless `out` is specified,
+        in which case a reference to `out` is returned.
+
+    See Also
+    --------
+    ndarray.any : equivalent method
+
+    all : Test whether all elements along a given axis evaluate to True.
+
+    Notes
+    -----
+    Not a Number (NaN), positive infinity and negative infinity evaluate
+    to `True` because these are not equal to zero.
+
+    Examples
+    --------
+    >>> np.any([[True, False], [True, True]])
+    True
+
+    >>> np.any([[True, False], [False, False]], axis=0)
+    array([ True, False], dtype=bool)
+
+    >>> np.any([-1, 0, 5])
+    True
+
+    >>> np.any(np.nan)
+    True
+
+    >>> o=np.array([False])
+    >>> z=np.any([-1, 4, 5], out=o)
+    >>> z, o
+    (array([ True], dtype=bool), array([ True], dtype=bool))
+    >>> # Check now that z is a reference to o
+    >>> z is o
+    True
+    >>> id(z), id(o) # identity of z and o # doctest: +SKIP
+    (191614240, 191614240)
+
+    """
+    if not hasattr(a, 'any'):
+        a = numpypy.array(a)
+    return a.any()
+
+
+def all(a, axis=None, out=None):
+    """
+    Test whether all array elements along a given axis evaluate to True.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array or object that can be converted to an array.
+    axis : int, optional
+        Axis along which a logical AND is performed.
+        The default (`axis` = `None`) is to perform a logical AND
+        over a flattened input array. `axis` may be negative, in which
+        case it counts from the last to the first axis.
+    out : ndarray, optional
+        Alternate output array in which to place the result.
+        It must have the same shape as the expected output and its
+        type is preserved (e.g., if ``dtype(out)`` is float, the result
+        will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section
+        "Output arguments") for more details.
+
+    Returns
+    -------
+    all : ndarray, bool
+        A new boolean or array is returned unless `out` is specified,
+        in which case a reference to `out` is returned.
+
+    See Also
+    --------
+    ndarray.all : equivalent method
+
+    any : Test whether any element along a given axis evaluates to True.
+
+    Notes
+    -----
+    Not a Number (NaN), positive infinity and negative infinity
+    evaluate to `True` because these are not equal to zero.
+
+    Examples
+    --------
+    >>> np.all([[True,False],[True,True]])
+    False
+
+    >>> np.all([[True,False],[True,True]], axis=0)
+    array([ True, False], dtype=bool)
+
+    >>> np.all([-1, 4, 5])
+    True
+
+    >>> np.all([1.0, np.nan])
+    True
+
+    >>> o=np.array([False])
+    >>> z=np.all([-1, 4, 5], out=o)
+    >>> id(z), id(o), z # doctest: +SKIP
+    (28293632, 28293632, array([ True], dtype=bool))
+
+    """
+    if not hasattr(a, 'all'):
+        a = numpypy.array(a)
+    return a.all()
+
+
+def cumsum(a, axis=None, dtype=None, out=None):
+    """
+    Return the cumulative sum of the elements along a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.
+    axis : int, optional
+        Axis along which the cumulative sum is computed. The default
+        (None) is to compute the cumsum over the flattened array.
+    dtype : dtype, optional
+        Type of the returned array and of the accumulator in which the
+        elements are summed. If `dtype` is not specified, it defaults
+        to the dtype of `a`, unless `a` has an integer dtype with a
+        precision less than that of the default platform integer. In
+        that case, the default platform integer is used.
+    out : ndarray, optional
+        Alternative output array in which to place the result. It must
+        have the same shape and buffer length as the expected output
+        but the type will be cast if necessary. See `doc.ufuncs`
+        (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    cumsum_along_axis : ndarray.
+        A new array holding the result is returned unless `out` is
+        specified, in which case a reference to `out` is returned. The
+        result has the same size as `a`, and the same shape as `a` if
+        `axis` is not None or `a` is a 1-d array.
+
+
+    See Also
+    --------
+    sum : Sum array elements.
+
+    trapz : Integration of array values using the composite trapezoidal rule.
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow.
+
+    Examples
+    --------
+    >>> a = np.array([[1,2,3], [4,5,6]])
+    >>> a
+    array([[1, 2, 3],
+           [4, 5, 6]])
+    >>> np.cumsum(a)
+    array([ 1,  3,  6, 10, 15, 21])
+    >>> np.cumsum(a, dtype=float)  # specifies type of output value(s)
+    array([  1.,   3.,   6.,  10.,  15.,  21.])
+
+    >>> np.cumsum(a,axis=0)  # sum over rows for each of the 3 columns
+    array([[1, 2, 3],
+           [5, 7, 9]])
+    >>> np.cumsum(a,axis=1)  # sum over columns for each of the 2 rows
+    array([[ 1,  3,  6],
+           [ 4,  9, 15]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def cumproduct(a, axis=None, dtype=None, out=None):
+    """
+    Return the cumulative product over the given axis.
+
+
+    See Also
+    --------
+    cumprod : equivalent function; see for details.
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def ptp(a, axis=None, out=None):
+    """
+    Range of values (maximum - minimum) along an axis.
+
+    The name of the function comes from the acronym for 'peak to peak'.
+
+    Parameters
+    ----------
+    a : array_like
+        Input values.
+    axis : int, optional
+        Axis along which to find the peaks. By default, flatten the
+        array.
+    out : array_like
+        Alternative output array in which to place the result. It must
+        have the same shape and buffer length as the expected output,
+        but the type of the output values will be cast if necessary.
+
+    Returns
+    -------
+    ptp : ndarray
+        A new array holding the result, unless `out` was
+        specified, in which case a reference to `out` is returned.
+
+    Examples
+    --------
+    >>> x = np.arange(4).reshape((2,2))
+    >>> x
+    array([[0, 1],
+           [2, 3]])
+
+    >>> np.ptp(x, axis=0)
+    array([2, 2])
+
+    >>> np.ptp(x, axis=1)
+    array([1, 1])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def amax(a, axis=None, out=None):
+    """
+    Return the maximum of an array or maximum along an axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    axis : int, optional
+        Axis along which to operate. By default flattened input is used.
+    out : ndarray, optional
+        Alternate output array in which to place the result. Must be of
+        the same shape and buffer length as the expected output. See
+        `doc.ufuncs` (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    amax : ndarray or scalar
+        Maximum of `a`. If `axis` is None, the result is a scalar value.
+        If `axis` is given, the result is an array of dimension
+        ``a.ndim - 1``.
+
+    See Also
+    --------
+    nanmax : NaN values are ignored instead of being propagated.
+    fmax : same behavior as the C99 fmax function.
+    argmax : indices of the maximum values.
+
+    Notes
+    -----
+    NaN values are propagated, that is if at least one item is NaN, the
+    corresponding max value will be NaN as well. To ignore NaN values
+    (MATLAB behavior), please use nanmax.
+
+    Examples
+    --------
+    >>> a = np.arange(4).reshape((2,2))
+    >>> a
+    array([[0, 1],
+           [2, 3]])
+    >>> np.amax(a)
+    3
+    >>> np.amax(a, axis=0)
+    array([2, 3])
+    >>> np.amax(a, axis=1)
+    array([1, 3])
+
+    >>> b = np.arange(5, dtype=np.float)
+    >>> b[2] = np.NaN
+    >>> np.amax(b)
+    nan
+    >>> np.nanmax(b)
+    4.0
+
+    """
+    if not hasattr(a, "max"):
+        a = numpypy.array(a)
+    return a.max()
+
+
+def amin(a, axis=None, out=None):
+    """
+    Return the minimum of an array or minimum along an axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    axis : int, optional
+        Axis along which to operate. By default a flattened input is used.
+    out : ndarray, optional
+        Alternative output array in which to place the result. Must
+        be of the same shape and buffer length as the expected output.
+        See `doc.ufuncs` (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    amin : ndarray
+        A new array or a scalar array with the result.
+
+    See Also
+    --------
+    nanmin: nan values are ignored instead of being propagated
+    fmin: same behavior as the C99 fmin function
+    argmin: Return the indices of the minimum values.
+
+    amax, nanmax, fmax
+
+    Notes
+    -----
+    NaN values are propagated, that is if at least one item is nan, the
+    corresponding min value will be nan as well. To ignore NaN values
+    (MATLAB behavior), please use nanmin.
+
+    Examples
+    --------
+    >>> a = np.arange(4).reshape((2,2))
+    >>> a
+    array([[0, 1],
+           [2, 3]])
+    >>> np.amin(a)           # Minimum of the flattened array
+    0
+    >>> np.amin(a, axis=0)   # Minima along the first axis
+    array([0, 1])
+    >>> np.amin(a, axis=1)   # Minima along the second axis
+    array([0, 2])
+
+    >>> b = np.arange(5, dtype=np.float)
+    >>> b[2] = np.NaN
+    >>> np.amin(b)
+    nan
+    >>> np.nanmin(b)
+    0.0
+
+    """
+    # amin() is equivalent to min()
+    if not hasattr(a, 'min'):
+        a = numpypy.array(a)
+    return a.min()
+
+def alen(a):
+    """
+    Return the length of the first dimension of the input array.
+
+    Parameters
+    ----------
+    a : array_like
+       Input array.
+
+    Returns
+    -------
+    l : int
+       Length of the first dimension of `a`.
+
+    See Also
+    --------
+    shape, size
+
+    Examples
+    --------
+    >>> a = np.zeros((7,4,5))
+    >>> a.shape[0]
+    7
+    >>> np.alen(a)
+    7
+
+    """
+    if not hasattr(a, 'shape'):
+        a = numpypy.array(a)
+    return a.shape[0]
+
+
+def prod(a, axis=None, dtype=None, out=None):
+    """
+    Return the product of array elements over a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    axis : int, optional
+        Axis over which the product is taken. By default, the product
+        of all elements is calculated.
+    dtype : data-type, optional
+        The data-type of the returned array, as well as of the accumulator
+        in which the elements are multiplied. By default, if `a` is of
+        integer type, `dtype` is the default platform integer. (Note: if
+        the type of `a` is unsigned, then so is `dtype`.) Otherwise,
+        the dtype is the same as that of `a`.
+    out : ndarray, optional
+        Alternative output array in which to place the result. It must have
+        the same shape as the expected output, but the type of the
+        output values will be cast if necessary.
+
+    Returns
+    -------
+    product_along_axis : ndarray, see `dtype` parameter above.
+        An array shaped as `a` but with the specified axis removed.
+        Returns a reference to `out` if specified.
+
+    See Also
+    --------
+    ndarray.prod : equivalent method
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow. That means that, on a 32-bit platform:
+
+    >>> x = np.array([536870910, 536870910, 536870910, 536870910])
+    >>> np.prod(x) #random
+    16
+
+    Examples
+    --------
+    By default, calculate the product of all elements:
+
+    >>> np.prod([1.,2.])
+    2.0
+
+    Even when the input array is two-dimensional:
+
+    >>> np.prod([[1.,2.],[3.,4.]])
+    24.0
+
+    But we can also specify the axis over which to multiply:
+
+    >>> np.prod([[1.,2.],[3.,4.]], axis=1)
+    array([  2.,  12.])
+
+    If the type of `x` is unsigned, then the output type is
+    the unsigned platform integer:
+
+    >>> x = np.array([1, 2, 3], dtype=np.uint8)
+    >>> np.prod(x).dtype == np.uint
+    True
+
+    If `x` is of a signed integer type, then the output type
+    is the default platform integer:
+
+    >>> x = np.array([1, 2, 3], dtype=np.int8)
+    >>> np.prod(x).dtype == np.int
+    True
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def cumprod(a, axis=None, dtype=None, out=None):
+    """
+    Return the cumulative product of elements along a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.
+    axis : int, optional
+        Axis along which the cumulative product is computed. By default
+        the input is flattened.
+    dtype : dtype, optional
+        Type of the returned array, as well as of the accumulator in which
+        the elements are multiplied. If *dtype* is not specified, it
+        defaults to the dtype of `a`, unless `a` has an integer dtype with
+        a precision less than that of the default platform integer. In
+        that case, the default platform integer is used instead.
+    out : ndarray, optional
+        Alternative output array in which to place the result. It must
+        have the same shape and buffer length as the expected output
+        but the type of the resulting values will be cast if necessary.
+
+    Returns
+    -------
+    cumprod : ndarray
+        A new array holding the result is returned unless `out` is
+        specified, in which case a reference to out is returned.
+
+    See Also
+    --------
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow.
+
+    Examples
+    --------
+    >>> a = np.array([1,2,3])
+    >>> np.cumprod(a) # intermediate results 1, 1*2
+    ...               # total product 1*2*3 = 6
+    array([1, 2, 6])
+    >>> a = np.array([[1, 2, 3], [4, 5, 6]])
+    >>> np.cumprod(a, dtype=float) # specify type of output
+    array([   1.,    2.,    6.,   24.,  120.,  720.])
+
+    The cumulative product for each column (i.e., over the rows) of `a`:
+
+    >>> np.cumprod(a, axis=0)
+    array([[ 1,  2,  3],
+           [ 4, 10, 18]])
+
+    The cumulative product for each row (i.e. over the columns) of `a`:
+
+    >>> np.cumprod(a,axis=1)
+    array([[  1,   2,   6],
+           [  4,  20, 120]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def ndim(a):
+    """
+    Return the number of dimensions of an array.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array. If it is not already an ndarray, a conversion is
+        attempted.
+
+    Returns
+    -------
+    number_of_dimensions : int
+        The number of dimensions in `a`. Scalars are zero-dimensional.
+
+    See Also
+    --------
+    ndarray.ndim : equivalent method
+    shape : dimensions of array
+    ndarray.shape : dimensions of array
+
+    Examples
+    --------
+    >>> np.ndim([[1,2,3],[4,5,6]])
+    2
+    >>> np.ndim(np.array([[1,2,3],[4,5,6]]))
+    2
+    >>> np.ndim(1)
+    0
+
+    """
+    if not hasattr(a, 'ndim'):
+        a = numpypy.array(a)
+    return a.ndim
+
+
+def rank(a):
+    """
+    Return the number of dimensions of an array.
+
+    If `a` is not already an array, a conversion is attempted.
+    Scalars are zero dimensional.
+
+    Parameters
+    ----------
+    a : array_like
+        Array whose number of dimensions is desired. If `a` is not an array,
+        a conversion is attempted.
+
+    Returns
+    -------
+    number_of_dimensions : int
+        The number of dimensions in the array.
+
+    See Also
+    --------
+    ndim : equivalent function
+    ndarray.ndim : equivalent property
+    shape : dimensions of array
+    ndarray.shape : dimensions of array
+
+    Notes
+    -----
+    In the old Numeric package, `rank` was the term used for the number of
+    dimensions, but in Numpy `ndim` is used instead.
+
+    Examples
+    --------
+    >>> np.rank([1,2,3])
+    1
+    >>> np.rank(np.array([[1,2,3],[4,5,6]]))
+    2
+    >>> np.rank(1)
+    0
+
+    """
+    if not hasattr(a, 'ndim'):
+        a = numpypy.array(a)
+    return a.ndim
+
+
+def size(a, axis=None):
+    """
+    Return the number of elements along a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    axis : int, optional
+        Axis along which the elements are counted. By default, give
+        the total number of elements.
+
+    Returns
+    -------
+    element_count : int
+        Number of elements along the specified axis.
+
+    See Also
+    --------
+    shape : dimensions of array
+    ndarray.shape : dimensions of array
+    ndarray.size : number of elements in array
+
+    Examples
+    --------
+    >>> a = np.array([[1,2,3],[4,5,6]])
+    >>> np.size(a)
+    6
+    >>> np.size(a,1)
+    3
+    >>> np.size(a,0)
+    2
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def around(a, decimals=0, out=None):
+    """
+    Evenly round to the given number of decimals.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    decimals : int, optional
+        Number of decimal places to round to (default: 0). If
+        decimals is negative, it specifies the number of positions to
+        the left of the decimal point.
+    out : ndarray, optional
+        Alternative output array in which to place the result. It must have
+        the same shape as the expected output, but the type of the output
+        values will be cast if necessary. See `doc.ufuncs` (Section
+        "Output arguments") for details.
+
+    Returns
+    -------
+    rounded_array : ndarray
+        An array of the same type as `a`, containing the rounded values.
+        Unless `out` was specified, a new array is created. A reference to
+        the result is returned.
+ + The real and imaginary parts of complex numbers are rounded + separately. The result of rounding a float is a float. + + See Also + -------- + ndarray.round : equivalent method + + ceil, fix, floor, rint, trunc + + + Notes + ----- + For values exactly halfway between rounded decimal values, Numpy + rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, + -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due + to the inexact representation of decimal fractions in the IEEE + floating point standard [1]_ and errors introduced when scaling + by powers of ten. + + References + ---------- + .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan, + http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF + .. [2] "How Futile are Mindless Assessments of + Roundoff in Floating-Point Computation?", William Kahan, + http://www.cs.berkeley.edu/~wkahan/Mindless.pdf + + Examples + -------- + >>> np.around([0.37, 1.64]) + array([ 0., 2.]) + >>> np.around([0.37, 1.64], decimals=1) + array([ 0.4, 1.6]) + >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value + array([ 0., 2., 2., 4., 4.]) + >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned + array([ 1, 2, 3, 11]) + >>> np.around([1,2,3,11], decimals=-1) + array([ 0, 0, 0, 10]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def round_(a, decimals=0, out=None): + """ + Round an array to the given number of decimals. + + Refer to `around` for full documentation. + + See Also + -------- + around : equivalent function + + """ + raise NotImplemented('Waiting on interp level method') + + +def mean(a, axis=None, dtype=None, out=None): + """ + Compute the arithmetic mean along the specified axis. + + Returns the average of the array elements. The average is taken over + the flattened array by default, otherwise over the specified axis. + `float64` intermediate and return values are used for integer inputs. 
+ + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. 
+ + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean() + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. 
+ + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. 
+ + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Array containing numbers whose variance is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the variance is computed. The default is to compute + the variance of the flattened array. + dtype : data-type, optional + Type to use in computing the variance. For arrays of integer type + the default is `float32`; for arrays of float types it is the same as + the array type. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output, but the type is cast if + necessary. + ddof : int, optional + "Delta Degrees of Freedom": the divisor used in the calculation is + ``N - ddof``, where ``N`` represents the number of elements. By + default `ddof` is zero. + + Returns + ------- + variance : ndarray, see dtype parameter above + If ``out=None``, returns a new array containing the variance; + otherwise, a reference to the output array is returned. + + See Also + -------- + std : Standard deviation + mean : Average + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e., ``var = mean(abs(x - x.mean())**2)``. + + The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. + If, however, `ddof` is specified, the divisor ``N - ddof`` is used + instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of a hypothetical infinite population. + ``ddof=0`` provides a maximum likelihood estimate of the variance for + normally distributed variables. + + Note that for complex numbers, the absolute value is taken before + squaring, so that the result is always real and nonnegative. 
+ + For floating-point input, the variance is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for `float32` (see example + below). Specifying a higher-accuracy accumulator using the ``dtype`` + keyword can alleviate this issue. + + Examples + -------- + >>> a = np.array([[1,2],[3,4]]) + >>> np.var(a) + 1.25 + >>> np.var(a,0) + array([ 1., 1.]) + >>> np.var(a,1) + array([ 0.25, 0.25]) + + In single precision, var() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.var(a) + 0.20405951142311096 + + Computing the standard deviation in float64 is more accurate: + + >>> np.var(a, dtype=np.float64) + 0.20249999932997387 + >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 + 0.20250000000000001 + + """ + if not hasattr(a, "var"): + a = numpypy.array(a) + return a.var() diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/test/test_fromnumeric.py @@ -0,0 +1,109 @@ + +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + +class AppTestFromNumeric(BaseNumpyAppTest): + def test_argmax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, argmax + a = arange(6).reshape((2,3)) + assert argmax(a) == 5 + # assert (argmax(a, axis=0) == array([1, 1, 1])).all() + # assert (argmax(a, axis=1) == array([2, 2])).all() + b = arange(6) + b[1] = 5 + assert argmax(b) == 1 + + def test_argmin(self): + # tests adapted from test_argmax + from numpypy import array, arange, argmin + a = arange(6).reshape((2,3)) + assert argmin(a) == 0 + # assert (argmax(a, axis=0) == array([0, 0, 0])).all() + # assert (argmax(a, axis=1) == array([0, 0])).all() + b = arange(6) + b[1] = 0 + assert argmin(b) == 0 + + def test_shape(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy 
import array, identity, shape + assert shape(identity(3)) == (3, 3) + assert shape([[1, 2]]) == (1, 2) + assert shape([0]) == (1,) + assert shape(0) == () + # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + # assert shape(a) == (2,) + + def test_sum(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, sum, ones + assert sum([0.5, 1.5])== 2.0 + assert sum([[0, 1], [0, 5]]) == 6 + # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 + # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + # If the accumulator is too small, overflow occurs: + # assert ones(128, dtype=int8).sum(dtype=int8) == -128 + + def test_amin(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amin + a = arange(4).reshape((2,2)) + assert amin(a) == 0 + # # Minima along the first axis + # assert (amin(a, axis=0) == array([0, 1])).all() + # # Minima along the second axis + # assert (amin(a, axis=1) == array([0, 2])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amin(b) == nan + # assert nanmin(b) == 0.0 + + def test_amax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amax + a = arange(4).reshape((2,2)) + assert amax(a) == 3 + # assert (amax(a, axis=0) == array([2, 3])).all() + # assert (amax(a, axis=1) == array([1, 3])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amax(b) == nan + # assert nanmax(b) == 4.0 + + def test_alen(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, zeros, alen + a = zeros((7,4,5)) + assert a.shape[0] == 7 + assert alen(a) == 7 + + def test_ndim(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, ndim + assert ndim([[1,2,3],[4,5,6]]) == 2 + assert ndim(array([[1,2,3],[4,5,6]])) == 2 + 
        assert ndim(1) == 0
+
+    def test_rank(self):
+        # tests taken from numpy/core/fromnumeric.py docstring
+        from numpypy import array, rank
+        assert rank([[1,2,3],[4,5,6]]) == 2
+        assert rank(array([[1,2,3],[4,5,6]])) == 2
+        assert rank(1) == 0
+
+    def test_var(self):
+        from numpypy import array, var
+        a = array([[1,2],[3,4]])
+        assert var(a) == 1.25
+        # assert (var(a,0) == array([ 1., 1.])).all()
+        # assert (var(a,1) == array([ 0.25, 0.25])).all()
+
+    def test_std(self):
+        from numpypy import array, std
+        a = array([[1, 2], [3, 4]])
+        assert std(a) == 1.1180339887498949
+        # assert (std(a, axis=0) == array([ 1., 1.])).all()
+        # assert (std(a, axis=1) == array([ 0.5, 0.5])).all()
diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py
--- a/pypy/annotation/description.py
+++ b/pypy/annotation/description.py
@@ -257,7 +257,8 @@
         try:
             inputcells = args.match_signature(signature, defs_s)
         except ArgErr, e:
-            raise TypeError, "signature mismatch: %s" % e.getmsg(self.name)
+            raise TypeError("signature mismatch: %s() %s" %
+                            (self.name, e.getmsg()))
         return inputcells
 
     def specialize(self, inputcells, op=None):
diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst
--- a/pypy/doc/coding-guide.rst
+++ b/pypy/doc/coding-guide.rst
@@ -175,15 +175,15 @@
 RPython
 =================
 
-RPython Definition, not
------------------------
+RPython Definition
+------------------
 
-The list and exact details of the "RPython" restrictions are a somewhat
-evolving topic. In particular, we have no formal language definition
-as we find it more practical to discuss and evolve the set of
-restrictions while working on the whole program analysis. If you
-have any questions about the restrictions below then please feel
-free to mail us at pypy-dev at codespeak net.
+RPython is a restricted subset of Python that is amenable to static analysis.
+Although there are additions to the language and some things might surprisingly
+work, this is a rough list of restrictions that should be considered. Note
+that there are tons of special-cased restrictions that you'll encounter
+as you go. The exact definition is "RPython is everything that our translation
+toolchain can accept" :)
 
 .. _`wrapped object`: coding-guide.html#wrapping-rules
 
@@ -198,7 +198,7 @@
   contain both a string and a int must be avoided. It is allowed to
   mix None (basically with the role of a null pointer) with many other
   types: `wrapped objects`, class instances, lists, dicts, strings, etc.
-  but *not* with int and floats.
+  but *not* with int, floats or tuples.
 
 **constants**
 
@@ -209,9 +209,12 @@
   have this restriction, so if you need mutable global state, store it
   in the attributes of some prebuilt singleton instance.
 
+
+
 **control structures**
 
-  all allowed but yield, ``for`` loops restricted to builtin types
+  all allowed, ``for`` loops restricted to builtin types, generators
+  very restricted.
 
 **range**
 
@@ -226,7 +229,8 @@
 
 **generators**
 
-  generators are not supported.
+  generators are supported, but their exact scope is very limited. You can't
+  merge two different generators in one control point.
 
 **exceptions**
 
@@ -245,22 +249,27 @@
 
 **strings**
 
-  a lot of, but not all string methods are supported. Indexes can be
+  a lot of, but not all string methods are supported, and those that are
+  supported do not necessarily accept all arguments. Indexes can be
   negative. In case they are not, then you get slightly more efficient
   code if the translator can prove that they are non-negative. When
   slicing a string it is necessary to prove that the slice start and
-  stop indexes are non-negative.
+  stop indexes are non-negative. There is no implicit str-to-unicode cast
+  anywhere.
 
 **tuples**
 
   no variable-length tuples; use them to store or return pairs or n-tuples of
-  values. Each combination of types for elements and length constitute a separate
-  and not mixable type.
+  values. Each combination of types for elements and length constitutes
+  a separate and not mixable type.
 
 **lists**
 
   lists are used as an allocated array. Lists are over-allocated, so list.append()
-  is reasonably fast. Negative or out-of-bound indexes are only allowed for the
+  is reasonably fast. However, if you use a fixed-size list, the code
+  is more efficient. The annotator can figure out most of the time that your
+  list is fixed-size, even when you use a list comprehension.
+  Negative or out-of-bound indexes are only allowed for the
   most common operations, as follows:
 
   - *indexing*:
@@ -287,16 +296,14 @@
 
 **dicts**
 
-  dicts with a unique key type only, provided it is hashable.
-  String keys have been the only allowed key types for a while, but this was generalized.
-  After some re-optimization,
-  the implementation could safely decide that all string dict keys should be interned.
+  dicts with a unique key type only, provided it is hashable. Custom
+  hash functions and custom equality will not be honored.
+  Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions.
 
 **list comprehensions**
 
-  may be used to create allocated, initialized arrays.
-  After list over-allocation was introduced, there is no longer any restriction.
+  May be used to create allocated, initialized arrays.
 
 **functions**
 
@@ -334,9 +341,7 @@
 
 **objects**
 
-  in PyPy, wrapped objects are borrowed from the object space. Just like
-  in CPython, code that needs e.g. a dictionary can use a wrapped dict
-  and the object space operations on it.
+  Normal rules apply.
 
 This layout makes the number of types to take care about quite limited.
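[Editor's illustration, not part of the patch above: the "variables" restriction in the coding-guide hunk can be seen in a small hypothetical example. The function below is ordinary Python and runs fine under CPython, but the RPython annotator would reject it, because `x` is bound to an int on one branch and a str on the other.]

```python
# Valid CPython, but not valid RPython: the annotator cannot assign a
# single static type to `x`, which is an int on one branch and a str
# on the other.  (Mixing None with instances, lists, dicts or strings
# is the documented exception.)
def describe(flag):
    if flag:
        x = 1
    else:
        x = "one"
    return x

print(describe(True))   # CPython happily returns 1
```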
diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - msg = "%s() got an unexpected keyword argument 
'%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1591,12 +1591,15 @@ 'ArithmeticError', 'AssertionError', 'AttributeError', + 'BaseException', + 'DeprecationWarning', 'EOFError', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', + 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', @@ -1617,7 +1620,10 @@ 'TabError', 'TypeError', 'UnboundLocalError', + 'UnicodeDecodeError', 'UnicodeError', + 'UnicodeEncodeError', + 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', 'UnicodeEncodeError', diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def 
test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == 
"takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -683,7 +683,7 @@ self.xrm.possibly_free_var(op.getarg(0)) def consider_cast_int_to_float(self, op): - loc0 = self.rm.loc(op.getarg(0)) + loc0 = self.rm.make_sure_var_in_reg(op.getarg(0)) loc1 = self.xrm.force_allocate_reg(op.result) self.Perform(op, [loc0], loc1) 
self.rm.possibly_free_var(op.getarg(0)) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -420,8 +420,8 @@ debug._log = None # assert ops_offset is looptoken._x86_ops_offset - # getfield_raw/int_add/setfield_raw + ops + None - assert len(ops_offset) == 3 + len(operations) + 1 + # 2*(getfield_raw/int_add/setfield_raw) + ops + None + assert len(ops_offset) == 2*3 + len(operations) + 1 assert (ops_offset[operations[0]] <= ops_offset[operations[1]] <= ops_offset[operations[2]] <= diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -234,11 +234,11 @@ # longlongs are treated as floats, see # e.g. llsupport/descr.py:getDescrClass is_float = True - elif kind == 'u': + elif kind == 'u' or kind == 's': # they're all False pass else: - assert False, "unsupported ffitype or kind" + raise NotImplementedError("unsupported ffitype or kind: %s" % kind) # fieldsize = rffi.getintfield(ffitype, 'c_size') return self.optimizer.cpu.interiorfielddescrof_dynamic( diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -442,6 +442,22 @@ """ self.optimize_loop(ops, expected) + def test_optimizer_renaming_boxes_not_imported(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + jump(p1, i11) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- 
a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -271,6 +271,10 @@ if newresult is not op.result and not newvalue.is_constant(): op = ResOperation(rop.SAME_AS, [op.result], newresult) self.optimizer._newoperations.append(op) + if self.optimizer.loop.logops: + debug_print(' Falling back to add extra: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + self.optimizer.flush() self.optimizer.emitting_dissabled = False @@ -435,7 +439,13 @@ return for a in op.getarglist(): if not isinstance(a, Const) and a not in seen: - self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, seen) + self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, + seen) + + if self.optimizer.loop.logops: + debug_print(' Emitting short op: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + optimizer.send_extra_operation(op) seen[op.result] = True if op.is_ovf(): diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -16,15 +16,13 @@ # debug name = "" pc = 0 + opnum = 0 def __init__(self, result): self.result = result - # methods implemented by each concrete class - # ------------------------------------------ - def getopnum(self): - raise NotImplementedError + return self.opnum # methods implemented by the arity mixins # --------------------------------------- @@ -590,12 +588,9 @@ baseclass = PlainResOp mixin = arity2mixin.get(arity, N_aryOp) - def getopnum(self): - return opnum - cls_name = '%s_OP' % name bases = (get_base_class(mixin, baseclass),) - dic = {'getopnum': getopnum} + dic = {'opnum': opnum} return type(cls_name, bases, dic) setup(__name__ == '__main__') # print out the table when run directly diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ 
-148,28 +148,38 @@ self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4, 'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2}) - def test_array_getitem_uint8(self): + def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE): + reds = ["n", "i", "s", "data"] + if COMPUTE_TYPE is lltype.Float: + # Move the float var to the back. + reds.remove("s") + reds.append("s") myjitdriver = JitDriver( greens = [], - reds = ["n", "i", "s", "data"], + reds = reds, ) def f(data, n): - i = s = 0 + i = 0 + s = rffi.cast(COMPUTE_TYPE, 0) while i < n: myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data) - s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0)) + s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0)) i += 1 return s + def main(n): + with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data: + data[0] = rffi.cast(TYPE, 200) + return f(data, n) + assert self.meta_interp(main, [10]) == 2000 - def main(n): - with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data: - data[0] = rffi.cast(rffi.UCHAR, 200) - return f(data, n) - - assert self.meta_interp(main, [10]) == 2000 + def test_array_getitem_uint8(self): + self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed) self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2, 'guard_true': 2, 'int_add': 4}) + def test_array_getitem_float(self): + self._test_getitem_type(rffi.FLOAT, types.float, lltype.Float) + class TestFfiCall(FfiCallTests, LLJitMixin): supports_all = False diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py --- a/pypy/jit/metainterp/test/test_resoperation.py +++ b/pypy/jit/metainterp/test/test_resoperation.py @@ -30,17 +30,17 @@ cls = rop.opclasses[rop.rop.INT_ADD] assert issubclass(cls, rop.PlainResOp) assert issubclass(cls, rop.BinaryOp) - assert cls.getopnum.im_func(None) == rop.rop.INT_ADD + assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD cls = 
rop.opclasses[rop.rop.CALL]
     assert issubclass(cls, rop.ResOpWithDescr)
     assert issubclass(cls, rop.N_aryOp)
-    assert cls.getopnum.im_func(None) == rop.rop.CALL
+    assert cls.getopnum.im_func(cls) == rop.rop.CALL

     cls = rop.opclasses[rop.rop.GUARD_TRUE]
     assert issubclass(cls, rop.GuardResOp)
     assert issubclass(cls, rop.UnaryOp)
-    assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE
+    assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE

 def test_mixins_in_common_base():
     INT_ADD = rop.opclasses[rop.rop.INT_ADD]
diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py
--- a/pypy/module/_lsprof/interp_lsprof.py
+++ b/pypy/module/_lsprof/interp_lsprof.py
@@ -19,8 +19,9 @@
 # cpu affinity settings
 srcdir = py.path.local(pypydir).join('translator', 'c', 'src')
-eci = ExternalCompilationInfo(separate_module_files=
-                              [srcdir.join('profiling.c')])
+eci = ExternalCompilationInfo(
+    separate_module_files=[srcdir.join('profiling.c')],
+    export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling'])

 c_setup_profiling = rffi.llexternal('pypy_setup_profiling', [], lltype.Void,
diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py
--- a/pypy/module/cpyext/api.py
+++ b/pypy/module/cpyext/api.py
@@ -23,6 +23,7 @@
 from pypy.interpreter.function import StaticMethod
 from pypy.objspace.std.sliceobject import W_SliceObject
 from pypy.module.__builtin__.descriptor import W_Property
+from pypy.module.__builtin__.interp_memoryview import W_MemoryView
 from pypy.rlib.entrypoint import entrypoint
 from pypy.rlib.unroll import unrolling_iterable
 from pypy.rlib.objectmodel import specialize
@@ -387,6 +388,8 @@
     "Float": "space.w_float",
     "Long": "space.w_long",
     "Complex": "space.w_complex",
+    "ByteArray": "space.w_bytearray",
+    "MemoryView": "space.gettypeobject(W_MemoryView.typedef)",
     "BaseObject": "space.w_object",
     'None': 'space.type(space.w_None)',
     'NotImplemented': 'space.type(space.w_NotImplemented)',
diff --git a/pypy/module/cpyext/buffer.py
b/pypy/module/cpyext/buffer.py
--- a/pypy/module/cpyext/buffer.py
+++ b/pypy/module/cpyext/buffer.py
@@ -1,6 +1,36 @@
+from pypy.interpreter.error import OperationError
 from pypy.rpython.lltypesystem import rffi, lltype
 from pypy.module.cpyext.api import (
     cpython_api, CANNOT_FAIL, Py_buffer)
+from pypy.module.cpyext.pyobject import PyObject
+
+@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
+def PyObject_CheckBuffer(space, w_obj):
+    """Return 1 if obj supports the buffer interface otherwise 0."""
+    return 0  # the bf_getbuffer field is never filled by cpyext
+
+@cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real],
+             rffi.INT_real, error=-1)
+def PyObject_GetBuffer(space, w_obj, view, flags):
+    """Export obj into a Py_buffer, view.  These arguments must
+    never be NULL.  The flags argument is a bit field indicating what
+    kind of buffer the caller is prepared to deal with and therefore what
+    kind of buffer the exporter is allowed to return.  The buffer interface
+    allows for complicated memory sharing possibilities, but some caller may
+    not be able to handle all the complexity but may want to see if the
+    exporter will let them take a simpler view to its memory.
+
+    Some exporters may not be able to share memory in every possible way and
+    may need to raise errors to signal to some consumers that something is
+    just not possible. These errors should be a BufferError unless
+    there is another error that is actually causing the problem. The
+    exporter can use flags information to simplify how much of the
+    Py_buffer structure is filled in with non-default values and/or
+    raise an error if the object can't support a simpler view of its memory.
+
+    0 is returned on success and -1 on error."""
+    raise OperationError(space.w_TypeError, space.wrap(
+        'PyPy does not yet implement the new buffer interface'))

 @cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real, error=CANNOT_FAIL)
 def PyBuffer_IsContiguous(space, view, fortran):
diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h
--- a/pypy/module/cpyext/include/object.h
+++ b/pypy/module/cpyext/include/object.h
@@ -123,10 +123,6 @@
 typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *);
 typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **);

-typedef int (*objobjproc)(PyObject *, PyObject *);
-typedef int (*visitproc)(PyObject *, void *);
-typedef int (*traverseproc)(PyObject *, visitproc, void *);
-
 /* Py3k buffer interface */
 typedef struct bufferinfo {
     void *buf;
@@ -153,6 +149,41 @@
 typedef int (*getbufferproc)(PyObject *, Py_buffer *, int);
 typedef void (*releasebufferproc)(PyObject *, Py_buffer *);

+/* Flags for getting buffers */
+#define PyBUF_SIMPLE 0
+#define PyBUF_WRITABLE 0x0001
+/* we used to include an E, backwards compatible alias */
+#define PyBUF_WRITEABLE PyBUF_WRITABLE
+#define PyBUF_FORMAT 0x0004
+#define PyBUF_ND 0x0008
+#define PyBUF_STRIDES (0x0010 | PyBUF_ND)
+#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES)
+#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES)
+#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES)
+#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES)
+
+#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE)
+#define PyBUF_CONTIG_RO (PyBUF_ND)
+
+#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE)
+#define PyBUF_STRIDED_RO (PyBUF_STRIDES)
+
+#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT)
+#define PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT)
+
+#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT)
+#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT)
+
+
+#define PyBUF_READ 0x100
+#define PyBUF_WRITE 0x200
+#define PyBUF_SHADOW 0x400
+/* end Py3k buffer interface */
+
+typedef int (*objobjproc)(PyObject *, PyObject *);
+typedef int (*visitproc)(PyObject *, void *);
+typedef int (*traverseproc)(PyObject *, visitproc, void *);
+
 typedef struct {
     /* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all
        arguments are guaranteed to be of the object's type (modulo
diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h
--- a/pypy/module/cpyext/include/pystate.h
+++ b/pypy/module/cpyext/include/pystate.h
@@ -5,7 +5,7 @@
 struct _is; /* Forward */

 typedef struct _is {
-    int _foo;
+    struct _is *next;
 } PyInterpreterState;

 typedef struct _ts {
diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py
--- a/pypy/module/cpyext/pystate.py
+++ b/pypy/module/cpyext/pystate.py
@@ -2,7 +2,10 @@
     cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct)
 from pypy.rpython.lltypesystem import rffi, lltype

-PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ()))
+PyInterpreterStateStruct = lltype.ForwardReference()
+PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct)
+cpython_struct(
+    "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct)
 PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)]))

 @cpython_api([], PyThreadState, error=CANNOT_FAIL)
@@ -54,7 +57,8 @@

 class InterpreterState(object):
     def __init__(self, space):
-        self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True)
+        self.interpreter_state = lltype.malloc(
+            PyInterpreterState.TO, flavor='raw', zero=True, immortal=True)

     def new_thread_state(self):
         capsule = ThreadStateCapsule()
diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py
--- a/pypy/module/cpyext/stubs.py
+++ b/pypy/module/cpyext/stubs.py
@@ -34,141 +34,6 @@
 @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
 def PyObject_CheckBuffer(space, obj):
-    """Return 1 if obj 
supports the buffer interface otherwise 0.""" - raise NotImplementedError - - at cpython_api([PyObject, Py_buffer, rffi.INT_real], rffi.INT_real, error=-1) -def PyObject_GetBuffer(space, obj, view, flags): - """Export obj into a Py_buffer, view. These arguments must - never be NULL. The flags argument is a bit field indicating what - kind of buffer the caller is prepared to deal with and therefore what - kind of buffer the exporter is allowed to return. The buffer interface - allows for complicated memory sharing possibilities, but some caller may - not be able to handle all the complexity but may want to see if the - exporter will let them take a simpler view to its memory. - - Some exporters may not be able to share memory in every possible way and - may need to raise errors to signal to some consumers that something is - just not possible. These errors should be a BufferError unless - there is another error that is actually causing the problem. The - exporter can use flags information to simplify how much of the - Py_buffer structure is filled in with non-default values and/or - raise an error if the object can't support a simpler view of its memory. - - 0 is returned on success and -1 on error. - - The following table gives possible values to the flags arguments. - - Flag - - Description - - PyBUF_SIMPLE - - This is the default flag state. The returned - buffer may or may not have writable memory. The - format of the data will be assumed to be unsigned - bytes. This is a "stand-alone" flag constant. It - never needs to be '|'d to the others. The exporter - will raise an error if it cannot provide such a - contiguous buffer of bytes. - - PyBUF_WRITABLE - - The returned buffer must be writable. If it is - not writable, then raise an error. - - PyBUF_STRIDES - - This implies PyBUF_ND. The returned - buffer must provide strides information (i.e. the - strides cannot be NULL). This would be used when - the consumer can handle strided, discontiguous - arrays. 
Handling strides automatically assumes - you can handle shape. The exporter can raise an - error if a strided representation of the data is - not possible (i.e. without the suboffsets). - - PyBUF_ND - - The returned buffer must provide shape - information. The memory will be assumed C-style - contiguous (last dimension varies the - fastest). The exporter may raise an error if it - cannot provide this kind of contiguous buffer. If - this is not given then shape will be NULL. - - PyBUF_C_CONTIGUOUS - PyBUF_F_CONTIGUOUS - PyBUF_ANY_CONTIGUOUS - - These flags indicate that the contiguity returned - buffer must be respectively, C-contiguous (last - dimension varies the fastest), Fortran contiguous - (first dimension varies the fastest) or either - one. All of these flags imply - PyBUF_STRIDES and guarantee that the - strides buffer info structure will be filled in - correctly. - - PyBUF_INDIRECT - - This flag indicates the returned buffer must have - suboffsets information (which can be NULL if no - suboffsets are needed). This can be used when - the consumer can handle indirect array - referencing implied by these suboffsets. This - implies PyBUF_STRIDES. - - PyBUF_FORMAT - - The returned buffer must have true format - information if this flag is provided. This would - be used when the consumer is going to be checking - for what 'kind' of data is actually stored. An - exporter should always be able to provide this - information if requested. If format is not - explicitly requested then the format must be - returned as NULL (which means 'B', or - unsigned bytes) - - PyBUF_STRIDED - - This is equivalent to (PyBUF_STRIDES | - PyBUF_WRITABLE). - - PyBUF_STRIDED_RO - - This is equivalent to (PyBUF_STRIDES). - - PyBUF_RECORDS - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_RECORDS_RO - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT). - - PyBUF_FULL - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT | PyBUF_WRITABLE). 
- - PyBUF_FULL_RO - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT). - - PyBUF_CONTIG - - This is equivalent to (PyBUF_ND | - PyBUF_WRITABLE). - - PyBUF_CONTIG_RO - - This is equivalent to (PyBUF_ND).""" raise NotImplementedError @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -37,6 +37,7 @@ def test_thread_state_interp(self, space, api): ts = api.PyThreadState_Get() assert ts.c_interp == api.PyInterpreterState_Head() + assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO) def test_basic_threadstate_dance(self, space, api): # Let extension modules call these functions, diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -9,7 +9,7 @@ appleveldefs = {} class Module(MixedModule): - applevel_name = 'numpypy' + applevel_name = '_numpypy' submodules = { 'pypy': PyPyModule @@ -48,6 +48,7 @@ 'int_': 'interp_boxes.W_LongBox', 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', + 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpypy +import _numpypy inf = float("inf") @@ -14,29 +14,29 @@ return mean(a) def identity(n, dtype=None): - a = numpypy.zeros((n,n), dtype=dtype) + a = _numpypy.zeros((n,n), dtype=dtype) for i in range(n): a[i][i] = 1 return a def mean(a): if not hasattr(a, "mean"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.mean() def sum(a): if not hasattr(a, "sum"): - a = numpypy.array(a) + a = _numpypy.array(a) return 
a.sum() def min(a): if not hasattr(a, "min"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.min() def max(a): if not hasattr(a, "max"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.max() def arange(start, stop=None, step=1, dtype=None): @@ -47,9 +47,9 @@ stop = start start = 0 if dtype is None: - test = numpypy.array([start, stop, step, 0]) + test = _numpypy.array([start, stop, step, 0]) dtype = test.dtype - arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) + arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) i = start for j in range(arr.size): arr[j] = i @@ -90,5 +90,5 @@ you should assign the new shape to the shape attribute of the array ''' if not hasattr(a, 'reshape'): - a = numpypy.array(a) + a = _numpypy.array(a) return a.reshape(shape) diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -78,6 +78,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_pow = _binop_impl("power") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -170,6 +171,7 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __pow__ = interp2app(W_GenericBox.descr_pow), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), @@ -245,6 +247,7 @@ long_name = "int64" W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,), __module__ = "numpypy", + __new__ = interp2app(W_LongBox.descr__new__.im_func), ) W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef, diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ 
b/pypy/module/micronumpy/interp_numarray.py @@ -380,6 +380,9 @@ def descr_get_dtype(self, space): return space.wrap(self.find_dtype()) + def descr_get_ndim(self, space): + return space.wrap(len(self.shape)) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -409,7 +412,7 @@ def descr_repr(self, space): res = StringBuilder() res.append("array(") - concrete = self.get_concrete() + concrete = self.get_concrete_or_scalar() dtype = concrete.find_dtype() if not concrete.size: res.append('[]') @@ -422,8 +425,9 @@ else: concrete.to_str(space, 1, res, indent=' ') if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \ - not self.size: + not (dtype.kind == interp_dtype.SIGNEDLTR and + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or + not self.size): res.append(", dtype=" + dtype.name) res.append(")") return space.wrap(res.build()) @@ -559,6 +563,18 @@ def descr_mean(self, space): return space.div(self.descr_sum(space), space.wrap(self.size)) + def descr_var(self, space): + # var = mean((values - mean(values)) ** 2) + w_res = self.descr_sub(space, self.descr_mean(space)) + assert isinstance(w_res, BaseArray) + w_res = w_res.descr_pow(space, space.wrap(2)) + assert isinstance(w_res, BaseArray) + return w_res.descr_mean(space) + + def descr_std(self, space): + # std(v) = sqrt(var(v)) + return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_nonzero(self, space): if self.size > 1: raise OperationError(space.w_ValueError, space.wrap( @@ -840,80 +856,80 @@ each line will begin with indent. 
''' size = self.size + ccomma = ',' * comma + ncomma = ',' * (1 - comma) + dtype = self.find_dtype() if size < 1: builder.append('[]') return + elif size == 1: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True - dtype = self.find_dtype() ndims = len(self.shape) i = 0 - start = True builder.append('[') if ndims > 1: if use_ellipsis: - for i in range(3): - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + for i in range(min(3, self.shape[0])): + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) - # create_slice requires len(chunks) > 1 in order to reduce - # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) - builder.append('\n' + indent + '..., ') - i = self.shape[0] - 3 + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) + if i < self.shape[0] - 1: + builder.append(ccomma +'\n' + indent + '...' 
+ ncomma) + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: - spacer = ',' * comma + ' ' + spacer = ccomma + ' ' item = self.start # An iterator would be a nicer way to walk along the 1d array, but # how do I reset it if printing ellipsis? iterators have no # "set_offset()" i = 0 if use_ellipsis: - for i in range(3): - if start: - start = False - else: + for i in range(min(3, self.shape[0])): + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] - # Add a comma only if comma is False - this prevents adding two - # commas - builder.append(spacer + '...' + ',' * (1 - comma)) - # Ugly, but can this be done with an iterator? - item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + # Add a comma only if comma is False - this prevents adding + # two commas + builder.append(spacer + '...' + ncomma) + # Ugly, but can this be done with an iterator? 
+ item = self.start + self.backstrides[0] - 2 * self.strides[0] + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] i += 1 - else: - builder.append('[') builder.append(']') @jit.unroll_safe @@ -1185,6 +1201,7 @@ shape = GetSetProperty(BaseArray.descr_get_shape, BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), + ndim = GetSetProperty(BaseArray.descr_get_ndim), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), @@ -1199,6 +1216,8 @@ all = interp2app(BaseArray.descr_all), any = interp2app(BaseArray.descr_any), dot = interp2app(BaseArray.descr_dot), + var = interp2app(BaseArray.descr_var), + std = interp2app(BaseArray.descr_std), copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpypy import dtype + from _numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpypy import dtype + from _numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,7 +36,7 @@ assert str(d) == "bool" def test_bool_array(self): - from numpypy import array, False_, True_ + from _numpypy 
import array, False_, True_ a = array([0, 1, 2, 2.5], dtype='?') assert a[0] is False_ @@ -44,7 +44,7 @@ assert a[i] is True_ def test_copy_array_with_dtype(self): - from numpypy import array, False_, True_, int64 + from _numpypy import array, False_, True_, int64 a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit @@ -58,35 +58,35 @@ assert b[0] is False_ def test_zeros_bool(self): - from numpypy import zeros, False_ + from _numpypy import zeros, False_ a = zeros(10, dtype=bool) for i in range(10): assert a[i] is False_ def test_ones_bool(self): - from numpypy import ones, True_ + from _numpypy import ones, True_ a = ones(10, dtype=bool) for i in range(10): assert a[i] is True_ def test_zeros_long(self): - from numpypy import zeros, int64 + from _numpypy import zeros, int64 a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 0 def test_ones_long(self): - from numpypy import ones, int64 + from _numpypy import ones, int64 a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 1 def test_overflow(self): - from numpypy import array, dtype + from _numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from 
numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,19 +156,28 @@ assert b[i] == i * 2 def test_shape(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpypy import dtype + from _numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): - import numpypy as numpy + import _numpypy as numpy raises(TypeError, numpy.generic, 0) raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) @@ -181,7 +190,7 @@ raises(TypeError, numpy.inexact, 0) def test_bool(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object] assert numpy.bool_(3) is numpy.True_ @@ -196,7 +205,7 @@ assert numpy.bool_("False") is numpy.True_ def test_int8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -218,7 +227,7 @@ assert numpy.int8('128') == -128 def test_uint8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -241,7 +250,7 @@ assert numpy.uint8('256') == 0 def test_int16(self): - 
import numpypy as numpy + import _numpypy as numpy x = numpy.int16(3) assert x == 3 @@ -251,7 +260,7 @@ assert numpy.int16('32768') == -32768 def test_uint16(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint16(65535) == 65535 assert numpy.uint16(65536) == 0 @@ -260,7 +269,7 @@ def test_int32(self): import sys - import numpypy as numpy + import _numpypy as numpy x = numpy.int32(23) assert x == 23 @@ -275,7 +284,7 @@ def test_uint32(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint32(10) == 10 @@ -286,14 +295,14 @@ assert numpy.uint32('4294967296') == 0 def test_int_(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int_ is numpy.dtype(int).type assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] def test_int64(self): import sys - import numpypy as numpy + import _numpypy as numpy if sys.maxint == 2 ** 63 -1: assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] @@ -315,7 +324,7 @@ def test_uint64(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -330,7 +339,7 @@ raises(OverflowError, numpy.uint64(18446744073709551616)) def test_float32(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, numpy.generic, object] @@ -339,7 +348,7 @@ raises(ValueError, numpy.float32, '23.2df') def test_float64(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] @@ -352,7 +361,7 @@ raises(ValueError, numpy.float64, '23.2df') def test_subclass_type(self): - import numpypy as numpy + import _numpypy as numpy 
    class X(numpy.float64):
        def m(self):
diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py
--- a/pypy/module/micronumpy/test/test_module.py
+++ b/pypy/module/micronumpy/test/test_module.py
@@ -3,33 +3,33 @@
 class AppTestNumPyModule(BaseNumpyAppTest):
     def test_mean(self):
-        from numpypy import array, mean
+        from _numpypy import array, mean
         assert mean(array(range(5))) == 2.0
         assert mean(range(5)) == 2.0

     def test_average(self):
-        from numpypy import array, average
+        from _numpypy import array, average
         assert average(range(10)) == 4.5
         assert average(array(range(10))) == 4.5

     def test_sum(self):
-        from numpypy import array, sum
+        from _numpypy import array, sum
         assert sum(range(10)) == 45
         assert sum(array(range(10))) == 45

     def test_min(self):
-        from numpypy import array, min
+        from _numpypy import array, min
         assert min(range(10)) == 0
         assert min(array(range(10))) == 0

     def test_max(self):
-        from numpypy import array, max
+        from _numpypy import array, max
         assert max(range(10)) == 9
         assert max(array(range(10))) == 9

     def test_constants(self):
         import math
-        from numpypy import inf, e, pi
+        from _numpypy import inf, e, pi
         assert type(inf) is float
         assert inf == float("inf")
         assert e == math.e
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -158,9 +158,10 @@
         assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None
         assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2]

+
 class AppTestNumArray(BaseNumpyAppTest):
     def test_ndarray(self):
-        from numpypy import ndarray, array, dtype
+        from _numpypy import ndarray, array, dtype

         assert type(ndarray) is type
         assert type(array) is not type
@@ -175,12 +176,26 @@
         assert a.dtype is dtype(int)

     def test_type(self):
-        from numpypy import array
+        from _numpypy import array
         ar = array(range(5))
         assert type(ar) is type(ar + ar)

+    def test_ndim(self):
+        from _numpypy import array
+        x = array(0.2)
+        assert x.ndim == 0
+        x = array([1, 2])
+        assert x.ndim == 1
+        x = array([[1, 2], [3, 4]])
+        assert x.ndim == 2
+        x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
+        assert x.ndim == 3
+        # numpy actually raises an AttributeError, but _numpypy raises an
+        # TypeError
+        raises(TypeError, 'x.ndim = 3')
+
     def test_init(self):
-        from numpypy import zeros
+        from _numpypy import zeros
         a = zeros(15)
         # Check that storage was actually zero'd.
         assert a[10] == 0.0
@@ -189,7 +204,7 @@
         assert a[13] == 5.3

     def test_size(self):
-        from numpypy import array
+        from _numpypy import array
         assert array(3).size == 1
         a = array([1, 2, 3])
         assert a.size == 3
@@ -200,13 +215,13 @@
         Test that empty() works.
         """
-        from numpypy import empty
+        from _numpypy import empty
         a = empty(2)
         a[1] = 1.0
         assert a[1] == 1.0

     def test_ones(self):
-        from numpypy import ones
+        from _numpypy import ones
         a = ones(3)
         assert len(a) == 3
         assert a[0] == 1
@@ -215,7 +230,7 @@
         assert a[2] == 4

     def test_copy(self):
-        from numpypy import arange, array
+        from _numpypy import arange, array
         a = arange(5)
         b = a.copy()
         for i in xrange(5):
@@ -232,12 +247,12 @@
         assert (c == b).all()

     def test_iterator_init(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert a[3] == 3

     def test_getitem(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         raises(IndexError, "a[5]")
         a = a + a
@@ -246,7 +261,7 @@
         raises(IndexError, "a[-6]")

     def test_getitem_tuple(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         raises(IndexError, "a[(1,2)]")
         for i in xrange(5):
@@ -256,7 +271,7 @@
         assert a[i] == b[i]

     def test_setitem(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         a[-1] = 5.0
         assert a[4] == 5.0
@@ -264,7 +279,7 @@
         raises(IndexError, "a[-6] = 3.0")

     def test_setitem_tuple(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         raises(IndexError, "a[(1,2)] = [0,1]")
         for i in xrange(5):
@@ -275,7 +290,7 @@
         assert a[i] == i

     def test_setslice_array(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = array(range(2))
         a[1:4:2] = b
@@ -286,7 +301,7 @@
         assert b[1] == 0.

     def test_setslice_of_slice_array(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = zeros(5)
         a[::2] = array([9., 10., 11.])
         assert a[0] == 9.
@@ -305,7 +320,7 @@
         assert a[0] == 3.

     def test_setslice_list(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5), float)
         b = [0., 1.]
         a[1:4:2] = b
@@ -313,14 +328,14 @@
         assert a[3] == 1.

     def test_setslice_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5), float)
         a[1:4:2] = 0.
         assert a[1] == 0.
         assert a[3] == 0.

     def test_scalar(self):
-        from numpypy import array, dtype
+        from _numpypy import array, dtype
         a = array(3)
         raises(IndexError, "a[0]")
         raises(IndexError, "a[0] = 5")
@@ -329,13 +344,13 @@
         assert a.dtype is dtype(int)

     def test_len(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert len(a) == 5
         assert len(a + a) == 5

     def test_shape(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert a.shape == (5,)
         b = a + a
@@ -344,7 +359,7 @@
         assert c.shape == (3,)

     def test_set_shape(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array([])
         a.shape = []
         a = array(range(12))
@@ -364,7 +379,7 @@
         a.shape = (1,)

     def test_reshape(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array(range(12))
         exc = raises(ValueError, "b = a.reshape((3, 10))")
         assert str(exc.value) == "total size of new array must be unchanged"
@@ -377,7 +392,7 @@
         a.shape = (12, 2)

     def test_slice_reshape(self):
-        from numpypy import zeros, arange
+        from _numpypy import zeros, arange
         a = zeros((4, 2, 3))
         b = a[::2, :, :]
         b.shape = (2, 6)
@@ -413,13 +428,13 @@
         raises(ValueError, arange(10).reshape, (5, -1, -1))

     def test_reshape_varargs(self):
-        from numpypy import arange
+        from _numpypy import arange
         z = arange(96).reshape(12, -1)
         y = z.reshape(4, 3, 8)
         assert y.shape == (4, 3, 8)

     def test_add(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a + a
         for i in range(5):
@@ -432,7 +447,7 @@
             assert c[i] == bool(a[i] + b[i])

     def test_add_other(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = array([i for i in reversed(range(5))])
         c = a + b
@@ -440,20 +455,20 @@
             assert c[i] == 4

     def test_add_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a + 5
         for i in range(5):
             assert b[i] == i + 5

     def test_radd(self):
-        from numpypy import array
+        from _numpypy import array
         r = 3 + array(range(3))
         for i in range(3):
             assert r[i] == i + 3

     def test_add_list(self):
-        from numpypy import array, ndarray
+        from _numpypy import array, ndarray
         a = array(range(5))
         b = list(reversed(range(5)))
         c = a + b
@@ -462,14 +477,14 @@
             assert c[i] == 4

     def test_subtract(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a - a
         for i in range(5):
             assert b[i] == 0

     def test_subtract_other(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = array([1, 1, 1, 1, 1])
         c = a - b
@@ -477,34 +492,34 @@
             assert c[i] == i - 1

     def test_subtract_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a - 5
         for i in range(5):
             assert b[i] == i - 5

     def test_scalar_subtract(self):
-        from numpypy import int32
+        from _numpypy import int32
        assert int32(2) - 1 == 1
        assert 1 - int32(2) == -1

     def test_mul(self):
-        import numpypy
+        import _numpypy

-        a = numpypy.array(range(5))
+        a = _numpypy.array(range(5))
         b = a * a
         for i in range(5):
             assert b[i] == i * i

-        a = numpypy.array(range(5), dtype=bool)
+        a = _numpypy.array(range(5), dtype=bool)
         b = a * a
-        assert b.dtype is numpypy.dtype(bool)
-        assert b[0] is numpypy.False_
+        assert b.dtype is _numpypy.dtype(bool)
+        assert b[0] is _numpypy.False_
         for i in range(1, 5):
-            assert b[i] is numpypy.True_
+            assert b[i] is _numpypy.True_

     def test_mul_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a * 5
         for i in range(5):
@@ -512,7 +527,7 @@

     def test_div(self):
         from math import isnan
-        from numpypy import array, dtype, inf
+        from _numpypy import array, dtype, inf

         a = array(range(1, 6))
         b = a / a
@@ -544,7 +559,7 @@
         assert c[2] == -inf

     def test_div_other(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = array([2, 2, 2, 2, 2], float)
         c = a / b
@@ -552,14 +567,14 @@
             assert c[i] == i / 2.0

     def test_div_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a / 5.0
         for i in range(5):
             assert b[i] == i / 5.0

     def test_pow(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5), float)
         b = a ** a
         for i in range(5):
@@ -569,7 +584,7 @@
         assert (a ** 2 == a * a).all()

     def test_pow_other(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5), float)
         b = array([2, 2, 2, 2, 2])
         c = a ** b
@@ -577,14 +592,14 @@
             assert c[i] == i ** 2

     def test_pow_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5), float)
         b = a ** 2
         for i in range(5):
             assert b[i] == i ** 2

     def test_mod(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(1, 6))
         b = a % a
         for i in range(5):
@@ -597,7 +612,7 @@
             assert b[i] == 1

     def test_mod_other(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = array([2, 2, 2, 2, 2])
         c = a % b
@@ -605,14 +620,14 @@
             assert c[i] == i % 2

     def test_mod_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a % 2
         for i in range(5):
             assert b[i] == i % 2

     def test_pos(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([1., -2., 3., -4., -5.])
         b = +a
         for i in range(5):
@@ -623,7 +638,7 @@
             assert a[i] == i

     def test_neg(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([1., -2., 3., -4., -5.])
         b = -a
         for i in range(5):
@@ -634,7 +649,7 @@
             assert a[i] == -i

     def test_abs(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([1., -2., 3., -4., -5.])
         b = abs(a)
         for i in range(5):
@@ -645,7 +660,7 @@
             assert a[i + 5] == abs(i)

     def test_auto_force(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a - 1
         a[2] = 3
@@ -659,7 +674,7 @@
         assert c[1] == 4

     def test_getslice(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         s = a[1:5]
         assert len(s) == 4
@@ -673,7 +688,7 @@
         assert s[0] == 5

     def test_getslice_step(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(10))
         s = a[1:9:2]
         assert len(s) == 4
@@ -681,7 +696,7 @@
             assert s[i] == a[2 * i + 1]

     def test_slice_update(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         s = a[0:3]
         s[1] = 10
@@ -691,7 +706,7 @@

     def test_slice_invaidate(self):
         # check that slice shares invalidation list with
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         s = a[0:2]
         b = array([10, 11])
@@ -705,13 +720,13 @@
         assert d[1] == 12

     def test_mean(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert a.mean() == 2.0
         assert a[:4].mean() == 1.5

     def test_sum(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert a.sum() == 10.0
         assert a[:4].sum() == 6.0
@@ -720,52 +735,52 @@
         assert a.sum() == 5

     def test_identity(self):
-        from numpypy import identity, array
-        from numpypy import int32, float64, dtype
+        from _numpypy import identity, array
+        from _numpypy import int32, float64, dtype
         a = identity(0)
         assert len(a) == 0
         assert a.dtype == dtype('float64')
-        assert a.shape == (0,0)
+        assert a.shape == (0, 0)
         b = identity(1, dtype=int32)
         assert len(b) == 1
         assert b[0][0] == 1
-        assert b.shape == (1,1)
+        assert b.shape == (1, 1)
         assert b.dtype == dtype('int32')
         c = identity(2)
-        assert c.shape == (2,2)
-        assert (c == [[1,0],[0,1]]).all()
+        assert c.shape == (2, 2)
+        assert (c == [[1, 0], [0, 1]]).all()
         d = identity(3, dtype='int32')
-        assert d.shape == (3,3)
+        assert d.shape == (3, 3)
         assert d.dtype == dtype('int32')
-        assert (d == [[1,0,0],[0,1,0],[0,0,1]]).all()
+        assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all()

     def test_prod(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(1, 6))
         assert a.prod() == 120.0
         assert a[:4].prod() == 24.0

     def test_max(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
         assert a.max() == 5.7
         b = array([])
         raises(ValueError, "b.max()")

     def test_max_add(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
         assert (a + a).max() == 11.4

     def test_min(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
         assert a.min() == -3.0
         b = array([])
         raises(ValueError, "b.min()")

     def test_argmax(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
         r = a.argmax()
         assert r == 2
@@ -786,14 +801,14 @@
         assert a.argmax() == 2

     def test_argmin(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
         assert a.argmin() == 3
         b = array([])
         raises(ValueError, "b.argmin()")

     def test_all(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert a.all() == False
         a[0] = 3.0
@@ -802,7 +817,7 @@
         assert b.all() == True

     def test_any(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array(range(5))
         assert a.any() == True
         b = zeros(5)
@@ -811,7 +826,7 @@
         assert c.any() == False

     def test_dot(self):
-        from numpypy import array, dot
+        from _numpypy import array, dot
         a = array(range(5))
         assert a.dot(a) == 30.0
@@ -821,14 +836,14 @@
         assert (dot(5, [1, 2, 3]) == [5, 10, 15]).all()

     def test_dot_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a.dot(2.5)
         for i in xrange(5):
             assert b[i] == 2.5 * a[i]

     def test_dtype_guessing(self):
-        from numpypy import array, dtype, float64, int8, bool_
+        from _numpypy import array, dtype, float64, int8, bool_

         assert array([True]).dtype is dtype(bool)
         assert array([True, False]).dtype is dtype(bool)
@@ -845,7 +860,7 @@

     def test_comparison(self):
         import operator
-        from numpypy import array, dtype
+        from _numpypy import array, dtype

         a = array(range(5))
         b = array(range(5), float)
@@ -864,7 +879,7 @@
             assert c[i] == func(b[i], 3)

     def test_nonzero(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([1, 2])
         raises(ValueError, bool, a)
         raises(ValueError, bool, a == a)
@@ -874,7 +889,7 @@
         assert not bool(array([0]))

     def test_slice_assignment(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         a[::-1] = a
         assert (a == [0, 1, 2, 1, 0]).all()
@@ -884,8 +899,8 @@
         assert (a == [8, 6, 4, 2, 0]).all()

     def test_debug_repr(self):
-        from numpypy import zeros, sin
-        from numpypy.pypy import debug_repr
+        from _numpypy import zeros, sin
+        from _numpypy.pypy import debug_repr
         a = zeros(1)
         assert debug_repr(a) == 'Array'
         assert debug_repr(a + a) == 'Call2(add, Array, Array)'
@@ -899,8 +914,8 @@
         assert debug_repr(b) == 'Array'

     def test_remove_invalidates(self):
-        from numpypy import array
-        from numpypy.pypy import remove_invalidates
+        from _numpypy import array
+        from _numpypy.pypy import remove_invalidates
         a = array([1, 2, 3])
         b = a + a
         remove_invalidates(a)
@@ -908,7 +923,7 @@
         assert b[0] == 28

     def test_virtual_views(self):
-        from numpypy import arange
+        from _numpypy import arange
         a = arange(15)
         c = (a + a)
         d = c[::2]
@@ -926,7 +941,7 @@
         assert b[1] == 2

     def test_tolist_scalar(self):
-        from numpypy import int32, bool_
+        from _numpypy import int32, bool_
         x = int32(23)
         assert x.tolist() == 23
         assert type(x.tolist()) is int
@@ -934,13 +949,13 @@
         assert y.tolist() is True

     def test_tolist_zerodim(self):
-        from numpypy import array
+        from _numpypy import array
         x = array(3)
         assert x.tolist() == 3
         assert type(x.tolist()) is int

     def test_tolist_singledim(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert a.tolist() == [0, 1, 2, 3, 4]
         assert type(a.tolist()[0]) is int
@@ -948,41 +963,55 @@
         assert b.tolist() == [0.2, 0.4, 0.6]

     def test_tolist_multidim(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4]])
         assert a.tolist() == [[1, 2], [3, 4]]

     def test_tolist_view(self):
-        from numpypy import array
-        a = array([[1,2],[3,4]])
+        from _numpypy import array
+        a = array([[1, 2], [3, 4]])
         assert (a + a).tolist() == [[2, 4], [6, 8]]

     def test_tolist_slice(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[17.1, 27.2], [40.3, 50.3]])
-        assert a[:,0].tolist() == [17.1, 40.3]
+        assert a[:, 0].tolist() == [17.1, 40.3]
         assert a[0].tolist() == [17.1, 27.2]

+    def test_var(self):
+        from _numpypy import array
+        a = array(range(10))
+        assert a.var() == 8.25
+        a = array([5.0])
+        assert a.var() == 0.0
+
+    def test_std(self):
+        from _numpypy import array
+        a = array(range(10))
+        assert a.std() == 2.8722813232690143
+        a = array([5.0])
+        assert a.std() == 0.0
+

 class AppTestMultiDim(BaseNumpyAppTest):
     def test_init(self):
-        import numpypy
-        a = numpypy.zeros((2, 2))
+        import _numpypy
+        a = _numpypy.zeros((2, 2))
         assert len(a) == 2

     def test_shape(self):
-        import numpypy
-        assert numpypy.zeros(1).shape == (1,)
-        assert numpypy.zeros((2, 2)).shape == (2, 2)
-        assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2)
-        assert numpypy.array([[1], [2], [3]]).shape == (3, 1)
-        assert len(numpypy.zeros((3, 1, 2))) == 3
-        raises(TypeError, len, numpypy.zeros(()))
-        raises(ValueError, numpypy.array, [[1, 2], 3])
+        import _numpypy
+        assert _numpypy.zeros(1).shape == (1,)
+        assert _numpypy.zeros((2, 2)).shape == (2, 2)
+        assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2)
+        assert _numpypy.array([[1], [2], [3]]).shape == (3, 1)
+        assert len(_numpypy.zeros((3, 1, 2))) == 3
+        raises(TypeError, len, _numpypy.zeros(()))
+        raises(ValueError, _numpypy.array, [[1, 2], 3])

     def test_getsetitem(self):
-        import numpypy
-        a = numpypy.zeros((2, 3, 1))
+        import _numpypy
+        a = _numpypy.zeros((2, 3, 1))
         raises(IndexError, a.__getitem__, (2, 0, 0))
         raises(IndexError, a.__getitem__, (0, 3, 0))
         raises(IndexError, a.__getitem__, (0, 0, 1))
@@ -993,8 +1022,8 @@
         assert a[1, -1, 0] == 3

     def test_slices(self):
-        import numpypy
-        a = numpypy.zeros((4, 3, 2))
+        import _numpypy
+        a = _numpypy.zeros((4, 3, 2))
         raises(IndexError, a.__getitem__, (4,))
         raises(IndexError, a.__getitem__, (3, 3))
         raises(IndexError, a.__getitem__, (slice(None), 3))
@@ -1027,51 +1056,51 @@
         assert a[1][2][1] == 15

     def test_init_2(self):
-        import numpypy
-        raises(ValueError, numpypy.array, [[1], 2])
-        raises(ValueError, numpypy.array, [[1, 2], [3]])
-        raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]])
-        raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]])
-        a = numpypy.array([[1, 2], [4, 5]])
+        import _numpypy
+        raises(ValueError, _numpypy.array, [[1], 2])
+        raises(ValueError, _numpypy.array, [[1, 2], [3]])
+        raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]])
+        raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]])
+        a = _numpypy.array([[1, 2], [4, 5]])
         assert a[0, 1] == 2
         assert a[0][1] == 2
-        a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]]))
+        a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]]))
         assert (a[0, 1] == [3, 4]).all()

     def test_setitem_slice(self):
-        import numpypy
-        a = numpypy.zeros((3, 4))
+        import _numpypy
+        a = _numpypy.zeros((3, 4))
         a[1] = [1, 2, 3, 4]
         assert a[1, 2] == 3
         raises(TypeError, a[1].__setitem__, [1, 2, 3])
-        a = numpypy.array([[1, 2], [3, 4]])
+        a = _numpypy.array([[1, 2], [3, 4]])
         assert (a == [[1, 2], [3, 4]]).all()
-        a[1] = numpypy.array([5, 6])
+        a[1] = _numpypy.array([5, 6])
         assert (a == [[1, 2], [5, 6]]).all()
-        a[:, 1] = numpypy.array([8, 10])
+        a[:, 1] = _numpypy.array([8, 10])
         assert (a == [[1, 8], [5, 10]]).all()
-        a[0, :: -1] = numpypy.array([11, 12])
+        a[0, :: -1] = _numpypy.array([11, 12])
         assert (a == [[12, 11], [5, 10]]).all()

     def test_ufunc(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4], [5, 6]])
         assert ((a + a) == \
             array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all()

     def test_getitem_add(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
         assert (a + a)[1, 1] == 8

     def test_ufunc_negative(self):
-        from numpypy import array, negative
+        from _numpypy import array, negative
         a = array([[1, 2], [3, 4]])
         b = negative(a + a)
         assert (b == [[-2, -4], [-6, -8]]).all()

     def test_getitem_3(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4], [5, 6], [7, 8],
                    [9, 10], [11, 12], [13, 14]])
         b = a[::2]
@@ -1082,37 +1111,37 @@
         assert c[1][1] == 12

     def test_multidim_ones(self):
-        from numpypy import ones
+        from _numpypy import ones
         a = ones((1, 2, 3))
         assert a[0, 1, 2] == 1.0

     def test_multidim_setslice(self):
-        from numpypy import zeros, ones
+        from _numpypy import zeros, ones
         a = zeros((3, 3))
         b = ones((3, 3))
-        a[:,1:3] = b[:,1:3]
+        a[:, 1:3] = b[:, 1:3]
         assert (a == [[0, 1, 1], [0, 1, 1], [0, 1, 1]]).all()
         a = zeros((3, 3))
         b = ones((3, 3))
-        a[:,::2] = b[:,::2]
+        a[:, ::2] = b[:, ::2]
         assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all()

     def test_broadcast_ufunc(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4], [5, 6]])
        b = array([5, 6])
         c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]])
         assert c.all()

     def test_broadcast_setslice(self):
-        from numpypy import zeros, ones
+        from _numpypy import zeros, ones
         a = zeros((10, 10))
         b = ones(10)
         a[:, :] = b
         assert a[3, 5] == 1

     def test_broadcast_shape_agreement(self):
-        from numpypy import zeros, array
+        from _numpypy import zeros, array
         a = zeros((3, 1, 3))
         b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32)))
         c = ((a + b) == [b, b, b])
@@ -1126,7 +1155,7 @@
         assert c.all()

     def test_broadcast_scalar(self):
-        from numpypy import zeros
+        from _numpypy import zeros
         a = zeros((4, 5), 'd')
         a[:, 1] = 3
         assert a[2, 1] == 3
@@ -1137,14 +1166,14 @@
         assert a[3, 2] == 0

     def test_broadcast_call2(self):
-        from numpypy import zeros, ones
+        from _numpypy import zeros, ones
         a = zeros((4, 1, 5))
         b = ones((4, 3, 5))
         b[:] = (a + a)
         assert (b == zeros((4, 3, 5))).all()

     def test_broadcast_virtualview(self):
-        from numpypy import arange, zeros
+        from _numpypy import arange, zeros
         a = arange(8).reshape([2, 2, 2])
         b = (a + a)[1, 1]
         c = zeros((2, 2, 2))
@@ -1152,13 +1181,13 @@
         assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all()

     def test_argmax(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4], [5, 6]])
         assert a.argmax() == 5
         assert a[:2, ].argmax() == 3

     def test_broadcast_wrong_shapes(self):
-        from numpypy import zeros
+        from _numpypy import zeros
         a = zeros((4, 3, 2))
         b = zeros((4, 2))
         exc = raises(ValueError, lambda: a + b)
@@ -1166,7 +1195,7 @@
                                  " together with shapes (4,3,2) (4,2)"

     def test_reduce(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
         assert a.sum() == (13 * 12) / 2
         b = a[1:, 1::2]
@@ -1174,7 +1203,7 @@
         assert c.sum() == (6 + 8 + 10 + 12) * 2

     def test_transpose(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(((range(3), range(3, 6)),
                    (range(6, 9), range(9, 12)),
                    (range(12, 15), range(15, 18)),
@@ -1193,7 +1222,7 @@
         assert(b[:, 0] == a[0, :]).all()

     def test_flatiter(self):
-        from numpypy import array, flatiter
+        from _numpypy import array, flatiter
         a = array([[10, 30], [40, 60]])
         f_iter = a.flat
         assert f_iter.next() == 10
@@ -1208,23 +1237,23 @@
         assert s == 140

     def test_flatiter_array_conv(self):
-        from numpypy import array, dot
+        from _numpypy import array, dot
         a = array([1, 2, 3])
         assert dot(a.flat, a.flat) == 14

     def test_flatiter_varray(self):
-        from numpypy import ones
+        from _numpypy import ones
         a = ones((2, 2))
         assert list(((a + a).flat)) == [2, 2, 2, 2]

     def test_slice_copy(self):
-        from numpypy import zeros
+        from _numpypy import zeros
         a = zeros((10, 10))
         b = a[0].copy()
         assert (b == zeros(10)).all()

     def test_array_interface(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([1, 2, 3])
         i = a.__array_interface__
         assert isinstance(i['data'][0], int)
@@ -1233,6 +1262,7 @@
         assert isinstance(i['data'][0], int)
         raises(TypeError, getattr, array(3), '__array_interface__')

+
 class AppTestSupport(BaseNumpyAppTest):
     def setup_class(cls):
         import struct
@@ -1245,7 +1275,7 @@

     def test_fromstring(self):
         import sys
-        from numpypy import fromstring, array, uint8, float32, int32
+        from _numpypy import fromstring, array, uint8, float32, int32

         a = fromstring(self.data)
         for i in range(4):
@@ -1275,17 +1305,17 @@
         assert g[1] == 2
         assert g[2] == 3
         h = fromstring("1, , 2, 3", dtype=uint8, sep=",")
-        assert (h == [1,0,2,3]).all()
+        assert (h == [1, 0, 2, 3]).all()
         i = fromstring("1 2 3", dtype=uint8, sep=" ")
-        assert (i == [1,2,3]).all()
+        assert (i == [1, 2, 3]).all()
         j = fromstring("1\t\t\t\t2\t3", dtype=uint8, sep="\t")
-        assert (j == [1,2,3]).all()
+        assert (j == [1, 2, 3]).all()
         k = fromstring("1,x,2,3", dtype=uint8, sep=",")
-        assert (k == [1,0]).all()
+        assert (k == [1, 0]).all()
         l = fromstring("1,x,2,3", dtype='float32', sep=",")
-        assert (l == [1.0,-1.0]).all()
+        assert (l == [1.0, -1.0]).all()
         m = fromstring("1,,2,3", sep=",")
-        assert (m == [1.0,-1.0,2.0,3.0]).all()
+        assert (m == [1.0, -1.0, 2.0, 3.0]).all()
         n = fromstring("3.4 2.0 3.8 2.2", dtype=int32, sep=" ")
         assert (n == [3]).all()
         o = fromstring("1.0 2f.0f 3.8 2.2", dtype=float32, sep=" ")
@@ -1309,7 +1339,7 @@
         assert (u == [1, 0]).all()

     def test_fromstring_types(self):
-        from numpypy import (fromstring, int8, int16, int32, int64, uint8,
+        from _numpypy import (fromstring, int8, int16, int32, int64, uint8,
             uint16, uint32, float32, float64)

         a = fromstring('\xFF', dtype=int8)
@@ -1333,9 +1363,8 @@
         j = fromstring(self.ulongval, dtype='L')
         assert j[0] == 12

-
     def test_fromstring_invalid(self):
-        from numpypy import fromstring, uint16, uint8, int32
+        from _numpypy import fromstring, uint16, uint8, int32
         #default dtype is 64-bit float, so 3 bytes should fail
         raises(ValueError, fromstring, "\x01\x02\x03")
         #3 bytes is not modulo 2 bytes (int16)
@@ -1346,7 +1375,8 @@

 class AppTestRepr(BaseNumpyAppTest):
     def test_repr(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
+        int_size = array(5).dtype.itemsize
         a = array(range(5), float)
         assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])"
         a = array([], float)
@@ -1354,14 +1384,26 @@
         a = zeros(1001)
         assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])"
         a = array(range(5), long)
-        assert repr(a) == "array([0, 1, 2, 3, 4])"
+        if a.dtype.itemsize == int_size:
+            assert repr(a) == "array([0, 1, 2, 3, 4])"
+        else:
+            assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)"
+        a = array(range(5), 'int32')
+        if a.dtype.itemsize == int_size:
+            assert repr(a) == "array([0, 1, 2, 3, 4])"
+        else:
+            assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)"
         a = array([], long)
         assert repr(a) == "array([], dtype=int64)"
         a = array([True, False, True, False], "?")
         assert repr(a) == "array([True, False, True, False], dtype=bool)"
+        a = zeros([])
+        assert repr(a) == "array(0.0)"
+        a = array(0.2)
+        assert repr(a) == "array(0.2)"

     def test_repr_multi(self):
-        from numpypy import array, zeros
+        from _numpypy import arange, zeros
         a = zeros((3, 4))
         assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0],
@@ -1374,9 +1416,19 @@
        [[0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0]]])'''
+        a = arange(1002).reshape((2, 501))
+        assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500],
+       [501, 502, 503, ..., 999, 1000, 1001]])'''
+        assert repr(a.T) == '''array([[0, 501],
+       [1, 502],
+       [2, 503],
+       ...,
+       [498, 999],
+       [499, 1000],
+       [500, 1001]])'''

     def test_repr_slice(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array(range(5), float)
         b = a[1::2]
         assert repr(b) == "array([1.0, 3.0])"
@@ -1391,7 +1443,7 @@
         assert repr(b) == "array([], shape=(0, 5), dtype=int16)"

     def test_str(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array(range(5), float)
         assert str(a) == "[0.0 1.0 2.0 3.0 4.0]"
         assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]"
@@ -1417,14 +1469,14 @@

         a = zeros((400, 400), dtype=int)
         assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \
-           " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \
+           " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \
           " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]"
         a = zeros((2, 2, 2))
         r = str(a)
         assert r == '[[[0.0 0.0]\n  [0.0 0.0]]\n\n [[0.0 0.0]\n  [0.0 0.0]]]'

     def test_str_slice(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array(range(5), float)
         b = a[1::2]
         assert str(b) == "[1.0 3.0]"
@@ -1440,7 +1492,7 @@

 class AppTestRanges(BaseNumpyAppTest):
     def test_arange(self):
-        from numpypy import arange, array, dtype
+        from _numpypy import arange, array, dtype
         a = arange(3)
         assert (a == [0, 1, 2]).all()
         assert a.dtype is dtype(int)
@@ -1462,7 +1514,7 @@

 class AppTestRanges(BaseNumpyAppTest):
     def test_app_reshape(self):
-        from numpypy import arange, array, dtype, reshape
+        from _numpypy import arange, array, dtype, reshape
         a = arange(12)
         b = reshape(a, (3, 4))
         assert b.shape == (3, 4)
diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py
--- a/pypy/module/micronumpy/test/test_ufuncs.py
+++ b/pypy/module/micronumpy/test/test_ufuncs.py
@@ -4,14 +4,14 @@

 class AppTestUfuncs(BaseNumpyAppTest):
     def test_ufunc_instance(self):
-        from numpypy import add, ufunc
+        from _numpypy import add, ufunc

         assert isinstance(add, ufunc)
         assert repr(add) == ""
         assert repr(ufunc) == ""

     def test_ufunc_attrs(self):
-        from numpypy import add, multiply, sin
+        from _numpypy import add, multiply, sin

         assert add.identity == 0
         assert multiply.identity == 1
@@ -22,7 +22,7 @@
         assert sin.nin == 1

     def test_wrong_arguments(self):
-        from numpypy import add, sin
+        from _numpypy import add, sin

         raises(ValueError, add, 1)
         raises(TypeError, add, 1, 2, 3)
@@ -30,14 +30,14 @@
         raises(ValueError, sin)

     def test_single_item(self):
-        from numpypy import negative, sign, minimum
+        from _numpypy import negative, sign, minimum

         assert negative(5.0) == -5.0
         assert sign(-0.0) == 0.0
         assert minimum(2.0, 3.0) == 2.0

     def test_sequence(self):
-        from numpypy import array, ndarray, negative, minimum
+        from _numpypy import array, ndarray, negative, minimum
         a = array(range(3))
         b = [2.0, 1.0, 0.0]
         c = 1.0
@@ -71,7 +71,7 @@
             assert min_c_b[i] == min(b[i], c)

     def test_negative(self):
-        from numpypy import array, negative
+        from _numpypy import array, negative

         a = array([-5.0, 0.0, 1.0])
         b = negative(a)
@@ -86,7 +86,7 @@
         assert negative(a + a)[3] == -6

     def test_abs(self):
-        from numpypy import array, absolute
+        from _numpypy import array, absolute

         a = array([-5.0, -0.0, 1.0])
         b = absolute(a)
@@ -94,7 +94,7 @@
             assert b[i] == abs(a[i])

     def test_add(self):
-        from numpypy import array, add
+        from _numpypy import array, add

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -103,7 +103,7 @@
             assert c[i] == a[i] + b[i]

     def test_divide(self):
-        from numpypy import array, divide
+        from _numpypy import array, divide

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -114,7 +114,7 @@
         assert (divide(array([-10]), array([2])) == array([-5])).all()

     def test_fabs(self):
-        from numpypy import array, fabs
+        from _numpypy import array, fabs
         from math import fabs as math_fabs

         a = array([-5.0, -0.0, 1.0])
@@ -123,7 +123,7 @@
             assert b[i] == math_fabs(a[i])

     def test_minimum(self):
-        from numpypy import array, minimum
+        from _numpypy import array, minimum

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -132,7 +132,7 @@
             assert c[i] == min(a[i], b[i])

     def test_maximum(self):
-        from numpypy import array, maximum
+        from _numpypy import array, maximum

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -145,7 +145,7 @@
         assert isinstance(x, (int, long))

     def test_multiply(self):
-        from numpypy import array, multiply
+        from _numpypy import array, multiply

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -154,7 +154,7 @@
             assert c[i] == a[i] * b[i]

     def test_sign(self):
-        from numpypy import array, sign, dtype
+        from _numpypy import array, sign, dtype

         reference = [-1.0, 0.0, 0.0, 1.0]
         a = array([-5.0, -0.0, 0.0, 6.0])
@@ -173,7 +173,7 @@
         assert a[1] == 0

     def test_reciporocal(self):
-        from numpypy import array, reciprocal
+        from _numpypy import array, reciprocal

         reference = [-0.2, float("inf"), float("-inf"), 2.0]
         a = array([-5.0, 0.0, -0.0, 0.5])
@@ -182,7 +182,7 @@
             assert b[i] == reference[i]

     def test_subtract(self):
-        from numpypy import array, subtract
+        from _numpypy import array, subtract

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -191,7 +191,7 @@
             assert c[i] == a[i] - b[i]

     def test_floor(self):
-        from numpypy import array, floor
+        from _numpypy import array, floor

         reference = [-2.0, -1.0, 0.0, 1.0, 1.0]
         a = array([-1.4, -1.0, 0.0, 1.0, 1.4])
@@ -200,7 +200,7 @@
             assert b[i] == reference[i]

     def test_copysign(self):
-        from numpypy import array, copysign
+        from _numpypy import array, copysign

         reference = [5.0, -0.0, 0.0, -6.0]
         a = array([-5.0, 0.0, 0.0, 6.0])
@@ -216,7 +216,7 @@

     def test_exp(self):
         import math
-        from numpypy import array, exp
+        from _numpypy import array, exp

         a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"),
                    -float('inf'), -12343424.0])
@@ -230,7 +230,7 @@

     def test_sin(self):
        import math
-        from numpypy import array, sin
+        from _numpypy import array, sin

         a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
         b = sin(a)
@@ -243,7 +243,7 @@

     def test_cos(self):
         import math
-        from numpypy import array, cos
+        from _numpypy import array, cos

         a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
         b = cos(a)
@@ -252,7 +252,7 @@

     def test_tan(self):
         import math
-        from numpypy import array, tan
+        from _numpypy import array, tan

         a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2])
         b = tan(a)
@@ -262,7 +262,7 @@

     def test_arcsin(self):
         import math
-        from numpypy import array, arcsin
+        from _numpypy import array, arcsin

         a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1])
         b = arcsin(a)
@@ -276,7 +276,7 @@

     def test_arccos(self):
         import math
-        from numpypy import array, arccos
+        from _numpypy import array, arccos

         a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1])
         b = arccos(a)
@@ -291,7 +291,7 @@

     def test_arctan(self):
         import math
-        from numpypy import array, arctan
+        from _numpypy import array, arctan

         a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')])
         b = arctan(a)
@@ -304,7 +304,7 @@

     def test_arcsinh(self):
         import math
-        from numpypy import arcsinh, inf
+        from _numpypy import arcsinh, inf

         for v in [inf, -inf, 1.0, math.e]:
             assert math.asinh(v) == arcsinh(v)
@@ -312,7 +312,7 @@

     def test_arctanh(self):
         import math
-        from numpypy import arctanh
+        from _numpypy import arctanh

         for v in [.99, .5, 0, -.5, -.99]:
             assert math.atanh(v) == arctanh(v)
@@ -323,7 +323,7 @@

     def test_sqrt(self):
         import math
-        from numpypy import sqrt
+        from _numpypy import sqrt

         nan, inf = float("nan"), float("inf")
         data = [1, 2, 3, inf]
@@ -333,13 +333,13 @@
         assert math.isnan(sqrt(nan))

     def test_reduce_errors(self):
-        from numpypy import sin, add
+        from _numpypy import sin, add

         raises(ValueError, sin.reduce, [1, 2, 3])
         raises(TypeError, add.reduce, 1)

     def test_reduce(self):
-        from numpypy import add, maximum
+        from _numpypy import add, maximum

         assert add.reduce([1, 2, 3]) == 6
         assert maximum.reduce([1]) == 1
@@ -348,7 +348,7 @@

     def test_comparisons(self):
         import operator
-        from numpypy import equal, not_equal, less, less_equal, greater, greater_equal
+        from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal

         for ufunc, func in [
             (equal, operator.eq),
diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py
--- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py
+++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py
@@ -8,10 +8,12 @@
 from pypy.tool import logparser
 from pypy.jit.tool.jitoutput import parse_prof
 from pypy.module.pypyjit.test_pypy_c.model import (Log, find_ids_range,
-                                                   find_ids, TraceWithIds,
+                                                   find_ids,
                                                    OpMatcher, InvalidMatch)

 class BaseTestPyPyC(object):
+    log_string = 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary'
+
     def setup_class(cls):
         if '__pypy__' not in sys.builtin_module_names:
             py.test.skip("must run this test with pypy")
@@ -52,8 +54,7 @@
             cmdline += ['--jit', ','.join(jitcmdline)]
         cmdline.append(str(self.filepath))
         #
-        print cmdline, logfile
-        env={'PYPYLOG': 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary:' + str(logfile)}
+        env={'PYPYLOG': self.log_string + ':' + str(logfile)}
         pipe = subprocess.Popen(cmdline,
                                 env=env,
                                 stdout=subprocess.PIPE,
diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py
--- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py
+++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py
@@ -98,7 +98,8 @@
             end = time.time()
             return end - start
         #
-        log = self.run(main, [get_libc_name(), 200], threshold=150)
+        log = self.run(main, [get_libc_name(), 200], threshold=150,
+                       import_site=True)
        assert 1 <= log.result <= 1.5 # at most 0.5 seconds of overhead
         loops = log.loops_by_id('sleep')
         assert len(loops) == 1 # make sure that we actually JITted the loop
@@ -121,7 +122,7 @@
             return fabs._ptr.getaddr(), x

         libm_name = get_libm_name(sys.platform)
-        log = self.run(main, [libm_name])
+        log = self.run(main, [libm_name], import_site=True)
         fabs_addr, res = log.result
         assert res == -4.0
         loop, = log.loops_by_filename(self.filepath)
diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py
--- a/pypy/module/pypyjit/test_pypy_c/test_string.py
+++ b/pypy/module/pypyjit/test_pypy_c/test_string.py
@@ -15,7 +15,7 @@
                 i += letters[i % len(letters)] == uletters[i % len(letters)]
             return i

-        log = self.run(main, [300])
+        log = self.run(main, [300], import_site=True)
         assert log.result == 300
         loop, = log.loops_by_filename(self.filepath)
         assert loop.match("""
@@ -55,7 +55,7 @@
                 i += int(long(string.digits[i % len(string.digits)], 16))
             return i

-        log = self.run(main, [1100])
+        log = self.run(main, [1100], import_site=True)
         assert log.result == main(1100)
         loop, = log.loops_by_filename(self.filepath)
         assert loop.match("""
diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py
--- a/pypy/module/sys/__init__.py
+++ b/pypy/module/sys/__init__.py
@@ -42,7 +42,7 @@
         'argv'                  : 'state.get(space).w_argv',
         'py3kwarning'           : 'space.w_False',
         'warnoptions'           : 'state.get(space).w_warnoptions',
-        'builtin_module_names'  : 'state.w_None',
+        'builtin_module_names'  : 'space.w_None',
         'pypy_getudir'          : 'state.pypy_getudir',    # not translated
         'pypy_initial_path'     : 'state.pypy_initial_path',
diff --git a/pypy/module/sys/app.py b/pypy/module/sys/app.py
--- a/pypy/module/sys/app.py
+++ b/pypy/module/sys/app.py
@@ -66,11 +66,11 @@
     return None

 copyright_str = """
-Copyright 2003-2011 PyPy development team.
+Copyright 2003-2012 PyPy development team.
 All Rights Reserved.
 For further information, see

-Portions Copyright (c) 2001-2008 Python Software Foundation.
+Portions Copyright (c) 2001-2012 Python Software Foundation.
 All Rights Reserved.

 Portions Copyright (c) 2000 BeOpen.com.
diff --git a/pypy/objspace/fake/checkmodule.py b/pypy/objspace/fake/checkmodule.py
--- a/pypy/objspace/fake/checkmodule.py
+++ b/pypy/objspace/fake/checkmodule.py
@@ -1,8 +1,10 @@
 from pypy.objspace.fake.objspace import FakeObjSpace, W_Root
+from pypy.config.pypyoption import get_pypy_config


 def checkmodule(modname):
-    space = FakeObjSpace()
+    config = get_pypy_config(translating=True)
+    space = FakeObjSpace(config)
     mod = __import__('pypy.module.%s' % modname, None, None, ['__doc__'])
     # force computation and record what we wrap
     module = mod.Module(space, W_Root())
diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py
--- a/pypy/objspace/fake/objspace.py
+++ b/pypy/objspace/fake/objspace.py
@@ -93,9 +93,9 @@

 class FakeObjSpace(ObjSpace):

-    def __init__(self):
+    def __init__(self, config=None):
         self._seen_extras = []
-        ObjSpace.__init__(self)
+        ObjSpace.__init__(self, config=config)

     def float_w(self, w_obj):
         is_root(w_obj)
@@ -135,6 +135,9 @@
     def newfloat(self, x):
         return w_some_obj()

+    def newcomplex(self, x, y):
+        return w_some_obj()
+
     def marshal_w(self, w_obj):
         "NOT_RPYTHON"
         raise NotImplementedError
@@ -215,6 +218,10 @@
             expected_length = 3
         return [w_some_obj()] * expected_length

+    def unpackcomplex(self, w_complex):
+        is_root(w_complex)
+        return 1.1, 2.2
+
     def allocate_instance(self, cls, w_subtype):
         is_root(w_subtype)
         return instantiate(cls)
@@ -232,6 +239,11 @@
     def exec_(self, *args, **kwds):
         pass

+    def createexecutioncontext(self):
+        ec = ObjSpace.createexecutioncontext(self)
+        ec._py_repr = None
+        return ec
+
     # ----------

     def translates(self, func=None, argtypes=None, **kwds):
@@ -267,18 +279,21 @@
                   ObjSpace.ExceptionTable +
                   ['int', 'str', 'float', 'long', 'tuple', 'list',
                    'dict', 'unicode', 'complex', 'slice', 'bool',
-                   'type', 'basestring']):
+                   'type', 'basestring', 'object']):
     setattr(FakeObjSpace, 'w_' + name, w_some_obj())
 #
 for (name, _, arity, _) in ObjSpace.MethodTable:
     args = ['w_%d' % i for i in range(arity)]
+    params = 
args[:] d = {'is_root': is_root, 'w_some_obj': w_some_obj} + if name in ('get',): + params[-1] += '=None' exec compile2("""\ def meth(self, %s): %s return w_some_obj() - """ % (', '.join(args), + """ % (', '.join(params), '; '.join(['is_root(%s)' % arg for arg in args]))) in d meth = func_with_new_name(d['meth'], name) setattr(FakeObjSpace, name, meth) @@ -301,9 +316,12 @@ pass FakeObjSpace.default_compiler = FakeCompiler() -class FakeModule(object): +class FakeModule(Wrappable): + def __init__(self): + self.w_dict = w_some_obj() def get(self, name): name + "xx" # check that it's a string return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/fake/test/test_objspace.py b/pypy/objspace/fake/test/test_objspace.py --- a/pypy/objspace/fake/test/test_objspace.py +++ b/pypy/objspace/fake/test/test_objspace.py @@ -40,7 +40,7 @@ def test_constants(self): space = self.space space.translates(lambda: (space.w_None, space.w_True, space.w_False, - space.w_int, space.w_str, + space.w_int, space.w_str, space.w_object, space.w_TypeError)) def test_wrap(self): @@ -72,3 +72,9 @@ def test_newlist(self): self.space.newlist([W_Root(), W_Root()]) + + def test_default_values(self): + # the __get__ method takes either 2 or 3 arguments + space = self.space + space.translates(lambda: (space.get(W_Root(), W_Root()), + space.get(W_Root(), W_Root(), W_Root()))) diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -360,12 +363,36 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. 
For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -386,6 +386,18 @@ class JitHintError(Exception): """Inconsistency in the JIT hints.""" +PARAMETER_DOCS = { + 'threshold': 'number of times a loop has to run for it to become hot', + 'function_threshold': 'number of times a function must run for it to become traced from start', + 'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge', + 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TOO_LONG', + 'inlining': 'inline python functions or not (1/0)', + 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate', + 'retrace_limit': 'how many times we can try retracing before giving up', + 
'max_retrace_guards': 'number of extra guards a retrace can cause', + 'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY' + } + PARAMETERS = {'threshold': 1039, # just above 1024, prime 'function_threshold': 1619, # slightly more than one above, also prime 'trace_eagerness': 200, diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -185,7 +185,10 @@ return self.code.map[self.bytecode_no] def getlineno(self): - return self.getopcode().lineno + code = self.getopcode() + if code is None: + return None + return code.lineno lineno = property(getlineno) def getline_starts_here(self): diff --git a/pypy/tool/jitlogparser/storage.py b/pypy/tool/jitlogparser/storage.py --- a/pypy/tool/jitlogparser/storage.py +++ b/pypy/tool/jitlogparser/storage.py @@ -6,7 +6,6 @@ import py import os from lib_pypy.disassembler import dis -from pypy.tool.jitlogparser.parser import Function from pypy.tool.jitlogparser.module_finder import gather_all_code_objs class LoopStorage(object): diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c --- a/pypy/translator/c/src/profiling.c +++ b/pypy/translator/c/src/profiling.c @@ -29,6 +29,35 @@ profiling_setup = 0; } } + +#elif defined(_WIN32) +#include + +DWORD_PTR base_affinity_mask; +int profiling_setup = 0; + +void pypy_setup_profiling() { + if (!profiling_setup) { + DWORD_PTR affinity_mask, system_affinity_mask; + GetProcessAffinityMask(GetCurrentProcess(), + &base_affinity_mask, &system_affinity_mask); + affinity_mask = 1; + /* Pick one cpu allowed by the system */ + if (system_affinity_mask) + while ((affinity_mask & system_affinity_mask) == 0) + affinity_mask <<= 1; + SetProcessAffinityMask(GetCurrentProcess(), affinity_mask); + profiling_setup = 1; + } +} + +void pypy_teardown_profiling() { + if (profiling_setup) { + SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask); + 
profiling_setup = 0; + } +} + #else void pypy_setup_profiling() { } void pypy_teardown_profiling() { } diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,8 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %slow-level JIT parameter (default %s)' % ( - key, ' '*(18-len(key)), value) + print ' --jit %s=N %s%s (default %s)' % ( + key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) print ' --jit off turn off the JIT' def print_version(*args): From noreply at buildbot.pypy.org Tue Jan 10 14:56:56 2012 From: noreply at buildbot.pypy.org (stefanor) Date: Tue, 10 Jan 2012 14:56:56 +0100 (CET) Subject: [pypy-commit] pypy default: Add pypy.1 manpage to sphinx docs Message-ID: <20120110135656.44BC082110@wyvern.cs.uni-duesseldorf.de> Author: Stefano Rivera Branch: Changeset: r51204:e8239b6167fa Date: 2012-01-10 15:56 +0200 http://bitbucket.org/pypy/pypy/changeset/e8239b6167fa/ Log: Add pypy.1 manpage to sphinx docs diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -97,3 +97,9 @@ $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." + +manpage: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished; the man pages are in $(BUILDDIR)/man" + diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -197,3 +197,10 @@ # Example configuration for intersphinx: refer to the Python standard library. 
intersphinx_mapping = {'http://docs.python.org/': None} +# -- Options for manpage output------------------------------------------------- + +man_pages = [ + ('man/pypy.1', 'pypy', + u'fast, compliant alternative implementation of the Python language', + u'The PyPy Project', 1) +] diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/man/pypy.1.rst @@ -0,0 +1,89 @@ +====== + pypy +====== + +SYNOPSIS +======== + +``pypy`` [*options*] +[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ] +[*arg*\ ...] + +OPTIONS +======= + +-i + Inspect interactively after running script. + +-O + Dummy optimization flag for compatibility with C Python. + +-c *cmd* + Program passed in as CMD (terminates option list). + +-S + Do not ``import site`` on initialization. + +-u + Unbuffered binary ``stdout`` and ``stderr``. + +-h, --help + Show a help message and exit. + +-m *mod* + Library module to be run as a script (terminates option list). + +-W *arg* + Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*). + +-E + Ignore environment variables (such as ``PYTHONPATH``). + +--version + Print the PyPy version. + +--info + Print translation information about this PyPy executable. + +--jit *arg* + Low level JIT parameters. Format is *arg*\ ``=``\ *value*. + + ``off`` + Disable the JIT. + + ``threshold=``\ *value* + Number of times a loop has to run for it to become hot. + + ``function_threshold=``\ *value* + Number of times a function must run for it to become traced from + start. + + ``inlining=``\ *value* + Inline python functions or not (``1``/``0``). + + ``loop_longevity=``\ *value* + A parameter controlling how long loops will be kept before being + freed, an estimate. + + ``max_retrace_guards=``\ *value* + Number of extra guards a retrace can cause. + + ``retrace_limit=``\ *value* + How many times we can try retracing before giving up. 
+ + ``trace_eagerness=``\ *value* + Number of times a guard has to fail before we start compiling a + bridge. + + ``trace_limit=``\ *value* + Number of recorded operations before we abort tracing with + ``ABORT_TRACE_TOO_LONG``. + + ``enable_opts=``\ *value* + Optimizations to enabled or ``all``. + Warning, this option is dangerous, and should be avoided. + +SEE ALSO +======== + +**python**\ (1) From noreply at buildbot.pypy.org Tue Jan 10 15:00:56 2012 From: noreply at buildbot.pypy.org (stefanor) Date: Tue, 10 Jan 2012 15:00:56 +0100 (CET) Subject: [pypy-commit] pypy default: pypy manpage: Format for multiple --jit arguments Message-ID: <20120110140056.2340782110@wyvern.cs.uni-duesseldorf.de> Author: Stefano Rivera Branch: Changeset: r51205:2f90612495e2 Date: 2012-01-10 16:00 +0200 http://bitbucket.org/pypy/pypy/changeset/2f90612495e2/ Log: pypy manpage: Format for multiple --jit arguments diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst --- a/pypy/doc/man/pypy.1.rst +++ b/pypy/doc/man/pypy.1.rst @@ -46,7 +46,8 @@ Print translation information about this PyPy executable. --jit *arg* - Low level JIT parameters. Format is *arg*\ ``=``\ *value*. + Low level JIT parameters. Format is + *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...] ``off`` Disable the JIT. 
From noreply at buildbot.pypy.org Tue Jan 10 16:14:29 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 10 Jan 2012 16:14:29 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: adjust emit_guard_call_assembler and prepare_guard_call_assembler Message-ID: <20120110151429.1D88382110@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51206:a51d6a2b3e1d Date: 2012-01-10 16:14 +0100 http://bitbucket.org/pypy/pypy/changeset/a51d6a2b3e1d/ Log: adjust emit_guard_call_assembler and prepare_guard_call_assembler diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -932,11 +932,11 @@ self._write_fail_index(fail_index) descr = op.getdescr() - assert isinstance(descr, LoopToken) + assert isinstance(descr, JitCellToken) # XXX check this - assert op.numargs() == len(descr._ppc_arglocs[0]) + #assert op.numargs() == len(descr._ppc_arglocs[0]) resbox = TempInt() - self._emit_call(fail_index, descr._ppc_direct_bootstrap_code, op.getarglist(), + self._emit_call(fail_index, descr._ppc_func_addr, op.getarglist(), regalloc, result=resbox) if op.result is None: value = self.cpu.done_with_this_frame_void_v diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -877,10 +877,11 @@ def prepare_guard_call_assembler(self, op, guard_op): descr = op.getdescr() - assert isinstance(descr, LoopToken) + assert isinstance(descr, JitCellToken) jd = descr.outermost_jitdriver_sd assert jd is not None - size = jd.portal_calldescr.get_result_size(self.cpu.translate_support_code) + #size = jd.portal_calldescr.get_result_size(self.cpu.translate_support_code) + size = jd.portal_calldescr.get_result_size() vable_index = jd.index_of_virtualizable if vable_index >= 0: 
self._sync_var(op.getarg(vable_index)) From noreply at buildbot.pypy.org Tue Jan 10 16:54:05 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 10 Jan 2012 16:54:05 +0100 (CET) Subject: [pypy-commit] pypy default: Add two papers. Message-ID: <20120110155405.12F6982110@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51207:71d3d24c92d1 Date: 2012-01-10 16:53 +0100 http://bitbucket.org/pypy/pypy/changeset/71d3d24c92d1/ Log: Add two papers. diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst --- a/pypy/doc/extradoc.rst +++ b/pypy/doc/extradoc.rst @@ -8,6 +8,9 @@ *Articles about PyPy published so far, most recent first:* (bibtex_ file) +* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_, + C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo + * `Allocation Removal by Partial Evaluation in a Tracing JIT`_, C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo @@ -50,6 +53,9 @@ *Other research using PyPy (as far as we know it):* +* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_, + N. Riley and C. Zilles + * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_, C. Bruni and T. Verwaest @@ -65,6 +71,7 @@ .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib +.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf .. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf @@ -74,6 +81,7 @@ .. 
_`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf .. _`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07 .. _`EU Reports`: index-report.html +.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz .. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7 From noreply at buildbot.pypy.org Tue Jan 10 17:05:43 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 10 Jan 2012 17:05:43 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: grammar changes all over Message-ID: <20120110160543.77D1182110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4007:a65754d300a3 Date: 2012-01-10 10:05 -0600 http://bitbucket.org/pypy/extradoc/changeset/a65754d300a3/ Log: grammar changes all over diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -3,43 +3,45 @@ Hello. -I'm pleased to inform about progress we made on NumPyPy both in terms of -completeness and performance. This post mostly deals with the performance -side and how far we got by now. **Word of warning:** It's worth noting that -the performance work on the numpy side is not done - we're maybe half way -through and there are trivial and not so trivial optimizations to be performed. -In fact we didn't even start to implement some optimizations like vectorization. 
+I'm pleased to inform you about the progress we have made on NumPyPy, both in +terms of completeness and performance. This post mostly deals with the +performance side and how far we have come so far. **Word of warning:** It's worth noting that the performance work on NumPyPy isn't done - we're maybe half way +to where we want to be and there are many trivial and not so trivial +optimizations to be performed. In fact we haven't even started to implement +important optimizations, like vectorization. Benchmark --------- -We choose a laplace transform, which is also used on scipy's -`PerformancePython`_ wiki. The problem with the implementation on the -performance python wiki page is that there are two algorithms used which -has different convergence, but also very different performance characteristics -on modern machines. Instead we implemented our own versions in C and a set -of various Python versions using numpy or not. The full source is available -on `fijal's hack`_ repo and the exact revision used is 18502dbbcdb3. +We choose a laplace transform, based on SciPy's `PerformancePython`_ wiki. +Unfortunately, the different implementations on the wiki page accidentally use +two different algorithms, which have different convergences, and very different +performance characteristics on modern computers. As a result, we implemented +our own versions in C and Python (both with and without NumPy). The full source +can be found in `fijal's hack`_ repo, all these benchmarks were performed at +revision 18502dbbcdb3. -Let me describe various algorithms used. Note that some of them contain -pypy-specific hacks to work around current limitations in the implementation. -Those hacks will go away eventually and the performance should improve and -not decrease. It's worth noting that while numerically the algorithms used -are identical, the exact data layout is not and differs between methods. +First, let me describe various algorithms used. 
Note that some of them contain +PyPy-specific hacks to work around limitations in the current implementation. +These hacks will go away eventually and the performance should improve. It's +worth noting that while numerically the algorithms used are identical, the +exact data layout in memory differs between them. -**Note on all the benchmarks:** they're all run once, but the performance -is very stable across runs. +**Note on all the benchmarks:** they were all run once, but the performance is +very stable across runs. -So, starting from the C version, it implements dead simple laplace transform -using two loops and a double-reference memory (array of ``int**``). The double -reference does not matter for performance and two algorithms are implemented -in ``inline-laplace.c`` and ``laplace.c``. They're both compiled with -``gcc 4.4.5`` and ``-O3``. +Starting with the C version, it implements a dead simple laplace transform +using two loops and a double-reference memory (array of ``int*``). The double +reference does not matter for performance and two algorithms are implemented in +``inline-laplace.c`` and ``laplace.c``. They're both compiled with +``gcc 4.4.5`` at ``-O3``. -A straightforward version of those in python -is implemented in ``laplace.py`` using respectively ``inline_slow_time_step`` -and ``slow_time_step``. ``slow_2_time_step`` does the same thing, except -it copies arrays in-place instead of creating new copies. +A straightforward version of those in Python is implemented in ``laplace.py`` +using respectively ``inline_slow_time_step`` and ``slow_time_step``. +``slow_2_time_step`` does the same thing, except it copies arrays in-place +instead of creating new copies. + +(XXX: these are timed under PyPy?) 
+-----------------------+----------------------+--------------------+ | bench | number of iterations | time per iteration | @@ -55,42 +57,44 @@ | inline_slow python | 278 | 23.7 | +-----------------------+----------------------+--------------------+ -The important thing to notice here that data dependency in the inline version -is causing a huge slowdown. Note that this is already **not too bad**, -as in yes, the braindead python version of the same algorithm takes longer -and pypy is not able to use as much info about data being independent, but this -is within the same ballpark - **15% - 170%** slower than C, but it definitely -matters more which algorithm you choose than which language. For a comparison, -slow versions take about **5.75s** each on CPython 2.6 **per iteration**, -so estimating, they're about **200x** slower than the PyPy equivalent. -I didn't measure full run though :) +An important thing to notice here is that the data dependency in the inline +version causes a huge slowdown for the C versions. This is already not too bad +for us, the braindead Python version takes longer and PyPy is not able to take +advantage of the knowledge that the data is independent, but it is in the same +ballpark - **15% - 170%** slower than C, but the algorithm you choose matters +more than the language. By comparison, the slow versions take about **5.75s** +each on CPython 2.6 **per iteration**, and by estimating, are about **200x** +slower than the PyPy equivalent. I didn't measure the full run though :) -Next step is to use numpy expressions. The first problem we run into is that -computing the error walks again the entire array. This is fairly inefficient -in terms of cache access, so I took a liberty of computing errors every 15 -steps. This makes convergence rounded to the nearest 15 iterations, but -speeds things up anyway. 
``numeric_time_step`` takes the most braindead -approach of replacing the array with itself, like this:: +The next step is to use NumPy expressions. The first problem we run into is +that computing the error requires walking the entire array a second time. This +is fairly inefficient in terms of cache access, so I took the liberty of +computing the errors every 15 steps. This results in the convergence being +rounded to the nearest 15 iterations, but speeds things up considerably (XXX: +is this true?). ``numeric_time_step`` takes the most braindead approach of +replacing the array with itself, like this:: - u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 + + u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 + (u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv -We need 3 arrays here - one for an intermediate (pypy does not automatically -create intermediates for expressions), one for a copy to compute error and -one for the result. This works a bit by chance, since numpy ``+`` or -``*`` creates an intermediate and pypy simulates the behavior if necessary. +We need 3 arrays here - one is an intermediate (PyPy does not automatically +create intermediates for expressions), one is a copy for computing the error, +and one is the result. This works by chance, since in NumPy ``+`` or ``*`` +creates an intermediate, while NumPyPy avoids allocating the intermediate if +possible. ``numeric_2_time_step`` works pretty much the same:: src = self.u self.u = src.copy() - self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + + self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv except the copy is now explicit rather than implicit. 
``numeric_3_time_step`` does the same thing, but notices you don't have to copy -the entire array, it's enough to copy border pieces and fill rest with zeros:: +the entire array, it's enough to copy the border pieces and fill rest with +zeros:: src = self.u self.u = numpy.zeros((self.nx, self.ny), 'd') @@ -98,29 +102,29 @@ self.u[-1] = src[-1] self.u[:, 0] = src[:, 0] self.u[:, -1] = src[:, -1] - self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + + self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv ``numeric_4_time_step`` is the one that tries to resemble the C version more. Instead of doing an array copy, it actually notices that you can alternate -between two arrays. This is exactly what C version does. -Note the ``remove_invalidates`` call that's a pypy specific hack - we hope -to remove this call in the near future, but in short it promises "I don't -have any unbuilt intermediates that depend on the value of the argument", -which means you don't have to compute expressions you're not actually using:: +between two arrays. This is exactly what C version does. The +``remove_invalidates`` call is a PyPy specific hack - we hope to remove this +call in the near future, but in short it promises "I don't have any unbuilt +intermediates that depend on the value of the argument", which means you don't +have to compute sub-expressions you're not actually using:: remove_invalidates(self.old_u) remove_invalidates(self.u) self.old_u[:,:] = self.u src = self.old_u - self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + + self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv This one is the most equivalent to the C version. -``numeric_5_time_step`` does the same thing, but notices you don't have to -copy the entire array, it's enough to just copy edges. 
This is an optimization -that was not done in the C version:: +``numeric_5_time_step`` does the same thing, but notices you don't have to copy +the entire array, it's enough to just copy edges. This is an optimization that +was not done in the C version:: remove_invalidates(self.old_u) remove_invalidates(self.u) @@ -130,13 +134,13 @@ self.u[-1] = src[-1] self.u[:, 0] = src[:, 0] self.u[:, -1] = src[:, -1] - self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + + self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv -Let's look at the table of runs. As above, ``gcc 4.4.5``, compiled with -``-O3``, pypy nightly 7bb8b38d8563, 64bit platform. All of the numeric methods -run 226 steps each, slightly more than 219, rounding to the next 15 when -the error is computed. Comparison for PyPy and CPython: +Let's look at the table of runs. As before, ``gcc 4.4.5``, compiled at ``-O3``, +and PyPy nightly 7bb8b38d8563, on an x86-64 machine. All of the numeric methods +run 226 steps each, slightly more than 219, rounding to the next 15 when the +error is computed. Comparison for PyPy and CPython: +-----------------------+-------------+----------------+ | benchmark | PyPy | CPython | @@ -150,14 +154,15 @@ | numeric 4 | 11ms | 31ms | +-----------------------+-------------+----------------+ | numeric 5 | 9.3ms | 21ms | -+-----------------------+-------------+----------------+ ++-----------------------+-------------+----------------- -So, I can say that those preliminary results are pretty ok. They're not as -fast as the C version, but we're already much faster than CPython, almost -always more than 2x on this relatively real-world example. This is not the -end though. As we continue work, we hope to use a much better high level -information that we have about operations to eventually outperform C, hopefully -in 2012. Stay tuned. 
+We think that these preliminary results are pretty good, they're not as fast as +the C version (or as fast as we'd like them to be), but we're already much +faster than NumPy on CPython, almost always by more than 2x on this relatively +real-world example. This is not the end though, in fact it's hardly the +beginning: as we continue work, we hope to make even much better use of the +high level information that we have, in order to eventually outperform C, +hopefully in 2012. Stay tuned. Cheers, fijal From noreply at buildbot.pypy.org Tue Jan 10 17:08:19 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 10 Jan 2012 17:08:19 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: another rewording Message-ID: <20120110160819.2048C82110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4008:5dc64fda0ea7 Date: 2012-01-10 10:08 -0600 http://bitbucket.org/pypy/extradoc/changeset/5dc64fda0ea7/ Log: another rewording diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -5,10 +5,11 @@ I'm pleased to inform you about the progress we have made on NumPyPy, both in terms of completeness and performance. This post mostly deals with the -performance side and how far we have come so far. **Word of warning:** It's worth noting that the performance work on NumPyPy isn't done - we're maybe half way -to where we want to be and there are many trivial and not so trivial -optimizations to be performed. In fact we haven't even started to implement -important optimizations, like vectorization. +performance side and how far we have come so far. **Word of warning:** the +performance work on NumPyPy isn't done - we're maybe half way to where we want +to be and there are many trivial and not so trivial optimizations to be +performed. In fact we haven't even started to implement important +optimizations, like vectorization. 
Benchmark --------- From noreply at buildbot.pypy.org Tue Jan 10 17:12:35 2012 From: noreply at buildbot.pypy.org (stefanor) Date: Tue, 10 Jan 2012 17:12:35 +0100 (CET) Subject: [pypy-commit] pypy default: Rather use standard Sphinx 1.x target Message-ID: <20120110161235.4952A82110@wyvern.cs.uni-duesseldorf.de> Author: Stefano Rivera Branch: Changeset: r51208:0e67e4538c80 Date: 2012-01-10 18:11 +0200 http://bitbucket.org/pypy/pypy/changeset/0e67e4538c80/ Log: Rather use standard Sphinx 1.x target diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -12,7 +12,7 @@ PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . -.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest +.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @@ -23,6 +23,7 @@ @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @@ -79,6 +80,11 @@ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man" + changes: python config/generate.py $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @@ -97,9 +103,3 @@ $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." 
- -manpage: - $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man - @echo - @echo "Build finished; the man pages are in $(BUILDDIR)/man" - From noreply at buildbot.pypy.org Tue Jan 10 17:24:08 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 10 Jan 2012 17:24:08 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): StackLocations have now a value field which stores the offset to the SPP. It is used in regalloc_mov. Message-ID: <20120110162408.63E2082110@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51209:b5f5e48c3799 Date: 2012-01-10 17:22 +0100 http://bitbucket.org/pypy/pypy/changeset/b5f5e48c3799/ Log: (bivab, hager): StackLocations have now a value field which stores the offset to the SPP. It is used in regalloc_mov. diff --git a/pypy/jit/backend/ppc/ppcgen/locations.py b/pypy/jit/backend/ppc/ppcgen/locations.py --- a/pypy/jit/backend/ppc/ppcgen/locations.py +++ b/pypy/jit/backend/ppc/ppcgen/locations.py @@ -88,11 +88,11 @@ def __init__(self, position, num_words=1, type=INT): self.position = position - self.width = num_words * WORD self.type = type + self.value = get_spp_offset(position) def __repr__(self): - return 'FP(%s)+%d' % (self.type, self.position,) + return 'SPP(%s)+%d' % (self.type, self.value) def location_code(self): return 'b' @@ -108,3 +108,9 @@ def imm(val): return ImmLocation(val) + +def get_spp_offset(pos): + if pos < 0: + return -pos * WORD + else: + return -(pos + 1) * WORD diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -711,14 +711,14 @@ # move immediate value to memory elif loc.is_stack(): self.mc.alloc_scratch_reg() - offset = loc.as_key() * WORD + offset = loc.value self.mc.load_imm(r.SCRATCH.value, value) self.mc.store(r.SCRATCH.value, r.SPP.value, offset) self.mc.free_scratch_reg() return assert 0, "not supported 
location" elif prev_loc.is_stack(): - offset = prev_loc.as_key() * WORD + offset = prev_loc.value # move from memory to register if loc.is_reg(): reg = loc.as_key() @@ -726,7 +726,7 @@ return # move in memory elif loc.is_stack(): - target_offset = loc.as_key() * WORD + target_offset = loc.value self.mc.alloc_scratch_reg() self.mc.load(r.SCRATCH.value, r.SPP.value, offset) self.mc.store(r.SCRATCH.value, r.SPP.value, target_offset) @@ -742,7 +742,7 @@ return # move to memory elif loc.is_stack(): - offset = loc.as_key() * WORD + offset = loc.value self.mc.store(reg, r.SPP.value, offset) return assert 0, "not supported location" From noreply at buildbot.pypy.org Tue Jan 10 17:24:09 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 10 Jan 2012 17:24:09 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): we don't want to free the args here Message-ID: <20120110162409.8818782110@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51210:f04c600f8177 Date: 2012-01-10 17:23 +0100 http://bitbucket.org/pypy/pypy/changeset/f04c600f8177/ Log: (bivab, hager): we don't want to free the args here diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -466,7 +466,6 @@ self.mc.call(adr) self.mark_gc_roots(force_index) - regalloc.possibly_free_vars(args) # restore the arguments stored on the stack if result is not None: From noreply at buildbot.pypy.org Tue Jan 10 17:24:31 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 10 Jan 2012 17:24:31 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Rewrite to remove the emphasis on **per iteration** --- all the other Message-ID: <20120110162431.7450182110@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4009:749fa78eeb73 Date: 2012-01-10 17:24 +0100 
http://bitbucket.org/pypy/extradoc/changeset/749fa78eeb73/ Log: Rewrite to remove the emphasis on **per iteration** --- all the other numbers are also per iteration. diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -64,8 +64,8 @@ advantage of the knowledge that the data is independent, but it is in the same ballpark - **15% - 170%** slower than C, but the algorithm you choose matters more than the language. By comparison, the slow versions take about **5.75s** -each on CPython 2.6 **per iteration**, and by estimating, are about **200x** -slower than the PyPy equivalent. I didn't measure the full run though :) +each on CPython 2.6 per iteration, and by estimating, would be about **200x** +slower than the PyPy equivalent if had the patience to measure the full run. The next step is to use NumPy expressions. The first problem we run into is that computing the error requires walking the entire array a second time. This From noreply at buildbot.pypy.org Tue Jan 10 17:25:33 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Tue, 10 Jan 2012 17:25:33 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: optimization fix Message-ID: <20120110162533.17BE482110@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51211:498b6ee337e9 Date: 2012-01-10 17:22 +0100 http://bitbucket.org/pypy/pypy/changeset/498b6ee337e9/ Log: optimization fix diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -691,6 +691,7 @@ for i in l: if i == obj: return True + return False return ListStrategy.contains(self, w_list, w_obj) def length(self, w_list): From noreply at buildbot.pypy.org Tue Jan 10 17:27:17 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 10 Jan 2012 17:27:17 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: less formal writing Message-ID: 
<20120110162717.6899282110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4010:fc2925740080 Date: 2012-01-10 10:25 -0600 http://bitbucket.org/pypy/extradoc/changeset/fc2925740080/ Log: less formal writing diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -3,47 +3,44 @@ Hello. -I'm pleased to inform you about the progress we have made on NumPyPy, both in -terms of completeness and performance. This post mostly deals with the -performance side and how far we have come so far. **Word of warning:** the -performance work on NumPyPy isn't done - we're maybe half way to where we want -to be and there are many trivial and not so trivial optimizations to be -performed. In fact we haven't even started to implement important -optimizations, like vectorization. +We're excited to let you know about some of the great progress we've made on +NumPyPy -- both completeness and performance. Here we'll mostly talk about the +performance side and how far we have come so far. **Word of warning:** this +work isn't done - we're maybe half way to where we want to be and there are +many trivial and not so trivial optimizations to be written. (For example, we +haven't even started to implement important optimizations, like vectorization.) Benchmark --------- -We choose a laplace transform, based on SciPy's `PerformancePython`_ wiki. +We chose a laplace transform, based on SciPy's `PerformancePython`_ wiki. Unfortunately, the different implementations on the wiki page accidentally use two different algorithms, which have different convergences, and very different performance characteristics on modern computers. As a result, we implemented -our own versions in C and Python (both with and without NumPy). The full source +our own versions in both C and Python (with and without NumPy). 
The full source can be found in `fijal's hack`_ repo, all these benchmarks were performed at revision 18502dbbcdb3. First, let me describe various algorithms used. Note that some of them contain PyPy-specific hacks to work around limitations in the current implementation. -These hacks will go away eventually and the performance should improve. It's -worth noting that while numerically the algorithms used are identical, the -exact data layout in memory differs between them. +These hacks will go away eventually and the performance will improve. +Numerically the algorithms used are identical, however exact data layout in +memory differs between them. -**Note on all the benchmarks:** they were all run once, but the performance is -very stable across runs. +**A note about all the benchmarks:** they were each run once, but the +performance is very stable across runs. Starting with the C version, it implements a dead simple laplace transform -using two loops and a double-reference memory (array of ``int*``). The double -reference does not matter for performance and two algorithms are implemented in -``inline-laplace.c`` and ``laplace.c``. They're both compiled with -``gcc 4.4.5`` at ``-O3``. +using two loops and double-reference memory (array of ``int*``). The double +reference does not matter for performance and the two algorithms are +implemented in ``inline-laplace.c`` and ``laplace.c``. They were both compiled +with ``gcc 4.4.5`` at ``-O3``. A straightforward version of those in Python is implemented in ``laplace.py`` using respectively ``inline_slow_time_step`` and ``slow_time_step``. ``slow_2_time_step`` does the same thing, except it copies arrays in-place instead of creating new copies. -(XXX: these are timed under PyPy?) 
- +-----------------------+----------------------+--------------------+ | bench | number of iterations | time per iteration | +-----------------------+----------------------+--------------------+ @@ -60,31 +57,31 @@ An important thing to notice here is that the data dependency in the inline version causes a huge slowdown for the C versions. This is already not too bad -for us, the braindead Python version takes longer and PyPy is not able to take -advantage of the knowledge that the data is independent, but it is in the same -ballpark - **15% - 170%** slower than C, but the algorithm you choose matters -more than the language. By comparison, the slow versions take about **5.75s** -each on CPython 2.6 **per iteration**, and by estimating, are about **200x** -slower than the PyPy equivalent. I didn't measure the full run though :) +for us though, the braindead Python version takes longer and PyPy is not able +to take advantage of the knowledge that the data is independent, but it is in +the same ballpark as the C versions - **15% - 170%** slower, but the algorithm +you choose matters more than the language. By comparison, the slow versions +take about **5.75s** each on CPython 2.6 **per iteration**, and by estimating, +are about **200x** slower than the PyPy equivalent. I didn't measure the full +run though :) The next step is to use NumPy expressions. The first problem we run into is that computing the error requires walking the entire array a second time. This is fairly inefficient in terms of cache access, so I took the liberty of computing the errors every 15 steps. This results in the convergence being -rounded to the nearest 15 iterations, but speeds things up considerably (XXX: -is this true?). ``numeric_time_step`` takes the most braindead approach of -replacing the array with itself, like this:: +rounded to the nearest 15 iterations, but speeds things up considerably. 
+``numeric_time_step`` takes the most braindead approach of replacing the array +with itself, like this:: u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 + (u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv -We need 3 arrays here - one is an intermediate (PyPy does not automatically -create intermediates for expressions), one is a copy for computing the error, -and one is the result. This works by chance, since in NumPy ``+`` or ``*`` -creates an intermediate, while NumPyPy avoids allocating the intermediate if -possible. +We need 3 arrays here - one is an intermediate (PyPy only needs one, for all of +those subexpressions), one is a copy for computing the error, and one is the +result. This works automatically, since in NumPy ``+`` or ``*`` creates an +intermediate, while NumPyPy avoids allocating the intermediate if possible. -``numeric_2_time_step`` works pretty much the same:: +``numeric_2_time_step`` works in pretty much the same way:: src = self.u self.u = src.copy() @@ -106,9 +103,9 @@ self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv -``numeric_4_time_step`` is the one that tries to resemble the C version more. +``numeric_4_time_step`` is the one that tries hardest to resemble the C version. Instead of doing an array copy, it actually notices that you can alternate -between two arrays. This is exactly what C version does. The +between two arrays. This is exactly what the C version does. The ``remove_invalidates`` call is a PyPy specific hack - we hope to remove this call in the near future, but in short it promises "I don't have any unbuilt intermediates that depend on the value of the argument", which means you don't @@ -121,11 +118,11 @@ self.u[1:-1, 1:-1] = ((src[0:-2, 1:-1] + src[2:, 1:-1])*dy2 + (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv -This one is the most equivalent to the C version. +This one is the most comparable to the C version. 
``numeric_5_time_step`` does the same thing, but notices you don't have to copy -the entire array, it's enough to just copy edges. This is an optimization that -was not done in the C version:: +the entire array, it's enough to just copy the edges. This is an optimization +that was not done in the C version:: remove_invalidates(self.old_u) remove_invalidates(self.u) @@ -140,8 +137,8 @@ Let's look at the table of runs. As before, ``gcc 4.4.5``, compiled at ``-O3``, and PyPy nightly 7bb8b38d8563, on an x86-64 machine. All of the numeric methods -run 226 steps each, slightly more than 219, rounding to the next 15 when the -error is computed. Comparison for PyPy and CPython: +run for 226 steps, slightly more than the 219, rounding to the next 15 when the +error is computed. +-----------------------+-------------+----------------+ | benchmark | PyPy | CPython | From noreply at buildbot.pypy.org Tue Jan 10 17:27:18 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 10 Jan 2012 17:27:18 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: resolved merge Message-ID: <20120110162718.85F4A82110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4011:75aa1ba6d29f Date: 2012-01-10 10:27 -0600 http://bitbucket.org/pypy/extradoc/changeset/75aa1ba6d29f/ Log: resolved merge diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -61,9 +61,9 @@ to take advantage of the knowledge that the data is independent, but it is in the same ballpark as the C versions - **15% - 170%** slower, but the algorithm you choose matters more than the language. By comparison, the slow versions -take about **5.75s** each on CPython 2.6 **per iteration**, and by estimating, -are about **200x** slower than the PyPy equivalent. 
I didn't measure the full -run though :) +take about **5.75s** each on CPython 2.6 per iteration, and by estimating, +are about **200x** slower than the PyPy equivalent, if I had the patience to +measure the full run. The next step is to use NumPy expressions. The first problem we run into is that computing the error requires walking the entire array a second time. This From noreply at buildbot.pypy.org Tue Jan 10 17:31:10 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 10 Jan 2012 17:31:10 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: quantify "faster than C" Message-ID: <20120110163110.F093982110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4012:ad6f9cb35d27 Date: 2012-01-10 18:30 +0200 http://bitbucket.org/pypy/extradoc/changeset/ad6f9cb35d27/ Log: quantify "faster than C" diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -159,8 +159,9 @@ faster than NumPy on CPython, almost always by more than 2x on this relatively real-world example. This is not the end though, in fact it's hardly the beginning: as we continue work, we hope to make even much better use of the -high level information that we have, in order to eventually outperform C, -hopefully in 2012. Stay tuned. +high level information that we have. Looking at the generated assembler by +gcc in this example it's pretty clear we can outperform it by having a much +better aliasing information and hence a better possibilities for vectorization. 
Cheers, fijal From noreply at buildbot.pypy.org Tue Jan 10 17:35:37 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 10 Jan 2012 17:35:37 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add link Message-ID: <20120110163537.8E97482110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4013:1f530d01ba87 Date: 2012-01-10 18:35 +0200 http://bitbucket.org/pypy/extradoc/changeset/1f530d01ba87/ Log: add link diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -167,3 +167,4 @@ fijal .. _`PerformancePython`: http://www.scipy.org/PerformancePython +.. _`fijal's hack`: https://bitbucket.org/fijal/hack2/src/default/bench/laplace From noreply at buildbot.pypy.org Tue Jan 10 17:36:17 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 10 Jan 2012 17:36:17 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: wording. Message-ID: <20120110163617.8EC9482110@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4014:9566e67df82c Date: 2012-01-10 17:36 +0100 http://bitbucket.org/pypy/extradoc/changeset/9566e67df82c/ Log: wording. diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -160,8 +160,9 @@ real-world example. This is not the end though, in fact it's hardly the beginning: as we continue work, we hope to make even much better use of the high level information that we have. Looking at the generated assembler by -gcc in this example it's pretty clear we can outperform it by having a much -better aliasing information and hence a better possibilities for vectorization. +gcc in this example it's pretty clear we can outperform it, thanks to better +aliasing information and hence better possibilities for vectorization. +Stay tuned. 
Cheers, fijal From noreply at buildbot.pypy.org Tue Jan 10 17:53:45 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 10 Jan 2012 17:53:45 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: typo Message-ID: <20120110165345.87D4482110@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4015:24ad6171712f Date: 2012-01-10 17:53 +0100 http://bitbucket.org/pypy/extradoc/changeset/24ad6171712f/ Log: typo diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -152,7 +152,7 @@ | numeric 4 | 11ms | 31ms | +-----------------------+-------------+----------------+ | numeric 5 | 9.3ms | 21ms | -+-----------------------+-------------+----------------- ++-----------------------+-------------+----------------+ We think that these preliminary results are pretty good, they're not as fast as the C version (or as fast as we'd like them to be), but we're already much From noreply at buildbot.pypy.org Tue Jan 10 19:22:58 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 10 Jan 2012 19:22:58 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: English language cleanups. Message-ID: <20120110182258.A8F4E82110@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: extradoc Changeset: r4016:0d508d74845b Date: 2012-01-10 13:22 -0500 http://bitbucket.org/pypy/extradoc/changeset/0d508d74845b/ Log: English language cleanups. diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -4,9 +4,10 @@ Hello. We're excited to let you know about some of the great progress we've made on -NumPyPy -- both completeness and performance. Here we'll mostly talk about the -performance side and how far we have come so far. **Word of warning:** this -work isn't done - we're maybe half way to where we want to be and there are +NumPyPy: both completeness and performance. 
In this blog entry we mostly +will talk about performance and how much progress we have made so far. +**Word of warning:** this +work isn't done -- we're maybe half way to where we want to be and there are many trivial and not so trivial optimizations to be written. (For example, we haven't even started to implement important optimizations, like vectorization.) @@ -27,10 +28,10 @@ Numerically the algorithms used are identical, however exact data layout in memory differs between them. -**A note about all the benchmarks:** they were each run once, but the +**A note about all the benchmarks:** they each were run once, but the performance is very stable across runs. -Starting with the C version, it implements a dead simple laplace transform +Starting with the C version, it implements a trivial laplace transform using two loops and double-reference memory (array of ``int*``). The double reference does not matter for performance and the two algorithms are implemented in ``inline-laplace.c`` and ``laplace.c``. They were both compiled @@ -55,13 +56,14 @@ | inline_slow python | 278 | 23.7 | +-----------------------+----------------------+--------------------+ -An important thing to notice here is that the data dependency in the inline -version causes a huge slowdown for the C versions. This is already not too bad -for us though, the braindead Python version takes longer and PyPy is not able -to take advantage of the knowledge that the data is independent, but it is in -the same ballpark as the C versions - **15% - 170%** slower, but the algorithm -you choose matters more than the language. By comparison, the slow versions -take about **5.75s** each on CPython 2.6 per iteration, and by estimating, +An important thing to notice is the data dependency of the inline +version causes a huge slowdown for the C versions. 
This is not a severe +disadvantage for us though -- the brain-dead Python version takes longer +and PyPy is not able to take advantage of the knowledge that the data is +independent. The results are in the same ballpark as the C versions -- +**15% - 170%** slower, but the algorithm +one chooses matters more than the language. By comparison, the slow versions +take about **5.75s** each on CPython 2.6 per iteration, and by estimation, are about **200x** slower than the PyPy equivalent, if I had the patience to measure the full run. @@ -78,7 +80,7 @@ We need 3 arrays here - one is an intermediate (PyPy only needs one, for all of those subexpressions), one is a copy for computing the error, and one is the -result. This works automatically, since in NumPy ``+`` or ``*`` creates an +result. This works automatically because in NumPy ``+`` or ``*`` creates an intermediate, while NumPyPy avoids allocating the intermediate if possible. ``numeric_2_time_step`` works in pretty much the same way:: @@ -90,7 +92,7 @@ except the copy is now explicit rather than implicit. -``numeric_3_time_step`` does the same thing, but notices you don't have to copy +``numeric_3_time_step`` does the same thing, but notices one doesn't have to copy the entire array, it's enough to copy the border pieces and fill rest with zeros:: @@ -104,12 +106,12 @@ (src[1:-1,0:-2] + src[1:-1, 2:])*dx2)*dnr_inv ``numeric_4_time_step`` is the one that tries hardest to resemble the C version. -Instead of doing an array copy, it actually notices that you can alternate +Instead of doing an array copy, it actually notices that one can alternate between two arrays. This is exactly what the C version does. 
The ``remove_invalidates`` call is a PyPy specific hack - we hope to remove this -call in the near future, but in short it promises "I don't have any unbuilt -intermediates that depend on the value of the argument", which means you don't -have to compute sub-expressions you're not actually using:: +call in the near future, but, in short, it promises "I don't have any unbuilt +intermediates that depend on the value of the argument", which means one doesn't +have to compute sub-expressions one is not actually using:: remove_invalidates(self.old_u) remove_invalidates(self.u) @@ -120,7 +122,7 @@ This one is the most comparable to the C version. -``numeric_5_time_step`` does the same thing, but notices you don't have to copy +``numeric_5_time_step`` does the same thing, but notices one doesn't have to copy the entire array, it's enough to just copy the edges. This is an optimization that was not done in the C version:: @@ -158,9 +160,9 @@ the C version (or as fast as we'd like them to be), but we're already much faster than NumPy on CPython, almost always by more than 2x on this relatively real-world example. This is not the end though, in fact it's hardly the -beginning: as we continue work, we hope to make even much better use of the +beginning! As we continue work, we hope to make even more use of the high level information that we have. Looking at the generated assembler by -gcc in this example it's pretty clear we can outperform it, thanks to better +gcc in this example, it's pretty clear we can outperform it, thanks to better aliasing information and hence better possibilities for vectorization. Stay tuned. From noreply at buildbot.pypy.org Tue Jan 10 19:45:00 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Tue, 10 Jan 2012 19:45:00 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: More English improvements and a few commas. 
Message-ID: <20120110184500.75A9082110@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: extradoc Changeset: r4017:15a3491e715a Date: 2012-01-10 13:44 -0500 http://bitbucket.org/pypy/extradoc/changeset/15a3491e715a/ Log: More English improvements and a few commas. diff --git a/blog/draft/laplace.rst b/blog/draft/laplace.rst --- a/blog/draft/laplace.rst +++ b/blog/draft/laplace.rst @@ -38,7 +38,7 @@ with ``gcc 4.4.5`` at ``-O3``. A straightforward version of those in Python is implemented in ``laplace.py`` -using respectively ``inline_slow_time_step`` and ``slow_time_step``. +using, respectively, ``inline_slow_time_step`` and ``slow_time_step``. ``slow_2_time_step`` does the same thing, except it copies arrays in-place instead of creating new copies. @@ -63,7 +63,7 @@ independent. The results are in the same ballpark as the C versions -- **15% - 170%** slower, but the algorithm one chooses matters more than the language. By comparison, the slow versions -take about **5.75s** each on CPython 2.6 per iteration, and by estimation, +take about **5.75s** each on CPython 2.6 per iteration and, by estimation, are about **200x** slower than the PyPy equivalent, if I had the patience to measure the full run. @@ -78,7 +78,7 @@ u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 + (u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv -We need 3 arrays here - one is an intermediate (PyPy only needs one, for all of +We need 3 arrays here -- one is an intermediate (PyPy only needs one, for all of those subexpressions), one is a copy for computing the error, and one is the result. This works automatically because in NumPy ``+`` or ``*`` creates an intermediate, while NumPyPy avoids allocating the intermediate if possible. @@ -92,7 +92,7 @@ except the copy is now explicit rather than implicit. 
-``numeric_3_time_step`` does the same thing, but notices one doesn't have to copy +``numeric_3_time_step`` does the same thing, but notice one doesn't have to copy the entire array, it's enough to copy the border pieces and fill rest with zeros:: @@ -156,13 +156,13 @@ | numeric 5 | 9.3ms | 21ms | +-----------------------+-------------+----------------+ -We think that these preliminary results are pretty good, they're not as fast as +We think that these preliminary results are pretty good. They're not as fast as the C version (or as fast as we'd like them to be), but we're already much -faster than NumPy on CPython, almost always by more than 2x on this relatively -real-world example. This is not the end though, in fact it's hardly the +faster than NumPy on CPython -- almost always by more than 2x on this relatively +real-world example. This is not the end, though. In fact, it's hardly the beginning! As we continue work, we hope to make even more use of the -high level information that we have. Looking at the generated assembler by -gcc in this example, it's pretty clear we can outperform it, thanks to better +high level information that we have. Looking at the assembler generated by +gcc for this example, it's pretty clear we can outperform it thanks to better aliasing information and hence better possibilities for vectorization. Stay tuned. 
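[Archive note, not part of any commit above.] The patches in this thread keep reworking the same 5-point Jacobi update, ``u[1:-1, 1:-1] = ((u[0:-2, 1:-1] + u[2:, 1:-1])*dy2 + (u[1:-1,0:-2] + u[1:-1, 2:])*dx2)*dnr_inv``. As a hedged sketch of what the benchmarked ``slow_time_step``-style loop version does (assumed shape only -- the real ``laplace.py`` lives in fijal's hack2 repo), one step on a grid of nested lists with fixed boundaries looks roughly like:

```python
def time_step(u, dx2, dy2):
    """One Jacobi relaxation of the interior points.

    Returns (new_grid, error); the error is fused into the same pass,
    which is the cache advantage the loop versions have over the NumPy
    expressions discussed above (those need a second walk for the error).
    """
    dnr_inv = 0.5 / (dx2 + dy2)
    nx, ny = len(u), len(u[0])
    new_u = [row[:] for row in u]   # copy; boundary rows/columns stay fixed
    err = 0.0
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            v = ((u[i - 1][j] + u[i + 1][j]) * dy2 +
                 (u[i][j - 1] + u[i][j + 1]) * dx2) * dnr_inv
            err += (v - u[i][j]) ** 2
            new_u[i][j] = v
    return new_u, err ** 0.5
```

Iterating this until the returned error drops below a threshold is the convergence loop the timings count; the NumPy one-liner above is the vectorized form of the same interior update.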
From noreply at buildbot.pypy.org Tue Jan 10 20:19:34 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 10 Jan 2012 20:19:34 +0100 (CET) Subject: [pypy-commit] pypy default: add a note about special methods Message-ID: <20120110191934.D8F4582110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51212:cc4956b9891d Date: 2012-01-10 15:02 +0200 http://bitbucket.org/pypy/pypy/changeset/cc4956b9891d/ Log: add a note about special methods diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -341,7 +341,8 @@ **objects** - Normal rules apply. + Normal rules apply. Special methods are not honoured, except ``__init__`` and + ``__del__``. This layout makes the number of types to take care about quite limited. From noreply at buildbot.pypy.org Tue Jan 10 20:19:36 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 10 Jan 2012 20:19:36 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120110191936.10D9C82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51213:6b798036894a Date: 2012-01-10 21:19 +0200 http://bitbucket.org/pypy/pypy/changeset/6b798036894a/ Log: merge diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -12,7 +12,7 @@ PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest +.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @@ -23,6 +23,7 @@ @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @@ -79,6 +80,11 @@ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man" + changes: python config/generate.py $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -197,3 +197,10 @@ # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = {'http://docs.python.org/': None} +# -- Options for manpage output------------------------------------------------- + +man_pages = [ + ('man/pypy.1', 'pypy', + u'fast, compliant alternative implementation of the Python language', + u'The PyPy Project', 1) +] diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst --- a/pypy/doc/extradoc.rst +++ b/pypy/doc/extradoc.rst @@ -8,6 +8,9 @@ *Articles about PyPy published so far, most recent first:* (bibtex_ file) +* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_, + C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo + * `Allocation Removal by Partial Evaluation in a Tracing JIT`_, C.F. Bolz, A. Cuni, M. Fijalkowski, M. 
Leuschel, S. Pedroni, A. Rigo @@ -50,6 +53,9 @@ *Other research using PyPy (as far as we know it):* +* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_, + N. Riley and C. Zilles + * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_, C. Bruni and T. Verwaest @@ -65,6 +71,7 @@ .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib +.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf .. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf @@ -74,6 +81,7 @@ .. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf .. _`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07 .. _`EU Reports`: index-report.html +.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz .. 
_`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7 diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/man/pypy.1.rst @@ -0,0 +1,90 @@ +====== + pypy +====== + +SYNOPSIS +======== + +``pypy`` [*options*] +[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ] +[*arg*\ ...] + +OPTIONS +======= + +-i + Inspect interactively after running script. + +-O + Dummy optimization flag for compatibility with C Python. + +-c *cmd* + Program passed in as CMD (terminates option list). + +-S + Do not ``import site`` on initialization. + +-u + Unbuffered binary ``stdout`` and ``stderr``. + +-h, --help + Show a help message and exit. + +-m *mod* + Library module to be run as a script (terminates option list). + +-W *arg* + Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*). + +-E + Ignore environment variables (such as ``PYTHONPATH``). + +--version + Print the PyPy version. + +--info + Print translation information about this PyPy executable. + +--jit *arg* + Low level JIT parameters. Format is + *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...] + + ``off`` + Disable the JIT. + + ``threshold=``\ *value* + Number of times a loop has to run for it to become hot. + + ``function_threshold=``\ *value* + Number of times a function must run for it to become traced from + start. + + ``inlining=``\ *value* + Inline python functions or not (``1``/``0``). + + ``loop_longevity=``\ *value* + A parameter controlling how long loops will be kept before being + freed, an estimate. + + ``max_retrace_guards=``\ *value* + Number of extra guards a retrace can cause. + + ``retrace_limit=``\ *value* + How many times we can try retracing before giving up. + + ``trace_eagerness=``\ *value* + Number of times a guard has to fail before we start compiling a + bridge. 
+ + ``trace_limit=``\ *value* + Number of recorded operations before we abort tracing with + ``ABORT_TRACE_TOO_LONG``. + + ``enable_opts=``\ *value* + Optimizations to enabled or ``all``. + Warning, this option is dangerous, and should be avoided. + +SEE ALSO +======== + +**python**\ (1) diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -360,12 +363,36 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. 
+ assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -390,12 +390,12 @@ 'threshold': 'number of times a loop has to run for it to become hot', 'function_threshold': 'number of times a function must run for it to become traced from start', 'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge', - 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TRACE_TOO_LONG', + 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TOO_LONG', 'inlining': 'inline python functions or not (1/0)', 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate', 'retrace_limit': 'how many times we can try retracing before giving up', 'max_retrace_guards': 'number of extra guards a retrace can cause', - 'enable_opts': 'optimizations to enabled or all, INTERNAL USE ONLY' + 'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY' } PARAMETERS = {'threshold': 1039, # just above 1024, prime From noreply at buildbot.pypy.org Tue Jan 10 20:54:27 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 10 Jan 2012 20:54:27 +0100 (CET) Subject: [pypy-commit] pypy default: remove a horribly outdated script Message-ID: <20120110195427.D7ED982110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51214:5f2580ed4505 Date: 2012-01-10 21:47 +0200 http://bitbucket.org/pypy/pypy/changeset/5f2580ed4505/ Log: remove a 
horribly outdated script

diff --git a/pypy/tool/release_dates.py b/pypy/tool/release_dates.py
deleted file mode 100644
--- a/pypy/tool/release_dates.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import py
-
-release_URL = 'http://codespeak.net/svn/pypy/release/'
-releases = [r[:-2] for r in py.std.os.popen('svn list ' + release_URL).readlines() if 'x' not in r]
-
-f = file('release_dates.txt', 'w')
-print >> f, 'date, release'
-for release in releases:
-    for s in py.std.os.popen('svn info ' + release_URL + release).readlines():
-        if s.startswith('Last Changed Date'):
-            date = s.split()[3]
-            print >> f, date, ',', release
-            break
-f.close()

From noreply at buildbot.pypy.org  Tue Jan 10 21:23:09 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 10 Jan 2012 21:23:09 +0100 (CET)
Subject: [pypy-commit] pypy default: update LICENSE file. For future readers, run:
Message-ID: <20120110202309.411B882110@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: 
Changeset: r51215:425ed28a95d9
Date: 2012-01-10 22:22 +0200
http://bitbucket.org/pypy/pypy/changeset/425ed28a95d9/

Log: update LICENSE file.
For future readers, run: hg churn -c -t "{author}" | sed -e 's/ <.*//' | sed -e 's/[0-9][0-9]*//' and clean up the result diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -84,34 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. 
Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py --- a/pypy/doc/tool/makecontributor.py +++ b/pypy/doc/tool/makecontributor.py @@ -6,7 +6,7 @@ import py # this file is useless, use the following commandline instead: -# hg churn -c -t "{author}" | sed -e 's/ <.*//' +# hg churn -c -t "{author}" | sed -e 's/ <.*//' | sed -e 's/[0-9]+//' try: path = py.std.sys.argv[1] From noreply at buildbot.pypy.org Tue Jan 10 21:23:42 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 10 Jan 2012 21:23:42 +0100 (CET) Subject: [pypy-commit] pypy default: remove now completely useless file Message-ID: <20120110202342.88C3A82110@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51216:ca3f367e84af Date: 2012-01-10 22:23 +0200 http://bitbucket.org/pypy/pypy/changeset/ca3f367e84af/ Log: remove now completely useless file 
diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py deleted file mode 100644 --- a/pypy/doc/tool/makecontributor.py +++ /dev/null @@ -1,47 +0,0 @@ -""" - -generates a contributor list - -""" -import py - -# this file is useless, use the following commandline instead: -# hg churn -c -t "{author}" | sed -e 's/ <.*//' | sed -e 's/[0-9]+//' - -try: - path = py.std.sys.argv[1] -except IndexError: - print "usage: %s ROOTPATH" %(py.std.sys.argv[0]) - raise SystemExit, 1 - -d = {} - -for logentry in py.path.svnwc(path).log(): - a = logentry.author - if a in d: - d[a] += 1 - else: - d[a] = 1 - -items = d.items() -items.sort(lambda x,y: -cmp(x[1], y[1])) - -import uconf # http://codespeak.net/svn/uconf/dist/uconf - -# Authors that don't want to be listed -excluded = set("anna gintas ignas".split()) -cutoff = 5 # cutoff for authors in the LICENSE file -mark = False -for author, count in items: - if author in excluded: - continue - user = uconf.system.User(author) - try: - realname = user.realname.strip() - except KeyError: - realname = author - if not mark and count < cutoff: - mark = True - print '-'*60 - print " ", realname - #print count, " ", author From noreply at buildbot.pypy.org Tue Jan 10 21:51:17 2012 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 10 Jan 2012 21:51:17 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: fixed mean, added funky tests in ReduceSignature Message-ID: <20120110205117.086B082110@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51217:20bbff5d323d Date: 2012-01-10 22:49 +0200 http://bitbucket.org/pypy/pypy/changeset/20bbff5d323d/ Log: fixed mean, added funky tests in ReduceSignature diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -575,8 +575,12 @@ def descr_mean(self, space, w_dim=None): if space.is_w(w_dim, 
space.w_None): w_dim = space.wrap(-1) - return space.div(self.descr_sum_promote(space, w_dim), - space.wrap(self.size)) + dim = space.int_w(w_dim) + if dim < 0: + w_denom = space.wrap(self.size) + else: + w_denom = space.wrap(self.shape[dim]) + return space.div(self.descr_sum_promote(space, w_dim), w_denom) def descr_nonzero(self, space): if self.size > 1: @@ -777,7 +781,7 @@ if self.forced_result is not None: return self.forced_result.create_sig(res_shape) return signature.ReduceSignature(self.binfunc, self.name, self.dtype, - signature.ScalarSignature(self.dtype), + signature.ViewSignature(self.dtype), self.values.create_sig(res_shape)) def get_identity(self, sig, frame, shapelen): diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -147,6 +147,7 @@ from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) + assert concr.dtype is self.dtype self.array_no = _add_ptr_to_cache(concr.storage, cache) def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): @@ -168,6 +169,7 @@ def eval(self, frame, arr): iter = frame.iterators[self.iter_no] + assert arr.dtype is self.dtype return self.dtype.getitem(frame.arrays[self.array_no], iter.offset) class ScalarSignature(ConcreteSignature): @@ -346,10 +348,20 @@ self.right._invent_numbering(cache, allnumbers) def _invent_array_numbering(self, arr, cache): - self.right._invent_array_numbering(arr, cache) + #Could be called with arr as output or arr as input. + from pypy.module.micronumpy.interp_numarray import Reduce + if isinstance(arr, Reduce): + self.left._invent_array_numbering(arr, cache) + else: + self.right._invent_array_numbering(arr, cache) def eval(self, frame, arr): - return self.right.eval(frame, arr) + #Could be called with arr as output or arr as input. 
+ from pypy.module.micronumpy.interp_numarray import Reduce + if isinstance(arr, Reduce): + return self.left.eval(frame, arr) + else: + return self.right.eval(frame, arr) def debug_repr(self): return 'ReduceSig(%s, %s, %s)' % (self.name, self.left.debug_repr(), From notifications-noreply at bitbucket.org Wed Jan 11 03:15:32 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Wed, 11 Jan 2012 02:15:32 -0000 Subject: [pypy-commit] Notification: pypy-sandbox-4-pycode Message-ID: <20120111021532.22777.15991@bitbucket13.managed.contegix.com> You have received a notification from trampgeek. Hi, I forked pypy. My fork is at https://bitbucket.org/trampgeek/pypy-sandbox-4-pycode. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Wed Jan 11 05:51:23 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 11 Jan 2012 05:51:23 +0100 (CET) Subject: [pypy-commit] pypy default: explicit relative import Message-ID: <20120111045123.DE67F82110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51218:0a73918924c2 Date: 2012-01-10 22:51 -0600 http://bitbucket.org/pypy/pypy/changeset/0a73918924c2/ Log: explicit relative import diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,2 @@ from _numpypy import * -from fromnumeric import * +from .fromnumeric import * From noreply at buildbot.pypy.org Wed Jan 11 14:09:59 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 11 Jan 2012 14:09:59 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added listview_str/int for setobjects to later create lists from sets without wrapping/unwrapping the elements Message-ID: <20120111130959.6802E82C03@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51219:764907052fed Date: 2012-01-10 17:43 +0100 http://bitbucket.org/pypy/pypy/changeset/764907052fed/ 
Log: added listview_str/int for setobjects to later create lists from sets without wrapping/unwrapping the elements diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -438,11 +438,15 @@ def listview_str(self, w_obj): if isinstance(w_obj, W_ListObject): return w_obj.getitems_str() + if isinstance(w_obj, W_SetObject): + return w_obj.listview_str() return None def listview_int(self, w_obj): if isinstance(w_obj, W_ListObject): return w_obj.getitems_int() + if isinstance(w_obj, W_SetObject): + return w_obj.listview_int() return None def sliceindices(self, w_slice, w_length): diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -63,6 +63,7 @@ # _____________ strategy methods ________________ + def clear(self): """ Removes all elements from the set. """ self.strategy.clear(self) @@ -87,6 +88,14 @@ """ Returns a dict with all elements of the set. Needed only for switching to ObjectSetStrategy. """ return self.strategy.getdict_w(self) + def listview_str(self): + """ If this is a string set return its contents as a list of uwnrapped strings. Otherwise return None. """ + return self.strategy.listview_str(self) + + def listview_int(self): + """ If this is an int set return its contents as a list of uwnrapped ints. Otherwise return None. """ + return self.strategy.listview_int(self) + def get_storage_copy(self): """ Returns a copy of the storage. Needed when we want to clone all elements from one set and put them into another. """ @@ -189,6 +198,12 @@ """ Returns an empty storage (erased) object. 
Used to initialize an empty set.""" raise NotImplementedError + def listview_str(self, w_set): + return None + + def listview_int(self, w_set): + return None + #def erase(self, storage): # raise NotImplementedError @@ -694,6 +709,9 @@ def get_empty_dict(self): return {} + def listview_str(self, w_set): + return self.unerase(w_set.sstorage).keys() + def is_correct_type(self, w_key): return type(w_key) is W_StringObject @@ -724,6 +742,9 @@ def get_empty_dict(self): return {} + def listview_int(self, w_set): + return self.unerase(w_set.sstorage).keys() + def is_correct_type(self, w_key): from pypy.objspace.std.intobject import W_IntObject return type(w_key) is W_IntObject diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -123,6 +123,19 @@ # changed cached object, need to change it back for other tests to pass intstr.get_storage_from_list = tmp_func + def test_listview_str_int_on_set(self): + w = self.space.wrap + + w_a = W_SetObject(self.space) + _initialize_set(self.space, w_a, w("abcdefg")) + assert sorted(self.space.listview_str(w_a)) == list("abcdefg") + assert self.space.listview_int(w_a) is None + + w_b = W_SetObject(self.space) + _initialize_set(self.space, w_b, self.space.newlist([w(1),w(2),w(3),w(4),w(5)])) + assert sorted(self.space.listview_int(w_b)) == [1,2,3,4,5] + assert self.space.listview_str(w_b) is None + class AppTestAppSetTest: def setup_class(self): From noreply at buildbot.pypy.org Wed Jan 11 14:10:00 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 11 Jan 2012 14:10:00 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: (cfbolz, l.diekmann): restructure some code: the speed hack in FastListIterator Message-ID: <20120111131000.96EBB82C03@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51220:95d8ecd1711f Date: 2012-01-10 18:15 +0100 
http://bitbucket.org/pypy/pypy/changeset/95d8ecd1711f/ Log: (cfbolz, l.diekmann): restructure some code: the speed hack in FastListIterator is no longer there, so we don't need to use extend here. Also, why was the generator path hidden in _init_from_iterable? diff --git a/pypy/objspace/std/iterobject.py b/pypy/objspace/std/iterobject.py --- a/pypy/objspace/std/iterobject.py +++ b/pypy/objspace/std/iterobject.py @@ -29,9 +29,8 @@ class W_SeqIterObject(W_AbstractSeqIterObject): """Sequence iterator implementation for general sequences.""" -class W_FastListIterObject(W_AbstractSeqIterObject): - """Sequence iterator specialized for lists, accessing - directly their RPython-level list of wrapped objects. +class W_FastListIterObject(W_AbstractSeqIterObject): # XXX still needed + """Sequence iterator specialized for lists. """ class W_FastTupleIterObject(W_AbstractSeqIterObject): diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -1035,26 +1035,26 @@ # this is on the silly side w_iterable, = __args__.parse_obj( None, 'list', init_signature, init_defaults) - w_list.__init__(space, []) if w_iterable is not None: - # unfortunately this is duplicating space.unpackiterable to avoid - # assigning a new RPython list to 'wrappeditems', which defeats the - # W_FastListIterObject optimization. 
if isinstance(w_iterable, W_ListObject): - w_list.extend(w_iterable) + w_iterable.copy_into(w_list) + return elif isinstance(w_iterable, W_TupleObject): - w_list.extend(W_ListObject(space, w_iterable.wrappeditems[:])) - else: - _init_from_iterable(space, w_list, w_iterable) + W_ListObject(space, w_iterable.wrappeditems[:]).copy_into(w_list) + return + w_list.__init__(space, []) + # xxx special hack for speed + from pypy.interpreter.generator import GeneratorIterator + if isinstance(w_iterable, GeneratorIterator): + w_iterable.unpack_into_w(w_list) + return + # /xxx + _init_from_iterable(space, w_list, w_iterable) + else: + w_list.__init__(space, []) def _init_from_iterable(space, w_list, w_iterable): # in its own function to make the JIT look into init__List - # xxx special hack for speed - from pypy.interpreter.generator import GeneratorIterator - if isinstance(w_iterable, GeneratorIterator): - w_iterable.unpack_into_w(w_list) - return - # /xxx w_iterator = space.iter(w_iterable) while True: try: From noreply at buildbot.pypy.org Wed Jan 11 14:10:01 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 11 Jan 2012 14:10:01 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added fastpath for initialization of lists with iterables using int- or stringstrategy Message-ID: <20120111131001.D860F82C03@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51221:29acb5e48ac9 Date: 2012-01-11 14:09 +0100 http://bitbucket.org/pypy/pypy/changeset/29acb5e48ac9/ Log: added fastpath for initialization of lists with iterables using int- or stringstrategy diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -1042,6 +1042,21 @@ elif isinstance(w_iterable, W_TupleObject): W_ListObject(space, w_iterable.wrappeditems[:]).copy_into(w_list) return + + intlist = space.listview_int(w_iterable) + if intlist is not None: + 
w_list.strategy = strategy = space.fromcache(IntegerListStrategy) + # need to copy because intlist can share with w_iterable + w_list.lstorage = strategy.erase(intlist[:]) + return + + strlist = space.listview_str(w_iterable) + if strlist is not None: + w_list.strategy = strategy = space.fromcache(StringListStrategy) + # need to copy because intlist can share with w_iterable + w_list.lstorage = strategy.erase(strlist[:]) + return + w_list.__init__(space, []) # xxx special hack for speed from pypy.interpreter.generator import GeneratorIterator diff --git a/pypy/objspace/std/test/test_liststrategies.py b/pypy/objspace/std/test/test_liststrategies.py --- a/pypy/objspace/std/test/test_liststrategies.py +++ b/pypy/objspace/std/test/test_liststrategies.py @@ -463,6 +463,34 @@ w_res = listobject.list_pop__List_ANY(space, w_l, space.w_None) # does not crash assert space.unwrap(w_res) == 3 + def test_create_list_from_set(self): + from pypy.objspace.std.setobject import W_SetObject + from pypy.objspace.std.setobject import _initialize_set + + space = self.space + w = space.wrap + + w_l = W_ListObject(space, [space.wrap(1), space.wrap(2), space.wrap(3)]) + + w_set = W_SetObject(self.space) + _initialize_set(self.space, w_set, w_l) + w_set.iter = None # make sure fast path is used + + w_l2 = W_ListObject(space, []) + space.call_method(w_l2, "__init__", w_set) + + w_l2.sort(False) + assert space.eq_w(w_l, w_l2) + + w_l = W_ListObject(space, [space.wrap("a"), space.wrap("b"), space.wrap("c")]) + _initialize_set(self.space, w_set, w_l) + + space.call_method(w_l2, "__init__", w_set) + + w_l2.sort(False) + assert space.eq_w(w_l, w_l2) + + class TestW_ListStrategiesDisabled: def setup_class(cls): From noreply at buildbot.pypy.org Wed Jan 11 14:24:41 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 11 Jan 2012 14:24:41 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: (cfbolz, l.diekmann) added fastpath for dict.fromkeys with iterable using stringstrategy 
Message-ID: <20120111132441.B669E82C03@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51222:b8be45d7d460 Date: 2012-01-11 14:24 +0100 http://bitbucket.org/pypy/pypy/changeset/b8be45d7d460/ Log: (cfbolz, l.diekmann) added fastpath for dict.fromkeys with iterable using stringstrategy diff --git a/pypy/objspace/std/dicttype.py b/pypy/objspace/std/dicttype.py --- a/pypy/objspace/std/dicttype.py +++ b/pypy/objspace/std/dicttype.py @@ -62,8 +62,14 @@ w_fill = space.w_None if space.is_w(w_type, space.w_dict): w_dict = W_DictMultiObject.allocate_and_init_instance(space, w_type) - for w_key in space.listview(w_keys): - w_dict.setitem(w_key, w_fill) + + strlist = space.listview_str(w_keys) + if strlist is not None: + for key in strlist: + w_dict.setitem_str(key, w_fill) + else: + for w_key in space.listview(w_keys): + w_dict.setitem(w_key, w_fill) else: w_dict = space.call_function(w_type) for w_key in space.listview(w_keys): diff --git a/pypy/objspace/std/test/test_dictmultiobject.py b/pypy/objspace/std/test/test_dictmultiobject.py --- a/pypy/objspace/std/test/test_dictmultiobject.py +++ b/pypy/objspace/std/test/test_dictmultiobject.py @@ -131,6 +131,16 @@ assert self.space.eq_w(space.call_function(get, w("33")), w(None)) assert self.space.eq_w(space.call_function(get, w("33"), w(44)), w(44)) + def test_fromkeys_fastpath(self): + space = self.space + w = space.wrap + + w_l = self.space.newlist([w("a"),w("b")]) + w_l.getitems = None + w_d = space.call_method(space.w_dict, "fromkeys", w_l) + + assert space.eq_w(w_d.getitem_str("a"), space.w_None) + assert space.eq_w(w_d.getitem_str("b"), space.w_None) class AppTest_DictObject: def setup_class(cls): From noreply at buildbot.pypy.org Wed Jan 11 14:25:02 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 11 Jan 2012 14:25:02 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: use temporary scratch register in emit_getarrayitem_gc and emit_setarrayitem_gc Message-ID: 
<20120111132502.991F782C03@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51223:7ccc3bb51335 Date: 2012-01-11 14:24 +0100 http://bitbucket.org/pypy/pypy/changeset/7ccc3bb51335/ Log: use temporary scratch register in emit_getarrayitem_gc and emit_setarrayitem_gc diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -549,8 +549,15 @@ def emit_setarrayitem_gc(self, op, arglocs, regalloc): value_loc, base_loc, ofs_loc, scale, ofs = arglocs assert ofs_loc.is_reg() + + # use r20 as scratch reg + SAVE_SCRATCH = r.r20 + # save value temporarily + self.mc.mtctr(SAVE_SCRATCH.value) + if scale.value > 0: - scale_loc = r.SCRATCH + #scale_loc = r.SCRATCH + scale_loc = SAVE_SCRATCH if IS_PPC_32: self.mc.slwi(scale_loc.value, ofs_loc.value, scale.value) else: @@ -574,13 +581,23 @@ else: assert 0, "scale %s not supported" % (scale.value) + # restore value of SAVE_SCRATCH + self.mc.mfctr(SAVE_SCRATCH.value) + emit_setarrayitem_raw = emit_setarrayitem_gc def emit_getarrayitem_gc(self, op, arglocs, regalloc): res, base_loc, ofs_loc, scale, ofs = arglocs assert ofs_loc.is_reg() + + # use r20 as scratch reg + SAVE_SCRATCH = r.r20 + # save value temporarily + self.mc.mtctr(SAVE_SCRATCH.value) + if scale.value > 0: - scale_loc = r.SCRATCH + #scale_loc = r.SCRATCH + scale_loc = SAVE_SCRATCH if IS_PPC_32: self.mc.slwi(scale_loc.value, ofs_loc.value, scale.value) else: @@ -604,6 +621,9 @@ else: assert 0 + # restore value of SAVE_SCRATCH + self.mc.mfctr(SAVE_SCRATCH.value) + #XXX Hack, Hack, Hack if not we_are_translated(): descr = op.getdescr() From noreply at buildbot.pypy.org Wed Jan 11 14:44:43 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 11 Jan 2012 14:44:43 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: (cfbolz, l.diekmann) implemented listview_str on dicts Message-ID: 
<20120111134443.D604582C03@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51224:b1a065c4225d Date: 2012-01-11 14:40 +0100 http://bitbucket.org/pypy/pypy/changeset/b1a065c4225d/ Log: (cfbolz, l.diekmann) implemented listview_str on dicts diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -91,7 +91,7 @@ getitem_str delitem length \ clear keys values \ items iter setdefault \ - popitem".split() + popitem listview_str".split() def make_method(method): def f(self, *args): @@ -148,6 +148,9 @@ w_dict.strategy = strategy w_dict.dstorage = storage + def listview_str(self, w_dict): + return None + class EmptyDictStrategy(DictStrategy): @@ -457,6 +460,9 @@ assert key is not None return self.unerase(w_dict.dstorage).get(key, None) + def listview_str(self, w_dict): + return self.unerase(w_dict.dstorage).keys() + def iter(self, w_dict): return StrIteratorImplementation(self.space, self, w_dict) @@ -676,6 +682,7 @@ return space.newlist(w_self.items()) def dict_keys__DictMulti(space, w_self): + #XXX add fastpath for strategies here return space.newlist(w_self.keys()) def dict_values__DictMulti(space, w_self): diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -440,6 +440,8 @@ return w_obj.getitems_str() if isinstance(w_obj, W_SetObject): return w_obj.listview_str() + if isinstance(w_obj, W_DictMultiObject): + return w_obj.listview_str() return None def listview_int(self, w_obj): diff --git a/pypy/objspace/std/test/test_dictmultiobject.py b/pypy/objspace/std/test/test_dictmultiobject.py --- a/pypy/objspace/std/test/test_dictmultiobject.py +++ b/pypy/objspace/std/test/test_dictmultiobject.py @@ -142,6 +142,14 @@ assert space.eq_w(w_d.getitem_str("a"), space.w_None) assert space.eq_w(w_d.getitem_str("b"), space.w_None) + 
def test_listview_str_dict(self): + w = self.space.wrap + + w_d = self.space.newdict() + w_d.initialize_content([(w("a"), w(1)), (w("b"), w(2))]) + + assert self.space.listview_str(w_d) == ["a", "b"] + class AppTest_DictObject: def setup_class(cls): cls.w_on_pypy = cls.space.wrap("__pypy__" in sys.builtin_module_names) From noreply at buildbot.pypy.org Wed Jan 11 14:44:45 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 11 Jan 2012 14:44:45 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: (cfbolz, l.diekmann): implemented listview_int for dicts Message-ID: <20120111134445.1128C82C03@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51225:19aec63fdbfe Date: 2012-01-11 14:44 +0100 http://bitbucket.org/pypy/pypy/changeset/19aec63fdbfe/ Log: (cfbolz, l.diekmann): implemented listview_int for dicts diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py --- a/pypy/objspace/std/dictmultiobject.py +++ b/pypy/objspace/std/dictmultiobject.py @@ -91,7 +91,7 @@ getitem_str delitem length \ clear keys values \ items iter setdefault \ - popitem listview_str".split() + popitem listview_str listview_int".split() def make_method(method): def f(self, *args): @@ -151,6 +151,8 @@ def listview_str(self, w_dict): return None + def listview_int(self, w_dict): + return None class EmptyDictStrategy(DictStrategy): @@ -528,6 +530,9 @@ def iter(self, w_dict): return IntIteratorImplementation(self.space, self, w_dict) + def listview_int(self, w_dict): + return self.unerase(w_dict.dstorage).keys() + class IntIteratorImplementation(_WrappedIteratorMixin, IteratorImplementation): pass diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -449,6 +449,8 @@ return w_obj.getitems_int() if isinstance(w_obj, W_SetObject): return w_obj.listview_int() + if isinstance(w_obj, W_DictMultiObject): + return 
w_obj.listview_int() return None def sliceindices(self, w_slice, w_length): diff --git a/pypy/objspace/std/test/test_dictmultiobject.py b/pypy/objspace/std/test/test_dictmultiobject.py --- a/pypy/objspace/std/test/test_dictmultiobject.py +++ b/pypy/objspace/std/test/test_dictmultiobject.py @@ -150,6 +150,13 @@ assert self.space.listview_str(w_d) == ["a", "b"] + def test_listview_int_dict(self): + w = self.space.wrap + w_d = self.space.newdict() + w_d.initialize_content([(w(1), w("a")), (w(2), w("b"))]) + + assert self.space.listview_int(w_d) == [1, 2] + class AppTest_DictObject: def setup_class(cls): cls.w_on_pypy = cls.space.wrap("__pypy__" in sys.builtin_module_names) From noreply at buildbot.pypy.org Wed Jan 11 14:50:34 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 11 Jan 2012 14:50:34 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added listview tests for listobject Message-ID: <20120111135034.BE52B82C03@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51226:243af83be4d9 Date: 2012-01-11 14:49 +0100 http://bitbucket.org/pypy/pypy/changeset/243af83be4d9/ Log: added listview tests for listobject diff --git a/pypy/objspace/std/test/test_liststrategies.py b/pypy/objspace/std/test/test_liststrategies.py --- a/pypy/objspace/std/test/test_liststrategies.py +++ b/pypy/objspace/std/test/test_liststrategies.py @@ -491,6 +491,16 @@ assert space.eq_w(w_l, w_l2) + def test_listview_str_list(self): + space = self.space + w_l = W_ListObject(space, [space.wrap("a"), space.wrap("b")]) + assert self.space.listview_str(w_l) == ["a", "b"] + + def test_listview_int_list(self): + space = self.space + w_l = W_ListObject(space, [space.wrap(1), space.wrap(2), space.wrap(3)]) + assert self.space.listview_int(w_l) == [1, 2, 3] + class TestW_ListStrategiesDisabled: def setup_class(cls): From noreply at buildbot.pypy.org Wed Jan 11 14:56:22 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 11 Jan 2012 14:56:22 
+0100 (CET) Subject: [pypy-commit] pypy set-strategies: (cfbolz, l.diekmann): added listview_str for strings Message-ID: <20120111135622.B8C5682C03@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51227:44c5a3419379 Date: 2012-01-11 14:55 +0100 http://bitbucket.org/pypy/pypy/changeset/44c5a3419379/ Log: (cfbolz, l.diekmann): added listview_str for strings diff --git a/pypy/objspace/std/objspace.py b/pypy/objspace/std/objspace.py --- a/pypy/objspace/std/objspace.py +++ b/pypy/objspace/std/objspace.py @@ -442,6 +442,8 @@ return w_obj.listview_str() if isinstance(w_obj, W_DictMultiObject): return w_obj.listview_str() + if isinstance(w_obj, W_StringObject): + return w_obj.listview_str() return None def listview_int(self, w_obj): diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -60,6 +60,9 @@ from pypy.objspace.std.unicodetype import plain_str2unicode return plain_str2unicode(space, w_self._value) + def listview_str(w_self): + return list(w_self._value) + registerimplementation(W_StringObject) W_StringObject.EMPTY = W_StringObject('') diff --git a/pypy/objspace/std/test/test_stringobject.py b/pypy/objspace/std/test/test_stringobject.py --- a/pypy/objspace/std/test/test_stringobject.py +++ b/pypy/objspace/std/test/test_stringobject.py @@ -85,6 +85,10 @@ w_slice = space.newslice(w(1), w_None, w(2)) assert self.space.eq_w(space.getitem(w_str, w_slice), w('el')) + def test_listview_str(self): + w_str = self.space.wrap('abcd') + assert self.space.listview_str(w_str) == list("abcd") + class AppTestStringObject: def test_format_wrongchar(self): From noreply at buildbot.pypy.org Wed Jan 11 14:57:26 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Wed, 11 Jan 2012 14:57:26 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: merged with default Message-ID: <20120111135726.C9D7582C03@wyvern.cs.uni-duesseldorf.de> 
Author: Lukas Diekmann Branch: set-strategies Changeset: r51228:28d2d9a61e5e Date: 2012-01-11 14:57 +0100 http://bitbucket.org/pypy/pypy/changeset/28d2d9a61e5e/ Log: merged with default diff too long, truncating to 10000 out of 22513 lines diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. -PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -84,34 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. 
Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/lib_pypy/_sqlite3.py b/lib_pypy/_sqlite3.py --- a/lib_pypy/_sqlite3.py +++ b/lib_pypy/_sqlite3.py @@ -231,8 +231,10 @@ sqlite.sqlite3_result_text.argtypes = [c_void_p, c_char_p, c_int, c_void_p] sqlite.sqlite3_result_text.restype = None -sqlite.sqlite3_enable_load_extension.argtypes = [c_void_p, c_int] -sqlite.sqlite3_enable_load_extension.restype = c_int +HAS_LOAD_EXTENSION = hasattr(sqlite, "sqlite3_enable_load_extension") +if HAS_LOAD_EXTENSION: + sqlite.sqlite3_enable_load_extension.argtypes = [c_void_p, c_int] + sqlite.sqlite3_enable_load_extension.restype = c_int ########################################## # END Wrapped SQLite C API and constants @@ -708,13 +710,14 @@ from sqlite3.dump import _iterdump return _iterdump(self) - def enable_load_extension(self, enabled): - self._check_thread() - self._check_closed() + if HAS_LOAD_EXTENSION: + def 
enable_load_extension(self, enabled): + self._check_thread() + self._check_closed() - rc = sqlite.sqlite3_enable_load_extension(self.db, int(enabled)) - if rc != SQLITE_OK: - raise OperationalError("Error enabling load extension") + rc = sqlite.sqlite3_enable_load_extension(self.db, int(enabled)) + if rc != SQLITE_OK: + raise OperationalError("Error enabling load extension") DML, DQL, DDL = range(3) diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/__init__.py @@ -0,0 +1,2 @@ +from _numpypy import * +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/fromnumeric.py @@ -0,0 +1,2400 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. +# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) +# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. 
+__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. + + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. + indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. + out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. + + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. 
+ + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplemented('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. + order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. + + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. + + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raise if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose make the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modiying the + # initial object. 
+ >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties. Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. 
Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. 
+ + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... ) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. 
+ axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. + + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. 
+ axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. + + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. 
+ + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. + lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. 
+ + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + Sort by age, then height if ages are equal: + + >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argsort(a, axis=-1, kind='quicksort', order=None): + """ + Returns the indices that would sort an array. + + Perform an indirect sort along the given axis using the algorithm specified + by the `kind` keyword. It returns an array of indices of the same shape as + `a` that index data along the given axis in sorted order. + + Parameters + ---------- + a : array_like + Array to sort. + axis : int or None, optional + Axis along which to sort. The default is -1 (the last axis). If None, + the flattened array is used. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. + order : list, optional + When `a` is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. + + Returns + ------- + index_array : ndarray, int + Array of indices that sort `a` along the specified axis. + In other words, ``a[index_array]`` yields a sorted `a`. + + See Also + -------- + sort : Describes sorting algorithms used. + lexsort : Indirect stable sort with multiple keys. + ndarray.sort : Inplace sort. + + Notes + ----- + See `sort` for notes on the different sorting algorithms. + + As of NumPy 1.4.0 `argsort` works with real/complex arrays containing + nan values. The enhanced sort order is documented in `sort`. + + Examples + -------- + One dimensional array: + + >>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')]) + >>> x + array([(1, 0), (0, 1)], + dtype=[('x', '<i4'), ('y', '<i4')]) + + >>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed.
+ + See Also + -------- + ndarray.argmax, argmin + amax : The maximum value along a given axis. + unravel_index : Convert a flat index into an index tuple. + + Notes + ----- + In case of multiple occurrences of the maximum values, the indices + corresponding to the first occurrence are returned. + + Examples + -------- + >>> a = np.arange(6).reshape(2,3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> np.argmax(a) + 5 + >>> np.argmax(a, axis=0) + array([1, 1, 1]) + >>> np.argmax(a, axis=1) + array([2, 2]) + + >>> b = np.arange(6) + >>> b[1] = 5 + >>> b + array([0, 5, 2, 3, 4, 5]) + >>> np.argmax(b) # Only the first occurrence is returned. + 1 + + """ + if not hasattr(a, 'argmax'): + a = numpypy.array(a) + return a.argmax() + + +def argmin(a, axis=None): + """ + Return the indices of the minimum values along an axis. + + See Also + -------- + argmax : Similar function. Please refer to `numpy.argmax` for detailed + documentation. + + """ + if not hasattr(a, 'argmin'): + a = numpypy.array(a) + return a.argmin() + + +def searchsorted(a, v, side='left'): + """ + Find indices where elements should be inserted to maintain order. + + Find the indices into a sorted array `a` such that, if the corresponding + elements in `v` were inserted before the indices, the order of `a` would + be preserved. + + Parameters + ---------- + a : 1-D array_like + Input array, sorted in ascending order. + v : array_like + Values to insert into `a`. + side : {'left', 'right'}, optional + If 'left', the index of the first suitable location found is given. If + 'right', return the last such index. If there is no suitable + index, return either 0 or N (where N is the length of `a`). + + Returns + ------- + indices : array of ints + Array of insertion points with the same shape as `v`. + + See Also + -------- + sort : Return a sorted copy of an array. + histogram : Produce histogram from 1-D data. + + Notes + ----- + Binary search is used to find the required insertion points. 
+ + As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing + `nan` values. The enhanced sort order is documented in `sort`. + + Examples + -------- + >>> np.searchsorted([1,2,3,4,5], 3) + 2 + >>> np.searchsorted([1,2,3,4,5], 3, side='right') + 3 + >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]) + array([0, 5, 1, 2]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def resize(a, new_shape): + """ + Return a new array with the specified shape. + + If the new array is larger than the original array, then the new + array is filled with repeated copies of `a`. Note that this behavior + is different from a.resize(new_shape) which fills with zeros instead + of repeated copies of `a`. + + Parameters + ---------- + a : array_like + Array to be resized. + + new_shape : int or tuple of int + Shape of resized array. + + Returns + ------- + reshaped_array : ndarray + The new array is formed from the data in the old array, repeated + if necessary to fill out the required number of elements. The + data are repeated in the order that they are stored in memory. + + See Also + -------- + ndarray.resize : resize an array in-place. + + Examples + -------- + >>> a=np.array([[0,1],[2,3]]) + >>> np.resize(a,(1,4)) + array([[0, 1, 2, 3]]) + >>> np.resize(a,(2,4)) + array([[0, 1, 2, 3], + [0, 1, 2, 3]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def squeeze(a): + """ + Remove single-dimensional entries from the shape of an array. + + Parameters + ---------- + a : array_like + Input data. + + Returns + ------- + squeezed : ndarray + The input array, but with with all dimensions of length 1 + removed. Whenever possible, a view on `a` is returned. + + Examples + -------- + >>> x = np.array([[[0], [1], [2]]]) + >>> x.shape + (1, 3, 1) + >>> np.squeeze(x).shape + (3,) + + """ + raise NotImplemented('Waiting on interp level method') + + +def diagonal(a, offset=0, axis1=0, axis2=1): + """ + Return specified diagonals. 
+ + If `a` is 2-D, returns the diagonal of `a` with the given offset, + i.e., the collection of elements of the form ``a[i, i+offset]``. If + `a` has more than two dimensions, then the axes specified by `axis1` + and `axis2` are used to determine the 2-D sub-array whose diagonal is + returned. The shape of the resulting array can be determined by + removing `axis1` and `axis2` and appending an index to the right equal + to the size of the resulting diagonals. + + Parameters + ---------- + a : array_like + Array from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be positive or + negative. Defaults to main diagonal (0). + axis1 : int, optional + Axis to be used as the first axis of the 2-D sub-arrays from which + the diagonals should be taken. Defaults to first axis (0). + axis2 : int, optional + Axis to be used as the second axis of the 2-D sub-arrays from + which the diagonals should be taken. Defaults to second axis (1). + + Returns + ------- + array_of_diagonals : ndarray + If `a` is 2-D, a 1-D array containing the diagonal is returned. + If the dimension of `a` is larger, then an array of diagonals is + returned, "packed" from left-most dimension to right-most (e.g., + if `a` is 3-D, then the diagonals are "packed" along rows). + + Raises + ------ + ValueError + If the dimension of `a` is less than 2. + + See Also + -------- + diag : MATLAB work-a-like for 1-D and 2-D arrays. + diagflat : Create diagonal arrays. + trace : Sum along diagonals. + + Examples + -------- + >>> a = np.arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + A 3-D example: + + >>> a = np.arange(8).reshape(2,2,2); a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0, # Main diagonals of two arrays created by skipping + ... 0, # across the outer(left)-most axis last and + ... 1) # the "middle" (row) axis first. 
+ array([[0, 6], + [1, 7]]) + + The sub-arrays whose main diagonals we just obtained; note that each + corresponds to fixing the right-most (column) axis, and that the + diagonals are "packed" in rows. + + >>> a[:,:,0] # main diagonal is [0 6] + array([[0, 2], + [4, 6]]) + >>> a[:,:,1] # main diagonal is [1 7] + array([[1, 3], + [5, 7]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): + """ + Return the sum along diagonals of the array. + + If `a` is 2-D, the sum along its diagonal with the given offset + is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i. + + If `a` has more than two dimensions, then the axes specified by axis1 and + axis2 are used to determine the 2-D sub-arrays whose traces are returned. + The shape of the resulting array is the same as that of `a` with `axis1` + and `axis2` removed. + + Parameters + ---------- + a : array_like + Input array, from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to 0. + axis1, axis2 : int, optional + Axes to be used as the first and second axis of the 2-D sub-arrays + from which the diagonals should be taken. Defaults are the first two + axes of `a`. + dtype : dtype, optional + Determines the data-type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and `a` is + of integer type of precision less than the default integer + precision, then the default integer precision is used. Otherwise, + the precision is the same as that of `a`. + out : ndarray, optional + Array into which the output is placed. Its type is preserved and + it must be of the right shape to hold the output. + + Returns + ------- + sum_along_diagonals : ndarray + If `a` is 2-D, the sum along the diagonal is returned. 
If `a` has + larger dimensions, then an array of sums along diagonals is returned. + + See Also + -------- + diag, diagonal, diagflat + + Examples + -------- + >>> np.trace(np.eye(3)) + 3.0 + >>> a = np.arange(8).reshape((2,2,2)) + >>> np.trace(a) + array([6, 8]) + + >>> a = np.arange(24).reshape((2,2,2,3)) + >>> np.trace(a).shape + (2, 3) + + """ + raise NotImplemented('Waiting on interp level method') + +def ravel(a, order='C'): + """ + Return a flattened array. + + A 1-D array, containing the elements of the input, is returned. A copy is + made only if needed. + + Parameters + ---------- + a : array_like + Input array. The elements in ``a`` are read in the order specified by + `order`, and packed as a 1-D array. + order : {'C','F', 'A', 'K'}, optional + The elements of ``a`` are read in this order. 'C' means to view + the elements in C (row-major) order. 'F' means to view the elements + in Fortran (column-major) order. 'A' means to view the elements + in 'F' order if a is Fortran contiguous, 'C' order otherwise. + 'K' means to view the elements in the order they occur in memory, + except for reversing the data when strides are negative. + By default, 'C' order is used. + + Returns + ------- + 1d_array : ndarray + Output of the same dtype as `a`, and of shape ``(a.size(),)``. + + See Also + -------- + ndarray.flat : 1-D iterator over an array. + ndarray.flatten : 1-D array copy of the elements of an array + in row-major order. + + Notes + ----- + In row-major order, the row index varies the slowest, and the column + index the quickest. This can be generalized to multiple dimensions, + where row-major order implies that the index along the first axis + varies slowest, and the index along the last quickest. The opposite holds + for Fortran-, or column-major, mode. + + Examples + -------- + It is equivalent to ``reshape(-1, order=order)``. 
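The row-major flattening described above can be sketched in plain Python for nested lists; the `ravel_sketch` helper below is hypothetical, covers only the default ``order='C'`` case, and is not part of this module:

```python
def ravel_sketch(a):
    # Recursively flatten a nested-list "array" in C (row-major) order:
    # the index along the first axis varies slowest, the last quickest.
    if not isinstance(a, list):
        return [a]
    out = []
    for item in a:
        out.extend(ravel_sketch(item))
    return out

ravel_sketch([[1, 2, 3], [4, 5, 6]])  # -> [1, 2, 3, 4, 5, 6]
```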
+ + >>> x = np.array([[1, 2, 3], [4, 5, 6]]) + >>> print np.ravel(x) + [1 2 3 4 5 6] + + >>> print x.reshape(-1) + [1 2 3 4 5 6] + + >>> print np.ravel(x, order='F') + [1 4 2 5 3 6] + + When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering: + + >>> print np.ravel(x.T) + [1 4 2 5 3 6] + >>> print np.ravel(x.T, order='A') + [1 2 3 4 5 6] + + When ``order`` is 'K', it will preserve orderings that are neither 'C' + nor 'F', but won't reverse axes: + + >>> a = np.arange(3)[::-1]; a + array([2, 1, 0]) + >>> a.ravel(order='C') + array([2, 1, 0]) + >>> a.ravel(order='K') + array([2, 1, 0]) + + >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a + array([[[ 0, 2, 4], + [ 1, 3, 5]], + [[ 6, 8, 10], + [ 7, 9, 11]]]) + >>> a.ravel(order='C') + array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11]) + >>> a.ravel(order='K') + array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def nonzero(a): + """ + Return the indices of the elements that are non-zero. + + Returns a tuple of arrays, one for each dimension of `a`, containing + the indices of the non-zero elements in that dimension. The + corresponding non-zero values can be obtained with:: + + a[nonzero(a)] + + To group the indices by element, rather than dimension, use:: + + transpose(nonzero(a)) + + The result of this is always a 2-D array, with a row for + each non-zero element. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + tuple_of_arrays : tuple + Indices of elements that are non-zero. + + See Also + -------- + flatnonzero : + Return indices that are non-zero in the flattened version of the input + array. + ndarray.nonzero : + Equivalent ndarray method. + count_nonzero : + Counts the number of non-zero elements in the input array. 
+ + Examples + -------- + >>> x = np.eye(3) + >>> x + array([[ 1., 0., 0.], + [ 0., 1., 0.], + [ 0., 0., 1.]]) + >>> np.nonzero(x) + (array([0, 1, 2]), array([0, 1, 2])) + + >>> x[np.nonzero(x)] + array([ 1., 1., 1.]) + >>> np.transpose(np.nonzero(x)) + array([[0, 0], + [1, 1], + [2, 2]]) + + A common use for ``nonzero`` is to find the indices of an array, where + a condition is True. Given an array `a`, the condition `a` > 3 is a + boolean array and since False is interpreted as 0, np.nonzero(a > 3) + yields the indices of the `a` where the condition is true. + + >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]]) + >>> a > 3 + array([[False, False, False], + [ True, True, True], + [ True, True, True]], dtype=bool) + >>> np.nonzero(a > 3) + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + The ``nonzero`` method of the boolean array can also be called. + + >>> (a > 3).nonzero() + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + """ + raise NotImplemented('Waiting on interp level method') + + +def shape(a): + """ + Return the shape of an array. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + shape : tuple of ints + The elements of the shape tuple give the lengths of the + corresponding array dimensions. + + See Also + -------- + alen + ndarray.shape : Equivalent array method. + + Examples + -------- + >>> np.shape(np.eye(3)) + (3, 3) + >>> np.shape([[1, 2]]) + (1, 2) + >>> np.shape([0]) + (1,) + >>> np.shape(0) + () + + >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + >>> np.shape(a) + (2,) + >>> a.shape + (2,) + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape + + +def compress(condition, a, axis=None, out=None): + """ + Return selected slices of an array along given axis. + + When working along a given axis, a slice along that axis is returned in + `output` for each index where `condition` evaluates to True. 
When + working on a 1-D array, `compress` is equivalent to `extract`. + + Parameters + ---------- + condition : 1-D array of bools + Array that selects which entries to return. If len(condition) + is less than the size of `a` along the given axis, then output is + truncated to the length of the condition array. + a : array_like + Array from which to extract a part. + axis : int, optional + Axis along which to take slices. If None (default), work on the + flattened array. + out : ndarray, optional + Output array. Its type is preserved and it must be of the right + shape to hold the output. + + Returns + ------- + compressed_array : ndarray + A copy of `a` without the slices along axis for which `condition` + is false. + + See Also + -------- + take, choose, diag, diagonal, select + ndarray.compress : Equivalent method. + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4], [5, 6]]) + >>> a + array([[1, 2], + [3, 4], + [5, 6]]) + >>> np.compress([0, 1], a, axis=0) + array([[3, 4]]) + >>> np.compress([False, True, True], a, axis=0) + array([[3, 4], + [5, 6]]) + >>> np.compress([False, True], a, axis=1) + array([[2], + [4], + [6]]) + + Working on the flattened array does not return slices along an axis but + selects elements. + + >>> np.compress([False, True], a) + array([2]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def clip(a, a_min, a_max, out=None): + """ + Clip (limit) the values in an array. + + Given an interval, values outside the interval are clipped to + the interval edges. For example, if an interval of ``[0, 1]`` + is specified, values smaller than 0 become 0, and values larger + than 1 become 1. + + Parameters + ---------- + a : array_like + Array containing elements to clip. + a_min : scalar or array_like + Minimum value. + a_max : scalar or array_like + Maximum value. If `a_min` or `a_max` are array_like, then they will + be broadcasted to the shape of `a`. 
+ out : ndarray, optional + The results will be placed in this array. It may be the input + array for in-place clipping. `out` must be of the right shape + to hold the output. Its type is preserved. + + Returns + ------- + clipped_array : ndarray + An array with the elements of `a`, but where values + < `a_min` are replaced with `a_min`, and those > `a_max` + with `a_max`. + + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.arange(10) + >>> np.clip(a, 1, 8) + array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, 3, 6, out=a) + array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) + >>> a = np.arange(10) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8) + array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sum(a, axis=None, dtype=None, out=None): + """ + Sum of array elements over a given axis. + + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + dtype : dtype, optional + The type of the returned array and of the accumulator in which + the elements are summed. By default, the dtype of `a` is used. + An exception is when `a` has an integer type with less precision + than the default platform integer. In that case, the default + platform integer is used instead. + out : ndarray, optional + Array into which the output is placed. By default, a new array is + created. If `out` is given, it must be of the appropriate shape + (the shape of `a` with `axis` removed, i.e., + ``numpy.delete(a.shape, axis)``). Its type is preserved. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. 
If `a` is a 0-d array, or if `axis` is None, a scalar
+ is returned. If an output array is specified, a reference to
+ `out` is returned.
+
+ See Also
+ --------
+ ndarray.sum : Equivalent method.
+
+ cumsum : Cumulative sum of array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ mean, average
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> np.sum([0.5, 1.5])
+ 2.0
+ >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
+ 1
+ >>> np.sum([[0, 1], [0, 5]])
+ 6
+ >>> np.sum([[0, 1], [0, 5]], axis=0)
+ array([0, 6])
+ >>> np.sum([[0, 1], [0, 5]], axis=1)
+ array([1, 5])
+
+ If the accumulator is too small, overflow occurs:
+
+ >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
+ -128
+
+ """
+ if not hasattr(a, "sum"):
+ a = numpypy.array(a)
+ return a.sum()
+
+
+def product(a, axis=None, dtype=None, out=None):
+ """
+ Return the product of array elements over a given axis.
+
+ See Also
+ --------
+ prod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def sometrue(a, axis=None, out=None):
+ """
+ Check whether some values are true.
+
+ Refer to `any` for full documentation.
+
+ See Also
+ --------
+ any : equivalent function
+
+ """
+ if not hasattr(a, 'any'):
+ a = numpypy.array(a)
+ return a.any()
+
+
+def alltrue(a, axis=None, out=None):
+ """
+ Check if all elements of input array are true.
+
+ See Also
+ --------
+ numpy.all : Equivalent function; see for details.
+
+ """
+ if not hasattr(a, 'all'):
+ a = numpypy.array(a)
+ return a.all()
+
+
+def any(a, axis=None, out=None):
+ """
+ Test whether any array element along a given axis evaluates to True.
+
+ Returns a single boolean unless `axis` is not ``None``.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array or object that can be converted to an array.
+ axis : int, optional
+ Axis along which a logical OR is performed.
The default + (`axis` = `None`) is to perform a logical OR over a flattened + input array. `axis` may be negative, in which case it counts + from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output and its type is preserved + (e.g., if it is of type float, then it will remain so, returning + 1.0 for True and 0.0 for False, regardless of the type of `a`). + See `doc.ufuncs` (Section "Output arguments") for details. + + Returns + ------- + any : bool or ndarray + A new boolean or `ndarray` is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.any : equivalent method + + all : Test whether all elements along a given axis evaluate to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity evaluate + to `True` because these are not equal to zero. + + Examples + -------- + >>> np.any([[True, False], [True, True]]) + True + + >>> np.any([[True, False], [False, False]], axis=0) + array([ True, False], dtype=bool) + + >>> np.any([-1, 0, 5]) + True + + >>> np.any(np.nan) + True + + >>> o=np.array([False]) + >>> z=np.any([-1, 4, 5], out=o) + >>> z, o + (array([ True], dtype=bool), array([ True], dtype=bool)) + >>> # Check now that z is a reference to o + >>> z is o + True + >>> id(z), id(o) # identity of z and o # doctest: +SKIP + (191614240, 191614240) + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def all(a,axis=None, out=None): + """ + Test whether all array elements along a given axis evaluate to True. + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical AND is performed. + The default (`axis` = `None`) is to perform a logical AND + over a flattened input array. 
`axis` may be negative, in which + case it counts from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. + It must have the same shape as the expected output and its + type is preserved (e.g., if ``dtype(out)`` is float, the result + will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section + "Output arguments") for more details. + + Returns + ------- + all : ndarray, bool + A new boolean or array is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.all : equivalent method + + any : Test whether any element along a given axis evaluates to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity + evaluate to `True` because these are not equal to zero. + + Examples + -------- + >>> np.all([[True,False],[True,True]]) + False + + >>> np.all([[True,False],[True,True]], axis=0) + array([ True, False], dtype=bool) + + >>> np.all([-1, 4, 5]) + True + + >>> np.all([1.0, np.nan]) + True + + >>> o=np.array([False]) + >>> z=np.all([-1, 4, 5], out=o) + >>> id(z), id(o), z # doctest: +SKIP + (28293632, 28293632, array([ True], dtype=bool)) + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + + +def cumsum (a, axis=None, dtype=None, out=None): + """ + Return the cumulative sum of the elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative sum is computed. The default + (None) is to compute the cumsum over the flattened array. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults + to the dtype of `a`, unless `a` has an integer dtype with a + precision less than that of the default platform integer. In + that case, the default platform integer is used. 
+
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type will be cast if necessary. See `doc.ufuncs`
+ (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ cumsum_along_axis : ndarray.
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to `out` is returned. The
+ result has the same size as `a`, and the same shape as `a` if
+ `axis` is not None or `a` is a 1-d array.
+
+
+ See Also
+ --------
+ sum : Sum array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3], [4,5,6]])
+ >>> a
+ array([[1, 2, 3],
+ [4, 5, 6]])
+ >>> np.cumsum(a)
+ array([ 1, 3, 6, 10, 15, 21])
+ >>> np.cumsum(a, dtype=float) # specifies type of output value(s)
+ array([ 1., 3., 6., 10., 15., 21.])
+
+ >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns
+ array([[1, 2, 3],
+ [5, 7, 9]])
+ >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows
+ array([[ 1, 3, 6],
+ [ 4, 9, 15]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def cumproduct(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product over the given axis.
+
+
+ See Also
+ --------
+ cumprod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ptp(a, axis=None, out=None):
+ """
+ Range of values (maximum - minimum) along an axis.
+
+ The name of the function comes from the acronym for 'peak to peak'.
+
+ Parameters
+ ----------
+ a : array_like
+ Input values.
+ axis : int, optional
+ Axis along which to find the peaks. By default, flatten the
+ array.
+ out : array_like
+ Alternative output array in which to place the result.
It must
+ have the same shape and buffer length as the expected output,
+ but the type of the output values will be cast if necessary.
+
+ Returns
+ -------
+ ptp : ndarray
+ A new array holding the result, unless `out` was
+ specified, in which case a reference to `out` is returned.
+
+ Examples
+ --------
+ >>> x = np.arange(4).reshape((2,2))
+ >>> x
+ array([[0, 1],
+ [2, 3]])
+
+ >>> np.ptp(x, axis=0)
+ array([2, 2])
+
+ >>> np.ptp(x, axis=1)
+ array([1, 1])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def amax(a, axis=None, out=None):
+ """
+ Return the maximum of an array or maximum along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which to operate. By default flattened input is used.
+ out : ndarray, optional
+ Alternate output array in which to place the result. Must be of
+ the same shape and buffer length as the expected output. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ amax : ndarray or scalar
+ Maximum of `a`. If `axis` is None, the result is a scalar value.
+ If `axis` is given, the result is an array of dimension
+ ``a.ndim - 1``.
+
+ See Also
+ --------
+ nanmax : NaN values are ignored instead of being propagated.
+ fmax : same behavior as the C99 fmax function.
+ argmax : indices of the maximum values.
+
+ Notes
+ -----
+ NaN values are propagated, that is if at least one item is NaN, the
+ corresponding max value will be NaN as well. To ignore NaN values
+ (MATLAB behavior), please use nanmax.
+ + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amax(a) + 3 + >>> np.amax(a, axis=0) + array([2, 3]) + >>> np.amax(a, axis=1) + array([1, 3]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amax(b) + nan + >>> np.nanmax(b) + 4.0 + + """ + if not hasattr(a, "max"): + a = numpypy.array(a) + return a.max() + + +def amin(a, axis=None, out=None): + """ + Return the minimum of an array or minimum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default a flattened input is used. + out : ndarray, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + See `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amin : ndarray + A new array or a scalar array with the result. + + See Also + -------- + nanmin: nan values are ignored instead of being propagated + fmin: same behavior as the C99 fmin function + argmin: Return the indices of the minimum values. + + amax, nanmax, fmax + + Notes + ----- + NaN values are propagated, that is if at least one item is nan, the + corresponding min value will be nan as well. To ignore NaN values (matlab + behavior), please use nanmin. + + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amin(a) # Minimum of the flattened array + 0 + >>> np.amin(a, axis=0) # Minima along the first axis + array([0, 1]) + >>> np.amin(a, axis=1) # Minima along the second axis + array([0, 2]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amin(b) + nan + >>> np.nanmin(b) + 0.0 + + """ + # amin() is equivalent to min() + if not hasattr(a, 'min'): + a = numpypy.array(a) + return a.min() + +def alen(a): + """ + Return the length of the first dimension of the input array. 
+ + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + l : int + Length of the first dimension of `a`. + + See Also + -------- + shape, size + + Examples + -------- + >>> a = np.zeros((7,4,5)) + >>> a.shape[0] + 7 + >>> np.alen(a) + 7 + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape[0] + + +def prod(a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis over which the product is taken. By default, the product + of all elements is calculated. + dtype : data-type, optional + The data-type of the returned array, as well as of the accumulator + in which the elements are multiplied. By default, if `a` is of + integer type, `dtype` is the default platform integer. (Note: if + the type of `a` is unsigned, then so is `dtype`.) Otherwise, + the dtype is the same as that of `a`. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the + output values will be cast if necessary. + + Returns + ------- + product_along_axis : ndarray, see `dtype` parameter above. + An array shaped as `a` but with the specified axis removed. + Returns a reference to `out` if specified. + + See Also + -------- + ndarray.prod : equivalent method + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. 
That means that, on a 32-bit platform: + + >>> x = np.array([536870910, 536870910, 536870910, 536870910]) + >>> np.prod(x) #random + 16 + + Examples + -------- + By default, calculate the product of all elements: + + >>> np.prod([1.,2.]) + 2.0 + + Even when the input array is two-dimensional: + + >>> np.prod([[1.,2.],[3.,4.]]) + 24.0 + + But we can also specify the axis over which to multiply: + + >>> np.prod([[1.,2.],[3.,4.]], axis=1) + array([ 2., 12.]) + + If the type of `x` is unsigned, then the output type is + the unsigned platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.uint8) + >>> np.prod(x).dtype == np.uint + True + + If `x` is of a signed integer type, then the output type + is the default platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.int8) + >>> np.prod(x).dtype == np.int + True + + """ + raise NotImplemented('Waiting on interp level method') + + +def cumprod(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product of elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative product is computed. By default + the input is flattened. + dtype : dtype, optional + Type of the returned array, as well as of the accumulator in which + the elements are multiplied. If *dtype* is not specified, it + defaults to the dtype of `a`, unless `a` has an integer dtype with + a precision less than that of the default platform integer. In + that case, the default platform integer is used instead. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type of the resulting values will be cast if necessary. + + Returns + ------- + cumprod : ndarray + A new array holding the result is returned unless `out` is + specified, in which case a reference to out is returned. 
+ + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> a = np.array([1,2,3]) + >>> np.cumprod(a) # intermediate results 1, 1*2 + ... # total product 1*2*3 = 6 + array([1, 2, 6]) + >>> a = np.array([[1, 2, 3], [4, 5, 6]]) + >>> np.cumprod(a, dtype=float) # specify type of output + array([ 1., 2., 6., 24., 120., 720.]) + + The cumulative product for each column (i.e., over the rows) of `a`: + + >>> np.cumprod(a, axis=0) + array([[ 1, 2, 3], + [ 4, 10, 18]]) + + The cumulative product for each row (i.e. over the columns) of `a`: + + >>> np.cumprod(a,axis=1) + array([[ 1, 2, 6], + [ 4, 20, 120]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def ndim(a): + """ + Return the number of dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. If it is not already an ndarray, a conversion is + attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in `a`. Scalars are zero-dimensional. + + See Also + -------- + ndarray.ndim : equivalent method + shape : dimensions of array + ndarray.shape : dimensions of array + + Examples + -------- + >>> np.ndim([[1,2,3],[4,5,6]]) + 2 + >>> np.ndim(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.ndim(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def rank(a): + """ + Return the number of dimensions of an array. + + If `a` is not already an array, a conversion is attempted. + Scalars are zero dimensional. + + Parameters + ---------- + a : array_like + Array whose number of dimensions is desired. If `a` is not an array, + a conversion is attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in the array. 
+ + See Also + -------- + ndim : equivalent function + ndarray.ndim : equivalent property + shape : dimensions of array + ndarray.shape : dimensions of array + + Notes + ----- + In the old Numeric package, `rank` was the term used for the number of + dimensions, but in Numpy `ndim` is used instead. + + Examples + -------- + >>> np.rank([1,2,3]) + 1 + >>> np.rank(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.rank(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def size(a, axis=None): + """ + Return the number of elements along a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which the elements are counted. By default, give + the total number of elements. + + Returns + ------- + element_count : int + Number of elements along the specified axis. + + See Also + -------- + shape : dimensions of array + ndarray.shape : dimensions of array + ndarray.size : number of elements in array + + Examples + -------- + >>> a = np.array([[1,2,3],[4,5,6]]) + >>> np.size(a) + 6 + >>> np.size(a,1) + 3 + >>> np.size(a,0) + 2 + + """ + raise NotImplemented('Waiting on interp level method') + + +def around(a, decimals=0, out=None): + """ + Evenly round to the given number of decimals. + + Parameters + ---------- + a : array_like + Input data. + decimals : int, optional + Number of decimal places to round to (default: 0). If + decimals is negative, it specifies the number of positions to + the left of the decimal point. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the output + values will be cast if necessary. See `doc.ufuncs` (Section + "Output arguments") for details. + + Returns + ------- + rounded_array : ndarray + An array of the same type as `a`, containing the rounded values. + Unless `out` was specified, a new array is created. A reference to + the result is returned. 
+
+ The real and imaginary parts of complex numbers are rounded
+ separately. The result of rounding a float is a float.
+
+ See Also
+ --------
+ ndarray.round : equivalent method
+
+ ceil, fix, floor, rint, trunc
+
+
+ Notes
+ -----
+ For values exactly halfway between rounded decimal values, Numpy
+ rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0,
+ -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due
+ to the inexact representation of decimal fractions in the IEEE
+ floating point standard [1]_ and errors introduced when scaling
+ by powers of ten.
+
+ References
+ ----------
+ .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
+ .. [2] "How Futile are Mindless Assessments of
+ Roundoff in Floating-Point Computation?", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
+
+ Examples
+ --------
+ >>> np.around([0.37, 1.64])
+ array([ 0., 2.])
+ >>> np.around([0.37, 1.64], decimals=1)
+ array([ 0.4, 1.6])
+ >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value
+ array([ 0., 2., 2., 4., 4.])
+ >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned
+ array([ 1, 2, 3, 11])
+ >>> np.around([1,2,3,11], decimals=-1)
+ array([ 0, 0, 0, 10])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def round_(a, decimals=0, out=None):
+ """
+ Round an array to the given number of decimals.
+
+ Refer to `around` for full documentation.
+
+ See Also
+ --------
+ around : equivalent function
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def mean(a, axis=None, dtype=None, out=None):
+ """
+ Compute the arithmetic mean along the specified axis.
+
+ Returns the average of the array elements. The average is taken over
+ the flattened array by default, otherwise over the specified axis.
+ `float64` intermediate and return values are used for integer inputs.
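The round-half-to-even tie-breaking described in the `around` notes above is not specific to numpy. As a quick stand-alone illustration (plain Python, not the numpypy code itself), Python 3's builtin `round()` follows the same IEEE-754 rule:

```python
# Round-half-to-even ("banker's rounding"): exact halves go to the
# nearest even integer, matching the np.around behaviour quoted above.
halves = [0.5, 1.5, 2.5, 3.5, 4.5]
rounded = [round(x) for x in halves]
# Ties go to the nearest even integer: [0, 2, 2, 4, 4]
```

This is why `np.around([.5, 1.5, 2.5, 3.5, 4.5])` in the docstring example yields `[0., 2., 2., 4., 4.]` rather than always rounding halves up.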
+ + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. 
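The float32 inaccuracy warned about above can be reproduced without numpy at all. The following is a stand-alone sketch (the helper names are invented here, not part of numpypy) that emulates a naive single-precision accumulator by round-tripping each partial sum through IEEE-754 single precision with `struct`:

```python
import struct

def f32(x):
    # Round a Python float (double precision) to the nearest
    # IEEE-754 single-precision value.
    return struct.unpack('f', struct.pack('f', x))[0]

def mean_single(values):
    # Naive left-to-right accumulation in single precision: the
    # behaviour the float32 example in the docstring warns about.
    acc = 0.0
    for v in values:
        acc = f32(acc + f32(v))
    return acc / len(values)

def mean_double(values):
    # The same sum carried in double precision (dtype=float64).
    return sum(values) / len(values)

# Same data as the docstring example: half ones, half 0.1, true mean 0.55.
data = [1.0] * (512 * 512) + [0.1] * (512 * 512)
approx = mean_single(data)   # drifts to 0.546875
exact = mean_double(data)    # close to 0.55
```

The single-precision accumulator loses the low bits of each `0.1` once the running sum is large, which is exactly why the docstring recommends `dtype=np.float64` for float32 input.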
+ + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean() + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. 
+ + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. 
+ + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Array containing numbers whose variance is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the variance is computed. The default is to compute + the variance of the flattened array. + dtype : data-type, optional + Type to use in computing the variance. For arrays of integer type + the default is `float32`; for arrays of float types it is the same as + the array type. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output, but the type is cast if + necessary. + ddof : int, optional + "Delta Degrees of Freedom": the divisor used in the calculation is + ``N - ddof``, where ``N`` represents the number of elements. By + default `ddof` is zero. + + Returns + ------- + variance : ndarray, see dtype parameter above + If ``out=None``, returns a new array containing the variance; + otherwise, a reference to the output array is returned. + + See Also + -------- + std : Standard deviation + mean : Average + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e., ``var = mean(abs(x - x.mean())**2)``. + + The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. + If, however, `ddof` is specified, the divisor ``N - ddof`` is used + instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of a hypothetical infinite population. + ``ddof=0`` provides a maximum likelihood estimate of the variance for + normally distributed variables. + + Note that for complex numbers, the absolute value is taken before + squaring, so that the result is always real and nonnegative. 
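The `N - ddof` divisor convention described in the Notes above is easy to state in plain Python. A minimal sketch (not the numpypy implementation, which defers to interp-level methods):

```python
import math

def var(values, ddof=0):
    # Mean of squared deviations from the mean, with divisor N - ddof,
    # as documented above.
    n = len(values)
    m = sum(values) / float(n)
    return sum((x - m) ** 2 for x in values) / (n - ddof)

def std(values, ddof=0):
    # The standard deviation is simply the square root of the variance.
    return math.sqrt(var(values, ddof))

a = [1.0, 2.0, 3.0, 4.0]
# var(a) == 1.25 with the default ddof=0 (maximum-likelihood estimate);
# var(a, ddof=1) == 5/3, the unbiased sample variance.
```

Note that, as the `std` docstring cautions, taking the square root of the `ddof=1` variance does not itself give an unbiased estimate of the standard deviation.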
+ + For floating-point input, the variance is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for `float32` (see example + below). Specifying a higher-accuracy accumulator using the ``dtype`` + keyword can alleviate this issue. + + Examples + -------- + >>> a = np.array([[1,2],[3,4]]) + >>> np.var(a) + 1.25 + >>> np.var(a,0) + array([ 1., 1.]) + >>> np.var(a,1) + array([ 0.25, 0.25]) + + In single precision, var() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.var(a) + 0.20405951142311096 + + Computing the standard deviation in float64 is more accurate: + + >>> np.var(a, dtype=np.float64) + 0.20249999932997387 + >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 + 0.20250000000000001 + + """ + if not hasattr(a, "var"): + a = numpypy.array(a) + return a.var() diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/test/test_fromnumeric.py @@ -0,0 +1,109 @@ + +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + +class AppTestFromNumeric(BaseNumpyAppTest): + def test_argmax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, argmax + a = arange(6).reshape((2,3)) + assert argmax(a) == 5 + # assert (argmax(a, axis=0) == array([1, 1, 1])).all() + # assert (argmax(a, axis=1) == array([2, 2])).all() + b = arange(6) + b[1] = 5 + assert argmax(b) == 1 + + def test_argmin(self): + # tests adapted from test_argmax + from numpypy import array, arange, argmin + a = arange(6).reshape((2,3)) + assert argmin(a) == 0 + # assert (argmax(a, axis=0) == array([0, 0, 0])).all() + # assert (argmax(a, axis=1) == array([0, 0])).all() + b = arange(6) + b[1] = 0 + assert argmin(b) == 0 + + def test_shape(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy 
import array, identity, shape + assert shape(identity(3)) == (3, 3) + assert shape([[1, 2]]) == (1, 2) + assert shape([0]) == (1,) + assert shape(0) == () + # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + # assert shape(a) == (2,) + + def test_sum(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, sum, ones + assert sum([0.5, 1.5])== 2.0 + assert sum([[0, 1], [0, 5]]) == 6 + # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 + # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + # If the accumulator is too small, overflow occurs: + # assert ones(128, dtype=int8).sum(dtype=int8) == -128 + + def test_amin(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amin + a = arange(4).reshape((2,2)) + assert amin(a) == 0 + # # Minima along the first axis + # assert (amin(a, axis=0) == array([0, 1])).all() + # # Minima along the second axis + # assert (amin(a, axis=1) == array([0, 2])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amin(b) == nan + # assert nanmin(b) == 0.0 + + def test_amax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amax + a = arange(4).reshape((2,2)) + assert amax(a) == 3 + # assert (amax(a, axis=0) == array([2, 3])).all() + # assert (amax(a, axis=1) == array([1, 3])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amax(b) == nan + # assert nanmax(b) == 4.0 + + def test_alen(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, zeros, alen + a = zeros((7,4,5)) + assert a.shape[0] == 7 + assert alen(a) == 7 + + def test_ndim(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, ndim + assert ndim([[1,2,3],[4,5,6]]) == 2 + assert ndim(array([[1,2,3],[4,5,6]])) == 2 + 
assert ndim(1) == 0 + + def test_rank(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, rank + assert rank([[1,2,3],[4,5,6]]) == 2 + assert rank(array([[1,2,3],[4,5,6]])) == 2 + assert rank(1) == 0 + + def test_var(self): + from numpypy import array, var + a = array([[1,2],[3,4]]) + assert var(a) == 1.25 + # assert (np.var(a,0) == array([ 1., 1.])).all() + # assert (np.var(a,1) == array([ 0.25, 0.25])).all() + + def test_std(self): + from numpypy import array, std + a = array([[1, 2], [3, 4]]) + assert std(a) == 1.1180339887498949 + # assert (std(a, axis=0) == array([ 1., 1.])).all() + # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -180,7 +180,12 @@ if name is None: name = pyobj.func_name if signature is None: - signature = cpython_code_signature(pyobj.func_code) + if hasattr(pyobj, '_generator_next_method_of_'): + from pypy.interpreter.argument import Signature + signature = Signature(['entry']) # haaaaaack + defaults = () + else: + signature = cpython_code_signature(pyobj.func_code) if defaults is None: defaults = pyobj.func_defaults self.name = name @@ -252,7 +257,8 @@ try: inputcells = args.match_signature(signature, defs_s) except ArgErr, e: - raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) + raise TypeError("signature mismatch: %s() %s" % + (self.name, e.getmsg())) return inputcells def specialize(self, inputcells, op=None): diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -12,7 +12,7 @@ PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest +.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @@ -23,6 +23,7 @@ @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @@ -79,6 +80,11 @@ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man" + changes: python config/generate.py $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -175,15 +175,15 @@ RPython ================= -RPython Definition, not ------------------------ +RPython Definition +------------------ -The list and exact details of the "RPython" restrictions are a somewhat -evolving topic. In particular, we have no formal language definition -as we find it more practical to discuss and evolve the set of -restrictions while working on the whole program analysis. If you -have any questions about the restrictions below then please feel -free to mail us at pypy-dev at codespeak net. +RPython is a restricted subset of Python that is amenable to static analysis. +Although there are additions to the language and some things might surprisingly +work, this is a rough list of restrictions that should be considered. 
Note
+that there are tons of special cased restrictions that you'll encounter
+as you go. The exact definition is "RPython is everything that our translation
+toolchain can accept" :)

.. _`wrapped object`: coding-guide.html#wrapping-rules

@@ -198,7 +198,7 @@
 contain both a string and a int must be avoided. It is allowed
 to mix None (basically with the role of a null pointer) with many other
 types: `wrapped objects`, class instances, lists, dicts, strings, etc.
- but *not* with int and floats.
+ but *not* with int, floats or tuples.

 **constants**

@@ -209,9 +209,12 @@
 have this restriction, so if you need mutable global state, store it
 in the attributes of some prebuilt singleton instance.

+
+
 **control structures**

- all allowed but yield, ``for`` loops restricted to builtin types
+ all allowed, ``for`` loops restricted to builtin types, generators
+ very restricted.

 **range**

@@ -226,7 +229,8 @@
 **generators**

- generators are not supported.
+ generators are supported, but their exact scope is very limited. You can't
+ merge two different generators in one control point.

 **exceptions**

@@ -245,22 +249,27 @@
 **strings**

- a lot of, but not all string methods are supported. Indexes can be
+ a lot of, but not all string methods are supported and those that are
+ supported do not necessarily accept all arguments. Indexes can be
 negative. In case they are not, then you get slightly more efficient
 code if the translator can prove that they are non-negative. When
 slicing a string it is necessary to prove that the slice start and
- stop indexes are non-negative.
+ stop indexes are non-negative. There is no implicit str-to-unicode cast
+ anywhere.

 **tuples**

 no variable-length tuples; use them to store or return pairs or n-tuples of
- values. Each combination of types for elements and length constitute a separate
- and not mixable type.
+ values. Each combination of types for elements and length constitute
+ a separate and not mixable type.
**lists** lists are used as an allocated array. Lists are over-allocated, so list.append() - is reasonably fast. Negative or out-of-bound indexes are only allowed for the + is reasonably fast. However, if you use a fixed-size list, the code + is more efficient. Annotator can figure out most of the time that your + list is fixed-size, even when you use list comprehension. + Negative or out-of-bound indexes are only allowed for the most common operations, as follows: - *indexing*: @@ -287,16 +296,14 @@ **dicts** - dicts with a unique key type only, provided it is hashable. - String keys have been the only allowed key types for a while, but this was generalized. - After some re-optimization, - the implementation could safely decide that all string dict keys should be interned. + dicts with a unique key type only, provided it is hashable. Custom + hash functions and custom equality will not be honored. + Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions. **list comprehensions** - may be used to create allocated, initialized arrays. - After list over-allocation was introduced, there is no longer any restriction. + May be used to create allocated, initialized arrays. **functions** @@ -334,9 +341,8 @@ **objects** - in PyPy, wrapped objects are borrowed from the object space. Just like - in CPython, code that needs e.g. a dictionary can use a wrapped dict - and the object space operations on it. + Normal rules apply. Special methods are not honoured, except ``__init__`` and + ``__del__``. This layout makes the number of types to take care about quite limited. diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -197,3 +197,10 @@ # Example configuration for intersphinx: refer to the Python standard library. 
intersphinx_mapping = {'http://docs.python.org/': None} +# -- Options for manpage output------------------------------------------------- + +man_pages = [ + ('man/pypy.1', 'pypy', + u'fast, compliant alternative implementation of the Python language', + u'The PyPy Project', 1) +] diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst --- a/pypy/doc/extradoc.rst +++ b/pypy/doc/extradoc.rst @@ -8,6 +8,9 @@ *Articles about PyPy published so far, most recent first:* (bibtex_ file) +* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_, + C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo + * `Allocation Removal by Partial Evaluation in a Tracing JIT`_, C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo @@ -50,6 +53,9 @@ *Other research using PyPy (as far as we know it):* +* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_, + N. Riley and C. Zilles + * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_, C. Bruni and T. Verwaest @@ -65,6 +71,7 @@ .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib +.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf .. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf @@ -74,6 +81,7 @@ .. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf .. 
_`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07 .. _`EU Reports`: index-report.html +.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz .. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7 diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/man/pypy.1.rst @@ -0,0 +1,90 @@ +====== + pypy +====== + +SYNOPSIS +======== + +``pypy`` [*options*] +[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ] +[*arg*\ ...] + +OPTIONS +======= + +-i + Inspect interactively after running script. + +-O + Dummy optimization flag for compatibility with C Python. + +-c *cmd* + Program passed in as CMD (terminates option list). + +-S + Do not ``import site`` on initialization. + +-u + Unbuffered binary ``stdout`` and ``stderr``. + +-h, --help + Show a help message and exit. + +-m *mod* + Library module to be run as a script (terminates option list). + +-W *arg* + Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*). + +-E + Ignore environment variables (such as ``PYTHONPATH``). + +--version + Print the PyPy version. + +--info + Print translation information about this PyPy executable. + +--jit *arg* + Low level JIT parameters. Format is + *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...] + + ``off`` + Disable the JIT. + + ``threshold=``\ *value* + Number of times a loop has to run for it to become hot. 
+ + ``function_threshold=``\ *value* + Number of times a function must run for it to become traced from + start. + + ``inlining=``\ *value* + Inline python functions or not (``1``/``0``). + + ``loop_longevity=``\ *value* + A parameter controlling how long loops will be kept before being + freed, an estimate. + + ``max_retrace_guards=``\ *value* + Number of extra guards a retrace can cause. + + ``retrace_limit=``\ *value* + How many times we can try retracing before giving up. + + ``trace_eagerness=``\ *value* + Number of times a guard has to fail before we start compiling a + bridge. + + ``trace_limit=``\ *value* + Number of recorded operations before we abort tracing with + ``ABORT_TRACE_TOO_LONG``. + + ``enable_opts=``\ *value* + Optimizations to enabled or ``all``. + Warning, this option is dangerous, and should be avoided. + +SEE ALSO +======== + +**python**\ (1) diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py deleted file mode 100644 --- a/pypy/doc/tool/makecontributor.py +++ /dev/null @@ -1,47 +0,0 @@ -""" - -generates a contributor list - -""" -import py - -# this file is useless, use the following commandline instead: -# hg churn -c -t "{author}" | sed -e 's/ <.*//' - -try: - path = py.std.sys.argv[1] -except IndexError: - print "usage: %s ROOTPATH" %(py.std.sys.argv[0]) - raise SystemExit, 1 - -d = {} - -for logentry in py.path.svnwc(path).log(): - a = logentry.author - if a in d: - d[a] += 1 - else: - d[a] = 1 - -items = d.items() -items.sort(lambda x,y: -cmp(x[1], y[1])) - -import uconf # http://codespeak.net/svn/uconf/dist/uconf - -# Authors that don't want to be listed -excluded = set("anna gintas ignas".split()) -cutoff = 5 # cutoff for authors in the LICENSE file -mark = False -for author, count in items: - if author in excluded: - continue - user = uconf.system.User(author) - try: - realname = user.realname.strip() - except KeyError: - realname = author - if not mark and count < cutoff: - mark = True - print '-'*60 - 
print " ", realname - #print count, " ", author diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - 
msg = "%s() got an unexpected keyword argument '%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -51,6 +51,24 @@ space.setattr(self, w_name, space.getitem(w_state, w_name)) + def missing_field(self, space, required, host): + "Find which required field is missing." + state = self.initialization_state + for i in range(len(required)): + if (state >> i) & 1: + continue # field is present + missing = required[i] + if missing is None: + continue # field is optional + w_obj = self.getdictvalue(space, missing) + if w_obj is None: + err = "required field \"%s\" missing from %s" + raise operationerrfmt(space.w_TypeError, err, missing, host) + else: + err = "incorrect type for field \"%s\" in %s" + raise operationerrfmt(space.w_TypeError, err, missing, host) + raise AssertionError("should not reach here") + class NodeVisitorNotImplemented(Exception): pass @@ -94,17 +112,6 @@ ) -def missing_field(space, state, required, host): - "Find which required field is missing." 
- for i in range(len(required)): - if not (state >> i) & 1: - missing = required[i] - if missing is not None: - err = "required field \"%s\" missing from %s" - err = err % (missing, host) - w_err = space.wrap(err) - raise OperationError(space.w_TypeError, w_err) - raise AssertionError("should not reach here") class mod(AST): @@ -112,7 +119,6 @@ class Module(mod): - def __init__(self, body): self.body = body self.w_body = None @@ -128,7 +134,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Module') + self.missing_field(space, ['body'], 'Module') else: pass w_list = self.w_body @@ -145,7 +151,6 @@ class Interactive(mod): - def __init__(self, body): self.body = body self.w_body = None @@ -161,7 +166,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Interactive') + self.missing_field(space, ['body'], 'Interactive') else: pass w_list = self.w_body @@ -178,7 +183,6 @@ class Expression(mod): - def __init__(self, body): self.body = body self.initialization_state = 1 @@ -192,7 +196,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Expression') + self.missing_field(space, ['body'], 'Expression') else: pass self.body.sync_app_attrs(space) @@ -200,7 +204,6 @@ class Suite(mod): - def __init__(self, body): self.body = body self.w_body = None @@ -216,7 +219,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Suite') + self.missing_field(space, ['body'], 'Suite') else: pass w_list = self.w_body @@ -232,15 +235,13 @@ class stmt(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class FunctionDef(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def __init__(self, name, args, body, 
decorator_list, lineno, col_offset): self.name = name self.args = args @@ -264,7 +265,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['name', 'args', 'body', 'decorator_list', 'lineno', 'col_offset'], 'FunctionDef') + self.missing_field(space, ['lineno', 'col_offset', 'name', 'args', 'body', 'decorator_list'], 'FunctionDef') else: pass self.args.sync_app_attrs(space) @@ -292,9 +293,6 @@ class ClassDef(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def __init__(self, name, bases, body, decorator_list, lineno, col_offset): self.name = name self.bases = bases @@ -320,7 +318,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['name', 'bases', 'body', 'decorator_list', 'lineno', 'col_offset'], 'ClassDef') + self.missing_field(space, ['lineno', 'col_offset', 'name', 'bases', 'body', 'decorator_list'], 'ClassDef') else: pass w_list = self.w_bases @@ -357,9 +355,6 @@ class Return(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value stmt.__init__(self, lineno, col_offset) @@ -374,10 +369,10 @@ return visitor.visit_Return(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 6: - missing_field(space, self.initialization_state, [None, 'lineno', 'col_offset'], 'Return') + if (self.initialization_state & ~4) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None], 'Return') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.value = None if self.value: self.value.sync_app_attrs(space) @@ -385,9 +380,6 @@ class Delete(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, targets, lineno, col_offset): self.targets = targets self.w_targets = None @@ -404,7 +396,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, 
self.initialization_state, ['targets', 'lineno', 'col_offset'], 'Delete') + self.missing_field(space, ['lineno', 'col_offset', 'targets'], 'Delete') else: pass w_list = self.w_targets @@ -421,9 +413,6 @@ class Assign(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, targets, value, lineno, col_offset): self.targets = targets self.w_targets = None @@ -442,7 +431,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['targets', 'value', 'lineno', 'col_offset'], 'Assign') + self.missing_field(space, ['lineno', 'col_offset', 'targets', 'value'], 'Assign') else: pass w_list = self.w_targets @@ -460,9 +449,6 @@ class AugAssign(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, target, op, value, lineno, col_offset): self.target = target self.op = op @@ -480,7 +466,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['target', 'op', 'value', 'lineno', 'col_offset'], 'AugAssign') + self.missing_field(space, ['lineno', 'col_offset', 'target', 'op', 'value'], 'AugAssign') else: pass self.target.sync_app_attrs(space) @@ -489,9 +475,6 @@ class Print(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, dest, values, nl, lineno, col_offset): self.dest = dest self.values = values @@ -511,10 +494,10 @@ return visitor.visit_Print(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 30: - missing_field(space, self.initialization_state, [None, 'values', 'nl', 'lineno', 'col_offset'], 'Print') + if (self.initialization_state & ~4) ^ 27: + self.missing_field(space, ['lineno', 'col_offset', None, 'values', 'nl'], 'Print') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.dest = None if self.dest: self.dest.sync_app_attrs(space) @@ -532,9 +515,6 @@ class For(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def 
__init__(self, target, iter, body, orelse, lineno, col_offset): self.target = target self.iter = iter @@ -559,7 +539,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['target', 'iter', 'body', 'orelse', 'lineno', 'col_offset'], 'For') + self.missing_field(space, ['lineno', 'col_offset', 'target', 'iter', 'body', 'orelse'], 'For') else: pass self.target.sync_app_attrs(space) @@ -588,9 +568,6 @@ class While(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -613,7 +590,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'While') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'While') else: pass self.test.sync_app_attrs(space) @@ -641,9 +618,6 @@ class If(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -666,7 +640,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'If') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'If') else: pass self.test.sync_app_attrs(space) @@ -694,9 +668,6 @@ class With(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, context_expr, optional_vars, body, lineno, col_offset): self.context_expr = context_expr self.optional_vars = optional_vars @@ -717,10 +688,10 @@ return visitor.visit_With(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~2) ^ 29: - missing_field(space, self.initialization_state, ['context_expr', None, 'body', 'lineno', 'col_offset'], 'With') + if (self.initialization_state & ~8) ^ 23: + 
self.missing_field(space, ['lineno', 'col_offset', 'context_expr', None, 'body'], 'With') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.optional_vars = None self.context_expr.sync_app_attrs(space) if self.optional_vars: @@ -739,9 +710,6 @@ class Raise(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, type, inst, tback, lineno, col_offset): self.type = type self.inst = inst @@ -762,14 +730,14 @@ return visitor.visit_Raise(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~7) ^ 24: - missing_field(space, self.initialization_state, [None, None, None, 'lineno', 'col_offset'], 'Raise') + if (self.initialization_state & ~28) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None, None, None], 'Raise') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.type = None - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.inst = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.tback = None if self.type: self.type.sync_app_attrs(space) @@ -781,9 +749,6 @@ class TryExcept(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, body, handlers, orelse, lineno, col_offset): self.body = body self.w_body = None @@ -808,7 +773,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['body', 'handlers', 'orelse', 'lineno', 'col_offset'], 'TryExcept') + self.missing_field(space, ['lineno', 'col_offset', 'body', 'handlers', 'orelse'], 'TryExcept') else: pass w_list = self.w_body @@ -845,9 +810,6 @@ class TryFinally(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, body, finalbody, lineno, col_offset): self.body = body self.w_body = None @@ -868,7 +830,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, 
self.initialization_state, ['body', 'finalbody', 'lineno', 'col_offset'], 'TryFinally') + self.missing_field(space, ['lineno', 'col_offset', 'body', 'finalbody'], 'TryFinally') else: pass w_list = self.w_body @@ -895,9 +857,6 @@ class Assert(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, test, msg, lineno, col_offset): self.test = test self.msg = msg @@ -914,10 +873,10 @@ return visitor.visit_Assert(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~2) ^ 13: - missing_field(space, self.initialization_state, ['test', None, 'lineno', 'col_offset'], 'Assert') + if (self.initialization_state & ~8) ^ 7: + self.missing_field(space, ['lineno', 'col_offset', 'test', None], 'Assert') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.msg = None self.test.sync_app_attrs(space) if self.msg: @@ -926,9 +885,6 @@ class Import(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, names, lineno, col_offset): self.names = names self.w_names = None @@ -945,7 +901,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['names', 'lineno', 'col_offset'], 'Import') + self.missing_field(space, ['lineno', 'col_offset', 'names'], 'Import') else: pass w_list = self.w_names @@ -962,9 +918,6 @@ class ImportFrom(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, module, names, level, lineno, col_offset): self.module = module self.names = names @@ -982,12 +935,12 @@ return visitor.visit_ImportFrom(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~5) ^ 26: - missing_field(space, self.initialization_state, [None, 'names', None, 'lineno', 'col_offset'], 'ImportFrom') + if (self.initialization_state & ~20) ^ 11: + self.missing_field(space, ['lineno', 'col_offset', None, 'names', None], 'ImportFrom') else: - if not self.initialization_state & 1: + if not self.initialization_state & 
4: self.module = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.level = 0 w_list = self.w_names if w_list is not None: @@ -1003,9 +956,6 @@ class Exec(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, body, globals, locals, lineno, col_offset): self.body = body self.globals = globals @@ -1025,12 +975,12 @@ return visitor.visit_Exec(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~6) ^ 25: - missing_field(space, self.initialization_state, ['body', None, None, 'lineno', 'col_offset'], 'Exec') + if (self.initialization_state & ~24) ^ 7: + self.missing_field(space, ['lineno', 'col_offset', 'body', None, None], 'Exec') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.globals = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.locals = None self.body.sync_app_attrs(space) if self.globals: @@ -1041,9 +991,6 @@ class Global(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, names, lineno, col_offset): self.names = names self.w_names = None @@ -1058,7 +1005,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['names', 'lineno', 'col_offset'], 'Global') + self.missing_field(space, ['lineno', 'col_offset', 'names'], 'Global') else: pass w_list = self.w_names @@ -1072,9 +1019,6 @@ class Expr(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value stmt.__init__(self, lineno, col_offset) @@ -1089,7 +1033,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Expr') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Expr') else: pass self.value.sync_app_attrs(space) @@ -1097,9 +1041,6 @@ class Pass(stmt): - _lineno_mask = 1 - 
_col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1112,16 +1053,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Pass') + self.missing_field(space, ['lineno', 'col_offset'], 'Pass') else: pass class Break(stmt): - _lineno_mask = 1 - _col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1134,16 +1072,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Break') + self.missing_field(space, ['lineno', 'col_offset'], 'Break') else: pass class Continue(stmt): - _lineno_mask = 1 - _col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1156,21 +1091,19 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Continue') + self.missing_field(space, ['lineno', 'col_offset'], 'Continue') else: pass class expr(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class BoolOp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, op, values, lineno, col_offset): self.op = op self.values = values @@ -1188,7 +1121,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['op', 'values', 'lineno', 'col_offset'], 'BoolOp') + self.missing_field(space, ['lineno', 'col_offset', 'op', 'values'], 'BoolOp') else: pass w_list = self.w_values @@ -1205,9 +1138,6 @@ class BinOp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, left, op, right, lineno, col_offset): self.left = left self.op = 
op @@ -1225,7 +1155,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['left', 'op', 'right', 'lineno', 'col_offset'], 'BinOp') + self.missing_field(space, ['lineno', 'col_offset', 'left', 'op', 'right'], 'BinOp') else: pass self.left.sync_app_attrs(space) @@ -1234,9 +1164,6 @@ class UnaryOp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, op, operand, lineno, col_offset): self.op = op self.operand = operand @@ -1252,7 +1179,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['op', 'operand', 'lineno', 'col_offset'], 'UnaryOp') + self.missing_field(space, ['lineno', 'col_offset', 'op', 'operand'], 'UnaryOp') else: pass self.operand.sync_app_attrs(space) @@ -1260,9 +1187,6 @@ class Lambda(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, args, body, lineno, col_offset): self.args = args self.body = body @@ -1279,7 +1203,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['args', 'body', 'lineno', 'col_offset'], 'Lambda') + self.missing_field(space, ['lineno', 'col_offset', 'args', 'body'], 'Lambda') else: pass self.args.sync_app_attrs(space) @@ -1288,9 +1212,6 @@ class IfExp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -1309,7 +1230,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'IfExp') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'IfExp') else: pass self.test.sync_app_attrs(space) @@ -1319,9 +1240,6 @@ class Dict(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, keys, values, lineno, col_offset): 
self.keys = keys self.w_keys = None @@ -1342,7 +1260,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['keys', 'values', 'lineno', 'col_offset'], 'Dict') + self.missing_field(space, ['lineno', 'col_offset', 'keys', 'values'], 'Dict') else: pass w_list = self.w_keys @@ -1369,9 +1287,6 @@ class Set(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, elts, lineno, col_offset): self.elts = elts self.w_elts = None @@ -1388,7 +1303,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['elts', 'lineno', 'col_offset'], 'Set') + self.missing_field(space, ['lineno', 'col_offset', 'elts'], 'Set') else: pass w_list = self.w_elts @@ -1405,9 +1320,6 @@ class ListComp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1426,7 +1338,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'ListComp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'ListComp') else: pass self.elt.sync_app_attrs(space) @@ -1444,9 +1356,6 @@ class SetComp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1465,7 +1374,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'SetComp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'SetComp') else: pass self.elt.sync_app_attrs(space) @@ -1483,9 +1392,6 @@ class DictComp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, key, value, generators, lineno, col_offset): 
self.key = key self.value = value @@ -1506,7 +1412,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['key', 'value', 'generators', 'lineno', 'col_offset'], 'DictComp') + self.missing_field(space, ['lineno', 'col_offset', 'key', 'value', 'generators'], 'DictComp') else: pass self.key.sync_app_attrs(space) @@ -1525,9 +1431,6 @@ class GeneratorExp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1546,7 +1449,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'GeneratorExp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'GeneratorExp') else: pass self.elt.sync_app_attrs(space) @@ -1564,9 +1467,6 @@ class Yield(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1581,10 +1481,10 @@ return visitor.visit_Yield(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 6: - missing_field(space, self.initialization_state, [None, 'lineno', 'col_offset'], 'Yield') + if (self.initialization_state & ~4) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None], 'Yield') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.value = None if self.value: self.value.sync_app_attrs(space) @@ -1592,9 +1492,6 @@ class Compare(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, left, ops, comparators, lineno, col_offset): self.left = left self.ops = ops @@ -1615,7 +1512,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['left', 'ops', 'comparators', 'lineno', 'col_offset'], 'Compare') 
+ self.missing_field(space, ['lineno', 'col_offset', 'left', 'ops', 'comparators'], 'Compare') else: pass self.left.sync_app_attrs(space) @@ -1640,9 +1537,6 @@ class Call(expr): - _lineno_mask = 32 - _col_offset_mask = 64 - def __init__(self, func, args, keywords, starargs, kwargs, lineno, col_offset): self.func = func self.args = args @@ -1670,12 +1564,12 @@ return visitor.visit_Call(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~24) ^ 103: - missing_field(space, self.initialization_state, ['func', 'args', 'keywords', None, None, 'lineno', 'col_offset'], 'Call') + if (self.initialization_state & ~96) ^ 31: + self.missing_field(space, ['lineno', 'col_offset', 'func', 'args', 'keywords', None, None], 'Call') else: - if not self.initialization_state & 8: + if not self.initialization_state & 32: self.starargs = None - if not self.initialization_state & 16: + if not self.initialization_state & 64: self.kwargs = None self.func.sync_app_attrs(space) w_list = self.w_args @@ -1706,9 +1600,6 @@ class Repr(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1723,7 +1614,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Repr') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Repr') else: pass self.value.sync_app_attrs(space) @@ -1731,9 +1622,6 @@ class Num(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, n, lineno, col_offset): self.n = n expr.__init__(self, lineno, col_offset) @@ -1747,16 +1635,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['n', 'lineno', 'col_offset'], 'Num') + self.missing_field(space, ['lineno', 'col_offset', 'n'], 'Num') else: pass class Str(expr): - _lineno_mask = 2 - _col_offset_mask = 4 
- def __init__(self, s, lineno, col_offset): self.s = s expr.__init__(self, lineno, col_offset) @@ -1770,16 +1655,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['s', 'lineno', 'col_offset'], 'Str') + self.missing_field(space, ['lineno', 'col_offset', 's'], 'Str') else: pass class Attribute(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, value, attr, ctx, lineno, col_offset): self.value = value self.attr = attr @@ -1796,7 +1678,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['value', 'attr', 'ctx', 'lineno', 'col_offset'], 'Attribute') + self.missing_field(space, ['lineno', 'col_offset', 'value', 'attr', 'ctx'], 'Attribute') else: pass self.value.sync_app_attrs(space) @@ -1804,9 +1686,6 @@ class Subscript(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, value, slice, ctx, lineno, col_offset): self.value = value self.slice = slice @@ -1824,7 +1703,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['value', 'slice', 'ctx', 'lineno', 'col_offset'], 'Subscript') + self.missing_field(space, ['lineno', 'col_offset', 'value', 'slice', 'ctx'], 'Subscript') else: pass self.value.sync_app_attrs(space) @@ -1833,9 +1712,6 @@ class Name(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, id, ctx, lineno, col_offset): self.id = id self.ctx = ctx @@ -1850,16 +1726,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['id', 'ctx', 'lineno', 'col_offset'], 'Name') + self.missing_field(space, ['lineno', 'col_offset', 'id', 'ctx'], 'Name') else: pass class List(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elts, ctx, lineno, col_offset): self.elts = elts self.w_elts = None @@ 
-1877,7 +1750,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elts', 'ctx', 'lineno', 'col_offset'], 'List') + self.missing_field(space, ['lineno', 'col_offset', 'elts', 'ctx'], 'List') else: pass w_list = self.w_elts @@ -1894,9 +1767,6 @@ class Tuple(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elts, ctx, lineno, col_offset): self.elts = elts self.w_elts = None @@ -1914,7 +1784,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elts', 'ctx', 'lineno', 'col_offset'], 'Tuple') + self.missing_field(space, ['lineno', 'col_offset', 'elts', 'ctx'], 'Tuple') else: pass w_list = self.w_elts @@ -1931,9 +1801,6 @@ class Const(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1947,7 +1814,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Const') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Const') else: pass @@ -2009,7 +1876,6 @@ class Ellipsis(slice): - def __init__(self): self.initialization_state = 0 @@ -2021,14 +1887,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 0: - missing_field(space, self.initialization_state, [], 'Ellipsis') + self.missing_field(space, [], 'Ellipsis') else: pass class Slice(slice): - def __init__(self, lower, upper, step): self.lower = lower self.upper = upper @@ -2049,7 +1914,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~7) ^ 0: - missing_field(space, self.initialization_state, [None, None, None], 'Slice') + self.missing_field(space, [None, None, None], 'Slice') else: if not self.initialization_state & 1: self.lower = None @@ -2067,7 +1932,6 @@ class ExtSlice(slice): 
- def __init__(self, dims): self.dims = dims self.w_dims = None @@ -2083,7 +1947,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['dims'], 'ExtSlice') + self.missing_field(space, ['dims'], 'ExtSlice') else: pass w_list = self.w_dims @@ -2100,7 +1964,6 @@ class Index(slice): - def __init__(self, value): self.value = value self.initialization_state = 1 @@ -2114,7 +1977,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['value'], 'Index') + self.missing_field(space, ['value'], 'Index') else: pass self.value.sync_app_attrs(space) @@ -2377,7 +2240,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['target', 'iter', 'ifs'], 'comprehension') + self.missing_field(space, ['target', 'iter', 'ifs'], 'comprehension') else: pass self.target.sync_app_attrs(space) @@ -2394,15 +2257,13 @@ node.sync_app_attrs(space) class excepthandler(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class ExceptHandler(excepthandler): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, type, name, body, lineno, col_offset): self.type = type self.name = name @@ -2424,12 +2285,12 @@ return visitor.visit_ExceptHandler(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~3) ^ 28: - missing_field(space, self.initialization_state, [None, None, 'body', 'lineno', 'col_offset'], 'ExceptHandler') + if (self.initialization_state & ~12) ^ 19: + self.missing_field(space, ['lineno', 'col_offset', None, None, 'body'], 'ExceptHandler') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.type = None - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.name = None if self.type: self.type.sync_app_attrs(space) @@ -2470,7 +2331,7 @@ 
def sync_app_attrs(self, space): if (self.initialization_state & ~6) ^ 9: - missing_field(space, self.initialization_state, ['args', None, None, 'defaults'], 'arguments') + self.missing_field(space, ['args', None, None, 'defaults'], 'arguments') else: if not self.initialization_state & 2: self.vararg = None @@ -2513,7 +2374,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['arg', 'value'], 'keyword') + self.missing_field(space, ['arg', 'value'], 'keyword') else: pass self.value.sync_app_attrs(space) @@ -2533,7 +2394,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~2) ^ 1: - missing_field(space, self.initialization_state, ['name', None], 'alias') + self.missing_field(space, ['name', None], 'alias') else: if not self.initialization_state & 2: self.asname = None @@ -3019,6 +2880,8 @@ def Expression_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -3098,7 +2961,7 @@ w_obj = w_self.getdictvalue(space, 'lineno') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._lineno_mask: + if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) @@ -3112,14 +2975,14 @@ w_self.setdictvalue(space, 'lineno', w_new_value) return w_self.deldictvalue(space, 'lineno') - w_self.initialization_state |= w_self._lineno_mask + w_self.initialization_state |= 1 def stmt_get_col_offset(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'col_offset') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._col_offset_mask: + if not 
w_self.initialization_state & 2: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) @@ -3133,7 +2996,7 @@ w_self.setdictvalue(space, 'col_offset', w_new_value) return w_self.deldictvalue(space, 'col_offset') - w_self.initialization_state |= w_self._col_offset_mask + w_self.initialization_state |= 2 stmt.typedef = typedef.TypeDef("stmt", AST.typedef, @@ -3149,7 +3012,7 @@ w_obj = w_self.getdictvalue(space, 'name') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) @@ -3163,14 +3026,14 @@ w_self.setdictvalue(space, 'name', w_new_value) return w_self.deldictvalue(space, 'name') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def FunctionDef_get_args(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'args') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') return space.wrap(w_self.args) @@ -3184,10 +3047,10 @@ w_self.setdictvalue(space, 'args', w_new_value) return w_self.deldictvalue(space, 'args') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def FunctionDef_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -3201,10 +3064,10 @@ def FunctionDef_set_body(space, w_self, 
w_new_value):
     w_self.w_body = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 def FunctionDef_get_decorator_list(space, w_self):
-    if not w_self.initialization_state & 8:
+    if not w_self.initialization_state & 32:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list')
     if w_self.w_decorator_list is None:
@@ -3218,7 +3081,7 @@

 def FunctionDef_set_decorator_list(space, w_self, w_new_value):
     w_self.w_decorator_list = w_new_value
-    w_self.initialization_state |= 8
+    w_self.initialization_state |= 32

 _FunctionDef_field_unroller = unrolling_iterable(['name', 'args', 'body', 'decorator_list'])

 def FunctionDef_init(space, w_self, __args__):
@@ -3254,7 +3117,7 @@
         w_obj = w_self.getdictvalue(space, 'name')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name')
     return space.wrap(w_self.name)
@@ -3268,10 +3131,10 @@
         w_self.setdictvalue(space, 'name', w_new_value)
         return
     w_self.deldictvalue(space, 'name')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def ClassDef_get_bases(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'bases')
     if w_self.w_bases is None:
@@ -3285,10 +3148,10 @@

 def ClassDef_set_bases(space, w_self, w_new_value):
     w_self.w_bases = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def ClassDef_get_body(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     if w_self.w_body is None:
@@ -3302,10 +3165,10 @@

 def ClassDef_set_body(space, w_self, w_new_value):
     w_self.w_body = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 def ClassDef_get_decorator_list(space, w_self):
-    if not w_self.initialization_state & 8:
+    if not w_self.initialization_state & 32:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list')
     if w_self.w_decorator_list is None:
@@ -3319,7 +3182,7 @@

 def ClassDef_set_decorator_list(space, w_self, w_new_value):
     w_self.w_decorator_list = w_new_value
-    w_self.initialization_state |= 8
+    w_self.initialization_state |= 32

 _ClassDef_field_unroller = unrolling_iterable(['name', 'bases', 'body', 'decorator_list'])

 def ClassDef_init(space, w_self, __args__):
@@ -3356,7 +3219,7 @@
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return space.wrap(w_self.value)
@@ -3364,13 +3227,15 @@

 def Return_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, True)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 _Return_field_unroller = unrolling_iterable(['value'])

 def Return_init(space, w_self, __args__):
@@ -3397,7 +3262,7 @@
 )

 def Delete_get_targets(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets')
     if w_self.w_targets is None:
@@ -3411,7 +3276,7 @@

 def Delete_set_targets(space, w_self, w_new_value):
     w_self.w_targets = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 _Delete_field_unroller = unrolling_iterable(['targets'])

 def Delete_init(space, w_self, __args__):
@@ -3439,7 +3304,7 @@
 )

 def Assign_get_targets(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets')
     if w_self.w_targets is None:
@@ -3453,14 +3318,14 @@

 def Assign_set_targets(space, w_self, w_new_value):
     w_self.w_targets = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def Assign_get_value(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return space.wrap(w_self.value)
@@ -3468,13 +3333,15 @@

 def Assign_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, False)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 _Assign_field_unroller = unrolling_iterable(['targets', 'value'])

 def Assign_init(space, w_self, __args__):
@@ -3507,7 +3374,7 @@
         w_obj = w_self.getdictvalue(space, 'target')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target')
     return space.wrap(w_self.target)
@@ -3515,20 +3382,22 @@

 def AugAssign_set_target(space, w_self, w_new_value):
     try:
         w_self.target = space.interp_w(expr, w_new_value, False)
+        if type(w_self.target) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'target', w_new_value)
         return
     w_self.deldictvalue(space, 'target')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def AugAssign_get_op(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'op')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op')
     return operator_to_class[w_self.op - 1]()
@@ -3544,14 +3413,14 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'op', w_new_value)
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def AugAssign_get_value(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return space.wrap(w_self.value)
@@ -3559,13 +3428,15 @@

 def AugAssign_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, False)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 _AugAssign_field_unroller = unrolling_iterable(['target', 'op', 'value'])

 def AugAssign_init(space, w_self, __args__):
@@ -3598,7 +3469,7 @@
         w_obj = w_self.getdictvalue(space, 'dest')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dest')
     return space.wrap(w_self.dest)
@@ -3606,16 +3477,18 @@

 def Print_set_dest(space, w_self, w_new_value):
     try:
         w_self.dest = space.interp_w(expr, w_new_value, True)
+        if type(w_self.dest) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'dest', w_new_value)
         return
     w_self.deldictvalue(space, 'dest')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def Print_get_values(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values')
     if w_self.w_values is None:
@@ -3629,14 +3502,14 @@

 def Print_set_values(space, w_self, w_new_value):
     w_self.w_values = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def Print_get_nl(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'nl')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'nl')
     return space.wrap(w_self.nl)
@@ -3650,7 +3523,7 @@
         w_self.setdictvalue(space, 'nl', w_new_value)
         return
     w_self.deldictvalue(space, 'nl')
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 _Print_field_unroller = unrolling_iterable(['dest', 'values', 'nl'])

 def Print_init(space, w_self, __args__):
@@ -3684,7 +3557,7 @@
         w_obj = w_self.getdictvalue(space, 'target')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target')
     return space.wrap(w_self.target)
@@ -3692,20 +3565,22 @@

 def For_set_target(space, w_self, w_new_value):
     try:
         w_self.target = space.interp_w(expr, w_new_value, False)
+        if type(w_self.target) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'target', w_new_value)
         return
     w_self.deldictvalue(space, 'target')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def For_get_iter(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'iter')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter')
     return space.wrap(w_self.iter)
@@ -3713,16 +3588,18 @@

 def For_set_iter(space, w_self, w_new_value):
     try:
         w_self.iter = space.interp_w(expr, w_new_value, False)
+        if type(w_self.iter) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'iter', w_new_value)
         return
     w_self.deldictvalue(space, 'iter')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def For_get_body(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     if w_self.w_body is None:
@@ -3736,10 +3613,10 @@

 def For_set_body(space, w_self, w_new_value):
     w_self.w_body = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 def For_get_orelse(space, w_self):
-    if not w_self.initialization_state & 8:
+    if not w_self.initialization_state & 32:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse')
     if w_self.w_orelse is None:
@@ -3753,7 +3630,7 @@

 def For_set_orelse(space, w_self, w_new_value):
     w_self.w_orelse = w_new_value
-    w_self.initialization_state |= 8
+    w_self.initialization_state |= 32

 _For_field_unroller = unrolling_iterable(['target', 'iter', 'body', 'orelse'])

 def For_init(space, w_self, __args__):
@@ -3789,7 +3666,7 @@
         w_obj = w_self.getdictvalue(space, 'test')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test')
     return space.wrap(w_self.test)
@@ -3797,16 +3674,18 @@

 def While_set_test(space, w_self, w_new_value):
     try:
         w_self.test = space.interp_w(expr, w_new_value, False)
+        if type(w_self.test) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'test', w_new_value)
         return
     w_self.deldictvalue(space, 'test')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def While_get_body(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     if w_self.w_body is None:
@@ -3820,10 +3699,10 @@

 def While_set_body(space, w_self, w_new_value):
     w_self.w_body = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def While_get_orelse(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse')
     if w_self.w_orelse is None:
@@ -3837,7 +3716,7 @@

 def While_set_orelse(space, w_self, w_new_value):
     w_self.w_orelse = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 _While_field_unroller = unrolling_iterable(['test', 'body', 'orelse'])

 def While_init(space, w_self, __args__):
@@ -3872,7 +3751,7 @@
         w_obj = w_self.getdictvalue(space, 'test')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test')
     return space.wrap(w_self.test)
@@ -3880,16 +3759,18 @@

 def If_set_test(space, w_self, w_new_value):
     try:
         w_self.test = space.interp_w(expr, w_new_value, False)
+        if type(w_self.test) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'test', w_new_value)
         return
     w_self.deldictvalue(space, 'test')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def If_get_body(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     if w_self.w_body is None:
@@ -3903,10 +3784,10 @@

 def If_set_body(space, w_self, w_new_value):
     w_self.w_body = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def If_get_orelse(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse')
     if w_self.w_orelse is None:
@@ -3920,7 +3801,7 @@

 def If_set_orelse(space, w_self, w_new_value):
     w_self.w_orelse = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 _If_field_unroller = unrolling_iterable(['test', 'body', 'orelse'])

 def If_init(space, w_self, __args__):
@@ -3955,7 +3836,7 @@
         w_obj = w_self.getdictvalue(space, 'context_expr')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'context_expr')
     return space.wrap(w_self.context_expr)
@@ -3963,20 +3844,22 @@

 def With_set_context_expr(space, w_self, w_new_value):
     try:
         w_self.context_expr = space.interp_w(expr, w_new_value, False)
+        if type(w_self.context_expr) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'context_expr', w_new_value)
         return
     w_self.deldictvalue(space, 'context_expr')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def With_get_optional_vars(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'optional_vars')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'optional_vars')
     return space.wrap(w_self.optional_vars)
@@ -3984,16 +3867,18 @@

 def With_set_optional_vars(space, w_self, w_new_value):
     try:
         w_self.optional_vars = space.interp_w(expr, w_new_value, True)
+        if type(w_self.optional_vars) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'optional_vars', w_new_value)
         return
     w_self.deldictvalue(space, 'optional_vars')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def With_get_body(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     if w_self.w_body is None:
@@ -4007,7 +3892,7 @@

 def With_set_body(space, w_self, w_new_value):
     w_self.w_body = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 _With_field_unroller = unrolling_iterable(['context_expr', 'optional_vars', 'body'])

 def With_init(space, w_self, __args__):
@@ -4041,7 +3926,7 @@
         w_obj = w_self.getdictvalue(space, 'type')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type')
     return space.wrap(w_self.type)
@@ -4049,20 +3934,22 @@

 def Raise_set_type(space, w_self, w_new_value):
     try:
         w_self.type = space.interp_w(expr, w_new_value, True)
+        if type(w_self.type) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'type', w_new_value)
         return
     w_self.deldictvalue(space, 'type')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def Raise_get_inst(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'inst')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'inst')
     return space.wrap(w_self.inst)
@@ -4070,20 +3957,22 @@

 def Raise_set_inst(space, w_self, w_new_value):
     try:
         w_self.inst = space.interp_w(expr, w_new_value, True)
+        if type(w_self.inst) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'inst', w_new_value)
         return
     w_self.deldictvalue(space, 'inst')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def Raise_get_tback(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'tback')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'tback')
     return space.wrap(w_self.tback)
@@ -4091,13 +3980,15 @@

 def Raise_set_tback(space, w_self, w_new_value):
     try:
         w_self.tback = space.interp_w(expr, w_new_value, True)
+        if type(w_self.tback) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'tback', w_new_value)
         return
     w_self.deldictvalue(space, 'tback')
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 _Raise_field_unroller = unrolling_iterable(['type', 'inst', 'tback'])

 def Raise_init(space, w_self, __args__):
@@ -4126,7 +4017,7 @@
 )

 def TryExcept_get_body(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     if w_self.w_body is None:
@@ -4140,10 +4031,10 @@

 def TryExcept_set_body(space, w_self, w_new_value):
     w_self.w_body = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def TryExcept_get_handlers(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'handlers')
     if w_self.w_handlers is None:
@@ -4157,10 +4048,10 @@

 def TryExcept_set_handlers(space, w_self, w_new_value):
     w_self.w_handlers = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def TryExcept_get_orelse(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse')
     if w_self.w_orelse is None:
@@ -4174,7 +4065,7 @@

 def TryExcept_set_orelse(space, w_self, w_new_value):
     w_self.w_orelse = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 _TryExcept_field_unroller = unrolling_iterable(['body', 'handlers', 'orelse'])

 def TryExcept_init(space, w_self, __args__):
@@ -4206,7 +4097,7 @@
 )

 def TryFinally_get_body(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     if w_self.w_body is None:
@@ -4220,10 +4111,10 @@

 def TryFinally_set_body(space, w_self, w_new_value):
     w_self.w_body = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def TryFinally_get_finalbody(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'finalbody')
     if w_self.w_finalbody is None:
@@ -4237,7 +4128,7 @@

 def TryFinally_set_finalbody(space, w_self, w_new_value):
     w_self.w_finalbody = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 _TryFinally_field_unroller = unrolling_iterable(['body', 'finalbody'])

 def TryFinally_init(space, w_self, __args__):
@@ -4271,7 +4162,7 @@
         w_obj = w_self.getdictvalue(space, 'test')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test')
     return space.wrap(w_self.test)
@@ -4279,20 +4170,22 @@

 def Assert_set_test(space, w_self, w_new_value):
     try:
         w_self.test = space.interp_w(expr, w_new_value, False)
+        if type(w_self.test) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'test', w_new_value)
         return
     w_self.deldictvalue(space, 'test')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def Assert_get_msg(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'msg')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'msg')
     return space.wrap(w_self.msg)
@@ -4300,13 +4193,15 @@

 def Assert_set_msg(space, w_self, w_new_value):
     try:
         w_self.msg = space.interp_w(expr, w_new_value, True)
+        if type(w_self.msg) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'msg', w_new_value)
         return
     w_self.deldictvalue(space, 'msg')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 _Assert_field_unroller = unrolling_iterable(['test', 'msg'])

 def Assert_init(space, w_self, __args__):
@@ -4334,7 +4229,7 @@
 )

 def Import_get_names(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names')
     if w_self.w_names is None:
@@ -4348,7 +4243,7 @@

 def Import_set_names(space, w_self, w_new_value):
     w_self.w_names = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 _Import_field_unroller = unrolling_iterable(['names'])

 def Import_init(space, w_self, __args__):
@@ -4380,7 +4275,7 @@
         w_obj = w_self.getdictvalue(space, 'module')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'module')
     return space.wrap(w_self.module)
@@ -4397,10 +4292,10 @@
         w_self.setdictvalue(space, 'module', w_new_value)
         return
     w_self.deldictvalue(space, 'module')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def ImportFrom_get_names(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names')
     if w_self.w_names is None:
@@ -4414,14 +4309,14 @@

 def ImportFrom_set_names(space, w_self, w_new_value):
     w_self.w_names = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def ImportFrom_get_level(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'level')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'level')
     return space.wrap(w_self.level)
@@ -4435,7 +4330,7 @@
         w_self.setdictvalue(space, 'level', w_new_value)
         return
     w_self.deldictvalue(space, 'level')
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 _ImportFrom_field_unroller = unrolling_iterable(['module', 'names', 'level'])

 def ImportFrom_init(space, w_self, __args__):
@@ -4469,7 +4364,7 @@
         w_obj = w_self.getdictvalue(space, 'body')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     return space.wrap(w_self.body)
@@ -4477,20 +4372,22 @@

 def Exec_set_body(space, w_self, w_new_value):
     try:
         w_self.body = space.interp_w(expr, w_new_value, False)
+        if type(w_self.body) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'body', w_new_value)
         return
     w_self.deldictvalue(space, 'body')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def Exec_get_globals(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'globals')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'globals')
     return space.wrap(w_self.globals)
@@ -4498,20 +4395,22 @@

 def Exec_set_globals(space, w_self, w_new_value):
     try:
         w_self.globals = space.interp_w(expr, w_new_value, True)
+        if type(w_self.globals) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'globals', w_new_value)
         return
     w_self.deldictvalue(space, 'globals')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def Exec_get_locals(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'locals')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'locals')
     return space.wrap(w_self.locals)
@@ -4519,13 +4418,15 @@

 def Exec_set_locals(space, w_self, w_new_value):
     try:
         w_self.locals = space.interp_w(expr, w_new_value, True)
+        if type(w_self.locals) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'locals', w_new_value)
         return
     w_self.deldictvalue(space, 'locals')
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 _Exec_field_unroller = unrolling_iterable(['body', 'globals', 'locals'])

 def Exec_init(space, w_self, __args__):
@@ -4554,7 +4455,7 @@
 )

 def Global_get_names(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names')
     if w_self.w_names is None:
@@ -4568,7 +4469,7 @@

 def Global_set_names(space, w_self, w_new_value):
     w_self.w_names = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 _Global_field_unroller = unrolling_iterable(['names'])

 def Global_init(space, w_self, __args__):
@@ -4600,7 +4501,7 @@
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return space.wrap(w_self.value)
@@ -4608,13 +4509,15 @@

 def Expr_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, False)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 _Expr_field_unroller = unrolling_iterable(['value'])

 def Expr_init(space, w_self, __args__):
@@ -4696,7 +4599,7 @@
         w_obj = w_self.getdictvalue(space, 'lineno')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & w_self._lineno_mask:
+    if not w_self.initialization_state & 1:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno')
     return space.wrap(w_self.lineno)
@@ -4710,14 +4613,14 @@
         w_self.setdictvalue(space, 'lineno', w_new_value)
         return
     w_self.deldictvalue(space, 'lineno')
-    w_self.initialization_state |= w_self._lineno_mask
+    w_self.initialization_state |= 1

 def expr_get_col_offset(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'col_offset')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & w_self._col_offset_mask:
+    if not w_self.initialization_state & 2:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset')
     return space.wrap(w_self.col_offset)
@@ -4731,7 +4634,7 @@
         w_self.setdictvalue(space, 'col_offset', w_new_value)
         return
     w_self.deldictvalue(space, 'col_offset')
-    w_self.initialization_state |= w_self._col_offset_mask
+    w_self.initialization_state |= 2

 expr.typedef = typedef.TypeDef("expr",
     AST.typedef,
@@ -4747,7 +4650,7 @@
         w_obj = w_self.getdictvalue(space, 'op')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op')
     return boolop_to_class[w_self.op - 1]()
@@ -4763,10 +4666,10 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'op', w_new_value)
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def BoolOp_get_values(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values')
     if w_self.w_values is None:
@@ -4780,7 +4683,7 @@

 def BoolOp_set_values(space, w_self, w_new_value):
     w_self.w_values = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 _BoolOp_field_unroller = unrolling_iterable(['op', 'values'])

 def BoolOp_init(space, w_self, __args__):
@@ -4813,7 +4716,7 @@
         w_obj = w_self.getdictvalue(space, 'left')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left')
     return space.wrap(w_self.left)
@@ -4821,20 +4724,22 @@

 def BinOp_set_left(space, w_self, w_new_value):
     try:
         w_self.left = space.interp_w(expr, w_new_value, False)
+        if type(w_self.left) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'left', w_new_value)
         return
     w_self.deldictvalue(space, 'left')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def BinOp_get_op(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'op')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op')
     return operator_to_class[w_self.op - 1]()
@@ -4850,14 +4755,14 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'op', w_new_value)
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def BinOp_get_right(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'right')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'right')
     return space.wrap(w_self.right)
@@ -4865,13 +4770,15 @@

 def BinOp_set_right(space, w_self, w_new_value):
     try:
         w_self.right = space.interp_w(expr, w_new_value, False)
+        if type(w_self.right) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'right', w_new_value)
         return
     w_self.deldictvalue(space, 'right')
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 _BinOp_field_unroller = unrolling_iterable(['left', 'op', 'right'])

 def BinOp_init(space, w_self, __args__):
@@ -4904,7 +4811,7 @@
         w_obj = w_self.getdictvalue(space, 'op')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op')
     return unaryop_to_class[w_self.op - 1]()
@@ -4920,14 +4827,14 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'op', w_new_value)
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def UnaryOp_get_operand(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'operand')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'operand')
     return space.wrap(w_self.operand)
@@ -4935,13 +4842,15 @@

 def UnaryOp_set_operand(space, w_self, w_new_value):
     try:
         w_self.operand = space.interp_w(expr, w_new_value, False)
+        if type(w_self.operand) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'operand', w_new_value)
         return
     w_self.deldictvalue(space, 'operand')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 _UnaryOp_field_unroller = unrolling_iterable(['op', 'operand'])

 def UnaryOp_init(space, w_self, __args__):
@@ -4973,7 +4882,7 @@
         w_obj = w_self.getdictvalue(space, 'args')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args')
     return space.wrap(w_self.args)
@@ -4987,14 +4896,14 @@
         w_self.setdictvalue(space, 'args', w_new_value)
         return
     w_self.deldictvalue(space, 'args')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def Lambda_get_body(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'body')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     return space.wrap(w_self.body)
@@ -5002,13 +4911,15 @@

 def Lambda_set_body(space, w_self, w_new_value):
     try:
         w_self.body = space.interp_w(expr, w_new_value, False)
+        if type(w_self.body) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'body', w_new_value)
         return
     w_self.deldictvalue(space, 'body')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 _Lambda_field_unroller = unrolling_iterable(['args', 'body'])

 def Lambda_init(space, w_self, __args__):
@@ -5040,7 +4951,7 @@
         w_obj = w_self.getdictvalue(space, 'test')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test')
     return space.wrap(w_self.test)
@@ -5048,20 +4959,22 @@

 def IfExp_set_test(space, w_self, w_new_value):
     try:
         w_self.test = space.interp_w(expr, w_new_value, False)
+        if type(w_self.test) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'test', w_new_value)
         return
     w_self.deldictvalue(space, 'test')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def IfExp_get_body(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'body')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) @@ -5069,20 +4982,22 @@ def IfExp_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'body', w_new_value) return w_self.deldictvalue(space, 'body') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def IfExp_get_orelse(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'orelse') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') return space.wrap(w_self.orelse) @@ -5090,13 +5005,15 @@ def IfExp_set_orelse(space, w_self, w_new_value): try: w_self.orelse = space.interp_w(expr, w_new_value, False) + if type(w_self.orelse) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'orelse', w_new_value) return w_self.deldictvalue(space, 'orelse') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _IfExp_field_unroller = unrolling_iterable(['test', 'body', 'orelse']) def IfExp_init(space, w_self, __args__): @@ -5125,7 +5042,7 @@ ) def Dict_get_keys(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keys') if w_self.w_keys is None: @@ -5139,10 +5056,10 @@ def Dict_set_keys(space, w_self, w_new_value): w_self.w_keys = w_new_value - 
w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Dict_get_values(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: @@ -5156,7 +5073,7 @@ def Dict_set_values(space, w_self, w_new_value): w_self.w_values = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Dict_field_unroller = unrolling_iterable(['keys', 'values']) def Dict_init(space, w_self, __args__): @@ -5186,7 +5103,7 @@ ) def Set_get_elts(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: @@ -5200,7 +5117,7 @@ def Set_set_elts(space, w_self, w_new_value): w_self.w_elts = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Set_field_unroller = unrolling_iterable(['elts']) def Set_init(space, w_self, __args__): @@ -5232,7 +5149,7 @@ w_obj = w_self.getdictvalue(space, 'elt') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) @@ -5240,16 +5157,18 @@ def ListComp_set_elt(space, w_self, w_new_value): try: w_self.elt = space.interp_w(expr, w_new_value, False) + if type(w_self.elt) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'elt', w_new_value) return w_self.deldictvalue(space, 'elt') - w_self.initialization_state |= 1 + 
w_self.initialization_state |= 4 def ListComp_get_generators(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: @@ -5263,7 +5182,7 @@ def ListComp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _ListComp_field_unroller = unrolling_iterable(['elt', 'generators']) def ListComp_init(space, w_self, __args__): @@ -5296,7 +5215,7 @@ w_obj = w_self.getdictvalue(space, 'elt') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) @@ -5304,16 +5223,18 @@ def SetComp_set_elt(space, w_self, w_new_value): try: w_self.elt = space.interp_w(expr, w_new_value, False) + if type(w_self.elt) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'elt', w_new_value) return w_self.deldictvalue(space, 'elt') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def SetComp_get_generators(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: @@ -5327,7 +5248,7 @@ def SetComp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _SetComp_field_unroller = unrolling_iterable(['elt', 
'generators']) def SetComp_init(space, w_self, __args__): @@ -5360,7 +5281,7 @@ w_obj = w_self.getdictvalue(space, 'key') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'key') return space.wrap(w_self.key) @@ -5368,20 +5289,22 @@ def DictComp_set_key(space, w_self, w_new_value): try: w_self.key = space.interp_w(expr, w_new_value, False) + if type(w_self.key) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'key', w_new_value) return w_self.deldictvalue(space, 'key') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def DictComp_get_value(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -5389,16 +5312,18 @@ def DictComp_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def DictComp_get_generators(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", 
typename, 'generators') if w_self.w_generators is None: @@ -5412,7 +5337,7 @@ def DictComp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _DictComp_field_unroller = unrolling_iterable(['key', 'value', 'generators']) def DictComp_init(space, w_self, __args__): @@ -5446,7 +5371,7 @@ w_obj = w_self.getdictvalue(space, 'elt') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt') return space.wrap(w_self.elt) @@ -5454,16 +5379,18 @@ def GeneratorExp_set_elt(space, w_self, w_new_value): try: w_self.elt = space.interp_w(expr, w_new_value, False) + if type(w_self.elt) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'elt', w_new_value) return w_self.deldictvalue(space, 'elt') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def GeneratorExp_get_generators(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators') if w_self.w_generators is None: @@ -5477,7 +5404,7 @@ def GeneratorExp_set_generators(space, w_self, w_new_value): w_self.w_generators = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _GeneratorExp_field_unroller = unrolling_iterable(['elt', 'generators']) def GeneratorExp_init(space, w_self, __args__): @@ -5510,7 +5437,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = 
space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -5518,13 +5445,15 @@ def Yield_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, True) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Yield_field_unroller = unrolling_iterable(['value']) def Yield_init(space, w_self, __args__): @@ -5555,7 +5484,7 @@ w_obj = w_self.getdictvalue(space, 'left') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left') return space.wrap(w_self.left) @@ -5563,16 +5492,18 @@ def Compare_set_left(space, w_self, w_new_value): try: w_self.left = space.interp_w(expr, w_new_value, False) + if type(w_self.left) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'left', w_new_value) return w_self.deldictvalue(space, 'left') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Compare_get_ops(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ops') if w_self.w_ops is None: @@ -5586,10 +5517,10 @@ def Compare_set_ops(space, w_self, w_new_value): w_self.w_ops = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 
def Compare_get_comparators(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'comparators') if w_self.w_comparators is None: @@ -5603,7 +5534,7 @@ def Compare_set_comparators(space, w_self, w_new_value): w_self.w_comparators = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Compare_field_unroller = unrolling_iterable(['left', 'ops', 'comparators']) def Compare_init(space, w_self, __args__): @@ -5638,7 +5569,7 @@ w_obj = w_self.getdictvalue(space, 'func') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'func') return space.wrap(w_self.func) @@ -5646,16 +5577,18 @@ def Call_set_func(space, w_self, w_new_value): try: w_self.func = space.interp_w(expr, w_new_value, False) + if type(w_self.func) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'func', w_new_value) return w_self.deldictvalue(space, 'func') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Call_get_args(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args') if w_self.w_args is None: @@ -5669,10 +5602,10 @@ def Call_set_args(space, w_self, w_new_value): w_self.w_args = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Call_get_keywords(space, w_self): - if not w_self.initialization_state & 4: + if not 
w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keywords') if w_self.w_keywords is None: @@ -5686,14 +5619,14 @@ def Call_set_keywords(space, w_self, w_new_value): w_self.w_keywords = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 def Call_get_starargs(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'starargs') if w_obj is not None: return w_obj - if not w_self.initialization_state & 8: + if not w_self.initialization_state & 32: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'starargs') return space.wrap(w_self.starargs) @@ -5701,20 +5634,22 @@ def Call_set_starargs(space, w_self, w_new_value): try: w_self.starargs = space.interp_w(expr, w_new_value, True) + if type(w_self.starargs) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'starargs', w_new_value) return w_self.deldictvalue(space, 'starargs') - w_self.initialization_state |= 8 + w_self.initialization_state |= 32 def Call_get_kwargs(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'kwargs') if w_obj is not None: return w_obj - if not w_self.initialization_state & 16: + if not w_self.initialization_state & 64: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwargs') return space.wrap(w_self.kwargs) @@ -5722,13 +5657,15 @@ def Call_set_kwargs(space, w_self, w_new_value): try: w_self.kwargs = space.interp_w(expr, w_new_value, True) + if type(w_self.kwargs) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise 
w_self.setdictvalue(space, 'kwargs', w_new_value) return w_self.deldictvalue(space, 'kwargs') - w_self.initialization_state |= 16 + w_self.initialization_state |= 64 _Call_field_unroller = unrolling_iterable(['func', 'args', 'keywords', 'starargs', 'kwargs']) def Call_init(space, w_self, __args__): @@ -5765,7 +5702,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -5773,13 +5710,15 @@ def Repr_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Repr_field_unroller = unrolling_iterable(['value']) def Repr_init(space, w_self, __args__): @@ -5810,7 +5749,7 @@ w_obj = w_self.getdictvalue(space, 'n') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'n') return w_self.n @@ -5824,7 +5763,7 @@ w_self.setdictvalue(space, 'n', w_new_value) return w_self.deldictvalue(space, 'n') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Num_field_unroller = unrolling_iterable(['n']) def Num_init(space, w_self, __args__): @@ -5855,7 +5794,7 @@ w_obj = w_self.getdictvalue(space, 's') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: 
typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 's') return w_self.s @@ -5869,7 +5808,7 @@ w_self.setdictvalue(space, 's', w_new_value) return w_self.deldictvalue(space, 's') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Str_field_unroller = unrolling_iterable(['s']) def Str_init(space, w_self, __args__): @@ -5900,7 +5839,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -5908,20 +5847,22 @@ def Attribute_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Attribute_get_attr(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'attr') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'attr') return space.wrap(w_self.attr) @@ -5935,14 +5876,14 @@ w_self.setdictvalue(space, 'attr', w_new_value) return w_self.deldictvalue(space, 'attr') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Attribute_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not 
w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() @@ -5958,7 +5899,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Attribute_field_unroller = unrolling_iterable(['value', 'attr', 'ctx']) def Attribute_init(space, w_self, __args__): @@ -5991,7 +5932,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -5999,20 +5940,22 @@ def Subscript_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Subscript_get_slice(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'slice') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'slice') return space.wrap(w_self.slice) @@ -6020,20 +5963,22 @@ def Subscript_set_slice(space, w_self, w_new_value): try: w_self.slice = space.interp_w(slice, w_new_value, False) + if type(w_self.slice) is slice: + raise 
OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'slice', w_new_value) return w_self.deldictvalue(space, 'slice') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Subscript_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() @@ -6049,7 +5994,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Subscript_field_unroller = unrolling_iterable(['value', 'slice', 'ctx']) def Subscript_init(space, w_self, __args__): @@ -6082,7 +6027,7 @@ w_obj = w_self.getdictvalue(space, 'id') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'id') return space.wrap(w_self.id) @@ -6096,14 +6041,14 @@ w_self.setdictvalue(space, 'id', w_new_value) return w_self.deldictvalue(space, 'id') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Name_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() @@ -6119,7 +6064,7 @@ return # need 
to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Name_field_unroller = unrolling_iterable(['id', 'ctx']) def Name_init(space, w_self, __args__): @@ -6147,7 +6092,7 @@ ) def List_get_elts(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: @@ -6161,14 +6106,14 @@ def List_set_elts(space, w_self, w_new_value): w_self.w_elts = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def List_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() @@ -6184,7 +6129,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _List_field_unroller = unrolling_iterable(['elts', 'ctx']) def List_init(space, w_self, __args__): @@ -6213,7 +6158,7 @@ ) def Tuple_get_elts(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts') if w_self.w_elts is None: @@ -6227,14 +6172,14 @@ def Tuple_set_elts(space, w_self, w_new_value): w_self.w_elts = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Tuple_get_ctx(space, w_self): if w_self.w_dict is not None: w_obj = 
w_self.getdictvalue(space, 'ctx') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx') return expr_context_to_class[w_self.ctx - 1]() @@ -6250,7 +6195,7 @@ return # need to save the original object too w_self.setdictvalue(space, 'ctx', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Tuple_field_unroller = unrolling_iterable(['elts', 'ctx']) def Tuple_init(space, w_self, __args__): @@ -6283,7 +6228,7 @@ w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return w_self.value @@ -6297,7 +6242,7 @@ w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Const_field_unroller = unrolling_iterable(['value']) def Const_init(space, w_self, __args__): @@ -6409,6 +6354,8 @@ def Slice_set_lower(space, w_self, w_new_value): try: w_self.lower = space.interp_w(expr, w_new_value, True) + if type(w_self.lower) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6430,6 +6377,8 @@ def Slice_set_upper(space, w_self, w_new_value): try: w_self.upper = space.interp_w(expr, w_new_value, True) + if type(w_self.upper) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6451,6 +6400,8 @@ def Slice_set_step(space, w_self, w_new_value): try: w_self.step = space.interp_w(expr, w_new_value, True) + if 
type(w_self.step) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6540,6 +6491,8 @@ def Index_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6809,6 +6762,8 @@ def comprehension_set_target(space, w_self, w_new_value): try: w_self.target = space.interp_w(expr, w_new_value, False) + if type(w_self.target) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6830,6 +6785,8 @@ def comprehension_set_iter(space, w_self, w_new_value): try: w_self.iter = space.interp_w(expr, w_new_value, False) + if type(w_self.iter) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -6887,7 +6844,7 @@ w_obj = w_self.getdictvalue(space, 'lineno') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._lineno_mask: + if not w_self.initialization_state & 1: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno') return space.wrap(w_self.lineno) @@ -6901,14 +6858,14 @@ w_self.setdictvalue(space, 'lineno', w_new_value) return w_self.deldictvalue(space, 'lineno') - w_self.initialization_state |= w_self._lineno_mask + w_self.initialization_state |= 1 def excepthandler_get_col_offset(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'col_offset') if w_obj is not None: return w_obj - if not w_self.initialization_state & w_self._col_offset_mask: + if not w_self.initialization_state & 2: typename = space.type(w_self).getname(space) raise 
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset') return space.wrap(w_self.col_offset) @@ -6922,7 +6879,7 @@ w_self.setdictvalue(space, 'col_offset', w_new_value) return w_self.deldictvalue(space, 'col_offset') - w_self.initialization_state |= w_self._col_offset_mask + w_self.initialization_state |= 2 excepthandler.typedef = typedef.TypeDef("excepthandler", AST.typedef, @@ -6938,7 +6895,7 @@ w_obj = w_self.getdictvalue(space, 'type') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) @@ -6946,20 +6903,22 @@ def ExceptHandler_set_type(space, w_self, w_new_value): try: w_self.type = space.interp_w(expr, w_new_value, True) + if type(w_self.type) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'type', w_new_value) return w_self.deldictvalue(space, 'type') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def ExceptHandler_get_name(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'name') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) @@ -6967,16 +6926,18 @@ def ExceptHandler_set_name(space, w_self, w_new_value): try: w_self.name = space.interp_w(expr, w_new_value, True) + if type(w_self.name) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'name', 
w_new_value) return w_self.deldictvalue(space, 'name') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def ExceptHandler_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -6990,7 +6951,7 @@ def ExceptHandler_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _ExceptHandler_field_unroller = unrolling_iterable(['type', 'name', 'body']) def ExceptHandler_init(space, w_self, __args__): @@ -7164,6 +7125,8 @@ def keyword_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise diff --git a/pypy/interpreter/astcompiler/tools/asdl_py.py b/pypy/interpreter/astcompiler/tools/asdl_py.py --- a/pypy/interpreter/astcompiler/tools/asdl_py.py +++ b/pypy/interpreter/astcompiler/tools/asdl_py.py @@ -79,6 +79,7 @@ else: self.emit("class %s(AST):" % (base,)) if sum.attributes: + self.emit("") args = ", ".join(attr.name.value for attr in sum.attributes) self.emit("def __init__(self, %s):" % (args,), 1) for attr in sum.attributes: @@ -114,7 +115,7 @@ else: names.append(repr(field.name.value)) sub = (", ".join(names), name.value) - self.emit("missing_field(space, self.initialization_state, [%s], %r)" + self.emit("self.missing_field(space, [%s], %r)" % sub, 3) self.emit("else:", 2) # Fill in all the default fields. 
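The hunks above renumber the generated mask constants (`1`, `2`, `4` become `4`, `8`, `16`) because the node attributes (`lineno`, `col_offset`) now claim the low bits of `initialization_state`. The underlying technique can be sketched in plain Python; `DemoNode` and its field names are hypothetical stand-ins, not PyPy's generated AST classes:

```python
# Sketch of the bitmask technique used by the generated AST classes:
# each field of a node owns one bit in `initialization_state`, and a
# getter refuses to return a field whose bit was never set.

class DemoNode(object):
    _fields = ('lineno', 'col_offset', 'name')  # bit i <-> field i

    def __init__(self):
        self.initialization_state = 0

    def set_field(self, name, value):
        setattr(self, name, value)
        self.initialization_state |= 1 << self._fields.index(name)

    def get_field(self, name):
        if not self.initialization_state & (1 << self._fields.index(name)):
            raise AttributeError("'%s' was never initialized" % name)
        return getattr(self, name)
```

Setting only `lineno` flips bit 0, so reading `name` (bit 2) still raises, which is exactly the check the generated getters perform against their mask constants.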
@@ -195,17 +196,13 @@ def visitConstructor(self, cons, base, extra_attributes): self.emit("class %s(%s):" % (cons.name, base)) self.emit("") - for field in self.data.cons_attributes[cons]: - subst = (field.name, self.data.field_masks[field]) - self.emit("_%s_mask = %i" % subst, 1) - self.emit("") self.make_constructor(cons.fields, cons, extra_attributes, base) self.emit("") self.emit("def walkabout(self, visitor):", 1) self.emit("visitor.visit_%s(self)" % (cons.name,), 2) self.emit("") self.make_mutate_over(cons, cons.name) - self.make_var_syncer(cons.fields + self.data.cons_attributes[cons], + self.make_var_syncer(self.data.cons_attributes[cons] + cons.fields, cons, cons.name) def visitField(self, field): @@ -324,7 +321,7 @@ def visitSum(self, sum, name): for field in sum.attributes: - self.make_property(field, name, True) + self.make_property(field, name) self.make_typedef(name, "AST", sum.attributes, fields_name="_attributes") if not is_simple_sum(sum): @@ -400,13 +397,10 @@ def visitField(self, field, name): self.make_property(field, name) - def make_property(self, field, name, different_masks=False): + def make_property(self, field, name): func = "def %s_get_%s(space, w_self):" % (name, field.name) self.emit(func) - if different_masks: - flag = "w_self._%s_mask" % (field.name,) - else: - flag = self.data.field_masks[field] + flag = self.data.field_masks[field] if not field.seq: self.emit("if w_self.w_dict is not None:", 1) self.emit(" w_obj = w_self.getdictvalue(space, '%s')" % (field.name,), 1) @@ -458,6 +452,11 @@ config = (field.name, field.type, repr(field.opt)) self.emit("w_self.%s = space.interp_w(%s, w_new_value, %s)" % config, 2) + if field.type.value not in self.data.prod_simple: + self.emit("if type(w_self.%s) is %s:" % ( + field.name, field.type), 2) + self.emit("raise OperationError(space.w_TypeError, " + "space.w_None)", 3) else: level = 2 if field.opt and field.type.value != "int": @@ -505,7 +504,10 @@ optional_mask = 0 for i, field in 
enumerate(fields):
                flag = 1 << i
-                field_masks[field] = flag
+                if field not in field_masks:
+                    field_masks[field] = flag
+                else:
+                    assert field_masks[field] == flag
                 if field.opt:
                     optional_mask |= flag
                 else:
@@ -518,9 +520,9 @@
             if is_simple_sum(sum):
                 simple_types.add(tp.name.value)
             else:
+                attrs = [field for field in sum.attributes]
                 for cons in sum.types:
-                    attrs = [copy_field(field) for field in sum.attributes]
-                    add_masks(cons.fields + attrs, cons)
+                    add_masks(attrs + cons.fields, cons)
                     cons_attributes[cons] = attrs
         else:
             prod = tp.value
@@ -588,6 +590,24 @@
         space.setattr(self, w_name,
                       space.getitem(w_state, w_name))
 
+    def missing_field(self, space, required, host):
+        "Find which required field is missing."
+        state = self.initialization_state
+        for i in range(len(required)):
+            if (state >> i) & 1:
+                continue  # field is present
+            missing = required[i]
+            if missing is None:
+                continue  # field is optional
+            w_obj = self.getdictvalue(space, missing)
+            if w_obj is None:
+                err = "required field \\"%s\\" missing from %s"
+                raise operationerrfmt(space.w_TypeError, err, missing, host)
+            else:
+                err = "incorrect type for field \\"%s\\" in %s"
+                raise operationerrfmt(space.w_TypeError, err, missing, host)
+        raise AssertionError("should not reach here")
+
 
 class NodeVisitorNotImplemented(Exception):
     pass
@@ -631,15 +651,6 @@
 )
 
 
-def missing_field(space, state, required, host):
-    "Find which required field is missing."
-    for i in range(len(required)):
-        if not (state >> i) & 1:
-            missing = required[i]
-            if missing is not None:
-                err = "required field \\"%s\\" missing from %s"
-                raise operationerrfmt(space.w_TypeError, err, missing, host)
-    raise AssertionError("should not reach here")
 """

diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py
--- a/pypy/interpreter/baseobjspace.py
+++ b/pypy/interpreter/baseobjspace.py
@@ -1591,12 +1591,15 @@
         'ArithmeticError',
         'AssertionError',
         'AttributeError',
+        'BaseException',
+        'DeprecationWarning',
         'EOFError',
         'EnvironmentError',
         'Exception',
         'FloatingPointError',
         'IOError',
         'ImportError',
+        'ImportWarning',
         'IndentationError',
         'IndexError',
         'KeyError',
@@ -1617,7 +1620,10 @@
         'TabError',
         'TypeError',
         'UnboundLocalError',
+        'UnicodeDecodeError',
         'UnicodeError',
+        'UnicodeEncodeError',
+        'UnicodeTranslateError',
         'ValueError',
         'ZeroDivisionError',
         'UnicodeEncodeError',
diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py
--- a/pypy/interpreter/eval.py
+++ b/pypy/interpreter/eval.py
@@ -98,7 +98,6 @@
         "Abstract. Get the expected number of locals."
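The new `missing_field()` method walks the initialization bits to name the first required field that was never set (or was set to a value of the wrong type). A standalone sketch of that bit scan, with a simplified signature and no object space:

```python
def find_missing_field(state, required):
    # `required` lists field names in bit order; None marks an
    # optional slot.  Return the first required name whose bit in
    # `state` is unset -- the caller turns it into a TypeError.
    for i, name in enumerate(required):
        if (state >> i) & 1:
            continue          # bit set: field was initialized
        if name is None:
            continue          # optional field, absence is fine
        return name
    raise AssertionError("should not reach here")
```

For `state = 0b101` and fields `['a', 'b', 'c']` this reports `'b'`; if the middle slot is optional (`None`), the scan skips it and reports the next unset required field instead.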
raise TypeError, "abstract" - @jit.dont_look_inside def fast2locals(self): # Copy values from the fastlocals to self.w_locals if self.w_locals is None: @@ -112,7 +111,6 @@ w_name = self.space.wrap(name) self.space.setitem(self.w_locals, w_name, w_value) - @jit.dont_look_inside def locals2fast(self): # Copy values from self.w_locals to the fastlocals assert self.w_locals is not None diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -619,7 +619,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -655,7 +656,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -674,7 +676,8 @@ self.descr_reqcls, args.prepend(w_obj)) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -690,7 +693,8 @@ raise OperationError(space.w_SystemError, space.wrap("unexpected DescrMismatch error")) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -708,7 +712,8 @@ self.descr_reqcls, Arguments(space, [w1])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -726,7 +731,8 @@ self.descr_reqcls, Arguments(space, [w1, w2])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -744,7 +750,8 @@ self.descr_reqcls, 
Arguments(space, [w1, w2, w3])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -763,7 +770,8 @@ Arguments(space, [w1, w2, w3, w4])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = 
err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = 
err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -37,7 +37,7 @@ def get_arg_types(self): return self.arg_types - def get_return_type(self): + def get_result_type(self): return self.typeinfo def get_extra_info(self): diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -5,11 +5,7 @@ from pypy.jit.metainterp.history import AbstractDescr, getkind from pypy.jit.metainterp import history from pypy.jit.codewriter import heaptracker, longlong - -# The point of the class organization in this file is to make instances -# as compact as possible. 
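These test updates reflect a refactoring in which `ArgErr.getmsg()` no longer takes the function name; the caller, which knows the name, prepends `fname()` exactly once. A minimal sketch of that split, using hypothetical simplified classes rather than PyPy's real `ArgErr` hierarchy:

```python
class ArgErrCount(Exception):
    # simplified stand-in: real ArgErrCount tracks varargs, keyword
    # args and defaults as well
    def __init__(self, expected, got):
        self.expected = expected
        self.got = got

    def getmsg(self):
        # note: no function name here any more
        return "takes exactly %d arguments (%d given)" % (
            self.expected, self.got)

def format_call_error(fname, err):
    # the caller prepends the name once, at the point where it is known
    return "%s() %s" % (fname, err.getmsg())
```

This keeps the message text independent of the call site, which is why the tests now assert on `"takes exactly 2 arguments (3 given)"` and check the `"foo() "` prefix separately.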
This is done by not storing the field size or -# the 'is_pointer_field' flag in the instance itself but in the class -# (in methods actually) using a few classes instead of just one. +from pypy.jit.codewriter.longlong import is_longlong class GcCache(object): @@ -19,6 +15,7 @@ self._cache_size = {} self._cache_field = {} self._cache_array = {} + self._cache_arraylen = {} self._cache_call = {} self._cache_interiorfield = {} @@ -26,24 +23,15 @@ assert isinstance(STRUCT, lltype.GcStruct) def init_array_descr(self, ARRAY, arraydescr): - assert isinstance(ARRAY, lltype.GcArray) + assert (isinstance(ARRAY, lltype.GcArray) or + isinstance(ARRAY, lltype.GcStruct) and ARRAY._arrayfld) -if lltype.SignedLongLong is lltype.Signed: - def is_longlong(TYPE): - return False -else: - assert rffi.sizeof(lltype.SignedLongLong) == rffi.sizeof(lltype.Float) - def is_longlong(TYPE): - return TYPE in (lltype.SignedLongLong, lltype.UnsignedLongLong) - # ____________________________________________________________ # SizeDescrs class SizeDescr(AbstractDescr): size = 0 # help translation - is_immutable = False - tid = llop.combine_ushort(lltype.Signed, 0, 0) def __init__(self, size, count_fields_if_immut=-1): @@ -77,265 +65,247 @@ cache[STRUCT] = sizedescr return sizedescr + # ____________________________________________________________ # FieldDescrs -class BaseFieldDescr(AbstractDescr): +FLAG_POINTER = 'P' +FLAG_FLOAT = 'F' +FLAG_UNSIGNED = 'U' +FLAG_SIGNED = 'S' +FLAG_STRUCT = 'X' +FLAG_VOID = 'V' + +class FieldDescr(AbstractDescr): + name = '' offset = 0 # help translation - name = '' - _clsname = '' + field_size = 0 + flag = '\x00' - def __init__(self, name, offset): + def __init__(self, name, offset, field_size, flag): self.name = name self.offset = offset + self.field_size = field_size + self.flag = flag + + def is_pointer_field(self): + return self.flag == FLAG_POINTER + + def is_float_field(self): + return self.flag == FLAG_FLOAT + + def is_field_signed(self): + return self.flag == 
FLAG_SIGNED def sort_key(self): return self.offset - def get_field_size(self, translate_support_code): - raise NotImplementedError + def repr_of_descr(self): + return '' % (self.flag, self.name, self.offset) - _is_pointer_field = False # unless overridden by GcPtrFieldDescr - _is_float_field = False # unless overridden by FloatFieldDescr - _is_field_signed = False # unless overridden by XxxFieldDescr - - def is_pointer_field(self): - return self._is_pointer_field - - def is_float_field(self): - return self._is_float_field - - def is_field_signed(self): - return self._is_field_signed - - def repr_of_descr(self): - return '<%s %s %s>' % (self._clsname, self.name, self.offset) - -class DynamicFieldDescr(BaseFieldDescr): - def __init__(self, offset, fieldsize, is_pointer, is_float, is_signed): - self.offset = offset - self._fieldsize = fieldsize - self._is_pointer_field = is_pointer - self._is_float_field = is_float - self._is_field_signed = is_signed - - def get_field_size(self, translate_support_code): - return self._fieldsize - -class NonGcPtrFieldDescr(BaseFieldDescr): - _clsname = 'NonGcPtrFieldDescr' - def get_field_size(self, translate_support_code): - return symbolic.get_size_of_ptr(translate_support_code) - -class GcPtrFieldDescr(NonGcPtrFieldDescr): - _clsname = 'GcPtrFieldDescr' - _is_pointer_field = True - -def getFieldDescrClass(TYPE): - return getDescrClass(TYPE, BaseFieldDescr, GcPtrFieldDescr, - NonGcPtrFieldDescr, 'Field', 'get_field_size', - '_is_float_field', '_is_field_signed') def get_field_descr(gccache, STRUCT, fieldname): cache = gccache._cache_field try: return cache[STRUCT][fieldname] except KeyError: - offset, _ = symbolic.get_field_token(STRUCT, fieldname, - gccache.translate_support_code) + offset, size = symbolic.get_field_token(STRUCT, fieldname, + gccache.translate_support_code) FIELDTYPE = getattr(STRUCT, fieldname) + flag = get_type_flag(FIELDTYPE) name = '%s.%s' % (STRUCT._name, fieldname) - fielddescr = 
getFieldDescrClass(FIELDTYPE)(name, offset) + fielddescr = FieldDescr(name, offset, size, flag) cachedict = cache.setdefault(STRUCT, {}) cachedict[fieldname] = fielddescr return fielddescr +def get_type_flag(TYPE): + if isinstance(TYPE, lltype.Ptr): + if TYPE.TO._gckind == 'gc': + return FLAG_POINTER + else: + return FLAG_UNSIGNED + if isinstance(TYPE, lltype.Struct): + return FLAG_STRUCT + if TYPE is lltype.Float or is_longlong(TYPE): + return FLAG_FLOAT + if (TYPE is not lltype.Bool and isinstance(TYPE, lltype.Number) and + rffi.cast(TYPE, -1) == -1): + return FLAG_SIGNED + return FLAG_UNSIGNED + +def get_field_arraylen_descr(gccache, ARRAY_OR_STRUCT): + cache = gccache._cache_arraylen + try: + return cache[ARRAY_OR_STRUCT] + except KeyError: + tsc = gccache.translate_support_code + (_, _, ofs) = symbolic.get_array_token(ARRAY_OR_STRUCT, tsc) + size = symbolic.get_size(lltype.Signed, tsc) + result = FieldDescr("len", ofs, size, get_type_flag(lltype.Signed)) + cache[ARRAY_OR_STRUCT] = result + return result + + # ____________________________________________________________ # ArrayDescrs -_A = lltype.GcArray(lltype.Signed) # a random gcarray -_AF = lltype.GcArray(lltype.Float) # an array of C doubles +class ArrayDescr(AbstractDescr): + tid = 0 + basesize = 0 # workaround for the annotator + itemsize = 0 + lendescr = None + flag = '\x00' - -class BaseArrayDescr(AbstractDescr): - _clsname = '' - tid = llop.combine_ushort(lltype.Signed, 0, 0) - - def get_base_size(self, translate_support_code): - basesize, _, _ = symbolic.get_array_token(_A, translate_support_code) - return basesize - - def get_ofs_length(self, translate_support_code): - _, _, ofslength = symbolic.get_array_token(_A, translate_support_code) - return ofslength - - def get_item_size(self, translate_support_code): - raise NotImplementedError - - _is_array_of_pointers = False # unless overridden by GcPtrArrayDescr - _is_array_of_floats = False # unless overridden by FloatArrayDescr - _is_array_of_structs 
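The descr refactoring above collapses the per-type class hierarchy (`GcPtrFieldDescr`, `FloatFieldDescr`, ...) into a single `FieldDescr` carrying a one-character flag. A simplified analogue follows; the real `get_type_flag()` inspects `lltype` types, which are unavailable here, so a plain string and a `signed` boolean stand in:

```python
FLAG_POINTER  = 'P'
FLAG_FLOAT    = 'F'
FLAG_SIGNED   = 'S'
FLAG_UNSIGNED = 'U'

def get_type_flag(kind, signed=True):
    # collapse a field's type into one character stored on the descr
    if kind == 'ptr':
        return FLAG_POINTER
    if kind == 'float':
        return FLAG_FLOAT
    return FLAG_SIGNED if signed else FLAG_UNSIGNED

class FieldDescr(object):
    def __init__(self, name, offset, field_size, flag):
        self.name = name
        self.offset = offset
        self.field_size = field_size
        self.flag = flag

    # the old class hierarchy becomes simple flag comparisons
    def is_pointer_field(self):
        return self.flag == FLAG_POINTER

    def is_float_field(self):
        return self.flag == FLAG_FLOAT
```

Instances stay compact (one extra character instead of one extra class per type), which was the stated point of the previous class-per-type organization.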
= False # unless overridden by StructArrayDescr - _is_item_signed = False # unless overridden by XxxArrayDescr + def __init__(self, basesize, itemsize, lendescr, flag): + self.basesize = basesize + self.itemsize = itemsize + self.lendescr = lendescr # or None, if no length + self.flag = flag def is_array_of_pointers(self): - return self._is_array_of_pointers + return self.flag == FLAG_POINTER def is_array_of_floats(self): - return self._is_array_of_floats + return self.flag == FLAG_FLOAT + + def is_item_signed(self): + return self.flag == FLAG_SIGNED def is_array_of_structs(self): - return self._is_array_of_structs - - def is_item_signed(self): - return self._is_item_signed + return self.flag == FLAG_STRUCT def repr_of_descr(self): - return '<%s>' % self._clsname + return '' % (self.flag, self.itemsize) -class NonGcPtrArrayDescr(BaseArrayDescr): - _clsname = 'NonGcPtrArrayDescr' - def get_item_size(self, translate_support_code): - return symbolic.get_size_of_ptr(translate_support_code) - -class GcPtrArrayDescr(NonGcPtrArrayDescr): - _clsname = 'GcPtrArrayDescr' - _is_array_of_pointers = True - -class FloatArrayDescr(BaseArrayDescr): - _clsname = 'FloatArrayDescr' - _is_array_of_floats = True - def get_base_size(self, translate_support_code): - basesize, _, _ = symbolic.get_array_token(_AF, translate_support_code) - return basesize - def get_item_size(self, translate_support_code): - return symbolic.get_size(lltype.Float, translate_support_code) - -class StructArrayDescr(BaseArrayDescr): - _clsname = 'StructArrayDescr' - _is_array_of_structs = True - -class BaseArrayNoLengthDescr(BaseArrayDescr): - def get_base_size(self, translate_support_code): - return 0 - - def get_ofs_length(self, translate_support_code): - return -1 - -class DynamicArrayNoLengthDescr(BaseArrayNoLengthDescr): - def __init__(self, itemsize): - self.itemsize = itemsize - - def get_item_size(self, translate_support_code): - return self.itemsize - -class 
NonGcPtrArrayNoLengthDescr(BaseArrayNoLengthDescr): - _clsname = 'NonGcPtrArrayNoLengthDescr' - def get_item_size(self, translate_support_code): - return symbolic.get_size_of_ptr(translate_support_code) - -class GcPtrArrayNoLengthDescr(NonGcPtrArrayNoLengthDescr): - _clsname = 'GcPtrArrayNoLengthDescr' - _is_array_of_pointers = True - -def getArrayDescrClass(ARRAY): - if ARRAY.OF is lltype.Float: - return FloatArrayDescr - elif isinstance(ARRAY.OF, lltype.Struct): - class Descr(StructArrayDescr): - _clsname = '%sArrayDescr' % ARRAY.OF._name - def get_item_size(self, translate_support_code): - return symbolic.get_size(ARRAY.OF, translate_support_code) - Descr.__name__ = Descr._clsname - return Descr - return getDescrClass(ARRAY.OF, BaseArrayDescr, GcPtrArrayDescr, - NonGcPtrArrayDescr, 'Array', 'get_item_size', - '_is_array_of_floats', '_is_item_signed') - -def getArrayNoLengthDescrClass(ARRAY): - return getDescrClass(ARRAY.OF, BaseArrayNoLengthDescr, GcPtrArrayNoLengthDescr, - NonGcPtrArrayNoLengthDescr, 'ArrayNoLength', 'get_item_size', - '_is_array_of_floats', '_is_item_signed') - -def get_array_descr(gccache, ARRAY): +def get_array_descr(gccache, ARRAY_OR_STRUCT): cache = gccache._cache_array try: - return cache[ARRAY] + return cache[ARRAY_OR_STRUCT] except KeyError: - # we only support Arrays that are either GcArrays, or raw no-length - # non-gc Arrays. 
- if ARRAY._hints.get('nolength', False): - assert not isinstance(ARRAY, lltype.GcArray) - arraydescr = getArrayNoLengthDescrClass(ARRAY)() + tsc = gccache.translate_support_code + basesize, itemsize, _ = symbolic.get_array_token(ARRAY_OR_STRUCT, tsc) + if isinstance(ARRAY_OR_STRUCT, lltype.Array): + ARRAY_INSIDE = ARRAY_OR_STRUCT else: - assert isinstance(ARRAY, lltype.GcArray) - arraydescr = getArrayDescrClass(ARRAY)() - # verify basic assumption that all arrays' basesize and ofslength - # are equal - basesize, itemsize, ofslength = symbolic.get_array_token(ARRAY, False) - assert basesize == arraydescr.get_base_size(False) - assert itemsize == arraydescr.get_item_size(False) - if not ARRAY._hints.get('nolength', False): - assert ofslength == arraydescr.get_ofs_length(False) - if isinstance(ARRAY, lltype.GcArray): - gccache.init_array_descr(ARRAY, arraydescr) - cache[ARRAY] = arraydescr + ARRAY_INSIDE = ARRAY_OR_STRUCT._flds[ARRAY_OR_STRUCT._arrayfld] + if ARRAY_INSIDE._hints.get('nolength', False): + lendescr = None + else: + lendescr = get_field_arraylen_descr(gccache, ARRAY_OR_STRUCT) + flag = get_type_flag(ARRAY_INSIDE.OF) + arraydescr = ArrayDescr(basesize, itemsize, lendescr, flag) + if ARRAY_OR_STRUCT._gckind == 'gc': + gccache.init_array_descr(ARRAY_OR_STRUCT, arraydescr) + cache[ARRAY_OR_STRUCT] = arraydescr return arraydescr + # ____________________________________________________________ # InteriorFieldDescr class InteriorFieldDescr(AbstractDescr): - arraydescr = BaseArrayDescr() # workaround for the annotator - fielddescr = BaseFieldDescr('', 0) + arraydescr = ArrayDescr(0, 0, None, '\x00') # workaround for the annotator + fielddescr = FieldDescr('', 0, 0, '\x00') def __init__(self, arraydescr, fielddescr): + assert arraydescr.flag == FLAG_STRUCT self.arraydescr = arraydescr self.fielddescr = fielddescr + def sort_key(self): + return self.fielddescr.sort_key() + def is_pointer_field(self): return self.fielddescr.is_pointer_field() def 
is_float_field(self): return self.fielddescr.is_float_field() - def sort_key(self): - return self.fielddescr.sort_key() - def repr_of_descr(self): return '' % self.fielddescr.repr_of_descr() -def get_interiorfield_descr(gc_ll_descr, ARRAY, FIELDTP, name): +def get_interiorfield_descr(gc_ll_descr, ARRAY, name): cache = gc_ll_descr._cache_interiorfield try: - return cache[(ARRAY, FIELDTP, name)] + return cache[(ARRAY, name)] except KeyError: arraydescr = get_array_descr(gc_ll_descr, ARRAY) - fielddescr = get_field_descr(gc_ll_descr, FIELDTP, name) + fielddescr = get_field_descr(gc_ll_descr, ARRAY.OF, name) descr = InteriorFieldDescr(arraydescr, fielddescr) - cache[(ARRAY, FIELDTP, name)] = descr + cache[(ARRAY, name)] = descr return descr +def get_dynamic_interiorfield_descr(gc_ll_descr, offset, width, fieldsize, + is_pointer, is_float, is_signed): + arraydescr = ArrayDescr(0, width, None, FLAG_STRUCT) + if is_pointer: + assert not is_float + flag = FLAG_POINTER + elif is_float: + flag = FLAG_FLOAT + elif is_signed: + flag = FLAG_SIGNED + else: + flag = FLAG_UNSIGNED + fielddescr = FieldDescr('dynamic', offset, fieldsize, flag) + return InteriorFieldDescr(arraydescr, fielddescr) + + # ____________________________________________________________ # CallDescrs -class BaseCallDescr(AbstractDescr): - _clsname = '' - loop_token = None +class CallDescr(AbstractDescr): arg_classes = '' # <-- annotation hack + result_type = '\x00' + result_flag = '\x00' ffi_flags = 1 + call_stub_i = staticmethod(lambda func, args_i, args_r, args_f: + 0) + call_stub_r = staticmethod(lambda func, args_i, args_r, args_f: + lltype.nullptr(llmemory.GCREF.TO)) + call_stub_f = staticmethod(lambda func,args_i,args_r,args_f: + longlong.ZEROF) - def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): - self.arg_classes = arg_classes # string of "r" and "i" (ref/int) + def __init__(self, arg_classes, result_type, result_signed, result_size, + extrainfo=None, ffi_flags=1): + """ + 'arg_classes' is 
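`get_field_descr()` and `get_interiorfield_descr()` in this diff both memoize through nested dictionaries keyed by type and then by field name. The pattern, abstracted with a hypothetical `make_descr` factory in place of the real descr construction:

```python
def get_field_descr(cache, struct_name, fieldname, make_descr):
    # two-level memoization: cache[struct][field] -> descr; the
    # factory runs only on a miss, so repeated lookups return the
    # identical descr object (identity matters to the JIT)
    cachedict = cache.setdefault(struct_name, {})
    if fieldname not in cachedict:
        cachedict[fieldname] = make_descr(struct_name, fieldname)
    return cachedict[fieldname]
```

The interior-field change in the diff also shrinks the cache key from `(ARRAY, FIELDTP, name)` to `(ARRAY, name)`, since the field type is now derived from `ARRAY.OF` instead of being passed in.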
a string of characters, one per argument: + 'i', 'r', 'f', 'L', 'S' + + 'result_type' is one character from the same list or 'v' + + 'result_signed' is a boolean True/False + """ + self.arg_classes = arg_classes + self.result_type = result_type + self.result_size = result_size self.extrainfo = extrainfo self.ffi_flags = ffi_flags # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which # makes sense on Windows as it's the one for all the C functions # we are compiling together with the JIT. On non-Windows platforms # it is just ignored anyway. + if result_type == 'v': + result_flag = FLAG_VOID + elif result_type == 'i': + if result_signed: + result_flag = FLAG_SIGNED + else: + result_flag = FLAG_UNSIGNED + elif result_type == history.REF: + result_flag = FLAG_POINTER + elif result_type == history.FLOAT or result_type == 'L': + result_flag = FLAG_FLOAT + elif result_type == 'S': + result_flag = FLAG_UNSIGNED + else: + raise NotImplementedError("result_type = '%s'" % (result_type,)) + self.result_flag = result_flag def __repr__(self): - res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) + res = 'CallDescr(%s)' % (self.arg_classes,) extraeffect = getattr(self.extrainfo, 'extraeffect', None) if extraeffect is not None: res += ' EF=%r' % extraeffect @@ -363,14 +333,14 @@ def get_arg_types(self): return self.arg_classes - def get_return_type(self): - return self._return_type + def get_result_type(self): + return self.result_type - def get_result_size(self, translate_support_code): - raise NotImplementedError + def get_result_size(self): + return self.result_size def is_result_signed(self): - return False # unless overridden + return self.result_flag == FLAG_SIGNED def create_call_stub(self, rtyper, RESULT): from pypy.rlib.clibffi import FFI_DEFAULT_ABI @@ -408,18 +378,26 @@ seen = {'i': 0, 'r': 0, 'f': 0} args = ", ".join([process(c) for c in self.arg_classes]) - if self.get_return_type() == history.INT: + result_type = self.get_result_type() + if 
result_type == history.INT: result = 'rffi.cast(lltype.Signed, res)' - elif self.get_return_type() == history.REF: + category = 'i' + elif result_type == history.REF: + assert RESULT == llmemory.GCREF # should be ensured by the caller result = 'lltype.cast_opaque_ptr(llmemory.GCREF, res)' - elif self.get_return_type() == history.FLOAT: + category = 'r' + elif result_type == history.FLOAT: result = 'longlong.getfloatstorage(res)' - elif self.get_return_type() == 'L': + category = 'f' + elif result_type == 'L': result = 'rffi.cast(lltype.SignedLongLong, res)' - elif self.get_return_type() == history.VOID: - result = 'None' - elif self.get_return_type() == 'S': + category = 'f' + elif result_type == history.VOID: + result = '0' + category = 'i' + elif result_type == 'S': result = 'longlong.singlefloat2int(res)' + category = 'i' else: assert 0 source = py.code.Source(""" @@ -433,10 +411,13 @@ d = globals().copy() d.update(locals()) exec source.compile() in d - self.call_stub = d['call_stub'] + call_stub = d['call_stub'] + # store the function into one of three attributes, to preserve + # type-correctness of the return value + setattr(self, 'call_stub_%s' % category, call_stub) def verify_types(self, args_i, args_r, args_f, return_type): - assert self._return_type in return_type + assert self.result_type in return_type assert (self.arg_classes.count('i') + self.arg_classes.count('S')) == len(args_i or ()) assert self.arg_classes.count('r') == len(args_r or ()) @@ -444,161 +425,56 @@ self.arg_classes.count('L')) == len(args_f or ()) def repr_of_descr(self): - return '<%s>' % self._clsname + res = 'Call%s %d' % (self.result_type, self.result_size) + if self.arg_classes: + res += ' ' + self.arg_classes + if self.extrainfo: + res += ' EF=%d' % self.extrainfo.extraeffect + oopspecindex = self.extrainfo.oopspecindex + if oopspecindex: + res += ' OS=%d' % oopspecindex + return '<%s>' % res -class BaseIntCallDescr(BaseCallDescr): - # Base class of the various subclasses of 
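`create_call_stub()` now stores the compiled stub under one of three attributes (`call_stub_i`, `call_stub_r`, `call_stub_f`) chosen by a `category` derived from the one-character result type, so each slot keeps a fixed RPython return type. A sketch of that dispatch; the stub body here is a trivial stand-in for the generated code:

```python
class CallDescr(object):
    # one stub slot per return kind; the defaults mirror the
    # staticmethod placeholders in the diff
    call_stub_i = staticmethod(lambda func, args: 0)
    call_stub_r = staticmethod(lambda func, args: None)
    call_stub_f = staticmethod(lambda func, args: 0.0)

def install_stub(descr, result_type):
    # map the one-char result type to a slot: ints, voids and single
    # floats go through 'i', refs through 'r', doubles and long longs
    # through 'f' -- same grouping as the `category` variable above
    category = {'i': 'i', 'v': 'i', 'S': 'i',
                'r': 'r',
                'f': 'f', 'L': 'f'}[result_type]
    def call_stub(func, args):
        return func(*args)          # stand-in for the exec'd stub
    setattr(descr, 'call_stub_%s' % category, call_stub)
    return category
```

Installing a stub for a long-long result (`'L'`) lands in the float slot, while the untouched class-level slots keep returning their typed defaults.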
descrs corresponding to - # calls having a return kind of 'int' (including non-gc pointers). - # The inheritance hierarchy is a bit different than with other Descr - # classes because of the 'call_stub' attribute, which is of type - # - # lambda func, args_i, args_r, args_f --> int/ref/float/void - # - # The purpose of BaseIntCallDescr is to be the parent of all classes - # in which 'call_stub' has a return kind of 'int'. - _return_type = history.INT - call_stub = staticmethod(lambda func, args_i, args_r, args_f: 0) - - _is_result_signed = False # can be overridden in XxxCallDescr - def is_result_signed(self): - return self._is_result_signed - -class DynamicIntCallDescr(BaseIntCallDescr): - """ - calldescr that works for every integer type, by explicitly passing it the - size of the result. Used only by get_call_descr_dynamic - """ - _clsname = 'DynamicIntCallDescr' - - def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): - BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) - assert isinstance(result_sign, bool) - self._result_size = chr(result_size) - self._result_sign = result_sign - - def get_result_size(self, translate_support_code): - return ord(self._result_size) - - def is_result_signed(self): - return self._result_sign - - -class NonGcPtrCallDescr(BaseIntCallDescr): - _clsname = 'NonGcPtrCallDescr' - def get_result_size(self, translate_support_code): - return symbolic.get_size_of_ptr(translate_support_code) - -class GcPtrCallDescr(BaseCallDescr): - _clsname = 'GcPtrCallDescr' - _return_type = history.REF - call_stub = staticmethod(lambda func, args_i, args_r, args_f: - lltype.nullptr(llmemory.GCREF.TO)) - def get_result_size(self, translate_support_code): - return symbolic.get_size_of_ptr(translate_support_code) - -class FloatCallDescr(BaseCallDescr): - _clsname = 'FloatCallDescr' - _return_type = history.FLOAT - call_stub = staticmethod(lambda func,args_i,args_r,args_f: longlong.ZEROF) - def get_result_size(self, 
translate_support_code): - return symbolic.get_size(lltype.Float, translate_support_code) - -class LongLongCallDescr(FloatCallDescr): - _clsname = 'LongLongCallDescr' - _return_type = 'L' - -class VoidCallDescr(BaseCallDescr): - _clsname = 'VoidCallDescr' - _return_type = history.VOID - call_stub = staticmethod(lambda func, args_i, args_r, args_f: None) - def get_result_size(self, translate_support_code): - return 0 - -_SingleFloatCallDescr = None # built lazily - -def getCallDescrClass(RESULT): - if RESULT is lltype.Void: - return VoidCallDescr - if RESULT is lltype.Float: - return FloatCallDescr - if RESULT is lltype.SingleFloat: - global _SingleFloatCallDescr - if _SingleFloatCallDescr is None: - assert rffi.sizeof(rffi.UINT) == rffi.sizeof(RESULT) - class SingleFloatCallDescr(getCallDescrClass(rffi.UINT)): - _clsname = 'SingleFloatCallDescr' - _return_type = 'S' - _SingleFloatCallDescr = SingleFloatCallDescr - return _SingleFloatCallDescr - if is_longlong(RESULT): - return LongLongCallDescr - return getDescrClass(RESULT, BaseIntCallDescr, GcPtrCallDescr, - NonGcPtrCallDescr, 'Call', 'get_result_size', - Ellipsis, # <= floatattrname should not be used here - '_is_result_signed') -getCallDescrClass._annspecialcase_ = 'specialize:memo' +def map_type_to_argclass(ARG, accept_void=False): + kind = getkind(ARG) + if kind == 'int': + if ARG is lltype.SingleFloat: return 'S' + else: return 'i' + elif kind == 'ref': return 'r' + elif kind == 'float': + if is_longlong(ARG): return 'L' + else: return 'f' + elif kind == 'void': + if accept_void: return 'v' + raise NotImplementedError('ARG = %r' % (ARG,)) def get_call_descr(gccache, ARGS, RESULT, extrainfo=None): - arg_classes = [] - for ARG in ARGS: - kind = getkind(ARG) - if kind == 'int': - if ARG is lltype.SingleFloat: - arg_classes.append('S') + arg_classes = map(map_type_to_argclass, ARGS) + arg_classes = ''.join(arg_classes) + result_type = map_type_to_argclass(RESULT, accept_void=True) + RESULT_ERASED = RESULT + if 
RESULT is lltype.Void: + result_size = 0 + result_signed = False + else: + if isinstance(RESULT, lltype.Ptr): + # avoid too many CallDescrs + if result_type == 'r': + RESULT_ERASED = llmemory.GCREF else: - arg_classes.append('i') - elif kind == 'ref': arg_classes.append('r') - elif kind == 'float': - if is_longlong(ARG): - arg_classes.append('L') - else: - arg_classes.append('f') - else: - raise NotImplementedError('ARG = %r' % (ARG,)) - arg_classes = ''.join(arg_classes) - cls = getCallDescrClass(RESULT) - key = (cls, arg_classes, extrainfo) + RESULT_ERASED = llmemory.Address + result_size = symbolic.get_size(RESULT_ERASED, + gccache.translate_support_code) + result_signed = get_type_flag(RESULT) == FLAG_SIGNED + key = (arg_classes, result_type, result_signed, RESULT_ERASED, extrainfo) cache = gccache._cache_call try: - return cache[key] + calldescr = cache[key] except KeyError: - calldescr = cls(arg_classes, extrainfo) - calldescr.create_call_stub(gccache.rtyper, RESULT) + calldescr = CallDescr(arg_classes, result_type, result_signed, + result_size, extrainfo) + calldescr.create_call_stub(gccache.rtyper, RESULT_ERASED) cache[key] = calldescr - return calldescr - - -# ____________________________________________________________ - -def getDescrClass(TYPE, BaseDescr, GcPtrDescr, NonGcPtrDescr, - nameprefix, methodname, floatattrname, signedattrname, - _cache={}): - if isinstance(TYPE, lltype.Ptr): - if TYPE.TO._gckind == 'gc': - return GcPtrDescr - else: - return NonGcPtrDescr - if TYPE is lltype.SingleFloat: - assert rffi.sizeof(rffi.UINT) == rffi.sizeof(TYPE) - TYPE = rffi.UINT - try: - return _cache[nameprefix, TYPE] - except KeyError: - # - class Descr(BaseDescr): - _clsname = '%s%sDescr' % (TYPE._name, nameprefix) - Descr.__name__ = Descr._clsname - # - def method(self, translate_support_code): - return symbolic.get_size(TYPE, translate_support_code) - setattr(Descr, methodname, method) - # - if TYPE is lltype.Float or is_longlong(TYPE): - setattr(Descr, 
floatattrname, True) - elif (TYPE is not lltype.Bool and isinstance(TYPE, lltype.Number) and - rffi.cast(TYPE, -1) == -1): - setattr(Descr, signedattrname, True) - # - _cache[nameprefix, TYPE] = Descr - return Descr + assert repr(calldescr.result_size) == repr(result_size) + return calldescr diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -1,9 +1,7 @@ from pypy.rlib.rarithmetic import intmask from pypy.jit.metainterp import history from pypy.rpython.lltypesystem import rffi -from pypy.jit.backend.llsupport.descr import ( - DynamicIntCallDescr, NonGcPtrCallDescr, FloatCallDescr, VoidCallDescr, - LongLongCallDescr, getCallDescrClass) +from pypy.jit.backend.llsupport.descr import CallDescr class UnsupportedKind(Exception): pass @@ -16,29 +14,13 @@ argkinds = [get_ffi_type_kind(cpu, arg) for arg in ffi_args] except UnsupportedKind: return None - arg_classes = ''.join(argkinds) - if reskind == history.INT: - size = intmask(ffi_result.c_size) - signed = is_ffi_type_signed(ffi_result) - return DynamicIntCallDescr(arg_classes, size, signed, extrainfo, - ffi_flags=ffi_flags) - elif reskind == history.REF: - return NonGcPtrCallDescr(arg_classes, extrainfo, - ffi_flags=ffi_flags) - elif reskind == history.FLOAT: - return FloatCallDescr(arg_classes, extrainfo, - ffi_flags=ffi_flags) - elif reskind == history.VOID: - return VoidCallDescr(arg_classes, extrainfo, - ffi_flags=ffi_flags) - elif reskind == 'L': - return LongLongCallDescr(arg_classes, extrainfo, - ffi_flags=ffi_flags) - elif reskind == 'S': - SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) - return SingleFloatCallDescr(arg_classes, extrainfo, - ffi_flags=ffi_flags) - assert False + if reskind == history.VOID: + result_size = 0 + else: + result_size = intmask(ffi_result.c_size) + argkinds = ''.join(argkinds) + return CallDescr(argkinds, reskind, 
is_ffi_type_signed(ffi_result), + result_size, extrainfo, ffi_flags=ffi_flags) def get_ffi_type_kind(cpu, ffi_type): from pypy.rlib.libffi import types diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -1,6 +1,6 @@ import os from pypy.rlib import rgc -from pypy.rlib.objectmodel import we_are_translated +from pypy.rlib.objectmodel import we_are_translated, specialize from pypy.rlib.debug import fatalerror from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass, rstr @@ -8,52 +8,93 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper from pypy.translator.tool.cbuild import ExternalCompilationInfo -from pypy.jit.metainterp.history import BoxInt, BoxPtr, ConstInt, ConstPtr -from pypy.jit.metainterp.history import AbstractDescr +from pypy.jit.codewriter import heaptracker +from pypy.jit.metainterp.history import ConstPtr, AbstractDescr from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.symbolic import WORD -from pypy.jit.backend.llsupport.descr import BaseSizeDescr, BaseArrayDescr +from pypy.jit.backend.llsupport.descr import SizeDescr, ArrayDescr from pypy.jit.backend.llsupport.descr import GcCache, get_field_descr -from pypy.jit.backend.llsupport.descr import GcPtrFieldDescr +from pypy.jit.backend.llsupport.descr import get_array_descr from pypy.jit.backend.llsupport.descr import get_call_descr +from pypy.jit.backend.llsupport.rewrite import GcRewriterAssembler from pypy.rpython.memory.gctransform import asmgcroot # ____________________________________________________________ class GcLLDescription(GcCache): - minimal_size_in_nursery = 0 - get_malloc_slowpath_addr = None def __init__(self, gcdescr, translator=None, rtyper=None): GcCache.__init__(self, translator 
is not None, rtyper) self.gcdescr = gcdescr + if translator and translator.config.translation.gcremovetypeptr: + self.fielddescr_vtable = None + else: + self.fielddescr_vtable = get_field_descr(self, rclass.OBJECT, + 'typeptr') + self._generated_functions = [] + + def _setup_str(self): + self.str_descr = get_array_descr(self, rstr.STR) + self.unicode_descr = get_array_descr(self, rstr.UNICODE) + + def generate_function(self, funcname, func, ARGS, RESULT=llmemory.GCREF): + """Generates a variant of malloc with the given name and the given + arguments. It should return NULL if out of memory. If it raises + anything, it must be an optional MemoryError. + """ + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT)) + descr = get_call_descr(self, ARGS, RESULT) + setattr(self, funcname, func) + setattr(self, funcname + '_FUNCPTR', FUNCPTR) + setattr(self, funcname + '_descr', descr) + self._generated_functions.append(funcname) + + @specialize.arg(1) + def get_malloc_fn(self, funcname): + func = getattr(self, funcname) + FUNC = getattr(self, funcname + '_FUNCPTR') + return llhelper(FUNC, func) + + @specialize.arg(1) + def get_malloc_fn_addr(self, funcname): + ll_func = self.get_malloc_fn(funcname) + return heaptracker.adr2int(llmemory.cast_ptr_to_adr(ll_func)) + def _freeze_(self): return True def initialize(self): pass def do_write_barrier(self, gcref_struct, gcref_newptr): pass - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - return operations - def can_inline_malloc(self, descr): - return False - def can_inline_malloc_varsize(self, descr, num_elem): + def can_use_nursery_malloc(self, size): return False def has_write_barrier_class(self): return None def freeing_block(self, start, stop): pass + def get_nursery_free_addr(self): + raise NotImplementedError + def get_nursery_top_addr(self): + raise NotImplementedError - def get_funcptr_for_newarray(self): - return llhelper(self.GC_MALLOC_ARRAY, self.malloc_array) - def get_funcptr_for_newstr(self): - 
return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_str) - def get_funcptr_for_newunicode(self): - return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_unicode) + def gc_malloc(self, sizedescr): + """Blackhole: do a 'bh_new'. Also used for 'bh_new_with_vtable', + with the vtable pointer set manually afterwards.""" + assert isinstance(sizedescr, SizeDescr) + return self._bh_malloc(sizedescr) + def gc_malloc_array(self, arraydescr, num_elem): + assert isinstance(arraydescr, ArrayDescr) + return self._bh_malloc_array(arraydescr, num_elem) - def record_constptrs(self, op, gcrefs_output_list): + def gc_malloc_str(self, num_elem): + return self._bh_malloc_array(self.str_descr, num_elem) + + def gc_malloc_unicode(self, num_elem): + return self._bh_malloc_array(self.unicode_descr, num_elem) + + def _record_constptrs(self, op, gcrefs_output_list): for i in range(op.numargs()): v = op.getarg(i) if isinstance(v, ConstPtr) and bool(v.value): @@ -61,11 +102,27 @@ rgc._make_sure_does_not_move(p) gcrefs_output_list.append(p) + def rewrite_assembler(self, cpu, operations, gcrefs_output_list): + rewriter = GcRewriterAssembler(self, cpu) + newops = rewriter.rewrite(operations) + # record all GCREFs, because the GC (or Boehm) cannot see them and + # keep them alive if they end up as constants in the assembler + for op in newops: + self._record_constptrs(op, gcrefs_output_list) + return newops + # ____________________________________________________________ class GcLLDescr_boehm(GcLLDescription): - moving_gc = False - gcrootmap = None + kind = 'boehm' + moving_gc = False + round_up = False + gcrootmap = None + write_barrier_descr = None + fielddescr_tid = None + str_type_id = 0 + unicode_type_id = 0 + get_malloc_slowpath_addr = None @classmethod def configure_boehm_once(cls): @@ -76,6 +133,16 @@ from pypy.rpython.tool import rffi_platform compilation_info = rffi_platform.configure_boehm() + # on some platform GC_init is required before any other + # GC_* functions, call it here 
for the benefit of tests + # XXX move this to tests + init_fn_ptr = rffi.llexternal("GC_init", + [], lltype.Void, + compilation_info=compilation_info, + sandboxsafe=True, + _nowrapper=True) + init_fn_ptr() + # Versions 6.x of libgc needs to use GC_local_malloc(). # Versions 7.x of libgc removed this function; GC_malloc() has # the same behavior if libgc was compiled with @@ -95,96 +162,42 @@ sandboxsafe=True, _nowrapper=True) cls.malloc_fn_ptr = malloc_fn_ptr - cls.compilation_info = compilation_info return malloc_fn_ptr def __init__(self, gcdescr, translator, rtyper): GcLLDescription.__init__(self, gcdescr, translator, rtyper) # grab a pointer to the Boehm 'malloc' function - malloc_fn_ptr = self.configure_boehm_once() - self.funcptr_for_new = malloc_fn_ptr + self.malloc_fn_ptr = self.configure_boehm_once() + self._setup_str() + self._make_functions() - def malloc_array(basesize, itemsize, ofs_length, num_elem): + def _make_functions(self): + + def malloc_fixedsize(size): + return self.malloc_fn_ptr(size) + self.generate_function('malloc_fixedsize', malloc_fixedsize, + [lltype.Signed]) + + def malloc_array(basesize, num_elem, itemsize, ofs_length): try: - size = ovfcheck(basesize + ovfcheck(itemsize * num_elem)) + totalsize = ovfcheck(basesize + ovfcheck(itemsize * num_elem)) except OverflowError: return lltype.nullptr(llmemory.GCREF.TO) - res = self.funcptr_for_new(size) - if not res: - return res - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem + res = self.malloc_fn_ptr(totalsize) + if res: + arrayptr = rffi.cast(rffi.CArrayPtr(lltype.Signed), res) + arrayptr[ofs_length/WORD] = num_elem return res - self.malloc_array = malloc_array - self.GC_MALLOC_ARRAY = lltype.Ptr(lltype.FuncType( - [lltype.Signed] * 4, llmemory.GCREF)) + self.generate_function('malloc_array', malloc_array, + [lltype.Signed] * 4) + def _bh_malloc(self, sizedescr): + return self.malloc_fixedsize(sizedescr.size) - (str_basesize, str_itemsize, str_ofs_length - ) = 
symbolic.get_array_token(rstr.STR, self.translate_support_code) - (unicode_basesize, unicode_itemsize, unicode_ofs_length - ) = symbolic.get_array_token(rstr.UNICODE, self.translate_support_code) - def malloc_str(length): - return self.malloc_array( - str_basesize, str_itemsize, str_ofs_length, length - ) - def malloc_unicode(length): - return self.malloc_array( - unicode_basesize, unicode_itemsize, unicode_ofs_length, length - ) - self.malloc_str = malloc_str - self.malloc_unicode = malloc_unicode - self.GC_MALLOC_STR_UNICODE = lltype.Ptr(lltype.FuncType( - [lltype.Signed], llmemory.GCREF)) - - - # on some platform GC_init is required before any other - # GC_* functions, call it here for the benefit of tests - # XXX move this to tests - init_fn_ptr = rffi.llexternal("GC_init", - [], lltype.Void, - compilation_info=self.compilation_info, - sandboxsafe=True, - _nowrapper=True) - - init_fn_ptr() - - def gc_malloc(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return self.funcptr_for_new(sizedescr.size) - - def gc_malloc_array(self, arraydescr, num_elem): - assert isinstance(arraydescr, BaseArrayDescr) - ofs_length = arraydescr.get_ofs_length(self.translate_support_code) - basesize = arraydescr.get_base_size(self.translate_support_code) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return self.malloc_array(basesize, itemsize, ofs_length, num_elem) - - def gc_malloc_str(self, num_elem): - return self.malloc_str(num_elem) - - def gc_malloc_unicode(self, num_elem): - return self.malloc_unicode(num_elem) - - def args_for_new(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return [sizedescr.size] - - def args_for_new_array(self, arraydescr): - ofs_length = arraydescr.get_ofs_length(self.translate_support_code) - basesize = arraydescr.get_base_size(self.translate_support_code) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return [basesize, itemsize, ofs_length] - - def 
get_funcptr_for_new(self): - return self.funcptr_for_new - - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - # record all GCREFs too, because Boehm cannot see them and keep them - # alive if they end up as constants in the assembler - for op in operations: - self.record_constptrs(op, gcrefs_output_list) - return GcLLDescription.rewrite_assembler(self, cpu, operations, - gcrefs_output_list) + def _bh_malloc_array(self, arraydescr, num_elem): + return self.malloc_array(arraydescr.basesize, num_elem, + arraydescr.itemsize, + arraydescr.lendescr.offset) # ____________________________________________________________ @@ -554,12 +567,14 @@ class WriteBarrierDescr(AbstractDescr): def __init__(self, gc_ll_descr): - GCClass = gc_ll_descr.GCClass self.llop1 = gc_ll_descr.llop1 self.WB_FUNCPTR = gc_ll_descr.WB_FUNCPTR self.WB_ARRAY_FUNCPTR = gc_ll_descr.WB_ARRAY_FUNCPTR - self.fielddescr_tid = get_field_descr(gc_ll_descr, GCClass.HDR, 'tid') + self.fielddescr_tid = gc_ll_descr.fielddescr_tid # + GCClass = gc_ll_descr.GCClass + if GCClass is None: # for tests + return self.jit_wb_if_flag = GCClass.JIT_WB_IF_FLAG self.jit_wb_if_flag_byteofs, self.jit_wb_if_flag_singlebyte = ( self.extract_flag_byte(self.jit_wb_if_flag)) @@ -596,48 +611,74 @@ funcaddr = llmemory.cast_ptr_to_adr(funcptr) return cpu.cast_adr_to_int(funcaddr) # this may return 0 + def has_write_barrier_from_array(self, cpu): + return self.get_write_barrier_from_array_fn(cpu) != 0 + class GcLLDescr_framework(GcLLDescription): DEBUG = False # forced to True by x86/test/test_zrpy_gc.py + kind = 'framework' + round_up = True - def __init__(self, gcdescr, translator, rtyper, llop1=llop): - from pypy.rpython.memory.gctypelayout import check_typeid - from pypy.rpython.memory.gcheader import GCHeaderBuilder - from pypy.rpython.memory.gctransform import framework + def __init__(self, gcdescr, translator, rtyper, llop1=llop, + really_not_translated=False): GcLLDescription.__init__(self, gcdescr, 
translator, rtyper) - assert self.translate_support_code, "required with the framework GC" self.translator = translator self.llop1 = llop1 + if really_not_translated: + assert not self.translate_support_code # but half does not work + self._initialize_for_tests() + else: + assert self.translate_support_code,"required with the framework GC" + self._check_valid_gc() + self._make_gcrootmap() + self._make_layoutbuilder() + self._setup_gcclass() + self._setup_tid() + self._setup_write_barrier() + self._setup_str() + self._make_functions(really_not_translated) + def _initialize_for_tests(self): + self.layoutbuilder = None + self.fielddescr_tid = AbstractDescr() + self.max_size_of_young_obj = 1000 + self.GCClass = None + + def _check_valid_gc(self): # we need the hybrid or minimark GC for rgc._make_sure_does_not_move() # to work - if gcdescr.config.translation.gc not in ('hybrid', 'minimark'): + if self.gcdescr.config.translation.gc not in ('hybrid', 'minimark'): raise NotImplementedError("--gc=%s not implemented with the JIT" % (gcdescr.config.translation.gc,)) + def _make_gcrootmap(self): # to find roots in the assembler, make a GcRootMap - name = gcdescr.config.translation.gcrootfinder + name = self.gcdescr.config.translation.gcrootfinder try: cls = globals()['GcRootMap_' + name] except KeyError: raise NotImplementedError("--gcrootfinder=%s not implemented" " with the JIT" % (name,)) - gcrootmap = cls(gcdescr) + gcrootmap = cls(self.gcdescr) self.gcrootmap = gcrootmap + def _make_layoutbuilder(self): # make a TransformerLayoutBuilder and save it on the translator # where it can be fished and reused by the FrameworkGCTransformer + from pypy.rpython.memory.gctransform import framework + translator = self.translator self.layoutbuilder = framework.TransformerLayoutBuilder(translator) self.layoutbuilder.delay_encoding() - self.translator._jit2gc = {'layoutbuilder': self.layoutbuilder} - gcrootmap.add_jit2gc_hooks(self.translator._jit2gc) + translator._jit2gc = 
{'layoutbuilder': self.layoutbuilder} + self.gcrootmap.add_jit2gc_hooks(translator._jit2gc) + def _setup_gcclass(self): + from pypy.rpython.memory.gcheader import GCHeaderBuilder self.GCClass = self.layoutbuilder.GCClass self.moving_gc = self.GCClass.moving_gc self.HDRPTR = lltype.Ptr(self.GCClass.HDR) self.gcheaderbuilder = GCHeaderBuilder(self.HDRPTR.TO) - (self.array_basesize, _, self.array_length_ofs) = \ - symbolic.get_array_token(lltype.GcArray(lltype.Signed), True) self.max_size_of_young_obj = self.GCClass.JIT_max_size_of_young_obj() self.minimal_size_in_nursery=self.GCClass.JIT_minimal_size_in_nursery() @@ -645,87 +686,124 @@ assert self.GCClass.inline_simple_malloc assert self.GCClass.inline_simple_malloc_varsize - # make a malloc function, with two arguments - def malloc_basic(size, tid): - type_id = llop.extract_ushort(llgroup.HALFWORD, tid) - check_typeid(type_id) - res = llop1.do_malloc_fixedsize_clear(llmemory.GCREF, - type_id, size, - False, False, False) - # In case the operation above failed, we are returning NULL - # from this function to assembler. There is also an RPython - # exception set, typically MemoryError; but it's easier and - # faster to check for the NULL return value, as done by - # translator/exceptiontransform.py. 
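The removed comment just above explains why the malloc helper returns NULL to the assembler instead of letting the RPython MemoryError propagate: a pointer test in generated code is cheaper than exception unwinding. A minimal sketch of that convention (the names and the heap limit are invented stand-ins, not PyPy's real helpers):

```python
# Illustrative sketch only: the allocation helper signals failure by
# returning a null sentinel, so the JIT-generated caller performs a
# cheap pointer test instead of unwinding an RPython exception.
NULL = 0

def malloc_array(basesize, num_elem, itemsize, heap_limit=2**20):
    # In RPython this computation is wrapped in ovfcheck(); plain Python
    # integers do not overflow, so the size limit below stands in for
    # both the "overflow" and the "out of memory" failure modes.
    totalsize = basesize + itemsize * num_elem
    if totalsize > heap_limit:
        return NULL          # caller checks for NULL; no exception raised
    return totalsize         # stand-in for the address of the new object

ptr = malloc_array(8, 10, 8)
assert ptr != NULL
assert malloc_array(8, 10**9, 8) == NULL   # too big: caller sees NULL
```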
- #llop.debug_print(lltype.Void, "\tmalloc_basic", size, type_id, - # "-->", res) - return res - self.malloc_basic = malloc_basic - self.GC_MALLOC_BASIC = lltype.Ptr(lltype.FuncType( - [lltype.Signed, lltype.Signed], llmemory.GCREF)) + def _setup_tid(self): + self.fielddescr_tid = get_field_descr(self, self.GCClass.HDR, 'tid') + + def _setup_write_barrier(self): self.WB_FUNCPTR = lltype.Ptr(lltype.FuncType( [llmemory.Address, llmemory.Address], lltype.Void)) self.WB_ARRAY_FUNCPTR = lltype.Ptr(lltype.FuncType( [llmemory.Address, lltype.Signed, llmemory.Address], lltype.Void)) self.write_barrier_descr = WriteBarrierDescr(self) - # + + def _make_functions(self, really_not_translated): + from pypy.rpython.memory.gctypelayout import check_typeid + llop1 = self.llop1 + (self.standard_array_basesize, _, self.standard_array_length_ofs) = \ + symbolic.get_array_token(lltype.GcArray(lltype.Signed), + not really_not_translated) + + def malloc_nursery_slowpath(size): + """Allocate 'size' null bytes out of the nursery. + Note that the fast path is typically inlined by the backend.""" + if self.DEBUG: + self._random_usage_of_xmm_registers() + type_id = rffi.cast(llgroup.HALFWORD, 0) # missing here + return llop1.do_malloc_fixedsize_clear(llmemory.GCREF, + type_id, size, + False, False, False) + self.generate_function('malloc_nursery', malloc_nursery_slowpath, + [lltype.Signed]) + def malloc_array(itemsize, tid, num_elem): + """Allocate an array with a variable-size num_elem. 
+ Only works for standard arrays.""" type_id = llop.extract_ushort(llgroup.HALFWORD, tid) check_typeid(type_id) return llop1.do_malloc_varsize_clear( llmemory.GCREF, - type_id, num_elem, self.array_basesize, itemsize, - self.array_length_ofs) - self.malloc_array = malloc_array - self.GC_MALLOC_ARRAY = lltype.Ptr(lltype.FuncType( - [lltype.Signed] * 3, llmemory.GCREF)) - # - (str_basesize, str_itemsize, str_ofs_length - ) = symbolic.get_array_token(rstr.STR, True) - (unicode_basesize, unicode_itemsize, unicode_ofs_length - ) = symbolic.get_array_token(rstr.UNICODE, True) - str_type_id = self.layoutbuilder.get_type_id(rstr.STR) - unicode_type_id = self.layoutbuilder.get_type_id(rstr.UNICODE) - # + type_id, num_elem, self.standard_array_basesize, itemsize, + self.standard_array_length_ofs) + self.generate_function('malloc_array', malloc_array, + [lltype.Signed] * 3) + + def malloc_array_nonstandard(basesize, itemsize, lengthofs, tid, + num_elem): + """For the rare case of non-standard arrays, i.e. arrays where + self.standard_array_{basesize,length_ofs} is wrong. It can + occur e.g. 
with arrays of floats on Win32.""" + type_id = llop.extract_ushort(llgroup.HALFWORD, tid) + check_typeid(type_id) + return llop1.do_malloc_varsize_clear( + llmemory.GCREF, + type_id, num_elem, basesize, itemsize, lengthofs) + self.generate_function('malloc_array_nonstandard', + malloc_array_nonstandard, + [lltype.Signed] * 5) + + str_type_id = self.str_descr.tid + str_basesize = self.str_descr.basesize + str_itemsize = self.str_descr.itemsize + str_ofs_length = self.str_descr.lendescr.offset + unicode_type_id = self.unicode_descr.tid + unicode_basesize = self.unicode_descr.basesize + unicode_itemsize = self.unicode_descr.itemsize + unicode_ofs_length = self.unicode_descr.lendescr.offset + def malloc_str(length): return llop1.do_malloc_varsize_clear( llmemory.GCREF, str_type_id, length, str_basesize, str_itemsize, str_ofs_length) + self.generate_function('malloc_str', malloc_str, + [lltype.Signed]) + def malloc_unicode(length): return llop1.do_malloc_varsize_clear( llmemory.GCREF, - unicode_type_id, length, unicode_basesize,unicode_itemsize, + unicode_type_id, length, unicode_basesize, unicode_itemsize, unicode_ofs_length) - self.malloc_str = malloc_str - self.malloc_unicode = malloc_unicode - self.GC_MALLOC_STR_UNICODE = lltype.Ptr(lltype.FuncType( - [lltype.Signed], llmemory.GCREF)) - # - class ForTestOnly: - pass - for_test_only = ForTestOnly() - for_test_only.x = 1.23 - def random_usage_of_xmm_registers(): - x0 = for_test_only.x - x1 = x0 * 0.1 - x2 = x0 * 0.2 - x3 = x0 * 0.3 - for_test_only.x = x0 + x1 + x2 + x3 - # - def malloc_slowpath(size): - if self.DEBUG: - random_usage_of_xmm_registers() - assert size >= self.minimal_size_in_nursery - # NB. although we call do_malloc_fixedsize_clear() here, - # it's a bit of a hack because we set tid to 0 and may - # also use it to allocate varsized objects. The tid - # and possibly the length are both set afterward. 
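The generate_function() registration used repeatedly in this hunk can be sketched in isolation: each GC flavor defines plain allocation functions and registers them under uniform attribute names ('malloc_str', 'malloc_str_descr', ...), so backend code can look them up generically. The FakeDescr class and attribute names below are illustrative stand-ins, not the real CallDescr machinery:

```python
# Illustrative sketch of the registration pattern introduced by this
# commit; FakeDescr stands in for the real call descriptor.
class FakeDescr:
    def __init__(self, arg_types, result):
        self.arg_types, self.result = arg_types, result

class GcDescription:
    def __init__(self):
        self._generated_functions = []

    def generate_function(self, funcname, func, arg_types, result='ref'):
        # Attach the function and its descriptor under predictable names.
        setattr(self, funcname, func)
        setattr(self, funcname + '_descr', FakeDescr(arg_types, result))
        self._generated_functions.append(funcname)

gc = GcDescription()
gc.generate_function('malloc_str', lambda length: ('str', length), ['int'])
assert gc.malloc_str(5) == ('str', 5)
assert gc.malloc_str_descr.arg_types == ['int']
assert gc._generated_functions == ['malloc_str']
```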
- gcref = llop1.do_malloc_fixedsize_clear(llmemory.GCREF, - 0, size, False, False, False) - return rffi.cast(lltype.Signed, gcref) - self.malloc_slowpath = malloc_slowpath - self.MALLOC_SLOWPATH = lltype.FuncType([lltype.Signed], lltype.Signed) + self.generate_function('malloc_unicode', malloc_unicode, + [lltype.Signed]) + + # Rarely called: allocate a fixed-size amount of bytes, but + # not in the nursery, because it is too big. Implemented like + # malloc_nursery_slowpath() above. + self.generate_function('malloc_fixedsize', malloc_nursery_slowpath, + [lltype.Signed]) + + def _bh_malloc(self, sizedescr): + from pypy.rpython.memory.gctypelayout import check_typeid + llop1 = self.llop1 + type_id = llop.extract_ushort(llgroup.HALFWORD, sizedescr.tid) + check_typeid(type_id) + return llop1.do_malloc_fixedsize_clear(llmemory.GCREF, + type_id, sizedescr.size, + False, False, False) + + def _bh_malloc_array(self, arraydescr, num_elem): + from pypy.rpython.memory.gctypelayout import check_typeid + llop1 = self.llop1 + type_id = llop.extract_ushort(llgroup.HALFWORD, arraydescr.tid) + check_typeid(type_id) + return llop1.do_malloc_varsize_clear(llmemory.GCREF, + type_id, num_elem, + arraydescr.basesize, + arraydescr.itemsize, + arraydescr.lendescr.offset) + + + class ForTestOnly: + pass + for_test_only = ForTestOnly() + for_test_only.x = 1.23 + + def _random_usage_of_xmm_registers(self): + x0 = self.for_test_only.x + x1 = x0 * 0.1 + x2 = x0 * 0.2 + x3 = x0 * 0.3 + self.for_test_only.x = x0 + x1 + x2 + x3 def get_nursery_free_addr(self): nurs_addr = llop.gc_adr_of_nursery_free(llmemory.Address) @@ -735,49 +813,26 @@ nurs_top_addr = llop.gc_adr_of_nursery_top(llmemory.Address) return rffi.cast(lltype.Signed, nurs_top_addr) - def get_malloc_slowpath_addr(self): - fptr = llhelper(lltype.Ptr(self.MALLOC_SLOWPATH), self.malloc_slowpath) - return rffi.cast(lltype.Signed, fptr) - def initialize(self): self.gcrootmap.initialize() def init_size_descr(self, S, descr): - type_id = 
self.layoutbuilder.get_type_id(S) - assert not self.layoutbuilder.is_weakref_type(S) - assert not self.layoutbuilder.has_finalizer(S) - descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) + if self.layoutbuilder is not None: + type_id = self.layoutbuilder.get_type_id(S) + assert not self.layoutbuilder.is_weakref_type(S) + assert not self.layoutbuilder.has_finalizer(S) + descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) def init_array_descr(self, A, descr): - type_id = self.layoutbuilder.get_type_id(A) - descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) + if self.layoutbuilder is not None: + type_id = self.layoutbuilder.get_type_id(A) + descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) - def gc_malloc(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return self.malloc_basic(sizedescr.size, sizedescr.tid) - - def gc_malloc_array(self, arraydescr, num_elem): - assert isinstance(arraydescr, BaseArrayDescr) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return self.malloc_array(itemsize, arraydescr.tid, num_elem) - - def gc_malloc_str(self, num_elem): - return self.malloc_str(num_elem) - - def gc_malloc_unicode(self, num_elem): - return self.malloc_unicode(num_elem) - - def args_for_new(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return [sizedescr.size, sizedescr.tid] - - def args_for_new_array(self, arraydescr): - assert isinstance(arraydescr, BaseArrayDescr) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return [itemsize, arraydescr.tid] - - def get_funcptr_for_new(self): - return llhelper(self.GC_MALLOC_BASIC, self.malloc_basic) + def _set_tid(self, gcptr, tid): + hdr_addr = llmemory.cast_ptr_to_adr(gcptr) + hdr_addr -= self.gcheaderbuilder.size_gc_header + hdr = llmemory.cast_adr_to_ptr(hdr_addr, self.HDRPTR) + hdr.tid = tid def do_write_barrier(self, gcref_struct, gcref_newptr): hdr_addr = llmemory.cast_ptr_to_adr(gcref_struct) @@ -791,108 +846,8 @@ 
funcptr(llmemory.cast_ptr_to_adr(gcref_struct), llmemory.cast_ptr_to_adr(gcref_newptr)) - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - # Perform two kinds of rewrites in parallel: - # - # - Add COND_CALLs to the write barrier before SETFIELD_GC and - # SETARRAYITEM_GC operations. - # - # - Record the ConstPtrs from the assembler. - # - newops = [] - known_lengths = {} - # we can only remember one malloc since the next malloc can possibly - # collect - last_malloc = None - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - continue - # ---------- record the ConstPtrs ---------- - self.record_constptrs(op, gcrefs_output_list) - if op.is_malloc(): - last_malloc = op.result - elif op.can_malloc(): - last_malloc = None - # ---------- write barrier for SETFIELD_GC ---------- - if op.getopnum() == rop.SETFIELD_GC: - val = op.getarg(0) - # no need for a write barrier in the case of previous malloc - if val is not last_malloc: - v = op.getarg(1) - if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and - bool(v.value)): # store a non-NULL - self._gen_write_barrier(newops, op.getarg(0), v) - op = op.copy_and_change(rop.SETFIELD_RAW) - # ---------- write barrier for SETINTERIORFIELD_GC ------ - if op.getopnum() == rop.SETINTERIORFIELD_GC: - val = op.getarg(0) - if val is not last_malloc: - v = op.getarg(2) - if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and - bool(v.value)): # store a non-NULL - self._gen_write_barrier(newops, op.getarg(0), v) - op = op.copy_and_change(rop.SETINTERIORFIELD_RAW) - # ---------- write barrier for SETARRAYITEM_GC ---------- - if op.getopnum() == rop.SETARRAYITEM_GC: - val = op.getarg(0) - # no need for a write barrier in the case of previous malloc - if val is not last_malloc: - v = op.getarg(2) - if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and - bool(v.value)): # store a non-NULL - self._gen_write_barrier_array(newops, op.getarg(0), - op.getarg(1), v, - cpu, known_lengths) - op = 
op.copy_and_change(rop.SETARRAYITEM_RAW) - elif op.getopnum() == rop.NEW_ARRAY: - v_length = op.getarg(0) - if isinstance(v_length, ConstInt): - known_lengths[op.result] = v_length.getint() - # ---------- - newops.append(op) - return newops - - def _gen_write_barrier(self, newops, v_base, v_value): - args = [v_base, v_value] - newops.append(ResOperation(rop.COND_CALL_GC_WB, args, None, - descr=self.write_barrier_descr)) - - def _gen_write_barrier_array(self, newops, v_base, v_index, v_value, - cpu, known_lengths): - if self.write_barrier_descr.get_write_barrier_from_array_fn(cpu) != 0: - # If we know statically the length of 'v', and it is not too - # big, then produce a regular write_barrier. If it's unknown or - # too big, produce instead a write_barrier_from_array. - LARGE = 130 - length = known_lengths.get(v_base, LARGE) - if length >= LARGE: - # unknown or too big: produce a write_barrier_from_array - args = [v_base, v_index, v_value] - newops.append(ResOperation(rop.COND_CALL_GC_WB_ARRAY, args, - None, - descr=self.write_barrier_descr)) - return - # fall-back case: produce a write_barrier - self._gen_write_barrier(newops, v_base, v_value) - - def can_inline_malloc(self, descr): - assert isinstance(descr, BaseSizeDescr) - if descr.size < self.max_size_of_young_obj: - has_finalizer = bool(descr.tid & (1<= LARGE: + # unknown or too big: produce a write_barrier_from_array + args = [v_base, v_index, v_value] + self.newops.append( + ResOperation(rop.COND_CALL_GC_WB_ARRAY, args, None, + descr=write_barrier_descr)) + return + # fall-back case: produce a write_barrier + self.gen_write_barrier(v_base, v_value) + + def round_up_for_allocation(self, size): + if not self.gc_ll_descr.round_up: + return size + if self.gc_ll_descr.translate_support_code: + from pypy.rpython.lltypesystem import llarena + return llarena.round_up_for_allocation( + size, self.gc_ll_descr.minimal_size_in_nursery) + else: + # non-translated: do it manually + # assume that 
"self.gc_ll_descr.minimal_size_in_nursery" is 2 WORDs + size = max(size, 2 * WORD) + return (size + WORD-1) & ~(WORD-1) # round up diff --git a/pypy/jit/backend/llsupport/test/test_descr.py b/pypy/jit/backend/llsupport/test/test_descr.py --- a/pypy/jit/backend/llsupport/test/test_descr.py +++ b/pypy/jit/backend/llsupport/test/test_descr.py @@ -1,4 +1,4 @@ -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.jit.backend.llsupport.descr import * from pypy.jit.backend.llsupport import symbolic from pypy.rlib.objectmodel import Symbolic @@ -53,18 +53,6 @@ ('z', lltype.Ptr(U)), ('f', lltype.Float), ('s', lltype.SingleFloat)) - assert getFieldDescrClass(lltype.Ptr(T)) is GcPtrFieldDescr - assert getFieldDescrClass(lltype.Ptr(U)) is NonGcPtrFieldDescr - cls = getFieldDescrClass(lltype.Char) - assert cls != getFieldDescrClass(lltype.Signed) - assert cls == getFieldDescrClass(lltype.Char) - clsf = getFieldDescrClass(lltype.Float) - assert clsf != cls - assert clsf == getFieldDescrClass(lltype.Float) - clss = getFieldDescrClass(lltype.SingleFloat) - assert clss not in (cls, clsf) - assert clss == getFieldDescrClass(lltype.SingleFloat) - assert clss == getFieldDescrClass(rffi.UINT) # for now # c0 = GcCache(False) c1 = GcCache(True) @@ -77,11 +65,7 @@ descr_z = get_field_descr(c2, S, 'z') descr_f = get_field_descr(c2, S, 'f') descr_s = get_field_descr(c2, S, 's') - assert descr_x.__class__ is cls - assert descr_y.__class__ is GcPtrFieldDescr - assert descr_z.__class__ is NonGcPtrFieldDescr - assert descr_f.__class__ is clsf - assert descr_s.__class__ is clss + assert isinstance(descr_x, FieldDescr) assert descr_x.name == 'S.x' assert descr_y.name == 'S.y' assert descr_z.name == 'S.z' @@ -90,33 +74,27 @@ if not tsc: assert descr_x.offset < descr_y.offset < descr_z.offset assert descr_x.sort_key() < descr_y.sort_key() < descr_z.sort_key() - assert descr_x.get_field_size(False) == rffi.sizeof(lltype.Char) - assert 
descr_y.get_field_size(False) == rffi.sizeof(lltype.Ptr(T)) - assert descr_z.get_field_size(False) == rffi.sizeof(lltype.Ptr(U)) - assert descr_f.get_field_size(False) == rffi.sizeof(lltype.Float) - assert descr_s.get_field_size(False) == rffi.sizeof( - lltype.SingleFloat) + assert descr_x.field_size == rffi.sizeof(lltype.Char) + assert descr_y.field_size == rffi.sizeof(lltype.Ptr(T)) + assert descr_z.field_size == rffi.sizeof(lltype.Ptr(U)) + assert descr_f.field_size == rffi.sizeof(lltype.Float) + assert descr_s.field_size == rffi.sizeof(lltype.SingleFloat) else: assert isinstance(descr_x.offset, Symbolic) assert isinstance(descr_y.offset, Symbolic) assert isinstance(descr_z.offset, Symbolic) assert isinstance(descr_f.offset, Symbolic) assert isinstance(descr_s.offset, Symbolic) - assert isinstance(descr_x.get_field_size(True), Symbolic) - assert isinstance(descr_y.get_field_size(True), Symbolic) - assert isinstance(descr_z.get_field_size(True), Symbolic) - assert isinstance(descr_f.get_field_size(True), Symbolic) - assert isinstance(descr_s.get_field_size(True), Symbolic) - assert not descr_x.is_pointer_field() - assert descr_y.is_pointer_field() - assert not descr_z.is_pointer_field() - assert not descr_f.is_pointer_field() - assert not descr_s.is_pointer_field() - assert not descr_x.is_float_field() - assert not descr_y.is_float_field() - assert not descr_z.is_float_field() - assert descr_f.is_float_field() - assert not descr_s.is_float_field() + assert isinstance(descr_x.field_size, Symbolic) + assert isinstance(descr_y.field_size, Symbolic) + assert isinstance(descr_z.field_size, Symbolic) + assert isinstance(descr_f.field_size, Symbolic) + assert isinstance(descr_s.field_size, Symbolic) + assert descr_x.flag == FLAG_UNSIGNED + assert descr_y.flag == FLAG_POINTER + assert descr_z.flag == FLAG_UNSIGNED + assert descr_f.flag == FLAG_FLOAT + assert descr_s.flag == FLAG_UNSIGNED def test_get_field_descr_sign(): @@ -128,7 +106,8 @@ for tsc in [False, True]: c2 = 
GcCache(tsc) descr_x = get_field_descr(c2, S, 'x') - assert descr_x.is_field_signed() == signed + assert descr_x.flag == {False: FLAG_UNSIGNED, + True: FLAG_SIGNED }[signed] def test_get_field_descr_longlong(): if sys.maxint > 2147483647: @@ -136,9 +115,8 @@ c0 = GcCache(False) S = lltype.GcStruct('S', ('y', lltype.UnsignedLongLong)) descr = get_field_descr(c0, S, 'y') - assert not descr.is_pointer_field() - assert descr.is_float_field() - assert descr.get_field_size(False) == 8 + assert descr.flag == FLAG_FLOAT + assert descr.field_size == 8 def test_get_array_descr(): @@ -149,19 +127,8 @@ A3 = lltype.GcArray(lltype.Ptr(U)) A4 = lltype.GcArray(lltype.Float) A5 = lltype.GcArray(lltype.Struct('x', ('v', lltype.Signed), - ('k', lltype.Signed))) + ('k', lltype.Signed))) A6 = lltype.GcArray(lltype.SingleFloat) - assert getArrayDescrClass(A2) is GcPtrArrayDescr - assert getArrayDescrClass(A3) is NonGcPtrArrayDescr - cls = getArrayDescrClass(A1) - assert cls != getArrayDescrClass(lltype.GcArray(lltype.Signed)) - assert cls == getArrayDescrClass(lltype.GcArray(lltype.Char)) - clsf = getArrayDescrClass(A4) - assert clsf != cls - assert clsf == getArrayDescrClass(lltype.GcArray(lltype.Float)) - clss = getArrayDescrClass(A6) - assert clss not in (clsf, cls) - assert clss == getArrayDescrClass(lltype.GcArray(rffi.UINT)) # c0 = GcCache(False) descr1 = get_array_descr(c0, A1) @@ -170,82 +137,61 @@ descr4 = get_array_descr(c0, A4) descr5 = get_array_descr(c0, A5) descr6 = get_array_descr(c0, A6) - assert descr1.__class__ is cls - assert descr2.__class__ is GcPtrArrayDescr - assert descr3.__class__ is NonGcPtrArrayDescr - assert descr4.__class__ is clsf - assert descr6.__class__ is clss + assert isinstance(descr1, ArrayDescr) assert descr1 == get_array_descr(c0, lltype.GcArray(lltype.Char)) - assert not descr1.is_array_of_pointers() - assert descr2.is_array_of_pointers() - assert not descr3.is_array_of_pointers() - assert not descr4.is_array_of_pointers() - assert not 
descr5.is_array_of_pointers() - assert not descr1.is_array_of_floats() - assert not descr2.is_array_of_floats() - assert not descr3.is_array_of_floats() - assert descr4.is_array_of_floats() - assert not descr5.is_array_of_floats() + assert descr1.flag == FLAG_UNSIGNED + assert descr2.flag == FLAG_POINTER + assert descr3.flag == FLAG_UNSIGNED + assert descr4.flag == FLAG_FLOAT + assert descr5.flag == FLAG_STRUCT + assert descr6.flag == FLAG_UNSIGNED # def get_alignment(code): # Retrieve default alignment for the compiler/platform return struct.calcsize('l' + code) - struct.calcsize(code) - assert descr1.get_base_size(False) == get_alignment('c') - assert descr2.get_base_size(False) == get_alignment('p') - assert descr3.get_base_size(False) == get_alignment('p') - assert descr4.get_base_size(False) == get_alignment('d') - assert descr5.get_base_size(False) == get_alignment('f') - assert descr1.get_ofs_length(False) == 0 - assert descr2.get_ofs_length(False) == 0 - assert descr3.get_ofs_length(False) == 0 - assert descr4.get_ofs_length(False) == 0 - assert descr5.get_ofs_length(False) == 0 - assert descr1.get_item_size(False) == rffi.sizeof(lltype.Char) - assert descr2.get_item_size(False) == rffi.sizeof(lltype.Ptr(T)) - assert descr3.get_item_size(False) == rffi.sizeof(lltype.Ptr(U)) - assert descr4.get_item_size(False) == rffi.sizeof(lltype.Float) - assert descr5.get_item_size(False) == rffi.sizeof(lltype.Signed) * 2 - assert descr6.get_item_size(False) == rffi.sizeof(lltype.SingleFloat) + assert descr1.basesize == get_alignment('c') + assert descr2.basesize == get_alignment('p') + assert descr3.basesize == get_alignment('p') + assert descr4.basesize == get_alignment('d') + assert descr5.basesize == get_alignment('f') + assert descr1.lendescr.offset == 0 + assert descr2.lendescr.offset == 0 + assert descr3.lendescr.offset == 0 + assert descr4.lendescr.offset == 0 + assert descr5.lendescr.offset == 0 + assert descr1.itemsize == rffi.sizeof(lltype.Char) + assert 
descr2.itemsize == rffi.sizeof(lltype.Ptr(T)) + assert descr3.itemsize == rffi.sizeof(lltype.Ptr(U)) + assert descr4.itemsize == rffi.sizeof(lltype.Float) + assert descr5.itemsize == rffi.sizeof(lltype.Signed) * 2 + assert descr6.itemsize == rffi.sizeof(lltype.SingleFloat) # - assert isinstance(descr1.get_base_size(True), Symbolic) - assert isinstance(descr2.get_base_size(True), Symbolic) - assert isinstance(descr3.get_base_size(True), Symbolic) - assert isinstance(descr4.get_base_size(True), Symbolic) - assert isinstance(descr5.get_base_size(True), Symbolic) - assert isinstance(descr1.get_ofs_length(True), Symbolic) - assert isinstance(descr2.get_ofs_length(True), Symbolic) - assert isinstance(descr3.get_ofs_length(True), Symbolic) - assert isinstance(descr4.get_ofs_length(True), Symbolic) - assert isinstance(descr5.get_ofs_length(True), Symbolic) - assert isinstance(descr1.get_item_size(True), Symbolic) - assert isinstance(descr2.get_item_size(True), Symbolic) - assert isinstance(descr3.get_item_size(True), Symbolic) - assert isinstance(descr4.get_item_size(True), Symbolic) - assert isinstance(descr5.get_item_size(True), Symbolic) CA = rffi.CArray(lltype.Signed) descr = get_array_descr(c0, CA) - assert not descr.is_array_of_floats() - assert descr.get_base_size(False) == 0 - assert descr.get_ofs_length(False) == -1 + assert descr.flag == FLAG_SIGNED + assert descr.basesize == 0 + assert descr.lendescr is None CA = rffi.CArray(lltype.Ptr(lltype.GcStruct('S'))) descr = get_array_descr(c0, CA) - assert descr.is_array_of_pointers() - assert descr.get_base_size(False) == 0 - assert descr.get_ofs_length(False) == -1 + assert descr.flag == FLAG_POINTER + assert descr.basesize == 0 + assert descr.lendescr is None CA = rffi.CArray(lltype.Ptr(lltype.Struct('S'))) descr = get_array_descr(c0, CA) - assert descr.get_base_size(False) == 0 - assert descr.get_ofs_length(False) == -1 + assert descr.flag == FLAG_UNSIGNED + assert descr.basesize == 0 + assert descr.lendescr is None 
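The `get_alignment()` helper used in `test_get_array_descr` above derives the expected `basesize` of a GC array from `struct.calcsize`: the array header is one machine-word length field, padded up to the alignment of the item type. A standalone sketch of that trick — the function name `array_base_size` is mine, not from the patch:

```python
import struct

def array_base_size(item_fmt):
    # calcsize('l' + fmt) lays out one machine word followed by one item
    # using native alignment, so subtracting calcsize(fmt) leaves the
    # padded header size, i.e. the offset of the first array item.
    return struct.calcsize('l' + item_fmt) - struct.calcsize(item_fmt)

# A char item needs no padding, so its items start right after the
# length word; a word-sized item trivially does the same.
print(array_base_size('c'), array_base_size('l'))
```

This is the same expression as `get_alignment(code)` in the test, just under an assumed name; the concrete numbers depend on the platform's native struct alignment.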
CA = rffi.CArray(lltype.Float) descr = get_array_descr(c0, CA) - assert descr.is_array_of_floats() - assert descr.get_base_size(False) == 0 - assert descr.get_ofs_length(False) == -1 + assert descr.flag == FLAG_FLOAT + assert descr.basesize == 0 + assert descr.lendescr is None CA = rffi.CArray(rffi.FLOAT) descr = get_array_descr(c0, CA) - assert not descr.is_array_of_floats() - assert descr.get_base_size(False) == 0 - assert descr.get_ofs_length(False) == -1 + assert descr.flag == FLAG_UNSIGNED + assert descr.basesize == 0 + assert descr.itemsize == rffi.sizeof(lltype.SingleFloat) + assert descr.lendescr is None def test_get_array_descr_sign(): @@ -257,46 +203,55 @@ for tsc in [False, True]: c2 = GcCache(tsc) arraydescr = get_array_descr(c2, A) - assert arraydescr.is_item_signed() == signed + assert arraydescr.flag == {False: FLAG_UNSIGNED, + True: FLAG_SIGNED }[signed] # RA = rffi.CArray(RESTYPE) for tsc in [False, True]: c2 = GcCache(tsc) arraydescr = get_array_descr(c2, RA) - assert arraydescr.is_item_signed() == signed + assert arraydescr.flag == {False: FLAG_UNSIGNED, + True: FLAG_SIGNED }[signed] + + +def test_get_array_descr_str(): + c0 = GcCache(False) + descr1 = get_array_descr(c0, rstr.STR) + assert descr1.itemsize == rffi.sizeof(lltype.Char) + assert descr1.flag == FLAG_UNSIGNED def test_get_call_descr_not_translated(): c0 = GcCache(False) descr1 = get_call_descr(c0, [lltype.Char, lltype.Signed], lltype.Char) - assert descr1.get_result_size(False) == rffi.sizeof(lltype.Char) - assert descr1.get_return_type() == history.INT + assert descr1.get_result_size() == rffi.sizeof(lltype.Char) + assert descr1.get_result_type() == history.INT assert descr1.arg_classes == "ii" # T = lltype.GcStruct('T') descr2 = get_call_descr(c0, [lltype.Ptr(T)], lltype.Ptr(T)) - assert descr2.get_result_size(False) == rffi.sizeof(lltype.Ptr(T)) - assert descr2.get_return_type() == history.REF + assert descr2.get_result_size() == rffi.sizeof(lltype.Ptr(T)) + assert 
descr2.get_result_type() == history.REF assert descr2.arg_classes == "r" # U = lltype.GcStruct('U', ('x', lltype.Signed)) assert descr2 == get_call_descr(c0, [lltype.Ptr(U)], lltype.Ptr(U)) # V = lltype.Struct('V', ('x', lltype.Signed)) - assert (get_call_descr(c0, [], lltype.Ptr(V)).get_return_type() == + assert (get_call_descr(c0, [], lltype.Ptr(V)).get_result_type() == history.INT) # - assert (get_call_descr(c0, [], lltype.Void).get_return_type() == + assert (get_call_descr(c0, [], lltype.Void).get_result_type() == history.VOID) # descr4 = get_call_descr(c0, [lltype.Float, lltype.Float], lltype.Float) - assert descr4.get_result_size(False) == rffi.sizeof(lltype.Float) - assert descr4.get_return_type() == history.FLOAT + assert descr4.get_result_size() == rffi.sizeof(lltype.Float) + assert descr4.get_result_type() == history.FLOAT assert descr4.arg_classes == "ff" # descr5 = get_call_descr(c0, [lltype.SingleFloat], lltype.SingleFloat) - assert descr5.get_result_size(False) == rffi.sizeof(lltype.SingleFloat) - assert descr5.get_return_type() == "S" + assert descr5.get_result_size() == rffi.sizeof(lltype.SingleFloat) + assert descr5.get_result_type() == "S" assert descr5.arg_classes == "S" def test_get_call_descr_not_translated_longlong(): @@ -305,13 +260,13 @@ c0 = GcCache(False) # descr5 = get_call_descr(c0, [lltype.SignedLongLong], lltype.Signed) - assert descr5.get_result_size(False) == 4 - assert descr5.get_return_type() == history.INT + assert descr5.get_result_size() == 4 + assert descr5.get_result_type() == history.INT assert descr5.arg_classes == "L" # descr6 = get_call_descr(c0, [lltype.Signed], lltype.SignedLongLong) - assert descr6.get_result_size(False) == 8 - assert descr6.get_return_type() == "L" + assert descr6.get_result_size() == 8 + assert descr6.get_result_type() == "L" assert descr6.arg_classes == "i" def test_get_call_descr_translated(): @@ -319,18 +274,18 @@ T = lltype.GcStruct('T') U = lltype.GcStruct('U', ('x', lltype.Signed)) descr3 = 
get_call_descr(c1, [lltype.Ptr(T)], lltype.Ptr(U)) - assert isinstance(descr3.get_result_size(True), Symbolic) - assert descr3.get_return_type() == history.REF + assert isinstance(descr3.get_result_size(), Symbolic) + assert descr3.get_result_type() == history.REF assert descr3.arg_classes == "r" # descr4 = get_call_descr(c1, [lltype.Float, lltype.Float], lltype.Float) - assert isinstance(descr4.get_result_size(True), Symbolic) - assert descr4.get_return_type() == history.FLOAT + assert isinstance(descr4.get_result_size(), Symbolic) + assert descr4.get_result_type() == history.FLOAT assert descr4.arg_classes == "ff" # descr5 = get_call_descr(c1, [lltype.SingleFloat], lltype.SingleFloat) - assert isinstance(descr5.get_result_size(True), Symbolic) - assert descr5.get_return_type() == "S" + assert isinstance(descr5.get_result_size(), Symbolic) + assert descr5.get_result_type() == "S" assert descr5.arg_classes == "S" def test_call_descr_extra_info(): @@ -358,6 +313,10 @@ def test_repr_of_descr(): + def repr_of_descr(descr): + s = descr.repr_of_descr() + assert ',' not in s # makes the life easier for pypy.tool.jitlogparser + return s c0 = GcCache(False) T = lltype.GcStruct('T') S = lltype.GcStruct('S', ('x', lltype.Char), @@ -365,33 +324,34 @@ ('z', lltype.Ptr(T))) descr1 = get_size_descr(c0, S) s = symbolic.get_size(S, False) - assert descr1.repr_of_descr() == '' % s + assert repr_of_descr(descr1) == '' % s # descr2 = get_field_descr(c0, S, 'y') o, _ = symbolic.get_field_token(S, 'y', False) - assert descr2.repr_of_descr() == '' % o + assert repr_of_descr(descr2) == '' % o # descr2i = get_field_descr(c0, S, 'x') o, _ = symbolic.get_field_token(S, 'x', False) - assert descr2i.repr_of_descr() == '' % o + assert repr_of_descr(descr2i) == '' % o # descr3 = get_array_descr(c0, lltype.GcArray(lltype.Ptr(S))) - assert descr3.repr_of_descr() == '' + o = symbolic.get_size(lltype.Ptr(S), False) + assert repr_of_descr(descr3) == '' % o # descr3i = get_array_descr(c0, 
lltype.GcArray(lltype.Char)) - assert descr3i.repr_of_descr() == '' + assert repr_of_descr(descr3i) == '' # descr4 = get_call_descr(c0, [lltype.Char, lltype.Ptr(S)], lltype.Ptr(S)) - assert 'GcPtrCallDescr' in descr4.repr_of_descr() + assert repr_of_descr(descr4) == '' % o # descr4i = get_call_descr(c0, [lltype.Char, lltype.Ptr(S)], lltype.Char) - assert 'CharCallDescr' in descr4i.repr_of_descr() + assert repr_of_descr(descr4i) == '' # descr4f = get_call_descr(c0, [lltype.Char, lltype.Ptr(S)], lltype.Float) - assert 'FloatCallDescr' in descr4f.repr_of_descr() + assert repr_of_descr(descr4f) == '' # descr5f = get_call_descr(c0, [lltype.Char], lltype.SingleFloat) - assert 'SingleFloatCallDescr' in descr5f.repr_of_descr() + assert repr_of_descr(descr5f) == '' def test_call_stubs_1(): c0 = GcCache(False) @@ -401,10 +361,10 @@ def f(a, b): return 'c' - call_stub = descr1.call_stub fnptr = llhelper(lltype.Ptr(lltype.FuncType(ARGS, RES)), f) - res = call_stub(rffi.cast(lltype.Signed, fnptr), [1, 2], None, None) + res = descr1.call_stub_i(rffi.cast(lltype.Signed, fnptr), + [1, 2], None, None) assert res == ord('c') def test_call_stubs_2(): @@ -421,8 +381,8 @@ a = lltype.malloc(ARRAY, 3) opaquea = lltype.cast_opaque_ptr(llmemory.GCREF, a) a[0] = 1 - res = descr2.call_stub(rffi.cast(lltype.Signed, fnptr), - [], [opaquea], [longlong.getfloatstorage(3.5)]) + res = descr2.call_stub_f(rffi.cast(lltype.Signed, fnptr), + [], [opaquea], [longlong.getfloatstorage(3.5)]) assert longlong.getrealfloat(res) == 4.5 def test_call_stubs_single_float(): @@ -445,6 +405,22 @@ a = intmask(singlefloat2uint(r_singlefloat(-10.0))) b = intmask(singlefloat2uint(r_singlefloat(3.0))) c = intmask(singlefloat2uint(r_singlefloat(2.0))) - res = descr2.call_stub(rffi.cast(lltype.Signed, fnptr), - [a, b, c], [], []) + res = descr2.call_stub_i(rffi.cast(lltype.Signed, fnptr), + [a, b, c], [], []) assert float(uint2singlefloat(rffi.r_uint(res))) == -11.5 + +def test_field_arraylen_descr(): + c0 = 
GcCache(True) + A1 = lltype.GcArray(lltype.Signed) + fielddescr = get_field_arraylen_descr(c0, A1) + assert isinstance(fielddescr, FieldDescr) + ofs = fielddescr.offset + assert repr(ofs) == '< ArrayLengthOffset >' + # + fielddescr = get_field_arraylen_descr(c0, rstr.STR) + ofs = fielddescr.offset + assert repr(ofs) == ("< " + " 'chars'> + < ArrayLengthOffset" + " > >") + # caching: + assert fielddescr is get_field_arraylen_descr(c0, rstr.STR) diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py --- a/pypy/jit/backend/llsupport/test/test_ffisupport.py +++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py @@ -1,5 +1,6 @@ from pypy.rlib.libffi import types from pypy.jit.codewriter.longlong import is_64_bit +from pypy.jit.backend.llsupport.descr import * from pypy.jit.backend.llsupport.ffisupport import * @@ -15,7 +16,9 @@ args = [types.sint, types.pointer] descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None, ffi_flags=42) - assert isinstance(descr, DynamicIntCallDescr) + assert isinstance(descr, CallDescr) + assert descr.result_type == 'i' + assert descr.result_flag == FLAG_SIGNED assert descr.arg_classes == 'ii' assert descr.get_ffi_flags() == 42 @@ -24,18 +27,20 @@ assert descr is None # missing floats descr = get_call_descr_dynamic(FakeCPU(supports_floats=True), args, types.void, None, ffi_flags=43) - assert isinstance(descr, VoidCallDescr) + assert descr.result_type == 'v' + assert descr.result_flag == FLAG_VOID assert descr.arg_classes == 'ifi' assert descr.get_ffi_flags() == 43 descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42) - assert isinstance(descr, DynamicIntCallDescr) - assert descr.get_result_size(False) == 1 + assert descr.get_result_size() == 1 + assert descr.result_flag == FLAG_SIGNED assert descr.is_result_signed() == True descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42) - assert isinstance(descr, DynamicIntCallDescr) - assert 
descr.get_result_size(False) == 1 + assert isinstance(descr, CallDescr) + assert descr.get_result_size() == 1 + assert descr.result_flag == FLAG_UNSIGNED assert descr.is_result_signed() == False if not is_64_bit: @@ -44,7 +49,9 @@ assert descr is None # missing longlongs descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True), [], types.slonglong, None, ffi_flags=43) - assert isinstance(descr, LongLongCallDescr) + assert isinstance(descr, CallDescr) + assert descr.result_flag == FLAG_FLOAT + assert descr.result_type == 'L' assert descr.get_ffi_flags() == 43 else: assert types.slonglong is types.slong @@ -53,6 +60,6 @@ assert descr is None # missing singlefloats descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True), [], types.float, None, ffi_flags=44) - SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) - assert isinstance(descr, SingleFloatCallDescr) + assert descr.result_flag == FLAG_UNSIGNED + assert descr.result_type == 'S' assert descr.get_ffi_flags() == 44 diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py --- a/pypy/jit/backend/llsupport/test/test_gc.py +++ b/pypy/jit/backend/llsupport/test/test_gc.py @@ -6,6 +6,7 @@ from pypy.jit.backend.llsupport.gc import * from pypy.jit.backend.llsupport import symbolic from pypy.jit.metainterp.gc import get_description +from pypy.jit.metainterp.history import BoxPtr, BoxInt, ConstPtr from pypy.jit.metainterp.resoperation import get_deep_immutable_oplist from pypy.jit.tool.oparser import parse from pypy.rpython.lltypesystem.rclass import OBJECT, OBJECT_VTABLE @@ -15,12 +16,12 @@ gc_ll_descr = GcLLDescr_boehm(None, None, None) # record = [] - prev_funcptr_for_new = gc_ll_descr.funcptr_for_new - def my_funcptr_for_new(size): - p = prev_funcptr_for_new(size) + prev_malloc_fn_ptr = gc_ll_descr.malloc_fn_ptr + def my_malloc_fn_ptr(size): + p = prev_malloc_fn_ptr(size) record.append((size, p)) return p - gc_ll_descr.funcptr_for_new = my_funcptr_for_new + 
gc_ll_descr.malloc_fn_ptr = my_malloc_fn_ptr
     #
     # ---------- gc_malloc ----------
     S = lltype.GcStruct('S', ('x', lltype.Signed))
@@ -32,8 +33,8 @@
     A = lltype.GcArray(lltype.Signed)
     arraydescr = get_array_descr(gc_ll_descr, A)
     p = gc_ll_descr.gc_malloc_array(arraydescr, 10)
-    assert record == [(arraydescr.get_base_size(False) +
-                       10 * arraydescr.get_item_size(False), p)]
+    assert record == [(arraydescr.basesize +
+                       10 * arraydescr.itemsize, p)]
     del record[:]
     # ---------- gc_malloc_str ----------
     p = gc_ll_descr.gc_malloc_str(10)
@@ -246,24 +247,28 @@
     def __init__(self):
         self.record = []

+    def _malloc(self, type_id, size):
+        tid = llop.combine_ushort(lltype.Signed, type_id, 0)
+        x = llmemory.raw_malloc(self.gcheaderbuilder.size_gc_header + size)
+        x += self.gcheaderbuilder.size_gc_header
+        return x, tid
+
     def do_malloc_fixedsize_clear(self, RESTYPE, type_id, size,
                                   has_finalizer, has_light_finalizer,
                                   contains_weakptr):
         assert not contains_weakptr
-        assert not has_finalizer           # in these tests
-        assert not has_light_finalizer     # in these tests
-        p = llmemory.raw_malloc(size)
+        assert not has_finalizer
+        assert not has_light_finalizer
+        p, tid = self._malloc(type_id, size)
         p = llmemory.cast_adr_to_ptr(p, RESTYPE)
-        tid = llop.combine_ushort(lltype.Signed, type_id, 0)
         self.record.append(("fixedsize", repr(size), tid, p))
         return p

     def do_malloc_varsize_clear(self, RESTYPE, type_id, length, size,
                                 itemsize, offset_to_length):
-        p = llmemory.raw_malloc(size + itemsize * length)
+        p, tid = self._malloc(type_id, size + itemsize * length)
         (p + offset_to_length).signed[0] = length
         p = llmemory.cast_adr_to_ptr(p, RESTYPE)
-        tid = llop.combine_ushort(lltype.Signed, type_id, 0)
         self.record.append(("varsize", tid, length,
                             repr(size), repr(itemsize),
                             repr(offset_to_length), p))
@@ -322,43 +327,40 @@
         gc_ll_descr = GcLLDescr_framework(gcdescr, FakeTranslator(), None,
                                           llop1)
         gc_ll_descr.initialize()
+        llop1.gcheaderbuilder = gc_ll_descr.gcheaderbuilder
         self.llop1 = llop1
         self.gc_ll_descr = 
gc_ll_descr self.fake_cpu = FakeCPU() - def test_args_for_new(self): - S = lltype.GcStruct('S', ('x', lltype.Signed)) - sizedescr = get_size_descr(self.gc_ll_descr, S) - args = self.gc_ll_descr.args_for_new(sizedescr) - for x in args: - assert lltype.typeOf(x) == lltype.Signed - A = lltype.GcArray(lltype.Signed) - arraydescr = get_array_descr(self.gc_ll_descr, A) - args = self.gc_ll_descr.args_for_new(sizedescr) - for x in args: - assert lltype.typeOf(x) == lltype.Signed +## def test_args_for_new(self): +## S = lltype.GcStruct('S', ('x', lltype.Signed)) +## sizedescr = get_size_descr(self.gc_ll_descr, S) +## args = self.gc_ll_descr.args_for_new(sizedescr) +## for x in args: +## assert lltype.typeOf(x) == lltype.Signed +## A = lltype.GcArray(lltype.Signed) +## arraydescr = get_array_descr(self.gc_ll_descr, A) +## args = self.gc_ll_descr.args_for_new(sizedescr) +## for x in args: +## assert lltype.typeOf(x) == lltype.Signed def test_gc_malloc(self): S = lltype.GcStruct('S', ('x', lltype.Signed)) sizedescr = get_size_descr(self.gc_ll_descr, S) p = self.gc_ll_descr.gc_malloc(sizedescr) - assert self.llop1.record == [("fixedsize", - repr(sizedescr.size), + assert lltype.typeOf(p) == llmemory.GCREF + assert self.llop1.record == [("fixedsize", repr(sizedescr.size), sizedescr.tid, p)] - assert repr(self.gc_ll_descr.args_for_new(sizedescr)) == repr( - [sizedescr.size, sizedescr.tid]) def test_gc_malloc_array(self): A = lltype.GcArray(lltype.Signed) arraydescr = get_array_descr(self.gc_ll_descr, A) p = self.gc_ll_descr.gc_malloc_array(arraydescr, 10) assert self.llop1.record == [("varsize", arraydescr.tid, 10, - repr(arraydescr.get_base_size(True)), - repr(arraydescr.get_item_size(True)), - repr(arraydescr.get_ofs_length(True)), + repr(arraydescr.basesize), + repr(arraydescr.itemsize), + repr(arraydescr.lendescr.offset), p)] - assert repr(self.gc_ll_descr.args_for_new_array(arraydescr)) == repr( - [arraydescr.get_item_size(True), arraydescr.tid]) def 
test_gc_malloc_str(self): p = self.gc_ll_descr.gc_malloc_str(10) @@ -404,10 +406,11 @@ gc_ll_descr = self.gc_ll_descr llop1 = self.llop1 # - newops = [] + rewriter = GcRewriterAssembler(gc_ll_descr, None) + newops = rewriter.newops v_base = BoxPtr() v_value = BoxPtr() - gc_ll_descr._gen_write_barrier(newops, v_base, v_value) + rewriter.gen_write_barrier(v_base, v_value) assert llop1.record == [] assert len(newops) == 1 assert newops[0].getopnum() == rop.COND_CALL_GC_WB @@ -427,8 +430,7 @@ operations = gc_ll_descr.rewrite_assembler(None, operations, []) assert len(operations) == 0 - def test_rewrite_assembler_1(self): - # check recording of ConstPtrs + def test_record_constptrs(self): class MyFakeCPU(object): def cast_adr_to_int(self, adr): assert adr == "some fake address" @@ -455,211 +457,6 @@ assert operations2 == operations assert gcrefs == [s_gcref] - def test_rewrite_assembler_2(self): - # check write barriers before SETFIELD_GC - v_base = BoxPtr() - v_value = BoxPtr() - field_descr = AbstractDescr() - operations = [ - ResOperation(rop.SETFIELD_GC, [v_base, v_value], None, - descr=field_descr), - ] - gc_ll_descr = self.gc_ll_descr - operations = get_deep_immutable_oplist(operations) - operations = gc_ll_descr.rewrite_assembler(self.fake_cpu, operations, - []) - assert len(operations) == 2 - # - assert operations[0].getopnum() == rop.COND_CALL_GC_WB - assert operations[0].getarg(0) == v_base - assert operations[0].getarg(1) == v_value - assert operations[0].result is None - # - assert operations[1].getopnum() == rop.SETFIELD_RAW - assert operations[1].getarg(0) == v_base - assert operations[1].getarg(1) == v_value - assert operations[1].getdescr() == field_descr - - def test_rewrite_assembler_3(self): - # check write barriers before SETARRAYITEM_GC - for v_new_length in (None, ConstInt(5), ConstInt(5000), BoxInt()): - v_base = BoxPtr() - v_index = BoxInt() - v_value = BoxPtr() - array_descr = AbstractDescr() - operations = [ - ResOperation(rop.SETARRAYITEM_GC, 
[v_base, v_index, v_value], - None, descr=array_descr), - ] - if v_new_length is not None: - operations.insert(0, ResOperation(rop.NEW_ARRAY, - [v_new_length], v_base, - descr=array_descr)) - # we need to insert another, unrelated NEW_ARRAY here - # to prevent the initialization_store optimization - operations.insert(1, ResOperation(rop.NEW_ARRAY, - [ConstInt(12)], BoxPtr(), - descr=array_descr)) - gc_ll_descr = self.gc_ll_descr - operations = get_deep_immutable_oplist(operations) - operations = gc_ll_descr.rewrite_assembler(self.fake_cpu, - operations, []) - if v_new_length is not None: - assert operations[0].getopnum() == rop.NEW_ARRAY - assert operations[1].getopnum() == rop.NEW_ARRAY - del operations[:2] - assert len(operations) == 2 - # - assert operations[0].getopnum() == rop.COND_CALL_GC_WB - assert operations[0].getarg(0) == v_base - assert operations[0].getarg(1) == v_value - assert operations[0].result is None - # - assert operations[1].getopnum() == rop.SETARRAYITEM_RAW - assert operations[1].getarg(0) == v_base - assert operations[1].getarg(1) == v_index - assert operations[1].getarg(2) == v_value - assert operations[1].getdescr() == array_descr - - def test_rewrite_assembler_4(self): - # check write barriers before SETARRAYITEM_GC, - # if we have actually a write_barrier_from_array. 
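The cutoff being exercised in these tests (the constant 130, `LARGE` in `_gen_write_barrier_array` earlier in this diff) decides between the plain write barrier and the array variant: a statically-known small array gets `COND_CALL_GC_WB`, while an unknown or large length gets `COND_CALL_GC_WB_ARRAY`. A minimal standalone sketch of that dispatch — the function name and the string return values are mine, for illustration only:

```python
LARGE = 130  # same cutoff as in the patch above

def choose_write_barrier(known_length, have_wb_from_array):
    # Without a write_barrier_from_array helper there is only one choice.
    if not have_wb_from_array:
        return 'COND_CALL_GC_WB'
    # An unknown length is treated as "large": the plain barrier may have
    # to scan the whole array, so prefer the array variant, which can use
    # card marking instead.
    length = known_length if known_length is not None else LARGE
    if length >= LARGE:
        return 'COND_CALL_GC_WB_ARRAY'
    return 'COND_CALL_GC_WB'
```

This mirrors the expectations in the test: `ConstInt(5)` produces the plain barrier, while `ConstInt(5000)` or a `BoxInt` (unknown length) produces the array variant.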
-        self.llop1._have_wb_from_array = True
-        for v_new_length in (None, ConstInt(5), ConstInt(5000), BoxInt()):
-            v_base = BoxPtr()
-            v_index = BoxInt()
-            v_value = BoxPtr()
-            array_descr = AbstractDescr()
-            operations = [
-                ResOperation(rop.SETARRAYITEM_GC, [v_base, v_index, v_value],
-                             None, descr=array_descr),
-                ]
-            if v_new_length is not None:
-                operations.insert(0, ResOperation(rop.NEW_ARRAY,
-                                                  [v_new_length], v_base,
-                                                  descr=array_descr))
-                # we need to insert another, unrelated NEW_ARRAY here
-                # to prevent the initialization_store optimization
-                operations.insert(1, ResOperation(rop.NEW_ARRAY,
-                                                  [ConstInt(12)], BoxPtr(),
-                                                  descr=array_descr))
-            gc_ll_descr = self.gc_ll_descr
-            operations = get_deep_immutable_oplist(operations)
-            operations = gc_ll_descr.rewrite_assembler(self.fake_cpu,
-                                                       operations, [])
-            if v_new_length is not None:
-                assert operations[0].getopnum() == rop.NEW_ARRAY
-                assert operations[1].getopnum() == rop.NEW_ARRAY
-                del operations[:2]
-            assert len(operations) == 2
-            #
-            if isinstance(v_new_length, ConstInt) and v_new_length.value < 130:
-                assert operations[0].getopnum() == rop.COND_CALL_GC_WB
-                assert operations[0].getarg(0) == v_base
-                assert operations[0].getarg(1) == v_value
-            else:
-                assert operations[0].getopnum() == rop.COND_CALL_GC_WB_ARRAY
-                assert operations[0].getarg(0) == v_base
-                assert operations[0].getarg(1) == v_index
-                assert operations[0].getarg(2) == v_value
-            assert operations[0].result is None
-            #
-            assert operations[1].getopnum() == rop.SETARRAYITEM_RAW
-            assert operations[1].getarg(0) == v_base
-            assert operations[1].getarg(1) == v_index
-            assert operations[1].getarg(2) == v_value
-            assert operations[1].getdescr() == array_descr
-
-    def test_rewrite_assembler_5(self):
-        S = lltype.GcStruct('S')
-        A = lltype.GcArray(lltype.Struct('A', ('x', lltype.Ptr(S))))
-        interiordescr = get_interiorfield_descr(self.gc_ll_descr, A,
-                                                A.OF, 'x')
-        wbdescr = self.gc_ll_descr.write_barrier_descr
-        ops = parse("""
-        [p1, p2]
-        setinteriorfield_gc(p1, 0, p2, descr=interiordescr)
-        jump(p1, p2)
-        """, namespace=locals())
-        expected = parse("""
-        [p1, p2]
-        cond_call_gc_wb(p1, p2, descr=wbdescr)
-        setinteriorfield_raw(p1, 0, p2, descr=interiordescr)
-        jump(p1, p2)
-        """, namespace=locals())
-        operations = get_deep_immutable_oplist(ops.operations)
-        operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu,
-                                                        operations, [])
-        equaloplists(operations, expected.operations)
-
-    def test_rewrite_assembler_initialization_store(self):
-        S = lltype.GcStruct('S', ('parent', OBJECT),
-                            ('x', lltype.Signed))
-        s_vtable = lltype.malloc(OBJECT_VTABLE, immortal=True)
-        xdescr = get_field_descr(self.gc_ll_descr, S, 'x')
-        ops = parse("""
-        [p1]
-        p0 = new_with_vtable(ConstClass(s_vtable))
-        setfield_gc(p0, p1, descr=xdescr)
-        jump()
-        """, namespace=locals())
-        expected = parse("""
-        [p1]
-        p0 = new_with_vtable(ConstClass(s_vtable))
-        # no write barrier
-        setfield_gc(p0, p1, descr=xdescr)
-        jump()
-        """, namespace=locals())
-        operations = get_deep_immutable_oplist(ops.operations)
-        operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu,
-                                                        operations, [])
-        equaloplists(operations, expected.operations)
-
-    def test_rewrite_assembler_initialization_store_2(self):
-        S = lltype.GcStruct('S', ('parent', OBJECT),
-                            ('x', lltype.Signed))
-        s_vtable = lltype.malloc(OBJECT_VTABLE, immortal=True)
-        wbdescr = self.gc_ll_descr.write_barrier_descr
-        xdescr = get_field_descr(self.gc_ll_descr, S, 'x')
-        ops = parse("""
-        [p1]
-        p0 = new_with_vtable(ConstClass(s_vtable))
-        p3 = new_with_vtable(ConstClass(s_vtable))
-        setfield_gc(p0, p1, descr=xdescr)
-        jump()
-        """, namespace=locals())
-        expected = parse("""
-        [p1]
-        p0 = new_with_vtable(ConstClass(s_vtable))
-        p3 = new_with_vtable(ConstClass(s_vtable))
-        cond_call_gc_wb(p0, p1, descr=wbdescr)
-        setfield_raw(p0, p1, descr=xdescr)
-        jump()
-        """, namespace=locals())
-        operations = get_deep_immutable_oplist(ops.operations)
-        operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu,
-                                                        operations, [])
-        equaloplists(operations, expected.operations)
-
-    def test_rewrite_assembler_initialization_store_3(self):
-        A = lltype.GcArray(lltype.Ptr(lltype.GcStruct('S')))
-        arraydescr = get_array_descr(self.gc_ll_descr, A)
-        ops = parse("""
-        [p1]
-        p0 = new_array(3, descr=arraydescr)
-        setarrayitem_gc(p0, 0, p1, descr=arraydescr)
-        jump()
-        """, namespace=locals())
-        expected = parse("""
-        [p1]
-        p0 = new_array(3, descr=arraydescr)
-        setarrayitem_gc(p0, 0, p1, descr=arraydescr)
-        jump()
-        """, namespace=locals())

From noreply at buildbot.pypy.org  Wed Jan 11 15:55:31 2012
From: noreply at buildbot.pypy.org (l.diekmann)
Date: Wed, 11 Jan 2012 15:55:31 +0100 (CET)
Subject: [pypy-commit] pypy set-strategies: fixed tests: stringobject has now
	a listview_str method, too
Message-ID: <20120111145531.2DA1F82C03@wyvern.cs.uni-duesseldorf.de>

Author: Lukas Diekmann
Branch: set-strategies
Changeset: r51229:37da0bb1707e
Date: 2012-01-11 15:54 +0100
http://bitbucket.org/pypy/pypy/changeset/37da0bb1707e/

Log:	fixed tests: stringobject has now a listview_str method, too

diff --git a/pypy/objspace/std/test/test_liststrategies.py b/pypy/objspace/std/test/test_liststrategies.py
--- a/pypy/objspace/std/test/test_liststrategies.py
+++ b/pypy/objspace/std/test/test_liststrategies.py
@@ -420,7 +420,7 @@
 
     def test_listview_str(self):
         space = self.space
-        assert space.listview_str(space.wrap("a")) is None
+        assert space.listview_str(space.wrap("a")) == ["a"]
         w_l = self.space.newlist([self.space.wrap('a'), self.space.wrap('b')])
         assert space.listview_str(w_l) == ["a", "b"]

From noreply at buildbot.pypy.org  Wed Jan 11 15:55:32 2012
From: noreply at buildbot.pypy.org (l.diekmann)
Date: Wed, 11 Jan 2012 15:55:32 +0100 (CET)
Subject: [pypy-commit] pypy set-strategies: (cfbolz, l.diekmann): added
	fastpath for dict.keys if keys are strings
Message-ID: <20120111145532.5F1C382C03@wyvern.cs.uni-duesseldorf.de>

Author: Lukas Diekmann
Branch: set-strategies
Changeset: r51230:a26b3141a0d4
Date: 2012-01-11 15:55 +0100
http://bitbucket.org/pypy/pypy/changeset/a26b3141a0d4/

Log:	(cfbolz, l.diekmann): added fastpath for dict.keys if keys are
	strings

diff --git a/pypy/objspace/std/celldict.py b/pypy/objspace/std/celldict.py
--- a/pypy/objspace/std/celldict.py
+++ b/pypy/objspace/std/celldict.py
@@ -127,10 +127,10 @@
     def iter(self, w_dict):
         return ModuleDictIteratorImplementation(self.space, self, w_dict)
 
-    def keys(self, w_dict):
+    def w_keys(self, w_dict):
         space = self.space
-        iterator = self.unerase(w_dict.dstorage).iteritems
-        return [space.wrap(key) for key, cell in iterator()]
+        l = self.unerase(w_dict.dstorage).keys()
+        return space.newlist_str(l)
 
     def values(self, w_dict):
         iterator = self.unerase(w_dict.dstorage).itervalues
diff --git a/pypy/objspace/std/dictmultiobject.py b/pypy/objspace/std/dictmultiobject.py
--- a/pypy/objspace/std/dictmultiobject.py
+++ b/pypy/objspace/std/dictmultiobject.py
@@ -89,7 +89,7 @@
 def _add_indirections():
     dict_methods = "setitem setitem_str getitem \
                     getitem_str delitem length \
-                    clear keys values \
+                    clear w_keys values \
                     items iter setdefault \
                     popitem listview_str listview_int".split()
 
@@ -112,7 +112,7 @@
     def get_empty_storage(self):
         raise NotImplementedError
 
-    def keys(self, w_dict):
+    def w_keys(self, w_dict):
         iterator = self.iter(w_dict)
         result = []
         while 1:
@@ -120,7 +120,7 @@
             if w_key is not None:
                 result.append(w_key)
             else:
-                return result
+                return self.space.newlist(result)
 
     def values(self, w_dict):
         iterator = self.iter(w_dict)
@@ -364,8 +364,9 @@
         self.switch_to_object_strategy(w_dict)
         return w_dict.getitem(w_key)
 
-    def keys(self, w_dict):
-        return [self.wrap(key) for key in self.unerase(w_dict.dstorage).iterkeys()]
+    def w_keys(self, w_dict):
+        l = [self.wrap(key) for key in self.unerase(w_dict.dstorage).iterkeys()]
+        return self.space.newlist(l)
 
     def values(self, w_dict):
         return self.unerase(w_dict.dstorage).values()
@@ -418,8 +419,8 @@
     def iter(self, w_dict):
         return ObjectIteratorImplementation(self.space, self, w_dict)
 
-    def keys(self, w_dict):
-        return self.unerase(w_dict.dstorage).keys()
+    def w_keys(self, w_dict):
+        return self.space.newlist(self.unerase(w_dict.dstorage).keys())
 
 
 class StringDictStrategy(AbstractTypedStrategy, DictStrategy):
@@ -468,6 +469,9 @@
     def iter(self, w_dict):
         return StrIteratorImplementation(self.space, self, w_dict)
 
+    def w_keys(self, w_dict):
+        return self.space.newlist_str(self.listview_str(w_dict))
+
 
 class _WrappedIteratorMixin(object):
     _mixin_ = True
@@ -533,6 +537,11 @@
     def listview_int(self, w_dict):
         return self.unerase(w_dict.dstorage).keys()
 
+    def w_keys(self, w_dict):
+        # XXX there is no space.newlist_int yet
+        space = self.space
+        return space.call_function(space.w_list, w_dict)
+
 
 class IntIteratorImplementation(_WrappedIteratorMixin, IteratorImplementation):
     pass
@@ -687,8 +696,7 @@
     return space.newlist(w_self.items())
 
 def dict_keys__DictMulti(space, w_self):
-    #XXX add fastpath for strategies here
-    return space.newlist(w_self.keys())
+    return w_self.w_keys()
 
 def dict_values__DictMulti(space, w_self):
     return space.newlist(w_self.values())
diff --git a/pypy/objspace/std/dictproxyobject.py b/pypy/objspace/std/dictproxyobject.py
--- a/pypy/objspace/std/dictproxyobject.py
+++ b/pypy/objspace/std/dictproxyobject.py
@@ -75,7 +75,7 @@
 
     def keys(self, w_dict):
         space = self.space
-        return [space.wrap(key) for key in self.unerase(w_dict.dstorage).dict_w.iterkeys()]
+        return space.newlist_str(self.unerase(w_dict.dstorage).dict_w.keys())
 
     def values(self, w_dict):
         return [unwrap_cell(self.space, w_value) for w_value in self.unerase(w_dict.dstorage).dict_w.itervalues()]
diff --git a/pypy/objspace/std/mapdict.py b/pypy/objspace/std/mapdict.py
--- a/pypy/objspace/std/mapdict.py
+++ b/pypy/objspace/std/mapdict.py
@@ -694,6 +694,8 @@
         self.delitem(w_dict, w_key)
         return (w_key, w_value)
 
+    # XXX could implement a more efficient w_keys based on space.newlist_str
+
 def materialize_r_dict(space, obj, dict_w):
     map = obj._get_mapdict_map()
     new_obj = map.materialize_r_dict(space, obj, dict_w)
diff --git a/pypy/objspace/std/test/test_dictmultiobject.py b/pypy/objspace/std/test/test_dictmultiobject.py
--- a/pypy/objspace/std/test/test_dictmultiobject.py
+++ b/pypy/objspace/std/test/test_dictmultiobject.py
@@ -157,6 +157,20 @@
 
         assert self.space.listview_int(w_d) == [1, 2]
 
+    def test_keys_on_string_int_dict(self):
+        w = self.space.wrap
+        w_d = self.space.newdict()
+        w_d.initialize_content([(w(1), w("a")), (w(2), w("b"))])
+
+        w_l = self.space.call_method(w_d, "keys")
+        assert sorted(self.space.listview_int(w_l)) == [1,2]
+
+        w_d = self.space.newdict()
+        w_d.initialize_content([(w("a"), w(1)), (w("b"), w(6))])
+
+        w_l = self.space.call_method(w_d, "keys")
+        assert sorted(self.space.listview_str(w_l)) == ["a", "b"]
+
 class AppTest_DictObject:
     def setup_class(cls):
         cls.w_on_pypy = cls.space.wrap("__pypy__" in sys.builtin_module_names)
@@ -816,7 +830,9 @@
             return x == y
         eq_w = eq
         def newlist(self, l):
-            return []
+            return l
+        def newlist_str(self, l):
+            return l
         DictObjectCls = W_DictMultiObject
         def type(self, w_obj):
             if isinstance(w_obj, FakeString):
@@ -956,7 +972,7 @@
 
     def test_keys(self):
         self.fill_impl()
-        keys = self.impl.keys()
+        keys = self.impl.w_keys() # wrapped lists = lists in the fake space
         keys.sort()
         assert keys == [self.string, self.string2]
         self.check_not_devolved()
@@ -1034,8 +1050,8 @@
         d.setitem("s", 12)
         d.delitem(F())
 
-        assert "s" not in d.keys()
-        assert F() not in d.keys()
+        assert "s" not in d.w_keys()
+        assert F() not in d.w_keys()
 
 class TestStrDictImplementation(BaseTestRDictImplementation):
     StrategyClass = StringDictStrategy

From noreply at buildbot.pypy.org  Wed Jan 11 16:07:43 2012
From: noreply at buildbot.pypy.org (l.diekmann)
Date: Wed, 11 Jan 2012 16:07:43 +0100 (CET)
Subject: [pypy-commit] pypy set-strategies: fixed test: show that
	listview_str returns None for other objects
Message-ID: <20120111150743.23DA882C03@wyvern.cs.uni-duesseldorf.de>

Author: Lukas Diekmann
Branch: set-strategies
Changeset: r51231:aae0411e2217
Date: 2012-01-11 16:07 +0100
http://bitbucket.org/pypy/pypy/changeset/aae0411e2217/

Log:	fixed test: show that listview_str returns None for other objects

diff --git a/pypy/objspace/std/test/test_liststrategies.py b/pypy/objspace/std/test/test_liststrategies.py
--- a/pypy/objspace/std/test/test_liststrategies.py
+++ b/pypy/objspace/std/test/test_liststrategies.py
@@ -420,7 +420,7 @@
 
     def test_listview_str(self):
         space = self.space
-        assert space.listview_str(space.wrap("a")) == ["a"]
+        assert space.listview_str(space.wrap(1)) == None
         w_l = self.space.newlist([self.space.wrap('a'), self.space.wrap('b')])
         assert space.listview_str(w_l) == ["a", "b"]

From noreply at buildbot.pypy.org  Wed Jan 11 16:53:39 2012
From: noreply at buildbot.pypy.org (l.diekmann)
Date: Wed, 11 Jan 2012 16:53:39 +0100 (CET)
Subject: [pypy-commit] pypy set-strategies: fixes
Message-ID: <20120111155339.3CB6E82C03@wyvern.cs.uni-duesseldorf.de>

Author: Lukas Diekmann
Branch: set-strategies
Changeset: r51232:7578dd73ccf8
Date: 2012-01-11 16:53 +0100
http://bitbucket.org/pypy/pypy/changeset/7578dd73ccf8/

Log:	fixes

diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py
--- a/pypy/objspace/std/setobject.py
+++ b/pypy/objspace/std/setobject.py
@@ -923,14 +923,14 @@
         return
 
     stringlist = space.listview_str(w_iterable)
-    if stringlist != None:
+    if stringlist is not None:
         strategy = space.fromcache(StringSetStrategy)
         w_set.strategy = strategy
         w_set.sstorage = strategy.get_storage_from_unwrapped_list(stringlist)
         return
 
     intlist = space.listview_int(w_iterable)
-    if intlist != None:
+    if intlist is not None:
         strategy = space.fromcache(IntegerSetStrategy)
         w_set.strategy = strategy
         w_set.sstorage = strategy.get_storage_from_unwrapped_list(intlist)
diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py
--- a/pypy/objspace/std/stringobject.py
+++ b/pypy/objspace/std/stringobject.py
@@ -61,7 +61,7 @@
         return plain_str2unicode(space, w_self._value)
 
     def listview_str(w_self):
-        return list(w_self._value)
+        return [s for s in w_self._value]
 
 registerimplementation(W_StringObject)

From noreply at buildbot.pypy.org  Wed Jan 11 17:44:19 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 11 Jan 2012 17:44:19 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: merge default
Message-ID: <20120111164419.D72D082C03@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51233:9892289121d2
Date: 2012-01-11 18:39 +0200
http://bitbucket.org/pypy/pypy/changeset/9892289121d2/

Log:	merge default

diff --git a/LICENSE b/LICENSE
--- a/LICENSE
+++ b/LICENSE
@@ -37,43 +37,47 @@
     Armin Rigo
     Maciej Fijalkowski
     Carl Friedrich Bolz
+    Amaury Forgeot d'Arc
     Antonio Cuni
-    Amaury Forgeot d'Arc
     Samuele Pedroni
     Michael Hudson
     Holger Krekel
-    Benjamin Peterson
+    Alex Gaynor
     Christian Tismer
     Hakan Ardo
-    Alex Gaynor
+    Benjamin Peterson
+    David Schneider
     Eric van Riet Paap
     Anders Chrigstrom
-    David Schneider
     Richard Emslie
     Dan Villiom Podlaski Christiansen
     Alexander Schremmer
+    Lukas Diekmann
     Aurelien Campeas
     Anders Lehmann
     Camillo Bruni
     Niklaus Haldimann
+    Sven Hager
     Leonardo Santagada
     Toon Verwaest
     Seo Sanghyeon
+    Justin Peel
     Lawrence Oluyede
     Bartosz Skowron
     Jakub Gustak
     Guido Wesdorp
     Daniel Roberts
+    Laura Creighton
     Adrien Di Mascio
-    Laura Creighton
     Ludovic Aubry
     Niko Matsakis
+    Wim Lavrijsen
+    Matti Picus
     Jason Creighton
     Jacob Hallen
     Alex Martelli
     Anders Hammarquist
     Jan de Mooij
-    Wim Lavrijsen
     Stephan Diehl
     Michael Foord
     Stefan Schwarzer
@@ -84,34 +88,36 @@
     Alexandre Fayolle
     Marius Gedminas
     Simon Burton
-    Justin Peel
+    David Edelsohn
     Jean-Paul Calderone
     John Witulski
-    Lukas Diekmann
+    Timo Paulssen
     holger krekel
-    Wim Lavrijsen
     Dario Bertini
+    Mark Pearse
     Andreas Stührk
     Jean-Philippe St. Pierre
    Guido van Rossum
    Pavel Vinogradov
    Valentino Volonghi
    Paul deGrandis
+    Ilya Osadchiy
+    Ronny Pfannschmidt
    Adrian Kuhn
    tav
    Georg Brandl
+    Philip Jenvey
    Gerald Klix
    Wanja Saatkamp
-    Ronny Pfannschmidt
    Boris Feigin
    Oscar Nierstrasz
    David Malcolm
    Eugene Oden
    Henry Mason
-    Sven Hager
+    Jeff Terrace
    Lukas Renggli
-    Ilya Osadchiy
    Guenter Jantzen
+    Ned Batchelder
    Bert Freudenberg
    Amit Regmi
    Ben Young
@@ -142,7 +148,6 @@
    Anders Qvist
    Beatrice During
    Alexander Sedov
-    Timo Paulssen
    Corbin Simpson
    Vincent Legoll
    Romain Guillebert
@@ -165,9 +170,10 @@
    Lucio Torre
    Lene Wagner
    Miguel de Val Borro
+    Artur Lisiecki
+    Bruno Gola
    Ignas Mikalajunas
-    Artur Lisiecki
-    Philip Jenvey
+    Stefano Rivera
    Joshua Gilbert
    Godefroid Chappelle
    Yusei Tahara
@@ -179,17 +185,17 @@
    Kristjan Valur Jonsson
    Bobby Impollonia
    Michael Hudson-Doyle
+    Laurence Tratt
+    Yasir Suhail
    Andrew Thompson
    Anders Sigfridsson
    Floris Bruynooghe
    Jacek Generowicz
    Dan Colish
    Zooko Wilcox-O Hearn
-    Dan Villiom Podlaski Christiansen
-    Anders Hammarquist
+    Dan Loewenherz
    Chris Lambacher
    Dinu Gherman
-    Dan Colish
    Brett Cannon
    Daniel Neuhäuser
    Michael Chermside
diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py
--- a/lib_pypy/numpypy/__init__.py
+++ b/lib_pypy/numpypy/__init__.py
@@ -1,2 +1,2 @@
 from _numpypy import *
-from fromnumeric import *
+from .fromnumeric import *
diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile
--- a/pypy/doc/Makefile
+++ b/pypy/doc/Makefile
@@ -12,7 +12,7 @@
 PAPEROPT_letter = -D latex_paper_size=letter
 ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest
+.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest
 
 help:
	@echo "Please use \`make <target>' where <target> is one of"
@@ -23,6 +23,7 @@
	@echo "  htmlhelp  to make HTML files and a HTML help project"
	@echo "  qthelp    to make HTML files and a qthelp project"
	@echo "  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
+	@echo "  man       to make manual pages"
	@echo "  changes   to make an overview of all changed/added/deprecated items"
	@echo "  linkcheck to check all external links for integrity"
	@echo "  doctest   to run all doctests embedded in the documentation (if enabled)"
@@ -79,6 +80,11 @@
	@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
	      "run these through (pdf)latex."
 
+man:
+	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
+	@echo
+	@echo "Build finished. The manual pages are in $(BUILDDIR)/man"
+
 changes:
	python config/generate.py
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst
--- a/pypy/doc/coding-guide.rst
+++ b/pypy/doc/coding-guide.rst
@@ -341,7 +341,8 @@
 
 **objects**
 
-    Normal rules apply.
+    Normal rules apply. Special methods are not honoured, except ``__init__`` and
+    ``__del__``.
 
 This layout makes the number of types to take care about quite limited.
diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py
--- a/pypy/doc/conf.py
+++ b/pypy/doc/conf.py
@@ -197,3 +197,10 @@
 
 # Example configuration for intersphinx: refer to the Python standard library.
 intersphinx_mapping = {'http://docs.python.org/': None}
 
+# -- Options for manpage output-------------------------------------------------
+
+man_pages = [
+  ('man/pypy.1', 'pypy',
+   u'fast, compliant alternative implementation of the Python language',
+   u'The PyPy Project', 1)
+]
diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst
--- a/pypy/doc/extradoc.rst
+++ b/pypy/doc/extradoc.rst
@@ -8,6 +8,9 @@
 
 *Articles about PyPy published so far, most recent first:* (bibtex_ file)
 
+* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_,
+  C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo
+
 * `Allocation Removal by Partial Evaluation in a Tracing JIT`_,
   C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo
 
@@ -50,6 +53,9 @@
 
 *Other research using PyPy (as far as we know it):*
 
+* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_,
+  N. Riley and C. Zilles
+
 * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_,
   C. Bruni and T. Verwaest
 
@@ -65,6 +71,7 @@
 
 .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib
+.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf
 .. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf
 .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf
 .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf
@@ -74,6 +81,7 @@
 .. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf
 .. _`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07
 .. _`EU Reports`: index-report.html
+.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf
 .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf
 .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz
 .. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7
diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst
new file mode 100644
--- /dev/null
+++ b/pypy/doc/man/pypy.1.rst
@@ -0,0 +1,90 @@
+======
+ pypy
+======
+
+SYNOPSIS
+========
+
+``pypy`` [*options*]
+[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ]
+[*arg*\ ...]
+
+OPTIONS
+=======
+
+-i
+    Inspect interactively after running script.
+
+-O
+    Dummy optimization flag for compatibility with C Python.
+
+-c *cmd*
+    Program passed in as CMD (terminates option list).
+
+-S
+    Do not ``import site`` on initialization.
+
+-u
+    Unbuffered binary ``stdout`` and ``stderr``.
+
+-h, --help
+    Show a help message and exit.
+
+-m *mod*
+    Library module to be run as a script (terminates option list).
+
+-W *arg*
+    Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*).
+
+-E
+    Ignore environment variables (such as ``PYTHONPATH``).
+
+--version
+    Print the PyPy version.
+
+--info
+    Print translation information about this PyPy executable.
+
+--jit *arg*
+    Low level JIT parameters. Format is
+    *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...]
+
+    ``off``
+        Disable the JIT.
+
+    ``threshold=``\ *value*
+        Number of times a loop has to run for it to become hot.
+
+    ``function_threshold=``\ *value*
+        Number of times a function must run for it to become traced from
+        start.
+
+    ``inlining=``\ *value*
+        Inline python functions or not (``1``/``0``).
+
+    ``loop_longevity=``\ *value*
+        A parameter controlling how long loops will be kept before being
+        freed, an estimate.
+
+    ``max_retrace_guards=``\ *value*
+        Number of extra guards a retrace can cause.
+
+    ``retrace_limit=``\ *value*
+        How many times we can try retracing before giving up.
+
+    ``trace_eagerness=``\ *value*
+        Number of times a guard has to fail before we start compiling a
+        bridge.
+
+    ``trace_limit=``\ *value*
+        Number of recorded operations before we abort tracing with
+        ``ABORT_TRACE_TOO_LONG``.
+
+    ``enable_opts=``\ *value*
+        Optimizations to enabled or ``all``.
+        Warning, this option is dangerous, and should be avoided.
+
+SEE ALSO
+========
+
+**python**\ (1)
diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py
deleted file mode 100644
--- a/pypy/doc/tool/makecontributor.py
+++ /dev/null
@@ -1,47 +0,0 @@
-"""
-
-generates a contributor list
-
-"""
-import py
-
-# this file is useless, use the following commandline instead:
-# hg churn -c -t "{author}" | sed -e 's/ <.*//'
-
-try:
-    path = py.std.sys.argv[1]
-except IndexError:
-    print "usage: %s ROOTPATH" %(py.std.sys.argv[0])
-    raise SystemExit, 1
-
-d = {}
-
-for logentry in py.path.svnwc(path).log():
-    a = logentry.author
-    if a in d:
-        d[a] += 1
-    else:
-        d[a] = 1
-
-items = d.items()
-items.sort(lambda x,y: -cmp(x[1], y[1]))
-
-import uconf # http://codespeak.net/svn/uconf/dist/uconf
-
-# Authors that don't want to be listed
-excluded = set("anna gintas ignas".split())
-cutoff = 5 # cutoff for authors in the LICENSE file
-mark = False
-for author, count in items:
-    if author in excluded:
-        continue
-    user = uconf.system.User(author)
-    try:
-        realname = user.realname.strip()
-    except KeyError:
-        realname = author
-    if not mark and count < cutoff:
-        mark = True
-        print '-'*60
-    print "   ", realname
-    #print count, " ", author
diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py
--- a/pypy/jit/backend/x86/regalloc.py
+++ b/pypy/jit/backend/x86/regalloc.py
@@ -741,7 +741,7 @@
         self.xrm.possibly_free_var(op.getarg(0))
 
     def consider_cast_int_to_float(self, op):
-        loc0 = self.rm.force_allocate_reg(op.getarg(0))
+        loc0 = self.rm.make_sure_var_in_reg(op.getarg(0))
         loc1 = self.xrm.force_allocate_reg(op.result)
         self.Perform(op, [loc0], loc1)
         self.rm.possibly_free_var(op.getarg(0))
diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py
--- a/pypy/module/micronumpy/interp_boxes.py
+++ b/pypy/module/micronumpy/interp_boxes.py
@@ -104,7 +104,7 @@
     _attrs_ = ()
 
 class W_IntegerBox(W_NumberBox):
-    descr__new__, get_dtype = new_dtype_getter("long")
+    pass
 
 class W_SignedIntegerBox(W_IntegerBox):
     pass
@@ -200,7 +200,6 @@
 )
 
 W_IntegerBox.typedef = TypeDef("integer", W_NumberBox.typedef,
-    __new__ = interp2app(W_IntegerBox.descr__new__.im_func),
     __module__ = "numpypy",
 )
 
@@ -248,6 +247,7 @@
 long_name = "int64"
 W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,),
     __module__ = "numpypy",
+    __new__ = interp2app(W_LongBox.descr__new__.im_func),
 )
 
 W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef,
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -569,7 +569,7 @@
         return space.div(self.descr_sum(space), space.wrap(self.size))
 
     def descr_var(self, space):
-        ''' var = mean( (values - mean(values))**2 ) '''
+        # var = mean((values - mean(values)) ** 2)
         w_res = self.descr_sub(space, self.descr_mean(space))
         assert isinstance(w_res, BaseArray)
         w_res = w_res.descr_pow(space, space.wrap(2))
@@ -577,8 +577,8 @@
         return w_res.descr_mean(space)
 
     def descr_std(self, space):
-        ''' std(v) = sqrt(var(v)) '''
-        return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)] )
+        # std(v) = sqrt(var(v))
+        return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)])
 
     def descr_nonzero(self, space):
         if self.size > 1:
diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py
--- a/pypy/rlib/clibffi.py
+++ b/pypy/rlib/clibffi.py
@@ -30,6 +30,9 @@
 _MAC_OS = platform.name == "darwin"
 _FREEBSD_7 = platform.name == "freebsd7"
 
+_LITTLE_ENDIAN = sys.byteorder == 'little'
+_BIG_ENDIAN = sys.byteorder == 'big'
+
 if _WIN32:
     from pypy.rlib import rwin32
 
@@ -360,12 +363,36 @@
 cast_type_to_ffitype._annspecialcase_ = 'specialize:memo'
 
 def push_arg_as_ffiptr(ffitp, arg, ll_buf):
-    # this is for primitive types. For structures and arrays
-    # would be something different (more dynamic)
+    # This is for primitive types. Note that the exact type of 'arg' may be
+    # different from the expected 'c_size'. To cope with that, we fall back
+    # to a byte-by-byte copy.
     TP = lltype.typeOf(arg)
     TP_P = lltype.Ptr(rffi.CArray(TP))
-    buf = rffi.cast(TP_P, ll_buf)
-    buf[0] = arg
+    TP_size = rffi.sizeof(TP)
+    c_size = intmask(ffitp.c_size)
+    # if both types have the same size, we can directly write the
+    # value to the buffer
+    if c_size == TP_size:
+        buf = rffi.cast(TP_P, ll_buf)
+        buf[0] = arg
+    else:
+        # needs byte-by-byte copying. Make sure 'arg' is an integer type.
+        # Note that this won't work for rffi.FLOAT/rffi.DOUBLE.
+        assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE
+        if TP_size <= rffi.sizeof(lltype.Signed):
+            arg = rffi.cast(lltype.Unsigned, arg)
+        else:
+            arg = rffi.cast(lltype.UnsignedLongLong, arg)
+        if _LITTLE_ENDIAN:
+            for i in range(c_size):
+                ll_buf[i] = chr(arg & 0xFF)
+                arg >>= 8
+        elif _BIG_ENDIAN:
+            for i in range(c_size-1, -1, -1):
+                ll_buf[i] = chr(arg & 0xFF)
+                arg >>= 8
+        else:
+            raise AssertionError
 push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)'
diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -389,12 +389,12 @@
 'threshold': 'number of times a loop has to run for it to become hot',
 'function_threshold': 'number of times a function must run for it to become traced from start',
 'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge',
-              'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TRACE_TOO_LONG',
+              'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TOO_LONG',
 'inlining': 'inline python functions or not (1/0)',
 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate',
 'retrace_limit': 'how many times we can try retracing before giving up',
 'max_retrace_guards': 'number of extra guards a retrace can cause',
-              'enable_opts': 'optimizations to enabled or all, INTERNAL USE ONLY'
+              'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY'
 }
 
 PARAMETERS = {'threshold': 1039, # just above 1024, prime
diff --git a/pypy/tool/release_dates.py b/pypy/tool/release_dates.py
deleted file mode 100644
--- a/pypy/tool/release_dates.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import py
-
-release_URL = 'http://codespeak.net/svn/pypy/release/'
-releases = [r[:-2] for r in py.std.os.popen('svn list ' + release_URL).readlines() if 'x' not in r]
-
-f = file('release_dates.txt', 'w')
-print >> f, 'date, release'
-for release in releases:
-    for s in py.std.os.popen('svn info ' + release_URL + release).readlines():
-        if s.startswith('Last Changed Date'):
-            date = s.split()[3]
-            print >> f, date, ',', release
-            break
-f.close()
diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py
--- a/pypy/translator/goal/app_main.py
+++ b/pypy/translator/goal/app_main.py
@@ -139,8 +139,8 @@
     items = pypyjit.defaults.items()
     items.sort()
     for key, value in items:
-        print '  --jit %s=N %slow-level JIT parameter (default %s)' % (
-            key, ' '*(18-len(key)), value)
+        print '  --jit %s=N %s%s (default %s)' % (
+            key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value)
     print '  --jit off                  turn off the JIT'
 
 def print_version(*args):

From noreply at buildbot.pypy.org  Wed Jan 11 17:44:21 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 11 Jan 2012 17:44:21 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: be secure against not having
	any jit_merge_points found
Message-ID: <20120111164421.157A082C03@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51234:b53f0ac39e01
Date: 2012-01-11 18:43 +0200
http://bitbucket.org/pypy/pypy/changeset/b53f0ac39e01/

Log:	be secure against not having any jit_merge_points found

diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -1794,9 +1794,14 @@
         self.staticdata.profiler.count(reason)
         debug_print('~~~ ABORTING TRACING')
         jd_sd = self.jitdriver_sd
-        greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args]
-        self.staticdata.warmrunnerdesc.hooks.on_abort(reason, jd_sd.jitdriver,
-            greenkey, jd_sd.warmstate.get_location_str(greenkey))
+        if not self.current_merge_points:
+            greenkey = None # we're in the bridge
+        else:
+            greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args]
+        self.staticdata.warmrunnerdesc.hooks.on_abort(reason,
+                                                      jd_sd.jitdriver,
+                                                      greenkey,
+                                                      jd_sd.warmstate.get_location_str(greenkey))
         self.staticdata.stats.aborted()
 
     def blackhole_if_trace_too_long(self):
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -20,6 +20,8 @@
         self.w_optimize_hook = space.w_None
 
 def wrap_greenkey(space, jitdriver, greenkey, greenkey_repr):
+    if greenkey is None:
+        return space.wrap('UNKNOWN')
     jitdriver_name = jitdriver.name
     if jitdriver_name == 'pypyjit':
         next_instr = greenkey[0].getint()

From noreply at buildbot.pypy.org  Wed Jan 11 17:46:05 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 11 Jan 2012 17:46:05 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: use None here
Message-ID: <20120111164605.0E13F82C03@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51235:d5efe4cff53d
Date: 2012-01-11 18:45 +0200
http://bitbucket.org/pypy/pypy/changeset/d5efe4cff53d/

Log:	use None here

diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py
--- a/pypy/module/pypyjit/interp_resop.py
+++ b/pypy/module/pypyjit/interp_resop.py
@@ -21,7 +21,7 @@
 
 def wrap_greenkey(space, jitdriver, greenkey, greenkey_repr):
     if greenkey is None:
-        return space.wrap('UNKNOWN')
+        return space.w_None
     jitdriver_name = jitdriver.name
     if jitdriver_name == 'pypyjit':
         next_instr = greenkey[0].getint()

From noreply at buildbot.pypy.org  Wed Jan 11 18:15:04 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 11 Jan 2012 18:15:04 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: fix test runner
Message-ID: <20120111171504.B03D182C03@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51236:d1f73a4788a2
Date: 2012-01-11 19:14 +0200
http://bitbucket.org/pypy/pypy/changeset/d1f73a4788a2/

Log:	fix test runner

diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py
--- a/pypy/jit/backend/test/runner_test.py
+++ b/pypy/jit/backend/test/runner_test.py
@@ -3002,28 +3002,30 @@
         bridge = parse(bridge_ops, self.cpu, namespace=locals())
         looptoken = JitCellToken()
         self.cpu.assembler.set_debug(False)
-        _, asm, asmlen = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken)
-        _, basm, basmlen = self.cpu.compile_bridge(faildescr, bridge.inputargs,
-                                                   bridge.operations,
-                                                   looptoken)
+        info = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken)
+        bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs,
+                                              bridge.operations,
+                                              looptoken)
         self.cpu.assembler.set_debug(True) # always on untranslated
-        assert asmlen != 0
+        assert info.asmlen != 0
         cpuname = autodetect_main_model_and_size()
         # XXX we have to check the precise assembler, otherwise
         # we don't quite know if borders are correct
 
-        def checkops(mc, startline, ops):
-            for i in range(startline, len(mc)):
-                assert mc[i].split("\t")[-1].startswith(ops[i - startline])
+        def checkops(mc, ops):
+            for i in range(len(mc)):
+                assert mc[i].split("\t")[-1].startswith(ops[i])
 
-        data = ctypes.string_at(asm, asmlen)
-        mc = list(machine_code_dump(data, asm, cpuname))
-        assert len(mc) == 5
-        checkops(mc, 1, self.add_loop_instructions)
-        data = ctypes.string_at(basm, basmlen)
-        mc = list(machine_code_dump(data, basm, cpuname))
-        assert len(mc) == 4
-        checkops(mc, 1, self.bridge_loop_instructions)
+        data = ctypes.string_at(info.asmaddr, info.asmlen)
+        mc = list(machine_code_dump(data, info.asmaddr, cpuname))
+        lines = [line for line in mc if line.count('\t') == 2]
+        assert len(lines) == 4
+        checkops(lines, self.add_loop_instructions)
+        data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen)
+        mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname))
+        lines = [line for line in mc if line.count('\t') == 2]
+        assert len(lines) == 3
+        checkops(lines, self.bridge_loop_instructions)
 
 
     def test_compile_bridge_with_target(self):
diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py
--- a/pypy/jit/backend/x86/test/test_runner.py
+++ b/pypy/jit/backend/x86/test/test_runner.py
@@ -419,7 +419,8 @@
             ]
         inputargs = [i0]
         debug._log = dlog = debug.DebugLog()
-        ops_offset = self.cpu.compile_loop(inputargs, operations, looptoken)[0]
+        info = self.cpu.compile_loop(inputargs, operations, looptoken)
+        ops_offset = info.ops_offset
         debug._log = None
         #
         assert ops_offset is looptoken._x86_ops_offset

From noreply at buildbot.pypy.org  Wed Jan 11 18:22:57 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 11 Jan 2012 18:22:57 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: fix a translation problem
Message-ID: <20120111172257.F123D82C03@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51237:267e8180a342
Date: 2012-01-11 19:22 +0200
http://bitbucket.org/pypy/pypy/changeset/267e8180a342/

Log:	fix a translation problem

diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py
--- a/pypy/jit/metainterp/compile.py
+++ b/pypy/jit/metainterp/compile.py
@@ -76,7 +76,7 @@
         if descr is not original_jitcell_token:
             original_jitcell_token.record_jump_to(descr)
         descr.exported_state = None
-        op._descr = None # clear reference, mostly for tests
+        op.setdescr(None)
     elif isinstance(descr, TargetToken):
         # for a JUMP: record it as a potential jump.
# (the following test is not enough to prevent more complicated From noreply at buildbot.pypy.org Wed Jan 11 18:24:01 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 11 Jan 2012 18:24:01 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: one more place Message-ID: <20120111172401.AF39082C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51238:6d90b9e1e3ab Date: 2012-01-11 19:23 +0200 http://bitbucket.org/pypy/pypy/changeset/6d90b9e1e3ab/ Log: one more place diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -91,7 +91,7 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op._descr = None # clear reference to prevent the history.Stats + op.setdescr(None) # clear reference to prevent the history.Stats # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: From noreply at buildbot.pypy.org Wed Jan 11 18:26:00 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 11 Jan 2012 18:26:00 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: oops bring back the comment Message-ID: <20120111172600.829EF82C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51239:7d78bb3e99e9 Date: 2012-01-11 19:25 +0200 http://bitbucket.org/pypy/pypy/changeset/7d78bb3e99e9/ Log: oops bring back the comment diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -76,7 +76,7 @@ if descr is not original_jitcell_token: original_jitcell_token.record_jump_to(descr) descr.exported_state = None - op.setdescr(None) + op.setdescr(None) # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential 
jump. # (the following test is not enough to prevent more complicated From noreply at buildbot.pypy.org Wed Jan 11 18:35:17 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 11 Jan 2012 18:35:17 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: use a better interface to make tests pass Message-ID: <20120111173517.D754682C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks Changeset: r51240:4dd1794b695d Date: 2012-01-11 19:34 +0200 http://bitbucket.org/pypy/pypy/changeset/4dd1794b695d/ Log: use a better interface to make tests pass diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -76,7 +76,7 @@ if descr is not original_jitcell_token: original_jitcell_token.record_jump_to(descr) descr.exported_state = None - op.setdescr(None) # clear reference, mostly for tests + op.cleardescr() # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential jump. 
# (the following test is not enough to prevent more complicated @@ -91,8 +91,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op.setdescr(None) # clear reference to prevent the history.Stats - # from keeping the loop alive during tests + op.cleardescr() # clear reference to prevent the history.Stats + # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -64,6 +64,9 @@ def setdescr(self, descr): raise NotImplementedError + def cleardescr(self): + pass + # common methods # -------------- @@ -196,6 +199,9 @@ self._check_descr(descr) self._descr = descr + def cleardescr(self): + self._descr = None + def _check_descr(self, descr): if not we_are_translated() and getattr(descr, 'I_am_a_descr', False): return # needed for the mock case in oparser_model From noreply at buildbot.pypy.org Wed Jan 11 19:12:00 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 11 Jan 2012 19:12:00 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: Add an option to recognize "x86_32" too. Message-ID: <20120111181200.D4CB982C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: better-jit-hooks Changeset: r51241:7435bc0b9fa2 Date: 2012-01-11 18:57 +0100 http://bitbucket.org/pypy/pypy/changeset/7435bc0b9fa2/ Log: Add an option to recognize "x86_32" too. 
diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -39,6 +39,7 @@ def machine_code_dump(data, originaddr, backend_name, label_list=None): objdump_backend_option = { 'x86': 'i386', + 'x86_32': 'i386', 'x86_64': 'x86-64', 'i386': 'i386', } From noreply at buildbot.pypy.org Wed Jan 11 19:12:02 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 11 Jan 2012 19:12:02 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: Use a lower bound on 32-bit: 5 is enough, we don't need 13. Message-ID: <20120111181202.0985382C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: better-jit-hooks Changeset: r51242:24de52d56e68 Date: 2012-01-11 18:58 +0100 http://bitbucket.org/pypy/pypy/changeset/24de52d56e68/ Log: Use a lower bound on 32-bit: 5 is enough, we don't need 13. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -814,7 +814,10 @@ target = newlooptoken._x86_function_addr mc = codebuf.MachineCodeBlockWrapper() mc.JMP(imm(target)) - assert mc.get_relative_pos() <= 13 # keep in sync with prepare_loop() + if WORD == 4: # keep in sync with prepare_loop() + assert mc.get_relative_pos() == 5 + else: + assert mc.get_relative_pos() <= 13 mc.copy_to_raw_memory(oldadr) def dump(self, text): diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -188,7 +188,10 @@ # note: we need to make a copy of inputargs because possibly_free_vars # is also used on op args, which is a non-resizable list self.possibly_free_vars(list(inputargs)) - self.min_bytes_before_label = 13 + if WORD == 4: # see redirect_call_assembler() + self.min_bytes_before_label = 5 + else: + self.min_bytes_before_label = 13 return operations def 
prepare_bridge(self, prev_depths, inputargs, arglocs, operations, From noreply at buildbot.pypy.org Wed Jan 11 19:12:03 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 11 Jan 2012 19:12:03 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: Fix test_compile_asmlen on 32 bits too. Message-ID: <20120111181203.311C182C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: better-jit-hooks Changeset: r51243:87406b9ca67a Date: 2012-01-11 19:11 +0100 http://bitbucket.org/pypy/pypy/changeset/87406b9ca67a/ Log: Fix test_compile_asmlen on 32 bits too. diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2985,7 +2985,8 @@ from pypy.jit.backend.x86.tool.viewcode import machine_code_dump import ctypes ops = """ - [i0] + [i2] + i0 = same_as(i2) # but forced to be in a register label(i0, descr=1) i1 = int_add(i0, i0) guard_true(i1, descr=faildesr) [i1] @@ -3013,18 +3014,17 @@ # we don't quite know if borders are correct def checkops(mc, ops): + assert len(mc) == len(ops) for i in range(len(mc)): assert mc[i].split("\t")[-1].startswith(ops[i]) data = ctypes.string_at(info.asmaddr, info.asmlen) mc = list(machine_code_dump(data, info.asmaddr, cpuname)) lines = [line for line in mc if line.count('\t') == 2] - assert len(lines) == 4 checkops(lines, self.add_loop_instructions) data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) lines = [line for line in mc if line.count('\t') == 2] - assert len(lines) == 3 checkops(lines, self.bridge_loop_instructions) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -33,8 +33,12 @@ # for the individual tests see # ====> ../../test/runner_test.py - add_loop_instructions = ['add', 'test', 
'je', 'jmp'] - bridge_loop_instructions = ['lea', 'mov', 'jmp'] + add_loop_instructions = ['mov', 'add', 'test', 'je', 'jmp'] + if WORD == 4: + bridge_loop_instructions = ['lea', 'jmp'] + else: + # the 'mov' is part of the 'jmp' so far + bridge_loop_instructions = ['lea', 'mov', 'jmp'] def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) From noreply at buildbot.pypy.org Wed Jan 11 19:58:47 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 11 Jan 2012 19:58:47 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: This case should not occur any more. See comment. Message-ID: <20120111185847.3AEE882C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: better-jit-hooks Changeset: r51244:450f9d31a735 Date: 2012-01-11 19:58 +0100 http://bitbucket.org/pypy/pypy/changeset/450f9d31a735/ Log: This case should not occur any more. See comment. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -488,12 +488,7 @@ assert len(set(inputargs)) == len(inputargs) descr_number = self.cpu.get_fail_descr_number(faildescr) - try: - failure_recovery = self._find_failure_recovery_bytecode(faildescr) - except ValueError: - debug_print("Bridge out of guard", descr_number, - "was already compiled!") - return + failure_recovery = self._find_failure_recovery_bytecode(faildescr) self.setup(original_loop_token) if log: @@ -625,7 +620,11 @@ def _find_failure_recovery_bytecode(self, faildescr): adr_jump_offset = faildescr._x86_adr_jump_offset if adr_jump_offset == 0: - raise ValueError + # This case should be prevented by the logic in compile.py: + # look for CNT_BUSY_FLAG, which disables tracing from a guard + # when another tracing from the same guard is already in progress. 
+ raise AssertionError("bug: the front-end asks us to compile a " + "bridge from the same guard twice") # follow the JMP/Jcond p = rffi.cast(rffi.INTP, adr_jump_offset) adr_target = adr_jump_offset + 4 + rffi.cast(lltype.Signed, p[0]) From noreply at buildbot.pypy.org Wed Jan 11 20:07:19 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 11 Jan 2012 20:07:19 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks: follow fijal's suggestion and raise an explicit exception instead. Message-ID: <20120111190719.3618F82C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: better-jit-hooks Changeset: r51245:dcd30a900b17 Date: 2012-01-11 20:07 +0100 http://bitbucket.org/pypy/pypy/changeset/dcd30a900b17/ Log: follow fijal's suggestion and raise an explicit exception instead. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -623,8 +623,7 @@ # This case should be prevented by the logic in compile.py: # look for CNT_BUSY_FLAG, which disables tracing from a guard # when another tracing from the same guard is already in progress. 
- raise AssertionError("bug: the front-end asks us to compile a " - "bridge from the same guard twice") + raise BridgeAlreadyCompiled # follow the JMP/Jcond p = rffi.cast(rffi.INTP, adr_jump_offset) adr_target = adr_jump_offset + 4 + rffi.cast(lltype.Signed, p[0]) @@ -2556,3 +2555,6 @@ def not_implemented(msg): os.write(2, '[x86/asm] %s\n' % msg) raise NotImplementedError(msg) + +class BridgeAlreadyCompiled(Exception): + pass From noreply at buildbot.pypy.org Wed Jan 11 21:35:45 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 11 Jan 2012 21:35:45 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: separate AxisReduceSignature, does not solve bug Message-ID: <20120111203545.6BE2782C03@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51246:5463174c165f Date: 2012-01-11 22:29 +0200 http://bitbucket.org/pypy/pypy/changeset/5463174c165f/ Log: separate AxisReduceSignature, does not solve bug diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -575,10 +575,9 @@ def descr_mean(self, space, w_dim=None): if space.is_w(w_dim, space.w_None): w_dim = space.wrap(-1) - dim = space.int_w(w_dim) - if dim < 0: w_denom = space.wrap(self.size) else: + dim = space.int_w(w_dim) w_denom = space.wrap(self.shape[dim]) return space.div(self.descr_sum_promote(space, w_dim), w_denom) @@ -780,7 +779,7 @@ def create_sig(self, res_shape): if self.forced_result is not None: return self.forced_result.create_sig(res_shape) - return signature.ReduceSignature(self.binfunc, self.name, self.dtype, + return signature.AxisReduceSignature(self.binfunc, self.name, self.dtype, signature.ViewSignature(self.dtype), self.values.create_sig(res_shape)) @@ -805,7 +804,7 @@ ri = ArrayIterator(result.size) frame = sig.create_frame(self.values, dim=self.dim) value = self.get_identity(sig, frame, shapelen) - assert 
isinstance(sig, signature.ReduceSignature) + assert isinstance(sig, signature.AxisReduceSignature) while not frame.done(): axisreduce_driver.jit_merge_point(frame=frame, self=self, value=value, sig=sig, diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -130,8 +130,9 @@ "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: from pypy.module.micronumpy.interp_numarray import Reduce - return space.wrap(Reduce(self.func, self.name, dim, dtype, - obj, self.identity)) + res = Reduce(self.func, self.name, dim, dtype, obj, self.identity) + obj.add_invalidates(res) + return space.wrap(res) sig = find_sig(ReduceSignature(self.func, self.name, dtype, ScalarSignature(dtype), obj.create_sig(obj.shape)), obj) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -169,7 +169,6 @@ def eval(self, frame, arr): iter = frame.iterators[self.iter_no] - assert arr.dtype is self.dtype return self.dtype.getitem(frame.arrays[self.array_no], iter.offset) class ScalarSignature(ConcreteSignature): @@ -326,43 +325,49 @@ return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) + class ReduceSignature(Call2): def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): - if dim < 0: - self.right._create_iter(iterlist, arraylist, arr, res_shape, + self.right._create_iter(iterlist, arraylist, arr, res_shape, chunklist, dim) - else: - from pypy.module.micronumpy.interp_numarray import ConcreteArray - concr = arr.get_concrete() - assert isinstance(concr, ConcreteArray) - storage = concr.storage - if self.iter_no >= len(iterlist): - _iter = axis_iter_from_arr(concr, dim) - from interp_iter import AxisIterator - assert isinstance(_iter, AxisIterator) - iterlist.append(_iter) - if 
self.array_no >= len(arraylist): - arraylist.append(storage) def _invent_numbering(self, cache, allnumbers): self.right._invent_numbering(cache, allnumbers) def _invent_array_numbering(self, arr, cache): - #Could be called with arr as output or arr as input. - from pypy.module.micronumpy.interp_numarray import Reduce - if isinstance(arr, Reduce): - self.left._invent_array_numbering(arr, cache) - else: - self.right._invent_array_numbering(arr, cache) + self.right._invent_array_numbering(arr, cache) def eval(self, frame, arr): - #Could be called with arr as output or arr as input. - from pypy.module.micronumpy.interp_numarray import Reduce - if isinstance(arr, Reduce): - return self.left.eval(frame, arr) - else: - return self.right.eval(frame, arr) + return self.right.eval(frame, arr) def debug_repr(self): return 'ReduceSig(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) + +class AxisReduceSignature(Call2): + def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): + from pypy.module.micronumpy.interp_numarray import ConcreteArray + concr = arr.get_concrete() + assert isinstance(concr, ConcreteArray) + storage = concr.storage + if self.iter_no >= len(iterlist): + _iter = axis_iter_from_arr(concr, dim) + from interp_iter import AxisIterator + assert isinstance(_iter, AxisIterator) + iterlist.append(_iter) + if self.array_no >= len(arraylist): + arraylist.append(storage) + + def _invent_numbering(self, cache, allnumbers): + self.right._invent_numbering(cache, allnumbers) + + def _invent_array_numbering(self, arr, cache): + self.right._invent_array_numbering(arr, cache) + + def eval(self, frame, arr): + return self.right.eval(frame, arr) + + def debug_repr(self): + return 'AxisReduceSig(%s, %s, %s)' % (self.name, self.left.debug_repr(), + self.right.debug_repr()) + diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ 
b/pypy/module/micronumpy/test/test_numarray.py @@ -729,8 +729,10 @@ assert a.mean() == 2.0 assert a[:4].mean() == 1.5 a = array(range(105)).reshape(3, 5, 7) - assert (mean(a, axis=0) == array(range(35, 70)).reshape(5, 7)).all() - assert (mean(a, 2) == array(range(0, 15)).reshape(3, 5) * 7 + 3).all() + b = mean(a, axis=0) + b[0,0]==35. + assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() + assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() def test_sum(self): from numpypy import array, arange From noreply at buildbot.pypy.org Wed Jan 11 21:37:42 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 11 Jan 2012 21:37:42 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: reindent the docstring Message-ID: <20120111203742.CBA6482C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51247:b906cc4a9740 Date: 2012-01-11 22:17 +0200 http://bitbucket.org/pypy/pypy/changeset/b906cc4a9740/ Log: reindent the docstring diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -50,58 +50,58 @@ return self.call(space, __args__.arguments_w) def descr_reduce(self, space, w_obj, w_dim=0): - '''reduce(...) - reduce(a, axis=0) + """reduce(...) + reduce(a, axis=0) - Reduces `a`'s dimension by one, by applying ufunc along one axis. + Reduces `a`'s dimension by one, by applying ufunc along one axis. - Let :math:`a.shape = (N_0, ..., N_i, ..., N_{M-1})`. Then - :math:`ufunc.reduce(a, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]` = - the result of iterating `j` over :math:`range(N_i)`, cumulatively applying - ufunc to each :math:`a[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]`. - For a one-dimensional array, reduce produces results equivalent to: - :: + Let :math:`a.shape = (N_0, ..., N_i, ..., N_{M-1})`. 
Then + :math:`ufunc.reduce(a, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]` = + the result of iterating `j` over :math:`range(N_i)`, cumulatively applying + ufunc to each :math:`a[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]`. + For a one-dimensional array, reduce produces results equivalent to: + :: - r = op.identity # op = ufunc - for i in xrange(len(A)): - r = op(r, A[i]) - return r + r = op.identity # op = ufunc + for i in xrange(len(A)): + r = op(r, A[i]) + return r - For example, add.reduce() is equivalent to sum(). + For example, add.reduce() is equivalent to sum(). - Parameters - ---------- - a : array_like - The array to act on. - axis : int, optional - The axis along which to apply the reduction. + Parameters + ---------- + a : array_like + The array to act on. + axis : int, optional + The axis along which to apply the reduction. - Examples - -------- - >>> np.multiply.reduce([2,3,5]) - 30 + Examples + -------- + >>> np.multiply.reduce([2,3,5]) + 30 - A multi-dimensional array example: + A multi-dimensional array example: - >>> X = np.arange(8).reshape((2,2,2)) - >>> X - array([[[0, 1], - [2, 3]], - [[4, 5], - [6, 7]]]) - >>> np.add.reduce(X, 0) - array([[ 4, 6], - [ 8, 10]]) - >>> np.add.reduce(X) # confirm: default axis value is 0 - array([[ 4, 6], - [ 8, 10]]) - >>> np.add.reduce(X, 1) - array([[ 2, 4], - [10, 12]]) - >>> np.add.reduce(X, 2) - array([[ 1, 5], - [ 9, 13]]) - ''' + >>> X = np.arange(8).reshape((2,2,2)) + >>> X + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> np.add.reduce(X, 0) + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X) # confirm: default axis value is 0 + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X, 1) + array([[ 2, 4], + [10, 12]]) + >>> np.add.reduce(X, 2) + array([[ 1, 5], + [ 9, 13]]) + """ return self.reduce(space, w_obj, False, False, w_dim) def reduce(self, space, w_obj, multidim, promote_to_largest, w_dim): From noreply at buildbot.pypy.org Wed Jan 11 21:37:45 2012 From: noreply at buildbot.pypy.org 
(fijal) Date: Wed, 11 Jan 2012 21:37:45 +0100 (CET) Subject: [pypy-commit] pypy default: merge better-jit-hooks. This branch introduces few hooks on applevel that Message-ID: <20120111203745.35F7682C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51248:db33420263f5 Date: 2012-01-11 22:36 +0200 http://bitbucket.org/pypy/pypy/changeset/db33420263f5/ Log: merge better-jit-hooks. This branch introduces few hooks on applevel that let you introspect and modify the list of resops as it gets compiled. Consult docstrings in pypyjit.set_compile_hook/set_abort_hook/set_optimize_hook. for details. It also exposes interface how to get to JIT structures from annotated parts of RPython, the details are in pypy.rlib.jit_hooks diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -73,8 +73,12 @@ class Field(object): def __init__(self, name, offset, size, ctype, num, is_bitfield): - for k in ('name', 'offset', 'size', 'ctype', 'num', 'is_bitfield'): - self.__dict__[k] = locals()[k] + self.__dict__['name'] = name + self.__dict__['offset'] = offset + self.__dict__['size'] = size + self.__dict__['ctype'] = ctype + self.__dict__['num'] = num + self.__dict__['is_bitfield'] = is_bitfield def __setattr__(self, name, value): raise AttributeError(name) diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py --- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -2,7 +2,6 @@ This module defines the abstract base classes that support execution: Code and Frame. """ -from pypy.rlib import jit from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import Wrappable diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -162,7 +162,8 @@ # generate 2 versions of the function and 2 jit drivers. 
def _create_unpack_into(): jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results']) + reds=['self', 'frame', 'results'], + name='unpack_into') def unpack_into(self, results): """This is a hack for performance: runs the generator and collects all produced items in a list.""" @@ -196,4 +197,4 @@ self.frame = None return unpack_into unpack_into = _create_unpack_into() - unpack_into_w = _create_unpack_into() \ No newline at end of file + unpack_into_w = _create_unpack_into() diff --git a/pypy/jit/backend/llsupport/test/test_runner.py b/pypy/jit/backend/llsupport/test/test_runner.py --- a/pypy/jit/backend/llsupport/test/test_runner.py +++ b/pypy/jit/backend/llsupport/test/test_runner.py @@ -8,6 +8,12 @@ class MyLLCPU(AbstractLLCPU): supports_floats = True + + class assembler(object): + @staticmethod + def set_debug(flag): + pass + def compile_loop(self, inputargs, operations, looptoken): py.test.skip("llsupport test: cannot compile operations") diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -17,6 +17,7 @@ from pypy.rpython.llinterp import LLException from pypy.jit.codewriter import heaptracker, longlong from pypy.rlib.rarithmetic import intmask +from pypy.jit.backend.detect_cpu import autodetect_main_model_and_size def boxfloat(x): return BoxFloat(longlong.getfloatstorage(x)) @@ -27,6 +28,9 @@ class Runner(object): + add_loop_instruction = ['overload for a specific cpu'] + bridge_loop_instruction = ['overload for a specific cpu'] + def execute_operation(self, opname, valueboxes, result_type, descr=None): inputargs, operations = self._get_single_operation_list(opname, result_type, @@ -2974,6 +2978,56 @@ res = self.cpu.get_latest_value_int(0) assert res == -10 + def test_compile_asmlen(self): + from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU + if not isinstance(self.cpu, AbstractLLCPU): + 
py.test.skip("pointless test on non-asm") + from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + import ctypes + ops = """ + [i2] + i0 = same_as(i2) # but forced to be in a register + label(i0, descr=1) + i1 = int_add(i0, i0) + guard_true(i1, descr=faildesr) [i1] + jump(i1, descr=1) + """ + faildescr = BasicFailDescr(2) + loop = parse(ops, self.cpu, namespace=locals()) + faildescr = loop.operations[-2].getdescr() + jumpdescr = loop.operations[-1].getdescr() + bridge_ops = """ + [i0] + jump(i0, descr=jumpdescr) + """ + bridge = parse(bridge_ops, self.cpu, namespace=locals()) + looptoken = JitCellToken() + self.cpu.assembler.set_debug(False) + info = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs, + bridge.operations, + looptoken) + self.cpu.assembler.set_debug(True) # always on untranslated + assert info.asmlen != 0 + cpuname = autodetect_main_model_and_size() + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + + def checkops(mc, ops): + assert len(mc) == len(ops) + for i in range(len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i]) + + data = ctypes.string_at(info.asmaddr, info.asmlen) + mc = list(machine_code_dump(data, info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.add_loop_instructions) + data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) + mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.bridge_loop_instructions) + + def test_compile_bridge_with_target(self): # This test creates a loopy piece of code in a bridge, and builds another # unrelated loop that ends in a jump directly to this loopy bit of code. 
diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -7,6 +7,7 @@ from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper +from pypy.rlib.jit import AsmInfo from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, gpr_reg_mgr_cls, _valid_addressing_size) @@ -411,6 +412,7 @@ '''adds the following attributes to looptoken: _x86_function_addr (address of the generated func, as an int) _x86_loop_code (debug: addr of the start of the ResOps) + _x86_fullsize (debug: full size including failure) _x86_debug_checksum ''' # XXX this function is too longish and contains some code @@ -476,7 +478,8 @@ name = "Loop # %s: %s" % (looptoken.number, loopname) self.cpu.profile_agent.native_code_written(name, rawstart, full_size) - return ops_offset + return AsmInfo(ops_offset, rawstart + looppos, + size_excluding_failure_stuff - looppos) def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): @@ -485,12 +488,7 @@ assert len(set(inputargs)) == len(inputargs) descr_number = self.cpu.get_fail_descr_number(faildescr) - try: - failure_recovery = self._find_failure_recovery_bytecode(faildescr) - except ValueError: - debug_print("Bridge out of guard", descr_number, - "was already compiled!") - return + failure_recovery = self._find_failure_recovery_bytecode(faildescr) self.setup(original_loop_token) if log: @@ -503,6 +501,7 @@ [loc.assembler() for loc in faildescr._x86_debug_faillocs]) regalloc = RegAlloc(self, self.cpu.translate_support_code) fail_depths = faildescr._x86_current_depths + startpos = self.mc.get_relative_pos() operations = regalloc.prepare_bridge(fail_depths, inputargs, arglocs, operations, self.current_clt.allgcrefs) @@ -537,7 +536,7 @@ name = "Bridge 
# %s" % (descr_number,) self.cpu.profile_agent.native_code_written(name, rawstart, fullsize) - return ops_offset + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def write_pending_failure_recoveries(self): # for each pending guard, generate the code of the recovery stub @@ -621,7 +620,10 @@ def _find_failure_recovery_bytecode(self, faildescr): adr_jump_offset = faildescr._x86_adr_jump_offset if adr_jump_offset == 0: - raise ValueError + # This case should be prevented by the logic in compile.py: + # look for CNT_BUSY_FLAG, which disables tracing from a guard + # when another tracing from the same guard is already in progress. + raise BridgeAlreadyCompiled # follow the JMP/Jcond p = rffi.cast(rffi.INTP, adr_jump_offset) adr_target = adr_jump_offset + 4 + rffi.cast(lltype.Signed, p[0]) @@ -810,7 +812,10 @@ target = newlooptoken._x86_function_addr mc = codebuf.MachineCodeBlockWrapper() mc.JMP(imm(target)) - assert mc.get_relative_pos() <= 13 # keep in sync with prepare_loop() + if WORD == 4: # keep in sync with prepare_loop() + assert mc.get_relative_pos() == 5 + else: + assert mc.get_relative_pos() <= 13 mc.copy_to_raw_memory(oldadr) def dump(self, text): @@ -2550,3 +2555,6 @@ def not_implemented(msg): os.write(2, '[x86/asm] %s\n' % msg) raise NotImplementedError(msg) + +class BridgeAlreadyCompiled(Exception): + pass diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -188,7 +188,10 @@ # note: we need to make a copy of inputargs because possibly_free_vars # is also used on op args, which is a non-resizable list self.possibly_free_vars(list(inputargs)) - self.min_bytes_before_label = 13 + if WORD == 4: # see redirect_call_assembler() + self.min_bytes_before_label = 5 + else: + self.min_bytes_before_label = 13 return operations def prepare_bridge(self, prev_depths, inputargs, arglocs, operations, diff --git 
a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -33,6 +33,13 @@ # for the individual tests see # ====> ../../test/runner_test.py + add_loop_instructions = ['mov', 'add', 'test', 'je', 'jmp'] + if WORD == 4: + bridge_loop_instructions = ['lea', 'jmp'] + else: + # the 'mov' is part of the 'jmp' so far + bridge_loop_instructions = ['lea', 'mov', 'jmp'] + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -416,7 +423,8 @@ ] inputargs = [i0] debug._log = dlog = debug.DebugLog() - ops_offset = self.cpu.compile_loop(inputargs, operations, looptoken) + info = self.cpu.compile_loop(inputargs, operations, looptoken) + ops_offset = info.ops_offset debug._log = None # assert ops_offset is looptoken._x86_ops_offset diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -39,6 +39,7 @@ def machine_code_dump(data, originaddr, backend_name, label_list=None): objdump_backend_option = { 'x86': 'i386', + 'x86_32': 'i386', 'x86_64': 'x86-64', 'i386': 'i386', } diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -8,11 +8,15 @@ class JitPolicy(object): - def __init__(self): + def __init__(self, jithookiface=None): self.unsafe_loopy_graphs = set() self.supports_floats = False self.supports_longlong = False self.supports_singlefloats = False + if jithookiface is None: + from pypy.rlib.jit import JitHookInterface + jithookiface = JitHookInterface() + self.jithookiface = jithookiface def set_supports_floats(self, flag): self.supports_floats = flag diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ 
b/pypy/jit/metainterp/compile.py @@ -5,6 +5,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import debug_start, debug_stop, debug_print from pypy.rlib import rstack +from pypy.rlib.jit import JitDebugInfo from pypy.conftest import option from pypy.tool.sourcetools import func_with_new_name @@ -75,7 +76,7 @@ if descr is not original_jitcell_token: original_jitcell_token.record_jump_to(descr) descr.exported_state = None - op._descr = None # clear reference, mostly for tests + op.cleardescr() # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential jump. # (the following test is not enough to prevent more complicated @@ -90,8 +91,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op._descr = None # clear reference to prevent the history.Stats - # from keeping the loop alive during tests + op.cleardescr() # clear reference to prevent the history.Stats + # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -296,8 +297,6 @@ patch_new_loop_to_load_virtualizable_fields(loop, jitdriver_sd) original_jitcell_token = loop.original_jitcell_token - jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, - loop.operations, type, greenkey) loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata original_jitcell_token.number = n = globaldata.loopnumbering @@ -307,21 +306,38 @@ show_procedures(metainterp_sd, loop) loop.check_consistency() + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, + type, greenkey) + hooks.before_compile(debug_info) + else: + debug_info = None + hooks = None operations = 
get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, name=loopname) + asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, + original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile(debug_info) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # loopname = jitdriver_sd.warmstate.get_location_str(greenkey) + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset, name=loopname) @@ -332,25 +348,40 @@ def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): n = metainterp_sd.cpu.get_fail_descr_number(faildescr) - jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, - original_loop_token, operations, n) if not we_are_translated(): show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_loop_token, operations, 'bridge', + fail_descr_no=n) + hooks.before_compile_bridge(debug_info) + else: + hooks = None + debug_info = None + operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() - operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, - original_loop_token) + asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, + 
operations, + original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile_bridge(debug_info) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") # + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset) # #if metainterp_sd.warmrunnerdesc is not None: # for tests diff --git a/pypy/jit/metainterp/jitdriver.py b/pypy/jit/metainterp/jitdriver.py --- a/pypy/jit/metainterp/jitdriver.py +++ b/pypy/jit/metainterp/jitdriver.py @@ -21,7 +21,6 @@ # self.portal_finishtoken... pypy.jit.metainterp.pyjitpl # self.index ... pypy.jit.codewriter.call # self.mainjitcode ... pypy.jit.codewriter.call - # self.on_compile ... pypy.jit.metainterp.warmstate # These attributes are read by the backend in CALL_ASSEMBLER: # self.assembler_helper_adr diff --git a/pypy/jit/metainterp/jitprof.py b/pypy/jit/metainterp/jitprof.py --- a/pypy/jit/metainterp/jitprof.py +++ b/pypy/jit/metainterp/jitprof.py @@ -18,8 +18,8 @@ OPT_FORCINGS ABORT_TOO_LONG ABORT_BRIDGE +ABORT_BAD_LOOP ABORT_ESCAPE -ABORT_BAD_LOOP ABORT_FORCE_QUASIIMMUT NVIRTUALS NVHOLES @@ -30,10 +30,13 @@ TOTAL_FREED_BRIDGES """ +counter_names = [] + def _setup(): names = counters.split() for i, name in enumerate(names): globals()[name] = i + counter_names.append(name) global ncounters ncounters = len(names) _setup() diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -117,7 +117,7 @@ def optimize_loop(self, ops, optops, call_pure_results=None): loop = self.parse(ops) - token = JitCellToken() + token = JitCellToken() loop.operations = [ResOperation(rop.LABEL, 
loop.inputargs, None, descr=TargetToken(token))] + \ loop.operations if loop.operations[-1].getopnum() == rop.JUMP: diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1793,6 +1793,15 @@ def aborted_tracing(self, reason): self.staticdata.profiler.count(reason) debug_print('~~~ ABORTING TRACING') + jd_sd = self.jitdriver_sd + if not self.current_merge_points: + greenkey = None # we're in the bridge + else: + greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args] + self.staticdata.warmrunnerdesc.hooks.on_abort(reason, + jd_sd.jitdriver, + greenkey, + jd_sd.warmstate.get_location_str(greenkey)) self.staticdata.stats.aborted() def blackhole_if_trace_too_long(self): @@ -1967,8 +1976,6 @@ self.compile_loop(original_boxes, live_arg_boxes, start, resumedescr) # creation of the loop was cancelled! self.staticdata.log('cancelled, tracing more...') - #self.staticdata.log('cancelled, stopping tracing') - #raise SwitchToBlackhole(ABORT_BAD_LOOP) # Otherwise, no loop found so far, so continue tracing. 
start = len(self.history.operations) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -18,6 +18,8 @@ pc = 0 opnum = 0 + _attrs_ = ('result',) + def __init__(self, result): self.result = result @@ -62,6 +64,9 @@ def setdescr(self, descr): raise NotImplementedError + def cleardescr(self): + pass + # common methods # -------------- @@ -194,6 +199,9 @@ self._check_descr(descr) self._descr = descr + def cleardescr(self): + self._descr = None + def _check_descr(self, descr): if not we_are_translated() and getattr(descr, 'I_am_a_descr', False): return # needed for the mock case in oparser_model diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -56,8 +56,6 @@ greenfield_info = None result_type = result_kind portal_runner_ptr = "???" - on_compile = lambda *args: None - on_compile_bridge = lambda *args: None stats = history.Stats() cpu = CPUClass(rtyper, stats, None, False) diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -53,8 +53,6 @@ call_pure_results = {} class jitdriver_sd: warmstate = FakeState() - on_compile = staticmethod(lambda *args: None) - on_compile_bridge = staticmethod(lambda *args: None) virtualizable_info = None def test_compile_loop(): diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -10,57 +10,6 @@ def getloc2(g): return "in jitdriver2, with g=%d" % g -class JitDriverTests(object): - def test_on_compile(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, 
m): - called[(m, n, type)] = looptoken - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - i += 1 - - self.meta_interp(loop, [1, 4]) - assert sorted(called.keys()) == [(4, 1, "loop")] - self.meta_interp(loop, [2, 4]) - assert sorted(called.keys()) == [(4, 1, "loop"), - (4, 2, "loop")] - - def test_on_compile_bridge(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = loop - def on_compile_bridge(self, logger, orig_token, operations, n): - assert 'bridge' not in called - called['bridge'] = orig_token - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - if i >= 4: - i += 2 - i += 1 - - self.meta_interp(loop, [1, 10]) - assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] - - -class TestLLtypeSingle(JitDriverTests, LLJitMixin): - pass - class MultipleJitDriversTests(object): def test_simple(self): diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/test/test_jitiface.py @@ -0,0 +1,148 @@ + +from pypy.rlib.jit import JitDriver, JitHookInterface +from pypy.rlib import jit_hooks +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.jit.codewriter.policy import JitPolicy +from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT +from pypy.jit.metainterp.resoperation import rop +from pypy.rpython.annlowlevel import hlstr + +class TestJitHookInterface(LLJitMixin): + def test_abort_quasi_immut(self): + reasons = [] + + class MyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + assert jitdriver is myjitdriver + assert len(greenkey) == 1 + 
reasons.append(reason) + assert greenkey_repr == 'blah' + + iface = MyJitIface() + + myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'], + get_printable_location=lambda *args: 'blah') + + class Foo: + _immutable_fields_ = ['a?'] + def __init__(self, a): + self.a = a + def f(a, x): + foo = Foo(a) + total = 0 + while x > 0: + myjitdriver.jit_merge_point(foo=foo, x=x, total=total) + # read a quasi-immutable field out of a Constant + total += foo.a + foo.a += 1 + x -= 1 + return total + # + assert f(100, 7) == 721 + res = self.meta_interp(f, [100, 7], policy=JitPolicy(iface)) + assert res == 721 + assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 + + def test_on_compile(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append(("compile", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + def before_compile(self, di): + called.append(("optimize", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + #def before_optimize(self, jitdriver, logger, looptoken, oeprations, + # type, greenkey): + # called.append(("trace", greenkey[1].getint(), + # greenkey[0].getint(), type)) + + iface = MyJitIface() + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + i += 1 + + self.meta_interp(loop, [1, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop")] + self.meta_interp(loop, [2, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop"), + #("trace", 4, 2, "loop"), + ("optimize", 4, 2, "loop"), + ("compile", 4, 2, "loop")] + + def test_on_compile_bridge(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append("compile") + + def after_compile_bridge(self, di): + 
called.append("compile_bridge") + + def before_compile_bridge(self, di): + called.append("before_compile_bridge") + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + if i >= 4: + i += 2 + i += 1 + + self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitIface())) + assert called == ["compile", "before_compile_bridge", "compile_bridge"] + + def test_resop_interface(self): + driver = JitDriver(greens = [], reds = ['i']) + + def loop(i): + while i > 0: + driver.jit_merge_point(i=i) + i -= 1 + + def main(): + loop(1) + op = jit_hooks.resop_new(rop.INT_ADD, + [jit_hooks.boxint_new(3), + jit_hooks.boxint_new(4)], + jit_hooks.boxint_new(1)) + assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add' + assert jit_hooks.resop_getopnum(op) == rop.INT_ADD + box = jit_hooks.resop_getarg(op, 0) + assert jit_hooks.box_getint(box) == 3 + box2 = jit_hooks.box_clone(box) + assert box2 != box + assert jit_hooks.box_getint(box2) == 3 + assert not jit_hooks.box_isconst(box2) + box3 = jit_hooks.box_constbox(box) + assert jit_hooks.box_getint(box) == 3 + assert jit_hooks.box_isconst(box3) + box4 = jit_hooks.box_nonconstbox(box) + assert not jit_hooks.box_isconst(box4) + box5 = jit_hooks.boxint_new(18) + jit_hooks.resop_setarg(op, 0, box5) + assert jit_hooks.resop_getarg(op, 0) == box5 + box6 = jit_hooks.resop_getresult(op) + assert jit_hooks.box_getint(box6) == 1 + jit_hooks.resop_setresult(op, box5) + assert jit_hooks.resop_getresult(op) == box5 + + self.meta_interp(main, []) diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -3,7 +3,9 @@ from pypy.jit.backend.llgraph import runner from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, 
dont_look_inside, hint +from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_opnum from pypy.jit.metainterp.jitprof import Profiler +from pypy.jit.metainterp.resoperation import rop from pypy.rpython.lltypesystem import lltype, llmemory class TranslationTest: @@ -22,6 +24,7 @@ # - jitdriver hooks # - two JITs # - string concatenation, slicing and comparison + # - jit hooks interface class Frame(object): _virtualizable2_ = ['l[*]'] @@ -91,7 +94,9 @@ return f.i # def main(i, j): - return f(i) - f2(i+j, i, j) + op = resop_new(rop.INT_ADD, [boxint_new(3), boxint_new(5)], + boxint_new(8)) + return f(i) - f2(i+j, i, j) + resop_opnum(op) res = ll_meta_interp(main, [40, 5], CPUClass=self.CPUClass, type_system=self.type_system, listops=True) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -1,4 +1,5 @@ import sys, py +from pypy.tool.sourcetools import func_with_new_name from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.annlowlevel import llhelper, MixLevelHelperAnnotator,\ cast_base_ptr_to_instance, hlstr @@ -112,7 +113,7 @@ return ll_meta_interp(function, args, backendopt=backendopt, translate_support_code=True, **kwds) -def _find_jit_marker(graphs, marker_name): +def _find_jit_marker(graphs, marker_name, check_driver=True): results = [] for graph in graphs: for block in graph.iterblocks(): @@ -120,8 +121,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - (op.args[1].value is None or - op.args[1].value.active)): # the jitdriver + (not check_driver or op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -140,6 +141,9 @@ "found several jit_merge_points in the same graph") return results +def find_access_helpers(graphs): + return _find_jit_marker(graphs, 'access_helper', False) + def locate_jit_merge_point(graph): 
[(graph, block, pos)] = find_jit_merge_points([graph]) return block, pos, block.operations[pos] @@ -206,6 +210,7 @@ vrefinfo = VirtualRefInfo(self) self.codewriter.setup_vrefinfo(vrefinfo) # + self.hooks = policy.jithookiface self.make_virtualizable_infos() self.make_exception_classes() self.make_driverhook_graphs() @@ -213,6 +218,7 @@ self.rewrite_jit_merge_points(policy) verbose = False # not self.cpu.translate_support_code + self.rewrite_access_helpers() self.codewriter.make_jitcodes(verbose=verbose) self.rewrite_can_enter_jits() self.rewrite_set_param() @@ -619,6 +625,24 @@ graph = self.annhelper.getgraph(func, args_s, s_result) return self.annhelper.graph2delayed(graph, FUNC) + def rewrite_access_helpers(self): + ah = find_access_helpers(self.translator.graphs) + for graph, block, index in ah: + op = block.operations[index] + self.rewrite_access_helper(op) + + def rewrite_access_helper(self, op): + ARGS = [arg.concretetype for arg in op.args[2:]] + RESULT = op.result.concretetype + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT)) + # make sure we make a copy of function so it no longer belongs + # to extregistry + func = op.args[1].value + func = func_with_new_name(func, func.func_name + '_compiled') + ptr = self.helper_func(FUNCPTR, func) + op.opname = 'direct_call' + op.args = [Constant(ptr, FUNCPTR)] + op.args[2:] + def rewrite_jit_merge_points(self, policy): for jd in self.jitdrivers_sd: self.rewrite_jit_merge_point(jd, policy) diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -596,20 +596,6 @@ return fn(*greenargs) self.should_unroll_one_iteration = should_unroll_one_iteration - if hasattr(jd.jitdriver, 'on_compile'): - def on_compile(logger, token, operations, type, greenkey): - greenargs = unwrap_greenkey(greenkey) - return jd.jitdriver.on_compile(logger, token, operations, type, - *greenargs) - def on_compile_bridge(logger, orig_token, 
operations, n): - return jd.jitdriver.on_compile_bridge(logger, orig_token, - operations, n) - jd.on_compile = on_compile - jd.on_compile_bridge = on_compile_bridge - else: - jd.on_compile = lambda *args: None - jd.on_compile_bridge = lambda *args: None - redargtypes = ''.join([kind[0] for kind in jd.red_args_types]) def get_assembler_token(greenkey): diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -89,11 +89,18 @@ assert typ == 'class' return self.model.ConstObj(ootype.cast_to_object(obj)) - def get_descr(self, poss_descr): + def get_descr(self, poss_descr, allow_invent): if poss_descr.startswith('<'): return None - else: + try: return self._consts[poss_descr] + except KeyError: + if allow_invent: + int(poss_descr) + token = self.model.JitCellToken() + tt = self.model.TargetToken(token) + self._consts[poss_descr] = tt + return tt def box_for_var(self, elem): try: @@ -186,7 +193,8 @@ poss_descr = allargs[-1].strip() if poss_descr.startswith('descr='): - descr = self.get_descr(poss_descr[len('descr='):]) + descr = self.get_descr(poss_descr[len('descr='):], + opname == 'label') allargs = allargs[:-1] for arg in allargs: arg = arg.strip() diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py +++ b/pypy/jit/tool/oparser_model.py @@ -6,7 +6,7 @@ from pypy.jit.metainterp.history import TreeLoop, JitCellToken from pypy.jit.metainterp.history import Box, BoxInt, BoxFloat from pypy.jit.metainterp.history import ConstInt, ConstObj, ConstPtr, ConstFloat - from pypy.jit.metainterp.history import BasicFailDescr + from pypy.jit.metainterp.history import BasicFailDescr, TargetToken from pypy.jit.metainterp.typesystem import llhelper from pypy.jit.metainterp.history import get_const_ptr_for_string @@ -42,6 +42,10 @@ class JitCellToken(object): I_am_a_descr = True + class TargetToken(object): + def __init__(self, jct): + pass + class 
BasicFailDescr(object): I_am_a_descr = True diff --git a/pypy/jit/tool/test/test_oparser.py b/pypy/jit/tool/test/test_oparser.py --- a/pypy/jit/tool/test/test_oparser.py +++ b/pypy/jit/tool/test/test_oparser.py @@ -4,7 +4,8 @@ from pypy.jit.tool.oparser import parse, OpParser from pypy.jit.metainterp.resoperation import rop -from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken +from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken,\ + TargetToken class BaseTestOparser(object): @@ -243,6 +244,16 @@ b = loop.getboxes() assert isinstance(b.sum0, BoxInt) + def test_label(self): + x = """ + [i0] + label(i0, descr=1) + jump(i0, descr=1) + """ + loop = self.parse(x) + assert loop.operations[0].getdescr() is loop.operations[1].getdescr() + assert isinstance(loop.operations[0].getdescr(), TargetToken) + class ForbiddenModule(object): def __init__(self, name, old_mod): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -16,24 +16,28 @@ virtualizables=['frame'], reds=['result_size', 'frame', 'ri', 'self', 'result'], get_printable_location=signature.new_printable_location('numpy'), + name='numpy', ) all_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('all'), + name='numpy_all', ) any_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('any'), + name='numpy_any', ) slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['self', 'frame', 'source', 'res_iter'], get_printable_location=signature.new_printable_location('slice'), + name='numpy_slice', ) def _find_shape_and_elems(space, w_iterable): @@ -297,6 +301,7 @@ 
greens=['shapelen', 'sig'], reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'], get_printable_location=signature.new_printable_location(op_name), + name='numpy_' + op_name, ) def loop(self): sig = self.find_sig() diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -14,6 +14,7 @@ virtualizables = ["frame"], reds = ["frame", "self", "dtype", "value", "obj"], get_printable_location=new_printable_location('reduce'), + name='numpy_reduce', ) class W_Ufunc(Wrappable): diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -7,16 +7,21 @@ interpleveldefs = { 'set_param': 'interp_jit.set_param', 'residual_call': 'interp_jit.residual_call', - 'set_compile_hook': 'interp_jit.set_compile_hook', - 'DebugMergePoint': 'interp_resop.W_DebugMergePoint', + 'set_compile_hook': 'interp_resop.set_compile_hook', + 'set_optimize_hook': 'interp_resop.set_optimize_hook', + 'set_abort_hook': 'interp_resop.set_abort_hook', + 'ResOperation': 'interp_resop.WrappedOp', + 'Box': 'interp_resop.WrappedBox', } def setup_after_space_initialization(self): # force the __extend__ hacks to occur early from pypy.module.pypyjit.interp_jit import pypyjitdriver + from pypy.module.pypyjit.policy import pypy_hooks # add the 'defaults' attribute from pypy.rlib.jit import PARAMETERS space = self.space pypyjitdriver.space = space w_obj = space.wrap(PARAMETERS) space.setattr(space.wrap(self), space.wrap('defaults'), w_obj) + pypy_hooks.space = space diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -13,11 +13,7 @@ from pypy.interpreter.pycode import PyCode, CO_GENERATOR from pypy.interpreter.pyframe import PyFrame from pypy.interpreter.pyopcode import 
ExitFrame -from pypy.interpreter.gateway import unwrap_spec from opcode import opmap -from pypy.rlib.nonconst import NonConstant -from pypy.jit.metainterp.resoperation import rop -from pypy.module.pypyjit.interp_resop import debug_merge_point_from_boxes PyFrame._virtualizable2_ = ['last_instr', 'pycode', 'valuestackdepth', 'locals_stack_w[*]', @@ -51,72 +47,19 @@ def should_unroll_one_iteration(next_instr, is_being_profiled, bytecode): return (bytecode.co_flags & CO_GENERATOR) != 0 -def wrap_oplist(space, logops, operations): - list_w = [] - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - list_w.append(space.wrap(debug_merge_point_from_boxes( - op.getarglist()))) - else: - list_w.append(space.wrap(logops.repr_of_resop(op))) - return list_w - class PyPyJitDriver(JitDriver): reds = ['frame', 'ec'] greens = ['next_instr', 'is_being_profiled', 'pycode'] virtualizables = ['frame'] - def on_compile(self, logger, looptoken, operations, type, next_instr, - is_being_profiled, ll_pycode): - from pypy.rpython.annlowlevel import cast_base_ptr_to_instance - - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - pycode = cast_base_ptr_to_instance(PyCode, ll_pycode) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap(type), - space.newtuple([pycode, - space.wrap(next_instr), - space.wrap(is_being_profiled)]), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, 
operations) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap('bridge'), - space.wrap(n), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - pypyjitdriver = PyPyJitDriver(get_printable_location = get_printable_location, get_jitcell_at = get_jitcell_at, set_jitcell_at = set_jitcell_at, confirm_enter_jit = confirm_enter_jit, can_never_inline = can_never_inline, should_unroll_one_iteration = - should_unroll_one_iteration) + should_unroll_one_iteration, + name='pypyjit') class __extend__(PyFrame): @@ -223,34 +166,3 @@ '''For testing. Invokes callable(...), but without letting the JIT follow the call.''' return space.call_args(w_callable, __args__) - -class Cache(object): - in_recursion = False - - def __init__(self, space): - self.w_compile_hook = space.w_None - -def set_compile_hook(space, w_hook): - """ set_compile_hook(hook) - - Set a compiling hook that will be called each time a loop is compiled. - The hook will be called with the following signature: - hook(merge_point_type, loop_type, greenkey or guard_number, operations) - - for now merge point type is always `main` - - loop_type can be either `loop` `entry_bridge` or `bridge` - in case loop is not `bridge`, greenkey will be a set of constants - for jit merge point. in case it's `main` it'll be a tuple - (code, offset, is_being_profiled) - - Note that jit hook is not reentrant. It means that if the code - inside the jit hook is itself jitted, it will get compiled, but the - jit hook won't be called for that. 
- - XXX write down what else - """ - cache = space.fromcache(Cache) - cache.w_compile_hook = w_hook - cache.in_recursion = NonConstant(False) - return space.w_None diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,41 +1,197 @@ -from pypy.interpreter.typedef import TypeDef, interp_attrproperty +from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import unwrap_spec, interp2app +from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode -from pypy.rpython.lltypesystem import lltype -from pypy.rpython.annlowlevel import cast_base_ptr_to_instance +from pypy.interpreter.error import OperationError +from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.annlowlevel import cast_base_ptr_to_instance, hlstr from pypy.rpython.lltypesystem.rclass import OBJECT +from pypy.jit.metainterp.resoperation import rop, AbstractResOp +from pypy.rlib.nonconst import NonConstant +from pypy.rlib import jit_hooks -class W_DebugMergePoint(Wrappable): - """ A class representing debug_merge_point JIT operation +class Cache(object): + in_recursion = False + + def __init__(self, space): + self.w_compile_hook = space.w_None + self.w_abort_hook = space.w_None + self.w_optimize_hook = space.w_None + +def wrap_greenkey(space, jitdriver, greenkey, greenkey_repr): + if greenkey is None: + return space.w_None + jitdriver_name = jitdriver.name + if jitdriver_name == 'pypyjit': + next_instr = greenkey[0].getint() + is_being_profiled = greenkey[1].getint() + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + greenkey[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, ll_code) + return space.newtuple([space.wrap(pycode), space.wrap(next_instr), + space.newbool(bool(is_being_profiled))]) + else: + 
return space.wrap(greenkey_repr) + +def set_compile_hook(space, w_hook): + """ set_compile_hook(hook) + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where assembler starts, + can be accessed via ctypes, assembler_length is the length of compiled + asm + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. """ + cache = space.fromcache(Cache) + cache.w_compile_hook = w_hook + cache.in_recursion = NonConstant(False) - def __init__(self, mp_no, offset, pycode): - self.mp_no = mp_no +def set_optimize_hook(space, w_hook): + """ set_optimize_hook(hook) + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows adding additional + optimizations at the Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + Note that jit hook is not reentrant.
It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + Result value will be the resulting list of operations, or None + """ + cache = space.fromcache(Cache) + cache.w_optimize_hook = w_hook + cache.in_recursion = NonConstant(False) + +def set_abort_hook(space, w_hook): + """ set_abort_hook(hook) + + Set a hook (callable) that will be called each time tracing is + aborted for some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for abort, see documentation for set_compile_hook + for descriptions of other arguments. + """ + cache = space.fromcache(Cache) + cache.w_abort_hook = w_hook + cache.in_recursion = NonConstant(False) + +def wrap_oplist(space, logops, operations, ops_offset=None): + l_w = [] + for op in operations: + if ops_offset is None: + ofs = -1 + else: + ofs = ops_offset.get(op, 0) + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) + return l_w + +class WrappedBox(Wrappable): + """ A class representing a single box + """ + def __init__(self, llbox): + self.llbox = llbox + + def descr_getint(self, space): + return space.wrap(jit_hooks.box_getint(self.llbox)) + +@unwrap_spec(no=int) +def descr_new_box(space, w_tp, no): + return WrappedBox(jit_hooks.boxint_new(no)) + +WrappedBox.typedef = TypeDef( + 'Box', + __new__ = interp2app(descr_new_box), + getint = interp2app(WrappedBox.descr_getint), +) + +@unwrap_spec(num=int, offset=int, repr=str, res=WrappedBox) +def descr_new_resop(space, w_tp, num, w_args, res, offset=-1, + repr=''): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + if res is None: + llres = jit_hooks.emptyval() + else: + llres = res.llbox + return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) + +class WrappedOp(Wrappable): + """ A class representing a single ResOperation, wrapped nicely + """
+ def __init__(self, op, offset, repr_of_resop): + self.op = op self.offset = offset - self.pycode = pycode + self.repr_of_resop = repr_of_resop def descr_repr(self, space): - return space.wrap('DebugMergePoint()') + return space.wrap(self.repr_of_resop) - at unwrap_spec(mp_no=int, offset=int, pycode=PyCode) -def new_debug_merge_point(space, w_tp, mp_no, offset, pycode): - return W_DebugMergePoint(mp_no, offset, pycode) + def descr_num(self, space): + return space.wrap(jit_hooks.resop_getopnum(self.op)) -def debug_merge_point_from_boxes(boxes): - mp_no = boxes[0].getint() - offset = boxes[2].getint() - llcode = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), - boxes[4].getref_base()) - pycode = cast_base_ptr_to_instance(PyCode, llcode) - assert pycode is not None - return W_DebugMergePoint(mp_no, offset, pycode) + def descr_name(self, space): + return space.wrap(hlstr(jit_hooks.resop_getopname(self.op))) -W_DebugMergePoint.typedef = TypeDef( - 'DebugMergePoint', - __new__ = interp2app(new_debug_merge_point), - __doc__ = W_DebugMergePoint.__doc__, - __repr__ = interp2app(W_DebugMergePoint.descr_repr), - code = interp_attrproperty('pycode', W_DebugMergePoint), + @unwrap_spec(no=int) + def descr_getarg(self, space, no): + return WrappedBox(jit_hooks.resop_getarg(self.op, no)) + + @unwrap_spec(no=int, box=WrappedBox) + def descr_setarg(self, space, no, box): + jit_hooks.resop_setarg(self.op, no, box.llbox) + + def descr_getresult(self, space): + return WrappedBox(jit_hooks.resop_getresult(self.op)) + + def descr_setresult(self, space, w_box): + box = space.interp_w(WrappedBox, w_box) + jit_hooks.resop_setresult(self.op, box.llbox) + +WrappedOp.typedef = TypeDef( + 'ResOperation', + __doc__ = WrappedOp.__doc__, + __new__ = interp2app(descr_new_resop), + __repr__ = interp2app(WrappedOp.descr_repr), + num = GetSetProperty(WrappedOp.descr_num), + name = GetSetProperty(WrappedOp.descr_name), + getarg = interp2app(WrappedOp.descr_getarg), + setarg = 
interp2app(WrappedOp.descr_setarg), + result = GetSetProperty(WrappedOp.descr_getresult, + WrappedOp.descr_setresult) ) +WrappedOp.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -1,4 +1,112 @@ from pypy.jit.codewriter.policy import JitPolicy +from pypy.rlib.jit import JitHookInterface +from pypy.rlib import jit_hooks +from pypy.interpreter.error import OperationError +from pypy.jit.metainterp.jitprof import counter_names +from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\ + WrappedOp + +class PyPyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_abort_hook): + cache.in_recursion = True + try: + try: + space.call_function(cache.w_abort_hook, + space.wrap(jitdriver.name), + wrap_greenkey(space, jitdriver, + greenkey, greenkey_repr), + space.wrap(counter_names[reason])) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_abort_hook) + finally: + cache.in_recursion = False + + def after_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._compile_hook(debug_info, w_greenkey) + + def after_compile_bridge(self, debug_info): + self._compile_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) + + def before_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._optimize_hook(debug_info, w_greenkey) + + def before_compile_bridge(self, debug_info): + self._optimize_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) + + def _compile_hook(self, debug_info, w_arg): + space = self.space + cache = 
space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_compile_hook): + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations, + debug_info.asminfo.ops_offset) + cache.in_recursion = True + try: + try: + jd_name = debug_info.get_jitdriver().name + asminfo = debug_info.asminfo + space.call_function(cache.w_compile_hook, + space.wrap(jd_name), + space.wrap(debug_info.type), + w_arg, + space.newlist(list_w), + space.wrap(asminfo.asmaddr), + space.wrap(asminfo.asmlen)) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + cache.in_recursion = False + + def _optimize_hook(self, debug_info, w_arg): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_optimize_hook): + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations) + cache.in_recursion = True + try: + try: + jd_name = debug_info.get_jitdriver().name + w_res = space.call_function(cache.w_optimize_hook, + space.wrap(jd_name), + space.wrap(debug_info.type), + w_arg, + space.newlist(list_w)) + if space.is_w(w_res, space.w_None): + return + l = [] + for w_item in space.listview(w_res): + item = space.interp_w(WrappedOp, w_item) + l.append(jit_hooks._cast_to_resop(item.op)) + del debug_info.operations[:] # modifying operations above is + # probably not a great idea since types may not work + # and we'll end up with half-working list and + # a segfault/fatal RPython error + for elem in l: + debug_info.operations.append(elem) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + cache.in_recursion = False + +pypy_hooks = PyPyJitIface() class PyPyJitPolicy(JitPolicy): @@ -12,12 +120,16 @@ modname == 'thread.os_thread'): return True if '.' 
in modname: - modname, _ = modname.split('.', 1) + modname, rest = modname.split('.', 1) + else: + rest = '' if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions', 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', 'mmap', 'marshal']: + if modname == 'pypyjit' and 'interp_resop' in rest: + return False return True return False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -1,22 +1,40 @@ import py from pypy.conftest import gettestobjspace, option +from pypy.interpreter.gateway import interp2app from pypy.interpreter.pycode import PyCode -from pypy.interpreter.gateway import interp2app -from pypy.jit.metainterp.history import JitCellToken -from pypy.jit.metainterp.resoperation import ResOperation, rop +from pypy.jit.metainterp.history import JitCellToken, ConstInt, ConstPtr +from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.logger import Logger from pypy.rpython.annlowlevel import (cast_instance_to_base_ptr, cast_base_ptr_to_instance) from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.lltypesystem.rclass import OBJECT from pypy.module.pypyjit.interp_jit import pypyjitdriver +from pypy.module.pypyjit.policy import pypy_hooks from pypy.jit.tool.oparser import parse from pypy.jit.metainterp.typesystem import llhelper +from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG +from pypy.rlib.jit import JitDebugInfo, AsmInfo + +class MockJitDriverSD(object): + class warmstate(object): + @staticmethod + def get_location_str(boxes): + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + boxes[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, ll_code) + return pycode.co_name + + jitdriver = pypyjitdriver + class MockSD(object): class cpu(object): 
ts = llhelper + jitdrivers_sd = [MockJitDriverSD] + class AppTestJitHook(object): def setup_class(cls): if option.runappdirect: @@ -24,9 +42,9 @@ space = gettestobjspace(usemodules=('pypyjit',)) cls.space = space w_f = space.appexec([], """(): - def f(): + def function(): pass - return f + return function """) cls.w_f = w_f ll_code = cast_instance_to_base_ptr(w_f.code) @@ -34,41 +52,73 @@ logger = Logger(MockSD()) oplist = parse(""" - [i1, i2] + [i1, i2, p2] i3 = int_add(i1, i2) debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0)) + guard_nonnull(p2) [] guard_true(i3) [] """, namespace={'ptr0': code_gcref}).operations + greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] + offset = {} + for i, op in enumerate(oplist): + if i != 1: + offset[op] = i + + di_loop = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop_optimize = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop.asminfo = AsmInfo(offset, 0, 0) + di_bridge = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'bridge', fail_descr_no=0) + di_bridge.asminfo = AsmInfo(offset, 0, 0) def interp_on_compile(): - pypyjitdriver.on_compile(logger, JitCellToken(), oplist, 'loop', - 0, False, ll_code) + di_loop.oplist = cls.oplist + pypy_hooks.after_compile(di_loop) def interp_on_compile_bridge(): - pypyjitdriver.on_compile_bridge(logger, JitCellToken(), oplist, 0) + pypy_hooks.after_compile_bridge(di_bridge) + + def interp_on_optimize(): + di_loop_optimize.oplist = cls.oplist + pypy_hooks.before_compile(di_loop_optimize) + + def interp_on_abort(): + pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey, + 'blah') cls.w_on_compile = space.wrap(interp2app(interp_on_compile)) cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) + cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) + cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) + 
cls.orig_oplist = oplist + + def setup_method(self, meth): + self.__class__.oplist = self.orig_oplist[:] def test_on_compile(self): import pypyjit all = [] - def hook(*args): - assert args[0] == 'main' - assert args[1] in ['loop', 'bridge'] - all.append(args[2:]) + def hook(name, looptype, tuple_or_guard_no, ops, asmstart, asmlen): + all.append((name, looptype, tuple_or_guard_no, ops)) self.on_compile() pypyjit.set_compile_hook(hook) assert not all self.on_compile() assert len(all) == 1 - assert all[0][0][0].co_name == 'f' - assert all[0][0][1] == 0 - assert all[0][0][2] == False - assert len(all[0][1]) == 3 - assert 'int_add' in all[0][1][0] + elem = all[0] + assert elem[0] == 'pypyjit' + assert elem[2][0].co_name == 'function' + assert elem[2][1] == 0 + assert elem[2][2] == False + assert len(elem[3]) == 4 + int_add = elem[3][0] + #assert int_add.name == 'int_add' + assert int_add.num == self.int_add_num self.on_compile_bridge() assert len(all) == 2 pypyjit.set_compile_hook(None) @@ -116,11 +166,48 @@ pypyjit.set_compile_hook(hook) self.on_compile() - dmp = l[0][3][1] - assert isinstance(dmp, pypyjit.DebugMergePoint) - assert dmp.code is self.f.func_code + op = l[0][3][1] + assert isinstance(op, pypyjit.ResOperation) + assert 'function' in repr(op) + + def test_on_abort(self): + import pypyjit + l = [] + + def hook(jitdriver_name, greenkey, reason): + l.append((jitdriver_name, reason)) + + pypyjit.set_abort_hook(hook) + self.on_abort() + assert l == [('pypyjit', 'ABORT_TOO_LONG')] + + def test_on_optimize(self): + import pypyjit + l = [] + + def hook(name, looptype, tuple_or_guard_no, ops, *args): + l.append(ops) + + def optimize_hook(name, looptype, tuple_or_guard_no, ops): + return [] + + pypyjit.set_compile_hook(hook) + pypyjit.set_optimize_hook(optimize_hook) + self.on_optimize() + self.on_compile() + assert l == [[]] def test_creation(self): - import pypyjit - dmp = pypyjit.DebugMergePoint(0, 0, self.f.func_code) - assert dmp.code is self.f.func_code + from 
pypyjit import Box, ResOperation + + op = ResOperation(self.int_add_num, [Box(1), Box(3)], Box(4)) + assert op.num == self.int_add_num + assert op.name == 'int_add' + box = op.getarg(0) + assert box.getint() == 1 + box2 = op.result + assert box2.getint() == 4 + op.setarg(0, box2) + assert op.getarg(0).getint() == 4 + op.result = box + assert op.result.getint() == 1 diff --git a/pypy/module/pypyjit/test/test_policy.py b/pypy/module/pypyjit/test/test_policy.py --- a/pypy/module/pypyjit/test/test_policy.py +++ b/pypy/module/pypyjit/test/test_policy.py @@ -52,6 +52,7 @@ for modname in 'pypyjit', 'signal', 'micronumpy', 'math', 'imp': assert pypypolicy.look_inside_pypy_module(modname) assert pypypolicy.look_inside_pypy_module(modname + '.foo') + assert not pypypolicy.look_inside_pypy_module('pypyjit.interp_resop') def test_see_jit_module(): assert pypypolicy.look_inside_pypy_module('pypyjit.interp_jit') diff --git a/pypy/module/pypyjit/test/test_ztranslation.py b/pypy/module/pypyjit/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/pypyjit/test/test_ztranslation.py @@ -0,0 +1,5 @@ + +from pypy.objspace.fake.checkmodule import checkmodule + +def test_pypyjit_translates(): + checkmodule('pypyjit') diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -6,7 +6,6 @@ from pypy.rlib.objectmodel import CDefinedIntSymbolic, keepalive_until_here, specialize from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.extregistry import ExtRegistryEntry -from pypy.tool.sourcetools import func_with_new_name DEBUG_ELIDABLE_FUNCTIONS = False @@ -422,13 +421,16 @@ active = True # if set to False, this JitDriver is ignored virtualizables = [] + name = 'jitdriver' def __init__(self, greens=None, reds=None, virtualizables=None, get_jitcell_at=None, set_jitcell_at=None, get_printable_location=None, confirm_enter_jit=None, - can_never_inline=None, should_unroll_one_iteration=None): + can_never_inline=None, 
should_unroll_one_iteration=None, + name='jitdriver'): if greens is not None: self.greens = greens + self.name = name if reds is not None: self.reds = reds if not hasattr(self, 'greens') or not hasattr(self, 'reds'): @@ -462,23 +464,6 @@ # special-cased by ExtRegistryEntry pass - def on_compile(self, logger, looptoken, operations, type, *greenargs): - """ A hook called when loop is compiled. Overwrite - for your own jitdriver if you want to do something special, like - call applevel code - """ - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - """ A hook called when a bridge is compiled. Overwrite - for your own jitdriver if you want to do something special - """ - - # note: if you overwrite this functions with the above signature it'll - # work, but the *greenargs is different for each jitdriver, so we - # can't share the same methods - del on_compile - del on_compile_bridge - def _make_extregistryentries(self): # workaround: we cannot declare ExtRegistryEntries for functions # used as methods of a frozen object, but we can attach the @@ -640,7 +625,6 @@ def specialize_call(self, hop, **kwds_i): # XXX to be complete, this could also check that the concretetype # of the variables are the same for each of the calls. - from pypy.rpython.error import TyperError from pypy.rpython.lltypesystem import lltype driver = self.instance.im_self greens_v = [] @@ -753,6 +737,105 @@ return hop.genop('jit_marker', vlist, resulttype=lltype.Void) +class AsmInfo(object): + """ An addition to JitDebugInfo concerning assembler. Attributes: + + ops_offset - dict of offsets of operations or None + asmaddr - (int) raw address of assembler block + asmlen - assembler block length + """ + def __init__(self, ops_offset, asmaddr, asmlen): + self.ops_offset = ops_offset + self.asmaddr = asmaddr + self.asmlen = asmlen + +class JitDebugInfo(object): + """ An object representing debug info. 
Attributes meanings: + + greenkey - a list of green boxes or None for bridge + logger - an instance of jit.metainterp.logger.LogOperations + type - either 'loop', 'entry bridge' or 'bridge' + looptoken - description of a loop + fail_descr_no - number of failing descr for bridges, -1 otherwise + asminfo - extra assembler information + """ + + asminfo = None + def __init__(self, jitdriver_sd, logger, looptoken, operations, type, + greenkey=None, fail_descr_no=-1): + self.jitdriver_sd = jitdriver_sd + self.logger = logger + self.looptoken = looptoken + self.operations = operations + self.type = type + if type == 'bridge': + assert fail_descr_no != -1 + else: + assert greenkey is not None + self.greenkey = greenkey + self.fail_descr_no = fail_descr_no + + def get_jitdriver(self): + """ Return the jitdriver on which the jitting started + """ + return self.jitdriver_sd.jitdriver + + def get_greenkey_repr(self): + """ Return the string repr of a greenkey + """ + return self.jitdriver_sd.warmstate.get_location_str(self.greenkey) + +class JitHookInterface(object): + """ This is the main connector between the JIT and the interpreter. + Several methods on this class will be invoked at various stages + of JIT running like JIT loops compiled, aborts etc. + An instance of this class will be available as policy.jithookiface. + """ + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + """ A hook called each time a loop is aborted with jitdriver and + greenkey where it started, reason is a string why it got aborted + """ + + #def before_optimize(self, debug_info): + # """ A hook called before optimizer is run, called with instance of + # JitDebugInfo. Overwrite for custom behavior + # """ + # DISABLED + + def before_compile(self, debug_info): + """ A hook called after a loop is optimized, before compiling assembler, + called with JitDebugInfo instance.
Overwrite for custom behavior + """ + + def after_compile(self, debug_info): + """ A hook called after a loop has compiled assembler, + called with JitDebugInfo instance. Overwrite for custom behavior + """ + + #def before_optimize_bridge(self, debug_info): + # operations, fail_descr_no): + # """ A hook called before a bridge is optimized. + # Called with JitDebugInfo instance, overwrite for + # custom behavior + # """ + # DISABLED + + def before_compile_bridge(self, debug_info): + """ A hook called before a bridge is compiled, but after optimizations + are performed. Called with instance of debug_info, overwrite for + custom behavior + """ + + def after_compile_bridge(self, debug_info): + """ A hook called after a bridge is compiled, called with JitDebugInfo + instance, overwrite for custom behavior + """ + + def get_stats(self): + """ Returns various statistics + """ + raise NotImplementedError + def record_known_class(value, cls): """ Assure the JIT that value is an instance of cls. This is not a precise @@ -760,7 +843,6 @@ """ assert isinstance(value, cls) - class Entry(ExtRegistryEntry): _about_ = record_known_class @@ -771,7 +853,8 @@ assert isinstance(s_inst, annmodel.SomeInstance) def specialize_call(self, hop): - from pypy.rpython.lltypesystem import lltype, rclass + from pypy.rpython.lltypesystem import rclass, lltype + classrepr = rclass.get_type_repr(hop.rtyper) hop.exception_cannot_occur() diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/jit_hooks.py @@ -0,0 +1,106 @@ + +from pypy.rpython.extregistry import ExtRegistryEntry +from pypy.annotation import model as annmodel +from pypy.rpython.lltypesystem import llmemory, lltype +from pypy.rpython.lltypesystem import rclass +from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\ + cast_base_ptr_to_instance, llstr, hlstr +from pypy.rlib.objectmodel import specialize + +def register_helper(s_result): + def wrapper(helper): + class 
Entry(ExtRegistryEntry): + _about_ = helper + + def compute_result_annotation(self, *args): + return s_result + + def specialize_call(self, hop): + from pypy.rpython.lltypesystem import lltype + + c_func = hop.inputconst(lltype.Void, helper) + c_name = hop.inputconst(lltype.Void, 'access_helper') + args_v = [hop.inputarg(arg, arg=i) + for i, arg in enumerate(hop.args_r)] + return hop.genop('jit_marker', [c_name, c_func] + args_v, + resulttype=hop.r_result) + return helper + return wrapper + +def _cast_to_box(llref): + from pypy.jit.metainterp.history import AbstractValue + + ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref) + return cast_base_ptr_to_instance(AbstractValue, ptr) + +def _cast_to_resop(llref): + from pypy.jit.metainterp.resoperation import AbstractResOp + + ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref) + return cast_base_ptr_to_instance(AbstractResOp, ptr) + + at specialize.argtype(0) +def _cast_to_gcref(obj): + return lltype.cast_opaque_ptr(llmemory.GCREF, + cast_instance_to_base_ptr(obj)) + +def emptyval(): + return lltype.nullptr(llmemory.GCREF.TO) + + at register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_new(no, llargs, llres): + from pypy.jit.metainterp.history import ResOperation + + args = [_cast_to_box(llargs[i]) for i in range(len(llargs))] + res = _cast_to_box(llres) + return _cast_to_gcref(ResOperation(no, args, res)) + + at register_helper(annmodel.SomePtr(llmemory.GCREF)) +def boxint_new(no): + from pypy.jit.metainterp.history import BoxInt + return _cast_to_gcref(BoxInt(no)) + + at register_helper(annmodel.SomeInteger()) +def resop_getopnum(llop): + return _cast_to_resop(llop).getopnum() + + at register_helper(annmodel.SomeString(can_be_None=True)) +def resop_getopname(llop): + return llstr(_cast_to_resop(llop).getopname()) + + at register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_getarg(llop, no): + return _cast_to_gcref(_cast_to_resop(llop).getarg(no)) + + at register_helper(annmodel.s_None) +def 
resop_setarg(llop, no, llbox): + _cast_to_resop(llop).setarg(no, _cast_to_box(llbox)) + + at register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_getresult(llop): + return _cast_to_gcref(_cast_to_resop(llop).result) + + at register_helper(annmodel.s_None) +def resop_setresult(llop, llbox): + _cast_to_resop(llop).result = _cast_to_box(llbox) + + at register_helper(annmodel.SomeInteger()) +def box_getint(llbox): + return _cast_to_box(llbox).getint() + + at register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_clone(llbox): + return _cast_to_gcref(_cast_to_box(llbox).clonebox()) + + at register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_constbox(llbox): + return _cast_to_gcref(_cast_to_box(llbox).constbox()) + + at register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_nonconstbox(llbox): + return _cast_to_gcref(_cast_to_box(llbox).nonconstbox()) + + at register_helper(annmodel.SomeBool()) +def box_isconst(llbox): + from pypy.jit.metainterp.history import Const + return isinstance(_cast_to_box(llbox), Const) diff --git a/pypy/rlib/rsre/rsre_jit.py b/pypy/rlib/rsre/rsre_jit.py --- a/pypy/rlib/rsre/rsre_jit.py +++ b/pypy/rlib/rsre/rsre_jit.py @@ -5,7 +5,7 @@ active = True def __init__(self, name, debugprint, **kwds): - JitDriver.__init__(self, **kwds) + JitDriver.__init__(self, name='rsre_' + name, **kwds) # def get_printable_location(*args): # we print based on indices in 'args'. 
We first print

diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py
--- a/pypy/translator/goal/targetpypystandalone.py
+++ b/pypy/translator/goal/targetpypystandalone.py
@@ -226,8 +226,8 @@
         return self.get_entry_point(config)
 
     def jitpolicy(self, driver):
-        from pypy.module.pypyjit.policy import PyPyJitPolicy
-        return PyPyJitPolicy()
+        from pypy.module.pypyjit.policy import PyPyJitPolicy, pypy_hooks
+        return PyPyJitPolicy(pypy_hooks)
 
     def get_entry_point(self, config):
         from pypy.tool.lib_pypy import import_from_lib_pypy

From noreply at buildbot.pypy.org  Wed Jan 11 21:37:46 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 11 Jan 2012 21:37:46 +0100 (CET)
Subject: [pypy-commit] pypy better-jit-hooks: close merged branch
Message-ID: <20120111203746.57AF782C03@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: better-jit-hooks
Changeset: r51249:4d9eda6790fd
Date: 2012-01-11 22:36 +0200
http://bitbucket.org/pypy/pypy/changeset/4d9eda6790fd/

Log:	close merged branch

From noreply at buildbot.pypy.org  Wed Jan 11 21:37:47 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 11 Jan 2012 21:37:47 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-axisops: merge
Message-ID: <20120111203747.90F4D82C03@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: numpypy-axisops
Changeset: r51250:9b14783334f5
Date: 2012-01-11 22:37 +0200
http://bitbucket.org/pypy/pypy/changeset/9b14783334f5/

Log:	merge

diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -575,10 +575,9 @@
     def descr_mean(self, space, w_dim=None):
         if space.is_w(w_dim, space.w_None):
             w_dim = space.wrap(-1)
-        dim = space.int_w(w_dim)
-        if dim < 0:
             w_denom = space.wrap(self.size)
         else:
+            dim = space.int_w(w_dim)
             w_denom = space.wrap(self.shape[dim])
         return space.div(self.descr_sum_promote(space, w_dim), w_denom)
@@ -780,7 +779,7 @@
     def create_sig(self, res_shape):
         if self.forced_result is not None:
             return self.forced_result.create_sig(res_shape)
-        return signature.ReduceSignature(self.binfunc, self.name, self.dtype,
+        return signature.AxisReduceSignature(self.binfunc, self.name, self.dtype,
                                          signature.ViewSignature(self.dtype),
                                          self.values.create_sig(res_shape))
@@ -805,7 +804,7 @@
         ri = ArrayIterator(result.size)
         frame = sig.create_frame(self.values, dim=self.dim)
         value = self.get_identity(sig, frame, shapelen)
-        assert isinstance(sig, signature.ReduceSignature)
+        assert isinstance(sig, signature.AxisReduceSignature)
         while not frame.done():
             axisreduce_driver.jit_merge_point(frame=frame, self=self,
                                               value=value, sig=sig,
diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py
--- a/pypy/module/micronumpy/interp_ufuncs.py
+++ b/pypy/module/micronumpy/interp_ufuncs.py
@@ -130,8 +130,9 @@
                         "%s.reduce without identity", self.name)
         if shapelen > 1 and dim >= 0:
             from pypy.module.micronumpy.interp_numarray import Reduce
-            return space.wrap(Reduce(self.func, self.name, dim, dtype,
-                                     obj, self.identity))
+            res = Reduce(self.func, self.name, dim, dtype, obj, self.identity)
+            obj.add_invalidates(res)
+            return space.wrap(res)
         sig = find_sig(ReduceSignature(self.func, self.name, dtype,
                                        ScalarSignature(dtype),
                                        obj.create_sig(obj.shape)), obj)
diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py
--- a/pypy/module/micronumpy/signature.py
+++ b/pypy/module/micronumpy/signature.py
@@ -169,7 +169,6 @@
     def eval(self, frame, arr):
         iter = frame.iterators[self.iter_no]
-        assert arr.dtype is self.dtype
         return self.dtype.getitem(frame.arrays[self.array_no], iter.offset)
 
 class ScalarSignature(ConcreteSignature):
@@ -326,43 +325,49 @@
         return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(),
                                       self.right.debug_repr())
 
+
 class ReduceSignature(Call2):
     def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim):
-        if dim < 0:
-            self.right._create_iter(iterlist, arraylist, arr, res_shape,
+        self.right._create_iter(iterlist, arraylist, arr, res_shape,
                                 chunklist, dim)
-        else:
-            from pypy.module.micronumpy.interp_numarray import ConcreteArray
-            concr = arr.get_concrete()
-            assert isinstance(concr, ConcreteArray)
-            storage = concr.storage
-            if self.iter_no >= len(iterlist):
-                _iter = axis_iter_from_arr(concr, dim)
-                from interp_iter import AxisIterator
-                assert isinstance(_iter, AxisIterator)
-                iterlist.append(_iter)
-            if self.array_no >= len(arraylist):
-                arraylist.append(storage)
 
     def _invent_numbering(self, cache, allnumbers):
         self.right._invent_numbering(cache, allnumbers)
 
     def _invent_array_numbering(self, arr, cache):
-        #Could be called with arr as output or arr as input.
-        from pypy.module.micronumpy.interp_numarray import Reduce
-        if isinstance(arr, Reduce):
-            self.left._invent_array_numbering(arr, cache)
-        else:
-            self.right._invent_array_numbering(arr, cache)
+        self.right._invent_array_numbering(arr, cache)
 
     def eval(self, frame, arr):
-        #Could be called with arr as output or arr as input.
-        from pypy.module.micronumpy.interp_numarray import Reduce
-        if isinstance(arr, Reduce):
-            return self.left.eval(frame, arr)
-        else:
-            return self.right.eval(frame, arr)
+        return self.right.eval(frame, arr)
 
     def debug_repr(self):
         return 'ReduceSig(%s, %s, %s)' % (self.name, self.left.debug_repr(),
                                           self.right.debug_repr())
+
+class AxisReduceSignature(Call2):
+    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim):
+        from pypy.module.micronumpy.interp_numarray import ConcreteArray
+        concr = arr.get_concrete()
+        assert isinstance(concr, ConcreteArray)
+        storage = concr.storage
+        if self.iter_no >= len(iterlist):
+            _iter = axis_iter_from_arr(concr, dim)
+            from interp_iter import AxisIterator
+            assert isinstance(_iter, AxisIterator)
+            iterlist.append(_iter)
+        if self.array_no >= len(arraylist):
+            arraylist.append(storage)
+
+    def _invent_numbering(self, cache, allnumbers):
+        self.right._invent_numbering(cache, allnumbers)
+
+    def _invent_array_numbering(self, arr, cache):
+        self.right._invent_array_numbering(arr, cache)
+
+    def eval(self, frame, arr):
+        return self.right.eval(frame, arr)
+
+    def debug_repr(self):
+        return 'AxisReduceSig(%s, %s, %s)' % (self.name, self.left.debug_repr(),
+                                              self.right.debug_repr())
+
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -729,8 +729,10 @@
         assert a.mean() == 2.0
         assert a[:4].mean() == 1.5
         a = array(range(105)).reshape(3, 5, 7)
-        assert (mean(a, axis=0) == array(range(35, 70)).reshape(5, 7)).all()
-        assert (mean(a, 2) == array(range(0, 15)).reshape(3, 5) * 7 + 3).all()
+        b = mean(a, axis=0)
+        b[0,0]==35.
+        assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all()
+        assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all()
 
     def test_sum(self):
         from numpypy import array, arange

From noreply at buildbot.pypy.org  Wed Jan 11 21:47:34 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 11 Jan 2012 21:47:34 +0100 (CET)
Subject: [pypy-commit] pypy default: document
Message-ID: <20120111204734.A236B82C03@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: 
Changeset: r51251:30e3fdc262ca
Date: 2012-01-11 22:47 +0200
http://bitbucket.org/pypy/pypy/changeset/30e3fdc262ca/

Log:	document

diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py
--- a/pypy/module/micronumpy/signature.py
+++ b/pypy/module/micronumpy/signature.py
@@ -137,6 +137,9 @@
     def _invent_array_numbering(self, arr, cache):
         from pypy.module.micronumpy.interp_numarray import ConcreteArray
         concr = arr.get_concrete()
+        # this get_concrete never forces assembler. If we're here and array
+        # is not of a concrete class it means that we have a _forced_result,
+        # otherwise the signature would not match
         assert isinstance(concr, ConcreteArray)
         self.array_no = _add_ptr_to_cache(concr.storage, cache)

From noreply at buildbot.pypy.org  Thu Jan 12 00:25:42 2012
From: noreply at buildbot.pypy.org (hager)
Date: Thu, 12 Jan 2012 00:25:42 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: remove unused code
Message-ID: <20120111232542.61FBE82C03@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r51252:a2005205886e
Date: 2012-01-11 15:21 -0800
http://bitbucket.org/pypy/pypy/changeset/a2005205886e/

Log:	remove unused code

diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py
--- a/pypy/jit/backend/ppc/ppcgen/regalloc.py
+++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py
@@ -493,34 +493,6 @@
         if loc is not None and loc.is_stack():
             self.frame_manager.hint_frame_locations[box] = loc
 
-    def prepare_op_jump(self, op):
-        descr = op.getdescr()
-        assert isinstance(descr, TargetToken)
-        self.jump_target_descr = descr
-        arglocs = self.assembler.target_arglocs(descr)
-
-        # get temporary locs
-        tmploc = r.SCRATCH
-
-        # Part about non-floats
-        src_locations1 = []
-        dst_locations1 = []
-
-        # Build the two lists
-        for i in range(op.numargs()):
-            box = op.getarg(i)
-            src_loc = self.loc(box)
-            dst_loc = arglocs[i]
-            if box.type != FLOAT:
-                src_locations1.append(src_loc)
-                dst_locations1.append(dst_loc)
-            else:
-                assert 0, "not implemented yet"
-
-        remap_frame_layout(self.assembler, src_locations1,
-                           dst_locations1, tmploc)
-        return []
-
     def prepare_guard_call_release_gil(self, op, guard_op):
         # first, close the stack in the sense of the asmgcc GC root tracker
         gcrootmap = self.cpu.gc_ll_descr.gcrootmap

From noreply at buildbot.pypy.org  Thu Jan 12 00:25:43 2012
From: noreply at buildbot.pypy.org (hager)
Date: Thu, 12 Jan 2012 00:25:43 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): Adjust size of FPR_SAVE_AREA on PPC64. Also, resize stackframe if bridges need more space.
Message-ID: <20120111232543.8C94582CAA@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r51253:b55636b23f7a
Date: 2012-01-11 15:24 -0800
http://bitbucket.org/pypy/pypy/changeset/b55636b23f7a/

Log:	(bivab, hager): Adjust size of FPR_SAVE_AREA on PPC64. Also, resize
	stackframe if bridges need more space.

diff --git a/pypy/jit/backend/ppc/ppcgen/arch.py b/pypy/jit/backend/ppc/ppcgen/arch.py
--- a/pypy/jit/backend/ppc/ppcgen/arch.py
+++ b/pypy/jit/backend/ppc/ppcgen/arch.py
@@ -9,10 +9,12 @@
     WORD = 4
     IS_PPC_32 = True
     BACKCHAIN_SIZE = 2
+    FPR_SAVE_AREA = len(NONVOLATILES_FLOAT) * DWORD
 else:
     WORD = 8
     IS_PPC_32 = False
     BACKCHAIN_SIZE = 6
+    FPR_SAVE_AREA = len(NONVOLATILES_FLOAT) * WORD
 
 DWORD = 2 * WORD
 IS_PPC_64 = not IS_PPC_32
@@ -20,8 +22,10 @@
 FORCE_INDEX = WORD
 GPR_SAVE_AREA = len(NONVOLATILES) * WORD
-FPR_SAVE_AREA = len(NONVOLATILES_FLOAT) * DWORD
 FLOAT_INT_CONVERSION = WORD
 MAX_REG_PARAMS = 8
+# we need at most 5 instructions to load a constant
+# and one instruction to patch the stack pointer
+SIZE_LOAD_IMM_PATCH_SP = 6
 
 FORCE_INDEX_OFS = len(MANAGED_REGS) * WORD
diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
--- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
@@ -13,7 +13,8 @@
                                           NONVOLATILES, MAX_REG_PARAMS,
                                           GPR_SAVE_AREA, BACKCHAIN_SIZE,
                                           FPR_SAVE_AREA,
-                                          FLOAT_INT_CONVERSION, FORCE_INDEX)
+                                          FLOAT_INT_CONVERSION, FORCE_INDEX,
+                                          SIZE_LOAD_IMM_PATCH_SP)
 from pypy.jit.backend.ppc.ppcgen.helper.assembler import (gen_emit_cmp_op,
                                                           encode32, encode64,
                                                           decode32, decode64,
@@ -405,12 +406,13 @@
         start_pos = self.mc.currpos()
         looptoken._ppc_loop_code = start_pos
-        clt.frame_depth = -1
-        spilling_area = self._assemble(operations, regalloc)
+        clt.frame_depth = clt.param_depth = -1
+        spilling_area, param_depth = self._assemble(operations, regalloc)
         clt.frame_depth = spilling_area
+        clt.param_depth = param_depth
 
         direct_bootstrap_code = self.mc.currpos()
-        frame_depth = self.compute_frame_depth(spilling_area)
+        frame_depth = self.compute_frame_depth(spilling_area, param_depth)
         self.gen_bootstrap_code(start_pos, frame_depth)
 
         self.write_pending_failure_recoveries()
@@ -439,14 +441,16 @@
             regalloc.compute_hint_frame_locations(operations)
         self._walk_operations(operations, regalloc)
         frame_depth = regalloc.frame_manager.get_frame_depth()
+        param_depth = self.max_stack_params
         jump_target_descr = regalloc.jump_target_descr
         if jump_target_descr is not None:
             frame_depth = max(frame_depth,
                               jump_target_descr._ppc_clt.frame_depth)
-        return frame_depth
+            param_depth = max(param_depth,
+                              jump_target_descr._ppc_clt.param_depth)
+        return frame_depth, param_depth
 
-    # XXX stack needs to be moved if bridge needs to much space
     def assemble_bridge(self, faildescr, inputargs, operations, looptoken, log):
         operations = self.setup(looptoken, operations)
         assert isinstance(faildescr, AbstractFailDescr)
@@ -456,22 +460,39 @@
         arglocs = self.decode_inputargs(enc)
         if not we_are_translated():
             assert len(inputargs) == len(arglocs)
-
         regalloc = Regalloc(assembler=self, frame_manager=PPCFrameManager())
         regalloc.prepare_bridge(inputargs, arglocs, operations)
 
-        spilling_area = self._assemble(operations, regalloc)
+        sp_patch_location = self._prepare_sp_patch_position()
+
+        spilling_area, param_depth = self._assemble(operations, regalloc)
+
         self.write_pending_failure_recoveries()
 
         rawstart = self.materialize_loop(looptoken, False)
         self.process_pending_guards(rawstart)
         self.patch_trace(faildescr, looptoken, rawstart, regalloc)
-
         self.fixup_target_tokens(rawstart)
         self.current_clt.frame_depth = max(self.current_clt.frame_depth,
                                            spilling_area)
+        self.current_clt.param_depth = max(self.current_clt.param_depth,
+                                           param_depth)
+        self._patch_sp_offset(sp_patch_location, rawstart)
+
+        if not we_are_translated():
+            print 'Loop', inputargs, operations
+            self.mc._dump_trace(rawstart,
+                                'bridge_%s.asm' % self.cpu.total_compiled_loops)
+            print 'Done assembling bridge with token %r' % looptoken
         self._teardown()
 
+    def _patch_sp_offset(self, sp_patch_location, rawstart):
+        mc = PPCBuilder()
+        frame_depth = self.compute_frame_depth(self.current_clt.frame_depth,
+                                               self.current_clt.param_depth)
+        frame_depth -= self.OFFSET_SPP_TO_OLD_BACKCHAIN
+        mc.load_imm(r.SCRATCH, -frame_depth)
+        mc.add(r.SP.value, r.SPP.value, r.SCRATCH.value)
+        mc.prepare_insts_blocks()
+        mc.copy_to_raw_memory(rawstart + sp_patch_location)
+
     # For an explanation of the encoding, see
     # backend/arm/assembler.py
     def gen_descr_encoding(self, descr, args, arglocs):
@@ -599,8 +620,8 @@
             data[1] = 0
             data[2] = 0
 
-    def compute_frame_depth(self, spilling_area):
-        PARAMETER_AREA = self.max_stack_params * WORD
+    def compute_frame_depth(self, spilling_area, param_depth):
+        PARAMETER_AREA = param_depth * WORD
         if IS_PPC_64:
             PARAMETER_AREA += MAX_REG_PARAMS * WORD
         SPILLING_AREA = spilling_area * WORD
@@ -701,6 +722,15 @@
             clt.asmmemmgr_blocks = []
         return clt.asmmemmgr_blocks
 
+    def _prepare_sp_patch_position(self):
+        """Generate NOPs as placeholder to patch the instruction(s) to update
+        the sp according to the number of spilled variables"""
+        size = SIZE_LOAD_IMM_PATCH_SP
+        l = self.mc.currpos()
+        for _ in range(size):
+            self.mc.nop()
+        return l
+
     def regalloc_mov(self, prev_loc, loc):
         if prev_loc.is_imm():
             value = prev_loc.getint()

From noreply at buildbot.pypy.org  Thu Jan 12 00:37:46 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Thu, 12 Jan 2012 00:37:46 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-frompyfunc: reimplement when there is a usecase
Message-ID: <20120111233746.4CB9782C03@wyvern.cs.uni-duesseldorf.de>

Author: mattip
Branch: numpypy-frompyfunc
Changeset: r51254:f36626591158
Date: 2012-01-12 00:14 +0200
http://bitbucket.org/pypy/pypy/changeset/f36626591158/

Log:	reimplement when there is a usecase

From noreply at buildbot.pypy.org  Thu Jan 12 00:37:47 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Thu, 12 Jan 2012 00:37:47 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-is_contiguous: no one really wanted this in the first place
Message-ID: <20120111233747.7CF0182C03@wyvern.cs.uni-duesseldorf.de>

Author: mattip
Branch: numpypy-is_contiguous
Changeset: r51255:e5b79894ad1e
Date: 2012-01-12 00:16 +0200
http://bitbucket.org/pypy/pypy/changeset/e5b79894ad1e/

Log:	no one really wanted this in the first place

From noreply at buildbot.pypy.org  Thu Jan 12 00:37:48 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Thu, 12 Jan 2012 00:37:48 +0100 (CET)
Subject: [pypy-commit] pypy numpypy-ufuncs: add ceil
Message-ID: <20120111233748.A410C82C03@wyvern.cs.uni-duesseldorf.de>

Author: mattip
Branch: numpypy-ufuncs
Changeset: r51256:f57dcc7ac95e
Date: 2012-01-12 01:34 +0200
http://bitbucket.org/pypy/pypy/changeset/f57dcc7ac95e/

Log:	add ceil

diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py
--- a/pypy/module/micronumpy/__init__.py
+++ b/pypy/module/micronumpy/__init__.py
@@ -70,6 +70,7 @@
         ("exp", "exp"),
         ("fabs", "fabs"),
         ("floor", "floor"),
+        ("ceil", "ceil"),
        ("greater", "greater"),
         ("greater_equal", "greater_equal"),
         ("less", "less"),
diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py
--- a/pypy/module/micronumpy/interp_ufuncs.py
+++ b/pypy/module/micronumpy/interp_ufuncs.py
@@ -327,6 +327,7 @@
             ("fabs", "fabs", 1, {"promote_to_float": True}),
             ("floor", "floor", 1, {"promote_to_float": True}),
+            ("ceil", "ceil", 1, {"promote_to_float": True}),
             ("exp", "exp", 1, {"promote_to_float": True}),
 
             ('sqrt', 'sqrt', 1, {'promote_to_float': True}),
diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py
--- a/pypy/module/micronumpy/test/test_ufuncs.py
+++ b/pypy/module/micronumpy/test/test_ufuncs.py
@@ -190,14 +190,20 @@
         for i in range(3):
             assert c[i] == a[i] - b[i]
 
-    def test_floor(self):
-        from _numpypy import array, floor
-
+    def test_floorceil(self):
+        from _numpypy import array, floor, ceil
+        import math
         reference = [-2.0, -1.0, 0.0, 1.0, 1.0]
         a = array([-1.4, -1.0, 0.0, 1.0, 1.4])
         b = floor(a)
         for i in range(5):
             assert b[i] == reference[i]
+        inf = float("inf")
+        data = [1.5, 2.9999, -1.999, inf]
+        results = [math.floor(x) for x in data]
+        assert (floor(data) == results).all()
+        results = [math.ceil(x) for x in data]
+        assert (ceil(data) == results).all()
 
     def test_copysign(self):
         from _numpypy import array, copysign
@@ -238,7 +244,7 @@
             assert b[i] == math.sin(a[i])
 
         a = sin(array([True, False], dtype=bool))
-        assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise
+        assert abs(a[0] - sin(1)) < 1e-7  # a[0] will be less precise
         assert a[1] == 0.0
 
     def test_cos(self):
@@ -259,7 +265,6 @@
         for i in range(len(a)):
             assert b[i] == math.tan(a[i])
 
-
     def test_arcsin(self):
         import math
         from _numpypy import array, arcsin
@@ -283,7 +288,6 @@
         for i in range(len(a)):
             assert b[i] == math.acos(a[i])
 
-
         a = array([-10, -1.5, -1.01, 1.01, 1.5, 10, float('nan'), float('inf'), float('-inf')])
         b = arccos(a)
         for f in b:
@@ -298,7 +302,7 @@
         for i in range(len(a)):
             assert b[i] == math.atan(a[i])
 
-        a = array([float('nan')]) 
+        a = array([float('nan')])
         b = arctan(a)
         assert math.isnan(b[0])
diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py
--- a/pypy/module/micronumpy/types.py
+++ b/pypy/module/micronumpy/types.py
@@ -374,6 +374,10 @@
         return math.floor(v)
 
     @simple_unary_op
+    def ceil(self, v):
+        return math.ceil(v)
+
+    @simple_unary_op
     def exp(self, v):
         try:
             return math.exp(v)
@@ -436,4 +440,4 @@
 class Float64(BaseType, Float):
     T = rffi.DOUBLE
     BoxType = interp_boxes.W_Float64Box
-    format_code = "d"
\ No newline at end of file
+    format_code = "d"

From noreply at buildbot.pypy.org  Thu Jan 12 01:27:38 2012
From: noreply at buildbot.pypy.org (hager)
Date: Thu, 12 Jan 2012 01:27:38 +0100 (CET)
Subject: [pypy-commit] pypy ppc-jit-backend: Use get_scratch_reg to obtain an additional scratch register in prepare_(set/get)arrayitem_gc
Message-ID: <20120112002738.80A0582C03@wyvern.cs.uni-duesseldorf.de>

Author: hager
Branch: ppc-jit-backend
Changeset: r51257:65654d65854a
Date: 2012-01-11 16:26 -0800
http://bitbucket.org/pypy/pypy/changeset/65654d65854a/

Log:	Use get_scratch_reg to obtain an additional scratch register in
	prepare_(set/get)arrayitem_gc

diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py
--- a/pypy/jit/backend/ppc/ppcgen/opassembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py
@@ -547,17 +547,12 @@
         self.mc.load(res.value, base_loc.value, ofs.value)
 
     def emit_setarrayitem_gc(self, op, arglocs, regalloc):
-        value_loc, base_loc, ofs_loc, scale, ofs = arglocs
+        value_loc, base_loc, ofs_loc, scratch_loc, scale, ofs = arglocs
         assert ofs_loc.is_reg()
 
-        # use r20 as scratch reg
-        SAVE_SCRATCH = r.r20
-        # save value temporarily
-        self.mc.mtctr(SAVE_SCRATCH.value)
-
         if scale.value > 0:
             #scale_loc = r.SCRATCH
-            scale_loc = SAVE_SCRATCH
+            scale_loc = scratch_loc
             if IS_PPC_32:
                 self.mc.slwi(scale_loc.value, ofs_loc.value, scale.value)
             else:
@@ -581,23 +576,14 @@
         else:
             assert 0, "scale %s not supported" % (scale.value)
 
-        # restore value of SAVE_SCRATCH
-        self.mc.mfctr(SAVE_SCRATCH.value)
-
     emit_setarrayitem_raw = emit_setarrayitem_gc
 
     def emit_getarrayitem_gc(self, op, arglocs, regalloc):
-        res, base_loc, ofs_loc, scale, ofs = arglocs
+        res, base_loc, ofs_loc, scratch_loc, scale, ofs = arglocs
         assert ofs_loc.is_reg()
 
-        # use r20 as scratch reg
-        SAVE_SCRATCH = r.r20
-        # save value temporarily
-        self.mc.mtctr(SAVE_SCRATCH.value)
-
         if scale.value > 0:
-            #scale_loc = r.SCRATCH
-            scale_loc = SAVE_SCRATCH
+            scale_loc = scratch_loc
             if IS_PPC_32:
                 self.mc.slwi(scale_loc.value, ofs_loc.value, scale.value)
             else:
@@ -621,9 +607,6 @@
         else:
             assert 0
 
-        # restore value of SAVE_SCRATCH
-        self.mc.mfctr(SAVE_SCRATCH.value)
-
         #XXX Hack, Hack, Hack
         if not we_are_translated():
             descr = op.getdescr()
diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
--- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
+++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py
@@ -585,6 +585,7 @@
             if op.result:
                 regalloc.possibly_free_var(op.result)
             regalloc.possibly_free_vars_for_op(op)
+            regalloc.free_temp_vars()
         regalloc._check_invariants()
 
     def can_merge_with_next_guard(self, op, i, operations):
diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py
--- a/pypy/jit/backend/ppc/ppcgen/regalloc.py
+++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py
@@ -105,6 +105,14 @@
                                     forbidden_vars=forbidden_vars)
         return reg, box
 
+    def get_scratch_reg(self, type=INT, forbidden_vars=[], selected_reg=None):
+        assert type == INT or type == REF
+        box = TempBox()
+        self.temp_boxes.append(box)
+        reg = self.force_allocate_reg(box, forbidden_vars=forbidden_vars,
+                                      selected_reg=selected_reg)
+        return reg
+
 class PPCFrameManager(FrameManager):
     def __init__(self):
         FrameManager.__init__(self)
@@ -287,6 +295,15 @@
                 box = thing
         return loc, box
 
+    def get_scratch_reg(self, type, forbidden_vars=[], selected_reg=None):
+        if type == FLOAT:
+            assert 0, "not implemented yet"
+        else:
+            return self.rm.get_scratch_reg(type, forbidden_vars, selected_reg)
+
+    def free_temp_vars(self):
+        self.rm.free_temp_vars()
+
     def make_sure_var_in_reg(self, var, forbidden_vars=[],
                              selected_reg=None, need_lower_byte=False):
         return self.rm.make_sure_var_in_reg(var, forbidden_vars,
@@ -606,7 +623,10 @@
         ofs_loc, _ = self._ensure_value_is_boxed(a1, args)
         value_loc, _ = self._ensure_value_is_boxed(a2, args)
         assert _check_imm_arg(ofs)
-        return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs)]
+        scratch_loc = self.rm.get_scratch_reg(INT, [base_loc, ofs_loc])
+        assert scratch_loc not in [base_loc, ofs_loc]
+        return [value_loc, base_loc, ofs_loc,
+                scratch_loc, imm(scale), imm(ofs)]
 
     prepare_setarrayitem_raw = prepare_setarrayitem_gc
 
@@ -618,11 +638,14 @@
         boxes.append(base_box)
         ofs_loc, ofs_box = self._ensure_value_is_boxed(a1, boxes)
         boxes.append(ofs_box)
+        scratch_loc = self.rm.get_scratch_reg(INT, [base_loc, ofs_loc])
+        assert scratch_loc not in [base_loc, ofs_loc]
         self.possibly_free_vars(boxes)
         res = self.force_allocate_reg(op.result)
         self.possibly_free_var(op.result)
         assert _check_imm_arg(ofs)
-        return [res, base_loc, ofs_loc, imm(scale), imm(ofs)]
+        return [res, base_loc, ofs_loc,
+                scratch_loc, imm(scale), imm(ofs)]
 
     prepare_getarrayitem_raw = prepare_getarrayitem_gc
     prepare_getarrayitem_gc_pure = prepare_getarrayitem_gc

From notifications-noreply at bitbucket.org  Thu Jan 12 12:11:26 2012
From: notifications-noreply at bitbucket.org (Bitbucket)
Date: Thu, 12 Jan 2012 11:11:26 -0000
Subject: [pypy-commit] Notification: pypy
Message-ID: <20120112111126.26499.85321@bitbucket03.managed.contegix.com>

You have received a notification from berkerpeksag.

Hi, I forked pypy. My fork is at https://bitbucket.org/berkerpeksag/pypy.

--
Disable notifications at https://bitbucket.org/account/notifications/

From noreply at buildbot.pypy.org  Thu Jan 12 12:17:53 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Thu, 12 Jan 2012 12:17:53 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: Weakrefs.
Message-ID: <20120112111753.618D782C03@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51258:2ae56d12bba9
Date: 2012-01-12 12:17 +0100
http://bitbucket.org/pypy/pypy/changeset/2ae56d12bba9/

Log:	Weakrefs.

diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py
--- a/pypy/rpython/memory/gc/concurrentgen.py
+++ b/pypy/rpython/memory/gc/concurrentgen.py
@@ -40,7 +40,7 @@
 class ConcurrentGenGC(GCBase):
     _alloc_flavor_ = "raw"
-    #inline_simple_malloc = True
+    inline_simple_malloc = True
     #inline_simple_malloc_varsize = True
     needs_deletion_barrier = True
     needs_weakref_read_barrier = True
@@ -109,8 +109,10 @@
         # contains the objects that the write barrier re-marked as young
         # (so they are "old young objects").
         self.new_young_objects = self.NULL
+        self.new_young_objects_wr = self.NULL      # weakrefs
         self.new_young_objects_size = r_uint(0)
         self.old_objects = self.NULL
+        self.old_objects_wr = self.NULL
         self.old_objects_size = r_uint(0)    # total size of self.old_objects
         #
         # See concurrentgen.txt for more information about these fields.
@@ -215,6 +217,7 @@
                                needs_finalizer=False,
                                finalizer_is_light=False,
                                contains_weakptr=False):
+        # Generic function to allocate any fixed-size object.
         #
         # Case of finalizers (test constant-folded)
         if needs_finalizer:
@@ -225,32 +228,16 @@
         #
         # Case of weakreferences (test constant-folded)
         if contains_weakptr:
-            raise NotImplementedError
             return self._malloc_weakref(typeid, size)
         #
         # Regular case
-        size_gc_header = self.gcheaderbuilder.size_gc_header
-        totalsize = size_gc_header + size
-        rawtotalsize = raw_malloc_usage(totalsize)
-        adr = llarena.arena_malloc(rawtotalsize, 2)
-        if adr == llmemory.NULL:
-            raise MemoryError
-        llarena.arena_reserve(adr, totalsize)
-        obj = adr + size_gc_header
-        hdr = self.header(obj)
-        hdr.tid = self.combine(typeid, self.current_young_marker, 0)
-        hdr.next = self.new_young_objects
-        #debug_print("malloc:", rawtotalsize, obj)
-        self.new_young_objects = hdr
-        self.new_young_objects_size += r_uint(rawtotalsize)
-        if self.new_young_objects_size > self.nursery_limit:
-            self.nursery_overflowed(obj)
-        return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF)
+        return self._malloc_regular(typeid, size)
 
     def malloc_varsize_clear(self, typeid, length, size, itemsize,
                              offset_to_length):
-        size_gc_header = self.gcheaderbuilder.size_gc_header
-        nonvarsize = size_gc_header + size
+        # Generic function to allocate any variable-size object.
+        #
+        nonvarsize = self.gcheaderbuilder.size_gc_header + size
         #
         if length < 0:
             raise MemoryError
@@ -259,24 +246,49 @@
         except OverflowError:
             raise MemoryError
         #
+        return self._do_malloc(typeid, totalsize, offset_to_length, length, 0)
+
+
+    def _malloc_regular(self, typeid, size):
+        totalsize = self.gcheaderbuilder.size_gc_header + size
+        return self._do_malloc(typeid, totalsize, -1, -1, 0)
+    _malloc_regular._dont_inline_ = True
+
+    def _malloc_weakref(self, typeid, size):
+        totalsize = self.gcheaderbuilder.size_gc_header + size
+        return self._do_malloc(typeid, totalsize, -1, -1, 1)
+    _malloc_weakref._dont_inline_ = True
+
+
+    def _do_malloc(self, typeid, totalsize, offset_to_length, length,
+                   linked_list_number):
+        # Generic function to perform allocation.  Inlined in its few callers,
+        # so that some checks like 'offset_to_length >= 0' are removed.
         rawtotalsize = raw_malloc_usage(totalsize)
         adr = llarena.arena_malloc(rawtotalsize, 2)
         if adr == llmemory.NULL:
             raise MemoryError
         llarena.arena_reserve(adr, totalsize)
-        obj = adr + size_gc_header
-        (obj + offset_to_length).signed[0] = length
+        obj = adr + self.gcheaderbuilder.size_gc_header
+        if offset_to_length >= 0:
+            (obj + offset_to_length).signed[0] = length
+            totalsize = llarena.round_up_for_allocation(totalsize)
+            rawtotalsize = raw_malloc_usage(totalsize)
         hdr = self.header(obj)
         hdr.tid = self.combine(typeid, self.current_young_marker, 0)
-        hdr.next = self.new_young_objects
-        totalsize = llarena.round_up_for_allocation(totalsize)
-        rawtotalsize = raw_malloc_usage(totalsize)
-        #debug_print("malloc:", rawtotalsize, obj)
-        self.new_young_objects = hdr
+        if linked_list_number == 0:
+            hdr.next = self.new_young_objects
+            self.new_young_objects = hdr
+        elif linked_list_number == 1:
+            hdr.next = self.new_young_objects_wr
+            self.new_young_objects_wr = hdr
+        else:
+            raise AssertionError(linked_list_number)
         self.new_young_objects_size += r_uint(rawtotalsize)
         if self.new_young_objects_size > self.nursery_limit:
             self.nursery_overflowed(obj)
         return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF)
+    _do_malloc._always_inline_ = True
 
     # ----------
     # Other functions in the GC API
@@ -582,8 +594,10 @@
         self.current_aging_marker = other
         #
         # Copy a few 'mutator' fields to 'collector' fields
-        self.collector.aging_objects = self.new_young_objects
+        self.collector.aging_objects    = self.new_young_objects
+        self.collector.aging_objects_wr = self.new_young_objects_wr
         self.new_young_objects = self.NULL
+        self.new_young_objects_wr = self.NULL
         self.new_young_objects_size = r_uint(0)
         #self.collect_weakref_pages = self.weakref_pages
         #self.collect_finalizer_pages = self.finalizer_pages
@@ -611,6 +625,8 @@
                   "flagged_objects should be empty here")
         ll_assert(self.new_young_objects == self.NULL,
                   "new_young_obejcts should be empty here")
+        ll_assert(self.new_young_objects_wr == self.NULL,
+                  "new_young_obejcts_wr should be empty here")
         #
         # Keep this newest_obj alive
         if newest_obj:
@@ -635,8 +651,11 @@
         #
         # Copy a few 'mutator' fields to 'collector' fields
         self.collector.delayed_aging_objects = self.collector.aging_objects
+        self.collector.delayed_aging_objects_wr=self.collector.aging_objects_wr
         self.collector.aging_objects = self.old_objects
+        self.collector.aging_objects_wr = self.old_objects_wr
         self.old_objects = self.NULL
+        self.old_objects_wr = self.NULL
         self.old_objects_size = r_uint(0)
         #self.collect_weakref_pages = self.weakref_pages
         #self.collect_finalizer_pages = self.finalizer_pages
@@ -686,12 +705,20 @@
         self.collector.gray_objects.append(obj)
 
     def debug_check_lists(self):
-        # just check that they are correct, non-infinite linked lists
-        self.debug_check_list(self.new_young_objects,
-                              self.new_young_objects_size)
-        self.debug_check_list(self.old_objects, self.old_objects_size)
+        # check that they are correct, non-infinite linked lists,
+        # and check that the total size of objects in the lists corresponds
+        # precisely to the value recorded
+        size = self.debug_check_list(self.new_young_objects)
+        size += self.debug_check_list(self.new_young_objects_wr)
+        ll_assert(size == self.new_young_objects_size,
+                  "bogus total size in new_young_objects")
+        #
+        size = self.debug_check_list(self.old_objects)
+        size += self.debug_check_list(self.old_objects_wr)
+        ll_assert(size == self.old_objects_size,
+                  "bogus total size in old_objects")
 
-    def debug_check_list(self, list, totalsize):
+    def debug_check_list(self, list):
         previous = self.NULL
         count = 0
         size = r_uint(0)
@@ -708,8 +735,7 @@
             previous = list    # detect loops of any size
             list = list.next
         #print "\tTOTAL:", size
-        ll_assert(size == totalsize, "bogus total size in linked list")
-        return count
+        return size
 
     def acquire(self, lock):
         if we_are_translated():
@@ -748,7 +774,6 @@
     # Weakrefs
 
     def weakref_deref(self, wrobj):
-        raise NotImplementedError
         # Weakrefs need some care.  This code acts as a read barrier.
         # The only way I found is to acquire the mutex_lock to prevent
         # the collection thread from going from collector.running==1
@@ -775,7 +800,7 @@
             # collector phase was already finished (deal_with_weakrefs).
             # Otherwise we would be returning an object that is about to
             # be swept away.
-            if not self.is_marked_or_static(targetobj, self.current_mark):
+            if not self.collector.is_marked_or_static(targetobj):
                 targetobj = llmemory.NULL
         #
         else:
@@ -841,7 +866,9 @@
         # when the collection starts, we make all young objects aging and
         # move 'new_young_objects' into 'aging_objects'
         self.aging_objects = self.NULL
+        self.aging_objects_wr = self.NULL
         self.delayed_aging_objects = self.NULL
+        self.delayed_aging_objects_wr = self.NULL
 
     def setup(self):
         self.ready_to_start_lock = self.gc.ready_to_start_lock
@@ -994,7 +1021,7 @@
     def _collect_add_pending(self, root, ignored):
         obj = root.address[0]
-        # these 'get_mark(obj) are here for debugging invalid marks.
+        # these 'get_mark(obj)' are here for debugging invalid marks.
         # XXX check that the C compiler removes them if lldebug is off
         self.get_mark(obj)
         self.gray_objects.append(obj)
@@ -1003,10 +1030,17 @@
     def collector_sweep(self):
         if self.major_collection_phase != 1:   # no sweeping during phase 1
             self.update_size = self.gc.old_objects_size
+            #
             lst = self._collect_do_sweep(self.aging_objects,
                                          self.current_aging_marker,
                                          self.gc.old_objects)
             self.gc.old_objects = lst
+            #
+            lst = self._collect_do_sweep(self.aging_objects_wr,
+                                         self.current_aging_marker,
+                                         self.gc.old_objects_wr)
+            self.gc.old_objects_wr = lst
+            #
             self.gc.old_objects_size = self.update_size
         #
         self.running = -1
@@ -1023,6 +1057,12 @@
                                      self.aging_objects)
         self.aging_objects = lst
         self.delayed_aging_objects = self.NULL
+        #
+        lst = self._collect_do_sweep(self.delayed_aging_objects_wr,
+                                     self.current_old_marker,
+                                     self.aging_objects_wr)
+        self.aging_objects_wr = lst
+        self.delayed_aging_objects_wr = self.NULL
 
     def _collect_do_sweep(self, hdr, still_not_marked, linked_list):
         size_gc_header = self.gc.gcheaderbuilder.size_gc_header
@@ -1059,43 +1099,59 @@
     # -------------------------
     # CollectorThread: Weakrefs
 
+    def is_marked_or_static(self, obj):
+        return self.get_mark(obj) != self.current_aging_marker
+
     def deal_with_weakrefs(self):
-        self.running = 3; return
-        # ^XXX^
-        size_gc_header = self.gcheaderbuilder.size_gc_header
-        current_mark = self.current_mark
-        weakref_page = self.collect_weakref_pages
-        self.collect_weakref_pages = self.NULL
-        self.collect_weakref_tails = self.NULL
-        while weakref_page != self.NULL:
-            next_page = list_next(weakref_page)
+        # For simplicity, we do the minimal amount of work here: if a weakref
+        # dies or points to a dying object, we clear it and move it from
+        # 'aging_objects_wr' to 'aging_objects'.  Otherwise, we keep it in
+        # 'aging_objects_wr'.
+        size_gc_header = self.gc.gcheaderbuilder.size_gc_header
+        linked_list = self.aging_objects
+        linked_list_wr = self.NULL
+        #
+        hdr = self.aging_objects_wr
+        while hdr != self.NULL:
+            nexthdr = hdr.next
             #
-            # If the weakref points to a dead object, make it point to NULL.
-            x = llmemory.cast_ptr_to_adr(weakref_page)
-            x = llarena.getfakearenaaddress(x) + 8
-            hdr = llmemory.cast_adr_to_ptr(x, self.HDRPTR)
-            type_id = llop.extract_high_ushort(llgroup.HALFWORD, hdr.tid)
-            offset = self.weakpointer_offset(type_id)
-            ll_assert(offset >= 0, "bad weakref")
-            obj = x + size_gc_header
-            pointing_to = (obj + offset).address[0]
-            ll_assert(pointing_to != llmemory.NULL, "null weakref?")
-            if not self.is_marked_or_static(pointing_to, current_mark):
-                # 'pointing_to' dies: relink to self.collect_pages[0]
-                (obj + offset).address[0] = llmemory.NULL
-                set_next(weakref_page, self.collect_pages[0])
-                self.collect_pages[0] = weakref_page
+            mark = hdr.tid & 0xFF
+            if mark == self.current_aging_marker:
+                # the weakref object itself is not referenced any more
+                valid = False
+                #
             else:
-                # the weakref stays alive
-                set_next(weakref_page, self.collect_weakref_pages)
-                self.collect_weakref_pages = weakref_page
-                if self.collect_weakref_tails == self.NULL:
-                    self.collect_weakref_tails = weakref_page
+                #
+                type_id = llop.extract_high_ushort(llgroup.HALFWORD, hdr.tid)
+                offset = self.gc.weakpointer_offset(type_id)
+                ll_assert(offset >= 0, "bad weakref")
+                obj = llmemory.cast_ptr_to_adr(hdr) + size_gc_header
+                pointing_to = (obj + offset).address[0]
+                if pointing_to == llmemory.NULL:
+                    # special case only for fresh new weakrefs not yet filled
+                    valid = True
+                    #
+                elif not self.is_marked_or_static(pointing_to):
+                    # 'pointing_to' dies
+                    (obj + offset).address[0] = llmemory.NULL
+                    valid = False
+                else:
+                    valid = True
             #
-            weakref_page = next_page
+            if valid:
+                hdr.next = linked_list_wr
+                linked_list = linked_list_wr
+            else:
+                hdr.next = linked_list
+                linked_list = hdr.next
+            #
+            hdr = nexthdr
+        #
+        self.aging_objects = linked_list
+        self.aging_objects_wr = linked_list_wr
         #
         self.acquire(self.mutex_lock)
-        self.collector.running = 3
+        self.running = 3
         #debug_print("collector.running = 3")
         self.release(self.mutex_lock)

From noreply at buildbot.pypy.org  Thu Jan 12 14:09:31 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Thu, 12 Jan 2012 14:09:31 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: hg merge default
Message-ID: <20120112130931.5631582C03@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51259:142b814bf182
Date: 2012-01-12 14:09 +0100
http://bitbucket.org/pypy/pypy/changeset/142b814bf182/

Log:	hg merge default

diff --git a/LICENSE b/LICENSE
--- a/LICENSE
+++ b/LICENSE
@@ -37,43 +37,47 @@
     Armin Rigo
     Maciej Fijalkowski
     Carl Friedrich Bolz
+    Amaury Forgeot d'Arc
     Antonio Cuni
-    Amaury Forgeot d'Arc
     Samuele Pedroni
     Michael Hudson
     Holger Krekel
-    Benjamin Peterson
+    Alex Gaynor
     Christian Tismer
     Hakan Ardo
-    Alex Gaynor
+    Benjamin Peterson
+    David Schneider
     Eric van Riet Paap
     Anders Chrigstrom
-    David Schneider
     Richard Emslie
     Dan Villiom Podlaski Christiansen
     Alexander Schremmer
+    Lukas Diekmann
     Aurelien Campeas
     Anders Lehmann
     Camillo Bruni
     Niklaus Haldimann
+    Sven Hager
     Leonardo Santagada
     Toon Verwaest
     Seo Sanghyeon
+    Justin Peel
     Lawrence Oluyede
     Bartosz Skowron
     Jakub Gustak
     Guido Wesdorp
     Daniel Roberts
+    Laura Creighton
     Adrien Di Mascio
-    Laura Creighton
     Ludovic Aubry
     Niko Matsakis
+    Wim Lavrijsen
+    Matti Picus
     Jason Creighton
     Jacob Hallen
     Alex Martelli
     Anders Hammarquist
     Jan de Mooij
-    Wim Lavrijsen
     Stephan Diehl
     Michael Foord
     Stefan Schwarzer
@@ -84,34 +88,36 @@
     Alexandre Fayolle
     Marius Gedminas
     Simon Burton
-    Justin Peel
+    David Edelsohn
     Jean-Paul Calderone
     John Witulski
-    Lukas Diekmann
+    Timo Paulssen
     holger krekel
-    Wim Lavrijsen
     Dario Bertini
+    Mark Pearse
     Andreas Stührk
     Jean-Philippe St. Pierre
     Guido van Rossum
     Pavel Vinogradov
     Valentino Volonghi
     Paul deGrandis
+    Ilya Osadchiy
+    Ronny Pfannschmidt
     Adrian Kuhn
     tav
     Georg Brandl
+    Philip Jenvey
     Gerald Klix
     Wanja Saatkamp
-    Ronny Pfannschmidt
     Boris Feigin
     Oscar Nierstrasz
     David Malcolm
     Eugene Oden
     Henry Mason
-    Sven Hager
+    Jeff Terrace
     Lukas Renggli
-    Ilya Osadchiy
     Guenter Jantzen
+    Ned Batchelder
     Bert Freudenberg
     Amit Regmi
     Ben Young
@@ -142,7 +148,6 @@
     Anders Qvist
     Beatrice During
     Alexander Sedov
-    Timo Paulssen
     Corbin Simpson
     Vincent Legoll
     Romain Guillebert
@@ -165,9 +170,10 @@
     Lucio Torre
     Lene Wagner
     Miguel de Val Borro
+    Artur Lisiecki
+    Bruno Gola
     Ignas Mikalajunas
-    Artur Lisiecki
-    Philip Jenvey
+    Stefano Rivera
     Joshua Gilbert
     Godefroid Chappelle
     Yusei Tahara
@@ -179,17 +185,17 @@
     Kristjan Valur Jonsson
     Bobby Impollonia
     Michael Hudson-Doyle
+    Laurence Tratt
+    Yasir Suhail
     Andrew Thompson
     Anders Sigfridsson
     Floris Bruynooghe
     Jacek Generowicz
     Dan Colish
     Zooko Wilcox-O Hearn
-    Dan Villiom Podlaski Christiansen
-    Anders Hammarquist
+    Dan Loewenherz
     Chris Lambacher
     Dinu Gherman
-    Dan Colish
     Brett Cannon
     Daniel Neuhäuser
     Michael Chermside
diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py
--- a/lib_pypy/_ctypes/structure.py
+++ b/lib_pypy/_ctypes/structure.py
@@ -73,8 +73,12 @@
 class Field(object):
     def __init__(self, name, offset, size, ctype, num, is_bitfield):
-        for k in ('name', 'offset', 'size', 'ctype', 'num', 'is_bitfield'):
-            self.__dict__[k] = locals()[k]
+        self.__dict__['name'] = name
+        self.__dict__['offset'] = offset
+        self.__dict__['size'] = size
+        self.__dict__['ctype'] = ctype
+        self.__dict__['num'] = num
+        self.__dict__['is_bitfield'] = is_bitfield
 
     def __setattr__(self, name, value):
         raise AttributeError(name)
diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/__init__.py
@@ -0,0 +1,2 @@
+from _numpypy import *
+from .fromnumeric import *
diff --git 
a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/fromnumeric.py @@ -0,0 +1,2400 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. +# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) +# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. +__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. + + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. + indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. 
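The module header above describes the delegation pattern these wrappers follow: coerce the argument to an array, then call the method of the same name. A self-contained illustration of the pattern (the `FakeArray` class is a stand-in for numpypy's array type, used here only so the sketch runs on its own):

```python
class FakeArray(object):
    # minimal stand-in for numpypy's array type, for illustration only
    def __init__(self, data):
        self.data = list(data)
    def argmax(self):
        return max(range(len(self.data)), key=self.data.__getitem__)

def array(obj):
    # stand-in for numpypy.array()
    return obj if isinstance(obj, FakeArray) else FakeArray(obj)

def argmax(a):
    # module-level wrapper: coerce if needed, then delegate to the method
    if not hasattr(a, 'argmax'):
        a = array(a)
    return a.argmax()
```

An already-array argument is passed straight through to its method; a plain sequence is converted first, exactly as the header comment sketches.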
+ out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. + + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. + + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplemented('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. + order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. + + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. 
+ + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raised if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose makes the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modifying the + # initial object. + >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties. Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. 
Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. 
+ + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... ) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. 
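A 1-D pure-Python model of the `raise`/`wrap`/`clip` index handling that `choose` is documented to perform (a sketch only; the real function also broadcasts `a` against the choice arrays):

```python
def choose1d(a, choices, mode='raise'):
    # pick choices[a[pos]][pos] for each position, with out-of-range
    # indices handled according to 'mode'
    n = len(choices)
    out = []
    for pos, i in enumerate(a):
        if mode == 'wrap':
            i %= n                       # modular arithmetic into [0, n-1]
        elif mode == 'clip':
            i = min(max(i, 0), n - 1)    # clamp into [0, n-1]
        elif not 0 <= i < n:
            raise ValueError('invalid entry in choice array')
        out.append(choices[i][pos])
    return out
```

It reproduces the docstring's examples: `choose1d([2, 3, 1, 0], choices)` gives `[20, 31, 12, 3]`, `mode='clip'` maps the out-of-range 4 to 3, and `mode='wrap'` maps it to 0.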
+ axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. + + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. 
+ axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. + + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. 
+ + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. + lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. 
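The stability and nan-ordering properties described above can be demonstrated with plain CPython, whose `sorted()` is stable like 'mergesort'; the nan rule is modelled here with a key function (an illustration of the documented behaviour, not how the interp-level sort implements it):

```python
import math

# Stability: equal keys keep their original relative order,
# so ('b', 1) stays before ('a', 1).
pairs = [('b', 1), ('a', 1), ('c', 0)]
stable = sorted(pairs, key=lambda p: p[1])

# "nans sort to the end": sort on (is_nan, value), since
# (False, x) orders before (True, anything).
def nan_last(x):
    return (math.isnan(x), x)

ordered = sorted([3.0, float('nan'), 1.0], key=nan_last)
```

A quicksort-style algorithm gives no such stability guarantee, which is why the table above lists only 'mergesort' as stable.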
+ + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + Sort by age, then height if ages are equal: + + >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argsort(a, axis=-1, kind='quicksort', order=None): + """ + Returns the indices that would sort an array. + + Perform an indirect sort along the given axis using the algorithm + specified by the `kind` keyword. It returns an array of indices of the + same shape as `a` that index data along the given axis in sorted order. + + Parameters + ---------- + a : array_like + Array to sort. + axis : int or None, optional + Axis along which to sort. The default is -1 (the last axis). If None, + the flattened array is used. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. + order : list, optional + When `a` is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. + + Returns + ------- + index_array : ndarray, int + Array of indices that sort `a` along the specified axis. + In other words, ``a[index_array]`` yields a sorted `a`. + + See Also + -------- + sort : Describes sorting algorithms used. + lexsort : Indirect stable sort with multiple keys. + ndarray.sort : Inplace sort. + + Notes + ----- + See `sort` for notes on the different sorting algorithms. + + As of NumPy 1.4.0 `argsort` works with real/complex arrays containing + nan values. The enhanced sort order is documented in `sort`. + + Examples + -------- + One dimensional array: + + >>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')]) + >>> x + array([(1, 0), (0, 1)], + dtype=[('x', '<i4'), ('y', '<i4')]) + + >>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed. 
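For the 1-D case, the indirect sort that `argsort` describes can be modelled in pure Python (a sketch, not the interp-level method the stub is waiting for):

```python
def argsort1d(a):
    # indices that would sort `a`; sorted() is stable, matching 'mergesort'
    return sorted(range(len(a)), key=a.__getitem__)
```

Applying the returned indices back to the input yields the sorted data, which is the defining property of an indirect sort.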
+ + See Also + -------- + ndarray.argmax, argmin + amax : The maximum value along a given axis. + unravel_index : Convert a flat index into an index tuple. + + Notes + ----- + In case of multiple occurrences of the maximum values, the indices + corresponding to the first occurrence are returned. + + Examples + -------- + >>> a = np.arange(6).reshape(2,3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> np.argmax(a) + 5 + >>> np.argmax(a, axis=0) + array([1, 1, 1]) + >>> np.argmax(a, axis=1) + array([2, 2]) + + >>> b = np.arange(6) + >>> b[1] = 5 + >>> b + array([0, 5, 2, 3, 4, 5]) + >>> np.argmax(b) # Only the first occurrence is returned. + 1 + + """ + if not hasattr(a, 'argmax'): + a = numpypy.array(a) + return a.argmax() + + +def argmin(a, axis=None): + """ + Return the indices of the minimum values along an axis. + + See Also + -------- + argmax : Similar function. Please refer to `numpy.argmax` for detailed + documentation. + + """ + if not hasattr(a, 'argmin'): + a = numpypy.array(a) + return a.argmin() + + +def searchsorted(a, v, side='left'): + """ + Find indices where elements should be inserted to maintain order. + + Find the indices into a sorted array `a` such that, if the corresponding + elements in `v` were inserted before the indices, the order of `a` would + be preserved. + + Parameters + ---------- + a : 1-D array_like + Input array, sorted in ascending order. + v : array_like + Values to insert into `a`. + side : {'left', 'right'}, optional + If 'left', the index of the first suitable location found is given. If + 'right', return the last such index. If there is no suitable + index, return either 0 or N (where N is the length of `a`). + + Returns + ------- + indices : array of ints + Array of insertion points with the same shape as `v`. + + See Also + -------- + sort : Return a sorted copy of an array. + histogram : Produce histogram from 1-D data. + + Notes + ----- + Binary search is used to find the required insertion points. 
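The binary search noted above has a direct stdlib counterpart; a pure-Python model of the `side='left'`/`'right'` semantics built on `bisect` (a sketch of the documented behaviour, not the pending interp-level method):

```python
from bisect import bisect_left, bisect_right

def searchsorted(a, v, side='left'):
    # bisect_left returns the first suitable insertion point,
    # bisect_right the last, matching side='left'/'right'
    insert = bisect_left if side == 'left' else bisect_right
    if isinstance(v, list):
        return [insert(a, x) for x in v]
    return insert(a, v)
```

It reproduces the docstring's examples: `3` inserts at index 2 (or 3 with `side='right'`), and `[-10, 10, 2, 3]` maps to `[0, 5, 1, 2]`.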
+ + As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing + `nan` values. The enhanced sort order is documented in `sort`. + + Examples + -------- + >>> np.searchsorted([1,2,3,4,5], 3) + 2 + >>> np.searchsorted([1,2,3,4,5], 3, side='right') + 3 + >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]) + array([0, 5, 1, 2]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def resize(a, new_shape): + """ + Return a new array with the specified shape. + + If the new array is larger than the original array, then the new + array is filled with repeated copies of `a`. Note that this behavior + is different from a.resize(new_shape) which fills with zeros instead + of repeated copies of `a`. + + Parameters + ---------- + a : array_like + Array to be resized. + + new_shape : int or tuple of int + Shape of resized array. + + Returns + ------- + reshaped_array : ndarray + The new array is formed from the data in the old array, repeated + if necessary to fill out the required number of elements. The + data are repeated in the order that they are stored in memory. + + See Also + -------- + ndarray.resize : resize an array in-place. + + Examples + -------- + >>> a=np.array([[0,1],[2,3]]) + >>> np.resize(a,(1,4)) + array([[0, 1, 2, 3]]) + >>> np.resize(a,(2,4)) + array([[0, 1, 2, 3], + [0, 1, 2, 3]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def squeeze(a): + """ + Remove single-dimensional entries from the shape of an array. + + Parameters + ---------- + a : array_like + Input data. + + Returns + ------- + squeezed : ndarray + The input array, but with all dimensions of length 1 + removed. Whenever possible, a view on `a` is returned. + + Examples + -------- + >>> x = np.array([[[0], [1], [2]]]) + >>> x.shape + (1, 3, 1) + >>> np.squeeze(x).shape + (3,) + + """ + raise NotImplemented('Waiting on interp level method') + + +def diagonal(a, offset=0, axis1=0, axis2=1): + """ + Return specified diagonals. 
+ + If `a` is 2-D, returns the diagonal of `a` with the given offset, + i.e., the collection of elements of the form ``a[i, i+offset]``. If + `a` has more than two dimensions, then the axes specified by `axis1` + and `axis2` are used to determine the 2-D sub-array whose diagonal is + returned. The shape of the resulting array can be determined by + removing `axis1` and `axis2` and appending an index to the right equal + to the size of the resulting diagonals. + + Parameters + ---------- + a : array_like + Array from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be positive or + negative. Defaults to main diagonal (0). + axis1 : int, optional + Axis to be used as the first axis of the 2-D sub-arrays from which + the diagonals should be taken. Defaults to first axis (0). + axis2 : int, optional + Axis to be used as the second axis of the 2-D sub-arrays from + which the diagonals should be taken. Defaults to second axis (1). + + Returns + ------- + array_of_diagonals : ndarray + If `a` is 2-D, a 1-D array containing the diagonal is returned. + If the dimension of `a` is larger, then an array of diagonals is + returned, "packed" from left-most dimension to right-most (e.g., + if `a` is 3-D, then the diagonals are "packed" along rows). + + Raises + ------ + ValueError + If the dimension of `a` is less than 2. + + See Also + -------- + diag : MATLAB work-a-like for 1-D and 2-D arrays. + diagflat : Create diagonal arrays. + trace : Sum along diagonals. + + Examples + -------- + >>> a = np.arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + A 3-D example: + + >>> a = np.arange(8).reshape(2,2,2); a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0, # Main diagonals of two arrays created by skipping + ... 0, # across the outer(left)-most axis last and + ... 1) # the "middle" (row) axis first. 
+ array([[0, 6], + [1, 7]]) + + The sub-arrays whose main diagonals we just obtained; note that each + corresponds to fixing the right-most (column) axis, and that the + diagonals are "packed" in rows. + + >>> a[:,:,0] # main diagonal is [0 6] + array([[0, 2], + [4, 6]]) + >>> a[:,:,1] # main diagonal is [1 7] + array([[1, 3], + [5, 7]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): + """ + Return the sum along diagonals of the array. + + If `a` is 2-D, the sum along its diagonal with the given offset + is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i. + + If `a` has more than two dimensions, then the axes specified by axis1 and + axis2 are used to determine the 2-D sub-arrays whose traces are returned. + The shape of the resulting array is the same as that of `a` with `axis1` + and `axis2` removed. + + Parameters + ---------- + a : array_like + Input array, from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to 0. + axis1, axis2 : int, optional + Axes to be used as the first and second axis of the 2-D sub-arrays + from which the diagonals should be taken. Defaults are the first two + axes of `a`. + dtype : dtype, optional + Determines the data-type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and `a` is + of integer type of precision less than the default integer + precision, then the default integer precision is used. Otherwise, + the precision is the same as that of `a`. + out : ndarray, optional + Array into which the output is placed. Its type is preserved and + it must be of the right shape to hold the output. + + Returns + ------- + sum_along_diagonals : ndarray + If `a` is 2-D, the sum along the diagonal is returned. 
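For the 2-D case, the sum along a diagonal described above reduces to a short loop; a pure-Python sketch over nested lists (illustrative only; the real function also supports `axis1`/`axis2`, `dtype` and `out`):

```python
def trace2d(a, offset=0):
    # sum of the elements a[i][i + offset] that fall inside the array
    cols = len(a[0])
    return sum(row[i + offset] for i, row in enumerate(a)
               if 0 <= i + offset < cols)
```

A positive `offset` selects the diagonal above the main one, a negative `offset` the one below, matching the `diagonal` docstring.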
If `a` has + larger dimensions, then an array of sums along diagonals is returned. + + See Also + -------- + diag, diagonal, diagflat + + Examples + -------- + >>> np.trace(np.eye(3)) + 3.0 + >>> a = np.arange(8).reshape((2,2,2)) + >>> np.trace(a) + array([6, 8]) + + >>> a = np.arange(24).reshape((2,2,2,3)) + >>> np.trace(a).shape + (2, 3) + + """ + raise NotImplemented('Waiting on interp level method') + +def ravel(a, order='C'): + """ + Return a flattened array. + + A 1-D array, containing the elements of the input, is returned. A copy is + made only if needed. + + Parameters + ---------- + a : array_like + Input array. The elements in ``a`` are read in the order specified by + `order`, and packed as a 1-D array. + order : {'C','F', 'A', 'K'}, optional + The elements of ``a`` are read in this order. 'C' means to view + the elements in C (row-major) order. 'F' means to view the elements + in Fortran (column-major) order. 'A' means to view the elements + in 'F' order if a is Fortran contiguous, 'C' order otherwise. + 'K' means to view the elements in the order they occur in memory, + except for reversing the data when strides are negative. + By default, 'C' order is used. + + Returns + ------- + 1d_array : ndarray + Output of the same dtype as `a`, and of shape ``(a.size(),)``. + + See Also + -------- + ndarray.flat : 1-D iterator over an array. + ndarray.flatten : 1-D array copy of the elements of an array + in row-major order. + + Notes + ----- + In row-major order, the row index varies the slowest, and the column + index the quickest. This can be generalized to multiple dimensions, + where row-major order implies that the index along the first axis + varies slowest, and the index along the last quickest. The opposite holds + for Fortran-, or column-major, mode. + + Examples + -------- + It is equivalent to ``reshape(-1, order=order)``. 
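The 'C' versus 'F' reading orders described above can be shown for a 2-D nested list in pure Python (a sketch of the documented orders, not the pending implementation):

```python
def ravel2d(a, order='C'):
    if order == 'C':       # row-major: the last (column) index varies fastest
        return [x for row in a for x in row]
    elif order == 'F':     # column-major: the first (row) index varies fastest
        return [a[i][j] for j in range(len(a[0])) for i in range(len(a))]
    raise ValueError("order must be 'C' or 'F'")
```

For `[[1, 2, 3], [4, 5, 6]]` this gives `[1, 2, 3, 4, 5, 6]` in 'C' order and `[1, 4, 2, 5, 3, 6]` in 'F' order, matching the docstring examples.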
+ + >>> x = np.array([[1, 2, 3], [4, 5, 6]]) + >>> print np.ravel(x) + [1 2 3 4 5 6] + + >>> print x.reshape(-1) + [1 2 3 4 5 6] + + >>> print np.ravel(x, order='F') + [1 4 2 5 3 6] + + When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering: + + >>> print np.ravel(x.T) + [1 4 2 5 3 6] + >>> print np.ravel(x.T, order='A') + [1 2 3 4 5 6] + + When ``order`` is 'K', it will preserve orderings that are neither 'C' + nor 'F', but won't reverse axes: + + >>> a = np.arange(3)[::-1]; a + array([2, 1, 0]) + >>> a.ravel(order='C') + array([2, 1, 0]) + >>> a.ravel(order='K') + array([2, 1, 0]) + + >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a + array([[[ 0, 2, 4], + [ 1, 3, 5]], + [[ 6, 8, 10], + [ 7, 9, 11]]]) + >>> a.ravel(order='C') + array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11]) + >>> a.ravel(order='K') + array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def nonzero(a): + """ + Return the indices of the elements that are non-zero. + + Returns a tuple of arrays, one for each dimension of `a`, containing + the indices of the non-zero elements in that dimension. The + corresponding non-zero values can be obtained with:: + + a[nonzero(a)] + + To group the indices by element, rather than dimension, use:: + + transpose(nonzero(a)) + + The result of this is always a 2-D array, with a row for + each non-zero element. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + tuple_of_arrays : tuple + Indices of elements that are non-zero. + + See Also + -------- + flatnonzero : + Return indices that are non-zero in the flattened version of the input + array. + ndarray.nonzero : + Equivalent ndarray method. + count_nonzero : + Counts the number of non-zero elements in the input array. 
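The per-dimension index tuples that `nonzero` is documented to return can be modelled for the 2-D case in pure Python (a sketch; `nonzero2d` is an illustrative name, not part of numpypy):

```python
def nonzero2d(a):
    # one index list per dimension, so zip(*nonzero2d(a))
    # walks the non-zero positions
    rows, cols = [], []
    for i, row in enumerate(a):
        for j, x in enumerate(row):
            if x:
                rows.append(i)
                cols.append(j)
    return rows, cols
```

Applying it to a boolean mask such as `a > 3` reproduces the condition-indexing idiom the docstring describes.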
+
+ Examples
+ --------
+ >>> x = np.eye(3)
+ >>> x
+ array([[ 1., 0., 0.],
+ [ 0., 1., 0.],
+ [ 0., 0., 1.]])
+ >>> np.nonzero(x)
+ (array([0, 1, 2]), array([0, 1, 2]))
+
+ >>> x[np.nonzero(x)]
+ array([ 1., 1., 1.])
+ >>> np.transpose(np.nonzero(x))
+ array([[0, 0],
+ [1, 1],
+ [2, 2]])
+
+ A common use for ``nonzero`` is to find the indices of an array, where
+ a condition is True. Given an array `a`, the condition `a` > 3 is a
+ boolean array and since False is interpreted as 0, np.nonzero(a > 3)
+ yields the indices of the `a` where the condition is true.
+
+ >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]])
+ >>> a > 3
+ array([[False, False, False],
+ [ True, True, True],
+ [ True, True, True]], dtype=bool)
+ >>> np.nonzero(a > 3)
+ (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+ The ``nonzero`` method of the boolean array can also be called.
+
+ >>> (a > 3).nonzero()
+ (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def shape(a):
+ """
+ Return the shape of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ shape : tuple of ints
+ The elements of the shape tuple give the lengths of the
+ corresponding array dimensions.
+
+ See Also
+ --------
+ alen
+ ndarray.shape : Equivalent array method.
+
+ Examples
+ --------
+ >>> np.shape(np.eye(3))
+ (3, 3)
+ >>> np.shape([[1, 2]])
+ (1, 2)
+ >>> np.shape([0])
+ (1,)
+ >>> np.shape(0)
+ ()
+
+ >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
+ >>> np.shape(a)
+ (2,)
+ >>> a.shape
+ (2,)
+
+ """
+ if not hasattr(a, 'shape'):
+ a = numpypy.array(a)
+ return a.shape
+
+
+def compress(condition, a, axis=None, out=None):
+ """
+ Return selected slices of an array along given axis.
+
+ When working along a given axis, a slice along that axis is returned in
+ `output` for each index where `condition` evaluates to True.
When
+ working on a 1-D array, `compress` is equivalent to `extract`.
+
+ Parameters
+ ----------
+ condition : 1-D array of bools
+ Array that selects which entries to return. If len(condition)
+ is less than the size of `a` along the given axis, then output is
+ truncated to the length of the condition array.
+ a : array_like
+ Array from which to extract a part.
+ axis : int, optional
+ Axis along which to take slices. If None (default), work on the
+ flattened array.
+ out : ndarray, optional
+ Output array. Its type is preserved and it must be of the right
+ shape to hold the output.
+
+ Returns
+ -------
+ compressed_array : ndarray
+ A copy of `a` without the slices along axis for which `condition`
+ is false.
+
+ See Also
+ --------
+ take, choose, diag, diagonal, select
+ ndarray.compress : Equivalent method.
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4], [5, 6]])
+ >>> a
+ array([[1, 2],
+ [3, 4],
+ [5, 6]])
+ >>> np.compress([0, 1], a, axis=0)
+ array([[3, 4]])
+ >>> np.compress([False, True, True], a, axis=0)
+ array([[3, 4],
+ [5, 6]])
+ >>> np.compress([False, True], a, axis=1)
+ array([[2],
+ [4],
+ [6]])
+
+ Working on the flattened array does not return slices along an axis but
+ selects elements.
+
+ >>> np.compress([False, True], a)
+ array([2])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def clip(a, a_min, a_max, out=None):
+ """
+ Clip (limit) the values in an array.
+
+ Given an interval, values outside the interval are clipped to
+ the interval edges. For example, if an interval of ``[0, 1]``
+ is specified, values smaller than 0 become 0, and values larger
+ than 1 become 1.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing elements to clip.
+ a_min : scalar or array_like
+ Minimum value.
+ a_max : scalar or array_like
+ Maximum value. If `a_min` or `a_max` are array_like, then they will
+ be broadcasted to the shape of `a`.
+
+ out : ndarray, optional
+ The results will be placed in this array. It may be the input
+ array for in-place clipping. `out` must be of the right shape
+ to hold the output. Its type is preserved.
+
+ Returns
+ -------
+ clipped_array : ndarray
+ An array with the elements of `a`, but where values
+ < `a_min` are replaced with `a_min`, and those > `a_max`
+ with `a_max`.
+
+ See Also
+ --------
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Examples
+ --------
+ >>> a = np.arange(10)
+ >>> np.clip(a, 1, 8)
+ array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8])
+ >>> a
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.clip(a, 3, 6, out=a)
+ array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
+ >>> a = np.arange(10)
+ >>> a
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8)
+ array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def sum(a, axis=None, dtype=None, out=None):
+ """
+ Sum of array elements over a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Elements to sum.
+ axis : integer, optional
+ Axis over which the sum is taken. By default `axis` is None,
+ and all elements are summed.
+ dtype : dtype, optional
+ The type of the returned array and of the accumulator in which
+ the elements are summed. By default, the dtype of `a` is used.
+ An exception is when `a` has an integer type with less precision
+ than the default platform integer. In that case, the default
+ platform integer is used instead.
+ out : ndarray, optional
+ Array into which the output is placed. By default, a new array is
+ created. If `out` is given, it must be of the appropriate shape
+ (the shape of `a` with `axis` removed, i.e.,
+ ``numpy.delete(a.shape, axis)``). Its type is preserved. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ sum_along_axis : ndarray
+ An array with the same shape as `a`, with the specified
+ axis removed.
If `a` is a 0-d array, or if `axis` is None, a scalar
+ is returned. If an output array is specified, a reference to
+ `out` is returned.
+
+ See Also
+ --------
+ ndarray.sum : Equivalent method.
+
+ cumsum : Cumulative sum of array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ mean, average
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> np.sum([0.5, 1.5])
+ 2.0
+ >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
+ 1
+ >>> np.sum([[0, 1], [0, 5]])
+ 6
+ >>> np.sum([[0, 1], [0, 5]], axis=0)
+ array([0, 6])
+ >>> np.sum([[0, 1], [0, 5]], axis=1)
+ array([1, 5])
+
+ If the accumulator is too small, overflow occurs:
+
+ >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
+ -128
+
+ """
+ if not hasattr(a, "sum"):
+ a = numpypy.array(a)
+ return a.sum()
+
+
+def product(a, axis=None, dtype=None, out=None):
+ """
+ Return the product of array elements over a given axis.
+
+ See Also
+ --------
+ prod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def sometrue(a, axis=None, out=None):
+ """
+ Check whether some values are true.
+
+ Refer to `any` for full documentation.
+
+ See Also
+ --------
+ any : equivalent function
+
+ """
+ if not hasattr(a, 'any'):
+ a = numpypy.array(a)
+ return a.any()
+
+
+def alltrue(a, axis=None, out=None):
+ """
+ Check if all elements of input array are true.
+
+ See Also
+ --------
+ numpy.all : Equivalent function; see for details.
+
+ """
+ if not hasattr(a, 'all'):
+ a = numpypy.array(a)
+ return a.all()
+
+def any(a, axis=None, out=None):
+ """
+ Test whether any array element along a given axis evaluates to True.
+
+ Returns single boolean unless `axis` is not ``None``
+
+ Parameters
+ ----------
+ a : array_like
+ Input array or object that can be converted to an array.
+ axis : int, optional
+ Axis along which a logical OR is performed.
The default + (`axis` = `None`) is to perform a logical OR over a flattened + input array. `axis` may be negative, in which case it counts + from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output and its type is preserved + (e.g., if it is of type float, then it will remain so, returning + 1.0 for True and 0.0 for False, regardless of the type of `a`). + See `doc.ufuncs` (Section "Output arguments") for details. + + Returns + ------- + any : bool or ndarray + A new boolean or `ndarray` is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.any : equivalent method + + all : Test whether all elements along a given axis evaluate to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity evaluate + to `True` because these are not equal to zero. + + Examples + -------- + >>> np.any([[True, False], [True, True]]) + True + + >>> np.any([[True, False], [False, False]], axis=0) + array([ True, False], dtype=bool) + + >>> np.any([-1, 0, 5]) + True + + >>> np.any(np.nan) + True + + >>> o=np.array([False]) + >>> z=np.any([-1, 4, 5], out=o) + >>> z, o + (array([ True], dtype=bool), array([ True], dtype=bool)) + >>> # Check now that z is a reference to o + >>> z is o + True + >>> id(z), id(o) # identity of z and o # doctest: +SKIP + (191614240, 191614240) + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def all(a,axis=None, out=None): + """ + Test whether all array elements along a given axis evaluate to True. + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical AND is performed. + The default (`axis` = `None`) is to perform a logical AND + over a flattened input array. 
`axis` may be negative, in which + case it counts from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. + It must have the same shape as the expected output and its + type is preserved (e.g., if ``dtype(out)`` is float, the result + will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section + "Output arguments") for more details. + + Returns + ------- + all : ndarray, bool + A new boolean or array is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.all : equivalent method + + any : Test whether any element along a given axis evaluates to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity + evaluate to `True` because these are not equal to zero. + + Examples + -------- + >>> np.all([[True,False],[True,True]]) + False + + >>> np.all([[True,False],[True,True]], axis=0) + array([ True, False], dtype=bool) + + >>> np.all([-1, 4, 5]) + True + + >>> np.all([1.0, np.nan]) + True + + >>> o=np.array([False]) + >>> z=np.all([-1, 4, 5], out=o) + >>> id(z), id(o), z # doctest: +SKIP + (28293632, 28293632, array([ True], dtype=bool)) + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + + +def cumsum (a, axis=None, dtype=None, out=None): + """ + Return the cumulative sum of the elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative sum is computed. The default + (None) is to compute the cumsum over the flattened array. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults + to the dtype of `a`, unless `a` has an integer dtype with a + precision less than that of the default platform integer. In + that case, the default platform integer is used. 
+
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type will be cast if necessary. See `doc.ufuncs`
+ (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ cumsum_along_axis : ndarray.
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to `out` is returned. The
+ result has the same size as `a`, and the same shape as `a` if
+ `axis` is not None or `a` is a 1-d array.
+
+
+ See Also
+ --------
+ sum : Sum array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3], [4,5,6]])
+ >>> a
+ array([[1, 2, 3],
+ [4, 5, 6]])
+ >>> np.cumsum(a)
+ array([ 1, 3, 6, 10, 15, 21])
+ >>> np.cumsum(a, dtype=float) # specifies type of output value(s)
+ array([ 1., 3., 6., 10., 15., 21.])
+
+ >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns
+ array([[1, 2, 3],
+ [5, 7, 9]])
+ >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows
+ array([[ 1, 3, 6],
+ [ 4, 9, 15]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def cumproduct(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product over the given axis.
+
+
+ See Also
+ --------
+ cumprod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ptp(a, axis=None, out=None):
+ """
+ Range of values (maximum - minimum) along an axis.
+
+ The name of the function comes from the acronym for 'peak to peak'.
+
+ Parameters
+ ----------
+ a : array_like
+ Input values.
+ axis : int, optional
+ Axis along which to find the peaks. By default, flatten the
+ array.
+ out : array_like
+ Alternative output array in which to place the result.
It must
+ have the same shape and buffer length as the expected output,
+ but the type of the output values will be cast if necessary.
+
+ Returns
+ -------
+ ptp : ndarray
+ A new array holding the result, unless `out` was
+ specified, in which case a reference to `out` is returned.
+
+ Examples
+ --------
+ >>> x = np.arange(4).reshape((2,2))
+ >>> x
+ array([[0, 1],
+ [2, 3]])
+
+ >>> np.ptp(x, axis=0)
+ array([2, 2])
+
+ >>> np.ptp(x, axis=1)
+ array([1, 1])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def amax(a, axis=None, out=None):
+ """
+ Return the maximum of an array or maximum along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which to operate. By default flattened input is used.
+ out : ndarray, optional
+ Alternate output array in which to place the result. Must be of
+ the same shape and buffer length as the expected output. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ amax : ndarray or scalar
+ Maximum of `a`. If `axis` is None, the result is a scalar value.
+ If `axis` is given, the result is an array of dimension
+ ``a.ndim - 1``.
+
+ See Also
+ --------
+ nanmax : NaN values are ignored instead of being propagated.
+ fmax : same behavior as the C99 fmax function.
+ argmax : indices of the maximum values.
+
+ Notes
+ -----
+ NaN values are propagated, that is if at least one item is NaN, the
+ corresponding max value will be NaN as well. To ignore NaN values
+ (MATLAB behavior), please use nanmax.
+ + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amax(a) + 3 + >>> np.amax(a, axis=0) + array([2, 3]) + >>> np.amax(a, axis=1) + array([1, 3]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amax(b) + nan + >>> np.nanmax(b) + 4.0 + + """ + if not hasattr(a, "max"): + a = numpypy.array(a) + return a.max() + + +def amin(a, axis=None, out=None): + """ + Return the minimum of an array or minimum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default a flattened input is used. + out : ndarray, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + See `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amin : ndarray + A new array or a scalar array with the result. + + See Also + -------- + nanmin: nan values are ignored instead of being propagated + fmin: same behavior as the C99 fmin function + argmin: Return the indices of the minimum values. + + amax, nanmax, fmax + + Notes + ----- + NaN values are propagated, that is if at least one item is nan, the + corresponding min value will be nan as well. To ignore NaN values (matlab + behavior), please use nanmin. + + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amin(a) # Minimum of the flattened array + 0 + >>> np.amin(a, axis=0) # Minima along the first axis + array([0, 1]) + >>> np.amin(a, axis=1) # Minima along the second axis + array([0, 2]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amin(b) + nan + >>> np.nanmin(b) + 0.0 + + """ + # amin() is equivalent to min() + if not hasattr(a, 'min'): + a = numpypy.array(a) + return a.min() + +def alen(a): + """ + Return the length of the first dimension of the input array. 
+ + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + l : int + Length of the first dimension of `a`. + + See Also + -------- + shape, size + + Examples + -------- + >>> a = np.zeros((7,4,5)) + >>> a.shape[0] + 7 + >>> np.alen(a) + 7 + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape[0] + + +def prod(a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis over which the product is taken. By default, the product + of all elements is calculated. + dtype : data-type, optional + The data-type of the returned array, as well as of the accumulator + in which the elements are multiplied. By default, if `a` is of + integer type, `dtype` is the default platform integer. (Note: if + the type of `a` is unsigned, then so is `dtype`.) Otherwise, + the dtype is the same as that of `a`. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the + output values will be cast if necessary. + + Returns + ------- + product_along_axis : ndarray, see `dtype` parameter above. + An array shaped as `a` but with the specified axis removed. + Returns a reference to `out` if specified. + + See Also + -------- + ndarray.prod : equivalent method + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. 
That means that, on a 32-bit platform:
+
+ >>> x = np.array([536870910, 536870910, 536870910, 536870910])
+ >>> np.prod(x) #random
+ 16
+
+ Examples
+ --------
+ By default, calculate the product of all elements:
+
+ >>> np.prod([1.,2.])
+ 2.0
+
+ Even when the input array is two-dimensional:
+
+ >>> np.prod([[1.,2.],[3.,4.]])
+ 24.0
+
+ But we can also specify the axis over which to multiply:
+
+ >>> np.prod([[1.,2.],[3.,4.]], axis=1)
+ array([ 2., 12.])
+
+ If the type of `x` is unsigned, then the output type is
+ the unsigned platform integer:
+
+ >>> x = np.array([1, 2, 3], dtype=np.uint8)
+ >>> np.prod(x).dtype == np.uint
+ True
+
+ If `x` is of a signed integer type, then the output type
+ is the default platform integer:
+
+ >>> x = np.array([1, 2, 3], dtype=np.int8)
+ >>> np.prod(x).dtype == np.int
+ True
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def cumprod(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product of elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ Axis along which the cumulative product is computed. By default
+ the input is flattened.
+ dtype : dtype, optional
+ Type of the returned array, as well as of the accumulator in which
+ the elements are multiplied. If *dtype* is not specified, it
+ defaults to the dtype of `a`, unless `a` has an integer dtype with
+ a precision less than that of the default platform integer. In
+ that case, the default platform integer is used instead.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type of the resulting values will be cast if necessary.
+
+ Returns
+ -------
+ cumprod : ndarray
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to out is returned.
+
+ See Also
+ --------
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([1,2,3])
+ >>> np.cumprod(a) # intermediate results 1, 1*2
+ ... # total product 1*2*3 = 6
+ array([1, 2, 6])
+ >>> a = np.array([[1, 2, 3], [4, 5, 6]])
+ >>> np.cumprod(a, dtype=float) # specify type of output
+ array([ 1., 2., 6., 24., 120., 720.])
+
+ The cumulative product for each column (i.e., over the rows) of `a`:
+
+ >>> np.cumprod(a, axis=0)
+ array([[ 1, 2, 3],
+ [ 4, 10, 18]])
+
+ The cumulative product for each row (i.e. over the columns) of `a`:
+
+ >>> np.cumprod(a,axis=1)
+ array([[ 1, 2, 6],
+ [ 4, 20, 120]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ndim(a):
+ """
+ Return the number of dimensions of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array. If it is not already an ndarray, a conversion is
+ attempted.
+
+ Returns
+ -------
+ number_of_dimensions : int
+ The number of dimensions in `a`. Scalars are zero-dimensional.
+
+ See Also
+ --------
+ ndarray.ndim : equivalent method
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+
+ Examples
+ --------
+ >>> np.ndim([[1,2,3],[4,5,6]])
+ 2
+ >>> np.ndim(np.array([[1,2,3],[4,5,6]]))
+ 2
+ >>> np.ndim(1)
+ 0
+
+ """
+ if not hasattr(a, 'ndim'):
+ a = numpypy.array(a)
+ return a.ndim
+
+
+def rank(a):
+ """
+ Return the number of dimensions of an array.
+
+ If `a` is not already an array, a conversion is attempted.
+ Scalars are zero dimensional.
+
+ Parameters
+ ----------
+ a : array_like
+ Array whose number of dimensions is desired. If `a` is not an array,
+ a conversion is attempted.
+
+ Returns
+ -------
+ number_of_dimensions : int
+ The number of dimensions in the array.
+
+ See Also
+ --------
+ ndim : equivalent function
+ ndarray.ndim : equivalent property
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+
+ Notes
+ -----
+ In the old Numeric package, `rank` was the term used for the number of
+ dimensions, but in Numpy `ndim` is used instead.
+
+ Examples
+ --------
+ >>> np.rank([1,2,3])
+ 1
+ >>> np.rank(np.array([[1,2,3],[4,5,6]]))
+ 2
+ >>> np.rank(1)
+ 0
+
+ """
+ if not hasattr(a, 'ndim'):
+ a = numpypy.array(a)
+ return a.ndim
+
+
+def size(a, axis=None):
+ """
+ Return the number of elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which the elements are counted. By default, give
+ the total number of elements.
+
+ Returns
+ -------
+ element_count : int
+ Number of elements along the specified axis.
+
+ See Also
+ --------
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+ ndarray.size : number of elements in array
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3],[4,5,6]])
+ >>> np.size(a)
+ 6
+ >>> np.size(a,1)
+ 3
+ >>> np.size(a,0)
+ 2
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def around(a, decimals=0, out=None):
+ """
+ Evenly round to the given number of decimals.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ decimals : int, optional
+ Number of decimal places to round to (default: 0). If
+ decimals is negative, it specifies the number of positions to
+ the left of the decimal point.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output, but the type of the output
+ values will be cast if necessary. See `doc.ufuncs` (Section
+ "Output arguments") for details.
+
+ Returns
+ -------
+ rounded_array : ndarray
+ An array of the same type as `a`, containing the rounded values.
+ Unless `out` was specified, a new array is created. A reference to
+ the result is returned.
+
+ The real and imaginary parts of complex numbers are rounded
+ separately. The result of rounding a float is a float.
+
+ See Also
+ --------
+ ndarray.round : equivalent method
+
+ ceil, fix, floor, rint, trunc
+
+
+ Notes
+ -----
+ For values exactly halfway between rounded decimal values, Numpy
+ rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0,
+ -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due
+ to the inexact representation of decimal fractions in the IEEE
+ floating point standard [1]_ and errors introduced when scaling
+ by powers of ten.
+
+ References
+ ----------
+ .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
+ .. [2] "How Futile are Mindless Assessments of
+ Roundoff in Floating-Point Computation?", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
+
+ Examples
+ --------
+ >>> np.around([0.37, 1.64])
+ array([ 0., 2.])
+ >>> np.around([0.37, 1.64], decimals=1)
+ array([ 0.4, 1.6])
+ >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value
+ array([ 0., 2., 2., 4., 4.])
+ >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned
+ array([ 1, 2, 3, 11])
+ >>> np.around([1,2,3,11], decimals=-1)
+ array([ 0, 0, 0, 10])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def round_(a, decimals=0, out=None):
+ """
+ Round an array to the given number of decimals.
+
+ Refer to `around` for full documentation.
+
+ See Also
+ --------
+ around : equivalent function
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def mean(a, axis=None, dtype=None, out=None):
+ """
+ Compute the arithmetic mean along the specified axis.
+
+ Returns the average of the array elements. The average is taken over
+ the flattened array by default, otherwise over the specified axis.
+ `float64` intermediate and return values are used for integer inputs.
+ + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. 
+ + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean() + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. 
+ + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. 
+ + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Array containing numbers whose variance is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the variance is computed. The default is to compute + the variance of the flattened array. + dtype : data-type, optional + Type to use in computing the variance. For arrays of integer type + the default is `float32`; for arrays of float types it is the same as + the array type. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output, but the type is cast if + necessary. + ddof : int, optional + "Delta Degrees of Freedom": the divisor used in the calculation is + ``N - ddof``, where ``N`` represents the number of elements. By + default `ddof` is zero. + + Returns + ------- + variance : ndarray, see dtype parameter above + If ``out=None``, returns a new array containing the variance; + otherwise, a reference to the output array is returned. + + See Also + -------- + std : Standard deviation + mean : Average + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e., ``var = mean(abs(x - x.mean())**2)``. + + The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. + If, however, `ddof` is specified, the divisor ``N - ddof`` is used + instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of a hypothetical infinite population. + ``ddof=0`` provides a maximum likelihood estimate of the variance for + normally distributed variables. + + Note that for complex numbers, the absolute value is taken before + squaring, so that the result is always real and nonnegative. 
+
+    For floating-point input, the variance is computed using the same
+    precision the input has. Depending on the input data, this can cause
+    the results to be inaccurate, especially for `float32` (see example
+    below). Specifying a higher-accuracy accumulator using the ``dtype``
+    keyword can alleviate this issue.
+
+    Examples
+    --------
+    >>> a = np.array([[1,2],[3,4]])
+    >>> np.var(a)
+    1.25
+    >>> np.var(a,0)
+    array([ 1.,  1.])
+    >>> np.var(a,1)
+    array([ 0.25,  0.25])
+
+    In single precision, var() can be inaccurate:
+
+    >>> a = np.zeros((2,512*512), dtype=np.float32)
+    >>> a[0,:] = 1.0
+    >>> a[1,:] = 0.1
+    >>> np.var(a)
+    0.20405951142311096
+
+    Computing the variance in float64 is more accurate:
+
+    >>> np.var(a, dtype=np.float64)
+    0.20249999932997387
+    >>> ((1-0.55)**2 + (0.1-0.55)**2)/2
+    0.20250000000000001
+
+    """
+    if not hasattr(a, "var"):
+        a = numpypy.array(a)
+    return a.var()
diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py
new file mode 100644
--- /dev/null
+++ b/lib_pypy/numpypy/test/test_fromnumeric.py
@@ -0,0 +1,109 @@
+
+from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest
+
+class AppTestFromNumeric(BaseNumpyAppTest):
+    def test_argmax(self):
+        # tests taken from numpy/core/fromnumeric.py docstring
+        from numpypy import array, arange, argmax
+        a = arange(6).reshape((2,3))
+        assert argmax(a) == 5
+        # assert (argmax(a, axis=0) == array([1, 1, 1])).all()
+        # assert (argmax(a, axis=1) == array([2, 2])).all()
+        b = arange(6)
+        b[1] = 5
+        assert argmax(b) == 1
+
+    def test_argmin(self):
+        # tests adapted from test_argmax
+        from numpypy import array, arange, argmin
+        a = arange(6).reshape((2,3))
+        assert argmin(a) == 0
+        # assert (argmin(a, axis=0) == array([0, 0, 0])).all()
+        # assert (argmin(a, axis=1) == array([0, 0])).all()
+        b = arange(6)
+        b[1] = 0
+        assert argmin(b) == 0
+
+    def test_shape(self):
+        # tests taken from numpy/core/fromnumeric.py docstring
+        from numpypy
import array, identity, shape + assert shape(identity(3)) == (3, 3) + assert shape([[1, 2]]) == (1, 2) + assert shape([0]) == (1,) + assert shape(0) == () + # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + # assert shape(a) == (2,) + + def test_sum(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, sum, ones + assert sum([0.5, 1.5])== 2.0 + assert sum([[0, 1], [0, 5]]) == 6 + # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 + # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + # If the accumulator is too small, overflow occurs: + # assert ones(128, dtype=int8).sum(dtype=int8) == -128 + + def test_amin(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amin + a = arange(4).reshape((2,2)) + assert amin(a) == 0 + # # Minima along the first axis + # assert (amin(a, axis=0) == array([0, 1])).all() + # # Minima along the second axis + # assert (amin(a, axis=1) == array([0, 2])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amin(b) == nan + # assert nanmin(b) == 0.0 + + def test_amax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amax + a = arange(4).reshape((2,2)) + assert amax(a) == 3 + # assert (amax(a, axis=0) == array([2, 3])).all() + # assert (amax(a, axis=1) == array([1, 3])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amax(b) == nan + # assert nanmax(b) == 4.0 + + def test_alen(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, zeros, alen + a = zeros((7,4,5)) + assert a.shape[0] == 7 + assert alen(a) == 7 + + def test_ndim(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, ndim + assert ndim([[1,2,3],[4,5,6]]) == 2 + assert ndim(array([[1,2,3],[4,5,6]])) == 2 + 
        assert ndim(1) == 0
+
+    def test_rank(self):
+        # tests taken from numpy/core/fromnumeric.py docstring
+        from numpypy import array, rank
+        assert rank([[1,2,3],[4,5,6]]) == 2
+        assert rank(array([[1,2,3],[4,5,6]])) == 2
+        assert rank(1) == 0
+
+    def test_var(self):
+        from numpypy import array, var
+        a = array([[1,2],[3,4]])
+        assert var(a) == 1.25
+        # assert (np.var(a,0) == array([ 1.,  1.])).all()
+        # assert (np.var(a,1) == array([ 0.25,  0.25])).all()
+
+    def test_std(self):
+        from numpypy import array, std
+        a = array([[1, 2], [3, 4]])
+        assert std(a) == 1.1180339887498949
+        # assert (std(a, axis=0) == array([ 1.,  1.])).all()
+        # assert (std(a, axis=1) == array([ 0.5,  0.5])).all()
diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py
--- a/pypy/annotation/description.py
+++ b/pypy/annotation/description.py
@@ -257,7 +257,8 @@
         try:
             inputcells = args.match_signature(signature, defs_s)
         except ArgErr, e:
-            raise TypeError, "signature mismatch: %s" % e.getmsg(self.name)
+            raise TypeError("signature mismatch: %s() %s" %
+                            (self.name, e.getmsg()))
         return inputcells
 
     def specialize(self, inputcells, op=None):
diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile
--- a/pypy/doc/Makefile
+++ b/pypy/doc/Makefile
@@ -12,7 +12,7 @@
 PAPEROPT_letter = -D latex_paper_size=letter
 ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest +.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @@ -23,6 +23,7 @@ @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @@ -79,6 +80,11 @@ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man" + changes: python config/generate.py $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -175,15 +175,15 @@ RPython ================= -RPython Definition, not ------------------------ +RPython Definition +------------------ -The list and exact details of the "RPython" restrictions are a somewhat -evolving topic. In particular, we have no formal language definition -as we find it more practical to discuss and evolve the set of -restrictions while working on the whole program analysis. If you -have any questions about the restrictions below then please feel -free to mail us at pypy-dev at codespeak net. +RPython is a restricted subset of Python that is amenable to static analysis. +Although there are additions to the language and some things might surprisingly +work, this is a rough list of restrictions that should be considered. 
Note
+that there are tons of special-cased restrictions that you'll encounter
+as you go. The exact definition is "RPython is everything that our translation
+toolchain can accept" :)
 
 .. _`wrapped object`: coding-guide.html#wrapping-rules
 
@@ -198,7 +198,7 @@
   contain both a string and a int must be avoided. It is allowed to
   mix None (basically with the role of a null pointer) with many other
   types: `wrapped objects`, class instances, lists, dicts, strings, etc.
-  but *not* with int and floats.
+  but *not* with int, floats or tuples.
 
 **constants**
 
@@ -209,9 +209,12 @@
   have this restriction, so if you need mutable global state, store it
   in the attributes of some prebuilt singleton instance.
 
+
+
 **control structures**
 
-  all allowed but yield, ``for`` loops restricted to builtin types
+  all allowed, ``for`` loops restricted to builtin types, generators
+  very restricted. You can't merge two different generators in one
+  control point.
 
 **range**
 
@@ -226,7 +229,8 @@
 
 **generators**
 
-  generators are not supported.
+  generators are supported, but their exact scope is very limited.
 
 **exceptions**
 
@@ -245,22 +249,27 @@
 
 **strings**
 
-  a lot of, but not all string methods are supported. Indexes can be
+  a lot of, but not all string methods are supported, and those that
+  are supported do not necessarily accept all arguments. Indexes can be
   negative.  In case they are not, then you get slightly more efficient
   code if the translator can prove that they are non-negative.  When
   slicing a string it is necessary to prove that the slice start and
-  stop indexes are non-negative.
+  stop indexes are non-negative. There is no implicit str-to-unicode cast
+  anywhere.
 
 **tuples**
 
   no variable-length tuples; use them to store or return pairs or n-tuples of
-  values. Each combination of types for elements and length constitute a separate
-  and not mixable type.
+  values. Each combination of types for elements and length constitutes
+  a separate and not mixable type.
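The restrictions listed above are easiest to see on a concrete snippet. The following is a minimal sketch in plain Python (whether code really annotates can only be decided by running the translation toolchain, so the comments below describe expected annotator behavior rather than demonstrate it):

```python
# Sketch of two RPython-relevant patterns.  Runs fine on plain Python;
# the annotator verdicts in the comments are what the restrictions
# above predict, not something this snippet checks.

def rpython_ok(n):
    # fixed-shape tuple: each slot keeps one type, length is constant
    pair = (n, float(n))
    total = 0
    # 'for' over a builtin type (a list literal) is allowed
    for i in [1, 2, 3]:
        total += i
    return pair[0] + total

def rpython_bad(flag):
    # mixing int and None in one variable would be rejected by the
    # annotator: None may mix with instances, lists, strings, etc.,
    # but *not* with int, floats or tuples
    x = 1 if flag else None   # not RPython
    return x
```

On plain Python, `rpython_ok(2)` returns `8`; the point is only that the first function keeps every variable at a single, fixed type while the second does not.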
**lists** lists are used as an allocated array. Lists are over-allocated, so list.append() - is reasonably fast. Negative or out-of-bound indexes are only allowed for the + is reasonably fast. However, if you use a fixed-size list, the code + is more efficient. Annotator can figure out most of the time that your + list is fixed-size, even when you use list comprehension. + Negative or out-of-bound indexes are only allowed for the most common operations, as follows: - *indexing*: @@ -287,16 +296,14 @@ **dicts** - dicts with a unique key type only, provided it is hashable. - String keys have been the only allowed key types for a while, but this was generalized. - After some re-optimization, - the implementation could safely decide that all string dict keys should be interned. + dicts with a unique key type only, provided it is hashable. Custom + hash functions and custom equality will not be honored. + Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions. **list comprehensions** - may be used to create allocated, initialized arrays. - After list over-allocation was introduced, there is no longer any restriction. + May be used to create allocated, initialized arrays. **functions** @@ -334,9 +341,8 @@ **objects** - in PyPy, wrapped objects are borrowed from the object space. Just like - in CPython, code that needs e.g. a dictionary can use a wrapped dict - and the object space operations on it. + Normal rules apply. Special methods are not honoured, except ``__init__`` and + ``__del__``. This layout makes the number of types to take care about quite limited. diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -197,3 +197,10 @@ # Example configuration for intersphinx: refer to the Python standard library. 
intersphinx_mapping = {'http://docs.python.org/': None} +# -- Options for manpage output------------------------------------------------- + +man_pages = [ + ('man/pypy.1', 'pypy', + u'fast, compliant alternative implementation of the Python language', + u'The PyPy Project', 1) +] diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst --- a/pypy/doc/extradoc.rst +++ b/pypy/doc/extradoc.rst @@ -8,6 +8,9 @@ *Articles about PyPy published so far, most recent first:* (bibtex_ file) +* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_, + C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo + * `Allocation Removal by Partial Evaluation in a Tracing JIT`_, C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo @@ -50,6 +53,9 @@ *Other research using PyPy (as far as we know it):* +* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_, + N. Riley and C. Zilles + * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_, C. Bruni and T. Verwaest @@ -65,6 +71,7 @@ .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib +.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf .. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf @@ -74,6 +81,7 @@ .. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf .. 
_`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07 .. _`EU Reports`: index-report.html +.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz .. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7 diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/man/pypy.1.rst @@ -0,0 +1,90 @@ +====== + pypy +====== + +SYNOPSIS +======== + +``pypy`` [*options*] +[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ] +[*arg*\ ...] + +OPTIONS +======= + +-i + Inspect interactively after running script. + +-O + Dummy optimization flag for compatibility with C Python. + +-c *cmd* + Program passed in as CMD (terminates option list). + +-S + Do not ``import site`` on initialization. + +-u + Unbuffered binary ``stdout`` and ``stderr``. + +-h, --help + Show a help message and exit. + +-m *mod* + Library module to be run as a script (terminates option list). + +-W *arg* + Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*). + +-E + Ignore environment variables (such as ``PYTHONPATH``). + +--version + Print the PyPy version. + +--info + Print translation information about this PyPy executable. + +--jit *arg* + Low level JIT parameters. Format is + *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...] + + ``off`` + Disable the JIT. + + ``threshold=``\ *value* + Number of times a loop has to run for it to become hot. 
+ + ``function_threshold=``\ *value* + Number of times a function must run for it to become traced from + start. + + ``inlining=``\ *value* + Inline python functions or not (``1``/``0``). + + ``loop_longevity=``\ *value* + A parameter controlling how long loops will be kept before being + freed, an estimate. + + ``max_retrace_guards=``\ *value* + Number of extra guards a retrace can cause. + + ``retrace_limit=``\ *value* + How many times we can try retracing before giving up. + + ``trace_eagerness=``\ *value* + Number of times a guard has to fail before we start compiling a + bridge. + + ``trace_limit=``\ *value* + Number of recorded operations before we abort tracing with + ``ABORT_TRACE_TOO_LONG``. + + ``enable_opts=``\ *value* + Optimizations to enabled or ``all``. + Warning, this option is dangerous, and should be avoided. + +SEE ALSO +======== + +**python**\ (1) diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py deleted file mode 100644 --- a/pypy/doc/tool/makecontributor.py +++ /dev/null @@ -1,47 +0,0 @@ -""" - -generates a contributor list - -""" -import py - -# this file is useless, use the following commandline instead: -# hg churn -c -t "{author}" | sed -e 's/ <.*//' - -try: - path = py.std.sys.argv[1] -except IndexError: - print "usage: %s ROOTPATH" %(py.std.sys.argv[0]) - raise SystemExit, 1 - -d = {} - -for logentry in py.path.svnwc(path).log(): - a = logentry.author - if a in d: - d[a] += 1 - else: - d[a] = 1 - -items = d.items() -items.sort(lambda x,y: -cmp(x[1], y[1])) - -import uconf # http://codespeak.net/svn/uconf/dist/uconf - -# Authors that don't want to be listed -excluded = set("anna gintas ignas".split()) -cutoff = 5 # cutoff for authors in the LICENSE file -mark = False -for author, count in items: - if author in excluded: - continue - user = uconf.system.User(author) - try: - realname = user.realname.strip() - except KeyError: - realname = author - if not mark and count < cutoff: - mark = True - print '-'*60 - 
print " ", realname - #print count, " ", author diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - 
msg = "%s() got an unexpected keyword argument '%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1591,12 +1591,15 @@ 'ArithmeticError', 'AssertionError', 'AttributeError', + 'BaseException', + 'DeprecationWarning', 'EOFError', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', + 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', @@ -1617,7 +1620,10 @@ 'TabError', 'TypeError', 'UnboundLocalError', + 'UnicodeDecodeError', 'UnicodeError', + 'UnicodeEncodeError', + 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', 'UnicodeEncodeError', diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py --- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -2,7 +2,6 @@ This module defines the abstract base classes that support execution: Code and Frame. """ -from pypy.rlib import jit from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import Wrappable diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -162,7 +162,8 @@ # generate 2 versions of the function and 2 jit drivers. 
def _create_unpack_into(): jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results']) + reds=['self', 'frame', 'results'], + name='unpack_into') def unpack_into(self, results): """This is a hack for performance: runs the generator and collects all produced items in a list.""" @@ -196,4 +197,4 @@ self.frame = None return unpack_into unpack_into = _create_unpack_into() - unpack_into_w = _create_unpack_into() \ No newline at end of file + unpack_into_w = _create_unpack_into() diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = 
err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() 
err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): diff --git a/pypy/jit/backend/llsupport/test/test_runner.py b/pypy/jit/backend/llsupport/test/test_runner.py --- a/pypy/jit/backend/llsupport/test/test_runner.py +++ b/pypy/jit/backend/llsupport/test/test_runner.py @@ -8,6 +8,12 @@ class MyLLCPU(AbstractLLCPU): supports_floats = True + + class assembler(object): + @staticmethod + def set_debug(flag): + pass + def compile_loop(self, inputargs, operations, looptoken): py.test.skip("llsupport test: cannot compile operations") diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -17,6 +17,7 @@ from pypy.rpython.llinterp import LLException from pypy.jit.codewriter import heaptracker, longlong from pypy.rlib.rarithmetic import intmask +from pypy.jit.backend.detect_cpu import 
autodetect_main_model_and_size def boxfloat(x): return BoxFloat(longlong.getfloatstorage(x)) @@ -27,6 +28,9 @@ class Runner(object): + add_loop_instruction = ['overload for a specific cpu'] + bridge_loop_instruction = ['overload for a specific cpu'] + def execute_operation(self, opname, valueboxes, result_type, descr=None): inputargs, operations = self._get_single_operation_list(opname, result_type, @@ -2974,6 +2978,56 @@ res = self.cpu.get_latest_value_int(0) assert res == -10 + def test_compile_asmlen(self): + from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU + if not isinstance(self.cpu, AbstractLLCPU): + py.test.skip("pointless test on non-asm") + from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + import ctypes + ops = """ + [i2] + i0 = same_as(i2) # but forced to be in a register + label(i0, descr=1) + i1 = int_add(i0, i0) + guard_true(i1, descr=faildesr) [i1] + jump(i1, descr=1) + """ + faildescr = BasicFailDescr(2) + loop = parse(ops, self.cpu, namespace=locals()) + faildescr = loop.operations[-2].getdescr() + jumpdescr = loop.operations[-1].getdescr() + bridge_ops = """ + [i0] + jump(i0, descr=jumpdescr) + """ + bridge = parse(bridge_ops, self.cpu, namespace=locals()) + looptoken = JitCellToken() + self.cpu.assembler.set_debug(False) + info = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs, + bridge.operations, + looptoken) + self.cpu.assembler.set_debug(True) # always on untranslated + assert info.asmlen != 0 + cpuname = autodetect_main_model_and_size() + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + + def checkops(mc, ops): + assert len(mc) == len(ops) + for i in range(len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i]) + + data = ctypes.string_at(info.asmaddr, info.asmlen) + mc = list(machine_code_dump(data, info.asmaddr, cpuname)) + lines = [line for line in mc if 
line.count('\t') == 2] + checkops(lines, self.add_loop_instructions) + data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) + mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.bridge_loop_instructions) + + def test_compile_bridge_with_target(self): # This test creates a loopy piece of code in a bridge, and builds another # unrelated loop that ends in a jump directly to this loopy bit of code. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -7,6 +7,7 @@ from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper +from pypy.rlib.jit import AsmInfo from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, gpr_reg_mgr_cls, _valid_addressing_size) @@ -411,6 +412,7 @@ '''adds the following attributes to looptoken: _x86_function_addr (address of the generated func, as an int) _x86_loop_code (debug: addr of the start of the ResOps) + _x86_fullsize (debug: full size including failure) _x86_debug_checksum ''' # XXX this function is too longish and contains some code @@ -476,7 +478,8 @@ name = "Loop # %s: %s" % (looptoken.number, loopname) self.cpu.profile_agent.native_code_written(name, rawstart, full_size) - return ops_offset + return AsmInfo(ops_offset, rawstart + looppos, + size_excluding_failure_stuff - looppos) def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): @@ -485,12 +488,7 @@ assert len(set(inputargs)) == len(inputargs) descr_number = self.cpu.get_fail_descr_number(faildescr) - try: - failure_recovery = self._find_failure_recovery_bytecode(faildescr) - except ValueError: - debug_print("Bridge out of guard", descr_number, - "was 
already compiled!") - return + failure_recovery = self._find_failure_recovery_bytecode(faildescr) self.setup(original_loop_token) if log: @@ -503,6 +501,7 @@ [loc.assembler() for loc in faildescr._x86_debug_faillocs]) regalloc = RegAlloc(self, self.cpu.translate_support_code) fail_depths = faildescr._x86_current_depths + startpos = self.mc.get_relative_pos() operations = regalloc.prepare_bridge(fail_depths, inputargs, arglocs, operations, self.current_clt.allgcrefs) @@ -537,7 +536,7 @@ name = "Bridge # %s" % (descr_number,) self.cpu.profile_agent.native_code_written(name, rawstart, fullsize) - return ops_offset + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def write_pending_failure_recoveries(self): # for each pending guard, generate the code of the recovery stub @@ -621,7 +620,10 @@ def _find_failure_recovery_bytecode(self, faildescr): adr_jump_offset = faildescr._x86_adr_jump_offset if adr_jump_offset == 0: - raise ValueError + # This case should be prevented by the logic in compile.py: + # look for CNT_BUSY_FLAG, which disables tracing from a guard + # when another tracing from the same guard is already in progress. 
+ raise BridgeAlreadyCompiled # follow the JMP/Jcond p = rffi.cast(rffi.INTP, adr_jump_offset) adr_target = adr_jump_offset + 4 + rffi.cast(lltype.Signed, p[0]) @@ -810,7 +812,10 @@ target = newlooptoken._x86_function_addr mc = codebuf.MachineCodeBlockWrapper() mc.JMP(imm(target)) - assert mc.get_relative_pos() <= 13 # keep in sync with prepare_loop() + if WORD == 4: # keep in sync with prepare_loop() + assert mc.get_relative_pos() == 5 + else: + assert mc.get_relative_pos() <= 13 mc.copy_to_raw_memory(oldadr) def dump(self, text): @@ -2550,3 +2555,6 @@ def not_implemented(msg): os.write(2, '[x86/asm] %s\n' % msg) raise NotImplementedError(msg) + +class BridgeAlreadyCompiled(Exception): + pass diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -188,7 +188,10 @@ # note: we need to make a copy of inputargs because possibly_free_vars # is also used on op args, which is a non-resizable list self.possibly_free_vars(list(inputargs)) - self.min_bytes_before_label = 13 + if WORD == 4: # see redirect_call_assembler() + self.min_bytes_before_label = 5 + else: + self.min_bytes_before_label = 13 return operations def prepare_bridge(self, prev_depths, inputargs, arglocs, operations, @@ -741,7 +744,7 @@ self.xrm.possibly_free_var(op.getarg(0)) def consider_cast_int_to_float(self, op): - loc0 = self.rm.loc(op.getarg(0)) + loc0 = self.rm.make_sure_var_in_reg(op.getarg(0)) loc1 = self.xrm.force_allocate_reg(op.result) self.Perform(op, [loc0], loc1) self.rm.possibly_free_var(op.getarg(0)) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -33,6 +33,13 @@ # for the individual tests see # ====> ../../test/runner_test.py + add_loop_instructions = ['mov', 'add', 'test', 'je', 'jmp'] + if WORD == 4: + bridge_loop_instructions = ['lea', 
'jmp'] + else: + # the 'mov' is part of the 'jmp' so far + bridge_loop_instructions = ['lea', 'mov', 'jmp'] + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -416,12 +423,13 @@ ] inputargs = [i0] debug._log = dlog = debug.DebugLog() - ops_offset = self.cpu.compile_loop(inputargs, operations, looptoken) + info = self.cpu.compile_loop(inputargs, operations, looptoken) + ops_offset = info.ops_offset debug._log = None # assert ops_offset is looptoken._x86_ops_offset - # getfield_raw/int_add/setfield_raw + ops + None - assert len(ops_offset) == 3 + len(operations) + 1 + # 2*(getfield_raw/int_add/setfield_raw) + ops + None + assert len(ops_offset) == 2*3 + len(operations) + 1 assert (ops_offset[operations[0]] <= ops_offset[operations[1]] <= ops_offset[operations[2]] <= diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -39,6 +39,7 @@ def machine_code_dump(data, originaddr, backend_name, label_list=None): objdump_backend_option = { 'x86': 'i386', + 'x86_32': 'i386', 'x86_64': 'x86-64', 'i386': 'i386', } diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -8,11 +8,15 @@ class JitPolicy(object): - def __init__(self): + def __init__(self, jithookiface=None): self.unsafe_loopy_graphs = set() self.supports_floats = False self.supports_longlong = False self.supports_singlefloats = False + if jithookiface is None: + from pypy.rlib.jit import JitHookInterface + jithookiface = JitHookInterface() + self.jithookiface = jithookiface def set_supports_floats(self, flag): self.supports_floats = flag diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -5,6 +5,7 @@ from pypy.rlib.objectmodel import 
we_are_translated from pypy.rlib.debug import debug_start, debug_stop, debug_print from pypy.rlib import rstack +from pypy.rlib.jit import JitDebugInfo from pypy.conftest import option from pypy.tool.sourcetools import func_with_new_name @@ -75,7 +76,7 @@ if descr is not original_jitcell_token: original_jitcell_token.record_jump_to(descr) descr.exported_state = None - op._descr = None # clear reference, mostly for tests + op.cleardescr() # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential jump. # (the following test is not enough to prevent more complicated @@ -90,8 +91,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op._descr = None # clear reference to prevent the history.Stats - # from keeping the loop alive during tests + op.cleardescr() # clear reference to prevent the history.Stats + # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -296,8 +297,6 @@ patch_new_loop_to_load_virtualizable_fields(loop, jitdriver_sd) original_jitcell_token = loop.original_jitcell_token - jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, - loop.operations, type, greenkey) loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata original_jitcell_token.number = n = globaldata.loopnumbering @@ -307,21 +306,38 @@ show_procedures(metainterp_sd, loop) loop.check_consistency() + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, + type, greenkey) + hooks.before_compile(debug_info) + else: + debug_info = None + hooks = None operations = get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() 
debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, name=loopname) + asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, + original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile(debug_info) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # loopname = jitdriver_sd.warmstate.get_location_str(greenkey) + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset, name=loopname) @@ -332,25 +348,40 @@ def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): n = metainterp_sd.cpu.get_fail_descr_number(faildescr) - jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, - original_loop_token, operations, n) if not we_are_translated(): show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_loop_token, operations, 'bridge', + fail_descr_no=n) + hooks.before_compile_bridge(debug_info) + else: + hooks = None + debug_info = None + operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() - operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, - original_loop_token) + asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") 
metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile_bridge(debug_info) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") # + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset) # #if metainterp_sd.warmrunnerdesc is not None: # for tests diff --git a/pypy/jit/metainterp/jitdriver.py b/pypy/jit/metainterp/jitdriver.py --- a/pypy/jit/metainterp/jitdriver.py +++ b/pypy/jit/metainterp/jitdriver.py @@ -21,7 +21,6 @@ # self.portal_finishtoken... pypy.jit.metainterp.pyjitpl # self.index ... pypy.jit.codewriter.call # self.mainjitcode ... pypy.jit.codewriter.call - # self.on_compile ... pypy.jit.metainterp.warmstate # These attributes are read by the backend in CALL_ASSEMBLER: # self.assembler_helper_adr diff --git a/pypy/jit/metainterp/jitprof.py b/pypy/jit/metainterp/jitprof.py --- a/pypy/jit/metainterp/jitprof.py +++ b/pypy/jit/metainterp/jitprof.py @@ -18,8 +18,8 @@ OPT_FORCINGS ABORT_TOO_LONG ABORT_BRIDGE +ABORT_BAD_LOOP ABORT_ESCAPE -ABORT_BAD_LOOP ABORT_FORCE_QUASIIMMUT NVIRTUALS NVHOLES @@ -30,10 +30,13 @@ TOTAL_FREED_BRIDGES """ +counter_names = [] + def _setup(): names = counters.split() for i, name in enumerate(names): globals()[name] = i + counter_names.append(name) global ncounters ncounters = len(names) _setup() diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -234,11 +234,11 @@ # longlongs are treated as floats, see # e.g. 
llsupport/descr.py:getDescrClass is_float = True - elif kind == 'u': + elif kind == 'u' or kind == 's': # they're all False pass else: - assert False, "unsupported ffitype or kind" + raise NotImplementedError("unsupported ffitype or kind: %s" % kind) # fieldsize = rffi.getintfield(ffitype, 'c_size') return self.optimizer.cpu.interiorfielddescrof_dynamic( diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -442,6 +442,22 @@ """ self.optimize_loop(ops, expected) + def test_optimizer_renaming_boxes_not_imported(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + jump(p1, i11) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -117,7 +117,7 @@ def optimize_loop(self, ops, optops, call_pure_results=None): loop = self.parse(ops) - token = JitCellToken() + token = JitCellToken() loop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=TargetToken(token))] + \ loop.operations if loop.operations[-1].getopnum() == rop.JUMP: diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -271,6 +271,10 @@ if newresult is not op.result and not newvalue.is_constant(): op = ResOperation(rop.SAME_AS, [op.result], newresult) self.optimizer._newoperations.append(op) + if self.optimizer.loop.logops: + debug_print(' Falling back 
to add extra: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + self.optimizer.flush() self.optimizer.emitting_dissabled = False @@ -435,7 +439,13 @@ return for a in op.getarglist(): if not isinstance(a, Const) and a not in seen: - self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, seen) + self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, + seen) + + if self.optimizer.loop.logops: + debug_print(' Emitting short op: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + optimizer.send_extra_operation(op) seen[op.result] = True if op.is_ovf(): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1793,6 +1793,15 @@ def aborted_tracing(self, reason): self.staticdata.profiler.count(reason) debug_print('~~~ ABORTING TRACING') + jd_sd = self.jitdriver_sd + if not self.current_merge_points: + greenkey = None # we're in the bridge + else: + greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args] + self.staticdata.warmrunnerdesc.hooks.on_abort(reason, + jd_sd.jitdriver, + greenkey, + jd_sd.warmstate.get_location_str(greenkey)) self.staticdata.stats.aborted() def blackhole_if_trace_too_long(self): @@ -1967,8 +1976,6 @@ self.compile_loop(original_boxes, live_arg_boxes, start, resumedescr) # creation of the loop was cancelled! self.staticdata.log('cancelled, tracing more...') - #self.staticdata.log('cancelled, stopping tracing') - #raise SwitchToBlackhole(ABORT_BAD_LOOP) # Otherwise, no loop found so far, so continue tracing. 
start = len(self.history.operations) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -16,15 +16,15 @@ # debug name = "" pc = 0 + opnum = 0 + + _attrs_ = ('result',) def __init__(self, result): self.result = result - # methods implemented by each concrete class - # ------------------------------------------ - def getopnum(self): - raise NotImplementedError + return self.opnum # methods implemented by the arity mixins # --------------------------------------- @@ -64,6 +64,9 @@ def setdescr(self, descr): raise NotImplementedError + def cleardescr(self): + pass + # common methods # -------------- @@ -196,6 +199,9 @@ self._check_descr(descr) self._descr = descr + def cleardescr(self): + self._descr = None + def _check_descr(self, descr): if not we_are_translated() and getattr(descr, 'I_am_a_descr', False): return # needed for the mock case in oparser_model @@ -590,12 +596,9 @@ baseclass = PlainResOp mixin = arity2mixin.get(arity, N_aryOp) - def getopnum(self): - return opnum - cls_name = '%s_OP' % name bases = (get_base_class(mixin, baseclass),) - dic = {'getopnum': getopnum} + dic = {'opnum': opnum} return type(cls_name, bases, dic) setup(__name__ == '__main__') # print out the table when run directly diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -56,8 +56,6 @@ greenfield_info = None result_type = result_kind portal_runner_ptr = "???" 
- on_compile = lambda *args: None - on_compile_bridge = lambda *args: None stats = history.Stats() cpu = CPUClass(rtyper, stats, None, False) diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -53,8 +53,6 @@ call_pure_results = {} class jitdriver_sd: warmstate = FakeState() - on_compile = staticmethod(lambda *args: None) - on_compile_bridge = staticmethod(lambda *args: None) virtualizable_info = None def test_compile_loop(): diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -148,28 +148,38 @@ self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4, 'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2}) - def test_array_getitem_uint8(self): + def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE): + reds = ["n", "i", "s", "data"] + if COMPUTE_TYPE is lltype.Float: + # Move the float var to the back. 
+ reds.remove("s") + reds.append("s") myjitdriver = JitDriver( greens = [], - reds = ["n", "i", "s", "data"], + reds = reds, ) def f(data, n): - i = s = 0 + i = 0 + s = rffi.cast(COMPUTE_TYPE, 0) while i < n: myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data) - s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0)) + s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0)) i += 1 return s + def main(n): + with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data: + data[0] = rffi.cast(TYPE, 200) + return f(data, n) + assert self.meta_interp(main, [10]) == 2000 - def main(n): - with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data: - data[0] = rffi.cast(rffi.UCHAR, 200) - return f(data, n) - - assert self.meta_interp(main, [10]) == 2000 + def test_array_getitem_uint8(self): + self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed) self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2, 'guard_true': 2, 'int_add': 4}) + def test_array_getitem_float(self): + self._test_getitem_type(rffi.FLOAT, types.float, lltype.Float) + class TestFfiCall(FfiCallTests, LLJitMixin): supports_all = False diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -10,57 +10,6 @@ def getloc2(g): return "in jitdriver2, with g=%d" % g -class JitDriverTests(object): - def test_on_compile(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = looptoken - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - i += 1 - - self.meta_interp(loop, [1, 4]) - assert sorted(called.keys()) == [(4, 1, "loop")] - self.meta_interp(loop, [2, 4]) - assert 
sorted(called.keys()) == [(4, 1, "loop"), - (4, 2, "loop")] - - def test_on_compile_bridge(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = loop - def on_compile_bridge(self, logger, orig_token, operations, n): - assert 'bridge' not in called - called['bridge'] = orig_token - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - if i >= 4: - i += 2 - i += 1 - - self.meta_interp(loop, [1, 10]) - assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] - - -class TestLLtypeSingle(JitDriverTests, LLJitMixin): - pass - class MultipleJitDriversTests(object): def test_simple(self): diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/test/test_jitiface.py @@ -0,0 +1,148 @@ + +from pypy.rlib.jit import JitDriver, JitHookInterface +from pypy.rlib import jit_hooks +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.jit.codewriter.policy import JitPolicy +from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT +from pypy.jit.metainterp.resoperation import rop +from pypy.rpython.annlowlevel import hlstr + +class TestJitHookInterface(LLJitMixin): + def test_abort_quasi_immut(self): + reasons = [] + + class MyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + assert jitdriver is myjitdriver + assert len(greenkey) == 1 + reasons.append(reason) + assert greenkey_repr == 'blah' + + iface = MyJitIface() + + myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'], + get_printable_location=lambda *args: 'blah') + + class Foo: + _immutable_fields_ = ['a?'] + def __init__(self, a): + self.a = a + def f(a, x): + foo = Foo(a) + total = 0 + while x > 0: + 
myjitdriver.jit_merge_point(foo=foo, x=x, total=total) + # read a quasi-immutable field out of a Constant + total += foo.a + foo.a += 1 + x -= 1 + return total + # + assert f(100, 7) == 721 + res = self.meta_interp(f, [100, 7], policy=JitPolicy(iface)) + assert res == 721 + assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 + + def test_on_compile(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append(("compile", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + def before_compile(self, di): + called.append(("optimize", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + #def before_optimize(self, jitdriver, logger, looptoken, oeprations, + # type, greenkey): + # called.append(("trace", greenkey[1].getint(), + # greenkey[0].getint(), type)) + + iface = MyJitIface() + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + i += 1 + + self.meta_interp(loop, [1, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop")] + self.meta_interp(loop, [2, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop"), + #("trace", 4, 2, "loop"), + ("optimize", 4, 2, "loop"), + ("compile", 4, 2, "loop")] + + def test_on_compile_bridge(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append("compile") + + def after_compile_bridge(self, di): + called.append("compile_bridge") + + def before_compile_bridge(self, di): + called.append("before_compile_bridge") + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + if i >= 4: + i += 2 + i += 1 + + 
self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitIface())) + assert called == ["compile", "before_compile_bridge", "compile_bridge"] + + def test_resop_interface(self): + driver = JitDriver(greens = [], reds = ['i']) + + def loop(i): + while i > 0: + driver.jit_merge_point(i=i) + i -= 1 + + def main(): + loop(1) + op = jit_hooks.resop_new(rop.INT_ADD, + [jit_hooks.boxint_new(3), + jit_hooks.boxint_new(4)], + jit_hooks.boxint_new(1)) + assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add' + assert jit_hooks.resop_getopnum(op) == rop.INT_ADD + box = jit_hooks.resop_getarg(op, 0) + assert jit_hooks.box_getint(box) == 3 + box2 = jit_hooks.box_clone(box) + assert box2 != box + assert jit_hooks.box_getint(box2) == 3 + assert not jit_hooks.box_isconst(box2) + box3 = jit_hooks.box_constbox(box) + assert jit_hooks.box_getint(box) == 3 + assert jit_hooks.box_isconst(box3) + box4 = jit_hooks.box_nonconstbox(box) + assert not jit_hooks.box_isconst(box4) + box5 = jit_hooks.boxint_new(18) + jit_hooks.resop_setarg(op, 0, box5) + assert jit_hooks.resop_getarg(op, 0) == box5 + box6 = jit_hooks.resop_getresult(op) + assert jit_hooks.box_getint(box6) == 1 + jit_hooks.resop_setresult(op, box5) + assert jit_hooks.resop_getresult(op) == box5 + + self.meta_interp(main, []) diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py --- a/pypy/jit/metainterp/test/test_resoperation.py +++ b/pypy/jit/metainterp/test/test_resoperation.py @@ -30,17 +30,17 @@ cls = rop.opclasses[rop.rop.INT_ADD] assert issubclass(cls, rop.PlainResOp) assert issubclass(cls, rop.BinaryOp) - assert cls.getopnum.im_func(None) == rop.rop.INT_ADD + assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD cls = rop.opclasses[rop.rop.CALL] assert issubclass(cls, rop.ResOpWithDescr) assert issubclass(cls, rop.N_aryOp) - assert cls.getopnum.im_func(None) == rop.rop.CALL + assert cls.getopnum.im_func(cls) == rop.rop.CALL cls = rop.opclasses[rop.rop.GUARD_TRUE] assert 
issubclass(cls, rop.GuardResOp) assert issubclass(cls, rop.UnaryOp) - assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE + assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE def test_mixins_in_common_base(): INT_ADD = rop.opclasses[rop.rop.INT_ADD] diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -3,7 +3,9 @@ from pypy.jit.backend.llgraph import runner from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint +from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_opnum from pypy.jit.metainterp.jitprof import Profiler +from pypy.jit.metainterp.resoperation import rop from pypy.rpython.lltypesystem import lltype, llmemory class TranslationTest: @@ -22,6 +24,7 @@ # - jitdriver hooks # - two JITs # - string concatenation, slicing and comparison + # - jit hooks interface class Frame(object): _virtualizable2_ = ['l[*]'] @@ -91,7 +94,9 @@ return f.i # def main(i, j): - return f(i) - f2(i+j, i, j) + op = resop_new(rop.INT_ADD, [boxint_new(3), boxint_new(5)], + boxint_new(8)) + return f(i) - f2(i+j, i, j) + resop_opnum(op) res = ll_meta_interp(main, [40, 5], CPUClass=self.CPUClass, type_system=self.type_system, listops=True) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -1,4 +1,5 @@ import sys, py +from pypy.tool.sourcetools import func_with_new_name from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.annlowlevel import llhelper, MixLevelHelperAnnotator,\ cast_base_ptr_to_instance, hlstr @@ -112,7 +113,7 @@ return ll_meta_interp(function, args, backendopt=backendopt, translate_support_code=True, **kwds) -def _find_jit_marker(graphs, marker_name): +def _find_jit_marker(graphs, marker_name, 
check_driver=True): results = [] for graph in graphs: for block in graph.iterblocks(): @@ -120,8 +121,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - (op.args[1].value is None or - op.args[1].value.active)): # the jitdriver + (not check_driver or op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -140,6 +141,9 @@ "found several jit_merge_points in the same graph") return results +def find_access_helpers(graphs): + return _find_jit_marker(graphs, 'access_helper', False) + def locate_jit_merge_point(graph): [(graph, block, pos)] = find_jit_merge_points([graph]) return block, pos, block.operations[pos] @@ -206,6 +210,7 @@ vrefinfo = VirtualRefInfo(self) self.codewriter.setup_vrefinfo(vrefinfo) # + self.hooks = policy.jithookiface self.make_virtualizable_infos() self.make_exception_classes() self.make_driverhook_graphs() @@ -213,6 +218,7 @@ self.rewrite_jit_merge_points(policy) verbose = False # not self.cpu.translate_support_code + self.rewrite_access_helpers() self.codewriter.make_jitcodes(verbose=verbose) self.rewrite_can_enter_jits() self.rewrite_set_param() @@ -619,6 +625,24 @@ graph = self.annhelper.getgraph(func, args_s, s_result) return self.annhelper.graph2delayed(graph, FUNC) + def rewrite_access_helpers(self): + ah = find_access_helpers(self.translator.graphs) + for graph, block, index in ah: + op = block.operations[index] + self.rewrite_access_helper(op) + + def rewrite_access_helper(self, op): + ARGS = [arg.concretetype for arg in op.args[2:]] + RESULT = op.result.concretetype + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT)) + # make sure we make a copy of function so it no longer belongs + # to extregistry + func = op.args[1].value + func = func_with_new_name(func, func.func_name + '_compiled') + ptr = self.helper_func(FUNCPTR, func) + op.opname = 'direct_call' + op.args = [Constant(ptr, FUNCPTR)] + op.args[2:] + def 
rewrite_jit_merge_points(self, policy): for jd in self.jitdrivers_sd: self.rewrite_jit_merge_point(jd, policy) diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -596,20 +596,6 @@ return fn(*greenargs) self.should_unroll_one_iteration = should_unroll_one_iteration - if hasattr(jd.jitdriver, 'on_compile'): - def on_compile(logger, token, operations, type, greenkey): - greenargs = unwrap_greenkey(greenkey) - return jd.jitdriver.on_compile(logger, token, operations, type, - *greenargs) - def on_compile_bridge(logger, orig_token, operations, n): - return jd.jitdriver.on_compile_bridge(logger, orig_token, - operations, n) - jd.on_compile = on_compile - jd.on_compile_bridge = on_compile_bridge - else: - jd.on_compile = lambda *args: None - jd.on_compile_bridge = lambda *args: None - redargtypes = ''.join([kind[0] for kind in jd.red_args_types]) def get_assembler_token(greenkey): diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -89,11 +89,18 @@ assert typ == 'class' return self.model.ConstObj(ootype.cast_to_object(obj)) - def get_descr(self, poss_descr): + def get_descr(self, poss_descr, allow_invent): if poss_descr.startswith('<'): return None - else: + try: return self._consts[poss_descr] + except KeyError: + if allow_invent: + int(poss_descr) + token = self.model.JitCellToken() + tt = self.model.TargetToken(token) + self._consts[poss_descr] = tt + return tt def box_for_var(self, elem): try: @@ -186,7 +193,8 @@ poss_descr = allargs[-1].strip() if poss_descr.startswith('descr='): - descr = self.get_descr(poss_descr[len('descr='):]) + descr = self.get_descr(poss_descr[len('descr='):], + opname == 'label') allargs = allargs[:-1] for arg in allargs: arg = arg.strip() diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py +++ 
b/pypy/jit/tool/oparser_model.py @@ -6,7 +6,7 @@ from pypy.jit.metainterp.history import TreeLoop, JitCellToken from pypy.jit.metainterp.history import Box, BoxInt, BoxFloat from pypy.jit.metainterp.history import ConstInt, ConstObj, ConstPtr, ConstFloat - from pypy.jit.metainterp.history import BasicFailDescr + from pypy.jit.metainterp.history import BasicFailDescr, TargetToken from pypy.jit.metainterp.typesystem import llhelper from pypy.jit.metainterp.history import get_const_ptr_for_string @@ -42,6 +42,10 @@ class JitCellToken(object): I_am_a_descr = True + class TargetToken(object): + def __init__(self, jct): + pass + class BasicFailDescr(object): I_am_a_descr = True diff --git a/pypy/jit/tool/test/test_oparser.py b/pypy/jit/tool/test/test_oparser.py --- a/pypy/jit/tool/test/test_oparser.py +++ b/pypy/jit/tool/test/test_oparser.py @@ -4,7 +4,8 @@ from pypy.jit.tool.oparser import parse, OpParser from pypy.jit.metainterp.resoperation import rop -from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken +from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken,\ + TargetToken class BaseTestOparser(object): @@ -243,6 +244,16 @@ b = loop.getboxes() assert isinstance(b.sum0, BoxInt) + def test_label(self): + x = """ + [i0] + label(i0, descr=1) + jump(i0, descr=1) + """ + loop = self.parse(x) + assert loop.operations[0].getdescr() is loop.operations[1].getdescr() + assert isinstance(loop.operations[0].getdescr(), TargetToken) + class ForbiddenModule(object): def __init__(self, name, old_mod): diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py --- a/pypy/module/_lsprof/interp_lsprof.py +++ b/pypy/module/_lsprof/interp_lsprof.py @@ -19,8 +19,9 @@ # cpu affinity settings srcdir = py.path.local(pypydir).join('translator', 'c', 'src') -eci = ExternalCompilationInfo(separate_module_files= - [srcdir.join('profiling.c')]) +eci = ExternalCompilationInfo( + 
separate_module_files=[srcdir.join('profiling.c')], + export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling']) c_setup_profiling = rffi.llexternal('pypy_setup_profiling', [], lltype.Void, diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable from pypy.rlib.objectmodel import specialize @@ -387,6 +388,8 @@ "Float": "space.w_float", "Long": "space.w_long", "Complex": "space.w_complex", + "ByteArray": "space.w_bytearray", + "MemoryView": "space.gettypeobject(W_MemoryView.typedef)", "BaseObject": "space.w_object", 'None': 'space.type(space.w_None)', 'NotImplemented': 'space.type(space.w_NotImplemented)', diff --git a/pypy/module/cpyext/buffer.py b/pypy/module/cpyext/buffer.py --- a/pypy/module/cpyext/buffer.py +++ b/pypy/module/cpyext/buffer.py @@ -1,6 +1,36 @@ +from pypy.interpreter.error import OperationError from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, Py_buffer) +from pypy.module.cpyext.pyobject import PyObject + + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyObject_CheckBuffer(space, w_obj): + """Return 1 if obj supports the buffer interface otherwise 0.""" + return 0 # the bf_getbuffer field is never filled by cpyext + + at cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real], + rffi.INT_real, error=-1) +def PyObject_GetBuffer(space, w_obj, view, flags): + """Export obj into a Py_buffer, view. These arguments must + never be NULL. 
The flags argument is a bit field indicating what + kind of buffer the caller is prepared to deal with and therefore what + kind of buffer the exporter is allowed to return. The buffer interface + allows for complicated memory sharing possibilities, but some caller may + not be able to handle all the complexity but may want to see if the + exporter will let them take a simpler view to its memory. + + Some exporters may not be able to share memory in every possible way and + may need to raise errors to signal to some consumers that something is + just not possible. These errors should be a BufferError unless + there is another error that is actually causing the problem. The + exporter can use flags information to simplify how much of the + Py_buffer structure is filled in with non-default values and/or + raise an error if the object can't support a simpler view of its memory. + + 0 is returned on success and -1 on error.""" + raise OperationError(space.w_TypeError, space.wrap( + 'PyPy does not yet implement the new buffer interface')) @cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real, error=CANNOT_FAIL) def PyBuffer_IsContiguous(space, view, fortran): diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -123,10 +123,6 @@ typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *); typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **); -typedef int (*objobjproc)(PyObject *, PyObject *); -typedef int (*visitproc)(PyObject *, void *); -typedef int (*traverseproc)(PyObject *, visitproc, void *); - /* Py3k buffer interface */ typedef struct bufferinfo { void *buf; @@ -153,6 +149,41 @@ typedef int (*getbufferproc)(PyObject *, Py_buffer *, int); typedef void (*releasebufferproc)(PyObject *, Py_buffer *); + /* Flags for getting buffers */ +#define PyBUF_SIMPLE 0 +#define PyBUF_WRITABLE 0x0001 +/* we used to include an E, 
backwards compatible alias */ +#define PyBUF_WRITEABLE PyBUF_WRITABLE +#define PyBUF_FORMAT 0x0004 +#define PyBUF_ND 0x0008 +#define PyBUF_STRIDES (0x0010 | PyBUF_ND) +#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) +#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) +#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) +#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE) +#define PyBUF_CONTIG_RO (PyBUF_ND) + +#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE) +#define PyBUF_STRIDED_RO (PyBUF_STRIDES) + +#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT) + +#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT) + + +#define PyBUF_READ 0x100 +#define PyBUF_WRITE 0x200 +#define PyBUF_SHADOW 0x400 +/* end Py3k buffer interface */ + +typedef int (*objobjproc)(PyObject *, PyObject *); +typedef int (*visitproc)(PyObject *, void *); +typedef int (*traverseproc)(PyObject *, visitproc, void *); + typedef struct { /* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all arguments are guaranteed to be of the object's type (modulo diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -5,7 +5,7 @@ struct _is; /* Forward */ typedef struct _is { - int _foo; + struct _is *next; } PyInterpreterState; typedef struct _ts { diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -2,7 +2,10 @@ cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) from pypy.rpython.lltypesystem import rffi, lltype -PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ())) +PyInterpreterStateStruct = lltype.ForwardReference() +PyInterpreterState =
lltype.Ptr(PyInterpreterStateStruct) +cpython_struct( + "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) @@ -54,7 +57,8 @@ class InterpreterState(object): def __init__(self, space): - self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True) + self.interpreter_state = lltype.malloc( + PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) def new_thread_state(self): capsule = ThreadStateCapsule() diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -34,141 +34,6 @@ @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyObject_CheckBuffer(space, obj): - """Return 1 if obj supports the buffer interface otherwise 0.""" - raise NotImplementedError - -@cpython_api([PyObject, Py_buffer, rffi.INT_real], rffi.INT_real, error=-1) -def PyObject_GetBuffer(space, obj, view, flags): - """Export obj into a Py_buffer, view. These arguments must - never be NULL. The flags argument is a bit field indicating what - kind of buffer the caller is prepared to deal with and therefore what - kind of buffer the exporter is allowed to return. The buffer interface - allows for complicated memory sharing possibilities, but some caller may - not be able to handle all the complexity but may want to see if the - exporter will let them take a simpler view to its memory. - - Some exporters may not be able to share memory in every possible way and - may need to raise errors to signal to some consumers that something is - just not possible. These errors should be a BufferError unless - there is another error that is actually causing the problem.
The - exporter can use flags information to simplify how much of the - Py_buffer structure is filled in with non-default values and/or - raise an error if the object can't support a simpler view of its memory. - - 0 is returned on success and -1 on error. - - The following table gives possible values to the flags arguments. - - Flag - - Description - - PyBUF_SIMPLE - - This is the default flag state. The returned - buffer may or may not have writable memory. The - format of the data will be assumed to be unsigned - bytes. This is a "stand-alone" flag constant. It - never needs to be '|'d to the others. The exporter - will raise an error if it cannot provide such a - contiguous buffer of bytes. - - PyBUF_WRITABLE - - The returned buffer must be writable. If it is - not writable, then raise an error. - - PyBUF_STRIDES - - This implies PyBUF_ND. The returned - buffer must provide strides information (i.e. the - strides cannot be NULL). This would be used when - the consumer can handle strided, discontiguous - arrays. Handling strides automatically assumes - you can handle shape. The exporter can raise an - error if a strided representation of the data is - not possible (i.e. without the suboffsets). - - PyBUF_ND - - The returned buffer must provide shape - information. The memory will be assumed C-style - contiguous (last dimension varies the - fastest). The exporter may raise an error if it - cannot provide this kind of contiguous buffer. If - this is not given then shape will be NULL. - - PyBUF_C_CONTIGUOUS - PyBUF_F_CONTIGUOUS - PyBUF_ANY_CONTIGUOUS - - These flags indicate that the contiguity returned - buffer must be respectively, C-contiguous (last - dimension varies the fastest), Fortran contiguous - (first dimension varies the fastest) or either - one. All of these flags imply - PyBUF_STRIDES and guarantee that the - strides buffer info structure will be filled in - correctly.
- - PyBUF_INDIRECT - - This flag indicates the returned buffer must have - suboffsets information (which can be NULL if no - suboffsets are needed). This can be used when - the consumer can handle indirect array - referencing implied by these suboffsets. This - implies PyBUF_STRIDES. - - PyBUF_FORMAT - - The returned buffer must have true format - information if this flag is provided. This would - be used when the consumer is going to be checking - for what 'kind' of data is actually stored. An - exporter should always be able to provide this - information if requested. If format is not - explicitly requested then the format must be - returned as NULL (which means 'B', or - unsigned bytes) - - PyBUF_STRIDED - - This is equivalent to (PyBUF_STRIDES | - PyBUF_WRITABLE). - - PyBUF_STRIDED_RO - - This is equivalent to (PyBUF_STRIDES). - - PyBUF_RECORDS - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_RECORDS_RO - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT). - - PyBUF_FULL - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_FULL_RO - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT). - - PyBUF_CONTIG - - This is equivalent to (PyBUF_ND | - PyBUF_WRITABLE).
- - PyBUF_CONTIG_RO - - This is equivalent to (PyBUF_ND).""" raise NotImplementedError @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -37,6 +37,7 @@ def test_thread_state_interp(self, space, api): ts = api.PyThreadState_Get() assert ts.c_interp == api.PyInterpreterState_Head() + assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO) def test_basic_threadstate_dance(self, space, api): # Let extension modules call these functions, diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -9,7 +9,7 @@ appleveldefs = {} class Module(MixedModule): - applevel_name = 'numpypy' + applevel_name = '_numpypy' submodules = { 'pypy': PyPyModule @@ -48,6 +48,7 @@ 'int_': 'interp_boxes.W_LongBox', 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', + 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpypy +import _numpypy inf = float("inf") @@ -14,29 +14,29 @@ return mean(a) def identity(n, dtype=None): - a = numpypy.zeros((n,n), dtype=dtype) + a = _numpypy.zeros((n,n), dtype=dtype) for i in range(n): a[i][i] = 1 return a def mean(a): if not hasattr(a, "mean"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.mean() def sum(a): if not hasattr(a, "sum"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.sum() def min(a): if not hasattr(a, "min"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.min() def max(a): if not hasattr(a, "max"): - a =
numpypy.array(a) + a = _numpypy.array(a) return a.max() def arange(start, stop=None, step=1, dtype=None): @@ -47,9 +47,9 @@ stop = start start = 0 if dtype is None: - test = numpypy.array([start, stop, step, 0]) + test = _numpypy.array([start, stop, step, 0]) dtype = test.dtype - arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) + arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) i = start for j in range(arr.size): arr[j] = i @@ -90,5 +90,5 @@ you should assign the new shape to the shape attribute of the array ''' if not hasattr(a, 'reshape'): - a = numpypy.array(a) + a = _numpypy.array(a) return a.reshape(shape) diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -78,6 +78,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_pow = _binop_impl("power") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -170,6 +171,7 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __pow__ = interp2app(W_GenericBox.descr_pow), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), @@ -245,6 +247,7 @@ long_name = "int64" W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,), __module__ = "numpypy", + __new__ = interp2app(W_LongBox.descr__new__.im_func), ) W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef, diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -16,24 +16,28 @@ virtualizables=['frame'], reds=['result_size', 'frame', 'ri', 'self', 'result'],
get_printable_location=signature.new_printable_location('numpy'), + name='numpy', ) all_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('all'), + name='numpy_all', ) any_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('any'), + name='numpy_any', ) slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['self', 'frame', 'source', 'res_iter'], get_printable_location=signature.new_printable_location('slice'), + name='numpy_slice', ) def _find_shape_and_elems(space, w_iterable): @@ -297,6 +301,7 @@ greens=['shapelen', 'sig'], reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'], get_printable_location=signature.new_printable_location(op_name), + name='numpy_' + op_name, ) def loop(self): sig = self.find_sig() @@ -380,6 +385,9 @@ def descr_get_dtype(self, space): return space.wrap(self.find_dtype()) + def descr_get_ndim(self, space): + return space.wrap(len(self.shape)) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -409,7 +417,7 @@ def descr_repr(self, space): res = StringBuilder() res.append("array(") - concrete = self.get_concrete() + concrete = self.get_concrete_or_scalar() dtype = concrete.find_dtype() if not concrete.size: res.append('[]') @@ -422,8 +430,9 @@ else: concrete.to_str(space, 1, res, indent=' ') if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \ - not self.size: + not (dtype.kind == interp_dtype.SIGNEDLTR and + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or + not self.size): res.append(", dtype=" + dtype.name) res.append(")") return space.wrap(res.build()) @@ -559,6 +568,18 @@ def descr_mean(self,
space): return space.div(self.descr_sum(space), space.wrap(self.size)) + def descr_var(self, space): + # var = mean((values - mean(values)) ** 2) + w_res = self.descr_sub(space, self.descr_mean(space)) + assert isinstance(w_res, BaseArray) + w_res = w_res.descr_pow(space, space.wrap(2)) + assert isinstance(w_res, BaseArray) + return w_res.descr_mean(space) + + def descr_std(self, space): + # std(v) = sqrt(var(v)) + return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_nonzero(self, space): if self.size > 1: raise OperationError(space.w_ValueError, space.wrap( @@ -840,80 +861,80 @@ each line will begin with indent. ''' size = self.size + ccomma = ',' * comma + ncomma = ',' * (1 - comma) + dtype = self.find_dtype() if size < 1: builder.append('[]') return + elif size == 1: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True - dtype = self.find_dtype() ndims = len(self.shape) i = 0 - start = True builder.append('[') if ndims > 1: if use_ellipsis: - for i in range(3): - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + for i in range(min(3, self.shape[0])): + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) - # create_slice requires len(chunks) > 1 in order to reduce - # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) - builder.append('\n' + indent + '..., ') - i = self.shape[0] - 3 + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) + if i < self.shape[0] - 1: + builder.append(ccomma +'\n' + indent + '...'
+ ncomma) + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: - spacer = ',' * comma + ' ' + spacer = ccomma + ' ' item = self.start # An iterator would be a nicer way to walk along the 1d array, but # how do I reset it if printing ellipsis? iterators have no # "set_offset()" i = 0 if use_ellipsis: - for i in range(3): - if start: - start = False - else: + for i in range(min(3, self.shape[0])): + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] - # Add a comma only if comma is False - this prevents adding two - # commas - builder.append(spacer + '...' + ',' * (1 - comma)) - # Ugly, but can this be done with an iterator? - item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + # Add a comma only if comma is False - this prevents adding + # two commas + builder.append(spacer + '...' + ncomma) + # Ugly, but can this be done with an iterator?
+ item = self.start + self.backstrides[0] - 2 * self.strides[0] + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] i += 1 - else: - builder.append('[') builder.append(']') @jit.unroll_safe @@ -1185,6 +1206,7 @@ shape = GetSetProperty(BaseArray.descr_get_shape, BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), + ndim = GetSetProperty(BaseArray.descr_get_ndim), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), @@ -1199,6 +1221,8 @@ all = interp2app(BaseArray.descr_all), any = interp2app(BaseArray.descr_any), dot = interp2app(BaseArray.descr_dot), + var = interp2app(BaseArray.descr_var), + std = interp2app(BaseArray.descr_std), copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -14,6 +14,7 @@ virtualizables = ["frame"], reds = ["frame", "self", "dtype", "value", "obj"], get_printable_location=new_printable_location('reduce'), + name='numpy_reduce', ) class W_Ufunc(Wrappable): diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -137,6 +137,9 @@ def _invent_array_numbering(self, arr, cache): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() + # this get_concrete never forces assembler.
If we're here and array + # is not of a concrete class it means that we have a _forced_result, + # otherwise the signature would not match assert isinstance(concr, ConcreteArray) self.array_no = _add_ptr_to_cache(concr.storage, cache) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpypy import dtype + from _numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpypy import dtype + from _numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,7 +36,7 @@ assert str(d) == "bool" def test_bool_array(self): - from numpypy import array, False_, True_ + from _numpypy import array, False_, True_ a = array([0, 1, 2, 2.5], dtype='?') assert a[0] is False_ @@ -44,7 +44,7 @@ assert a[i] is True_ def test_copy_array_with_dtype(self): - from numpypy import array, False_, True_, int64 + from _numpypy import array, False_, True_, int64 a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit @@ -58,35 +58,35 @@ assert b[0] is False_ def test_zeros_bool(self): - from numpypy import zeros, False_ + from _numpypy import zeros, False_ a = zeros(10, dtype=bool) for i in range(10): assert a[i] is False_ def test_ones_bool(self): - from numpypy import ones, True_ + from _numpypy import ones, True_ a = ones(10, dtype=bool) for i in range(10): assert a[i] is True_ def test_zeros_long(self): - from
numpypy import zeros, int64 + from _numpypy import zeros, int64 a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 0 def test_ones_long(self): - from numpypy import ones, int64 + from _numpypy import ones, int64 a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 1 def test_overflow(self): - from numpypy import array, dtype + from _numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,19 +156,28 @@ assert b[i] == i * 2 def test_shape(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpypy import dtype + from _numpypy import dtype # You can't subclass dtype
raises(TypeError, type, "Foo", (dtype,), {}) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): - import numpypy as numpy + import _numpypy as numpy raises(TypeError, numpy.generic, 0) raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) @@ -181,7 +190,7 @@ raises(TypeError, numpy.inexact, 0) def test_bool(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object] assert numpy.bool_(3) is numpy.True_ @@ -196,7 +205,7 @@ assert numpy.bool_("False") is numpy.True_ def test_int8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -218,7 +227,7 @@ assert numpy.int8('128') == -128 def test_uint8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -241,7 +250,7 @@ assert numpy.uint8('256') == 0 def test_int16(self): - import numpypy as numpy + import _numpypy as numpy x = numpy.int16(3) assert x == 3 @@ -251,7 +260,7 @@ assert numpy.int16('32768') == -32768 def test_uint16(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint16(65535) == 65535 assert numpy.uint16(65536) == 0 @@ -260,7 +269,7 @@ def test_int32(self): import sys - import numpypy as numpy + import _numpypy as numpy x = numpy.int32(23) assert x == 23 @@ -275,7 +284,7 @@ def test_uint32(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint32(10) == 10 @@ -286,14 +295,14 @@ assert numpy.uint32('4294967296') == 0 def test_int_(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int_ is
numpy.dtype(int).type assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] def test_int64(self): import sys - import numpypy as numpy + import _numpypy as numpy if sys.maxint == 2 ** 63 -1: assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] @@ -315,7 +324,7 @@ def test_uint64(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -330,7 +339,7 @@ raises(OverflowError, numpy.uint64(18446744073709551616)) def test_float32(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, numpy.generic, object] @@ -339,7 +348,7 @@ raises(ValueError, numpy.float32, '23.2df') def test_float64(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] @@ -352,7 +361,7 @@ raises(ValueError, numpy.float64, '23.2df') def test_subclass_type(self): - import numpypy as numpy + import _numpypy as numpy class X(numpy.float64): def m(self): diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -3,33 +3,33 @@ class AppTestNumPyModule(BaseNumpyAppTest): def test_mean(self): - from numpypy import array, mean + from _numpypy import array, mean assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 def test_average(self): - from numpypy import array, average + from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 def test_sum(self): - from numpypy import array, sum + from _numpypy import array, sum assert
sum(range(10)) == 45 assert sum(array(range(10))) == 45 def test_min(self): - from numpypy import array, min + from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 def test_max(self): - from numpypy import array, max + from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 def test_constants(self): import math - from numpypy import inf, e, pi + from _numpypy import inf, e, pi assert type(inf) is float assert inf == float("inf") assert e == math.e diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -158,9 +158,10 @@ assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): - from numpypy import ndarray, array, dtype + from _numpypy import ndarray, array, dtype assert type(ndarray) is type assert type(array) is not type @@ -175,12 +176,26 @@ assert a.dtype is dtype(int) def test_type(self): - from numpypy import array + from _numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) + def test_ndim(self): + from _numpypy import array + x = array(0.2) + assert x.ndim == 0 + x = array([1, 2]) + assert x.ndim == 1 + x = array([[1, 2], [3, 4]]) + assert x.ndim == 2 + x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert x.ndim == 3 + # numpy actually raises an AttributeError, but _numpypy raises an + # TypeError + raises(TypeError, 'x.ndim = 3') + def test_init(self): - from numpypy import zeros + from _numpypy import zeros a = zeros(15) # Check that storage was actually zero'd.
assert a[10] == 0.0 @@ -189,7 +204,7 @@ assert a[13] == 5.3 def test_size(self): - from numpypy import array + from _numpypy import array assert array(3).size == 1 a = array([1, 2, 3]) assert a.size == 3 @@ -200,13 +215,13 @@ Test that empty() works. """ - from numpypy import empty + from _numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpypy import ones + from _numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -215,7 +230,7 @@ assert a[2] == 4 def test_copy(self): - from numpypy import arange, array + from _numpypy import arange, array a = arange(5) b = a.copy() for i in xrange(5): @@ -232,12 +247,12 @@ assert (c == b).all() def test_iterator_init(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a[3] == 3 def test_getitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -246,7 +261,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -256,7 +271,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -264,7 +279,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -275,7 +290,7 @@ assert a[i] == i def test_setslice_array(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -286,7 +301,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -305,7 +320,7 @@ assert a[0] == 3.
def test_setslice_list(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = [0., 1.] a[1:4:2] = b @@ -313,14 +328,14 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. def test_scalar(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(3) raises(IndexError, "a[0]") raises(IndexError, "a[0] = 5") @@ -329,13 +344,13 @@ assert a.dtype is dtype(int) def test_len(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -344,7 +359,7 @@ assert c.shape == (3,) def test_set_shape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array([]) a.shape = [] a = array(range(12)) @@ -364,7 +379,7 @@ a.shape = (1,) def test_reshape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(12)) exc = raises(ValueError, "b = a.reshape((3, 10))") assert str(exc.value) == "total size of new array must be unchanged" @@ -377,7 +392,7 @@ a.shape = (12, 2) def test_slice_reshape(self): - from numpypy import zeros, arange + from _numpypy import zeros, arange a = zeros((4, 2, 3)) b = a[::2, :, :] b.shape = (2, 6) @@ -413,13 +428,13 @@ raises(ValueError, arange(10).reshape, (5, -1, -1)) def test_reshape_varargs(self): - from numpypy import arange + from _numpypy import arange z = arange(96).reshape(12, -1) y = z.reshape(4, 3, 8) assert y.shape == (4, 3, 8) def test_add(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -432,7 +447,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpypy import array + from _numpypy import array
a = array(range(5)) b = array([i for i in reversed(range(5))]) c = a + b @@ -440,20 +455,20 @@ assert c[i] == 4 def test_add_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpypy import array + from _numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpypy import array, ndarray + from _numpypy import array, ndarray a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -462,14 +477,14 @@ assert c[i] == 4 def test_subtract(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -477,34 +492,34 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_scalar_subtract(self): - from numpypy import int32 + from _numpypy import int32 assert int32(2) - 1 == 1 assert 1 - int32(2) == -1 def test_mul(self): - import numpypy + import _numpypy - a = numpypy.array(range(5)) + a = _numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = numpypy.array(range(5), dtype=bool) + a = _numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpypy.dtype(bool) - assert b[0] is numpypy.False_ + assert b.dtype is _numpypy.dtype(bool) + assert b[0] is _numpypy.False_ for i in range(1, 5): - assert b[i] is numpypy.True_ + assert b[i] is _numpypy.True_ def test_mul_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -512,7 +527,7 @@ def test_div(self): from math import isnan - from numpypy import array, dtype, inf + from _numpypy import 
array, dtype, inf a = array(range(1, 6)) b = a / a @@ -544,7 +559,7 @@ assert c[2] == -inf def test_div_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -552,14 +567,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -569,7 +584,7 @@ assert (a ** 2 == a * a).all() def test_pow_other(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -577,14 +592,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) b = a % a for i in range(5): @@ -597,7 +612,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -605,14 +620,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = +a for i in range(5): @@ -623,7 +638,7 @@ assert a[i] == i def test_neg(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = -a for i in range(5): @@ -634,7 +649,7 @@ assert a[i] == -i def test_abs(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = abs(a) for i in range(5): @@ -645,7 +660,7 
@@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -659,7 +674,7 @@ assert c[1] == 4 def test_getslice(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -673,7 +688,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpypy import array + from _numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -681,7 +696,7 @@ assert s[i] == a[2 * i + 1] def test_slice_update(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -691,7 +706,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:2] b = array([10, 11]) @@ -705,13 +720,13 @@ assert d[1] == 12 def test_mean(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 def test_sum(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.sum() == 10.0 assert a[:4].sum() == 6.0 @@ -720,52 +735,52 @@ assert a.sum() == 5 def test_identity(self): - from numpypy import identity, array - from numpypy import int32, float64, dtype + from _numpypy import identity, array + from _numpypy import int32, float64, dtype a = identity(0) assert len(a) == 0 assert a.dtype == dtype('float64') - assert a.shape == (0,0) + assert a.shape == (0, 0) b = identity(1, dtype=int32) assert len(b) == 1 assert b[0][0] == 1 - assert b.shape == (1,1) + assert b.shape == (1, 1) assert b.dtype == dtype('int32') c = identity(2) - assert c.shape == (2,2) - assert (c == [[1,0],[0,1]]).all() + assert c.shape == (2, 2) + assert (c == [[1, 0], [0, 1]]).all() d = identity(3, dtype='int32') - assert d.shape == (3,3) + assert d.shape == (3, 3) assert d.dtype == dtype('int32') - assert (d 
== [[1,0,0],[0,1,0],[0,0,1]]).all() + assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all() def test_prod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a + a).max() == 11.4 def test_min(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) r = a.argmax() assert r == 2 @@ -786,14 +801,14 @@ assert a.argmax() == 2 def test_argmin(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -802,7 +817,7 @@ assert b.all() == True def test_any(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -811,7 +826,7 @@ assert c.any() == False def test_dot(self): - from numpypy import array, dot + from _numpypy import array, dot a = array(range(5)) assert a.dot(a) == 30.0 @@ -821,14 +836,14 @@ assert (dot(5, [1, 2, 3]) == [5, 10, 15]).all() def test_dot_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpypy import array, dtype, float64, int8, bool_ + from _numpypy import array, 
dtype, float64, int8, bool_ assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -845,7 +860,7 @@ def test_comparison(self): import operator - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5)) b = array(range(5), float) @@ -864,7 +879,7 @@ assert c[i] == func(b[i], 3) def test_nonzero(self): - from numpypy import array + from _numpypy import array a = array([1, 2]) raises(ValueError, bool, a) raises(ValueError, bool, a == a) @@ -874,7 +889,7 @@ assert not bool(array([0])) def test_slice_assignment(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[::-1] = a assert (a == [0, 1, 2, 1, 0]).all() @@ -884,8 +899,8 @@ assert (a == [8, 6, 4, 2, 0]).all() def test_debug_repr(self): - from numpypy import zeros, sin - from numpypy.pypy import debug_repr + from _numpypy import zeros, sin + from _numpypy.pypy import debug_repr a = zeros(1) assert debug_repr(a) == 'Array' assert debug_repr(a + a) == 'Call2(add, Array, Array)' @@ -899,8 +914,8 @@ assert debug_repr(b) == 'Array' def test_remove_invalidates(self): - from numpypy import array - from numpypy.pypy import remove_invalidates + from _numpypy import array + from _numpypy.pypy import remove_invalidates a = array([1, 2, 3]) b = a + a remove_invalidates(a) @@ -908,7 +923,7 @@ assert b[0] == 28 def test_virtual_views(self): - from numpypy import arange + from _numpypy import arange a = arange(15) c = (a + a) d = c[::2] @@ -926,7 +941,7 @@ assert b[1] == 2 def test_tolist_scalar(self): - from numpypy import int32, bool_ + from _numpypy import int32, bool_ x = int32(23) assert x.tolist() == 23 assert type(x.tolist()) is int @@ -934,13 +949,13 @@ assert y.tolist() is True def test_tolist_zerodim(self): - from numpypy import array + from _numpypy import array x = array(3) assert x.tolist() == 3 assert type(x.tolist()) is int def test_tolist_singledim(self): - from numpypy import array + from _numpypy import array a 
= array(range(5)) assert a.tolist() == [0, 1, 2, 3, 4] assert type(a.tolist()[0]) is int @@ -948,41 +963,55 @@ assert b.tolist() == [0.2, 0.4, 0.6] def test_tolist_multidim(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4]]) assert a.tolist() == [[1, 2], [3, 4]] def test_tolist_view(self): - from numpypy import array - a = array([[1,2],[3,4]]) + from _numpypy import array + a = array([[1, 2], [3, 4]]) assert (a + a).tolist() == [[2, 4], [6, 8]] def test_tolist_slice(self): - from numpypy import array + from _numpypy import array a = array([[17.1, 27.2], [40.3, 50.3]]) - assert a[:,0].tolist() == [17.1, 40.3] + assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] + def test_var(self): + from _numpypy import array + a = array(range(10)) + assert a.var() == 8.25 + a = array([5.0]) + assert a.var() == 0.0 + + def test_std(self): + from _numpypy import array + a = array(range(10)) + assert a.std() == 2.8722813232690143 + a = array([5.0]) + assert a.std() == 0.0 + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): - import numpypy - a = numpypy.zeros((2, 2)) + import _numpypy + a = _numpypy.zeros((2, 2)) assert len(a) == 2 def test_shape(self): - import numpypy - assert numpypy.zeros(1).shape == (1,) - assert numpypy.zeros((2, 2)).shape == (2, 2) - assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) - assert numpypy.array([[1], [2], [3]]).shape == (3, 1) - assert len(numpypy.zeros((3, 1, 2))) == 3 - raises(TypeError, len, numpypy.zeros(())) - raises(ValueError, numpypy.array, [[1, 2], 3]) + import _numpypy + assert _numpypy.zeros(1).shape == (1,) + assert _numpypy.zeros((2, 2)).shape == (2, 2) + assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) + assert _numpypy.array([[1], [2], [3]]).shape == (3, 1) + assert len(_numpypy.zeros((3, 1, 2))) == 3 + raises(TypeError, len, _numpypy.zeros(())) + raises(ValueError, _numpypy.array, [[1, 2], 3]) def test_getsetitem(self): - import numpypy - a = 
numpypy.zeros((2, 3, 1)) + import _numpypy + a = _numpypy.zeros((2, 3, 1)) raises(IndexError, a.__getitem__, (2, 0, 0)) raises(IndexError, a.__getitem__, (0, 3, 0)) raises(IndexError, a.__getitem__, (0, 0, 1)) @@ -993,8 +1022,8 @@ assert a[1, -1, 0] == 3 def test_slices(self): - import numpypy - a = numpypy.zeros((4, 3, 2)) + import _numpypy + a = _numpypy.zeros((4, 3, 2)) raises(IndexError, a.__getitem__, (4,)) raises(IndexError, a.__getitem__, (3, 3)) raises(IndexError, a.__getitem__, (slice(None), 3)) @@ -1027,51 +1056,51 @@ assert a[1][2][1] == 15 def test_init_2(self): - import numpypy - raises(ValueError, numpypy.array, [[1], 2]) - raises(ValueError, numpypy.array, [[1, 2], [3]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]]) - a = numpypy.array([[1, 2], [4, 5]]) + import _numpypy + raises(ValueError, _numpypy.array, [[1], 2]) + raises(ValueError, _numpypy.array, [[1, 2], [3]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]]) + a = _numpypy.array([[1, 2], [4, 5]]) assert a[0, 1] == 2 assert a[0][1] == 2 - a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) + a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) assert (a[0, 1] == [3, 4]).all() def test_setitem_slice(self): - import numpypy - a = numpypy.zeros((3, 4)) + import _numpypy + a = _numpypy.zeros((3, 4)) a[1] = [1, 2, 3, 4] assert a[1, 2] == 3 raises(TypeError, a[1].__setitem__, [1, 2, 3]) - a = numpypy.array([[1, 2], [3, 4]]) + a = _numpypy.array([[1, 2], [3, 4]]) assert (a == [[1, 2], [3, 4]]).all() - a[1] = numpypy.array([5, 6]) + a[1] = _numpypy.array([5, 6]) assert (a == [[1, 2], [5, 6]]).all() - a[:, 1] = numpypy.array([8, 10]) + a[:, 1] = _numpypy.array([8, 10]) assert (a == [[1, 8], [5, 10]]).all() - a[0, :: -1] = numpypy.array([11, 12]) + a[0, :: -1] = _numpypy.array([11, 12]) assert (a == [[12, 11], [5, 10]]).all() def test_ufunc(self): - from numpypy 
import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert ((a + a) == \ array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all() def test_getitem_add(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) assert (a + a)[1, 1] == 8 def test_ufunc_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([[1, 2], [3, 4]]) b = negative(a + a) assert (b == [[-2, -4], [-6, -8]]).all() def test_getitem_3(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]) b = a[::2] @@ -1082,37 +1111,37 @@ assert c[1][1] == 12 def test_multidim_ones(self): - from numpypy import ones + from _numpypy import ones a = ones((1, 2, 3)) assert a[0, 1, 2] == 1.0 def test_multidim_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((3, 3)) b = ones((3, 3)) - a[:,1:3] = b[:,1:3] + a[:, 1:3] = b[:, 1:3] assert (a == [[0, 1, 1], [0, 1, 1], [0, 1, 1]]).all() a = zeros((3, 3)) b = ones((3, 3)) - a[:,::2] = b[:,::2] + a[:, ::2] = b[:, ::2] assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all() def test_broadcast_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) b = array([5, 6]) c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]]) assert c.all() def test_broadcast_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((10, 10)) b = ones(10) a[:, :] = b assert a[3, 5] == 1 def test_broadcast_shape_agreement(self): - from numpypy import zeros, array + from _numpypy import zeros, array a = zeros((3, 1, 3)) b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32))) c = ((a + b) == [b, b, b]) @@ -1126,7 +1155,7 @@ assert c.all() def test_broadcast_scalar(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((4, 5), 'd') 
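The broadcasting tests in this hunk all follow NumPy's shape-agreement rule: shapes are aligned on their trailing dimensions, and each aligned pair must either match or contain a 1. A small pure-Python sketch of just the shape computation (illustrative only, not the micronumpy code):

```python
def broadcast_shape(a, b):
    """Result shape of broadcasting shapes a and b, or ValueError."""
    out = []
    for x, y in zip(reversed(a), reversed(b)):
        if x == y or x == 1 or y == 1:
            out.append(max(x, y))  # a 1 stretches to match the other side
        else:
            raise ValueError("operands could not be broadcast together "
                             "with shapes %r %r" % (a, b))
    # Leading dimensions of the longer shape carry over unchanged.
    longer = a if len(a) >= len(b) else b
    out.extend(reversed(longer[:abs(len(a) - len(b))]))
    return tuple(reversed(out))

print(broadcast_shape((3, 1, 3), (3, 3)))    # (3, 3, 3)
try:
    broadcast_shape((4, 3, 2), (4, 2))       # mismatched pair: 3 vs 4
except ValueError as e:
    print(e)
```

The failing case mirrors test_broadcast_wrong_shapes, where (4,3,2) and (4,2) cannot be combined.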
a[:, 1] = 3 assert a[2, 1] == 3 @@ -1137,14 +1166,14 @@ assert a[3, 2] == 0 def test_broadcast_call2(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((4, 1, 5)) b = ones((4, 3, 5)) b[:] = (a + a) assert (b == zeros((4, 3, 5))).all() def test_broadcast_virtualview(self): - from numpypy import arange, zeros + from _numpypy import arange, zeros a = arange(8).reshape([2, 2, 2]) b = (a + a)[1, 1] c = zeros((2, 2, 2)) @@ -1152,13 +1181,13 @@ assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all() def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert a.argmax() == 5 assert a[:2, ].argmax() == 3 def test_broadcast_wrong_shapes(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((4, 3, 2)) b = zeros((4, 2)) exc = raises(ValueError, lambda: a + b) @@ -1166,7 +1195,7 @@ " together with shapes (4,3,2) (4,2)" def test_reduce(self): - from numpypy import array + from _numpypy import array a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) assert a.sum() == (13 * 12) / 2 b = a[1:, 1::2] @@ -1174,7 +1203,7 @@ assert c.sum() == (6 + 8 + 10 + 12) * 2 def test_transpose(self): - from numpypy import array + from _numpypy import array a = array(((range(3), range(3, 6)), (range(6, 9), range(9, 12)), (range(12, 15), range(15, 18)), @@ -1193,7 +1222,7 @@ assert(b[:, 0] == a[0, :]).all() def test_flatiter(self): - from numpypy import array, flatiter + from _numpypy import array, flatiter a = array([[10, 30], [40, 60]]) f_iter = a.flat assert f_iter.next() == 10 @@ -1208,23 +1237,23 @@ assert s == 140 def test_flatiter_array_conv(self): - from numpypy import array, dot + from _numpypy import array, dot a = array([1, 2, 3]) assert dot(a.flat, a.flat) == 14 def test_flatiter_varray(self): - from numpypy import ones + from _numpypy import ones a = ones((2, 2)) assert list(((a + a).flat)) == [2, 2, 2, 2] def test_slice_copy(self): - from numpypy import 
zeros + from _numpypy import zeros a = zeros((10, 10)) b = a[0].copy() assert (b == zeros(10)).all() def test_array_interface(self): - from numpypy import array + from _numpypy import array a = array([1, 2, 3]) i = a.__array_interface__ assert isinstance(i['data'][0], int) @@ -1233,6 +1262,7 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct @@ -1245,7 +1275,7 @@ def test_fromstring(self): import sys - from numpypy import fromstring, array, uint8, float32, int32 + from _numpypy import fromstring, array, uint8, float32, int32 a = fromstring(self.data) for i in range(4): @@ -1275,17 +1305,17 @@ assert g[1] == 2 assert g[2] == 3 h = fromstring("1, , 2, 3", dtype=uint8, sep=",") - assert (h == [1,0,2,3]).all() + assert (h == [1, 0, 2, 3]).all() i = fromstring("1 2 3", dtype=uint8, sep=" ") - assert (i == [1,2,3]).all() + assert (i == [1, 2, 3]).all() j = fromstring("1\t\t\t\t2\t3", dtype=uint8, sep="\t") - assert (j == [1,2,3]).all() + assert (j == [1, 2, 3]).all() k = fromstring("1,x,2,3", dtype=uint8, sep=",") - assert (k == [1,0]).all() + assert (k == [1, 0]).all() l = fromstring("1,x,2,3", dtype='float32', sep=",") - assert (l == [1.0,-1.0]).all() + assert (l == [1.0, -1.0]).all() m = fromstring("1,,2,3", sep=",") - assert (m == [1.0,-1.0,2.0,3.0]).all() + assert (m == [1.0, -1.0, 2.0, 3.0]).all() n = fromstring("3.4 2.0 3.8 2.2", dtype=int32, sep=" ") assert (n == [3]).all() o = fromstring("1.0 2f.0f 3.8 2.2", dtype=float32, sep=" ") @@ -1309,7 +1339,7 @@ assert (u == [1, 0]).all() def test_fromstring_types(self): - from numpypy import (fromstring, int8, int16, int32, int64, uint8, + from _numpypy import (fromstring, int8, int16, int32, int64, uint8, uint16, uint32, float32, float64) a = fromstring('\xFF', dtype=int8) @@ -1333,9 +1363,8 @@ j = fromstring(self.ulongval, dtype='L') assert j[0] == 12 - def test_fromstring_invalid(self): - 
from numpypy import fromstring, uint16, uint8, int32 + from _numpypy import fromstring, uint16, uint8, int32 #default dtype is 64-bit float, so 3 bytes should fail raises(ValueError, fromstring, "\x01\x02\x03") #3 bytes is not modulo 2 bytes (int16) @@ -1346,7 +1375,8 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): - from numpypy import array, zeros + from _numpypy import array, zeros + int_size = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -1354,14 +1384,26 @@ a = zeros(1001) assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" a = array(range(5), long) - assert repr(a) == "array([0, 1, 2, 3, 4])" + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" + a = array(range(5), 'int32') + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" a = array([], long) assert repr(a) == "array([], dtype=int64)" a = array([True, False, True, False], "?") assert repr(a) == "array([True, False, True, False], dtype=bool)" + a = zeros([]) + assert repr(a) == "array(0.0)" + a = array(0.2) + assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import array, zeros + from _numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1374,9 +1416,19 @@ [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]])''' + a = arange(1002).reshape((2, 501)) + assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500], + [501, 502, 503, ..., 999, 1000, 1001]])''' + assert repr(a.T) == '''array([[0, 501], + [1, 502], + [2, 503], + ..., + [498, 999], + [499, 1000], + [500, 1001]])''' def test_repr_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert 
repr(b) == "array([1.0, 3.0])" @@ -1391,7 +1443,7 @@ assert repr(b) == "array([], shape=(0, 5), dtype=int16)" def test_str(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -1417,14 +1469,14 @@ a = zeros((400, 400), dtype=int) assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \ + " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \ " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" a = zeros((2, 2, 2)) r = str(a) assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]' def test_str_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" @@ -1440,7 +1492,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_arange(self): - from numpypy import arange, array, dtype + from _numpypy import arange, array, dtype a = arange(3) assert (a == [0, 1, 2]).all() assert a.dtype is dtype(int) @@ -1462,7 +1514,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_app_reshape(self): - from numpypy import arange, array, dtype, reshape + from _numpypy import arange, array, dtype, reshape a = arange(12) b = reshape(a, (3, 4)) assert b.shape == (3, 4) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpypy import add, ufunc + from _numpypy import add, ufunc assert isinstance(add, ufunc) assert repr(add) == "<ufunc 'add'>" assert repr(ufunc) == "<type 'numpy.ufunc'>" def test_ufunc_attrs(self): - from numpypy import add, multiply, sin + from _numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,7 +22,7 @@ assert sin.nin == 1
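The repr tests earlier in this chunk (zeros(1001), the 1002-element arange) depend on long arrays being summarized with an ellipsis. The per-row behaviour can be sketched in a few lines of plain Python; the threshold and edge-item counts here are assumptions matching the tested output, not the real printing code:

```python
def summarize_row(items, threshold=1000, edgeitems=3):
    """Render a 1-D row the way the reprs above do: rows longer than
    the threshold keep only the first and last `edgeitems` entries."""
    strs = [str(x) for x in items]
    if len(strs) > threshold:
        strs = strs[:edgeitems] + ["..."] + strs[-edgeitems:]
    return "array([%s])" % ", ".join(strs)

print(summarize_row([0.0] * 1001))
# array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])
```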
def test_wrong_arguments(self): - from numpypy import add, sin + from _numpypy import add, sin raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) @@ -30,14 +30,14 @@ raises(ValueError, sin) def test_single_item(self): - from numpypy import negative, sign, minimum + from _numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpypy import array, ndarray, negative, minimum + from _numpypy import array, ndarray, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpypy import array, absolute + from _numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from numpypy import array, add + from _numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpypy import array, divide + from _numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -114,7 +114,7 @@ assert (divide(array([-10]), array([2])) == array([-5])).all() def test_fabs(self): - from numpypy import array, fabs + from _numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -123,7 +123,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpypy import array, minimum + from _numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -132,7 +132,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpypy import array, maximum + from _numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = 
array([ 3.0, -2.0,-3.0]) @@ -145,7 +145,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpypy import array, multiply + from _numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -154,7 +154,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpypy import array, sign, dtype + from _numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -173,7 +173,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpypy import array, reciprocal + from _numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -182,7 +182,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpypy import array, subtract + from _numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -191,7 +191,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpypy import array, floor + from _numpypy import array, floor reference = [-2.0, -1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -200,7 +200,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpypy import array, copysign + from _numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -216,7 +216,7 @@ def test_exp(self): import math - from numpypy import array, exp + from _numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -230,7 +230,7 @@ def test_sin(self): import math - from numpypy import array, sin + from _numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -243,7 +243,7 @@ def test_cos(self): import math - from numpypy import array, cos + from _numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -252,7 +252,7 @@ def test_tan(self): import math - from numpypy import array, tan + 
from _numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -262,7 +262,7 @@ def test_arcsin(self): import math - from numpypy import array, arcsin + from _numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -276,7 +276,7 @@ def test_arccos(self): import math - from numpypy import array, arccos + from _numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -291,7 +291,7 @@ def test_arctan(self): import math - from numpypy import array, arctan + from _numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) @@ -304,7 +304,7 @@ def test_arcsinh(self): import math - from numpypy import arcsinh, inf + from _numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -312,7 +312,7 @@ def test_arctanh(self): import math - from numpypy import arctanh + from _numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == arctanh(v) @@ -323,7 +323,7 @@ def test_sqrt(self): import math - from numpypy import sqrt + from _numpypy import sqrt nan, inf = float("nan"), float("inf") data = [1, 2, 3, inf] @@ -333,13 +333,13 @@ assert math.isnan(sqrt(nan)) def test_reduce_errors(self): - from numpypy import sin, add + from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) raises(TypeError, add.reduce, 1) def test_reduce(self): - from numpypy import add, maximum + from _numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 @@ -348,7 +348,7 @@ def test_comparisons(self): import operator - from numpypy import equal, not_equal, less, less_equal, greater, greater_equal + from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py 
+++ b/pypy/module/pypyjit/__init__.py @@ -7,16 +7,21 @@ interpleveldefs = { 'set_param': 'interp_jit.set_param', 'residual_call': 'interp_jit.residual_call', - 'set_compile_hook': 'interp_jit.set_compile_hook', - 'DebugMergePoint': 'interp_resop.W_DebugMergePoint', + 'set_compile_hook': 'interp_resop.set_compile_hook', + 'set_optimize_hook': 'interp_resop.set_optimize_hook', + 'set_abort_hook': 'interp_resop.set_abort_hook', + 'ResOperation': 'interp_resop.WrappedOp', + 'Box': 'interp_resop.WrappedBox', } def setup_after_space_initialization(self): # force the __extend__ hacks to occur early from pypy.module.pypyjit.interp_jit import pypyjitdriver + from pypy.module.pypyjit.policy import pypy_hooks # add the 'defaults' attribute from pypy.rlib.jit import PARAMETERS space = self.space pypyjitdriver.space = space w_obj = space.wrap(PARAMETERS) space.setattr(space.wrap(self), space.wrap('defaults'), w_obj) + pypy_hooks.space = space diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -13,11 +13,7 @@ from pypy.interpreter.pycode import PyCode, CO_GENERATOR from pypy.interpreter.pyframe import PyFrame from pypy.interpreter.pyopcode import ExitFrame -from pypy.interpreter.gateway import unwrap_spec from opcode import opmap -from pypy.rlib.nonconst import NonConstant -from pypy.jit.metainterp.resoperation import rop -from pypy.module.pypyjit.interp_resop import debug_merge_point_from_boxes PyFrame._virtualizable2_ = ['last_instr', 'pycode', 'valuestackdepth', 'locals_stack_w[*]', @@ -51,72 +47,19 @@ def should_unroll_one_iteration(next_instr, is_being_profiled, bytecode): return (bytecode.co_flags & CO_GENERATOR) != 0 -def wrap_oplist(space, logops, operations): - list_w = [] - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - list_w.append(space.wrap(debug_merge_point_from_boxes( - op.getarglist()))) - else: - 
list_w.append(space.wrap(logops.repr_of_resop(op))) - return list_w - class PyPyJitDriver(JitDriver): reds = ['frame', 'ec'] greens = ['next_instr', 'is_being_profiled', 'pycode'] virtualizables = ['frame'] - def on_compile(self, logger, looptoken, operations, type, next_instr, - is_being_profiled, ll_pycode): - from pypy.rpython.annlowlevel import cast_base_ptr_to_instance - - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - pycode = cast_base_ptr_to_instance(PyCode, ll_pycode) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap(type), - space.newtuple([pycode, - space.wrap(next_instr), - space.wrap(is_being_profiled)]), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap('bridge'), - space.wrap(n), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - pypyjitdriver = PyPyJitDriver(get_printable_location = get_printable_location, get_jitcell_at = get_jitcell_at, set_jitcell_at = set_jitcell_at, confirm_enter_jit = confirm_enter_jit, can_never_inline = can_never_inline, should_unroll_one_iteration = - should_unroll_one_iteration) + should_unroll_one_iteration, + name='pypyjit') class __extend__(PyFrame): @@ -223,34 +166,3 @@ '''For testing. 
Invokes callable(...), but without letting the JIT follow the call.''' return space.call_args(w_callable, __args__) - -class Cache(object): - in_recursion = False - - def __init__(self, space): - self.w_compile_hook = space.w_None - -def set_compile_hook(space, w_hook): - """ set_compile_hook(hook) - - Set a compiling hook that will be called each time a loop is compiled. - The hook will be called with the following signature: - hook(merge_point_type, loop_type, greenkey or guard_number, operations) - - for now merge point type is always `main` - - loop_type can be either `loop` `entry_bridge` or `bridge` - in case loop is not `bridge`, greenkey will be a set of constants - for jit merge point. in case it's `main` it'll be a tuple - (code, offset, is_being_profiled) - - Note that jit hook is not reentrant. It means that if the code - inside the jit hook is itself jitted, it will get compiled, but the - jit hook won't be called for that. - - XXX write down what else - """ - cache = space.fromcache(Cache) - cache.w_compile_hook = w_hook - cache.in_recursion = NonConstant(False) - return space.w_None diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,41 +1,197 @@ -from pypy.interpreter.typedef import TypeDef, interp_attrproperty +from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import unwrap_spec, interp2app +from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode -from pypy.rpython.lltypesystem import lltype -from pypy.rpython.annlowlevel import cast_base_ptr_to_instance +from pypy.interpreter.error import OperationError +from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.annlowlevel import cast_base_ptr_to_instance, hlstr from pypy.rpython.lltypesystem.rclass 
import OBJECT +from pypy.jit.metainterp.resoperation import rop, AbstractResOp +from pypy.rlib.nonconst import NonConstant +from pypy.rlib import jit_hooks -class W_DebugMergePoint(Wrappable): - """ A class representing debug_merge_point JIT operation +class Cache(object): + in_recursion = False + + def __init__(self, space): + self.w_compile_hook = space.w_None + self.w_abort_hook = space.w_None + self.w_optimize_hook = space.w_None + +def wrap_greenkey(space, jitdriver, greenkey, greenkey_repr): + if greenkey is None: + return space.w_None + jitdriver_name = jitdriver.name + if jitdriver_name == 'pypyjit': + next_instr = greenkey[0].getint() + is_being_profiled = greenkey[1].getint() + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + greenkey[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, ll_code) + return space.newtuple([space.wrap(pycode), space.wrap(next_instr), + space.newbool(bool(is_being_profiled))]) + else: + return space.wrap(greenkey_repr) + +def set_compile_hook(space, w_hook): + """ set_compile_hook(hook) + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop`, `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where assembler starts, + can be accessed via ctypes, assembler_length is the length of the compiled + asm + + Note that the jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that.
""" + cache = space.fromcache(Cache) + cache.w_compile_hook = w_hook + cache.in_recursion = NonConstant(False) - def __init__(self, mp_no, offset, pycode): - self.mp_no = mp_no +def set_optimize_hook(space, w_hook): + """ set_optimize_hook(hook) + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows to add additional + optimizations on Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + Result value will be the resulting list of operations, or None + """ + cache = space.fromcache(Cache) + cache.w_optimize_hook = w_hook + cache.in_recursion = NonConstant(False) + +def set_abort_hook(space, w_hook): + """ set_abort_hook(hook) + + Set a hook (callable) that will be called each time there is tracing + aborted due to some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for abort, see documentation for set_compile_hook + for descriptions of other arguments. 
+ """ + cache = space.fromcache(Cache) + cache.w_abort_hook = w_hook + cache.in_recursion = NonConstant(False) + +def wrap_oplist(space, logops, operations, ops_offset=None): + l_w = [] + for op in operations: + if ops_offset is None: + ofs = -1 + else: + ofs = ops_offset.get(op, 0) + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) + return l_w + +class WrappedBox(Wrappable): + """ A class representing a single box + """ + def __init__(self, llbox): + self.llbox = llbox + + def descr_getint(self, space): + return space.wrap(jit_hooks.box_getint(self.llbox)) + + at unwrap_spec(no=int) +def descr_new_box(space, w_tp, no): + return WrappedBox(jit_hooks.boxint_new(no)) + +WrappedBox.typedef = TypeDef( + 'Box', + __new__ = interp2app(descr_new_box), + getint = interp2app(WrappedBox.descr_getint), +) + + at unwrap_spec(num=int, offset=int, repr=str, res=WrappedBox) +def descr_new_resop(space, w_tp, num, w_args, res, offset=-1, + repr=''): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + if res is None: + llres = jit_hooks.emptyval() + else: + llres = res.llbox + return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) + +class WrappedOp(Wrappable): + """ A class representing a single ResOperation, wrapped nicely + """ + def __init__(self, op, offset, repr_of_resop): + self.op = op self.offset = offset - self.pycode = pycode + self.repr_of_resop = repr_of_resop def descr_repr(self, space): - return space.wrap('DebugMergePoint()') + return space.wrap(self.repr_of_resop) - at unwrap_spec(mp_no=int, offset=int, pycode=PyCode) -def new_debug_merge_point(space, w_tp, mp_no, offset, pycode): - return W_DebugMergePoint(mp_no, offset, pycode) + def descr_num(self, space): + return space.wrap(jit_hooks.resop_getopnum(self.op)) -def debug_merge_point_from_boxes(boxes): - mp_no = boxes[0].getint() - offset = boxes[2].getint() - llcode = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), - 
boxes[4].getref_base()) - pycode = cast_base_ptr_to_instance(PyCode, llcode) - assert pycode is not None - return W_DebugMergePoint(mp_no, offset, pycode) + def descr_name(self, space): + return space.wrap(hlstr(jit_hooks.resop_getopname(self.op))) -W_DebugMergePoint.typedef = TypeDef( - 'DebugMergePoint', - __new__ = interp2app(new_debug_merge_point), - __doc__ = W_DebugMergePoint.__doc__, - __repr__ = interp2app(W_DebugMergePoint.descr_repr), - code = interp_attrproperty('pycode', W_DebugMergePoint), + @unwrap_spec(no=int) + def descr_getarg(self, space, no): + return WrappedBox(jit_hooks.resop_getarg(self.op, no)) + + @unwrap_spec(no=int, box=WrappedBox) + def descr_setarg(self, space, no, box): + jit_hooks.resop_setarg(self.op, no, box.llbox) + + def descr_getresult(self, space): + return WrappedBox(jit_hooks.resop_getresult(self.op)) + + def descr_setresult(self, space, w_box): + box = space.interp_w(WrappedBox, w_box) + jit_hooks.resop_setresult(self.op, box.llbox) + +WrappedOp.typedef = TypeDef( + 'ResOperation', + __doc__ = WrappedOp.__doc__, + __new__ = interp2app(descr_new_resop), + __repr__ = interp2app(WrappedOp.descr_repr), + num = GetSetProperty(WrappedOp.descr_num), + name = GetSetProperty(WrappedOp.descr_name), + getarg = interp2app(WrappedOp.descr_getarg), + setarg = interp2app(WrappedOp.descr_setarg), + result = GetSetProperty(WrappedOp.descr_getresult, + WrappedOp.descr_setresult) ) +WrappedOp.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -1,4 +1,112 @@ from pypy.jit.codewriter.policy import JitPolicy +from pypy.rlib.jit import JitHookInterface +from pypy.rlib import jit_hooks +from pypy.interpreter.error import OperationError +from pypy.jit.metainterp.jitprof import counter_names +from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\ + WrappedOp + +class 
PyPyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_abort_hook): + cache.in_recursion = True + try: + try: + space.call_function(cache.w_abort_hook, + space.wrap(jitdriver.name), + wrap_greenkey(space, jitdriver, + greenkey, greenkey_repr), + space.wrap(counter_names[reason])) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_abort_hook) + finally: + cache.in_recursion = False + + def after_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._compile_hook(debug_info, w_greenkey) + + def after_compile_bridge(self, debug_info): + self._compile_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) + + def before_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._optimize_hook(debug_info, w_greenkey) + + def before_compile_bridge(self, debug_info): + self._optimize_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) + + def _compile_hook(self, debug_info, w_arg): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_compile_hook): + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations, + debug_info.asminfo.ops_offset) + cache.in_recursion = True + try: + try: + jd_name = debug_info.get_jitdriver().name + asminfo = debug_info.asminfo + space.call_function(cache.w_compile_hook, + space.wrap(jd_name), + space.wrap(debug_info.type), + w_arg, + space.newlist(list_w), + space.wrap(asminfo.asmaddr), + space.wrap(asminfo.asmlen)) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + 
cache.in_recursion = False + + def _optimize_hook(self, debug_info, w_arg): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_optimize_hook): + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations) + cache.in_recursion = True + try: + try: + jd_name = debug_info.get_jitdriver().name + w_res = space.call_function(cache.w_optimize_hook, + space.wrap(jd_name), + space.wrap(debug_info.type), + w_arg, + space.newlist(list_w)) + if space.is_w(w_res, space.w_None): + return + l = [] + for w_item in space.listview(w_res): + item = space.interp_w(WrappedOp, w_item) + l.append(jit_hooks._cast_to_resop(item.op)) + del debug_info.operations[:] # modifying operations above is + # probably not a great idea since types may not work + # and we'll end up with half-working list and + # a segfault/fatal RPython error + for elem in l: + debug_info.operations.append(elem) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + cache.in_recursion = False + +pypy_hooks = PyPyJitIface() class PyPyJitPolicy(JitPolicy): @@ -12,12 +120,16 @@ modname == 'thread.os_thread'): return True if '.' 
in modname: - modname, _ = modname.split('.', 1) + modname, rest = modname.split('.', 1) + else: + rest = '' if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions', 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', 'mmap', 'marshal']: + if modname == 'pypyjit' and 'interp_resop' in rest: + return False return True return False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -1,22 +1,40 @@ import py from pypy.conftest import gettestobjspace, option +from pypy.interpreter.gateway import interp2app from pypy.interpreter.pycode import PyCode -from pypy.interpreter.gateway import interp2app -from pypy.jit.metainterp.history import JitCellToken -from pypy.jit.metainterp.resoperation import ResOperation, rop +from pypy.jit.metainterp.history import JitCellToken, ConstInt, ConstPtr +from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.logger import Logger from pypy.rpython.annlowlevel import (cast_instance_to_base_ptr, cast_base_ptr_to_instance) from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.lltypesystem.rclass import OBJECT from pypy.module.pypyjit.interp_jit import pypyjitdriver +from pypy.module.pypyjit.policy import pypy_hooks from pypy.jit.tool.oparser import parse from pypy.jit.metainterp.typesystem import llhelper +from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG +from pypy.rlib.jit import JitDebugInfo, AsmInfo + +class MockJitDriverSD(object): + class warmstate(object): + @staticmethod + def get_location_str(boxes): + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + boxes[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, ll_code) + return pycode.co_name + + jitdriver = pypyjitdriver + class MockSD(object): class cpu(object): 
ts = llhelper + jitdrivers_sd = [MockJitDriverSD] + class AppTestJitHook(object): def setup_class(cls): if option.runappdirect: @@ -24,9 +42,9 @@ space = gettestobjspace(usemodules=('pypyjit',)) cls.space = space w_f = space.appexec([], """(): - def f(): + def function(): pass - return f + return function """) cls.w_f = w_f ll_code = cast_instance_to_base_ptr(w_f.code) @@ -34,41 +52,73 @@ logger = Logger(MockSD()) oplist = parse(""" - [i1, i2] + [i1, i2, p2] i3 = int_add(i1, i2) debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0)) + guard_nonnull(p2) [] guard_true(i3) [] """, namespace={'ptr0': code_gcref}).operations + greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] + offset = {} + for i, op in enumerate(oplist): + if i != 1: + offset[op] = i + + di_loop = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop_optimize = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop.asminfo = AsmInfo(offset, 0, 0) + di_bridge = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'bridge', fail_descr_no=0) + di_bridge.asminfo = AsmInfo(offset, 0, 0) def interp_on_compile(): - pypyjitdriver.on_compile(logger, JitCellToken(), oplist, 'loop', - 0, False, ll_code) + di_loop.oplist = cls.oplist + pypy_hooks.after_compile(di_loop) def interp_on_compile_bridge(): - pypyjitdriver.on_compile_bridge(logger, JitCellToken(), oplist, 0) + pypy_hooks.after_compile_bridge(di_bridge) + + def interp_on_optimize(): + di_loop_optimize.oplist = cls.oplist + pypy_hooks.before_compile(di_loop_optimize) + + def interp_on_abort(): + pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey, + 'blah') cls.w_on_compile = space.wrap(interp2app(interp_on_compile)) cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) + cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) + cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) + 
cls.orig_oplist = oplist + + def setup_method(self, meth): + self.__class__.oplist = self.orig_oplist[:] def test_on_compile(self): import pypyjit all = [] - def hook(*args): - assert args[0] == 'main' - assert args[1] in ['loop', 'bridge'] - all.append(args[2:]) + def hook(name, looptype, tuple_or_guard_no, ops, asmstart, asmlen): + all.append((name, looptype, tuple_or_guard_no, ops)) self.on_compile() pypyjit.set_compile_hook(hook) assert not all self.on_compile() assert len(all) == 1 - assert all[0][0][0].co_name == 'f' - assert all[0][0][1] == 0 - assert all[0][0][2] == False - assert len(all[0][1]) == 3 - assert 'int_add' in all[0][1][0] + elem = all[0] + assert elem[0] == 'pypyjit' + assert elem[2][0].co_name == 'function' + assert elem[2][1] == 0 + assert elem[2][2] == False + assert len(elem[3]) == 4 + int_add = elem[3][0] + #assert int_add.name == 'int_add' + assert int_add.num == self.int_add_num self.on_compile_bridge() assert len(all) == 2 pypyjit.set_compile_hook(None) @@ -116,11 +166,48 @@ pypyjit.set_compile_hook(hook) self.on_compile() - dmp = l[0][3][1] - assert isinstance(dmp, pypyjit.DebugMergePoint) - assert dmp.code is self.f.func_code + op = l[0][3][1] + assert isinstance(op, pypyjit.ResOperation) + assert 'function' in repr(op) + + def test_on_abort(self): + import pypyjit + l = [] + + def hook(jitdriver_name, greenkey, reason): + l.append((jitdriver_name, reason)) + + pypyjit.set_abort_hook(hook) + self.on_abort() + assert l == [('pypyjit', 'ABORT_TOO_LONG')] + + def test_on_optimize(self): + import pypyjit + l = [] + + def hook(name, looptype, tuple_or_guard_no, ops, *args): + l.append(ops) + + def optimize_hook(name, looptype, tuple_or_guard_no, ops): + return [] + + pypyjit.set_compile_hook(hook) + pypyjit.set_optimize_hook(optimize_hook) + self.on_optimize() + self.on_compile() + assert l == [[]] def test_creation(self): - import pypyjit - dmp = pypyjit.DebugMergePoint(0, 0, self.f.func_code) - assert dmp.code is self.f.func_code + from 
pypyjit import Box, ResOperation + + op = ResOperation(self.int_add_num, [Box(1), Box(3)], Box(4)) + assert op.num == self.int_add_num + assert op.name == 'int_add' + box = op.getarg(0) + assert box.getint() == 1 + box2 = op.result + assert box2.getint() == 4 + op.setarg(0, box2) + assert op.getarg(0).getint() == 4 + op.result = box + assert op.result.getint() == 1 diff --git a/pypy/module/pypyjit/test/test_policy.py b/pypy/module/pypyjit/test/test_policy.py --- a/pypy/module/pypyjit/test/test_policy.py +++ b/pypy/module/pypyjit/test/test_policy.py @@ -52,6 +52,7 @@ for modname in 'pypyjit', 'signal', 'micronumpy', 'math', 'imp': assert pypypolicy.look_inside_pypy_module(modname) assert pypypolicy.look_inside_pypy_module(modname + '.foo') + assert not pypypolicy.look_inside_pypy_module('pypyjit.interp_resop') def test_see_jit_module(): assert pypypolicy.look_inside_pypy_module('pypyjit.interp_jit') diff --git a/pypy/module/pypyjit/test/test_ztranslation.py b/pypy/module/pypyjit/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/pypyjit/test/test_ztranslation.py @@ -0,0 +1,5 @@ + +from pypy.objspace.fake.checkmodule import checkmodule + +def test_pypyjit_translates(): + checkmodule('pypyjit') diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -8,10 +8,12 @@ from pypy.tool import logparser from pypy.jit.tool.jitoutput import parse_prof from pypy.module.pypyjit.test_pypy_c.model import (Log, find_ids_range, - find_ids, TraceWithIds, + find_ids, OpMatcher, InvalidMatch) class BaseTestPyPyC(object): + log_string = 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary' + def setup_class(cls): if '__pypy__' not in sys.builtin_module_names: py.test.skip("must run this test with pypy") @@ -52,8 +54,7 @@ cmdline += ['--jit', ','.join(jitcmdline)] 
cmdline.append(str(self.filepath)) # - print cmdline, logfile - env={'PYPYLOG': 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary:' + str(logfile)} + env={'PYPYLOG': self.log_string + ':' + str(logfile)} pipe = subprocess.Popen(cmdline, env=env, stdout=subprocess.PIPE, diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py --- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py +++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py @@ -98,7 +98,8 @@ end = time.time() return end - start # - log = self.run(main, [get_libc_name(), 200], threshold=150) + log = self.run(main, [get_libc_name(), 200], threshold=150, + import_site=True) assert 1 <= log.result <= 1.5 # at most 0.5 seconds of overhead loops = log.loops_by_id('sleep') assert len(loops) == 1 # make sure that we actually JITted the loop @@ -121,7 +122,7 @@ return fabs._ptr.getaddr(), x libm_name = get_libm_name(sys.platform) - log = self.run(main, [libm_name]) + log = self.run(main, [libm_name], import_site=True) fabs_addr, res = log.result assert res == -4.0 loop, = log.loops_by_filename(self.filepath) diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -15,7 +15,7 @@ i += letters[i % len(letters)] == uletters[i % len(letters)] return i - log = self.run(main, [300]) + log = self.run(main, [300], import_site=True) assert log.result == 300 loop, = log.loops_by_filename(self.filepath) assert loop.match(""" @@ -55,7 +55,7 @@ i += int(long(string.digits[i % len(string.digits)], 16)) return i - log = self.run(main, [1100]) + log = self.run(main, [1100], import_site=True) assert log.result == main(1100) loop, = log.loops_by_filename(self.filepath) assert loop.match(""" diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py --- a/pypy/module/sys/__init__.py +++ 
b/pypy/module/sys/__init__.py @@ -42,7 +42,7 @@ 'argv' : 'state.get(space).w_argv', 'py3kwarning' : 'space.w_False', 'warnoptions' : 'state.get(space).w_warnoptions', - 'builtin_module_names' : 'state.w_None', + 'builtin_module_names' : 'space.w_None', 'pypy_getudir' : 'state.pypy_getudir', # not translated 'pypy_initial_path' : 'state.pypy_initial_path', diff --git a/pypy/module/sys/app.py b/pypy/module/sys/app.py --- a/pypy/module/sys/app.py +++ b/pypy/module/sys/app.py @@ -66,11 +66,11 @@ return None copyright_str = """ -Copyright 2003-2011 PyPy development team. +Copyright 2003-2012 PyPy development team. All Rights Reserved. For further information, see -Portions Copyright (c) 2001-2008 Python Software Foundation. +Portions Copyright (c) 2001-2012 Python Software Foundation. All Rights Reserved. Portions Copyright (c) 2000 BeOpen.com. diff --git a/pypy/objspace/fake/checkmodule.py b/pypy/objspace/fake/checkmodule.py --- a/pypy/objspace/fake/checkmodule.py +++ b/pypy/objspace/fake/checkmodule.py @@ -1,8 +1,10 @@ from pypy.objspace.fake.objspace import FakeObjSpace, W_Root +from pypy.config.pypyoption import get_pypy_config def checkmodule(modname): - space = FakeObjSpace() + config = get_pypy_config(translating=True) + space = FakeObjSpace(config) mod = __import__('pypy.module.%s' % modname, None, None, ['__doc__']) # force computation and record what we wrap module = mod.Module(space, W_Root()) diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -93,9 +93,9 @@ class FakeObjSpace(ObjSpace): - def __init__(self): + def __init__(self, config=None): self._seen_extras = [] - ObjSpace.__init__(self) + ObjSpace.__init__(self, config=config) def float_w(self, w_obj): is_root(w_obj) @@ -135,6 +135,9 @@ def newfloat(self, x): return w_some_obj() + def newcomplex(self, x, y): + return w_some_obj() + def marshal_w(self, w_obj): "NOT_RPYTHON" raise NotImplementedError @@ 
-215,6 +218,10 @@ expected_length = 3 return [w_some_obj()] * expected_length + def unpackcomplex(self, w_complex): + is_root(w_complex) + return 1.1, 2.2 + def allocate_instance(self, cls, w_subtype): is_root(w_subtype) return instantiate(cls) @@ -232,6 +239,11 @@ def exec_(self, *args, **kwds): pass + def createexecutioncontext(self): + ec = ObjSpace.createexecutioncontext(self) + ec._py_repr = None + return ec + # ---------- def translates(self, func=None, argtypes=None, **kwds): @@ -267,18 +279,21 @@ ObjSpace.ExceptionTable + ['int', 'str', 'float', 'long', 'tuple', 'list', 'dict', 'unicode', 'complex', 'slice', 'bool', - 'type', 'basestring']): + 'type', 'basestring', 'object']): setattr(FakeObjSpace, 'w_' + name, w_some_obj()) # for (name, _, arity, _) in ObjSpace.MethodTable: args = ['w_%d' % i for i in range(arity)] + params = args[:] d = {'is_root': is_root, 'w_some_obj': w_some_obj} + if name in ('get',): + params[-1] += '=None' exec compile2("""\ def meth(self, %s): %s return w_some_obj() - """ % (', '.join(args), + """ % (', '.join(params), '; '.join(['is_root(%s)' % arg for arg in args]))) in d meth = func_with_new_name(d['meth'], name) setattr(FakeObjSpace, name, meth) @@ -301,9 +316,12 @@ pass FakeObjSpace.default_compiler = FakeCompiler() -class FakeModule(object): +class FakeModule(Wrappable): + def __init__(self): + self.w_dict = w_some_obj() def get(self, name): name + "xx" # check that it's a string return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/fake/test/test_objspace.py b/pypy/objspace/fake/test/test_objspace.py --- a/pypy/objspace/fake/test/test_objspace.py +++ b/pypy/objspace/fake/test/test_objspace.py @@ -40,7 +40,7 @@ def test_constants(self): space = self.space space.translates(lambda: (space.w_None, space.w_True, space.w_False, - space.w_int, space.w_str, + space.w_int, space.w_str, space.w_object, space.w_TypeError)) def 
test_wrap(self): @@ -72,3 +72,9 @@ def test_newlist(self): self.space.newlist([W_Root(), W_Root()]) + + def test_default_values(self): + # the __get__ method takes either 2 or 3 arguments + space = self.space + space.translates(lambda: (space.get(W_Root(), W_Root()), + space.get(W_Root(), W_Root(), W_Root()))) diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -360,12 +363,36 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. 
+ assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -6,7 +6,6 @@ from pypy.rlib.objectmodel import CDefinedIntSymbolic, keepalive_until_here, specialize from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.extregistry import ExtRegistryEntry -from pypy.tool.sourcetools import func_with_new_name DEBUG_ELIDABLE_FUNCTIONS = False @@ -386,6 +385,18 @@ class JitHintError(Exception): """Inconsistency in the JIT hints.""" +PARAMETER_DOCS = { + 'threshold': 'number of times a loop has to run for it to become hot', + 'function_threshold': 'number of times a function must run for it to become traced from start', + 'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge', + 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TOO_LONG', + 'inlining': 'inline python functions or not (1/0)', + 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate', + 'retrace_limit': 'how many times we can try retracing before giving up', + 'max_retrace_guards': 'number of extra guards a retrace can cause', + 'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY' + } + PARAMETERS = {'threshold': 1039, # just above 1024, prime 'function_threshold': 1619, # slightly more than one above, also prime 'trace_eagerness': 200, @@ -410,13 +421,16 @@ active = True # if set to False, this JitDriver is ignored virtualizables = [] + name = 'jitdriver' def 
__init__(self, greens=None, reds=None, virtualizables=None, get_jitcell_at=None, set_jitcell_at=None, get_printable_location=None, confirm_enter_jit=None, - can_never_inline=None, should_unroll_one_iteration=None): + can_never_inline=None, should_unroll_one_iteration=None, + name='jitdriver'): if greens is not None: self.greens = greens + self.name = name if reds is not None: self.reds = reds if not hasattr(self, 'greens') or not hasattr(self, 'reds'): @@ -450,23 +464,6 @@ # special-cased by ExtRegistryEntry pass - def on_compile(self, logger, looptoken, operations, type, *greenargs): - """ A hook called when loop is compiled. Overwrite - for your own jitdriver if you want to do something special, like - call applevel code - """ - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - """ A hook called when a bridge is compiled. Overwrite - for your own jitdriver if you want to do something special - """ - - # note: if you overwrite this functions with the above signature it'll - # work, but the *greenargs is different for each jitdriver, so we - # can't share the same methods - del on_compile - del on_compile_bridge - def _make_extregistryentries(self): # workaround: we cannot declare ExtRegistryEntries for functions # used as methods of a frozen object, but we can attach the @@ -628,7 +625,6 @@ def specialize_call(self, hop, **kwds_i): # XXX to be complete, this could also check that the concretetype # of the variables are the same for each of the calls. - from pypy.rpython.error import TyperError from pypy.rpython.lltypesystem import lltype driver = self.instance.im_self greens_v = [] @@ -741,6 +737,105 @@ return hop.genop('jit_marker', vlist, resulttype=lltype.Void) +class AsmInfo(object): + """ An addition to JitDebugInfo concerning assembler. 
Attributes: + + ops_offset - dict of offsets of operations or None + asmaddr - (int) raw address of assembler block + asmlen - assembler block length + """ + def __init__(self, ops_offset, asmaddr, asmlen): + self.ops_offset = ops_offset + self.asmaddr = asmaddr + self.asmlen = asmlen + +class JitDebugInfo(object): + """ An object representing debug info. Attributes meanings: + + greenkey - a list of green boxes or None for bridge + logger - an instance of jit.metainterp.logger.LogOperations + type - either 'loop', 'entry bridge' or 'bridge' + looptoken - description of a loop + fail_descr_no - number of failing descr for bridges, -1 otherwise + asminfo - extra assembler information + """ + + asminfo = None + def __init__(self, jitdriver_sd, logger, looptoken, operations, type, + greenkey=None, fail_descr_no=-1): + self.jitdriver_sd = jitdriver_sd + self.logger = logger + self.looptoken = looptoken + self.operations = operations + self.type = type + if type == 'bridge': + assert fail_descr_no != -1 + else: + assert greenkey is not None + self.greenkey = greenkey + self.fail_descr_no = fail_descr_no + + def get_jitdriver(self): + """ Return where the jitdriver on which the jitting started + """ + return self.jitdriver_sd.jitdriver + + def get_greenkey_repr(self): + """ Return the string repr of a greenkey + """ + return self.jitdriver_sd.warmstate.get_location_str(self.greenkey) + +class JitHookInterface(object): + """ This is the main connector between the JIT and the interpreter. + Several methods on this class will be invoked at various stages + of JIT running like JIT loops compiled, aborts etc. + An instance of this class will be available as policy.jithookiface. 
+ """ + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + """ A hook called each time a loop is aborted with jitdriver and + greenkey where it started, reason is a string why it got aborted + """ + + #def before_optimize(self, debug_info): + # """ A hook called before optimizer is run, called with instance of + # JitDebugInfo. Overwrite for custom behavior + # """ + # DISABLED + + def before_compile(self, debug_info): + """ A hook called after a loop is optimized, before compiling assembler, + called with JitDebugInfo instance. Overwrite for custom behavior + """ + + def after_compile(self, debug_info): + """ A hook called after a loop has compiled assembler, + called with JitDebugInfo instance. Overwrite for custom behavior + """ + + #def before_optimize_bridge(self, debug_info): + # operations, fail_descr_no): + # """ A hook called before a bridge is optimized. + # Called with JitDebugInfo instance, overwrite for + # custom behavior + # """ + # DISABLED + + def before_compile_bridge(self, debug_info): + """ A hook called before a bridge is compiled, but after optimizations + are performed. Called with instance of debug_info, overwrite for + custom behavior + """ + + def after_compile_bridge(self, debug_info): + """ A hook called after a bridge is compiled, called with JitDebugInfo + instance, overwrite for custom behavior + """ + + def get_stats(self): + """ Returns various statistics + """ + raise NotImplementedError + def record_known_class(value, cls): """ Assure the JIT that value is an instance of cls. 
This is not a precise @@ -748,7 +843,6 @@ """ assert isinstance(value, cls) - class Entry(ExtRegistryEntry): _about_ = record_known_class @@ -759,7 +853,8 @@ assert isinstance(s_inst, annmodel.SomeInstance) def specialize_call(self, hop): - from pypy.rpython.lltypesystem import lltype, rclass + from pypy.rpython.lltypesystem import rclass, lltype + classrepr = rclass.get_type_repr(hop.rtyper) hop.exception_cannot_occur() diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/jit_hooks.py @@ -0,0 +1,106 @@ + +from pypy.rpython.extregistry import ExtRegistryEntry +from pypy.annotation import model as annmodel +from pypy.rpython.lltypesystem import llmemory, lltype +from pypy.rpython.lltypesystem import rclass +from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\ + cast_base_ptr_to_instance, llstr, hlstr +from pypy.rlib.objectmodel import specialize + +def register_helper(s_result): + def wrapper(helper): + class Entry(ExtRegistryEntry): + _about_ = helper + + def compute_result_annotation(self, *args): + return s_result + + def specialize_call(self, hop): + from pypy.rpython.lltypesystem import lltype + + c_func = hop.inputconst(lltype.Void, helper) + c_name = hop.inputconst(lltype.Void, 'access_helper') + args_v = [hop.inputarg(arg, arg=i) + for i, arg in enumerate(hop.args_r)] + return hop.genop('jit_marker', [c_name, c_func] + args_v, + resulttype=hop.r_result) + return helper + return wrapper + +def _cast_to_box(llref): + from pypy.jit.metainterp.history import AbstractValue + + ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref) + return cast_base_ptr_to_instance(AbstractValue, ptr) + +def _cast_to_resop(llref): + from pypy.jit.metainterp.resoperation import AbstractResOp + + ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref) + return cast_base_ptr_to_instance(AbstractResOp, ptr) + +@specialize.argtype(0) +def _cast_to_gcref(obj): + return lltype.cast_opaque_ptr(llmemory.GCREF, 
cast_instance_to_base_ptr(obj)) + +def emptyval(): + return lltype.nullptr(llmemory.GCREF.TO) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_new(no, llargs, llres): + from pypy.jit.metainterp.history import ResOperation + + args = [_cast_to_box(llargs[i]) for i in range(len(llargs))] + res = _cast_to_box(llres) + return _cast_to_gcref(ResOperation(no, args, res)) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def boxint_new(no): + from pypy.jit.metainterp.history import BoxInt + return _cast_to_gcref(BoxInt(no)) + +@register_helper(annmodel.SomeInteger()) +def resop_getopnum(llop): + return _cast_to_resop(llop).getopnum() + +@register_helper(annmodel.SomeString(can_be_None=True)) +def resop_getopname(llop): + return llstr(_cast_to_resop(llop).getopname()) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_getarg(llop, no): + return _cast_to_gcref(_cast_to_resop(llop).getarg(no)) + +@register_helper(annmodel.s_None) +def resop_setarg(llop, no, llbox): + _cast_to_resop(llop).setarg(no, _cast_to_box(llbox)) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_getresult(llop): + return _cast_to_gcref(_cast_to_resop(llop).result) + +@register_helper(annmodel.s_None) +def resop_setresult(llop, llbox): + _cast_to_resop(llop).result = _cast_to_box(llbox) + +@register_helper(annmodel.SomeInteger()) +def box_getint(llbox): + return _cast_to_box(llbox).getint() + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_clone(llbox): + return _cast_to_gcref(_cast_to_box(llbox).clonebox()) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_constbox(llbox): + return _cast_to_gcref(_cast_to_box(llbox).constbox()) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_nonconstbox(llbox): + return _cast_to_gcref(_cast_to_box(llbox).nonconstbox()) + +@register_helper(annmodel.SomeBool()) +def box_isconst(llbox): + from pypy.jit.metainterp.history import Const + return 
isinstance(_cast_to_box(llbox), Const) diff --git a/pypy/rlib/rsre/rsre_jit.py b/pypy/rlib/rsre/rsre_jit.py --- a/pypy/rlib/rsre/rsre_jit.py +++ b/pypy/rlib/rsre/rsre_jit.py @@ -5,7 +5,7 @@ active = True def __init__(self, name, debugprint, **kwds): - JitDriver.__init__(self, **kwds) + JitDriver.__init__(self, name='rsre_' + name, **kwds) # def get_printable_location(*args): # we print based on indices in 'args'. We first print diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -185,7 +185,10 @@ return self.code.map[self.bytecode_no] def getlineno(self): - return self.getopcode().lineno + code = self.getopcode() + if code is None: + return None + return code.lineno lineno = property(getlineno) def getline_starts_here(self): diff --git a/pypy/tool/jitlogparser/storage.py b/pypy/tool/jitlogparser/storage.py --- a/pypy/tool/jitlogparser/storage.py +++ b/pypy/tool/jitlogparser/storage.py @@ -6,7 +6,6 @@ import py import os from lib_pypy.disassembler import dis -from pypy.tool.jitlogparser.parser import Function from pypy.tool.jitlogparser.module_finder import gather_all_code_objs class LoopStorage(object): diff --git a/pypy/tool/release_dates.py b/pypy/tool/release_dates.py deleted file mode 100644 --- a/pypy/tool/release_dates.py +++ /dev/null @@ -1,14 +0,0 @@ -import py - -release_URL = 'http://codespeak.net/svn/pypy/release/' -releases = [r[:-2] for r in py.std.os.popen('svn list ' + release_URL).readlines() if 'x' not in r] - -f = file('release_dates.txt', 'w') -print >> f, 'date, release' -for release in releases: - for s in py.std.os.popen('svn info ' + release_URL + release).readlines(): - if s.startswith('Last Changed Date'): - date = s.split()[3] - print >> f, date, ',', release - break -f.close() diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c --- a/pypy/translator/c/src/profiling.c +++ 
b/pypy/translator/c/src/profiling.c @@ -29,6 +29,35 @@ profiling_setup = 0; } } + +#elif defined(_WIN32) +#include <windows.h> + +DWORD_PTR base_affinity_mask; +int profiling_setup = 0; + +void pypy_setup_profiling() { + if (!profiling_setup) { + DWORD_PTR affinity_mask, system_affinity_mask; + GetProcessAffinityMask(GetCurrentProcess(), + &base_affinity_mask, &system_affinity_mask); + affinity_mask = 1; + /* Pick one cpu allowed by the system */ + if (system_affinity_mask) + while ((affinity_mask & system_affinity_mask) == 0) + affinity_mask <<= 1; + SetProcessAffinityMask(GetCurrentProcess(), affinity_mask); + profiling_setup = 1; + } +} + +void pypy_teardown_profiling() { + if (profiling_setup) { + SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask); + profiling_setup = 0; + } +} + #else void pypy_setup_profiling() { } void pypy_teardown_profiling() { } diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,8 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %slow-level JIT parameter (default %s)' % ( - key, ' '*(18-len(key)), value) + print ' --jit %s=N %s%s (default %s)' % ( + key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) print ' --jit off turn off the JIT' def print_version(*args): diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -226,8 +226,8 @@ return self.get_entry_point(config) def jitpolicy(self, driver): - from pypy.module.pypyjit.policy import PyPyJitPolicy - return PyPyJitPolicy() + from pypy.module.pypyjit.policy import PyPyJitPolicy, pypy_hooks + return PyPyJitPolicy(pypy_hooks) def get_entry_point(self, config): from pypy.tool.lib_pypy import import_from_lib_pypy From noreply at buildbot.pypy.org Thu Jan 12 14:19:34 
2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 12 Jan 2012 14:19:34 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: in-progress. Now I understand what's exactly missing from making it nicely Message-ID: <20120112131934.1224082C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51260:d15dcd1e48b1 Date: 2012-01-12 15:19 +0200 http://bitbucket.org/pypy/pypy/changeset/d15dcd1e48b1/ Log: in-progress. Now I understand what's exactly missing from making it nicely work with lazy evaluation. breaks tests diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -22,9 +22,6 @@ def done(self): raise NotImplementedError - def axis_done(self): - raise NotImplementedError - class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 @@ -106,74 +103,56 @@ def next(self, shapelen): return self -def axis_iter_from_arr(arr, dim=-1): - # The assert is needed for zjit tests - from pypy.module.micronumpy.interp_numarray import ConcreteArray - assert isinstance(arr, ConcreteArray) - return AxisIterator(arr.start, arr.strides, arr.backstrides, arr.shape, - dim) - class AxisIterator(BaseIterator): """ Accept an addition argument dim Redorder the dimensions to iterate over dim most often. Set a flag at the end of each run over dim. 
""" - def __init__(self, arr_start, strides, backstrides, shape, dim): + def __init__(self, dim, shape, strides, backstrides): self.shape = shape - self.shapelen = len(shape) self.indices = [0] * len(shape) self._done = False - self._axis_done = False - self.offset = arr_start + self.axis_done = False + self.offset = -1 self.dim = dim - self.dim_order = [] - if self.dim >= 0: - self.dim_order.append(self.dim) - for i in range(self.shapelen - 1, -1, -1): - if i == self.dim: - continue - self.dim_order.append(i) - self.strides = strides - self.backstrides = backstrides + self.strides = strides[:dim] + [0] + strides[dim:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] + self.dim_order = [dim] + for i in range(len(shape) - 1, -1, -1): + if i != self.dim: + self.dim_order.append(i) def done(self): return self._done - def axis_done(self): - return self._axis_done - @jit.unroll_safe def next(self, shapelen): - #shapelen will always be one less than self.shapelen offset = self.offset - _axis_done = False done = False - #indices = [0] * self.shapelen - #for i in range(self.shapelen): - # indices[i] = self.indices[i] - indices = self.indices + indices = [0] * shapelen + for i in range(shapelen): + indices[i] = self.indices[i] + axis_done = False for i in self.dim_order: if indices[i] < self.shape[i] - 1: indices[i] += 1 - offset += self.strides[i] break else: if i == self.dim: - _axis_done = True + axis_done = True + offset += 1 indices[i] = 0 - offset -= self.backstrides[i] else: done = True res = instantiate(AxisIterator) - res._axis_done = _axis_done + res.axis_done = axis_done + res.strides = self.strides + res.backstrides = self.backstrides res.offset = offset res.indices = indices - res.strides = self.strides + res.shape = self.shape + res.dim = self.dim res.dim_order = self.dim_order - res.backstrides = self.backstrides - res.shape = self.shape - res.shapelen = self.shapelen - res.dim = self.dim res._done = done return res diff --git 
a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -9,7 +9,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - view_iter_from_arr, SkipLastAxisIterator + view_iter_from_arr, SkipLastAxisIterator, AxisIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -36,13 +36,6 @@ get_printable_location=signature.new_printable_location('slice'), ) -axisreduce_driver = jit.JitDriver( - greens=['shapelen', 'sig'], - virtualizables=['frame'], - reds=['identity', 'self','result', 'ri', 'frame', 'dtype', 'value'], - get_printable_location=signature.new_printable_location('axisreduce'), -) - def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] @@ -695,7 +688,6 @@ # to allow garbage-collecting them raise NotImplementedError - @jit.unroll_safe def compute(self): result = W_NDimArray(self.size, self.shape, self.find_dtype()) shapelen = len(self.shape) @@ -760,74 +752,6 @@ self.child = None -class Reduce(VirtualArray): - def __init__(self, binfunc, name, dim, res_dtype, values, identity=None): - shape = values.shape[0:dim] + values.shape[dim + 1:len(values.shape)] - VirtualArray.__init__(self, name, shape, res_dtype) - self.values = values - self.size = 1 - for s in shape: - self.size *= s - self.binfunc = binfunc - self.dtype = res_dtype - self.dim = dim - self.identity = identity - - def _del_sources(self): - self.values = None - - def create_sig(self, res_shape): - if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) - return signature.AxisReduceSignature(self.binfunc, self.name, self.dtype, - signature.ViewSignature(self.dtype), - self.values.create_sig(res_shape)) - - def get_identity(self, sig, frame, shapelen): - #XXX does this allocate? 
Yes :( - #XXX is this inlinable? Yes :) - if self.identity is None: - value = sig.eval(frame, self.values).convert_to(self.dtype) - frame.next(shapelen) - else: - value = self.identity.convert_to(self.dtype) - return value - - @jit.unroll_safe - def compute(self): - dtype = self.dtype - result = W_NDimArray(self.size, self.shape, dtype) - self.values = self.values.get_concrete() - shapelen = len(result.shape) - identity = self.identity - sig = self.find_sig(res_shape=result.shape, arr=self.values) - ri = ArrayIterator(result.size) - frame = sig.create_frame(self.values, dim=self.dim) - value = self.get_identity(sig, frame, shapelen) - assert isinstance(sig, signature.AxisReduceSignature) - while not frame.done(): - axisreduce_driver.jit_merge_point(frame=frame, self=self, - value=value, sig=sig, - shapelen=shapelen, ri=ri, - dtype=dtype, - identity=identity, - result=result) - if frame.axis_done(): - result.dtype.setitem(result.storage, ri.offset, value) - if identity is None: - value = sig.eval(frame, self.values).convert_to(dtype) - frame.next(shapelen) - else: - value = identity.convert_to(dtype) - ri = ri.next(shapelen) - value = self.binfunc(dtype, value, - sig.eval(frame, self.values).convert_to(dtype)) - frame.next(shapelen) - assert ri.done - result.dtype.setitem(result.storage, ri.offset, value) - return result - - class Call1(VirtualArray): def __init__(self, ufunc, name, shape, res_dtype, values): VirtualArray.__init__(self, name, shape, res_dtype) @@ -869,6 +793,16 @@ self.left.create_sig(res_shape), self.right.create_sig(res_shape)) +class AxisReduce(Call2): + """ NOTE: this is only used as a container, you should never + encounter such things in the wild. 
Remove this comment + when we'll make AxisReduce lazy + """ + def __init__(self, ufunc, name, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim + class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not """ diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -3,8 +3,8 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty from pypy.module.micronumpy import interp_boxes, interp_dtype -from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature,\ - find_sig, new_printable_location +from pypy.module.micronumpy.signature import ReduceSignature,\ + find_sig, new_printable_location, AxisReduceSignature, ScalarSignature from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name @@ -16,6 +16,14 @@ get_printable_location=new_printable_location('reduce'), ) +axisreduce_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['self','arr', 'frame', 'shapelen'], +# name='axisreduce', + get_printable_location=new_printable_location('axisreduce'), +) + class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] @@ -129,14 +137,13 @@ raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: - from pypy.module.micronumpy.interp_numarray import Reduce - res = Reduce(self.func, self.name, dim, dtype, obj, self.identity) - obj.add_invalidates(res) + res = self.do_axis_reduce(obj, dtype, dim) return space.wrap(res) + scalarsig = ScalarSignature(dtype) sig = find_sig(ReduceSignature(self.func, self.name, dtype, - ScalarSignature(dtype), + 
scalarsig, obj.create_sig(obj.shape)), obj) - frame = sig.create_frame(obj, dim=-1) + frame = sig.create_frame(obj) if self.identity is None: value = sig.eval(frame, obj).convert_to(dtype) frame.next(shapelen) @@ -144,6 +151,46 @@ value = self.identity.convert_to(dtype) return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + def do_axis_reduce(self, obj, dtype, dim): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + W_NDimArray + + shape = obj.shape[0:dim] + obj.shape[dim + 1:len(obj.shape)] + size = 1 + for s in shape: + size *= s + result = W_NDimArray(size, shape, dtype) + rightsig = obj.create_sig(obj.shape) + # note - this is just a wrapper so signature can fetch + # both left and right, nothing more, especially + # this is not a true virtual array, because shapes + # don't quite match + arr = AxisReduce(self.func, self.name, shape, dtype, + result, obj, dim) + scalarsig = ScalarSignature(dtype) + sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, + scalarsig, rightsig), arr) + frame = sig.create_frame(arr) + shapelen = len(obj.shape) + if self.identity is None: + frame.identity = sig.eval(frame, arr).convert_to(dtype) + frame.next(shapelen) + else: + frame.identity = self.identity.convert_to(dtype) + frame.value = frame.identity + self.reduce_axis_loop(frame, sig, shapelen, arr) + return result + + def reduce_axis_loop(self, frame, sig, shapelen, arr): + while not frame.done(): + axisreduce_driver.jit_merge_point(frame=frame, self=self, + sig=sig, + shapelen=shapelen, arr=arr) + sig.eval(frame, arr) + frame.next(shapelen) + # store the last value, when everything is done + arr.left.setitem(frame.iterators[0].offset, frame.value) + def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): while not frame.done(): reduce_driver.jit_merge_point(sig=sig, diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ 
b/pypy/module/micronumpy/signature.py @@ -1,7 +1,7 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - OneDimIterator, ConstantIterator, axis_iter_from_arr + OneDimIterator, ConstantIterator, AxisIterator from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib.jit import hint, unroll_safe, promote @@ -33,7 +33,8 @@ return sig class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]'] + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity'] @unroll_safe def __init__(self, iterators, arrays): @@ -47,6 +48,8 @@ break else: self.final_iter = -1 + self.value = None + self.identity = None def done(self): final_iter = promote(self.final_iter) @@ -57,13 +60,7 @@ @unroll_safe def next(self, shapelen): for i in range(len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) - - def axis_done(self): - final_iter = promote(self.final_iter) - if final_iter < 0: - return False - return self.iterators[final_iter].axis_done() + self.iterators[i] = self.iterators[i].next(shapelen) def _add_ptr_to_cache(ptr, cache): i = 0 @@ -101,13 +98,13 @@ allnumbers.append(no) self.iter_no = no - def create_frame(self, arr, res_shape=None, chunks=None, dim=-1): + def create_frame(self, arr, res_shape=None, chunks=None): if chunks is None: chunks = [] res_shape = res_shape or arr.shape iterlist = [] arraylist = [] - self._create_iter(iterlist, arraylist, arr, res_shape, chunks, dim) + self._create_iter(iterlist, arraylist, arr, res_shape, chunks) return NumpyEvalFrame(iterlist, arraylist) @@ -150,7 +147,7 @@ assert concr.dtype is self.dtype self.array_no = _add_ptr_to_cache(concr.storage, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): + def _create_iter(self, iterlist, arraylist, arr, 
res_shape, chunklist): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) @@ -178,7 +175,7 @@ def _invent_array_numbering(self, arr, cache): pass - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): + def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): if self.iter_no >= len(iterlist): iter = ConstantIterator() iterlist.append(iter) @@ -219,12 +216,12 @@ assert isinstance(other, VirtualSliceSignature) return self.child.eq(other.child, compare_array_no) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): + def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): from pypy.module.micronumpy.interp_numarray import VirtualSlice assert isinstance(arr, VirtualSlice) chunklist.append(arr.chunks) self.child._create_iter(iterlist, arraylist, arr.child, res_shape, - chunklist, dim) + chunklist) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import VirtualSlice @@ -260,11 +257,11 @@ assert isinstance(arr, Call1) self.child._invent_array_numbering(arr.values, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): + def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): from pypy.module.micronumpy.interp_numarray import Call1 assert isinstance(arr, Call1) self.child._create_iter(iterlist, arraylist, arr.values, res_shape, - chunklist, dim) + chunklist) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call1 @@ -305,14 +302,14 @@ self.left._invent_numbering(cache, allnumbers) self.right._invent_numbering(cache, allnumbers) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): + def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) self.left._create_iter(iterlist, arraylist, arr.left, 
res_shape, - chunklist, dim) + chunklist) self.right._create_iter(iterlist, arraylist, arr.right, res_shape, - chunklist, dim) + chunklist) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call2 @@ -327,9 +324,9 @@ class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): + def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): self.right._create_iter(iterlist, arraylist, arr, res_shape, - chunklist, dim) + chunklist) def _invent_numbering(self, cache, allnumbers): self.right._invent_numbering(cache, allnumbers) @@ -341,33 +338,39 @@ return self.right.eval(frame, arr) def debug_repr(self): - return 'ReduceSig(%s, %s, %s)' % (self.name, self.left.debug_repr(), - self.right.debug_repr()) + return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) class AxisReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist, dim): - from pypy.module.micronumpy.interp_numarray import ConcreteArray - concr = arr.get_concrete() - assert isinstance(concr, ConcreteArray) - storage = concr.storage - if self.iter_no >= len(iterlist): - _iter = axis_iter_from_arr(concr, dim) - from interp_iter import AxisIterator - assert isinstance(_iter, AxisIterator) - iterlist.append(_iter) - if self.array_no >= len(arraylist): - arraylist.append(storage) + def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + assert not iterlist # we assume that later in eval + iterlist.append(AxisIterator(arr.dim, arr.right.shape, + arr.left.strides, + arr.left.backstrides)) + self.right._create_iter(iterlist, arraylist, arr.right, arr.right.shape, + chunklist) def _invent_numbering(self, cache, allnumbers): + no = len(allnumbers) + allnumbers.append(no) self.right._invent_numbering(cache, allnumbers) def _invent_array_numbering(self, arr, cache): - 
self.right._invent_array_numbering(arr, cache) + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + self.right._invent_array_numbering(arr.right, cache) def eval(self, frame, arr): - return self.right.eval(frame, arr) + if frame.iterators[0].axis_done: + arr.left.setitem(frame.iterators[0].offset, frame.value) + frame.value = frame.identity + v = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + print v.value, frame.value.value + frame.value = self.binfunc(self.calc_dtype, frame.value, v) + return frame.value def debug_repr(self): - return 'AxisReduceSig(%s, %s, %s)' % (self.name, self.left.debug_repr(), - self.right.debug_repr()) - + return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -745,7 +745,7 @@ raises(TypeError, 'a.sum(2, 3)') - def test_reduceND(self): + def test_reduce_nd(self): from numpypy import arange a = arange(15).reshape(5, 3) assert a.sum() == 105 From notifications-noreply at bitbucket.org Thu Jan 12 14:20:45 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Thu, 12 Jan 2012 13:20:45 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120112132045.16339.77882@bitbucket02.managed.contegix.com> You have received a notification from Carl Friedrich Bolz. Hi, I forked pypy. My fork is at https://bitbucket.org/cfbolz/pypy. 
From noreply at buildbot.pypy.org Thu Jan 12 14:28:17 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:17 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: test conversion to unsigned long Message-ID: <20120112132817.B157D82C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51261:0429f19a7321 Date: 2012-01-10 14:20 +0100 http://bitbucket.org/pypy/pypy/changeset/0429f19a7321/ Log: test conversion to unsigned long diff --git a/pypy/module/_ffi/test/test_type_converter.py b/pypy/module/_ffi/test/test_type_converter.py --- a/pypy/module/_ffi/test/test_type_converter.py +++ b/pypy/module/_ffi/test/test_type_converter.py @@ -45,11 +45,14 @@ self.check(app_types.sint, self.space.wrap(sys.maxint+1), -sys.maxint-1) self.check(app_types.sint, self.space.wrap(sys.maxint*2), -2) - def test_uint(self): - self.check(app_types.uint, self.space.wrap(42), r_uint(42)) - self.check(app_types.uint, self.space.wrap(-1), r_uint(sys.maxint*2 +1)) - self.check(app_types.uint, self.space.wrap(sys.maxint*3), + def test_unsigned(self): + space = self.space + self.check(app_types.uint, space.wrap(42), r_uint(42)) + self.check(app_types.uint, space.wrap(-1), r_uint(sys.maxint*2 +1)) + self.check(app_types.uint, space.wrap(sys.maxint*3), r_uint(sys.maxint - 2)) + self.check(app_types.ulong, space.wrap(sys.maxint+12), + r_uint(sys.maxint+12)) def test_pointer(self): # pointers are "unsigned" at applevel, but signed at interp-level (for From noreply at buildbot.pypy.org Thu Jan 12 14:28:18 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:18 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add test for chars and unichars Message-ID: <20120112132818.D7F8382C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51262:b85af534959c Date: 2012-01-10 14:45 +0100 
http://bitbucket.org/pypy/pypy/changeset/b85af534959c/ Log: add test for chars and unichars diff --git a/pypy/module/_ffi/test/test_type_converter.py b/pypy/module/_ffi/test/test_type_converter.py --- a/pypy/module/_ffi/test/test_type_converter.py +++ b/pypy/module/_ffi/test/test_type_converter.py @@ -53,6 +53,12 @@ r_uint(sys.maxint - 2)) self.check(app_types.ulong, space.wrap(sys.maxint+12), r_uint(sys.maxint+12)) + self.check(app_types.ulong, space.wrap(sys.maxint*2+3), r_uint(1)) + + def test_char(self): + space = self.space + self.check(app_types.char, space.wrap('a'), ord('a')) + self.check(app_types.unichar, space.wrap(u'\u1234'), 0x1234) def test_pointer(self): # pointers are "unsigned" at applevel, but signed at interp-level (for From noreply at buildbot.pypy.org Thu Jan 12 14:28:20 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:20 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add tests for floats and doubles Message-ID: <20120112132820.1045882C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51263:ff33c5e7f4a9 Date: 2012-01-10 15:04 +0100 http://bitbucket.org/pypy/pypy/changeset/ff33c5e7f4a9/ Log: add tests for floats and doubles diff --git a/pypy/module/_ffi/test/test_type_converter.py b/pypy/module/_ffi/test/test_type_converter.py --- a/pypy/module/_ffi/test/test_type_converter.py +++ b/pypy/module/_ffi/test/test_type_converter.py @@ -1,6 +1,6 @@ import sys from pypy.conftest import gettestobjspace -from pypy.rlib.rarithmetic import r_uint +from pypy.rlib.rarithmetic import r_uint, r_singlefloat from pypy.module._ffi.interp_ffitype import app_types, descr_new_pointer from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter @@ -60,6 +60,11 @@ self.check(app_types.char, space.wrap('a'), ord('a')) self.check(app_types.unichar, space.wrap(u'\u1234'), 0x1234) + def test_float_and_double(self): + space = self.space + self.check(app_types.float, 
space.wrap(12.34), r_singlefloat(12.34)) + self.check(app_types.double, space.wrap(12.34), 12.34) + def test_pointer(self): # pointers are "unsigned" at applevel, but signed at interp-level (for # no good reason, at interp-level Signed or Unsigned makes no From noreply at buildbot.pypy.org Thu Jan 12 14:28:21 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:21 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add a test to convert signed long longs Message-ID: <20120112132821.3526882C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51264:000bd7710f81 Date: 2012-01-10 15:08 +0100 http://bitbucket.org/pypy/pypy/changeset/000bd7710f81/ Log: add a test to convert signed long longs diff --git a/pypy/module/_ffi/test/test_type_converter.py b/pypy/module/_ffi/test/test_type_converter.py --- a/pypy/module/_ffi/test/test_type_converter.py +++ b/pypy/module/_ffi/test/test_type_converter.py @@ -1,6 +1,7 @@ import sys from pypy.conftest import gettestobjspace -from pypy.rlib.rarithmetic import r_uint, r_singlefloat +from pypy.rlib.rarithmetic import r_uint, r_singlefloat, r_longlong +from pypy.rlib.libffi import IS_32_BIT from pypy.module._ffi.interp_ffitype import app_types, descr_new_pointer from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter @@ -60,6 +61,16 @@ self.check(app_types.char, space.wrap('a'), ord('a')) self.check(app_types.unichar, space.wrap(u'\u1234'), 0x1234) + def test_signed_longlong(self): + space = self.space + maxint32 = 2147483647 # we cannot really go above maxint on 64 bits + # (and we would not test anything, as there long + # is the same as long long) + expected = maxint32+1 + if IS_32_BIT: + expected = r_longlong(expected) + self.check(app_types.slonglong, space.wrap(maxint32+1), expected) + def test_float_and_double(self): space = self.space self.check(app_types.float, space.wrap(12.34), r_singlefloat(12.34)) From noreply at buildbot.pypy.org Thu 
Jan 12 14:28:22 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:22 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: add a test for converting unsigned long longs Message-ID: <20120112132822.58FCA82C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51265:2a9dd9835199 Date: 2012-01-10 15:15 +0100 http://bitbucket.org/pypy/pypy/changeset/2a9dd9835199/ Log: add a test for converting unsigned long longs diff --git a/pypy/module/_ffi/test/test_type_converter.py b/pypy/module/_ffi/test/test_type_converter.py --- a/pypy/module/_ffi/test/test_type_converter.py +++ b/pypy/module/_ffi/test/test_type_converter.py @@ -1,6 +1,6 @@ import sys from pypy.conftest import gettestobjspace -from pypy.rlib.rarithmetic import r_uint, r_singlefloat, r_longlong +from pypy.rlib.rarithmetic import r_uint, r_singlefloat, r_longlong, r_ulonglong from pypy.rlib.libffi import IS_32_BIT from pypy.module._ffi.interp_ffitype import app_types, descr_new_pointer from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter @@ -71,6 +71,19 @@ expected = r_longlong(expected) self.check(app_types.slonglong, space.wrap(maxint32+1), expected) + def test_unsigned_longlong(self): + space = self.space + maxint64 = 9223372036854775807 # maxint64+1 does not fit into a + # longlong, but it does into a + # ulonglong + if IS_32_BIT: + # internally, the type converter always casts to signed longlongs + expected = r_longlong(-maxint64-1) + else: + # on 64 bit, ulonglong == uint (i.e., unsigned long in C terms) + expected = r_uint(maxint64+1) + self.check(app_types.ulonglong, space.wrap(maxint64+1), expected) + def test_float_and_double(self): space = self.space self.check(app_types.float, space.wrap(12.34), r_singlefloat(12.34)) From noreply at buildbot.pypy.org Thu Jan 12 14:28:23 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:23 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: this code 
belongs to the subclass of type_converter, kill it Message-ID: <20120112132823.7E97D82C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51266:c574cd6ef30c Date: 2012-01-12 11:38 +0100 http://bitbucket.org/pypy/pypy/changeset/c574cd6ef30c/ Log: this code belongs to the subclass of type_converter, kill it diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -325,7 +325,7 @@ Return type: lltype.Unsigned (the address of the structure) """ - return self.func.call(self.argchain, rffi.ULONG, is_struct=True) + self.error(w_ffitype) def get_void(self, w_ffitype): """ From noreply at buildbot.pypy.org Thu Jan 12 14:28:24 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:24 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: fix two NameErrors Message-ID: <20120112132824.A334182C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51267:fba591df5224 Date: 2012-01-12 11:44 +0100 http://bitbucket.org/pypy/pypy/changeset/fba591df5224/ Log: fix two NameErrors diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -2,6 +2,7 @@ from pypy.rlib import jit from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rpython.lltypesystem import rffi +from pypy.interpreter.error import operationerrfmt from pypy.module._rawffi.structure import W_StructureInstance, W_Structure from pypy.module._ffi.interp_ffitype import app_types @@ -100,7 +101,7 @@ return w_arg def error(self, w_ffitype, w_obj): - raise operationerrfmt(space.w_TypeError, + raise operationerrfmt(self.space.w_TypeError, 'Unsupported ffi type to convert: %s', w_ffitype.name) @@ -253,7 +254,7 @@ return self.space.wrap(float(singlefloatval)) def error(self, w_ffitype): - raise 
operationerrfmt(space.w_TypeError, + raise operationerrfmt(self.space.w_TypeError, 'Unsupported ffi type to convert: %s', w_ffitype.name) From noreply at buildbot.pypy.org Thu Jan 12 14:28:25 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:25 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: fix for the case in which we return a ulonglong on 64 bit Message-ID: <20120112132825.CDF7E82C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51268:36e31f50363c Date: 2012-01-12 11:58 +0100 http://bitbucket.org/pypy/pypy/changeset/36e31f50363c/ Log: fix for the case in which we return a ulonglong on 64 bit diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -194,8 +194,13 @@ elif w_ffitype.is_signed(): intval = self.get_signed(w_ffitype) return space.wrap(intval) - elif w_ffitype is app_types.ulong: - # we need to be careful when the return type is ULONG, because the + elif w_ffitype is app_types.ulong or w_ffitype is app_types.ulonglong: + # Note that the second check (for ulonglong) is meaningful only + # on 64 bit, because on 32 bit the ulonglong case would have been + # handled by the is_longlong() branch above. On 64 bit, ulonglong + # is essentially the same as ulong. + # + # We need to be careful when the return type is ULONG, because the # value might not fit into a signed LONG, and thus might require + # an app-level long. This is why we need to treat it separately + # from the other unsigned types. 
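The wrap-around values checked in the tests above (for instance `-1` mapping to `sys.maxint*2 + 1` for `uint`, or `sys.maxint*2+3` mapping to `1` for `ulong`) all come from plain two's-complement truncation. A rough sketch of that rule in ordinary Python, independent of the RPython `r_uint`/`r_ulonglong` machinery (the helper name is made up for illustration):

```python
def truncate_unsigned(value, nbits):
    # Map any integer onto an nbits-wide unsigned word, the way the
    # _ffi type converter truncates out-of-range arguments.
    return value & ((1 << nbits) - 1)

# On a 32-bit build (maxint == 2**31 - 1), -1 wraps to maxint*2 + 1:
assert truncate_unsigned(-1, 32) == 2**32 - 1
# maxint64 + 1 does not fit a signed longlong, but fits an unsigned one:
assert truncate_unsigned(2**63, 64) == 2**63
```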
From noreply at buildbot.pypy.org Thu Jan 12 14:28:27 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:27 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: start to use ToAppLevel converter for getting the struct fields Message-ID: <20120112132827.02E8782C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51269:e207f97ec151 Date: 2012-01-12 12:11 +0100 http://bitbucket.org/pypy/pypy/changeset/e207f97ec151/ Log: start to use ToAppLevel converter for getting the struct fields diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -10,6 +10,8 @@ from pypy.interpreter.error import operationerrfmt from pypy.objspace.std.typetype import type_typedef from pypy.module._ffi.interp_ffitype import W_FFIType, app_types +from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter + class W_Field(Wrappable): @@ -133,11 +135,9 @@ @unwrap_spec(name=str) def getfield(self, space, name): w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) + converter = GetFieldConverter(space, self.rawmem, offset) if w_ffitype.is_longlong(): - value = libffi.struct_getfield_longlong(w_ffitype.ffitype, self.rawmem, offset) - if w_ffitype is app_types.ulonglong: - return space.wrap(r_ulonglong(value)) - return space.wrap(value) + return converter.do_and_wrap(w_ffitype) # if w_ffitype.is_signed() or w_ffitype.is_unsigned() or w_ffitype.is_pointer(): value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) @@ -193,6 +193,29 @@ # raise operationerrfmt(space.w_TypeError, 'Unknown type: %s', w_ffitype.name) + +class GetFieldConverter(ToAppLevelConverter): + """ + A converter used by W__StructInstance to get a field from the struct and + wrap it to the correct app-level type. 
+ """ + + def __init__(self, space, rawmem, offset): + self.space = space + self.rawmem = rawmem + self.offset = offset + + def get_longlong(self, w_ffitype): + return libffi.struct_getfield_longlong(libffi.types.slonglong, + self.rawmem, self.offset) + + def get_ulonglong(self, w_ffitype): + longlongval = libffi.struct_getfield_longlong(libffi.types.ulonglong, + self.rawmem, self.offset) + return r_ulonglong(longlongval) + + + W__StructInstance.typedef = TypeDef( '_StructInstance', getaddr = interp2app(W__StructInstance.getaddr), From noreply at buildbot.pypy.org Thu Jan 12 14:28:28 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:28 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: migrate more cases to the GetFieldConverter Message-ID: <20120112132828.30F3382C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51270:7a8280eed4bb Date: 2012-01-12 12:22 +0100 http://bitbucket.org/pypy/pypy/changeset/7a8280eed4bb/ Log: migrate more cases to the GetFieldConverter diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -140,10 +140,7 @@ return converter.do_and_wrap(w_ffitype) # if w_ffitype.is_signed() or w_ffitype.is_unsigned() or w_ffitype.is_pointer(): - value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) - if w_ffitype.is_signed(): - return space.wrap(value) - return space.wrap(r_uint(value)) + return converter.do_and_wrap(w_ffitype) # if w_ffitype.is_char(): value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) @@ -215,6 +212,36 @@ return r_ulonglong(longlongval) + def get_signed(self, w_ffitype): + return libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, self.offset) + + def get_unsigned(self, w_ffitype): + value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, self.offset) + return r_uint(value) + + 
get_unsigned_which_fits_into_a_signed = get_signed + get_pointer = get_unsigned + + ## def get_char(self, w_ffitype): + ## ... + + ## def get_unichar(self, w_ffitype): + ## ... + + ## def get_float(self, w_ffitype): + ## ... + + ## def get_singlefloat(self, w_ffitype): + ## ... + + ## def get_struct(self, w_datashape): + ## ... + + ## def get_void(self, w_ffitype): + ## ... + + + W__StructInstance.typedef = TypeDef( '_StructInstance', From noreply at buildbot.pypy.org Thu Jan 12 14:28:29 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:29 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: migrate the char case to GetFieldConverter Message-ID: <20120112132829.5725D82C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51271:43112b2a9723 Date: 2012-01-12 12:25 +0100 http://bitbucket.org/pypy/pypy/changeset/43112b2a9723/ Log: migrate the char case to GetFieldConverter diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -143,8 +143,7 @@ return converter.do_and_wrap(w_ffitype) # if w_ffitype.is_char(): - value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) - return space.wrap(chr(value)) + return converter.do_and_wrap(w_ffitype) # if w_ffitype.is_unichar(): value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) @@ -222,8 +221,9 @@ get_unsigned_which_fits_into_a_signed = get_signed get_pointer = get_unsigned - ## def get_char(self, w_ffitype): - ## ... + def get_char(self, w_ffitype): + value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, self.offset) + return value ## def get_unichar(self, w_ffitype): ## ... 
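The changesets above all follow one refactoring pattern: a base converter class dispatches on the ffi type and calls a `get_*` hook, and each subclass (here `GetFieldConverter`) only supplies the raw accessors. A toy model of that shape, with hypothetical names rather than the actual RPython classes:

```python
class ToAppLevelSketch(object):
    # Base class: pick the right get_* hook for a type tag.
    # Subclasses implement the hooks that touch raw memory.
    def do_and_wrap(self, typetag):
        return getattr(self, 'get_' + typetag)()

class FieldGetter(ToAppLevelSketch):
    def __init__(self, rawvalue):
        self.rawvalue = rawvalue

    def get_signed(self):
        return self.rawvalue

    def get_char(self):
        # chars are stored as small integers in the raw memory
        return chr(self.rawvalue)

getter = FieldGetter(97)
assert getter.do_and_wrap('signed') == 97
assert getter.do_and_wrap('char') == 'a'
```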
From noreply at buildbot.pypy.org Thu Jan 12 14:28:30 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:30 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: migrate the unichar case to GetFieldConverter Message-ID: <20120112132830.7D52682C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51272:260c4a371fea Date: 2012-01-12 12:28 +0100 http://bitbucket.org/pypy/pypy/changeset/260c4a371fea/ Log: migrate the unichar case to GetFieldConverter diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -146,8 +146,7 @@ return converter.do_and_wrap(w_ffitype) # if w_ffitype.is_unichar(): - value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, offset) - return space.wrap(unichr(value)) + return converter.do_and_wrap(w_ffitype) # if w_ffitype.is_double(): value = libffi.struct_getfield_float(w_ffitype.ffitype, self.rawmem, offset) @@ -222,11 +221,10 @@ get_pointer = get_unsigned def get_char(self, w_ffitype): - value = libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, self.offset) - return value + return libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, self.offset) - ## def get_unichar(self, w_ffitype): - ## ... + def get_unichar(self, w_ffitype): + return libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, self.offset) ## def get_float(self, w_ffitype): ## ... 
From noreply at buildbot.pypy.org Thu Jan 12 14:28:31 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:31 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: migrate the float and singlefloat cases to GetFieldConverter Message-ID: <20120112132831.A48F182C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51273:1e4a336da3a3 Date: 2012-01-12 12:30 +0100 http://bitbucket.org/pypy/pypy/changeset/1e4a336da3a3/ Log: migrate the float and singlefloat cases to GetFieldConverter diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -149,12 +149,10 @@ return converter.do_and_wrap(w_ffitype) # if w_ffitype.is_double(): - value = libffi.struct_getfield_float(w_ffitype.ffitype, self.rawmem, offset) - return space.wrap(value) + return converter.do_and_wrap(w_ffitype) # if w_ffitype.is_singlefloat(): - value = libffi.struct_getfield_singlefloat(w_ffitype.ffitype, self.rawmem, offset) - return space.wrap(float(value)) + return converter.do_and_wrap(w_ffitype) # raise operationerrfmt(space.w_TypeError, 'Unknown type: %s', w_ffitype.name) @@ -226,11 +224,13 @@ def get_unichar(self, w_ffitype): return libffi.struct_getfield_int(w_ffitype.ffitype, self.rawmem, self.offset) - ## def get_float(self, w_ffitype): - ## ... + def get_float(self, w_ffitype): + return libffi.struct_getfield_float(w_ffitype.ffitype, self.rawmem, self.offset) - ## def get_singlefloat(self, w_ffitype): - ## ... + def get_singlefloat(self, w_ffitype): + return libffi.struct_getfield_singlefloat(w_ffitype.ffitype, self.rawmem, + self.offset) + ## def get_struct(self, w_datashape): ## ... 
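The `struct_getfield_*`/`struct_setfield_*` helpers used throughout these patches read and write a value at a byte offset inside raw memory. The stdlib `struct` module can model that on a `bytearray`; the 32-bit little-endian format code here is only an assumption for the sketch:

```python
import struct

def setfield_int(mem, offset, value):
    # Write a 32-bit signed integer at the given byte offset.
    struct.pack_into('<i', mem, offset, value)

def getfield_int(mem, offset):
    # Read it back from the same offset.
    return struct.unpack_from('<i', mem, offset)[0]

mem = bytearray(8)       # stand-in for the rawmem buffer
setfield_int(mem, 4, -1)
assert getfield_int(mem, 4) == -1
assert getfield_int(mem, 0) == 0
```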
From noreply at buildbot.pypy.org Thu Jan 12 14:28:32 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:32 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: fully migrate the whole W__StructInstance.getfield to use GetFieldConverter Message-ID: <20120112132832.CB78A82C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51274:5cd049bf9f55 Date: 2012-01-12 12:34 +0100 http://bitbucket.org/pypy/pypy/changeset/5cd049bf9f55/ Log: fully migrate the whole W__StructInstance.getfield to use GetFieldConverter diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -136,25 +136,7 @@ def getfield(self, space, name): w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) converter = GetFieldConverter(space, self.rawmem, offset) - if w_ffitype.is_longlong(): - return converter.do_and_wrap(w_ffitype) - # - if w_ffitype.is_signed() or w_ffitype.is_unsigned() or w_ffitype.is_pointer(): - return converter.do_and_wrap(w_ffitype) - # - if w_ffitype.is_char(): - return converter.do_and_wrap(w_ffitype) - # - if w_ffitype.is_unichar(): - return converter.do_and_wrap(w_ffitype) - # - if w_ffitype.is_double(): - return converter.do_and_wrap(w_ffitype) - # - if w_ffitype.is_singlefloat(): - return converter.do_and_wrap(w_ffitype) - # - raise operationerrfmt(space.w_TypeError, 'Unknown type: %s', w_ffitype.name) + return converter.do_and_wrap(w_ffitype) @unwrap_spec(name=str) def setfield(self, space, name, w_value): diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -232,7 +232,7 @@ assert voidval is None return space.w_None else: - assert False, "Return value shape '%s' not supported" % w_ffitype + self.error(w_ffitype) def _longlong(self, w_ffitype): # a separate function, which can 
be seen by the jit or not, From noreply at buildbot.pypy.org Thu Jan 12 14:28:34 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:28:34 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: use FromAppLevelConverter to handle the conversion for setfields Message-ID: <20120112132834.0486282C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51275:9a7a19bfc660 Date: 2012-01-12 14:10 +0100 http://bitbucket.org/pypy/pypy/changeset/9a7a19bfc660/ Log: use FromAppLevelConverter to handle the conversion for setfields diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -3,7 +3,7 @@ from pypy.rlib import libffi from pypy.rlib import jit from pypy.rlib.rgc import must_be_light_finalizer -from pypy.rlib.rarithmetic import r_uint, r_ulonglong, r_singlefloat +from pypy.rlib.rarithmetic import r_uint, r_ulonglong, r_singlefloat, intmask from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.typedef import TypeDef, interp_attrproperty from pypy.interpreter.gateway import interp2app, unwrap_spec @@ -135,38 +135,14 @@ @unwrap_spec(name=str) def getfield(self, space, name): w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) - converter = GetFieldConverter(space, self.rawmem, offset) - return converter.do_and_wrap(w_ffitype) + field_getter = GetFieldConverter(space, self.rawmem, offset) + return field_getter.do_and_wrap(w_ffitype) @unwrap_spec(name=str) def setfield(self, space, name, w_value): w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) - if w_ffitype.is_longlong(): - value = space.truncatedlonglong_w(w_value) - libffi.struct_setfield_longlong(w_ffitype.ffitype, self.rawmem, offset, value) - return - # - if w_ffitype.is_signed() or w_ffitype.is_unsigned() or w_ffitype.is_pointer(): - value = space.truncatedint_w(w_value) - 
libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) - return - # - if w_ffitype.is_char() or w_ffitype.is_unichar(): - value = space.int_w(space.ord(w_value)) - libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, offset, value) - return - # - if w_ffitype.is_double(): - value = space.float_w(w_value) - libffi.struct_setfield_float(w_ffitype.ffitype, self.rawmem, offset, value) - return - # - if w_ffitype.is_singlefloat(): - value = r_singlefloat(space.float_w(w_value)) - libffi.struct_setfield_singlefloat(w_ffitype.ffitype, self.rawmem, offset, value) - return - # - raise operationerrfmt(space.w_TypeError, 'Unknown type: %s', w_ffitype.name) + field_setter = SetFieldConverter(space, self.rawmem, offset) + field_setter.unwrap_and_do(w_ffitype, w_value) class GetFieldConverter(ToAppLevelConverter): @@ -213,7 +189,6 @@ return libffi.struct_getfield_singlefloat(w_ffitype.ffitype, self.rawmem, self.offset) - ## def get_struct(self, w_datashape): ## ... @@ -221,6 +196,51 @@ ## ... +class SetFieldConverter(FromAppLevelConverter): + """ + A converter used by W__StructInstance to convert an app-level object to + the corresponding low-level value and set the field of a structure. 
+ """ + + def __init__(self, space, rawmem, offset): + self.space = space + self.rawmem = rawmem + self.offset = offset + + def handle_signed(self, w_ffitype, w_obj, intval): + libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, self.offset, + intval) + + def handle_unsigned(self, w_ffitype, w_obj, uintval): + libffi.struct_setfield_int(w_ffitype.ffitype, self.rawmem, self.offset, + intmask(uintval)) + + handle_pointer = handle_signed + handle_char = handle_signed + handle_unichar = handle_signed + + def handle_longlong(self, w_ffitype, w_obj, longlongval): + libffi.struct_setfield_longlong(w_ffitype.ffitype, self.rawmem, self.offset, + longlongval) + + def handle_float(self, w_ffitype, w_obj, floatval): + libffi.struct_setfield_float(w_ffitype.ffitype, self.rawmem, self.offset, + floatval) + + def handle_singlefloat(self, w_ffitype, w_obj, singlefloatval): + libffi.struct_setfield_singlefloat(w_ffitype.ffitype, self.rawmem, self.offset, + singlefloatval) + + ## def handle_struct(self, w_ffitype, w_structinstance): + ## ... + + ## def handle_char_p(self, w_ffitype, w_obj, strval): + ## ... + + ## def handle_unichar_p(self, w_ffitype, w_obj, unicodeval): + ## ... + + W__StructInstance.typedef = TypeDef( diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -33,7 +33,7 @@ pass elif w_ffitype.is_pointer(): w_obj = self.convert_pointer_arg_maybe(w_obj, w_ffitype) - intval = intmask(space.uint_w(w_obj)) + intval = space.truncatedint_w(w_obj) self.handle_pointer(w_ffitype, w_obj, intval) elif w_ffitype.is_unsigned(): uintval = r_uint(space.truncatedint_w(w_obj)) @@ -58,7 +58,7 @@ def _longlong(self, w_ffitype, w_obj): # a separate function, which can be seen by the jit or not, # depending on whether longlongs are supported - bigval = self.space.bigint_w(w_obj) + bigval = self.space.bigint_w(w_obj) # XXX, use truncatedlonglong? 
ullval = bigval.ulonglongmask() llval = rffi.cast(rffi.LONGLONG, ullval) self.handle_longlong(w_ffitype, w_obj, llval) From noreply at buildbot.pypy.org Thu Jan 12 14:30:08 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 12 Jan 2012 14:30:08 +0100 (CET) Subject: [pypy-commit] pypy ffistruct: use truncatedlonglong_w instead of manually converting to bigint and then call ulonglongmask Message-ID: <20120112133008.38DFF82C03@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r51276:903365dd2b8f Date: 2012-01-12 14:29 +0100 http://bitbucket.org/pypy/pypy/changeset/903365dd2b8f/ Log: use truncatedlonglong_w instead of manually converting to bigint and then call ulonglongmask diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -58,10 +58,8 @@ def _longlong(self, w_ffitype, w_obj): # a separate function, which can be seen by the jit or not, # depending on whether longlongs are supported - bigval = self.space.bigint_w(w_obj) # XXX, use truncatedlonglong? - ullval = bigval.ulonglongmask() - llval = rffi.cast(rffi.LONGLONG, ullval) - self.handle_longlong(w_ffitype, w_obj, llval) + longlongval = self.space.truncatedlonglong_w(w_obj) + self.handle_longlong(w_ffitype, w_obj, longlongval) def _float(self, w_ffitype, w_obj): # a separate function, which can be seen by the jit or not, From noreply at buildbot.pypy.org Thu Jan 12 15:32:12 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 12 Jan 2012 15:32:12 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (stepahn, bivab): Fix decoding of guard description. 
Missed a cast before checking the bytecodes Message-ID: <20120112143212.6DD4082C03@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51277:33e8a892e86f Date: 2012-01-12 15:28 +0100 http://bitbucket.org/pypy/pypy/changeset/33e8a892e86f/ Log: (stepahn, bivab): Fix decoding of guard description. Missed a cast before checking the bytecodes diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -184,7 +184,7 @@ fvalue = 0 code_inputarg = False while True: - code = bytecode[0] + code = rffi.cast(lltype.Signed, bytecode[0]) bytecode = rffi.ptradd(bytecode, 1) if code >= self.CODE_FROMSTACK: if code > 0x7F: From noreply at buildbot.pypy.org Thu Jan 12 15:32:13 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 12 Jan 2012 15:32:13 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Import test from ppc backend Message-ID: <20120112143213.939AA82C03@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51278:7606d3ea9b4d Date: 2012-01-12 15:31 +0100 http://bitbucket.org/pypy/pypy/changeset/7606d3ea9b4d/ Log: Import test from ppc backend diff --git a/pypy/jit/backend/arm/test/test_runner.py b/pypy/jit/backend/arm/test/test_runner.py --- a/pypy/jit/backend/arm/test/test_runner.py +++ b/pypy/jit/backend/arm/test/test_runner.py @@ -175,3 +175,33 @@ res = self.execute_operation(rop.GETFIELD_GC, [t_box], 'float', descr=floatdescr) assert res.getfloat() == -3.6 + + def test_compile_loop_many_int_args(self): + for numargs in range(2, 30): + for _ in range(numargs): + self.cpu.reserve_some_free_fail_descr_number() + ops = [] + arglist = "[%s]\n" % ", ".join(["i%d" % i for i in range(numargs)]) + ops.append(arglist) + + arg1 = 0 + arg2 = 1 + res = numargs + for i in range(numargs - 1): + op = "i%d = int_add(i%d, i%d)\n" % (res, arg1, arg2) + arg1 = res + res += 1 + arg2 += 1 + ops.append(op) + 
ops.append("finish(i%d)" % (res - 1)) + + ops = "".join(ops) + loop = parse(ops) + looptoken = JitCellToken() + done_number = self.cpu.get_fail_descr_number(loop.operations[-1].getdescr()) + self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + ARGS = [lltype.Signed] * numargs + RES = lltype.Signed + args = [i+1 for i in range(numargs)] + res = self.cpu.execute_token(looptoken, *args) + assert self.cpu.get_latest_value_int(0) == sum(args) From noreply at buildbot.pypy.org Thu Jan 12 17:07:01 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 12 Jan 2012 17:07:01 +0100 (CET) Subject: [pypy-commit] pypy default: Highlight the start and end of the sections. Message-ID: <20120112160701.75FDF82C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51279:453df8a9c213 Date: 2012-01-12 17:04 +0100 http://bitbucket.org/pypy/pypy/changeset/453df8a9c213/ Log: Highlight the start and end of the sections. diff --git a/pypy/jit/tool/pypytrace.vim b/pypy/jit/tool/pypytrace.vim --- a/pypy/jit/tool/pypytrace.vim +++ b/pypy/jit/tool/pypytrace.vim @@ -19,6 +19,7 @@ syn match pypyLoopArgs '^[[].*' syn match pypyLoopStart '^#.*' syn match pypyDebugMergePoint '^debug_merge_point(.\+)' +syn match pypyLogBoundary '[[][0-9a-f]\+[]] \([{].\+\|.\+[}]\)$' hi def link pypyLoopStart Structure "hi def link pypyLoopArgs PreProc @@ -29,3 +30,4 @@ hi def link pypyNumber Number hi def link pypyDescr PreProc hi def link pypyDescrField Label +hi def link pypyLogBoundary Statement From noreply at buildbot.pypy.org Thu Jan 12 17:07:02 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 12 Jan 2012 17:07:02 +0100 (CET) Subject: [pypy-commit] pypy default: Some extra passing tests. Message-ID: <20120112160702.A0EF282C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51280:ddb13f0805a0 Date: 2012-01-12 17:05 +0100 http://bitbucket.org/pypy/pypy/changeset/ddb13f0805a0/ Log: Some extra passing tests. 
diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -5,7 +5,7 @@ VArrayStateInfo, NotVirtualStateInfo, VirtualState, ShortBoxes from pypy.jit.metainterp.optimizeopt.optimizer import OptValue from pypy.jit.metainterp.history import BoxInt, BoxFloat, BoxPtr, ConstInt, ConstPtr -from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem import lltype, llmemory from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin, BaseTest, \ equaloplists, FakeDescrWithSnapshot from pypy.jit.metainterp.optimizeopt.intutils import IntBound @@ -82,6 +82,13 @@ assert isgeneral(value1, value2) assert not isgeneral(value2, value1) + assert isgeneral(OptValue(ConstInt(7)), OptValue(ConstInt(7))) + S = lltype.GcStruct('S') + foo = lltype.malloc(S) + fooref = lltype.cast_opaque_ptr(llmemory.GCREF, foo) + assert isgeneral(OptValue(ConstPtr(fooref)), + OptValue(ConstPtr(fooref))) + def test_field_matching_generalization(self): const1 = NotVirtualStateInfo(OptValue(ConstInt(1))) const2 = NotVirtualStateInfo(OptValue(ConstInt(2))) From noreply at buildbot.pypy.org Thu Jan 12 17:13:02 2012 From: noreply at buildbot.pypy.org (cfbolz) Date: Thu, 12 Jan 2012 17:13:02 +0100 (CET) Subject: [pypy-commit] pypy default: make docstring of elidable a bit more friendly Message-ID: <20120112161302.10BC982C03@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r51281:769b1a02e211 Date: 2012-01-12 17:02 +0100 http://bitbucket.org/pypy/pypy/changeset/769b1a02e211/ Log: make docstring of elidable a bit more friendly diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -11,12 +11,19 @@ def elidable(func): - """ Decorate a function as "trace-elidable". This means precisely that: + """ Decorate a function as "trace-elidable". 
Usually this means simply that + the function is constant-foldable, i.e. is pure and has no side-effects. + + In some situations it is ok to use this decorator if the function *has* + side effects, as long as these side-effects are idempotent. A typical + example for this would be a cache. + + To be totally precise: (1) the result of the call should not change if the arguments are the same (same numbers or same pointers) (2) it's fine to remove the call completely if we can guess the result - according to rule 1 + according to rule 1 (3) the function call can be moved around by optimizer, but only so it'll be called earlier and not later. From noreply at buildbot.pypy.org Thu Jan 12 17:13:03 2012 From: noreply at buildbot.pypy.org (cfbolz) Date: Thu, 12 Jan 2012 17:13:03 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120112161303.3405582C03@wyvern.cs.uni-duesseldorf.de> Author: Carl Friedrich Bolz Branch: Changeset: r51282:080ab6106fc4 Date: 2012-01-12 17:11 +0100 http://bitbucket.org/pypy/pypy/changeset/080ab6106fc4/ Log: merge diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -5,7 +5,7 @@ VArrayStateInfo, NotVirtualStateInfo, VirtualState, ShortBoxes from pypy.jit.metainterp.optimizeopt.optimizer import OptValue from pypy.jit.metainterp.history import BoxInt, BoxFloat, BoxPtr, ConstInt, ConstPtr -from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem import lltype, llmemory from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin, BaseTest, \ equaloplists, FakeDescrWithSnapshot from pypy.jit.metainterp.optimizeopt.intutils import IntBound @@ -82,6 +82,13 @@ assert isgeneral(value1, value2) assert not isgeneral(value2, value1) + assert isgeneral(OptValue(ConstInt(7)), OptValue(ConstInt(7))) + S = lltype.GcStruct('S') + foo = lltype.malloc(S) + 
fooref = lltype.cast_opaque_ptr(llmemory.GCREF, foo) + assert isgeneral(OptValue(ConstPtr(fooref)), + OptValue(ConstPtr(fooref))) + def test_field_matching_generalization(self): const1 = NotVirtualStateInfo(OptValue(ConstInt(1))) const2 = NotVirtualStateInfo(OptValue(ConstInt(2))) diff --git a/pypy/jit/tool/pypytrace.vim b/pypy/jit/tool/pypytrace.vim --- a/pypy/jit/tool/pypytrace.vim +++ b/pypy/jit/tool/pypytrace.vim @@ -19,6 +19,7 @@ syn match pypyLoopArgs '^[[].*' syn match pypyLoopStart '^#.*' syn match pypyDebugMergePoint '^debug_merge_point(.\+)' +syn match pypyLogBoundary '[[][0-9a-f]\+[]] \([{].\+\|.\+[}]\)$' hi def link pypyLoopStart Structure "hi def link pypyLoopArgs PreProc @@ -29,3 +30,4 @@ hi def link pypyNumber Number hi def link pypyDescr PreProc hi def link pypyDescrField Label +hi def link pypyLogBoundary Statement From noreply at buildbot.pypy.org Thu Jan 12 17:16:32 2012 From: noreply at buildbot.pypy.org (hager) Date: Thu, 12 Jan 2012 17:16:32 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (bivab, hager): do sign extension in decode32 Message-ID: <20120112161632.A7A0782C03@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51283:31d52c590cbd Date: 2012-01-12 08:15 -0800 http://bitbucket.org/pypy/pypy/changeset/31d52c590cbd/ Log: (bivab, hager): do sign extension in decode32 diff --git a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py @@ -5,6 +5,7 @@ from pypy.jit.metainterp.history import FLOAT from pypy.rlib.unroll import unrolling_iterable import pypy.jit.backend.ppc.ppcgen.register as r +from pypy.rpython.lltypesystem import rffi def gen_emit_cmp_op(condition, signed=True): def f(self, op, arglocs, regalloc): @@ -58,12 +59,17 @@ mem[i+1] = chr((n >> 16) & 0xFF) mem[i] = chr((n >> 24) & 0xFF) +# XXX this sign extension looks a bit strange ... 
+# It is important for PPC64. def decode32(mem, index): - return intmask(ord(mem[index+3]) + value = ( ord(mem[index+3]) | ord(mem[index+2]) << 8 | ord(mem[index+1]) << 16 | ord(mem[index]) << 24) + rffi_value = rffi.cast(rffi.INT, value) + return int(rffi_value) + def encode64(mem, i, n): mem[i+7] = chr(n & 0xFF) mem[i+6] = chr((n >> 8) & 0xFF) From noreply at buildbot.pypy.org Thu Jan 12 17:26:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 12 Jan 2012 17:26:45 +0100 (CET) Subject: [pypy-commit] pypy default: fix the test Message-ID: <20120112162645.A8D6182C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51284:20ee6554e580 Date: 2012-01-12 18:26 +0200 http://bitbucket.org/pypy/pypy/changeset/20ee6554e580/ Log: fix the test diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -3,7 +3,7 @@ from pypy.jit.backend.llgraph import runner from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint -from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_opnum +from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_getopnum from pypy.jit.metainterp.jitprof import Profiler from pypy.jit.metainterp.resoperation import rop from pypy.rpython.lltypesystem import lltype, llmemory @@ -96,7 +96,7 @@ def main(i, j): op = resop_new(rop.INT_ADD, [boxint_new(3), boxint_new(5)], boxint_new(8)) - return f(i) - f2(i+j, i, j) + resop_opnum(op) + return f(i) - f2(i+j, i, j) + resop_getopnum(op) res = ll_meta_interp(main, [40, 5], CPUClass=self.CPUClass, type_system=self.type_system, listops=True) From noreply at buildbot.pypy.org Thu Jan 12 17:29:24 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 12 Jan 2012 17:29:24 +0100 (CET) Subject: [pypy-commit] pypy default: TEMPORARY: put a limit (4 by 
default) on the number of "cancelled, Message-ID: <20120112162924.CFD7C82C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51285:b09a9354d977 Date: 2012-01-12 17:28 +0100 http://bitbucket.org/pypy/pypy/changeset/b09a9354d977/ Log: TEMPORARY: put a limit (4 by default) on the number of "cancelled, tracing more" that can occur during one tracing. I think this will again fail in some non-PyPy interpreters like Pyrolog. Sorry about that, but it's the quickest way to fix issue985... diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1553,6 +1553,7 @@ class MetaInterp(object): in_recursion = 0 + cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): self.staticdata = staticdata @@ -1975,6 +1976,13 @@ raise SwitchToBlackhole(ABORT_BAD_LOOP) # For now self.compile_loop(original_boxes, live_arg_boxes, start, resumedescr) # creation of the loop was cancelled! + self.cancel_count += 1 + if self.staticdata.warmrunnerdesc: + memmgr = self.staticdata.warmrunnerdesc.memory_manager + if memmgr: + if self.cancel_count > memmgr.max_unroll_loops: + self.staticdata.log('cancelled too many times!') + raise SwitchToBlackhole(ABORT_BAD_LOOP) self.staticdata.log('cancelled, tracing more...') # Otherwise, no loop found so far, so continue tracing. 
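The guard added to pyjitpl.py above follows a common "give up after N retries" pattern: count how often loop creation is cancelled during one tracing attempt and switch to blackhole once a limit is exceeded. A minimal standalone sketch of that pattern (hypothetical class and method names, not PyPy's actual code) might look like:

```python
class Tracer:
    """Toy illustration of the cancel-count limit, not PyPy's MetaInterp."""

    def __init__(self, max_unroll_loops=4):
        # mirrors the new 'max_unroll_loops' parameter (default 4)
        self.max_unroll_loops = max_unroll_loops
        self.cancel_count = 0

    def loop_cancelled(self):
        # count cancellations; abort once the limit is exceeded
        self.cancel_count += 1
        if self.cancel_count > self.max_unroll_loops:
            raise RuntimeError("cancelled too many times!")
        # otherwise: no loop found so far, continue tracing
```

With the default limit of 4, the first four cancellations are tolerated and the fifth aborts, matching the check `self.cancel_count > memmgr.max_unroll_loops` in the patch.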
diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -244,6 +244,11 @@ if self.warmrunnerdesc.memory_manager: self.warmrunnerdesc.memory_manager.max_retrace_guards = value + def set_param_max_unroll_loops(self, value): + if self.warmrunnerdesc: + if self.warmrunnerdesc.memory_manager: + self.warmrunnerdesc.memory_manager.max_unroll_loops = value + def disable_noninlinable_function(self, greenkey): cell = self.jit_cell_at_key(greenkey) cell.dont_trace_here = True diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -401,6 +401,7 @@ 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate', 'retrace_limit': 'how many times we can try retracing before giving up', 'max_retrace_guards': 'number of extra guards a retrace can cause', + 'max_unroll_loops': 'number of extra unrollings a loop can cause', 'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY' } @@ -412,6 +413,7 @@ 'loop_longevity': 1000, 'retrace_limit': 5, 'max_retrace_guards': 15, + 'max_unroll_loops': 4, 'enable_opts': 'all', } unroll_parameters = unrolling_iterable(PARAMETERS.items()) From noreply at buildbot.pypy.org Thu Jan 12 17:36:06 2012 From: noreply at buildbot.pypy.org (hager) Date: Thu, 12 Jan 2012 17:36:06 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: make sign extension more explicit Message-ID: <20120112163606.7013C82C03@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51286:0993530a85a0 Date: 2012-01-12 17:35 +0100 http://bitbucket.org/pypy/pypy/changeset/0993530a85a0/ Log: make sign extension more explicit diff --git a/pypy/jit/backend/ppc/ppcgen/arch.py b/pypy/jit/backend/ppc/ppcgen/arch.py --- a/pypy/jit/backend/ppc/ppcgen/arch.py +++ b/pypy/jit/backend/ppc/ppcgen/arch.py @@ -7,16 +7,17 @@ import sys if sys.maxint == (2**31 - 1): WORD = 4 + 
DWORD = 2 * WORD IS_PPC_32 = True BACKCHAIN_SIZE = 2 FPR_SAVE_AREA = len(NONVOLATILES_FLOAT) * DWORD else: WORD = 8 + DWORD = 2 * WORD IS_PPC_32 = False BACKCHAIN_SIZE = 6 FPR_SAVE_AREA = len(NONVOLATILES_FLOAT) * WORD -DWORD = 2 * WORD IS_PPC_64 = not IS_PPC_32 MY_COPY_OF_REGS = 0 diff --git a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py --- a/pypy/jit/backend/ppc/ppcgen/helper/assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/helper/assembler.py @@ -5,7 +5,7 @@ from pypy.jit.metainterp.history import FLOAT from pypy.rlib.unroll import unrolling_iterable import pypy.jit.backend.ppc.ppcgen.register as r -from pypy.rpython.lltypesystem import rffi +from pypy.rpython.lltypesystem import rffi, lltype def gen_emit_cmp_op(condition, signed=True): def f(self, op, arglocs, regalloc): @@ -68,7 +68,8 @@ | ord(mem[index]) << 24) rffi_value = rffi.cast(rffi.INT, value) - return int(rffi_value) + # do sign extension + return rffi.cast(lltype.Signed, rffi_value) def encode64(mem, i, n): mem[i+7] = chr(n & 0xFF) From noreply at buildbot.pypy.org Thu Jan 12 17:53:31 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Thu, 12 Jan 2012 17:53:31 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Set loc on STACK_LOC path of decode_inputargs. Message-ID: <20120112165331.0EDC882C03@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r51287:d4f1f24ee998 Date: 2012-01-12 11:53 -0500 http://bitbucket.org/pypy/pypy/changeset/d4f1f24ee998/ Log: Set loc on STACK_LOC path of decode_inputargs. 
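The decode32 change above makes the sign extension explicit by casting the assembled 32-bit value through `rffi.INT` and back to `lltype.Signed`. The same effect can be sketched in plain Python without rffi (a hypothetical helper, for illustration only):

```python
def sign_extend_32(value):
    """Reinterpret the low 32 bits of 'value' as a signed integer.

    This mimics what rffi.cast(rffi.INT, value) followed by
    rffi.cast(lltype.Signed, ...) achieves in the patch: on a 64-bit
    host (PPC64), a 32-bit quantity with the top bit set must become
    a negative Python/RPython integer.
    """
    value &= 0xFFFFFFFF          # keep only the low 32 bits
    if value & 0x80000000:       # top bit set -> negative in two's complement
        value -= 0x100000000
    return value
```

For example, `sign_extend_32(0xFFFFFFFF)` yields `-1` and `sign_extend_32(0x80000000)` yields `-2147483648`, while small positive values pass through unchanged.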
diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -265,7 +265,7 @@ t = REF assert t != FLOAT stack_loc = decode32(enc, j+1) - PPCFrameManager.frame_pos(stack_loc, t) + loc = PPCFrameManager.frame_pos(stack_loc, t) j += 4 else: # REG_LOC if res_type == self.FLOAT_TYPE: From noreply at buildbot.pypy.org Thu Jan 12 17:56:13 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Thu, 12 Jan 2012 17:56:13 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: load_imm doesn't take a reg value. Message-ID: <20120112165613.66D9982C03@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r51288:f104a6279600 Date: 2012-01-12 11:55 -0500 http://bitbucket.org/pypy/pypy/changeset/f104a6279600/ Log: load_imm doesn't take a reg value. diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -743,7 +743,7 @@ elif loc.is_stack(): self.mc.alloc_scratch_reg() offset = loc.value - self.mc.load_imm(r.SCRATCH.value, value) + self.mc.load_imm(r.SCRATCH, value) self.mc.store(r.SCRATCH.value, r.SPP.value, offset) self.mc.free_scratch_reg() return From noreply at buildbot.pypy.org Thu Jan 12 18:23:46 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 12 Jan 2012 18:23:46 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: little changes to make the jit inline more stuff and optimize the trace Message-ID: <20120112172346.261EE82C03@wyvern.cs.uni-duesseldorf.de> Author: l.diekmann Branch: set-strategies Changeset: r51289:93d68d35cc81 Date: 2012-01-12 17:23 +0000 http://bitbucket.org/pypy/pypy/changeset/93d68d35cc81/ Log: little changes to make the jit inline more stuff and optimize the trace diff --git a/pypy/objspace/std/setobject.py 
b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -943,8 +943,9 @@ w_set.sstorage = strategy.get_empty_storage() return - #XXX check ints and strings at once + _pick_correct_strategy(space, w_set, iterable_w) +def _pick_correct_strategy(space, w_set, iterable_w): # check for integers for w_item in iterable_w: if type(w_item) is not W_IntObject: diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -61,7 +61,12 @@ return plain_str2unicode(space, w_self._value) def listview_str(w_self): - return [s for s in w_self._value] + return _create_list_from_string(w_self._value) + +def _create_list_from_string(value): + # need this helper function to allow the jit to look inside and inline + # listview_str + return [s for s in value] registerimplementation(W_StringObject) From noreply at buildbot.pypy.org Thu Jan 12 18:29:46 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 12 Jan 2012 18:29:46 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: added tests for optimized jit output with merged strategy implementations (lists, sets, strings) Message-ID: <20120112172946.F1BA782C03@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51290:a81b07b0d748 Date: 2012-01-12 18:27 +0100 http://bitbucket.org/pypy/pypy/changeset/a81b07b0d748/ Log: added tests for optimized jit output with merged strategy implementations (lists, sets, strings) diff --git a/pypy/module/pypyjit/test_pypy_c/test_containers.py b/pypy/module/pypyjit/test_pypy_c/test_containers.py --- a/pypy/module/pypyjit/test_pypy_c/test_containers.py +++ b/pypy/module/pypyjit/test_pypy_c/test_containers.py @@ -128,3 +128,82 @@ loop, = log.loops_by_filename(self.filepath) ops = loop.ops_by_id('look') assert 'call' not in log.opnames(ops) + + #XXX the following tests only work with strategies enabled + + 
def test_should_not_create_intobject_with_sets(self): + def main(n): + i = 0 + s = set() + while i < n: + s.add(i) + i += 1 + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + opnames = log.opnames(loop.allops()) + assert opnames.count('new_with_vtable') == 0 + + def test_should_not_create_stringobject_with_sets(self): + def main(n): + i = 0 + s = set() + while i < n: + s.add(str(i)) + i += 1 + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + opnames = log.opnames(loop.allops()) + assert opnames.count('new_with_vtable') == 0 + + def test_should_not_create_intobject_with_lists(self): + def main(n): + i = 0 + l = [] + while i < n: + l.append(i) + i += 1 + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + opnames = log.opnames(loop.allops()) + assert opnames.count('new_with_vtable') == 0 + + def test_should_not_create_stringobject_with_lists(self): + def main(n): + i = 0 + l = [] + while i < n: + l.append(str(i)) + i += 1 + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + opnames = log.opnames(loop.allops()) + assert opnames.count('new_with_vtable') == 0 + + def test_optimized_create_list_from_string(self): + def main(n): + i = 0 + l = [] + while i < n: + l = list("abc" * i) + i += 1 + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + opnames = log.opnames(loop.allops()) + assert opnames.count('new_with_vtable') == 0 + + def test_optimized_create_set_from_list(self): + def main(n): + i = 0 + while i < n: + s = set([1,2,3]) + i += 1 + log = self.run(main, [1000]) + assert log.result == main(1000) + loop, = log.loops_by_filename(self.filepath) + opnames = log.opnames(loop.allops()) + assert opnames.count('new_with_vtable') == 0 From noreply at buildbot.pypy.org 
Thu Jan 12 18:29:48 2012 From: noreply at buildbot.pypy.org (l.diekmann) Date: Thu, 12 Jan 2012 18:29:48 +0100 (CET) Subject: [pypy-commit] pypy set-strategies: merge Message-ID: <20120112172948.2AA8282C03@wyvern.cs.uni-duesseldorf.de> Author: Lukas Diekmann Branch: set-strategies Changeset: r51291:a4bba3dd3493 Date: 2012-01-12 18:29 +0100 http://bitbucket.org/pypy/pypy/changeset/a4bba3dd3493/ Log: merge diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -943,8 +943,9 @@ w_set.sstorage = strategy.get_empty_storage() return - #XXX check ints and strings at once + _pick_correct_strategy(space, w_set, iterable_w) +def _pick_correct_strategy(space, w_set, iterable_w): # check for integers for w_item in iterable_w: if type(w_item) is not W_IntObject: diff --git a/pypy/objspace/std/stringobject.py b/pypy/objspace/std/stringobject.py --- a/pypy/objspace/std/stringobject.py +++ b/pypy/objspace/std/stringobject.py @@ -61,7 +61,12 @@ return plain_str2unicode(space, w_self._value) def listview_str(w_self): - return [s for s in w_self._value] + return _create_list_from_string(w_self._value) + +def _create_list_from_string(value): + # need this helper function to allow the jit to look inside and inline + # listview_str + return [s for s in value] registerimplementation(W_StringObject) From noreply at buildbot.pypy.org Thu Jan 12 18:42:26 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 12 Jan 2012 18:42:26 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: progress on transformations Message-ID: <20120112174226.E112582C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51292:8b23d6076d33 Date: 2012-01-12 19:41 +0200 http://bitbucket.org/pypy/pypy/changeset/8b23d6076d33/ Log: progress on transformations diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- 
a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -1,19 +1,28 @@ from pypy.rlib import jit from pypy.rlib.objectmodel import instantiate -from pypy.module.micronumpy.strides import calculate_broadcast_strides +from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ + calculate_slice_strides -# Iterators for arrays -# -------------------- -# all those iterators with the exception of BroadcastIterator iterate over the -# entire array in C order (the last index changes the fastest). This will -# yield all elements. Views iterate over indices and look towards strides and -# backstrides to find the correct position. Notably the offset between -# x[..., i + 1] and x[..., i] will be strides[-1]. Offset between -# x[..., k + 1, 0] and x[..., k, i_max] will be backstrides[-2] etc. +class BaseTransform(object): + pass -# BroadcastIterator works like that, but for indexes that don't change source -# in the original array, strides[i] == backstrides[i] == 0 +class ViewTransform(BaseTransform): + def __init__(self, chunks): + # 4-tuple specifying slicing + self.chunks = chunks + +class BroadcastTransform(BaseTransform): + def __init__(self, res_shape): + self.res_shape = res_shape + +class ReduceTransform(BaseTransform): + """ A reduction from ``shape`` over ``dim``. 
This also changes the order + of iteration, because we iterate over dim the most often + """ + def __init__(self, shape, dim): + self.shape = shape + self.dim = dim class BaseIterator(object): def next(self, shapelen): @@ -22,6 +31,15 @@ def done(self): raise NotImplementedError + def apply_transformations(self, arr, transformations): + v = self + for transform in transformations: + v = v.transform(arr, transform) + return v + + def transform(self, arr, t): + raise NotImplementedError + class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 @@ -36,6 +54,10 @@ def done(self): return self.offset >= self.size + def transform(self, arr, t): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).transform(arr, t) + class OneDimIterator(BaseIterator): def __init__(self, start, step, stop): self.offset = start @@ -56,22 +78,30 @@ return ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape) class ViewIterator(BaseIterator): - def __init__(self, start, strides, backstrides, shape, res_shape=None): + def __init__(self, start, strides, backstrides, shape): self.offset = start self._done = False - if res_shape is not None and res_shape != shape: - r = calculate_broadcast_strides(strides, backstrides, - shape, res_shape) - self.strides, self.backstrides = r - self.res_shape = res_shape - else: - self.strides = strides - self.backstrides = backstrides - self.res_shape = shape + self.strides = strides + self.backstrides = backstrides + self.res_shape = shape self.indices = [0] * len(self.res_shape) + def transform(self, arr, t): + if isinstance(t, BroadcastTransform): + r = calculate_broadcast_strides(self.strides, self.backstrides, + self.res_shape, t.res_shape) + return ViewIterator(self.offset, r[0], r[1], t.res_shape) + elif isinstance(t, ViewTransform): + r = calculate_slice_strides(self.res_shape, self.offset, + self.strides, + self.backstrides, t.chunks) + return ViewIterator(r[1], r[2], r[3], r[0]) + elif isinstance(t, 
ReduceTransform): + xxx + @jit.unroll_safe def next(self, shapelen): + shapelen = jit.promote(len(self.res_shape)) offset = self.offset indices = [0] * shapelen for i in range(shapelen): @@ -96,6 +126,13 @@ res._done = done return res + def apply_transformations(self, arr, transformations): + v = BaseIterator.apply_transformations(self, arr, transformations) + if len(v.res_shape) == 1: + return OneDimIterator(self.offset, self.strides[0], + self.res_shape[0]) + return v + def done(self): return self._done @@ -103,6 +140,9 @@ def next(self, shapelen): return self + def transform(self, arr, t): + pass + class AxisIterator(BaseIterator): """ Accept an addition argument dim Redorder the dimensions to iterate over dim most often. diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -32,7 +32,7 @@ slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self', 'frame', 'source', 'res_iter'], + reds=['self', 'frame', 'source'], get_printable_location=signature.new_printable_location('slice'), ) @@ -612,7 +612,7 @@ """ res_shape = res_shape or self.shape arr = arr or self - return signature.find_sig(self.create_sig(res_shape), arr) + return signature.find_sig(self.create_sig(), arr) def descr_array_iface(self, space): if not self.shape: @@ -666,7 +666,7 @@ def copy(self, space): return Scalar(self.dtype, self.value) - def create_sig(self, res_shape): + def create_sig(self): return signature.ScalarSignature(self.dtype) def get_concrete_or_scalar(self): @@ -737,11 +737,11 @@ self.size = size VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() return signature.VirtualSliceSignature( - 
self.child.create_sig(res_shape)) + self.child.create_sig()) def force_if_needed(self): if self.forced_result is None: @@ -762,11 +762,10 @@ def _del_sources(self): self.values = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) - return signature.Call1(self.ufunc, self.name, - self.values.create_sig(res_shape)) + return self.forced_result.create_sig() + return signature.Call1(self.ufunc, self.name, self.values.create_sig()) class Call2(VirtualArray): """ @@ -786,12 +785,43 @@ self.left = None self.right = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() + if self.shape != self.left.shape and self.shape != self.right.shape: + return signature.BroadcastBoth(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.left.shape: + return signature.BroadcastLeft(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.right.shape: + return signature.BroadcastRight(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) return signature.Call2(self.ufunc, self.name, self.calc_dtype, - self.left.create_sig(res_shape), - self.right.create_sig(res_shape)) + self.left.create_sig(), self.right.create_sig()) + +class SliceArray(Call2): + def __init__(self, shape, dtype, left, right): + Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, + right) + + def create_sig(self): + lsig = self.left.create_sig() + rsig = self.right.create_sig() + if self.shape != self.right.shape: + return signature.SliceloopBroadcastSignature(self.ufunc, + self.name, + self.calc_dtype, + lsig, rsig) + return signature.SliceloopSignature(self.ufunc, self.name, + self.calc_dtype, + lsig, 
rsig) class AxisReduce(Call2): """ NOTE: this is only used as a container, you should never @@ -856,11 +886,6 @@ self.strides = strides self.backstrides = backstrides - def array_sig(self, res_shape): - if res_shape is not None and self.shape != res_shape: - return signature.ViewSignature(self.dtype) - return signature.ArraySignature(self.dtype) - def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): '''Modifies builder with a representation of the array/slice The items will be seperated by a comma if comma is 1 @@ -975,7 +1000,7 @@ self.dtype is w_value.find_dtype()): self._fast_setslice(space, w_value) else: - self._sliceloop(w_value, res_shape) + self._sliceloop(w_value) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) @@ -999,21 +1024,16 @@ source.next() dest.next() - def _sliceloop(self, source, res_shape): - sig = source.find_sig(res_shape=res_shape) - frame = sig.create_frame(source, res_shape) - res_iter = view_iter_from_arr(self) - shapelen = len(res_shape) - while not res_iter.done(): - slice_driver.jit_merge_point(sig=sig, - frame=frame, - shapelen=shapelen, - self=self, source=source, - res_iter=res_iter) - self.setitem(res_iter.offset, sig.eval(frame, source).convert_to( - self.find_dtype())) + def _sliceloop(self, source): + arr = SliceArray(self.shape, self.dtype, self, source) + sig = arr.find_sig() + frame = sig.create_frame(arr) + shapelen = len(self.shape) + while not frame.done(): + slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, + shapelen=shapelen, source=source) + sig.eval(frame, arr) frame.next(shapelen) - res_iter = res_iter.next(shapelen) def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) @@ -1022,7 +1042,7 @@ class ViewArray(ConcreteArray): - def create_sig(self, res_shape): + def create_sig(self): return signature.ViewSignature(self.dtype) @@ -1086,8 +1106,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def 
create_sig(self, res_shape): - return self.array_sig(res_shape) + def create_sig(self): + return signature.ArraySignature(self.dtype) def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -142,7 +142,7 @@ scalarsig = ScalarSignature(dtype) sig = find_sig(ReduceSignature(self.func, self.name, dtype, scalarsig, - obj.create_sig(obj.shape)), obj) + obj.create_sig()), obj) frame = sig.create_frame(obj) if self.identity is None: value = sig.eval(frame, obj).convert_to(dtype) @@ -160,7 +160,7 @@ for s in shape: size *= s result = W_NDimArray(size, shape, dtype) - rightsig = obj.create_sig(obj.shape) + rightsig = obj.create_sig() # note - this is just a wrapper so signature can fetch # both left and right, nothing more, especially # this is not a true virtual array, because shapes diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,10 +1,33 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - OneDimIterator, ConstantIterator, AxisIterator + OneDimIterator, ConstantIterator, AxisIterator, ViewTransform,\ + BroadcastTransform, ReduceTransform from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib.jit import hint, unroll_safe, promote +""" Signature specifies both the numpy expression that has been constructed +and the assembler to be compiled. This is a very important observation - +Two expressions will be using the same assembler if and only if they are +compiled to the same signature. + +This is also a very convinient tool for specializations. 
For example +a + a and a + b (where a != b) will compile to different assembler because +we specialize on the same array access. + +When evaluating, signatures will create iterators per signature node, +potentially sharing some of them. Iterators depend also on the actual +expression, they're not only dependant on the array itself. For example +a + b where a is dim 2 and b is dim 1 would create a broadcasted iterator for +the array b. + +Such iterator changes are called Transformations. An actual iterator would +be a combination of array and various transformation, like view, broadcast, +dimension swapping etc. + +See interp_iter for transformations +""" + def new_printable_location(driver_name): def get_printable_location(shapelen, sig): return 'numpy ' + sig.debug_repr() + ' [%d dims,%s]' % (shapelen, driver_name) @@ -98,13 +121,10 @@ allnumbers.append(no) self.iter_no = no - def create_frame(self, arr, res_shape=None, chunks=None): - if chunks is None: - chunks = [] - res_shape = res_shape or arr.shape + def create_frame(self, arr): iterlist = [] arraylist = [] - self._create_iter(iterlist, arraylist, arr, res_shape, chunks) + self._create_iter(iterlist, arraylist, arr, []) return NumpyEvalFrame(iterlist, arraylist) @@ -126,16 +146,6 @@ def hash(self): return compute_identity_hash(self.dtype) - def allocate_view_iter(self, arr, res_shape, chunklist): - r = arr.shape, arr.start, arr.strides, arr.backstrides - if chunklist: - for chunkelem in chunklist: - r = calculate_slice_strides(r[0], r[1], r[2], r[3], chunkelem) - shape, start, strides, backstrides = r - if len(res_shape) == 1: - return OneDimIterator(start, strides[0], res_shape[0]) - return ViewIterator(start, strides, backstrides, shape, res_shape) - class ArraySignature(ConcreteSignature): def debug_repr(self): return 'Array' @@ -147,22 +157,18 @@ assert concr.dtype is self.dtype self.array_no = _add_ptr_to_cache(concr.storage, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, 
chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, res_shape, chunklist)) + iterlist.append(self.allocate_iter(concr, transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, res_shape, chunklist): - if chunklist: - #How did we get here? - assert NotImplemented - #return self.allocate_view_iter(arr, res_shape, chunklist) - return ArrayIterator(arr.size) + def allocate_iter(self, arr, transforms): + return ArrayIterator(arr.size).apply_transformations(arr, transforms) def eval(self, frame, arr): iter = frame.iterators[self.iter_no] @@ -175,7 +181,7 @@ def _invent_array_numbering(self, arr, cache): pass - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): if self.iter_no >= len(iterlist): iter = ConstantIterator() iterlist.append(iter) @@ -195,8 +201,9 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, res_shape, chunklist): - return self.allocate_view_iter(arr, res_shape, chunklist) + def allocate_iter(self, arr, transforms): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).apply_transformations(arr, transforms) class VirtualSliceSignature(Signature): def __init__(self, child): @@ -207,6 +214,9 @@ assert isinstance(arr, VirtualSlice) self.child._invent_array_numbering(arr.child, cache) + def _invent_numbering(self, cache, allnumbers): + self.child._invent_numbering({}, allnumbers) + def hash(self): return intmask(self.child.hash() ^ 1234) @@ -216,12 +226,11 @@ assert isinstance(other, VirtualSliceSignature) return self.child.eq(other.child, compare_array_no) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): 
+ def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import VirtualSlice assert isinstance(arr, VirtualSlice) - chunklist.append(arr.chunks) - self.child._create_iter(iterlist, arraylist, arr.child, res_shape, - chunklist) + transforms = transforms + [ViewTransform(arr.chunks)] + self.child._create_iter(iterlist, arraylist, arr.child, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import VirtualSlice @@ -257,11 +266,10 @@ assert isinstance(arr, Call1) self.child._invent_array_numbering(arr.values, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call1 assert isinstance(arr, Call1) - self.child._create_iter(iterlist, arraylist, arr.values, res_shape, - chunklist) + self.child._create_iter(iterlist, arraylist, arr.values, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call1 @@ -302,31 +310,68 @@ self.left._invent_numbering(cache, allnumbers) self.right._invent_numbering(cache, allnumbers) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) - self.left._create_iter(iterlist, arraylist, arr.left, res_shape, - chunklist) - self.right._create_iter(iterlist, arraylist, arr.right, res_shape, - chunklist) + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) lhs = self.left.eval(frame, arr.left).convert_to(self.calc_dtype) rhs = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + return self.binfunc(self.calc_dtype, 
lhs, rhs) def debug_repr(self): return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class BroadcastLeft(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering({}, allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + +class BroadcastRight(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(cache, allnumbers) + self.right._invent_numbering({}, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class BroadcastBoth(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering({}, allnumbers) + self.right._invent_numbering({}, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): - self.right._create_iter(iterlist, arraylist, arr, res_shape, - chunklist) + def _create_iter(self, 
iterlist, arraylist, arr, transforms): + self.right._create_iter(iterlist, arraylist, arr, transforms) def _invent_numbering(self, cache, allnumbers): self.right._invent_numbering(cache, allnumbers) @@ -340,17 +385,41 @@ def debug_repr(self): return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) +class SliceloopSignature(Call2): + def eval(self, frame, arr): + ofs = frame.iterators[0].offset + arr.left.setitem(ofs, self.right.eval(frame, arr.right).convert_to( + self.calc_dtype)) + + def debug_repr(self): + return 'SliceLoop(%s, %s, %s)' % (self.name, self.left.debug_repr(), + self.right.debug_repr()) + +class SliceloopBroadcastSignature(SliceloopSignature): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering({}, allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import SliceArray + + assert isinstance(arr, SliceArray) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + class AxisReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import AxisReduce + xxx + assert isinstance(arr, AxisReduce) assert not iterlist # we assume that later in eval iterlist.append(AxisIterator(arr.dim, arr.right.shape, arr.left.strides, arr.left.backstrides)) - self.right._create_iter(iterlist, arraylist, arr.right, arr.right.shape, - chunklist) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) def _invent_numbering(self, cache, allnumbers): no = len(allnumbers) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- 
a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -724,6 +724,7 @@ assert d[1] == 12 def test_mean(self): + skip("xxx") from numpypy import array,mean a = array(range(5)) assert a.mean() == 2.0 @@ -746,6 +747,7 @@ raises(TypeError, 'a.sum(2, 3)') def test_reduce_nd(self): + skip("xxx") from numpypy import arange a = arange(15).reshape(5, 3) assert a.sum() == 105 From noreply at buildbot.pypy.org Thu Jan 12 18:48:09 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 12 Jan 2012 18:48:09 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: oops, fix a test, I'm glad I wrote it :) Message-ID: <20120112174809.7BC4D82C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51293:03714deedfa4 Date: 2012-01-12 19:47 +0200 http://bitbucket.org/pypy/pypy/changeset/03714deedfa4/ Log: oops, fix a test, I'm glad I wrote it :) diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -128,7 +128,7 @@ def apply_transformations(self, arr, transformations): v = BaseIterator.apply_transformations(self, arr, transformations) - if len(v.res_shape) == 1: + if len(arr.shape) == 1: return OneDimIterator(self.offset, self.strides[0], self.res_shape[0]) return v diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -346,6 +346,7 @@ raises(ValueError, maximum.reduce, []) def test_reduceND(self): + skip("xxx") from numpypy import add, arange a = arange(12).reshape(3, 4) assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() From noreply at buildbot.pypy.org Thu Jan 12 19:08:52 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 12 Jan 2012 19:08:52 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Draft Message-ID: 
<20120112180852.ACF7482C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4018:4d0db926ef79 Date: 2012-01-12 19:08 +0100 http://bitbucket.org/pypy/extradoc/changeset/4d0db926ef79/ Log: Draft diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst new file mode 100644 --- /dev/null +++ b/blog/draft/tm.rst @@ -0,0 +1,111 @@ +Transactional Memory +==================== + +Here is an update about the previous blog post about the +`Global Interpreter Lock`__ (GIL). + +.. __: http://morepypy.blogspot.com/p/global-interpreter-lock-or-how-to-kill.html + +We believe we have a plan to implement an interesting model for using +multiple cores. Believe it or not, this is *better* than just removing +the infamous GIL from PyPy. You might get to use all your cores +*without ever writing threads.* + +You would instead just use some event dispatcher, say from Twisted, from +Stackless, or from your favorite GUI; or just write your own. In this +model, with minimal changes to the event dispatcher's source code --- +and of course by using a special version of PyPy --- you get some form +of automatic parallelization. The basic idea is simple: start handling +multiple events in parallel, but give each one its own transaction_. + +.. _transaction: http://en.wikipedia.org/wiki/Transactional_memory + +Threads or Events? +------------------ + +First, why would this be better than "just" removing the GIL? Because +using threads can be a mess in any complex program. Some authors (e.g. +Lee_) have argued that the reason is that threads are fundamentally +non-deterministic. This makes it very hard to reason about them. +Basically the programmer needs to "trim" down the non-determinism (e.g. +by adding locks, semaphores, etc.), and it's hard to be sure that he has +a sufficiently deterministic result, if only because he can't write +exhaustive tests for it. + +.. 
_Lee: http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf + +By contrast, consider a Twisted program. It's not a multi-threaded +program, which means that it handles the "events" one after the other. +The exact ordering of the events is not really deterministic, because +they often correspond to external events; but that's the only source of +non-determinism. The actual handling of each event occurs in a nicely +deterministic way, and most importantly, not in parallel with the +handling of other events. The same is true about other libraries like +GUI toolkits, gevent, or even Stackless. + +These two models --- threads or events --- are the two main models we +have right now. The latter is more used in Python, because it is much +simpler to use than the former, and the former doesn't give any benefit +because of the GIL. A third model, which is the only one that gives +multi-core benefits, is to use multiple processes, and do inter-process +communication. + +The problem +----------- + +Consider the case of a big program that has arbitrary complicated +dependencies. Even assuming a GIL-less Python, this is likely enough to +prevent the programmer from even starting a multi-threaded rewrite, +because it would require a huge mess of locks. He could also consider +using multiple processes instead, but the result is annoying too: the +complicated dependencies translate into a huge mess of inter-process +synchronization. + +The problem can also be down-sized to very small programs, like the kind +of hacks that you do and forget about. In this case, the dependencies +might be simpler, but you still have to learn and use a complex +inter-process library, which is overkill for the purpose. I would even +argue that this is similar to how we might feel a priori that automatic +memory management is overkill in small programs --- of course anyone who +wrote a number of 15-line Python scripts knows this to be wrong. 
This +is even *so* wrong that the opposite is obvious nowadays: it makes no +sense whatsoever to manage object lifetimes explicitly in most small +scripts. I think the same will eventually be true for using multiple +CPUs. + +Events in Transactions +---------------------- + +Consider again the Twisted example I gave above. The case I am +interested in is the case in which events are *generally mostly +independent.* By this I mean the following: there are often several +events pending in the dispatch queue (assuming the program is not under +100% 1-CPU load, otherwise the whole discussion is moot). Handling +these events is often mostly independent --- but the point is that they +don't *have* to be proved independent. In fact it is fine if they have +arbitrary complicated dependencies as described above. The point is the +expected common case. Imagine that you have a GIL-less Python and that +you can, by a wave of your hand, have all the careful locking mess +magically done. Then what I mean here is the case in which this +theoretical program would run mostly in parallel on multiple core, +without waiting too often on the locks. + +In this case, with minimal tweaks in the event dispatch loop, we can +handle multiple events on multiple threads, each in its own transaction. +A transaction is basically a tentative execution of the corresponding +piece of code: if we detect conflicts with other concurrently executing +transactions, we cancel the whole transaction and restart it from +scratch. + +By now, the fact that it can basically work should be clear: multiple +transactions will only get into conflicts when modifying the same data +structures, which is the case where the magical wand above would have +put locks. If the magical program could progress without too many +locks, then the transactional program can progress without too many +conflicts. 
Moreover, you get more than what the magical program can +give you: each event is dispatched in its own transaction, which means +that from each event's point of view, we have the illusion that nobody +else is running concurrently. This is exactly what all existing +Twisted-/Stackless-/etc.-based programs are assuming. + +xxx From noreply at buildbot.pypy.org Thu Jan 12 23:05:44 2012 From: noreply at buildbot.pypy.org (mattip) Date: Thu, 12 Jan 2012 23:05:44 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: add tests Message-ID: <20120112220544.6978A82C03@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-axisops Changeset: r51294:5c5db1df52eb Date: 2012-01-13 00:05 +0200 http://bitbucket.org/pypy/pypy/changeset/5c5db1df52eb/ Log: add tests diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -748,14 +748,26 @@ def test_reduce_nd(self): skip("xxx") - from numpypy import arange + from numpypy import arange, array a = arange(15).reshape(5, 3) assert a.sum() == 105 assert a.max() == 14 + assert array([]).sum() == 0.0 + raises(ValueError, 'array([]).max()') assert (a.sum(0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() + assert ((a + a).max() == 28) + assert ((a + a).max(0) == [24, 26, 
28]).all() + assert ((a + a).sum(1) == [6, 24, 42, 60, 78]).all() + a = array(range(105)).reshape(3, 5, 7) + assert (a[:, 1, :].sum(0) == [126, 129, 132, 135, 138, 141, 144]).all() + assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() + raises (ValueError, 'a[:, 1, :].sum(2)') + assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() + assert (a.reshape(1,-1).sum(0) == range(105)).all() + assert (a.reshape(1,-1).sum(1) == 5460) def test_identity(self): from numpypy import identity, array From noreply at buildbot.pypy.org Fri Jan 13 12:30:07 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 13 Jan 2012 12:30:07 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (edelsohn, bivab): Implement get_ and set_ interiorfileld operations Message-ID: <20120113113007.B887282C03@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r51295:6a95268fe8c5 Date: 2012-01-13 02:32 -0800 http://bitbucket.org/pypy/pypy/changeset/6a95268fe8c5/ Log: (edelsohn, bivab): Implement get_ and set_ interiorfileld operations diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -537,6 +537,54 @@ emit_getfield_raw_pure = emit_getfield_gc emit_getfield_gc_pure = emit_getfield_gc + def emit_getinteriorfield_gc(self, op, arglocs, regalloc): + (base_loc, index_loc, res_loc, + ofs_loc, ofs, itemsize, fieldsize) = arglocs + self.mc.load_imm(r.SCRATCH, itemsize.value) + self.mc.mullw(r.SCRATCH.value, index_loc.value, r.SCRATCH.value) + if ofs.value > 0: + if ofs_loc.is_imm(): + self.mc.addic(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) + else: + self.mc.add(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) + + if fieldsize.value == 8: + self.mc.ldx(res_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 4: + self.mc.lwzx(res_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 
2: + self.mc.lhzx(res_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 1: + self.mc.lbzx(res_loc.value, base_loc.value, r.SCRATCH.value) + else: + assert 0 + + #XXX Hack, Hack, Hack + if not we_are_translated(): + signed = op.getdescr().fielddescr.is_field_signed() + self._ensure_result_bit_extension(res_loc, fieldsize.value, signed) + + def emit_setinteriorfield_gc(self, op, arglocs, regalloc): + (base_loc, index_loc, value_loc, + ofs_loc, ofs, itemsize, fieldsize) = arglocs + self.mc.load_imm(r.SCRATCH, itemsize.value) + self.mc.mullw(r.SCRATCH.value, index_loc.value, r.SCRATCH.value) + if ofs.value > 0: + if ofs_loc.is_imm(): + self.mc.addic(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) + else: + self.mc.add(r.SCRATCH.value, r.SCRATCH.value, ofs_loc.value) + if fieldsize.value == 8: + self.mc.stdx(value_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 4: + self.mc.stwx(value_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 2: + self.mc.sthx(value_loc.value, base_loc.value, r.SCRATCH.value) + elif fieldsize.value == 1: + self.mc.stbx(value_loc.value, base_loc.value, r.SCRATCH.value) + else: + assert 0 + class ArrayOpAssembler(object): diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -602,6 +602,46 @@ prepare_getfield_raw_pure = prepare_getfield_gc prepare_getfield_gc_pure = prepare_getfield_gc + def prepare_getinteriorfield_gc(self, op): + t = unpack_interiorfielddescr(op.getdescr()) + ofs, itemsize, fieldsize, sign = t + args = op.getarglist() + base_loc, base_box = self._ensure_value_is_boxed(op.getarg(0), args) + index_loc, index_box = self._ensure_value_is_boxed(op.getarg(1), args) + c_ofs = ConstInt(ofs) + if _check_imm_arg(c_ofs): + ofs_loc = imm(ofs) + else: + ofs_loc, ofs_box = self._ensure_value_is_boxed(c_ofs, [base_box]) + 
self.possibly_free_var(ofs_box) + self.possibly_free_vars_for_op(op) + self.possibly_free_var(base_box) + self.possibly_free_var(index_box) + self.free_temp_vars() + result_loc = self.force_allocate_reg(op.result) + self.possibly_free_var(op.result) + return [base_loc, index_loc, result_loc, ofs_loc, imm(ofs), + imm(itemsize), imm(fieldsize)] + + def prepare_setinteriorfield_gc(self, op): + t = unpack_interiorfielddescr(op.getdescr()) + ofs, itemsize, fieldsize, sign = t + args = op.getarglist() + base_loc, base_box = self._ensure_value_is_boxed(op.getarg(0), args) + index_loc, index_box = self._ensure_value_is_boxed(op.getarg(1), args) + value_loc, value_box = self._ensure_value_is_boxed(op.getarg(2), args) + c_ofs = ConstInt(ofs) + if _check_imm_arg(c_ofs): + ofs_loc = imm(ofs) + else: + ofs_loc, ofs_box = self._ensure_value_is_boxed(c_ofs, [base_box]) + self.possibly_free_var(ofs_box) + self.possibly_free_var(base_box) + self.possibly_free_var(index_box) + self.possibly_free_var(value_box) + return [base_loc, index_loc, value_loc, ofs_loc, imm(ofs), + imm(itemsize), imm(fieldsize)] + def prepare_arraylen_gc(self, op): arraydescr = op.getdescr() assert isinstance(arraydescr, ArrayDescr) From noreply at buildbot.pypy.org Fri Jan 13 14:02:36 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 14:02:36 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Write the end. Add XXXes. Message-ID: <20120113130236.AE4ED82C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4019:45b71996531a Date: 2012-01-13 13:04 +0100 http://bitbucket.org/pypy/extradoc/changeset/45b71996531a/ Log: Write the end. Add XXXes. diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -1,6 +1,8 @@ Transactional Memory ==================== +XXX intro: what's the GIL and what's the problem + Here is an update about the previous blog post about the `Global Interpreter Lock`__ (GIL). @@ -20,6 +22,12 @@ .. 
_transaction: http://en.wikipedia.org/wiki/Transactional_memory +XXX point to Erlang + +XXX Twisted != Stackless; my point is that you should be able to tweak + both Twisted's event loops and Stackless's, to get TM benefits without + changing neither the Twisted model nor the Stackless model + Threads or Events? ------------------ @@ -98,7 +106,7 @@ scratch. By now, the fact that it can basically work should be clear: multiple -transactions will only get into conflicts when modifying the same data +transactions will only get into conflict when modifying the same data structures, which is the case where the magical wand above would have put locks. If the magical program could progress without too many locks, then the transactional program can progress without too many @@ -108,4 +116,57 @@ else is running concurrently. This is exactly what all existing Twisted-/Stackless-/etc.-based programs are assuming. -xxx +Not a perfect solution +---------------------- + +I would like to put some emphasis on the fact that TM is not a perfect +solution either. Right now, the biggest issue is that of the +performance hit that comes from STM. In time, HTM will help mitigate +the problem; but I won't deny the fact that in some cases, because it's +simple enough and/or because you really need the top performance, TM is +not the best solution. + +Also, the explanations above are silent on what is a hard point for TM, +namely system calls. The basic general solution is to suspend other +transactions when a transaction wants to do a system call, so that we +are sure that the transaction will succeed. Of course this solution is +far from optimal. 
Interestingly, it's possible to do better on a +case-by-case basis: for example, by adding in-process buffers, we can +improve the situation for sockets, by having recv() store in a buffer +what is received so that it can be re-recv()-ed later if the transaction +is cancelled; similarly, send() can be delayed in another buffer until +we are sure that the transaction can be committed. + +From my point of view, the most important point is that the TM solution +comes from the correct side of the "determinism" scale. With threads, +you have to prune down non-determinism. With TM, you start from a +mostly deterministic point, and if needed, you add non-determinism. The +reason you would want to do so is to make the transactions shorter: +shorter transactions have less risks of conflicts, and when there are +conflicts, less things to redo. So making transactions shorter +increases the parallelism that your program can achieve, while at the +same time requiring more careful thinking about the program + +In terms of an event-driven model, the equivalent would be to divide the +response of a big processing event into several events that are handled +one after the other: the first event sets things up and fires the second +event, which does the actual computation; and afterwards a third event +writes the results back. As a result, the second event's transaction +has little risks of getting cancelled. On the other hand, the writing +back needs to be aware of the fact that it's not in the same transaction +as the original setting up, which means that other unrelated +transactions may have run in-between. + +One step in the future? +----------------------- + +These, and others, are the problems of the TM approach. They are "new" +problems, too, in the sense that the existing ways of programming don't +have these problems. 
+ +Still, as you have guessed, I think that it is overall a win, and +possibly a big win --- a win that might be on the same scale for the age +of multiple-CPUs as automatic garbage collection was for the age of +plenty-of-RAM. + +--- Armin From noreply at buildbot.pypy.org Fri Jan 13 14:02:37 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 14:02:37 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Finish. Message-ID: <20120113130237.C9E5982C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4020:82661cd4dcd7 Date: 2012-01-13 14:02 +0100 http://bitbucket.org/pypy/extradoc/changeset/82661cd4dcd7/ Log: Finish. diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -1,32 +1,34 @@ Transactional Memory ==================== -XXX intro: what's the GIL and what's the problem - Here is an update about the previous blog post about the `Global Interpreter Lock`__ (GIL). .. __: http://morepypy.blogspot.com/p/global-interpreter-lock-or-how-to-kill.html -We believe we have a plan to implement an interesting model for using -multiple cores. Believe it or not, this is *better* than just removing -the infamous GIL from PyPy. You might get to use all your cores -*without ever writing threads.* +Let me remind you that the GIL is the technique used in both CPython and +PyPy to safely run multi-threaded programs: it is a global lock that +prevents multiple threads from actually running at the same time. The +reason to do that is that it would have desastrous effects in the +interpreter if both threads access the same object concurrently --- to +the point that in CPython even just manipulating the reference counter +needs to be protected by the lock. + +Keeping your Python interpreter unchanged while managing to remove the +infamous GIL: so far, this is regarded as the ultimate goal to enable +true multi-CPU usage. 
But we believe we have a plan to implement a +different model for using multiple cores. Believe it or not, this is +*better* than just removing the GIL from PyPy. You might get to use all +your cores *without ever writing threads.* You would instead just use some event dispatcher, say from Twisted, from -Stackless, or from your favorite GUI; or just write your own. In this -model, with minimal changes to the event dispatcher's source code --- -and of course by using a special version of PyPy --- you get some form -of automatic parallelization. The basic idea is simple: start handling -multiple events in parallel, but give each one its own transaction_. - -.. _transaction: http://en.wikipedia.org/wiki/Transactional_memory - -XXX point to Erlang - -XXX Twisted != Stackless; my point is that you should be able to tweak - both Twisted's event loops and Stackless's, to get TM benefits without - changing neither the Twisted model nor the Stackless model +Stackless, or from your favorite GUI; or just write your own. From +there, you (or someone else) would add some minimal extra code to the +event dispatcher's source code. Then you would run your program on a +special version of PyPy, and get some form of automatic parallelization. +Sounds magic, but the basic idea is simple: start handling multiple +events in parallel, giving each one its own *transaction.* More about +it later. Threads or Events? ------------------ @@ -36,8 +38,8 @@ Lee_) have argued that the reason is that threads are fundamentally non-deterministic. This makes it very hard to reason about them. Basically the programmer needs to "trim" down the non-determinism (e.g. -by adding locks, semaphores, etc.), and it's hard to be sure that he has -a sufficiently deterministic result, if only because he can't write +by adding locks, semaphores, etc.), and it's hard to be sure when he's +got a sufficiently deterministic result, if only because he can't write exhaustive tests for it. .. 
_Lee: http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf @@ -49,7 +51,13 @@ non-determinism. The actual handling of each event occurs in a nicely deterministic way, and most importantly, not in parallel with the handling of other events. The same is true about other libraries like -GUI toolkits, gevent, or even Stackless. +GUI toolkits, gevent, or Stackless. + +(Of course the Twisted and the Stackless models, to cite only these two, +are quite different from each other; but they have in common the fact +that they are not multi-threaded, and based instead on "events" --- +which in the Stackless case means running a tasklet from one switch() +point to the next one.) These two models --- threads or events --- are the two main models we have right now. The latter is more used in Python, because it is much @@ -65,21 +73,24 @@ dependencies. Even assuming a GIL-less Python, this is likely enough to prevent the programmer from even starting a multi-threaded rewrite, because it would require a huge mess of locks. He could also consider -using multiple processes instead, but the result is annoying too: the -complicated dependencies translate into a huge mess of inter-process +using multiple processes instead, but the result is annoying as well: +the complicated dependencies translate into a huge mess of inter-process synchronization. The problem can also be down-sized to very small programs, like the kind of hacks that you do and forget about. In this case, the dependencies -might be simpler, but you still have to learn and use a complex -inter-process library, which is overkill for the purpose. I would even -argue that this is similar to how we might feel a priori that automatic -memory management is overkill in small programs --- of course anyone who -wrote a number of 15-line Python scripts knows this to be wrong. 
This -is even *so* wrong that the opposite is obvious nowadays: it makes no -sense whatsoever to manage object lifetimes explicitly in most small -scripts. I think the same will eventually be true for using multiple -CPUs. +might be simpler, but you still have to learn and use subtle locking +patterns or a complex inter-process library, which is overkill for the +purpose. I would even argue that this is similar to how we might feel a +priori that automatic memory management is overkill in small programs +--- of course anyone who wrote a number of 15-line Python scripts knows +this to be wrong. This is even *so* wrong that the opposite is obvious +nowadays: it makes no sense whatsoever to manage object lifetimes +explicitly in most small scripts. + +(I think the same will eventually be true for using multiple CPUs, but +the correct solution will take time to mature, like garbage collectors +did. This post is a step in hopefully the right direction ``:-)``) Events in Transactions ---------------------- @@ -87,55 +98,69 @@ Consider again the Twisted example I gave above. The case I am interested in is the case in which events are *generally mostly independent.* By this I mean the following: there are often several -events pending in the dispatch queue (assuming the program is not under -100% 1-CPU load, otherwise the whole discussion is moot). Handling -these events is often mostly independent --- but the point is that they -don't *have* to be proved independent. In fact it is fine if they have -arbitrary complicated dependencies as described above. The point is the -expected common case. Imagine that you have a GIL-less Python and that -you can, by a wave of your hand, have all the careful locking mess -magically done. Then what I mean here is the case in which this -theoretical program would run mostly in parallel on multiple core, -without waiting too often on the locks. 
+events pending in the dispatch queue (assuming the program is using 100% +of our single usable CPU, otherwise the whole discussion is moot). +Handling these events is often mostly independent --- but the point is +that they don't *have* to be proved independent. In fact it is fine if +they have arbitrary complicated dependencies as described above. The +point is the expected common case. Imagine that you have a GIL-less +Python and that you can, by a wave of your hand, have all the careful +locking mess magically done. Then what I mean here is the case in which +such a theoretical program would run mostly in parallel on multiple +core, without waiting too often on the locks. In this case, with minimal tweaks in the event dispatch loop, we can handle multiple events on multiple threads, each in its own transaction. -A transaction is basically a tentative execution of the corresponding +A transaction_ is basically a tentative execution of the corresponding piece of code: if we detect conflicts with other concurrently executing -transactions, we cancel the whole transaction and restart it from +transactions, we abort the whole transaction and restart it from scratch. +.. _transaction: http://en.wikipedia.org/wiki/Transactional_memory + By now, the fact that it can basically work should be clear: multiple transactions will only get into conflict when modifying the same data structures, which is the case where the magical wand above would have put locks. If the magical program could progress without too many locks, then the transactional program can progress without too many -conflicts. Moreover, you get more than what the magical program can -give you: each event is dispatched in its own transaction, which means -that from each event's point of view, we have the illusion that nobody -else is running concurrently. This is exactly what all existing +conflicts. 
In a way, you get even more than what the magical program +can give you: each event is dispatched in its own transaction, which +means that from each event's point of view, we have the illusion that +nobody else is running concurrently. This is exactly what all existing Twisted-/Stackless-/etc.-based programs are assuming. +Note that this solution, without transactions, already exists in some +other languages: for example, Erlang is all about independent events. +This is the simple case where we can just run them on multiple cores, +knowing by construction of the language that you can't get conflicts. +Of course, it doesn't work for Python or for a lot of other languages. +From that point of view, what I'm suggesting is merely that +transactional memory could be a good model to cope with the risks of +conflicts that come from not having a special-made language. + Not a perfect solution ---------------------- -I would like to put some emphasis on the fact that TM is not a perfect -solution either. Right now, the biggest issue is that of the -performance hit that comes from STM. In time, HTM will help mitigate -the problem; but I won't deny the fact that in some cases, because it's -simple enough and/or because you really need the top performance, TM is -not the best solution. +I would like to put some emphasis on the fact that transactional memory +(TM) is not a perfect solution either. Right now, the biggest issue is +the performance hit that comes from the software implementation (STM). +In time, hardware support (HTM) is `likely to show up`_ and help +mitigate the problem; but I won't deny the fact that in some cases, +because it's simple enough and/or because you really need the top +performance, TM is not the best solution. + +.. _`likely to show up`: http://en.wikipedia.org/wiki/Haswell_%28microarchitecture%29 Also, the explanations above are silent on what is a hard point for TM, namely system calls. 
The basic general solution is to suspend other -transactions when a transaction wants to do a system call, so that we -are sure that the transaction will succeed. Of course this solution is -far from optimal. Interestingly, it's possible to do better on a -case-by-case basis: for example, by adding in-process buffers, we can -improve the situation for sockets, by having recv() store in a buffer -what is received so that it can be re-recv()-ed later if the transaction -is cancelled; similarly, send() can be delayed in another buffer until -we are sure that the transaction can be committed. +transactions as soon as a transaction does its first system call, so +that we are sure that the transaction will succeed. Of course this +solution is far from optimal. Interestingly, it's possible to do better +on a case-by-case basis: for example, by adding in-process buffers, we +can improve the situation for sockets, by having recv() store in a +buffer what is received so that it can be re-recv()-ed later if the +transaction is aborted; similarly, send() or writes to log files can be +delayed until we are sure that the transaction will commit. From my point of view, the most important point is that the TM solution comes from the correct side of the "determinism" scale. With threads, @@ -145,14 +170,14 @@ shorter transactions have less risks of conflicts, and when there are conflicts, less things to redo. So making transactions shorter increases the parallelism that your program can achieve, while at the -same time requiring more careful thinking about the program +same time requiring more care. In terms of an event-driven model, the equivalent would be to divide the response of a big processing event into several events that are handled one after the other: the first event sets things up and fires the second event, which does the actual computation; and afterwards a third event writes the results back. 
As a result, the second event's transaction -has little risks of getting cancelled. On the other hand, the writing +has little risks of getting aborted. On the other hand, the writing back needs to be aware of the fact that it's not in the same transaction as the original setting up, which means that other unrelated transactions may have run in-between. @@ -166,7 +191,7 @@ Still, as you have guessed, I think that it is overall a win, and possibly a big win --- a win that might be on the same scale for the age -of multiple-CPUs as automatic garbage collection was for the age of -plenty-of-RAM. +of multiple CPUs as automatic garbage collection was 20 years ago for +the age of RAM size explosion. --- Armin From noreply at buildbot.pypy.org Fri Jan 13 14:50:17 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 14:50:17 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add Message-ID: <20120113135017.3D36882CAA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4021:f6512e401b4c Date: 2012-01-13 14:50 +0100 http://bitbucket.org/pypy/extradoc/changeset/f6512e401b4c/ Log: add diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -194,4 +194,6 @@ of multiple CPUs as automatic garbage collection was 20 years ago for the age of RAM size explosion. +Stay tuned for more! 
+ --- Armin From noreply at buildbot.pypy.org Fri Jan 13 14:59:53 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 14:59:53 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Tweaks Message-ID: <20120113135953.5AF4582CAA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4022:397ef6a17af3 Date: 2012-01-13 14:59 +0100 http://bitbucket.org/pypy/extradoc/changeset/397ef6a17af3/ Log: Tweaks diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -1,8 +1,8 @@ Transactional Memory ==================== -Here is an update about the previous blog post about the -`Global Interpreter Lock`__ (GIL). +Here is an update about the `previous blog post`__ about the +Global Interpreter Lock (GIL). .. __: http://morepypy.blogspot.com/p/global-interpreter-lock-or-how-to-kill.html @@ -11,8 +11,8 @@ prevents multiple threads from actually running at the same time. The reason to do that is that it would have desastrous effects in the interpreter if both threads access the same object concurrently --- to -the point that in CPython even just manipulating the reference counter -needs to be protected by the lock. +the point that in CPython even just manipulating the object's reference +counter needs to be protected by the lock. Keeping your Python interpreter unchanged while managing to remove the infamous GIL: so far, this is regarded as the ultimate goal to enable @@ -82,11 +82,12 @@ might be simpler, but you still have to learn and use subtle locking patterns or a complex inter-process library, which is overkill for the purpose. I would even argue that this is similar to how we might feel a -priori that automatic memory management is overkill in small programs ---- of course anyone who wrote a number of 15-line Python scripts knows -this to be wrong. 
This is even *so* wrong that the opposite is obvious -nowadays: it makes no sense whatsoever to manage object lifetimes -explicitly in most small scripts. +priori that automatic memory management is unnecessary in small programs +--- but of course anyone who wrote a number of 15-line Python scripts +knows this to be wrong. This is even *so* wrong that the opposite is +obvious nowadays: it makes no sense whatsoever to manage object +lifetimes explicitly in most small scripts. A garbage collector is not +overkill; it is part of the basics that you expect. (I think the same will eventually be true for using multiple CPUs, but the correct solution will take time to mature, like garbage collectors From noreply at buildbot.pypy.org Fri Jan 13 15:10:21 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 13 Jan 2012 15:10:21 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: tweaks to the first section Message-ID: <20120113141021.8B33782CAA@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: extradoc Changeset: r4023:12fcf122a300 Date: 2012-01-13 15:09 +0100 http://bitbucket.org/pypy/extradoc/changeset/12fcf122a300/ Log: tweaks to the first section diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -14,9 +14,9 @@ the point that in CPython even just manipulating the object's reference counter needs to be protected by the lock. -Keeping your Python interpreter unchanged while managing to remove the -infamous GIL: so far, this is regarded as the ultimate goal to enable -true multi-CPU usage. But we believe we have a plan to implement a +So far, the ultimate goal to enable true multi-CPU usage has been to remove +the infamous GIL from the interpreter, so that multiple threads could actually +run in parallel. But we think we have a plan to implement a different model for using multiple cores. Believe it or not, this is *better* than just removing the GIL from PyPy. 
You might get to use all your cores *without ever writing threads.* @@ -24,7 +24,8 @@ You would instead just use some event dispatcher, say from Twisted, from Stackless, or from your favorite GUI; or just write your own. From there, you (or someone else) would add some minimal extra code to the -event dispatcher's source code. Then you would run your program on a +event dispatcher's source code, to exploit the new transactional features +offered by PyPy. Then you would run your program on a special version of PyPy, and get some form of automatic parallelization. Sounds magic, but the basic idea is simple: start handling multiple events in parallel, giving each one its own *transaction.* More about From noreply at buildbot.pypy.org Fri Jan 13 15:22:38 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 15:22:38 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: shorten this Message-ID: <20120113142238.27ABF82CAA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4024:b2b5484f6be3 Date: 2012-01-13 15:22 +0100 http://bitbucket.org/pypy/extradoc/changeset/b2b5484f6be3/ Log: shorten this diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -82,17 +82,14 @@ of hacks that you do and forget about. In this case, the dependencies might be simpler, but you still have to learn and use subtle locking patterns or a complex inter-process library, which is overkill for the -purpose. I would even argue that this is similar to how we might feel a -priori that automatic memory management is unnecessary in small programs ---- but of course anyone who wrote a number of 15-line Python scripts -knows this to be wrong. This is even *so* wrong that the opposite is -obvious nowadays: it makes no sense whatsoever to manage object -lifetimes explicitly in most small scripts. A garbage collector is not -overkill; it is part of the basics that you expect. +purpose. 
-(I think the same will eventually be true for using multiple CPUs, but -the correct solution will take time to mature, like garbage collectors -did. This post is a step in hopefully the right direction ``:-)``) +(This is similar to how explicit memory management is not very hard for +small programs --- but still, nowadays a lot of people agree that +automatic memory management is easier for programs of all sizes. I +think the same will eventually be true for using multiple CPUs, but the +correct solution will take time to mature, like garbage collectors did. +This post is a step in hopefully the right direction ``:-)``) Events in Transactions ---------------------- From noreply at buildbot.pypy.org Fri Jan 13 15:27:13 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 13 Jan 2012 15:27:13 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: try to clarify the event-vs-thread idea Message-ID: <20120113142713.218CD82CAA@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: extradoc Changeset: r4025:2dc171daaa33 Date: 2012-01-13 15:26 +0100 http://bitbucket.org/pypy/extradoc/changeset/2dc171daaa33/ Log: try to clarify the event-vs-thread idea diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -97,13 +97,22 @@ Events in Transactions ---------------------- -Consider again the Twisted example I gave above. The case I am -interested in is the case in which events are *generally mostly -independent.* By this I mean the following: there are often several +Let me introduce the notion of *independent events*: two events are +independent if they don't touch the same set of objects. In a multi-threaded +world, it means that they can be executed in parallel without needing any lock +to ensure correctness. + +Events might also be *mostly independent*, i.e. they rarely access the same +object concurrently. 
Of course, in a multi-threaded world we would still need +a lock to ensure correctness, but the point is that without the lock the +program would run correctly "most of the time". + +Consider again the Twisted example I gave above. There are often several events pending in the dispatch queue (assuming the program is using 100% -of our single usable CPU, otherwise the whole discussion is moot). -Handling these events is often mostly independent --- but the point is -that they don't *have* to be proved independent. In fact it is fine if +of our single usable CPU, otherwise the whole discussion is moot). The case I am +interested in is the case in which these events are *generally mostly +independent*, i.e. we expect few conflicts between them. However +they don't *have* to be proved independent. In fact it is fine if they have arbitrary complicated dependencies as described above. The point is the expected common case. Imagine that you have a GIL-less Python and that you can, by a wave of your hand, have all the careful From noreply at buildbot.pypy.org Fri Jan 13 15:27:14 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 13 Jan 2012 15:27:14 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge heads Message-ID: <20120113142714.3C1C182CAA@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: extradoc Changeset: r4026:56bab9df185f Date: 2012-01-13 15:26 +0100 http://bitbucket.org/pypy/extradoc/changeset/56bab9df185f/ Log: merge heads diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -82,17 +82,14 @@ of hacks that you do and forget about. In this case, the dependencies might be simpler, but you still have to learn and use subtle locking patterns or a complex inter-process library, which is overkill for the -purpose. 
I would even argue that this is similar to how we might feel a -priori that automatic memory management is unnecessary in small programs ---- but of course anyone who wrote a number of 15-line Python scripts -knows this to be wrong. This is even *so* wrong that the opposite is -obvious nowadays: it makes no sense whatsoever to manage object -lifetimes explicitly in most small scripts. A garbage collector is not -overkill; it is part of the basics that you expect. +purpose. -(I think the same will eventually be true for using multiple CPUs, but -the correct solution will take time to mature, like garbage collectors -did. This post is a step in hopefully the right direction ``:-)``) +(This is similar to how explicit memory management is not very hard for +small programs --- but still, nowadays a lot of people agree that +automatic memory management is easier for programs of all sizes. I +think the same will eventually be true for using multiple CPUs, but the +correct solution will take time to mature, like garbage collectors did. +This post is a step in hopefully the right direction ``:-)``) Events in Transactions ---------------------- From noreply at buildbot.pypy.org Fri Jan 13 15:31:49 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 15:31:49 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: tweak Message-ID: <20120113143149.8A49C82CAA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4027:2053ec4541aa Date: 2012-01-13 15:31 +0100 http://bitbucket.org/pypy/extradoc/changeset/2053ec4541aa/ Log: tweak diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -102,7 +102,8 @@ Events might also be *mostly independent*, i.e. they rarely access the same object concurrently. Of course, in a multi-threaded world we would still need a lock to ensure correctness, but the point is that without the lock the -program would run correctly "most of the time". 
+program would run correctly "most of the time" (and likely segfault the rest +of the time). Consider again the Twisted example I gave above. There are often several events pending in the dispatch queue (assuming the program is using 100% From noreply at buildbot.pypy.org Fri Jan 13 15:40:38 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 15:40:38 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Rewrote this using the official term. Message-ID: <20120113144038.89AA782CF4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4028:874eed76d72e Date: 2012-01-13 15:40 +0100 http://bitbucket.org/pypy/extradoc/changeset/874eed76d72e/ Log: Rewrote this using the official term. diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -101,9 +101,10 @@ Events might also be *mostly independent*, i.e. they rarely access the same object concurrently. Of course, in a multi-threaded world we would still need -a lock to ensure correctness, but the point is that without the lock the -program would run correctly "most of the time" (and likely segfault the rest -of the time). +locks to ensure correctness, but the point is that the locks are rarely causing +pauses: `lock contention`_ is low. + +.. _`lock contention`: http://en.wikipedia.org/wiki/Lock_%28computer_science%29 Consider again the Twisted example I gave above. 
There are often several events pending in the dispatch queue (assuming the program is using 100% From noreply at buildbot.pypy.org Fri Jan 13 15:42:52 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 15:42:52 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: change the title Message-ID: <20120113144252.512C182CF4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4029:2645e022fef1 Date: 2012-01-13 15:42 +0100 http://bitbucket.org/pypy/extradoc/changeset/2645e022fef1/ Log: change the title diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -192,8 +192,8 @@ as the original setting up, which means that other unrelated transactions may have run in-between. -One step in the future? ------------------------ +One step towards the future? +---------------------------- These, and others, are the problems of the TM approach. They are "new" problems, too, in the sense that the existing ways of programming don't From noreply at buildbot.pypy.org Fri Jan 13 17:23:51 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 17:23:51 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Add a short comment. Message-ID: <20120113162351.F263882CF4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4030:8a88d35f7c69 Date: 2012-01-13 17:23 +0100 http://bitbucket.org/pypy/extradoc/changeset/8a88d35f7c69/ Log: Add a short comment. diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -2,7 +2,8 @@ ==================== Here is an update about the `previous blog post`__ about the -Global Interpreter Lock (GIL). +Global Interpreter Lock (GIL). In 5 months, the point of view +changed quite a bit. .. 
__: http://morepypy.blogspot.com/p/global-interpreter-lock-or-how-to-kill.html From noreply at buildbot.pypy.org Fri Jan 13 17:27:31 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 17:27:31 +0100 (CET) Subject: [pypy-commit] pypy concurrent-marksweep: Fix: these locks from the GC don't have anything to do with the GIL Message-ID: <20120113162731.C4BCB82CF4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: concurrent-marksweep Changeset: r51296:2dce8616fa62 Date: 2012-01-13 17:27 +0100 http://bitbucket.org/pypy/pypy/changeset/2dce8616fa62/ Log: Fix: these locks from the GC don't have anything to do with the GIL diff --git a/pypy/rpython/memory/gc/concurrentgen.py b/pypy/rpython/memory/gc/concurrentgen.py --- a/pypy/rpython/memory/gc/concurrentgen.py +++ b/pypy/rpython/memory/gc/concurrentgen.py @@ -739,7 +739,7 @@ def acquire(self, lock): if we_are_translated(): - ll_thread.c_thread_acquirelock(lock, 1) + ll_thread.c_thread_acquirelock_NOAUTO(lock, 1) else: assert ll_thread.get_ident() == self.main_thread_ident while not self.try_acquire(lock): @@ -749,11 +749,11 @@ self._reraise_from_collector_thread() def try_acquire(self, lock): - res = ll_thread.c_thread_acquirelock(lock, 0) + res = ll_thread.c_thread_acquirelock_NOAUTO(lock, 0) return rffi.cast(lltype.Signed, res) != 0 def release(self, lock): - ll_thread.c_thread_releaselock(lock) + ll_thread.c_thread_releaselock_NOAUTO(lock) def _reraise_from_collector_thread(self): exc, val, tb = self.collector._exc_info @@ -881,10 +881,10 @@ assert self.collector_ident != -1 def acquire(self, lock): - ll_thread.c_thread_acquirelock(lock, 1) + ll_thread.c_thread_acquirelock_NOAUTO(lock, 1) def release(self, lock): - ll_thread.c_thread_releaselock(lock) + ll_thread.c_thread_releaselock_NOAUTO(lock) def get_mark(self, obj): return self.gc.get_mark(obj) From noreply at buildbot.pypy.org Fri Jan 13 17:37:58 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 13 Jan 2012 17:37:58 +0100 (CET) 
Subject: [pypy-commit] extradoc extradoc: typo Message-ID: <20120113163758.22AAC82CF4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4031:a0bbad7107d7 Date: 2012-01-13 18:37 +0200 http://bitbucket.org/pypy/extradoc/changeset/a0bbad7107d7/ Log: typo diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -10,7 +10,7 @@ Let me remind you that the GIL is the technique used in both CPython and PyPy to safely run multi-threaded programs: it is a global lock that prevents multiple threads from actually running at the same time. The -reason to do that is that it would have desastrous effects in the +reason to do that is that it would have disastrous effects in the interpreter if both threads access the same object concurrently --- to the point that in CPython even just manipulating the object's reference counter needs to be protected by the lock. From noreply at buildbot.pypy.org Fri Jan 13 17:52:36 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 13 Jan 2012 17:52:36 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: expand the introduction Message-ID: <20120113165236.A6B6E82CF4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4032:6b30ed03026f Date: 2012-01-13 17:52 +0100 http://bitbucket.org/pypy/extradoc/changeset/6b30ed03026f/ Log: expand the introduction diff --git a/blog/draft/tm.rst b/blog/draft/tm.rst --- a/blog/draft/tm.rst +++ b/blog/draft/tm.rst @@ -11,16 +11,25 @@ PyPy to safely run multi-threaded programs: it is a global lock that prevents multiple threads from actually running at the same time. The reason to do that is that it would have disastrous effects in the -interpreter if both threads access the same object concurrently --- to +interpreter if several threads access the same object concurrently --- to the point that in CPython even just manipulating the object's reference counter needs to be protected by the lock. 
-So far, the ultimate goal to enable true multi-CPU usage has been to remove -the infamous GIL from the interpreter, so that multiple threads could actually -run in parallel. But we think we have a plan to implement a -different model for using multiple cores. Believe it or not, this is -*better* than just removing the GIL from PyPy. You might get to use all -your cores *without ever writing threads.* +So far, the ultimate goal to enable true multi-CPU usage has been to +remove the infamous GIL from the interpreter, so that multiple threads +could actually run in parallel. It's a lot of work, but this has been +done in Jython. The reason that it has not been done in CPython so far +is that it's even more work: we would need to care not only about +carefully adding fine-grained locks everywhere, but also about reference +counting; and there are a lot more C extension modules that would need +care, too. And we don't have locking primitives as performant as +Java's, which have been hand-tuned since ages (e.g. to use help from the +JIT compiler). + +But we think we have a plan to implement a different model for using +multiple cores. Believe it or not, this is *better* than just removing +the GIL from PyPy. You might get to use all your cores *without ever +writing threads.* You would instead just use some event dispatcher, say from Twisted, from Stackless, or from your favorite GUI; or just write your own. From From notifications-noreply at bitbucket.org Fri Jan 13 19:18:01 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Fri, 13 Jan 2012 18:18:01 -0000 Subject: [pypy-commit] Notification: pypydoc Message-ID: <20120113181801.7604.69103@bitbucket13.managed.contegix.com> You have received a notification from tomo cocoa. Hi, I forked pypy. My fork is at https://bitbucket.org/cocoatomo/pypydoc. 
-- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Fri Jan 13 22:21:05 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 13 Jan 2012 22:21:05 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: finish numpy-axisops (hopefully) Message-ID: <20120113212105.AAF8A82CF4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51297:dbcd5ab2e0a2 Date: 2012-01-13 23:20 +0200 http://bitbucket.org/pypy/pypy/changeset/dbcd5ab2e0a2/ Log: finish numpy-axisops (hopefully) diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -16,14 +16,6 @@ def __init__(self, res_shape): self.res_shape = res_shape -class ReduceTransform(BaseTransform): - """ A reduction from ``shape`` over ``dim``. This also changes the order - of iteration, because we iterate over dim the most often - """ - def __init__(self, shape, dim): - self.shape = shape - self.dim = dim - class BaseIterator(object): def next(self, shapelen): raise NotImplementedError @@ -96,8 +88,6 @@ self.strides, self.backstrides, t.chunks) return ViewIterator(r[1], r[2], r[3], r[0]) - elif isinstance(t, ReduceTransform): - xxx @jit.unroll_safe def next(self, shapelen): @@ -144,59 +134,52 @@ pass class AxisIterator(BaseIterator): - """ Accept an addition argument dim - Redorder the dimensions to iterate over dim most often. - Set a flag at the end of each run over dim. 
- """ - def __init__(self, dim, shape, strides, backstrides): - self.shape = shape + def __init__(self, start, dim, shape, strides, backstrides): + self.res_shape = shape[:] + self.strides = strides[:dim] + [0] + strides[dim:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] + self.first_line = False self.indices = [0] * len(shape) self._done = False - self.axis_done = False - self.offset = -1 + self.offset = start self.dim = dim - self.strides = strides[:dim] + [0] + strides[dim:] - self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] - self.dim_order = [dim] - for i in range(len(shape) - 1, -1, -1): - if i != self.dim: - self.dim_order.append(i) + + @jit.unroll_safe + def next(self, shapelen): + offset = self.offset + first_line = self.first_line + indices = [0] * shapelen + for i in range(shapelen): + indices[i] = self.indices[i] + done = False + for i in range(shapelen - 1, -1, -1): + if indices[i] < self.res_shape[i] - 1: + indices[i] += 1 + offset += self.strides[i] + break + else: + if i == self.dim: + first_line = False + indices[i] = 0 + offset -= self.backstrides[i] + else: + done = True + res = instantiate(AxisIterator) + res.offset = offset + res.indices = indices + res.strides = self.strides + res.backstrides = self.backstrides + res.res_shape = self.res_shape + res._done = done + res.first_line = first_line + res.dim = self.dim + return res def done(self): return self._done - @jit.unroll_safe - def next(self, shapelen): - offset = self.offset - done = False - indices = [0] * shapelen - for i in range(shapelen): - indices[i] = self.indices[i] - axis_done = False - for i in self.dim_order: - if indices[i] < self.shape[i] - 1: - indices[i] += 1 - break - else: - if i == self.dim: - axis_done = True - offset += 1 - indices[i] = 0 - else: - done = True - res = instantiate(AxisIterator) - res.axis_done = axis_done - res.strides = self.strides - res.backstrides = self.backstrides - res.offset = offset - res.indices = indices - 
res.shape = self.shape - res.dim = self.dim - res.dim_order = self.dim_order - res._done = done - return res - # ------ other iterators that are not part of the computation frame ---------- + class SkipLastAxisIterator(object): def __init__(self, arr): self.arr = arr diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -19,7 +19,7 @@ axisreduce_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self','arr', 'frame', 'shapelen'], + reds=['self','arr', 'identity', 'frame'], # name='axisreduce', get_printable_location=new_printable_location('axisreduce'), ) @@ -121,6 +121,8 @@ dim = space.int_w(w_dim) assert isinstance(self, W_Ufunc2) obj = convert_to_array(space, w_obj) + if dim >= len(obj.shape): + raise OperationError(space.w_ValueError, space.wrap("axis(=%d) out of bounds" % dim)) if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) @@ -165,31 +167,38 @@ # both left and right, nothing more, especially # this is not a true virtual array, because shapes # don't quite match - arr = AxisReduce(self.func, self.name, shape, dtype, + arr = AxisReduce(self.func, self.name, obj.shape, dtype, result, obj, dim) scalarsig = ScalarSignature(dtype) sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, scalarsig, rightsig), arr) + assert isinstance(sig, AxisReduceSignature) frame = sig.create_frame(arr) shapelen = len(obj.shape) - if self.identity is None: - frame.identity = sig.eval(frame, arr).convert_to(dtype) - frame.next(shapelen) - else: - frame.identity = self.identity.convert_to(dtype) - frame.value = frame.identity - self.reduce_axis_loop(frame, sig, shapelen, arr) + self.reduce_axis_loop(frame, sig, shapelen, arr, self.identity) return result - def reduce_axis_loop(self, frame, sig, shapelen, arr): + def reduce_axis_loop(self, 
frame, sig, shapelen, arr, identity): + # note - we can be adventurous here, depending on the exact field + # layout. For now let's say we iterate the original way and + # simply follow the original iteration order while not frame.done(): axisreduce_driver.jit_merge_point(frame=frame, self=self, sig=sig, + identity=identity, shapelen=shapelen, arr=arr) - sig.eval(frame, arr) + iter = frame.get_final_iter() + v = sig.eval(frame, arr).convert_to(sig.calc_dtype) + if iter.first_line: + if identity is not None: + value = self.func(sig.calc_dtype, identity, v) + else: + value = v + else: + cur = arr.left.getitem(iter.offset) + value = self.func(sig.calc_dtype, cur, v) + arr.left.setitem(iter.offset, value) frame.next(shapelen) - # store the last value, when everything is done - arr.left.setitem(frame.iterators[0].offset, frame.value) def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): while not frame.done(): diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,9 +1,8 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - OneDimIterator, ConstantIterator, AxisIterator, ViewTransform,\ - BroadcastTransform, ReduceTransform -from pypy.module.micronumpy.strides import calculate_slice_strides + ConstantIterator, AxisIterator, ViewTransform,\ + BroadcastTransform from pypy.rlib.jit import hint, unroll_safe, promote """ Signature specifies both the numpy expression that has been constructed @@ -71,8 +70,6 @@ break else: self.final_iter = -1 - self.value = None - self.identity = None def done(self): final_iter = promote(self.final_iter) @@ -83,7 +80,10 @@ @unroll_safe def next(self, shapelen): for i in range(len(self.iterators)): - self.iterators[i] = self.iterators[i].next(shapelen) +
self.iterators[i] = self.iterators[i].next(shapelen) + + def get_final_iter(self): + return self.iterators[promote(self.final_iter)] def _add_ptr_to_cache(ptr, cache): i = 0 @@ -96,6 +96,9 @@ cache.append(ptr) return res +def new_cache(): + return r_dict(sigeq_no_numbering, sighash) + class Signature(object): _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -104,7 +107,7 @@ iter_no = 0 def invent_numbering(self): - cache = r_dict(sigeq_no_numbering, sighash) + cache = new_cache() allnumbers = [] self._invent_numbering(cache, allnumbers) @@ -215,7 +218,7 @@ self.child._invent_array_numbering(arr.child, cache) def _invent_numbering(self, cache, allnumbers): - self.child._invent_numbering({}, allnumbers) + self.child._invent_numbering(new_cache(), allnumbers) def hash(self): return intmask(self.child.hash() ^ 1234) @@ -331,7 +334,7 @@ class BroadcastLeft(Call2): def _invent_numbering(self, cache, allnumbers): - self.left._invent_numbering({}, allnumbers) + self.left._invent_numbering(new_cache(), allnumbers) self.right._invent_numbering(cache, allnumbers) def _create_iter(self, iterlist, arraylist, arr, transforms): @@ -345,7 +348,7 @@ class BroadcastRight(Call2): def _invent_numbering(self, cache, allnumbers): self.left._invent_numbering(cache, allnumbers) - self.right._invent_numbering({}, allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call2 @@ -357,8 +360,8 @@ class BroadcastBoth(Call2): def _invent_numbering(self, cache, allnumbers): - self.left._invent_numbering({}, allnumbers) - self.right._invent_numbering({}, allnumbers) + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call2 @@ -387,6 +390,9 @@ class 
SliceloopSignature(Call2): def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) ofs = frame.iterators[0].offset arr.left.setitem(ofs, self.right.eval(frame, arr.right).convert_to( self.calc_dtype)) @@ -397,7 +403,7 @@ class SliceloopBroadcastSignature(SliceloopSignature): def _invent_numbering(self, cache, allnumbers): - self.left._invent_numbering({}, allnumbers) + self.left._invent_numbering(new_cache(), allnumbers) self.right._invent_numbering(cache, allnumbers) def _create_iter(self, iterlist, arraylist, arr, transforms): @@ -410,20 +416,18 @@ class AxisReduceSignature(Call2): def _create_iter(self, iterlist, arraylist, arr, transforms): - from pypy.module.micronumpy.interp_numarray import AxisReduce - - xxx + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + ConcreteArray assert isinstance(arr, AxisReduce) - assert not iterlist # we assume that later in eval - iterlist.append(AxisIterator(arr.dim, arr.right.shape, - arr.left.strides, - arr.left.backstrides)) + left = arr.left + assert isinstance(left, ConcreteArray) + iterlist.append(AxisIterator(left.start, arr.dim, arr.shape, + left.strides, left.backstrides)) self.right._create_iter(iterlist, arraylist, arr.right, transforms) def _invent_numbering(self, cache, allnumbers): - no = len(allnumbers) - allnumbers.append(no) + allnumbers.append(0) self.right._invent_numbering(cache, allnumbers) def _invent_array_numbering(self, arr, cache): @@ -433,13 +437,10 @@ self.right._invent_array_numbering(arr.right, cache) def eval(self, frame, arr): - if frame.iterators[0].axis_done: - arr.left.setitem(frame.iterators[0].offset, frame.value) - frame.value = frame.identity - v = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) - print v.value, frame.value.value - frame.value = self.binfunc(self.calc_dtype, frame.value, v) - return frame.value + from pypy.module.micronumpy.interp_numarray import AxisReduce + assert 
isinstance(arr, AxisReduce) + return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -724,7 +724,6 @@ assert d[1] == 12 def test_mean(self): - skip("xxx") from numpypy import array,mean a = array(range(5)) assert a.mean() == 2.0 @@ -747,25 +746,25 @@ raises(TypeError, 'a.sum(2, 3)') def test_reduce_nd(self): - skip("xxx") from numpypy import arange, array a = arange(15).reshape(5, 3) assert a.sum() == 105 assert a.max() == 14 - assert array([]).sum() = 0.0 - raises(ValueError,'array([]).sum()') + assert array([]).sum() == 0.0 + raises(ValueError, 'array([]).max()') assert (a.sum(0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() assert ((a + a).max() == 28) - assert ((a + a).max(0) == [24, 26. 
28]).all() + assert ((a + a).max(0) == [24, 26, 28]).all() assert ((a + a).sum(1) == [6, 24, 42, 60, 78]).all() a = array(range(105)).reshape(3, 5, 7) assert (a[:, 1, :].sum(0) == [126, 129, 132, 135, 138, 141, 144]).all() assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() raises (ValueError, 'a[:, 1, :].sum(2)') assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() + skip("Those are broken on reshape, fix!") assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -336,7 +336,7 @@ from numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) - raises(TypeError, add.reduce, 1) + raises(ValueError, add.reduce, 1) def test_reduce1D(self): from numpypy import add, maximum @@ -346,7 +346,6 @@ raises(ValueError, maximum.reduce, []) def test_reduceND(self): - skip("xxx") from numpypy import add, arange a = arange(12).reshape(3, 4) assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() From noreply at buildbot.pypy.org Fri Jan 13 22:29:44 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 13 Jan 2012 22:29:44 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: make test_zjit run again. Fails, because produces utter nonsense, next step Message-ID: <20120113212944.1957582CF4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51298:2bfdde95bd9f Date: 2012-01-13 23:29 +0200 http://bitbucket.org/pypy/pypy/changeset/2bfdde95bd9f/ Log: make test_zjit run again. 
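The axis-reduce machinery introduced in the diffs above — an AxisIterator whose stride along the reduced dimension is forced to 0, plus a `first_line` flag that tells `reduce_axis_loop` whether to seed the accumulator or combine with it — can be sketched as a plain-Python model. This is illustrative only: the name `axis_reduce` is made up, and the real code is RPython driven by the JIT.

```python
# Illustrative model (plain Python, not RPython) of the branch's
# axis-reduce strategy: one pass over the input, where the output
# offset is computed with the reduced dimension's stride forced to 0,
# and a "first line" test decides whether to seed or to combine.
import operator
from itertools import product

def axis_reduce(func, data, shape, dim, identity=None):
    out_shape = shape[:dim] + shape[dim + 1:]

    def strides(shp):
        # row-major strides
        s = [1] * len(shp)
        for i in range(len(shp) - 2, -1, -1):
            s[i] = s[i + 1] * shp[i + 1]
        return s

    in_s, out_s = strides(shape), strides(out_shape)
    size = 1
    for n in out_shape:
        size *= n
    out = [None] * size
    for idx in product(*[range(n) for n in shape]):
        v = data[sum(i * s for i, s in zip(idx, in_s))]
        # every element along `dim` maps to the same output slot
        o = sum(i * s for i, s in zip(idx[:dim] + idx[dim + 1:], out_s))
        if idx[dim] == 0:                      # plays the role of first_line
            out[o] = v if identity is None else func(identity, v)
        else:
            out[o] = func(out[o], v)
    return out

data = list(range(6))                # a 2x3 array [[0, 1, 2], [3, 4, 5]]
print(axis_reduce(operator.add, data, (2, 3), 0))   # [3, 5, 7]
print(axis_reduce(operator.add, data, (2, 3), 1))   # [3, 12]
```

Note how no separate "flush the last value" step is needed, which is exactly what the diff removes from the old `reduce_axis_loop`: each partial result is written into the output as it is produced.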
Fails, because produces utter nonsense, next step diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -32,7 +32,7 @@ slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self', 'frame', 'source'], + reds=['self', 'frame', 'arr'], get_printable_location=signature.new_printable_location('slice'), ) @@ -1000,7 +1000,8 @@ self.dtype is w_value.find_dtype()): self._fast_setslice(space, w_value) else: - self._sliceloop(w_value) + arr = SliceArray(self.shape, self.dtype, self, w_value) + self._sliceloop(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) @@ -1024,14 +1025,14 @@ source.next() dest.next() - def _sliceloop(self, source): - arr = SliceArray(self.shape, self.dtype, self, source) + def _sliceloop(self, arr): sig = arr.find_sig() frame = sig.create_frame(arr) shapelen = len(self.shape) while not frame.done(): slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, - shapelen=shapelen, source=source) + arr=arr, + shapelen=shapelen) sig.eval(frame, arr) frame.next(shapelen) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -74,7 +74,7 @@ def done(self): final_iter = promote(self.final_iter) if final_iter < 0: - return False + assert False return self.iterators[final_iter].done() @unroll_safe @@ -83,7 +83,10 @@ self.iterators[i] = self.iterators[i].next(shapelen) def get_final_iter(self): - return self.iterators[promote(self.final_iter)] + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] def _add_ptr_to_cache(ptr, cache): i = 0 From noreply at buildbot.pypy.org Fri Jan 13 22:38:59 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 13 Jan 2012 22:38:59 
+0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: some immutable fields Message-ID: <20120113213859.12FBF82CF4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51299:a6fc54ab906e Date: 2012-01-13 23:36 +0200 http://bitbucket.org/pypy/pypy/changeset/a6fc54ab906e/ Log: some immutable fields diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -771,6 +771,8 @@ """ Intermediate class for performing binary operations. """ + _immutable_fields_ = ['left', 'right'] + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -828,6 +830,8 @@ encounter such things in the wild. Remove this comment when we'll make AxisReduce lazy """ + _immutable_fields_ = ['left', 'right'] + def __init__(self, ufunc, name, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) From noreply at buildbot.pypy.org Fri Jan 13 22:39:02 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 13 Jan 2012 22:39:02 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: merge default Message-ID: <20120113213902.E7F0482CF4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51300:283834dcbfc3 Date: 2012-01-13 23:38 +0200 http://bitbucket.org/pypy/pypy/changeset/283834dcbfc3/ Log: merge default diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski 
Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -84,34 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py 
+++ b/lib_pypy/_ctypes/structure.py @@ -73,8 +73,12 @@ class Field(object): def __init__(self, name, offset, size, ctype, num, is_bitfield): - for k in ('name', 'offset', 'size', 'ctype', 'num', 'is_bitfield'): - self.__dict__[k] = locals()[k] + self.__dict__['name'] = name + self.__dict__['offset'] = offset + self.__dict__['size'] = size + self.__dict__['ctype'] = ctype + self.__dict__['num'] = num + self.__dict__['is_bitfield'] = is_bitfield def __setattr__(self, name, value): raise AttributeError(name) diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/__init__.py @@ -0,0 +1,2 @@ +from _numpypy import * +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/fromnumeric.py @@ -0,0 +1,2400 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. +# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) +# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. 
+__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. + + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. + indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. + out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. + + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. 
+ + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplemented('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. + order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. + + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. + + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raise if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose make the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modiying the + # initial object. 
+ >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties. Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. 
Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. 
+ + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... ) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. 
+ axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. + + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. 
+ axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. + + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. 
+ + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. + lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. 
+ + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '>> x + array([(1, 0), (0, 1)], + dtype=[('x', '>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed. 
+
+    See Also
+    --------
+    ndarray.argmax, argmin
+    amax : The maximum value along a given axis.
+    unravel_index : Convert a flat index into an index tuple.
+
+    Notes
+    -----
+    In case of multiple occurrences of the maximum values, the indices
+    corresponding to the first occurrence are returned.
+
+    Examples
+    --------
+    >>> a = np.arange(6).reshape(2,3)
+    >>> a
+    array([[0, 1, 2],
+           [3, 4, 5]])
+    >>> np.argmax(a)
+    5
+    >>> np.argmax(a, axis=0)
+    array([1, 1, 1])
+    >>> np.argmax(a, axis=1)
+    array([2, 2])
+
+    >>> b = np.arange(6)
+    >>> b[1] = 5
+    >>> b
+    array([0, 5, 2, 3, 4, 5])
+    >>> np.argmax(b) # Only the first occurrence is returned.
+    1
+
+    """
+    if axis is not None:
+        raise NotImplementedError('the axis argument is not supported yet')
+    if not hasattr(a, 'argmax'):
+        a = numpypy.array(a)
+    return a.argmax()
+
+
+def argmin(a, axis=None):
+    """
+    Return the indices of the minimum values along an axis.
+
+    See Also
+    --------
+    argmax : Similar function.  Please refer to `numpy.argmax` for detailed
+        documentation.
+
+    """
+    if axis is not None:
+        raise NotImplementedError('the axis argument is not supported yet')
+    if not hasattr(a, 'argmin'):
+        a = numpypy.array(a)
+    return a.argmin()
+
+
+def searchsorted(a, v, side='left'):
+    """
+    Find indices where elements should be inserted to maintain order.
+
+    Find the indices into a sorted array `a` such that, if the corresponding
+    elements in `v` were inserted before the indices, the order of `a` would
+    be preserved.
+
+    Parameters
+    ----------
+    a : 1-D array_like
+        Input array, sorted in ascending order.
+    v : array_like
+        Values to insert into `a`.
+    side : {'left', 'right'}, optional
+        If 'left', the index of the first suitable location found is given.
+        If 'right', return the last such index.  If there is no suitable
+        index, return either 0 or N (where N is the length of `a`).
+
+    Returns
+    -------
+    indices : array of ints
+        Array of insertion points with the same shape as `v`.
+
+    See Also
+    --------
+    sort : Return a sorted copy of an array.
+    histogram : Produce histogram from 1-D data.
+
+    Notes
+    -----
+    Binary search is used to find the required insertion points.
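The stdlib `bisect` module implements exactly this binary search; a sketch of the scalar case on a plain sorted list (hypothetical helper, not this module's API):

```python
import bisect

def searchsorted_scalar(a, v, side='left'):
    # bisect_left gives the first suitable insertion point,
    # bisect_right the last one, mirroring side='left'/'right'.
    if side == 'left':
        return bisect.bisect_left(a, v)
    return bisect.bisect_right(a, v)

print(searchsorted_scalar([1, 2, 3, 4, 5], 3))           # 2
print(searchsorted_scalar([1, 2, 3, 4, 5], 3, 'right'))  # 3
```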
+
+    As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing
+    `nan` values. The enhanced sort order is documented in `sort`.
+
+    Examples
+    --------
+    >>> np.searchsorted([1,2,3,4,5], 3)
+    2
+    >>> np.searchsorted([1,2,3,4,5], 3, side='right')
+    3
+    >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3])
+    array([0, 5, 1, 2])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def resize(a, new_shape):
+    """
+    Return a new array with the specified shape.
+
+    If the new array is larger than the original array, then the new
+    array is filled with repeated copies of `a`.  Note that this behavior
+    is different from a.resize(new_shape) which fills with zeros instead
+    of repeated copies of `a`.
+
+    Parameters
+    ----------
+    a : array_like
+        Array to be resized.
+
+    new_shape : int or tuple of int
+        Shape of resized array.
+
+    Returns
+    -------
+    reshaped_array : ndarray
+        The new array is formed from the data in the old array, repeated
+        if necessary to fill out the required number of elements.  The
+        data are repeated in the order that they are stored in memory.
+
+    See Also
+    --------
+    ndarray.resize : resize an array in-place.
+
+    Examples
+    --------
+    >>> a=np.array([[0,1],[2,3]])
+    >>> np.resize(a,(1,4))
+    array([[0, 1, 2, 3]])
+    >>> np.resize(a,(2,4))
+    array([[0, 1, 2, 3],
+           [0, 1, 2, 3]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def squeeze(a):
+    """
+    Remove single-dimensional entries from the shape of an array.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+
+    Returns
+    -------
+    squeezed : ndarray
+        The input array, but with all dimensions of length 1
+        removed.  Whenever possible, a view on `a` is returned.
+
+    Examples
+    --------
+    >>> x = np.array([[[0], [1], [2]]])
+    >>> x.shape
+    (1, 3, 1)
+    >>> np.squeeze(x).shape
+    (3,)
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def diagonal(a, offset=0, axis1=0, axis2=1):
+    """
+    Return specified diagonals.
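For the 2-D case the rule ``a[i, i+offset]`` can be sketched directly on nested lists; `diag2d` is a hypothetical helper for illustration only:

```python
def diag2d(a, offset=0):
    # Collect a[i][i + offset] while both indices stay in range.
    rows, cols = len(a), len(a[0])
    out = []
    for i in range(rows):
        j = i + offset
        if 0 <= j < cols:
            out.append(a[i][j])
    return out

a = [[0, 1],
     [2, 3]]
print(diag2d(a))      # [0, 3]
print(diag2d(a, 1))   # [1]
print(diag2d(a, -1))  # [2]
```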
+ + If `a` is 2-D, returns the diagonal of `a` with the given offset, + i.e., the collection of elements of the form ``a[i, i+offset]``. If + `a` has more than two dimensions, then the axes specified by `axis1` + and `axis2` are used to determine the 2-D sub-array whose diagonal is + returned. The shape of the resulting array can be determined by + removing `axis1` and `axis2` and appending an index to the right equal + to the size of the resulting diagonals. + + Parameters + ---------- + a : array_like + Array from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be positive or + negative. Defaults to main diagonal (0). + axis1 : int, optional + Axis to be used as the first axis of the 2-D sub-arrays from which + the diagonals should be taken. Defaults to first axis (0). + axis2 : int, optional + Axis to be used as the second axis of the 2-D sub-arrays from + which the diagonals should be taken. Defaults to second axis (1). + + Returns + ------- + array_of_diagonals : ndarray + If `a` is 2-D, a 1-D array containing the diagonal is returned. + If the dimension of `a` is larger, then an array of diagonals is + returned, "packed" from left-most dimension to right-most (e.g., + if `a` is 3-D, then the diagonals are "packed" along rows). + + Raises + ------ + ValueError + If the dimension of `a` is less than 2. + + See Also + -------- + diag : MATLAB work-a-like for 1-D and 2-D arrays. + diagflat : Create diagonal arrays. + trace : Sum along diagonals. + + Examples + -------- + >>> a = np.arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + A 3-D example: + + >>> a = np.arange(8).reshape(2,2,2); a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0, # Main diagonals of two arrays created by skipping + ... 0, # across the outer(left)-most axis last and + ... 1) # the "middle" (row) axis first. 
+ array([[0, 6], + [1, 7]]) + + The sub-arrays whose main diagonals we just obtained; note that each + corresponds to fixing the right-most (column) axis, and that the + diagonals are "packed" in rows. + + >>> a[:,:,0] # main diagonal is [0 6] + array([[0, 2], + [4, 6]]) + >>> a[:,:,1] # main diagonal is [1 7] + array([[1, 3], + [5, 7]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): + """ + Return the sum along diagonals of the array. + + If `a` is 2-D, the sum along its diagonal with the given offset + is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i. + + If `a` has more than two dimensions, then the axes specified by axis1 and + axis2 are used to determine the 2-D sub-arrays whose traces are returned. + The shape of the resulting array is the same as that of `a` with `axis1` + and `axis2` removed. + + Parameters + ---------- + a : array_like + Input array, from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to 0. + axis1, axis2 : int, optional + Axes to be used as the first and second axis of the 2-D sub-arrays + from which the diagonals should be taken. Defaults are the first two + axes of `a`. + dtype : dtype, optional + Determines the data-type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and `a` is + of integer type of precision less than the default integer + precision, then the default integer precision is used. Otherwise, + the precision is the same as that of `a`. + out : ndarray, optional + Array into which the output is placed. Its type is preserved and + it must be of the right shape to hold the output. + + Returns + ------- + sum_along_diagonals : ndarray + If `a` is 2-D, the sum along the diagonal is returned. 
If `a` has
+        larger dimensions, then an array of sums along diagonals is returned.
+
+    See Also
+    --------
+    diag, diagonal, diagflat
+
+    Examples
+    --------
+    >>> np.trace(np.eye(3))
+    3.0
+    >>> a = np.arange(8).reshape((2,2,2))
+    >>> np.trace(a)
+    array([6, 8])
+
+    >>> a = np.arange(24).reshape((2,2,2,3))
+    >>> np.trace(a).shape
+    (2, 3)
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def ravel(a, order='C'):
+    """
+    Return a flattened array.
+
+    A 1-D array, containing the elements of the input, is returned.  A copy is
+    made only if needed.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.  The elements in ``a`` are read in the order specified by
+        `order`, and packed as a 1-D array.
+    order : {'C','F', 'A', 'K'}, optional
+        The elements of ``a`` are read in this order. 'C' means to view
+        the elements in C (row-major) order. 'F' means to view the elements
+        in Fortran (column-major) order. 'A' means to view the elements
+        in 'F' order if a is Fortran contiguous, 'C' order otherwise.
+        'K' means to view the elements in the order they occur in memory,
+        except for reversing the data when strides are negative.
+        By default, 'C' order is used.
+
+    Returns
+    -------
+    1d_array : ndarray
+        Output of the same dtype as `a`, and of shape ``(a.size,)``.
+
+    See Also
+    --------
+    ndarray.flat : 1-D iterator over an array.
+    ndarray.flatten : 1-D array copy of the elements of an array
+                      in row-major order.
+
+    Notes
+    -----
+    In row-major order, the row index varies the slowest, and the column
+    index the quickest.  This can be generalized to multiple dimensions,
+    where row-major order implies that the index along the first axis
+    varies slowest, and the index along the last quickest.  The opposite holds
+    for Fortran-, or column-major, mode.
+
+    Examples
+    --------
+    It is equivalent to ``reshape(-1, order=order)``.
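In C order, flattening is just a depth-first, row-by-row walk; a recursive sketch on nested lists (hypothetical helper, assumes a list-of-lists input):

```python
def ravel_c(a):
    # Depth-first, left-to-right traversal == C (row-major) order.
    if not isinstance(a, list):
        return [a]
    out = []
    for item in a:
        out.extend(ravel_c(item))
    return out

print(ravel_c([[1, 2, 3], [4, 5, 6]]))  # [1, 2, 3, 4, 5, 6]
```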
+ + >>> x = np.array([[1, 2, 3], [4, 5, 6]]) + >>> print np.ravel(x) + [1 2 3 4 5 6] + + >>> print x.reshape(-1) + [1 2 3 4 5 6] + + >>> print np.ravel(x, order='F') + [1 4 2 5 3 6] + + When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering: + + >>> print np.ravel(x.T) + [1 4 2 5 3 6] + >>> print np.ravel(x.T, order='A') + [1 2 3 4 5 6] + + When ``order`` is 'K', it will preserve orderings that are neither 'C' + nor 'F', but won't reverse axes: + + >>> a = np.arange(3)[::-1]; a + array([2, 1, 0]) + >>> a.ravel(order='C') + array([2, 1, 0]) + >>> a.ravel(order='K') + array([2, 1, 0]) + + >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a + array([[[ 0, 2, 4], + [ 1, 3, 5]], + [[ 6, 8, 10], + [ 7, 9, 11]]]) + >>> a.ravel(order='C') + array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11]) + >>> a.ravel(order='K') + array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def nonzero(a): + """ + Return the indices of the elements that are non-zero. + + Returns a tuple of arrays, one for each dimension of `a`, containing + the indices of the non-zero elements in that dimension. The + corresponding non-zero values can be obtained with:: + + a[nonzero(a)] + + To group the indices by element, rather than dimension, use:: + + transpose(nonzero(a)) + + The result of this is always a 2-D array, with a row for + each non-zero element. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + tuple_of_arrays : tuple + Indices of elements that are non-zero. + + See Also + -------- + flatnonzero : + Return indices that are non-zero in the flattened version of the input + array. + ndarray.nonzero : + Equivalent ndarray method. + count_nonzero : + Counts the number of non-zero elements in the input array. 
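The tuple-of-index-arrays shape of the result can be sketched for a 2-D nested list; `nonzero2d` is a hypothetical helper, not this module's API:

```python
def nonzero2d(a):
    # One index list per dimension, scanned in row-major order,
    # so zip(rows, cols) walks the non-zero elements in C order.
    rows, cols = [], []
    for i, row in enumerate(a):
        for j, val in enumerate(row):
            if val != 0:
                rows.append(i)
                cols.append(j)
    return rows, cols

eye = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(nonzero2d(eye))  # ([0, 1, 2], [0, 1, 2])
```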
+ + Examples + -------- + >>> x = np.eye(3) + >>> x + array([[ 1., 0., 0.], + [ 0., 1., 0.], + [ 0., 0., 1.]]) + >>> np.nonzero(x) + (array([0, 1, 2]), array([0, 1, 2])) + + >>> x[np.nonzero(x)] + array([ 1., 1., 1.]) + >>> np.transpose(np.nonzero(x)) + array([[0, 0], + [1, 1], + [2, 2]]) + + A common use for ``nonzero`` is to find the indices of an array, where + a condition is True. Given an array `a`, the condition `a` > 3 is a + boolean array and since False is interpreted as 0, np.nonzero(a > 3) + yields the indices of the `a` where the condition is true. + + >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]]) + >>> a > 3 + array([[False, False, False], + [ True, True, True], + [ True, True, True]], dtype=bool) + >>> np.nonzero(a > 3) + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + The ``nonzero`` method of the boolean array can also be called. + + >>> (a > 3).nonzero() + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + """ + raise NotImplemented('Waiting on interp level method') + + +def shape(a): + """ + Return the shape of an array. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + shape : tuple of ints + The elements of the shape tuple give the lengths of the + corresponding array dimensions. + + See Also + -------- + alen + ndarray.shape : Equivalent array method. + + Examples + -------- + >>> np.shape(np.eye(3)) + (3, 3) + >>> np.shape([[1, 2]]) + (1, 2) + >>> np.shape([0]) + (1,) + >>> np.shape(0) + () + + >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + >>> np.shape(a) + (2,) + >>> a.shape + (2,) + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape + + +def compress(condition, a, axis=None, out=None): + """ + Return selected slices of an array along given axis. + + When working along a given axis, a slice along that axis is returned in + `output` for each index where `condition` evaluates to True. 
When + working on a 1-D array, `compress` is equivalent to `extract`. + + Parameters + ---------- + condition : 1-D array of bools + Array that selects which entries to return. If len(condition) + is less than the size of `a` along the given axis, then output is + truncated to the length of the condition array. + a : array_like + Array from which to extract a part. + axis : int, optional + Axis along which to take slices. If None (default), work on the + flattened array. + out : ndarray, optional + Output array. Its type is preserved and it must be of the right + shape to hold the output. + + Returns + ------- + compressed_array : ndarray + A copy of `a` without the slices along axis for which `condition` + is false. + + See Also + -------- + take, choose, diag, diagonal, select + ndarray.compress : Equivalent method. + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4], [5, 6]]) + >>> a + array([[1, 2], + [3, 4], + [5, 6]]) + >>> np.compress([0, 1], a, axis=0) + array([[3, 4]]) + >>> np.compress([False, True, True], a, axis=0) + array([[3, 4], + [5, 6]]) + >>> np.compress([False, True], a, axis=1) + array([[2], + [4], + [6]]) + + Working on the flattened array does not return slices along an axis but + selects elements. + + >>> np.compress([False, True], a) + array([2]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def clip(a, a_min, a_max, out=None): + """ + Clip (limit) the values in an array. + + Given an interval, values outside the interval are clipped to + the interval edges. For example, if an interval of ``[0, 1]`` + is specified, values smaller than 0 become 0, and values larger + than 1 become 1. + + Parameters + ---------- + a : array_like + Array containing elements to clip. + a_min : scalar or array_like + Minimum value. + a_max : scalar or array_like + Maximum value. If `a_min` or `a_max` are array_like, then they will + be broadcasted to the shape of `a`. 
+ out : ndarray, optional + The results will be placed in this array. It may be the input + array for in-place clipping. `out` must be of the right shape + to hold the output. Its type is preserved. + + Returns + ------- + clipped_array : ndarray + An array with the elements of `a`, but where values + < `a_min` are replaced with `a_min`, and those > `a_max` + with `a_max`. + + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.arange(10) + >>> np.clip(a, 1, 8) + array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, 3, 6, out=a) + array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) + >>> a = np.arange(10) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8) + array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sum(a, axis=None, dtype=None, out=None): + """ + Sum of array elements over a given axis. + + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + dtype : dtype, optional + The type of the returned array and of the accumulator in which + the elements are summed. By default, the dtype of `a` is used. + An exception is when `a` has an integer type with less precision + than the default platform integer. In that case, the default + platform integer is used instead. + out : ndarray, optional + Array into which the output is placed. By default, a new array is + created. If `out` is given, it must be of the appropriate shape + (the shape of `a` with `axis` removed, i.e., + ``numpy.delete(a.shape, axis)``). Its type is preserved. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. 
If `a` is a 0-d array, or if `axis` is None, a scalar + is returned. If an output array is specified, a reference to + `out` is returned. + + See Also + -------- + ndarray.sum : Equivalent method. + + cumsum : Cumulative sum of array elements. + + trapz : Integration of array values using the composite trapezoidal rule. + + mean, average + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> np.sum([0.5, 1.5]) + 2.0 + >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32) + 1 + >>> np.sum([[0, 1], [0, 5]]) + 6 + >>> np.sum([[0, 1], [0, 5]], axis=0) + array([0, 6]) + >>> np.sum([[0, 1], [0, 5]], axis=1) + array([1, 5]) + + If the accumulator is too small, overflow occurs: + + >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8) + -128 + + """ + if not hasattr(a, "sum"): + a = numpypy.array(a) + return a.sum() + + +def product (a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + See Also + -------- + prod : equivalent function; see for details. + + """ + raise NotImplemented('Waiting on interp level method') + + +def sometrue(a, axis=None, out=None): + """ + Check whether some values are true. + + Refer to `any` for full documentation. + + See Also + -------- + any : equivalent function + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def alltrue (a, axis=None, out=None): + """ + Check if all elements of input array are true. + + See Also + -------- + numpy.all : Equivalent function; see for details. + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + +def any(a,axis=None, out=None): + """ + Test whether any array element along a given axis evaluates to True. + + Returns single boolean unless `axis` is not ``None`` + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical OR is performed. 
The default + (`axis` = `None`) is to perform a logical OR over a flattened + input array. `axis` may be negative, in which case it counts + from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output and its type is preserved + (e.g., if it is of type float, then it will remain so, returning + 1.0 for True and 0.0 for False, regardless of the type of `a`). + See `doc.ufuncs` (Section "Output arguments") for details. + + Returns + ------- + any : bool or ndarray + A new boolean or `ndarray` is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.any : equivalent method + + all : Test whether all elements along a given axis evaluate to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity evaluate + to `True` because these are not equal to zero. + + Examples + -------- + >>> np.any([[True, False], [True, True]]) + True + + >>> np.any([[True, False], [False, False]], axis=0) + array([ True, False], dtype=bool) + + >>> np.any([-1, 0, 5]) + True + + >>> np.any(np.nan) + True + + >>> o=np.array([False]) + >>> z=np.any([-1, 4, 5], out=o) + >>> z, o + (array([ True], dtype=bool), array([ True], dtype=bool)) + >>> # Check now that z is a reference to o + >>> z is o + True + >>> id(z), id(o) # identity of z and o # doctest: +SKIP + (191614240, 191614240) + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def all(a,axis=None, out=None): + """ + Test whether all array elements along a given axis evaluate to True. + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical AND is performed. + The default (`axis` = `None`) is to perform a logical AND + over a flattened input array. 
`axis` may be negative, in which + case it counts from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. + It must have the same shape as the expected output and its + type is preserved (e.g., if ``dtype(out)`` is float, the result + will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section + "Output arguments") for more details. + + Returns + ------- + all : ndarray, bool + A new boolean or array is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.all : equivalent method + + any : Test whether any element along a given axis evaluates to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity + evaluate to `True` because these are not equal to zero. + + Examples + -------- + >>> np.all([[True,False],[True,True]]) + False + + >>> np.all([[True,False],[True,True]], axis=0) + array([ True, False], dtype=bool) + + >>> np.all([-1, 4, 5]) + True + + >>> np.all([1.0, np.nan]) + True + + >>> o=np.array([False]) + >>> z=np.all([-1, 4, 5], out=o) + >>> id(z), id(o), z # doctest: +SKIP + (28293632, 28293632, array([ True], dtype=bool)) + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + + +def cumsum (a, axis=None, dtype=None, out=None): + """ + Return the cumulative sum of the elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative sum is computed. The default + (None) is to compute the cumsum over the flattened array. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults + to the dtype of `a`, unless `a` has an integer dtype with a + precision less than that of the default platform integer. In + that case, the default platform integer is used. 
+ out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. See `doc.ufuncs` + (Section "Output arguments") for more details. + + Returns + ------- + cumsum_along_axis : ndarray. + A new array holding the result is returned unless `out` is + specified, in which case a reference to `out` is returned. The + result has the same size as `a`, and the same shape as `a` if + `axis` is not None or `a` is a 1-d array. + + + See Also + -------- + sum : Sum array elements. + + trapz : Integration of array values using the composite trapezoidal rule. + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> a + array([[1, 2, 3], + [4, 5, 6]]) + >>> np.cumsum(a) + array([ 1, 3, 6, 10, 15, 21]) + >>> np.cumsum(a, dtype=float) # specifies type of output value(s) + array([ 1., 3., 6., 10., 15., 21.]) + + >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns + array([[1, 2, 3], + [5, 7, 9]]) + >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows + array([[ 1, 3, 6], + [ 4, 9, 15]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def cumproduct(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product over the given axis. + + + See Also + -------- + cumprod : equivalent function; see for details. + + """ + raise NotImplemented('Waiting on interp level method') + + +def ptp(a, axis=None, out=None): + """ + Range of values (maximum - minimum) along an axis. + + The name of the function comes from the acronym for 'peak to peak'. + + Parameters + ---------- + a : array_like + Input values. + axis : int, optional + Axis along which to find the peaks. By default, flatten the + array. + out : array_like + Alternative output array in which to place the result. 
It must + have the same shape and buffer length as the expected output, + but the type of the output values will be cast if necessary. + + Returns + ------- + ptp : ndarray + A new array holding the result, unless `out` was + specified, in which case a reference to `out` is returned. + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.ptp(x, axis=0) + array([2, 2]) + + >>> np.ptp(x, axis=1) + array([1, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def amax(a, axis=None, out=None): + """ + Return the maximum of an array or maximum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default flattened input is used. + out : ndarray, optional + Alternate output array in which to place the result. Must be of + the same shape and buffer length as the expected output. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amax : ndarray or scalar + Maximum of `a`. If `axis` is None, the result is a scalar value. + If `axis` is given, the result is an array of dimension + ``a.ndim - 1``. + + See Also + -------- + nanmax : NaN values are ignored instead of being propagated. + fmax : same behavior as the C99 fmax function. + argmax : indices of the maximum values. + + Notes + ----- + NaN values are propagated, that is if at least one item is NaN, the + corresponding max value will be NaN as well. To ignore NaN values + (MATLAB behavior), please use nanmax. 
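NaN propagation does not come for free in plain Python: `max` compares with `<`, and every comparison with NaN is False, so its result depends on element order. A propagating sketch (hypothetical helper, not this module's implementation):

```python
import math

def max_propagate_nan(seq):
    # Any NaN poisons the result, matching amax's documented
    # behaviour (use a nanmax-style function to ignore NaNs).
    for x in seq:
        if math.isnan(x):
            return float('nan')
    return max(seq)

print(max_propagate_nan([0.0, 1.0, 3.0]))  # 3.0
print(math.isnan(max_propagate_nan([1.0, float('nan'), 2.0])))  # True
```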
+ + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amax(a) + 3 + >>> np.amax(a, axis=0) + array([2, 3]) + >>> np.amax(a, axis=1) + array([1, 3]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amax(b) + nan + >>> np.nanmax(b) + 4.0 + + """ + if not hasattr(a, "max"): + a = numpypy.array(a) + return a.max() + + +def amin(a, axis=None, out=None): + """ + Return the minimum of an array or minimum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default a flattened input is used. + out : ndarray, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + See `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amin : ndarray + A new array or a scalar array with the result. + + See Also + -------- + nanmin: nan values are ignored instead of being propagated + fmin: same behavior as the C99 fmin function + argmin: Return the indices of the minimum values. + + amax, nanmax, fmax + + Notes + ----- + NaN values are propagated, that is if at least one item is nan, the + corresponding min value will be nan as well. To ignore NaN values (matlab + behavior), please use nanmin. + + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amin(a) # Minimum of the flattened array + 0 + >>> np.amin(a, axis=0) # Minima along the first axis + array([0, 1]) + >>> np.amin(a, axis=1) # Minima along the second axis + array([0, 2]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amin(b) + nan + >>> np.nanmin(b) + 0.0 + + """ + # amin() is equivalent to min() + if not hasattr(a, 'min'): + a = numpypy.array(a) + return a.min() + +def alen(a): + """ + Return the length of the first dimension of the input array. 
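For anything with a length this is just `len`; the only subtlety is that a bare scalar has no first dimension and is first promoted to a 1-element array. A sketch of that behaviour (`alen_sketch` is a hypothetical helper):

```python
def alen_sketch(a):
    # len() for sequences; a scalar is treated as a 1-element
    # array, so its "first dimension" has length 1.
    try:
        return len(a)
    except TypeError:
        return len([a])

print(alen_sketch([[0] * 5] * 7))  # 7
print(alen_sketch(3))              # 1
```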
+ + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + l : int + Length of the first dimension of `a`. + + See Also + -------- + shape, size + + Examples + -------- + >>> a = np.zeros((7,4,5)) + >>> a.shape[0] + 7 + >>> np.alen(a) + 7 + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape[0] + + +def prod(a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis over which the product is taken. By default, the product + of all elements is calculated. + dtype : data-type, optional + The data-type of the returned array, as well as of the accumulator + in which the elements are multiplied. By default, if `a` is of + integer type, `dtype` is the default platform integer. (Note: if + the type of `a` is unsigned, then so is `dtype`.) Otherwise, + the dtype is the same as that of `a`. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the + output values will be cast if necessary. + + Returns + ------- + product_along_axis : ndarray, see `dtype` parameter above. + An array shaped as `a` but with the specified axis removed. + Returns a reference to `out` if specified. + + See Also + -------- + ndarray.prod : equivalent method + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. 
That means that, on a 32-bit platform: + + >>> x = np.array([536870910, 536870910, 536870910, 536870910]) + >>> np.prod(x) #random + 16 + + Examples + -------- + By default, calculate the product of all elements: + + >>> np.prod([1.,2.]) + 2.0 + + Even when the input array is two-dimensional: + + >>> np.prod([[1.,2.],[3.,4.]]) + 24.0 + + But we can also specify the axis over which to multiply: + + >>> np.prod([[1.,2.],[3.,4.]], axis=1) + array([ 2., 12.]) + + If the type of `x` is unsigned, then the output type is + the unsigned platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.uint8) + >>> np.prod(x).dtype == np.uint + True + + If `x` is of a signed integer type, then the output type + is the default platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.int8) + >>> np.prod(x).dtype == np.int + True + + """ + raise NotImplemented('Waiting on interp level method') + + +def cumprod(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product of elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative product is computed. By default + the input is flattened. + dtype : dtype, optional + Type of the returned array, as well as of the accumulator in which + the elements are multiplied. If *dtype* is not specified, it + defaults to the dtype of `a`, unless `a` has an integer dtype with + a precision less than that of the default platform integer. In + that case, the default platform integer is used instead. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type of the resulting values will be cast if necessary. + + Returns + ------- + cumprod : ndarray + A new array holding the result is returned unless `out` is + specified, in which case a reference to out is returned. 
+ + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> a = np.array([1,2,3]) + >>> np.cumprod(a) # intermediate results 1, 1*2 + ... # total product 1*2*3 = 6 + array([1, 2, 6]) + >>> a = np.array([[1, 2, 3], [4, 5, 6]]) + >>> np.cumprod(a, dtype=float) # specify type of output + array([ 1., 2., 6., 24., 120., 720.]) + + The cumulative product for each column (i.e., over the rows) of `a`: + + >>> np.cumprod(a, axis=0) + array([[ 1, 2, 3], + [ 4, 10, 18]]) + + The cumulative product for each row (i.e. over the columns) of `a`: + + >>> np.cumprod(a,axis=1) + array([[ 1, 2, 6], + [ 4, 20, 120]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def ndim(a): + """ + Return the number of dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. If it is not already an ndarray, a conversion is + attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in `a`. Scalars are zero-dimensional. + + See Also + -------- + ndarray.ndim : equivalent method + shape : dimensions of array + ndarray.shape : dimensions of array + + Examples + -------- + >>> np.ndim([[1,2,3],[4,5,6]]) + 2 + >>> np.ndim(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.ndim(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def rank(a): + """ + Return the number of dimensions of an array. + + If `a` is not already an array, a conversion is attempted. + Scalars are zero dimensional. + + Parameters + ---------- + a : array_like + Array whose number of dimensions is desired. If `a` is not an array, + a conversion is attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in the array.
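Both `ndim` and `rank` simply convert non-arrays and delegate to `a.ndim`. For plain rectangular nestings of lists or tuples, the same answer can be computed without numpypy at all; `ndim_sketch` is a hypothetical helper, not part of the module:

```python
def ndim_sketch(obj):
    # Count nesting depth the way ndim counts dimensions: scalars
    # are 0-d, a flat list is 1-d, a list of lists is 2-d, etc.
    # Only well-formed rectangular nestings are considered here.
    depth = 0
    while isinstance(obj, (list, tuple)):
        depth += 1
        obj = obj[0] if obj else None
    return depth

print(ndim_sketch([[1, 2, 3], [4, 5, 6]]))  # 2
print(ndim_sketch(1))                       # 0
```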
+ + See Also + -------- + ndim : equivalent function + ndarray.ndim : equivalent property + shape : dimensions of array + ndarray.shape : dimensions of array + + Notes + ----- + In the old Numeric package, `rank` was the term used for the number of + dimensions, but in Numpy `ndim` is used instead. + + Examples + -------- + >>> np.rank([1,2,3]) + 1 + >>> np.rank(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.rank(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def size(a, axis=None): + """ + Return the number of elements along a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which the elements are counted. By default, give + the total number of elements. + + Returns + ------- + element_count : int + Number of elements along the specified axis. + + See Also + -------- + shape : dimensions of array + ndarray.shape : dimensions of array + ndarray.size : number of elements in array + + Examples + -------- + >>> a = np.array([[1,2,3],[4,5,6]]) + >>> np.size(a) + 6 + >>> np.size(a,1) + 3 + >>> np.size(a,0) + 2 + + """ + raise NotImplementedError('Waiting on interp level method') + + +def around(a, decimals=0, out=None): + """ + Evenly round to the given number of decimals. + + Parameters + ---------- + a : array_like + Input data. + decimals : int, optional + Number of decimal places to round to (default: 0). If + decimals is negative, it specifies the number of positions to + the left of the decimal point. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the output + values will be cast if necessary. See `doc.ufuncs` (Section + "Output arguments") for details. + + Returns + ------- + rounded_array : ndarray + An array of the same type as `a`, containing the rounded values. + Unless `out` was specified, a new array is created. A reference to + the result is returned.
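`around` is likewise still a stub. The round-half-to-even behaviour described in its Notes can be sketched for a real scalar in plain Python; `around_sketch` is an illustrative name, ignores `out` and array handling, and inherits the power-of-ten scaling error the Notes warn about:

```python
def around_sketch(x, decimals=0):
    # Round half to even at the given decimal position.  Scaling by
    # powers of ten can itself introduce representation error.
    scale = 10.0 ** decimals
    scaled = x * scale
    whole = int(scaled // 1)  # floor, also for negative inputs
    frac = scaled - whole
    if frac > 0.5 or (frac == 0.5 and whole % 2 == 1):
        whole += 1
    return whole / scale

print([around_sketch(v) for v in (0.5, 1.5, 2.5, 3.5, 4.5)])
# [0.0, 2.0, 2.0, 4.0, 4.0]
```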
+ + The real and imaginary parts of complex numbers are rounded + separately. The result of rounding a float is a float. + + See Also + -------- + ndarray.round : equivalent method + + ceil, fix, floor, rint, trunc + + + Notes + ----- + For values exactly halfway between rounded decimal values, Numpy + rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, + -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due + to the inexact representation of decimal fractions in the IEEE + floating point standard [1]_ and errors introduced when scaling + by powers of ten. + + References + ---------- + .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan, + http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF + .. [2] "How Futile are Mindless Assessments of + Roundoff in Floating-Point Computation?", William Kahan, + http://www.cs.berkeley.edu/~wkahan/Mindless.pdf + + Examples + -------- + >>> np.around([0.37, 1.64]) + array([ 0., 2.]) + >>> np.around([0.37, 1.64], decimals=1) + array([ 0.4, 1.6]) + >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value + array([ 0., 2., 2., 4., 4.]) + >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned + array([ 1, 2, 3, 11]) + >>> np.around([1,2,3,11], decimals=-1) + array([ 0, 0, 0, 10]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def round_(a, decimals=0, out=None): + """ + Round an array to the given number of decimals. + + Refer to `around` for full documentation. + + See Also + -------- + around : equivalent function + + """ + raise NotImplementedError('Waiting on interp level method') + + +def mean(a, axis=None, dtype=None, out=None): + """ + Compute the arithmetic mean along the specified axis. + + Returns the average of the array elements. The average is taken over + the flattened array by default, otherwise over the specified axis. + `float64` intermediate and return values are used for integer inputs.
+ + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. 
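The accumulator point above generalises beyond picking a wider `dtype`: compensated (Kahan) summation recovers most of the precision a naive low-precision accumulator loses. A minimal sketch in plain Python, where floats are already doubles, so this illustrates the technique rather than the float32 effect; `kahan_mean` is a hypothetical name:

```python
def kahan_mean(values):
    # Kahan summation: carry the rounding error of each addition in
    # `comp` and feed it back into the next step.
    total = 0.0
    comp = 0.0
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return total / len(values)

print(kahan_mean([1.0] * 4 + [0.1] * 4))  # close to 0.55
```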
+ + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean() + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. 
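The relationship between `std`, `var`, and `ddof` described above can be stated directly in plain Python for a flat sequence; `var_sketch` and `std_sketch` are hypothetical names, not part of numpypy:

```python
def var_sketch(values, ddof=0):
    # Average squared deviation with divisor N - ddof: ddof=0 gives
    # the maximum-likelihood estimate, ddof=1 the unbiased sample
    # variance.
    n = len(values)
    mean = sum(values) / float(n)
    return sum((v - mean) ** 2 for v in values) / (n - ddof)

def std_sketch(values, ddof=0):
    # The standard deviation is the square root of the variance.
    return var_sketch(values, ddof) ** 0.5

print(var_sketch([1, 2, 3, 4]))  # 1.25
```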
+ + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. 
+ + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Array containing numbers whose variance is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the variance is computed. The default is to compute + the variance of the flattened array. + dtype : data-type, optional + Type to use in computing the variance. For arrays of integer type + the default is `float32`; for arrays of float types it is the same as + the array type. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output, but the type is cast if + necessary. + ddof : int, optional + "Delta Degrees of Freedom": the divisor used in the calculation is + ``N - ddof``, where ``N`` represents the number of elements. By + default `ddof` is zero. + + Returns + ------- + variance : ndarray, see dtype parameter above + If ``out=None``, returns a new array containing the variance; + otherwise, a reference to the output array is returned. + + See Also + -------- + std : Standard deviation + mean : Average + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e., ``var = mean(abs(x - x.mean())**2)``. + + The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. + If, however, `ddof` is specified, the divisor ``N - ddof`` is used + instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of a hypothetical infinite population. + ``ddof=0`` provides a maximum likelihood estimate of the variance for + normally distributed variables. + + Note that for complex numbers, the absolute value is taken before + squaring, so that the result is always real and nonnegative. 
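The complex-input rule above, taking the absolute value before squaring, is easy to demonstrate; `complex_var_sketch` is a hypothetical flat-sequence version, not part of numpypy:

```python
def complex_var_sketch(values):
    # abs() before squaring keeps the result real and non-negative
    # even when the deviations from the mean are complex.
    n = len(values)
    mean = sum(values) / n
    return sum(abs(v - mean) ** 2 for v in values) / n

print(complex_var_sketch([1j, -1j]))  # 1.0
```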
+ + For floating-point input, the variance is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for `float32` (see example + below). Specifying a higher-accuracy accumulator using the ``dtype`` + keyword can alleviate this issue. + + Examples + -------- + >>> a = np.array([[1,2],[3,4]]) + >>> np.var(a) + 1.25 + >>> np.var(a,0) + array([ 1., 1.]) + >>> np.var(a,1) + array([ 0.25, 0.25]) + + In single precision, var() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.var(a) + 0.20405951142311096 + + Computing the standard deviation in float64 is more accurate: + + >>> np.var(a, dtype=np.float64) + 0.20249999932997387 + >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 + 0.20250000000000001 + + """ + if not hasattr(a, "var"): + a = numpypy.array(a) + return a.var() diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/test/test_fromnumeric.py @@ -0,0 +1,109 @@ + +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + +class AppTestFromNumeric(BaseNumpyAppTest): + def test_argmax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, argmax + a = arange(6).reshape((2,3)) + assert argmax(a) == 5 + # assert (argmax(a, axis=0) == array([1, 1, 1])).all() + # assert (argmax(a, axis=1) == array([2, 2])).all() + b = arange(6) + b[1] = 5 + assert argmax(b) == 1 + + def test_argmin(self): + # tests adapted from test_argmax + from numpypy import array, arange, argmin + a = arange(6).reshape((2,3)) + assert argmin(a) == 0 + # assert (argmax(a, axis=0) == array([0, 0, 0])).all() + # assert (argmax(a, axis=1) == array([0, 0])).all() + b = arange(6) + b[1] = 0 + assert argmin(b) == 0 + + def test_shape(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy 
import array, identity, shape + assert shape(identity(3)) == (3, 3) + assert shape([[1, 2]]) == (1, 2) + assert shape([0]) == (1,) + assert shape(0) == () + # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + # assert shape(a) == (2,) + + def test_sum(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, sum, ones + assert sum([0.5, 1.5])== 2.0 + assert sum([[0, 1], [0, 5]]) == 6 + # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 + # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + # If the accumulator is too small, overflow occurs: + # assert ones(128, dtype=int8).sum(dtype=int8) == -128 + + def test_amin(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amin + a = arange(4).reshape((2,2)) + assert amin(a) == 0 + # # Minima along the first axis + # assert (amin(a, axis=0) == array([0, 1])).all() + # # Minima along the second axis + # assert (amin(a, axis=1) == array([0, 2])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amin(b) == nan + # assert nanmin(b) == 0.0 + + def test_amax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amax + a = arange(4).reshape((2,2)) + assert amax(a) == 3 + # assert (amax(a, axis=0) == array([2, 3])).all() + # assert (amax(a, axis=1) == array([1, 3])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amax(b) == nan + # assert nanmax(b) == 4.0 + + def test_alen(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, zeros, alen + a = zeros((7,4,5)) + assert a.shape[0] == 7 + assert alen(a) == 7 + + def test_ndim(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, ndim + assert ndim([[1,2,3],[4,5,6]]) == 2 + assert ndim(array([[1,2,3],[4,5,6]])) == 2 + 
assert ndim(1) == 0 + + def test_rank(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, rank + assert rank([[1,2,3],[4,5,6]]) == 2 + assert rank(array([[1,2,3],[4,5,6]])) == 2 + assert rank(1) == 0 + + def test_var(self): + from numpypy import array, var + a = array([[1,2],[3,4]]) + assert var(a) == 1.25 + # assert (np.var(a,0) == array([ 1., 1.])).all() + # assert (np.var(a,1) == array([ 0.25, 0.25])).all() + + def test_std(self): + from numpypy import array, std + a = array([[1, 2], [3, 4]]) + assert std(a) == 1.1180339887498949 + # assert (std(a, axis=0) == array([ 1., 1.])).all() + # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -257,7 +257,8 @@ try: inputcells = args.match_signature(signature, defs_s) except ArgErr, e: - raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) + raise TypeError("signature mismatch: %s() %s" % + (self.name, e.getmsg())) return inputcells def specialize(self, inputcells, op=None): diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -12,7 +12,7 @@ PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest +.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @@ -23,6 +23,7 @@ @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @@ -79,6 +80,11 @@ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man" + changes: python config/generate.py $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -175,15 +175,15 @@ RPython ================= -RPython Definition, not ------------------------ +RPython Definition +------------------ -The list and exact details of the "RPython" restrictions are a somewhat -evolving topic. In particular, we have no formal language definition -as we find it more practical to discuss and evolve the set of -restrictions while working on the whole program analysis. If you -have any questions about the restrictions below then please feel -free to mail us at pypy-dev at codespeak net. +RPython is a restricted subset of Python that is amenable to static analysis. +Although there are additions to the language and some things might surprisingly +work, this is a rough list of restrictions that should be considered. 
Note +that there are tons of special cased restrictions that you'll encounter +as you go. The exact definition is "RPython is everything that our translation +toolchain can accept" :) .. _`wrapped object`: coding-guide.html#wrapping-rules @@ -198,7 +198,7 @@ contain both a string and a int must be avoided. It is allowed to mix None (basically with the role of a null pointer) with many other types: `wrapped objects`, class instances, lists, dicts, strings, etc. - but *not* with int and floats. + but *not* with int, floats or tuples. **constants** @@ -209,9 +209,12 @@ have this restriction, so if you need mutable global state, store it in the attributes of some prebuilt singleton instance. + + **control structures** - all allowed but yield, ``for`` loops restricted to builtin types + all allowed, ``for`` loops restricted to builtin types, generators + very restricted. **range** @@ -226,7 +229,8 @@ **generators** - generators are not supported. + generators are supported, but their exact scope is very limited. You can't + merge two different generators in one control point. **exceptions** @@ -245,22 +249,27 @@ **strings** - a lot of, but not all string methods are supported. Indexes can be + a lot of, but not all string methods are supported, and those that are + supported do not necessarily accept all arguments. Indexes can be negative. In case they are not, then you get slightly more efficient code if the translator can prove that they are non-negative. When slicing a string it is necessary to prove that the slice start and - stop indexes are non-negative. + stop indexes are non-negative. There is no implicit str-to-unicode cast + anywhere. **tuples** no variable-length tuples; use them to store or return pairs or n-tuples of - values. Each combination of types for elements and length constitute a separate - and not mixable type. + values. Each combination of types for elements and length constitutes + a separate and not mixable type.
**lists** lists are used as an allocated array. Lists are over-allocated, so list.append() - is reasonably fast. Negative or out-of-bound indexes are only allowed for the + is reasonably fast. However, if you use a fixed-size list, the code + is more efficient. Annotator can figure out most of the time that your + list is fixed-size, even when you use list comprehension. + Negative or out-of-bound indexes are only allowed for the most common operations, as follows: - *indexing*: @@ -287,16 +296,14 @@ **dicts** - dicts with a unique key type only, provided it is hashable. - String keys have been the only allowed key types for a while, but this was generalized. - After some re-optimization, - the implementation could safely decide that all string dict keys should be interned. + dicts with a unique key type only, provided it is hashable. Custom + hash functions and custom equality will not be honored. + Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions. **list comprehensions** - may be used to create allocated, initialized arrays. - After list over-allocation was introduced, there is no longer any restriction. + May be used to create allocated, initialized arrays. **functions** @@ -334,9 +341,8 @@ **objects** - in PyPy, wrapped objects are borrowed from the object space. Just like - in CPython, code that needs e.g. a dictionary can use a wrapped dict - and the object space operations on it. + Normal rules apply. Special methods are not honoured, except ``__init__`` and + ``__del__``. This layout makes the number of types to take care about quite limited. diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -197,3 +197,10 @@ # Example configuration for intersphinx: refer to the Python standard library. 
intersphinx_mapping = {'http://docs.python.org/': None} +# -- Options for manpage output------------------------------------------------- + +man_pages = [ + ('man/pypy.1', 'pypy', + u'fast, compliant alternative implementation of the Python language', + u'The PyPy Project', 1) +] diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst --- a/pypy/doc/extradoc.rst +++ b/pypy/doc/extradoc.rst @@ -8,6 +8,9 @@ *Articles about PyPy published so far, most recent first:* (bibtex_ file) +* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_, + C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo + * `Allocation Removal by Partial Evaluation in a Tracing JIT`_, C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo @@ -50,6 +53,9 @@ *Other research using PyPy (as far as we know it):* +* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_, + N. Riley and C. Zilles + * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_, C. Bruni and T. Verwaest @@ -65,6 +71,7 @@ .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib +.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf .. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf @@ -74,6 +81,7 @@ .. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf .. 
_`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07 .. _`EU Reports`: index-report.html +.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz .. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7 diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/man/pypy.1.rst @@ -0,0 +1,90 @@ +====== + pypy +====== + +SYNOPSIS +======== + +``pypy`` [*options*] +[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ] +[*arg*\ ...] + +OPTIONS +======= + +-i + Inspect interactively after running script. + +-O + Dummy optimization flag for compatibility with C Python. + +-c *cmd* + Program passed in as CMD (terminates option list). + +-S + Do not ``import site`` on initialization. + +-u + Unbuffered binary ``stdout`` and ``stderr``. + +-h, --help + Show a help message and exit. + +-m *mod* + Library module to be run as a script (terminates option list). + +-W *arg* + Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*). + +-E + Ignore environment variables (such as ``PYTHONPATH``). + +--version + Print the PyPy version. + +--info + Print translation information about this PyPy executable. + +--jit *arg* + Low level JIT parameters. Format is + *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...] + + ``off`` + Disable the JIT. + + ``threshold=``\ *value* + Number of times a loop has to run for it to become hot. 
+ + ``function_threshold=``\ *value* + Number of times a function must run for it to become traced from + start. + + ``inlining=``\ *value* + Inline python functions or not (``1``/``0``). + + ``loop_longevity=``\ *value* + A parameter controlling how long loops will be kept before being + freed, an estimate. + + ``max_retrace_guards=``\ *value* + Number of extra guards a retrace can cause. + + ``retrace_limit=``\ *value* + How many times we can try retracing before giving up. + + ``trace_eagerness=``\ *value* + Number of times a guard has to fail before we start compiling a + bridge. + + ``trace_limit=``\ *value* + Number of recorded operations before we abort tracing with + ``ABORT_TRACE_TOO_LONG``. + + ``enable_opts=``\ *value* + Optimizations to enabled or ``all``. + Warning, this option is dangerous, and should be avoided. + +SEE ALSO +======== + +**python**\ (1) diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py deleted file mode 100644 --- a/pypy/doc/tool/makecontributor.py +++ /dev/null @@ -1,47 +0,0 @@ -""" - -generates a contributor list - -""" -import py - -# this file is useless, use the following commandline instead: -# hg churn -c -t "{author}" | sed -e 's/ <.*//' - -try: - path = py.std.sys.argv[1] -except IndexError: - print "usage: %s ROOTPATH" %(py.std.sys.argv[0]) - raise SystemExit, 1 - -d = {} - -for logentry in py.path.svnwc(path).log(): - a = logentry.author - if a in d: - d[a] += 1 - else: - d[a] = 1 - -items = d.items() -items.sort(lambda x,y: -cmp(x[1], y[1])) - -import uconf # http://codespeak.net/svn/uconf/dist/uconf - -# Authors that don't want to be listed -excluded = set("anna gintas ignas".split()) -cutoff = 5 # cutoff for authors in the LICENSE file -mark = False -for author, count in items: - if author in excluded: - continue - user = uconf.system.User(author) - try: - realname = user.realname.strip() - except KeyError: - realname = author - if not mark and count < cutoff: - mark = True - print '-'*60 - 
print " ", realname - #print count, " ", author diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - 
msg = "%s() got an unexpected keyword argument '%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1591,12 +1591,15 @@ 'ArithmeticError', 'AssertionError', 'AttributeError', + 'BaseException', + 'DeprecationWarning', 'EOFError', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', + 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', @@ -1617,7 +1620,10 @@ 'TabError', 'TypeError', 'UnboundLocalError', + 'UnicodeDecodeError', 'UnicodeError', + 'UnicodeEncodeError', + 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', 'UnicodeEncodeError', diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py --- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -2,7 +2,6 @@ This module defines the abstract base classes that support execution: Code and Frame. """ -from pypy.rlib import jit from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import Wrappable diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -162,7 +162,8 @@ # generate 2 versions of the function and 2 jit drivers. 
def _create_unpack_into(): jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results']) + reds=['self', 'frame', 'results'], + name='unpack_into') def unpack_into(self, results): """This is a hack for performance: runs the generator and collects all produced items in a list.""" @@ -196,4 +197,4 @@ self.frame = None return unpack_into unpack_into = _create_unpack_into() - unpack_into_w = _create_unpack_into() \ No newline at end of file + unpack_into_w = _create_unpack_into() diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = 
err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() 
err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): diff --git a/pypy/jit/backend/llsupport/test/test_runner.py b/pypy/jit/backend/llsupport/test/test_runner.py --- a/pypy/jit/backend/llsupport/test/test_runner.py +++ b/pypy/jit/backend/llsupport/test/test_runner.py @@ -8,6 +8,12 @@ class MyLLCPU(AbstractLLCPU): supports_floats = True + + class assembler(object): + @staticmethod + def set_debug(flag): + pass + def compile_loop(self, inputargs, operations, looptoken): py.test.skip("llsupport test: cannot compile operations") diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -17,6 +17,7 @@ from pypy.rpython.llinterp import LLException from pypy.jit.codewriter import heaptracker, longlong from pypy.rlib.rarithmetic import intmask +from pypy.jit.backend.detect_cpu import 
autodetect_main_model_and_size def boxfloat(x): return BoxFloat(longlong.getfloatstorage(x)) @@ -27,6 +28,9 @@ class Runner(object): + add_loop_instruction = ['overload for a specific cpu'] + bridge_loop_instruction = ['overload for a specific cpu'] + def execute_operation(self, opname, valueboxes, result_type, descr=None): inputargs, operations = self._get_single_operation_list(opname, result_type, @@ -2974,6 +2978,56 @@ res = self.cpu.get_latest_value_int(0) assert res == -10 + def test_compile_asmlen(self): + from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU + if not isinstance(self.cpu, AbstractLLCPU): + py.test.skip("pointless test on non-asm") + from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + import ctypes + ops = """ + [i2] + i0 = same_as(i2) # but forced to be in a register + label(i0, descr=1) + i1 = int_add(i0, i0) + guard_true(i1, descr=faildesr) [i1] + jump(i1, descr=1) + """ + faildescr = BasicFailDescr(2) + loop = parse(ops, self.cpu, namespace=locals()) + faildescr = loop.operations[-2].getdescr() + jumpdescr = loop.operations[-1].getdescr() + bridge_ops = """ + [i0] + jump(i0, descr=jumpdescr) + """ + bridge = parse(bridge_ops, self.cpu, namespace=locals()) + looptoken = JitCellToken() + self.cpu.assembler.set_debug(False) + info = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs, + bridge.operations, + looptoken) + self.cpu.assembler.set_debug(True) # always on untranslated + assert info.asmlen != 0 + cpuname = autodetect_main_model_and_size() + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + + def checkops(mc, ops): + assert len(mc) == len(ops) + for i in range(len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i]) + + data = ctypes.string_at(info.asmaddr, info.asmlen) + mc = list(machine_code_dump(data, info.asmaddr, cpuname)) + lines = [line for line in mc if 
line.count('\t') == 2] + checkops(lines, self.add_loop_instructions) + data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) + mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.bridge_loop_instructions) + + def test_compile_bridge_with_target(self): # This test creates a loopy piece of code in a bridge, and builds another # unrelated loop that ends in a jump directly to this loopy bit of code. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -7,6 +7,7 @@ from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper +from pypy.rlib.jit import AsmInfo from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, gpr_reg_mgr_cls, _valid_addressing_size) @@ -411,6 +412,7 @@ '''adds the following attributes to looptoken: _x86_function_addr (address of the generated func, as an int) _x86_loop_code (debug: addr of the start of the ResOps) + _x86_fullsize (debug: full size including failure) _x86_debug_checksum ''' # XXX this function is too longish and contains some code @@ -476,7 +478,8 @@ name = "Loop # %s: %s" % (looptoken.number, loopname) self.cpu.profile_agent.native_code_written(name, rawstart, full_size) - return ops_offset + return AsmInfo(ops_offset, rawstart + looppos, + size_excluding_failure_stuff - looppos) def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): @@ -485,12 +488,7 @@ assert len(set(inputargs)) == len(inputargs) descr_number = self.cpu.get_fail_descr_number(faildescr) - try: - failure_recovery = self._find_failure_recovery_bytecode(faildescr) - except ValueError: - debug_print("Bridge out of guard", descr_number, - "was 
already compiled!") - return + failure_recovery = self._find_failure_recovery_bytecode(faildescr) self.setup(original_loop_token) if log: @@ -503,6 +501,7 @@ [loc.assembler() for loc in faildescr._x86_debug_faillocs]) regalloc = RegAlloc(self, self.cpu.translate_support_code) fail_depths = faildescr._x86_current_depths + startpos = self.mc.get_relative_pos() operations = regalloc.prepare_bridge(fail_depths, inputargs, arglocs, operations, self.current_clt.allgcrefs) @@ -537,7 +536,7 @@ name = "Bridge # %s" % (descr_number,) self.cpu.profile_agent.native_code_written(name, rawstart, fullsize) - return ops_offset + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def write_pending_failure_recoveries(self): # for each pending guard, generate the code of the recovery stub @@ -621,7 +620,10 @@ def _find_failure_recovery_bytecode(self, faildescr): adr_jump_offset = faildescr._x86_adr_jump_offset if adr_jump_offset == 0: - raise ValueError + # This case should be prevented by the logic in compile.py: + # look for CNT_BUSY_FLAG, which disables tracing from a guard + # when another tracing from the same guard is already in progress. 
+ raise BridgeAlreadyCompiled # follow the JMP/Jcond p = rffi.cast(rffi.INTP, adr_jump_offset) adr_target = adr_jump_offset + 4 + rffi.cast(lltype.Signed, p[0]) @@ -810,7 +812,10 @@ target = newlooptoken._x86_function_addr mc = codebuf.MachineCodeBlockWrapper() mc.JMP(imm(target)) - assert mc.get_relative_pos() <= 13 # keep in sync with prepare_loop() + if WORD == 4: # keep in sync with prepare_loop() + assert mc.get_relative_pos() == 5 + else: + assert mc.get_relative_pos() <= 13 mc.copy_to_raw_memory(oldadr) def dump(self, text): @@ -2550,3 +2555,6 @@ def not_implemented(msg): os.write(2, '[x86/asm] %s\n' % msg) raise NotImplementedError(msg) + +class BridgeAlreadyCompiled(Exception): + pass diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -188,7 +188,10 @@ # note: we need to make a copy of inputargs because possibly_free_vars # is also used on op args, which is a non-resizable list self.possibly_free_vars(list(inputargs)) - self.min_bytes_before_label = 13 + if WORD == 4: # see redirect_call_assembler() + self.min_bytes_before_label = 5 + else: + self.min_bytes_before_label = 13 return operations def prepare_bridge(self, prev_depths, inputargs, arglocs, operations, @@ -741,7 +744,7 @@ self.xrm.possibly_free_var(op.getarg(0)) def consider_cast_int_to_float(self, op): - loc0 = self.rm.loc(op.getarg(0)) + loc0 = self.rm.make_sure_var_in_reg(op.getarg(0)) loc1 = self.xrm.force_allocate_reg(op.result) self.Perform(op, [loc0], loc1) self.rm.possibly_free_var(op.getarg(0)) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -33,6 +33,13 @@ # for the individual tests see # ====> ../../test/runner_test.py + add_loop_instructions = ['mov', 'add', 'test', 'je', 'jmp'] + if WORD == 4: + bridge_loop_instructions = ['lea', 
'jmp'] + else: + # the 'mov' is part of the 'jmp' so far + bridge_loop_instructions = ['lea', 'mov', 'jmp'] + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -416,7 +423,8 @@ ] inputargs = [i0] debug._log = dlog = debug.DebugLog() - ops_offset = self.cpu.compile_loop(inputargs, operations, looptoken) + info = self.cpu.compile_loop(inputargs, operations, looptoken) + ops_offset = info.ops_offset debug._log = None # assert ops_offset is looptoken._x86_ops_offset diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -39,6 +39,7 @@ def machine_code_dump(data, originaddr, backend_name, label_list=None): objdump_backend_option = { 'x86': 'i386', + 'x86_32': 'i386', 'x86_64': 'x86-64', 'i386': 'i386', } diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -8,11 +8,15 @@ class JitPolicy(object): - def __init__(self): + def __init__(self, jithookiface=None): self.unsafe_loopy_graphs = set() self.supports_floats = False self.supports_longlong = False self.supports_singlefloats = False + if jithookiface is None: + from pypy.rlib.jit import JitHookInterface + jithookiface = JitHookInterface() + self.jithookiface = jithookiface def set_supports_floats(self, flag): self.supports_floats = flag diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -5,6 +5,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import debug_start, debug_stop, debug_print from pypy.rlib import rstack +from pypy.rlib.jit import JitDebugInfo from pypy.conftest import option from pypy.tool.sourcetools import func_with_new_name @@ -75,7 +76,7 @@ if descr is not original_jitcell_token: 
original_jitcell_token.record_jump_to(descr) descr.exported_state = None - op._descr = None # clear reference, mostly for tests + op.cleardescr() # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential jump. # (the following test is not enough to prevent more complicated @@ -90,8 +91,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op._descr = None # clear reference to prevent the history.Stats - # from keeping the loop alive during tests + op.cleardescr() # clear reference to prevent the history.Stats + # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -296,8 +297,6 @@ patch_new_loop_to_load_virtualizable_fields(loop, jitdriver_sd) original_jitcell_token = loop.original_jitcell_token - jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, - loop.operations, type, greenkey) loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata original_jitcell_token.number = n = globaldata.loopnumbering @@ -307,21 +306,38 @@ show_procedures(metainterp_sd, loop) loop.check_consistency() + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, + type, greenkey) + hooks.before_compile(debug_info) + else: + debug_info = None + hooks = None operations = get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, name=loopname) + asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, + original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") 
metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile(debug_info) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # loopname = jitdriver_sd.warmstate.get_location_str(greenkey) + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset, name=loopname) @@ -332,25 +348,40 @@ def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): n = metainterp_sd.cpu.get_fail_descr_number(faildescr) - jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, - original_loop_token, operations, n) if not we_are_translated(): show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_loop_token, operations, 'bridge', + fail_descr_no=n) + hooks.before_compile_bridge(debug_info) + else: + hooks = None + debug_info = None + operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() - operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, - original_loop_token) + asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile_bridge(debug_info) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") # + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = 
None metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset) # #if metainterp_sd.warmrunnerdesc is not None: # for tests diff --git a/pypy/jit/metainterp/jitdriver.py b/pypy/jit/metainterp/jitdriver.py --- a/pypy/jit/metainterp/jitdriver.py +++ b/pypy/jit/metainterp/jitdriver.py @@ -21,7 +21,6 @@ # self.portal_finishtoken... pypy.jit.metainterp.pyjitpl # self.index ... pypy.jit.codewriter.call # self.mainjitcode ... pypy.jit.codewriter.call - # self.on_compile ... pypy.jit.metainterp.warmstate # These attributes are read by the backend in CALL_ASSEMBLER: # self.assembler_helper_adr diff --git a/pypy/jit/metainterp/jitprof.py b/pypy/jit/metainterp/jitprof.py --- a/pypy/jit/metainterp/jitprof.py +++ b/pypy/jit/metainterp/jitprof.py @@ -18,8 +18,8 @@ OPT_FORCINGS ABORT_TOO_LONG ABORT_BRIDGE +ABORT_BAD_LOOP ABORT_ESCAPE -ABORT_BAD_LOOP ABORT_FORCE_QUASIIMMUT NVIRTUALS NVHOLES @@ -30,10 +30,13 @@ TOTAL_FREED_BRIDGES """ +counter_names = [] + def _setup(): names = counters.split() for i, name in enumerate(names): globals()[name] = i + counter_names.append(name) global ncounters ncounters = len(names) _setup() diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -234,11 +234,11 @@ # longlongs are treated as floats, see # e.g. 
llsupport/descr.py:getDescrClass is_float = True - elif kind == 'u': + elif kind == 'u' or kind == 's': # they're all False pass else: - assert False, "unsupported ffitype or kind" + raise NotImplementedError("unsupported ffitype or kind: %s" % kind) # fieldsize = rffi.getintfield(ffitype, 'c_size') return self.optimizer.cpu.interiorfielddescrof_dynamic( diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -442,6 +442,22 @@ """ self.optimize_loop(ops, expected) + def test_optimizer_renaming_boxes_not_imported(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + jump(p1, i11) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -117,7 +117,7 @@ def optimize_loop(self, ops, optops, call_pure_results=None): loop = self.parse(ops) - token = JitCellToken() + token = JitCellToken() loop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=TargetToken(token))] + \ loop.operations if loop.operations[-1].getopnum() == rop.JUMP: diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -271,6 +271,10 @@ if newresult is not op.result and not newvalue.is_constant(): op = ResOperation(rop.SAME_AS, [op.result], newresult) self.optimizer._newoperations.append(op) + if self.optimizer.loop.logops: + debug_print(' Falling back 
to add extra: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + self.optimizer.flush() self.optimizer.emitting_dissabled = False @@ -435,7 +439,13 @@ return for a in op.getarglist(): if not isinstance(a, Const) and a not in seen: - self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, seen) + self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, + seen) + + if self.optimizer.loop.logops: + debug_print(' Emitting short op: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + optimizer.send_extra_operation(op) seen[op.result] = True if op.is_ovf(): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1553,6 +1553,7 @@ class MetaInterp(object): in_recursion = 0 + cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): self.staticdata = staticdata @@ -1793,6 +1794,15 @@ def aborted_tracing(self, reason): self.staticdata.profiler.count(reason) debug_print('~~~ ABORTING TRACING') + jd_sd = self.jitdriver_sd + if not self.current_merge_points: + greenkey = None # we're in the bridge + else: + greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args] + self.staticdata.warmrunnerdesc.hooks.on_abort(reason, + jd_sd.jitdriver, + greenkey, + jd_sd.warmstate.get_location_str(greenkey)) self.staticdata.stats.aborted() def blackhole_if_trace_too_long(self): @@ -1966,9 +1976,14 @@ raise SwitchToBlackhole(ABORT_BAD_LOOP) # For now self.compile_loop(original_boxes, live_arg_boxes, start, resumedescr) # creation of the loop was cancelled! 
+ self.cancel_count += 1 + if self.staticdata.warmrunnerdesc: + memmgr = self.staticdata.warmrunnerdesc.memory_manager + if memmgr: + if self.cancel_count > memmgr.max_unroll_loops: + self.staticdata.log('cancelled too many times!') + raise SwitchToBlackhole(ABORT_BAD_LOOP) self.staticdata.log('cancelled, tracing more...') - #self.staticdata.log('cancelled, stopping tracing') - #raise SwitchToBlackhole(ABORT_BAD_LOOP) # Otherwise, no loop found so far, so continue tracing. start = len(self.history.operations) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -16,15 +16,15 @@ # debug name = "" pc = 0 + opnum = 0 + + _attrs_ = ('result',) def __init__(self, result): self.result = result - # methods implemented by each concrete class - # ------------------------------------------ - def getopnum(self): - raise NotImplementedError + return self.opnum # methods implemented by the arity mixins # --------------------------------------- @@ -64,6 +64,9 @@ def setdescr(self, descr): raise NotImplementedError + def cleardescr(self): + pass + # common methods # -------------- @@ -196,6 +199,9 @@ self._check_descr(descr) self._descr = descr + def cleardescr(self): + self._descr = None + def _check_descr(self, descr): if not we_are_translated() and getattr(descr, 'I_am_a_descr', False): return # needed for the mock case in oparser_model @@ -590,12 +596,9 @@ baseclass = PlainResOp mixin = arity2mixin.get(arity, N_aryOp) - def getopnum(self): - return opnum - cls_name = '%s_OP' % name bases = (get_base_class(mixin, baseclass),) - dic = {'getopnum': getopnum} + dic = {'opnum': opnum} return type(cls_name, bases, dic) setup(__name__ == '__main__') # print out the table when run directly diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ 
-56,8 +56,6 @@ greenfield_info = None result_type = result_kind portal_runner_ptr = "???" - on_compile = lambda *args: None - on_compile_bridge = lambda *args: None stats = history.Stats() cpu = CPUClass(rtyper, stats, None, False) diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -53,8 +53,6 @@ call_pure_results = {} class jitdriver_sd: warmstate = FakeState() - on_compile = staticmethod(lambda *args: None) - on_compile_bridge = staticmethod(lambda *args: None) virtualizable_info = None def test_compile_loop(): diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -148,28 +148,38 @@ self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4, 'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2}) - def test_array_getitem_uint8(self): + def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE): + reds = ["n", "i", "s", "data"] + if COMPUTE_TYPE is lltype.Float: + # Move the float var to the back. 
+ reds.remove("s") + reds.append("s") myjitdriver = JitDriver( greens = [], - reds = ["n", "i", "s", "data"], + reds = reds, ) def f(data, n): - i = s = 0 + i = 0 + s = rffi.cast(COMPUTE_TYPE, 0) while i < n: myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data) - s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0)) + s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0)) i += 1 return s + def main(n): + with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data: + data[0] = rffi.cast(TYPE, 200) + return f(data, n) + assert self.meta_interp(main, [10]) == 2000 - def main(n): - with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data: - data[0] = rffi.cast(rffi.UCHAR, 200) - return f(data, n) - - assert self.meta_interp(main, [10]) == 2000 + def test_array_getitem_uint8(self): + self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed) self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2, 'guard_true': 2, 'int_add': 4}) + def test_array_getitem_float(self): + self._test_getitem_type(rffi.FLOAT, types.float, lltype.Float) + class TestFfiCall(FfiCallTests, LLJitMixin): supports_all = False diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -10,57 +10,6 @@ def getloc2(g): return "in jitdriver2, with g=%d" % g -class JitDriverTests(object): - def test_on_compile(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = looptoken - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - i += 1 - - self.meta_interp(loop, [1, 4]) - assert sorted(called.keys()) == [(4, 1, "loop")] - self.meta_interp(loop, [2, 4]) - assert 
sorted(called.keys()) == [(4, 1, "loop"), - (4, 2, "loop")] - - def test_on_compile_bridge(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = loop - def on_compile_bridge(self, logger, orig_token, operations, n): - assert 'bridge' not in called - called['bridge'] = orig_token - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - if i >= 4: - i += 2 - i += 1 - - self.meta_interp(loop, [1, 10]) - assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] - - -class TestLLtypeSingle(JitDriverTests, LLJitMixin): - pass - class MultipleJitDriversTests(object): def test_simple(self): diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/test/test_jitiface.py @@ -0,0 +1,148 @@ + +from pypy.rlib.jit import JitDriver, JitHookInterface +from pypy.rlib import jit_hooks +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.jit.codewriter.policy import JitPolicy +from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT +from pypy.jit.metainterp.resoperation import rop +from pypy.rpython.annlowlevel import hlstr + +class TestJitHookInterface(LLJitMixin): + def test_abort_quasi_immut(self): + reasons = [] + + class MyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + assert jitdriver is myjitdriver + assert len(greenkey) == 1 + reasons.append(reason) + assert greenkey_repr == 'blah' + + iface = MyJitIface() + + myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'], + get_printable_location=lambda *args: 'blah') + + class Foo: + _immutable_fields_ = ['a?'] + def __init__(self, a): + self.a = a + def f(a, x): + foo = Foo(a) + total = 0 + while x > 0: + 
myjitdriver.jit_merge_point(foo=foo, x=x, total=total) + # read a quasi-immutable field out of a Constant + total += foo.a + foo.a += 1 + x -= 1 + return total + # + assert f(100, 7) == 721 + res = self.meta_interp(f, [100, 7], policy=JitPolicy(iface)) + assert res == 721 + assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 + + def test_on_compile(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append(("compile", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + def before_compile(self, di): + called.append(("optimize", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + #def before_optimize(self, jitdriver, logger, looptoken, oeprations, + # type, greenkey): + # called.append(("trace", greenkey[1].getint(), + # greenkey[0].getint(), type)) + + iface = MyJitIface() + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + i += 1 + + self.meta_interp(loop, [1, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop")] + self.meta_interp(loop, [2, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop"), + #("trace", 4, 2, "loop"), + ("optimize", 4, 2, "loop"), + ("compile", 4, 2, "loop")] + + def test_on_compile_bridge(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append("compile") + + def after_compile_bridge(self, di): + called.append("compile_bridge") + + def before_compile_bridge(self, di): + called.append("before_compile_bridge") + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + if i >= 4: + i += 2 + i += 1 + + 
self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitIface())) + assert called == ["compile", "before_compile_bridge", "compile_bridge"] + + def test_resop_interface(self): + driver = JitDriver(greens = [], reds = ['i']) + + def loop(i): + while i > 0: + driver.jit_merge_point(i=i) + i -= 1 + + def main(): + loop(1) + op = jit_hooks.resop_new(rop.INT_ADD, + [jit_hooks.boxint_new(3), + jit_hooks.boxint_new(4)], + jit_hooks.boxint_new(1)) + assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add' + assert jit_hooks.resop_getopnum(op) == rop.INT_ADD + box = jit_hooks.resop_getarg(op, 0) + assert jit_hooks.box_getint(box) == 3 + box2 = jit_hooks.box_clone(box) + assert box2 != box + assert jit_hooks.box_getint(box2) == 3 + assert not jit_hooks.box_isconst(box2) + box3 = jit_hooks.box_constbox(box) + assert jit_hooks.box_getint(box) == 3 + assert jit_hooks.box_isconst(box3) + box4 = jit_hooks.box_nonconstbox(box) + assert not jit_hooks.box_isconst(box4) + box5 = jit_hooks.boxint_new(18) + jit_hooks.resop_setarg(op, 0, box5) + assert jit_hooks.resop_getarg(op, 0) == box5 + box6 = jit_hooks.resop_getresult(op) + assert jit_hooks.box_getint(box6) == 1 + jit_hooks.resop_setresult(op, box5) + assert jit_hooks.resop_getresult(op) == box5 + + self.meta_interp(main, []) diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py --- a/pypy/jit/metainterp/test/test_resoperation.py +++ b/pypy/jit/metainterp/test/test_resoperation.py @@ -30,17 +30,17 @@ cls = rop.opclasses[rop.rop.INT_ADD] assert issubclass(cls, rop.PlainResOp) assert issubclass(cls, rop.BinaryOp) - assert cls.getopnum.im_func(None) == rop.rop.INT_ADD + assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD cls = rop.opclasses[rop.rop.CALL] assert issubclass(cls, rop.ResOpWithDescr) assert issubclass(cls, rop.N_aryOp) - assert cls.getopnum.im_func(None) == rop.rop.CALL + assert cls.getopnum.im_func(cls) == rop.rop.CALL cls = rop.opclasses[rop.rop.GUARD_TRUE] assert 
issubclass(cls, rop.GuardResOp) assert issubclass(cls, rop.UnaryOp) - assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE + assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE def test_mixins_in_common_base(): INT_ADD = rop.opclasses[rop.rop.INT_ADD] diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -5,7 +5,7 @@ VArrayStateInfo, NotVirtualStateInfo, VirtualState, ShortBoxes from pypy.jit.metainterp.optimizeopt.optimizer import OptValue from pypy.jit.metainterp.history import BoxInt, BoxFloat, BoxPtr, ConstInt, ConstPtr -from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem import lltype, llmemory from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin, BaseTest, \ equaloplists, FakeDescrWithSnapshot from pypy.jit.metainterp.optimizeopt.intutils import IntBound @@ -82,6 +82,13 @@ assert isgeneral(value1, value2) assert not isgeneral(value2, value1) + assert isgeneral(OptValue(ConstInt(7)), OptValue(ConstInt(7))) + S = lltype.GcStruct('S') + foo = lltype.malloc(S) + fooref = lltype.cast_opaque_ptr(llmemory.GCREF, foo) + assert isgeneral(OptValue(ConstPtr(fooref)), + OptValue(ConstPtr(fooref))) + def test_field_matching_generalization(self): const1 = NotVirtualStateInfo(OptValue(ConstInt(1))) const2 = NotVirtualStateInfo(OptValue(ConstInt(2))) diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -3,7 +3,9 @@ from pypy.jit.backend.llgraph import runner from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint +from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_getopnum from pypy.jit.metainterp.jitprof import Profiler +from 
pypy.jit.metainterp.resoperation import rop from pypy.rpython.lltypesystem import lltype, llmemory class TranslationTest: @@ -22,6 +24,7 @@ # - jitdriver hooks # - two JITs # - string concatenation, slicing and comparison + # - jit hooks interface class Frame(object): _virtualizable2_ = ['l[*]'] @@ -91,7 +94,9 @@ return f.i # def main(i, j): - return f(i) - f2(i+j, i, j) + op = resop_new(rop.INT_ADD, [boxint_new(3), boxint_new(5)], + boxint_new(8)) + return f(i) - f2(i+j, i, j) + resop_getopnum(op) res = ll_meta_interp(main, [40, 5], CPUClass=self.CPUClass, type_system=self.type_system, listops=True) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -1,4 +1,5 @@ import sys, py +from pypy.tool.sourcetools import func_with_new_name from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.annlowlevel import llhelper, MixLevelHelperAnnotator,\ cast_base_ptr_to_instance, hlstr @@ -112,7 +113,7 @@ return ll_meta_interp(function, args, backendopt=backendopt, translate_support_code=True, **kwds) -def _find_jit_marker(graphs, marker_name): +def _find_jit_marker(graphs, marker_name, check_driver=True): results = [] for graph in graphs: for block in graph.iterblocks(): @@ -120,8 +121,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - (op.args[1].value is None or - op.args[1].value.active)): # the jitdriver + (not check_driver or op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -140,6 +141,9 @@ "found several jit_merge_points in the same graph") return results +def find_access_helpers(graphs): + return _find_jit_marker(graphs, 'access_helper', False) + def locate_jit_merge_point(graph): [(graph, block, pos)] = find_jit_merge_points([graph]) return block, pos, block.operations[pos] @@ -206,6 +210,7 @@ vrefinfo = 
VirtualRefInfo(self) self.codewriter.setup_vrefinfo(vrefinfo) # + self.hooks = policy.jithookiface self.make_virtualizable_infos() self.make_exception_classes() self.make_driverhook_graphs() @@ -213,6 +218,7 @@ self.rewrite_jit_merge_points(policy) verbose = False # not self.cpu.translate_support_code + self.rewrite_access_helpers() self.codewriter.make_jitcodes(verbose=verbose) self.rewrite_can_enter_jits() self.rewrite_set_param() @@ -619,6 +625,24 @@ graph = self.annhelper.getgraph(func, args_s, s_result) return self.annhelper.graph2delayed(graph, FUNC) + def rewrite_access_helpers(self): + ah = find_access_helpers(self.translator.graphs) + for graph, block, index in ah: + op = block.operations[index] + self.rewrite_access_helper(op) + + def rewrite_access_helper(self, op): + ARGS = [arg.concretetype for arg in op.args[2:]] + RESULT = op.result.concretetype + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT)) + # make sure we make a copy of function so it no longer belongs + # to extregistry + func = op.args[1].value + func = func_with_new_name(func, func.func_name + '_compiled') + ptr = self.helper_func(FUNCPTR, func) + op.opname = 'direct_call' + op.args = [Constant(ptr, FUNCPTR)] + op.args[2:] + def rewrite_jit_merge_points(self, policy): for jd in self.jitdrivers_sd: self.rewrite_jit_merge_point(jd, policy) diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -244,6 +244,11 @@ if self.warmrunnerdesc.memory_manager: self.warmrunnerdesc.memory_manager.max_retrace_guards = value + def set_param_max_unroll_loops(self, value): + if self.warmrunnerdesc: + if self.warmrunnerdesc.memory_manager: + self.warmrunnerdesc.memory_manager.max_unroll_loops = value + def disable_noninlinable_function(self, greenkey): cell = self.jit_cell_at_key(greenkey) cell.dont_trace_here = True @@ -596,20 +601,6 @@ return fn(*greenargs) self.should_unroll_one_iteration = 
should_unroll_one_iteration - if hasattr(jd.jitdriver, 'on_compile'): - def on_compile(logger, token, operations, type, greenkey): - greenargs = unwrap_greenkey(greenkey) - return jd.jitdriver.on_compile(logger, token, operations, type, - *greenargs) - def on_compile_bridge(logger, orig_token, operations, n): - return jd.jitdriver.on_compile_bridge(logger, orig_token, - operations, n) - jd.on_compile = on_compile - jd.on_compile_bridge = on_compile_bridge - else: - jd.on_compile = lambda *args: None - jd.on_compile_bridge = lambda *args: None - redargtypes = ''.join([kind[0] for kind in jd.red_args_types]) def get_assembler_token(greenkey): diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -89,11 +89,18 @@ assert typ == 'class' return self.model.ConstObj(ootype.cast_to_object(obj)) - def get_descr(self, poss_descr): + def get_descr(self, poss_descr, allow_invent): if poss_descr.startswith('<'): return None - else: + try: return self._consts[poss_descr] + except KeyError: + if allow_invent: + int(poss_descr) + token = self.model.JitCellToken() + tt = self.model.TargetToken(token) + self._consts[poss_descr] = tt + return tt def box_for_var(self, elem): try: @@ -186,7 +193,8 @@ poss_descr = allargs[-1].strip() if poss_descr.startswith('descr='): - descr = self.get_descr(poss_descr[len('descr='):]) + descr = self.get_descr(poss_descr[len('descr='):], + opname == 'label') allargs = allargs[:-1] for arg in allargs: arg = arg.strip() diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py +++ b/pypy/jit/tool/oparser_model.py @@ -6,7 +6,7 @@ from pypy.jit.metainterp.history import TreeLoop, JitCellToken from pypy.jit.metainterp.history import Box, BoxInt, BoxFloat from pypy.jit.metainterp.history import ConstInt, ConstObj, ConstPtr, ConstFloat - from pypy.jit.metainterp.history import BasicFailDescr + from pypy.jit.metainterp.history 
import BasicFailDescr, TargetToken from pypy.jit.metainterp.typesystem import llhelper from pypy.jit.metainterp.history import get_const_ptr_for_string @@ -42,6 +42,10 @@ class JitCellToken(object): I_am_a_descr = True + class TargetToken(object): + def __init__(self, jct): + pass + class BasicFailDescr(object): I_am_a_descr = True diff --git a/pypy/jit/tool/pypytrace.vim b/pypy/jit/tool/pypytrace.vim --- a/pypy/jit/tool/pypytrace.vim +++ b/pypy/jit/tool/pypytrace.vim @@ -19,6 +19,7 @@ syn match pypyLoopArgs '^[[].*' syn match pypyLoopStart '^#.*' syn match pypyDebugMergePoint '^debug_merge_point(.\+)' +syn match pypyLogBoundary '[[][0-9a-f]\+[]] \([{].\+\|.\+[}]\)$' hi def link pypyLoopStart Structure "hi def link pypyLoopArgs PreProc @@ -29,3 +30,4 @@ hi def link pypyNumber Number hi def link pypyDescr PreProc hi def link pypyDescrField Label +hi def link pypyLogBoundary Statement diff --git a/pypy/jit/tool/test/test_oparser.py b/pypy/jit/tool/test/test_oparser.py --- a/pypy/jit/tool/test/test_oparser.py +++ b/pypy/jit/tool/test/test_oparser.py @@ -4,7 +4,8 @@ from pypy.jit.tool.oparser import parse, OpParser from pypy.jit.metainterp.resoperation import rop -from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken +from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken,\ + TargetToken class BaseTestOparser(object): @@ -243,6 +244,16 @@ b = loop.getboxes() assert isinstance(b.sum0, BoxInt) + def test_label(self): + x = """ + [i0] + label(i0, descr=1) + jump(i0, descr=1) + """ + loop = self.parse(x) + assert loop.operations[0].getdescr() is loop.operations[1].getdescr() + assert isinstance(loop.operations[0].getdescr(), TargetToken) + class ForbiddenModule(object): def __init__(self, name, old_mod): diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py --- a/pypy/module/_lsprof/interp_lsprof.py +++ b/pypy/module/_lsprof/interp_lsprof.py @@ -19,8 +19,9 @@ # cpu affinity settings srcdir = 
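The oparser change above makes `get_descr` invent and cache a `TargetToken` the first time a numeric `descr=` name appears in a `label()`, so a later `jump()` naming the same descr resolves to the identical token (as `test_label` checks). A plain-Python sketch of that caching behaviour, with stand-in `JitCellToken`/`TargetToken` classes rather than the real pypy ones:

```python
class JitCellToken(object):
    pass

class TargetToken(object):
    def __init__(self, jct):
        self.jct = jct

_consts = {}

def get_descr(name, allow_invent):
    # Mirror of the oparser logic above: look the name up in the cache;
    # if it is missing and inventing is allowed, require a numeric name,
    # create a fresh TargetToken and remember it for later references.
    try:
        return _consts[name]
    except KeyError:
        if allow_invent:
            int(name)  # only numeric descr names may be invented
            tt = TargetToken(JitCellToken())
            _consts[name] = tt
            return tt
        raise

label_descr = get_descr('1', True)   # invented at the label()
jump_descr = get_descr('1', False)   # resolved at the jump()
assert label_descr is jump_descr
```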
py.path.local(pypydir).join('translator', 'c', 'src') -eci = ExternalCompilationInfo(separate_module_files= - [srcdir.join('profiling.c')]) +eci = ExternalCompilationInfo( + separate_module_files=[srcdir.join('profiling.c')], + export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling']) c_setup_profiling = rffi.llexternal('pypy_setup_profiling', [], lltype.Void, diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable from pypy.rlib.objectmodel import specialize @@ -387,6 +388,8 @@ "Float": "space.w_float", "Long": "space.w_long", "Complex": "space.w_complex", + "ByteArray": "space.w_bytearray", + "MemoryView": "space.gettypeobject(W_MemoryView.typedef)", "BaseObject": "space.w_object", 'None': 'space.type(space.w_None)', 'NotImplemented': 'space.type(space.w_NotImplemented)', diff --git a/pypy/module/cpyext/buffer.py b/pypy/module/cpyext/buffer.py --- a/pypy/module/cpyext/buffer.py +++ b/pypy/module/cpyext/buffer.py @@ -1,6 +1,36 @@ +from pypy.interpreter.error import OperationError from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, Py_buffer) +from pypy.module.cpyext.pyobject import PyObject + + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyObject_CheckBuffer(space, w_obj): + """Return 1 if obj supports the buffer interface otherwise 0.""" + return 0 # the bf_getbuffer field is never filled by cpyext + + at cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real], + rffi.INT_real, error=-1) +def PyObject_GetBuffer(space, w_obj, view, flags): + """Export obj 
into a Py_buffer, view. These arguments must + never be NULL. The flags argument is a bit field indicating what + kind of buffer the caller is prepared to deal with and therefore what + kind of buffer the exporter is allowed to return. The buffer interface + allows for complicated memory sharing possibilities, but some caller may + not be able to handle all the complexity but may want to see if the + exporter will let them take a simpler view to its memory. + + Some exporters may not be able to share memory in every possible way and + may need to raise errors to signal to some consumers that something is + just not possible. These errors should be a BufferError unless + there is another error that is actually causing the problem. The + exporter can use flags information to simplify how much of the + Py_buffer structure is filled in with non-default values and/or + raise an error if the object can't support a simpler view of its memory. + + 0 is returned on success and -1 on error.""" + raise OperationError(space.w_TypeError, space.wrap( + 'PyPy does not yet implement the new buffer interface')) @cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real, error=CANNOT_FAIL) def PyBuffer_IsContiguous(space, view, fortran): diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -123,10 +123,6 @@ typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *); typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **); -typedef int (*objobjproc)(PyObject *, PyObject *); -typedef int (*visitproc)(PyObject *, void *); -typedef int (*traverseproc)(PyObject *, visitproc, void *); - /* Py3k buffer interface */ typedef struct bufferinfo { void *buf; @@ -153,6 +149,41 @@ typedef int (*getbufferproc)(PyObject *, Py_buffer *, int); typedef void (*releasebufferproc)(PyObject *, Py_buffer *); + /* Flags for getting buffers */ +#define PyBUF_SIMPLE 0 
+#define PyBUF_WRITABLE 0x0001 +/* we used to include an E, backwards compatible alias */ +#define PyBUF_WRITEABLE PyBUF_WRITABLE +#define PyBUF_FORMAT 0x0004 +#define PyBUF_ND 0x0008 +#define PyBUF_STRIDES (0x0010 | PyBUF_ND) +#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) +#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) +#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) +#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE) +#define PyBUF_CONTIG_RO (PyBUF_ND) + +#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE) +#define PyBUF_STRIDED_RO (PyBUF_STRIDES) + +#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT) + +#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT) + + +#define PyBUF_READ 0x100 +#define PyBUF_WRITE 0x200 +#define PyBUF_SHADOW 0x400 +/* end Py3k buffer interface */ + +typedef int (*objobjproc)(PyObject *, PyObject *); +typedef int (*visitproc)(PyObject *, void *); +typedef int (*traverseproc)(PyObject *, visitproc, void *); + typedef struct { /* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all arguments are guaranteed to be of the object's type (modulo diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -5,7 +5,7 @@ struct _is; /* Forward */ typedef struct _is { - int _foo; + struct _is *next; } PyInterpreterState; typedef struct _ts { diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -2,7 +2,10 @@ cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) from pypy.rpython.lltypesystem import rffi, lltype -PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ())) 
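The composite `PyBUF_*` request flags added in the object.h hunk are plain bitwise ORs of the base flags; because `PyBUF_STRIDES` already includes `PyBUF_ND`, every strided request implicitly asks for shape information too. A small Python check of those relationships, with the constants copied from the header above:

```python
# Base buffer-request flags, values as in the object.h hunk above.
PyBUF_SIMPLE = 0
PyBUF_WRITABLE = 0x0001
PyBUF_FORMAT = 0x0004
PyBUF_ND = 0x0008
PyBUF_STRIDES = 0x0010 | PyBUF_ND   # strides imply shape

# Composite flags are bitwise combinations of the base flags.
PyBUF_CONTIG = PyBUF_ND | PyBUF_WRITABLE
PyBUF_RECORDS = PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT

# A PyBUF_RECORDS request therefore also carries the ND bit.
assert PyBUF_RECORDS & PyBUF_ND
assert PyBUF_CONTIG == 0x0009
assert PyBUF_RECORDS == 0x001d
```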
+PyInterpreterStateStruct = lltype.ForwardReference() +PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) +cpython_struct( + "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) @@ -54,7 +57,8 @@ class InterpreterState(object): def __init__(self, space): - self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True) + self.interpreter_state = lltype.malloc( + PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) def new_thread_state(self): capsule = ThreadStateCapsule() diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -34,141 +34,6 @@ @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyObject_CheckBuffer(space, obj): - """Return 1 if obj supports the buffer interface otherwise 0.""" - raise NotImplementedError - - at cpython_api([PyObject, Py_buffer, rffi.INT_real], rffi.INT_real, error=-1) -def PyObject_GetBuffer(space, obj, view, flags): - """Export obj into a Py_buffer, view. These arguments must - never be NULL. The flags argument is a bit field indicating what - kind of buffer the caller is prepared to deal with and therefore what - kind of buffer the exporter is allowed to return. The buffer interface - allows for complicated memory sharing possibilities, but some caller may - not be able to handle all the complexity but may want to see if the - exporter will let them take a simpler view to its memory. - - Some exporters may not be able to share memory in every possible way and - may need to raise errors to signal to some consumers that something is - just not possible. These errors should be a BufferError unless - there is another error that is actually causing the problem. 
The - exporter can use flags information to simplify how much of the - Py_buffer structure is filled in with non-default values and/or - raise an error if the object can't support a simpler view of its memory. - - 0 is returned on success and -1 on error. - - The following table gives possible values to the flags arguments. - - Flag - - Description - - PyBUF_SIMPLE - - This is the default flag state. The returned - buffer may or may not have writable memory. The - format of the data will be assumed to be unsigned - bytes. This is a "stand-alone" flag constant. It - never needs to be '|'d to the others. The exporter - will raise an error if it cannot provide such a - contiguous buffer of bytes. - - PyBUF_WRITABLE - - The returned buffer must be writable. If it is - not writable, then raise an error. - - PyBUF_STRIDES - - This implies PyBUF_ND. The returned - buffer must provide strides information (i.e. the - strides cannot be NULL). This would be used when - the consumer can handle strided, discontiguous - arrays. Handling strides automatically assumes - you can handle shape. The exporter can raise an - error if a strided representation of the data is - not possible (i.e. without the suboffsets). - - PyBUF_ND - - The returned buffer must provide shape - information. The memory will be assumed C-style - contiguous (last dimension varies the - fastest). The exporter may raise an error if it - cannot provide this kind of contiguous buffer. If - this is not given then shape will be NULL. - - PyBUF_C_CONTIGUOUS - PyBUF_F_CONTIGUOUS - PyBUF_ANY_CONTIGUOUS - - These flags indicate that the contiguity returned - buffer must be respectively, C-contiguous (last - dimension varies the fastest), Fortran contiguous - (first dimension varies the fastest) or either - one. All of these flags imply - PyBUF_STRIDES and guarantee that the - strides buffer info structure will be filled in - correctly. 
- - PyBUF_INDIRECT - - This flag indicates the returned buffer must have - suboffsets information (which can be NULL if no - suboffsets are needed). This can be used when - the consumer can handle indirect array - referencing implied by these suboffsets. This - implies PyBUF_STRIDES. - - PyBUF_FORMAT - - The returned buffer must have true format - information if this flag is provided. This would - be used when the consumer is going to be checking - for what 'kind' of data is actually stored. An - exporter should always be able to provide this - information if requested. If format is not - explicitly requested then the format must be - returned as NULL (which means 'B', or - unsigned bytes) - - PyBUF_STRIDED - - This is equivalent to (PyBUF_STRIDES | - PyBUF_WRITABLE). - - PyBUF_STRIDED_RO - - This is equivalent to (PyBUF_STRIDES). - - PyBUF_RECORDS - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_RECORDS_RO - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT). - - PyBUF_FULL - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_FULL_RO - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT). - - PyBUF_CONTIG - - This is equivalent to (PyBUF_ND | - PyBUF_WRITABLE). 
- - PyBUF_CONTIG_RO - - This is equivalent to (PyBUF_ND).""" raise NotImplementedError @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -37,6 +37,7 @@ def test_thread_state_interp(self, space, api): ts = api.PyThreadState_Get() assert ts.c_interp == api.PyInterpreterState_Head() + assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO) def test_basic_threadstate_dance(self, space, api): # Let extension modules call these functions, diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -9,7 +9,7 @@ appleveldefs = {} class Module(MixedModule): - applevel_name = 'numpypy' + applevel_name = '_numpypy' submodules = { 'pypy': PyPyModule @@ -48,6 +48,7 @@ 'int_': 'interp_boxes.W_LongBox', 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', + 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpypy +import _numpypy inf = float("inf") @@ -14,14 +14,14 @@ return mean(a) def identity(n, dtype=None): - a = numpypy.zeros((n,n), dtype=dtype) + a = _numpypy.zeros((n,n), dtype=dtype) for i in range(n): a[i][i] = 1 return a def mean(a, axis=None): if not hasattr(a, "mean"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.mean(axis) def sum(a,axis=None): @@ -50,17 +50,17 @@ ''' # TODO: add to doc (once it's implemented): cumsum : Cumulative sum of array elements. 
if not hasattr(a, "sum"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.sum(axis) def min(a, axis=None): if not hasattr(a, "min"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.min(axis) def max(a, axis=None): if not hasattr(a, "max"): - a = numpypy.array(a) + a = _numpypy.array(a) return a.max(axis) def arange(start, stop=None, step=1, dtype=None): @@ -71,9 +71,9 @@ stop = start start = 0 if dtype is None: - test = numpypy.array([start, stop, step, 0]) + test = _numpypy.array([start, stop, step, 0]) dtype = test.dtype - arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) + arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) i = start for j in range(arr.size): arr[j] = i @@ -114,5 +114,5 @@ you should assign the new shape to the shape attribute of the array ''' if not hasattr(a, 'reshape'): - a = numpypy.array(a) + a = _numpypy.array(a) return a.reshape(shape) diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -78,6 +78,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_pow = _binop_impl("power") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -170,6 +171,7 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __pow__ = interp2app(W_GenericBox.descr_pow), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), @@ -245,6 +247,7 @@ long_name = "int64" W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,), __module__ = "numpypy", + __new__ = interp2app(W_LongBox.descr__new__.im_func), ) W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef, diff --git 
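The app-level `arange` in app_numpy.py sizes its result with `math.ceil((stop - start) / step)` and fills it by accumulating `step`. The same logic as a self-contained sketch returning a plain list instead of an `_numpypy` array (the dtype handling is omitted):

```python
import math

def arange(start, stop=None, step=1):
    # One-argument form means arange(stop), as in the helper above.
    if stop is None:
        stop = start
        start = 0
    size = int(math.ceil((stop - start) / float(step)))
    arr = [0] * size
    i = start
    for j in range(size):
        arr[j] = i
        i += step
    return arr

assert arange(5) == [0, 1, 2, 3, 4]
assert arange(1, 8, 2) == [1, 3, 5, 7]
```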
a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -16,24 +16,28 @@ virtualizables=['frame'], reds=['result_size', 'frame', 'ri', 'self', 'result'], get_printable_location=signature.new_printable_location('numpy'), + name='numpy', ) all_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('all'), + name='numpy_all', ) any_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('any'), + name='numpy_any', ) slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['self', 'frame', 'arr'], get_printable_location=signature.new_printable_location('slice'), + name='numpy_slice', ) @@ -302,6 +306,7 @@ greens=['shapelen', 'sig'], reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'], get_printable_location=signature.new_printable_location(op_name), + name='numpy_' + op_name, ) def loop(self): sig = self.find_sig() @@ -574,6 +579,18 @@ w_denom = space.wrap(self.shape[dim]) return space.div(self.descr_sum_promote(space, w_dim), w_denom) + def descr_var(self, space): + # var = mean((values - mean(values)) ** 2) + w_res = self.descr_sub(space, self.descr_mean(space)) + assert isinstance(w_res, BaseArray) + w_res = w_res.descr_pow(space, space.wrap(2)) + assert isinstance(w_res, BaseArray) + return w_res.descr_mean(space) + + def descr_std(self, space): + # std(v) = sqrt(var(v)) + return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_nonzero(self, space): if self.size > 1: raise OperationError(space.w_ValueError, space.wrap( @@ -1254,6 +1271,8 @@ all = interp2app(BaseArray.descr_all), any = interp2app(BaseArray.descr_any), dot = 
interp2app(BaseArray.descr_dot), + var = interp2app(BaseArray.descr_var), + std = interp2app(BaseArray.descr_std), copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -14,6 +14,7 @@ virtualizables=["frame"], reds=["frame", "self", "dtype", "value", "obj"], get_printable_location=new_printable_location('reduce'), + name='numpy_reduce', ) axisreduce_driver = jit.JitDriver( diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -159,6 +159,9 @@ def _invent_array_numbering(self, arr, cache): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() + # this get_concrete never forces assembler. If we're here and array + # is not of a concrete class it means that we have a _forced_result, + # otherwise the signature would not match assert isinstance(concr, ConcreteArray) assert concr.dtype is self.dtype self.array_no = _add_ptr_to_cache(concr.storage, cache) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpypy import dtype + from _numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), long) assert a.dtype 
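The new `descr_var` and `descr_std` methods implement the population formulas noted in their comments: `var = mean((values - mean(values)) ** 2)` and `std = sqrt(var)`. A plain-Python sketch of the same computation on a list of floats:

```python
import math

def mean(values):
    return sum(values) / float(len(values))

def var(values):
    # var = mean((values - mean(values)) ** 2), as in descr_var above
    m = mean(values)
    return mean([(v - m) ** 2 for v in values])

def std(values):
    # std(v) = sqrt(var(v)), as in descr_std above
    return math.sqrt(var(values))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
assert var(data) == 4.0
assert std(data) == 2.0
```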
         is dtype(long)

     def test_repr_str(self):
-        from numpypy import dtype
+        from _numpypy import dtype
         assert repr(dtype) == ""
         d = dtype('?')
@@ -36,7 +36,7 @@
         assert str(d) == "bool"

     def test_bool_array(self):
-        from numpypy import array, False_, True_
+        from _numpypy import array, False_, True_
         a = array([0, 1, 2, 2.5], dtype='?')
         assert a[0] is False_
@@ -44,7 +44,7 @@
             assert a[i] is True_

     def test_copy_array_with_dtype(self):
-        from numpypy import array, False_, True_, int64
+        from _numpypy import array, False_, True_, int64
         a = array([0, 1, 2, 3], dtype=long)
         # int on 64-bit, long in 32-bit
@@ -58,35 +58,35 @@
         assert b[0] is False_

     def test_zeros_bool(self):
-        from numpypy import zeros, False_
+        from _numpypy import zeros, False_
         a = zeros(10, dtype=bool)
         for i in range(10):
             assert a[i] is False_

     def test_ones_bool(self):
-        from numpypy import ones, True_
+        from _numpypy import ones, True_
         a = ones(10, dtype=bool)
         for i in range(10):
             assert a[i] is True_

     def test_zeros_long(self):
-        from numpypy import zeros, int64
+        from _numpypy import zeros, int64
         a = zeros(10, dtype=long)
         for i in range(10):
             assert isinstance(a[i], int64)
         assert a[1] == 0

     def test_ones_long(self):
-        from numpypy import ones, int64
+        from _numpypy import ones, int64
         a = ones(10, dtype=long)
         for i in range(10):
             assert isinstance(a[i], int64)
         assert a[1] == 1

     def test_overflow(self):
-        from numpypy import array, dtype
+        from _numpypy import array, dtype
         assert array([128], 'b')[0] == -128
         assert array([256], 'B')[0] == 0
         assert array([32768], 'h')[0] == -32768
@@ -98,7 +98,7 @@
         raises(OverflowError, "array([2**64], 'Q')")

     def test_bool_binop_types(self):
-        from numpypy import array, dtype
+        from _numpypy import array, dtype
         types = [
             '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd'
         ]
@@ -107,7 +107,7 @@
             assert (a + array([0], t)).dtype is dtype(t)

     def test_binop_types(self):
-        from numpypy import array, dtype
+        from _numpypy import array, dtype
         tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'),
                  ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'),
                  ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'),
@@ -129,7 +129,7 @@
             assert (array([1], d1) + array([1], d2)).dtype is dtype(dout)

     def test_add_int8(self):
-        from numpypy import array, dtype
+        from _numpypy import array, dtype
         a = array(range(5), dtype="int8")
         b = a + a
@@ -138,7 +138,7 @@
             assert b[i] == i * 2

     def test_add_int16(self):
-        from numpypy import array, dtype
+        from _numpypy import array, dtype
         a = array(range(5), dtype="int16")
         b = a + a
@@ -147,7 +147,7 @@
             assert b[i] == i * 2

     def test_add_uint32(self):
-        from numpypy import array, dtype
+        from _numpypy import array, dtype
         a = array(range(5), dtype="I")
         b = a + a
@@ -156,19 +156,28 @@
             assert b[i] == i * 2

     def test_shape(self):
-        from numpypy import dtype
+        from _numpypy import dtype
         assert dtype(long).shape == ()

     def test_cant_subclass(self):
-        from numpypy import dtype
+        from _numpypy import dtype
         # You can't subclass dtype
         raises(TypeError, type, "Foo", (dtype,), {})

+    def test_new(self):
+        import _numpypy as np
+        assert np.int_(4) == 4
+        assert np.float_(3.4) == 3.4
+
+    def test_pow(self):
+        from _numpypy import int_
+        assert int_(4) ** 2 == 16
+

 class AppTestTypes(BaseNumpyAppTest):
     def test_abstract_types(self):
-        import numpypy as numpy
+        import _numpypy as numpy
         raises(TypeError, numpy.generic, 0)
         raises(TypeError, numpy.number, 0)
         raises(TypeError, numpy.integer, 0)
@@ -181,7 +190,7 @@
         raises(TypeError, numpy.inexact, 0)

     def test_bool(self):
-        import numpypy as numpy
+        import _numpypy as numpy
         assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object]
         assert numpy.bool_(3) is numpy.True_
@@ -196,7 +205,7 @@
         assert numpy.bool_("False") is numpy.True_

     def test_int8(self):
-        import numpypy as numpy
+        import _numpypy as numpy
         assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger,
                                     numpy.integer, numpy.number,
                                     numpy.generic, object]
@@ -218,7 +227,7 @@
         assert numpy.int8('128') == -128

     def test_uint8(self):
-        import numpypy as numpy
+        import _numpypy as numpy
         assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger,
                                      numpy.integer, numpy.number,
                                      numpy.generic, object]
@@ -241,7 +250,7 @@
         assert numpy.uint8('256') == 0

     def test_int16(self):
-        import numpypy as numpy
+        import _numpypy as numpy
         x = numpy.int16(3)
         assert x == 3
@@ -251,7 +260,7 @@
         assert numpy.int16('32768') == -32768

     def test_uint16(self):
-        import numpypy as numpy
+        import _numpypy as numpy
         assert numpy.uint16(65535) == 65535
         assert numpy.uint16(65536) == 0
@@ -260,7 +269,7 @@

     def test_int32(self):
         import sys
-        import numpypy as numpy
+        import _numpypy as numpy
         x = numpy.int32(23)
         assert x == 23
@@ -275,7 +284,7 @@

     def test_uint32(self):
         import sys
-        import numpypy as numpy
+        import _numpypy as numpy
         assert numpy.uint32(10) == 10
@@ -286,14 +295,14 @@
         assert numpy.uint32('4294967296') == 0

     def test_int_(self):
-        import numpypy as numpy
+        import _numpypy as numpy
         assert numpy.int_ is numpy.dtype(int).type
         assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger,
                                     numpy.integer, numpy.number,
                                     numpy.generic, int, object]

     def test_int64(self):
         import sys
-        import numpypy as numpy
+        import _numpypy as numpy
         if sys.maxint == 2 ** 63 - 1:
             assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger,
                                          numpy.integer, numpy.number,
                                          numpy.generic, int, object]
@@ -315,7 +324,7 @@

     def test_uint64(self):
         import sys
-        import numpypy as numpy
+        import _numpypy as numpy
         assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger,
                                       numpy.integer, numpy.number,
                                       numpy.generic, object]
@@ -330,7 +339,7 @@
         raises(OverflowError, numpy.uint64(18446744073709551616))

     def test_float32(self):
-        import numpypy as numpy
+        import _numpypy as numpy
         assert numpy.float32.mro() == [numpy.float32, numpy.floating,
                                        numpy.inexact, numpy.number,
                                        numpy.generic, object]
@@ -339,7 +348,7 @@
         raises(ValueError, numpy.float32, '23.2df')

     def test_float64(self):
-        import numpypy as numpy
+        import _numpypy as numpy
         assert numpy.float64.mro() == [numpy.float64, numpy.floating,
                                        numpy.inexact, numpy.number,
                                        numpy.generic, float, object]
@@ -352,7 +361,7 @@
         raises(ValueError, numpy.float64, '23.2df')

     def test_subclass_type(self):
-        import numpypy as numpy
+        import _numpypy as numpy

         class X(numpy.float64):
             def m(self):
diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py
--- a/pypy/module/micronumpy/test/test_module.py
+++ b/pypy/module/micronumpy/test/test_module.py
@@ -3,33 +3,33 @@

 class AppTestNumPyModule(BaseNumpyAppTest):
     def test_mean(self):
-        from numpypy import array, mean
+        from _numpypy import array, mean
         assert mean(array(range(5))) == 2.0
         assert mean(range(5)) == 2.0

     def test_average(self):
-        from numpypy import array, average
+        from _numpypy import array, average
         assert average(range(10)) == 4.5
         assert average(array(range(10))) == 4.5

     def test_sum(self):
-        from numpypy import array, sum
+        from _numpypy import array, sum
         assert sum(range(10)) == 45
         assert sum(array(range(10))) == 45

     def test_min(self):
-        from numpypy import array, min
+        from _numpypy import array, min
         assert min(range(10)) == 0
         assert min(array(range(10))) == 0

     def test_max(self):
-        from numpypy import array, max
+        from _numpypy import array, max
         assert max(range(10)) == 9
         assert max(array(range(10))) == 9

     def test_constants(self):
         import math
-        from numpypy import inf, e, pi
+        from _numpypy import inf, e, pi
         assert type(inf) is float
         assert inf == float("inf")
         assert e == math.e
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -161,7 +161,7 @@

 class AppTestNumArray(BaseNumpyAppTest):
     def test_ndarray(self):
-        from numpypy import ndarray, array, dtype
+        from _numpypy import ndarray, array, dtype
         assert type(ndarray) is type
         assert type(array) is not type
@@ -176,12 +176,12 @@
         assert a.dtype is dtype(int)

     def test_type(self):
-        from numpypy import array
+        from _numpypy import array
         ar = array(range(5))
         assert type(ar) is type(ar + ar)

     def test_ndim(self):
-        from numpypy import array
+        from _numpypy import array
         x = array(0.2)
         assert x.ndim == 0
         x = array([1, 2])
@@ -190,12 +190,12 @@
         assert x.ndim == 2
         x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
         assert x.ndim == 3
-        # numpy actually raises an AttributeError, but numpypy raises an
+        # numpy actually raises an AttributeError, but _numpypy raises an
         # TypeError
         raises(TypeError, 'x.ndim = 3')

     def test_init(self):
-        from numpypy import zeros
+        from _numpypy import zeros
         a = zeros(15)
         # Check that storage was actually zero'd.
         assert a[10] == 0.0
@@ -204,7 +204,7 @@
         assert a[13] == 5.3

     def test_size(self):
-        from numpypy import array
+        from _numpypy import array
         assert array(3).size == 1
         a = array([1, 2, 3])
         assert a.size == 3
@@ -215,13 +215,13 @@
         Test that empty() works.
         """
-        from numpypy import empty
+        from _numpypy import empty
         a = empty(2)
         a[1] = 1.0
         assert a[1] == 1.0

     def test_ones(self):
-        from numpypy import ones
+        from _numpypy import ones
         a = ones(3)
         assert len(a) == 3
         assert a[0] == 1
@@ -230,7 +230,7 @@
         assert a[2] == 4

     def test_copy(self):
-        from numpypy import arange, array
+        from _numpypy import arange, array
         a = arange(5)
         b = a.copy()
         for i in xrange(5):
@@ -251,12 +251,12 @@
         assert (b == a).all()

     def test_iterator_init(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert a[3] == 3

     def test_getitem(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         raises(IndexError, "a[5]")
         a = a + a
@@ -265,7 +265,7 @@
         raises(IndexError, "a[-6]")

     def test_getitem_tuple(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         raises(IndexError, "a[(1,2)]")
         for i in xrange(5):
@@ -275,7 +275,7 @@
             assert a[i] == b[i]

     def test_setitem(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         a[-1] = 5.0
         assert a[4] == 5.0
@@ -283,7 +283,7 @@
         raises(IndexError, "a[-6] = 3.0")

     def test_setitem_tuple(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         raises(IndexError, "a[(1,2)] = [0,1]")
         for i in xrange(5):
@@ -294,7 +294,7 @@
             assert a[i] == i

     def test_setslice_array(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = array(range(2))
         a[1:4:2] = b
@@ -305,7 +305,7 @@
         assert b[1] == 0.

     def test_setslice_of_slice_array(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = zeros(5)
         a[::2] = array([9., 10., 11.])
         assert a[0] == 9.
@@ -324,7 +324,7 @@
         assert a[0] == 3.

     def test_setslice_list(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5), float)
         b = [0., 1.]
         a[1:4:2] = b
@@ -332,14 +332,14 @@
         assert a[3] == 1.

     def test_setslice_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5), float)
         a[1:4:2] = 0.
         assert a[1] == 0.
         assert a[3] == 0.

     def test_scalar(self):
-        from numpypy import array, dtype
+        from _numpypy import array, dtype
         a = array(3)
         raises(IndexError, "a[0]")
         raises(IndexError, "a[0] = 5")
@@ -348,13 +348,13 @@
         assert a.dtype is dtype(int)

     def test_len(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert len(a) == 5
         assert len(a + a) == 5

     def test_shape(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert a.shape == (5,)
         b = a + a
@@ -363,7 +363,7 @@
         assert c.shape == (3,)

     def test_set_shape(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array([])
         a.shape = []
         a = array(range(12))
@@ -383,7 +383,7 @@
         a.shape = (1,)

     def test_reshape(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array(range(12))
         exc = raises(ValueError, "b = a.reshape((3, 10))")
         assert str(exc.value) == "total size of new array must be unchanged"
@@ -396,7 +396,7 @@
         a.shape = (12, 2)

     def test_slice_reshape(self):
-        from numpypy import zeros, arange
+        from _numpypy import zeros, arange
         a = zeros((4, 2, 3))
         b = a[::2, :, :]
         b.shape = (2, 6)
@@ -432,13 +432,13 @@
         raises(ValueError, arange(10).reshape, (5, -1, -1))

     def test_reshape_varargs(self):
-        from numpypy import arange
+        from _numpypy import arange
         z = arange(96).reshape(12, -1)
         y = z.reshape(4, 3, 8)
         assert y.shape == (4, 3, 8)

     def test_add(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a + a
         for i in range(5):
@@ -451,7 +451,7 @@
             assert c[i] == bool(a[i] + b[i])

     def test_add_other(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = array([i for i in reversed(range(5))])
         c = a + b
@@ -459,20 +459,20 @@
             assert c[i] == 4

     def test_add_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a + 5
         for i in range(5):
             assert b[i] == i + 5

     def test_radd(self):
-        from numpypy import array
+        from _numpypy import array
         r = 3 + array(range(3))
         for i in range(3):
             assert r[i] == i + 3

     def test_add_list(self):
-        from numpypy import array, ndarray
+        from _numpypy import array, ndarray
         a = array(range(5))
         b = list(reversed(range(5)))
         c = a + b
@@ -481,14 +481,14 @@
             assert c[i] == 4

     def test_subtract(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a - a
         for i in range(5):
             assert b[i] == 0

     def test_subtract_other(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = array([1, 1, 1, 1, 1])
         c = a - b
@@ -496,34 +496,34 @@
             assert c[i] == i - 1

     def test_subtract_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a - 5
         for i in range(5):
             assert b[i] == i - 5

     def test_scalar_subtract(self):
-        from numpypy import int32
+        from _numpypy import int32
         assert int32(2) - 1 == 1
         assert 1 - int32(2) == -1

     def test_mul(self):
-        import numpypy
+        import _numpypy

-        a = numpypy.array(range(5))
+        a = _numpypy.array(range(5))
         b = a * a
         for i in range(5):
             assert b[i] == i * i

-        a = numpypy.array(range(5), dtype=bool)
+        a = _numpypy.array(range(5), dtype=bool)
         b = a * a
-        assert b.dtype is numpypy.dtype(bool)
-        assert b[0] is numpypy.False_
+        assert b.dtype is _numpypy.dtype(bool)
+        assert b[0] is _numpypy.False_
         for i in range(1, 5):
-            assert b[i] is numpypy.True_
+            assert b[i] is _numpypy.True_

     def test_mul_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a * 5
         for i in range(5):
@@ -531,7 +531,7 @@

     def test_div(self):
         from math import isnan
-        from numpypy import array, dtype, inf
+        from _numpypy import array, dtype, inf
         a = array(range(1, 6))
         b = a / a
@@ -563,7 +563,7 @@
         assert c[2] == -inf

     def test_div_other(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = array([2, 2, 2, 2, 2], float)
         c = a / b
@@ -571,14 +571,14 @@
             assert c[i] == i / 2.0

     def test_div_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a / 5.0
         for i in range(5):
             assert b[i] == i / 5.0

     def test_pow(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5), float)
         b = a ** a
         for i in range(5):
@@ -588,7 +588,7 @@
         assert (a ** 2 == a * a).all()

     def test_pow_other(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5), float)
         b = array([2, 2, 2, 2, 2])
         c = a ** b
@@ -596,14 +596,14 @@
             assert c[i] == i ** 2

     def test_pow_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5), float)
         b = a ** 2
         for i in range(5):
             assert b[i] == i ** 2

     def test_mod(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(1, 6))
         b = a % a
         for i in range(5):
@@ -616,7 +616,7 @@
             assert b[i] == 1

     def test_mod_other(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = array([2, 2, 2, 2, 2])
         c = a % b
@@ -624,14 +624,14 @@
             assert c[i] == i % 2

     def test_mod_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a % 2
         for i in range(5):
             assert b[i] == i % 2

     def test_pos(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([1., -2., 3., -4., -5.])
         b = +a
         for i in range(5):
@@ -642,7 +642,7 @@
             assert a[i] == i

     def test_neg(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([1., -2., 3., -4., -5.])
         b = -a
         for i in range(5):
@@ -653,7 +653,7 @@
             assert a[i] == -i

     def test_abs(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([1., -2., 3., -4., -5.])
         b = abs(a)
         for i in range(5):
@@ -664,7 +664,7 @@
             assert a[i + 5] == abs(i)

     def test_auto_force(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a - 1
         a[2] = 3
@@ -678,7 +678,7 @@
         assert c[1] == 4

     def test_getslice(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         s = a[1:5]
         assert len(s) == 4
@@ -692,7 +692,7 @@
         assert s[0] == 5

     def test_getslice_step(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(10))
         s = a[1:9:2]
         assert len(s) == 4
@@ -700,7 +700,7 @@
             assert s[i] == a[2 * i + 1]

     def test_slice_update(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         s = a[0:3]
         s[1] = 10
@@ -710,7 +710,7 @@

     def test_slice_invaidate(self):
         # check that slice shares invalidation list with
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         s = a[0:2]
         b = array([10, 11])
@@ -724,7 +724,7 @@
         assert d[1] == 12

     def test_mean(self):
-        from numpypy import array,mean
+        from _numpypy import array, mean
         a = array(range(5))
         assert a.mean() == 2.0
         assert a[:4].mean() == 1.5
@@ -735,7 +735,7 @@
         assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all()

     def test_sum(self):
-        from numpypy import array, arange
+        from _numpypy import array
         a = array(range(5))
         assert a.sum() == 10.0
         assert a[:4].sum() == 6.0
@@ -769,8 +769,8 @@
         assert (a.reshape(1,-1).sum(1) == 5460)

     def test_identity(self):
-        from numpypy import identity, array
-        from numpypy import int32, float64, dtype
+        from _numpypy import identity, array
+        from _numpypy import int32, float64, dtype
         a = identity(0)
         assert len(a) == 0
         assert a.dtype == dtype('float64')
@@ -789,32 +789,32 @@
         assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all()

     def test_prod(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(1, 6))
         assert a.prod() == 120.0
         assert a[:4].prod() == 24.0

     def test_max(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
         assert a.max() == 5.7
         b = array([])
         raises(ValueError, "b.max()")

     def test_max_add(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
         assert (a + a).max() == 11.4

     def test_min(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
         assert a.min() == -3.0
         b = array([])
         raises(ValueError, "b.min()")

     def test_argmax(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
         r = a.argmax()
         assert r == 2
@@ -835,14 +835,14 @@
         assert a.argmax() == 2

     def test_argmin(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([-1.2, 3.4, 5.7, -3.0, 2.7])
         assert a.argmin() == 3
         b = array([])
         raises(ValueError, "b.argmin()")

     def test_all(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert a.all() == False
         a[0] = 3.0
@@ -851,7 +851,7 @@
         assert b.all() == True

     def test_any(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array(range(5))
         assert a.any() == True
         b = zeros(5)
@@ -860,7 +860,7 @@
         assert c.any() == False

     def test_dot(self):
-        from numpypy import array, dot
+        from _numpypy import array, dot
         a = array(range(5))
         assert a.dot(a) == 30.0
@@ -870,14 +870,14 @@
         assert (dot(5, [1, 2, 3]) == [5, 10, 15]).all()

     def test_dot_constant(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         b = a.dot(2.5)
         for i in xrange(5):
             assert b[i] == 2.5 * a[i]

     def test_dtype_guessing(self):
-        from numpypy import array, dtype, float64, int8, bool_
+        from _numpypy import array, dtype, float64, int8, bool_
         assert array([True]).dtype is dtype(bool)
         assert array([True, False]).dtype is dtype(bool)
@@ -894,7 +894,7 @@

     def test_comparison(self):
         import operator
-        from numpypy import array, dtype
+        from _numpypy import array, dtype
         a = array(range(5))
         b = array(range(5), float)
@@ -913,7 +913,7 @@
             assert c[i] == func(b[i], 3)

     def test_nonzero(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([1, 2])
         raises(ValueError, bool, a)
         raises(ValueError, bool, a == a)
@@ -923,7 +923,7 @@
         assert not bool(array([0]))

     def test_slice_assignment(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         a[::-1] = a
         assert (a == [0, 1, 2, 1, 0]).all()
@@ -933,8 +933,8 @@
         assert (a == [8, 6, 4, 2, 0]).all()

     def test_debug_repr(self):
-        from numpypy import zeros, sin
-        from numpypy.pypy import debug_repr
+        from _numpypy import zeros, sin
+        from _numpypy.pypy import debug_repr
         a = zeros(1)
         assert debug_repr(a) == 'Array'
         assert debug_repr(a + a) == 'Call2(add, Array, Array)'
@@ -948,8 +948,8 @@
         assert debug_repr(b) == 'Array'

     def test_remove_invalidates(self):
-        from numpypy import array
-        from numpypy.pypy import remove_invalidates
+        from _numpypy import array
+        from _numpypy.pypy import remove_invalidates
         a = array([1, 2, 3])
         b = a + a
         remove_invalidates(a)
@@ -957,7 +957,7 @@
         assert b[0] == 28

     def test_virtual_views(self):
-        from numpypy import arange
+        from _numpypy import arange
         a = arange(15)
         c = (a + a)
         d = c[::2]
@@ -975,7 +975,7 @@
         assert b[1] == 2

     def test_tolist_scalar(self):
-        from numpypy import int32, bool_
+        from _numpypy import int32, bool_
         x = int32(23)
         assert x.tolist() == 23
         assert type(x.tolist()) is int
@@ -983,13 +983,13 @@
         assert y.tolist() is True

     def test_tolist_zerodim(self):
-        from numpypy import array
+        from _numpypy import array
         x = array(3)
         assert x.tolist() == 3
         assert type(x.tolist()) is int

     def test_tolist_singledim(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(range(5))
         assert a.tolist() == [0, 1, 2, 3, 4]
         assert type(a.tolist()[0]) is int
@@ -997,41 +997,55 @@
         assert b.tolist() == [0.2, 0.4, 0.6]

     def test_tolist_multidim(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4]])
         assert a.tolist() == [[1, 2], [3, 4]]

     def test_tolist_view(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4]])
         assert (a + a).tolist() == [[2, 4], [6, 8]]

     def test_tolist_slice(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[17.1, 27.2], [40.3, 50.3]])
         assert a[:, 0].tolist() == [17.1, 40.3]
         assert a[0].tolist() == [17.1, 27.2]

+    def test_var(self):
+        from _numpypy import array
+        a = array(range(10))
+        assert a.var() == 8.25
+        a = array([5.0])
+        assert a.var() == 0.0
+
+    def test_std(self):
+        from _numpypy import array
+        a = array(range(10))
+        assert a.std() == 2.8722813232690143
+        a = array([5.0])
+        assert a.std() == 0.0
+

 class AppTestMultiDim(BaseNumpyAppTest):
     def test_init(self):
-        import numpypy
-        a = numpypy.zeros((2, 2))
+        import _numpypy
+        a = _numpypy.zeros((2, 2))
         assert len(a) == 2

     def test_shape(self):
-        import numpypy
-        assert numpypy.zeros(1).shape == (1,)
-        assert numpypy.zeros((2, 2)).shape == (2, 2)
-        assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2)
-        assert numpypy.array([[1], [2], [3]]).shape == (3, 1)
-        assert len(numpypy.zeros((3, 1, 2))) == 3
-        raises(TypeError, len, numpypy.zeros(()))
-        raises(ValueError, numpypy.array, [[1, 2], 3])
+        import _numpypy
+        assert _numpypy.zeros(1).shape == (1,)
+        assert _numpypy.zeros((2, 2)).shape == (2, 2)
+        assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2)
+        assert _numpypy.array([[1], [2], [3]]).shape == (3, 1)
+        assert len(_numpypy.zeros((3, 1, 2))) == 3
+        raises(TypeError, len, _numpypy.zeros(()))
+        raises(ValueError, _numpypy.array, [[1, 2], 3])

     def test_getsetitem(self):
-        import numpypy
-        a = numpypy.zeros((2, 3, 1))
+        import _numpypy
+        a = _numpypy.zeros((2, 3, 1))
         raises(IndexError, a.__getitem__, (2, 0, 0))
         raises(IndexError, a.__getitem__, (0, 3, 0))
         raises(IndexError, a.__getitem__, (0, 0, 1))
@@ -1042,8 +1056,8 @@
         assert a[1, -1, 0] == 3

     def test_slices(self):
-        import numpypy
-        a = numpypy.zeros((4, 3, 2))
+        import _numpypy
+        a = _numpypy.zeros((4, 3, 2))
         raises(IndexError, a.__getitem__, (4,))
         raises(IndexError, a.__getitem__, (3, 3))
         raises(IndexError, a.__getitem__, (slice(None), 3))
@@ -1076,51 +1090,51 @@
         assert a[1][2][1] == 15

     def test_init_2(self):
-        import numpypy
-        raises(ValueError, numpypy.array, [[1], 2])
-        raises(ValueError, numpypy.array, [[1, 2], [3]])
-        raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]])
-        raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]])
-        a = numpypy.array([[1, 2], [4, 5]])
+        import _numpypy
+        raises(ValueError, _numpypy.array, [[1], 2])
+        raises(ValueError, _numpypy.array, [[1, 2], [3]])
+        raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]])
+        raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]])
+        a = _numpypy.array([[1, 2], [4, 5]])
         assert a[0, 1] == 2
         assert a[0][1] == 2
-        a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]]))
+        a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]]))
         assert (a[0, 1] == [3, 4]).all()

     def test_setitem_slice(self):
-        import numpypy
-        a = numpypy.zeros((3, 4))
+        import _numpypy
+        a = _numpypy.zeros((3, 4))
         a[1] = [1, 2, 3, 4]
         assert a[1, 2] == 3
         raises(TypeError, a[1].__setitem__, [1, 2, 3])
-        a = numpypy.array([[1, 2], [3, 4]])
+        a = _numpypy.array([[1, 2], [3, 4]])
         assert (a == [[1, 2], [3, 4]]).all()
-        a[1] = numpypy.array([5, 6])
+        a[1] = _numpypy.array([5, 6])
         assert (a == [[1, 2], [5, 6]]).all()
-        a[:, 1] = numpypy.array([8, 10])
+        a[:, 1] = _numpypy.array([8, 10])
         assert (a == [[1, 8], [5, 10]]).all()
-        a[0, :: -1] = numpypy.array([11, 12])
+        a[0, :: -1] = _numpypy.array([11, 12])
         assert (a == [[12, 11], [5, 10]]).all()

     def test_ufunc(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4], [5, 6]])
         assert ((a + a) == \
             array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all()

     def test_getitem_add(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
         assert (a + a)[1, 1] == 8

     def test_ufunc_negative(self):
-        from numpypy import array, negative
+        from _numpypy import array, negative
         a = array([[1, 2], [3, 4]])
         b = negative(a + a)
         assert (b == [[-2, -4], [-6, -8]]).all()

     def test_getitem_3(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12],
                    [13, 14]])
         b = a[::2]
@@ -1131,12 +1145,12 @@
         assert c[1][1] == 12

     def test_multidim_ones(self):
-        from numpypy import ones
+        from _numpypy import ones
         a = ones((1, 2, 3))
         assert a[0, 1, 2] == 1.0

     def test_multidim_setslice(self):
-        from numpypy import zeros, ones
+        from _numpypy import zeros, ones
         a = zeros((3, 3))
         b = ones((3, 3))
         a[:, 1:3] = b[:, 1:3]
@@ -1147,21 +1161,21 @@
         assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all()

     def test_broadcast_ufunc(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4], [5, 6]])
         b = array([5, 6])
         c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]])
         assert c.all()

     def test_broadcast_setslice(self):
-        from numpypy import zeros, ones
+        from _numpypy import zeros, ones
         a = zeros((10, 10))
         b = ones(10)
         a[:, :] = b
         assert a[3, 5] == 1

     def test_broadcast_shape_agreement(self):
-        from numpypy import zeros, array
+        from _numpypy import zeros, array
         a = zeros((3, 1, 3))
         b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32)))
         c = ((a + b) == [b, b, b])
@@ -1175,7 +1189,7 @@
         assert c.all()

     def test_broadcast_scalar(self):
-        from numpypy import zeros
+        from _numpypy import zeros
         a = zeros((4, 5), 'd')
         a[:, 1] = 3
         assert a[2, 1] == 3
@@ -1186,14 +1200,14 @@
         assert a[3, 2] == 0

     def test_broadcast_call2(self):
-        from numpypy import zeros, ones
+        from _numpypy import zeros, ones
         a = zeros((4, 1, 5))
         b = ones((4, 3, 5))
         b[:] = (a + a)
         assert (b == zeros((4, 3, 5))).all()

     def test_broadcast_virtualview(self):
-        from numpypy import arange, zeros
+        from _numpypy import arange, zeros
         a = arange(8).reshape([2, 2, 2])
         b = (a + a)[1, 1]
         c = zeros((2, 2, 2))
@@ -1201,13 +1215,13 @@
         assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all()

     def test_argmax(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2], [3, 4], [5, 6]])
         assert a.argmax() == 5
         assert a[:2, ].argmax() == 3

     def test_broadcast_wrong_shapes(self):
-        from numpypy import zeros
+        from _numpypy import zeros
         a = zeros((4, 3, 2))
         b = zeros((4, 2))
         exc = raises(ValueError, lambda: a + b)
@@ -1215,7 +1229,7 @@
                " together with shapes (4,3,2) (4,2)"

     def test_reduce(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
         assert a.sum() == (13 * 12) / 2
         b = a[1:, 1::2]
@@ -1223,7 +1237,7 @@
         assert c.sum() == (6 + 8 + 10 + 12) * 2

     def test_transpose(self):
-        from numpypy import array
+        from _numpypy import array
         a = array(((range(3), range(3, 6)),
                    (range(6, 9), range(9, 12)),
                    (range(12, 15), range(15, 18)),
@@ -1242,7 +1256,7 @@
         assert(b[:, 0] == a[0, :]).all()

     def test_flatiter(self):
-        from numpypy import array, flatiter
+        from _numpypy import array, flatiter
         a = array([[10, 30], [40, 60]])
         f_iter = a.flat
         assert f_iter.next() == 10
@@ -1257,23 +1271,23 @@
         assert s == 140

     def test_flatiter_array_conv(self):
-        from numpypy import array, dot
+        from _numpypy import array, dot
         a = array([1, 2, 3])
         assert dot(a.flat, a.flat) == 14

     def test_flatiter_varray(self):
-        from numpypy import ones
+        from _numpypy import ones
         a = ones((2, 2))
         assert list(((a + a).flat)) == [2, 2, 2, 2]

     def test_slice_copy(self):
-        from numpypy import zeros
+        from _numpypy import zeros
         a = zeros((10, 10))
         b = a[0].copy()
         assert (b == zeros(10)).all()

     def test_array_interface(self):
-        from numpypy import array
+        from _numpypy import array
         a = array([1, 2, 3])
         i = a.__array_interface__
         assert isinstance(i['data'][0], int)
@@ -1295,7 +1309,7 @@

     def test_fromstring(self):
         import sys
-        from numpypy import fromstring, array, uint8, float32, int32
+        from _numpypy import fromstring, array, uint8, float32, int32

         a = fromstring(self.data)
         for i in range(4):
@@ -1359,7 +1373,7 @@
         assert (u == [1, 0]).all()

     def test_fromstring_types(self):
-        from numpypy import (fromstring, int8, int16, int32, int64, uint8,
+        from _numpypy import (fromstring, int8, int16, int32, int64, uint8,
             uint16, uint32, float32, float64)

         a = fromstring('\xFF', dtype=int8)
@@ -1384,7 +1398,7 @@
         assert j[0] == 12

     def test_fromstring_invalid(self):
-        from numpypy import fromstring, uint16, uint8, int32
+        from _numpypy import fromstring, uint16, uint8, int32
         #default dtype is 64-bit float, so 3 bytes should fail
         raises(ValueError, fromstring, "\x01\x02\x03")
         #3 bytes is not modulo 2 bytes (int16)
@@ -1395,7 +1409,7 @@

 class AppTestRepr(BaseNumpyAppTest):
     def test_repr(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         int_size = array(5).dtype.itemsize
         a = array(range(5), float)
         assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])"
@@ -1423,7 +1437,7 @@
         assert repr(a) == "array(0.2)"

     def test_repr_multi(self):
-        from numpypy import arange, zeros
+        from _numpypy import arange, zeros
         a = zeros((3, 4))
         assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0],
@@ -1448,7 +1462,7 @@
        [500, 1001]])'''

     def test_repr_slice(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array(range(5), float)
         b = a[1::2]
         assert repr(b) == "array([1.0, 3.0])"
@@ -1463,7 +1477,7 @@
         assert repr(b) == "array([], shape=(0, 5), dtype=int16)"

     def test_str(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array(range(5), float)
         assert str(a) == "[0.0 1.0 2.0 3.0 4.0]"
         assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]"
@@ -1496,7 +1510,7 @@
         assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]'

     def test_str_slice(self):
-        from numpypy import array, zeros
+        from _numpypy import array, zeros
         a = array(range(5), float)
         b = a[1::2]
         assert str(b) == "[1.0 3.0]"
@@ -1512,7 +1526,7 @@

 class AppTestRanges(BaseNumpyAppTest):
     def test_arange(self):
-        from numpypy import arange, array, dtype
+        from _numpypy import arange, array, dtype
         a = arange(3)
         assert (a == [0, 1, 2]).all()
         assert a.dtype is dtype(int)
@@ -1534,7 +1548,7 @@

 class AppTestRanges(BaseNumpyAppTest):
     def test_app_reshape(self):
-        from numpypy import arange, array, dtype, reshape
+        from _numpypy import arange, array, dtype, reshape
         a = arange(12)
         b = reshape(a, (3, 4))
         assert b.shape == (3, 4)
diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py
--- a/pypy/module/micronumpy/test/test_ufuncs.py
+++ b/pypy/module/micronumpy/test/test_ufuncs.py
@@ -4,14 +4,14 @@

 class AppTestUfuncs(BaseNumpyAppTest):
     def test_ufunc_instance(self):
-        from numpypy import add, ufunc
+        from _numpypy import add, ufunc

         assert isinstance(add, ufunc)
         assert repr(add) == ""
         assert repr(ufunc) == ""

     def test_ufunc_attrs(self):
-        from numpypy import add, multiply, sin
+        from _numpypy import add, multiply, sin

         assert add.identity == 0
         assert multiply.identity == 1
@@ -22,7 +22,7 @@
         assert sin.nin == 1

     def test_wrong_arguments(self):
-        from numpypy import add, sin
+        from _numpypy import add, sin

         raises(ValueError, add, 1)
         raises(TypeError, add, 1, 2, 3)
@@ -30,14 +30,14 @@
         raises(ValueError, sin)

     def test_single_item(self):
-        from numpypy import negative, sign, minimum
+        from _numpypy import negative, sign, minimum

         assert negative(5.0) == -5.0
         assert sign(-0.0) == 0.0
         assert minimum(2.0, 3.0) == 2.0

     def test_sequence(self):
-        from numpypy import array, ndarray, negative, minimum
+        from _numpypy import array, ndarray, negative, minimum
         a = array(range(3))
         b = [2.0, 1.0, 0.0]
         c = 1.0
@@ -71,7 +71,7 @@
             assert min_c_b[i] == min(b[i], c)

     def test_negative(self):
-        from numpypy import array, negative
+        from _numpypy import array, negative

         a = array([-5.0, 0.0, 1.0])
         b = negative(a)
@@ -86,7 +86,7 @@
         assert negative(a + a)[3] == -6

     def test_abs(self):
-        from numpypy import array, absolute
+        from _numpypy import array, absolute

         a = array([-5.0, -0.0, 1.0])
         b = absolute(a)
@@ -94,7 +94,7 @@
             assert b[i] == abs(a[i])

     def test_add(self):
-        from numpypy import array, add
+        from _numpypy import array, add

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -103,7 +103,7 @@
             assert c[i] == a[i] + b[i]

     def test_divide(self):
-        from numpypy import array, divide
+        from _numpypy import array, divide

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -114,7 +114,7 @@
         assert (divide(array([-10]), array([2])) == array([-5])).all()

     def test_fabs(self):
-        from numpypy import array, fabs
+        from _numpypy import array, fabs
         from math import fabs as math_fabs

         a = array([-5.0, -0.0, 1.0])
@@ -123,7 +123,7 @@
             assert b[i] == math_fabs(a[i])

     def test_minimum(self):
-        from numpypy import array, minimum
+        from _numpypy import array, minimum

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -132,7 +132,7 @@
             assert c[i] == min(a[i], b[i])

     def test_maximum(self):
-        from numpypy import array, maximum
+        from _numpypy import array, maximum

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -145,7 +145,7 @@
         assert isinstance(x, (int, long))

     def test_multiply(self):
-        from numpypy import array, multiply
+        from _numpypy import array, multiply

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -154,7 +154,7 @@
             assert c[i] == a[i] * b[i]

     def test_sign(self):
-        from numpypy import array, sign, dtype
+        from _numpypy import array, sign, dtype

         reference = [-1.0, 0.0, 0.0, 1.0]
         a = array([-5.0, -0.0, 0.0, 6.0])
@@ -173,7 +173,7 @@
         assert a[1] == 0

     def test_reciporocal(self):
-        from numpypy import array, reciprocal
+        from _numpypy import array, reciprocal

         reference = [-0.2, float("inf"), float("-inf"), 2.0]
         a = array([-5.0, 0.0, -0.0, 0.5])
@@ -182,7 +182,7 @@
             assert b[i] == reference[i]

     def test_subtract(self):
-        from numpypy import array, subtract
+        from _numpypy import array, subtract

         a = array([-5.0, -0.0, 1.0])
         b = array([ 3.0, -2.0,-3.0])
@@ -191,7 +191,7 @@
             assert c[i] == a[i] - b[i]

     def test_floor(self):
-        from numpypy import array, floor
+        from _numpypy import array, floor

         reference = [-2.0, -1.0, 0.0, 1.0, 1.0]
         a = array([-1.4, -1.0, 0.0, 1.0, 1.4])
@@ -200,7 +200,7 @@
             assert b[i] == reference[i]

     def test_copysign(self):
-        from numpypy import array, copysign
+        from _numpypy import array, copysign

         reference = [5.0, -0.0, 0.0, -6.0]
         a = array([-5.0, 0.0, 0.0, 6.0])
@@ -216,7
+216,7 @@ def test_exp(self): import math - from numpypy import array, exp + from _numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -230,7 +230,7 @@ def test_sin(self): import math - from numpypy import array, sin + from _numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -243,7 +243,7 @@ def test_cos(self): import math - from numpypy import array, cos + from _numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -252,7 +252,7 @@ def test_tan(self): import math - from numpypy import array, tan + from _numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -262,7 +262,7 @@ def test_arcsin(self): import math - from numpypy import array, arcsin + from _numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -276,7 +276,7 @@ def test_arccos(self): import math - from numpypy import array, arccos + from _numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -291,7 +291,7 @@ def test_arctan(self): import math - from numpypy import array, arctan + from _numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) @@ -304,7 +304,7 @@ def test_arcsinh(self): import math - from numpypy import arcsinh, inf + from _numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -312,7 +312,7 @@ def test_arctanh(self): import math - from numpypy import arctanh + from _numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == arctanh(v) @@ -323,7 +323,7 @@ def test_sqrt(self): import math - from numpypy import sqrt + from _numpypy import sqrt nan, inf = float("nan"), float("inf") data = [1, 2, 3, inf] @@ -333,13 +333,14 @@ assert math.isnan(sqrt(nan)) def test_reduce_errors(self): - from numpypy import sin, add 
+ from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) raises(ValueError, add.reduce, 1) - def test_reduce1D(self): - from numpypy import add, maximum + def test_reduce_1d(self): + from _numpypy import add, maximum + assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 assert maximum.reduce([1, 2, 3]) == 3 @@ -353,7 +354,7 @@ def test_comparisons(self): import operator - from numpypy import equal, not_equal, less, less_equal, greater, greater_equal + from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -7,16 +7,21 @@ interpleveldefs = { 'set_param': 'interp_jit.set_param', 'residual_call': 'interp_jit.residual_call', - 'set_compile_hook': 'interp_jit.set_compile_hook', - 'DebugMergePoint': 'interp_resop.W_DebugMergePoint', + 'set_compile_hook': 'interp_resop.set_compile_hook', + 'set_optimize_hook': 'interp_resop.set_optimize_hook', + 'set_abort_hook': 'interp_resop.set_abort_hook', + 'ResOperation': 'interp_resop.WrappedOp', + 'Box': 'interp_resop.WrappedBox', } def setup_after_space_initialization(self): # force the __extend__ hacks to occur early from pypy.module.pypyjit.interp_jit import pypyjitdriver + from pypy.module.pypyjit.policy import pypy_hooks # add the 'defaults' attribute from pypy.rlib.jit import PARAMETERS space = self.space pypyjitdriver.space = space w_obj = space.wrap(PARAMETERS) space.setattr(space.wrap(self), space.wrap('defaults'), w_obj) + pypy_hooks.space = space diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -13,11 +13,7 @@ from pypy.interpreter.pycode import PyCode, CO_GENERATOR from pypy.interpreter.pyframe import PyFrame from pypy.interpreter.pyopcode import 
ExitFrame -from pypy.interpreter.gateway import unwrap_spec from opcode import opmap -from pypy.rlib.nonconst import NonConstant -from pypy.jit.metainterp.resoperation import rop -from pypy.module.pypyjit.interp_resop import debug_merge_point_from_boxes PyFrame._virtualizable2_ = ['last_instr', 'pycode', 'valuestackdepth', 'locals_stack_w[*]', @@ -51,72 +47,19 @@ def should_unroll_one_iteration(next_instr, is_being_profiled, bytecode): return (bytecode.co_flags & CO_GENERATOR) != 0 -def wrap_oplist(space, logops, operations): - list_w = [] - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - list_w.append(space.wrap(debug_merge_point_from_boxes( - op.getarglist()))) - else: - list_w.append(space.wrap(logops.repr_of_resop(op))) - return list_w - class PyPyJitDriver(JitDriver): reds = ['frame', 'ec'] greens = ['next_instr', 'is_being_profiled', 'pycode'] virtualizables = ['frame'] - def on_compile(self, logger, looptoken, operations, type, next_instr, - is_being_profiled, ll_pycode): - from pypy.rpython.annlowlevel import cast_base_ptr_to_instance - - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - pycode = cast_base_ptr_to_instance(PyCode, ll_pycode) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap(type), - space.newtuple([pycode, - space.wrap(next_instr), - space.wrap(is_being_profiled)]), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, 
operations) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap('bridge'), - space.wrap(n), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - pypyjitdriver = PyPyJitDriver(get_printable_location = get_printable_location, get_jitcell_at = get_jitcell_at, set_jitcell_at = set_jitcell_at, confirm_enter_jit = confirm_enter_jit, can_never_inline = can_never_inline, should_unroll_one_iteration = - should_unroll_one_iteration) + should_unroll_one_iteration, + name='pypyjit') class __extend__(PyFrame): @@ -223,34 +166,3 @@ '''For testing. Invokes callable(...), but without letting the JIT follow the call.''' return space.call_args(w_callable, __args__) - -class Cache(object): - in_recursion = False - - def __init__(self, space): - self.w_compile_hook = space.w_None - -def set_compile_hook(space, w_hook): - """ set_compile_hook(hook) - - Set a compiling hook that will be called each time a loop is compiled. - The hook will be called with the following signature: - hook(merge_point_type, loop_type, greenkey or guard_number, operations) - - for now merge point type is always `main` - - loop_type can be either `loop` `entry_bridge` or `bridge` - in case loop is not `bridge`, greenkey will be a set of constants - for jit merge point. in case it's `main` it'll be a tuple - (code, offset, is_being_profiled) - - Note that jit hook is not reentrant. It means that if the code - inside the jit hook is itself jitted, it will get compiled, but the - jit hook won't be called for that. 
- - XXX write down what else - """ - cache = space.fromcache(Cache) - cache.w_compile_hook = w_hook - cache.in_recursion = NonConstant(False) - return space.w_None diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,41 +1,197 @@ -from pypy.interpreter.typedef import TypeDef, interp_attrproperty +from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import unwrap_spec, interp2app +from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode -from pypy.rpython.lltypesystem import lltype -from pypy.rpython.annlowlevel import cast_base_ptr_to_instance +from pypy.interpreter.error import OperationError +from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.annlowlevel import cast_base_ptr_to_instance, hlstr from pypy.rpython.lltypesystem.rclass import OBJECT +from pypy.jit.metainterp.resoperation import rop, AbstractResOp +from pypy.rlib.nonconst import NonConstant +from pypy.rlib import jit_hooks -class W_DebugMergePoint(Wrappable): - """ A class representing debug_merge_point JIT operation +class Cache(object): + in_recursion = False + + def __init__(self, space): + self.w_compile_hook = space.w_None + self.w_abort_hook = space.w_None + self.w_optimize_hook = space.w_None + +def wrap_greenkey(space, jitdriver, greenkey, greenkey_repr): + if greenkey is None: + return space.w_None + jitdriver_name = jitdriver.name + if jitdriver_name == 'pypyjit': + next_instr = greenkey[0].getint() + is_being_profiled = greenkey[1].getint() + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + greenkey[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, ll_code) + return space.newtuple([space.wrap(pycode), space.wrap(next_instr), + space.newbool(bool(is_being_profiled))]) + else: + 
return space.wrap(greenkey_repr) + +def set_compile_hook(space, w_hook): + """ set_compile_hook(hook) + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop`, `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where assembler starts, + can be accessed via ctypes, assembler_length is the length of compiled + asm + + Note that the jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. """ + cache = space.fromcache(Cache) + cache.w_compile_hook = w_hook + cache.in_recursion = NonConstant(False) - def __init__(self, mp_no, offset, pycode): - self.mp_no = mp_no +def set_optimize_hook(space, w_hook): + """ set_optimize_hook(hook) + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows adding additional + optimizations at the Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop`, `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + Note that the jit hook is not reentrant. 
It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + The return value will be the resulting list of operations, or None + """ + cache = space.fromcache(Cache) + cache.w_optimize_hook = w_hook + cache.in_recursion = NonConstant(False) + +def set_abort_hook(space, w_hook): + """ set_abort_hook(hook) + + Set a hook (callable) that will be called each time tracing is + aborted for some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for the abort; see the documentation for set_compile_hook + for descriptions of the other arguments. + """ + cache = space.fromcache(Cache) + cache.w_abort_hook = w_hook + cache.in_recursion = NonConstant(False) + +def wrap_oplist(space, logops, operations, ops_offset=None): + l_w = [] + for op in operations: + if ops_offset is None: + ofs = -1 + else: + ofs = ops_offset.get(op, 0) + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) + return l_w + +class WrappedBox(Wrappable): + """ A class representing a single box + """ + def __init__(self, llbox): + self.llbox = llbox + + def descr_getint(self, space): + return space.wrap(jit_hooks.box_getint(self.llbox)) + +@unwrap_spec(no=int) +def descr_new_box(space, w_tp, no): + return WrappedBox(jit_hooks.boxint_new(no)) + +WrappedBox.typedef = TypeDef( + 'Box', + __new__ = interp2app(descr_new_box), + getint = interp2app(WrappedBox.descr_getint), +) + +@unwrap_spec(num=int, offset=int, repr=str, res=WrappedBox) +def descr_new_resop(space, w_tp, num, w_args, res, offset=-1, + repr=''): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + if res is None: + llres = jit_hooks.emptyval() + else: + llres = res.llbox + return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) + +class WrappedOp(Wrappable): + """ A class representing a single ResOperation, wrapped nicely + """ 
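All three `set_*_hook` functions in this diff follow the same pattern: the hook is stored in a per-space `Cache`, and an `in_recursion` flag keeps a hook from firing while another hook call is still on the stack — the non-reentrancy the docstrings warn about. A minimal stand-alone sketch of that guard, with hypothetical names and no PyPy object space involved:

```python
class HookCache:
    """Holds the registered hook plus a re-entrancy flag."""
    def __init__(self):
        self.hook = None
        self.in_recursion = False

def fire_hook(cache, *args):
    # Mirror the diff's pattern: do nothing if no hook is registered,
    # or if we are already inside a hook call.
    if cache.hook is None or cache.in_recursion:
        return
    cache.in_recursion = True
    try:
        cache.hook(*args)
    finally:
        cache.in_recursion = False

calls = []
cache = HookCache()

def hook(name, reason):
    calls.append((name, reason))
    # A hook that (directly, or via jitted code) triggers another hook
    # event must not re-enter itself:
    fire_hook(cache, name, "nested")

cache.hook = hook
fire_hook(cache, "pypyjit", "ABORT_TOO_LONG")
assert calls == [("pypyjit", "ABORT_TOO_LONG")]  # the nested call was suppressed
```

The `try`/`finally` matters: the real hooks reset `in_recursion` even when the applevel hook raises, so one failing hook does not silence all later ones.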
+ def __init__(self, op, offset, repr_of_resop): + self.op = op self.offset = offset - self.pycode = pycode + self.repr_of_resop = repr_of_resop def descr_repr(self, space): - return space.wrap('DebugMergePoint()') + return space.wrap(self.repr_of_resop) - at unwrap_spec(mp_no=int, offset=int, pycode=PyCode) -def new_debug_merge_point(space, w_tp, mp_no, offset, pycode): - return W_DebugMergePoint(mp_no, offset, pycode) + def descr_num(self, space): + return space.wrap(jit_hooks.resop_getopnum(self.op)) -def debug_merge_point_from_boxes(boxes): - mp_no = boxes[0].getint() - offset = boxes[2].getint() - llcode = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), - boxes[4].getref_base()) - pycode = cast_base_ptr_to_instance(PyCode, llcode) - assert pycode is not None - return W_DebugMergePoint(mp_no, offset, pycode) + def descr_name(self, space): + return space.wrap(hlstr(jit_hooks.resop_getopname(self.op))) -W_DebugMergePoint.typedef = TypeDef( - 'DebugMergePoint', - __new__ = interp2app(new_debug_merge_point), - __doc__ = W_DebugMergePoint.__doc__, - __repr__ = interp2app(W_DebugMergePoint.descr_repr), - code = interp_attrproperty('pycode', W_DebugMergePoint), + @unwrap_spec(no=int) + def descr_getarg(self, space, no): + return WrappedBox(jit_hooks.resop_getarg(self.op, no)) + + @unwrap_spec(no=int, box=WrappedBox) + def descr_setarg(self, space, no, box): + jit_hooks.resop_setarg(self.op, no, box.llbox) + + def descr_getresult(self, space): + return WrappedBox(jit_hooks.resop_getresult(self.op)) + + def descr_setresult(self, space, w_box): + box = space.interp_w(WrappedBox, w_box) + jit_hooks.resop_setresult(self.op, box.llbox) + +WrappedOp.typedef = TypeDef( + 'ResOperation', + __doc__ = WrappedOp.__doc__, + __new__ = interp2app(descr_new_resop), + __repr__ = interp2app(WrappedOp.descr_repr), + num = GetSetProperty(WrappedOp.descr_num), + name = GetSetProperty(WrappedOp.descr_name), + getarg = interp2app(WrappedOp.descr_getarg), + setarg = 
interp2app(WrappedOp.descr_setarg), + result = GetSetProperty(WrappedOp.descr_getresult, + WrappedOp.descr_setresult) ) +WrappedOp.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -1,4 +1,112 @@ from pypy.jit.codewriter.policy import JitPolicy +from pypy.rlib.jit import JitHookInterface +from pypy.rlib import jit_hooks +from pypy.interpreter.error import OperationError +from pypy.jit.metainterp.jitprof import counter_names +from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\ + WrappedOp + +class PyPyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_abort_hook): + cache.in_recursion = True + try: + try: + space.call_function(cache.w_abort_hook, + space.wrap(jitdriver.name), + wrap_greenkey(space, jitdriver, + greenkey, greenkey_repr), + space.wrap(counter_names[reason])) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_abort_hook) + finally: + cache.in_recursion = False + + def after_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._compile_hook(debug_info, w_greenkey) + + def after_compile_bridge(self, debug_info): + self._compile_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) + + def before_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._optimize_hook(debug_info, w_greenkey) + + def before_compile_bridge(self, debug_info): + self._optimize_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) + + def _compile_hook(self, debug_info, w_arg): + space = self.space + cache = 
space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_compile_hook): + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations, + debug_info.asminfo.ops_offset) + cache.in_recursion = True + try: + try: + jd_name = debug_info.get_jitdriver().name + asminfo = debug_info.asminfo + space.call_function(cache.w_compile_hook, + space.wrap(jd_name), + space.wrap(debug_info.type), + w_arg, + space.newlist(list_w), + space.wrap(asminfo.asmaddr), + space.wrap(asminfo.asmlen)) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + cache.in_recursion = False + + def _optimize_hook(self, debug_info, w_arg): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_optimize_hook): + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations) + cache.in_recursion = True + try: + try: + jd_name = debug_info.get_jitdriver().name + w_res = space.call_function(cache.w_optimize_hook, + space.wrap(jd_name), + space.wrap(debug_info.type), + w_arg, + space.newlist(list_w)) + if space.is_w(w_res, space.w_None): + return + l = [] + for w_item in space.listview(w_res): + item = space.interp_w(WrappedOp, w_item) + l.append(jit_hooks._cast_to_resop(item.op)) + del debug_info.operations[:] # modifying operations above is + # probably not a great idea since types may not work + # and we'll end up with half-working list and + # a segfault/fatal RPython error + for elem in l: + debug_info.operations.append(elem) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + cache.in_recursion = False + +pypy_hooks = PyPyJitIface() class PyPyJitPolicy(JitPolicy): @@ -12,12 +120,16 @@ modname == 'thread.os_thread'): return True if '.' 
in modname: - modname, _ = modname.split('.', 1) + modname, rest = modname.split('.', 1) + else: + rest = '' if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions', 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', 'mmap', 'marshal']: + if modname == 'pypyjit' and 'interp_resop' in rest: + return False return True return False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -1,22 +1,40 @@ import py from pypy.conftest import gettestobjspace, option +from pypy.interpreter.gateway import interp2app from pypy.interpreter.pycode import PyCode -from pypy.interpreter.gateway import interp2app -from pypy.jit.metainterp.history import JitCellToken -from pypy.jit.metainterp.resoperation import ResOperation, rop +from pypy.jit.metainterp.history import JitCellToken, ConstInt, ConstPtr +from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.logger import Logger from pypy.rpython.annlowlevel import (cast_instance_to_base_ptr, cast_base_ptr_to_instance) from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.lltypesystem.rclass import OBJECT from pypy.module.pypyjit.interp_jit import pypyjitdriver +from pypy.module.pypyjit.policy import pypy_hooks from pypy.jit.tool.oparser import parse from pypy.jit.metainterp.typesystem import llhelper +from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG +from pypy.rlib.jit import JitDebugInfo, AsmInfo + +class MockJitDriverSD(object): + class warmstate(object): + @staticmethod + def get_location_str(boxes): + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + boxes[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, ll_code) + return pycode.co_name + + jitdriver = pypyjitdriver + class MockSD(object): class cpu(object): 
ts = llhelper + jitdrivers_sd = [MockJitDriverSD] + class AppTestJitHook(object): def setup_class(cls): if option.runappdirect: @@ -24,9 +42,9 @@ space = gettestobjspace(usemodules=('pypyjit',)) cls.space = space w_f = space.appexec([], """(): - def f(): + def function(): pass - return f + return function """) cls.w_f = w_f ll_code = cast_instance_to_base_ptr(w_f.code) @@ -34,41 +52,73 @@ logger = Logger(MockSD()) oplist = parse(""" - [i1, i2] + [i1, i2, p2] i3 = int_add(i1, i2) debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0)) + guard_nonnull(p2) [] guard_true(i3) [] """, namespace={'ptr0': code_gcref}).operations + greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] + offset = {} + for i, op in enumerate(oplist): + if i != 1: + offset[op] = i + + di_loop = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop_optimize = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop.asminfo = AsmInfo(offset, 0, 0) + di_bridge = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'bridge', fail_descr_no=0) + di_bridge.asminfo = AsmInfo(offset, 0, 0) def interp_on_compile(): - pypyjitdriver.on_compile(logger, JitCellToken(), oplist, 'loop', - 0, False, ll_code) + di_loop.oplist = cls.oplist + pypy_hooks.after_compile(di_loop) def interp_on_compile_bridge(): - pypyjitdriver.on_compile_bridge(logger, JitCellToken(), oplist, 0) + pypy_hooks.after_compile_bridge(di_bridge) + + def interp_on_optimize(): + di_loop_optimize.oplist = cls.oplist + pypy_hooks.before_compile(di_loop_optimize) + + def interp_on_abort(): + pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey, + 'blah') cls.w_on_compile = space.wrap(interp2app(interp_on_compile)) cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) + cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) + cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) + 
cls.orig_oplist = oplist + + def setup_method(self, meth): + self.__class__.oplist = self.orig_oplist[:] def test_on_compile(self): import pypyjit all = [] - def hook(*args): - assert args[0] == 'main' - assert args[1] in ['loop', 'bridge'] - all.append(args[2:]) + def hook(name, looptype, tuple_or_guard_no, ops, asmstart, asmlen): + all.append((name, looptype, tuple_or_guard_no, ops)) self.on_compile() pypyjit.set_compile_hook(hook) assert not all self.on_compile() assert len(all) == 1 - assert all[0][0][0].co_name == 'f' - assert all[0][0][1] == 0 - assert all[0][0][2] == False - assert len(all[0][1]) == 3 - assert 'int_add' in all[0][1][0] + elem = all[0] + assert elem[0] == 'pypyjit' + assert elem[2][0].co_name == 'function' + assert elem[2][1] == 0 + assert elem[2][2] == False + assert len(elem[3]) == 4 + int_add = elem[3][0] + #assert int_add.name == 'int_add' + assert int_add.num == self.int_add_num self.on_compile_bridge() assert len(all) == 2 pypyjit.set_compile_hook(None) @@ -116,11 +166,48 @@ pypyjit.set_compile_hook(hook) self.on_compile() - dmp = l[0][3][1] - assert isinstance(dmp, pypyjit.DebugMergePoint) - assert dmp.code is self.f.func_code + op = l[0][3][1] + assert isinstance(op, pypyjit.ResOperation) + assert 'function' in repr(op) + + def test_on_abort(self): + import pypyjit + l = [] + + def hook(jitdriver_name, greenkey, reason): + l.append((jitdriver_name, reason)) + + pypyjit.set_abort_hook(hook) + self.on_abort() + assert l == [('pypyjit', 'ABORT_TOO_LONG')] + + def test_on_optimize(self): + import pypyjit + l = [] + + def hook(name, looptype, tuple_or_guard_no, ops, *args): + l.append(ops) + + def optimize_hook(name, looptype, tuple_or_guard_no, ops): + return [] + + pypyjit.set_compile_hook(hook) + pypyjit.set_optimize_hook(optimize_hook) + self.on_optimize() + self.on_compile() + assert l == [[]] def test_creation(self): - import pypyjit - dmp = pypyjit.DebugMergePoint(0, 0, self.f.func_code) - assert dmp.code is self.f.func_code + from 
pypyjit import Box, ResOperation + + op = ResOperation(self.int_add_num, [Box(1), Box(3)], Box(4)) + assert op.num == self.int_add_num + assert op.name == 'int_add' + box = op.getarg(0) + assert box.getint() == 1 + box2 = op.result + assert box2.getint() == 4 + op.setarg(0, box2) + assert op.getarg(0).getint() == 4 + op.result = box + assert op.result.getint() == 1 diff --git a/pypy/module/pypyjit/test/test_policy.py b/pypy/module/pypyjit/test/test_policy.py --- a/pypy/module/pypyjit/test/test_policy.py +++ b/pypy/module/pypyjit/test/test_policy.py @@ -52,6 +52,7 @@ for modname in 'pypyjit', 'signal', 'micronumpy', 'math', 'imp': assert pypypolicy.look_inside_pypy_module(modname) assert pypypolicy.look_inside_pypy_module(modname + '.foo') + assert not pypypolicy.look_inside_pypy_module('pypyjit.interp_resop') def test_see_jit_module(): assert pypypolicy.look_inside_pypy_module('pypyjit.interp_jit') diff --git a/pypy/module/pypyjit/test/test_ztranslation.py b/pypy/module/pypyjit/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/pypyjit/test/test_ztranslation.py @@ -0,0 +1,5 @@ + +from pypy.objspace.fake.checkmodule import checkmodule + +def test_pypyjit_translates(): + checkmodule('pypyjit') diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py --- a/pypy/module/sys/__init__.py +++ b/pypy/module/sys/__init__.py @@ -42,7 +42,7 @@ 'argv' : 'state.get(space).w_argv', 'py3kwarning' : 'space.w_False', 'warnoptions' : 'state.get(space).w_warnoptions', - 'builtin_module_names' : 'state.w_None', + 'builtin_module_names' : 'space.w_None', 'pypy_getudir' : 'state.pypy_getudir', # not translated 'pypy_initial_path' : 'state.pypy_initial_path', diff --git a/pypy/objspace/fake/checkmodule.py b/pypy/objspace/fake/checkmodule.py --- a/pypy/objspace/fake/checkmodule.py +++ b/pypy/objspace/fake/checkmodule.py @@ -1,8 +1,10 @@ from pypy.objspace.fake.objspace import FakeObjSpace, W_Root +from pypy.config.pypyoption import get_pypy_config 
def checkmodule(modname): - space = FakeObjSpace() + config = get_pypy_config(translating=True) + space = FakeObjSpace(config) mod = __import__('pypy.module.%s' % modname, None, None, ['__doc__']) # force computation and record what we wrap module = mod.Module(space, W_Root()) diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -93,9 +93,9 @@ class FakeObjSpace(ObjSpace): - def __init__(self): + def __init__(self, config=None): self._seen_extras = [] - ObjSpace.__init__(self) + ObjSpace.__init__(self, config=config) def float_w(self, w_obj): is_root(w_obj) @@ -135,6 +135,9 @@ def newfloat(self, x): return w_some_obj() + def newcomplex(self, x, y): + return w_some_obj() + def marshal_w(self, w_obj): "NOT_RPYTHON" raise NotImplementedError @@ -215,6 +218,10 @@ expected_length = 3 return [w_some_obj()] * expected_length + def unpackcomplex(self, w_complex): + is_root(w_complex) + return 1.1, 2.2 + def allocate_instance(self, cls, w_subtype): is_root(w_subtype) return instantiate(cls) @@ -232,6 +239,11 @@ def exec_(self, *args, **kwds): pass + def createexecutioncontext(self): + ec = ObjSpace.createexecutioncontext(self) + ec._py_repr = None + return ec + # ---------- def translates(self, func=None, argtypes=None, **kwds): @@ -267,18 +279,21 @@ ObjSpace.ExceptionTable + ['int', 'str', 'float', 'long', 'tuple', 'list', 'dict', 'unicode', 'complex', 'slice', 'bool', - 'type', 'basestring']): + 'type', 'basestring', 'object']): setattr(FakeObjSpace, 'w_' + name, w_some_obj()) # for (name, _, arity, _) in ObjSpace.MethodTable: args = ['w_%d' % i for i in range(arity)] + params = args[:] d = {'is_root': is_root, 'w_some_obj': w_some_obj} + if name in ('get',): + params[-1] += '=None' exec compile2("""\ def meth(self, %s): %s return w_some_obj() - """ % (', '.join(args), + """ % (', '.join(params), '; '.join(['is_root(%s)' % arg for arg in args]))) in d meth = 
func_with_new_name(d['meth'], name) setattr(FakeObjSpace, name, meth) @@ -301,9 +316,12 @@ pass FakeObjSpace.default_compiler = FakeCompiler() -class FakeModule(object): +class FakeModule(Wrappable): + def __init__(self): + self.w_dict = w_some_obj() def get(self, name): name + "xx" # check that it's a string return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.builtin = FakeModule() diff --git a/pypy/objspace/fake/test/test_objspace.py b/pypy/objspace/fake/test/test_objspace.py --- a/pypy/objspace/fake/test/test_objspace.py +++ b/pypy/objspace/fake/test/test_objspace.py @@ -40,7 +40,7 @@ def test_constants(self): space = self.space space.translates(lambda: (space.w_None, space.w_True, space.w_False, - space.w_int, space.w_str, + space.w_int, space.w_str, space.w_object, space.w_TypeError)) def test_wrap(self): @@ -72,3 +72,9 @@ def test_newlist(self): self.space.newlist([W_Root(), W_Root()]) + + def test_default_values(self): + # the __get__ method takes either 2 or 3 arguments + space = self.space + space.translates(lambda: (space.get(W_Root(), W_Root()), + space.get(W_Root(), W_Root(), W_Root()))) diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -360,12 +363,36 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. 
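The fallback described in this new comment — when the RPython-level size of `arg` differs from libffi's `c_size`, copy the value one byte at a time in host byte order — can be sketched in plain Python. This is a hypothetical helper writing into a `bytearray` rather than a raw `ll_buf`:

```python
import sys

def push_int_bytes(value, c_size, byteorder=sys.byteorder):
    """Copy 'value' into a c_size-byte buffer, least-significant byte
    first on little-endian hosts, most-significant byte first on
    big-endian hosts -- the same loops as in push_arg_as_ffiptr."""
    buf = bytearray(c_size)
    if byteorder == 'little':
        for i in range(c_size):
            buf[i] = value & 0xFF
            value >>= 8
    elif byteorder == 'big':
        for i in range(c_size - 1, -1, -1):
            buf[i] = value & 0xFF
            value >>= 8
    else:
        raise AssertionError("unknown byte order")
    return bytes(buf)

assert push_int_bytes(0x1234, 4, 'little') == b'\x34\x12\x00\x00'
assert push_int_bytes(0x1234, 4, 'big') == b'\x00\x00\x12\x34'
```

In modern Python this is just `value.to_bytes(c_size, byteorder)`; RPython spells the loop out because it works directly on a raw character buffer.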
TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. + assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -6,18 +6,24 @@ from pypy.rlib.objectmodel import CDefinedIntSymbolic, keepalive_until_here, specialize from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.extregistry import ExtRegistryEntry -from pypy.tool.sourcetools import func_with_new_name DEBUG_ELIDABLE_FUNCTIONS = False def elidable(func): - """ Decorate a function as "trace-elidable". This means precisely that: + """ Decorate a function as "trace-elidable". Usually this means simply that + the function is constant-foldable, i.e. is pure and has no side-effects. + + In some situations it is ok to use this decorator if the function *has* + side effects, as long as these side-effects are idempotent. A typical + example for this would be a cache. 
+ + To be totally precise: (1) the result of the call should not change if the arguments are the same (same numbers or same pointers) (2) it's fine to remove the call completely if we can guess the result - according to rule 1 + according to rule 1 (3) the function call can be moved around by optimizer, but only so it'll be called earlier and not later. @@ -386,6 +392,19 @@ class JitHintError(Exception): """Inconsistency in the JIT hints.""" +PARAMETER_DOCS = { + 'threshold': 'number of times a loop has to run for it to become hot', + 'function_threshold': 'number of times a function must run for it to become traced from start', + 'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge', + 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TOO_LONG', + 'inlining': 'inline python functions or not (1/0)', + 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate', + 'retrace_limit': 'how many times we can try retracing before giving up', + 'max_retrace_guards': 'number of extra guards a retrace can cause', + 'max_unroll_loops': 'number of extra unrollings a loop can cause', + 'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY' + } + PARAMETERS = {'threshold': 1039, # just above 1024, prime 'function_threshold': 1619, # slightly more than one above, also prime 'trace_eagerness': 200, @@ -394,6 +413,7 @@ 'loop_longevity': 1000, 'retrace_limit': 5, 'max_retrace_guards': 15, + 'max_unroll_loops': 4, 'enable_opts': 'all', } unroll_parameters = unrolling_iterable(PARAMETERS.items()) @@ -410,13 +430,16 @@ active = True # if set to False, this JitDriver is ignored virtualizables = [] + name = 'jitdriver' def __init__(self, greens=None, reds=None, virtualizables=None, get_jitcell_at=None, set_jitcell_at=None, get_printable_location=None, confirm_enter_jit=None, - can_never_inline=None, should_unroll_one_iteration=None): + can_never_inline=None, 
should_unroll_one_iteration=None, + name='jitdriver'): if greens is not None: self.greens = greens + self.name = name if reds is not None: self.reds = reds if not hasattr(self, 'greens') or not hasattr(self, 'reds'): @@ -450,23 +473,6 @@ # special-cased by ExtRegistryEntry pass - def on_compile(self, logger, looptoken, operations, type, *greenargs): - """ A hook called when loop is compiled. Overwrite - for your own jitdriver if you want to do something special, like - call applevel code - """ - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - """ A hook called when a bridge is compiled. Overwrite - for your own jitdriver if you want to do something special - """ - - # note: if you overwrite this functions with the above signature it'll - # work, but the *greenargs is different for each jitdriver, so we - # can't share the same methods - del on_compile - del on_compile_bridge - def _make_extregistryentries(self): # workaround: we cannot declare ExtRegistryEntries for functions # used as methods of a frozen object, but we can attach the @@ -628,7 +634,6 @@ def specialize_call(self, hop, **kwds_i): # XXX to be complete, this could also check that the concretetype # of the variables are the same for each of the calls. - from pypy.rpython.error import TyperError from pypy.rpython.lltypesystem import lltype driver = self.instance.im_self greens_v = [] @@ -741,6 +746,105 @@ return hop.genop('jit_marker', vlist, resulttype=lltype.Void) +class AsmInfo(object): + """ An addition to JitDebugInfo concerning assembler. Attributes: + + ops_offset - dict of offsets of operations or None + asmaddr - (int) raw address of assembler block + asmlen - assembler block length + """ + def __init__(self, ops_offset, asmaddr, asmlen): + self.ops_offset = ops_offset + self.asmaddr = asmaddr + self.asmlen = asmlen + +class JitDebugInfo(object): + """ An object representing debug info. 
Attributes meanings: + + greenkey - a list of green boxes or None for bridge + logger - an instance of jit.metainterp.logger.LogOperations + type - either 'loop', 'entry bridge' or 'bridge' + looptoken - description of a loop + fail_descr_no - number of failing descr for bridges, -1 otherwise + asminfo - extra assembler information + """ + + asminfo = None + def __init__(self, jitdriver_sd, logger, looptoken, operations, type, + greenkey=None, fail_descr_no=-1): + self.jitdriver_sd = jitdriver_sd + self.logger = logger + self.looptoken = looptoken + self.operations = operations + self.type = type + if type == 'bridge': + assert fail_descr_no != -1 + else: + assert greenkey is not None + self.greenkey = greenkey + self.fail_descr_no = fail_descr_no + + def get_jitdriver(self): + """ Return where the jitdriver on which the jitting started + """ + return self.jitdriver_sd.jitdriver + + def get_greenkey_repr(self): + """ Return the string repr of a greenkey + """ + return self.jitdriver_sd.warmstate.get_location_str(self.greenkey) + +class JitHookInterface(object): + """ This is the main connector between the JIT and the interpreter. + Several methods on this class will be invoked at various stages + of JIT running like JIT loops compiled, aborts etc. + An instance of this class will be available as policy.jithookiface. + """ + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + """ A hook called each time a loop is aborted with jitdriver and + greenkey where it started, reason is a string why it got aborted + """ + + #def before_optimize(self, debug_info): + # """ A hook called before optimizer is run, called with instance of + # JitDebugInfo. Overwrite for custom behavior + # """ + # DISABLED + + def before_compile(self, debug_info): + """ A hook called after a loop is optimized, before compiling assembler, + called with JitDebugInfo instance. 
Overwrite for custom behavior + """ + + def after_compile(self, debug_info): + """ A hook called after a loop has compiled assembler, + called with JitDebugInfo instance. Overwrite for custom behavior + """ + + #def before_optimize_bridge(self, debug_info): + # operations, fail_descr_no): + # """ A hook called before a bridge is optimized. + # Called with JitDebugInfo instance, overwrite for + # custom behavior + # """ + # DISABLED + + def before_compile_bridge(self, debug_info): + """ A hook called before a bridge is compiled, but after optimizations + are performed. Called with instance of debug_info, overwrite for + custom behavior + """ + + def after_compile_bridge(self, debug_info): + """ A hook called after a bridge is compiled, called with JitDebugInfo + instance, overwrite for custom behavior + """ + + def get_stats(self): + """ Returns various statistics + """ + raise NotImplementedError + def record_known_class(value, cls): """ Assure the JIT that value is an instance of cls. This is not a precise @@ -748,7 +852,6 @@ """ assert isinstance(value, cls) - class Entry(ExtRegistryEntry): _about_ = record_known_class @@ -759,7 +862,8 @@ assert isinstance(s_inst, annmodel.SomeInstance) def specialize_call(self, hop): - from pypy.rpython.lltypesystem import lltype, rclass + from pypy.rpython.lltypesystem import rclass, lltype + classrepr = rclass.get_type_repr(hop.rtyper) hop.exception_cannot_occur() diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/jit_hooks.py @@ -0,0 +1,106 @@ + +from pypy.rpython.extregistry import ExtRegistryEntry +from pypy.annotation import model as annmodel +from pypy.rpython.lltypesystem import llmemory, lltype +from pypy.rpython.lltypesystem import rclass +from pypy.rpython.annlowlevel import cast_instance_to_base_ptr,\ + cast_base_ptr_to_instance, llstr, hlstr +from pypy.rlib.objectmodel import specialize + +def register_helper(s_result): + def wrapper(helper): + class 
Entry(ExtRegistryEntry): + _about_ = helper + + def compute_result_annotation(self, *args): + return s_result + + def specialize_call(self, hop): + from pypy.rpython.lltypesystem import lltype + + c_func = hop.inputconst(lltype.Void, helper) + c_name = hop.inputconst(lltype.Void, 'access_helper') + args_v = [hop.inputarg(arg, arg=i) + for i, arg in enumerate(hop.args_r)] + return hop.genop('jit_marker', [c_name, c_func] + args_v, + resulttype=hop.r_result) + return helper + return wrapper + +def _cast_to_box(llref): + from pypy.jit.metainterp.history import AbstractValue + + ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref) + return cast_base_ptr_to_instance(AbstractValue, ptr) + +def _cast_to_resop(llref): + from pypy.jit.metainterp.resoperation import AbstractResOp + + ptr = lltype.cast_opaque_ptr(rclass.OBJECTPTR, llref) + return cast_base_ptr_to_instance(AbstractResOp, ptr) + +@specialize.argtype(0) +def _cast_to_gcref(obj): + return lltype.cast_opaque_ptr(llmemory.GCREF, + cast_instance_to_base_ptr(obj)) + +def emptyval(): + return lltype.nullptr(llmemory.GCREF.TO) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_new(no, llargs, llres): + from pypy.jit.metainterp.history import ResOperation + + args = [_cast_to_box(llargs[i]) for i in range(len(llargs))] + res = _cast_to_box(llres) + return _cast_to_gcref(ResOperation(no, args, res)) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def boxint_new(no): + from pypy.jit.metainterp.history import BoxInt + return _cast_to_gcref(BoxInt(no)) + +@register_helper(annmodel.SomeInteger()) +def resop_getopnum(llop): + return _cast_to_resop(llop).getopnum() + +@register_helper(annmodel.SomeString(can_be_None=True)) +def resop_getopname(llop): + return llstr(_cast_to_resop(llop).getopname()) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_getarg(llop, no): + return _cast_to_gcref(_cast_to_resop(llop).getarg(no)) + +@register_helper(annmodel.s_None) +def 
resop_setarg(llop, no, llbox): + _cast_to_resop(llop).setarg(no, _cast_to_box(llbox)) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def resop_getresult(llop): + return _cast_to_gcref(_cast_to_resop(llop).result) + +@register_helper(annmodel.s_None) +def resop_setresult(llop, llbox): + _cast_to_resop(llop).result = _cast_to_box(llbox) + +@register_helper(annmodel.SomeInteger()) +def box_getint(llbox): + return _cast_to_box(llbox).getint() + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_clone(llbox): + return _cast_to_gcref(_cast_to_box(llbox).clonebox()) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_constbox(llbox): + return _cast_to_gcref(_cast_to_box(llbox).constbox()) + +@register_helper(annmodel.SomePtr(llmemory.GCREF)) +def box_nonconstbox(llbox): + return _cast_to_gcref(_cast_to_box(llbox).nonconstbox()) + +@register_helper(annmodel.SomeBool()) +def box_isconst(llbox): + from pypy.jit.metainterp.history import Const + return isinstance(_cast_to_box(llbox), Const) diff --git a/pypy/rlib/rsre/rsre_jit.py b/pypy/rlib/rsre/rsre_jit.py --- a/pypy/rlib/rsre/rsre_jit.py +++ b/pypy/rlib/rsre/rsre_jit.py @@ -5,7 +5,7 @@ active = True def __init__(self, name, debugprint, **kwds): - JitDriver.__init__(self, **kwds) + JitDriver.__init__(self, name='rsre_' + name, **kwds) # def get_printable_location(*args): # we print based on indices in 'args'. 
We first print diff --git a/pypy/tool/release_dates.py b/pypy/tool/release_dates.py deleted file mode 100644 --- a/pypy/tool/release_dates.py +++ /dev/null @@ -1,14 +0,0 @@ -import py - -release_URL = 'http://codespeak.net/svn/pypy/release/' -releases = [r[:-2] for r in py.std.os.popen('svn list ' + release_URL).readlines() if 'x' not in r] - -f = file('release_dates.txt', 'w') -print >> f, 'date, release' -for release in releases: - for s in py.std.os.popen('svn info ' + release_URL + release).readlines(): - if s.startswith('Last Changed Date'): - date = s.split()[3] - print >> f, date, ',', release - break -f.close() diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c --- a/pypy/translator/c/src/profiling.c +++ b/pypy/translator/c/src/profiling.c @@ -29,6 +29,35 @@ profiling_setup = 0; } } + +#elif defined(_WIN32) +#include <windows.h> + +DWORD_PTR base_affinity_mask; +int profiling_setup = 0; + +void pypy_setup_profiling() { + if (!profiling_setup) { + DWORD_PTR affinity_mask, system_affinity_mask; + GetProcessAffinityMask(GetCurrentProcess(), + &base_affinity_mask, &system_affinity_mask); + affinity_mask = 1; + /* Pick one cpu allowed by the system */ + if (system_affinity_mask) + while ((affinity_mask & system_affinity_mask) == 0) + affinity_mask <<= 1; + SetProcessAffinityMask(GetCurrentProcess(), affinity_mask); + profiling_setup = 1; + } +} + +void pypy_teardown_profiling() { + if (profiling_setup) { + SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask); + profiling_setup = 0; + } +} + #else void pypy_setup_profiling() { } void pypy_teardown_profiling() { } diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -139,8 +139,8 @@ items = pypyjit.defaults.items() items.sort() for key, value in items: - print ' --jit %s=N %slow-level JIT parameter (default %s)' % ( - key, ' '*(18-len(key)), value) + print ' --jit %s=N %s%s 
(default %s)' % ( + key, ' '*(18-len(key)), pypyjit.PARAMETER_DOCS[key], value) print ' --jit off turn off the JIT' def print_version(*args): diff --git a/pypy/translator/goal/targetpypystandalone.py b/pypy/translator/goal/targetpypystandalone.py --- a/pypy/translator/goal/targetpypystandalone.py +++ b/pypy/translator/goal/targetpypystandalone.py @@ -226,8 +226,8 @@ return self.get_entry_point(config) def jitpolicy(self, driver): - from pypy.module.pypyjit.policy import PyPyJitPolicy - return PyPyJitPolicy() + from pypy.module.pypyjit.policy import PyPyJitPolicy, pypy_hooks + return PyPyJitPolicy(pypy_hooks) def get_entry_point(self, config): from pypy.tool.lib_pypy import import_from_lib_pypy From noreply at buildbot.pypy.org Fri Jan 13 22:43:35 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 13 Jan 2012 22:43:35 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: fix the merge Message-ID: <20120113214335.336EE82CF4@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51301:a8dc471dcb8e Date: 2012-01-13 23:43 +0200 http://bitbucket.org/pypy/pypy/changeset/a8dc471dcb8e/ Log: fix the merge diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -581,11 +581,11 @@ def descr_var(self, space): # var = mean((values - mean(values)) ** 2) - w_res = self.descr_sub(space, self.descr_mean(space)) + w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) assert isinstance(w_res, BaseArray) w_res = w_res.descr_pow(space, space.wrap(2)) assert isinstance(w_res, BaseArray) - return w_res.descr_mean(space) + return w_res.descr_mean(space, space.w_None) def descr_std(self, space): # std(v) = sqrt(var(v)) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ 
b/pypy/module/micronumpy/interp_ufuncs.py @@ -21,7 +21,7 @@ greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['self','arr', 'identity', 'frame'], -# name='axisreduce', + name='numpy_axisreduce', get_printable_location=new_printable_location('axisreduce'), ) From noreply at buildbot.pypy.org Sat Jan 14 12:49:12 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 14 Jan 2012 12:49:12 +0100 (CET) Subject: [pypy-commit] pypy default: Write a test for max_unroll_loops, based on an idea by Hakan. Message-ID: <20120114114912.7B95382B12@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51302:039bd6756d51 Date: 2012-01-14 12:48 +0100 http://bitbucket.org/pypy/pypy/changeset/039bd6756d51/ Log: Write a test for max_unroll_loops, based on an idea by Hakan. diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2629,6 +2629,38 @@ self.check_jitcell_token_count(1) self.check_target_token_count(5) + def test_max_unroll_loops(self): + from pypy.jit.metainterp.optimize import InvalidLoop + from pypy.jit.metainterp import optimizeopt + myjitdriver = JitDriver(greens = [], reds = ['n', 'i']) + # + def f(n, limit): + set_param(myjitdriver, 'threshold', 5) + set_param(myjitdriver, 'max_unroll_loops', limit) + i = 0 + while i < n: + myjitdriver.jit_merge_point(n=n, i=i) + print i + i += 1 + return i + # + def my_optimize_trace(*args, **kwds): + raise InvalidLoop + old_optimize_trace = optimizeopt.optimize_trace + optimizeopt.optimize_trace = my_optimize_trace + try: + res = self.meta_interp(f, [23, 4]) + assert res == 23 + self.check_trace_count(0) + self.check_aborted_count(3) + # + res = self.meta_interp(f, [23, 20]) + assert res == 23 + self.check_trace_count(0) + self.check_aborted_count(2) + finally: + optimizeopt.optimize_trace = old_optimize_trace + def test_retrace_limit_with_extra_guards(self): myjitdriver = 
JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) From notifications-noreply at bitbucket.org Sat Jan 14 14:04:02 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Sat, 14 Jan 2012 13:04:02 -0000 Subject: [pypy-commit] Notification: Your access to pypydoc has been revoked. Message-ID: <20120114130402.9975.29845@bitbucket05.managed.contegix.com> You have received a notification from tomo cocoa. You no longer have access to the source of pypydoc. -- Disable notifications at https://bitbucket.org/account/notifications/ From notifications-noreply at bitbucket.org Sat Jan 14 14:04:24 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Sat, 14 Jan 2012 13:04:24 -0000 Subject: [pypy-commit] Notification: pypyjvm Message-ID: <20120114130424.19420.77196@bitbucket01.managed.contegix.com> You have received a notification from tomo cocoa. Hi, I forked pypy. My fork is at https://bitbucket.org/cocoatomo/pypyjvm. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Sat Jan 14 15:04:24 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 14 Jan 2012 15:04:24 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: a test and a fix Message-ID: <20120114140424.1A34A82C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51304:16b16c97b7c1 Date: 2012-01-14 16:03 +0200 http://bitbucket.org/pypy/pypy/changeset/16b16c97b7c1/ Log: a test and a fix diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -135,7 +135,7 @@ self.res_shape = shape[:] self.strides = strides[:dim] + [0] + strides[dim:] self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] - self.first_line = False + self.first_line = True self.indices = [0] * len(shape) self._done = False self.offset = start @@ -151,12 +151,12 @@ done = False for i 
in range(shapelen - 1, -1, -1): if indices[i] < self.res_shape[i] - 1: + if i == self.dim: + first_line = False indices[i] += 1 offset += self.strides[i] break else: - if i == self.dim: - first_line = False indices[i] = 0 offset -= self.backstrides[i] else: diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -176,7 +176,11 @@ assert isinstance(sig, AxisReduceSignature) frame = sig.create_frame(arr) shapelen = len(obj.shape) - self.reduce_axis_loop(frame, sig, shapelen, arr, self.identity) + if self.identity is not None: + identity = self.identity.convert_to(dtype) + else: + identity = None + self.reduce_axis_loop(frame, sig, shapelen, arr, identity) return result def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -746,7 +746,7 @@ raises(TypeError, 'a.sum(2, 3)') def test_reduce_nd(self): - from numpypy import arange, array + from numpypy import arange, array, multiply a = arange(15).reshape(5, 3) assert a.sum() == 105 assert a.max() == 14 @@ -759,6 +759,7 @@ assert ((a + a).max() == 28) assert ((a + a).max(0) == [24, 26, 28]).all() assert ((a + a).sum(1) == [6, 24, 42, 60, 78]).all() + assert (multiply.reduce(a) == array([0, 3640, 12320])).all() a = array(range(105)).reshape(3, 5, 7) assert (a[:, 1, :].sum(0) == [126, 129, 132, 135, 138, 141, 144]).all() assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() From noreply at buildbot.pypy.org Sat Jan 14 15:04:22 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 14 Jan 2012 15:04:22 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: kill some dead code Message-ID: <20120114140422.CEBD082B12@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski 
Branch: numpypy-axisops Changeset: r51303:54af8d89c99e Date: 2012-01-14 15:30 +0200 http://bitbucket.org/pypy/pypy/changeset/54af8d89c99e/ Log: kill some dead code diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -1003,16 +1003,16 @@ return insns def check_simple_loop(self, expected=None, **check): - # Usefull in the simplest case when we have only one trace ending with - # a jump back to itself and possibly a few bridges ending with finnish. - # Only the operations within the loop formed by that single jump will - # be counted. + """ Usefull in the simplest case when we have only one trace ending with + a jump back to itself and possibly a few bridges. + Only the operations within the loop formed by that single jump will + be counted. + """ loops = self.get_all_loops() assert len(loops) == 1 loop = loops[0] jumpop = loop.operations[-1] assert jumpop.getopnum() == rop.JUMP - assert self.check_resops(jump=1) labels = [op for op in loop.operations if op.getopnum() == rop.LABEL] targets = [op._descr_wref() for op in labels] assert None not in targets # TargetToken was freed, give up diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -66,9 +66,6 @@ def done(self): return self.offset == self.size -def view_iter_from_arr(arr): - return ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape) - class ViewIterator(BaseIterator): def __init__(self, start, strides, backstrides, shape): self.offset = start diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -9,7 +9,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from 
pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - view_iter_from_arr, SkipLastAxisIterator, AxisIterator + SkipLastAxisIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -127,17 +127,14 @@ def test_axissum(self): result = self.run("axissum") assert result == 30 - self.check_simple_loop({\ - 'setarrayitem_gc': 1, - 'getarrayitem_gc': 5, - 'getinteriorfield_raw': 1, - 'arraylen_gc': 2, + self.check_simple_loop({'getinteriorfield_raw': 2, + 'setinteriorfield_raw': 1, + 'arraylen_gc': 1, 'guard_true': 1, - 'int_sub': 1, 'int_lt': 1, 'jump': 1, 'float_add': 1, - 'int_add': 2, + 'int_add': 3, }) def define_prod(): @@ -218,9 +215,9 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. 
- self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 26, + self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, - 'getfield_gc_pure': 4, + 'getfield_gc_pure': 8, 'guard_class': 8, 'int_add': 8, 'float_mul': 2, 'jump': 2, 'int_ge': 4, 'getinteriorfield_raw': 4, 'float_add': 2, @@ -349,9 +346,8 @@ assert result == 11.0 self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, - 'setinteriorfield_raw': 1, 'int_add': 3, - 'int_lt': 1, 'guard_true': 1, 'jump': 1, - 'arraylen_gc': 3}) + 'setinteriorfield_raw': 1, 'int_add': 2, + 'int_eq': 1, 'guard_false': 1, 'jump': 1}) def define_virtual_slice(): return """ From noreply at buildbot.pypy.org Sat Jan 14 15:05:50 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 14 Jan 2012 15:05:50 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: add a note Message-ID: <20120114140550.BBA1282B12@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51305:9478a09d1230 Date: 2012-01-14 16:05 +0200 http://bitbucket.org/pypy/pypy/changeset/9478a09d1230/ Log: add a note diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -127,6 +127,8 @@ def test_axissum(self): result = self.run("axissum") assert result == 30 + # XXX note - the bridge here is fairly crucial and yet it's pretty + # bogus. We need to improve the situation somehow. 
self.check_simple_loop({'getinteriorfield_raw': 2, 'setinteriorfield_raw': 1, 'arraylen_gc': 1, From noreply at buildbot.pypy.org Sat Jan 14 15:10:49 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 14 Jan 2012 15:10:49 +0100 (CET) Subject: [pypy-commit] pypy default: (mattip, fijal) merge numpypy-axisops, this adds axis=x argument to reduce Message-ID: <20120114141049.1B5DD82B12@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51306:5c19878c1ef9 Date: 2012-01-14 16:10 +0200 http://bitbucket.org/pypy/pypy/changeset/5c19878c1ef9/ Log: (mattip, fijal) merge numpypy-axisops, this adds axis=x argument to reduce functions. diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -1003,16 +1003,16 @@ return insns def check_simple_loop(self, expected=None, **check): - # Usefull in the simplest case when we have only one trace ending with - # a jump back to itself and possibly a few bridges ending with finnish. - # Only the operations within the loop formed by that single jump will - # be counted. + """ Usefull in the simplest case when we have only one trace ending with + a jump back to itself and possibly a few bridges. + Only the operations within the loop formed by that single jump will + be counted. 
+ """ loops = self.get_all_loops() assert len(loops) == 1 loop = loops[0] jumpop = loop.operations[-1] assert jumpop.getopnum() == rop.JUMP - assert self.check_resops(jump=1) labels = [op for op in loop.operations if op.getopnum() == rop.LABEL] targets = [op._descr_wref() for op in labels] assert None not in targets # TargetToken was freed, give up diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -19,25 +19,49 @@ a[i][i] = 1 return a -def mean(a): +def mean(a, axis=None): if not hasattr(a, "mean"): a = _numpypy.array(a) - return a.mean() + return a.mean(axis) -def sum(a): +def sum(a,axis=None): + '''sum(a, axis=None) + Sum of array elements over a given axis. + + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar + is returned. If an output array is specified, a reference to + `out` is returned. + + See Also + -------- + ndarray.sum : Equivalent method. + ''' + # TODO: add to doc (once it's implemented): cumsum : Cumulative sum of array elements. 
if not hasattr(a, "sum"): a = _numpypy.array(a) - return a.sum() + return a.sum(axis) -def min(a): +def min(a, axis=None): if not hasattr(a, "min"): a = _numpypy.array(a) - return a.min() + return a.min(axis) -def max(a): +def max(a, axis=None): if not hasattr(a, "max"): a = _numpypy.array(a) - return a.max() + return a.max(axis) def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -372,13 +372,17 @@ def execute(self, interp): if self.name in SINGLE_ARG_FUNCTIONS: - if len(self.args) != 1: + if len(self.args) != 1 and self.name != 'sum': raise ArgumentMismatch arr = self.args[0].execute(interp) if not isinstance(arr, BaseArray): raise ArgumentNotAnArray if self.name == "sum": - w_res = arr.descr_sum(interp.space) + if len(self.args)>1: + w_res = arr.descr_sum(interp.space, + self.args[1].execute(interp)) + else: + w_res = arr.descr_sum(interp.space) elif self.name == "prod": w_res = arr.descr_prod(interp.space) elif self.name == "max": @@ -416,7 +420,7 @@ ('\]', 'array_right'), ('(->)|[\+\-\*\/]', 'operator'), ('=', 'assign'), - (',', 'coma'), + (',', 'comma'), ('\|', 'pipe'), ('\(', 'paren_left'), ('\)', 'paren_right'), @@ -504,7 +508,7 @@ return SliceConstant(start, stop, step) - def parse_expression(self, tokens): + def parse_expression(self, tokens, accept_comma=False): stack = [] while tokens.remaining(): token = tokens.pop() @@ -524,9 +528,13 @@ stack.append(RangeConstant(tokens.pop().v)) end = tokens.pop() assert end.name == 'pipe' + elif accept_comma and token.name == 'comma': + continue else: tokens.push() break + if accept_comma: + return stack stack.reverse() lhs = stack.pop() while stack: @@ -540,7 +548,7 @@ args = [] tokens.pop() # lparen while tokens.get(0).name != 'paren_right': - args.append(self.parse_expression(tokens)) + args += 
self.parse_expression(tokens, accept_comma=True) return FunctionCall(name, args) def parse_array_const(self, tokens): @@ -556,7 +564,7 @@ token = tokens.pop() if token.name == 'array_right': return elems - assert token.name == 'coma' + assert token.name == 'comma' def parse_statement(self, tokens): if (tokens.get(0).name == 'identifier' and diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -1,19 +1,20 @@ from pypy.rlib import jit from pypy.rlib.objectmodel import instantiate -from pypy.module.micronumpy.strides import calculate_broadcast_strides +from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ + calculate_slice_strides -# Iterators for arrays -# -------------------- -# all those iterators with the exception of BroadcastIterator iterate over the -# entire array in C order (the last index changes the fastest). This will -# yield all elements. Views iterate over indices and look towards strides and -# backstrides to find the correct position. Notably the offset between -# x[..., i + 1] and x[..., i] will be strides[-1]. Offset between -# x[..., k + 1, 0] and x[..., k, i_max] will be backstrides[-2] etc. 
+class BaseTransform(object): + pass -# BroadcastIterator works like that, but for indexes that don't change source -# in the original array, strides[i] == backstrides[i] == 0 +class ViewTransform(BaseTransform): + def __init__(self, chunks): + # 4-tuple specifying slicing + self.chunks = chunks + +class BroadcastTransform(BaseTransform): + def __init__(self, res_shape): + self.res_shape = res_shape class BaseIterator(object): def next(self, shapelen): @@ -22,6 +23,15 @@ def done(self): raise NotImplementedError + def apply_transformations(self, arr, transformations): + v = self + for transform in transformations: + v = v.transform(arr, transform) + return v + + def transform(self, arr, t): + raise NotImplementedError + class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 @@ -36,6 +46,10 @@ def done(self): return self.offset >= self.size + def transform(self, arr, t): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).transform(arr, t) + class OneDimIterator(BaseIterator): def __init__(self, start, step, stop): self.offset = start @@ -52,26 +66,29 @@ def done(self): return self.offset == self.size -def view_iter_from_arr(arr): - return ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape) - class ViewIterator(BaseIterator): - def __init__(self, start, strides, backstrides, shape, res_shape=None): + def __init__(self, start, strides, backstrides, shape): self.offset = start self._done = False - if res_shape is not None and res_shape != shape: - r = calculate_broadcast_strides(strides, backstrides, - shape, res_shape) - self.strides, self.backstrides = r - self.res_shape = res_shape - else: - self.strides = strides - self.backstrides = backstrides - self.res_shape = shape + self.strides = strides + self.backstrides = backstrides + self.res_shape = shape self.indices = [0] * len(self.res_shape) + def transform(self, arr, t): + if isinstance(t, BroadcastTransform): + r = 
calculate_broadcast_strides(self.strides, self.backstrides, + self.res_shape, t.res_shape) + return ViewIterator(self.offset, r[0], r[1], t.res_shape) + elif isinstance(t, ViewTransform): + r = calculate_slice_strides(self.res_shape, self.offset, + self.strides, + self.backstrides, t.chunks) + return ViewIterator(r[1], r[2], r[3], r[0]) + @jit.unroll_safe def next(self, shapelen): + shapelen = jit.promote(len(self.res_shape)) offset = self.offset indices = [0] * shapelen for i in range(shapelen): @@ -96,6 +113,13 @@ res._done = done return res + def apply_transformations(self, arr, transformations): + v = BaseIterator.apply_transformations(self, arr, transformations) + if len(arr.shape) == 1: + return OneDimIterator(self.offset, self.strides[0], + self.res_shape[0]) + return v + def done(self): return self._done @@ -103,11 +127,57 @@ def next(self, shapelen): return self + def transform(self, arr, t): + pass + +class AxisIterator(BaseIterator): + def __init__(self, start, dim, shape, strides, backstrides): + self.res_shape = shape[:] + self.strides = strides[:dim] + [0] + strides[dim:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] + self.first_line = True + self.indices = [0] * len(shape) + self._done = False + self.offset = start + self.dim = dim + + @jit.unroll_safe + def next(self, shapelen): + offset = self.offset + first_line = self.first_line + indices = [0] * shapelen + for i in range(shapelen): + indices[i] = self.indices[i] + done = False + for i in range(shapelen - 1, -1, -1): + if indices[i] < self.res_shape[i] - 1: + if i == self.dim: + first_line = False + indices[i] += 1 + offset += self.strides[i] + break + else: + indices[i] = 0 + offset -= self.backstrides[i] + else: + done = True + res = instantiate(AxisIterator) + res.offset = offset + res.indices = indices + res.strides = self.strides + res.backstrides = self.backstrides + res.res_shape = self.res_shape + res._done = done + res.first_line = first_line + res.dim = self.dim + 
return res + + def done(self): + return self._done + # ------ other iterators that are not part of the computation frame ---------- - -class AxisIterator(object): - """ This object will return offsets of each start of the last stride - """ + +class SkipLastAxisIterator(object): def __init__(self, arr): self.arr = arr self.indices = [0] * (len(arr.shape) - 1) @@ -125,4 +195,3 @@ self.offset -= self.arr.backstrides[i] else: self.done = True - diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -8,8 +8,8 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import ArrayIterator,\ - view_iter_from_arr, OneDimIterator, AxisIterator +from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ + SkipLastAxisIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -35,11 +35,12 @@ slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self', 'frame', 'source', 'res_iter'], + reds=['self', 'frame', 'arr'], get_printable_location=signature.new_printable_location('slice'), name='numpy_slice', ) + def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] batch = space.listview(w_iterable) @@ -286,13 +287,17 @@ descr_rpow = _binop_right_impl("power") descr_rmod = _binop_right_impl("mod") - def _reduce_ufunc_impl(ufunc_name): - def impl(self, space): - return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, self, multidim=True) + def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): + def impl(self, space, w_dim=None): + if space.is_w(w_dim, space.w_None): + w_dim = space.wrap(-1) + return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, + self, True, promote_to_largest, w_dim) 
return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") - descr_prod = _reduce_ufunc_impl("multiply") + descr_sum_promote = _reduce_ufunc_impl("add", True) + descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") @@ -377,7 +382,7 @@ else: w_res = self.descr_mul(space, w_other) assert isinstance(w_res, BaseArray) - return w_res.descr_sum(space) + return w_res.descr_sum(space, space.wrap(-1)) def get_concrete(self): raise NotImplementedError @@ -565,16 +570,22 @@ ) return w_result - def descr_mean(self, space): - return space.div(self.descr_sum(space), space.wrap(self.size)) + def descr_mean(self, space, w_dim=None): + if space.is_w(w_dim, space.w_None): + w_dim = space.wrap(-1) + w_denom = space.wrap(self.size) + else: + dim = space.int_w(w_dim) + w_denom = space.wrap(self.shape[dim]) + return space.div(self.descr_sum_promote(space, w_dim), w_denom) def descr_var(self, space): # var = mean((values - mean(values)) ** 2) - w_res = self.descr_sub(space, self.descr_mean(space)) + w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) assert isinstance(w_res, BaseArray) w_res = w_res.descr_pow(space, space.wrap(2)) assert isinstance(w_res, BaseArray) - return w_res.descr_mean(space) + return w_res.descr_mean(space, space.w_None) def descr_std(self, space): # std(v) = sqrt(var(v)) @@ -613,11 +624,12 @@ def getitem(self, item): raise NotImplementedError - def find_sig(self, res_shape=None): + def find_sig(self, res_shape=None, arr=None): """ find a correct signature for the array """ res_shape = res_shape or self.shape - return signature.find_sig(self.create_sig(res_shape), self) + arr = arr or self + return signature.find_sig(self.create_sig(), arr) def descr_array_iface(self, space): if not self.shape: @@ -671,7 +683,7 @@ def copy(self, space): return Scalar(self.dtype, self.value) - def create_sig(self, res_shape): + def 
create_sig(self): return signature.ScalarSignature(self.dtype) def get_concrete_or_scalar(self): @@ -689,7 +701,8 @@ self.name = name def _del_sources(self): - # Function for deleting references to source arrays, to allow garbage-collecting them + # Function for deleting references to source arrays, + # to allow garbage-collecting them raise NotImplementedError def compute(self): @@ -741,11 +754,11 @@ self.size = size VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() return signature.VirtualSliceSignature( - self.child.create_sig(res_shape)) + self.child.create_sig()) def force_if_needed(self): if self.forced_result is None: @@ -755,6 +768,7 @@ def _del_sources(self): self.child = None + class Call1(VirtualArray): def __init__(self, ufunc, name, shape, res_dtype, values): VirtualArray.__init__(self, name, shape, res_dtype) @@ -765,16 +779,17 @@ def _del_sources(self): self.values = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) - return signature.Call1(self.ufunc, self.name, - self.values.create_sig(res_shape)) + return self.forced_result.create_sig() + return signature.Call1(self.ufunc, self.name, self.values.create_sig()) class Call2(VirtualArray): """ Intermediate class for performing binary operations. 
""" + _immutable_fields_ = ['left', 'right'] + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -789,12 +804,55 @@ self.left = None self.right = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() + if self.shape != self.left.shape and self.shape != self.right.shape: + return signature.BroadcastBoth(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.left.shape: + return signature.BroadcastLeft(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.right.shape: + return signature.BroadcastRight(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) return signature.Call2(self.ufunc, self.name, self.calc_dtype, - self.left.create_sig(res_shape), - self.right.create_sig(res_shape)) + self.left.create_sig(), self.right.create_sig()) + +class SliceArray(Call2): + def __init__(self, shape, dtype, left, right): + Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, + right) + + def create_sig(self): + lsig = self.left.create_sig() + rsig = self.right.create_sig() + if self.shape != self.right.shape: + return signature.SliceloopBroadcastSignature(self.ufunc, + self.name, + self.calc_dtype, + lsig, rsig) + return signature.SliceloopSignature(self.ufunc, self.name, + self.calc_dtype, + lsig, rsig) + +class AxisReduce(Call2): + """ NOTE: this is only used as a container, you should never + encounter such things in the wild. 
Remove this comment + when we make AxisReduce lazy + """ + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim class ConcreteArray(BaseArray): """ An array that has actual storage, whether owned or not @@ -849,11 +907,6 @@ self.strides = strides self.backstrides = backstrides - def array_sig(self, res_shape): - if res_shape is not None and self.shape != res_shape: - return signature.ViewSignature(self.dtype) - return signature.ArraySignature(self.dtype) - def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): '''Modifies builder with a representation of the array/slice The items will be separated by a comma if comma is 1 @@ -890,7 +943,7 @@ view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: - builder.append(ccomma +'\n' + indent + '...' + ncomma) + builder.append(ccomma + '\n' + indent + '...'
+ ncomma) i = self.shape[0] - 3 else: i += 1 @@ -968,20 +1021,22 @@ self.dtype is w_value.find_dtype()): self._fast_setslice(space, w_value) else: - self._sliceloop(w_value, res_shape) + arr = SliceArray(self.shape, self.dtype, self, w_value) + self._sliceloop(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) itemsize = self.dtype.itemtype.get_element_size() - if len(self.shape) == 1: + shapelen = len(self.shape) + if shapelen == 1: rffi.c_memcpy( rffi.ptradd(self.storage, self.start * itemsize), rffi.ptradd(w_value.storage, w_value.start * itemsize), self.size * itemsize ) else: - dest = AxisIterator(self) - source = AxisIterator(w_value) + dest = SkipLastAxisIterator(self) + source = SkipLastAxisIterator(w_value) while not dest.done: rffi.c_memcpy( rffi.ptradd(self.storage, dest.offset * itemsize), @@ -991,21 +1046,16 @@ source.next() dest.next() - def _sliceloop(self, source, res_shape): - sig = source.find_sig(res_shape) - frame = sig.create_frame(source, res_shape) - res_iter = view_iter_from_arr(self) - shapelen = len(res_shape) - while not res_iter.done(): - slice_driver.jit_merge_point(sig=sig, - frame=frame, - shapelen=shapelen, - self=self, source=source, - res_iter=res_iter) - self.setitem(res_iter.offset, sig.eval(frame, source).convert_to( - self.find_dtype())) + def _sliceloop(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(arr) + shapelen = len(self.shape) + while not frame.done(): + slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, + arr=arr, + shapelen=shapelen) + sig.eval(frame, arr) frame.next(shapelen) - res_iter = res_iter.next(shapelen) def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) @@ -1014,7 +1064,7 @@ class ViewArray(ConcreteArray): - def create_sig(self, res_shape): + def create_sig(self): return signature.ViewSignature(self.dtype) @@ -1078,8 +1128,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_sig(self, 
res_shape): - return self.array_sig(res_shape) + def create_sig(self): + return signature.ArraySignature(self.dtype) def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -3,20 +3,29 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty from pypy.module.micronumpy import interp_boxes, interp_dtype -from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature,\ - find_sig, new_printable_location +from pypy.module.micronumpy.signature import ReduceSignature,\ + find_sig, new_printable_location, AxisReduceSignature, ScalarSignature from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name reduce_driver = jit.JitDriver( - greens = ['shapelen', "sig"], - virtualizables = ["frame"], - reds = ["frame", "self", "dtype", "value", "obj"], + greens=['shapelen', "sig"], + virtualizables=["frame"], + reds=["frame", "self", "dtype", "value", "obj"], get_printable_location=new_printable_location('reduce'), name='numpy_reduce', ) +axisreduce_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['self','arr', 'identity', 'frame'], + name='numpy_axisreduce', + get_printable_location=new_printable_location('axisreduce'), +) + + class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -49,18 +58,72 @@ ) return self.call(space, __args__.arguments_w) - def descr_reduce(self, space, w_obj): - return self.reduce(space, w_obj, multidim=False) + def descr_reduce(self, space, w_obj, w_dim=0): + """reduce(...) 
+ reduce(a, axis=0) - def reduce(self, space, w_obj, multidim): - from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar - + Reduces `a`'s dimension by one, by applying ufunc along one axis. + + Let :math:`a.shape = (N_0, ..., N_i, ..., N_{M-1})`. Then + :math:`ufunc.reduce(a, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]` = + the result of iterating `j` over :math:`range(N_i)`, cumulatively applying + ufunc to each :math:`a[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]`. + For a one-dimensional array, reduce produces results equivalent to: + :: + + r = op.identity # op = ufunc + for i in xrange(len(A)): + r = op(r, A[i]) + return r + + For example, add.reduce() is equivalent to sum(). + + Parameters + ---------- + a : array_like + The array to act on. + axis : int, optional + The axis along which to apply the reduction. + + Examples + -------- + >>> np.multiply.reduce([2,3,5]) + 30 + + A multi-dimensional array example: + + >>> X = np.arange(8).reshape((2,2,2)) + >>> X + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> np.add.reduce(X, 0) + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X) # confirm: default axis value is 0 + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X, 1) + array([[ 2, 4], + [10, 12]]) + >>> np.add.reduce(X, 2) + array([[ 1, 5], + [ 9, 13]]) + """ + return self.reduce(space, w_obj, False, False, w_dim) + + def reduce(self, space, w_obj, multidim, promote_to_largest, w_dim): + from pypy.module.micronumpy.interp_numarray import convert_to_array, \ + Scalar if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) - + dim = space.int_w(w_dim) assert isinstance(self, W_Ufunc2) obj = convert_to_array(space, w_obj) + if dim >= len(obj.shape): + raise OperationError(space.w_ValueError, space.wrap("axis(=%d) out of bounds" % dim)) if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) @@ -68,26 
+131,80 @@ size = obj.size dtype = find_unaryop_result_dtype( space, obj.find_dtype(), - promote_to_largest=True + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True ) shapelen = len(obj.shape) + if self.identity is None and size == 0: + raise operationerrfmt(space.w_ValueError, "zero-size array to " + "%s.reduce without identity", self.name) + if shapelen > 1 and dim >= 0: + res = self.do_axis_reduce(obj, dtype, dim) + return space.wrap(res) + scalarsig = ScalarSignature(dtype) sig = find_sig(ReduceSignature(self.func, self.name, dtype, - ScalarSignature(dtype), - obj.create_sig(obj.shape)), obj) + scalarsig, + obj.create_sig()), obj) frame = sig.create_frame(obj) - if shapelen > 1 and not multidim: - raise OperationError(space.w_NotImplementedError, - space.wrap("not implemented yet")) if self.identity is None: - if size == 0: - raise operationerrfmt(space.w_ValueError, "zero-size array to " - "%s.reduce without identity", self.name) value = sig.eval(frame, obj).convert_to(dtype) frame.next(shapelen) else: value = self.identity.convert_to(dtype) return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + def do_axis_reduce(self, obj, dtype, dim): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + W_NDimArray + + shape = obj.shape[0:dim] + obj.shape[dim + 1:len(obj.shape)] + size = 1 + for s in shape: + size *= s + result = W_NDimArray(size, shape, dtype) + rightsig = obj.create_sig() + # note - this is just a wrapper so signature can fetch + # both left and right, nothing more, especially + # this is not a true virtual array, because shapes + # don't quite match + arr = AxisReduce(self.func, self.name, obj.shape, dtype, + result, obj, dim) + scalarsig = ScalarSignature(dtype) + sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, + scalarsig, rightsig), arr) + assert isinstance(sig, AxisReduceSignature) + frame = sig.create_frame(arr) + shapelen = len(obj.shape) + if 
self.identity is not None: + identity = self.identity.convert_to(dtype) + else: + identity = None + self.reduce_axis_loop(frame, sig, shapelen, arr, identity) + return result + + def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): + # note - we can be adventurous here, depending on the exact field + # layout. For now let's say we iterate the original way and + # simply follow the original iteration order + while not frame.done(): + axisreduce_driver.jit_merge_point(frame=frame, self=self, + sig=sig, + identity=identity, + shapelen=shapelen, arr=arr) + iter = frame.get_final_iter() + v = sig.eval(frame, arr).convert_to(sig.calc_dtype) + if iter.first_line: + if identity is not None: + value = self.func(sig.calc_dtype, identity, v) + else: + value = v + else: + cur = arr.left.getitem(iter.offset) + value = self.func(sig.calc_dtype, cur, v) + arr.left.setitem(iter.offset, value) + frame.next(shapelen) + def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): while not frame.done(): reduce_driver.jit_merge_point(sig=sig, @@ -95,10 +212,12 @@ value=value, obj=obj, frame=frame, dtype=dtype) assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, sig.eval(frame, obj).convert_to(dtype)) + value = sig.binfunc(dtype, value, + sig.eval(frame, obj).convert_to(dtype)) frame.next(shapelen) return value + class W_Ufunc1(W_Ufunc): argcount = 1 @@ -183,6 +302,7 @@ reduce = interp2app(W_Ufunc.descr_reduce), ) + def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, promote_bools=False): # dt1.num should be <= dt2.num @@ -231,6 +351,7 @@ dtypenum += 3 return interp_dtype.get_dtype_cache(space).builtin_dtypes[dtypenum] + def find_unaryop_result_dtype(space, dt, promote_to_float=False, promote_bools=False, promote_to_largest=False): if promote_bools and (dt.kind == interp_dtype.BOOLLTR): @@ -255,6 +376,7 @@ assert False return dt + def find_dtype_for_scalar(space, w_obj, current_guess=None): bool_dtype =
interp_dtype.get_dtype_cache(space).w_booldtype long_dtype = interp_dtype.get_dtype_cache(space).w_longdtype @@ -348,7 +470,8 @@ identity = extra_kwargs.get("identity") if identity is not None: - identity = interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) + identity = \ + interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,10 +1,32 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - OneDimIterator, ConstantIterator -from pypy.module.micronumpy.strides import calculate_slice_strides + ConstantIterator, AxisIterator, ViewTransform,\ + BroadcastTransform from pypy.rlib.jit import hint, unroll_safe, promote +""" Signature specifies both the numpy expression that has been constructed +and the assembler to be compiled. This is a very important observation - +two expressions will use the same assembler if and only if they are +compiled to the same signature. + +This is also a very convenient tool for specializations. For example +a + a and a + b (where a != b) will compile to different assembler because +we specialize on the same array access. + +When evaluating, signatures will create iterators per signature node, +potentially sharing some of them. Iterators also depend on the actual +expression; they're not only dependent on the array itself. For example +a + b where a is dim 2 and b is dim 1 would create a broadcasted iterator for +the array b. + +Such iterator changes are called Transformations. An actual iterator would +be a combination of an array and various transformations, like view, broadcast, +dimension swapping etc.
+ +See interp_iter for transformations +""" + def new_printable_location(driver_name): def get_printable_location(shapelen, sig): return 'numpy ' + sig.debug_repr() + ' [%d dims,%s]' % (shapelen, driver_name) @@ -33,7 +55,8 @@ return sig class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]'] + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity'] @unroll_safe def __init__(self, iterators, arrays): @@ -51,7 +74,7 @@ def done(self): final_iter = promote(self.final_iter) if final_iter < 0: - return False + assert False return self.iterators[final_iter].done() @unroll_safe @@ -59,6 +82,12 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -70,6 +99,9 @@ cache.append(ptr) return res +def new_cache(): + return r_dict(sigeq_no_numbering, sighash) + class Signature(object): _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -78,7 +110,7 @@ iter_no = 0 def invent_numbering(self): - cache = r_dict(sigeq_no_numbering, sighash) + cache = new_cache() allnumbers = [] self._invent_numbering(cache, allnumbers) @@ -95,13 +127,13 @@ allnumbers.append(no) self.iter_no = no - def create_frame(self, arr, res_shape=None): - res_shape = res_shape or arr.shape + def create_frame(self, arr): iterlist = [] arraylist = [] - self._create_iter(iterlist, arraylist, arr, res_shape, []) + self._create_iter(iterlist, arraylist, arr, []) return NumpyEvalFrame(iterlist, arraylist) + class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -120,16 +152,6 @@ def hash(self): return compute_identity_hash(self.dtype) - def allocate_view_iter(self, arr, res_shape, chunklist): - r = arr.shape, arr.start, arr.strides, arr.backstrides - if 
chunklist: - for chunkelem in chunklist: - r = calculate_slice_strides(r[0], r[1], r[2], r[3], chunkelem) - shape, start, strides, backstrides = r - if len(res_shape) == 1: - return OneDimIterator(start, strides[0], res_shape[0]) - return ViewIterator(start, strides, backstrides, shape, res_shape) - class ArraySignature(ConcreteSignature): def debug_repr(self): return 'Array' @@ -141,22 +163,21 @@ # is not of a concrete class it means that we have a _forced_result, # otherwise the signature would not match assert isinstance(concr, ConcreteArray) + assert concr.dtype is self.dtype self.array_no = _add_ptr_to_cache(concr.storage, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, res_shape, chunklist)) + iterlist.append(self.allocate_iter(concr, transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, res_shape, chunklist): - if chunklist: - return self.allocate_view_iter(arr, res_shape, chunklist) - return ArrayIterator(arr.size) + def allocate_iter(self, arr, transforms): + return ArrayIterator(arr.size).apply_transformations(arr, transforms) def eval(self, frame, arr): iter = frame.iterators[self.iter_no] @@ -169,7 +190,7 @@ def _invent_array_numbering(self, arr, cache): pass - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): if self.iter_no >= len(iterlist): iter = ConstantIterator() iterlist.append(iter) @@ -189,8 +210,9 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, res_shape, chunklist): - return self.allocate_view_iter(arr, res_shape, chunklist) + def allocate_iter(self, arr, 
transforms): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).apply_transformations(arr, transforms) class VirtualSliceSignature(Signature): def __init__(self, child): @@ -201,6 +223,9 @@ assert isinstance(arr, VirtualSlice) self.child._invent_array_numbering(arr.child, cache) + def _invent_numbering(self, cache, allnumbers): + self.child._invent_numbering(new_cache(), allnumbers) + def hash(self): return intmask(self.child.hash() ^ 1234) @@ -210,12 +235,11 @@ assert isinstance(other, VirtualSliceSignature) return self.child.eq(other.child, compare_array_no) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import VirtualSlice assert isinstance(arr, VirtualSlice) - chunklist.append(arr.chunks) - self.child._create_iter(iterlist, arraylist, arr.child, res_shape, - chunklist) + transforms = transforms + [ViewTransform(arr.chunks)] + self.child._create_iter(iterlist, arraylist, arr.child, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import VirtualSlice @@ -251,11 +275,10 @@ assert isinstance(arr, Call1) self.child._invent_array_numbering(arr.values, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call1 assert isinstance(arr, Call1) - self.child._create_iter(iterlist, arraylist, arr.values, res_shape, - chunklist) + self.child._create_iter(iterlist, arraylist, arr.values, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call1 @@ -296,29 +319,68 @@ self.left._invent_numbering(cache, allnumbers) self.right._invent_numbering(cache, allnumbers) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from 
pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) - self.left._create_iter(iterlist, arraylist, arr.left, res_shape, - chunklist) - self.right._create_iter(iterlist, arraylist, arr.right, res_shape, - chunklist) + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) lhs = self.left.eval(frame, arr.left).convert_to(self.calc_dtype) rhs = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + return self.binfunc(self.calc_dtype, lhs, rhs) def debug_repr(self): return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class BroadcastLeft(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + +class BroadcastRight(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(cache, allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class BroadcastBoth(Call2): + def _invent_numbering(self, cache, allnumbers): + 
self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): - self.right._create_iter(iterlist, arraylist, arr, res_shape, chunklist) + def _create_iter(self, iterlist, arraylist, arr, transforms): + self.right._create_iter(iterlist, arraylist, arr, transforms) def _invent_numbering(self, cache, allnumbers): self.right._invent_numbering(cache, allnumbers) @@ -328,3 +390,63 @@ def eval(self, frame, arr): return self.right.eval(frame, arr) + + def debug_repr(self): + return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + +class SliceloopSignature(Call2): + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ofs = frame.iterators[0].offset + arr.left.setitem(ofs, self.right.eval(frame, arr.right).convert_to( + self.calc_dtype)) + + def debug_repr(self): + return 'SliceLoop(%s, %s, %s)' % (self.name, self.left.debug_repr(), + self.right.debug_repr()) + +class SliceloopBroadcastSignature(SliceloopSignature): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import SliceArray + + assert isinstance(arr, SliceArray) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, 
arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class AxisReduceSignature(Call2): + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + ConcreteArray + + assert isinstance(arr, AxisReduce) + left = arr.left + assert isinstance(left, ConcreteArray) + iterlist.append(AxisIterator(left.start, arr.dim, arr.shape, + left.strides, left.backstrides)) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + + def _invent_numbering(self, cache, allnumbers): + allnumbers.append(0) + self.right._invent_numbering(cache, allnumbers) + + def _invent_array_numbering(self, arr, cache): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + self.right._invent_array_numbering(arr.right, cache) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + + def debug_repr(self): + return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -246,6 +246,10 @@ c = b.copy() assert (c == b).all() + a = arange(15).reshape(5,3) + b = a.copy() + assert (b == a).all() + def test_iterator_init(self): from _numpypy import array a = array(range(5)) @@ -720,10 +724,15 @@ assert d[1] == 12 def test_mean(self): - from _numpypy import array + from _numpypy import array, mean a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 + a = array(range(105)).reshape(3, 5, 7) + b = mean(a, axis=0) + b[0,0]==35. 
+ assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() + assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() def test_sum(self): from _numpypy import array @@ -734,6 +743,32 @@ a = array([True] * 5, bool) assert a.sum() == 5 + raises(TypeError, 'a.sum(2, 3)') + + def test_reduce_nd(self): + from numpypy import arange, array, multiply + a = arange(15).reshape(5, 3) + assert a.sum() == 105 + assert a.max() == 14 + assert array([]).sum() == 0.0 + raises(ValueError, 'array([]).max()') + assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(1) == [3, 12, 21, 30, 39]).all() + assert (a.max(0) == [12, 13, 14]).all() + assert (a.max(1) == [2, 5, 8, 11, 14]).all() + assert ((a + a).max() == 28) + assert ((a + a).max(0) == [24, 26, 28]).all() + assert ((a + a).sum(1) == [6, 24, 42, 60, 78]).all() + assert (multiply.reduce(a) == array([0, 3640, 12320])).all() + a = array(range(105)).reshape(3, 5, 7) + assert (a[:, 1, :].sum(0) == [126, 129, 132, 135, 138, 141, 144]).all() + assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() + raises (ValueError, 'a[:, 1, :].sum(2)') + assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() + skip("Those are broken on reshape, fix!") + assert (a.reshape(1,-1).sum(0) == range(105)).all() + assert (a.reshape(1,-1).sum(1) == 5460) + def test_identity(self): from _numpypy import identity, array from _numpypy import int32, float64, dtype diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -298,7 +298,7 @@ for i in range(len(a)): assert b[i] == math.atan(a[i]) - a = array([float('nan')]) + a = array([float('nan')]) b = arctan(a) assert math.isnan(b[0]) @@ -336,9 +336,9 @@ from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) - raises(TypeError, add.reduce, 1) + raises(ValueError, add.reduce, 1) - def test_reduce(self): + def 
test_reduce_1d(self): from _numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 @@ -346,6 +346,12 @@ assert maximum.reduce([1, 2, 3]) == 3 raises(ValueError, maximum.reduce, []) + def test_reduceND(self): + from numpypy import add, arange + a = arange(12).reshape(3, 4) + assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() + assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -47,6 +47,8 @@ def f(i): interp = InterpreterState(codes[i]) interp.run(space) + if not len(interp.results): + raise Exception("need results") w_res = interp.results[-1] if isinstance(w_res, BaseArray): concr = w_res.get_concrete_or_scalar() @@ -115,6 +117,28 @@ "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) + def define_axissum(): + return """ + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] + b = sum(a,0) + b -> 1 + """ + + def test_axissum(self): + result = self.run("axissum") + assert result == 30 + # XXX note - the bridge here is fairly crucial and yet it's pretty + # bogus. We need to improve the situation somehow. + self.check_simple_loop({'getinteriorfield_raw': 2, + 'setinteriorfield_raw': 1, + 'arraylen_gc': 1, + 'guard_true': 1, + 'int_lt': 1, + 'jump': 1, + 'float_add': 1, + 'int_add': 3, + }) + def define_prod(): return """ a = |30| @@ -193,9 +217,9 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. 
- self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 26, + self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, - 'getfield_gc_pure': 4, + 'getfield_gc_pure': 8, 'guard_class': 8, 'int_add': 8, 'float_mul': 2, 'jump': 2, 'int_ge': 4, 'getinteriorfield_raw': 4, 'float_add': 2, @@ -212,7 +236,8 @@ def test_ufunc(self): result = self.run("ufunc") assert result == -6 - self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, + self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, + "float_neg": 1, "setinteriorfield_raw": 1, "int_add": 2, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -322,10 +347,9 @@ result = self.run("setslice") assert result == 11.0 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, - 'setinteriorfield_raw': 1, 'int_add': 3, - 'int_lt': 1, 'guard_true': 1, 'jump': 1, - 'arraylen_gc': 3}) + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, + 'setinteriorfield_raw': 1, 'int_add': 2, + 'int_eq': 1, 'guard_false': 1, 'jump': 1}) def define_virtual_slice(): return """ @@ -339,11 +363,12 @@ result = self.run("virtual_slice") assert result == 4 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) + class TestNumpyOld(LLJitMixin): def setup_class(cls): py.test.skip("old") @@ -377,4 +402,3 @@ result = self.meta_interp(f, [5], listops=True, backendopt=True) assert result == f(5) - From noreply at buildbot.pypy.org Sat Jan 14 15:10:50 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 14 Jan 2012 15:10:50 +0100 (CET) Subject: [pypy-commit] pypy numpypy-axisops: close merged branch Message-ID: <20120114141050.3DB7682B12@wyvern.cs.uni-duesseldorf.de> Author: 
Maciej Fijalkowski Branch: numpypy-axisops Changeset: r51307:b5b4522440a5 Date: 2012-01-14 16:10 +0200 http://bitbucket.org/pypy/pypy/changeset/b5b4522440a5/ Log: close merged branch From noreply at buildbot.pypy.org Sat Jan 14 15:26:33 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 14 Jan 2012 15:26:33 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: done Message-ID: <20120114142633.5427B82B12@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4033:e383a2a0d80b Date: 2012-01-14 16:26 +0200 http://bitbucket.org/pypy/extradoc/changeset/e383a2a0d80b/ Log: done diff --git a/planning/micronumpy.txt b/planning/micronumpy.txt --- a/planning/micronumpy.txt +++ b/planning/micronumpy.txt @@ -17,8 +17,6 @@ - more attributes/methods on numpy.flatiter -- axis= parameter to various methods - - expose ndarray.ctypes - subclassing ndarray (instantiating subcalsses curently returns the wrong type) From noreply at buildbot.pypy.org Sat Jan 14 15:41:17 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Sat, 14 Jan 2012 15:41:17 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: possibly free boxes returned by ensure_value_is_boxed. Message-ID: <20120114144117.32A8A82B12@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r51308:f657ede0f621 Date: 2012-01-14 09:41 -0500 http://bitbucket.org/pypy/pypy/changeset/f657ede0f621/ Log: possibly free boxes returned by ensure_value_is_boxed. 
diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -659,11 +659,14 @@ size, ofs, _ = unpack_arraydescr(op.getdescr()) scale = get_scale(size) args = op.getarglist() - base_loc, _ = self._ensure_value_is_boxed(a0, args) - ofs_loc, _ = self._ensure_value_is_boxed(a1, args) - value_loc, _ = self._ensure_value_is_boxed(a2, args) + base_loc, base_box = self._ensure_value_is_boxed(a0, args) + ofs_loc, ofs_box = self._ensure_value_is_boxed(a1, args) + value_loc, value_box = self._ensure_value_is_boxed(a2, args) assert _check_imm_arg(ofs) scratch_loc = self.rm.get_scratch_reg(INT, [base_loc, ofs_loc]) + self.possibly_free_var(base_box) + self.possibly_free_var(ofs_box) + self.possibly_free_var(value_box) assert scratch_loc not in [base_loc, ofs_loc] return [value_loc, base_loc, ofs_loc, scratch_loc, imm(scale), imm(ofs)] From noreply at buildbot.pypy.org Sat Jan 14 16:35:32 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 14 Jan 2012 16:35:32 +0100 (CET) Subject: [pypy-commit] pypy.org extradoc: Update the status of stackless. Message-ID: <20120114153532.A2EF582B12@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r304:07305742afe8 Date: 2012-01-14 16:35 +0100 http://bitbucket.org/pypy/pypy.org/changeset/07305742afe8/ Log: Update the status of stackless. diff --git a/features.html b/features.html --- a/features.html +++ b/features.html @@ -87,10 +87,10 @@

Stackless

-

PyPy is also available in a separate Stackless version that includes -support for micro-threads for massive concurrency. Read more about -it at the Stackless main site (we provide the same interface as the -standard Stackless Python), and at the greenlets page.

+

Support for Stackless and greenlets are now integrated in the normal +PyPy. More detailed information is available here.

+

Note that there is still an important performance hit for programs using +Stackless features.

Other features

diff --git a/index.html b/index.html --- a/index.html +++ b/index.html @@ -58,7 +58,7 @@ and django.
  • Sandboxing: PyPy provides the ability to run untrusted code in a fully secure way.
  • -
  • Stackless: PyPy can be configured to run in stackless mode, +
  • Stackless: PyPy comes by default with support for stackless mode, providing micro-threads for massive concurrency.
  • As well as other features.
  • diff --git a/source/features.txt b/source/features.txt --- a/source/features.txt +++ b/source/features.txt @@ -70,14 +70,14 @@ Stackless -------------------------- -PyPy is also available in a separate `Stackless version`_ that includes -support for micro-threads for massive concurrency. Read more about -it at the Stackless_ main site (we provide the same interface as the -standard Stackless Python), and at the greenlets_ page. +Support for Stackless_ and greenlets are now integrated in the normal +PyPy. More detailed information is available here__. -.. _`Stackless version`: download.html#stackless-version -.. _`stackless`: http://www.stackless.com/ -.. _`greenlets`: http://codespeak.net/svn/greenlet/trunk/doc/greenlet.txt +Note that there is still an important performance hit for programs using +Stackless features. + +.. _Stackless: http://www.stackless.com/ +.. __: http://doc.pypy.org/en/latest/stackless.html Other features diff --git a/source/index.txt b/source/index.txt --- a/source/index.txt +++ b/source/index.txt @@ -19,7 +19,7 @@ * **Sandboxing:** PyPy provides the ability to `run untrusted code`_ in a fully secure way. - * **Stackless:** PyPy can be configured to run in `stackless`_ mode, + * **Stackless:** PyPy comes by default with support for `stackless mode`_, providing micro-threads for massive concurrency. * As well as other `features`_. @@ -33,7 +33,7 @@ Want to know more? A good place to start is our detailed `speed`_ and `compatibility`_ reports! -.. _`stackless`: http://www.stackless.com/ +.. _`stackless mode`: features.html#stackless .. _`Python`: http://python.org/ .. _`fast`: http://speed.pypy.org/ .. 
_`faster`: http://speed.pypy.org/ From noreply at buildbot.pypy.org Sat Jan 14 17:19:24 2012 From: noreply at buildbot.pypy.org (bivab) Date: Sat, 14 Jan 2012 17:19:24 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Implement case of moving a const float to the stack Message-ID: <20120114161924.40ABD82B12@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51309:bb83f459ff90 Date: 2012-01-13 14:58 +0100 http://bitbucket.org/pypy/pypy/changeset/bb83f459ff90/ Log: Implement case of moving a const float to the stack diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -856,12 +856,18 @@ assert 0, 'unsupported case' def _mov_imm_float_to_loc(self, prev_loc, loc, cond=c.AL): - if not loc.is_vfp_reg(): + if loc.is_vfp_reg(): + self.mc.PUSH([r.ip.value], cond=cond) + self.mc.gen_load_int(r.ip.value, prev_loc.getint(), cond=cond) + self.mc.VLDR(loc.value, r.ip.value, cond=cond) + self.mc.POP([r.ip.value], cond=cond) + elif loc.is_stack(): + self.regalloc_push(r.vfp_ip) + self.regalloc_mov(prev_loc, r.vfp_ip, cond) + self.regalloc_mov(r.vfp_ip, loc, cond) + self.regalloc_pop(r.vfp_ip) + else: assert 0, 'unsupported case' - self.mc.PUSH([r.ip.value], cond=cond) - self.mc.gen_load_int(r.ip.value, prev_loc.getint(), cond=cond) - self.mc.VLDR(loc.value, r.ip.value, cond=cond) - self.mc.POP([r.ip.value], cond=cond) def _mov_vfp_reg_to_loc(self, prev_loc, loc, cond=c.AL): if loc.is_vfp_reg(): From noreply at buildbot.pypy.org Sat Jan 14 17:19:25 2012 From: noreply at buildbot.pypy.org (bivab) Date: Sat, 14 Jan 2012 17:19:25 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: implement as_key method for const float locations required for using const floats in jumps Message-ID: <20120114161925.6C7FA82B12@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51310:44a83931f5cb Date: 2012-01-13 
14:59 +0100 http://bitbucket.org/pypy/pypy/changeset/44a83931f5cb/ Log: implement as_key method for const float locations required for using const floats in jumps diff --git a/pypy/jit/backend/arm/locations.py b/pypy/jit/backend/arm/locations.py --- a/pypy/jit/backend/arm/locations.py +++ b/pypy/jit/backend/arm/locations.py @@ -100,6 +100,9 @@ def is_imm_float(self): return True + def as_key(self): + return self.value + class StackLocation(AssemblerLocation): _immutable_ = True From noreply at buildbot.pypy.org Sat Jan 14 17:19:26 2012 From: noreply at buildbot.pypy.org (bivab) Date: Sat, 14 Jan 2012 17:19:26 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: remove a call that was done twice Message-ID: <20120114161926.990EF82B12@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51311:8ff3eb80d78e Date: 2012-01-13 16:38 +0100 http://bitbucket.org/pypy/pypy/changeset/8ff3eb80d78e/ Log: remove a call that was done twice diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -951,14 +951,13 @@ self.rm.force_allocate_reg(t, selected_reg=r.r1) self.possibly_free_var(op.result) self.possibly_free_var(t) + return [imm(size)] - return [imm(size)] def get_mark_gc_roots(self, gcrootmap, use_copy_area=False): shape = gcrootmap.get_basic_shape(False) for v, val in self.frame_manager.bindings.items(): if (isinstance(v, BoxPtr) and self.rm.stays_alive(v)): assert val.is_stack() - gcrootmap.add_frame_offset(shape, val.position * -WORD) gcrootmap.add_frame_offset(shape, -val.value) for v, reg in self.rm.reg_bindings.items(): if reg is r.r0: From noreply at buildbot.pypy.org Sat Jan 14 17:19:27 2012 From: noreply at buildbot.pypy.org (bivab) Date: Sat, 14 Jan 2012 17:19:27 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: as pointed out by David Edelsohn and Maciej, get rid of redundant calls to list Message-ID: 
<20120114161927.D70DE82B12@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51312:483ce484be66 Date: 2012-01-14 16:34 +0100 http://bitbucket.org/pypy/pypy/changeset/483ce484be66/ Log: as pointed out by David Edelsohn and Maciej, get rid of redundant calls to list when accessing the list of inputargs of an operation. Also replace all calls to make_sure_var_in_reg with calls to _ensure_value_is_boxes which makes sure that all temporary boxes are also considered as forbidden. diff --git a/pypy/jit/backend/arm/helper/regalloc.py b/pypy/jit/backend/arm/helper/regalloc.py --- a/pypy/jit/backend/arm/helper/regalloc.py +++ b/pypy/jit/backend/arm/helper/regalloc.py @@ -33,9 +33,9 @@ imm_a1 = check_imm_box(a1, imm_size, allow_zero=allow_zero) if not imm_a0 and imm_a1: l0 = self._ensure_value_is_boxed(a0) - l1 = self.make_sure_var_in_reg(a1, boxes) + l1 = self._ensure_value_is_boxed(a1, boxes) elif commutative and imm_a0 and not imm_a1: - l1 = self.make_sure_var_in_reg(a0, boxes) + l1 = self._ensure_value_is_boxed(a0, boxes) l0 = self._ensure_value_is_boxed(a1, boxes) else: l0 = self._ensure_value_is_boxed(a0, boxes) @@ -90,8 +90,8 @@ assert fcond is not None a0 = op.getarg(0) a1 = op.getarg(1) - arg1 = self.make_sure_var_in_reg(a0, selected_reg=r.r0) - arg2 = self.make_sure_var_in_reg(a1, selected_reg=r.r1) + arg1 = self.rm.make_sure_var_in_reg(a0, selected_reg=r.r0) + arg2 = self.rm.make_sure_var_in_reg(a1, selected_reg=r.r1) assert arg1 == r.r0 assert arg2 == r.r1 if isinstance(a0, Box) and self.stays_alive(a0): @@ -113,7 +113,7 @@ l0 = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes) if imm_a1: - l1 = self.make_sure_var_in_reg(arg1, boxes) + l1 = self._ensure_value_is_boxed(arg1, boxes) else: l1 = self._ensure_value_is_boxed(arg1, forbidden_vars=boxes) @@ -134,7 +134,7 @@ assert fcond is not None a0 = op.getarg(0) assert isinstance(a0, Box) - reg = self.make_sure_var_in_reg(a0) + reg = self._ensure_value_is_boxed(a0) 
self.possibly_free_vars_for_op(op) if guard_op is None: res = self.force_allocate_reg(op.result, [a0]) diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -892,7 +892,7 @@ # need the box here if isinstance(args[4], Box): length_box = args[4] - length_loc = regalloc.make_sure_var_in_reg(args[4], forbidden_vars) + length_loc = regalloc._ensure_value_is_boxed(args[4], forbidden_vars) else: length_box = TempInt() length_loc = regalloc.force_allocate_reg(length_box, diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -183,7 +183,7 @@ self.assembler.load(loc, immvalue) else: loc = self.make_sure_var_in_reg(thing, - forbidden_vars=forbidden_vars) + forbidden_vars=self.temp_boxes + forbidden_vars) return loc def get_scratch_reg(self, type=INT, forbidden_vars=[], selected_reg=None): @@ -285,12 +285,7 @@ def make_sure_var_in_reg(self, var, forbidden_vars=[], selected_reg=None, need_lower_byte=False): - if var.type == FLOAT: - return self.vfprm.make_sure_var_in_reg(var, forbidden_vars, - selected_reg, need_lower_byte) - else: - return self.rm.make_sure_var_in_reg(var, forbidden_vars, - selected_reg, need_lower_byte) + assert 0, 'should not be called directly' def convert_to_imm(self, value): if isinstance(value, ConstInt): @@ -312,7 +307,7 @@ def prepare_loop(self, inputargs, operations): self._prepare(inputargs, operations) self._set_initial_bindings(inputargs) - self.possibly_free_vars(list(inputargs)) + self.possibly_free_vars(inputargs) def prepare_bridge(self, inputargs, arglocs, ops): self._prepare(inputargs, ops) @@ -410,18 +405,18 @@ self.rm._sync_var(v) def _prepare_op_int_add(self, op, fcond): - boxes = list(op.getarglist()) + boxes = op.getarglist() a0, a1 = boxes imm_a0 = check_imm_box(a0) imm_a1 = check_imm_box(a1) if not imm_a0 and 
imm_a1: - l0 = self._ensure_value_is_boxed(a0) - l1 = self.make_sure_var_in_reg(a1, boxes) + l0 = self._ensure_value_is_boxed(a0, boxes) + l1 = self._ensure_value_is_boxed(a1, boxes) elif imm_a0 and not imm_a1: - l0 = self.make_sure_var_in_reg(a0) + l0 = self._ensure_value_is_boxed(a0, boxes) l1 = self._ensure_value_is_boxed(a1, boxes) else: - l0 = self._ensure_value_is_boxed(a0) + l0 = self._ensure_value_is_boxed(a0, boxes) l1 = self._ensure_value_is_boxed(a1, boxes) return [l0, l1] @@ -438,9 +433,9 @@ imm_a1 = check_imm_box(a1) if not imm_a0 and imm_a1: l0 = self._ensure_value_is_boxed(a0, boxes) - l1 = self.make_sure_var_in_reg(a1, boxes) + l1 = self._ensure_value_is_boxed(a1, boxes) elif imm_a0 and not imm_a1: - l0 = self.make_sure_var_in_reg(a0, boxes) + l0 = self._ensure_value_is_boxed(a0, boxes) l1 = self._ensure_value_is_boxed(a1, boxes) else: l0 = self._ensure_value_is_boxed(a0, boxes) @@ -455,7 +450,7 @@ return locs + [res] def prepare_op_int_mul(self, op, fcond): - boxes = list(op.getarglist()) + boxes = op.getarglist() a0, a1 = boxes reg1 = self._ensure_value_is_boxed(a0, forbidden_vars=boxes) @@ -597,14 +592,14 @@ prepare_op_guard_isnull = prepare_op_guard_true def prepare_op_guard_value(self, op, fcond): - boxes = list(op.getarglist()) + boxes = op.getarglist() a0, a1 = boxes imm_a1 = check_imm_box(a1) l0 = self._ensure_value_is_boxed(a0, boxes) if not imm_a1: l1 = self._ensure_value_is_boxed(a1, boxes) else: - l1 = self.make_sure_var_in_reg(a1, boxes) + l1 = self._ensure_value_is_boxed(a1, boxes) assert op.result is None arglocs = self._prepare_guard(op, [l0, l1]) self.possibly_free_vars(op.getarglist()) @@ -620,7 +615,7 @@ prepare_op_guard_not_invalidated = prepare_op_guard_no_overflow def prepare_op_guard_exception(self, op, fcond): - boxes = list(op.getarglist()) + boxes = op.getarglist() arg0 = ConstInt(rffi.cast(lltype.Signed, op.getarg(0).getint())) loc = self._ensure_value_is_boxed(arg0) loc1 = self.get_scratch_reg(INT, boxes) @@ -648,7 +643,7 
@@ def _prepare_guard_class(self, op, fcond): assert isinstance(op.getarg(0), Box) - boxes = list(op.getarglist()) + boxes = op.getarglist() x = self._ensure_value_is_boxed(boxes[0], boxes) y = self.get_scratch_reg(REF, forbidden_vars=boxes) @@ -727,7 +722,7 @@ return [] def prepare_op_setfield_gc(self, op, fcond): - boxes = list(op.getarglist()) + boxes = op.getarglist() a0, a1 = boxes ofs, size, sign = unpack_fielddescr(op.getdescr()) base_loc = self._ensure_value_is_boxed(a0, boxes) @@ -806,23 +801,22 @@ return [res, base_loc, imm(ofs)] def prepare_op_setarrayitem_gc(self, op, fcond): - a0, a1, a2 = list(op.getarglist()) size, ofs, _ = unpack_arraydescr(op.getdescr()) scale = get_scale(size) args = op.getarglist() - base_loc = self._ensure_value_is_boxed(a0, args) - ofs_loc = self._ensure_value_is_boxed(a1, args) - value_loc = self._ensure_value_is_boxed(a2, args) + base_loc = self._ensure_value_is_boxed(args[0], args) + ofs_loc = self._ensure_value_is_boxed(args[1], args) + value_loc = self._ensure_value_is_boxed(args[2], args) assert check_imm_arg(ofs) return [value_loc, base_loc, ofs_loc, imm(scale), imm(ofs)] prepare_op_setarrayitem_raw = prepare_op_setarrayitem_gc def prepare_op_getarrayitem_gc(self, op, fcond): - a0, a1 = boxes = list(op.getarglist()) + boxes = op.getarglist() size, ofs, _ = unpack_arraydescr(op.getdescr()) scale = get_scale(size) - base_loc = self._ensure_value_is_boxed(a0, boxes) - ofs_loc = self._ensure_value_is_boxed(a1, boxes) + base_loc = self._ensure_value_is_boxed(boxes[0], boxes) + ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) self.possibly_free_vars_for_op(op) self.free_temp_vars() res = self.force_allocate_reg(op.result) @@ -852,13 +846,13 @@ return [l0, l1, res] def prepare_op_strgetitem(self, op, fcond): - boxes = list(op.getarglist()) + boxes = op.getarglist() base_loc = self._ensure_value_is_boxed(boxes[0]) a1 = boxes[1] imm_a1 = check_imm_box(a1) if imm_a1: - ofs_loc = self.make_sure_var_in_reg(a1, boxes) + ofs_loc 
= self._ensure_value_is_boxed(a1, boxes) else: ofs_loc = self._ensure_value_is_boxed(a1, boxes) @@ -872,7 +866,7 @@ return [res, base_loc, ofs_loc, imm(basesize)] def prepare_op_strsetitem(self, op, fcond): - boxes = list(op.getarglist()) + boxes = op.getarglist() base_loc = self._ensure_value_is_boxed(boxes[0], boxes) ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) value_loc = self._ensure_value_is_boxed(boxes[2], boxes) @@ -901,7 +895,7 @@ return [l0, l1, res] def prepare_op_unicodegetitem(self, op, fcond): - boxes = list(op.getarglist()) + boxes = op.getarglist() base_loc = self._ensure_value_is_boxed(boxes[0], boxes) ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) @@ -916,7 +910,7 @@ imm(scale), imm(basesize), imm(itemsize)] def prepare_op_unicodesetitem(self, op, fcond): - boxes = list(op.getarglist()) + boxes = op.getarglist() base_loc = self._ensure_value_is_boxed(boxes[0], boxes) ofs_loc = self._ensure_value_is_boxed(boxes[1], boxes) value_loc = self._ensure_value_is_boxed(boxes[2], boxes) @@ -930,7 +924,7 @@ arg = op.getarg(0) imm_arg = check_imm_box(arg) if imm_arg: - argloc = self.make_sure_var_in_reg(arg) + argloc = self._ensure_value_is_boxed(arg) else: argloc = self._ensure_value_is_boxed(arg) self.possibly_free_vars_for_op(op) From noreply at buildbot.pypy.org Sat Jan 14 20:07:50 2012 From: noreply at buildbot.pypy.org (stefanor) Date: Sat, 14 Jan 2012 20:07:50 +0100 (CET) Subject: [pypy-commit] pypy default: Big endian test case for test_utf_16_encode_decode Message-ID: <20120114190750.A77FA82B12@wyvern.cs.uni-duesseldorf.de> Author: Stefano Rivera Branch: Changeset: r51313:c6a06cbab53d Date: 2012-01-14 21:07 +0200 http://bitbucket.org/pypy/pypy/changeset/c6a06cbab53d/ Log: Big endian test case for test_utf_16_encode_decode diff --git a/pypy/module/_codecs/test/test_codecs.py b/pypy/module/_codecs/test/test_codecs.py --- a/pypy/module/_codecs/test/test_codecs.py +++ b/pypy/module/_codecs/test/test_codecs.py @@ -588,10 +588,18 @@ 
raises(UnicodeDecodeError, '+3ADYAA-'.decode, 'utf-7') def test_utf_16_encode_decode(self): - import codecs + import codecs, sys x = u'123abc' - assert codecs.getencoder('utf-16')(x) == ('\xff\xfe1\x002\x003\x00a\x00b\x00c\x00', 6) - assert codecs.getdecoder('utf-16')('\xff\xfe1\x002\x003\x00a\x00b\x00c\x00') == (x, 14) + if sys.byteorder == 'big': + assert codecs.getencoder('utf-16')(x) == ( + '\xfe\xff\x001\x002\x003\x00a\x00b\x00c', 6) + assert codecs.getdecoder('utf-16')( + '\xfe\xff\x001\x002\x003\x00a\x00b\x00c') == (x, 14) + else: + assert codecs.getencoder('utf-16')(x) == ( + '\xff\xfe1\x002\x003\x00a\x00b\x00c\x00', 6) + assert codecs.getdecoder('utf-16')( + '\xff\xfe1\x002\x003\x00a\x00b\x00c\x00') == (x, 14) def test_unicode_escape(self): assert u'\\'.encode('unicode-escape') == '\\\\' From noreply at buildbot.pypy.org Sat Jan 14 20:37:17 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sat, 14 Jan 2012 20:37:17 +0100 (CET) Subject: [pypy-commit] pypy numpypy-reshape: fix for 1-length shapes Message-ID: <20120114193717.D139082B12@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-reshape Changeset: r51314:38824caea2ca Date: 2012-01-14 21:34 +0200 http://bitbucket.org/pypy/pypy/changeset/38824caea2ca/ Log: fix for 1-length shapes diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -157,9 +157,6 @@ # (meaning that the realignment of elements crosses from one step into another) # return None so that the caller can raise an exception. 
def calc_new_strides(new_shape, old_shape, old_strides): - # Return the proper strides for new_shape, or None if the mapping crosses - # stepping boundaries - # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and # len(new_shape) > 0 steps = [] @@ -167,6 +164,7 @@ oldI = 0 new_strides = [] if old_strides[0] < old_strides[-1]: + #Start at old_shape[0], old_stides[0] for i in range(len(old_shape)): steps.append(old_strides[i] / last_step) last_step *= old_shape[i] @@ -184,10 +182,11 @@ if n_new_elems_used == n_old_elems_to_use: oldI += 1 if oldI >= len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] else: + #Start at old_shape[-1], old_strides[-1] for i in range(len(old_shape) - 1, -1, -1): steps.insert(0, old_strides[i] / last_step) last_step *= old_shape[i] @@ -207,7 +206,7 @@ if n_new_elems_used == n_old_elems_to_use: oldI -= 1 if oldI < -len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] return new_strides diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -157,6 +157,8 @@ assert calc_new_strides([2, 3, 4], [8, 3], [1, 16]) is None assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + assert calc_new_strides([105, 1], [3, 5, 7], [35, 7, 1]) == [1, 1] + assert calc_new_strides([1, 105], [3, 5, 7], [35, 7, 1]) == [105, 1] class AppTestNumArray(BaseNumpyAppTest): @@ -765,7 +767,6 @@ assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() raises (ValueError, 'a[:, 1, :].sum(2)') assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() - skip("Those are broken on reshape, fix!") assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) @@ -1556,3 +1557,7 @@ a = range(12) b = reshape(a, (3, 4)) assert b.shape == 
(3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert a.reshape(1, -1).shape == (1, 105) + assert a.reshape(1, 1, -1).shape == (1, 1, 105) + assert a.reshape(-1, 1, 1).shape == (105, 1, 1) From noreply at buildbot.pypy.org Sat Jan 14 20:40:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 14 Jan 2012 20:40:47 +0100 (CET) Subject: [pypy-commit] pypy default: merge numpypy-reshape Message-ID: <20120114194047.83ACA82B12@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51315:c9f77f542246 Date: 2012-01-14 21:40 +0200 http://bitbucket.org/pypy/pypy/changeset/c9f77f542246/ Log: merge numpypy-reshape diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -157,9 +157,6 @@ # (meaning that the realignment of elements crosses from one step into another) # return None so that the caller can raise an exception. def calc_new_strides(new_shape, old_shape, old_strides): - # Return the proper strides for new_shape, or None if the mapping crosses - # stepping boundaries - # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and # len(new_shape) > 0 steps = [] @@ -167,6 +164,7 @@ oldI = 0 new_strides = [] if old_strides[0] < old_strides[-1]: + #Start at old_shape[0], old_stides[0] for i in range(len(old_shape)): steps.append(old_strides[i] / last_step) last_step *= old_shape[i] @@ -184,10 +182,11 @@ if n_new_elems_used == n_old_elems_to_use: oldI += 1 if oldI >= len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] else: + #Start at old_shape[-1], old_strides[-1] for i in range(len(old_shape) - 1, -1, -1): steps.insert(0, old_strides[i] / last_step) last_step *= old_shape[i] @@ -207,7 +206,7 @@ if n_new_elems_used == n_old_elems_to_use: oldI -= 1 if oldI < -len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= 
old_shape[oldI] return new_strides diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -157,6 +157,8 @@ assert calc_new_strides([2, 3, 4], [8, 3], [1, 16]) is None assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + assert calc_new_strides([105, 1], [3, 5, 7], [35, 7, 1]) == [1, 1] + assert calc_new_strides([1, 105], [3, 5, 7], [35, 7, 1]) == [105, 1] class AppTestNumArray(BaseNumpyAppTest): @@ -765,7 +767,6 @@ assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() raises (ValueError, 'a[:, 1, :].sum(2)') assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() - skip("Those are broken on reshape, fix!") assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) @@ -1556,3 +1557,7 @@ a = range(12) b = reshape(a, (3, 4)) assert b.shape == (3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert a.reshape(1, -1).shape == (1, 105) + assert a.reshape(1, 1, -1).shape == (1, 1, 105) + assert a.reshape(-1, 1, 1).shape == (105, 1, 1) From noreply at buildbot.pypy.org Sat Jan 14 20:40:48 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 14 Jan 2012 20:40:48 +0100 (CET) Subject: [pypy-commit] pypy numpypy-reshape: close merged branch Message-ID: <20120114194048.AF82C82B12@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-reshape Changeset: r51316:edc019f4b1b8 Date: 2012-01-14 21:40 +0200 http://bitbucket.org/pypy/pypy/changeset/edc019f4b1b8/ Log: close merged branch From notifications-noreply at bitbucket.org Sat Jan 14 21:26:56 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Sat, 14 Jan 2012 20:26:56 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120114202656.17002.30760@bitbucket05.managed.contegix.com> You have received a notification from benol. 
Hi, I forked pypy. My fork is at https://bitbucket.org/benol/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Sat Jan 14 21:48:25 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:25 +0100 (CET) Subject: [pypy-commit] pypy py3k: New Unicode database properties: .isxidstart and .isxidcontinue Message-ID: <20120114204825.91B0282B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51317:9779b8a0e896 Date: 2011-12-22 10:32 +0100 http://bitbucket.org/pypy/pypy/changeset/9779b8a0e896/ Log: New Unicode database properties: .isxidstart and .isxidcontinue diff too long, truncating to 10000 out of 116033 lines diff --git a/pypy/module/unicodedata/DerivedCoreProperties-5.2.0.txt b/pypy/module/unicodedata/DerivedCoreProperties-5.2.0.txt new file mode 100644 --- /dev/null +++ b/pypy/module/unicodedata/DerivedCoreProperties-5.2.0.txt @@ -0,0 +1,9243 @@ +# DerivedCoreProperties-5.2.0.txt +# Date: 2009-08-26, 00:45:22 GMT [MD] +# +# Unicode Character Database +# Copyright (c) 1991-2009 Unicode, Inc. 
+# For terms of use, see http://www.unicode.org/terms_of_use.html +# For documentation, see http://www.unicode.org/reports/tr44/ + +# ================================================ + +# Derived Property: Math +# Generated from: Sm + Other_Math + +002B ; Math # Sm PLUS SIGN +003C..003E ; Math # Sm [3] LESS-THAN SIGN..GREATER-THAN SIGN +005E ; Math # Sk CIRCUMFLEX ACCENT +007C ; Math # Sm VERTICAL LINE +007E ; Math # Sm TILDE +00AC ; Math # Sm NOT SIGN +00B1 ; Math # Sm PLUS-MINUS SIGN +00D7 ; Math # Sm MULTIPLICATION SIGN +00F7 ; Math # Sm DIVISION SIGN +03D0..03D2 ; Math # L& [3] GREEK BETA SYMBOL..GREEK UPSILON WITH HOOK SYMBOL +03D5 ; Math # L& GREEK PHI SYMBOL +03F0..03F1 ; Math # L& [2] GREEK KAPPA SYMBOL..GREEK RHO SYMBOL +03F4..03F5 ; Math # L& [2] GREEK CAPITAL THETA SYMBOL..GREEK LUNATE EPSILON SYMBOL +03F6 ; Math # Sm GREEK REVERSED LUNATE EPSILON SYMBOL +0606..0608 ; Math # Sm [3] ARABIC-INDIC CUBE ROOT..ARABIC RAY +2016 ; Math # Po DOUBLE VERTICAL LINE +2032..2034 ; Math # Po [3] PRIME..TRIPLE PRIME +2040 ; Math # Pc CHARACTER TIE +2044 ; Math # Sm FRACTION SLASH +2052 ; Math # Sm COMMERCIAL MINUS SIGN +2061..2064 ; Math # Cf [4] FUNCTION APPLICATION..INVISIBLE PLUS +207A..207C ; Math # Sm [3] SUPERSCRIPT PLUS SIGN..SUPERSCRIPT EQUALS SIGN +207D ; Math # Ps SUPERSCRIPT LEFT PARENTHESIS +207E ; Math # Pe SUPERSCRIPT RIGHT PARENTHESIS +208A..208C ; Math # Sm [3] SUBSCRIPT PLUS SIGN..SUBSCRIPT EQUALS SIGN +208D ; Math # Ps SUBSCRIPT LEFT PARENTHESIS +208E ; Math # Pe SUBSCRIPT RIGHT PARENTHESIS +20D0..20DC ; Math # Mn [13] COMBINING LEFT HARPOON ABOVE..COMBINING FOUR DOTS ABOVE +20E1 ; Math # Mn COMBINING LEFT RIGHT ARROW ABOVE +20E5..20E6 ; Math # Mn [2] COMBINING REVERSE SOLIDUS OVERLAY..COMBINING DOUBLE VERTICAL STROKE OVERLAY +20EB..20EF ; Math # Mn [5] COMBINING LONG DOUBLE SOLIDUS OVERLAY..COMBINING RIGHT ARROW BELOW +2102 ; Math # L& DOUBLE-STRUCK CAPITAL C +210A..2113 ; Math # L& [10] SCRIPT SMALL G..SCRIPT SMALL L +2115 ; Math # L& DOUBLE-STRUCK 
CAPITAL N +2119..211D ; Math # L& [5] DOUBLE-STRUCK CAPITAL P..DOUBLE-STRUCK CAPITAL R +2124 ; Math # L& DOUBLE-STRUCK CAPITAL Z +2128 ; Math # L& BLACK-LETTER CAPITAL Z +2129 ; Math # So TURNED GREEK SMALL LETTER IOTA +212C..212D ; Math # L& [2] SCRIPT CAPITAL B..BLACK-LETTER CAPITAL C +212F..2131 ; Math # L& [3] SCRIPT SMALL E..SCRIPT CAPITAL F +2133..2134 ; Math # L& [2] SCRIPT CAPITAL M..SCRIPT SMALL O +2135..2138 ; Math # Lo [4] ALEF SYMBOL..DALET SYMBOL +213C..213F ; Math # L& [4] DOUBLE-STRUCK SMALL PI..DOUBLE-STRUCK CAPITAL PI +2140..2144 ; Math # Sm [5] DOUBLE-STRUCK N-ARY SUMMATION..TURNED SANS-SERIF CAPITAL Y +2145..2149 ; Math # L& [5] DOUBLE-STRUCK ITALIC CAPITAL D..DOUBLE-STRUCK ITALIC SMALL J +214B ; Math # Sm TURNED AMPERSAND +2190..2194 ; Math # Sm [5] LEFTWARDS ARROW..LEFT RIGHT ARROW +2195..2199 ; Math # So [5] UP DOWN ARROW..SOUTH WEST ARROW +219A..219B ; Math # Sm [2] LEFTWARDS ARROW WITH STROKE..RIGHTWARDS ARROW WITH STROKE +219C..219F ; Math # So [4] LEFTWARDS WAVE ARROW..UPWARDS TWO HEADED ARROW +21A0 ; Math # Sm RIGHTWARDS TWO HEADED ARROW +21A1..21A2 ; Math # So [2] DOWNWARDS TWO HEADED ARROW..LEFTWARDS ARROW WITH TAIL +21A3 ; Math # Sm RIGHTWARDS ARROW WITH TAIL +21A4..21A5 ; Math # So [2] LEFTWARDS ARROW FROM BAR..UPWARDS ARROW FROM BAR +21A6 ; Math # Sm RIGHTWARDS ARROW FROM BAR +21A7 ; Math # So DOWNWARDS ARROW FROM BAR +21A9..21AD ; Math # So [5] LEFTWARDS ARROW WITH HOOK..LEFT RIGHT WAVE ARROW +21AE ; Math # Sm LEFT RIGHT ARROW WITH STROKE +21B0..21B1 ; Math # So [2] UPWARDS ARROW WITH TIP LEFTWARDS..UPWARDS ARROW WITH TIP RIGHTWARDS +21B6..21B7 ; Math # So [2] ANTICLOCKWISE TOP SEMICIRCLE ARROW..CLOCKWISE TOP SEMICIRCLE ARROW +21BC..21CD ; Math # So [18] LEFTWARDS HARPOON WITH BARB UPWARDS..LEFTWARDS DOUBLE ARROW WITH STROKE +21CE..21CF ; Math # Sm [2] LEFT RIGHT DOUBLE ARROW WITH STROKE..RIGHTWARDS DOUBLE ARROW WITH STROKE +21D0..21D1 ; Math # So [2] LEFTWARDS DOUBLE ARROW..UPWARDS DOUBLE ARROW +21D2 ; Math # Sm RIGHTWARDS DOUBLE 
ARROW +21D3 ; Math # So DOWNWARDS DOUBLE ARROW +21D4 ; Math # Sm LEFT RIGHT DOUBLE ARROW +21D5..21DB ; Math # So [7] UP DOWN DOUBLE ARROW..RIGHTWARDS TRIPLE ARROW +21DD ; Math # So RIGHTWARDS SQUIGGLE ARROW +21E4..21E5 ; Math # So [2] LEFTWARDS ARROW TO BAR..RIGHTWARDS ARROW TO BAR +21F4..22FF ; Math # Sm [268] RIGHT ARROW WITH SMALL CIRCLE..Z NOTATION BAG MEMBERSHIP +2308..230B ; Math # Sm [4] LEFT CEILING..RIGHT FLOOR +2320..2321 ; Math # Sm [2] TOP HALF INTEGRAL..BOTTOM HALF INTEGRAL +237C ; Math # Sm RIGHT ANGLE WITH DOWNWARDS ZIGZAG ARROW +239B..23B3 ; Math # Sm [25] LEFT PARENTHESIS UPPER HOOK..SUMMATION BOTTOM +23B4..23B5 ; Math # So [2] TOP SQUARE BRACKET..BOTTOM SQUARE BRACKET +23B7 ; Math # So RADICAL SYMBOL BOTTOM +23D0 ; Math # So VERTICAL LINE EXTENSION +23DC..23E1 ; Math # Sm [6] TOP PARENTHESIS..BOTTOM TORTOISE SHELL BRACKET +23E2 ; Math # So WHITE TRAPEZIUM +25A0..25A1 ; Math # So [2] BLACK SQUARE..WHITE SQUARE +25AE..25B6 ; Math # So [9] BLACK VERTICAL RECTANGLE..BLACK RIGHT-POINTING TRIANGLE +25B7 ; Math # Sm WHITE RIGHT-POINTING TRIANGLE +25BC..25C0 ; Math # So [5] BLACK DOWN-POINTING TRIANGLE..BLACK LEFT-POINTING TRIANGLE +25C1 ; Math # Sm WHITE LEFT-POINTING TRIANGLE +25C6..25C7 ; Math # So [2] BLACK DIAMOND..WHITE DIAMOND +25CA..25CB ; Math # So [2] LOZENGE..WHITE CIRCLE +25CF..25D3 ; Math # So [5] BLACK CIRCLE..CIRCLE WITH UPPER HALF BLACK +25E2 ; Math # So BLACK LOWER RIGHT TRIANGLE +25E4 ; Math # So BLACK UPPER LEFT TRIANGLE +25E7..25EC ; Math # So [6] SQUARE WITH LEFT HALF BLACK..WHITE UP-POINTING TRIANGLE WITH DOT +25F8..25FF ; Math # Sm [8] UPPER LEFT TRIANGLE..LOWER RIGHT TRIANGLE +2605..2606 ; Math # So [2] BLACK STAR..WHITE STAR +2640 ; Math # So FEMALE SIGN +2642 ; Math # So MALE SIGN +2660..2663 ; Math # So [4] BLACK SPADE SUIT..BLACK CLUB SUIT +266D..266E ; Math # So [2] MUSIC FLAT SIGN..MUSIC NATURAL SIGN +266F ; Math # Sm MUSIC SHARP SIGN +27C0..27C4 ; Math # Sm [5] THREE DIMENSIONAL ANGLE..OPEN SUPERSET +27C5 ; Math # Ps LEFT 
S-SHAPED BAG DELIMITER +27C6 ; Math # Pe RIGHT S-SHAPED BAG DELIMITER +27C7..27CA ; Math # Sm [4] OR WITH DOT INSIDE..VERTICAL BAR WITH HORIZONTAL STROKE +27CC ; Math # Sm LONG DIVISION +27D0..27E5 ; Math # Sm [22] WHITE DIAMOND WITH CENTRED DOT..WHITE SQUARE WITH RIGHTWARDS TICK +27E6 ; Math # Ps MATHEMATICAL LEFT WHITE SQUARE BRACKET +27E7 ; Math # Pe MATHEMATICAL RIGHT WHITE SQUARE BRACKET +27E8 ; Math # Ps MATHEMATICAL LEFT ANGLE BRACKET +27E9 ; Math # Pe MATHEMATICAL RIGHT ANGLE BRACKET +27EA ; Math # Ps MATHEMATICAL LEFT DOUBLE ANGLE BRACKET +27EB ; Math # Pe MATHEMATICAL RIGHT DOUBLE ANGLE BRACKET +27EC ; Math # Ps MATHEMATICAL LEFT WHITE TORTOISE SHELL BRACKET +27ED ; Math # Pe MATHEMATICAL RIGHT WHITE TORTOISE SHELL BRACKET +27EE ; Math # Ps MATHEMATICAL LEFT FLATTENED PARENTHESIS +27EF ; Math # Pe MATHEMATICAL RIGHT FLATTENED PARENTHESIS +27F0..27FF ; Math # Sm [16] UPWARDS QUADRUPLE ARROW..LONG RIGHTWARDS SQUIGGLE ARROW +2900..2982 ; Math # Sm [131] RIGHTWARDS TWO-HEADED ARROW WITH VERTICAL STROKE..Z NOTATION TYPE COLON +2983 ; Math # Ps LEFT WHITE CURLY BRACKET +2984 ; Math # Pe RIGHT WHITE CURLY BRACKET +2985 ; Math # Ps LEFT WHITE PARENTHESIS +2986 ; Math # Pe RIGHT WHITE PARENTHESIS +2987 ; Math # Ps Z NOTATION LEFT IMAGE BRACKET +2988 ; Math # Pe Z NOTATION RIGHT IMAGE BRACKET +2989 ; Math # Ps Z NOTATION LEFT BINDING BRACKET +298A ; Math # Pe Z NOTATION RIGHT BINDING BRACKET +298B ; Math # Ps LEFT SQUARE BRACKET WITH UNDERBAR +298C ; Math # Pe RIGHT SQUARE BRACKET WITH UNDERBAR +298D ; Math # Ps LEFT SQUARE BRACKET WITH TICK IN TOP CORNER +298E ; Math # Pe RIGHT SQUARE BRACKET WITH TICK IN BOTTOM CORNER +298F ; Math # Ps LEFT SQUARE BRACKET WITH TICK IN BOTTOM CORNER +2990 ; Math # Pe RIGHT SQUARE BRACKET WITH TICK IN TOP CORNER +2991 ; Math # Ps LEFT ANGLE BRACKET WITH DOT +2992 ; Math # Pe RIGHT ANGLE BRACKET WITH DOT +2993 ; Math # Ps LEFT ARC LESS-THAN BRACKET +2994 ; Math # Pe RIGHT ARC GREATER-THAN BRACKET +2995 ; Math # Ps DOUBLE LEFT ARC 
GREATER-THAN BRACKET +2996 ; Math # Pe DOUBLE RIGHT ARC LESS-THAN BRACKET +2997 ; Math # Ps LEFT BLACK TORTOISE SHELL BRACKET +2998 ; Math # Pe RIGHT BLACK TORTOISE SHELL BRACKET +2999..29D7 ; Math # Sm [63] DOTTED FENCE..BLACK HOURGLASS +29D8 ; Math # Ps LEFT WIGGLY FENCE +29D9 ; Math # Pe RIGHT WIGGLY FENCE +29DA ; Math # Ps LEFT DOUBLE WIGGLY FENCE +29DB ; Math # Pe RIGHT DOUBLE WIGGLY FENCE +29DC..29FB ; Math # Sm [32] INCOMPLETE INFINITY..TRIPLE PLUS +29FC ; Math # Ps LEFT-POINTING CURVED ANGLE BRACKET +29FD ; Math # Pe RIGHT-POINTING CURVED ANGLE BRACKET +29FE..2AFF ; Math # Sm [258] TINY..N-ARY WHITE VERTICAL BAR +2B30..2B44 ; Math # Sm [21] LEFT ARROW WITH SMALL CIRCLE..RIGHTWARDS ARROW THROUGH SUPERSET +2B47..2B4C ; Math # Sm [6] REVERSE TILDE OPERATOR ABOVE RIGHTWARDS ARROW..RIGHTWARDS ARROW ABOVE REVERSE TILDE OPERATOR +FB29 ; Math # Sm HEBREW LETTER ALTERNATIVE PLUS SIGN +FE61 ; Math # Po SMALL ASTERISK +FE62 ; Math # Sm SMALL PLUS SIGN +FE63 ; Math # Pd SMALL HYPHEN-MINUS +FE64..FE66 ; Math # Sm [3] SMALL LESS-THAN SIGN..SMALL EQUALS SIGN +FE68 ; Math # Po SMALL REVERSE SOLIDUS +FF0B ; Math # Sm FULLWIDTH PLUS SIGN +FF1C..FF1E ; Math # Sm [3] FULLWIDTH LESS-THAN SIGN..FULLWIDTH GREATER-THAN SIGN +FF3C ; Math # Po FULLWIDTH REVERSE SOLIDUS +FF3E ; Math # Sk FULLWIDTH CIRCUMFLEX ACCENT +FF5C ; Math # Sm FULLWIDTH VERTICAL LINE +FF5E ; Math # Sm FULLWIDTH TILDE +FFE2 ; Math # Sm FULLWIDTH NOT SIGN +FFE9..FFEC ; Math # Sm [4] HALFWIDTH LEFTWARDS ARROW..HALFWIDTH DOWNWARDS ARROW +1D400..1D454 ; Math # L& [85] MATHEMATICAL BOLD CAPITAL A..MATHEMATICAL ITALIC SMALL G +1D456..1D49C ; Math # L& [71] MATHEMATICAL ITALIC SMALL I..MATHEMATICAL SCRIPT CAPITAL A +1D49E..1D49F ; Math # L& [2] MATHEMATICAL SCRIPT CAPITAL C..MATHEMATICAL SCRIPT CAPITAL D +1D4A2 ; Math # L& MATHEMATICAL SCRIPT CAPITAL G +1D4A5..1D4A6 ; Math # L& [2] MATHEMATICAL SCRIPT CAPITAL J..MATHEMATICAL SCRIPT CAPITAL K +1D4A9..1D4AC ; Math # L& [4] MATHEMATICAL SCRIPT CAPITAL N..MATHEMATICAL 
SCRIPT CAPITAL Q +1D4AE..1D4B9 ; Math # L& [12] MATHEMATICAL SCRIPT CAPITAL S..MATHEMATICAL SCRIPT SMALL D +1D4BB ; Math # L& MATHEMATICAL SCRIPT SMALL F +1D4BD..1D4C3 ; Math # L& [7] MATHEMATICAL SCRIPT SMALL H..MATHEMATICAL SCRIPT SMALL N +1D4C5..1D505 ; Math # L& [65] MATHEMATICAL SCRIPT SMALL P..MATHEMATICAL FRAKTUR CAPITAL B +1D507..1D50A ; Math # L& [4] MATHEMATICAL FRAKTUR CAPITAL D..MATHEMATICAL FRAKTUR CAPITAL G +1D50D..1D514 ; Math # L& [8] MATHEMATICAL FRAKTUR CAPITAL J..MATHEMATICAL FRAKTUR CAPITAL Q +1D516..1D51C ; Math # L& [7] MATHEMATICAL FRAKTUR CAPITAL S..MATHEMATICAL FRAKTUR CAPITAL Y +1D51E..1D539 ; Math # L& [28] MATHEMATICAL FRAKTUR SMALL A..MATHEMATICAL DOUBLE-STRUCK CAPITAL B +1D53B..1D53E ; Math # L& [4] MATHEMATICAL DOUBLE-STRUCK CAPITAL D..MATHEMATICAL DOUBLE-STRUCK CAPITAL G +1D540..1D544 ; Math # L& [5] MATHEMATICAL DOUBLE-STRUCK CAPITAL I..MATHEMATICAL DOUBLE-STRUCK CAPITAL M +1D546 ; Math # L& MATHEMATICAL DOUBLE-STRUCK CAPITAL O +1D54A..1D550 ; Math # L& [7] MATHEMATICAL DOUBLE-STRUCK CAPITAL S..MATHEMATICAL DOUBLE-STRUCK CAPITAL Y +1D552..1D6A5 ; Math # L& [340] MATHEMATICAL DOUBLE-STRUCK SMALL A..MATHEMATICAL ITALIC SMALL DOTLESS J +1D6A8..1D6C0 ; Math # L& [25] MATHEMATICAL BOLD CAPITAL ALPHA..MATHEMATICAL BOLD CAPITAL OMEGA +1D6C1 ; Math # Sm MATHEMATICAL BOLD NABLA +1D6C2..1D6DA ; Math # L& [25] MATHEMATICAL BOLD SMALL ALPHA..MATHEMATICAL BOLD SMALL OMEGA +1D6DB ; Math # Sm MATHEMATICAL BOLD PARTIAL DIFFERENTIAL +1D6DC..1D6FA ; Math # L& [31] MATHEMATICAL BOLD EPSILON SYMBOL..MATHEMATICAL ITALIC CAPITAL OMEGA +1D6FB ; Math # Sm MATHEMATICAL ITALIC NABLA +1D6FC..1D714 ; Math # L& [25] MATHEMATICAL ITALIC SMALL ALPHA..MATHEMATICAL ITALIC SMALL OMEGA +1D715 ; Math # Sm MATHEMATICAL ITALIC PARTIAL DIFFERENTIAL +1D716..1D734 ; Math # L& [31] MATHEMATICAL ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD ITALIC CAPITAL OMEGA +1D735 ; Math # Sm MATHEMATICAL BOLD ITALIC NABLA +1D736..1D74E ; Math # L& [25] MATHEMATICAL BOLD ITALIC SMALL 
ALPHA..MATHEMATICAL BOLD ITALIC SMALL OMEGA +1D74F ; Math # Sm MATHEMATICAL BOLD ITALIC PARTIAL DIFFERENTIAL +1D750..1D76E ; Math # L& [31] MATHEMATICAL BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD CAPITAL OMEGA +1D76F ; Math # Sm MATHEMATICAL SANS-SERIF BOLD NABLA +1D770..1D788 ; Math # L& [25] MATHEMATICAL SANS-SERIF BOLD SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD SMALL OMEGA +1D789 ; Math # Sm MATHEMATICAL SANS-SERIF BOLD PARTIAL DIFFERENTIAL +1D78A..1D7A8 ; Math # L& [31] MATHEMATICAL SANS-SERIF BOLD EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMEGA +1D7A9 ; Math # Sm MATHEMATICAL SANS-SERIF BOLD ITALIC NABLA +1D7AA..1D7C2 ; Math # L& [25] MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL OMEGA +1D7C3 ; Math # Sm MATHEMATICAL SANS-SERIF BOLD ITALIC PARTIAL DIFFERENTIAL +1D7C4..1D7CB ; Math # L& [8] MATHEMATICAL SANS-SERIF BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD SMALL DIGAMMA +1D7CE..1D7FF ; Math # Nd [50] MATHEMATICAL BOLD DIGIT ZERO..MATHEMATICAL MONOSPACE DIGIT NINE + +# Total code points: 2161 + +# ================================================ + +# Derived Property: Alphabetic +# Generated from: Lu+Ll+Lt+Lm+Lo+Nl + Other_Alphabetic + +0041..005A ; Alphabetic # L& [26] LATIN CAPITAL LETTER A..LATIN CAPITAL LETTER Z +0061..007A ; Alphabetic # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z +00AA ; Alphabetic # L& FEMININE ORDINAL INDICATOR +00B5 ; Alphabetic # L& MICRO SIGN +00BA ; Alphabetic # L& MASCULINE ORDINAL INDICATOR +00C0..00D6 ; Alphabetic # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS +00D8..00F6 ; Alphabetic # L& [31] LATIN CAPITAL LETTER O WITH STROKE..LATIN SMALL LETTER O WITH DIAERESIS +00F8..01BA ; Alphabetic # L& [195] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER EZH WITH TAIL +01BB ; Alphabetic # Lo LATIN LETTER TWO WITH STROKE +01BC..01BF ; Alphabetic # L& [4] LATIN CAPITAL LETTER TONE FIVE..LATIN LETTER WYNN 
+01C0..01C3 ; Alphabetic # Lo [4] LATIN LETTER DENTAL CLICK..LATIN LETTER RETROFLEX CLICK +01C4..0293 ; Alphabetic # L& [208] LATIN CAPITAL LETTER DZ WITH CARON..LATIN SMALL LETTER EZH WITH CURL +0294 ; Alphabetic # Lo LATIN LETTER GLOTTAL STOP +0295..02AF ; Alphabetic # L& [27] LATIN LETTER PHARYNGEAL VOICED FRICATIVE..LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL +02B0..02C1 ; Alphabetic # Lm [18] MODIFIER LETTER SMALL H..MODIFIER LETTER REVERSED GLOTTAL STOP +02C6..02D1 ; Alphabetic # Lm [12] MODIFIER LETTER CIRCUMFLEX ACCENT..MODIFIER LETTER HALF TRIANGULAR COLON +02E0..02E4 ; Alphabetic # Lm [5] MODIFIER LETTER SMALL GAMMA..MODIFIER LETTER SMALL REVERSED GLOTTAL STOP +02EC ; Alphabetic # Lm MODIFIER LETTER VOICING +02EE ; Alphabetic # Lm MODIFIER LETTER DOUBLE APOSTROPHE +0345 ; Alphabetic # Mn COMBINING GREEK YPOGEGRAMMENI +0370..0373 ; Alphabetic # L& [4] GREEK CAPITAL LETTER HETA..GREEK SMALL LETTER ARCHAIC SAMPI +0374 ; Alphabetic # Lm GREEK NUMERAL SIGN +0376..0377 ; Alphabetic # L& [2] GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA..GREEK SMALL LETTER PAMPHYLIAN DIGAMMA +037A ; Alphabetic # Lm GREEK YPOGEGRAMMENI +037B..037D ; Alphabetic # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL +0386 ; Alphabetic # L& GREEK CAPITAL LETTER ALPHA WITH TONOS +0388..038A ; Alphabetic # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS +038C ; Alphabetic # L& GREEK CAPITAL LETTER OMICRON WITH TONOS +038E..03A1 ; Alphabetic # L& [20] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER RHO +03A3..03F5 ; Alphabetic # L& [83] GREEK CAPITAL LETTER SIGMA..GREEK LUNATE EPSILON SYMBOL +03F7..0481 ; Alphabetic # L& [139] GREEK CAPITAL LETTER SHO..CYRILLIC SMALL LETTER KOPPA +048A..0525 ; Alphabetic # L& [156] CYRILLIC CAPITAL LETTER SHORT I WITH TAIL..CYRILLIC SMALL LETTER PE WITH DESCENDER +0531..0556 ; Alphabetic # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH 
+0559 ; Alphabetic # Lm ARMENIAN MODIFIER LETTER LEFT HALF RING +0561..0587 ; Alphabetic # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN +05B0..05BD ; Alphabetic # Mn [14] HEBREW POINT SHEVA..HEBREW POINT METEG +05BF ; Alphabetic # Mn HEBREW POINT RAFE +05C1..05C2 ; Alphabetic # Mn [2] HEBREW POINT SHIN DOT..HEBREW POINT SIN DOT +05C4..05C5 ; Alphabetic # Mn [2] HEBREW MARK UPPER DOT..HEBREW MARK LOWER DOT +05C7 ; Alphabetic # Mn HEBREW POINT QAMATS QATAN +05D0..05EA ; Alphabetic # Lo [27] HEBREW LETTER ALEF..HEBREW LETTER TAV +05F0..05F2 ; Alphabetic # Lo [3] HEBREW LIGATURE YIDDISH DOUBLE VAV..HEBREW LIGATURE YIDDISH DOUBLE YOD +0610..061A ; Alphabetic # Mn [11] ARABIC SIGN SALLALLAHOU ALAYHE WASSALLAM..ARABIC SMALL KASRA +0621..063F ; Alphabetic # Lo [31] ARABIC LETTER HAMZA..ARABIC LETTER FARSI YEH WITH THREE DOTS ABOVE +0640 ; Alphabetic # Lm ARABIC TATWEEL +0641..064A ; Alphabetic # Lo [10] ARABIC LETTER FEH..ARABIC LETTER YEH +064B..0657 ; Alphabetic # Mn [13] ARABIC FATHATAN..ARABIC INVERTED DAMMA +0659..065E ; Alphabetic # Mn [6] ARABIC ZWARAKAY..ARABIC FATHA WITH TWO DOTS +066E..066F ; Alphabetic # Lo [2] ARABIC LETTER DOTLESS BEH..ARABIC LETTER DOTLESS QAF +0670 ; Alphabetic # Mn ARABIC LETTER SUPERSCRIPT ALEF +0671..06D3 ; Alphabetic # Lo [99] ARABIC LETTER ALEF WASLA..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE +06D5 ; Alphabetic # Lo ARABIC LETTER AE +06D6..06DC ; Alphabetic # Mn [7] ARABIC SMALL HIGH LIGATURE SAD WITH LAM WITH ALEF MAKSURA..ARABIC SMALL HIGH SEEN +06E1..06E4 ; Alphabetic # Mn [4] ARABIC SMALL HIGH DOTLESS HEAD OF KHAH..ARABIC SMALL HIGH MADDA +06E5..06E6 ; Alphabetic # Lm [2] ARABIC SMALL WAW..ARABIC SMALL YEH +06E7..06E8 ; Alphabetic # Mn [2] ARABIC SMALL HIGH YEH..ARABIC SMALL HIGH NOON +06ED ; Alphabetic # Mn ARABIC SMALL LOW MEEM +06EE..06EF ; Alphabetic # Lo [2] ARABIC LETTER DAL WITH INVERTED V..ARABIC LETTER REH WITH INVERTED V +06FA..06FC ; Alphabetic # Lo [3] ARABIC LETTER SHEEN WITH DOT BELOW..ARABIC 
LETTER GHAIN WITH DOT BELOW +06FF ; Alphabetic # Lo ARABIC LETTER HEH WITH INVERTED V +0710 ; Alphabetic # Lo SYRIAC LETTER ALAPH +0711 ; Alphabetic # Mn SYRIAC LETTER SUPERSCRIPT ALAPH +0712..072F ; Alphabetic # Lo [30] SYRIAC LETTER BETH..SYRIAC LETTER PERSIAN DHALATH +0730..073F ; Alphabetic # Mn [16] SYRIAC PTHAHA ABOVE..SYRIAC RWAHA +074D..07A5 ; Alphabetic # Lo [89] SYRIAC LETTER SOGDIAN ZHAIN..THAANA LETTER WAAVU +07A6..07B0 ; Alphabetic # Mn [11] THAANA ABAFILI..THAANA SUKUN +07B1 ; Alphabetic # Lo THAANA LETTER NAA +07CA..07EA ; Alphabetic # Lo [33] NKO LETTER A..NKO LETTER JONA RA +07F4..07F5 ; Alphabetic # Lm [2] NKO HIGH TONE APOSTROPHE..NKO LOW TONE APOSTROPHE +07FA ; Alphabetic # Lm NKO LAJANYALAN +0800..0815 ; Alphabetic # Lo [22] SAMARITAN LETTER ALAF..SAMARITAN LETTER TAAF +0816..0817 ; Alphabetic # Mn [2] SAMARITAN MARK IN..SAMARITAN MARK IN-ALAF +081A ; Alphabetic # Lm SAMARITAN MODIFIER LETTER EPENTHETIC YUT +081B..0823 ; Alphabetic # Mn [9] SAMARITAN MARK EPENTHETIC YUT..SAMARITAN VOWEL SIGN A +0824 ; Alphabetic # Lm SAMARITAN MODIFIER LETTER SHORT A +0825..0827 ; Alphabetic # Mn [3] SAMARITAN VOWEL SIGN SHORT A..SAMARITAN VOWEL SIGN U +0828 ; Alphabetic # Lm SAMARITAN MODIFIER LETTER I +0829..082C ; Alphabetic # Mn [4] SAMARITAN VOWEL SIGN LONG I..SAMARITAN VOWEL SIGN SUKUN +0900..0902 ; Alphabetic # Mn [3] DEVANAGARI SIGN INVERTED CANDRABINDU..DEVANAGARI SIGN ANUSVARA +0903 ; Alphabetic # Mc DEVANAGARI SIGN VISARGA +0904..0939 ; Alphabetic # Lo [54] DEVANAGARI LETTER SHORT A..DEVANAGARI LETTER HA +093D ; Alphabetic # Lo DEVANAGARI SIGN AVAGRAHA +093E..0940 ; Alphabetic # Mc [3] DEVANAGARI VOWEL SIGN AA..DEVANAGARI VOWEL SIGN II +0941..0948 ; Alphabetic # Mn [8] DEVANAGARI VOWEL SIGN U..DEVANAGARI VOWEL SIGN AI +0949..094C ; Alphabetic # Mc [4] DEVANAGARI VOWEL SIGN CANDRA O..DEVANAGARI VOWEL SIGN AU +094E ; Alphabetic # Mc DEVANAGARI VOWEL SIGN PRISHTHAMATRA E +0950 ; Alphabetic # Lo DEVANAGARI OM +0955 ; Alphabetic # Mn DEVANAGARI VOWEL SIGN 
CANDRA LONG E +0958..0961 ; Alphabetic # Lo [10] DEVANAGARI LETTER QA..DEVANAGARI LETTER VOCALIC LL +0962..0963 ; Alphabetic # Mn [2] DEVANAGARI VOWEL SIGN VOCALIC L..DEVANAGARI VOWEL SIGN VOCALIC LL +0971 ; Alphabetic # Lm DEVANAGARI SIGN HIGH SPACING DOT +0972 ; Alphabetic # Lo DEVANAGARI LETTER CANDRA A +0979..097F ; Alphabetic # Lo [7] DEVANAGARI LETTER ZHA..DEVANAGARI LETTER BBA +0981 ; Alphabetic # Mn BENGALI SIGN CANDRABINDU +0982..0983 ; Alphabetic # Mc [2] BENGALI SIGN ANUSVARA..BENGALI SIGN VISARGA +0985..098C ; Alphabetic # Lo [8] BENGALI LETTER A..BENGALI LETTER VOCALIC L +098F..0990 ; Alphabetic # Lo [2] BENGALI LETTER E..BENGALI LETTER AI +0993..09A8 ; Alphabetic # Lo [22] BENGALI LETTER O..BENGALI LETTER NA +09AA..09B0 ; Alphabetic # Lo [7] BENGALI LETTER PA..BENGALI LETTER RA +09B2 ; Alphabetic # Lo BENGALI LETTER LA +09B6..09B9 ; Alphabetic # Lo [4] BENGALI LETTER SHA..BENGALI LETTER HA +09BD ; Alphabetic # Lo BENGALI SIGN AVAGRAHA +09BE..09C0 ; Alphabetic # Mc [3] BENGALI VOWEL SIGN AA..BENGALI VOWEL SIGN II +09C1..09C4 ; Alphabetic # Mn [4] BENGALI VOWEL SIGN U..BENGALI VOWEL SIGN VOCALIC RR +09C7..09C8 ; Alphabetic # Mc [2] BENGALI VOWEL SIGN E..BENGALI VOWEL SIGN AI +09CB..09CC ; Alphabetic # Mc [2] BENGALI VOWEL SIGN O..BENGALI VOWEL SIGN AU +09CE ; Alphabetic # Lo BENGALI LETTER KHANDA TA +09D7 ; Alphabetic # Mc BENGALI AU LENGTH MARK +09DC..09DD ; Alphabetic # Lo [2] BENGALI LETTER RRA..BENGALI LETTER RHA +09DF..09E1 ; Alphabetic # Lo [3] BENGALI LETTER YYA..BENGALI LETTER VOCALIC LL +09E2..09E3 ; Alphabetic # Mn [2] BENGALI VOWEL SIGN VOCALIC L..BENGALI VOWEL SIGN VOCALIC LL +09F0..09F1 ; Alphabetic # Lo [2] BENGALI LETTER RA WITH MIDDLE DIAGONAL..BENGALI LETTER RA WITH LOWER DIAGONAL +0A01..0A02 ; Alphabetic # Mn [2] GURMUKHI SIGN ADAK BINDI..GURMUKHI SIGN BINDI +0A03 ; Alphabetic # Mc GURMUKHI SIGN VISARGA +0A05..0A0A ; Alphabetic # Lo [6] GURMUKHI LETTER A..GURMUKHI LETTER UU +0A0F..0A10 ; Alphabetic # Lo [2] GURMUKHI LETTER EE..GURMUKHI 
LETTER AI +0A13..0A28 ; Alphabetic # Lo [22] GURMUKHI LETTER OO..GURMUKHI LETTER NA +0A2A..0A30 ; Alphabetic # Lo [7] GURMUKHI LETTER PA..GURMUKHI LETTER RA +0A32..0A33 ; Alphabetic # Lo [2] GURMUKHI LETTER LA..GURMUKHI LETTER LLA +0A35..0A36 ; Alphabetic # Lo [2] GURMUKHI LETTER VA..GURMUKHI LETTER SHA +0A38..0A39 ; Alphabetic # Lo [2] GURMUKHI LETTER SA..GURMUKHI LETTER HA +0A3E..0A40 ; Alphabetic # Mc [3] GURMUKHI VOWEL SIGN AA..GURMUKHI VOWEL SIGN II +0A41..0A42 ; Alphabetic # Mn [2] GURMUKHI VOWEL SIGN U..GURMUKHI VOWEL SIGN UU +0A47..0A48 ; Alphabetic # Mn [2] GURMUKHI VOWEL SIGN EE..GURMUKHI VOWEL SIGN AI +0A4B..0A4C ; Alphabetic # Mn [2] GURMUKHI VOWEL SIGN OO..GURMUKHI VOWEL SIGN AU +0A51 ; Alphabetic # Mn GURMUKHI SIGN UDAAT +0A59..0A5C ; Alphabetic # Lo [4] GURMUKHI LETTER KHHA..GURMUKHI LETTER RRA +0A5E ; Alphabetic # Lo GURMUKHI LETTER FA +0A70..0A71 ; Alphabetic # Mn [2] GURMUKHI TIPPI..GURMUKHI ADDAK +0A72..0A74 ; Alphabetic # Lo [3] GURMUKHI IRI..GURMUKHI EK ONKAR +0A75 ; Alphabetic # Mn GURMUKHI SIGN YAKASH +0A81..0A82 ; Alphabetic # Mn [2] GUJARATI SIGN CANDRABINDU..GUJARATI SIGN ANUSVARA +0A83 ; Alphabetic # Mc GUJARATI SIGN VISARGA +0A85..0A8D ; Alphabetic # Lo [9] GUJARATI LETTER A..GUJARATI VOWEL CANDRA E +0A8F..0A91 ; Alphabetic # Lo [3] GUJARATI LETTER E..GUJARATI VOWEL CANDRA O +0A93..0AA8 ; Alphabetic # Lo [22] GUJARATI LETTER O..GUJARATI LETTER NA +0AAA..0AB0 ; Alphabetic # Lo [7] GUJARATI LETTER PA..GUJARATI LETTER RA +0AB2..0AB3 ; Alphabetic # Lo [2] GUJARATI LETTER LA..GUJARATI LETTER LLA +0AB5..0AB9 ; Alphabetic # Lo [5] GUJARATI LETTER VA..GUJARATI LETTER HA +0ABD ; Alphabetic # Lo GUJARATI SIGN AVAGRAHA +0ABE..0AC0 ; Alphabetic # Mc [3] GUJARATI VOWEL SIGN AA..GUJARATI VOWEL SIGN II +0AC1..0AC5 ; Alphabetic # Mn [5] GUJARATI VOWEL SIGN U..GUJARATI VOWEL SIGN CANDRA E +0AC7..0AC8 ; Alphabetic # Mn [2] GUJARATI VOWEL SIGN E..GUJARATI VOWEL SIGN AI +0AC9 ; Alphabetic # Mc GUJARATI VOWEL SIGN CANDRA O +0ACB..0ACC ; Alphabetic # Mc [2] 
GUJARATI VOWEL SIGN O..GUJARATI VOWEL SIGN AU +0AD0 ; Alphabetic # Lo GUJARATI OM +0AE0..0AE1 ; Alphabetic # Lo [2] GUJARATI LETTER VOCALIC RR..GUJARATI LETTER VOCALIC LL +0AE2..0AE3 ; Alphabetic # Mn [2] GUJARATI VOWEL SIGN VOCALIC L..GUJARATI VOWEL SIGN VOCALIC LL +0B01 ; Alphabetic # Mn ORIYA SIGN CANDRABINDU +0B02..0B03 ; Alphabetic # Mc [2] ORIYA SIGN ANUSVARA..ORIYA SIGN VISARGA +0B05..0B0C ; Alphabetic # Lo [8] ORIYA LETTER A..ORIYA LETTER VOCALIC L +0B0F..0B10 ; Alphabetic # Lo [2] ORIYA LETTER E..ORIYA LETTER AI +0B13..0B28 ; Alphabetic # Lo [22] ORIYA LETTER O..ORIYA LETTER NA +0B2A..0B30 ; Alphabetic # Lo [7] ORIYA LETTER PA..ORIYA LETTER RA +0B32..0B33 ; Alphabetic # Lo [2] ORIYA LETTER LA..ORIYA LETTER LLA +0B35..0B39 ; Alphabetic # Lo [5] ORIYA LETTER VA..ORIYA LETTER HA +0B3D ; Alphabetic # Lo ORIYA SIGN AVAGRAHA +0B3E ; Alphabetic # Mc ORIYA VOWEL SIGN AA +0B3F ; Alphabetic # Mn ORIYA VOWEL SIGN I +0B40 ; Alphabetic # Mc ORIYA VOWEL SIGN II +0B41..0B44 ; Alphabetic # Mn [4] ORIYA VOWEL SIGN U..ORIYA VOWEL SIGN VOCALIC RR +0B47..0B48 ; Alphabetic # Mc [2] ORIYA VOWEL SIGN E..ORIYA VOWEL SIGN AI +0B4B..0B4C ; Alphabetic # Mc [2] ORIYA VOWEL SIGN O..ORIYA VOWEL SIGN AU +0B56 ; Alphabetic # Mn ORIYA AI LENGTH MARK +0B57 ; Alphabetic # Mc ORIYA AU LENGTH MARK +0B5C..0B5D ; Alphabetic # Lo [2] ORIYA LETTER RRA..ORIYA LETTER RHA +0B5F..0B61 ; Alphabetic # Lo [3] ORIYA LETTER YYA..ORIYA LETTER VOCALIC LL +0B62..0B63 ; Alphabetic # Mn [2] ORIYA VOWEL SIGN VOCALIC L..ORIYA VOWEL SIGN VOCALIC LL +0B71 ; Alphabetic # Lo ORIYA LETTER WA +0B82 ; Alphabetic # Mn TAMIL SIGN ANUSVARA +0B83 ; Alphabetic # Lo TAMIL SIGN VISARGA +0B85..0B8A ; Alphabetic # Lo [6] TAMIL LETTER A..TAMIL LETTER UU +0B8E..0B90 ; Alphabetic # Lo [3] TAMIL LETTER E..TAMIL LETTER AI +0B92..0B95 ; Alphabetic # Lo [4] TAMIL LETTER O..TAMIL LETTER KA +0B99..0B9A ; Alphabetic # Lo [2] TAMIL LETTER NGA..TAMIL LETTER CA +0B9C ; Alphabetic # Lo TAMIL LETTER JA +0B9E..0B9F ; Alphabetic # Lo [2] TAMIL 
LETTER NYA..TAMIL LETTER TTA +0BA3..0BA4 ; Alphabetic # Lo [2] TAMIL LETTER NNA..TAMIL LETTER TA +0BA8..0BAA ; Alphabetic # Lo [3] TAMIL LETTER NA..TAMIL LETTER PA +0BAE..0BB9 ; Alphabetic # Lo [12] TAMIL LETTER MA..TAMIL LETTER HA +0BBE..0BBF ; Alphabetic # Mc [2] TAMIL VOWEL SIGN AA..TAMIL VOWEL SIGN I +0BC0 ; Alphabetic # Mn TAMIL VOWEL SIGN II +0BC1..0BC2 ; Alphabetic # Mc [2] TAMIL VOWEL SIGN U..TAMIL VOWEL SIGN UU +0BC6..0BC8 ; Alphabetic # Mc [3] TAMIL VOWEL SIGN E..TAMIL VOWEL SIGN AI +0BCA..0BCC ; Alphabetic # Mc [3] TAMIL VOWEL SIGN O..TAMIL VOWEL SIGN AU +0BD0 ; Alphabetic # Lo TAMIL OM +0BD7 ; Alphabetic # Mc TAMIL AU LENGTH MARK +0C01..0C03 ; Alphabetic # Mc [3] TELUGU SIGN CANDRABINDU..TELUGU SIGN VISARGA +0C05..0C0C ; Alphabetic # Lo [8] TELUGU LETTER A..TELUGU LETTER VOCALIC L +0C0E..0C10 ; Alphabetic # Lo [3] TELUGU LETTER E..TELUGU LETTER AI +0C12..0C28 ; Alphabetic # Lo [23] TELUGU LETTER O..TELUGU LETTER NA +0C2A..0C33 ; Alphabetic # Lo [10] TELUGU LETTER PA..TELUGU LETTER LLA +0C35..0C39 ; Alphabetic # Lo [5] TELUGU LETTER VA..TELUGU LETTER HA +0C3D ; Alphabetic # Lo TELUGU SIGN AVAGRAHA +0C3E..0C40 ; Alphabetic # Mn [3] TELUGU VOWEL SIGN AA..TELUGU VOWEL SIGN II +0C41..0C44 ; Alphabetic # Mc [4] TELUGU VOWEL SIGN U..TELUGU VOWEL SIGN VOCALIC RR +0C46..0C48 ; Alphabetic # Mn [3] TELUGU VOWEL SIGN E..TELUGU VOWEL SIGN AI +0C4A..0C4C ; Alphabetic # Mn [3] TELUGU VOWEL SIGN O..TELUGU VOWEL SIGN AU +0C55..0C56 ; Alphabetic # Mn [2] TELUGU LENGTH MARK..TELUGU AI LENGTH MARK +0C58..0C59 ; Alphabetic # Lo [2] TELUGU LETTER TSA..TELUGU LETTER DZA +0C60..0C61 ; Alphabetic # Lo [2] TELUGU LETTER VOCALIC RR..TELUGU LETTER VOCALIC LL +0C62..0C63 ; Alphabetic # Mn [2] TELUGU VOWEL SIGN VOCALIC L..TELUGU VOWEL SIGN VOCALIC LL +0C82..0C83 ; Alphabetic # Mc [2] KANNADA SIGN ANUSVARA..KANNADA SIGN VISARGA +0C85..0C8C ; Alphabetic # Lo [8] KANNADA LETTER A..KANNADA LETTER VOCALIC L +0C8E..0C90 ; Alphabetic # Lo [3] KANNADA LETTER E..KANNADA LETTER AI +0C92..0CA8 
; Alphabetic # Lo [23] KANNADA LETTER O..KANNADA LETTER NA +0CAA..0CB3 ; Alphabetic # Lo [10] KANNADA LETTER PA..KANNADA LETTER LLA +0CB5..0CB9 ; Alphabetic # Lo [5] KANNADA LETTER VA..KANNADA LETTER HA +0CBD ; Alphabetic # Lo KANNADA SIGN AVAGRAHA +0CBE ; Alphabetic # Mc KANNADA VOWEL SIGN AA +0CBF ; Alphabetic # Mn KANNADA VOWEL SIGN I +0CC0..0CC4 ; Alphabetic # Mc [5] KANNADA VOWEL SIGN II..KANNADA VOWEL SIGN VOCALIC RR +0CC6 ; Alphabetic # Mn KANNADA VOWEL SIGN E +0CC7..0CC8 ; Alphabetic # Mc [2] KANNADA VOWEL SIGN EE..KANNADA VOWEL SIGN AI +0CCA..0CCB ; Alphabetic # Mc [2] KANNADA VOWEL SIGN O..KANNADA VOWEL SIGN OO +0CCC ; Alphabetic # Mn KANNADA VOWEL SIGN AU +0CD5..0CD6 ; Alphabetic # Mc [2] KANNADA LENGTH MARK..KANNADA AI LENGTH MARK +0CDE ; Alphabetic # Lo KANNADA LETTER FA +0CE0..0CE1 ; Alphabetic # Lo [2] KANNADA LETTER VOCALIC RR..KANNADA LETTER VOCALIC LL +0CE2..0CE3 ; Alphabetic # Mn [2] KANNADA VOWEL SIGN VOCALIC L..KANNADA VOWEL SIGN VOCALIC LL +0D02..0D03 ; Alphabetic # Mc [2] MALAYALAM SIGN ANUSVARA..MALAYALAM SIGN VISARGA +0D05..0D0C ; Alphabetic # Lo [8] MALAYALAM LETTER A..MALAYALAM LETTER VOCALIC L +0D0E..0D10 ; Alphabetic # Lo [3] MALAYALAM LETTER E..MALAYALAM LETTER AI +0D12..0D28 ; Alphabetic # Lo [23] MALAYALAM LETTER O..MALAYALAM LETTER NA +0D2A..0D39 ; Alphabetic # Lo [16] MALAYALAM LETTER PA..MALAYALAM LETTER HA +0D3D ; Alphabetic # Lo MALAYALAM SIGN AVAGRAHA +0D3E..0D40 ; Alphabetic # Mc [3] MALAYALAM VOWEL SIGN AA..MALAYALAM VOWEL SIGN II +0D41..0D44 ; Alphabetic # Mn [4] MALAYALAM VOWEL SIGN U..MALAYALAM VOWEL SIGN VOCALIC RR +0D46..0D48 ; Alphabetic # Mc [3] MALAYALAM VOWEL SIGN E..MALAYALAM VOWEL SIGN AI +0D4A..0D4C ; Alphabetic # Mc [3] MALAYALAM VOWEL SIGN O..MALAYALAM VOWEL SIGN AU +0D57 ; Alphabetic # Mc MALAYALAM AU LENGTH MARK +0D60..0D61 ; Alphabetic # Lo [2] MALAYALAM LETTER VOCALIC RR..MALAYALAM LETTER VOCALIC LL +0D62..0D63 ; Alphabetic # Mn [2] MALAYALAM VOWEL SIGN VOCALIC L..MALAYALAM VOWEL SIGN VOCALIC LL +0D7A..0D7F 
; Alphabetic # Lo [6] MALAYALAM LETTER CHILLU NN..MALAYALAM LETTER CHILLU K +0D82..0D83 ; Alphabetic # Mc [2] SINHALA SIGN ANUSVARAYA..SINHALA SIGN VISARGAYA +0D85..0D96 ; Alphabetic # Lo [18] SINHALA LETTER AYANNA..SINHALA LETTER AUYANNA +0D9A..0DB1 ; Alphabetic # Lo [24] SINHALA LETTER ALPAPRAANA KAYANNA..SINHALA LETTER DANTAJA NAYANNA +0DB3..0DBB ; Alphabetic # Lo [9] SINHALA LETTER SANYAKA DAYANNA..SINHALA LETTER RAYANNA +0DBD ; Alphabetic # Lo SINHALA LETTER DANTAJA LAYANNA +0DC0..0DC6 ; Alphabetic # Lo [7] SINHALA LETTER VAYANNA..SINHALA LETTER FAYANNA +0DCF..0DD1 ; Alphabetic # Mc [3] SINHALA VOWEL SIGN AELA-PILLA..SINHALA VOWEL SIGN DIGA AEDA-PILLA +0DD2..0DD4 ; Alphabetic # Mn [3] SINHALA VOWEL SIGN KETTI IS-PILLA..SINHALA VOWEL SIGN KETTI PAA-PILLA +0DD6 ; Alphabetic # Mn SINHALA VOWEL SIGN DIGA PAA-PILLA +0DD8..0DDF ; Alphabetic # Mc [8] SINHALA VOWEL SIGN GAETTA-PILLA..SINHALA VOWEL SIGN GAYANUKITTA +0DF2..0DF3 ; Alphabetic # Mc [2] SINHALA VOWEL SIGN DIGA GAETTA-PILLA..SINHALA VOWEL SIGN DIGA GAYANUKITTA +0E01..0E30 ; Alphabetic # Lo [48] THAI CHARACTER KO KAI..THAI CHARACTER SARA A +0E31 ; Alphabetic # Mn THAI CHARACTER MAI HAN-AKAT +0E32..0E33 ; Alphabetic # Lo [2] THAI CHARACTER SARA AA..THAI CHARACTER SARA AM +0E34..0E3A ; Alphabetic # Mn [7] THAI CHARACTER SARA I..THAI CHARACTER PHINTHU +0E40..0E45 ; Alphabetic # Lo [6] THAI CHARACTER SARA E..THAI CHARACTER LAKKHANGYAO +0E46 ; Alphabetic # Lm THAI CHARACTER MAIYAMOK +0E4D ; Alphabetic # Mn THAI CHARACTER NIKHAHIT +0E81..0E82 ; Alphabetic # Lo [2] LAO LETTER KO..LAO LETTER KHO SUNG +0E84 ; Alphabetic # Lo LAO LETTER KHO TAM +0E87..0E88 ; Alphabetic # Lo [2] LAO LETTER NGO..LAO LETTER CO +0E8A ; Alphabetic # Lo LAO LETTER SO TAM +0E8D ; Alphabetic # Lo LAO LETTER NYO +0E94..0E97 ; Alphabetic # Lo [4] LAO LETTER DO..LAO LETTER THO TAM +0E99..0E9F ; Alphabetic # Lo [7] LAO LETTER NO..LAO LETTER FO SUNG +0EA1..0EA3 ; Alphabetic # Lo [3] LAO LETTER MO..LAO LETTER LO LING +0EA5 ; Alphabetic # Lo LAO 
LETTER LO LOOT +0EA7 ; Alphabetic # Lo LAO LETTER WO +0EAA..0EAB ; Alphabetic # Lo [2] LAO LETTER SO SUNG..LAO LETTER HO SUNG +0EAD..0EB0 ; Alphabetic # Lo [4] LAO LETTER O..LAO VOWEL SIGN A +0EB1 ; Alphabetic # Mn LAO VOWEL SIGN MAI KAN +0EB2..0EB3 ; Alphabetic # Lo [2] LAO VOWEL SIGN AA..LAO VOWEL SIGN AM +0EB4..0EB9 ; Alphabetic # Mn [6] LAO VOWEL SIGN I..LAO VOWEL SIGN UU +0EBB..0EBC ; Alphabetic # Mn [2] LAO VOWEL SIGN MAI KON..LAO SEMIVOWEL SIGN LO +0EBD ; Alphabetic # Lo LAO SEMIVOWEL SIGN NYO +0EC0..0EC4 ; Alphabetic # Lo [5] LAO VOWEL SIGN E..LAO VOWEL SIGN AI +0EC6 ; Alphabetic # Lm LAO KO LA +0ECD ; Alphabetic # Mn LAO NIGGAHITA +0EDC..0EDD ; Alphabetic # Lo [2] LAO HO NO..LAO HO MO +0F00 ; Alphabetic # Lo TIBETAN SYLLABLE OM +0F40..0F47 ; Alphabetic # Lo [8] TIBETAN LETTER KA..TIBETAN LETTER JA +0F49..0F6C ; Alphabetic # Lo [36] TIBETAN LETTER NYA..TIBETAN LETTER RRA +0F71..0F7E ; Alphabetic # Mn [14] TIBETAN VOWEL SIGN AA..TIBETAN SIGN RJES SU NGA RO +0F7F ; Alphabetic # Mc TIBETAN SIGN RNAM BCAD +0F80..0F81 ; Alphabetic # Mn [2] TIBETAN VOWEL SIGN REVERSED I..TIBETAN VOWEL SIGN REVERSED II +0F88..0F8B ; Alphabetic # Lo [4] TIBETAN SIGN LCE TSA CAN..TIBETAN SIGN GRU MED RGYINGS +0F90..0F97 ; Alphabetic # Mn [8] TIBETAN SUBJOINED LETTER KA..TIBETAN SUBJOINED LETTER JA +0F99..0FBC ; Alphabetic # Mn [36] TIBETAN SUBJOINED LETTER NYA..TIBETAN SUBJOINED LETTER FIXED-FORM RA +1000..102A ; Alphabetic # Lo [43] MYANMAR LETTER KA..MYANMAR LETTER AU +102B..102C ; Alphabetic # Mc [2] MYANMAR VOWEL SIGN TALL AA..MYANMAR VOWEL SIGN AA +102D..1030 ; Alphabetic # Mn [4] MYANMAR VOWEL SIGN I..MYANMAR VOWEL SIGN UU +1031 ; Alphabetic # Mc MYANMAR VOWEL SIGN E +1032..1036 ; Alphabetic # Mn [5] MYANMAR VOWEL SIGN AI..MYANMAR SIGN ANUSVARA +1038 ; Alphabetic # Mc MYANMAR SIGN VISARGA +103B..103C ; Alphabetic # Mc [2] MYANMAR CONSONANT SIGN MEDIAL YA..MYANMAR CONSONANT SIGN MEDIAL RA +103D..103E ; Alphabetic # Mn [2] MYANMAR CONSONANT SIGN MEDIAL WA..MYANMAR CONSONANT SIGN 
MEDIAL HA +103F ; Alphabetic # Lo MYANMAR LETTER GREAT SA +1050..1055 ; Alphabetic # Lo [6] MYANMAR LETTER SHA..MYANMAR LETTER VOCALIC LL +1056..1057 ; Alphabetic # Mc [2] MYANMAR VOWEL SIGN VOCALIC R..MYANMAR VOWEL SIGN VOCALIC RR +1058..1059 ; Alphabetic # Mn [2] MYANMAR VOWEL SIGN VOCALIC L..MYANMAR VOWEL SIGN VOCALIC LL +105A..105D ; Alphabetic # Lo [4] MYANMAR LETTER MON NGA..MYANMAR LETTER MON BBE +105E..1060 ; Alphabetic # Mn [3] MYANMAR CONSONANT SIGN MON MEDIAL NA..MYANMAR CONSONANT SIGN MON MEDIAL LA +1061 ; Alphabetic # Lo MYANMAR LETTER SGAW KAREN SHA +1062 ; Alphabetic # Mc MYANMAR VOWEL SIGN SGAW KAREN EU +1065..1066 ; Alphabetic # Lo [2] MYANMAR LETTER WESTERN PWO KAREN THA..MYANMAR LETTER WESTERN PWO KAREN PWA +1067..1068 ; Alphabetic # Mc [2] MYANMAR VOWEL SIGN WESTERN PWO KAREN EU..MYANMAR VOWEL SIGN WESTERN PWO KAREN UE +106E..1070 ; Alphabetic # Lo [3] MYANMAR LETTER EASTERN PWO KAREN NNA..MYANMAR LETTER EASTERN PWO KAREN GHWA +1071..1074 ; Alphabetic # Mn [4] MYANMAR VOWEL SIGN GEBA KAREN I..MYANMAR VOWEL SIGN KAYAH EE +1075..1081 ; Alphabetic # Lo [13] MYANMAR LETTER SHAN KA..MYANMAR LETTER SHAN HA +1082 ; Alphabetic # Mn MYANMAR CONSONANT SIGN SHAN MEDIAL WA +1083..1084 ; Alphabetic # Mc [2] MYANMAR VOWEL SIGN SHAN AA..MYANMAR VOWEL SIGN SHAN E +1085..1086 ; Alphabetic # Mn [2] MYANMAR VOWEL SIGN SHAN E ABOVE..MYANMAR VOWEL SIGN SHAN FINAL Y +108E ; Alphabetic # Lo MYANMAR LETTER RUMAI PALAUNG FA +109C ; Alphabetic # Mc MYANMAR VOWEL SIGN AITON A +109D ; Alphabetic # Mn MYANMAR VOWEL SIGN AITON AI +10A0..10C5 ; Alphabetic # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE +10D0..10FA ; Alphabetic # Lo [43] GEORGIAN LETTER AN..GEORGIAN LETTER AIN +10FC ; Alphabetic # Lm MODIFIER LETTER GEORGIAN NAR +1100..1248 ; Alphabetic # Lo [329] HANGUL CHOSEONG KIYEOK..ETHIOPIC SYLLABLE QWA +124A..124D ; Alphabetic # Lo [4] ETHIOPIC SYLLABLE QWI..ETHIOPIC SYLLABLE QWE +1250..1256 ; Alphabetic # Lo [7] ETHIOPIC SYLLABLE QHA..ETHIOPIC 
SYLLABLE QHO +1258 ; Alphabetic # Lo ETHIOPIC SYLLABLE QHWA +125A..125D ; Alphabetic # Lo [4] ETHIOPIC SYLLABLE QHWI..ETHIOPIC SYLLABLE QHWE +1260..1288 ; Alphabetic # Lo [41] ETHIOPIC SYLLABLE BA..ETHIOPIC SYLLABLE XWA +128A..128D ; Alphabetic # Lo [4] ETHIOPIC SYLLABLE XWI..ETHIOPIC SYLLABLE XWE +1290..12B0 ; Alphabetic # Lo [33] ETHIOPIC SYLLABLE NA..ETHIOPIC SYLLABLE KWA +12B2..12B5 ; Alphabetic # Lo [4] ETHIOPIC SYLLABLE KWI..ETHIOPIC SYLLABLE KWE +12B8..12BE ; Alphabetic # Lo [7] ETHIOPIC SYLLABLE KXA..ETHIOPIC SYLLABLE KXO +12C0 ; Alphabetic # Lo ETHIOPIC SYLLABLE KXWA +12C2..12C5 ; Alphabetic # Lo [4] ETHIOPIC SYLLABLE KXWI..ETHIOPIC SYLLABLE KXWE +12C8..12D6 ; Alphabetic # Lo [15] ETHIOPIC SYLLABLE WA..ETHIOPIC SYLLABLE PHARYNGEAL O +12D8..1310 ; Alphabetic # Lo [57] ETHIOPIC SYLLABLE ZA..ETHIOPIC SYLLABLE GWA +1312..1315 ; Alphabetic # Lo [4] ETHIOPIC SYLLABLE GWI..ETHIOPIC SYLLABLE GWE +1318..135A ; Alphabetic # Lo [67] ETHIOPIC SYLLABLE GGA..ETHIOPIC SYLLABLE FYA +135F ; Alphabetic # Mn ETHIOPIC COMBINING GEMINATION MARK +1380..138F ; Alphabetic # Lo [16] ETHIOPIC SYLLABLE SEBATBEIT MWA..ETHIOPIC SYLLABLE PWE +13A0..13F4 ; Alphabetic # Lo [85] CHEROKEE LETTER A..CHEROKEE LETTER YV +1401..166C ; Alphabetic # Lo [620] CANADIAN SYLLABICS E..CANADIAN SYLLABICS CARRIER TTSA +166F..167F ; Alphabetic # Lo [17] CANADIAN SYLLABICS QAI..CANADIAN SYLLABICS BLACKFOOT W +1681..169A ; Alphabetic # Lo [26] OGHAM LETTER BEITH..OGHAM LETTER PEITH +16A0..16EA ; Alphabetic # Lo [75] RUNIC LETTER FEHU FEOH FE F..RUNIC LETTER X +16EE..16F0 ; Alphabetic # Nl [3] RUNIC ARLAUG SYMBOL..RUNIC BELGTHOR SYMBOL +1700..170C ; Alphabetic # Lo [13] TAGALOG LETTER A..TAGALOG LETTER YA +170E..1711 ; Alphabetic # Lo [4] TAGALOG LETTER LA..TAGALOG LETTER HA +1712..1713 ; Alphabetic # Mn [2] TAGALOG VOWEL SIGN I..TAGALOG VOWEL SIGN U +1720..1731 ; Alphabetic # Lo [18] HANUNOO LETTER A..HANUNOO LETTER HA +1732..1733 ; Alphabetic # Mn [2] HANUNOO VOWEL SIGN I..HANUNOO VOWEL SIGN U 
+1740..1751 ; Alphabetic # Lo [18] BUHID LETTER A..BUHID LETTER HA +1752..1753 ; Alphabetic # Mn [2] BUHID VOWEL SIGN I..BUHID VOWEL SIGN U +1760..176C ; Alphabetic # Lo [13] TAGBANWA LETTER A..TAGBANWA LETTER YA +176E..1770 ; Alphabetic # Lo [3] TAGBANWA LETTER LA..TAGBANWA LETTER SA +1772..1773 ; Alphabetic # Mn [2] TAGBANWA VOWEL SIGN I..TAGBANWA VOWEL SIGN U +1780..17B3 ; Alphabetic # Lo [52] KHMER LETTER KA..KHMER INDEPENDENT VOWEL QAU +17B6 ; Alphabetic # Mc KHMER VOWEL SIGN AA +17B7..17BD ; Alphabetic # Mn [7] KHMER VOWEL SIGN I..KHMER VOWEL SIGN UA +17BE..17C5 ; Alphabetic # Mc [8] KHMER VOWEL SIGN OE..KHMER VOWEL SIGN AU +17C6 ; Alphabetic # Mn KHMER SIGN NIKAHIT +17C7..17C8 ; Alphabetic # Mc [2] KHMER SIGN REAHMUK..KHMER SIGN YUUKALEAPINTU +17D7 ; Alphabetic # Lm KHMER SIGN LEK TOO +17DC ; Alphabetic # Lo KHMER SIGN AVAKRAHASANYA +1820..1842 ; Alphabetic # Lo [35] MONGOLIAN LETTER A..MONGOLIAN LETTER CHI +1843 ; Alphabetic # Lm MONGOLIAN LETTER TODO LONG VOWEL SIGN +1844..1877 ; Alphabetic # Lo [52] MONGOLIAN LETTER TODO E..MONGOLIAN LETTER MANCHU ZHA +1880..18A8 ; Alphabetic # Lo [41] MONGOLIAN LETTER ALI GALI ANUSVARA ONE..MONGOLIAN LETTER MANCHU ALI GALI BHA +18A9 ; Alphabetic # Mn MONGOLIAN LETTER ALI GALI DAGALGA +18AA ; Alphabetic # Lo MONGOLIAN LETTER MANCHU ALI GALI LHA +18B0..18F5 ; Alphabetic # Lo [70] CANADIAN SYLLABICS OY..CANADIAN SYLLABICS CARRIER DENTAL S +1900..191C ; Alphabetic # Lo [29] LIMBU VOWEL-CARRIER LETTER..LIMBU LETTER HA +1920..1922 ; Alphabetic # Mn [3] LIMBU VOWEL SIGN A..LIMBU VOWEL SIGN U +1923..1926 ; Alphabetic # Mc [4] LIMBU VOWEL SIGN EE..LIMBU VOWEL SIGN AU +1927..1928 ; Alphabetic # Mn [2] LIMBU VOWEL SIGN E..LIMBU VOWEL SIGN O +1929..192B ; Alphabetic # Mc [3] LIMBU SUBJOINED LETTER YA..LIMBU SUBJOINED LETTER WA +1930..1931 ; Alphabetic # Mc [2] LIMBU SMALL LETTER KA..LIMBU SMALL LETTER NGA +1932 ; Alphabetic # Mn LIMBU SMALL LETTER ANUSVARA +1933..1938 ; Alphabetic # Mc [6] LIMBU SMALL LETTER TA..LIMBU SMALL LETTER 
LA +1950..196D ; Alphabetic # Lo [30] TAI LE LETTER KA..TAI LE LETTER AI +1970..1974 ; Alphabetic # Lo [5] TAI LE LETTER TONE-2..TAI LE LETTER TONE-6 +1980..19AB ; Alphabetic # Lo [44] NEW TAI LUE LETTER HIGH QA..NEW TAI LUE LETTER LOW SUA +19B0..19C0 ; Alphabetic # Mc [17] NEW TAI LUE VOWEL SIGN VOWEL SHORTENER..NEW TAI LUE VOWEL SIGN IY +19C1..19C7 ; Alphabetic # Lo [7] NEW TAI LUE LETTER FINAL V..NEW TAI LUE LETTER FINAL B +19C8..19C9 ; Alphabetic # Mc [2] NEW TAI LUE TONE MARK-1..NEW TAI LUE TONE MARK-2 +1A00..1A16 ; Alphabetic # Lo [23] BUGINESE LETTER KA..BUGINESE LETTER HA +1A17..1A18 ; Alphabetic # Mn [2] BUGINESE VOWEL SIGN I..BUGINESE VOWEL SIGN U +1A19..1A1B ; Alphabetic # Mc [3] BUGINESE VOWEL SIGN E..BUGINESE VOWEL SIGN AE +1A20..1A54 ; Alphabetic # Lo [53] TAI THAM LETTER HIGH KA..TAI THAM LETTER GREAT SA +1A55 ; Alphabetic # Mc TAI THAM CONSONANT SIGN MEDIAL RA +1A56 ; Alphabetic # Mn TAI THAM CONSONANT SIGN MEDIAL LA +1A57 ; Alphabetic # Mc TAI THAM CONSONANT SIGN LA TANG LAI +1A58..1A5E ; Alphabetic # Mn [7] TAI THAM SIGN MAI KANG LAI..TAI THAM CONSONANT SIGN SA +1A61 ; Alphabetic # Mc TAI THAM VOWEL SIGN A +1A62 ; Alphabetic # Mn TAI THAM VOWEL SIGN MAI SAT +1A63..1A64 ; Alphabetic # Mc [2] TAI THAM VOWEL SIGN AA..TAI THAM VOWEL SIGN TALL AA +1A65..1A6C ; Alphabetic # Mn [8] TAI THAM VOWEL SIGN I..TAI THAM VOWEL SIGN OA BELOW +1A6D..1A72 ; Alphabetic # Mc [6] TAI THAM VOWEL SIGN OY..TAI THAM VOWEL SIGN THAM AI +1A73..1A74 ; Alphabetic # Mn [2] TAI THAM VOWEL SIGN OA ABOVE..TAI THAM SIGN MAI KANG +1AA7 ; Alphabetic # Lm TAI THAM SIGN MAI YAMOK +1B00..1B03 ; Alphabetic # Mn [4] BALINESE SIGN ULU RICEM..BALINESE SIGN SURANG +1B04 ; Alphabetic # Mc BALINESE SIGN BISAH +1B05..1B33 ; Alphabetic # Lo [47] BALINESE LETTER AKARA..BALINESE LETTER HA +1B35 ; Alphabetic # Mc BALINESE VOWEL SIGN TEDUNG +1B36..1B3A ; Alphabetic # Mn [5] BALINESE VOWEL SIGN ULU..BALINESE VOWEL SIGN RA REPA +1B3B ; Alphabetic # Mc BALINESE VOWEL SIGN RA REPA TEDUNG +1B3C ; 
Alphabetic # Mn BALINESE VOWEL SIGN LA LENGA +1B3D..1B41 ; Alphabetic # Mc [5] BALINESE VOWEL SIGN LA LENGA TEDUNG..BALINESE VOWEL SIGN TALING REPA TEDUNG +1B42 ; Alphabetic # Mn BALINESE VOWEL SIGN PEPET +1B43 ; Alphabetic # Mc BALINESE VOWEL SIGN PEPET TEDUNG +1B45..1B4B ; Alphabetic # Lo [7] BALINESE LETTER KAF SASAK..BALINESE LETTER ASYURA SASAK +1B80..1B81 ; Alphabetic # Mn [2] SUNDANESE SIGN PANYECEK..SUNDANESE SIGN PANGLAYAR +1B82 ; Alphabetic # Mc SUNDANESE SIGN PANGWISAD +1B83..1BA0 ; Alphabetic # Lo [30] SUNDANESE LETTER A..SUNDANESE LETTER HA +1BA1 ; Alphabetic # Mc SUNDANESE CONSONANT SIGN PAMINGKAL +1BA2..1BA5 ; Alphabetic # Mn [4] SUNDANESE CONSONANT SIGN PANYAKRA..SUNDANESE VOWEL SIGN PANYUKU +1BA6..1BA7 ; Alphabetic # Mc [2] SUNDANESE VOWEL SIGN PANAELAENG..SUNDANESE VOWEL SIGN PANOLONG +1BA8..1BA9 ; Alphabetic # Mn [2] SUNDANESE VOWEL SIGN PAMEPET..SUNDANESE VOWEL SIGN PANEULEUNG +1BAE..1BAF ; Alphabetic # Lo [2] SUNDANESE LETTER KHA..SUNDANESE LETTER SYA +1C00..1C23 ; Alphabetic # Lo [36] LEPCHA LETTER KA..LEPCHA LETTER A +1C24..1C2B ; Alphabetic # Mc [8] LEPCHA SUBJOINED LETTER YA..LEPCHA VOWEL SIGN UU +1C2C..1C33 ; Alphabetic # Mn [8] LEPCHA VOWEL SIGN E..LEPCHA CONSONANT SIGN T +1C34..1C35 ; Alphabetic # Mc [2] LEPCHA CONSONANT SIGN NYIN-DO..LEPCHA CONSONANT SIGN KANG +1C4D..1C4F ; Alphabetic # Lo [3] LEPCHA LETTER TTA..LEPCHA LETTER DDA +1C5A..1C77 ; Alphabetic # Lo [30] OL CHIKI LETTER LA..OL CHIKI LETTER OH +1C78..1C7D ; Alphabetic # Lm [6] OL CHIKI MU TTUDDAG..OL CHIKI AHAD +1CE9..1CEC ; Alphabetic # Lo [4] VEDIC SIGN ANUSVARA ANTARGOMUKHA..VEDIC SIGN ANUSVARA VAMAGOMUKHA WITH TAIL +1CEE..1CF1 ; Alphabetic # Lo [4] VEDIC SIGN HEXIFORM LONG ANUSVARA..VEDIC SIGN ANUSVARA UBHAYATO MUKHA +1CF2 ; Alphabetic # Mc VEDIC SIGN ARDHAVISARGA +1D00..1D2B ; Alphabetic # L& [44] LATIN LETTER SMALL CAPITAL A..CYRILLIC LETTER SMALL CAPITAL EL +1D2C..1D61 ; Alphabetic # Lm [54] MODIFIER LETTER CAPITAL A..MODIFIER LETTER SMALL CHI +1D62..1D77 ; Alphabetic # 
L& [22] LATIN SUBSCRIPT SMALL LETTER I..LATIN SMALL LETTER TURNED G +1D78 ; Alphabetic # Lm MODIFIER LETTER CYRILLIC EN +1D79..1D9A ; Alphabetic # L& [34] LATIN SMALL LETTER INSULAR G..LATIN SMALL LETTER EZH WITH RETROFLEX HOOK +1D9B..1DBF ; Alphabetic # Lm [37] MODIFIER LETTER SMALL TURNED ALPHA..MODIFIER LETTER SMALL THETA +1E00..1F15 ; Alphabetic # L& [278] LATIN CAPITAL LETTER A WITH RING BELOW..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA +1F18..1F1D ; Alphabetic # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA +1F20..1F45 ; Alphabetic # L& [38] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA +1F48..1F4D ; Alphabetic # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F50..1F57 ; Alphabetic # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F59 ; Alphabetic # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; Alphabetic # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; Alphabetic # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F..1F7D ; Alphabetic # L& [31] GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI..GREEK SMALL LETTER OMEGA WITH OXIA +1F80..1FB4 ; Alphabetic # L& [53] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB6..1FBC ; Alphabetic # L& [7] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI +1FBE ; Alphabetic # L& GREEK PROSGEGRAMMENI +1FC2..1FC4 ; Alphabetic # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC6..1FCC ; Alphabetic # L& [7] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK CAPITAL LETTER ETA WITH PROSGEGRAMMENI +1FD0..1FD3 ; Alphabetic # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA +1FD6..1FDB ; Alphabetic # L& 
[6] GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK CAPITAL LETTER IOTA WITH OXIA +1FE0..1FEC ; Alphabetic # L& [13] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FF2..1FF4 ; Alphabetic # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF6..1FFC ; Alphabetic # L& [7] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI +2071 ; Alphabetic # Lm SUPERSCRIPT LATIN SMALL LETTER I +207F ; Alphabetic # Lm SUPERSCRIPT LATIN SMALL LETTER N +2090..2094 ; Alphabetic # Lm [5] LATIN SUBSCRIPT SMALL LETTER A..LATIN SUBSCRIPT SMALL LETTER SCHWA +2102 ; Alphabetic # L& DOUBLE-STRUCK CAPITAL C +2107 ; Alphabetic # L& EULER CONSTANT +210A..2113 ; Alphabetic # L& [10] SCRIPT SMALL G..SCRIPT SMALL L +2115 ; Alphabetic # L& DOUBLE-STRUCK CAPITAL N +2119..211D ; Alphabetic # L& [5] DOUBLE-STRUCK CAPITAL P..DOUBLE-STRUCK CAPITAL R +2124 ; Alphabetic # L& DOUBLE-STRUCK CAPITAL Z +2126 ; Alphabetic # L& OHM SIGN +2128 ; Alphabetic # L& BLACK-LETTER CAPITAL Z +212A..212D ; Alphabetic # L& [4] KELVIN SIGN..BLACK-LETTER CAPITAL C +212F..2134 ; Alphabetic # L& [6] SCRIPT SMALL E..SCRIPT SMALL O +2135..2138 ; Alphabetic # Lo [4] ALEF SYMBOL..DALET SYMBOL +2139 ; Alphabetic # L& INFORMATION SOURCE +213C..213F ; Alphabetic # L& [4] DOUBLE-STRUCK SMALL PI..DOUBLE-STRUCK CAPITAL PI +2145..2149 ; Alphabetic # L& [5] DOUBLE-STRUCK ITALIC CAPITAL D..DOUBLE-STRUCK ITALIC SMALL J +214E ; Alphabetic # L& TURNED SMALL F +2160..2182 ; Alphabetic # Nl [35] ROMAN NUMERAL ONE..ROMAN NUMERAL TEN THOUSAND +2183..2184 ; Alphabetic # L& [2] ROMAN NUMERAL REVERSED ONE HUNDRED..LATIN SMALL LETTER REVERSED C +2185..2188 ; Alphabetic # Nl [4] ROMAN NUMERAL SIX LATE FORM..ROMAN NUMERAL ONE HUNDRED THOUSAND +24B6..24E9 ; Alphabetic # So [52] CIRCLED LATIN CAPITAL LETTER A..CIRCLED LATIN SMALL LETTER Z +2C00..2C2E ; Alphabetic # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC 
CAPITAL LETTER LATINATE MYSLITE +2C30..2C5E ; Alphabetic # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE +2C60..2C7C ; Alphabetic # L& [29] LATIN CAPITAL LETTER L WITH DOUBLE BAR..LATIN SUBSCRIPT SMALL LETTER J +2C7D ; Alphabetic # Lm MODIFIER LETTER CAPITAL V +2C7E..2CE4 ; Alphabetic # L& [103] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC SYMBOL KAI +2CEB..2CEE ; Alphabetic # L& [4] COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI..COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA +2D00..2D25 ; Alphabetic # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE +2D30..2D65 ; Alphabetic # Lo [54] TIFINAGH LETTER YA..TIFINAGH LETTER YAZZ +2D6F ; Alphabetic # Lm TIFINAGH MODIFIER LETTER LABIALIZATION MARK +2D80..2D96 ; Alphabetic # Lo [23] ETHIOPIC SYLLABLE LOA..ETHIOPIC SYLLABLE GGWE +2DA0..2DA6 ; Alphabetic # Lo [7] ETHIOPIC SYLLABLE SSA..ETHIOPIC SYLLABLE SSO +2DA8..2DAE ; Alphabetic # Lo [7] ETHIOPIC SYLLABLE CCA..ETHIOPIC SYLLABLE CCO +2DB0..2DB6 ; Alphabetic # Lo [7] ETHIOPIC SYLLABLE ZZA..ETHIOPIC SYLLABLE ZZO +2DB8..2DBE ; Alphabetic # Lo [7] ETHIOPIC SYLLABLE CCHA..ETHIOPIC SYLLABLE CCHO +2DC0..2DC6 ; Alphabetic # Lo [7] ETHIOPIC SYLLABLE QYA..ETHIOPIC SYLLABLE QYO +2DC8..2DCE ; Alphabetic # Lo [7] ETHIOPIC SYLLABLE KYA..ETHIOPIC SYLLABLE KYO +2DD0..2DD6 ; Alphabetic # Lo [7] ETHIOPIC SYLLABLE XYA..ETHIOPIC SYLLABLE XYO +2DD8..2DDE ; Alphabetic # Lo [7] ETHIOPIC SYLLABLE GYA..ETHIOPIC SYLLABLE GYO +2DE0..2DFF ; Alphabetic # Mn [32] COMBINING CYRILLIC LETTER BE..COMBINING CYRILLIC LETTER IOTIFIED BIG YUS +2E2F ; Alphabetic # Lm VERTICAL TILDE +3005 ; Alphabetic # Lm IDEOGRAPHIC ITERATION MARK +3006 ; Alphabetic # Lo IDEOGRAPHIC CLOSING MARK +3007 ; Alphabetic # Nl IDEOGRAPHIC NUMBER ZERO +3021..3029 ; Alphabetic # Nl [9] HANGZHOU NUMERAL ONE..HANGZHOU NUMERAL NINE +3031..3035 ; Alphabetic # Lm [5] VERTICAL KANA REPEAT MARK..VERTICAL KANA REPEAT MARK LOWER HALF +3038..303A ; Alphabetic # Nl [3] HANGZHOU NUMERAL TEN..HANGZHOU NUMERAL 
THIRTY +303B ; Alphabetic # Lm VERTICAL IDEOGRAPHIC ITERATION MARK +303C ; Alphabetic # Lo MASU MARK +3041..3096 ; Alphabetic # Lo [86] HIRAGANA LETTER SMALL A..HIRAGANA LETTER SMALL KE +309D..309E ; Alphabetic # Lm [2] HIRAGANA ITERATION MARK..HIRAGANA VOICED ITERATION MARK +309F ; Alphabetic # Lo HIRAGANA DIGRAPH YORI +30A1..30FA ; Alphabetic # Lo [90] KATAKANA LETTER SMALL A..KATAKANA LETTER VO +30FC..30FE ; Alphabetic # Lm [3] KATAKANA-HIRAGANA PROLONGED SOUND MARK..KATAKANA VOICED ITERATION MARK +30FF ; Alphabetic # Lo KATAKANA DIGRAPH KOTO +3105..312D ; Alphabetic # Lo [41] BOPOMOFO LETTER B..BOPOMOFO LETTER IH +3131..318E ; Alphabetic # Lo [94] HANGUL LETTER KIYEOK..HANGUL LETTER ARAEAE +31A0..31B7 ; Alphabetic # Lo [24] BOPOMOFO LETTER BU..BOPOMOFO FINAL LETTER H +31F0..31FF ; Alphabetic # Lo [16] KATAKANA LETTER SMALL KU..KATAKANA LETTER SMALL RO +3400..4DB5 ; Alphabetic # Lo [6582] CJK UNIFIED IDEOGRAPH-3400..CJK UNIFIED IDEOGRAPH-4DB5 +4E00..9FCB ; Alphabetic # Lo [20940] CJK UNIFIED IDEOGRAPH-4E00..CJK UNIFIED IDEOGRAPH-9FCB +A000..A014 ; Alphabetic # Lo [21] YI SYLLABLE IT..YI SYLLABLE E +A015 ; Alphabetic # Lm YI SYLLABLE WU +A016..A48C ; Alphabetic # Lo [1143] YI SYLLABLE BIT..YI SYLLABLE YYR +A4D0..A4F7 ; Alphabetic # Lo [40] LISU LETTER BA..LISU LETTER OE +A4F8..A4FD ; Alphabetic # Lm [6] LISU LETTER TONE MYA TI..LISU LETTER TONE MYA JEU +A500..A60B ; Alphabetic # Lo [268] VAI SYLLABLE EE..VAI SYLLABLE NG +A60C ; Alphabetic # Lm VAI SYLLABLE LENGTHENER +A610..A61F ; Alphabetic # Lo [16] VAI SYLLABLE NDOLE FA..VAI SYMBOL JONG +A62A..A62B ; Alphabetic # Lo [2] VAI SYLLABLE NDOLE MA..VAI SYLLABLE NDOLE DO +A640..A65F ; Alphabetic # L& [32] CYRILLIC CAPITAL LETTER ZEMLYA..CYRILLIC SMALL LETTER YN +A662..A66D ; Alphabetic # L& [12] CYRILLIC CAPITAL LETTER SOFT DE..CYRILLIC SMALL LETTER DOUBLE MONOCULAR O +A66E ; Alphabetic # Lo CYRILLIC LETTER MULTIOCULAR O +A67F ; Alphabetic # Lm CYRILLIC PAYEROK +A680..A697 ; Alphabetic # L& [24] CYRILLIC CAPITAL 
LETTER DWE..CYRILLIC SMALL LETTER SHWE +A6A0..A6E5 ; Alphabetic # Lo [70] BAMUM LETTER A..BAMUM LETTER KI +A6E6..A6EF ; Alphabetic # Nl [10] BAMUM LETTER MO..BAMUM LETTER KOGHOM +A717..A71F ; Alphabetic # Lm [9] MODIFIER LETTER DOT VERTICAL BAR..MODIFIER LETTER LOW INVERTED EXCLAMATION MARK +A722..A76F ; Alphabetic # L& [78] LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF..LATIN SMALL LETTER CON +A770 ; Alphabetic # Lm MODIFIER LETTER US +A771..A787 ; Alphabetic # L& [23] LATIN SMALL LETTER DUM..LATIN SMALL LETTER INSULAR T +A788 ; Alphabetic # Lm MODIFIER LETTER LOW CIRCUMFLEX ACCENT +A78B..A78C ; Alphabetic # L& [2] LATIN CAPITAL LETTER SALTILLO..LATIN SMALL LETTER SALTILLO +A7FB..A801 ; Alphabetic # Lo [7] LATIN EPIGRAPHIC LETTER REVERSED F..SYLOTI NAGRI LETTER I +A803..A805 ; Alphabetic # Lo [3] SYLOTI NAGRI LETTER U..SYLOTI NAGRI LETTER O +A807..A80A ; Alphabetic # Lo [4] SYLOTI NAGRI LETTER KO..SYLOTI NAGRI LETTER GHO +A80C..A822 ; Alphabetic # Lo [23] SYLOTI NAGRI LETTER CO..SYLOTI NAGRI LETTER HO +A823..A824 ; Alphabetic # Mc [2] SYLOTI NAGRI VOWEL SIGN A..SYLOTI NAGRI VOWEL SIGN I +A825..A826 ; Alphabetic # Mn [2] SYLOTI NAGRI VOWEL SIGN U..SYLOTI NAGRI VOWEL SIGN E +A827 ; Alphabetic # Mc SYLOTI NAGRI VOWEL SIGN OO +A840..A873 ; Alphabetic # Lo [52] PHAGS-PA LETTER KA..PHAGS-PA LETTER CANDRABINDU +A880..A881 ; Alphabetic # Mc [2] SAURASHTRA SIGN ANUSVARA..SAURASHTRA SIGN VISARGA +A882..A8B3 ; Alphabetic # Lo [50] SAURASHTRA LETTER A..SAURASHTRA LETTER LLA +A8B4..A8C3 ; Alphabetic # Mc [16] SAURASHTRA CONSONANT SIGN HAARU..SAURASHTRA VOWEL SIGN AU +A8F2..A8F7 ; Alphabetic # Lo [6] DEVANAGARI SIGN SPACING CANDRABINDU..DEVANAGARI SIGN CANDRABINDU AVAGRAHA +A8FB ; Alphabetic # Lo DEVANAGARI HEADSTROKE +A90A..A925 ; Alphabetic # Lo [28] KAYAH LI LETTER KA..KAYAH LI LETTER OO +A926..A92A ; Alphabetic # Mn [5] KAYAH LI VOWEL UE..KAYAH LI VOWEL O +A930..A946 ; Alphabetic # Lo [23] REJANG LETTER KA..REJANG LETTER A +A947..A951 ; Alphabetic # Mn [11] REJANG VOWEL SIGN 
I..REJANG CONSONANT SIGN R +A952 ; Alphabetic # Mc REJANG CONSONANT SIGN H +A960..A97C ; Alphabetic # Lo [29] HANGUL CHOSEONG TIKEUT-MIEUM..HANGUL CHOSEONG SSANGYEORINHIEUH +A980..A982 ; Alphabetic # Mn [3] JAVANESE SIGN PANYANGGA..JAVANESE SIGN LAYAR +A983 ; Alphabetic # Mc JAVANESE SIGN WIGNYAN +A984..A9B2 ; Alphabetic # Lo [47] JAVANESE LETTER A..JAVANESE LETTER HA +A9B3 ; Alphabetic # Mn JAVANESE SIGN CECAK TELU +A9B4..A9B5 ; Alphabetic # Mc [2] JAVANESE VOWEL SIGN TARUNG..JAVANESE VOWEL SIGN TOLONG +A9B6..A9B9 ; Alphabetic # Mn [4] JAVANESE VOWEL SIGN WULU..JAVANESE VOWEL SIGN SUKU MENDUT +A9BA..A9BB ; Alphabetic # Mc [2] JAVANESE VOWEL SIGN TALING..JAVANESE VOWEL SIGN DIRGA MURE +A9BC ; Alphabetic # Mn JAVANESE VOWEL SIGN PEPET +A9BD..A9BF ; Alphabetic # Mc [3] JAVANESE CONSONANT SIGN KERET..JAVANESE CONSONANT SIGN CAKRA +A9CF ; Alphabetic # Lm JAVANESE PANGRANGKEP +AA00..AA28 ; Alphabetic # Lo [41] CHAM LETTER A..CHAM LETTER HA +AA29..AA2E ; Alphabetic # Mn [6] CHAM VOWEL SIGN AA..CHAM VOWEL SIGN OE +AA2F..AA30 ; Alphabetic # Mc [2] CHAM VOWEL SIGN O..CHAM VOWEL SIGN AI +AA31..AA32 ; Alphabetic # Mn [2] CHAM VOWEL SIGN AU..CHAM VOWEL SIGN UE +AA33..AA34 ; Alphabetic # Mc [2] CHAM CONSONANT SIGN YA..CHAM CONSONANT SIGN RA +AA35..AA36 ; Alphabetic # Mn [2] CHAM CONSONANT SIGN LA..CHAM CONSONANT SIGN WA +AA40..AA42 ; Alphabetic # Lo [3] CHAM LETTER FINAL K..CHAM LETTER FINAL NG +AA43 ; Alphabetic # Mn CHAM CONSONANT SIGN FINAL NG +AA44..AA4B ; Alphabetic # Lo [8] CHAM LETTER FINAL CH..CHAM LETTER FINAL SS +AA4C ; Alphabetic # Mn CHAM CONSONANT SIGN FINAL M +AA4D ; Alphabetic # Mc CHAM CONSONANT SIGN FINAL H +AA60..AA6F ; Alphabetic # Lo [16] MYANMAR LETTER KHAMTI GA..MYANMAR LETTER KHAMTI FA +AA70 ; Alphabetic # Lm MYANMAR MODIFIER LETTER KHAMTI REDUPLICATION +AA71..AA76 ; Alphabetic # Lo [6] MYANMAR LETTER KHAMTI XA..MYANMAR LOGOGRAM KHAMTI HM +AA7A ; Alphabetic # Lo MYANMAR LETTER AITON RA +AA80..AAAF ; Alphabetic # Lo [48] TAI VIET LETTER LOW KO..TAI VIET 
LETTER HIGH O +AAB0 ; Alphabetic # Mn TAI VIET MAI KANG +AAB1 ; Alphabetic # Lo TAI VIET VOWEL AA +AAB2..AAB4 ; Alphabetic # Mn [3] TAI VIET VOWEL I..TAI VIET VOWEL U +AAB5..AAB6 ; Alphabetic # Lo [2] TAI VIET VOWEL E..TAI VIET VOWEL O +AAB7..AAB8 ; Alphabetic # Mn [2] TAI VIET MAI KHIT..TAI VIET VOWEL IA +AAB9..AABD ; Alphabetic # Lo [5] TAI VIET VOWEL UEA..TAI VIET VOWEL AN +AABE ; Alphabetic # Mn TAI VIET VOWEL AM +AAC0 ; Alphabetic # Lo TAI VIET TONE MAI NUENG +AAC2 ; Alphabetic # Lo TAI VIET TONE MAI SONG +AADB..AADC ; Alphabetic # Lo [2] TAI VIET SYMBOL KON..TAI VIET SYMBOL NUENG +AADD ; Alphabetic # Lm TAI VIET SYMBOL SAM +ABC0..ABE2 ; Alphabetic # Lo [35] MEETEI MAYEK LETTER KOK..MEETEI MAYEK LETTER I LONSUM +ABE3..ABE4 ; Alphabetic # Mc [2] MEETEI MAYEK VOWEL SIGN ONAP..MEETEI MAYEK VOWEL SIGN INAP +ABE5 ; Alphabetic # Mn MEETEI MAYEK VOWEL SIGN ANAP +ABE6..ABE7 ; Alphabetic # Mc [2] MEETEI MAYEK VOWEL SIGN YENAP..MEETEI MAYEK VOWEL SIGN SOUNAP +ABE8 ; Alphabetic # Mn MEETEI MAYEK VOWEL SIGN UNAP +ABE9..ABEA ; Alphabetic # Mc [2] MEETEI MAYEK VOWEL SIGN CHEINAP..MEETEI MAYEK VOWEL SIGN NUNG +AC00..D7A3 ; Alphabetic # Lo [11172] HANGUL SYLLABLE GA..HANGUL SYLLABLE HIH +D7B0..D7C6 ; Alphabetic # Lo [23] HANGUL JUNGSEONG O-YEO..HANGUL JUNGSEONG ARAEA-E +D7CB..D7FB ; Alphabetic # Lo [49] HANGUL JONGSEONG NIEUN-RIEUL..HANGUL JONGSEONG PHIEUPH-THIEUTH +F900..FA2D ; Alphabetic # Lo [302] CJK COMPATIBILITY IDEOGRAPH-F900..CJK COMPATIBILITY IDEOGRAPH-FA2D +FA30..FA6D ; Alphabetic # Lo [62] CJK COMPATIBILITY IDEOGRAPH-FA30..CJK COMPATIBILITY IDEOGRAPH-FA6D +FA70..FAD9 ; Alphabetic # Lo [106] CJK COMPATIBILITY IDEOGRAPH-FA70..CJK COMPATIBILITY IDEOGRAPH-FAD9 +FB00..FB06 ; Alphabetic # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; Alphabetic # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FB1D ; Alphabetic # Lo HEBREW LETTER YOD WITH HIRIQ +FB1E ; Alphabetic # Mn HEBREW POINT JUDEO-SPANISH VARIKA +FB1F..FB28 ; 
Alphabetic # Lo [10] HEBREW LIGATURE YIDDISH YOD YOD PATAH..HEBREW LETTER WIDE TAV +FB2A..FB36 ; Alphabetic # Lo [13] HEBREW LETTER SHIN WITH SHIN DOT..HEBREW LETTER ZAYIN WITH DAGESH +FB38..FB3C ; Alphabetic # Lo [5] HEBREW LETTER TET WITH DAGESH..HEBREW LETTER LAMED WITH DAGESH +FB3E ; Alphabetic # Lo HEBREW LETTER MEM WITH DAGESH +FB40..FB41 ; Alphabetic # Lo [2] HEBREW LETTER NUN WITH DAGESH..HEBREW LETTER SAMEKH WITH DAGESH +FB43..FB44 ; Alphabetic # Lo [2] HEBREW LETTER FINAL PE WITH DAGESH..HEBREW LETTER PE WITH DAGESH +FB46..FBB1 ; Alphabetic # Lo [108] HEBREW LETTER TSADI WITH DAGESH..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE FINAL FORM +FBD3..FD3D ; Alphabetic # Lo [363] ARABIC LETTER NG ISOLATED FORM..ARABIC LIGATURE ALEF WITH FATHATAN ISOLATED FORM +FD50..FD8F ; Alphabetic # Lo [64] ARABIC LIGATURE TEH WITH JEEM WITH MEEM INITIAL FORM..ARABIC LIGATURE MEEM WITH KHAH WITH MEEM INITIAL FORM +FD92..FDC7 ; Alphabetic # Lo [54] ARABIC LIGATURE MEEM WITH JEEM WITH KHAH INITIAL FORM..ARABIC LIGATURE NOON WITH JEEM WITH YEH FINAL FORM +FDF0..FDFB ; Alphabetic # Lo [12] ARABIC LIGATURE SALLA USED AS KORANIC STOP SIGN ISOLATED FORM..ARABIC LIGATURE JALLAJALALOUHOU +FE70..FE74 ; Alphabetic # Lo [5] ARABIC FATHATAN ISOLATED FORM..ARABIC KASRATAN ISOLATED FORM +FE76..FEFC ; Alphabetic # Lo [135] ARABIC FATHA ISOLATED FORM..ARABIC LIGATURE LAM WITH ALEF FINAL FORM +FF21..FF3A ; Alphabetic # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN CAPITAL LETTER Z +FF41..FF5A ; Alphabetic # L& [26] FULLWIDTH LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z +FF66..FF6F ; Alphabetic # Lo [10] HALFWIDTH KATAKANA LETTER WO..HALFWIDTH KATAKANA LETTER SMALL TU +FF70 ; Alphabetic # Lm HALFWIDTH KATAKANA-HIRAGANA PROLONGED SOUND MARK +FF71..FF9D ; Alphabetic # Lo [45] HALFWIDTH KATAKANA LETTER A..HALFWIDTH KATAKANA LETTER N +FF9E..FF9F ; Alphabetic # Lm [2] HALFWIDTH KATAKANA VOICED SOUND MARK..HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK +FFA0..FFBE ; Alphabetic # Lo [31] 
HALFWIDTH HANGUL FILLER..HALFWIDTH HANGUL LETTER HIEUH +FFC2..FFC7 ; Alphabetic # Lo [6] HALFWIDTH HANGUL LETTER A..HALFWIDTH HANGUL LETTER E +FFCA..FFCF ; Alphabetic # Lo [6] HALFWIDTH HANGUL LETTER YEO..HALFWIDTH HANGUL LETTER OE +FFD2..FFD7 ; Alphabetic # Lo [6] HALFWIDTH HANGUL LETTER YO..HALFWIDTH HANGUL LETTER YU +FFDA..FFDC ; Alphabetic # Lo [3] HALFWIDTH HANGUL LETTER EU..HALFWIDTH HANGUL LETTER I +10000..1000B ; Alphabetic # Lo [12] LINEAR B SYLLABLE B008 A..LINEAR B SYLLABLE B046 JE +1000D..10026 ; Alphabetic # Lo [26] LINEAR B SYLLABLE B036 JO..LINEAR B SYLLABLE B032 QO +10028..1003A ; Alphabetic # Lo [19] LINEAR B SYLLABLE B060 RA..LINEAR B SYLLABLE B042 WO +1003C..1003D ; Alphabetic # Lo [2] LINEAR B SYLLABLE B017 ZA..LINEAR B SYLLABLE B074 ZE +1003F..1004D ; Alphabetic # Lo [15] LINEAR B SYLLABLE B020 ZO..LINEAR B SYLLABLE B091 TWO +10050..1005D ; Alphabetic # Lo [14] LINEAR B SYMBOL B018..LINEAR B SYMBOL B089 +10080..100FA ; Alphabetic # Lo [123] LINEAR B IDEOGRAM B100 MAN..LINEAR B IDEOGRAM VESSEL B305 +10140..10174 ; Alphabetic # Nl [53] GREEK ACROPHONIC ATTIC ONE QUARTER..GREEK ACROPHONIC STRATIAN FIFTY MNAS +10280..1029C ; Alphabetic # Lo [29] LYCIAN LETTER A..LYCIAN LETTER X +102A0..102D0 ; Alphabetic # Lo [49] CARIAN LETTER A..CARIAN LETTER UUU3 +10300..1031E ; Alphabetic # Lo [31] OLD ITALIC LETTER A..OLD ITALIC LETTER UU +10330..10340 ; Alphabetic # Lo [17] GOTHIC LETTER AHSA..GOTHIC LETTER PAIRTHRA +10341 ; Alphabetic # Nl GOTHIC LETTER NINETY +10342..10349 ; Alphabetic # Lo [8] GOTHIC LETTER RAIDA..GOTHIC LETTER OTHAL +1034A ; Alphabetic # Nl GOTHIC LETTER NINE HUNDRED +10380..1039D ; Alphabetic # Lo [30] UGARITIC LETTER ALPA..UGARITIC LETTER SSU +103A0..103C3 ; Alphabetic # Lo [36] OLD PERSIAN SIGN A..OLD PERSIAN SIGN HA +103C8..103CF ; Alphabetic # Lo [8] OLD PERSIAN SIGN AURAMAZDAA..OLD PERSIAN SIGN BUUMISH +103D1..103D5 ; Alphabetic # Nl [5] OLD PERSIAN NUMBER ONE..OLD PERSIAN NUMBER HUNDRED +10400..1044F ; Alphabetic # L& [80] DESERET 
CAPITAL LETTER LONG I..DESERET SMALL LETTER EW +10450..1049D ; Alphabetic # Lo [78] SHAVIAN LETTER PEEP..OSMANYA LETTER OO +10800..10805 ; Alphabetic # Lo [6] CYPRIOT SYLLABLE A..CYPRIOT SYLLABLE JA +10808 ; Alphabetic # Lo CYPRIOT SYLLABLE JO +1080A..10835 ; Alphabetic # Lo [44] CYPRIOT SYLLABLE KA..CYPRIOT SYLLABLE WO +10837..10838 ; Alphabetic # Lo [2] CYPRIOT SYLLABLE XA..CYPRIOT SYLLABLE XE +1083C ; Alphabetic # Lo CYPRIOT SYLLABLE ZA +1083F..10855 ; Alphabetic # Lo [23] CYPRIOT SYLLABLE ZO..IMPERIAL ARAMAIC LETTER TAW +10900..10915 ; Alphabetic # Lo [22] PHOENICIAN LETTER ALF..PHOENICIAN LETTER TAU +10920..10939 ; Alphabetic # Lo [26] LYDIAN LETTER A..LYDIAN LETTER C +10A00 ; Alphabetic # Lo KHAROSHTHI LETTER A +10A01..10A03 ; Alphabetic # Mn [3] KHAROSHTHI VOWEL SIGN I..KHAROSHTHI VOWEL SIGN VOCALIC R +10A05..10A06 ; Alphabetic # Mn [2] KHAROSHTHI VOWEL SIGN E..KHAROSHTHI VOWEL SIGN O +10A0C..10A0F ; Alphabetic # Mn [4] KHAROSHTHI VOWEL LENGTH MARK..KHAROSHTHI SIGN VISARGA +10A10..10A13 ; Alphabetic # Lo [4] KHAROSHTHI LETTER KA..KHAROSHTHI LETTER GHA +10A15..10A17 ; Alphabetic # Lo [3] KHAROSHTHI LETTER CA..KHAROSHTHI LETTER JA +10A19..10A33 ; Alphabetic # Lo [27] KHAROSHTHI LETTER NYA..KHAROSHTHI LETTER TTTHA +10A60..10A7C ; Alphabetic # Lo [29] OLD SOUTH ARABIAN LETTER HE..OLD SOUTH ARABIAN LETTER THETH +10B00..10B35 ; Alphabetic # Lo [54] AVESTAN LETTER A..AVESTAN LETTER HE +10B40..10B55 ; Alphabetic # Lo [22] INSCRIPTIONAL PARTHIAN LETTER ALEPH..INSCRIPTIONAL PARTHIAN LETTER TAW +10B60..10B72 ; Alphabetic # Lo [19] INSCRIPTIONAL PAHLAVI LETTER ALEPH..INSCRIPTIONAL PAHLAVI LETTER TAW +10C00..10C48 ; Alphabetic # Lo [73] OLD TURKIC LETTER ORKHON A..OLD TURKIC LETTER ORKHON BASH +11082 ; Alphabetic # Mc KAITHI SIGN VISARGA +11083..110AF ; Alphabetic # Lo [45] KAITHI LETTER A..KAITHI LETTER HA +110B0..110B2 ; Alphabetic # Mc [3] KAITHI VOWEL SIGN AA..KAITHI VOWEL SIGN II +110B3..110B6 ; Alphabetic # Mn [4] KAITHI VOWEL SIGN U..KAITHI VOWEL SIGN AI 
+110B7..110B8 ; Alphabetic # Mc [2] KAITHI VOWEL SIGN O..KAITHI VOWEL SIGN AU +12000..1236E ; Alphabetic # Lo [879] CUNEIFORM SIGN A..CUNEIFORM SIGN ZUM +12400..12462 ; Alphabetic # Nl [99] CUNEIFORM NUMERIC SIGN TWO ASH..CUNEIFORM NUMERIC SIGN OLD ASSYRIAN ONE QUARTER +13000..1342E ; Alphabetic # Lo [1071] EGYPTIAN HIEROGLYPH A001..EGYPTIAN HIEROGLYPH AA032 +1D400..1D454 ; Alphabetic # L& [85] MATHEMATICAL BOLD CAPITAL A..MATHEMATICAL ITALIC SMALL G +1D456..1D49C ; Alphabetic # L& [71] MATHEMATICAL ITALIC SMALL I..MATHEMATICAL SCRIPT CAPITAL A +1D49E..1D49F ; Alphabetic # L& [2] MATHEMATICAL SCRIPT CAPITAL C..MATHEMATICAL SCRIPT CAPITAL D +1D4A2 ; Alphabetic # L& MATHEMATICAL SCRIPT CAPITAL G +1D4A5..1D4A6 ; Alphabetic # L& [2] MATHEMATICAL SCRIPT CAPITAL J..MATHEMATICAL SCRIPT CAPITAL K +1D4A9..1D4AC ; Alphabetic # L& [4] MATHEMATICAL SCRIPT CAPITAL N..MATHEMATICAL SCRIPT CAPITAL Q +1D4AE..1D4B9 ; Alphabetic # L& [12] MATHEMATICAL SCRIPT CAPITAL S..MATHEMATICAL SCRIPT SMALL D +1D4BB ; Alphabetic # L& MATHEMATICAL SCRIPT SMALL F +1D4BD..1D4C3 ; Alphabetic # L& [7] MATHEMATICAL SCRIPT SMALL H..MATHEMATICAL SCRIPT SMALL N +1D4C5..1D505 ; Alphabetic # L& [65] MATHEMATICAL SCRIPT SMALL P..MATHEMATICAL FRAKTUR CAPITAL B +1D507..1D50A ; Alphabetic # L& [4] MATHEMATICAL FRAKTUR CAPITAL D..MATHEMATICAL FRAKTUR CAPITAL G +1D50D..1D514 ; Alphabetic # L& [8] MATHEMATICAL FRAKTUR CAPITAL J..MATHEMATICAL FRAKTUR CAPITAL Q +1D516..1D51C ; Alphabetic # L& [7] MATHEMATICAL FRAKTUR CAPITAL S..MATHEMATICAL FRAKTUR CAPITAL Y +1D51E..1D539 ; Alphabetic # L& [28] MATHEMATICAL FRAKTUR SMALL A..MATHEMATICAL DOUBLE-STRUCK CAPITAL B +1D53B..1D53E ; Alphabetic # L& [4] MATHEMATICAL DOUBLE-STRUCK CAPITAL D..MATHEMATICAL DOUBLE-STRUCK CAPITAL G +1D540..1D544 ; Alphabetic # L& [5] MATHEMATICAL DOUBLE-STRUCK CAPITAL I..MATHEMATICAL DOUBLE-STRUCK CAPITAL M +1D546 ; Alphabetic # L& MATHEMATICAL DOUBLE-STRUCK CAPITAL O +1D54A..1D550 ; Alphabetic # L& [7] MATHEMATICAL DOUBLE-STRUCK CAPITAL 
S..MATHEMATICAL DOUBLE-STRUCK CAPITAL Y +1D552..1D6A5 ; Alphabetic # L& [340] MATHEMATICAL DOUBLE-STRUCK SMALL A..MATHEMATICAL ITALIC SMALL DOTLESS J +1D6A8..1D6C0 ; Alphabetic # L& [25] MATHEMATICAL BOLD CAPITAL ALPHA..MATHEMATICAL BOLD CAPITAL OMEGA +1D6C2..1D6DA ; Alphabetic # L& [25] MATHEMATICAL BOLD SMALL ALPHA..MATHEMATICAL BOLD SMALL OMEGA +1D6DC..1D6FA ; Alphabetic # L& [31] MATHEMATICAL BOLD EPSILON SYMBOL..MATHEMATICAL ITALIC CAPITAL OMEGA +1D6FC..1D714 ; Alphabetic # L& [25] MATHEMATICAL ITALIC SMALL ALPHA..MATHEMATICAL ITALIC SMALL OMEGA +1D716..1D734 ; Alphabetic # L& [31] MATHEMATICAL ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD ITALIC CAPITAL OMEGA +1D736..1D74E ; Alphabetic # L& [25] MATHEMATICAL BOLD ITALIC SMALL ALPHA..MATHEMATICAL BOLD ITALIC SMALL OMEGA +1D750..1D76E ; Alphabetic # L& [31] MATHEMATICAL BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD CAPITAL OMEGA +1D770..1D788 ; Alphabetic # L& [25] MATHEMATICAL SANS-SERIF BOLD SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD SMALL OMEGA +1D78A..1D7A8 ; Alphabetic # L& [31] MATHEMATICAL SANS-SERIF BOLD EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMEGA +1D7AA..1D7C2 ; Alphabetic # L& [25] MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL OMEGA +1D7C4..1D7CB ; Alphabetic # L& [8] MATHEMATICAL SANS-SERIF BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD SMALL DIGAMMA +20000..2A6D6 ; Alphabetic # Lo [42711] CJK UNIFIED IDEOGRAPH-20000..CJK UNIFIED IDEOGRAPH-2A6D6 +2A700..2B734 ; Alphabetic # Lo [4149] CJK UNIFIED IDEOGRAPH-2A700..CJK UNIFIED IDEOGRAPH-2B734 +2F800..2FA1D ; Alphabetic # Lo [542] CJK COMPATIBILITY IDEOGRAPH-2F800..CJK COMPATIBILITY IDEOGRAPH-2FA1D + +# Total code points: 100520 + +# ================================================ + +# Derived Property: Lowercase +# Generated from: Ll + Other_Lowercase + +0061..007A ; Lowercase # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z +00AA ; Lowercase # L& FEMININE ORDINAL INDICATOR 
+00B5 ; Lowercase # L& MICRO SIGN +00BA ; Lowercase # L& MASCULINE ORDINAL INDICATOR +00DF..00F6 ; Lowercase # L& [24] LATIN SMALL LETTER SHARP S..LATIN SMALL LETTER O WITH DIAERESIS +00F8..00FF ; Lowercase # L& [8] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER Y WITH DIAERESIS +0101 ; Lowercase # L& LATIN SMALL LETTER A WITH MACRON +0103 ; Lowercase # L& LATIN SMALL LETTER A WITH BREVE +0105 ; Lowercase # L& LATIN SMALL LETTER A WITH OGONEK +0107 ; Lowercase # L& LATIN SMALL LETTER C WITH ACUTE +0109 ; Lowercase # L& LATIN SMALL LETTER C WITH CIRCUMFLEX +010B ; Lowercase # L& LATIN SMALL LETTER C WITH DOT ABOVE +010D ; Lowercase # L& LATIN SMALL LETTER C WITH CARON +010F ; Lowercase # L& LATIN SMALL LETTER D WITH CARON +0111 ; Lowercase # L& LATIN SMALL LETTER D WITH STROKE +0113 ; Lowercase # L& LATIN SMALL LETTER E WITH MACRON +0115 ; Lowercase # L& LATIN SMALL LETTER E WITH BREVE +0117 ; Lowercase # L& LATIN SMALL LETTER E WITH DOT ABOVE +0119 ; Lowercase # L& LATIN SMALL LETTER E WITH OGONEK +011B ; Lowercase # L& LATIN SMALL LETTER E WITH CARON +011D ; Lowercase # L& LATIN SMALL LETTER G WITH CIRCUMFLEX +011F ; Lowercase # L& LATIN SMALL LETTER G WITH BREVE +0121 ; Lowercase # L& LATIN SMALL LETTER G WITH DOT ABOVE +0123 ; Lowercase # L& LATIN SMALL LETTER G WITH CEDILLA +0125 ; Lowercase # L& LATIN SMALL LETTER H WITH CIRCUMFLEX +0127 ; Lowercase # L& LATIN SMALL LETTER H WITH STROKE +0129 ; Lowercase # L& LATIN SMALL LETTER I WITH TILDE +012B ; Lowercase # L& LATIN SMALL LETTER I WITH MACRON +012D ; Lowercase # L& LATIN SMALL LETTER I WITH BREVE +012F ; Lowercase # L& LATIN SMALL LETTER I WITH OGONEK +0131 ; Lowercase # L& LATIN SMALL LETTER DOTLESS I +0133 ; Lowercase # L& LATIN SMALL LIGATURE IJ +0135 ; Lowercase # L& LATIN SMALL LETTER J WITH CIRCUMFLEX +0137..0138 ; Lowercase # L& [2] LATIN SMALL LETTER K WITH CEDILLA..LATIN SMALL LETTER KRA +013A ; Lowercase # L& LATIN SMALL LETTER L WITH ACUTE +013C ; Lowercase # L& LATIN SMALL LETTER L WITH 
CEDILLA +013E ; Lowercase # L& LATIN SMALL LETTER L WITH CARON +0140 ; Lowercase # L& LATIN SMALL LETTER L WITH MIDDLE DOT +0142 ; Lowercase # L& LATIN SMALL LETTER L WITH STROKE +0144 ; Lowercase # L& LATIN SMALL LETTER N WITH ACUTE +0146 ; Lowercase # L& LATIN SMALL LETTER N WITH CEDILLA +0148..0149 ; Lowercase # L& [2] LATIN SMALL LETTER N WITH CARON..LATIN SMALL LETTER N PRECEDED BY APOSTROPHE +014B ; Lowercase # L& LATIN SMALL LETTER ENG +014D ; Lowercase # L& LATIN SMALL LETTER O WITH MACRON +014F ; Lowercase # L& LATIN SMALL LETTER O WITH BREVE +0151 ; Lowercase # L& LATIN SMALL LETTER O WITH DOUBLE ACUTE +0153 ; Lowercase # L& LATIN SMALL LIGATURE OE +0155 ; Lowercase # L& LATIN SMALL LETTER R WITH ACUTE +0157 ; Lowercase # L& LATIN SMALL LETTER R WITH CEDILLA +0159 ; Lowercase # L& LATIN SMALL LETTER R WITH CARON +015B ; Lowercase # L& LATIN SMALL LETTER S WITH ACUTE +015D ; Lowercase # L& LATIN SMALL LETTER S WITH CIRCUMFLEX +015F ; Lowercase # L& LATIN SMALL LETTER S WITH CEDILLA +0161 ; Lowercase # L& LATIN SMALL LETTER S WITH CARON +0163 ; Lowercase # L& LATIN SMALL LETTER T WITH CEDILLA +0165 ; Lowercase # L& LATIN SMALL LETTER T WITH CARON +0167 ; Lowercase # L& LATIN SMALL LETTER T WITH STROKE +0169 ; Lowercase # L& LATIN SMALL LETTER U WITH TILDE +016B ; Lowercase # L& LATIN SMALL LETTER U WITH MACRON +016D ; Lowercase # L& LATIN SMALL LETTER U WITH BREVE +016F ; Lowercase # L& LATIN SMALL LETTER U WITH RING ABOVE +0171 ; Lowercase # L& LATIN SMALL LETTER U WITH DOUBLE ACUTE +0173 ; Lowercase # L& LATIN SMALL LETTER U WITH OGONEK +0175 ; Lowercase # L& LATIN SMALL LETTER W WITH CIRCUMFLEX +0177 ; Lowercase # L& LATIN SMALL LETTER Y WITH CIRCUMFLEX +017A ; Lowercase # L& LATIN SMALL LETTER Z WITH ACUTE +017C ; Lowercase # L& LATIN SMALL LETTER Z WITH DOT ABOVE +017E..0180 ; Lowercase # L& [3] LATIN SMALL LETTER Z WITH CARON..LATIN SMALL LETTER B WITH STROKE +0183 ; Lowercase # L& LATIN SMALL LETTER B WITH TOPBAR +0185 ; Lowercase # L& LATIN SMALL 
LETTER TONE SIX +0188 ; Lowercase # L& LATIN SMALL LETTER C WITH HOOK +018C..018D ; Lowercase # L& [2] LATIN SMALL LETTER D WITH TOPBAR..LATIN SMALL LETTER TURNED DELTA +0192 ; Lowercase # L& LATIN SMALL LETTER F WITH HOOK +0195 ; Lowercase # L& LATIN SMALL LETTER HV +0199..019B ; Lowercase # L& [3] LATIN SMALL LETTER K WITH HOOK..LATIN SMALL LETTER LAMBDA WITH STROKE +019E ; Lowercase # L& LATIN SMALL LETTER N WITH LONG RIGHT LEG +01A1 ; Lowercase # L& LATIN SMALL LETTER O WITH HORN +01A3 ; Lowercase # L& LATIN SMALL LETTER OI +01A5 ; Lowercase # L& LATIN SMALL LETTER P WITH HOOK +01A8 ; Lowercase # L& LATIN SMALL LETTER TONE TWO +01AA..01AB ; Lowercase # L& [2] LATIN LETTER REVERSED ESH LOOP..LATIN SMALL LETTER T WITH PALATAL HOOK +01AD ; Lowercase # L& LATIN SMALL LETTER T WITH HOOK +01B0 ; Lowercase # L& LATIN SMALL LETTER U WITH HORN +01B4 ; Lowercase # L& LATIN SMALL LETTER Y WITH HOOK +01B6 ; Lowercase # L& LATIN SMALL LETTER Z WITH STROKE +01B9..01BA ; Lowercase # L& [2] LATIN SMALL LETTER EZH REVERSED..LATIN SMALL LETTER EZH WITH TAIL +01BD..01BF ; Lowercase # L& [3] LATIN SMALL LETTER TONE FIVE..LATIN LETTER WYNN +01C6 ; Lowercase # L& LATIN SMALL LETTER DZ WITH CARON +01C9 ; Lowercase # L& LATIN SMALL LETTER LJ +01CC ; Lowercase # L& LATIN SMALL LETTER NJ +01CE ; Lowercase # L& LATIN SMALL LETTER A WITH CARON +01D0 ; Lowercase # L& LATIN SMALL LETTER I WITH CARON +01D2 ; Lowercase # L& LATIN SMALL LETTER O WITH CARON +01D4 ; Lowercase # L& LATIN SMALL LETTER U WITH CARON +01D6 ; Lowercase # L& LATIN SMALL LETTER U WITH DIAERESIS AND MACRON +01D8 ; Lowercase # L& LATIN SMALL LETTER U WITH DIAERESIS AND ACUTE +01DA ; Lowercase # L& LATIN SMALL LETTER U WITH DIAERESIS AND CARON +01DC..01DD ; Lowercase # L& [2] LATIN SMALL LETTER U WITH DIAERESIS AND GRAVE..LATIN SMALL LETTER TURNED E +01DF ; Lowercase # L& LATIN SMALL LETTER A WITH DIAERESIS AND MACRON +01E1 ; Lowercase # L& LATIN SMALL LETTER A WITH DOT ABOVE AND MACRON +01E3 ; Lowercase # L& LATIN SMALL 
LETTER AE WITH MACRON +01E5 ; Lowercase # L& LATIN SMALL LETTER G WITH STROKE +01E7 ; Lowercase # L& LATIN SMALL LETTER G WITH CARON +01E9 ; Lowercase # L& LATIN SMALL LETTER K WITH CARON +01EB ; Lowercase # L& LATIN SMALL LETTER O WITH OGONEK +01ED ; Lowercase # L& LATIN SMALL LETTER O WITH OGONEK AND MACRON +01EF..01F0 ; Lowercase # L& [2] LATIN SMALL LETTER EZH WITH CARON..LATIN SMALL LETTER J WITH CARON +01F3 ; Lowercase # L& LATIN SMALL LETTER DZ +01F5 ; Lowercase # L& LATIN SMALL LETTER G WITH ACUTE +01F9 ; Lowercase # L& LATIN SMALL LETTER N WITH GRAVE +01FB ; Lowercase # L& LATIN SMALL LETTER A WITH RING ABOVE AND ACUTE +01FD ; Lowercase # L& LATIN SMALL LETTER AE WITH ACUTE +01FF ; Lowercase # L& LATIN SMALL LETTER O WITH STROKE AND ACUTE +0201 ; Lowercase # L& LATIN SMALL LETTER A WITH DOUBLE GRAVE +0203 ; Lowercase # L& LATIN SMALL LETTER A WITH INVERTED BREVE +0205 ; Lowercase # L& LATIN SMALL LETTER E WITH DOUBLE GRAVE +0207 ; Lowercase # L& LATIN SMALL LETTER E WITH INVERTED BREVE +0209 ; Lowercase # L& LATIN SMALL LETTER I WITH DOUBLE GRAVE +020B ; Lowercase # L& LATIN SMALL LETTER I WITH INVERTED BREVE +020D ; Lowercase # L& LATIN SMALL LETTER O WITH DOUBLE GRAVE +020F ; Lowercase # L& LATIN SMALL LETTER O WITH INVERTED BREVE +0211 ; Lowercase # L& LATIN SMALL LETTER R WITH DOUBLE GRAVE +0213 ; Lowercase # L& LATIN SMALL LETTER R WITH INVERTED BREVE +0215 ; Lowercase # L& LATIN SMALL LETTER U WITH DOUBLE GRAVE +0217 ; Lowercase # L& LATIN SMALL LETTER U WITH INVERTED BREVE +0219 ; Lowercase # L& LATIN SMALL LETTER S WITH COMMA BELOW +021B ; Lowercase # L& LATIN SMALL LETTER T WITH COMMA BELOW +021D ; Lowercase # L& LATIN SMALL LETTER YOGH +021F ; Lowercase # L& LATIN SMALL LETTER H WITH CARON +0221 ; Lowercase # L& LATIN SMALL LETTER D WITH CURL +0223 ; Lowercase # L& LATIN SMALL LETTER OU +0225 ; Lowercase # L& LATIN SMALL LETTER Z WITH HOOK +0227 ; Lowercase # L& LATIN SMALL LETTER A WITH DOT ABOVE +0229 ; Lowercase # L& LATIN SMALL LETTER E WITH 
CEDILLA +022B ; Lowercase # L& LATIN SMALL LETTER O WITH DIAERESIS AND MACRON +022D ; Lowercase # L& LATIN SMALL LETTER O WITH TILDE AND MACRON +022F ; Lowercase # L& LATIN SMALL LETTER O WITH DOT ABOVE +0231 ; Lowercase # L& LATIN SMALL LETTER O WITH DOT ABOVE AND MACRON +0233..0239 ; Lowercase # L& [7] LATIN SMALL LETTER Y WITH MACRON..LATIN SMALL LETTER QP DIGRAPH +023C ; Lowercase # L& LATIN SMALL LETTER C WITH STROKE +023F..0240 ; Lowercase # L& [2] LATIN SMALL LETTER S WITH SWASH TAIL..LATIN SMALL LETTER Z WITH SWASH TAIL +0242 ; Lowercase # L& LATIN SMALL LETTER GLOTTAL STOP +0247 ; Lowercase # L& LATIN SMALL LETTER E WITH STROKE +0249 ; Lowercase # L& LATIN SMALL LETTER J WITH STROKE +024B ; Lowercase # L& LATIN SMALL LETTER Q WITH HOOK TAIL +024D ; Lowercase # L& LATIN SMALL LETTER R WITH STROKE +024F..0293 ; Lowercase # L& [69] LATIN SMALL LETTER Y WITH STROKE..LATIN SMALL LETTER EZH WITH CURL +0295..02AF ; Lowercase # L& [27] LATIN LETTER PHARYNGEAL VOICED FRICATIVE..LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL +02B0..02B8 ; Lowercase # Lm [9] MODIFIER LETTER SMALL H..MODIFIER LETTER SMALL Y +02C0..02C1 ; Lowercase # Lm [2] MODIFIER LETTER GLOTTAL STOP..MODIFIER LETTER REVERSED GLOTTAL STOP +02E0..02E4 ; Lowercase # Lm [5] MODIFIER LETTER SMALL GAMMA..MODIFIER LETTER SMALL REVERSED GLOTTAL STOP +0345 ; Lowercase # Mn COMBINING GREEK YPOGEGRAMMENI +0371 ; Lowercase # L& GREEK SMALL LETTER HETA +0373 ; Lowercase # L& GREEK SMALL LETTER ARCHAIC SAMPI +0377 ; Lowercase # L& GREEK SMALL LETTER PAMPHYLIAN DIGAMMA +037A ; Lowercase # Lm GREEK YPOGEGRAMMENI +037B..037D ; Lowercase # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL +0390 ; Lowercase # L& GREEK SMALL LETTER IOTA WITH DIALYTIKA AND TONOS +03AC..03CE ; Lowercase # L& [35] GREEK SMALL LETTER ALPHA WITH TONOS..GREEK SMALL LETTER OMEGA WITH TONOS +03D0..03D1 ; Lowercase # L& [2] GREEK BETA SYMBOL..GREEK THETA SYMBOL +03D5..03D7 ; Lowercase # L& [3] 
GREEK PHI SYMBOL..GREEK KAI SYMBOL +03D9 ; Lowercase # L& GREEK SMALL LETTER ARCHAIC KOPPA +03DB ; Lowercase # L& GREEK SMALL LETTER STIGMA +03DD ; Lowercase # L& GREEK SMALL LETTER DIGAMMA +03DF ; Lowercase # L& GREEK SMALL LETTER KOPPA +03E1 ; Lowercase # L& GREEK SMALL LETTER SAMPI +03E3 ; Lowercase # L& COPTIC SMALL LETTER SHEI +03E5 ; Lowercase # L& COPTIC SMALL LETTER FEI +03E7 ; Lowercase # L& COPTIC SMALL LETTER KHEI +03E9 ; Lowercase # L& COPTIC SMALL LETTER HORI +03EB ; Lowercase # L& COPTIC SMALL LETTER GANGIA +03ED ; Lowercase # L& COPTIC SMALL LETTER SHIMA +03EF..03F3 ; Lowercase # L& [5] COPTIC SMALL LETTER DEI..GREEK LETTER YOT +03F5 ; Lowercase # L& GREEK LUNATE EPSILON SYMBOL +03F8 ; Lowercase # L& GREEK SMALL LETTER SHO +03FB..03FC ; Lowercase # L& [2] GREEK SMALL LETTER SAN..GREEK RHO WITH STROKE SYMBOL +0430..045F ; Lowercase # L& [48] CYRILLIC SMALL LETTER A..CYRILLIC SMALL LETTER DZHE +0461 ; Lowercase # L& CYRILLIC SMALL LETTER OMEGA +0463 ; Lowercase # L& CYRILLIC SMALL LETTER YAT +0465 ; Lowercase # L& CYRILLIC SMALL LETTER IOTIFIED E +0467 ; Lowercase # L& CYRILLIC SMALL LETTER LITTLE YUS +0469 ; Lowercase # L& CYRILLIC SMALL LETTER IOTIFIED LITTLE YUS +046B ; Lowercase # L& CYRILLIC SMALL LETTER BIG YUS +046D ; Lowercase # L& CYRILLIC SMALL LETTER IOTIFIED BIG YUS +046F ; Lowercase # L& CYRILLIC SMALL LETTER KSI +0471 ; Lowercase # L& CYRILLIC SMALL LETTER PSI +0473 ; Lowercase # L& CYRILLIC SMALL LETTER FITA +0475 ; Lowercase # L& CYRILLIC SMALL LETTER IZHITSA +0477 ; Lowercase # L& CYRILLIC SMALL LETTER IZHITSA WITH DOUBLE GRAVE ACCENT +0479 ; Lowercase # L& CYRILLIC SMALL LETTER UK +047B ; Lowercase # L& CYRILLIC SMALL LETTER ROUND OMEGA +047D ; Lowercase # L& CYRILLIC SMALL LETTER OMEGA WITH TITLO +047F ; Lowercase # L& CYRILLIC SMALL LETTER OT +0481 ; Lowercase # L& CYRILLIC SMALL LETTER KOPPA +048B ; Lowercase # L& CYRILLIC SMALL LETTER SHORT I WITH TAIL +048D ; Lowercase # L& CYRILLIC SMALL LETTER SEMISOFT SIGN +048F ; Lowercase # 
L& CYRILLIC SMALL LETTER ER WITH TICK +0491 ; Lowercase # L& CYRILLIC SMALL LETTER GHE WITH UPTURN +0493 ; Lowercase # L& CYRILLIC SMALL LETTER GHE WITH STROKE +0495 ; Lowercase # L& CYRILLIC SMALL LETTER GHE WITH MIDDLE HOOK +0497 ; Lowercase # L& CYRILLIC SMALL LETTER ZHE WITH DESCENDER +0499 ; Lowercase # L& CYRILLIC SMALL LETTER ZE WITH DESCENDER +049B ; Lowercase # L& CYRILLIC SMALL LETTER KA WITH DESCENDER +049D ; Lowercase # L& CYRILLIC SMALL LETTER KA WITH VERTICAL STROKE +049F ; Lowercase # L& CYRILLIC SMALL LETTER KA WITH STROKE +04A1 ; Lowercase # L& CYRILLIC SMALL LETTER BASHKIR KA +04A3 ; Lowercase # L& CYRILLIC SMALL LETTER EN WITH DESCENDER +04A5 ; Lowercase # L& CYRILLIC SMALL LIGATURE EN GHE +04A7 ; Lowercase # L& CYRILLIC SMALL LETTER PE WITH MIDDLE HOOK +04A9 ; Lowercase # L& CYRILLIC SMALL LETTER ABKHASIAN HA +04AB ; Lowercase # L& CYRILLIC SMALL LETTER ES WITH DESCENDER +04AD ; Lowercase # L& CYRILLIC SMALL LETTER TE WITH DESCENDER +04AF ; Lowercase # L& CYRILLIC SMALL LETTER STRAIGHT U +04B1 ; Lowercase # L& CYRILLIC SMALL LETTER STRAIGHT U WITH STROKE +04B3 ; Lowercase # L& CYRILLIC SMALL LETTER HA WITH DESCENDER +04B5 ; Lowercase # L& CYRILLIC SMALL LIGATURE TE TSE +04B7 ; Lowercase # L& CYRILLIC SMALL LETTER CHE WITH DESCENDER +04B9 ; Lowercase # L& CYRILLIC SMALL LETTER CHE WITH VERTICAL STROKE +04BB ; Lowercase # L& CYRILLIC SMALL LETTER SHHA +04BD ; Lowercase # L& CYRILLIC SMALL LETTER ABKHASIAN CHE +04BF ; Lowercase # L& CYRILLIC SMALL LETTER ABKHASIAN CHE WITH DESCENDER +04C2 ; Lowercase # L& CYRILLIC SMALL LETTER ZHE WITH BREVE +04C4 ; Lowercase # L& CYRILLIC SMALL LETTER KA WITH HOOK +04C6 ; Lowercase # L& CYRILLIC SMALL LETTER EL WITH TAIL +04C8 ; Lowercase # L& CYRILLIC SMALL LETTER EN WITH HOOK +04CA ; Lowercase # L& CYRILLIC SMALL LETTER EN WITH TAIL +04CC ; Lowercase # L& CYRILLIC SMALL LETTER KHAKASSIAN CHE +04CE..04CF ; Lowercase # L& [2] CYRILLIC SMALL LETTER EM WITH TAIL..CYRILLIC SMALL LETTER PALOCHKA +04D1 ; Lowercase # L& 
CYRILLIC SMALL LETTER A WITH BREVE +04D3 ; Lowercase # L& CYRILLIC SMALL LETTER A WITH DIAERESIS +04D5 ; Lowercase # L& CYRILLIC SMALL LIGATURE A IE +04D7 ; Lowercase # L& CYRILLIC SMALL LETTER IE WITH BREVE +04D9 ; Lowercase # L& CYRILLIC SMALL LETTER SCHWA +04DB ; Lowercase # L& CYRILLIC SMALL LETTER SCHWA WITH DIAERESIS +04DD ; Lowercase # L& CYRILLIC SMALL LETTER ZHE WITH DIAERESIS +04DF ; Lowercase # L& CYRILLIC SMALL LETTER ZE WITH DIAERESIS +04E1 ; Lowercase # L& CYRILLIC SMALL LETTER ABKHASIAN DZE +04E3 ; Lowercase # L& CYRILLIC SMALL LETTER I WITH MACRON +04E5 ; Lowercase # L& CYRILLIC SMALL LETTER I WITH DIAERESIS +04E7 ; Lowercase # L& CYRILLIC SMALL LETTER O WITH DIAERESIS +04E9 ; Lowercase # L& CYRILLIC SMALL LETTER BARRED O +04EB ; Lowercase # L& CYRILLIC SMALL LETTER BARRED O WITH DIAERESIS +04ED ; Lowercase # L& CYRILLIC SMALL LETTER E WITH DIAERESIS +04EF ; Lowercase # L& CYRILLIC SMALL LETTER U WITH MACRON +04F1 ; Lowercase # L& CYRILLIC SMALL LETTER U WITH DIAERESIS +04F3 ; Lowercase # L& CYRILLIC SMALL LETTER U WITH DOUBLE ACUTE +04F5 ; Lowercase # L& CYRILLIC SMALL LETTER CHE WITH DIAERESIS +04F7 ; Lowercase # L& CYRILLIC SMALL LETTER GHE WITH DESCENDER +04F9 ; Lowercase # L& CYRILLIC SMALL LETTER YERU WITH DIAERESIS +04FB ; Lowercase # L& CYRILLIC SMALL LETTER GHE WITH STROKE AND HOOK +04FD ; Lowercase # L& CYRILLIC SMALL LETTER HA WITH HOOK +04FF ; Lowercase # L& CYRILLIC SMALL LETTER HA WITH STROKE +0501 ; Lowercase # L& CYRILLIC SMALL LETTER KOMI DE +0503 ; Lowercase # L& CYRILLIC SMALL LETTER KOMI DJE +0505 ; Lowercase # L& CYRILLIC SMALL LETTER KOMI ZJE +0507 ; Lowercase # L& CYRILLIC SMALL LETTER KOMI DZJE +0509 ; Lowercase # L& CYRILLIC SMALL LETTER KOMI LJE +050B ; Lowercase # L& CYRILLIC SMALL LETTER KOMI NJE +050D ; Lowercase # L& CYRILLIC SMALL LETTER KOMI SJE +050F ; Lowercase # L& CYRILLIC SMALL LETTER KOMI TJE +0511 ; Lowercase # L& CYRILLIC SMALL LETTER REVERSED ZE +0513 ; Lowercase # L& CYRILLIC SMALL LETTER EL WITH HOOK +0515 
; Lowercase # L& CYRILLIC SMALL LETTER LHA +0517 ; Lowercase # L& CYRILLIC SMALL LETTER RHA +0519 ; Lowercase # L& CYRILLIC SMALL LETTER YAE +051B ; Lowercase # L& CYRILLIC SMALL LETTER QA +051D ; Lowercase # L& CYRILLIC SMALL LETTER WE +051F ; Lowercase # L& CYRILLIC SMALL LETTER ALEUT KA +0521 ; Lowercase # L& CYRILLIC SMALL LETTER EL WITH MIDDLE HOOK +0523 ; Lowercase # L& CYRILLIC SMALL LETTER EN WITH MIDDLE HOOK +0525 ; Lowercase # L& CYRILLIC SMALL LETTER PE WITH DESCENDER +0561..0587 ; Lowercase # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN +1D00..1D2B ; Lowercase # L& [44] LATIN LETTER SMALL CAPITAL A..CYRILLIC LETTER SMALL CAPITAL EL +1D2C..1D61 ; Lowercase # Lm [54] MODIFIER LETTER CAPITAL A..MODIFIER LETTER SMALL CHI +1D62..1D77 ; Lowercase # L& [22] LATIN SUBSCRIPT SMALL LETTER I..LATIN SMALL LETTER TURNED G +1D78 ; Lowercase # Lm MODIFIER LETTER CYRILLIC EN +1D79..1D9A ; Lowercase # L& [34] LATIN SMALL LETTER INSULAR G..LATIN SMALL LETTER EZH WITH RETROFLEX HOOK +1D9B..1DBF ; Lowercase # Lm [37] MODIFIER LETTER SMALL TURNED ALPHA..MODIFIER LETTER SMALL THETA +1E01 ; Lowercase # L& LATIN SMALL LETTER A WITH RING BELOW +1E03 ; Lowercase # L& LATIN SMALL LETTER B WITH DOT ABOVE +1E05 ; Lowercase # L& LATIN SMALL LETTER B WITH DOT BELOW +1E07 ; Lowercase # L& LATIN SMALL LETTER B WITH LINE BELOW +1E09 ; Lowercase # L& LATIN SMALL LETTER C WITH CEDILLA AND ACUTE +1E0B ; Lowercase # L& LATIN SMALL LETTER D WITH DOT ABOVE +1E0D ; Lowercase # L& LATIN SMALL LETTER D WITH DOT BELOW +1E0F ; Lowercase # L& LATIN SMALL LETTER D WITH LINE BELOW +1E11 ; Lowercase # L& LATIN SMALL LETTER D WITH CEDILLA +1E13 ; Lowercase # L& LATIN SMALL LETTER D WITH CIRCUMFLEX BELOW +1E15 ; Lowercase # L& LATIN SMALL LETTER E WITH MACRON AND GRAVE +1E17 ; Lowercase # L& LATIN SMALL LETTER E WITH MACRON AND ACUTE +1E19 ; Lowercase # L& LATIN SMALL LETTER E WITH CIRCUMFLEX BELOW +1E1B ; Lowercase # L& LATIN SMALL LETTER E WITH TILDE BELOW +1E1D ; Lowercase # L& 
LATIN SMALL LETTER E WITH CEDILLA AND BREVE +1E1F ; Lowercase # L& LATIN SMALL LETTER F WITH DOT ABOVE +1E21 ; Lowercase # L& LATIN SMALL LETTER G WITH MACRON +1E23 ; Lowercase # L& LATIN SMALL LETTER H WITH DOT ABOVE +1E25 ; Lowercase # L& LATIN SMALL LETTER H WITH DOT BELOW +1E27 ; Lowercase # L& LATIN SMALL LETTER H WITH DIAERESIS +1E29 ; Lowercase # L& LATIN SMALL LETTER H WITH CEDILLA +1E2B ; Lowercase # L& LATIN SMALL LETTER H WITH BREVE BELOW +1E2D ; Lowercase # L& LATIN SMALL LETTER I WITH TILDE BELOW +1E2F ; Lowercase # L& LATIN SMALL LETTER I WITH DIAERESIS AND ACUTE +1E31 ; Lowercase # L& LATIN SMALL LETTER K WITH ACUTE +1E33 ; Lowercase # L& LATIN SMALL LETTER K WITH DOT BELOW +1E35 ; Lowercase # L& LATIN SMALL LETTER K WITH LINE BELOW +1E37 ; Lowercase # L& LATIN SMALL LETTER L WITH DOT BELOW +1E39 ; Lowercase # L& LATIN SMALL LETTER L WITH DOT BELOW AND MACRON +1E3B ; Lowercase # L& LATIN SMALL LETTER L WITH LINE BELOW +1E3D ; Lowercase # L& LATIN SMALL LETTER L WITH CIRCUMFLEX BELOW +1E3F ; Lowercase # L& LATIN SMALL LETTER M WITH ACUTE +1E41 ; Lowercase # L& LATIN SMALL LETTER M WITH DOT ABOVE +1E43 ; Lowercase # L& LATIN SMALL LETTER M WITH DOT BELOW +1E45 ; Lowercase # L& LATIN SMALL LETTER N WITH DOT ABOVE +1E47 ; Lowercase # L& LATIN SMALL LETTER N WITH DOT BELOW +1E49 ; Lowercase # L& LATIN SMALL LETTER N WITH LINE BELOW +1E4B ; Lowercase # L& LATIN SMALL LETTER N WITH CIRCUMFLEX BELOW +1E4D ; Lowercase # L& LATIN SMALL LETTER O WITH TILDE AND ACUTE +1E4F ; Lowercase # L& LATIN SMALL LETTER O WITH TILDE AND DIAERESIS +1E51 ; Lowercase # L& LATIN SMALL LETTER O WITH MACRON AND GRAVE +1E53 ; Lowercase # L& LATIN SMALL LETTER O WITH MACRON AND ACUTE +1E55 ; Lowercase # L& LATIN SMALL LETTER P WITH ACUTE +1E57 ; Lowercase # L& LATIN SMALL LETTER P WITH DOT ABOVE +1E59 ; Lowercase # L& LATIN SMALL LETTER R WITH DOT ABOVE +1E5B ; Lowercase # L& LATIN SMALL LETTER R WITH DOT BELOW +1E5D ; Lowercase # L& LATIN SMALL LETTER R WITH DOT BELOW AND MACRON 
+1E5F ; Lowercase # L& LATIN SMALL LETTER R WITH LINE BELOW +1E61 ; Lowercase # L& LATIN SMALL LETTER S WITH DOT ABOVE +1E63 ; Lowercase # L& LATIN SMALL LETTER S WITH DOT BELOW +1E65 ; Lowercase # L& LATIN SMALL LETTER S WITH ACUTE AND DOT ABOVE +1E67 ; Lowercase # L& LATIN SMALL LETTER S WITH CARON AND DOT ABOVE +1E69 ; Lowercase # L& LATIN SMALL LETTER S WITH DOT BELOW AND DOT ABOVE +1E6B ; Lowercase # L& LATIN SMALL LETTER T WITH DOT ABOVE +1E6D ; Lowercase # L& LATIN SMALL LETTER T WITH DOT BELOW +1E6F ; Lowercase # L& LATIN SMALL LETTER T WITH LINE BELOW +1E71 ; Lowercase # L& LATIN SMALL LETTER T WITH CIRCUMFLEX BELOW +1E73 ; Lowercase # L& LATIN SMALL LETTER U WITH DIAERESIS BELOW +1E75 ; Lowercase # L& LATIN SMALL LETTER U WITH TILDE BELOW +1E77 ; Lowercase # L& LATIN SMALL LETTER U WITH CIRCUMFLEX BELOW +1E79 ; Lowercase # L& LATIN SMALL LETTER U WITH TILDE AND ACUTE +1E7B ; Lowercase # L& LATIN SMALL LETTER U WITH MACRON AND DIAERESIS +1E7D ; Lowercase # L& LATIN SMALL LETTER V WITH TILDE +1E7F ; Lowercase # L& LATIN SMALL LETTER V WITH DOT BELOW +1E81 ; Lowercase # L& LATIN SMALL LETTER W WITH GRAVE +1E83 ; Lowercase # L& LATIN SMALL LETTER W WITH ACUTE +1E85 ; Lowercase # L& LATIN SMALL LETTER W WITH DIAERESIS +1E87 ; Lowercase # L& LATIN SMALL LETTER W WITH DOT ABOVE +1E89 ; Lowercase # L& LATIN SMALL LETTER W WITH DOT BELOW +1E8B ; Lowercase # L& LATIN SMALL LETTER X WITH DOT ABOVE +1E8D ; Lowercase # L& LATIN SMALL LETTER X WITH DIAERESIS +1E8F ; Lowercase # L& LATIN SMALL LETTER Y WITH DOT ABOVE +1E91 ; Lowercase # L& LATIN SMALL LETTER Z WITH CIRCUMFLEX +1E93 ; Lowercase # L& LATIN SMALL LETTER Z WITH DOT BELOW +1E95..1E9D ; Lowercase # L& [9] LATIN SMALL LETTER Z WITH LINE BELOW..LATIN SMALL LETTER LONG S WITH HIGH STROKE +1E9F ; Lowercase # L& LATIN SMALL LETTER DELTA +1EA1 ; Lowercase # L& LATIN SMALL LETTER A WITH DOT BELOW +1EA3 ; Lowercase # L& LATIN SMALL LETTER A WITH HOOK ABOVE +1EA5 ; Lowercase # L& LATIN SMALL LETTER A WITH CIRCUMFLEX 
AND ACUTE +1EA7 ; Lowercase # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND GRAVE +1EA9 ; Lowercase # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE +1EAB ; Lowercase # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND TILDE +1EAD ; Lowercase # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND DOT BELOW +1EAF ; Lowercase # L& LATIN SMALL LETTER A WITH BREVE AND ACUTE +1EB1 ; Lowercase # L& LATIN SMALL LETTER A WITH BREVE AND GRAVE +1EB3 ; Lowercase # L& LATIN SMALL LETTER A WITH BREVE AND HOOK ABOVE +1EB5 ; Lowercase # L& LATIN SMALL LETTER A WITH BREVE AND TILDE +1EB7 ; Lowercase # L& LATIN SMALL LETTER A WITH BREVE AND DOT BELOW +1EB9 ; Lowercase # L& LATIN SMALL LETTER E WITH DOT BELOW +1EBB ; Lowercase # L& LATIN SMALL LETTER E WITH HOOK ABOVE +1EBD ; Lowercase # L& LATIN SMALL LETTER E WITH TILDE +1EBF ; Lowercase # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND ACUTE +1EC1 ; Lowercase # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND GRAVE +1EC3 ; Lowercase # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE +1EC5 ; Lowercase # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND TILDE +1EC7 ; Lowercase # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND DOT BELOW +1EC9 ; Lowercase # L& LATIN SMALL LETTER I WITH HOOK ABOVE +1ECB ; Lowercase # L& LATIN SMALL LETTER I WITH DOT BELOW +1ECD ; Lowercase # L& LATIN SMALL LETTER O WITH DOT BELOW +1ECF ; Lowercase # L& LATIN SMALL LETTER O WITH HOOK ABOVE +1ED1 ; Lowercase # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND ACUTE +1ED3 ; Lowercase # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND GRAVE +1ED5 ; Lowercase # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE +1ED7 ; Lowercase # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND TILDE +1ED9 ; Lowercase # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND DOT BELOW +1EDB ; Lowercase # L& LATIN SMALL LETTER O WITH HORN AND ACUTE +1EDD ; Lowercase # L& LATIN SMALL LETTER O WITH HORN AND GRAVE +1EDF ; Lowercase # L& LATIN SMALL LETTER O WITH HORN AND HOOK ABOVE +1EE1 ; Lowercase # L& LATIN SMALL 
LETTER O WITH HORN AND TILDE +1EE3 ; Lowercase # L& LATIN SMALL LETTER O WITH HORN AND DOT BELOW +1EE5 ; Lowercase # L& LATIN SMALL LETTER U WITH DOT BELOW +1EE7 ; Lowercase # L& LATIN SMALL LETTER U WITH HOOK ABOVE +1EE9 ; Lowercase # L& LATIN SMALL LETTER U WITH HORN AND ACUTE +1EEB ; Lowercase # L& LATIN SMALL LETTER U WITH HORN AND GRAVE +1EED ; Lowercase # L& LATIN SMALL LETTER U WITH HORN AND HOOK ABOVE +1EEF ; Lowercase # L& LATIN SMALL LETTER U WITH HORN AND TILDE +1EF1 ; Lowercase # L& LATIN SMALL LETTER U WITH HORN AND DOT BELOW +1EF3 ; Lowercase # L& LATIN SMALL LETTER Y WITH GRAVE +1EF5 ; Lowercase # L& LATIN SMALL LETTER Y WITH DOT BELOW +1EF7 ; Lowercase # L& LATIN SMALL LETTER Y WITH HOOK ABOVE +1EF9 ; Lowercase # L& LATIN SMALL LETTER Y WITH TILDE +1EFB ; Lowercase # L& LATIN SMALL LETTER MIDDLE-WELSH LL +1EFD ; Lowercase # L& LATIN SMALL LETTER MIDDLE-WELSH V +1EFF..1F07 ; Lowercase # L& [9] LATIN SMALL LETTER Y WITH LOOP..GREEK SMALL LETTER ALPHA WITH DASIA AND PERISPOMENI +1F10..1F15 ; Lowercase # L& [6] GREEK SMALL LETTER EPSILON WITH PSILI..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA +1F20..1F27 ; Lowercase # L& [8] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER ETA WITH DASIA AND PERISPOMENI +1F30..1F37 ; Lowercase # L& [8] GREEK SMALL LETTER IOTA WITH PSILI..GREEK SMALL LETTER IOTA WITH DASIA AND PERISPOMENI +1F40..1F45 ; Lowercase # L& [6] GREEK SMALL LETTER OMICRON WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA +1F50..1F57 ; Lowercase # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F60..1F67 ; Lowercase # L& [8] GREEK SMALL LETTER OMEGA WITH PSILI..GREEK SMALL LETTER OMEGA WITH DASIA AND PERISPOMENI +1F70..1F7D ; Lowercase # L& [14] GREEK SMALL LETTER ALPHA WITH VARIA..GREEK SMALL LETTER OMEGA WITH OXIA +1F80..1F87 ; Lowercase # L& [8] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH DASIA AND PERISPOMENI AND YPOGEGRAMMENI 
+1F90..1F97 ; Lowercase # L& [8] GREEK SMALL LETTER ETA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH DASIA AND PERISPOMENI AND YPOGEGRAMMENI +1FA0..1FA7 ; Lowercase # L& [8] GREEK SMALL LETTER OMEGA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH DASIA AND PERISPOMENI AND YPOGEGRAMMENI +1FB0..1FB4 ; Lowercase # L& [5] GREEK SMALL LETTER ALPHA WITH VRACHY..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB6..1FB7 ; Lowercase # L& [2] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK SMALL LETTER ALPHA WITH PERISPOMENI AND YPOGEGRAMMENI +1FBE ; Lowercase # L& GREEK PROSGEGRAMMENI +1FC2..1FC4 ; Lowercase # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC6..1FC7 ; Lowercase # L& [2] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK SMALL LETTER ETA WITH PERISPOMENI AND YPOGEGRAMMENI +1FD0..1FD3 ; Lowercase # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA +1FD6..1FD7 ; Lowercase # L& [2] GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND PERISPOMENI +1FE0..1FE7 ; Lowercase # L& [8] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND PERISPOMENI +1FF2..1FF4 ; Lowercase # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF6..1FF7 ; Lowercase # L& [2] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK SMALL LETTER OMEGA WITH PERISPOMENI AND YPOGEGRAMMENI +2090..2094 ; Lowercase # Lm [5] LATIN SUBSCRIPT SMALL LETTER A..LATIN SUBSCRIPT SMALL LETTER SCHWA +210A ; Lowercase # L& SCRIPT SMALL G +210E..210F ; Lowercase # L& [2] PLANCK CONSTANT..PLANCK CONSTANT OVER TWO PI +2113 ; Lowercase # L& SCRIPT SMALL L +212F ; Lowercase # L& SCRIPT SMALL E +2134 ; Lowercase # L& SCRIPT SMALL O +2139 ; Lowercase # L& INFORMATION SOURCE +213C..213D ; Lowercase # L& [2] DOUBLE-STRUCK SMALL PI..DOUBLE-STRUCK SMALL GAMMA 
+2146..2149 ; Lowercase # L& [4] DOUBLE-STRUCK ITALIC SMALL D..DOUBLE-STRUCK ITALIC SMALL J +214E ; Lowercase # L& TURNED SMALL F +2170..217F ; Lowercase # Nl [16] SMALL ROMAN NUMERAL ONE..SMALL ROMAN NUMERAL ONE THOUSAND +2184 ; Lowercase # L& LATIN SMALL LETTER REVERSED C +24D0..24E9 ; Lowercase # So [26] CIRCLED LATIN SMALL LETTER A..CIRCLED LATIN SMALL LETTER Z +2C30..2C5E ; Lowercase # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE +2C61 ; Lowercase # L& LATIN SMALL LETTER L WITH DOUBLE BAR +2C65..2C66 ; Lowercase # L& [2] LATIN SMALL LETTER A WITH STROKE..LATIN SMALL LETTER T WITH DIAGONAL STROKE +2C68 ; Lowercase # L& LATIN SMALL LETTER H WITH DESCENDER +2C6A ; Lowercase # L& LATIN SMALL LETTER K WITH DESCENDER +2C6C ; Lowercase # L& LATIN SMALL LETTER Z WITH DESCENDER +2C71 ; Lowercase # L& LATIN SMALL LETTER V WITH RIGHT HOOK +2C73..2C74 ; Lowercase # L& [2] LATIN SMALL LETTER W WITH HOOK..LATIN SMALL LETTER V WITH CURL +2C76..2C7C ; Lowercase # L& [7] LATIN SMALL LETTER HALF H..LATIN SUBSCRIPT SMALL LETTER J +2C7D ; Lowercase # Lm MODIFIER LETTER CAPITAL V +2C81 ; Lowercase # L& COPTIC SMALL LETTER ALFA +2C83 ; Lowercase # L& COPTIC SMALL LETTER VIDA +2C85 ; Lowercase # L& COPTIC SMALL LETTER GAMMA +2C87 ; Lowercase # L& COPTIC SMALL LETTER DALDA +2C89 ; Lowercase # L& COPTIC SMALL LETTER EIE +2C8B ; Lowercase # L& COPTIC SMALL LETTER SOU +2C8D ; Lowercase # L& COPTIC SMALL LETTER ZATA +2C8F ; Lowercase # L& COPTIC SMALL LETTER HATE +2C91 ; Lowercase # L& COPTIC SMALL LETTER THETHE +2C93 ; Lowercase # L& COPTIC SMALL LETTER IAUDA +2C95 ; Lowercase # L& COPTIC SMALL LETTER KAPA +2C97 ; Lowercase # L& COPTIC SMALL LETTER LAULA +2C99 ; Lowercase # L& COPTIC SMALL LETTER MI +2C9B ; Lowercase # L& COPTIC SMALL LETTER NI +2C9D ; Lowercase # L& COPTIC SMALL LETTER KSI +2C9F ; Lowercase # L& COPTIC SMALL LETTER O +2CA1 ; Lowercase # L& COPTIC SMALL LETTER PI +2CA3 ; Lowercase # L& COPTIC SMALL LETTER RO +2CA5 ; Lowercase # L& COPTIC 
SMALL LETTER SIMA +2CA7 ; Lowercase # L& COPTIC SMALL LETTER TAU +2CA9 ; Lowercase # L& COPTIC SMALL LETTER UA +2CAB ; Lowercase # L& COPTIC SMALL LETTER FI +2CAD ; Lowercase # L& COPTIC SMALL LETTER KHI +2CAF ; Lowercase # L& COPTIC SMALL LETTER PSI +2CB1 ; Lowercase # L& COPTIC SMALL LETTER OOU +2CB3 ; Lowercase # L& COPTIC SMALL LETTER DIALECT-P ALEF +2CB5 ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC AIN +2CB7 ; Lowercase # L& COPTIC SMALL LETTER CRYPTOGRAMMIC EIE +2CB9 ; Lowercase # L& COPTIC SMALL LETTER DIALECT-P KAPA +2CBB ; Lowercase # L& COPTIC SMALL LETTER DIALECT-P NI +2CBD ; Lowercase # L& COPTIC SMALL LETTER CRYPTOGRAMMIC NI +2CBF ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC OOU +2CC1 ; Lowercase # L& COPTIC SMALL LETTER SAMPI +2CC3 ; Lowercase # L& COPTIC SMALL LETTER CROSSED SHEI +2CC5 ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC SHEI +2CC7 ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC ESH +2CC9 ; Lowercase # L& COPTIC SMALL LETTER AKHMIMIC KHEI +2CCB ; Lowercase # L& COPTIC SMALL LETTER DIALECT-P HORI +2CCD ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC HORI +2CCF ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC HA +2CD1 ; Lowercase # L& COPTIC SMALL LETTER L-SHAPED HA +2CD3 ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC HEI +2CD5 ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC HAT +2CD7 ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC GANGIA +2CD9 ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC DJA +2CDB ; Lowercase # L& COPTIC SMALL LETTER OLD COPTIC SHIMA +2CDD ; Lowercase # L& COPTIC SMALL LETTER OLD NUBIAN SHIMA +2CDF ; Lowercase # L& COPTIC SMALL LETTER OLD NUBIAN NGI +2CE1 ; Lowercase # L& COPTIC SMALL LETTER OLD NUBIAN NYI +2CE3..2CE4 ; Lowercase # L& [2] COPTIC SMALL LETTER OLD NUBIAN WAU..COPTIC SYMBOL KAI +2CEC ; Lowercase # L& COPTIC SMALL LETTER CRYPTOGRAMMIC SHEI +2CEE ; Lowercase # L& COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA +2D00..2D25 ; Lowercase # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE +A641 ; 
Lowercase # L& CYRILLIC SMALL LETTER ZEMLYA +A643 ; Lowercase # L& CYRILLIC SMALL LETTER DZELO +A645 ; Lowercase # L& CYRILLIC SMALL LETTER REVERSED DZE +A647 ; Lowercase # L& CYRILLIC SMALL LETTER IOTA +A649 ; Lowercase # L& CYRILLIC SMALL LETTER DJERV +A64B ; Lowercase # L& CYRILLIC SMALL LETTER MONOGRAPH UK +A64D ; Lowercase # L& CYRILLIC SMALL LETTER BROAD OMEGA +A64F ; Lowercase # L& CYRILLIC SMALL LETTER NEUTRAL YER +A651 ; Lowercase # L& CYRILLIC SMALL LETTER YERU WITH BACK YER +A653 ; Lowercase # L& CYRILLIC SMALL LETTER IOTIFIED YAT +A655 ; Lowercase # L& CYRILLIC SMALL LETTER REVERSED YU +A657 ; Lowercase # L& CYRILLIC SMALL LETTER IOTIFIED A +A659 ; Lowercase # L& CYRILLIC SMALL LETTER CLOSED LITTLE YUS +A65B ; Lowercase # L& CYRILLIC SMALL LETTER BLENDED YUS +A65D ; Lowercase # L& CYRILLIC SMALL LETTER IOTIFIED CLOSED LITTLE YUS +A65F ; Lowercase # L& CYRILLIC SMALL LETTER YN +A663 ; Lowercase # L& CYRILLIC SMALL LETTER SOFT DE +A665 ; Lowercase # L& CYRILLIC SMALL LETTER SOFT EL +A667 ; Lowercase # L& CYRILLIC SMALL LETTER SOFT EM +A669 ; Lowercase # L& CYRILLIC SMALL LETTER MONOCULAR O +A66B ; Lowercase # L& CYRILLIC SMALL LETTER BINOCULAR O +A66D ; Lowercase # L& CYRILLIC SMALL LETTER DOUBLE MONOCULAR O +A681 ; Lowercase # L& CYRILLIC SMALL LETTER DWE +A683 ; Lowercase # L& CYRILLIC SMALL LETTER DZWE +A685 ; Lowercase # L& CYRILLIC SMALL LETTER ZHWE +A687 ; Lowercase # L& CYRILLIC SMALL LETTER CCHE +A689 ; Lowercase # L& CYRILLIC SMALL LETTER DZZE +A68B ; Lowercase # L& CYRILLIC SMALL LETTER TE WITH MIDDLE HOOK +A68D ; Lowercase # L& CYRILLIC SMALL LETTER TWE +A68F ; Lowercase # L& CYRILLIC SMALL LETTER TSWE +A691 ; Lowercase # L& CYRILLIC SMALL LETTER TSSE +A693 ; Lowercase # L& CYRILLIC SMALL LETTER TCHE +A695 ; Lowercase # L& CYRILLIC SMALL LETTER HWE +A697 ; Lowercase # L& CYRILLIC SMALL LETTER SHWE +A723 ; Lowercase # L& LATIN SMALL LETTER EGYPTOLOGICAL ALEF +A725 ; Lowercase # L& LATIN SMALL LETTER EGYPTOLOGICAL AIN +A727 ; Lowercase # L& LATIN 
SMALL LETTER HENG +A729 ; Lowercase # L& LATIN SMALL LETTER TZ +A72B ; Lowercase # L& LATIN SMALL LETTER TRESILLO +A72D ; Lowercase # L& LATIN SMALL LETTER CUATRILLO +A72F..A731 ; Lowercase # L& [3] LATIN SMALL LETTER CUATRILLO WITH COMMA..LATIN LETTER SMALL CAPITAL S +A733 ; Lowercase # L& LATIN SMALL LETTER AA +A735 ; Lowercase # L& LATIN SMALL LETTER AO +A737 ; Lowercase # L& LATIN SMALL LETTER AU +A739 ; Lowercase # L& LATIN SMALL LETTER AV +A73B ; Lowercase # L& LATIN SMALL LETTER AV WITH HORIZONTAL BAR +A73D ; Lowercase # L& LATIN SMALL LETTER AY +A73F ; Lowercase # L& LATIN SMALL LETTER REVERSED C WITH DOT +A741 ; Lowercase # L& LATIN SMALL LETTER K WITH STROKE +A743 ; Lowercase # L& LATIN SMALL LETTER K WITH DIAGONAL STROKE +A745 ; Lowercase # L& LATIN SMALL LETTER K WITH STROKE AND DIAGONAL STROKE +A747 ; Lowercase # L& LATIN SMALL LETTER BROKEN L +A749 ; Lowercase # L& LATIN SMALL LETTER L WITH HIGH STROKE +A74B ; Lowercase # L& LATIN SMALL LETTER O WITH LONG STROKE OVERLAY +A74D ; Lowercase # L& LATIN SMALL LETTER O WITH LOOP +A74F ; Lowercase # L& LATIN SMALL LETTER OO +A751 ; Lowercase # L& LATIN SMALL LETTER P WITH STROKE THROUGH DESCENDER +A753 ; Lowercase # L& LATIN SMALL LETTER P WITH FLOURISH +A755 ; Lowercase # L& LATIN SMALL LETTER P WITH SQUIRREL TAIL +A757 ; Lowercase # L& LATIN SMALL LETTER Q WITH STROKE THROUGH DESCENDER +A759 ; Lowercase # L& LATIN SMALL LETTER Q WITH DIAGONAL STROKE +A75B ; Lowercase # L& LATIN SMALL LETTER R ROTUNDA +A75D ; Lowercase # L& LATIN SMALL LETTER RUM ROTUNDA +A75F ; Lowercase # L& LATIN SMALL LETTER V WITH DIAGONAL STROKE +A761 ; Lowercase # L& LATIN SMALL LETTER VY +A763 ; Lowercase # L& LATIN SMALL LETTER VISIGOTHIC Z +A765 ; Lowercase # L& LATIN SMALL LETTER THORN WITH STROKE +A767 ; Lowercase # L& LATIN SMALL LETTER THORN WITH STROKE THROUGH DESCENDER +A769 ; Lowercase # L& LATIN SMALL LETTER VEND +A76B ; Lowercase # L& LATIN SMALL LETTER ET +A76D ; Lowercase # L& LATIN SMALL LETTER IS +A76F ; Lowercase # 
L& LATIN SMALL LETTER CON +A770 ; Lowercase # Lm MODIFIER LETTER US +A771..A778 ; Lowercase # L& [8] LATIN SMALL LETTER DUM..LATIN SMALL LETTER UM +A77A ; Lowercase # L& LATIN SMALL LETTER INSULAR D +A77C ; Lowercase # L& LATIN SMALL LETTER INSULAR F +A77F ; Lowercase # L& LATIN SMALL LETTER TURNED INSULAR G +A781 ; Lowercase # L& LATIN SMALL LETTER TURNED L +A783 ; Lowercase # L& LATIN SMALL LETTER INSULAR R +A785 ; Lowercase # L& LATIN SMALL LETTER INSULAR S +A787 ; Lowercase # L& LATIN SMALL LETTER INSULAR T +A78C ; Lowercase # L& LATIN SMALL LETTER SALTILLO +FB00..FB06 ; Lowercase # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; Lowercase # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FF41..FF5A ; Lowercase # L& [26] FULLWIDTH LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z +10428..1044F ; Lowercase # L& [40] DESERET SMALL LETTER LONG I..DESERET SMALL LETTER EW +1D41A..1D433 ; Lowercase # L& [26] MATHEMATICAL BOLD SMALL A..MATHEMATICAL BOLD SMALL Z +1D44E..1D454 ; Lowercase # L& [7] MATHEMATICAL ITALIC SMALL A..MATHEMATICAL ITALIC SMALL G +1D456..1D467 ; Lowercase # L& [18] MATHEMATICAL ITALIC SMALL I..MATHEMATICAL ITALIC SMALL Z +1D482..1D49B ; Lowercase # L& [26] MATHEMATICAL BOLD ITALIC SMALL A..MATHEMATICAL BOLD ITALIC SMALL Z +1D4B6..1D4B9 ; Lowercase # L& [4] MATHEMATICAL SCRIPT SMALL A..MATHEMATICAL SCRIPT SMALL D +1D4BB ; Lowercase # L& MATHEMATICAL SCRIPT SMALL F +1D4BD..1D4C3 ; Lowercase # L& [7] MATHEMATICAL SCRIPT SMALL H..MATHEMATICAL SCRIPT SMALL N +1D4C5..1D4CF ; Lowercase # L& [11] MATHEMATICAL SCRIPT SMALL P..MATHEMATICAL SCRIPT SMALL Z +1D4EA..1D503 ; Lowercase # L& [26] MATHEMATICAL BOLD SCRIPT SMALL A..MATHEMATICAL BOLD SCRIPT SMALL Z +1D51E..1D537 ; Lowercase # L& [26] MATHEMATICAL FRAKTUR SMALL A..MATHEMATICAL FRAKTUR SMALL Z +1D552..1D56B ; Lowercase # L& [26] MATHEMATICAL DOUBLE-STRUCK SMALL A..MATHEMATICAL DOUBLE-STRUCK SMALL Z +1D586..1D59F ; Lowercase # L& [26] MATHEMATICAL BOLD 
FRAKTUR SMALL A..MATHEMATICAL BOLD FRAKTUR SMALL Z +1D5BA..1D5D3 ; Lowercase # L& [26] MATHEMATICAL SANS-SERIF SMALL A..MATHEMATICAL SANS-SERIF SMALL Z +1D5EE..1D607 ; Lowercase # L& [26] MATHEMATICAL SANS-SERIF BOLD SMALL A..MATHEMATICAL SANS-SERIF BOLD SMALL Z +1D622..1D63B ; Lowercase # L& [26] MATHEMATICAL SANS-SERIF ITALIC SMALL A..MATHEMATICAL SANS-SERIF ITALIC SMALL Z +1D656..1D66F ; Lowercase # L& [26] MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL A..MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL Z +1D68A..1D6A5 ; Lowercase # L& [28] MATHEMATICAL MONOSPACE SMALL A..MATHEMATICAL ITALIC SMALL DOTLESS J +1D6C2..1D6DA ; Lowercase # L& [25] MATHEMATICAL BOLD SMALL ALPHA..MATHEMATICAL BOLD SMALL OMEGA +1D6DC..1D6E1 ; Lowercase # L& [6] MATHEMATICAL BOLD EPSILON SYMBOL..MATHEMATICAL BOLD PI SYMBOL +1D6FC..1D714 ; Lowercase # L& [25] MATHEMATICAL ITALIC SMALL ALPHA..MATHEMATICAL ITALIC SMALL OMEGA +1D716..1D71B ; Lowercase # L& [6] MATHEMATICAL ITALIC EPSILON SYMBOL..MATHEMATICAL ITALIC PI SYMBOL +1D736..1D74E ; Lowercase # L& [25] MATHEMATICAL BOLD ITALIC SMALL ALPHA..MATHEMATICAL BOLD ITALIC SMALL OMEGA +1D750..1D755 ; Lowercase # L& [6] MATHEMATICAL BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD ITALIC PI SYMBOL +1D770..1D788 ; Lowercase # L& [25] MATHEMATICAL SANS-SERIF BOLD SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD SMALL OMEGA +1D78A..1D78F ; Lowercase # L& [6] MATHEMATICAL SANS-SERIF BOLD EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD PI SYMBOL +1D7AA..1D7C2 ; Lowercase # L& [25] MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL OMEGA +1D7C4..1D7C9 ; Lowercase # L& [6] MATHEMATICAL SANS-SERIF BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD ITALIC PI SYMBOL +1D7CB ; Lowercase # L& MATHEMATICAL BOLD SMALL DIGAMMA + +# Total code points: 1908 + +# ================================================ + +# Derived Property: Uppercase +# Generated from: Lu + Other_Uppercase + +0041..005A ; Uppercase # L& [26] LATIN CAPITAL LETTER 
A..LATIN CAPITAL LETTER Z +00C0..00D6 ; Uppercase # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS +00D8..00DE ; Uppercase # L& [7] LATIN CAPITAL LETTER O WITH STROKE..LATIN CAPITAL LETTER THORN +0100 ; Uppercase # L& LATIN CAPITAL LETTER A WITH MACRON +0102 ; Uppercase # L& LATIN CAPITAL LETTER A WITH BREVE +0104 ; Uppercase # L& LATIN CAPITAL LETTER A WITH OGONEK +0106 ; Uppercase # L& LATIN CAPITAL LETTER C WITH ACUTE +0108 ; Uppercase # L& LATIN CAPITAL LETTER C WITH CIRCUMFLEX +010A ; Uppercase # L& LATIN CAPITAL LETTER C WITH DOT ABOVE +010C ; Uppercase # L& LATIN CAPITAL LETTER C WITH CARON +010E ; Uppercase # L& LATIN CAPITAL LETTER D WITH CARON +0110 ; Uppercase # L& LATIN CAPITAL LETTER D WITH STROKE +0112 ; Uppercase # L& LATIN CAPITAL LETTER E WITH MACRON +0114 ; Uppercase # L& LATIN CAPITAL LETTER E WITH BREVE +0116 ; Uppercase # L& LATIN CAPITAL LETTER E WITH DOT ABOVE +0118 ; Uppercase # L& LATIN CAPITAL LETTER E WITH OGONEK +011A ; Uppercase # L& LATIN CAPITAL LETTER E WITH CARON +011C ; Uppercase # L& LATIN CAPITAL LETTER G WITH CIRCUMFLEX +011E ; Uppercase # L& LATIN CAPITAL LETTER G WITH BREVE +0120 ; Uppercase # L& LATIN CAPITAL LETTER G WITH DOT ABOVE +0122 ; Uppercase # L& LATIN CAPITAL LETTER G WITH CEDILLA +0124 ; Uppercase # L& LATIN CAPITAL LETTER H WITH CIRCUMFLEX +0126 ; Uppercase # L& LATIN CAPITAL LETTER H WITH STROKE +0128 ; Uppercase # L& LATIN CAPITAL LETTER I WITH TILDE +012A ; Uppercase # L& LATIN CAPITAL LETTER I WITH MACRON +012C ; Uppercase # L& LATIN CAPITAL LETTER I WITH BREVE +012E ; Uppercase # L& LATIN CAPITAL LETTER I WITH OGONEK +0130 ; Uppercase # L& LATIN CAPITAL LETTER I WITH DOT ABOVE +0132 ; Uppercase # L& LATIN CAPITAL LIGATURE IJ +0134 ; Uppercase # L& LATIN CAPITAL LETTER J WITH CIRCUMFLEX +0136 ; Uppercase # L& LATIN CAPITAL LETTER K WITH CEDILLA +0139 ; Uppercase # L& LATIN CAPITAL LETTER L WITH ACUTE +013B ; Uppercase # L& LATIN CAPITAL LETTER L WITH CEDILLA +013D ; Uppercase 
# L& LATIN CAPITAL LETTER L WITH CARON +013F ; Uppercase # L& LATIN CAPITAL LETTER L WITH MIDDLE DOT +0141 ; Uppercase # L& LATIN CAPITAL LETTER L WITH STROKE +0143 ; Uppercase # L& LATIN CAPITAL LETTER N WITH ACUTE +0145 ; Uppercase # L& LATIN CAPITAL LETTER N WITH CEDILLA +0147 ; Uppercase # L& LATIN CAPITAL LETTER N WITH CARON +014A ; Uppercase # L& LATIN CAPITAL LETTER ENG +014C ; Uppercase # L& LATIN CAPITAL LETTER O WITH MACRON +014E ; Uppercase # L& LATIN CAPITAL LETTER O WITH BREVE +0150 ; Uppercase # L& LATIN CAPITAL LETTER O WITH DOUBLE ACUTE +0152 ; Uppercase # L& LATIN CAPITAL LIGATURE OE +0154 ; Uppercase # L& LATIN CAPITAL LETTER R WITH ACUTE +0156 ; Uppercase # L& LATIN CAPITAL LETTER R WITH CEDILLA +0158 ; Uppercase # L& LATIN CAPITAL LETTER R WITH CARON +015A ; Uppercase # L& LATIN CAPITAL LETTER S WITH ACUTE +015C ; Uppercase # L& LATIN CAPITAL LETTER S WITH CIRCUMFLEX +015E ; Uppercase # L& LATIN CAPITAL LETTER S WITH CEDILLA +0160 ; Uppercase # L& LATIN CAPITAL LETTER S WITH CARON +0162 ; Uppercase # L& LATIN CAPITAL LETTER T WITH CEDILLA +0164 ; Uppercase # L& LATIN CAPITAL LETTER T WITH CARON +0166 ; Uppercase # L& LATIN CAPITAL LETTER T WITH STROKE +0168 ; Uppercase # L& LATIN CAPITAL LETTER U WITH TILDE +016A ; Uppercase # L& LATIN CAPITAL LETTER U WITH MACRON +016C ; Uppercase # L& LATIN CAPITAL LETTER U WITH BREVE +016E ; Uppercase # L& LATIN CAPITAL LETTER U WITH RING ABOVE +0170 ; Uppercase # L& LATIN CAPITAL LETTER U WITH DOUBLE ACUTE +0172 ; Uppercase # L& LATIN CAPITAL LETTER U WITH OGONEK +0174 ; Uppercase # L& LATIN CAPITAL LETTER W WITH CIRCUMFLEX +0176 ; Uppercase # L& LATIN CAPITAL LETTER Y WITH CIRCUMFLEX +0178..0179 ; Uppercase # L& [2] LATIN CAPITAL LETTER Y WITH DIAERESIS..LATIN CAPITAL LETTER Z WITH ACUTE +017B ; Uppercase # L& LATIN CAPITAL LETTER Z WITH DOT ABOVE +017D ; Uppercase # L& LATIN CAPITAL LETTER Z WITH CARON +0181..0182 ; Uppercase # L& [2] LATIN CAPITAL LETTER B WITH HOOK..LATIN CAPITAL LETTER B WITH TOPBAR 
+0184 ; Uppercase # L& LATIN CAPITAL LETTER TONE SIX +0186..0187 ; Uppercase # L& [2] LATIN CAPITAL LETTER OPEN O..LATIN CAPITAL LETTER C WITH HOOK +0189..018B ; Uppercase # L& [3] LATIN CAPITAL LETTER AFRICAN D..LATIN CAPITAL LETTER D WITH TOPBAR +018E..0191 ; Uppercase # L& [4] LATIN CAPITAL LETTER REVERSED E..LATIN CAPITAL LETTER F WITH HOOK +0193..0194 ; Uppercase # L& [2] LATIN CAPITAL LETTER G WITH HOOK..LATIN CAPITAL LETTER GAMMA +0196..0198 ; Uppercase # L& [3] LATIN CAPITAL LETTER IOTA..LATIN CAPITAL LETTER K WITH HOOK +019C..019D ; Uppercase # L& [2] LATIN CAPITAL LETTER TURNED M..LATIN CAPITAL LETTER N WITH LEFT HOOK +019F..01A0 ; Uppercase # L& [2] LATIN CAPITAL LETTER O WITH MIDDLE TILDE..LATIN CAPITAL LETTER O WITH HORN +01A2 ; Uppercase # L& LATIN CAPITAL LETTER OI +01A4 ; Uppercase # L& LATIN CAPITAL LETTER P WITH HOOK +01A6..01A7 ; Uppercase # L& [2] LATIN LETTER YR..LATIN CAPITAL LETTER TONE TWO +01A9 ; Uppercase # L& LATIN CAPITAL LETTER ESH +01AC ; Uppercase # L& LATIN CAPITAL LETTER T WITH HOOK +01AE..01AF ; Uppercase # L& [2] LATIN CAPITAL LETTER T WITH RETROFLEX HOOK..LATIN CAPITAL LETTER U WITH HORN +01B1..01B3 ; Uppercase # L& [3] LATIN CAPITAL LETTER UPSILON..LATIN CAPITAL LETTER Y WITH HOOK +01B5 ; Uppercase # L& LATIN CAPITAL LETTER Z WITH STROKE +01B7..01B8 ; Uppercase # L& [2] LATIN CAPITAL LETTER EZH..LATIN CAPITAL LETTER EZH REVERSED +01BC ; Uppercase # L& LATIN CAPITAL LETTER TONE FIVE +01C4 ; Uppercase # L& LATIN CAPITAL LETTER DZ WITH CARON +01C7 ; Uppercase # L& LATIN CAPITAL LETTER LJ +01CA ; Uppercase # L& LATIN CAPITAL LETTER NJ +01CD ; Uppercase # L& LATIN CAPITAL LETTER A WITH CARON +01CF ; Uppercase # L& LATIN CAPITAL LETTER I WITH CARON +01D1 ; Uppercase # L& LATIN CAPITAL LETTER O WITH CARON +01D3 ; Uppercase # L& LATIN CAPITAL LETTER U WITH CARON +01D5 ; Uppercase # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND MACRON +01D7 ; Uppercase # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND ACUTE +01D9 ; Uppercase # L& LATIN 
CAPITAL LETTER U WITH DIAERESIS AND CARON +01DB ; Uppercase # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND GRAVE +01DE ; Uppercase # L& LATIN CAPITAL LETTER A WITH DIAERESIS AND MACRON +01E0 ; Uppercase # L& LATIN CAPITAL LETTER A WITH DOT ABOVE AND MACRON +01E2 ; Uppercase # L& LATIN CAPITAL LETTER AE WITH MACRON +01E4 ; Uppercase # L& LATIN CAPITAL LETTER G WITH STROKE +01E6 ; Uppercase # L& LATIN CAPITAL LETTER G WITH CARON +01E8 ; Uppercase # L& LATIN CAPITAL LETTER K WITH CARON +01EA ; Uppercase # L& LATIN CAPITAL LETTER O WITH OGONEK +01EC ; Uppercase # L& LATIN CAPITAL LETTER O WITH OGONEK AND MACRON +01EE ; Uppercase # L& LATIN CAPITAL LETTER EZH WITH CARON +01F1 ; Uppercase # L& LATIN CAPITAL LETTER DZ +01F4 ; Uppercase # L& LATIN CAPITAL LETTER G WITH ACUTE +01F6..01F8 ; Uppercase # L& [3] LATIN CAPITAL LETTER HWAIR..LATIN CAPITAL LETTER N WITH GRAVE +01FA ; Uppercase # L& LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE +01FC ; Uppercase # L& LATIN CAPITAL LETTER AE WITH ACUTE +01FE ; Uppercase # L& LATIN CAPITAL LETTER O WITH STROKE AND ACUTE +0200 ; Uppercase # L& LATIN CAPITAL LETTER A WITH DOUBLE GRAVE +0202 ; Uppercase # L& LATIN CAPITAL LETTER A WITH INVERTED BREVE +0204 ; Uppercase # L& LATIN CAPITAL LETTER E WITH DOUBLE GRAVE +0206 ; Uppercase # L& LATIN CAPITAL LETTER E WITH INVERTED BREVE +0208 ; Uppercase # L& LATIN CAPITAL LETTER I WITH DOUBLE GRAVE +020A ; Uppercase # L& LATIN CAPITAL LETTER I WITH INVERTED BREVE +020C ; Uppercase # L& LATIN CAPITAL LETTER O WITH DOUBLE GRAVE +020E ; Uppercase # L& LATIN CAPITAL LETTER O WITH INVERTED BREVE +0210 ; Uppercase # L& LATIN CAPITAL LETTER R WITH DOUBLE GRAVE +0212 ; Uppercase # L& LATIN CAPITAL LETTER R WITH INVERTED BREVE +0214 ; Uppercase # L& LATIN CAPITAL LETTER U WITH DOUBLE GRAVE +0216 ; Uppercase # L& LATIN CAPITAL LETTER U WITH INVERTED BREVE +0218 ; Uppercase # L& LATIN CAPITAL LETTER S WITH COMMA BELOW +021A ; Uppercase # L& LATIN CAPITAL LETTER T WITH COMMA BELOW +021C ; Uppercase # L& 
LATIN CAPITAL LETTER YOGH +021E ; Uppercase # L& LATIN CAPITAL LETTER H WITH CARON +0220 ; Uppercase # L& LATIN CAPITAL LETTER N WITH LONG RIGHT LEG +0222 ; Uppercase # L& LATIN CAPITAL LETTER OU +0224 ; Uppercase # L& LATIN CAPITAL LETTER Z WITH HOOK +0226 ; Uppercase # L& LATIN CAPITAL LETTER A WITH DOT ABOVE +0228 ; Uppercase # L& LATIN CAPITAL LETTER E WITH CEDILLA +022A ; Uppercase # L& LATIN CAPITAL LETTER O WITH DIAERESIS AND MACRON +022C ; Uppercase # L& LATIN CAPITAL LETTER O WITH TILDE AND MACRON +022E ; Uppercase # L& LATIN CAPITAL LETTER O WITH DOT ABOVE +0230 ; Uppercase # L& LATIN CAPITAL LETTER O WITH DOT ABOVE AND MACRON +0232 ; Uppercase # L& LATIN CAPITAL LETTER Y WITH MACRON +023A..023B ; Uppercase # L& [2] LATIN CAPITAL LETTER A WITH STROKE..LATIN CAPITAL LETTER C WITH STROKE +023D..023E ; Uppercase # L& [2] LATIN CAPITAL LETTER L WITH BAR..LATIN CAPITAL LETTER T WITH DIAGONAL STROKE +0241 ; Uppercase # L& LATIN CAPITAL LETTER GLOTTAL STOP +0243..0246 ; Uppercase # L& [4] LATIN CAPITAL LETTER B WITH STROKE..LATIN CAPITAL LETTER E WITH STROKE +0248 ; Uppercase # L& LATIN CAPITAL LETTER J WITH STROKE +024A ; Uppercase # L& LATIN CAPITAL LETTER SMALL Q WITH HOOK TAIL +024C ; Uppercase # L& LATIN CAPITAL LETTER R WITH STROKE +024E ; Uppercase # L& LATIN CAPITAL LETTER Y WITH STROKE +0370 ; Uppercase # L& GREEK CAPITAL LETTER HETA +0372 ; Uppercase # L& GREEK CAPITAL LETTER ARCHAIC SAMPI +0376 ; Uppercase # L& GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA +0386 ; Uppercase # L& GREEK CAPITAL LETTER ALPHA WITH TONOS +0388..038A ; Uppercase # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS +038C ; Uppercase # L& GREEK CAPITAL LETTER OMICRON WITH TONOS +038E..038F ; Uppercase # L& [2] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER OMEGA WITH TONOS +0391..03A1 ; Uppercase # L& [17] GREEK CAPITAL LETTER ALPHA..GREEK CAPITAL LETTER RHO +03A3..03AB ; Uppercase # L& [9] GREEK CAPITAL LETTER SIGMA..GREEK CAPITAL 
LETTER UPSILON WITH DIALYTIKA +03CF ; Uppercase # L& GREEK CAPITAL KAI SYMBOL +03D2..03D4 ; Uppercase # L& [3] GREEK UPSILON WITH HOOK SYMBOL..GREEK UPSILON WITH DIAERESIS AND HOOK SYMBOL +03D8 ; Uppercase # L& GREEK LETTER ARCHAIC KOPPA +03DA ; Uppercase # L& GREEK LETTER STIGMA +03DC ; Uppercase # L& GREEK LETTER DIGAMMA +03DE ; Uppercase # L& GREEK LETTER KOPPA +03E0 ; Uppercase # L& GREEK LETTER SAMPI +03E2 ; Uppercase # L& COPTIC CAPITAL LETTER SHEI +03E4 ; Uppercase # L& COPTIC CAPITAL LETTER FEI +03E6 ; Uppercase # L& COPTIC CAPITAL LETTER KHEI +03E8 ; Uppercase # L& COPTIC CAPITAL LETTER HORI +03EA ; Uppercase # L& COPTIC CAPITAL LETTER GANGIA +03EC ; Uppercase # L& COPTIC CAPITAL LETTER SHIMA +03EE ; Uppercase # L& COPTIC CAPITAL LETTER DEI +03F4 ; Uppercase # L& GREEK CAPITAL THETA SYMBOL +03F7 ; Uppercase # L& GREEK CAPITAL LETTER SHO +03F9..03FA ; Uppercase # L& [2] GREEK CAPITAL LUNATE SIGMA SYMBOL..GREEK CAPITAL LETTER SAN +03FD..042F ; Uppercase # L& [51] GREEK CAPITAL REVERSED LUNATE SIGMA SYMBOL..CYRILLIC CAPITAL LETTER YA +0460 ; Uppercase # L& CYRILLIC CAPITAL LETTER OMEGA +0462 ; Uppercase # L& CYRILLIC CAPITAL LETTER YAT +0464 ; Uppercase # L& CYRILLIC CAPITAL LETTER IOTIFIED E +0466 ; Uppercase # L& CYRILLIC CAPITAL LETTER LITTLE YUS +0468 ; Uppercase # L& CYRILLIC CAPITAL LETTER IOTIFIED LITTLE YUS +046A ; Uppercase # L& CYRILLIC CAPITAL LETTER BIG YUS +046C ; Uppercase # L& CYRILLIC CAPITAL LETTER IOTIFIED BIG YUS +046E ; Uppercase # L& CYRILLIC CAPITAL LETTER KSI +0470 ; Uppercase # L& CYRILLIC CAPITAL LETTER PSI +0472 ; Uppercase # L& CYRILLIC CAPITAL LETTER FITA +0474 ; Uppercase # L& CYRILLIC CAPITAL LETTER IZHITSA +0476 ; Uppercase # L& CYRILLIC CAPITAL LETTER IZHITSA WITH DOUBLE GRAVE ACCENT +0478 ; Uppercase # L& CYRILLIC CAPITAL LETTER UK +047A ; Uppercase # L& CYRILLIC CAPITAL LETTER ROUND OMEGA +047C ; Uppercase # L& CYRILLIC CAPITAL LETTER OMEGA WITH TITLO +047E ; Uppercase # L& CYRILLIC CAPITAL LETTER OT +0480 ; Uppercase # L& 
CYRILLIC CAPITAL LETTER KOPPA +048A ; Uppercase # L& CYRILLIC CAPITAL LETTER SHORT I WITH TAIL +048C ; Uppercase # L& CYRILLIC CAPITAL LETTER SEMISOFT SIGN +048E ; Uppercase # L& CYRILLIC CAPITAL LETTER ER WITH TICK +0490 ; Uppercase # L& CYRILLIC CAPITAL LETTER GHE WITH UPTURN +0492 ; Uppercase # L& CYRILLIC CAPITAL LETTER GHE WITH STROKE +0494 ; Uppercase # L& CYRILLIC CAPITAL LETTER GHE WITH MIDDLE HOOK +0496 ; Uppercase # L& CYRILLIC CAPITAL LETTER ZHE WITH DESCENDER +0498 ; Uppercase # L& CYRILLIC CAPITAL LETTER ZE WITH DESCENDER +049A ; Uppercase # L& CYRILLIC CAPITAL LETTER KA WITH DESCENDER +049C ; Uppercase # L& CYRILLIC CAPITAL LETTER KA WITH VERTICAL STROKE +049E ; Uppercase # L& CYRILLIC CAPITAL LETTER KA WITH STROKE +04A0 ; Uppercase # L& CYRILLIC CAPITAL LETTER BASHKIR KA +04A2 ; Uppercase # L& CYRILLIC CAPITAL LETTER EN WITH DESCENDER +04A4 ; Uppercase # L& CYRILLIC CAPITAL LIGATURE EN GHE +04A6 ; Uppercase # L& CYRILLIC CAPITAL LETTER PE WITH MIDDLE HOOK +04A8 ; Uppercase # L& CYRILLIC CAPITAL LETTER ABKHASIAN HA +04AA ; Uppercase # L& CYRILLIC CAPITAL LETTER ES WITH DESCENDER +04AC ; Uppercase # L& CYRILLIC CAPITAL LETTER TE WITH DESCENDER +04AE ; Uppercase # L& CYRILLIC CAPITAL LETTER STRAIGHT U +04B0 ; Uppercase # L& CYRILLIC CAPITAL LETTER STRAIGHT U WITH STROKE +04B2 ; Uppercase # L& CYRILLIC CAPITAL LETTER HA WITH DESCENDER +04B4 ; Uppercase # L& CYRILLIC CAPITAL LIGATURE TE TSE +04B6 ; Uppercase # L& CYRILLIC CAPITAL LETTER CHE WITH DESCENDER +04B8 ; Uppercase # L& CYRILLIC CAPITAL LETTER CHE WITH VERTICAL STROKE +04BA ; Uppercase # L& CYRILLIC CAPITAL LETTER SHHA +04BC ; Uppercase # L& CYRILLIC CAPITAL LETTER ABKHASIAN CHE +04BE ; Uppercase # L& CYRILLIC CAPITAL LETTER ABKHASIAN CHE WITH DESCENDER +04C0..04C1 ; Uppercase # L& [2] CYRILLIC LETTER PALOCHKA..CYRILLIC CAPITAL LETTER ZHE WITH BREVE +04C3 ; Uppercase # L& CYRILLIC CAPITAL LETTER KA WITH HOOK +04C5 ; Uppercase # L& CYRILLIC CAPITAL LETTER EL WITH TAIL +04C7 ; Uppercase # L& 
CYRILLIC CAPITAL LETTER EN WITH HOOK +04C9 ; Uppercase # L& CYRILLIC CAPITAL LETTER EN WITH TAIL +04CB ; Uppercase # L& CYRILLIC CAPITAL LETTER KHAKASSIAN CHE +04CD ; Uppercase # L& CYRILLIC CAPITAL LETTER EM WITH TAIL +04D0 ; Uppercase # L& CYRILLIC CAPITAL LETTER A WITH BREVE +04D2 ; Uppercase # L& CYRILLIC CAPITAL LETTER A WITH DIAERESIS +04D4 ; Uppercase # L& CYRILLIC CAPITAL LIGATURE A IE +04D6 ; Uppercase # L& CYRILLIC CAPITAL LETTER IE WITH BREVE +04D8 ; Uppercase # L& CYRILLIC CAPITAL LETTER SCHWA +04DA ; Uppercase # L& CYRILLIC CAPITAL LETTER SCHWA WITH DIAERESIS +04DC ; Uppercase # L& CYRILLIC CAPITAL LETTER ZHE WITH DIAERESIS +04DE ; Uppercase # L& CYRILLIC CAPITAL LETTER ZE WITH DIAERESIS +04E0 ; Uppercase # L& CYRILLIC CAPITAL LETTER ABKHASIAN DZE +04E2 ; Uppercase # L& CYRILLIC CAPITAL LETTER I WITH MACRON +04E4 ; Uppercase # L& CYRILLIC CAPITAL LETTER I WITH DIAERESIS +04E6 ; Uppercase # L& CYRILLIC CAPITAL LETTER O WITH DIAERESIS +04E8 ; Uppercase # L& CYRILLIC CAPITAL LETTER BARRED O +04EA ; Uppercase # L& CYRILLIC CAPITAL LETTER BARRED O WITH DIAERESIS +04EC ; Uppercase # L& CYRILLIC CAPITAL LETTER E WITH DIAERESIS +04EE ; Uppercase # L& CYRILLIC CAPITAL LETTER U WITH MACRON +04F0 ; Uppercase # L& CYRILLIC CAPITAL LETTER U WITH DIAERESIS +04F2 ; Uppercase # L& CYRILLIC CAPITAL LETTER U WITH DOUBLE ACUTE +04F4 ; Uppercase # L& CYRILLIC CAPITAL LETTER CHE WITH DIAERESIS +04F6 ; Uppercase # L& CYRILLIC CAPITAL LETTER GHE WITH DESCENDER +04F8 ; Uppercase # L& CYRILLIC CAPITAL LETTER YERU WITH DIAERESIS +04FA ; Uppercase # L& CYRILLIC CAPITAL LETTER GHE WITH STROKE AND HOOK +04FC ; Uppercase # L& CYRILLIC CAPITAL LETTER HA WITH HOOK +04FE ; Uppercase # L& CYRILLIC CAPITAL LETTER HA WITH STROKE +0500 ; Uppercase # L& CYRILLIC CAPITAL LETTER KOMI DE +0502 ; Uppercase # L& CYRILLIC CAPITAL LETTER KOMI DJE +0504 ; Uppercase # L& CYRILLIC CAPITAL LETTER KOMI ZJE +0506 ; Uppercase # L& CYRILLIC CAPITAL LETTER KOMI DZJE +0508 ; Uppercase # L& CYRILLIC CAPITAL 
LETTER KOMI LJE +050A ; Uppercase # L& CYRILLIC CAPITAL LETTER KOMI NJE +050C ; Uppercase # L& CYRILLIC CAPITAL LETTER KOMI SJE +050E ; Uppercase # L& CYRILLIC CAPITAL LETTER KOMI TJE +0510 ; Uppercase # L& CYRILLIC CAPITAL LETTER REVERSED ZE +0512 ; Uppercase # L& CYRILLIC CAPITAL LETTER EL WITH HOOK +0514 ; Uppercase # L& CYRILLIC CAPITAL LETTER LHA +0516 ; Uppercase # L& CYRILLIC CAPITAL LETTER RHA +0518 ; Uppercase # L& CYRILLIC CAPITAL LETTER YAE +051A ; Uppercase # L& CYRILLIC CAPITAL LETTER QA +051C ; Uppercase # L& CYRILLIC CAPITAL LETTER WE +051E ; Uppercase # L& CYRILLIC CAPITAL LETTER ALEUT KA +0520 ; Uppercase # L& CYRILLIC CAPITAL LETTER EL WITH MIDDLE HOOK +0522 ; Uppercase # L& CYRILLIC CAPITAL LETTER EN WITH MIDDLE HOOK +0524 ; Uppercase # L& CYRILLIC CAPITAL LETTER PE WITH DESCENDER +0531..0556 ; Uppercase # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH +10A0..10C5 ; Uppercase # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE +1E00 ; Uppercase # L& LATIN CAPITAL LETTER A WITH RING BELOW +1E02 ; Uppercase # L& LATIN CAPITAL LETTER B WITH DOT ABOVE +1E04 ; Uppercase # L& LATIN CAPITAL LETTER B WITH DOT BELOW +1E06 ; Uppercase # L& LATIN CAPITAL LETTER B WITH LINE BELOW +1E08 ; Uppercase # L& LATIN CAPITAL LETTER C WITH CEDILLA AND ACUTE +1E0A ; Uppercase # L& LATIN CAPITAL LETTER D WITH DOT ABOVE +1E0C ; Uppercase # L& LATIN CAPITAL LETTER D WITH DOT BELOW +1E0E ; Uppercase # L& LATIN CAPITAL LETTER D WITH LINE BELOW +1E10 ; Uppercase # L& LATIN CAPITAL LETTER D WITH CEDILLA +1E12 ; Uppercase # L& LATIN CAPITAL LETTER D WITH CIRCUMFLEX BELOW +1E14 ; Uppercase # L& LATIN CAPITAL LETTER E WITH MACRON AND GRAVE +1E16 ; Uppercase # L& LATIN CAPITAL LETTER E WITH MACRON AND ACUTE +1E18 ; Uppercase # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX BELOW +1E1A ; Uppercase # L& LATIN CAPITAL LETTER E WITH TILDE BELOW +1E1C ; Uppercase # L& LATIN CAPITAL LETTER E WITH CEDILLA AND BREVE +1E1E ; Uppercase # L& LATIN CAPITAL LETTER F 
WITH DOT ABOVE +1E20 ; Uppercase # L& LATIN CAPITAL LETTER G WITH MACRON +1E22 ; Uppercase # L& LATIN CAPITAL LETTER H WITH DOT ABOVE +1E24 ; Uppercase # L& LATIN CAPITAL LETTER H WITH DOT BELOW +1E26 ; Uppercase # L& LATIN CAPITAL LETTER H WITH DIAERESIS +1E28 ; Uppercase # L& LATIN CAPITAL LETTER H WITH CEDILLA +1E2A ; Uppercase # L& LATIN CAPITAL LETTER H WITH BREVE BELOW +1E2C ; Uppercase # L& LATIN CAPITAL LETTER I WITH TILDE BELOW +1E2E ; Uppercase # L& LATIN CAPITAL LETTER I WITH DIAERESIS AND ACUTE +1E30 ; Uppercase # L& LATIN CAPITAL LETTER K WITH ACUTE +1E32 ; Uppercase # L& LATIN CAPITAL LETTER K WITH DOT BELOW +1E34 ; Uppercase # L& LATIN CAPITAL LETTER K WITH LINE BELOW +1E36 ; Uppercase # L& LATIN CAPITAL LETTER L WITH DOT BELOW +1E38 ; Uppercase # L& LATIN CAPITAL LETTER L WITH DOT BELOW AND MACRON +1E3A ; Uppercase # L& LATIN CAPITAL LETTER L WITH LINE BELOW +1E3C ; Uppercase # L& LATIN CAPITAL LETTER L WITH CIRCUMFLEX BELOW +1E3E ; Uppercase # L& LATIN CAPITAL LETTER M WITH ACUTE +1E40 ; Uppercase # L& LATIN CAPITAL LETTER M WITH DOT ABOVE +1E42 ; Uppercase # L& LATIN CAPITAL LETTER M WITH DOT BELOW +1E44 ; Uppercase # L& LATIN CAPITAL LETTER N WITH DOT ABOVE +1E46 ; Uppercase # L& LATIN CAPITAL LETTER N WITH DOT BELOW +1E48 ; Uppercase # L& LATIN CAPITAL LETTER N WITH LINE BELOW +1E4A ; Uppercase # L& LATIN CAPITAL LETTER N WITH CIRCUMFLEX BELOW +1E4C ; Uppercase # L& LATIN CAPITAL LETTER O WITH TILDE AND ACUTE +1E4E ; Uppercase # L& LATIN CAPITAL LETTER O WITH TILDE AND DIAERESIS +1E50 ; Uppercase # L& LATIN CAPITAL LETTER O WITH MACRON AND GRAVE +1E52 ; Uppercase # L& LATIN CAPITAL LETTER O WITH MACRON AND ACUTE +1E54 ; Uppercase # L& LATIN CAPITAL LETTER P WITH ACUTE +1E56 ; Uppercase # L& LATIN CAPITAL LETTER P WITH DOT ABOVE +1E58 ; Uppercase # L& LATIN CAPITAL LETTER R WITH DOT ABOVE +1E5A ; Uppercase # L& LATIN CAPITAL LETTER R WITH DOT BELOW +1E5C ; Uppercase # L& LATIN CAPITAL LETTER R WITH DOT BELOW AND MACRON +1E5E ; Uppercase # L& 
LATIN CAPITAL LETTER R WITH LINE BELOW +1E60 ; Uppercase # L& LATIN CAPITAL LETTER S WITH DOT ABOVE +1E62 ; Uppercase # L& LATIN CAPITAL LETTER S WITH DOT BELOW +1E64 ; Uppercase # L& LATIN CAPITAL LETTER S WITH ACUTE AND DOT ABOVE +1E66 ; Uppercase # L& LATIN CAPITAL LETTER S WITH CARON AND DOT ABOVE +1E68 ; Uppercase # L& LATIN CAPITAL LETTER S WITH DOT BELOW AND DOT ABOVE +1E6A ; Uppercase # L& LATIN CAPITAL LETTER T WITH DOT ABOVE +1E6C ; Uppercase # L& LATIN CAPITAL LETTER T WITH DOT BELOW +1E6E ; Uppercase # L& LATIN CAPITAL LETTER T WITH LINE BELOW +1E70 ; Uppercase # L& LATIN CAPITAL LETTER T WITH CIRCUMFLEX BELOW +1E72 ; Uppercase # L& LATIN CAPITAL LETTER U WITH DIAERESIS BELOW +1E74 ; Uppercase # L& LATIN CAPITAL LETTER U WITH TILDE BELOW +1E76 ; Uppercase # L& LATIN CAPITAL LETTER U WITH CIRCUMFLEX BELOW +1E78 ; Uppercase # L& LATIN CAPITAL LETTER U WITH TILDE AND ACUTE +1E7A ; Uppercase # L& LATIN CAPITAL LETTER U WITH MACRON AND DIAERESIS +1E7C ; Uppercase # L& LATIN CAPITAL LETTER V WITH TILDE +1E7E ; Uppercase # L& LATIN CAPITAL LETTER V WITH DOT BELOW +1E80 ; Uppercase # L& LATIN CAPITAL LETTER W WITH GRAVE +1E82 ; Uppercase # L& LATIN CAPITAL LETTER W WITH ACUTE +1E84 ; Uppercase # L& LATIN CAPITAL LETTER W WITH DIAERESIS +1E86 ; Uppercase # L& LATIN CAPITAL LETTER W WITH DOT ABOVE +1E88 ; Uppercase # L& LATIN CAPITAL LETTER W WITH DOT BELOW +1E8A ; Uppercase # L& LATIN CAPITAL LETTER X WITH DOT ABOVE +1E8C ; Uppercase # L& LATIN CAPITAL LETTER X WITH DIAERESIS +1E8E ; Uppercase # L& LATIN CAPITAL LETTER Y WITH DOT ABOVE +1E90 ; Uppercase # L& LATIN CAPITAL LETTER Z WITH CIRCUMFLEX +1E92 ; Uppercase # L& LATIN CAPITAL LETTER Z WITH DOT BELOW +1E94 ; Uppercase # L& LATIN CAPITAL LETTER Z WITH LINE BELOW +1E9E ; Uppercase # L& LATIN CAPITAL LETTER SHARP S +1EA0 ; Uppercase # L& LATIN CAPITAL LETTER A WITH DOT BELOW +1EA2 ; Uppercase # L& LATIN CAPITAL LETTER A WITH HOOK ABOVE +1EA4 ; Uppercase # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND ACUTE 
+1EA6 ; Uppercase # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND GRAVE +1EA8 ; Uppercase # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE +1EAA ; Uppercase # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND TILDE +1EAC ; Uppercase # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND DOT BELOW +1EAE ; Uppercase # L& LATIN CAPITAL LETTER A WITH BREVE AND ACUTE +1EB0 ; Uppercase # L& LATIN CAPITAL LETTER A WITH BREVE AND GRAVE +1EB2 ; Uppercase # L& LATIN CAPITAL LETTER A WITH BREVE AND HOOK ABOVE +1EB4 ; Uppercase # L& LATIN CAPITAL LETTER A WITH BREVE AND TILDE +1EB6 ; Uppercase # L& LATIN CAPITAL LETTER A WITH BREVE AND DOT BELOW +1EB8 ; Uppercase # L& LATIN CAPITAL LETTER E WITH DOT BELOW +1EBA ; Uppercase # L& LATIN CAPITAL LETTER E WITH HOOK ABOVE +1EBC ; Uppercase # L& LATIN CAPITAL LETTER E WITH TILDE +1EBE ; Uppercase # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND ACUTE +1EC0 ; Uppercase # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND GRAVE +1EC2 ; Uppercase # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE +1EC4 ; Uppercase # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND TILDE +1EC6 ; Uppercase # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND DOT BELOW +1EC8 ; Uppercase # L& LATIN CAPITAL LETTER I WITH HOOK ABOVE +1ECA ; Uppercase # L& LATIN CAPITAL LETTER I WITH DOT BELOW +1ECC ; Uppercase # L& LATIN CAPITAL LETTER O WITH DOT BELOW +1ECE ; Uppercase # L& LATIN CAPITAL LETTER O WITH HOOK ABOVE +1ED0 ; Uppercase # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND ACUTE +1ED2 ; Uppercase # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND GRAVE +1ED4 ; Uppercase # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE +1ED6 ; Uppercase # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND TILDE +1ED8 ; Uppercase # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND DOT BELOW +1EDA ; Uppercase # L& LATIN CAPITAL LETTER O WITH HORN AND ACUTE +1EDC ; Uppercase # L& LATIN CAPITAL LETTER O WITH HORN AND GRAVE +1EDE ; Uppercase # L& LATIN CAPITAL LETTER O WITH HORN AND 
HOOK ABOVE +1EE0 ; Uppercase # L& LATIN CAPITAL LETTER O WITH HORN AND TILDE +1EE2 ; Uppercase # L& LATIN CAPITAL LETTER O WITH HORN AND DOT BELOW +1EE4 ; Uppercase # L& LATIN CAPITAL LETTER U WITH DOT BELOW +1EE6 ; Uppercase # L& LATIN CAPITAL LETTER U WITH HOOK ABOVE +1EE8 ; Uppercase # L& LATIN CAPITAL LETTER U WITH HORN AND ACUTE +1EEA ; Uppercase # L& LATIN CAPITAL LETTER U WITH HORN AND GRAVE +1EEC ; Uppercase # L& LATIN CAPITAL LETTER U WITH HORN AND HOOK ABOVE +1EEE ; Uppercase # L& LATIN CAPITAL LETTER U WITH HORN AND TILDE +1EF0 ; Uppercase # L& LATIN CAPITAL LETTER U WITH HORN AND DOT BELOW +1EF2 ; Uppercase # L& LATIN CAPITAL LETTER Y WITH GRAVE +1EF4 ; Uppercase # L& LATIN CAPITAL LETTER Y WITH DOT BELOW +1EF6 ; Uppercase # L& LATIN CAPITAL LETTER Y WITH HOOK ABOVE +1EF8 ; Uppercase # L& LATIN CAPITAL LETTER Y WITH TILDE +1EFA ; Uppercase # L& LATIN CAPITAL LETTER MIDDLE-WELSH LL +1EFC ; Uppercase # L& LATIN CAPITAL LETTER MIDDLE-WELSH V +1EFE ; Uppercase # L& LATIN CAPITAL LETTER Y WITH LOOP +1F08..1F0F ; Uppercase # L& [8] GREEK CAPITAL LETTER ALPHA WITH PSILI..GREEK CAPITAL LETTER ALPHA WITH DASIA AND PERISPOMENI +1F18..1F1D ; Uppercase # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA +1F28..1F2F ; Uppercase # L& [8] GREEK CAPITAL LETTER ETA WITH PSILI..GREEK CAPITAL LETTER ETA WITH DASIA AND PERISPOMENI +1F38..1F3F ; Uppercase # L& [8] GREEK CAPITAL LETTER IOTA WITH PSILI..GREEK CAPITAL LETTER IOTA WITH DASIA AND PERISPOMENI +1F48..1F4D ; Uppercase # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F59 ; Uppercase # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; Uppercase # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; Uppercase # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F ; Uppercase # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F68..1F6F ; Uppercase # L& [8] GREEK CAPITAL LETTER OMEGA WITH 
PSILI..GREEK CAPITAL LETTER OMEGA WITH DASIA AND PERISPOMENI +1FB8..1FBB ; Uppercase # L& [4] GREEK CAPITAL LETTER ALPHA WITH VRACHY..GREEK CAPITAL LETTER ALPHA WITH OXIA +1FC8..1FCB ; Uppercase # L& [4] GREEK CAPITAL LETTER EPSILON WITH VARIA..GREEK CAPITAL LETTER ETA WITH OXIA +1FD8..1FDB ; Uppercase # L& [4] GREEK CAPITAL LETTER IOTA WITH VRACHY..GREEK CAPITAL LETTER IOTA WITH OXIA +1FE8..1FEC ; Uppercase # L& [5] GREEK CAPITAL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FF8..1FFB ; Uppercase # L& [4] GREEK CAPITAL LETTER OMICRON WITH VARIA..GREEK CAPITAL LETTER OMEGA WITH OXIA +2102 ; Uppercase # L& DOUBLE-STRUCK CAPITAL C +2107 ; Uppercase # L& EULER CONSTANT +210B..210D ; Uppercase # L& [3] SCRIPT CAPITAL H..DOUBLE-STRUCK CAPITAL H +2110..2112 ; Uppercase # L& [3] SCRIPT CAPITAL I..SCRIPT CAPITAL L +2115 ; Uppercase # L& DOUBLE-STRUCK CAPITAL N +2119..211D ; Uppercase # L& [5] DOUBLE-STRUCK CAPITAL P..DOUBLE-STRUCK CAPITAL R +2124 ; Uppercase # L& DOUBLE-STRUCK CAPITAL Z +2126 ; Uppercase # L& OHM SIGN +2128 ; Uppercase # L& BLACK-LETTER CAPITAL Z +212A..212D ; Uppercase # L& [4] KELVIN SIGN..BLACK-LETTER CAPITAL C +2130..2133 ; Uppercase # L& [4] SCRIPT CAPITAL E..SCRIPT CAPITAL M +213E..213F ; Uppercase # L& [2] DOUBLE-STRUCK CAPITAL GAMMA..DOUBLE-STRUCK CAPITAL PI +2145 ; Uppercase # L& DOUBLE-STRUCK ITALIC CAPITAL D +2160..216F ; Uppercase # Nl [16] ROMAN NUMERAL ONE..ROMAN NUMERAL ONE THOUSAND +2183 ; Uppercase # L& ROMAN NUMERAL REVERSED ONE HUNDRED +24B6..24CF ; Uppercase # So [26] CIRCLED LATIN CAPITAL LETTER A..CIRCLED LATIN CAPITAL LETTER Z +2C00..2C2E ; Uppercase # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC CAPITAL LETTER LATINATE MYSLITE +2C60 ; Uppercase # L& LATIN CAPITAL LETTER L WITH DOUBLE BAR +2C62..2C64 ; Uppercase # L& [3] LATIN CAPITAL LETTER L WITH MIDDLE TILDE..LATIN CAPITAL LETTER R WITH TAIL +2C67 ; Uppercase # L& LATIN CAPITAL LETTER H WITH DESCENDER +2C69 ; Uppercase # L& LATIN CAPITAL LETTER K WITH 
DESCENDER +2C6B ; Uppercase # L& LATIN CAPITAL LETTER Z WITH DESCENDER +2C6D..2C70 ; Uppercase # L& [4] LATIN CAPITAL LETTER ALPHA..LATIN CAPITAL LETTER TURNED ALPHA +2C72 ; Uppercase # L& LATIN CAPITAL LETTER W WITH HOOK +2C75 ; Uppercase # L& LATIN CAPITAL LETTER HALF H +2C7E..2C80 ; Uppercase # L& [3] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC CAPITAL LETTER ALFA +2C82 ; Uppercase # L& COPTIC CAPITAL LETTER VIDA +2C84 ; Uppercase # L& COPTIC CAPITAL LETTER GAMMA +2C86 ; Uppercase # L& COPTIC CAPITAL LETTER DALDA +2C88 ; Uppercase # L& COPTIC CAPITAL LETTER EIE +2C8A ; Uppercase # L& COPTIC CAPITAL LETTER SOU +2C8C ; Uppercase # L& COPTIC CAPITAL LETTER ZATA +2C8E ; Uppercase # L& COPTIC CAPITAL LETTER HATE +2C90 ; Uppercase # L& COPTIC CAPITAL LETTER THETHE +2C92 ; Uppercase # L& COPTIC CAPITAL LETTER IAUDA +2C94 ; Uppercase # L& COPTIC CAPITAL LETTER KAPA +2C96 ; Uppercase # L& COPTIC CAPITAL LETTER LAULA +2C98 ; Uppercase # L& COPTIC CAPITAL LETTER MI +2C9A ; Uppercase # L& COPTIC CAPITAL LETTER NI +2C9C ; Uppercase # L& COPTIC CAPITAL LETTER KSI +2C9E ; Uppercase # L& COPTIC CAPITAL LETTER O +2CA0 ; Uppercase # L& COPTIC CAPITAL LETTER PI +2CA2 ; Uppercase # L& COPTIC CAPITAL LETTER RO +2CA4 ; Uppercase # L& COPTIC CAPITAL LETTER SIMA +2CA6 ; Uppercase # L& COPTIC CAPITAL LETTER TAU +2CA8 ; Uppercase # L& COPTIC CAPITAL LETTER UA +2CAA ; Uppercase # L& COPTIC CAPITAL LETTER FI +2CAC ; Uppercase # L& COPTIC CAPITAL LETTER KHI +2CAE ; Uppercase # L& COPTIC CAPITAL LETTER PSI +2CB0 ; Uppercase # L& COPTIC CAPITAL LETTER OOU +2CB2 ; Uppercase # L& COPTIC CAPITAL LETTER DIALECT-P ALEF +2CB4 ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC AIN +2CB6 ; Uppercase # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC EIE +2CB8 ; Uppercase # L& COPTIC CAPITAL LETTER DIALECT-P KAPA +2CBA ; Uppercase # L& COPTIC CAPITAL LETTER DIALECT-P NI +2CBC ; Uppercase # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC NI +2CBE ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC OOU +2CC0 ; Uppercase # 
L& COPTIC CAPITAL LETTER SAMPI +2CC2 ; Uppercase # L& COPTIC CAPITAL LETTER CROSSED SHEI +2CC4 ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC SHEI +2CC6 ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC ESH +2CC8 ; Uppercase # L& COPTIC CAPITAL LETTER AKHMIMIC KHEI +2CCA ; Uppercase # L& COPTIC CAPITAL LETTER DIALECT-P HORI +2CCC ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC HORI +2CCE ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC HA +2CD0 ; Uppercase # L& COPTIC CAPITAL LETTER L-SHAPED HA +2CD2 ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC HEI +2CD4 ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC HAT +2CD6 ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC GANGIA +2CD8 ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC DJA +2CDA ; Uppercase # L& COPTIC CAPITAL LETTER OLD COPTIC SHIMA +2CDC ; Uppercase # L& COPTIC CAPITAL LETTER OLD NUBIAN SHIMA +2CDE ; Uppercase # L& COPTIC CAPITAL LETTER OLD NUBIAN NGI +2CE0 ; Uppercase # L& COPTIC CAPITAL LETTER OLD NUBIAN NYI +2CE2 ; Uppercase # L& COPTIC CAPITAL LETTER OLD NUBIAN WAU +2CEB ; Uppercase # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI +2CED ; Uppercase # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC GANGIA +A640 ; Uppercase # L& CYRILLIC CAPITAL LETTER ZEMLYA +A642 ; Uppercase # L& CYRILLIC CAPITAL LETTER DZELO +A644 ; Uppercase # L& CYRILLIC CAPITAL LETTER REVERSED DZE +A646 ; Uppercase # L& CYRILLIC CAPITAL LETTER IOTA +A648 ; Uppercase # L& CYRILLIC CAPITAL LETTER DJERV +A64A ; Uppercase # L& CYRILLIC CAPITAL LETTER MONOGRAPH UK +A64C ; Uppercase # L& CYRILLIC CAPITAL LETTER BROAD OMEGA +A64E ; Uppercase # L& CYRILLIC CAPITAL LETTER NEUTRAL YER +A650 ; Uppercase # L& CYRILLIC CAPITAL LETTER YERU WITH BACK YER +A652 ; Uppercase # L& CYRILLIC CAPITAL LETTER IOTIFIED YAT +A654 ; Uppercase # L& CYRILLIC CAPITAL LETTER REVERSED YU +A656 ; Uppercase # L& CYRILLIC CAPITAL LETTER IOTIFIED A +A658 ; Uppercase # L& CYRILLIC CAPITAL LETTER CLOSED LITTLE YUS +A65A ; Uppercase # L& CYRILLIC CAPITAL LETTER BLENDED YUS 
+A65C ; Uppercase # L& CYRILLIC CAPITAL LETTER IOTIFIED CLOSED LITTLE YUS +A65E ; Uppercase # L& CYRILLIC CAPITAL LETTER YN +A662 ; Uppercase # L& CYRILLIC CAPITAL LETTER SOFT DE +A664 ; Uppercase # L& CYRILLIC CAPITAL LETTER SOFT EL +A666 ; Uppercase # L& CYRILLIC CAPITAL LETTER SOFT EM +A668 ; Uppercase # L& CYRILLIC CAPITAL LETTER MONOCULAR O +A66A ; Uppercase # L& CYRILLIC CAPITAL LETTER BINOCULAR O +A66C ; Uppercase # L& CYRILLIC CAPITAL LETTER DOUBLE MONOCULAR O +A680 ; Uppercase # L& CYRILLIC CAPITAL LETTER DWE +A682 ; Uppercase # L& CYRILLIC CAPITAL LETTER DZWE +A684 ; Uppercase # L& CYRILLIC CAPITAL LETTER ZHWE +A686 ; Uppercase # L& CYRILLIC CAPITAL LETTER CCHE +A688 ; Uppercase # L& CYRILLIC CAPITAL LETTER DZZE +A68A ; Uppercase # L& CYRILLIC CAPITAL LETTER TE WITH MIDDLE HOOK +A68C ; Uppercase # L& CYRILLIC CAPITAL LETTER TWE +A68E ; Uppercase # L& CYRILLIC CAPITAL LETTER TSWE +A690 ; Uppercase # L& CYRILLIC CAPITAL LETTER TSSE +A692 ; Uppercase # L& CYRILLIC CAPITAL LETTER TCHE +A694 ; Uppercase # L& CYRILLIC CAPITAL LETTER HWE +A696 ; Uppercase # L& CYRILLIC CAPITAL LETTER SHWE +A722 ; Uppercase # L& LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF +A724 ; Uppercase # L& LATIN CAPITAL LETTER EGYPTOLOGICAL AIN +A726 ; Uppercase # L& LATIN CAPITAL LETTER HENG +A728 ; Uppercase # L& LATIN CAPITAL LETTER TZ +A72A ; Uppercase # L& LATIN CAPITAL LETTER TRESILLO +A72C ; Uppercase # L& LATIN CAPITAL LETTER CUATRILLO +A72E ; Uppercase # L& LATIN CAPITAL LETTER CUATRILLO WITH COMMA +A732 ; Uppercase # L& LATIN CAPITAL LETTER AA +A734 ; Uppercase # L& LATIN CAPITAL LETTER AO +A736 ; Uppercase # L& LATIN CAPITAL LETTER AU +A738 ; Uppercase # L& LATIN CAPITAL LETTER AV +A73A ; Uppercase # L& LATIN CAPITAL LETTER AV WITH HORIZONTAL BAR +A73C ; Uppercase # L& LATIN CAPITAL LETTER AY +A73E ; Uppercase # L& LATIN CAPITAL LETTER REVERSED C WITH DOT +A740 ; Uppercase # L& LATIN CAPITAL LETTER K WITH STROKE +A742 ; Uppercase # L& LATIN CAPITAL LETTER K WITH DIAGONAL STROKE +A744 
; Uppercase # L& LATIN CAPITAL LETTER K WITH STROKE AND DIAGONAL STROKE +A746 ; Uppercase # L& LATIN CAPITAL LETTER BROKEN L +A748 ; Uppercase # L& LATIN CAPITAL LETTER L WITH HIGH STROKE +A74A ; Uppercase # L& LATIN CAPITAL LETTER O WITH LONG STROKE OVERLAY +A74C ; Uppercase # L& LATIN CAPITAL LETTER O WITH LOOP +A74E ; Uppercase # L& LATIN CAPITAL LETTER OO +A750 ; Uppercase # L& LATIN CAPITAL LETTER P WITH STROKE THROUGH DESCENDER +A752 ; Uppercase # L& LATIN CAPITAL LETTER P WITH FLOURISH +A754 ; Uppercase # L& LATIN CAPITAL LETTER P WITH SQUIRREL TAIL +A756 ; Uppercase # L& LATIN CAPITAL LETTER Q WITH STROKE THROUGH DESCENDER +A758 ; Uppercase # L& LATIN CAPITAL LETTER Q WITH DIAGONAL STROKE +A75A ; Uppercase # L& LATIN CAPITAL LETTER R ROTUNDA +A75C ; Uppercase # L& LATIN CAPITAL LETTER RUM ROTUNDA +A75E ; Uppercase # L& LATIN CAPITAL LETTER V WITH DIAGONAL STROKE +A760 ; Uppercase # L& LATIN CAPITAL LETTER VY +A762 ; Uppercase # L& LATIN CAPITAL LETTER VISIGOTHIC Z +A764 ; Uppercase # L& LATIN CAPITAL LETTER THORN WITH STROKE +A766 ; Uppercase # L& LATIN CAPITAL LETTER THORN WITH STROKE THROUGH DESCENDER +A768 ; Uppercase # L& LATIN CAPITAL LETTER VEND +A76A ; Uppercase # L& LATIN CAPITAL LETTER ET +A76C ; Uppercase # L& LATIN CAPITAL LETTER IS +A76E ; Uppercase # L& LATIN CAPITAL LETTER CON +A779 ; Uppercase # L& LATIN CAPITAL LETTER INSULAR D +A77B ; Uppercase # L& LATIN CAPITAL LETTER INSULAR F +A77D..A77E ; Uppercase # L& [2] LATIN CAPITAL LETTER INSULAR G..LATIN CAPITAL LETTER TURNED INSULAR G +A780 ; Uppercase # L& LATIN CAPITAL LETTER TURNED L +A782 ; Uppercase # L& LATIN CAPITAL LETTER INSULAR R +A784 ; Uppercase # L& LATIN CAPITAL LETTER INSULAR S +A786 ; Uppercase # L& LATIN CAPITAL LETTER INSULAR T +A78B ; Uppercase # L& LATIN CAPITAL LETTER SALTILLO +FF21..FF3A ; Uppercase # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN CAPITAL LETTER Z +10400..10427 ; Uppercase # L& [40] DESERET CAPITAL LETTER LONG I..DESERET CAPITAL LETTER EW 
+1D400..1D419 ; Uppercase # L& [26] MATHEMATICAL BOLD CAPITAL A..MATHEMATICAL BOLD CAPITAL Z
+1D434..1D44D ; Uppercase # L& [26] MATHEMATICAL ITALIC CAPITAL A..MATHEMATICAL ITALIC CAPITAL Z
+1D468..1D481 ; Uppercase # L& [26] MATHEMATICAL BOLD ITALIC CAPITAL A..MATHEMATICAL BOLD ITALIC CAPITAL Z
+1D49C ; Uppercase # L& MATHEMATICAL SCRIPT CAPITAL A
+1D49E..1D49F ; Uppercase # L& [2] MATHEMATICAL SCRIPT CAPITAL C..MATHEMATICAL SCRIPT CAPITAL D
+1D4A2 ; Uppercase # L& MATHEMATICAL SCRIPT CAPITAL G
+1D4A5..1D4A6 ; Uppercase # L& [2] MATHEMATICAL SCRIPT CAPITAL J..MATHEMATICAL SCRIPT CAPITAL K
+1D4A9..1D4AC ; Uppercase # L& [4] MATHEMATICAL SCRIPT CAPITAL N..MATHEMATICAL SCRIPT CAPITAL Q
+1D4AE..1D4B5 ; Uppercase # L& [8] MATHEMATICAL SCRIPT CAPITAL S..MATHEMATICAL SCRIPT CAPITAL Z
+1D4D0..1D4E9 ; Uppercase # L& [26] MATHEMATICAL BOLD SCRIPT CAPITAL A..MATHEMATICAL BOLD SCRIPT CAPITAL Z
+1D504..1D505 ; Uppercase # L& [2] MATHEMATICAL FRAKTUR CAPITAL A..MATHEMATICAL FRAKTUR CAPITAL B
+1D507..1D50A ; Uppercase # L& [4] MATHEMATICAL FRAKTUR CAPITAL D..MATHEMATICAL FRAKTUR CAPITAL G
+1D50D..1D514 ; Uppercase # L& [8] MATHEMATICAL FRAKTUR CAPITAL J..MATHEMATICAL FRAKTUR CAPITAL Q
+1D516..1D51C ; Uppercase # L& [7] MATHEMATICAL FRAKTUR CAPITAL S..MATHEMATICAL FRAKTUR CAPITAL Y
+1D538..1D539 ; Uppercase # L& [2] MATHEMATICAL DOUBLE-STRUCK CAPITAL A..MATHEMATICAL DOUBLE-STRUCK CAPITAL B
+1D53B..1D53E ; Uppercase # L& [4] MATHEMATICAL DOUBLE-STRUCK CAPITAL D..MATHEMATICAL DOUBLE-STRUCK CAPITAL G
+1D540..1D544 ; Uppercase # L& [5] MATHEMATICAL DOUBLE-STRUCK CAPITAL I..MATHEMATICAL DOUBLE-STRUCK CAPITAL M
+1D546 ; Uppercase # L& MATHEMATICAL DOUBLE-STRUCK CAPITAL O
+1D54A..1D550 ; Uppercase # L& [7] MATHEMATICAL DOUBLE-STRUCK CAPITAL S..MATHEMATICAL DOUBLE-STRUCK CAPITAL Y
+1D56C..1D585 ; Uppercase # L& [26] MATHEMATICAL BOLD FRAKTUR CAPITAL A..MATHEMATICAL BOLD FRAKTUR CAPITAL Z
+1D5A0..1D5B9 ; Uppercase # L& [26] MATHEMATICAL SANS-SERIF CAPITAL A..MATHEMATICAL SANS-SERIF CAPITAL Z
+1D5D4..1D5ED ; Uppercase # L& [26] MATHEMATICAL SANS-SERIF BOLD CAPITAL A..MATHEMATICAL SANS-SERIF BOLD CAPITAL Z
+1D608..1D621 ; Uppercase # L& [26] MATHEMATICAL SANS-SERIF ITALIC CAPITAL A..MATHEMATICAL SANS-SERIF ITALIC CAPITAL Z
+1D63C..1D655 ; Uppercase # L& [26] MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL A..MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL Z
+1D670..1D689 ; Uppercase # L& [26] MATHEMATICAL MONOSPACE CAPITAL A..MATHEMATICAL MONOSPACE CAPITAL Z
+1D6A8..1D6C0 ; Uppercase # L& [25] MATHEMATICAL BOLD CAPITAL ALPHA..MATHEMATICAL BOLD CAPITAL OMEGA
+1D6E2..1D6FA ; Uppercase # L& [25] MATHEMATICAL ITALIC CAPITAL ALPHA..MATHEMATICAL ITALIC CAPITAL OMEGA
+1D71C..1D734 ; Uppercase # L& [25] MATHEMATICAL BOLD ITALIC CAPITAL ALPHA..MATHEMATICAL BOLD ITALIC CAPITAL OMEGA
+1D756..1D76E ; Uppercase # L& [25] MATHEMATICAL SANS-SERIF BOLD CAPITAL ALPHA..MATHEMATICAL SANS-SERIF BOLD CAPITAL OMEGA
+1D790..1D7A8 ; Uppercase # L& [25] MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL ALPHA..MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMEGA
+1D7CA ; Uppercase # L& MATHEMATICAL BOLD CAPITAL DIGAMMA
+
+# Total code points: 1469
+
+# ================================================
+
+# Derived Property: Cased (Cased)
+# As defined by Unicode Standard Definition D120
+# C has the Lowercase or Uppercase property or has a General_Category value of Titlecase_Letter.
+ +0041..005A ; Cased # L& [26] LATIN CAPITAL LETTER A..LATIN CAPITAL LETTER Z +0061..007A ; Cased # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z +00AA ; Cased # L& FEMININE ORDINAL INDICATOR +00B5 ; Cased # L& MICRO SIGN +00BA ; Cased # L& MASCULINE ORDINAL INDICATOR +00C0..00D6 ; Cased # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS +00D8..00F6 ; Cased # L& [31] LATIN CAPITAL LETTER O WITH STROKE..LATIN SMALL LETTER O WITH DIAERESIS +00F8..01BA ; Cased # L& [195] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER EZH WITH TAIL +01BC..01BF ; Cased # L& [4] LATIN CAPITAL LETTER TONE FIVE..LATIN LETTER WYNN +01C4..0293 ; Cased # L& [208] LATIN CAPITAL LETTER DZ WITH CARON..LATIN SMALL LETTER EZH WITH CURL +0295..02AF ; Cased # L& [27] LATIN LETTER PHARYNGEAL VOICED FRICATIVE..LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL +02B0..02B8 ; Cased # Lm [9] MODIFIER LETTER SMALL H..MODIFIER LETTER SMALL Y +02C0..02C1 ; Cased # Lm [2] MODIFIER LETTER GLOTTAL STOP..MODIFIER LETTER REVERSED GLOTTAL STOP +02E0..02E4 ; Cased # Lm [5] MODIFIER LETTER SMALL GAMMA..MODIFIER LETTER SMALL REVERSED GLOTTAL STOP +0345 ; Cased # Mn COMBINING GREEK YPOGEGRAMMENI +0370..0373 ; Cased # L& [4] GREEK CAPITAL LETTER HETA..GREEK SMALL LETTER ARCHAIC SAMPI +0376..0377 ; Cased # L& [2] GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA..GREEK SMALL LETTER PAMPHYLIAN DIGAMMA +037A ; Cased # Lm GREEK YPOGEGRAMMENI +037B..037D ; Cased # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL +0386 ; Cased # L& GREEK CAPITAL LETTER ALPHA WITH TONOS +0388..038A ; Cased # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS +038C ; Cased # L& GREEK CAPITAL LETTER OMICRON WITH TONOS +038E..03A1 ; Cased # L& [20] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER RHO +03A3..03F5 ; Cased # L& [83] GREEK CAPITAL LETTER SIGMA..GREEK LUNATE EPSILON SYMBOL +03F7..0481 ; Cased # L& [139] 
GREEK CAPITAL LETTER SHO..CYRILLIC SMALL LETTER KOPPA +048A..0525 ; Cased # L& [156] CYRILLIC CAPITAL LETTER SHORT I WITH TAIL..CYRILLIC SMALL LETTER PE WITH DESCENDER +0531..0556 ; Cased # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH +0561..0587 ; Cased # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN +10A0..10C5 ; Cased # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE +1D00..1D2B ; Cased # L& [44] LATIN LETTER SMALL CAPITAL A..CYRILLIC LETTER SMALL CAPITAL EL +1D2C..1D61 ; Cased # Lm [54] MODIFIER LETTER CAPITAL A..MODIFIER LETTER SMALL CHI +1D62..1D77 ; Cased # L& [22] LATIN SUBSCRIPT SMALL LETTER I..LATIN SMALL LETTER TURNED G +1D78 ; Cased # Lm MODIFIER LETTER CYRILLIC EN +1D79..1D9A ; Cased # L& [34] LATIN SMALL LETTER INSULAR G..LATIN SMALL LETTER EZH WITH RETROFLEX HOOK +1D9B..1DBF ; Cased # Lm [37] MODIFIER LETTER SMALL TURNED ALPHA..MODIFIER LETTER SMALL THETA +1E00..1F15 ; Cased # L& [278] LATIN CAPITAL LETTER A WITH RING BELOW..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA +1F18..1F1D ; Cased # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA +1F20..1F45 ; Cased # L& [38] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA +1F48..1F4D ; Cased # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F50..1F57 ; Cased # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F59 ; Cased # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; Cased # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; Cased # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F..1F7D ; Cased # L& [31] GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI..GREEK SMALL LETTER OMEGA WITH OXIA +1F80..1FB4 ; Cased # L& [53] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB6..1FBC 
; Cased # L& [7] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI +1FBE ; Cased # L& GREEK PROSGEGRAMMENI +1FC2..1FC4 ; Cased # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC6..1FCC ; Cased # L& [7] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK CAPITAL LETTER ETA WITH PROSGEGRAMMENI +1FD0..1FD3 ; Cased # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA +1FD6..1FDB ; Cased # L& [6] GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK CAPITAL LETTER IOTA WITH OXIA +1FE0..1FEC ; Cased # L& [13] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FF2..1FF4 ; Cased # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF6..1FFC ; Cased # L& [7] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI +2090..2094 ; Cased # Lm [5] LATIN SUBSCRIPT SMALL LETTER A..LATIN SUBSCRIPT SMALL LETTER SCHWA +2102 ; Cased # L& DOUBLE-STRUCK CAPITAL C +2107 ; Cased # L& EULER CONSTANT +210A..2113 ; Cased # L& [10] SCRIPT SMALL G..SCRIPT SMALL L +2115 ; Cased # L& DOUBLE-STRUCK CAPITAL N +2119..211D ; Cased # L& [5] DOUBLE-STRUCK CAPITAL P..DOUBLE-STRUCK CAPITAL R +2124 ; Cased # L& DOUBLE-STRUCK CAPITAL Z +2126 ; Cased # L& OHM SIGN +2128 ; Cased # L& BLACK-LETTER CAPITAL Z +212A..212D ; Cased # L& [4] KELVIN SIGN..BLACK-LETTER CAPITAL C +212F..2134 ; Cased # L& [6] SCRIPT SMALL E..SCRIPT SMALL O +2139 ; Cased # L& INFORMATION SOURCE +213C..213F ; Cased # L& [4] DOUBLE-STRUCK SMALL PI..DOUBLE-STRUCK CAPITAL PI +2145..2149 ; Cased # L& [5] DOUBLE-STRUCK ITALIC CAPITAL D..DOUBLE-STRUCK ITALIC SMALL J +214E ; Cased # L& TURNED SMALL F +2160..217F ; Cased # Nl [32] ROMAN NUMERAL ONE..SMALL ROMAN NUMERAL ONE THOUSAND +2183..2184 ; Cased # L& [2] ROMAN NUMERAL REVERSED ONE HUNDRED..LATIN SMALL LETTER REVERSED C +24B6..24E9 
; Cased # So [52] CIRCLED LATIN CAPITAL LETTER A..CIRCLED LATIN SMALL LETTER Z +2C00..2C2E ; Cased # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC CAPITAL LETTER LATINATE MYSLITE +2C30..2C5E ; Cased # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE +2C60..2C7C ; Cased # L& [29] LATIN CAPITAL LETTER L WITH DOUBLE BAR..LATIN SUBSCRIPT SMALL LETTER J +2C7D ; Cased # Lm MODIFIER LETTER CAPITAL V +2C7E..2CE4 ; Cased # L& [103] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC SYMBOL KAI +2CEB..2CEE ; Cased # L& [4] COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI..COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA +2D00..2D25 ; Cased # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE +A640..A65F ; Cased # L& [32] CYRILLIC CAPITAL LETTER ZEMLYA..CYRILLIC SMALL LETTER YN +A662..A66D ; Cased # L& [12] CYRILLIC CAPITAL LETTER SOFT DE..CYRILLIC SMALL LETTER DOUBLE MONOCULAR O +A680..A697 ; Cased # L& [24] CYRILLIC CAPITAL LETTER DWE..CYRILLIC SMALL LETTER SHWE +A722..A76F ; Cased # L& [78] LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF..LATIN SMALL LETTER CON +A770 ; Cased # Lm MODIFIER LETTER US +A771..A787 ; Cased # L& [23] LATIN SMALL LETTER DUM..LATIN SMALL LETTER INSULAR T +A78B..A78C ; Cased # L& [2] LATIN CAPITAL LETTER SALTILLO..LATIN SMALL LETTER SALTILLO +FB00..FB06 ; Cased # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; Cased # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FF21..FF3A ; Cased # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN CAPITAL LETTER Z +FF41..FF5A ; Cased # L& [26] FULLWIDTH LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z +10400..1044F ; Cased # L& [80] DESERET CAPITAL LETTER LONG I..DESERET SMALL LETTER EW +1D400..1D454 ; Cased # L& [85] MATHEMATICAL BOLD CAPITAL A..MATHEMATICAL ITALIC SMALL G +1D456..1D49C ; Cased # L& [71] MATHEMATICAL ITALIC SMALL I..MATHEMATICAL SCRIPT CAPITAL A +1D49E..1D49F ; Cased # L& [2] MATHEMATICAL SCRIPT CAPITAL C..MATHEMATICAL 
SCRIPT CAPITAL D
+1D4A2 ; Cased # L& MATHEMATICAL SCRIPT CAPITAL G
+1D4A5..1D4A6 ; Cased # L& [2] MATHEMATICAL SCRIPT CAPITAL J..MATHEMATICAL SCRIPT CAPITAL K
+1D4A9..1D4AC ; Cased # L& [4] MATHEMATICAL SCRIPT CAPITAL N..MATHEMATICAL SCRIPT CAPITAL Q
+1D4AE..1D4B9 ; Cased # L& [12] MATHEMATICAL SCRIPT CAPITAL S..MATHEMATICAL SCRIPT SMALL D
+1D4BB ; Cased # L& MATHEMATICAL SCRIPT SMALL F
+1D4BD..1D4C3 ; Cased # L& [7] MATHEMATICAL SCRIPT SMALL H..MATHEMATICAL SCRIPT SMALL N
+1D4C5..1D505 ; Cased # L& [65] MATHEMATICAL SCRIPT SMALL P..MATHEMATICAL FRAKTUR CAPITAL B
+1D507..1D50A ; Cased # L& [4] MATHEMATICAL FRAKTUR CAPITAL D..MATHEMATICAL FRAKTUR CAPITAL G
+1D50D..1D514 ; Cased # L& [8] MATHEMATICAL FRAKTUR CAPITAL J..MATHEMATICAL FRAKTUR CAPITAL Q
+1D516..1D51C ; Cased # L& [7] MATHEMATICAL FRAKTUR CAPITAL S..MATHEMATICAL FRAKTUR CAPITAL Y
+1D51E..1D539 ; Cased # L& [28] MATHEMATICAL FRAKTUR SMALL A..MATHEMATICAL DOUBLE-STRUCK CAPITAL B
+1D53B..1D53E ; Cased # L& [4] MATHEMATICAL DOUBLE-STRUCK CAPITAL D..MATHEMATICAL DOUBLE-STRUCK CAPITAL G
+1D540..1D544 ; Cased # L& [5] MATHEMATICAL DOUBLE-STRUCK CAPITAL I..MATHEMATICAL DOUBLE-STRUCK CAPITAL M
+1D546 ; Cased # L& MATHEMATICAL DOUBLE-STRUCK CAPITAL O
+1D54A..1D550 ; Cased # L& [7] MATHEMATICAL DOUBLE-STRUCK CAPITAL S..MATHEMATICAL DOUBLE-STRUCK CAPITAL Y
+1D552..1D6A5 ; Cased # L& [340] MATHEMATICAL DOUBLE-STRUCK SMALL A..MATHEMATICAL ITALIC SMALL DOTLESS J
+1D6A8..1D6C0 ; Cased # L& [25] MATHEMATICAL BOLD CAPITAL ALPHA..MATHEMATICAL BOLD CAPITAL OMEGA
+1D6C2..1D6DA ; Cased # L& [25] MATHEMATICAL BOLD SMALL ALPHA..MATHEMATICAL BOLD SMALL OMEGA
+1D6DC..1D6FA ; Cased # L& [31] MATHEMATICAL BOLD EPSILON SYMBOL..MATHEMATICAL ITALIC CAPITAL OMEGA
+1D6FC..1D714 ; Cased # L& [25] MATHEMATICAL ITALIC SMALL ALPHA..MATHEMATICAL ITALIC SMALL OMEGA
+1D716..1D734 ; Cased # L& [31] MATHEMATICAL ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD ITALIC CAPITAL OMEGA
+1D736..1D74E ; Cased # L& [25] MATHEMATICAL BOLD ITALIC SMALL ALPHA..MATHEMATICAL BOLD ITALIC SMALL OMEGA
+1D750..1D76E ; Cased # L& [31] MATHEMATICAL BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD CAPITAL OMEGA
+1D770..1D788 ; Cased # L& [25] MATHEMATICAL SANS-SERIF BOLD SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD SMALL OMEGA
+1D78A..1D7A8 ; Cased # L& [31] MATHEMATICAL SANS-SERIF BOLD EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMEGA
+1D7AA..1D7C2 ; Cased # L& [25] MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL OMEGA
+1D7C4..1D7CB ; Cased # L& [8] MATHEMATICAL SANS-SERIF BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD SMALL DIGAMMA
+
+# Total code points: 3408
+
+# ================================================
+
+# Derived Property: Case_Ignorable (CI)
+# As defined by Unicode Standard Definition D121
+# C is defined to be case-ignorable if
+# Word_Break(C) = MidLetter or MidNumLet, or
+# General_Category(C) = Nonspacing_Mark (Mn), Enclosing_Mark (Me), Format (Cf), Modifier_Letter (Lm), or Modifier_Symbol (Sk).
+ +0027 ; Case_Ignorable # Po APOSTROPHE +002E ; Case_Ignorable # Po FULL STOP +003A ; Case_Ignorable # Po COLON +005E ; Case_Ignorable # Sk CIRCUMFLEX ACCENT +0060 ; Case_Ignorable # Sk GRAVE ACCENT +00A8 ; Case_Ignorable # Sk DIAERESIS +00AD ; Case_Ignorable # Cf SOFT HYPHEN +00AF ; Case_Ignorable # Sk MACRON +00B4 ; Case_Ignorable # Sk ACUTE ACCENT +00B7 ; Case_Ignorable # Po MIDDLE DOT +00B8 ; Case_Ignorable # Sk CEDILLA +02B0..02C1 ; Case_Ignorable # Lm [18] MODIFIER LETTER SMALL H..MODIFIER LETTER REVERSED GLOTTAL STOP +02C2..02C5 ; Case_Ignorable # Sk [4] MODIFIER LETTER LEFT ARROWHEAD..MODIFIER LETTER DOWN ARROWHEAD +02C6..02D1 ; Case_Ignorable # Lm [12] MODIFIER LETTER CIRCUMFLEX ACCENT..MODIFIER LETTER HALF TRIANGULAR COLON +02D2..02DF ; Case_Ignorable # Sk [14] MODIFIER LETTER CENTRED RIGHT HALF RING..MODIFIER LETTER CROSS ACCENT +02E0..02E4 ; Case_Ignorable # Lm [5] MODIFIER LETTER SMALL GAMMA..MODIFIER LETTER SMALL REVERSED GLOTTAL STOP +02E5..02EB ; Case_Ignorable # Sk [7] MODIFIER LETTER EXTRA-HIGH TONE BAR..MODIFIER LETTER YANG DEPARTING TONE MARK +02EC ; Case_Ignorable # Lm MODIFIER LETTER VOICING +02ED ; Case_Ignorable # Sk MODIFIER LETTER UNASPIRATED +02EE ; Case_Ignorable # Lm MODIFIER LETTER DOUBLE APOSTROPHE +02EF..02FF ; Case_Ignorable # Sk [17] MODIFIER LETTER LOW DOWN ARROWHEAD..MODIFIER LETTER LOW LEFT ARROW +0300..036F ; Case_Ignorable # Mn [112] COMBINING GRAVE ACCENT..COMBINING LATIN SMALL LETTER X +0374 ; Case_Ignorable # Lm GREEK NUMERAL SIGN +0375 ; Case_Ignorable # Sk GREEK LOWER NUMERAL SIGN +037A ; Case_Ignorable # Lm GREEK YPOGEGRAMMENI +0384..0385 ; Case_Ignorable # Sk [2] GREEK TONOS..GREEK DIALYTIKA TONOS +0387 ; Case_Ignorable # Po GREEK ANO TELEIA +0483..0487 ; Case_Ignorable # Mn [5] COMBINING CYRILLIC TITLO..COMBINING CYRILLIC POKRYTIE +0488..0489 ; Case_Ignorable # Me [2] COMBINING CYRILLIC HUNDRED THOUSANDS SIGN..COMBINING CYRILLIC MILLIONS SIGN +0559 ; Case_Ignorable # Lm ARMENIAN MODIFIER LETTER LEFT HALF RING 
+0591..05BD ; Case_Ignorable # Mn [45] HEBREW ACCENT ETNAHTA..HEBREW POINT METEG
+05BF ; Case_Ignorable # Mn HEBREW POINT RAFE
+05C1..05C2 ; Case_Ignorable # Mn [2] HEBREW POINT SHIN DOT..HEBREW POINT SIN DOT
+05C4..05C5 ; Case_Ignorable # Mn [2] HEBREW MARK UPPER DOT..HEBREW MARK LOWER DOT
+05C7 ; Case_Ignorable # Mn HEBREW POINT QAMATS QATAN
+05F4 ; Case_Ignorable # Po HEBREW PUNCTUATION GERSHAYIM
+0600..0603 ; Case_Ignorable # Cf [4] ARABIC NUMBER SIGN..ARABIC SIGN SAFHA
+0610..061A ; Case_Ignorable # Mn [11] ARABIC SIGN SALLALLAHOU ALAYHE WASSALLAM..ARABIC SMALL KASRA
+0640 ; Case_Ignorable # Lm ARABIC TATWEEL
+064B..065E ; Case_Ignorable # Mn [20] ARABIC FATHATAN..ARABIC FATHA WITH TWO DOTS
+0670 ; Case_Ignorable # Mn ARABIC LETTER SUPERSCRIPT ALEF
+06D6..06DC ; Case_Ignorable # Mn [7] ARABIC SMALL HIGH LIGATURE SAD WITH LAM WITH ALEF MAKSURA..ARABIC SMALL HIGH SEEN
+06DD ; Case_Ignorable # Cf ARABIC END OF AYAH
+06DE ; Case_Ignorable # Me ARABIC START OF RUB EL HIZB
+06DF..06E4 ; Case_Ignorable # Mn [6] ARABIC SMALL HIGH ROUNDED ZERO..ARABIC SMALL HIGH MADDA
+06E5..06E6 ; Case_Ignorable # Lm [2] ARABIC SMALL WAW..ARABIC SMALL YEH
+06E7..06E8 ; Case_Ignorable # Mn [2] ARABIC SMALL HIGH YEH..ARABIC SMALL HIGH NOON
+06EA..06ED ; Case_Ignorable # Mn [4] ARABIC EMPTY CENTRE LOW STOP..ARABIC SMALL LOW MEEM
+070F ; Case_Ignorable # Cf SYRIAC ABBREVIATION MARK
+0711 ; Case_Ignorable # Mn SYRIAC LETTER SUPERSCRIPT ALAPH
+0730..074A ; Case_Ignorable # Mn [27] SYRIAC PTHAHA ABOVE..SYRIAC BARREKH
+07A6..07B0 ; Case_Ignorable # Mn [11] THAANA ABAFILI..THAANA SUKUN
+07EB..07F3 ; Case_Ignorable # Mn [9] NKO COMBINING SHORT HIGH TONE..NKO COMBINING DOUBLE DOT ABOVE
+07F4..07F5 ; Case_Ignorable # Lm [2] NKO HIGH TONE APOSTROPHE..NKO LOW TONE APOSTROPHE
+07FA ; Case_Ignorable # Lm NKO LAJANYALAN
+0816..0819 ; Case_Ignorable # Mn [4] SAMARITAN MARK IN..SAMARITAN MARK DAGESH
+081A ; Case_Ignorable # Lm SAMARITAN MODIFIER LETTER EPENTHETIC YUT
+081B..0823 ; Case_Ignorable # Mn [9] SAMARITAN MARK EPENTHETIC YUT..SAMARITAN VOWEL SIGN A
+0824 ; Case_Ignorable # Lm SAMARITAN MODIFIER LETTER SHORT A
+0825..0827 ; Case_Ignorable # Mn [3] SAMARITAN VOWEL SIGN SHORT A..SAMARITAN VOWEL SIGN U
+0828 ; Case_Ignorable # Lm SAMARITAN MODIFIER LETTER I
+0829..082D ; Case_Ignorable # Mn [5] SAMARITAN VOWEL SIGN LONG I..SAMARITAN MARK NEQUDAA
+0900..0902 ; Case_Ignorable # Mn [3] DEVANAGARI SIGN INVERTED CANDRABINDU..DEVANAGARI SIGN ANUSVARA
+093C ; Case_Ignorable # Mn DEVANAGARI SIGN NUKTA
+0941..0948 ; Case_Ignorable # Mn [8] DEVANAGARI VOWEL SIGN U..DEVANAGARI VOWEL SIGN AI
+094D ; Case_Ignorable # Mn DEVANAGARI SIGN VIRAMA
+0951..0955 ; Case_Ignorable # Mn [5] DEVANAGARI STRESS SIGN UDATTA..DEVANAGARI VOWEL SIGN CANDRA LONG E
+0962..0963 ; Case_Ignorable # Mn [2] DEVANAGARI VOWEL SIGN VOCALIC L..DEVANAGARI VOWEL SIGN VOCALIC LL
+0971 ; Case_Ignorable # Lm DEVANAGARI SIGN HIGH SPACING DOT
+0981 ; Case_Ignorable # Mn BENGALI SIGN CANDRABINDU
+09BC ; Case_Ignorable # Mn BENGALI SIGN NUKTA
+09C1..09C4 ; Case_Ignorable # Mn [4] BENGALI VOWEL SIGN U..BENGALI VOWEL SIGN VOCALIC RR
+09CD ; Case_Ignorable # Mn BENGALI SIGN VIRAMA
+09E2..09E3 ; Case_Ignorable # Mn [2] BENGALI VOWEL SIGN VOCALIC L..BENGALI VOWEL SIGN VOCALIC LL
+0A01..0A02 ; Case_Ignorable # Mn [2] GURMUKHI SIGN ADAK BINDI..GURMUKHI SIGN BINDI
+0A3C ; Case_Ignorable # Mn GURMUKHI SIGN NUKTA
+0A41..0A42 ; Case_Ignorable # Mn [2] GURMUKHI VOWEL SIGN U..GURMUKHI VOWEL SIGN UU
+0A47..0A48 ; Case_Ignorable # Mn [2] GURMUKHI VOWEL SIGN EE..GURMUKHI VOWEL SIGN AI
+0A4B..0A4D ; Case_Ignorable # Mn [3] GURMUKHI VOWEL SIGN OO..GURMUKHI SIGN VIRAMA
+0A51 ; Case_Ignorable # Mn GURMUKHI SIGN UDAAT
+0A70..0A71 ; Case_Ignorable # Mn [2] GURMUKHI TIPPI..GURMUKHI ADDAK
+0A75 ; Case_Ignorable # Mn GURMUKHI SIGN YAKASH
+0A81..0A82 ; Case_Ignorable # Mn [2] GUJARATI SIGN CANDRABINDU..GUJARATI SIGN ANUSVARA
+0ABC ; Case_Ignorable # Mn GUJARATI SIGN NUKTA
+0AC1..0AC5 ; Case_Ignorable # Mn [5] GUJARATI VOWEL SIGN U..GUJARATI VOWEL SIGN CANDRA E
+0AC7..0AC8 ; Case_Ignorable # Mn [2] GUJARATI VOWEL SIGN E..GUJARATI VOWEL SIGN AI
+0ACD ; Case_Ignorable # Mn GUJARATI SIGN VIRAMA
+0AE2..0AE3 ; Case_Ignorable # Mn [2] GUJARATI VOWEL SIGN VOCALIC L..GUJARATI VOWEL SIGN VOCALIC LL
+0B01 ; Case_Ignorable # Mn ORIYA SIGN CANDRABINDU
+0B3C ; Case_Ignorable # Mn ORIYA SIGN NUKTA
+0B3F ; Case_Ignorable # Mn ORIYA VOWEL SIGN I
+0B41..0B44 ; Case_Ignorable # Mn [4] ORIYA VOWEL SIGN U..ORIYA VOWEL SIGN VOCALIC RR
+0B4D ; Case_Ignorable # Mn ORIYA SIGN VIRAMA
+0B56 ; Case_Ignorable # Mn ORIYA AI LENGTH MARK
+0B62..0B63 ; Case_Ignorable # Mn [2] ORIYA VOWEL SIGN VOCALIC L..ORIYA VOWEL SIGN VOCALIC LL
+0B82 ; Case_Ignorable # Mn TAMIL SIGN ANUSVARA
+0BC0 ; Case_Ignorable # Mn TAMIL VOWEL SIGN II
+0BCD ; Case_Ignorable # Mn TAMIL SIGN VIRAMA
+0C3E..0C40 ; Case_Ignorable # Mn [3] TELUGU VOWEL SIGN AA..TELUGU VOWEL SIGN II
+0C46..0C48 ; Case_Ignorable # Mn [3] TELUGU VOWEL SIGN E..TELUGU VOWEL SIGN AI
+0C4A..0C4D ; Case_Ignorable # Mn [4] TELUGU VOWEL SIGN O..TELUGU SIGN VIRAMA
+0C55..0C56 ; Case_Ignorable # Mn [2] TELUGU LENGTH MARK..TELUGU AI LENGTH MARK
+0C62..0C63 ; Case_Ignorable # Mn [2] TELUGU VOWEL SIGN VOCALIC L..TELUGU VOWEL SIGN VOCALIC LL
+0CBC ; Case_Ignorable # Mn KANNADA SIGN NUKTA
+0CBF ; Case_Ignorable # Mn KANNADA VOWEL SIGN I
+0CC6 ; Case_Ignorable # Mn KANNADA VOWEL SIGN E
+0CCC..0CCD ; Case_Ignorable # Mn [2] KANNADA VOWEL SIGN AU..KANNADA SIGN VIRAMA
+0CE2..0CE3 ; Case_Ignorable # Mn [2] KANNADA VOWEL SIGN VOCALIC L..KANNADA VOWEL SIGN VOCALIC LL
+0D41..0D44 ; Case_Ignorable # Mn [4] MALAYALAM VOWEL SIGN U..MALAYALAM VOWEL SIGN VOCALIC RR
+0D4D ; Case_Ignorable # Mn MALAYALAM SIGN VIRAMA
+0D62..0D63 ; Case_Ignorable # Mn [2] MALAYALAM VOWEL SIGN VOCALIC L..MALAYALAM VOWEL SIGN VOCALIC LL
+0DCA ; Case_Ignorable # Mn SINHALA SIGN AL-LAKUNA
+0DD2..0DD4 ; Case_Ignorable # Mn [3] SINHALA VOWEL SIGN KETTI IS-PILLA..SINHALA VOWEL SIGN KETTI PAA-PILLA
+0DD6 ; Case_Ignorable # Mn SINHALA VOWEL SIGN DIGA PAA-PILLA
+0E31 ; Case_Ignorable # Mn THAI CHARACTER MAI HAN-AKAT
+0E34..0E3A ; Case_Ignorable # Mn [7] THAI CHARACTER SARA I..THAI CHARACTER PHINTHU
+0E46 ; Case_Ignorable # Lm THAI CHARACTER MAIYAMOK
+0E47..0E4E ; Case_Ignorable # Mn [8] THAI CHARACTER MAITAIKHU..THAI CHARACTER YAMAKKAN
+0EB1 ; Case_Ignorable # Mn LAO VOWEL SIGN MAI KAN
+0EB4..0EB9 ; Case_Ignorable # Mn [6] LAO VOWEL SIGN I..LAO VOWEL SIGN UU
+0EBB..0EBC ; Case_Ignorable # Mn [2] LAO VOWEL SIGN MAI KON..LAO SEMIVOWEL SIGN LO
+0EC6 ; Case_Ignorable # Lm LAO KO LA
+0EC8..0ECD ; Case_Ignorable # Mn [6] LAO TONE MAI EK..LAO NIGGAHITA
+0F18..0F19 ; Case_Ignorable # Mn [2] TIBETAN ASTROLOGICAL SIGN -KHYUD PA..TIBETAN ASTROLOGICAL SIGN SDONG TSHUGS
+0F35 ; Case_Ignorable # Mn TIBETAN MARK NGAS BZUNG NYI ZLA
+0F37 ; Case_Ignorable # Mn TIBETAN MARK NGAS BZUNG SGOR RTAGS
+0F39 ; Case_Ignorable # Mn TIBETAN MARK TSA -PHRU
+0F71..0F7E ; Case_Ignorable # Mn [14] TIBETAN VOWEL SIGN AA..TIBETAN SIGN RJES SU NGA RO
+0F80..0F84 ; Case_Ignorable # Mn [5] TIBETAN VOWEL SIGN REVERSED I..TIBETAN MARK HALANTA
+0F86..0F87 ; Case_Ignorable # Mn [2] TIBETAN SIGN LCI RTAGS..TIBETAN SIGN YANG RTAGS
+0F90..0F97 ; Case_Ignorable # Mn [8] TIBETAN SUBJOINED LETTER KA..TIBETAN SUBJOINED LETTER JA
+0F99..0FBC ; Case_Ignorable # Mn [36] TIBETAN SUBJOINED LETTER NYA..TIBETAN SUBJOINED LETTER FIXED-FORM RA
+0FC6 ; Case_Ignorable # Mn TIBETAN SYMBOL PADMA GDAN
+102D..1030 ; Case_Ignorable # Mn [4] MYANMAR VOWEL SIGN I..MYANMAR VOWEL SIGN UU
+1032..1037 ; Case_Ignorable # Mn [6] MYANMAR VOWEL SIGN AI..MYANMAR SIGN DOT BELOW
+1039..103A ; Case_Ignorable # Mn [2] MYANMAR SIGN VIRAMA..MYANMAR SIGN ASAT
+103D..103E ; Case_Ignorable # Mn [2] MYANMAR CONSONANT SIGN MEDIAL WA..MYANMAR CONSONANT SIGN MEDIAL HA
+1058..1059 ; Case_Ignorable # Mn [2] MYANMAR VOWEL SIGN VOCALIC L..MYANMAR VOWEL SIGN VOCALIC LL
+105E..1060 ; Case_Ignorable # Mn [3] MYANMAR CONSONANT SIGN MON MEDIAL NA..MYANMAR CONSONANT SIGN MON MEDIAL LA
+1071..1074 ; Case_Ignorable # Mn [4] MYANMAR VOWEL SIGN GEBA KAREN I..MYANMAR VOWEL SIGN KAYAH EE
+1082 ; Case_Ignorable # Mn MYANMAR CONSONANT SIGN SHAN MEDIAL WA
+1085..1086 ; Case_Ignorable # Mn [2] MYANMAR VOWEL SIGN SHAN E ABOVE..MYANMAR VOWEL SIGN SHAN FINAL Y
+108D ; Case_Ignorable # Mn MYANMAR SIGN SHAN COUNCIL EMPHATIC TONE
+109D ; Case_Ignorable # Mn MYANMAR VOWEL SIGN AITON AI
+10FC ; Case_Ignorable # Lm MODIFIER LETTER GEORGIAN NAR
+135F ; Case_Ignorable # Mn ETHIOPIC COMBINING GEMINATION MARK
+1712..1714 ; Case_Ignorable # Mn [3] TAGALOG VOWEL SIGN I..TAGALOG SIGN VIRAMA
+1732..1734 ; Case_Ignorable # Mn [3] HANUNOO VOWEL SIGN I..HANUNOO SIGN PAMUDPOD
+1752..1753 ; Case_Ignorable # Mn [2] BUHID VOWEL SIGN I..BUHID VOWEL SIGN U
+1772..1773 ; Case_Ignorable # Mn [2] TAGBANWA VOWEL SIGN I..TAGBANWA VOWEL SIGN U
+17B4..17B5 ; Case_Ignorable # Cf [2] KHMER VOWEL INHERENT AQ..KHMER VOWEL INHERENT AA
+17B7..17BD ; Case_Ignorable # Mn [7] KHMER VOWEL SIGN I..KHMER VOWEL SIGN UA
+17C6 ; Case_Ignorable # Mn KHMER SIGN NIKAHIT
+17C9..17D3 ; Case_Ignorable # Mn [11] KHMER SIGN MUUSIKATOAN..KHMER SIGN BATHAMASAT
+17D7 ; Case_Ignorable # Lm KHMER SIGN LEK TOO
+17DD ; Case_Ignorable # Mn KHMER SIGN ATTHACAN
+180B..180D ; Case_Ignorable # Mn [3] MONGOLIAN FREE VARIATION SELECTOR ONE..MONGOLIAN FREE VARIATION SELECTOR THREE
+1843 ; Case_Ignorable # Lm MONGOLIAN LETTER TODO LONG VOWEL SIGN
+18A9 ; Case_Ignorable # Mn MONGOLIAN LETTER ALI GALI DAGALGA
+1920..1922 ; Case_Ignorable # Mn [3] LIMBU VOWEL SIGN A..LIMBU VOWEL SIGN U
+1927..1928 ; Case_Ignorable # Mn [2] LIMBU VOWEL SIGN E..LIMBU VOWEL SIGN O
+1932 ; Case_Ignorable # Mn LIMBU SMALL LETTER ANUSVARA
+1939..193B ; Case_Ignorable # Mn [3] LIMBU SIGN MUKPHRENG..LIMBU SIGN SA-I
+1A17..1A18 ; Case_Ignorable # Mn [2] BUGINESE VOWEL SIGN I..BUGINESE VOWEL SIGN U
+1A56 ; Case_Ignorable # Mn TAI THAM CONSONANT SIGN MEDIAL LA
+1A58..1A5E ; Case_Ignorable # Mn [7] TAI THAM SIGN MAI KANG LAI..TAI THAM CONSONANT SIGN SA
+1A60 ; Case_Ignorable # Mn TAI THAM SIGN SAKOT
+1A62 ; Case_Ignorable # Mn TAI THAM VOWEL SIGN MAI SAT
+1A65..1A6C ; Case_Ignorable # Mn [8] TAI THAM VOWEL SIGN I..TAI THAM VOWEL SIGN OA BELOW
+1A73..1A7C ; Case_Ignorable # Mn [10] TAI THAM VOWEL SIGN OA ABOVE..TAI THAM SIGN KHUEN-LUE KARAN
+1A7F ; Case_Ignorable # Mn TAI THAM COMBINING CRYPTOGRAMMIC DOT
+1AA7 ; Case_Ignorable # Lm TAI THAM SIGN MAI YAMOK
+1B00..1B03 ; Case_Ignorable # Mn [4] BALINESE SIGN ULU RICEM..BALINESE SIGN SURANG
+1B34 ; Case_Ignorable # Mn BALINESE SIGN REREKAN
+1B36..1B3A ; Case_Ignorable # Mn [5] BALINESE VOWEL SIGN ULU..BALINESE VOWEL SIGN RA REPA
+1B3C ; Case_Ignorable # Mn BALINESE VOWEL SIGN LA LENGA
+1B42 ; Case_Ignorable # Mn BALINESE VOWEL SIGN PEPET
+1B6B..1B73 ; Case_Ignorable # Mn [9] BALINESE MUSICAL SYMBOL COMBINING TEGEH..BALINESE MUSICAL SYMBOL COMBINING GONG
+1B80..1B81 ; Case_Ignorable # Mn [2] SUNDANESE SIGN PANYECEK..SUNDANESE SIGN PANGLAYAR
+1BA2..1BA5 ; Case_Ignorable # Mn [4] SUNDANESE CONSONANT SIGN PANYAKRA..SUNDANESE VOWEL SIGN PANYUKU
+1BA8..1BA9 ; Case_Ignorable # Mn [2] SUNDANESE VOWEL SIGN PAMEPET..SUNDANESE VOWEL SIGN PANEULEUNG
+1C2C..1C33 ; Case_Ignorable # Mn [8] LEPCHA VOWEL SIGN E..LEPCHA CONSONANT SIGN T
+1C36..1C37 ; Case_Ignorable # Mn [2] LEPCHA SIGN RAN..LEPCHA SIGN NUKTA
+1C78..1C7D ; Case_Ignorable # Lm [6] OL CHIKI MU TTUDDAG..OL CHIKI AHAD
+1CD0..1CD2 ; Case_Ignorable # Mn [3] VEDIC TONE KARSHANA..VEDIC TONE PRENKHA
+1CD4..1CE0 ; Case_Ignorable # Mn [13] VEDIC SIGN YAJURVEDIC MIDLINE SVARITA..VEDIC TONE RIGVEDIC KASHMIRI INDEPENDENT SVARITA
+1CE2..1CE8 ; Case_Ignorable # Mn [7] VEDIC SIGN VISARGA SVARITA..VEDIC SIGN VISARGA ANUDATTA WITH TAIL
+1CED ; Case_Ignorable # Mn VEDIC SIGN TIRYAK
+1D2C..1D61 ; Case_Ignorable # Lm [54] MODIFIER LETTER CAPITAL A..MODIFIER LETTER SMALL CHI
+1D78 ; Case_Ignorable # Lm MODIFIER LETTER CYRILLIC EN
+1D9B..1DBF ; Case_Ignorable # Lm [37] MODIFIER LETTER SMALL TURNED ALPHA..MODIFIER LETTER SMALL THETA
+1DC0..1DE6 ; Case_Ignorable # Mn [39] COMBINING DOTTED GRAVE ACCENT..COMBINING LATIN SMALL LETTER Z
+1DFD..1DFF ; Case_Ignorable # Mn [3] COMBINING ALMOST EQUAL TO BELOW..COMBINING RIGHT ARROWHEAD AND DOWN ARROWHEAD BELOW
+1FBD ; Case_Ignorable # Sk GREEK KORONIS
+1FBF..1FC1 ; Case_Ignorable # Sk [3] GREEK PSILI..GREEK DIALYTIKA AND PERISPOMENI
+1FCD..1FCF ; Case_Ignorable # Sk [3] GREEK PSILI AND VARIA..GREEK PSILI AND PERISPOMENI
+1FDD..1FDF ; Case_Ignorable # Sk [3] GREEK DASIA AND VARIA..GREEK DASIA AND PERISPOMENI
+1FED..1FEF ; Case_Ignorable # Sk [3] GREEK DIALYTIKA AND VARIA..GREEK VARIA
+1FFD..1FFE ; Case_Ignorable # Sk [2] GREEK OXIA..GREEK DASIA
+200B..200F ; Case_Ignorable # Cf [5] ZERO WIDTH SPACE..RIGHT-TO-LEFT MARK
+2018 ; Case_Ignorable # Pi LEFT SINGLE QUOTATION MARK
+2019 ; Case_Ignorable # Pf RIGHT SINGLE QUOTATION MARK
+2024 ; Case_Ignorable # Po ONE DOT LEADER
+2027 ; Case_Ignorable # Po HYPHENATION POINT
+202A..202E ; Case_Ignorable # Cf [5] LEFT-TO-RIGHT EMBEDDING..RIGHT-TO-LEFT OVERRIDE
+2060..2064 ; Case_Ignorable # Cf [5] WORD JOINER..INVISIBLE PLUS
+206A..206F ; Case_Ignorable # Cf [6] INHIBIT SYMMETRIC SWAPPING..NOMINAL DIGIT SHAPES
+2071 ; Case_Ignorable # Lm SUPERSCRIPT LATIN SMALL LETTER I
+207F ; Case_Ignorable # Lm SUPERSCRIPT LATIN SMALL LETTER N
+2090..2094 ; Case_Ignorable # Lm [5] LATIN SUBSCRIPT SMALL LETTER A..LATIN SUBSCRIPT SMALL LETTER SCHWA
+20D0..20DC ; Case_Ignorable # Mn [13] COMBINING LEFT HARPOON ABOVE..COMBINING FOUR DOTS ABOVE
+20DD..20E0 ; Case_Ignorable # Me [4] COMBINING ENCLOSING CIRCLE..COMBINING ENCLOSING CIRCLE BACKSLASH
+20E1 ; Case_Ignorable # Mn COMBINING LEFT RIGHT ARROW ABOVE
+20E2..20E4 ; Case_Ignorable # Me [3] COMBINING ENCLOSING SCREEN..COMBINING ENCLOSING UPWARD POINTING TRIANGLE
+20E5..20F0 ; Case_Ignorable # Mn [12] COMBINING REVERSE SOLIDUS OVERLAY..COMBINING ASTERISK ABOVE
+2C7D ; Case_Ignorable # Lm MODIFIER LETTER CAPITAL V
+2CEF..2CF1 ; Case_Ignorable # Mn [3] COPTIC COMBINING NI ABOVE..COPTIC COMBINING SPIRITUS LENIS
+2D6F ; Case_Ignorable # Lm TIFINAGH MODIFIER LETTER LABIALIZATION MARK
+2DE0..2DFF ; Case_Ignorable # Mn [32] COMBINING CYRILLIC LETTER BE..COMBINING CYRILLIC LETTER IOTIFIED BIG YUS
+2E2F ; Case_Ignorable # Lm VERTICAL TILDE
+3005 ; Case_Ignorable # Lm IDEOGRAPHIC ITERATION MARK
+302A..302F ; Case_Ignorable # Mn [6] IDEOGRAPHIC LEVEL TONE MARK..HANGUL DOUBLE DOT TONE MARK
+3031..3035 ; Case_Ignorable # Lm [5] VERTICAL KANA REPEAT MARK..VERTICAL KANA REPEAT MARK LOWER HALF
+303B ; Case_Ignorable # Lm VERTICAL IDEOGRAPHIC ITERATION MARK
+3099..309A ; Case_Ignorable # Mn [2] COMBINING KATAKANA-HIRAGANA VOICED SOUND MARK..COMBINING KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK
+309B..309C ; Case_Ignorable # Sk [2] KATAKANA-HIRAGANA VOICED SOUND MARK..KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK
+309D..309E ; Case_Ignorable # Lm [2] HIRAGANA ITERATION MARK..HIRAGANA VOICED ITERATION MARK
+30FC..30FE ; Case_Ignorable # Lm [3] KATAKANA-HIRAGANA PROLONGED SOUND MARK..KATAKANA VOICED ITERATION MARK
+A015 ; Case_Ignorable # Lm YI SYLLABLE WU
+A4F8..A4FD ; Case_Ignorable # Lm [6] LISU LETTER TONE MYA TI..LISU LETTER TONE MYA JEU
+A60C ; Case_Ignorable # Lm VAI SYLLABLE LENGTHENER
+A66F ; Case_Ignorable # Mn COMBINING CYRILLIC VZMET
+A670..A672 ; Case_Ignorable # Me [3] COMBINING CYRILLIC TEN MILLIONS SIGN..COMBINING CYRILLIC THOUSAND MILLIONS SIGN
+A67C..A67D ; Case_Ignorable # Mn [2] COMBINING CYRILLIC KAVYKA..COMBINING CYRILLIC PAYEROK
+A67F ; Case_Ignorable # Lm CYRILLIC PAYEROK
+A6F0..A6F1 ; Case_Ignorable # Mn [2] BAMUM COMBINING MARK KOQNDON..BAMUM COMBINING MARK TUKWENTIS
+A700..A716 ; Case_Ignorable # Sk [23] MODIFIER LETTER CHINESE TONE YIN PING..MODIFIER LETTER EXTRA-LOW LEFT-STEM TONE BAR
+A717..A71F ; Case_Ignorable # Lm [9] MODIFIER LETTER DOT VERTICAL BAR..MODIFIER LETTER LOW INVERTED EXCLAMATION MARK
+A720..A721 ; Case_Ignorable # Sk [2] MODIFIER LETTER STRESS AND HIGH TONE..MODIFIER LETTER STRESS AND LOW TONE
+A770 ; Case_Ignorable # Lm MODIFIER LETTER US
+A788 ; Case_Ignorable # Lm MODIFIER LETTER LOW CIRCUMFLEX ACCENT
+A789..A78A ; Case_Ignorable # Sk [2] MODIFIER LETTER COLON..MODIFIER LETTER SHORT EQUALS SIGN
+A802 ; Case_Ignorable # Mn SYLOTI NAGRI SIGN DVISVARA
+A806 ; Case_Ignorable # Mn SYLOTI NAGRI SIGN HASANTA
+A80B ; Case_Ignorable # Mn SYLOTI NAGRI SIGN ANUSVARA
+A825..A826 ; Case_Ignorable # Mn [2] SYLOTI NAGRI VOWEL SIGN U..SYLOTI NAGRI VOWEL SIGN E
+A8C4 ; Case_Ignorable # Mn SAURASHTRA SIGN VIRAMA
+A8E0..A8F1 ; Case_Ignorable # Mn [18] COMBINING DEVANAGARI DIGIT ZERO..COMBINING DEVANAGARI SIGN AVAGRAHA
+A926..A92D ; Case_Ignorable # Mn [8] KAYAH LI VOWEL UE..KAYAH LI TONE CALYA PLOPHU
+A947..A951 ; Case_Ignorable # Mn [11] REJANG VOWEL SIGN I..REJANG CONSONANT SIGN R
+A980..A982 ; Case_Ignorable # Mn [3] JAVANESE SIGN PANYANGGA..JAVANESE SIGN LAYAR
+A9B3 ; Case_Ignorable # Mn JAVANESE SIGN CECAK TELU
+A9B6..A9B9 ; Case_Ignorable # Mn [4] JAVANESE VOWEL SIGN WULU..JAVANESE VOWEL SIGN SUKU MENDUT
+A9BC ; Case_Ignorable # Mn JAVANESE VOWEL SIGN PEPET
+A9CF ; Case_Ignorable # Lm JAVANESE PANGRANGKEP
+AA29..AA2E ; Case_Ignorable # Mn [6] CHAM VOWEL SIGN AA..CHAM VOWEL SIGN OE
+AA31..AA32 ; Case_Ignorable # Mn [2] CHAM VOWEL SIGN AU..CHAM VOWEL SIGN UE
+AA35..AA36 ; Case_Ignorable # Mn [2] CHAM CONSONANT SIGN LA..CHAM CONSONANT SIGN WA
+AA43 ; Case_Ignorable # Mn CHAM CONSONANT SIGN FINAL NG
+AA4C ; Case_Ignorable # Mn CHAM CONSONANT SIGN FINAL M
+AA70 ; Case_Ignorable # Lm MYANMAR MODIFIER LETTER KHAMTI REDUPLICATION
+AAB0 ; Case_Ignorable # Mn TAI VIET MAI KANG
+AAB2..AAB4 ; Case_Ignorable # Mn [3] TAI VIET VOWEL I..TAI VIET VOWEL U
+AAB7..AAB8 ; Case_Ignorable # Mn [2] TAI VIET MAI KHIT..TAI VIET VOWEL IA
+AABE..AABF ; Case_Ignorable # Mn [2] TAI VIET VOWEL AM..TAI VIET TONE MAI EK
+AAC1 ; Case_Ignorable # Mn TAI VIET TONE MAI THO
+AADD ; Case_Ignorable # Lm TAI VIET SYMBOL SAM
+ABE5 ; Case_Ignorable # Mn MEETEI MAYEK VOWEL SIGN ANAP
+ABE8 ; Case_Ignorable # Mn MEETEI MAYEK VOWEL SIGN UNAP
+ABED ; Case_Ignorable # Mn MEETEI MAYEK APUN IYEK
+FB1E ; Case_Ignorable # Mn HEBREW POINT JUDEO-SPANISH VARIKA
+FE00..FE0F ; Case_Ignorable # Mn [16] VARIATION SELECTOR-1..VARIATION SELECTOR-16
+FE13 ; Case_Ignorable # Po PRESENTATION FORM FOR VERTICAL COLON
+FE20..FE26 ; Case_Ignorable # Mn [7] COMBINING LIGATURE LEFT HALF..COMBINING CONJOINING MACRON
+FE52 ; Case_Ignorable # Po SMALL FULL STOP
+FE55 ; Case_Ignorable # Po SMALL COLON
+FEFF ; Case_Ignorable # Cf ZERO WIDTH NO-BREAK SPACE
+FF07 ; Case_Ignorable # Po FULLWIDTH APOSTROPHE
+FF0E ; Case_Ignorable # Po FULLWIDTH FULL STOP
+FF1A ; Case_Ignorable # Po FULLWIDTH COLON
+FF3E ; Case_Ignorable # Sk FULLWIDTH CIRCUMFLEX ACCENT
+FF40 ; Case_Ignorable # Sk FULLWIDTH GRAVE ACCENT
+FF70 ; Case_Ignorable # Lm HALFWIDTH KATAKANA-HIRAGANA PROLONGED SOUND MARK
+FF9E..FF9F ; Case_Ignorable # Lm [2] HALFWIDTH KATAKANA VOICED SOUND MARK..HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK
+FFE3 ; Case_Ignorable # Sk FULLWIDTH MACRON
+FFF9..FFFB ; Case_Ignorable # Cf [3] INTERLINEAR ANNOTATION ANCHOR..INTERLINEAR ANNOTATION TERMINATOR
+101FD ; Case_Ignorable # Mn PHAISTOS DISC SIGN COMBINING OBLIQUE STROKE
+10A01..10A03 ; Case_Ignorable # Mn [3] KHAROSHTHI VOWEL SIGN I..KHAROSHTHI VOWEL SIGN VOCALIC R
+10A05..10A06 ; Case_Ignorable # Mn [2] KHAROSHTHI VOWEL SIGN E..KHAROSHTHI VOWEL SIGN O
+10A0C..10A0F ; Case_Ignorable # Mn [4] KHAROSHTHI VOWEL LENGTH MARK..KHAROSHTHI SIGN VISARGA
+10A38..10A3A ; Case_Ignorable # Mn [3] KHAROSHTHI SIGN BAR ABOVE..KHAROSHTHI SIGN DOT BELOW
+10A3F ; Case_Ignorable # Mn KHAROSHTHI VIRAMA
+11080..11081 ; Case_Ignorable # Mn [2] KAITHI SIGN CANDRABINDU..KAITHI SIGN ANUSVARA
+110B3..110B6 ; Case_Ignorable # Mn [4] KAITHI VOWEL SIGN U..KAITHI VOWEL SIGN AI
+110B9..110BA ; Case_Ignorable # Mn [2] KAITHI SIGN VIRAMA..KAITHI SIGN NUKTA
+110BD ; Case_Ignorable # Cf KAITHI NUMBER SIGN
+1D167..1D169 ; Case_Ignorable # Mn [3] MUSICAL SYMBOL COMBINING TREMOLO-1..MUSICAL SYMBOL COMBINING TREMOLO-3
+1D173..1D17A ; Case_Ignorable # Cf [8] MUSICAL SYMBOL BEGIN BEAM..MUSICAL SYMBOL END PHRASE
+1D17B..1D182 ; Case_Ignorable # Mn [8] MUSICAL SYMBOL COMBINING ACCENT..MUSICAL SYMBOL COMBINING LOURE
+1D185..1D18B ; Case_Ignorable # Mn [7] MUSICAL SYMBOL COMBINING DOIT..MUSICAL SYMBOL COMBINING TRIPLE TONGUE
+1D1AA..1D1AD ; Case_Ignorable # Mn [4] MUSICAL SYMBOL COMBINING DOWN BOW..MUSICAL SYMBOL COMBINING SNAP PIZZICATO
+1D242..1D244 ; Case_Ignorable # Mn [3] COMBINING GREEK MUSICAL TRISEME..COMBINING GREEK MUSICAL PENTASEME
+E0001 ; Case_Ignorable # Cf LANGUAGE TAG
+E0020..E007F ; Case_Ignorable # Cf [96] TAG SPACE..CANCEL TAG
+E0100..E01EF ; Case_Ignorable # Mn [240] VARIATION SELECTOR-17..VARIATION SELECTOR-256
+
+# Total code points: 1632
+
+# ================================================
+
+# Derived Property: Changes_When_Lowercased (CWL)
+# Characters whose normalized forms are not stable under a toLowercase mapping.
+# For more information, see D124 in Section 3.13, "Default Case Algorithms".
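(Editorial aside, not part of the data file: the CWL definition in the header comment above can be spot-checked from Python. This is a minimal sketch that approximates the Unicode toLowercase mapping with `str.lower()`, which suffices for the single-character ranges listed here.)

```python
import unicodedata

def changes_when_lowercased(ch):
    # CWL(X) is true when toLowercase(toNFD(X)) != toNFD(X).
    # str.lower() approximates the full toLowercase mapping for these checks.
    nfd = unicodedata.normalize("NFD", ch)
    return nfd.lower() != nfd

# Spot-check entries from the ranges in this file:
assert changes_when_lowercased("A")           # 0041, in 0041..005A
assert changes_when_lowercased("\u0391")      # GREEK CAPITAL LETTER ALPHA
assert not changes_when_lowercased("a")       # lowercase letters are stable
assert not changes_when_lowercased("\u00B7")  # MIDDLE DOT is Case_Ignorable
```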
+# Changes_When_Lowercased(X) is true when toLowercase(toNFD(X)) != toNFD(X)
+
+0041..005A ; Changes_When_Lowercased # L& [26] LATIN CAPITAL LETTER A..LATIN CAPITAL LETTER Z
+00C0..00D6 ; Changes_When_Lowercased # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS
+00D8..00DE ; Changes_When_Lowercased # L& [7] LATIN CAPITAL LETTER O WITH STROKE..LATIN CAPITAL LETTER THORN
+0100 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH MACRON
+0102 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH BREVE
+0104 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH OGONEK
+0106 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER C WITH ACUTE
+0108 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER C WITH CIRCUMFLEX
+010A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER C WITH DOT ABOVE
+010C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER C WITH CARON
+010E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER D WITH CARON
+0110 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER D WITH STROKE
+0112 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH MACRON
+0114 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH BREVE
+0116 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH DOT ABOVE
+0118 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH OGONEK
+011A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH CARON
+011C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER G WITH CIRCUMFLEX
+011E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER G WITH BREVE
+0120 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER G WITH DOT ABOVE
+0122 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER G WITH CEDILLA
+0124 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER H WITH CIRCUMFLEX
+0126 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER H WITH STROKE
+0128 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH TILDE
+012A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH MACRON
+012C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH BREVE
+012E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH OGONEK
+0130 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH DOT ABOVE
+0132 ; Changes_When_Lowercased # L& LATIN CAPITAL LIGATURE IJ
+0134 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER J WITH CIRCUMFLEX
+0136 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER K WITH CEDILLA
+0139 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH ACUTE
+013B ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH CEDILLA
+013D ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH CARON
+013F ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH MIDDLE DOT
+0141 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH STROKE
+0143 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER N WITH ACUTE
+0145 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER N WITH CEDILLA
+0147 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER N WITH CARON
+014A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER ENG
+014C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH MACRON
+014E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH BREVE
+0150 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH DOUBLE ACUTE
+0152 ; Changes_When_Lowercased # L& LATIN CAPITAL LIGATURE OE
+0154 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R WITH ACUTE
+0156 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R WITH CEDILLA
+0158 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R WITH CARON
+015A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER S WITH ACUTE
+015C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER S WITH CIRCUMFLEX
+015E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER S WITH CEDILLA
+0160 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER S WITH CARON
+0162 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER T WITH CEDILLA
+0164 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER T WITH CARON
+0166 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER T WITH STROKE
+0168 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH TILDE
+016A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH MACRON
+016C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH BREVE
+016E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH RING ABOVE
+0170 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH DOUBLE ACUTE
+0172 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH OGONEK
+0174 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER W WITH CIRCUMFLEX
+0176 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Y WITH CIRCUMFLEX
+0178..0179 ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER Y WITH DIAERESIS..LATIN CAPITAL LETTER Z WITH ACUTE
+017B ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Z WITH DOT ABOVE
+017D ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Z WITH CARON
+0181..0182 ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER B WITH HOOK..LATIN CAPITAL LETTER B WITH TOPBAR
+0184 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER TONE SIX
+0186..0187 ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER OPEN O..LATIN CAPITAL LETTER C WITH HOOK
+0189..018B ; Changes_When_Lowercased # L& [3] LATIN CAPITAL LETTER AFRICAN D..LATIN CAPITAL LETTER D WITH TOPBAR
+018E..0191 ; Changes_When_Lowercased # L& [4] LATIN CAPITAL LETTER REVERSED E..LATIN CAPITAL LETTER F WITH HOOK
+0193..0194 ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER G WITH HOOK..LATIN CAPITAL LETTER GAMMA
+0196..0198 ; Changes_When_Lowercased # L& [3] LATIN CAPITAL LETTER IOTA..LATIN CAPITAL LETTER K WITH HOOK
+019C..019D ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER TURNED M..LATIN CAPITAL LETTER N WITH LEFT HOOK
+019F..01A0 ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER O WITH MIDDLE TILDE..LATIN CAPITAL LETTER O WITH HORN
+01A2 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER OI
+01A4 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER P WITH HOOK
+01A6..01A7 ; Changes_When_Lowercased # L& [2] LATIN LETTER YR..LATIN CAPITAL LETTER TONE TWO
+01A9 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER ESH
+01AC ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER T WITH HOOK
+01AE..01AF ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER T WITH RETROFLEX HOOK..LATIN CAPITAL LETTER U WITH HORN
+01B1..01B3 ; Changes_When_Lowercased # L& [3] LATIN CAPITAL LETTER UPSILON..LATIN CAPITAL LETTER Y WITH HOOK
+01B5 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Z WITH STROKE
+01B7..01B8 ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER EZH..LATIN CAPITAL LETTER EZH REVERSED
+01BC ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER TONE FIVE
+01C4..01C5 ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER DZ WITH CARON..LATIN CAPITAL LETTER D WITH SMALL LETTER Z WITH CARON
+01C7..01C8 ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER LJ..LATIN CAPITAL LETTER L WITH SMALL LETTER J
+01CA..01CB ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER NJ..LATIN CAPITAL LETTER N WITH SMALL LETTER J
+01CD ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH CARON
+01CF ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH CARON
+01D1 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH CARON
+01D3 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH CARON
+01D5 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND MACRON
+01D7 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND ACUTE
+01D9 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND CARON
+01DB ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND GRAVE
+01DE ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH DIAERESIS AND MACRON
+01E0 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH DOT ABOVE AND MACRON
+01E2 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER AE WITH MACRON
+01E4 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER G WITH STROKE
+01E6 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER G WITH CARON
+01E8 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER K WITH CARON
+01EA ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH OGONEK
+01EC ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH OGONEK AND MACRON
+01EE ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER EZH WITH CARON
+01F1..01F2 ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER DZ..LATIN CAPITAL LETTER D WITH SMALL LETTER Z
+01F4 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER G WITH ACUTE
+01F6..01F8 ; Changes_When_Lowercased # L& [3] LATIN CAPITAL LETTER HWAIR..LATIN CAPITAL LETTER N WITH GRAVE
+01FA ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE
+01FC ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER AE WITH ACUTE
+01FE ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH STROKE AND ACUTE
+0200 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH DOUBLE GRAVE
+0202 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH INVERTED BREVE
+0204 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH DOUBLE GRAVE
+0206 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH INVERTED BREVE
+0208 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH DOUBLE GRAVE
+020A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH INVERTED BREVE
+020C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH DOUBLE GRAVE
+020E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH INVERTED BREVE
+0210 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R WITH DOUBLE GRAVE
+0212 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R WITH INVERTED BREVE
+0214 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH DOUBLE GRAVE
+0216 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH INVERTED BREVE
+0218 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER S WITH COMMA BELOW
+021A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER T WITH COMMA BELOW
+021C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER YOGH
+021E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER H WITH CARON
+0220 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER N WITH LONG RIGHT LEG
+0222 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER OU
+0224 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Z WITH HOOK
+0226 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH DOT ABOVE
+0228 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH CEDILLA
+022A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH DIAERESIS AND MACRON
+022C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH TILDE AND MACRON
+022E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH DOT ABOVE
+0230 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH DOT ABOVE AND MACRON
+0232 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Y WITH MACRON
+023A..023B ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER A WITH STROKE..LATIN CAPITAL LETTER C WITH STROKE
+023D..023E ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER L WITH BAR..LATIN CAPITAL LETTER T WITH DIAGONAL STROKE
+0241 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER GLOTTAL STOP
+0243..0246 ; Changes_When_Lowercased # L& [4] LATIN CAPITAL LETTER B WITH STROKE..LATIN CAPITAL LETTER E WITH STROKE
+0248 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER J WITH STROKE
+024A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER SMALL Q WITH HOOK TAIL
+024C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R WITH STROKE
+024E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Y WITH STROKE
+0370 ; Changes_When_Lowercased # L& GREEK CAPITAL LETTER HETA
+0372 ; Changes_When_Lowercased # L& GREEK CAPITAL LETTER ARCHAIC SAMPI
+0376 ; Changes_When_Lowercased # L& GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA
+0386 ; Changes_When_Lowercased # L& GREEK CAPITAL LETTER ALPHA WITH TONOS
+0388..038A ; Changes_When_Lowercased # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS
+038C ; Changes_When_Lowercased # L& GREEK CAPITAL LETTER OMICRON WITH TONOS
+038E..038F ; Changes_When_Lowercased # L& [2] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER OMEGA WITH TONOS
+0391..03A1 ; Changes_When_Lowercased # L& [17] GREEK CAPITAL LETTER ALPHA..GREEK CAPITAL LETTER RHO
+03A3..03AB ; Changes_When_Lowercased # L& [9] GREEK CAPITAL LETTER SIGMA..GREEK CAPITAL LETTER UPSILON WITH DIALYTIKA
+03CF ; Changes_When_Lowercased # L& GREEK CAPITAL KAI SYMBOL
+03D8 ; Changes_When_Lowercased # L& GREEK LETTER ARCHAIC KOPPA
+03DA ; Changes_When_Lowercased # L& GREEK LETTER STIGMA
+03DC ; Changes_When_Lowercased # L& GREEK LETTER DIGAMMA
+03DE ; Changes_When_Lowercased # L& GREEK LETTER KOPPA
+03E0 ; Changes_When_Lowercased # L& GREEK LETTER SAMPI
+03E2 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER SHEI
+03E4 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER FEI
+03E6 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER KHEI
+03E8 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER HORI
+03EA ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER GANGIA
+03EC ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER SHIMA
+03EE ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER DEI
+03F4 ; Changes_When_Lowercased # L& GREEK CAPITAL THETA SYMBOL
+03F7 ; Changes_When_Lowercased # L& GREEK CAPITAL LETTER SHO
+03F9..03FA ; Changes_When_Lowercased # L& [2] GREEK CAPITAL LUNATE SIGMA SYMBOL..GREEK CAPITAL LETTER SAN
+03FD..042F ; Changes_When_Lowercased # L& [51] GREEK CAPITAL REVERSED LUNATE SIGMA SYMBOL..CYRILLIC CAPITAL LETTER YA
+0460 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER OMEGA
+0462 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER YAT
+0464 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER IOTIFIED E
+0466 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER LITTLE YUS
+0468 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER IOTIFIED LITTLE YUS
+046A ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER BIG YUS
+046C ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER IOTIFIED BIG YUS
+046E ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KSI
+0470 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER PSI
+0472 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER FITA
+0474 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER IZHITSA
+0476 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER IZHITSA WITH DOUBLE GRAVE ACCENT
+0478 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER UK
+047A ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ROUND OMEGA
+047C ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER OMEGA WITH TITLO
+047E ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER OT
+0480 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KOPPA
+048A ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER SHORT I WITH TAIL
+048C ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER SEMISOFT SIGN
+048E ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ER WITH TICK
+0490 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER GHE WITH UPTURN
+0492 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER GHE WITH STROKE
+0494 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER GHE WITH MIDDLE HOOK
+0496 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ZHE WITH DESCENDER
+0498 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ZE WITH DESCENDER
+049A ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KA WITH DESCENDER
+049C ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KA WITH VERTICAL STROKE
+049E ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KA WITH STROKE
+04A0 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER BASHKIR KA
+04A2 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER EN WITH DESCENDER
+04A4 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LIGATURE EN GHE
+04A6 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER PE WITH MIDDLE HOOK
+04A8 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ABKHASIAN HA
+04AA ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ES WITH DESCENDER
+04AC ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER TE WITH DESCENDER
+04AE ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER STRAIGHT U
+04B0 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER STRAIGHT U WITH STROKE
+04B2 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER HA WITH DESCENDER
+04B4 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LIGATURE TE TSE
+04B6 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER CHE WITH DESCENDER
+04B8 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER CHE WITH VERTICAL STROKE
+04BA ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER SHHA
+04BC ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ABKHASIAN CHE
+04BE ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ABKHASIAN CHE WITH DESCENDER
+04C0..04C1 ; Changes_When_Lowercased # L& [2] CYRILLIC LETTER PALOCHKA..CYRILLIC CAPITAL LETTER ZHE WITH BREVE
+04C3 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KA WITH HOOK
+04C5 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER EL WITH TAIL
+04C7 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER EN WITH HOOK
+04C9 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER EN WITH TAIL
+04CB ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KHAKASSIAN CHE
+04CD ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER EM WITH TAIL
+04D0 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER A WITH BREVE
+04D2 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER A WITH DIAERESIS
+04D4 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LIGATURE A IE
+04D6 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER IE WITH BREVE
+04D8 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER SCHWA
+04DA ; Changes_When_Lowercased # L& CYRILLIC
CAPITAL LETTER SCHWA WITH DIAERESIS +04DC ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ZHE WITH DIAERESIS +04DE ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ZE WITH DIAERESIS +04E0 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ABKHASIAN DZE +04E2 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER I WITH MACRON +04E4 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER I WITH DIAERESIS +04E6 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER O WITH DIAERESIS +04E8 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER BARRED O +04EA ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER BARRED O WITH DIAERESIS +04EC ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER E WITH DIAERESIS +04EE ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER U WITH MACRON +04F0 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER U WITH DIAERESIS +04F2 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER U WITH DOUBLE ACUTE +04F4 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER CHE WITH DIAERESIS +04F6 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER GHE WITH DESCENDER +04F8 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER YERU WITH DIAERESIS +04FA ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER GHE WITH STROKE AND HOOK +04FC ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER HA WITH HOOK +04FE ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER HA WITH STROKE +0500 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KOMI DE +0502 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KOMI DJE +0504 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KOMI ZJE +0506 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KOMI DZJE +0508 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KOMI LJE +050A ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KOMI NJE +050C ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER KOMI SJE +050E ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER 
KOMI TJE +0510 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER REVERSED ZE +0512 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER EL WITH HOOK +0514 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER LHA +0516 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER RHA +0518 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER YAE +051A ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER QA +051C ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER WE +051E ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ALEUT KA +0520 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER EL WITH MIDDLE HOOK +0522 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER EN WITH MIDDLE HOOK +0524 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER PE WITH DESCENDER +0531..0556 ; Changes_When_Lowercased # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH +10A0..10C5 ; Changes_When_Lowercased # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE +1E00 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH RING BELOW +1E02 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER B WITH DOT ABOVE +1E04 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER B WITH DOT BELOW +1E06 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER B WITH LINE BELOW +1E08 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER C WITH CEDILLA AND ACUTE +1E0A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER D WITH DOT ABOVE +1E0C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER D WITH DOT BELOW +1E0E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER D WITH LINE BELOW +1E10 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER D WITH CEDILLA +1E12 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER D WITH CIRCUMFLEX BELOW +1E14 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH MACRON AND GRAVE +1E16 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH MACRON AND ACUTE +1E18 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH 
CIRCUMFLEX BELOW +1E1A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH TILDE BELOW +1E1C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH CEDILLA AND BREVE +1E1E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER F WITH DOT ABOVE +1E20 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER G WITH MACRON +1E22 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER H WITH DOT ABOVE +1E24 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER H WITH DOT BELOW +1E26 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER H WITH DIAERESIS +1E28 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER H WITH CEDILLA +1E2A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER H WITH BREVE BELOW +1E2C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH TILDE BELOW +1E2E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH DIAERESIS AND ACUTE +1E30 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER K WITH ACUTE +1E32 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER K WITH DOT BELOW +1E34 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER K WITH LINE BELOW +1E36 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH DOT BELOW +1E38 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH DOT BELOW AND MACRON +1E3A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH LINE BELOW +1E3C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH CIRCUMFLEX BELOW +1E3E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER M WITH ACUTE +1E40 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER M WITH DOT ABOVE +1E42 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER M WITH DOT BELOW +1E44 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER N WITH DOT ABOVE +1E46 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER N WITH DOT BELOW +1E48 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER N WITH LINE BELOW +1E4A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER N WITH CIRCUMFLEX BELOW +1E4C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH 
TILDE AND ACUTE +1E4E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH TILDE AND DIAERESIS +1E50 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH MACRON AND GRAVE +1E52 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH MACRON AND ACUTE +1E54 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER P WITH ACUTE +1E56 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER P WITH DOT ABOVE +1E58 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R WITH DOT ABOVE +1E5A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R WITH DOT BELOW +1E5C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R WITH DOT BELOW AND MACRON +1E5E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R WITH LINE BELOW +1E60 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER S WITH DOT ABOVE +1E62 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER S WITH DOT BELOW +1E64 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER S WITH ACUTE AND DOT ABOVE +1E66 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER S WITH CARON AND DOT ABOVE +1E68 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER S WITH DOT BELOW AND DOT ABOVE +1E6A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER T WITH DOT ABOVE +1E6C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER T WITH DOT BELOW +1E6E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER T WITH LINE BELOW +1E70 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER T WITH CIRCUMFLEX BELOW +1E72 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH DIAERESIS BELOW +1E74 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH TILDE BELOW +1E76 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH CIRCUMFLEX BELOW +1E78 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH TILDE AND ACUTE +1E7A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH MACRON AND DIAERESIS +1E7C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER V WITH TILDE +1E7E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER V WITH DOT BELOW +1E80 
; Changes_When_Lowercased # L& LATIN CAPITAL LETTER W WITH GRAVE +1E82 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER W WITH ACUTE +1E84 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER W WITH DIAERESIS +1E86 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER W WITH DOT ABOVE +1E88 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER W WITH DOT BELOW +1E8A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER X WITH DOT ABOVE +1E8C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER X WITH DIAERESIS +1E8E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Y WITH DOT ABOVE +1E90 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Z WITH CIRCUMFLEX +1E92 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Z WITH DOT BELOW +1E94 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Z WITH LINE BELOW +1E9E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER SHARP S +1EA0 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH DOT BELOW +1EA2 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH HOOK ABOVE +1EA4 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND ACUTE +1EA6 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND GRAVE +1EA8 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE +1EAA ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND TILDE +1EAC ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND DOT BELOW +1EAE ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH BREVE AND ACUTE +1EB0 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH BREVE AND GRAVE +1EB2 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH BREVE AND HOOK ABOVE +1EB4 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH BREVE AND TILDE +1EB6 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER A WITH BREVE AND DOT BELOW +1EB8 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH DOT BELOW +1EBA ; Changes_When_Lowercased # L& 
LATIN CAPITAL LETTER E WITH HOOK ABOVE +1EBC ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH TILDE +1EBE ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND ACUTE +1EC0 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND GRAVE +1EC2 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE +1EC4 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND TILDE +1EC6 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND DOT BELOW +1EC8 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH HOOK ABOVE +1ECA ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER I WITH DOT BELOW +1ECC ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH DOT BELOW +1ECE ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH HOOK ABOVE +1ED0 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND ACUTE +1ED2 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND GRAVE +1ED4 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE +1ED6 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND TILDE +1ED8 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND DOT BELOW +1EDA ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH HORN AND ACUTE +1EDC ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH HORN AND GRAVE +1EDE ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH HORN AND HOOK ABOVE +1EE0 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH HORN AND TILDE +1EE2 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH HORN AND DOT BELOW +1EE4 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH DOT BELOW +1EE6 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH HOOK ABOVE +1EE8 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH HORN AND ACUTE +1EEA ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH HORN 
AND GRAVE +1EEC ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH HORN AND HOOK ABOVE +1EEE ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH HORN AND TILDE +1EF0 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER U WITH HORN AND DOT BELOW +1EF2 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Y WITH GRAVE +1EF4 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Y WITH DOT BELOW +1EF6 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Y WITH HOOK ABOVE +1EF8 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Y WITH TILDE +1EFA ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER MIDDLE-WELSH LL +1EFC ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER MIDDLE-WELSH V +1EFE ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Y WITH LOOP +1F08..1F0F ; Changes_When_Lowercased # L& [8] GREEK CAPITAL LETTER ALPHA WITH PSILI..GREEK CAPITAL LETTER ALPHA WITH DASIA AND PERISPOMENI +1F18..1F1D ; Changes_When_Lowercased # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA +1F28..1F2F ; Changes_When_Lowercased # L& [8] GREEK CAPITAL LETTER ETA WITH PSILI..GREEK CAPITAL LETTER ETA WITH DASIA AND PERISPOMENI +1F38..1F3F ; Changes_When_Lowercased # L& [8] GREEK CAPITAL LETTER IOTA WITH PSILI..GREEK CAPITAL LETTER IOTA WITH DASIA AND PERISPOMENI +1F48..1F4D ; Changes_When_Lowercased # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F59 ; Changes_When_Lowercased # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; Changes_When_Lowercased # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; Changes_When_Lowercased # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F ; Changes_When_Lowercased # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F68..1F6F ; Changes_When_Lowercased # L& [8] GREEK CAPITAL LETTER OMEGA WITH PSILI..GREEK CAPITAL LETTER OMEGA WITH DASIA AND PERISPOMENI +1F88..1F8F ; Changes_When_Lowercased # L& [8] GREEK 
CAPITAL LETTER ALPHA WITH PSILI AND PROSGEGRAMMENI..GREEK CAPITAL LETTER ALPHA WITH DASIA AND PERISPOMENI AND PROSGEGRAMMENI +1F98..1F9F ; Changes_When_Lowercased # L& [8] GREEK CAPITAL LETTER ETA WITH PSILI AND PROSGEGRAMMENI..GREEK CAPITAL LETTER ETA WITH DASIA AND PERISPOMENI AND PROSGEGRAMMENI +1FA8..1FAF ; Changes_When_Lowercased # L& [8] GREEK CAPITAL LETTER OMEGA WITH PSILI AND PROSGEGRAMMENI..GREEK CAPITAL LETTER OMEGA WITH DASIA AND PERISPOMENI AND PROSGEGRAMMENI +1FB8..1FBC ; Changes_When_Lowercased # L& [5] GREEK CAPITAL LETTER ALPHA WITH VRACHY..GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI +1FC8..1FCC ; Changes_When_Lowercased # L& [5] GREEK CAPITAL LETTER EPSILON WITH VARIA..GREEK CAPITAL LETTER ETA WITH PROSGEGRAMMENI +1FD8..1FDB ; Changes_When_Lowercased # L& [4] GREEK CAPITAL LETTER IOTA WITH VRACHY..GREEK CAPITAL LETTER IOTA WITH OXIA +1FE8..1FEC ; Changes_When_Lowercased # L& [5] GREEK CAPITAL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FF8..1FFC ; Changes_When_Lowercased # L& [5] GREEK CAPITAL LETTER OMICRON WITH VARIA..GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI +2126 ; Changes_When_Lowercased # L& OHM SIGN +212A..212B ; Changes_When_Lowercased # L& [2] KELVIN SIGN..ANGSTROM SIGN +2132 ; Changes_When_Lowercased # L& TURNED CAPITAL F +2160..216F ; Changes_When_Lowercased # Nl [16] ROMAN NUMERAL ONE..ROMAN NUMERAL ONE THOUSAND +2183 ; Changes_When_Lowercased # L& ROMAN NUMERAL REVERSED ONE HUNDRED +24B6..24CF ; Changes_When_Lowercased # So [26] CIRCLED LATIN CAPITAL LETTER A..CIRCLED LATIN CAPITAL LETTER Z +2C00..2C2E ; Changes_When_Lowercased # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC CAPITAL LETTER LATINATE MYSLITE +2C60 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH DOUBLE BAR +2C62..2C64 ; Changes_When_Lowercased # L& [3] LATIN CAPITAL LETTER L WITH MIDDLE TILDE..LATIN CAPITAL LETTER R WITH TAIL +2C67 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER H WITH DESCENDER +2C69 ; 
Changes_When_Lowercased # L& LATIN CAPITAL LETTER K WITH DESCENDER +2C6B ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Z WITH DESCENDER +2C6D..2C70 ; Changes_When_Lowercased # L& [4] LATIN CAPITAL LETTER ALPHA..LATIN CAPITAL LETTER TURNED ALPHA +2C72 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER W WITH HOOK +2C75 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER HALF H +2C7E..2C80 ; Changes_When_Lowercased # L& [3] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC CAPITAL LETTER ALFA +2C82 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER VIDA +2C84 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER GAMMA +2C86 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER DALDA +2C88 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER EIE +2C8A ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER SOU +2C8C ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER ZATA +2C8E ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER HATE +2C90 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER THETHE +2C92 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER IAUDA +2C94 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER KAPA +2C96 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER LAULA +2C98 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER MI +2C9A ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER NI +2C9C ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER KSI +2C9E ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER O +2CA0 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER PI +2CA2 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER RO +2CA4 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER SIMA +2CA6 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER TAU +2CA8 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER UA +2CAA ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER FI +2CAC ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER KHI +2CAE ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER PSI +2CB0 ; Changes_When_Lowercased # L& 
COPTIC CAPITAL LETTER OOU +2CB2 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER DIALECT-P ALEF +2CB4 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC AIN +2CB6 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC EIE +2CB8 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER DIALECT-P KAPA +2CBA ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER DIALECT-P NI +2CBC ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC NI +2CBE ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC OOU +2CC0 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER SAMPI +2CC2 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER CROSSED SHEI +2CC4 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC SHEI +2CC6 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC ESH +2CC8 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER AKHMIMIC KHEI +2CCA ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER DIALECT-P HORI +2CCC ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC HORI +2CCE ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC HA +2CD0 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER L-SHAPED HA +2CD2 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC HEI +2CD4 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC HAT +2CD6 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC GANGIA +2CD8 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC DJA +2CDA ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD COPTIC SHIMA +2CDC ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD NUBIAN SHIMA +2CDE ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD NUBIAN NGI +2CE0 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD NUBIAN NYI +2CE2 ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER OLD NUBIAN WAU +2CEB ; Changes_When_Lowercased # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI +2CED ; Changes_When_Lowercased # L& COPTIC 
CAPITAL LETTER CRYPTOGRAMMIC GANGIA +A640 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ZEMLYA +A642 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER DZELO +A644 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER REVERSED DZE +A646 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER IOTA +A648 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER DJERV +A64A ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER MONOGRAPH UK +A64C ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER BROAD OMEGA +A64E ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER NEUTRAL YER +A650 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER YERU WITH BACK YER +A652 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER IOTIFIED YAT +A654 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER REVERSED YU +A656 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER IOTIFIED A +A658 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER CLOSED LITTLE YUS +A65A ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER BLENDED YUS +A65C ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER IOTIFIED CLOSED LITTLE YUS +A65E ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER YN +A662 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER SOFT DE +A664 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER SOFT EL +A666 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER SOFT EM +A668 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER MONOCULAR O +A66A ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER BINOCULAR O +A66C ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER DOUBLE MONOCULAR O +A680 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER DWE +A682 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER DZWE +A684 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER ZHWE +A686 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER CCHE +A688 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER DZZE +A68A ; Changes_When_Lowercased # L& 
CYRILLIC CAPITAL LETTER TE WITH MIDDLE HOOK +A68C ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER TWE +A68E ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER TSWE +A690 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER TSSE +A692 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER TCHE +A694 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER HWE +A696 ; Changes_When_Lowercased # L& CYRILLIC CAPITAL LETTER SHWE +A722 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF +A724 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER EGYPTOLOGICAL AIN +A726 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER HENG +A728 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER TZ +A72A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER TRESILLO +A72C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER CUATRILLO +A72E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER CUATRILLO WITH COMMA +A732 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER AA +A734 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER AO +A736 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER AU +A738 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER AV +A73A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER AV WITH HORIZONTAL BAR +A73C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER AY +A73E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER REVERSED C WITH DOT +A740 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER K WITH STROKE +A742 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER K WITH DIAGONAL STROKE +A744 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER K WITH STROKE AND DIAGONAL STROKE +A746 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER BROKEN L +A748 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER L WITH HIGH STROKE +A74A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH LONG STROKE OVERLAY +A74C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER O WITH LOOP +A74E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER OO 
+A750 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER P WITH STROKE THROUGH DESCENDER +A752 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER P WITH FLOURISH +A754 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER P WITH SQUIRREL TAIL +A756 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Q WITH STROKE THROUGH DESCENDER +A758 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER Q WITH DIAGONAL STROKE +A75A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER R ROTUNDA +A75C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER RUM ROTUNDA +A75E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER V WITH DIAGONAL STROKE +A760 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER VY +A762 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER VISIGOTHIC Z +A764 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER THORN WITH STROKE +A766 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER THORN WITH STROKE THROUGH DESCENDER +A768 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER VEND +A76A ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER ET +A76C ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER IS +A76E ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER CON +A779 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER INSULAR D +A77B ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER INSULAR F +A77D..A77E ; Changes_When_Lowercased # L& [2] LATIN CAPITAL LETTER INSULAR G..LATIN CAPITAL LETTER TURNED INSULAR G +A780 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER TURNED L +A782 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER INSULAR R +A784 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER INSULAR S +A786 ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER INSULAR T +A78B ; Changes_When_Lowercased # L& LATIN CAPITAL LETTER SALTILLO +FF21..FF3A ; Changes_When_Lowercased # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN CAPITAL LETTER Z +10400..10427 ; Changes_When_Lowercased # L& [40] DESERET CAPITAL LETTER LONG I..DESERET CAPITAL LETTER EW + +# 
Total code points: 1029
+
+# ================================================
+
+# Derived Property: Changes_When_Uppercased (CWU)
+# Characters whose normalized forms are not stable under a toUppercase mapping.
+# For more information, see D125 in Section 3.13, "Default Case Algorithms".
+# Changes_When_Uppercased(X) is true when toUppercase(toNFD(X)) != toNFD(X)
+
+0061..007A ; Changes_When_Uppercased # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z
+00B5 ; Changes_When_Uppercased # L& MICRO SIGN
+00DF..00F6 ; Changes_When_Uppercased # L& [24] LATIN SMALL LETTER SHARP S..LATIN SMALL LETTER O WITH DIAERESIS
+00F8..00FF ; Changes_When_Uppercased # L& [8] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER Y WITH DIAERESIS
+0101 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH MACRON
+0103 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH BREVE
+0105 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH OGONEK
+0107 ; Changes_When_Uppercased # L& LATIN SMALL LETTER C WITH ACUTE
+0109 ; Changes_When_Uppercased # L& LATIN SMALL LETTER C WITH CIRCUMFLEX
+010B ; Changes_When_Uppercased # L& LATIN SMALL LETTER C WITH DOT ABOVE
+010D ; Changes_When_Uppercased # L& LATIN SMALL LETTER C WITH CARON
+010F ; Changes_When_Uppercased # L& LATIN SMALL LETTER D WITH CARON
+0111 ; Changes_When_Uppercased # L& LATIN SMALL LETTER D WITH STROKE
+0113 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH MACRON
+0115 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH BREVE
+0117 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH DOT ABOVE
+0119 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH OGONEK
+011B ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH CARON
+011D ; Changes_When_Uppercased # L& LATIN SMALL LETTER G WITH CIRCUMFLEX
+011F ; Changes_When_Uppercased # L& LATIN SMALL LETTER G WITH BREVE
+0121 ; Changes_When_Uppercased # L& LATIN SMALL LETTER G WITH DOT ABOVE
+0123 ; Changes_When_Uppercased # L& LATIN SMALL LETTER G WITH CEDILLA
+0125 ; Changes_When_Uppercased # L& LATIN SMALL LETTER H WITH CIRCUMFLEX
+0127 ; Changes_When_Uppercased # L& LATIN SMALL LETTER H WITH STROKE
+0129 ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH TILDE
+012B ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH MACRON
+012D ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH BREVE
+012F ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH OGONEK
+0131 ; Changes_When_Uppercased # L& LATIN SMALL LETTER DOTLESS I
+0133 ; Changes_When_Uppercased # L& LATIN SMALL LIGATURE IJ
+0135 ; Changes_When_Uppercased # L& LATIN SMALL LETTER J WITH CIRCUMFLEX
+0137 ; Changes_When_Uppercased # L& LATIN SMALL LETTER K WITH CEDILLA
+013A ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH ACUTE
+013C ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH CEDILLA
+013E ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH CARON
+0140 ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH MIDDLE DOT
+0142 ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH STROKE
+0144 ; Changes_When_Uppercased # L& LATIN SMALL LETTER N WITH ACUTE
+0146 ; Changes_When_Uppercased # L& LATIN SMALL LETTER N WITH CEDILLA
+0148..0149 ; Changes_When_Uppercased # L& [2] LATIN SMALL LETTER N WITH CARON..LATIN SMALL LETTER N PRECEDED BY APOSTROPHE
+014B ; Changes_When_Uppercased # L& LATIN SMALL LETTER ENG
+014D ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH MACRON
+014F ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH BREVE
+0151 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH DOUBLE ACUTE
+0153 ; Changes_When_Uppercased # L& LATIN SMALL LIGATURE OE
+0155 ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH ACUTE
+0157 ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH CEDILLA
+0159 ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH CARON
+015B ; Changes_When_Uppercased # L& LATIN SMALL LETTER S WITH ACUTE
+015D ; Changes_When_Uppercased # L& LATIN SMALL LETTER S WITH CIRCUMFLEX
+015F ; Changes_When_Uppercased # L& LATIN SMALL LETTER S WITH CEDILLA
+0161 ; Changes_When_Uppercased # L& LATIN SMALL LETTER S WITH CARON
+0163 ; Changes_When_Uppercased # L& LATIN SMALL LETTER T WITH CEDILLA
+0165 ; Changes_When_Uppercased # L& LATIN SMALL LETTER T WITH CARON
+0167 ; Changes_When_Uppercased # L& LATIN SMALL LETTER T WITH STROKE
+0169 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH TILDE
+016B ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH MACRON
+016D ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH BREVE
+016F ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH RING ABOVE
+0171 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH DOUBLE ACUTE
+0173 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH OGONEK
+0175 ; Changes_When_Uppercased # L& LATIN SMALL LETTER W WITH CIRCUMFLEX
+0177 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Y WITH CIRCUMFLEX
+017A ; Changes_When_Uppercased # L& LATIN SMALL LETTER Z WITH ACUTE
+017C ; Changes_When_Uppercased # L& LATIN SMALL LETTER Z WITH DOT ABOVE
+017E..0180 ; Changes_When_Uppercased # L& [3] LATIN SMALL LETTER Z WITH CARON..LATIN SMALL LETTER B WITH STROKE
+0183 ; Changes_When_Uppercased # L& LATIN SMALL LETTER B WITH TOPBAR
+0185 ; Changes_When_Uppercased # L& LATIN SMALL LETTER TONE SIX
+0188 ; Changes_When_Uppercased # L& LATIN SMALL LETTER C WITH HOOK
+018C ; Changes_When_Uppercased # L& LATIN SMALL LETTER D WITH TOPBAR
+0192 ; Changes_When_Uppercased # L& LATIN SMALL LETTER F WITH HOOK
+0195 ; Changes_When_Uppercased # L& LATIN SMALL LETTER HV
+0199..019A ; Changes_When_Uppercased # L& [2] LATIN SMALL LETTER K WITH HOOK..LATIN SMALL LETTER L WITH BAR
+019E ; Changes_When_Uppercased # L& LATIN SMALL LETTER N WITH LONG RIGHT LEG
+01A1 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH HORN
+01A3 ; Changes_When_Uppercased # L& LATIN SMALL LETTER OI
+01A5 ; Changes_When_Uppercased # L& LATIN SMALL LETTER P WITH HOOK
+01A8 ; Changes_When_Uppercased # L& LATIN SMALL LETTER TONE TWO
+01AD ; Changes_When_Uppercased # L& LATIN SMALL LETTER T WITH HOOK
+01B0 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH HORN
+01B4 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Y WITH HOOK
+01B6 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Z WITH STROKE
+01B9 ; Changes_When_Uppercased # L& LATIN SMALL LETTER EZH REVERSED
+01BD ; Changes_When_Uppercased # L& LATIN SMALL LETTER TONE FIVE
+01BF ; Changes_When_Uppercased # L& LATIN LETTER WYNN
+01C5..01C6 ; Changes_When_Uppercased # L& [2] LATIN CAPITAL LETTER D WITH SMALL LETTER Z WITH CARON..LATIN SMALL LETTER DZ WITH CARON
+01C8..01C9 ; Changes_When_Uppercased # L& [2] LATIN CAPITAL LETTER L WITH SMALL LETTER J..LATIN SMALL LETTER LJ
+01CB..01CC ; Changes_When_Uppercased # L& [2] LATIN CAPITAL LETTER N WITH SMALL LETTER J..LATIN SMALL LETTER NJ
+01CE ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH CARON
+01D0 ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH CARON
+01D2 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH CARON
+01D4 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH CARON
+01D6 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH DIAERESIS AND MACRON
+01D8 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH DIAERESIS AND ACUTE
+01DA ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH DIAERESIS AND CARON
+01DC..01DD ; Changes_When_Uppercased # L& [2] LATIN SMALL LETTER U WITH DIAERESIS AND GRAVE..LATIN SMALL LETTER TURNED E
+01DF ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH DIAERESIS AND MACRON
+01E1 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH DOT ABOVE AND MACRON
+01E3 ; Changes_When_Uppercased # L& LATIN SMALL LETTER AE WITH MACRON
+01E5 ; Changes_When_Uppercased # L& LATIN SMALL LETTER G WITH STROKE
+01E7 ; Changes_When_Uppercased # L& LATIN SMALL LETTER G WITH CARON
+01E9 ; Changes_When_Uppercased # L& LATIN SMALL LETTER K WITH CARON
+01EB ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH OGONEK
+01ED ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH OGONEK AND MACRON
+01EF..01F0 ; Changes_When_Uppercased # L& [2] LATIN SMALL LETTER EZH WITH CARON..LATIN SMALL LETTER J WITH CARON
+01F2..01F3 ; Changes_When_Uppercased # L& [2] LATIN CAPITAL LETTER D WITH SMALL LETTER Z..LATIN SMALL LETTER DZ
+01F5 ; Changes_When_Uppercased # L& LATIN SMALL LETTER G WITH ACUTE
+01F9 ; Changes_When_Uppercased # L& LATIN SMALL LETTER N WITH GRAVE
+01FB ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH RING ABOVE AND ACUTE
+01FD ; Changes_When_Uppercased # L& LATIN SMALL LETTER AE WITH ACUTE
+01FF ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH STROKE AND ACUTE
+0201 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH DOUBLE GRAVE
+0203 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH INVERTED BREVE
+0205 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH DOUBLE GRAVE
+0207 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH INVERTED BREVE
+0209 ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH DOUBLE GRAVE
+020B ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH INVERTED BREVE
+020D ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH DOUBLE GRAVE
+020F ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH INVERTED BREVE
+0211 ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH DOUBLE GRAVE
+0213 ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH INVERTED BREVE
+0215 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH DOUBLE GRAVE
+0217 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH INVERTED BREVE
+0219 ; Changes_When_Uppercased # L& LATIN SMALL LETTER S WITH COMMA BELOW
+021B ; Changes_When_Uppercased # L& LATIN SMALL LETTER T WITH COMMA BELOW
+021D ; Changes_When_Uppercased # L& LATIN SMALL LETTER YOGH
+021F ; Changes_When_Uppercased # L& LATIN SMALL LETTER H WITH CARON
+0223 ; Changes_When_Uppercased # L& LATIN SMALL LETTER OU
+0225 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Z WITH HOOK
+0227 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH DOT ABOVE
+0229 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH CEDILLA
+022B ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH DIAERESIS AND MACRON
+022D ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH TILDE AND MACRON
+022F ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH DOT ABOVE
+0231 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH DOT ABOVE AND MACRON
+0233 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Y WITH MACRON
+023C ; Changes_When_Uppercased # L& LATIN SMALL LETTER C WITH STROKE
+023F..0240 ; Changes_When_Uppercased # L& [2] LATIN SMALL LETTER S WITH SWASH TAIL..LATIN SMALL LETTER Z WITH SWASH TAIL
+0242 ; Changes_When_Uppercased # L& LATIN SMALL LETTER GLOTTAL STOP
+0247 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH STROKE
+0249 ; Changes_When_Uppercased # L& LATIN SMALL LETTER J WITH STROKE
+024B ; Changes_When_Uppercased # L& LATIN SMALL LETTER Q WITH HOOK TAIL
+024D ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH STROKE
+024F..0254 ; Changes_When_Uppercased # L& [6] LATIN SMALL LETTER Y WITH STROKE..LATIN SMALL LETTER OPEN O
+0256..0257 ; Changes_When_Uppercased # L& [2] LATIN SMALL LETTER D WITH TAIL..LATIN SMALL LETTER D WITH HOOK
+0259 ; Changes_When_Uppercased # L& LATIN SMALL LETTER SCHWA
+025B ; Changes_When_Uppercased # L& LATIN SMALL LETTER OPEN E
+0260 ; Changes_When_Uppercased # L& LATIN SMALL LETTER G WITH HOOK
+0263 ; Changes_When_Uppercased # L& LATIN SMALL LETTER GAMMA
+0268..0269 ; Changes_When_Uppercased # L& [2] LATIN SMALL LETTER I WITH STROKE..LATIN SMALL LETTER IOTA
+026B ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH MIDDLE TILDE
+026F ; Changes_When_Uppercased # L& LATIN SMALL LETTER TURNED M
+0271..0272 ; Changes_When_Uppercased # L& [2] LATIN SMALL LETTER M WITH HOOK..LATIN SMALL LETTER N WITH LEFT HOOK
+0275 ; Changes_When_Uppercased # L& LATIN SMALL LETTER BARRED O
+027D ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH TAIL
+0280 ; Changes_When_Uppercased # L& LATIN LETTER SMALL CAPITAL R
+0283 ; Changes_When_Uppercased # L& LATIN SMALL LETTER ESH
+0288..028C ; Changes_When_Uppercased # L& [5] LATIN SMALL LETTER T WITH RETROFLEX HOOK..LATIN SMALL LETTER TURNED V
+0292 ; Changes_When_Uppercased # L& LATIN SMALL LETTER EZH
+0345 ; Changes_When_Uppercased # Mn COMBINING GREEK YPOGEGRAMMENI
+0371 ; Changes_When_Uppercased # L& GREEK SMALL LETTER HETA
+0373 ; Changes_When_Uppercased # L& GREEK SMALL LETTER ARCHAIC SAMPI
+0377 ; Changes_When_Uppercased # L& GREEK SMALL LETTER PAMPHYLIAN DIGAMMA
+037B..037D ; Changes_When_Uppercased # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL
+0390 ; Changes_When_Uppercased # L& GREEK SMALL LETTER IOTA WITH DIALYTIKA AND TONOS
+03AC..03CE ; Changes_When_Uppercased # L& [35] GREEK SMALL LETTER ALPHA WITH TONOS..GREEK SMALL LETTER OMEGA WITH TONOS
+03D0..03D1 ; Changes_When_Uppercased # L& [2] GREEK BETA SYMBOL..GREEK THETA SYMBOL
+03D5..03D7 ; Changes_When_Uppercased # L& [3] GREEK PHI SYMBOL..GREEK KAI SYMBOL
+03D9 ; Changes_When_Uppercased # L& GREEK SMALL LETTER ARCHAIC KOPPA
+03DB ; Changes_When_Uppercased # L& GREEK SMALL LETTER STIGMA
+03DD ; Changes_When_Uppercased # L& GREEK SMALL LETTER DIGAMMA
+03DF ; Changes_When_Uppercased # L& GREEK SMALL LETTER KOPPA
+03E1 ; Changes_When_Uppercased # L& GREEK SMALL LETTER SAMPI
+03E3 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER SHEI
+03E5 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER FEI
+03E7 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER KHEI
+03E9 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER HORI
+03EB ; Changes_When_Uppercased # L& COPTIC SMALL LETTER GANGIA
+03ED ; Changes_When_Uppercased # L& COPTIC SMALL LETTER SHIMA
+03EF..03F2 ; Changes_When_Uppercased # L& [4] COPTIC SMALL LETTER DEI..GREEK LUNATE SIGMA SYMBOL
+03F5 ; Changes_When_Uppercased # L& GREEK LUNATE EPSILON SYMBOL
+03F8 ; Changes_When_Uppercased # L& GREEK SMALL LETTER SHO
+03FB ; Changes_When_Uppercased # L& GREEK SMALL LETTER SAN
+0430..045F ; Changes_When_Uppercased # L& [48] CYRILLIC SMALL LETTER A..CYRILLIC SMALL LETTER DZHE
+0461 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER OMEGA
+0463 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER YAT
+0465 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER IOTIFIED E
+0467 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER LITTLE YUS
+0469 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER IOTIFIED LITTLE YUS
+046B ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER BIG YUS
+046D ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER IOTIFIED BIG YUS
+046F ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KSI
+0471 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER PSI
+0473 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER FITA
+0475 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER IZHITSA
+0477 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER IZHITSA WITH DOUBLE GRAVE ACCENT
+0479 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER UK
+047B ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ROUND OMEGA
+047D ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER OMEGA WITH TITLO
+047F ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER OT
+0481 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KOPPA
+048B ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER SHORT I WITH TAIL
+048D ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER SEMISOFT SIGN
+048F ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ER WITH TICK
+0491 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER GHE WITH UPTURN
+0493 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER GHE WITH STROKE
+0495 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER GHE WITH MIDDLE HOOK
+0497 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ZHE WITH DESCENDER
+0499 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ZE WITH DESCENDER
+049B ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KA WITH DESCENDER
+049D ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KA WITH VERTICAL STROKE
+049F ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KA WITH STROKE
+04A1 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER BASHKIR KA
+04A3 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER EN WITH DESCENDER
+04A5 ; Changes_When_Uppercased # L& CYRILLIC SMALL LIGATURE EN GHE
+04A7 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER PE WITH MIDDLE HOOK
+04A9 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ABKHASIAN HA
+04AB ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ES WITH DESCENDER
+04AD ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER TE WITH DESCENDER
+04AF ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER STRAIGHT U
+04B1 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER STRAIGHT U WITH STROKE
+04B3 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER HA WITH DESCENDER
+04B5 ; Changes_When_Uppercased # L& CYRILLIC SMALL LIGATURE TE TSE
+04B7 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER CHE WITH DESCENDER
+04B9 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER CHE WITH VERTICAL STROKE
+04BB ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER SHHA
+04BD ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ABKHASIAN CHE
+04BF ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ABKHASIAN CHE WITH DESCENDER
+04C2 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ZHE WITH BREVE
+04C4 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KA WITH HOOK
+04C6 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER EL WITH TAIL
+04C8 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER EN WITH HOOK
+04CA ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER EN WITH TAIL
+04CC ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KHAKASSIAN CHE
+04CE..04CF ; Changes_When_Uppercased # L& [2] CYRILLIC SMALL LETTER EM WITH TAIL..CYRILLIC SMALL LETTER PALOCHKA
+04D1 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER A WITH BREVE
+04D3 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER A WITH DIAERESIS
+04D5 ; Changes_When_Uppercased # L& CYRILLIC SMALL LIGATURE A IE
+04D7 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER IE WITH BREVE
+04D9 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER SCHWA
+04DB ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER SCHWA WITH DIAERESIS
+04DD ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ZHE WITH DIAERESIS
+04DF ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ZE WITH DIAERESIS
+04E1 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ABKHASIAN DZE
+04E3 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER I WITH MACRON
+04E5 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER I WITH DIAERESIS
+04E7 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER O WITH DIAERESIS
+04E9 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER BARRED O
+04EB ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER BARRED O WITH DIAERESIS
+04ED ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER E WITH DIAERESIS
+04EF ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER U WITH MACRON
+04F1 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER U WITH DIAERESIS
+04F3 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER U WITH DOUBLE ACUTE
+04F5 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER CHE WITH DIAERESIS
+04F7 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER GHE WITH DESCENDER
+04F9 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER YERU WITH DIAERESIS
+04FB ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER GHE WITH STROKE AND HOOK
+04FD ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER HA WITH HOOK
+04FF ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER HA WITH STROKE
+0501 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KOMI DE
+0503 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KOMI DJE
+0505 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KOMI ZJE
+0507 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KOMI DZJE
+0509 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KOMI LJE
+050B ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KOMI NJE
+050D ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KOMI SJE
+050F ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER KOMI TJE
+0511 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER REVERSED ZE
+0513 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER EL WITH HOOK
+0515 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER LHA
+0517 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER RHA
+0519 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER YAE
+051B ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER QA
+051D ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER WE
+051F ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ALEUT KA
+0521 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER EL WITH MIDDLE HOOK
+0523 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER EN WITH MIDDLE HOOK
+0525 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER PE WITH DESCENDER
+0561..0587 ; Changes_When_Uppercased # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN
+1D79 ; Changes_When_Uppercased # L& LATIN SMALL LETTER INSULAR G
+1D7D ; Changes_When_Uppercased # L& LATIN SMALL LETTER P WITH STROKE
+1E01 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH RING BELOW
+1E03 ; Changes_When_Uppercased # L& LATIN SMALL LETTER B WITH DOT ABOVE
+1E05 ; Changes_When_Uppercased # L& LATIN SMALL LETTER B WITH DOT BELOW
+1E07 ; Changes_When_Uppercased # L& LATIN SMALL LETTER B WITH LINE BELOW
+1E09 ; Changes_When_Uppercased # L& LATIN SMALL LETTER C WITH CEDILLA AND ACUTE
+1E0B ; Changes_When_Uppercased # L& LATIN SMALL LETTER D WITH DOT ABOVE
+1E0D ; Changes_When_Uppercased # L& LATIN SMALL LETTER D WITH DOT BELOW
+1E0F ; Changes_When_Uppercased # L& LATIN SMALL LETTER D WITH LINE BELOW
+1E11 ; Changes_When_Uppercased # L& LATIN SMALL LETTER D WITH CEDILLA
+1E13 ; Changes_When_Uppercased # L& LATIN SMALL LETTER D WITH CIRCUMFLEX BELOW
+1E15 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH MACRON AND GRAVE
+1E17 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH MACRON AND ACUTE
+1E19 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX BELOW
+1E1B ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH TILDE BELOW
+1E1D ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH CEDILLA AND BREVE
+1E1F ; Changes_When_Uppercased # L& LATIN SMALL LETTER F WITH DOT ABOVE
+1E21 ; Changes_When_Uppercased # L& LATIN SMALL LETTER G WITH MACRON
+1E23 ; Changes_When_Uppercased # L& LATIN SMALL LETTER H WITH DOT ABOVE
+1E25 ; Changes_When_Uppercased # L& LATIN SMALL LETTER H WITH DOT BELOW
+1E27 ; Changes_When_Uppercased # L& LATIN SMALL LETTER H WITH DIAERESIS
+1E29 ; Changes_When_Uppercased # L& LATIN SMALL LETTER H WITH CEDILLA
+1E2B ; Changes_When_Uppercased # L& LATIN SMALL LETTER H WITH BREVE BELOW
+1E2D ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH TILDE BELOW
+1E2F ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH DIAERESIS AND ACUTE
+1E31 ; Changes_When_Uppercased # L& LATIN SMALL LETTER K WITH ACUTE
+1E33 ; Changes_When_Uppercased # L& LATIN SMALL LETTER K WITH DOT BELOW
+1E35 ; Changes_When_Uppercased # L& LATIN SMALL LETTER K WITH LINE BELOW
+1E37 ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH DOT BELOW
+1E39 ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH DOT BELOW AND MACRON
+1E3B ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH LINE BELOW
+1E3D ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH CIRCUMFLEX BELOW
+1E3F ; Changes_When_Uppercased # L& LATIN SMALL LETTER M WITH ACUTE
+1E41 ; Changes_When_Uppercased # L& LATIN SMALL LETTER M WITH DOT ABOVE
+1E43 ; Changes_When_Uppercased # L& LATIN SMALL LETTER M WITH DOT BELOW
+1E45 ; Changes_When_Uppercased # L& LATIN SMALL LETTER N WITH DOT ABOVE
+1E47 ; Changes_When_Uppercased # L& LATIN SMALL LETTER N WITH DOT BELOW
+1E49 ; Changes_When_Uppercased # L& LATIN SMALL LETTER N WITH LINE BELOW
+1E4B ; Changes_When_Uppercased # L& LATIN SMALL LETTER N WITH CIRCUMFLEX BELOW
+1E4D ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH TILDE AND ACUTE
+1E4F ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH TILDE AND DIAERESIS
+1E51 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH MACRON AND GRAVE
+1E53 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH MACRON AND ACUTE
+1E55 ; Changes_When_Uppercased # L& LATIN SMALL LETTER P WITH ACUTE
+1E57 ; Changes_When_Uppercased # L& LATIN SMALL LETTER P WITH DOT ABOVE
+1E59 ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH DOT ABOVE
+1E5B ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH DOT BELOW
+1E5D ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH DOT BELOW AND MACRON
+1E5F ; Changes_When_Uppercased # L& LATIN SMALL LETTER R WITH LINE BELOW
+1E61 ; Changes_When_Uppercased # L& LATIN SMALL LETTER S WITH DOT ABOVE
+1E63 ; Changes_When_Uppercased # L& LATIN SMALL LETTER S WITH DOT BELOW
+1E65 ; Changes_When_Uppercased # L& LATIN SMALL LETTER S WITH ACUTE AND DOT ABOVE
+1E67 ; Changes_When_Uppercased # L& LATIN SMALL LETTER S WITH CARON AND DOT ABOVE
+1E69 ; Changes_When_Uppercased # L& LATIN SMALL LETTER S WITH DOT BELOW AND DOT ABOVE
+1E6B ; Changes_When_Uppercased # L& LATIN SMALL LETTER T WITH DOT ABOVE
+1E6D ; Changes_When_Uppercased # L& LATIN SMALL LETTER T WITH DOT BELOW
+1E6F ; Changes_When_Uppercased # L& LATIN SMALL LETTER T WITH LINE BELOW
+1E71 ; Changes_When_Uppercased # L& LATIN SMALL LETTER T WITH CIRCUMFLEX BELOW
+1E73 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH DIAERESIS BELOW
+1E75 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH TILDE BELOW
+1E77 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH CIRCUMFLEX BELOW
+1E79 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH TILDE AND ACUTE
+1E7B ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH MACRON AND DIAERESIS
+1E7D ; Changes_When_Uppercased # L& LATIN SMALL LETTER V WITH TILDE
+1E7F ; Changes_When_Uppercased # L& LATIN SMALL LETTER V WITH DOT BELOW
+1E81 ; Changes_When_Uppercased # L& LATIN SMALL LETTER W WITH GRAVE
+1E83 ; Changes_When_Uppercased # L& LATIN SMALL LETTER W WITH ACUTE
+1E85 ; Changes_When_Uppercased # L& LATIN SMALL LETTER W WITH DIAERESIS
+1E87 ; Changes_When_Uppercased # L& LATIN SMALL LETTER W WITH DOT ABOVE
+1E89 ; Changes_When_Uppercased # L& LATIN SMALL LETTER W WITH DOT BELOW
+1E8B ; Changes_When_Uppercased # L& LATIN SMALL LETTER X WITH DOT ABOVE
+1E8D ; Changes_When_Uppercased # L& LATIN SMALL LETTER X WITH DIAERESIS
+1E8F ; Changes_When_Uppercased # L& LATIN SMALL LETTER Y WITH DOT ABOVE
+1E91 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Z WITH CIRCUMFLEX
+1E93 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Z WITH DOT BELOW
+1E95..1E9B ; Changes_When_Uppercased # L& [7] LATIN SMALL LETTER Z WITH LINE BELOW..LATIN SMALL LETTER LONG S WITH DOT ABOVE
+1EA1 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH DOT BELOW
+1EA3 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH HOOK ABOVE
+1EA5 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND ACUTE
+1EA7 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND GRAVE
+1EA9 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE
+1EAB ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND TILDE
+1EAD ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND DOT BELOW
+1EAF ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH BREVE AND ACUTE
+1EB1 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH BREVE AND GRAVE
+1EB3 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH BREVE AND HOOK ABOVE
+1EB5 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH BREVE AND TILDE
+1EB7 ; Changes_When_Uppercased # L& LATIN SMALL LETTER A WITH BREVE AND DOT BELOW
+1EB9 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH DOT BELOW
+1EBB ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH HOOK ABOVE
+1EBD ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH TILDE
+1EBF ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND ACUTE
+1EC1 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND GRAVE
+1EC3 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE
+1EC5 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND TILDE
+1EC7 ; Changes_When_Uppercased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND DOT BELOW
+1EC9 ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH HOOK ABOVE
+1ECB ; Changes_When_Uppercased # L& LATIN SMALL LETTER I WITH DOT BELOW
+1ECD ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH DOT BELOW
+1ECF ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH HOOK ABOVE
+1ED1 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND ACUTE
+1ED3 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND GRAVE
+1ED5 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE
+1ED7 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND TILDE
+1ED9 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND DOT BELOW
+1EDB ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH HORN AND ACUTE
+1EDD ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH HORN AND GRAVE
+1EDF ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH HORN AND HOOK ABOVE
+1EE1 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH HORN AND TILDE
+1EE3 ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH HORN AND DOT BELOW
+1EE5 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH DOT BELOW
+1EE7 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH HOOK ABOVE
+1EE9 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH HORN AND ACUTE
+1EEB ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH HORN AND GRAVE
+1EED ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH HORN AND HOOK ABOVE
+1EEF ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH HORN AND TILDE
+1EF1 ; Changes_When_Uppercased # L& LATIN SMALL LETTER U WITH HORN AND DOT BELOW
+1EF3 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Y WITH GRAVE
+1EF5 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Y WITH DOT BELOW
+1EF7 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Y WITH HOOK ABOVE
+1EF9 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Y WITH TILDE
+1EFB ; Changes_When_Uppercased # L& LATIN SMALL LETTER MIDDLE-WELSH LL
+1EFD ; Changes_When_Uppercased # L& LATIN SMALL LETTER MIDDLE-WELSH V
+1EFF..1F07 ; Changes_When_Uppercased # L& [9] LATIN SMALL LETTER Y WITH LOOP..GREEK SMALL LETTER ALPHA WITH DASIA AND PERISPOMENI
+1F10..1F15 ; Changes_When_Uppercased # L& [6] GREEK SMALL LETTER EPSILON WITH PSILI..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA
+1F20..1F27 ; Changes_When_Uppercased # L& [8] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER ETA WITH DASIA AND PERISPOMENI
+1F30..1F37 ; Changes_When_Uppercased # L& [8] GREEK SMALL LETTER IOTA WITH PSILI..GREEK SMALL LETTER IOTA WITH DASIA AND PERISPOMENI
+1F40..1F45 ; Changes_When_Uppercased # L& [6] GREEK SMALL LETTER OMICRON WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA
+1F50..1F57 ; Changes_When_Uppercased # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI
+1F60..1F67 ; Changes_When_Uppercased # L& [8] GREEK SMALL LETTER OMEGA WITH PSILI..GREEK SMALL LETTER OMEGA WITH DASIA AND PERISPOMENI
+1F70..1F7D ; Changes_When_Uppercased # L& [14] GREEK SMALL LETTER ALPHA WITH VARIA..GREEK SMALL LETTER OMEGA WITH OXIA
+1F80..1FB4 ; Changes_When_Uppercased # L& [53] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI
+1FB6..1FB7 ; Changes_When_Uppercased # L& [2] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK SMALL LETTER ALPHA WITH PERISPOMENI AND YPOGEGRAMMENI
+1FBC ; Changes_When_Uppercased # L& GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI
+1FBE ; Changes_When_Uppercased # L& GREEK PROSGEGRAMMENI
+1FC2..1FC4 ; Changes_When_Uppercased # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI
+1FC6..1FC7 ; Changes_When_Uppercased # L& [2] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK SMALL LETTER ETA WITH PERISPOMENI AND YPOGEGRAMMENI
+1FCC ; Changes_When_Uppercased # L& GREEK CAPITAL LETTER ETA WITH PROSGEGRAMMENI
+1FD0..1FD3 ; Changes_When_Uppercased # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA
+1FD6..1FD7 ; Changes_When_Uppercased # L& [2] GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND PERISPOMENI
+1FE0..1FE7 ; Changes_When_Uppercased # L& [8] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND PERISPOMENI
+1FF2..1FF4 ; Changes_When_Uppercased # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI
+1FF6..1FF7 ; Changes_When_Uppercased # L& [2] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK SMALL LETTER OMEGA WITH PERISPOMENI AND YPOGEGRAMMENI
+1FFC ; Changes_When_Uppercased # L& GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI
+214E ; Changes_When_Uppercased # L& TURNED SMALL F
+2170..217F ; Changes_When_Uppercased # Nl [16] SMALL ROMAN NUMERAL ONE..SMALL ROMAN NUMERAL ONE THOUSAND
+2184 ; Changes_When_Uppercased # L& LATIN SMALL LETTER REVERSED C
+24D0..24E9 ; Changes_When_Uppercased # So [26] CIRCLED LATIN SMALL LETTER A..CIRCLED LATIN SMALL LETTER Z
+2C30..2C5E ; Changes_When_Uppercased # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE
+2C61 ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH DOUBLE BAR
+2C65..2C66 ; Changes_When_Uppercased # L& [2] LATIN SMALL LETTER A WITH STROKE..LATIN SMALL LETTER T WITH DIAGONAL STROKE
+2C68 ; Changes_When_Uppercased # L& LATIN SMALL LETTER H WITH DESCENDER
+2C6A ; Changes_When_Uppercased # L& LATIN SMALL LETTER K WITH DESCENDER
+2C6C ; Changes_When_Uppercased # L& LATIN SMALL LETTER Z WITH DESCENDER
+2C73 ; Changes_When_Uppercased # L& LATIN SMALL LETTER W WITH HOOK
+2C76 ; Changes_When_Uppercased # L& LATIN SMALL LETTER HALF H
+2C81 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER ALFA
+2C83 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER VIDA
+2C85 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER GAMMA
+2C87 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER DALDA
+2C89 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER EIE
+2C8B ; Changes_When_Uppercased # L& COPTIC SMALL LETTER SOU
+2C8D ; Changes_When_Uppercased # L& COPTIC SMALL LETTER ZATA
+2C8F ; Changes_When_Uppercased # L& COPTIC SMALL LETTER HATE
+2C91 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER THETHE
+2C93 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER IAUDA
+2C95 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER KAPA
+2C97 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER LAULA
+2C99 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER MI
+2C9B ; Changes_When_Uppercased # L& COPTIC SMALL LETTER NI
+2C9D ; Changes_When_Uppercased # L& COPTIC SMALL LETTER KSI
+2C9F ; Changes_When_Uppercased # L& COPTIC SMALL LETTER O
+2CA1 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER PI
+2CA3 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER RO
+2CA5 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER SIMA
+2CA7 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER TAU
+2CA9 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER UA
+2CAB ; Changes_When_Uppercased # L& COPTIC SMALL LETTER FI
+2CAD ; Changes_When_Uppercased # L& COPTIC SMALL LETTER KHI
+2CAF ; Changes_When_Uppercased # L& COPTIC SMALL LETTER PSI
+2CB1 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OOU
+2CB3 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER DIALECT-P ALEF
+2CB5 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC AIN
+2CB7 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER CRYPTOGRAMMIC EIE
+2CB9 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER DIALECT-P KAPA
+2CBB ; Changes_When_Uppercased # L& COPTIC SMALL LETTER DIALECT-P NI
+2CBD ; Changes_When_Uppercased # L& COPTIC SMALL LETTER CRYPTOGRAMMIC NI
+2CBF ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC OOU
+2CC1 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER SAMPI
+2CC3 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER CROSSED SHEI
+2CC5 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC SHEI
+2CC7 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC ESH
+2CC9 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER AKHMIMIC KHEI
+2CCB ; Changes_When_Uppercased # L& COPTIC SMALL LETTER DIALECT-P HORI
+2CCD ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC HORI
+2CCF ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC HA
+2CD1 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER L-SHAPED HA
+2CD3 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC HEI
+2CD5 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC HAT
+2CD7 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC GANGIA
+2CD9 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC DJA
+2CDB ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD COPTIC SHIMA
+2CDD ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD NUBIAN SHIMA
+2CDF ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD NUBIAN NGI
+2CE1 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD NUBIAN NYI
+2CE3 ; Changes_When_Uppercased # L& COPTIC SMALL LETTER OLD NUBIAN WAU
+2CEC ; Changes_When_Uppercased # L& COPTIC SMALL LETTER CRYPTOGRAMMIC SHEI
+2CEE ; Changes_When_Uppercased # L& COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA
+2D00..2D25 ; Changes_When_Uppercased # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE
+A641 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ZEMLYA
+A643 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER DZELO
+A645 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER REVERSED DZE
+A647 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER IOTA
+A649 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER DJERV
+A64B ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER MONOGRAPH UK
+A64D ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER BROAD OMEGA
+A64F ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER NEUTRAL YER
+A651 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER YERU WITH BACK YER
+A653 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER IOTIFIED YAT
+A655 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER REVERSED YU
+A657 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER IOTIFIED A
+A659 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER CLOSED LITTLE YUS
+A65B ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER BLENDED YUS
+A65D ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER IOTIFIED CLOSED LITTLE YUS
+A65F ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER YN
+A663 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER SOFT DE
+A665 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER SOFT EL
+A667 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER SOFT EM
+A669 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER MONOCULAR O
+A66B ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER BINOCULAR O
+A66D ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER DOUBLE MONOCULAR O
+A681 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER DWE
+A683 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER DZWE
+A685 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER ZHWE
+A687 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER CCHE
+A689 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER DZZE
+A68B ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER TE WITH MIDDLE HOOK
+A68D ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER TWE
+A68F ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER TSWE
+A691 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER TSSE
+A693 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER TCHE
+A695 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER HWE
+A697 ; Changes_When_Uppercased # L& CYRILLIC SMALL LETTER SHWE
+A723 ; Changes_When_Uppercased # L& LATIN SMALL LETTER EGYPTOLOGICAL ALEF
+A725 ; Changes_When_Uppercased # L& LATIN SMALL LETTER EGYPTOLOGICAL AIN
+A727 ; Changes_When_Uppercased # L& LATIN SMALL LETTER HENG
+A729 ; Changes_When_Uppercased # L& LATIN SMALL LETTER TZ
+A72B ; Changes_When_Uppercased # L& LATIN SMALL LETTER TRESILLO
+A72D ; Changes_When_Uppercased # L& LATIN SMALL LETTER CUATRILLO
+A72F ; Changes_When_Uppercased # L& LATIN SMALL LETTER CUATRILLO WITH COMMA
+A733 ; Changes_When_Uppercased # L& LATIN SMALL LETTER AA
+A735 ; Changes_When_Uppercased # L& LATIN SMALL LETTER AO
+A737 ; Changes_When_Uppercased # L& LATIN SMALL LETTER AU
+A739 ; Changes_When_Uppercased # L& LATIN SMALL LETTER AV
+A73B ; Changes_When_Uppercased # L& LATIN SMALL LETTER AV WITH HORIZONTAL BAR
+A73D ; Changes_When_Uppercased # L& LATIN SMALL LETTER AY
+A73F ; Changes_When_Uppercased # L& LATIN SMALL LETTER REVERSED C WITH DOT
+A741 ; Changes_When_Uppercased # L& LATIN SMALL LETTER K WITH STROKE
+A743 ; Changes_When_Uppercased # L& LATIN SMALL LETTER K WITH DIAGONAL STROKE
+A745 ; Changes_When_Uppercased # L& LATIN SMALL LETTER K WITH STROKE AND DIAGONAL STROKE
+A747 ; Changes_When_Uppercased # L& LATIN SMALL LETTER BROKEN L
+A749 ; Changes_When_Uppercased # L& LATIN SMALL LETTER L WITH HIGH STROKE
+A74B ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH LONG STROKE OVERLAY
+A74D ; Changes_When_Uppercased # L& LATIN SMALL LETTER O WITH LOOP
+A74F ;
Changes_When_Uppercased # L& LATIN SMALL LETTER OO +A751 ; Changes_When_Uppercased # L& LATIN SMALL LETTER P WITH STROKE THROUGH DESCENDER +A753 ; Changes_When_Uppercased # L& LATIN SMALL LETTER P WITH FLOURISH +A755 ; Changes_When_Uppercased # L& LATIN SMALL LETTER P WITH SQUIRREL TAIL +A757 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Q WITH STROKE THROUGH DESCENDER +A759 ; Changes_When_Uppercased # L& LATIN SMALL LETTER Q WITH DIAGONAL STROKE +A75B ; Changes_When_Uppercased # L& LATIN SMALL LETTER R ROTUNDA +A75D ; Changes_When_Uppercased # L& LATIN SMALL LETTER RUM ROTUNDA +A75F ; Changes_When_Uppercased # L& LATIN SMALL LETTER V WITH DIAGONAL STROKE +A761 ; Changes_When_Uppercased # L& LATIN SMALL LETTER VY +A763 ; Changes_When_Uppercased # L& LATIN SMALL LETTER VISIGOTHIC Z +A765 ; Changes_When_Uppercased # L& LATIN SMALL LETTER THORN WITH STROKE +A767 ; Changes_When_Uppercased # L& LATIN SMALL LETTER THORN WITH STROKE THROUGH DESCENDER +A769 ; Changes_When_Uppercased # L& LATIN SMALL LETTER VEND +A76B ; Changes_When_Uppercased # L& LATIN SMALL LETTER ET +A76D ; Changes_When_Uppercased # L& LATIN SMALL LETTER IS +A76F ; Changes_When_Uppercased # L& LATIN SMALL LETTER CON +A77A ; Changes_When_Uppercased # L& LATIN SMALL LETTER INSULAR D +A77C ; Changes_When_Uppercased # L& LATIN SMALL LETTER INSULAR F +A77F ; Changes_When_Uppercased # L& LATIN SMALL LETTER TURNED INSULAR G +A781 ; Changes_When_Uppercased # L& LATIN SMALL LETTER TURNED L +A783 ; Changes_When_Uppercased # L& LATIN SMALL LETTER INSULAR R +A785 ; Changes_When_Uppercased # L& LATIN SMALL LETTER INSULAR S +A787 ; Changes_When_Uppercased # L& LATIN SMALL LETTER INSULAR T +A78C ; Changes_When_Uppercased # L& LATIN SMALL LETTER SALTILLO +FB00..FB06 ; Changes_When_Uppercased # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; Changes_When_Uppercased # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FF41..FF5A ; Changes_When_Uppercased # L& [26] FULLWIDTH 
LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z +10428..1044F ; Changes_When_Uppercased # L& [40] DESERET SMALL LETTER LONG I..DESERET SMALL LETTER EW + +# Total code points: 1112 + +# ================================================ + +# Derived Property: Changes_When_Titlecased (CWT) +# Characters whose normalized forms are not stable under a toTitlecase mapping. +# For more information, see D126 in Section 3.13, "Default Case Algorithms". +# Changes_When_Titlecased(X) is true when toTitlecase(toNFD(X)) != toNFD(X) + +0061..007A ; Changes_When_Titlecased # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z +00B5 ; Changes_When_Titlecased # L& MICRO SIGN +00DF..00F6 ; Changes_When_Titlecased # L& [24] LATIN SMALL LETTER SHARP S..LATIN SMALL LETTER O WITH DIAERESIS +00F8..00FF ; Changes_When_Titlecased # L& [8] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER Y WITH DIAERESIS +0101 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH MACRON +0103 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH BREVE +0105 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH OGONEK +0107 ; Changes_When_Titlecased # L& LATIN SMALL LETTER C WITH ACUTE +0109 ; Changes_When_Titlecased # L& LATIN SMALL LETTER C WITH CIRCUMFLEX +010B ; Changes_When_Titlecased # L& LATIN SMALL LETTER C WITH DOT ABOVE +010D ; Changes_When_Titlecased # L& LATIN SMALL LETTER C WITH CARON +010F ; Changes_When_Titlecased # L& LATIN SMALL LETTER D WITH CARON +0111 ; Changes_When_Titlecased # L& LATIN SMALL LETTER D WITH STROKE +0113 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH MACRON +0115 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH BREVE +0117 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH DOT ABOVE +0119 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH OGONEK +011B ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH CARON +011D ; Changes_When_Titlecased # L& LATIN SMALL LETTER G WITH CIRCUMFLEX +011F ; Changes_When_Titlecased # L& LATIN 
SMALL LETTER G WITH BREVE +0121 ; Changes_When_Titlecased # L& LATIN SMALL LETTER G WITH DOT ABOVE +0123 ; Changes_When_Titlecased # L& LATIN SMALL LETTER G WITH CEDILLA +0125 ; Changes_When_Titlecased # L& LATIN SMALL LETTER H WITH CIRCUMFLEX +0127 ; Changes_When_Titlecased # L& LATIN SMALL LETTER H WITH STROKE +0129 ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH TILDE +012B ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH MACRON +012D ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH BREVE +012F ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH OGONEK +0131 ; Changes_When_Titlecased # L& LATIN SMALL LETTER DOTLESS I +0133 ; Changes_When_Titlecased # L& LATIN SMALL LIGATURE IJ +0135 ; Changes_When_Titlecased # L& LATIN SMALL LETTER J WITH CIRCUMFLEX +0137 ; Changes_When_Titlecased # L& LATIN SMALL LETTER K WITH CEDILLA +013A ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH ACUTE +013C ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH CEDILLA +013E ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH CARON +0140 ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH MIDDLE DOT +0142 ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH STROKE +0144 ; Changes_When_Titlecased # L& LATIN SMALL LETTER N WITH ACUTE +0146 ; Changes_When_Titlecased # L& LATIN SMALL LETTER N WITH CEDILLA +0148..0149 ; Changes_When_Titlecased # L& [2] LATIN SMALL LETTER N WITH CARON..LATIN SMALL LETTER N PRECEDED BY APOSTROPHE +014B ; Changes_When_Titlecased # L& LATIN SMALL LETTER ENG +014D ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH MACRON +014F ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH BREVE +0151 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH DOUBLE ACUTE +0153 ; Changes_When_Titlecased # L& LATIN SMALL LIGATURE OE +0155 ; Changes_When_Titlecased # L& LATIN SMALL LETTER R WITH ACUTE +0157 ; Changes_When_Titlecased # L& LATIN SMALL LETTER R WITH CEDILLA +0159 ; Changes_When_Titlecased # L& LATIN 
SMALL LETTER R WITH CARON +015B ; Changes_When_Titlecased # L& LATIN SMALL LETTER S WITH ACUTE +015D ; Changes_When_Titlecased # L& LATIN SMALL LETTER S WITH CIRCUMFLEX +015F ; Changes_When_Titlecased # L& LATIN SMALL LETTER S WITH CEDILLA +0161 ; Changes_When_Titlecased # L& LATIN SMALL LETTER S WITH CARON +0163 ; Changes_When_Titlecased # L& LATIN SMALL LETTER T WITH CEDILLA +0165 ; Changes_When_Titlecased # L& LATIN SMALL LETTER T WITH CARON +0167 ; Changes_When_Titlecased # L& LATIN SMALL LETTER T WITH STROKE +0169 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH TILDE +016B ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH MACRON +016D ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH BREVE +016F ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH RING ABOVE +0171 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH DOUBLE ACUTE +0173 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH OGONEK +0175 ; Changes_When_Titlecased # L& LATIN SMALL LETTER W WITH CIRCUMFLEX +0177 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Y WITH CIRCUMFLEX +017A ; Changes_When_Titlecased # L& LATIN SMALL LETTER Z WITH ACUTE +017C ; Changes_When_Titlecased # L& LATIN SMALL LETTER Z WITH DOT ABOVE +017E..0180 ; Changes_When_Titlecased # L& [3] LATIN SMALL LETTER Z WITH CARON..LATIN SMALL LETTER B WITH STROKE +0183 ; Changes_When_Titlecased # L& LATIN SMALL LETTER B WITH TOPBAR +0185 ; Changes_When_Titlecased # L& LATIN SMALL LETTER TONE SIX +0188 ; Changes_When_Titlecased # L& LATIN SMALL LETTER C WITH HOOK +018C ; Changes_When_Titlecased # L& LATIN SMALL LETTER D WITH TOPBAR +0192 ; Changes_When_Titlecased # L& LATIN SMALL LETTER F WITH HOOK +0195 ; Changes_When_Titlecased # L& LATIN SMALL LETTER HV +0199..019A ; Changes_When_Titlecased # L& [2] LATIN SMALL LETTER K WITH HOOK..LATIN SMALL LETTER L WITH BAR +019E ; Changes_When_Titlecased # L& LATIN SMALL LETTER N WITH LONG RIGHT LEG +01A1 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH 
HORN +01A3 ; Changes_When_Titlecased # L& LATIN SMALL LETTER OI +01A5 ; Changes_When_Titlecased # L& LATIN SMALL LETTER P WITH HOOK +01A8 ; Changes_When_Titlecased # L& LATIN SMALL LETTER TONE TWO +01AD ; Changes_When_Titlecased # L& LATIN SMALL LETTER T WITH HOOK +01B0 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH HORN +01B4 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Y WITH HOOK +01B6 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Z WITH STROKE +01B9 ; Changes_When_Titlecased # L& LATIN SMALL LETTER EZH REVERSED +01BD ; Changes_When_Titlecased # L& LATIN SMALL LETTER TONE FIVE +01BF ; Changes_When_Titlecased # L& LATIN LETTER WYNN +01C4 ; Changes_When_Titlecased # L& LATIN CAPITAL LETTER DZ WITH CARON +01C6..01C7 ; Changes_When_Titlecased # L& [2] LATIN SMALL LETTER DZ WITH CARON..LATIN CAPITAL LETTER LJ +01C9..01CA ; Changes_When_Titlecased # L& [2] LATIN SMALL LETTER LJ..LATIN CAPITAL LETTER NJ +01CC ; Changes_When_Titlecased # L& LATIN SMALL LETTER NJ +01CE ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH CARON +01D0 ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH CARON +01D2 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH CARON +01D4 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH CARON +01D6 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH DIAERESIS AND MACRON +01D8 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH DIAERESIS AND ACUTE +01DA ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH DIAERESIS AND CARON +01DC..01DD ; Changes_When_Titlecased # L& [2] LATIN SMALL LETTER U WITH DIAERESIS AND GRAVE..LATIN SMALL LETTER TURNED E +01DF ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH DIAERESIS AND MACRON +01E1 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH DOT ABOVE AND MACRON +01E3 ; Changes_When_Titlecased # L& LATIN SMALL LETTER AE WITH MACRON +01E5 ; Changes_When_Titlecased # L& LATIN SMALL LETTER G WITH STROKE +01E7 ; Changes_When_Titlecased # L& LATIN SMALL 
LETTER G WITH CARON +01E9 ; Changes_When_Titlecased # L& LATIN SMALL LETTER K WITH CARON +01EB ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH OGONEK +01ED ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH OGONEK AND MACRON +01EF..01F1 ; Changes_When_Titlecased # L& [3] LATIN SMALL LETTER EZH WITH CARON..LATIN CAPITAL LETTER DZ +01F3 ; Changes_When_Titlecased # L& LATIN SMALL LETTER DZ +01F5 ; Changes_When_Titlecased # L& LATIN SMALL LETTER G WITH ACUTE +01F9 ; Changes_When_Titlecased # L& LATIN SMALL LETTER N WITH GRAVE +01FB ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH RING ABOVE AND ACUTE +01FD ; Changes_When_Titlecased # L& LATIN SMALL LETTER AE WITH ACUTE +01FF ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH STROKE AND ACUTE +0201 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH DOUBLE GRAVE +0203 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH INVERTED BREVE +0205 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH DOUBLE GRAVE +0207 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH INVERTED BREVE +0209 ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH DOUBLE GRAVE +020B ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH INVERTED BREVE +020D ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH DOUBLE GRAVE +020F ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH INVERTED BREVE +0211 ; Changes_When_Titlecased # L& LATIN SMALL LETTER R WITH DOUBLE GRAVE +0213 ; Changes_When_Titlecased # L& LATIN SMALL LETTER R WITH INVERTED BREVE +0215 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH DOUBLE GRAVE +0217 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH INVERTED BREVE +0219 ; Changes_When_Titlecased # L& LATIN SMALL LETTER S WITH COMMA BELOW +021B ; Changes_When_Titlecased # L& LATIN SMALL LETTER T WITH COMMA BELOW +021D ; Changes_When_Titlecased # L& LATIN SMALL LETTER YOGH +021F ; Changes_When_Titlecased # L& LATIN SMALL LETTER H WITH CARON +0223 ; 
Changes_When_Titlecased # L& LATIN SMALL LETTER OU +0225 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Z WITH HOOK +0227 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH DOT ABOVE +0229 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH CEDILLA +022B ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH DIAERESIS AND MACRON +022D ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH TILDE AND MACRON +022F ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH DOT ABOVE +0231 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH DOT ABOVE AND MACRON +0233 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Y WITH MACRON +023C ; Changes_When_Titlecased # L& LATIN SMALL LETTER C WITH STROKE +023F..0240 ; Changes_When_Titlecased # L& [2] LATIN SMALL LETTER S WITH SWASH TAIL..LATIN SMALL LETTER Z WITH SWASH TAIL +0242 ; Changes_When_Titlecased # L& LATIN SMALL LETTER GLOTTAL STOP +0247 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH STROKE +0249 ; Changes_When_Titlecased # L& LATIN SMALL LETTER J WITH STROKE +024B ; Changes_When_Titlecased # L& LATIN SMALL LETTER Q WITH HOOK TAIL +024D ; Changes_When_Titlecased # L& LATIN SMALL LETTER R WITH STROKE +024F..0254 ; Changes_When_Titlecased # L& [6] LATIN SMALL LETTER Y WITH STROKE..LATIN SMALL LETTER OPEN O +0256..0257 ; Changes_When_Titlecased # L& [2] LATIN SMALL LETTER D WITH TAIL..LATIN SMALL LETTER D WITH HOOK +0259 ; Changes_When_Titlecased # L& LATIN SMALL LETTER SCHWA +025B ; Changes_When_Titlecased # L& LATIN SMALL LETTER OPEN E +0260 ; Changes_When_Titlecased # L& LATIN SMALL LETTER G WITH HOOK +0263 ; Changes_When_Titlecased # L& LATIN SMALL LETTER GAMMA +0268..0269 ; Changes_When_Titlecased # L& [2] LATIN SMALL LETTER I WITH STROKE..LATIN SMALL LETTER IOTA +026B ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH MIDDLE TILDE +026F ; Changes_When_Titlecased # L& LATIN SMALL LETTER TURNED M +0271..0272 ; Changes_When_Titlecased # L& [2] LATIN SMALL LETTER M WITH 
HOOK..LATIN SMALL LETTER N WITH LEFT HOOK +0275 ; Changes_When_Titlecased # L& LATIN SMALL LETTER BARRED O +027D ; Changes_When_Titlecased # L& LATIN SMALL LETTER R WITH TAIL +0280 ; Changes_When_Titlecased # L& LATIN LETTER SMALL CAPITAL R +0283 ; Changes_When_Titlecased # L& LATIN SMALL LETTER ESH +0288..028C ; Changes_When_Titlecased # L& [5] LATIN SMALL LETTER T WITH RETROFLEX HOOK..LATIN SMALL LETTER TURNED V +0292 ; Changes_When_Titlecased # L& LATIN SMALL LETTER EZH +0345 ; Changes_When_Titlecased # Mn COMBINING GREEK YPOGEGRAMMENI +0371 ; Changes_When_Titlecased # L& GREEK SMALL LETTER HETA +0373 ; Changes_When_Titlecased # L& GREEK SMALL LETTER ARCHAIC SAMPI +0377 ; Changes_When_Titlecased # L& GREEK SMALL LETTER PAMPHYLIAN DIGAMMA +037B..037D ; Changes_When_Titlecased # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL +0390 ; Changes_When_Titlecased # L& GREEK SMALL LETTER IOTA WITH DIALYTIKA AND TONOS +03AC..03CE ; Changes_When_Titlecased # L& [35] GREEK SMALL LETTER ALPHA WITH TONOS..GREEK SMALL LETTER OMEGA WITH TONOS +03D0..03D1 ; Changes_When_Titlecased # L& [2] GREEK BETA SYMBOL..GREEK THETA SYMBOL +03D5..03D7 ; Changes_When_Titlecased # L& [3] GREEK PHI SYMBOL..GREEK KAI SYMBOL +03D9 ; Changes_When_Titlecased # L& GREEK SMALL LETTER ARCHAIC KOPPA +03DB ; Changes_When_Titlecased # L& GREEK SMALL LETTER STIGMA +03DD ; Changes_When_Titlecased # L& GREEK SMALL LETTER DIGAMMA +03DF ; Changes_When_Titlecased # L& GREEK SMALL LETTER KOPPA +03E1 ; Changes_When_Titlecased # L& GREEK SMALL LETTER SAMPI +03E3 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER SHEI +03E5 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER FEI +03E7 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER KHEI +03E9 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER HORI +03EB ; Changes_When_Titlecased # L& COPTIC SMALL LETTER GANGIA +03ED ; Changes_When_Titlecased # L& COPTIC SMALL LETTER SHIMA +03EF..03F2 ; Changes_When_Titlecased # L& 
[4] COPTIC SMALL LETTER DEI..GREEK LUNATE SIGMA SYMBOL +03F5 ; Changes_When_Titlecased # L& GREEK LUNATE EPSILON SYMBOL +03F8 ; Changes_When_Titlecased # L& GREEK SMALL LETTER SHO +03FB ; Changes_When_Titlecased # L& GREEK SMALL LETTER SAN +0430..045F ; Changes_When_Titlecased # L& [48] CYRILLIC SMALL LETTER A..CYRILLIC SMALL LETTER DZHE +0461 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER OMEGA +0463 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER YAT +0465 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER IOTIFIED E +0467 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER LITTLE YUS +0469 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER IOTIFIED LITTLE YUS +046B ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER BIG YUS +046D ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER IOTIFIED BIG YUS +046F ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KSI +0471 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER PSI +0473 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER FITA +0475 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER IZHITSA +0477 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER IZHITSA WITH DOUBLE GRAVE ACCENT +0479 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER UK +047B ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ROUND OMEGA +047D ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER OMEGA WITH TITLO +047F ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER OT +0481 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KOPPA +048B ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER SHORT I WITH TAIL +048D ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER SEMISOFT SIGN +048F ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ER WITH TICK +0491 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER GHE WITH UPTURN +0493 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER GHE WITH STROKE +0495 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER GHE WITH MIDDLE HOOK +0497 ; Changes_When_Titlecased 
# L& CYRILLIC SMALL LETTER ZHE WITH DESCENDER +0499 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ZE WITH DESCENDER +049B ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KA WITH DESCENDER +049D ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KA WITH VERTICAL STROKE +049F ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KA WITH STROKE +04A1 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER BASHKIR KA +04A3 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER EN WITH DESCENDER +04A5 ; Changes_When_Titlecased # L& CYRILLIC SMALL LIGATURE EN GHE +04A7 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER PE WITH MIDDLE HOOK +04A9 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ABKHASIAN HA +04AB ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ES WITH DESCENDER +04AD ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER TE WITH DESCENDER +04AF ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER STRAIGHT U +04B1 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER STRAIGHT U WITH STROKE +04B3 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER HA WITH DESCENDER +04B5 ; Changes_When_Titlecased # L& CYRILLIC SMALL LIGATURE TE TSE +04B7 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER CHE WITH DESCENDER +04B9 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER CHE WITH VERTICAL STROKE +04BB ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER SHHA +04BD ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ABKHASIAN CHE +04BF ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ABKHASIAN CHE WITH DESCENDER +04C2 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ZHE WITH BREVE +04C4 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KA WITH HOOK +04C6 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER EL WITH TAIL +04C8 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER EN WITH HOOK +04CA ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER EN WITH TAIL +04CC ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KHAKASSIAN CHE 
+04CE..04CF ; Changes_When_Titlecased # L& [2] CYRILLIC SMALL LETTER EM WITH TAIL..CYRILLIC SMALL LETTER PALOCHKA +04D1 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER A WITH BREVE +04D3 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER A WITH DIAERESIS +04D5 ; Changes_When_Titlecased # L& CYRILLIC SMALL LIGATURE A IE +04D7 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER IE WITH BREVE +04D9 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER SCHWA +04DB ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER SCHWA WITH DIAERESIS +04DD ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ZHE WITH DIAERESIS +04DF ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ZE WITH DIAERESIS +04E1 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ABKHASIAN DZE +04E3 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER I WITH MACRON +04E5 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER I WITH DIAERESIS +04E7 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER O WITH DIAERESIS +04E9 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER BARRED O +04EB ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER BARRED O WITH DIAERESIS +04ED ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER E WITH DIAERESIS +04EF ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER U WITH MACRON +04F1 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER U WITH DIAERESIS +04F3 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER U WITH DOUBLE ACUTE +04F5 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER CHE WITH DIAERESIS +04F7 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER GHE WITH DESCENDER +04F9 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER YERU WITH DIAERESIS +04FB ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER GHE WITH STROKE AND HOOK +04FD ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER HA WITH HOOK +04FF ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER HA WITH STROKE +0501 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KOMI DE +0503 ; 
Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KOMI DJE +0505 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KOMI ZJE +0507 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KOMI DZJE +0509 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KOMI LJE +050B ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KOMI NJE +050D ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KOMI SJE +050F ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER KOMI TJE +0511 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER REVERSED ZE +0513 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER EL WITH HOOK +0515 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER LHA +0517 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER RHA +0519 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER YAE +051B ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER QA +051D ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER WE +051F ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ALEUT KA +0521 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER EL WITH MIDDLE HOOK +0523 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER EN WITH MIDDLE HOOK +0525 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER PE WITH DESCENDER +0561..0587 ; Changes_When_Titlecased # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN +1D79 ; Changes_When_Titlecased # L& LATIN SMALL LETTER INSULAR G +1D7D ; Changes_When_Titlecased # L& LATIN SMALL LETTER P WITH STROKE +1E01 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH RING BELOW +1E03 ; Changes_When_Titlecased # L& LATIN SMALL LETTER B WITH DOT ABOVE +1E05 ; Changes_When_Titlecased # L& LATIN SMALL LETTER B WITH DOT BELOW +1E07 ; Changes_When_Titlecased # L& LATIN SMALL LETTER B WITH LINE BELOW +1E09 ; Changes_When_Titlecased # L& LATIN SMALL LETTER C WITH CEDILLA AND ACUTE +1E0B ; Changes_When_Titlecased # L& LATIN SMALL LETTER D WITH DOT ABOVE +1E0D ; Changes_When_Titlecased # L& LATIN SMALL LETTER D WITH DOT BELOW +1E0F 
; Changes_When_Titlecased # L& LATIN SMALL LETTER D WITH LINE BELOW +1E11 ; Changes_When_Titlecased # L& LATIN SMALL LETTER D WITH CEDILLA +1E13 ; Changes_When_Titlecased # L& LATIN SMALL LETTER D WITH CIRCUMFLEX BELOW +1E15 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH MACRON AND GRAVE +1E17 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH MACRON AND ACUTE +1E19 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX BELOW +1E1B ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH TILDE BELOW +1E1D ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH CEDILLA AND BREVE +1E1F ; Changes_When_Titlecased # L& LATIN SMALL LETTER F WITH DOT ABOVE +1E21 ; Changes_When_Titlecased # L& LATIN SMALL LETTER G WITH MACRON +1E23 ; Changes_When_Titlecased # L& LATIN SMALL LETTER H WITH DOT ABOVE +1E25 ; Changes_When_Titlecased # L& LATIN SMALL LETTER H WITH DOT BELOW +1E27 ; Changes_When_Titlecased # L& LATIN SMALL LETTER H WITH DIAERESIS +1E29 ; Changes_When_Titlecased # L& LATIN SMALL LETTER H WITH CEDILLA +1E2B ; Changes_When_Titlecased # L& LATIN SMALL LETTER H WITH BREVE BELOW +1E2D ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH TILDE BELOW +1E2F ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH DIAERESIS AND ACUTE +1E31 ; Changes_When_Titlecased # L& LATIN SMALL LETTER K WITH ACUTE +1E33 ; Changes_When_Titlecased # L& LATIN SMALL LETTER K WITH DOT BELOW +1E35 ; Changes_When_Titlecased # L& LATIN SMALL LETTER K WITH LINE BELOW +1E37 ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH DOT BELOW +1E39 ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH DOT BELOW AND MACRON +1E3B ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH LINE BELOW +1E3D ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH CIRCUMFLEX BELOW +1E3F ; Changes_When_Titlecased # L& LATIN SMALL LETTER M WITH ACUTE +1E41 ; Changes_When_Titlecased # L& LATIN SMALL LETTER M WITH DOT ABOVE +1E43 ; Changes_When_Titlecased # L& LATIN SMALL 
LETTER M WITH DOT BELOW +1E45 ; Changes_When_Titlecased # L& LATIN SMALL LETTER N WITH DOT ABOVE +1E47 ; Changes_When_Titlecased # L& LATIN SMALL LETTER N WITH DOT BELOW +1E49 ; Changes_When_Titlecased # L& LATIN SMALL LETTER N WITH LINE BELOW +1E4B ; Changes_When_Titlecased # L& LATIN SMALL LETTER N WITH CIRCUMFLEX BELOW +1E4D ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH TILDE AND ACUTE +1E4F ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH TILDE AND DIAERESIS +1E51 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH MACRON AND GRAVE +1E53 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH MACRON AND ACUTE +1E55 ; Changes_When_Titlecased # L& LATIN SMALL LETTER P WITH ACUTE +1E57 ; Changes_When_Titlecased # L& LATIN SMALL LETTER P WITH DOT ABOVE +1E59 ; Changes_When_Titlecased # L& LATIN SMALL LETTER R WITH DOT ABOVE +1E5B ; Changes_When_Titlecased # L& LATIN SMALL LETTER R WITH DOT BELOW +1E5D ; Changes_When_Titlecased # L& LATIN SMALL LETTER R WITH DOT BELOW AND MACRON +1E5F ; Changes_When_Titlecased # L& LATIN SMALL LETTER R WITH LINE BELOW +1E61 ; Changes_When_Titlecased # L& LATIN SMALL LETTER S WITH DOT ABOVE +1E63 ; Changes_When_Titlecased # L& LATIN SMALL LETTER S WITH DOT BELOW +1E65 ; Changes_When_Titlecased # L& LATIN SMALL LETTER S WITH ACUTE AND DOT ABOVE +1E67 ; Changes_When_Titlecased # L& LATIN SMALL LETTER S WITH CARON AND DOT ABOVE +1E69 ; Changes_When_Titlecased # L& LATIN SMALL LETTER S WITH DOT BELOW AND DOT ABOVE +1E6B ; Changes_When_Titlecased # L& LATIN SMALL LETTER T WITH DOT ABOVE +1E6D ; Changes_When_Titlecased # L& LATIN SMALL LETTER T WITH DOT BELOW +1E6F ; Changes_When_Titlecased # L& LATIN SMALL LETTER T WITH LINE BELOW +1E71 ; Changes_When_Titlecased # L& LATIN SMALL LETTER T WITH CIRCUMFLEX BELOW +1E73 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH DIAERESIS BELOW +1E75 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH TILDE BELOW +1E77 ; Changes_When_Titlecased # L& LATIN SMALL 
LETTER U WITH CIRCUMFLEX BELOW +1E79 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH TILDE AND ACUTE +1E7B ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH MACRON AND DIAERESIS +1E7D ; Changes_When_Titlecased # L& LATIN SMALL LETTER V WITH TILDE +1E7F ; Changes_When_Titlecased # L& LATIN SMALL LETTER V WITH DOT BELOW +1E81 ; Changes_When_Titlecased # L& LATIN SMALL LETTER W WITH GRAVE +1E83 ; Changes_When_Titlecased # L& LATIN SMALL LETTER W WITH ACUTE +1E85 ; Changes_When_Titlecased # L& LATIN SMALL LETTER W WITH DIAERESIS +1E87 ; Changes_When_Titlecased # L& LATIN SMALL LETTER W WITH DOT ABOVE +1E89 ; Changes_When_Titlecased # L& LATIN SMALL LETTER W WITH DOT BELOW +1E8B ; Changes_When_Titlecased # L& LATIN SMALL LETTER X WITH DOT ABOVE +1E8D ; Changes_When_Titlecased # L& LATIN SMALL LETTER X WITH DIAERESIS +1E8F ; Changes_When_Titlecased # L& LATIN SMALL LETTER Y WITH DOT ABOVE +1E91 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Z WITH CIRCUMFLEX +1E93 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Z WITH DOT BELOW +1E95..1E9B ; Changes_When_Titlecased # L& [7] LATIN SMALL LETTER Z WITH LINE BELOW..LATIN SMALL LETTER LONG S WITH DOT ABOVE +1EA1 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH DOT BELOW +1EA3 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH HOOK ABOVE +1EA5 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND ACUTE +1EA7 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND GRAVE +1EA9 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE +1EAB ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND TILDE +1EAD ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH CIRCUMFLEX AND DOT BELOW +1EAF ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH BREVE AND ACUTE +1EB1 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH BREVE AND GRAVE +1EB3 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH BREVE AND HOOK 
ABOVE +1EB5 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH BREVE AND TILDE +1EB7 ; Changes_When_Titlecased # L& LATIN SMALL LETTER A WITH BREVE AND DOT BELOW +1EB9 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH DOT BELOW +1EBB ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH HOOK ABOVE +1EBD ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH TILDE +1EBF ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND ACUTE +1EC1 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND GRAVE +1EC3 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE +1EC5 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND TILDE +1EC7 ; Changes_When_Titlecased # L& LATIN SMALL LETTER E WITH CIRCUMFLEX AND DOT BELOW +1EC9 ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH HOOK ABOVE +1ECB ; Changes_When_Titlecased # L& LATIN SMALL LETTER I WITH DOT BELOW +1ECD ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH DOT BELOW +1ECF ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH HOOK ABOVE +1ED1 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND ACUTE +1ED3 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND GRAVE +1ED5 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE +1ED7 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND TILDE +1ED9 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH CIRCUMFLEX AND DOT BELOW +1EDB ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH HORN AND ACUTE +1EDD ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH HORN AND GRAVE +1EDF ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH HORN AND HOOK ABOVE +1EE1 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH HORN AND TILDE +1EE3 ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH HORN AND DOT BELOW +1EE5 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH 
DOT BELOW +1EE7 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH HOOK ABOVE +1EE9 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH HORN AND ACUTE +1EEB ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH HORN AND GRAVE +1EED ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH HORN AND HOOK ABOVE +1EEF ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH HORN AND TILDE +1EF1 ; Changes_When_Titlecased # L& LATIN SMALL LETTER U WITH HORN AND DOT BELOW +1EF3 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Y WITH GRAVE +1EF5 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Y WITH DOT BELOW +1EF7 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Y WITH HOOK ABOVE +1EF9 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Y WITH TILDE +1EFB ; Changes_When_Titlecased # L& LATIN SMALL LETTER MIDDLE-WELSH LL +1EFD ; Changes_When_Titlecased # L& LATIN SMALL LETTER MIDDLE-WELSH V +1EFF..1F07 ; Changes_When_Titlecased # L& [9] LATIN SMALL LETTER Y WITH LOOP..GREEK SMALL LETTER ALPHA WITH DASIA AND PERISPOMENI +1F10..1F15 ; Changes_When_Titlecased # L& [6] GREEK SMALL LETTER EPSILON WITH PSILI..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA +1F20..1F27 ; Changes_When_Titlecased # L& [8] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER ETA WITH DASIA AND PERISPOMENI +1F30..1F37 ; Changes_When_Titlecased # L& [8] GREEK SMALL LETTER IOTA WITH PSILI..GREEK SMALL LETTER IOTA WITH DASIA AND PERISPOMENI +1F40..1F45 ; Changes_When_Titlecased # L& [6] GREEK SMALL LETTER OMICRON WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA +1F50..1F57 ; Changes_When_Titlecased # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F60..1F67 ; Changes_When_Titlecased # L& [8] GREEK SMALL LETTER OMEGA WITH PSILI..GREEK SMALL LETTER OMEGA WITH DASIA AND PERISPOMENI +1F70..1F7D ; Changes_When_Titlecased # L& [14] GREEK SMALL LETTER ALPHA WITH VARIA..GREEK SMALL LETTER OMEGA WITH OXIA +1F80..1F87 ; 
Changes_When_Titlecased # L& [8] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH DASIA AND PERISPOMENI AND YPOGEGRAMMENI +1F90..1F97 ; Changes_When_Titlecased # L& [8] GREEK SMALL LETTER ETA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH DASIA AND PERISPOMENI AND YPOGEGRAMMENI +1FA0..1FA7 ; Changes_When_Titlecased # L& [8] GREEK SMALL LETTER OMEGA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH DASIA AND PERISPOMENI AND YPOGEGRAMMENI +1FB0..1FB4 ; Changes_When_Titlecased # L& [5] GREEK SMALL LETTER ALPHA WITH VRACHY..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB6..1FB7 ; Changes_When_Titlecased # L& [2] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK SMALL LETTER ALPHA WITH PERISPOMENI AND YPOGEGRAMMENI +1FBE ; Changes_When_Titlecased # L& GREEK PROSGEGRAMMENI +1FC2..1FC4 ; Changes_When_Titlecased # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC6..1FC7 ; Changes_When_Titlecased # L& [2] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK SMALL LETTER ETA WITH PERISPOMENI AND YPOGEGRAMMENI +1FD0..1FD3 ; Changes_When_Titlecased # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA +1FD6..1FD7 ; Changes_When_Titlecased # L& [2] GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND PERISPOMENI +1FE0..1FE7 ; Changes_When_Titlecased # L& [8] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND PERISPOMENI +1FF2..1FF4 ; Changes_When_Titlecased # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF6..1FF7 ; Changes_When_Titlecased # L& [2] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK SMALL LETTER OMEGA WITH PERISPOMENI AND YPOGEGRAMMENI +214E ; Changes_When_Titlecased # L& TURNED SMALL F +2170..217F ; Changes_When_Titlecased # Nl [16] SMALL ROMAN NUMERAL ONE..SMALL 
ROMAN NUMERAL ONE THOUSAND +2184 ; Changes_When_Titlecased # L& LATIN SMALL LETTER REVERSED C +24D0..24E9 ; Changes_When_Titlecased # So [26] CIRCLED LATIN SMALL LETTER A..CIRCLED LATIN SMALL LETTER Z +2C30..2C5E ; Changes_When_Titlecased # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE +2C61 ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH DOUBLE BAR +2C65..2C66 ; Changes_When_Titlecased # L& [2] LATIN SMALL LETTER A WITH STROKE..LATIN SMALL LETTER T WITH DIAGONAL STROKE +2C68 ; Changes_When_Titlecased # L& LATIN SMALL LETTER H WITH DESCENDER +2C6A ; Changes_When_Titlecased # L& LATIN SMALL LETTER K WITH DESCENDER +2C6C ; Changes_When_Titlecased # L& LATIN SMALL LETTER Z WITH DESCENDER +2C73 ; Changes_When_Titlecased # L& LATIN SMALL LETTER W WITH HOOK +2C76 ; Changes_When_Titlecased # L& LATIN SMALL LETTER HALF H +2C81 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER ALFA +2C83 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER VIDA +2C85 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER GAMMA +2C87 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER DALDA +2C89 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER EIE +2C8B ; Changes_When_Titlecased # L& COPTIC SMALL LETTER SOU +2C8D ; Changes_When_Titlecased # L& COPTIC SMALL LETTER ZATA +2C8F ; Changes_When_Titlecased # L& COPTIC SMALL LETTER HATE +2C91 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER THETHE +2C93 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER IAUDA +2C95 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER KAPA +2C97 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER LAULA +2C99 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER MI +2C9B ; Changes_When_Titlecased # L& COPTIC SMALL LETTER NI +2C9D ; Changes_When_Titlecased # L& COPTIC SMALL LETTER KSI +2C9F ; Changes_When_Titlecased # L& COPTIC SMALL LETTER O +2CA1 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER PI +2CA3 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER RO +2CA5 ; 
Changes_When_Titlecased # L& COPTIC SMALL LETTER SIMA +2CA7 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER TAU +2CA9 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER UA +2CAB ; Changes_When_Titlecased # L& COPTIC SMALL LETTER FI +2CAD ; Changes_When_Titlecased # L& COPTIC SMALL LETTER KHI +2CAF ; Changes_When_Titlecased # L& COPTIC SMALL LETTER PSI +2CB1 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OOU +2CB3 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER DIALECT-P ALEF +2CB5 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC AIN +2CB7 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER CRYPTOGRAMMIC EIE +2CB9 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER DIALECT-P KAPA +2CBB ; Changes_When_Titlecased # L& COPTIC SMALL LETTER DIALECT-P NI +2CBD ; Changes_When_Titlecased # L& COPTIC SMALL LETTER CRYPTOGRAMMIC NI +2CBF ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC OOU +2CC1 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER SAMPI +2CC3 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER CROSSED SHEI +2CC5 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC SHEI +2CC7 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC ESH +2CC9 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER AKHMIMIC KHEI +2CCB ; Changes_When_Titlecased # L& COPTIC SMALL LETTER DIALECT-P HORI +2CCD ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC HORI +2CCF ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC HA +2CD1 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER L-SHAPED HA +2CD3 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC HEI +2CD5 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC HAT +2CD7 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC GANGIA +2CD9 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC DJA +2CDB ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD COPTIC SHIMA +2CDD ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD NUBIAN SHIMA 
+2CDF ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD NUBIAN NGI +2CE1 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD NUBIAN NYI +2CE3 ; Changes_When_Titlecased # L& COPTIC SMALL LETTER OLD NUBIAN WAU +2CEC ; Changes_When_Titlecased # L& COPTIC SMALL LETTER CRYPTOGRAMMIC SHEI +2CEE ; Changes_When_Titlecased # L& COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA +2D00..2D25 ; Changes_When_Titlecased # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE +A641 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ZEMLYA +A643 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER DZELO +A645 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER REVERSED DZE +A647 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER IOTA +A649 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER DJERV +A64B ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER MONOGRAPH UK +A64D ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER BROAD OMEGA +A64F ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER NEUTRAL YER +A651 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER YERU WITH BACK YER +A653 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER IOTIFIED YAT +A655 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER REVERSED YU +A657 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER IOTIFIED A +A659 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER CLOSED LITTLE YUS +A65B ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER BLENDED YUS +A65D ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER IOTIFIED CLOSED LITTLE YUS +A65F ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER YN +A663 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER SOFT DE +A665 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER SOFT EL +A667 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER SOFT EM +A669 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER MONOCULAR O +A66B ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER BINOCULAR O +A66D ; Changes_When_Titlecased # L& CYRILLIC SMALL 
LETTER DOUBLE MONOCULAR O +A681 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER DWE +A683 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER DZWE +A685 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER ZHWE +A687 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER CCHE +A689 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER DZZE +A68B ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER TE WITH MIDDLE HOOK +A68D ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER TWE +A68F ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER TSWE +A691 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER TSSE +A693 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER TCHE +A695 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER HWE +A697 ; Changes_When_Titlecased # L& CYRILLIC SMALL LETTER SHWE +A723 ; Changes_When_Titlecased # L& LATIN SMALL LETTER EGYPTOLOGICAL ALEF +A725 ; Changes_When_Titlecased # L& LATIN SMALL LETTER EGYPTOLOGICAL AIN +A727 ; Changes_When_Titlecased # L& LATIN SMALL LETTER HENG +A729 ; Changes_When_Titlecased # L& LATIN SMALL LETTER TZ +A72B ; Changes_When_Titlecased # L& LATIN SMALL LETTER TRESILLO +A72D ; Changes_When_Titlecased # L& LATIN SMALL LETTER CUATRILLO +A72F ; Changes_When_Titlecased # L& LATIN SMALL LETTER CUATRILLO WITH COMMA +A733 ; Changes_When_Titlecased # L& LATIN SMALL LETTER AA +A735 ; Changes_When_Titlecased # L& LATIN SMALL LETTER AO +A737 ; Changes_When_Titlecased # L& LATIN SMALL LETTER AU +A739 ; Changes_When_Titlecased # L& LATIN SMALL LETTER AV +A73B ; Changes_When_Titlecased # L& LATIN SMALL LETTER AV WITH HORIZONTAL BAR +A73D ; Changes_When_Titlecased # L& LATIN SMALL LETTER AY +A73F ; Changes_When_Titlecased # L& LATIN SMALL LETTER REVERSED C WITH DOT +A741 ; Changes_When_Titlecased # L& LATIN SMALL LETTER K WITH STROKE +A743 ; Changes_When_Titlecased # L& LATIN SMALL LETTER K WITH DIAGONAL STROKE +A745 ; Changes_When_Titlecased # L& LATIN SMALL LETTER K WITH STROKE AND DIAGONAL STROKE +A747 ; 
Changes_When_Titlecased # L& LATIN SMALL LETTER BROKEN L +A749 ; Changes_When_Titlecased # L& LATIN SMALL LETTER L WITH HIGH STROKE +A74B ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH LONG STROKE OVERLAY +A74D ; Changes_When_Titlecased # L& LATIN SMALL LETTER O WITH LOOP +A74F ; Changes_When_Titlecased # L& LATIN SMALL LETTER OO +A751 ; Changes_When_Titlecased # L& LATIN SMALL LETTER P WITH STROKE THROUGH DESCENDER +A753 ; Changes_When_Titlecased # L& LATIN SMALL LETTER P WITH FLOURISH +A755 ; Changes_When_Titlecased # L& LATIN SMALL LETTER P WITH SQUIRREL TAIL +A757 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Q WITH STROKE THROUGH DESCENDER +A759 ; Changes_When_Titlecased # L& LATIN SMALL LETTER Q WITH DIAGONAL STROKE +A75B ; Changes_When_Titlecased # L& LATIN SMALL LETTER R ROTUNDA +A75D ; Changes_When_Titlecased # L& LATIN SMALL LETTER RUM ROTUNDA +A75F ; Changes_When_Titlecased # L& LATIN SMALL LETTER V WITH DIAGONAL STROKE +A761 ; Changes_When_Titlecased # L& LATIN SMALL LETTER VY +A763 ; Changes_When_Titlecased # L& LATIN SMALL LETTER VISIGOTHIC Z +A765 ; Changes_When_Titlecased # L& LATIN SMALL LETTER THORN WITH STROKE +A767 ; Changes_When_Titlecased # L& LATIN SMALL LETTER THORN WITH STROKE THROUGH DESCENDER +A769 ; Changes_When_Titlecased # L& LATIN SMALL LETTER VEND +A76B ; Changes_When_Titlecased # L& LATIN SMALL LETTER ET +A76D ; Changes_When_Titlecased # L& LATIN SMALL LETTER IS +A76F ; Changes_When_Titlecased # L& LATIN SMALL LETTER CON +A77A ; Changes_When_Titlecased # L& LATIN SMALL LETTER INSULAR D +A77C ; Changes_When_Titlecased # L& LATIN SMALL LETTER INSULAR F +A77F ; Changes_When_Titlecased # L& LATIN SMALL LETTER TURNED INSULAR G +A781 ; Changes_When_Titlecased # L& LATIN SMALL LETTER TURNED L +A783 ; Changes_When_Titlecased # L& LATIN SMALL LETTER INSULAR R +A785 ; Changes_When_Titlecased # L& LATIN SMALL LETTER INSULAR S +A787 ; Changes_When_Titlecased # L& LATIN SMALL LETTER INSULAR T +A78C ; Changes_When_Titlecased # L& 
LATIN SMALL LETTER SALTILLO +FB00..FB06 ; Changes_When_Titlecased # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; Changes_When_Titlecased # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FF41..FF5A ; Changes_When_Titlecased # L& [26] FULLWIDTH LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z +10428..1044F ; Changes_When_Titlecased # L& [40] DESERET SMALL LETTER LONG I..DESERET SMALL LETTER EW + +# Total code points: 1085 + +# ================================================ + +# Derived Property: Changes_When_Casefolded (CWCF) +# Characters whose normalized forms are not stable under case folding. +# For more information, see D127 in Section 3.13, "Default Case Algorithms". +# Changes_When_Casefolded(X) is true when toCasefold(toNFD(X)) != toNFD(X) + +0041..005A ; Changes_When_Casefolded # L& [26] LATIN CAPITAL LETTER A..LATIN CAPITAL LETTER Z +00B5 ; Changes_When_Casefolded # L& MICRO SIGN +00C0..00D6 ; Changes_When_Casefolded # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS +00D8..00DF ; Changes_When_Casefolded # L& [8] LATIN CAPITAL LETTER O WITH STROKE..LATIN SMALL LETTER SHARP S +0100 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH MACRON +0102 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH BREVE +0104 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH OGONEK +0106 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER C WITH ACUTE +0108 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER C WITH CIRCUMFLEX +010A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER C WITH DOT ABOVE +010C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER C WITH CARON +010E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER D WITH CARON +0110 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER D WITH STROKE +0112 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH MACRON +0114 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH BREVE +0116 ; 
Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH DOT ABOVE +0118 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH OGONEK +011A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH CARON +011C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER G WITH CIRCUMFLEX +011E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER G WITH BREVE +0120 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER G WITH DOT ABOVE +0122 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER G WITH CEDILLA +0124 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER H WITH CIRCUMFLEX +0126 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER H WITH STROKE +0128 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH TILDE +012A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH MACRON +012C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH BREVE +012E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH OGONEK +0130 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH DOT ABOVE +0132 ; Changes_When_Casefolded # L& LATIN CAPITAL LIGATURE IJ +0134 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER J WITH CIRCUMFLEX +0136 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER K WITH CEDILLA +0139 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH ACUTE +013B ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH CEDILLA +013D ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH CARON +013F ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH MIDDLE DOT +0141 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH STROKE +0143 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER N WITH ACUTE +0145 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER N WITH CEDILLA +0147 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER N WITH CARON +0149..014A ; Changes_When_Casefolded # L& [2] LATIN SMALL LETTER N PRECEDED BY APOSTROPHE..LATIN CAPITAL LETTER ENG +014C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH MACRON +014E 
; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH BREVE +0150 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH DOUBLE ACUTE +0152 ; Changes_When_Casefolded # L& LATIN CAPITAL LIGATURE OE +0154 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER R WITH ACUTE +0156 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER R WITH CEDILLA +0158 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER R WITH CARON +015A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER S WITH ACUTE +015C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER S WITH CIRCUMFLEX +015E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER S WITH CEDILLA +0160 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER S WITH CARON +0162 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER T WITH CEDILLA +0164 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER T WITH CARON +0166 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER T WITH STROKE +0168 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH TILDE +016A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH MACRON +016C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH BREVE +016E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH RING ABOVE +0170 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH DOUBLE ACUTE +0172 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH OGONEK +0174 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER W WITH CIRCUMFLEX +0176 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Y WITH CIRCUMFLEX +0178..0179 ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER Y WITH DIAERESIS..LATIN CAPITAL LETTER Z WITH ACUTE +017B ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Z WITH DOT ABOVE +017D ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Z WITH CARON +017F ; Changes_When_Casefolded # L& LATIN SMALL LETTER LONG S +0181..0182 ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER B WITH HOOK..LATIN CAPITAL LETTER B WITH TOPBAR +0184 ; Changes_When_Casefolded # 
L& LATIN CAPITAL LETTER TONE SIX +0186..0187 ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER OPEN O..LATIN CAPITAL LETTER C WITH HOOK +0189..018B ; Changes_When_Casefolded # L& [3] LATIN CAPITAL LETTER AFRICAN D..LATIN CAPITAL LETTER D WITH TOPBAR +018E..0191 ; Changes_When_Casefolded # L& [4] LATIN CAPITAL LETTER REVERSED E..LATIN CAPITAL LETTER F WITH HOOK +0193..0194 ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER G WITH HOOK..LATIN CAPITAL LETTER GAMMA +0196..0198 ; Changes_When_Casefolded # L& [3] LATIN CAPITAL LETTER IOTA..LATIN CAPITAL LETTER K WITH HOOK +019C..019D ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER TURNED M..LATIN CAPITAL LETTER N WITH LEFT HOOK +019F..01A0 ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER O WITH MIDDLE TILDE..LATIN CAPITAL LETTER O WITH HORN +01A2 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER OI +01A4 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER P WITH HOOK +01A6..01A7 ; Changes_When_Casefolded # L& [2] LATIN LETTER YR..LATIN CAPITAL LETTER TONE TWO +01A9 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER ESH +01AC ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER T WITH HOOK +01AE..01AF ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER T WITH RETROFLEX HOOK..LATIN CAPITAL LETTER U WITH HORN +01B1..01B3 ; Changes_When_Casefolded # L& [3] LATIN CAPITAL LETTER UPSILON..LATIN CAPITAL LETTER Y WITH HOOK +01B5 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Z WITH STROKE +01B7..01B8 ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER EZH..LATIN CAPITAL LETTER EZH REVERSED +01BC ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER TONE FIVE +01C4..01C5 ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER DZ WITH CARON..LATIN CAPITAL LETTER D WITH SMALL LETTER Z WITH CARON +01C7..01C8 ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER LJ..LATIN CAPITAL LETTER L WITH SMALL LETTER J +01CA..01CB ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER NJ..LATIN CAPITAL 
LETTER N WITH SMALL LETTER J +01CD ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH CARON +01CF ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH CARON +01D1 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH CARON +01D3 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH CARON +01D5 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND MACRON +01D7 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND ACUTE +01D9 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND CARON +01DB ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH DIAERESIS AND GRAVE +01DE ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH DIAERESIS AND MACRON +01E0 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH DOT ABOVE AND MACRON +01E2 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER AE WITH MACRON +01E4 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER G WITH STROKE +01E6 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER G WITH CARON +01E8 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER K WITH CARON +01EA ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH OGONEK +01EC ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH OGONEK AND MACRON +01EE ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER EZH WITH CARON +01F1..01F2 ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER DZ..LATIN CAPITAL LETTER D WITH SMALL LETTER Z +01F4 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER G WITH ACUTE +01F6..01F8 ; Changes_When_Casefolded # L& [3] LATIN CAPITAL LETTER HWAIR..LATIN CAPITAL LETTER N WITH GRAVE +01FA ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE +01FC ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER AE WITH ACUTE +01FE ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH STROKE AND ACUTE +0200 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH DOUBLE GRAVE +0202 ; Changes_When_Casefolded # L& LATIN 
CAPITAL LETTER A WITH INVERTED BREVE +0204 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH DOUBLE GRAVE +0206 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH INVERTED BREVE +0208 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH DOUBLE GRAVE +020A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH INVERTED BREVE +020C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH DOUBLE GRAVE +020E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH INVERTED BREVE +0210 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER R WITH DOUBLE GRAVE +0212 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER R WITH INVERTED BREVE +0214 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH DOUBLE GRAVE +0216 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH INVERTED BREVE +0218 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER S WITH COMMA BELOW +021A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER T WITH COMMA BELOW +021C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER YOGH +021E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER H WITH CARON +0220 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER N WITH LONG RIGHT LEG +0222 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER OU +0224 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Z WITH HOOK +0226 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH DOT ABOVE +0228 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH CEDILLA +022A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH DIAERESIS AND MACRON +022C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH TILDE AND MACRON +022E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH DOT ABOVE +0230 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH DOT ABOVE AND MACRON +0232 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Y WITH MACRON +023A..023B ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER A WITH STROKE..LATIN CAPITAL LETTER C WITH STROKE 
+023D..023E ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER L WITH BAR..LATIN CAPITAL LETTER T WITH DIAGONAL STROKE +0241 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER GLOTTAL STOP +0243..0246 ; Changes_When_Casefolded # L& [4] LATIN CAPITAL LETTER B WITH STROKE..LATIN CAPITAL LETTER E WITH STROKE +0248 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER J WITH STROKE +024A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER SMALL Q WITH HOOK TAIL +024C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER R WITH STROKE +024E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Y WITH STROKE +0345 ; Changes_When_Casefolded # Mn COMBINING GREEK YPOGEGRAMMENI +0370 ; Changes_When_Casefolded # L& GREEK CAPITAL LETTER HETA +0372 ; Changes_When_Casefolded # L& GREEK CAPITAL LETTER ARCHAIC SAMPI +0376 ; Changes_When_Casefolded # L& GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA +0386 ; Changes_When_Casefolded # L& GREEK CAPITAL LETTER ALPHA WITH TONOS +0388..038A ; Changes_When_Casefolded # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS +038C ; Changes_When_Casefolded # L& GREEK CAPITAL LETTER OMICRON WITH TONOS +038E..038F ; Changes_When_Casefolded # L& [2] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER OMEGA WITH TONOS +0391..03A1 ; Changes_When_Casefolded # L& [17] GREEK CAPITAL LETTER ALPHA..GREEK CAPITAL LETTER RHO +03A3..03AB ; Changes_When_Casefolded # L& [9] GREEK CAPITAL LETTER SIGMA..GREEK CAPITAL LETTER UPSILON WITH DIALYTIKA +03C2 ; Changes_When_Casefolded # L& GREEK SMALL LETTER FINAL SIGMA +03CF..03D1 ; Changes_When_Casefolded # L& [3] GREEK CAPITAL KAI SYMBOL..GREEK THETA SYMBOL +03D5..03D6 ; Changes_When_Casefolded # L& [2] GREEK PHI SYMBOL..GREEK PI SYMBOL +03D8 ; Changes_When_Casefolded # L& GREEK LETTER ARCHAIC KOPPA +03DA ; Changes_When_Casefolded # L& GREEK LETTER STIGMA +03DC ; Changes_When_Casefolded # L& GREEK LETTER DIGAMMA +03DE ; Changes_When_Casefolded # L& GREEK LETTER KOPPA +03E0 ; 
Changes_When_Casefolded # L& GREEK LETTER SAMPI +03E2 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER SHEI +03E4 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER FEI +03E6 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER KHEI +03E8 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER HORI +03EA ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER GANGIA +03EC ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER SHIMA +03EE ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER DEI +03F0..03F1 ; Changes_When_Casefolded # L& [2] GREEK KAPPA SYMBOL..GREEK RHO SYMBOL +03F4..03F5 ; Changes_When_Casefolded # L& [2] GREEK CAPITAL THETA SYMBOL..GREEK LUNATE EPSILON SYMBOL +03F7 ; Changes_When_Casefolded # L& GREEK CAPITAL LETTER SHO +03F9..03FA ; Changes_When_Casefolded # L& [2] GREEK CAPITAL LUNATE SIGMA SYMBOL..GREEK CAPITAL LETTER SAN +03FD..042F ; Changes_When_Casefolded # L& [51] GREEK CAPITAL REVERSED LUNATE SIGMA SYMBOL..CYRILLIC CAPITAL LETTER YA +0460 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER OMEGA +0462 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER YAT +0464 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER IOTIFIED E +0466 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER LITTLE YUS +0468 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER IOTIFIED LITTLE YUS +046A ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER BIG YUS +046C ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER IOTIFIED BIG YUS +046E ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KSI +0470 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER PSI +0472 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER FITA +0474 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER IZHITSA +0476 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER IZHITSA WITH DOUBLE GRAVE ACCENT +0478 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER UK +047A ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ROUND OMEGA +047C ; 
Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER OMEGA WITH TITLO +047E ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER OT +0480 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KOPPA +048A ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER SHORT I WITH TAIL +048C ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER SEMISOFT SIGN +048E ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ER WITH TICK +0490 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER GHE WITH UPTURN +0492 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER GHE WITH STROKE +0494 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER GHE WITH MIDDLE HOOK +0496 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ZHE WITH DESCENDER +0498 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ZE WITH DESCENDER +049A ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KA WITH DESCENDER +049C ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KA WITH VERTICAL STROKE +049E ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KA WITH STROKE +04A0 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER BASHKIR KA +04A2 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER EN WITH DESCENDER +04A4 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LIGATURE EN GHE +04A6 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER PE WITH MIDDLE HOOK +04A8 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ABKHASIAN HA +04AA ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ES WITH DESCENDER +04AC ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER TE WITH DESCENDER +04AE ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER STRAIGHT U +04B0 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER STRAIGHT U WITH STROKE +04B2 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER HA WITH DESCENDER +04B4 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LIGATURE TE TSE +04B6 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER CHE WITH DESCENDER +04B8 ; 
Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER CHE WITH VERTICAL STROKE +04BA ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER SHHA +04BC ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ABKHASIAN CHE +04BE ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ABKHASIAN CHE WITH DESCENDER +04C0..04C1 ; Changes_When_Casefolded # L& [2] CYRILLIC LETTER PALOCHKA..CYRILLIC CAPITAL LETTER ZHE WITH BREVE +04C3 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KA WITH HOOK +04C5 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER EL WITH TAIL +04C7 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER EN WITH HOOK +04C9 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER EN WITH TAIL +04CB ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KHAKASSIAN CHE +04CD ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER EM WITH TAIL +04D0 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER A WITH BREVE +04D2 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER A WITH DIAERESIS +04D4 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LIGATURE A IE +04D6 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER IE WITH BREVE +04D8 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER SCHWA +04DA ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER SCHWA WITH DIAERESIS +04DC ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ZHE WITH DIAERESIS +04DE ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ZE WITH DIAERESIS +04E0 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ABKHASIAN DZE +04E2 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER I WITH MACRON +04E4 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER I WITH DIAERESIS +04E6 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER O WITH DIAERESIS +04E8 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER BARRED O +04EA ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER BARRED O WITH DIAERESIS +04EC ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER E WITH 
DIAERESIS +04EE ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER U WITH MACRON +04F0 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER U WITH DIAERESIS +04F2 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER U WITH DOUBLE ACUTE +04F4 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER CHE WITH DIAERESIS +04F6 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER GHE WITH DESCENDER +04F8 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER YERU WITH DIAERESIS +04FA ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER GHE WITH STROKE AND HOOK +04FC ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER HA WITH HOOK +04FE ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER HA WITH STROKE +0500 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KOMI DE +0502 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KOMI DJE +0504 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KOMI ZJE +0506 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KOMI DZJE +0508 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KOMI LJE +050A ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KOMI NJE +050C ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KOMI SJE +050E ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER KOMI TJE +0510 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER REVERSED ZE +0512 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER EL WITH HOOK +0514 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER LHA +0516 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER RHA +0518 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER YAE +051A ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER QA +051C ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER WE +051E ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ALEUT KA +0520 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER EL WITH MIDDLE HOOK +0522 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER EN WITH MIDDLE HOOK +0524 ; 
Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER PE WITH DESCENDER +0531..0556 ; Changes_When_Casefolded # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH +0587 ; Changes_When_Casefolded # L& ARMENIAN SMALL LIGATURE ECH YIWN +10A0..10C5 ; Changes_When_Casefolded # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE +1E00 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH RING BELOW +1E02 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER B WITH DOT ABOVE +1E04 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER B WITH DOT BELOW +1E06 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER B WITH LINE BELOW +1E08 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER C WITH CEDILLA AND ACUTE +1E0A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER D WITH DOT ABOVE +1E0C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER D WITH DOT BELOW +1E0E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER D WITH LINE BELOW +1E10 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER D WITH CEDILLA +1E12 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER D WITH CIRCUMFLEX BELOW +1E14 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH MACRON AND GRAVE +1E16 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH MACRON AND ACUTE +1E18 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX BELOW +1E1A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH TILDE BELOW +1E1C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH CEDILLA AND BREVE +1E1E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER F WITH DOT ABOVE +1E20 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER G WITH MACRON +1E22 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER H WITH DOT ABOVE +1E24 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER H WITH DOT BELOW +1E26 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER H WITH DIAERESIS +1E28 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER H WITH CEDILLA +1E2A ; Changes_When_Casefolded 
# L& LATIN CAPITAL LETTER H WITH BREVE BELOW +1E2C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH TILDE BELOW +1E2E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH DIAERESIS AND ACUTE +1E30 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER K WITH ACUTE +1E32 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER K WITH DOT BELOW +1E34 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER K WITH LINE BELOW +1E36 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH DOT BELOW +1E38 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH DOT BELOW AND MACRON +1E3A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH LINE BELOW +1E3C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH CIRCUMFLEX BELOW +1E3E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER M WITH ACUTE +1E40 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER M WITH DOT ABOVE +1E42 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER M WITH DOT BELOW +1E44 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER N WITH DOT ABOVE +1E46 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER N WITH DOT BELOW +1E48 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER N WITH LINE BELOW +1E4A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER N WITH CIRCUMFLEX BELOW +1E4C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH TILDE AND ACUTE +1E4E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH TILDE AND DIAERESIS +1E50 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH MACRON AND GRAVE +1E52 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH MACRON AND ACUTE +1E54 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER P WITH ACUTE +1E56 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER P WITH DOT ABOVE +1E58 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER R WITH DOT ABOVE +1E5A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER R WITH DOT BELOW +1E5C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER R WITH DOT BELOW AND MACRON +1E5E ; 
Changes_When_Casefolded # L& LATIN CAPITAL LETTER R WITH LINE BELOW +1E60 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER S WITH DOT ABOVE +1E62 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER S WITH DOT BELOW +1E64 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER S WITH ACUTE AND DOT ABOVE +1E66 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER S WITH CARON AND DOT ABOVE +1E68 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER S WITH DOT BELOW AND DOT ABOVE +1E6A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER T WITH DOT ABOVE +1E6C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER T WITH DOT BELOW +1E6E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER T WITH LINE BELOW +1E70 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER T WITH CIRCUMFLEX BELOW +1E72 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH DIAERESIS BELOW +1E74 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH TILDE BELOW +1E76 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH CIRCUMFLEX BELOW +1E78 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH TILDE AND ACUTE +1E7A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH MACRON AND DIAERESIS +1E7C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER V WITH TILDE +1E7E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER V WITH DOT BELOW +1E80 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER W WITH GRAVE +1E82 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER W WITH ACUTE +1E84 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER W WITH DIAERESIS +1E86 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER W WITH DOT ABOVE +1E88 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER W WITH DOT BELOW +1E8A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER X WITH DOT ABOVE +1E8C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER X WITH DIAERESIS +1E8E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Y WITH DOT ABOVE +1E90 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Z WITH 
CIRCUMFLEX +1E92 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Z WITH DOT BELOW +1E94 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Z WITH LINE BELOW +1E9A..1E9B ; Changes_When_Casefolded # L& [2] LATIN SMALL LETTER A WITH RIGHT HALF RING..LATIN SMALL LETTER LONG S WITH DOT ABOVE +1E9E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER SHARP S +1EA0 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH DOT BELOW +1EA2 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH HOOK ABOVE +1EA4 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND ACUTE +1EA6 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND GRAVE +1EA8 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE +1EAA ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND TILDE +1EAC ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND DOT BELOW +1EAE ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH BREVE AND ACUTE +1EB0 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH BREVE AND GRAVE +1EB2 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH BREVE AND HOOK ABOVE +1EB4 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH BREVE AND TILDE +1EB6 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER A WITH BREVE AND DOT BELOW +1EB8 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH DOT BELOW +1EBA ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH HOOK ABOVE +1EBC ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH TILDE +1EBE ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND ACUTE +1EC0 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND GRAVE +1EC2 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND HOOK ABOVE +1EC4 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND TILDE +1EC6 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER E WITH 
CIRCUMFLEX AND DOT BELOW +1EC8 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH HOOK ABOVE +1ECA ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER I WITH DOT BELOW +1ECC ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH DOT BELOW +1ECE ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH HOOK ABOVE +1ED0 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND ACUTE +1ED2 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND GRAVE +1ED4 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND HOOK ABOVE +1ED6 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND TILDE +1ED8 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND DOT BELOW +1EDA ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH HORN AND ACUTE +1EDC ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH HORN AND GRAVE +1EDE ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH HORN AND HOOK ABOVE +1EE0 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH HORN AND TILDE +1EE2 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH HORN AND DOT BELOW +1EE4 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH DOT BELOW +1EE6 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH HOOK ABOVE +1EE8 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH HORN AND ACUTE +1EEA ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH HORN AND GRAVE +1EEC ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH HORN AND HOOK ABOVE +1EEE ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH HORN AND TILDE +1EF0 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER U WITH HORN AND DOT BELOW +1EF2 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Y WITH GRAVE +1EF4 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Y WITH DOT BELOW +1EF6 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Y WITH HOOK ABOVE +1EF8 ; Changes_When_Casefolded # L& LATIN 
CAPITAL LETTER Y WITH TILDE +1EFA ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER MIDDLE-WELSH LL +1EFC ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER MIDDLE-WELSH V +1EFE ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Y WITH LOOP +1F08..1F0F ; Changes_When_Casefolded # L& [8] GREEK CAPITAL LETTER ALPHA WITH PSILI..GREEK CAPITAL LETTER ALPHA WITH DASIA AND PERISPOMENI +1F18..1F1D ; Changes_When_Casefolded # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA +1F28..1F2F ; Changes_When_Casefolded # L& [8] GREEK CAPITAL LETTER ETA WITH PSILI..GREEK CAPITAL LETTER ETA WITH DASIA AND PERISPOMENI +1F38..1F3F ; Changes_When_Casefolded # L& [8] GREEK CAPITAL LETTER IOTA WITH PSILI..GREEK CAPITAL LETTER IOTA WITH DASIA AND PERISPOMENI +1F48..1F4D ; Changes_When_Casefolded # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F59 ; Changes_When_Casefolded # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; Changes_When_Casefolded # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; Changes_When_Casefolded # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F ; Changes_When_Casefolded # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F68..1F6F ; Changes_When_Casefolded # L& [8] GREEK CAPITAL LETTER OMEGA WITH PSILI..GREEK CAPITAL LETTER OMEGA WITH DASIA AND PERISPOMENI +1F80..1FAF ; Changes_When_Casefolded # L& [48] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK CAPITAL LETTER OMEGA WITH DASIA AND PERISPOMENI AND PROSGEGRAMMENI +1FB2..1FB4 ; Changes_When_Casefolded # L& [3] GREEK SMALL LETTER ALPHA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB7..1FBC ; Changes_When_Casefolded # L& [6] GREEK SMALL LETTER ALPHA WITH PERISPOMENI AND YPOGEGRAMMENI..GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI +1FC2..1FC4 ; Changes_When_Casefolded # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND 
YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC7..1FCC ; Changes_When_Casefolded # L& [6] GREEK SMALL LETTER ETA WITH PERISPOMENI AND YPOGEGRAMMENI..GREEK CAPITAL LETTER ETA WITH PROSGEGRAMMENI +1FD8..1FDB ; Changes_When_Casefolded # L& [4] GREEK CAPITAL LETTER IOTA WITH VRACHY..GREEK CAPITAL LETTER IOTA WITH OXIA +1FE8..1FEC ; Changes_When_Casefolded # L& [5] GREEK CAPITAL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FF2..1FF4 ; Changes_When_Casefolded # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF7..1FFC ; Changes_When_Casefolded # L& [6] GREEK SMALL LETTER OMEGA WITH PERISPOMENI AND YPOGEGRAMMENI..GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI +2126 ; Changes_When_Casefolded # L& OHM SIGN +212A..212B ; Changes_When_Casefolded # L& [2] KELVIN SIGN..ANGSTROM SIGN +2132 ; Changes_When_Casefolded # L& TURNED CAPITAL F +2160..216F ; Changes_When_Casefolded # Nl [16] ROMAN NUMERAL ONE..ROMAN NUMERAL ONE THOUSAND +2183 ; Changes_When_Casefolded # L& ROMAN NUMERAL REVERSED ONE HUNDRED +24B6..24CF ; Changes_When_Casefolded # So [26] CIRCLED LATIN CAPITAL LETTER A..CIRCLED LATIN CAPITAL LETTER Z +2C00..2C2E ; Changes_When_Casefolded # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC CAPITAL LETTER LATINATE MYSLITE +2C60 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH DOUBLE BAR +2C62..2C64 ; Changes_When_Casefolded # L& [3] LATIN CAPITAL LETTER L WITH MIDDLE TILDE..LATIN CAPITAL LETTER R WITH TAIL +2C67 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER H WITH DESCENDER +2C69 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER K WITH DESCENDER +2C6B ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Z WITH DESCENDER +2C6D..2C70 ; Changes_When_Casefolded # L& [4] LATIN CAPITAL LETTER ALPHA..LATIN CAPITAL LETTER TURNED ALPHA +2C72 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER W WITH HOOK +2C75 ; Changes_When_Casefolded # L& LATIN CAPITAL 
LETTER HALF H +2C7E..2C80 ; Changes_When_Casefolded # L& [3] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC CAPITAL LETTER ALFA +2C82 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER VIDA +2C84 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER GAMMA +2C86 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER DALDA +2C88 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER EIE +2C8A ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER SOU +2C8C ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER ZATA +2C8E ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER HATE +2C90 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER THETHE +2C92 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER IAUDA +2C94 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER KAPA +2C96 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER LAULA +2C98 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER MI +2C9A ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER NI +2C9C ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER KSI +2C9E ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER O +2CA0 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER PI +2CA2 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER RO +2CA4 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER SIMA +2CA6 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER TAU +2CA8 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER UA +2CAA ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER FI +2CAC ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER KHI +2CAE ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER PSI +2CB0 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OOU +2CB2 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER DIALECT-P ALEF +2CB4 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC AIN +2CB6 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC EIE +2CB8 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER DIALECT-P KAPA +2CBA ; Changes_When_Casefolded # L& COPTIC CAPITAL 
LETTER DIALECT-P NI +2CBC ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC NI +2CBE ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC OOU +2CC0 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER SAMPI +2CC2 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER CROSSED SHEI +2CC4 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC SHEI +2CC6 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC ESH +2CC8 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER AKHMIMIC KHEI +2CCA ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER DIALECT-P HORI +2CCC ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC HORI +2CCE ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC HA +2CD0 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER L-SHAPED HA +2CD2 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC HEI +2CD4 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC HAT +2CD6 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC GANGIA +2CD8 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC DJA +2CDA ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD COPTIC SHIMA +2CDC ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD NUBIAN SHIMA +2CDE ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD NUBIAN NGI +2CE0 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD NUBIAN NYI +2CE2 ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER OLD NUBIAN WAU +2CEB ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI +2CED ; Changes_When_Casefolded # L& COPTIC CAPITAL LETTER CRYPTOGRAMMIC GANGIA +A640 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ZEMLYA +A642 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER DZELO +A644 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER REVERSED DZE +A646 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER IOTA +A648 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER DJERV 
+A64A ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER MONOGRAPH UK +A64C ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER BROAD OMEGA +A64E ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER NEUTRAL YER +A650 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER YERU WITH BACK YER +A652 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER IOTIFIED YAT +A654 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER REVERSED YU +A656 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER IOTIFIED A +A658 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER CLOSED LITTLE YUS +A65A ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER BLENDED YUS +A65C ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER IOTIFIED CLOSED LITTLE YUS +A65E ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER YN +A662 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER SOFT DE +A664 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER SOFT EL +A666 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER SOFT EM +A668 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER MONOCULAR O +A66A ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER BINOCULAR O +A66C ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER DOUBLE MONOCULAR O +A680 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER DWE +A682 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER DZWE +A684 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER ZHWE +A686 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER CCHE +A688 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER DZZE +A68A ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER TE WITH MIDDLE HOOK +A68C ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER TWE +A68E ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER TSWE +A690 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER TSSE +A692 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER TCHE +A694 ; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER HWE +A696 
; Changes_When_Casefolded # L& CYRILLIC CAPITAL LETTER SHWE +A722 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF +A724 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER EGYPTOLOGICAL AIN +A726 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER HENG +A728 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER TZ +A72A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER TRESILLO +A72C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER CUATRILLO +A72E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER CUATRILLO WITH COMMA +A732 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER AA +A734 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER AO +A736 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER AU +A738 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER AV +A73A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER AV WITH HORIZONTAL BAR +A73C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER AY +A73E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER REVERSED C WITH DOT +A740 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER K WITH STROKE +A742 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER K WITH DIAGONAL STROKE +A744 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER K WITH STROKE AND DIAGONAL STROKE +A746 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER BROKEN L +A748 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER L WITH HIGH STROKE +A74A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH LONG STROKE OVERLAY +A74C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER O WITH LOOP +A74E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER OO +A750 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER P WITH STROKE THROUGH DESCENDER +A752 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER P WITH FLOURISH +A754 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER P WITH SQUIRREL TAIL +A756 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER Q WITH STROKE THROUGH DESCENDER +A758 ; Changes_When_Casefolded # L& LATIN 
CAPITAL LETTER Q WITH DIAGONAL STROKE +A75A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER R ROTUNDA +A75C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER RUM ROTUNDA +A75E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER V WITH DIAGONAL STROKE +A760 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER VY +A762 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER VISIGOTHIC Z +A764 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER THORN WITH STROKE +A766 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER THORN WITH STROKE THROUGH DESCENDER +A768 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER VEND +A76A ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER ET +A76C ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER IS +A76E ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER CON +A779 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER INSULAR D +A77B ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER INSULAR F +A77D..A77E ; Changes_When_Casefolded # L& [2] LATIN CAPITAL LETTER INSULAR G..LATIN CAPITAL LETTER TURNED INSULAR G +A780 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER TURNED L +A782 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER INSULAR R +A784 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER INSULAR S +A786 ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER INSULAR T +A78B ; Changes_When_Casefolded # L& LATIN CAPITAL LETTER SALTILLO +FB00..FB06 ; Changes_When_Casefolded # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; Changes_When_Casefolded # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FF21..FF3A ; Changes_When_Casefolded # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN CAPITAL LETTER Z +10400..10427 ; Changes_When_Casefolded # L& [40] DESERET CAPITAL LETTER LONG I..DESERET CAPITAL LETTER EW + +# Total code points: 1093 + +# ================================================ + +# Derived Property: Changes_When_Casemapped (CWCM) +# Characters whose normalized 
forms are not stable under case mapping. +# For more information, see D128 in Section 3.13, "Default Case Algorithms". +# Changes_When_Casemapped(X) is true when CWL(X), or CWT(X), or CWU(X) + +0041..005A ; Changes_When_Casemapped # L& [26] LATIN CAPITAL LETTER A..LATIN CAPITAL LETTER Z +0061..007A ; Changes_When_Casemapped # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z +00B5 ; Changes_When_Casemapped # L& MICRO SIGN +00C0..00D6 ; Changes_When_Casemapped # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS +00D8..00F6 ; Changes_When_Casemapped # L& [31] LATIN CAPITAL LETTER O WITH STROKE..LATIN SMALL LETTER O WITH DIAERESIS +00F8..0137 ; Changes_When_Casemapped # L& [64] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER K WITH CEDILLA +0139..018C ; Changes_When_Casemapped # L& [84] LATIN CAPITAL LETTER L WITH ACUTE..LATIN SMALL LETTER D WITH TOPBAR +018E..019A ; Changes_When_Casemapped # L& [13] LATIN CAPITAL LETTER REVERSED E..LATIN SMALL LETTER L WITH BAR +019C..01A9 ; Changes_When_Casemapped # L& [14] LATIN CAPITAL LETTER TURNED M..LATIN CAPITAL LETTER ESH +01AC..01B9 ; Changes_When_Casemapped # L& [14] LATIN CAPITAL LETTER T WITH HOOK..LATIN SMALL LETTER EZH REVERSED +01BC..01BD ; Changes_When_Casemapped # L& [2] LATIN CAPITAL LETTER TONE FIVE..LATIN SMALL LETTER TONE FIVE +01BF ; Changes_When_Casemapped # L& LATIN LETTER WYNN +01C4..0220 ; Changes_When_Casemapped # L& [93] LATIN CAPITAL LETTER DZ WITH CARON..LATIN CAPITAL LETTER N WITH LONG RIGHT LEG +0222..0233 ; Changes_When_Casemapped # L& [18] LATIN CAPITAL LETTER OU..LATIN SMALL LETTER Y WITH MACRON +023A..0254 ; Changes_When_Casemapped # L& [27] LATIN CAPITAL LETTER A WITH STROKE..LATIN SMALL LETTER OPEN O +0256..0257 ; Changes_When_Casemapped # L& [2] LATIN SMALL LETTER D WITH TAIL..LATIN SMALL LETTER D WITH HOOK +0259 ; Changes_When_Casemapped # L& LATIN SMALL LETTER SCHWA +025B ; Changes_When_Casemapped # L& LATIN SMALL LETTER OPEN E +0260 ; 
Changes_When_Casemapped # L& LATIN SMALL LETTER G WITH HOOK +0263 ; Changes_When_Casemapped # L& LATIN SMALL LETTER GAMMA +0268..0269 ; Changes_When_Casemapped # L& [2] LATIN SMALL LETTER I WITH STROKE..LATIN SMALL LETTER IOTA +026B ; Changes_When_Casemapped # L& LATIN SMALL LETTER L WITH MIDDLE TILDE +026F ; Changes_When_Casemapped # L& LATIN SMALL LETTER TURNED M +0271..0272 ; Changes_When_Casemapped # L& [2] LATIN SMALL LETTER M WITH HOOK..LATIN SMALL LETTER N WITH LEFT HOOK +0275 ; Changes_When_Casemapped # L& LATIN SMALL LETTER BARRED O +027D ; Changes_When_Casemapped # L& LATIN SMALL LETTER R WITH TAIL +0280 ; Changes_When_Casemapped # L& LATIN LETTER SMALL CAPITAL R +0283 ; Changes_When_Casemapped # L& LATIN SMALL LETTER ESH +0288..028C ; Changes_When_Casemapped # L& [5] LATIN SMALL LETTER T WITH RETROFLEX HOOK..LATIN SMALL LETTER TURNED V +0292 ; Changes_When_Casemapped # L& LATIN SMALL LETTER EZH +0345 ; Changes_When_Casemapped # Mn COMBINING GREEK YPOGEGRAMMENI +0370..0373 ; Changes_When_Casemapped # L& [4] GREEK CAPITAL LETTER HETA..GREEK SMALL LETTER ARCHAIC SAMPI +0376..0377 ; Changes_When_Casemapped # L& [2] GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA..GREEK SMALL LETTER PAMPHYLIAN DIGAMMA +037B..037D ; Changes_When_Casemapped # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL +0386 ; Changes_When_Casemapped # L& GREEK CAPITAL LETTER ALPHA WITH TONOS +0388..038A ; Changes_When_Casemapped # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS +038C ; Changes_When_Casemapped # L& GREEK CAPITAL LETTER OMICRON WITH TONOS +038E..03A1 ; Changes_When_Casemapped # L& [20] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER RHO +03A3..03D1 ; Changes_When_Casemapped # L& [47] GREEK CAPITAL LETTER SIGMA..GREEK THETA SYMBOL +03D5..03F2 ; Changes_When_Casemapped # L& [30] GREEK PHI SYMBOL..GREEK LUNATE SIGMA SYMBOL +03F4..03F5 ; Changes_When_Casemapped # L& [2] GREEK CAPITAL 
THETA SYMBOL..GREEK LUNATE EPSILON SYMBOL +03F7..03FB ; Changes_When_Casemapped # L& [5] GREEK CAPITAL LETTER SHO..GREEK SMALL LETTER SAN +03FD..0481 ; Changes_When_Casemapped # L& [133] GREEK CAPITAL REVERSED LUNATE SIGMA SYMBOL..CYRILLIC SMALL LETTER KOPPA +048A..0525 ; Changes_When_Casemapped # L& [156] CYRILLIC CAPITAL LETTER SHORT I WITH TAIL..CYRILLIC SMALL LETTER PE WITH DESCENDER +0531..0556 ; Changes_When_Casemapped # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH +0561..0587 ; Changes_When_Casemapped # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN +10A0..10C5 ; Changes_When_Casemapped # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE +1D79 ; Changes_When_Casemapped # L& LATIN SMALL LETTER INSULAR G +1D7D ; Changes_When_Casemapped # L& LATIN SMALL LETTER P WITH STROKE +1E00..1E9B ; Changes_When_Casemapped # L& [156] LATIN CAPITAL LETTER A WITH RING BELOW..LATIN SMALL LETTER LONG S WITH DOT ABOVE +1E9E ; Changes_When_Casemapped # L& LATIN CAPITAL LETTER SHARP S +1EA0..1F15 ; Changes_When_Casemapped # L& [118] LATIN CAPITAL LETTER A WITH DOT BELOW..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA +1F18..1F1D ; Changes_When_Casemapped # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA +1F20..1F45 ; Changes_When_Casemapped # L& [38] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA +1F48..1F4D ; Changes_When_Casemapped # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F50..1F57 ; Changes_When_Casemapped # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F59 ; Changes_When_Casemapped # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; Changes_When_Casemapped # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; Changes_When_Casemapped # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F..1F7D ; 
Changes_When_Casemapped # L& [31] GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI..GREEK SMALL LETTER OMEGA WITH OXIA +1F80..1FB4 ; Changes_When_Casemapped # L& [53] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB6..1FBC ; Changes_When_Casemapped # L& [7] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI +1FBE ; Changes_When_Casemapped # L& GREEK PROSGEGRAMMENI +1FC2..1FC4 ; Changes_When_Casemapped # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC6..1FCC ; Changes_When_Casemapped # L& [7] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK CAPITAL LETTER ETA WITH PROSGEGRAMMENI +1FD0..1FD3 ; Changes_When_Casemapped # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA +1FD6..1FDB ; Changes_When_Casemapped # L& [6] GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK CAPITAL LETTER IOTA WITH OXIA +1FE0..1FEC ; Changes_When_Casemapped # L& [13] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FF2..1FF4 ; Changes_When_Casemapped # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF6..1FFC ; Changes_When_Casemapped # L& [7] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI +2126 ; Changes_When_Casemapped # L& OHM SIGN +212A..212B ; Changes_When_Casemapped # L& [2] KELVIN SIGN..ANGSTROM SIGN +2132 ; Changes_When_Casemapped # L& TURNED CAPITAL F +214E ; Changes_When_Casemapped # L& TURNED SMALL F +2160..217F ; Changes_When_Casemapped # Nl [32] ROMAN NUMERAL ONE..SMALL ROMAN NUMERAL ONE THOUSAND +2183..2184 ; Changes_When_Casemapped # L& [2] ROMAN NUMERAL REVERSED ONE HUNDRED..LATIN SMALL LETTER REVERSED C +24B6..24E9 ; Changes_When_Casemapped # So [52] CIRCLED LATIN CAPITAL LETTER A..CIRCLED LATIN SMALL LETTER Z +2C00..2C2E ; 
Changes_When_Casemapped # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC CAPITAL LETTER LATINATE MYSLITE +2C30..2C5E ; Changes_When_Casemapped # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE +2C60..2C70 ; Changes_When_Casemapped # L& [17] LATIN CAPITAL LETTER L WITH DOUBLE BAR..LATIN CAPITAL LETTER TURNED ALPHA +2C72..2C73 ; Changes_When_Casemapped # L& [2] LATIN CAPITAL LETTER W WITH HOOK..LATIN SMALL LETTER W WITH HOOK +2C75..2C76 ; Changes_When_Casemapped # L& [2] LATIN CAPITAL LETTER HALF H..LATIN SMALL LETTER HALF H +2C7E..2CE3 ; Changes_When_Casemapped # L& [102] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC SMALL LETTER OLD NUBIAN WAU +2CEB..2CEE ; Changes_When_Casemapped # L& [4] COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI..COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA +2D00..2D25 ; Changes_When_Casemapped # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE +A640..A65F ; Changes_When_Casemapped # L& [32] CYRILLIC CAPITAL LETTER ZEMLYA..CYRILLIC SMALL LETTER YN +A662..A66D ; Changes_When_Casemapped # L& [12] CYRILLIC CAPITAL LETTER SOFT DE..CYRILLIC SMALL LETTER DOUBLE MONOCULAR O +A680..A697 ; Changes_When_Casemapped # L& [24] CYRILLIC CAPITAL LETTER DWE..CYRILLIC SMALL LETTER SHWE +A722..A72F ; Changes_When_Casemapped # L& [14] LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF..LATIN SMALL LETTER CUATRILLO WITH COMMA +A732..A76F ; Changes_When_Casemapped # L& [62] LATIN CAPITAL LETTER AA..LATIN SMALL LETTER CON +A779..A787 ; Changes_When_Casemapped # L& [15] LATIN CAPITAL LETTER INSULAR D..LATIN SMALL LETTER INSULAR T +A78B..A78C ; Changes_When_Casemapped # L& [2] LATIN CAPITAL LETTER SALTILLO..LATIN SMALL LETTER SALTILLO +FB00..FB06 ; Changes_When_Casemapped # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; Changes_When_Casemapped # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FF21..FF3A ; Changes_When_Casemapped # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN 
CAPITAL LETTER Z +FF41..FF5A ; Changes_When_Casemapped # L& [26] FULLWIDTH LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z +10400..1044F ; Changes_When_Casemapped # L& [80] DESERET CAPITAL LETTER LONG I..DESERET SMALL LETTER EW + +# Total code points: 2110 + +# ================================================ + +# Derived Property: ID_Start +# Characters that can start an identifier. +# Generated from: +# Lu + Ll + Lt + Lm + Lo + Nl +# + Other_ID_Start +# - Pattern_Syntax +# - Pattern_White_Space +# NOTE: See UAX #31 for more information + +0041..005A ; ID_Start # L& [26] LATIN CAPITAL LETTER A..LATIN CAPITAL LETTER Z +0061..007A ; ID_Start # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z +00AA ; ID_Start # L& FEMININE ORDINAL INDICATOR +00B5 ; ID_Start # L& MICRO SIGN +00BA ; ID_Start # L& MASCULINE ORDINAL INDICATOR +00C0..00D6 ; ID_Start # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS +00D8..00F6 ; ID_Start # L& [31] LATIN CAPITAL LETTER O WITH STROKE..LATIN SMALL LETTER O WITH DIAERESIS +00F8..01BA ; ID_Start # L& [195] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER EZH WITH TAIL +01BB ; ID_Start # Lo LATIN LETTER TWO WITH STROKE +01BC..01BF ; ID_Start # L& [4] LATIN CAPITAL LETTER TONE FIVE..LATIN LETTER WYNN +01C0..01C3 ; ID_Start # Lo [4] LATIN LETTER DENTAL CLICK..LATIN LETTER RETROFLEX CLICK +01C4..0293 ; ID_Start # L& [208] LATIN CAPITAL LETTER DZ WITH CARON..LATIN SMALL LETTER EZH WITH CURL +0294 ; ID_Start # Lo LATIN LETTER GLOTTAL STOP +0295..02AF ; ID_Start # L& [27] LATIN LETTER PHARYNGEAL VOICED FRICATIVE..LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL +02B0..02C1 ; ID_Start # Lm [18] MODIFIER LETTER SMALL H..MODIFIER LETTER REVERSED GLOTTAL STOP +02C6..02D1 ; ID_Start # Lm [12] MODIFIER LETTER CIRCUMFLEX ACCENT..MODIFIER LETTER HALF TRIANGULAR COLON +02E0..02E4 ; ID_Start # Lm [5] MODIFIER LETTER SMALL GAMMA..MODIFIER LETTER SMALL REVERSED GLOTTAL STOP +02EC ; ID_Start # Lm MODIFIER LETTER 
VOICING +02EE ; ID_Start # Lm MODIFIER LETTER DOUBLE APOSTROPHE +0370..0373 ; ID_Start # L& [4] GREEK CAPITAL LETTER HETA..GREEK SMALL LETTER ARCHAIC SAMPI +0374 ; ID_Start # Lm GREEK NUMERAL SIGN +0376..0377 ; ID_Start # L& [2] GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA..GREEK SMALL LETTER PAMPHYLIAN DIGAMMA +037A ; ID_Start # Lm GREEK YPOGEGRAMMENI +037B..037D ; ID_Start # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL +0386 ; ID_Start # L& GREEK CAPITAL LETTER ALPHA WITH TONOS +0388..038A ; ID_Start # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS +038C ; ID_Start # L& GREEK CAPITAL LETTER OMICRON WITH TONOS +038E..03A1 ; ID_Start # L& [20] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER RHO +03A3..03F5 ; ID_Start # L& [83] GREEK CAPITAL LETTER SIGMA..GREEK LUNATE EPSILON SYMBOL +03F7..0481 ; ID_Start # L& [139] GREEK CAPITAL LETTER SHO..CYRILLIC SMALL LETTER KOPPA +048A..0525 ; ID_Start # L& [156] CYRILLIC CAPITAL LETTER SHORT I WITH TAIL..CYRILLIC SMALL LETTER PE WITH DESCENDER +0531..0556 ; ID_Start # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH +0559 ; ID_Start # Lm ARMENIAN MODIFIER LETTER LEFT HALF RING +0561..0587 ; ID_Start # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN +05D0..05EA ; ID_Start # Lo [27] HEBREW LETTER ALEF..HEBREW LETTER TAV +05F0..05F2 ; ID_Start # Lo [3] HEBREW LIGATURE YIDDISH DOUBLE VAV..HEBREW LIGATURE YIDDISH DOUBLE YOD +0621..063F ; ID_Start # Lo [31] ARABIC LETTER HAMZA..ARABIC LETTER FARSI YEH WITH THREE DOTS ABOVE +0640 ; ID_Start # Lm ARABIC TATWEEL +0641..064A ; ID_Start # Lo [10] ARABIC LETTER FEH..ARABIC LETTER YEH +066E..066F ; ID_Start # Lo [2] ARABIC LETTER DOTLESS BEH..ARABIC LETTER DOTLESS QAF +0671..06D3 ; ID_Start # Lo [99] ARABIC LETTER ALEF WASLA..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE +06D5 ; ID_Start # Lo ARABIC LETTER AE +06E5..06E6 ; ID_Start # Lm [2] ARABIC SMALL 
WAW..ARABIC SMALL YEH +06EE..06EF ; ID_Start # Lo [2] ARABIC LETTER DAL WITH INVERTED V..ARABIC LETTER REH WITH INVERTED V +06FA..06FC ; ID_Start # Lo [3] ARABIC LETTER SHEEN WITH DOT BELOW..ARABIC LETTER GHAIN WITH DOT BELOW +06FF ; ID_Start # Lo ARABIC LETTER HEH WITH INVERTED V +0710 ; ID_Start # Lo SYRIAC LETTER ALAPH +0712..072F ; ID_Start # Lo [30] SYRIAC LETTER BETH..SYRIAC LETTER PERSIAN DHALATH +074D..07A5 ; ID_Start # Lo [89] SYRIAC LETTER SOGDIAN ZHAIN..THAANA LETTER WAAVU +07B1 ; ID_Start # Lo THAANA LETTER NAA +07CA..07EA ; ID_Start # Lo [33] NKO LETTER A..NKO LETTER JONA RA +07F4..07F5 ; ID_Start # Lm [2] NKO HIGH TONE APOSTROPHE..NKO LOW TONE APOSTROPHE +07FA ; ID_Start # Lm NKO LAJANYALAN +0800..0815 ; ID_Start # Lo [22] SAMARITAN LETTER ALAF..SAMARITAN LETTER TAAF +081A ; ID_Start # Lm SAMARITAN MODIFIER LETTER EPENTHETIC YUT +0824 ; ID_Start # Lm SAMARITAN MODIFIER LETTER SHORT A +0828 ; ID_Start # Lm SAMARITAN MODIFIER LETTER I +0904..0939 ; ID_Start # Lo [54] DEVANAGARI LETTER SHORT A..DEVANAGARI LETTER HA +093D ; ID_Start # Lo DEVANAGARI SIGN AVAGRAHA +0950 ; ID_Start # Lo DEVANAGARI OM +0958..0961 ; ID_Start # Lo [10] DEVANAGARI LETTER QA..DEVANAGARI LETTER VOCALIC LL +0971 ; ID_Start # Lm DEVANAGARI SIGN HIGH SPACING DOT +0972 ; ID_Start # Lo DEVANAGARI LETTER CANDRA A +0979..097F ; ID_Start # Lo [7] DEVANAGARI LETTER ZHA..DEVANAGARI LETTER BBA +0985..098C ; ID_Start # Lo [8] BENGALI LETTER A..BENGALI LETTER VOCALIC L +098F..0990 ; ID_Start # Lo [2] BENGALI LETTER E..BENGALI LETTER AI +0993..09A8 ; ID_Start # Lo [22] BENGALI LETTER O..BENGALI LETTER NA +09AA..09B0 ; ID_Start # Lo [7] BENGALI LETTER PA..BENGALI LETTER RA +09B2 ; ID_Start # Lo BENGALI LETTER LA +09B6..09B9 ; ID_Start # Lo [4] BENGALI LETTER SHA..BENGALI LETTER HA +09BD ; ID_Start # Lo BENGALI SIGN AVAGRAHA +09CE ; ID_Start # Lo BENGALI LETTER KHANDA TA +09DC..09DD ; ID_Start # Lo [2] BENGALI LETTER RRA..BENGALI LETTER RHA +09DF..09E1 ; ID_Start # Lo [3] BENGALI LETTER 
YYA..BENGALI LETTER VOCALIC LL +09F0..09F1 ; ID_Start # Lo [2] BENGALI LETTER RA WITH MIDDLE DIAGONAL..BENGALI LETTER RA WITH LOWER DIAGONAL +0A05..0A0A ; ID_Start # Lo [6] GURMUKHI LETTER A..GURMUKHI LETTER UU +0A0F..0A10 ; ID_Start # Lo [2] GURMUKHI LETTER EE..GURMUKHI LETTER AI +0A13..0A28 ; ID_Start # Lo [22] GURMUKHI LETTER OO..GURMUKHI LETTER NA +0A2A..0A30 ; ID_Start # Lo [7] GURMUKHI LETTER PA..GURMUKHI LETTER RA +0A32..0A33 ; ID_Start # Lo [2] GURMUKHI LETTER LA..GURMUKHI LETTER LLA +0A35..0A36 ; ID_Start # Lo [2] GURMUKHI LETTER VA..GURMUKHI LETTER SHA +0A38..0A39 ; ID_Start # Lo [2] GURMUKHI LETTER SA..GURMUKHI LETTER HA +0A59..0A5C ; ID_Start # Lo [4] GURMUKHI LETTER KHHA..GURMUKHI LETTER RRA +0A5E ; ID_Start # Lo GURMUKHI LETTER FA +0A72..0A74 ; ID_Start # Lo [3] GURMUKHI IRI..GURMUKHI EK ONKAR +0A85..0A8D ; ID_Start # Lo [9] GUJARATI LETTER A..GUJARATI VOWEL CANDRA E +0A8F..0A91 ; ID_Start # Lo [3] GUJARATI LETTER E..GUJARATI VOWEL CANDRA O +0A93..0AA8 ; ID_Start # Lo [22] GUJARATI LETTER O..GUJARATI LETTER NA +0AAA..0AB0 ; ID_Start # Lo [7] GUJARATI LETTER PA..GUJARATI LETTER RA +0AB2..0AB3 ; ID_Start # Lo [2] GUJARATI LETTER LA..GUJARATI LETTER LLA +0AB5..0AB9 ; ID_Start # Lo [5] GUJARATI LETTER VA..GUJARATI LETTER HA +0ABD ; ID_Start # Lo GUJARATI SIGN AVAGRAHA +0AD0 ; ID_Start # Lo GUJARATI OM +0AE0..0AE1 ; ID_Start # Lo [2] GUJARATI LETTER VOCALIC RR..GUJARATI LETTER VOCALIC LL +0B05..0B0C ; ID_Start # Lo [8] ORIYA LETTER A..ORIYA LETTER VOCALIC L +0B0F..0B10 ; ID_Start # Lo [2] ORIYA LETTER E..ORIYA LETTER AI +0B13..0B28 ; ID_Start # Lo [22] ORIYA LETTER O..ORIYA LETTER NA +0B2A..0B30 ; ID_Start # Lo [7] ORIYA LETTER PA..ORIYA LETTER RA +0B32..0B33 ; ID_Start # Lo [2] ORIYA LETTER LA..ORIYA LETTER LLA +0B35..0B39 ; ID_Start # Lo [5] ORIYA LETTER VA..ORIYA LETTER HA +0B3D ; ID_Start # Lo ORIYA SIGN AVAGRAHA +0B5C..0B5D ; ID_Start # Lo [2] ORIYA LETTER RRA..ORIYA LETTER RHA +0B5F..0B61 ; ID_Start # Lo [3] ORIYA LETTER YYA..ORIYA LETTER VOCALIC LL 
+0B71 ; ID_Start # Lo ORIYA LETTER WA +0B83 ; ID_Start # Lo TAMIL SIGN VISARGA +0B85..0B8A ; ID_Start # Lo [6] TAMIL LETTER A..TAMIL LETTER UU +0B8E..0B90 ; ID_Start # Lo [3] TAMIL LETTER E..TAMIL LETTER AI +0B92..0B95 ; ID_Start # Lo [4] TAMIL LETTER O..TAMIL LETTER KA +0B99..0B9A ; ID_Start # Lo [2] TAMIL LETTER NGA..TAMIL LETTER CA +0B9C ; ID_Start # Lo TAMIL LETTER JA +0B9E..0B9F ; ID_Start # Lo [2] TAMIL LETTER NYA..TAMIL LETTER TTA +0BA3..0BA4 ; ID_Start # Lo [2] TAMIL LETTER NNA..TAMIL LETTER TA +0BA8..0BAA ; ID_Start # Lo [3] TAMIL LETTER NA..TAMIL LETTER PA +0BAE..0BB9 ; ID_Start # Lo [12] TAMIL LETTER MA..TAMIL LETTER HA +0BD0 ; ID_Start # Lo TAMIL OM +0C05..0C0C ; ID_Start # Lo [8] TELUGU LETTER A..TELUGU LETTER VOCALIC L +0C0E..0C10 ; ID_Start # Lo [3] TELUGU LETTER E..TELUGU LETTER AI +0C12..0C28 ; ID_Start # Lo [23] TELUGU LETTER O..TELUGU LETTER NA +0C2A..0C33 ; ID_Start # Lo [10] TELUGU LETTER PA..TELUGU LETTER LLA +0C35..0C39 ; ID_Start # Lo [5] TELUGU LETTER VA..TELUGU LETTER HA +0C3D ; ID_Start # Lo TELUGU SIGN AVAGRAHA +0C58..0C59 ; ID_Start # Lo [2] TELUGU LETTER TSA..TELUGU LETTER DZA +0C60..0C61 ; ID_Start # Lo [2] TELUGU LETTER VOCALIC RR..TELUGU LETTER VOCALIC LL +0C85..0C8C ; ID_Start # Lo [8] KANNADA LETTER A..KANNADA LETTER VOCALIC L +0C8E..0C90 ; ID_Start # Lo [3] KANNADA LETTER E..KANNADA LETTER AI +0C92..0CA8 ; ID_Start # Lo [23] KANNADA LETTER O..KANNADA LETTER NA +0CAA..0CB3 ; ID_Start # Lo [10] KANNADA LETTER PA..KANNADA LETTER LLA +0CB5..0CB9 ; ID_Start # Lo [5] KANNADA LETTER VA..KANNADA LETTER HA +0CBD ; ID_Start # Lo KANNADA SIGN AVAGRAHA +0CDE ; ID_Start # Lo KANNADA LETTER FA +0CE0..0CE1 ; ID_Start # Lo [2] KANNADA LETTER VOCALIC RR..KANNADA LETTER VOCALIC LL +0D05..0D0C ; ID_Start # Lo [8] MALAYALAM LETTER A..MALAYALAM LETTER VOCALIC L +0D0E..0D10 ; ID_Start # Lo [3] MALAYALAM LETTER E..MALAYALAM LETTER AI +0D12..0D28 ; ID_Start # Lo [23] MALAYALAM LETTER O..MALAYALAM LETTER NA +0D2A..0D39 ; ID_Start # Lo [16] MALAYALAM 
LETTER PA..MALAYALAM LETTER HA +0D3D ; ID_Start # Lo MALAYALAM SIGN AVAGRAHA +0D60..0D61 ; ID_Start # Lo [2] MALAYALAM LETTER VOCALIC RR..MALAYALAM LETTER VOCALIC LL +0D7A..0D7F ; ID_Start # Lo [6] MALAYALAM LETTER CHILLU NN..MALAYALAM LETTER CHILLU K +0D85..0D96 ; ID_Start # Lo [18] SINHALA LETTER AYANNA..SINHALA LETTER AUYANNA +0D9A..0DB1 ; ID_Start # Lo [24] SINHALA LETTER ALPAPRAANA KAYANNA..SINHALA LETTER DANTAJA NAYANNA +0DB3..0DBB ; ID_Start # Lo [9] SINHALA LETTER SANYAKA DAYANNA..SINHALA LETTER RAYANNA +0DBD ; ID_Start # Lo SINHALA LETTER DANTAJA LAYANNA +0DC0..0DC6 ; ID_Start # Lo [7] SINHALA LETTER VAYANNA..SINHALA LETTER FAYANNA +0E01..0E30 ; ID_Start # Lo [48] THAI CHARACTER KO KAI..THAI CHARACTER SARA A +0E32..0E33 ; ID_Start # Lo [2] THAI CHARACTER SARA AA..THAI CHARACTER SARA AM +0E40..0E45 ; ID_Start # Lo [6] THAI CHARACTER SARA E..THAI CHARACTER LAKKHANGYAO +0E46 ; ID_Start # Lm THAI CHARACTER MAIYAMOK +0E81..0E82 ; ID_Start # Lo [2] LAO LETTER KO..LAO LETTER KHO SUNG +0E84 ; ID_Start # Lo LAO LETTER KHO TAM +0E87..0E88 ; ID_Start # Lo [2] LAO LETTER NGO..LAO LETTER CO +0E8A ; ID_Start # Lo LAO LETTER SO TAM +0E8D ; ID_Start # Lo LAO LETTER NYO +0E94..0E97 ; ID_Start # Lo [4] LAO LETTER DO..LAO LETTER THO TAM +0E99..0E9F ; ID_Start # Lo [7] LAO LETTER NO..LAO LETTER FO SUNG +0EA1..0EA3 ; ID_Start # Lo [3] LAO LETTER MO..LAO LETTER LO LING +0EA5 ; ID_Start # Lo LAO LETTER LO LOOT +0EA7 ; ID_Start # Lo LAO LETTER WO +0EAA..0EAB ; ID_Start # Lo [2] LAO LETTER SO SUNG..LAO LETTER HO SUNG +0EAD..0EB0 ; ID_Start # Lo [4] LAO LETTER O..LAO VOWEL SIGN A +0EB2..0EB3 ; ID_Start # Lo [2] LAO VOWEL SIGN AA..LAO VOWEL SIGN AM +0EBD ; ID_Start # Lo LAO SEMIVOWEL SIGN NYO +0EC0..0EC4 ; ID_Start # Lo [5] LAO VOWEL SIGN E..LAO VOWEL SIGN AI +0EC6 ; ID_Start # Lm LAO KO LA +0EDC..0EDD ; ID_Start # Lo [2] LAO HO NO..LAO HO MO +0F00 ; ID_Start # Lo TIBETAN SYLLABLE OM +0F40..0F47 ; ID_Start # Lo [8] TIBETAN LETTER KA..TIBETAN LETTER JA +0F49..0F6C ; ID_Start # Lo 
[36] TIBETAN LETTER NYA..TIBETAN LETTER RRA +0F88..0F8B ; ID_Start # Lo [4] TIBETAN SIGN LCE TSA CAN..TIBETAN SIGN GRU MED RGYINGS +1000..102A ; ID_Start # Lo [43] MYANMAR LETTER KA..MYANMAR LETTER AU +103F ; ID_Start # Lo MYANMAR LETTER GREAT SA +1050..1055 ; ID_Start # Lo [6] MYANMAR LETTER SHA..MYANMAR LETTER VOCALIC LL +105A..105D ; ID_Start # Lo [4] MYANMAR LETTER MON NGA..MYANMAR LETTER MON BBE +1061 ; ID_Start # Lo MYANMAR LETTER SGAW KAREN SHA +1065..1066 ; ID_Start # Lo [2] MYANMAR LETTER WESTERN PWO KAREN THA..MYANMAR LETTER WESTERN PWO KAREN PWA +106E..1070 ; ID_Start # Lo [3] MYANMAR LETTER EASTERN PWO KAREN NNA..MYANMAR LETTER EASTERN PWO KAREN GHWA +1075..1081 ; ID_Start # Lo [13] MYANMAR LETTER SHAN KA..MYANMAR LETTER SHAN HA +108E ; ID_Start # Lo MYANMAR LETTER RUMAI PALAUNG FA +10A0..10C5 ; ID_Start # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE +10D0..10FA ; ID_Start # Lo [43] GEORGIAN LETTER AN..GEORGIAN LETTER AIN +10FC ; ID_Start # Lm MODIFIER LETTER GEORGIAN NAR +1100..1248 ; ID_Start # Lo [329] HANGUL CHOSEONG KIYEOK..ETHIOPIC SYLLABLE QWA +124A..124D ; ID_Start # Lo [4] ETHIOPIC SYLLABLE QWI..ETHIOPIC SYLLABLE QWE +1250..1256 ; ID_Start # Lo [7] ETHIOPIC SYLLABLE QHA..ETHIOPIC SYLLABLE QHO +1258 ; ID_Start # Lo ETHIOPIC SYLLABLE QHWA +125A..125D ; ID_Start # Lo [4] ETHIOPIC SYLLABLE QHWI..ETHIOPIC SYLLABLE QHWE +1260..1288 ; ID_Start # Lo [41] ETHIOPIC SYLLABLE BA..ETHIOPIC SYLLABLE XWA +128A..128D ; ID_Start # Lo [4] ETHIOPIC SYLLABLE XWI..ETHIOPIC SYLLABLE XWE +1290..12B0 ; ID_Start # Lo [33] ETHIOPIC SYLLABLE NA..ETHIOPIC SYLLABLE KWA +12B2..12B5 ; ID_Start # Lo [4] ETHIOPIC SYLLABLE KWI..ETHIOPIC SYLLABLE KWE +12B8..12BE ; ID_Start # Lo [7] ETHIOPIC SYLLABLE KXA..ETHIOPIC SYLLABLE KXO +12C0 ; ID_Start # Lo ETHIOPIC SYLLABLE KXWA +12C2..12C5 ; ID_Start # Lo [4] ETHIOPIC SYLLABLE KXWI..ETHIOPIC SYLLABLE KXWE +12C8..12D6 ; ID_Start # Lo [15] ETHIOPIC SYLLABLE WA..ETHIOPIC SYLLABLE PHARYNGEAL O +12D8..1310 ; ID_Start # Lo 
[57] ETHIOPIC SYLLABLE ZA..ETHIOPIC SYLLABLE GWA +1312..1315 ; ID_Start # Lo [4] ETHIOPIC SYLLABLE GWI..ETHIOPIC SYLLABLE GWE +1318..135A ; ID_Start # Lo [67] ETHIOPIC SYLLABLE GGA..ETHIOPIC SYLLABLE FYA +1380..138F ; ID_Start # Lo [16] ETHIOPIC SYLLABLE SEBATBEIT MWA..ETHIOPIC SYLLABLE PWE +13A0..13F4 ; ID_Start # Lo [85] CHEROKEE LETTER A..CHEROKEE LETTER YV +1401..166C ; ID_Start # Lo [620] CANADIAN SYLLABICS E..CANADIAN SYLLABICS CARRIER TTSA +166F..167F ; ID_Start # Lo [17] CANADIAN SYLLABICS QAI..CANADIAN SYLLABICS BLACKFOOT W +1681..169A ; ID_Start # Lo [26] OGHAM LETTER BEITH..OGHAM LETTER PEITH +16A0..16EA ; ID_Start # Lo [75] RUNIC LETTER FEHU FEOH FE F..RUNIC LETTER X +16EE..16F0 ; ID_Start # Nl [3] RUNIC ARLAUG SYMBOL..RUNIC BELGTHOR SYMBOL +1700..170C ; ID_Start # Lo [13] TAGALOG LETTER A..TAGALOG LETTER YA +170E..1711 ; ID_Start # Lo [4] TAGALOG LETTER LA..TAGALOG LETTER HA +1720..1731 ; ID_Start # Lo [18] HANUNOO LETTER A..HANUNOO LETTER HA +1740..1751 ; ID_Start # Lo [18] BUHID LETTER A..BUHID LETTER HA +1760..176C ; ID_Start # Lo [13] TAGBANWA LETTER A..TAGBANWA LETTER YA +176E..1770 ; ID_Start # Lo [3] TAGBANWA LETTER LA..TAGBANWA LETTER SA +1780..17B3 ; ID_Start # Lo [52] KHMER LETTER KA..KHMER INDEPENDENT VOWEL QAU +17D7 ; ID_Start # Lm KHMER SIGN LEK TOO +17DC ; ID_Start # Lo KHMER SIGN AVAKRAHASANYA +1820..1842 ; ID_Start # Lo [35] MONGOLIAN LETTER A..MONGOLIAN LETTER CHI +1843 ; ID_Start # Lm MONGOLIAN LETTER TODO LONG VOWEL SIGN +1844..1877 ; ID_Start # Lo [52] MONGOLIAN LETTER TODO E..MONGOLIAN LETTER MANCHU ZHA +1880..18A8 ; ID_Start # Lo [41] MONGOLIAN LETTER ALI GALI ANUSVARA ONE..MONGOLIAN LETTER MANCHU ALI GALI BHA +18AA ; ID_Start # Lo MONGOLIAN LETTER MANCHU ALI GALI LHA +18B0..18F5 ; ID_Start # Lo [70] CANADIAN SYLLABICS OY..CANADIAN SYLLABICS CARRIER DENTAL S +1900..191C ; ID_Start # Lo [29] LIMBU VOWEL-CARRIER LETTER..LIMBU LETTER HA +1950..196D ; ID_Start # Lo [30] TAI LE LETTER KA..TAI LE LETTER AI +1970..1974 ; ID_Start # Lo 
[5] TAI LE LETTER TONE-2..TAI LE LETTER TONE-6 +1980..19AB ; ID_Start # Lo [44] NEW TAI LUE LETTER HIGH QA..NEW TAI LUE LETTER LOW SUA +19C1..19C7 ; ID_Start # Lo [7] NEW TAI LUE LETTER FINAL V..NEW TAI LUE LETTER FINAL B +1A00..1A16 ; ID_Start # Lo [23] BUGINESE LETTER KA..BUGINESE LETTER HA +1A20..1A54 ; ID_Start # Lo [53] TAI THAM LETTER HIGH KA..TAI THAM LETTER GREAT SA +1AA7 ; ID_Start # Lm TAI THAM SIGN MAI YAMOK +1B05..1B33 ; ID_Start # Lo [47] BALINESE LETTER AKARA..BALINESE LETTER HA +1B45..1B4B ; ID_Start # Lo [7] BALINESE LETTER KAF SASAK..BALINESE LETTER ASYURA SASAK +1B83..1BA0 ; ID_Start # Lo [30] SUNDANESE LETTER A..SUNDANESE LETTER HA +1BAE..1BAF ; ID_Start # Lo [2] SUNDANESE LETTER KHA..SUNDANESE LETTER SYA +1C00..1C23 ; ID_Start # Lo [36] LEPCHA LETTER KA..LEPCHA LETTER A +1C4D..1C4F ; ID_Start # Lo [3] LEPCHA LETTER TTA..LEPCHA LETTER DDA +1C5A..1C77 ; ID_Start # Lo [30] OL CHIKI LETTER LA..OL CHIKI LETTER OH +1C78..1C7D ; ID_Start # Lm [6] OL CHIKI MU TTUDDAG..OL CHIKI AHAD +1CE9..1CEC ; ID_Start # Lo [4] VEDIC SIGN ANUSVARA ANTARGOMUKHA..VEDIC SIGN ANUSVARA VAMAGOMUKHA WITH TAIL +1CEE..1CF1 ; ID_Start # Lo [4] VEDIC SIGN HEXIFORM LONG ANUSVARA..VEDIC SIGN ANUSVARA UBHAYATO MUKHA +1D00..1D2B ; ID_Start # L& [44] LATIN LETTER SMALL CAPITAL A..CYRILLIC LETTER SMALL CAPITAL EL +1D2C..1D61 ; ID_Start # Lm [54] MODIFIER LETTER CAPITAL A..MODIFIER LETTER SMALL CHI +1D62..1D77 ; ID_Start # L& [22] LATIN SUBSCRIPT SMALL LETTER I..LATIN SMALL LETTER TURNED G +1D78 ; ID_Start # Lm MODIFIER LETTER CYRILLIC EN +1D79..1D9A ; ID_Start # L& [34] LATIN SMALL LETTER INSULAR G..LATIN SMALL LETTER EZH WITH RETROFLEX HOOK +1D9B..1DBF ; ID_Start # Lm [37] MODIFIER LETTER SMALL TURNED ALPHA..MODIFIER LETTER SMALL THETA +1E00..1F15 ; ID_Start # L& [278] LATIN CAPITAL LETTER A WITH RING BELOW..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA +1F18..1F1D ; ID_Start # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA 
+1F20..1F45 ; ID_Start # L& [38] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA +1F48..1F4D ; ID_Start # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F50..1F57 ; ID_Start # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F59 ; ID_Start # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; ID_Start # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; ID_Start # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F..1F7D ; ID_Start # L& [31] GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI..GREEK SMALL LETTER OMEGA WITH OXIA +1F80..1FB4 ; ID_Start # L& [53] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB6..1FBC ; ID_Start # L& [7] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI +1FBE ; ID_Start # L& GREEK PROSGEGRAMMENI +1FC2..1FC4 ; ID_Start # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC6..1FCC ; ID_Start # L& [7] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK CAPITAL LETTER ETA WITH PROSGEGRAMMENI +1FD0..1FD3 ; ID_Start # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA +1FD6..1FDB ; ID_Start # L& [6] GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK CAPITAL LETTER IOTA WITH OXIA +1FE0..1FEC ; ID_Start # L& [13] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FF2..1FF4 ; ID_Start # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF6..1FFC ; ID_Start # L& [7] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI +2071 ; ID_Start # Lm SUPERSCRIPT LATIN SMALL LETTER I +207F ; ID_Start # Lm SUPERSCRIPT LATIN SMALL LETTER N +2090..2094 ; ID_Start # Lm [5] LATIN 
SUBSCRIPT SMALL LETTER A..LATIN SUBSCRIPT SMALL LETTER SCHWA +2102 ; ID_Start # L& DOUBLE-STRUCK CAPITAL C +2107 ; ID_Start # L& EULER CONSTANT +210A..2113 ; ID_Start # L& [10] SCRIPT SMALL G..SCRIPT SMALL L +2115 ; ID_Start # L& DOUBLE-STRUCK CAPITAL N +2118 ; ID_Start # So SCRIPT CAPITAL P +2119..211D ; ID_Start # L& [5] DOUBLE-STRUCK CAPITAL P..DOUBLE-STRUCK CAPITAL R +2124 ; ID_Start # L& DOUBLE-STRUCK CAPITAL Z +2126 ; ID_Start # L& OHM SIGN +2128 ; ID_Start # L& BLACK-LETTER CAPITAL Z +212A..212D ; ID_Start # L& [4] KELVIN SIGN..BLACK-LETTER CAPITAL C +212E ; ID_Start # So ESTIMATED SYMBOL +212F..2134 ; ID_Start # L& [6] SCRIPT SMALL E..SCRIPT SMALL O +2135..2138 ; ID_Start # Lo [4] ALEF SYMBOL..DALET SYMBOL +2139 ; ID_Start # L& INFORMATION SOURCE +213C..213F ; ID_Start # L& [4] DOUBLE-STRUCK SMALL PI..DOUBLE-STRUCK CAPITAL PI +2145..2149 ; ID_Start # L& [5] DOUBLE-STRUCK ITALIC CAPITAL D..DOUBLE-STRUCK ITALIC SMALL J +214E ; ID_Start # L& TURNED SMALL F +2160..2182 ; ID_Start # Nl [35] ROMAN NUMERAL ONE..ROMAN NUMERAL TEN THOUSAND +2183..2184 ; ID_Start # L& [2] ROMAN NUMERAL REVERSED ONE HUNDRED..LATIN SMALL LETTER REVERSED C +2185..2188 ; ID_Start # Nl [4] ROMAN NUMERAL SIX LATE FORM..ROMAN NUMERAL ONE HUNDRED THOUSAND +2C00..2C2E ; ID_Start # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC CAPITAL LETTER LATINATE MYSLITE +2C30..2C5E ; ID_Start # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE +2C60..2C7C ; ID_Start # L& [29] LATIN CAPITAL LETTER L WITH DOUBLE BAR..LATIN SUBSCRIPT SMALL LETTER J +2C7D ; ID_Start # Lm MODIFIER LETTER CAPITAL V +2C7E..2CE4 ; ID_Start # L& [103] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC SYMBOL KAI +2CEB..2CEE ; ID_Start # L& [4] COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI..COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA +2D00..2D25 ; ID_Start # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE +2D30..2D65 ; ID_Start # Lo [54] TIFINAGH LETTER YA..TIFINAGH LETTER YAZZ +2D6F ; ID_Start # Lm 
TIFINAGH MODIFIER LETTER LABIALIZATION MARK +2D80..2D96 ; ID_Start # Lo [23] ETHIOPIC SYLLABLE LOA..ETHIOPIC SYLLABLE GGWE +2DA0..2DA6 ; ID_Start # Lo [7] ETHIOPIC SYLLABLE SSA..ETHIOPIC SYLLABLE SSO +2DA8..2DAE ; ID_Start # Lo [7] ETHIOPIC SYLLABLE CCA..ETHIOPIC SYLLABLE CCO +2DB0..2DB6 ; ID_Start # Lo [7] ETHIOPIC SYLLABLE ZZA..ETHIOPIC SYLLABLE ZZO +2DB8..2DBE ; ID_Start # Lo [7] ETHIOPIC SYLLABLE CCHA..ETHIOPIC SYLLABLE CCHO +2DC0..2DC6 ; ID_Start # Lo [7] ETHIOPIC SYLLABLE QYA..ETHIOPIC SYLLABLE QYO +2DC8..2DCE ; ID_Start # Lo [7] ETHIOPIC SYLLABLE KYA..ETHIOPIC SYLLABLE KYO +2DD0..2DD6 ; ID_Start # Lo [7] ETHIOPIC SYLLABLE XYA..ETHIOPIC SYLLABLE XYO +2DD8..2DDE ; ID_Start # Lo [7] ETHIOPIC SYLLABLE GYA..ETHIOPIC SYLLABLE GYO +3005 ; ID_Start # Lm IDEOGRAPHIC ITERATION MARK +3006 ; ID_Start # Lo IDEOGRAPHIC CLOSING MARK +3007 ; ID_Start # Nl IDEOGRAPHIC NUMBER ZERO +3021..3029 ; ID_Start # Nl [9] HANGZHOU NUMERAL ONE..HANGZHOU NUMERAL NINE +3031..3035 ; ID_Start # Lm [5] VERTICAL KANA REPEAT MARK..VERTICAL KANA REPEAT MARK LOWER HALF +3038..303A ; ID_Start # Nl [3] HANGZHOU NUMERAL TEN..HANGZHOU NUMERAL THIRTY +303B ; ID_Start # Lm VERTICAL IDEOGRAPHIC ITERATION MARK +303C ; ID_Start # Lo MASU MARK +3041..3096 ; ID_Start # Lo [86] HIRAGANA LETTER SMALL A..HIRAGANA LETTER SMALL KE +309B..309C ; ID_Start # Sk [2] KATAKANA-HIRAGANA VOICED SOUND MARK..KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK +309D..309E ; ID_Start # Lm [2] HIRAGANA ITERATION MARK..HIRAGANA VOICED ITERATION MARK +309F ; ID_Start # Lo HIRAGANA DIGRAPH YORI +30A1..30FA ; ID_Start # Lo [90] KATAKANA LETTER SMALL A..KATAKANA LETTER VO +30FC..30FE ; ID_Start # Lm [3] KATAKANA-HIRAGANA PROLONGED SOUND MARK..KATAKANA VOICED ITERATION MARK +30FF ; ID_Start # Lo KATAKANA DIGRAPH KOTO +3105..312D ; ID_Start # Lo [41] BOPOMOFO LETTER B..BOPOMOFO LETTER IH +3131..318E ; ID_Start # Lo [94] HANGUL LETTER KIYEOK..HANGUL LETTER ARAEAE +31A0..31B7 ; ID_Start # Lo [24] BOPOMOFO LETTER BU..BOPOMOFO FINAL LETTER H 
+31F0..31FF ; ID_Start # Lo [16] KATAKANA LETTER SMALL KU..KATAKANA LETTER SMALL RO +3400..4DB5 ; ID_Start # Lo [6582] CJK UNIFIED IDEOGRAPH-3400..CJK UNIFIED IDEOGRAPH-4DB5 +4E00..9FCB ; ID_Start # Lo [20940] CJK UNIFIED IDEOGRAPH-4E00..CJK UNIFIED IDEOGRAPH-9FCB +A000..A014 ; ID_Start # Lo [21] YI SYLLABLE IT..YI SYLLABLE E +A015 ; ID_Start # Lm YI SYLLABLE WU +A016..A48C ; ID_Start # Lo [1143] YI SYLLABLE BIT..YI SYLLABLE YYR +A4D0..A4F7 ; ID_Start # Lo [40] LISU LETTER BA..LISU LETTER OE +A4F8..A4FD ; ID_Start # Lm [6] LISU LETTER TONE MYA TI..LISU LETTER TONE MYA JEU +A500..A60B ; ID_Start # Lo [268] VAI SYLLABLE EE..VAI SYLLABLE NG +A60C ; ID_Start # Lm VAI SYLLABLE LENGTHENER +A610..A61F ; ID_Start # Lo [16] VAI SYLLABLE NDOLE FA..VAI SYMBOL JONG +A62A..A62B ; ID_Start # Lo [2] VAI SYLLABLE NDOLE MA..VAI SYLLABLE NDOLE DO +A640..A65F ; ID_Start # L& [32] CYRILLIC CAPITAL LETTER ZEMLYA..CYRILLIC SMALL LETTER YN +A662..A66D ; ID_Start # L& [12] CYRILLIC CAPITAL LETTER SOFT DE..CYRILLIC SMALL LETTER DOUBLE MONOCULAR O +A66E ; ID_Start # Lo CYRILLIC LETTER MULTIOCULAR O +A67F ; ID_Start # Lm CYRILLIC PAYEROK +A680..A697 ; ID_Start # L& [24] CYRILLIC CAPITAL LETTER DWE..CYRILLIC SMALL LETTER SHWE +A6A0..A6E5 ; ID_Start # Lo [70] BAMUM LETTER A..BAMUM LETTER KI +A6E6..A6EF ; ID_Start # Nl [10] BAMUM LETTER MO..BAMUM LETTER KOGHOM +A717..A71F ; ID_Start # Lm [9] MODIFIER LETTER DOT VERTICAL BAR..MODIFIER LETTER LOW INVERTED EXCLAMATION MARK +A722..A76F ; ID_Start # L& [78] LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF..LATIN SMALL LETTER CON +A770 ; ID_Start # Lm MODIFIER LETTER US +A771..A787 ; ID_Start # L& [23] LATIN SMALL LETTER DUM..LATIN SMALL LETTER INSULAR T +A788 ; ID_Start # Lm MODIFIER LETTER LOW CIRCUMFLEX ACCENT +A78B..A78C ; ID_Start # L& [2] LATIN CAPITAL LETTER SALTILLO..LATIN SMALL LETTER SALTILLO +A7FB..A801 ; ID_Start # Lo [7] LATIN EPIGRAPHIC LETTER REVERSED F..SYLOTI NAGRI LETTER I +A803..A805 ; ID_Start # Lo [3] SYLOTI NAGRI LETTER U..SYLOTI NAGRI 
LETTER O
+A807..A80A ; ID_Start # Lo [4] SYLOTI NAGRI LETTER KO..SYLOTI NAGRI LETTER GHO
+A80C..A822 ; ID_Start # Lo [23] SYLOTI NAGRI LETTER CO..SYLOTI NAGRI LETTER HO
+A840..A873 ; ID_Start # Lo [52] PHAGS-PA LETTER KA..PHAGS-PA LETTER CANDRABINDU
+A882..A8B3 ; ID_Start # Lo [50] SAURASHTRA LETTER A..SAURASHTRA LETTER LLA
+A8F2..A8F7 ; ID_Start # Lo [6] DEVANAGARI SIGN SPACING CANDRABINDU..DEVANAGARI SIGN CANDRABINDU AVAGRAHA
+A8FB ; ID_Start # Lo DEVANAGARI HEADSTROKE
+A90A..A925 ; ID_Start # Lo [28] KAYAH LI LETTER KA..KAYAH LI LETTER OO
+A930..A946 ; ID_Start # Lo [23] REJANG LETTER KA..REJANG LETTER A
+A960..A97C ; ID_Start # Lo [29] HANGUL CHOSEONG TIKEUT-MIEUM..HANGUL CHOSEONG SSANGYEORINHIEUH
+A984..A9B2 ; ID_Start # Lo [47] JAVANESE LETTER A..JAVANESE LETTER HA
+A9CF ; ID_Start # Lm JAVANESE PANGRANGKEP
+AA00..AA28 ; ID_Start # Lo [41] CHAM LETTER A..CHAM LETTER HA
+AA40..AA42 ; ID_Start # Lo [3] CHAM LETTER FINAL K..CHAM LETTER FINAL NG
+AA44..AA4B ; ID_Start # Lo [8] CHAM LETTER FINAL CH..CHAM LETTER FINAL SS
+AA60..AA6F ; ID_Start # Lo [16] MYANMAR LETTER KHAMTI GA..MYANMAR LETTER KHAMTI FA
+AA70 ; ID_Start # Lm MYANMAR MODIFIER LETTER KHAMTI REDUPLICATION
+AA71..AA76 ; ID_Start # Lo [6] MYANMAR LETTER KHAMTI XA..MYANMAR LOGOGRAM KHAMTI HM
+AA7A ; ID_Start # Lo MYANMAR LETTER AITON RA
+AA80..AAAF ; ID_Start # Lo [48] TAI VIET LETTER LOW KO..TAI VIET LETTER HIGH O
+AAB1 ; ID_Start # Lo TAI VIET VOWEL AA
+AAB5..AAB6 ; ID_Start # Lo [2] TAI VIET VOWEL E..TAI VIET VOWEL O
+AAB9..AABD ; ID_Start # Lo [5] TAI VIET VOWEL UEA..TAI VIET VOWEL AN
+AAC0 ; ID_Start # Lo TAI VIET TONE MAI NUENG
+AAC2 ; ID_Start # Lo TAI VIET TONE MAI SONG
+AADB..AADC ; ID_Start # Lo [2] TAI VIET SYMBOL KON..TAI VIET SYMBOL NUENG
+AADD ; ID_Start # Lm TAI VIET SYMBOL SAM
+ABC0..ABE2 ; ID_Start # Lo [35] MEETEI MAYEK LETTER KOK..MEETEI MAYEK LETTER I LONSUM
+AC00..D7A3 ; ID_Start # Lo [11172] HANGUL SYLLABLE GA..HANGUL SYLLABLE HIH
+D7B0..D7C6 ; ID_Start # Lo [23] HANGUL JUNGSEONG O-YEO..HANGUL JUNGSEONG ARAEA-E
+D7CB..D7FB ; ID_Start # Lo [49] HANGUL JONGSEONG NIEUN-RIEUL..HANGUL JONGSEONG PHIEUPH-THIEUTH
+F900..FA2D ; ID_Start # Lo [302] CJK COMPATIBILITY IDEOGRAPH-F900..CJK COMPATIBILITY IDEOGRAPH-FA2D
+FA30..FA6D ; ID_Start # Lo [62] CJK COMPATIBILITY IDEOGRAPH-FA30..CJK COMPATIBILITY IDEOGRAPH-FA6D
+FA70..FAD9 ; ID_Start # Lo [106] CJK COMPATIBILITY IDEOGRAPH-FA70..CJK COMPATIBILITY IDEOGRAPH-FAD9
+FB00..FB06 ; ID_Start # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST
+FB13..FB17 ; ID_Start # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH
+FB1D ; ID_Start # Lo HEBREW LETTER YOD WITH HIRIQ
+FB1F..FB28 ; ID_Start # Lo [10] HEBREW LIGATURE YIDDISH YOD YOD PATAH..HEBREW LETTER WIDE TAV
+FB2A..FB36 ; ID_Start # Lo [13] HEBREW LETTER SHIN WITH SHIN DOT..HEBREW LETTER ZAYIN WITH DAGESH
+FB38..FB3C ; ID_Start # Lo [5] HEBREW LETTER TET WITH DAGESH..HEBREW LETTER LAMED WITH DAGESH
+FB3E ; ID_Start # Lo HEBREW LETTER MEM WITH DAGESH
+FB40..FB41 ; ID_Start # Lo [2] HEBREW LETTER NUN WITH DAGESH..HEBREW LETTER SAMEKH WITH DAGESH
+FB43..FB44 ; ID_Start # Lo [2] HEBREW LETTER FINAL PE WITH DAGESH..HEBREW LETTER PE WITH DAGESH
+FB46..FBB1 ; ID_Start # Lo [108] HEBREW LETTER TSADI WITH DAGESH..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE FINAL FORM
+FBD3..FD3D ; ID_Start # Lo [363] ARABIC LETTER NG ISOLATED FORM..ARABIC LIGATURE ALEF WITH FATHATAN ISOLATED FORM
+FD50..FD8F ; ID_Start # Lo [64] ARABIC LIGATURE TEH WITH JEEM WITH MEEM INITIAL FORM..ARABIC LIGATURE MEEM WITH KHAH WITH MEEM INITIAL FORM
+FD92..FDC7 ; ID_Start # Lo [54] ARABIC LIGATURE MEEM WITH JEEM WITH KHAH INITIAL FORM..ARABIC LIGATURE NOON WITH JEEM WITH YEH FINAL FORM
+FDF0..FDFB ; ID_Start # Lo [12] ARABIC LIGATURE SALLA USED AS KORANIC STOP SIGN ISOLATED FORM..ARABIC LIGATURE JALLAJALALOUHOU
+FE70..FE74 ; ID_Start # Lo [5] ARABIC FATHATAN ISOLATED FORM..ARABIC KASRATAN ISOLATED FORM
+FE76..FEFC ; ID_Start # Lo [135] ARABIC FATHA ISOLATED FORM..ARABIC LIGATURE LAM WITH ALEF FINAL FORM
+FF21..FF3A ; ID_Start # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN CAPITAL LETTER Z
+FF41..FF5A ; ID_Start # L& [26] FULLWIDTH LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z
+FF66..FF6F ; ID_Start # Lo [10] HALFWIDTH KATAKANA LETTER WO..HALFWIDTH KATAKANA LETTER SMALL TU
+FF70 ; ID_Start # Lm HALFWIDTH KATAKANA-HIRAGANA PROLONGED SOUND MARK
+FF71..FF9D ; ID_Start # Lo [45] HALFWIDTH KATAKANA LETTER A..HALFWIDTH KATAKANA LETTER N
+FF9E..FF9F ; ID_Start # Lm [2] HALFWIDTH KATAKANA VOICED SOUND MARK..HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK
+FFA0..FFBE ; ID_Start # Lo [31] HALFWIDTH HANGUL FILLER..HALFWIDTH HANGUL LETTER HIEUH
+FFC2..FFC7 ; ID_Start # Lo [6] HALFWIDTH HANGUL LETTER A..HALFWIDTH HANGUL LETTER E
+FFCA..FFCF ; ID_Start # Lo [6] HALFWIDTH HANGUL LETTER YEO..HALFWIDTH HANGUL LETTER OE
+FFD2..FFD7 ; ID_Start # Lo [6] HALFWIDTH HANGUL LETTER YO..HALFWIDTH HANGUL LETTER YU
+FFDA..FFDC ; ID_Start # Lo [3] HALFWIDTH HANGUL LETTER EU..HALFWIDTH HANGUL LETTER I
+10000..1000B ; ID_Start # Lo [12] LINEAR B SYLLABLE B008 A..LINEAR B SYLLABLE B046 JE
+1000D..10026 ; ID_Start # Lo [26] LINEAR B SYLLABLE B036 JO..LINEAR B SYLLABLE B032 QO
+10028..1003A ; ID_Start # Lo [19] LINEAR B SYLLABLE B060 RA..LINEAR B SYLLABLE B042 WO
+1003C..1003D ; ID_Start # Lo [2] LINEAR B SYLLABLE B017 ZA..LINEAR B SYLLABLE B074 ZE
+1003F..1004D ; ID_Start # Lo [15] LINEAR B SYLLABLE B020 ZO..LINEAR B SYLLABLE B091 TWO
+10050..1005D ; ID_Start # Lo [14] LINEAR B SYMBOL B018..LINEAR B SYMBOL B089
+10080..100FA ; ID_Start # Lo [123] LINEAR B IDEOGRAM B100 MAN..LINEAR B IDEOGRAM VESSEL B305
+10140..10174 ; ID_Start # Nl [53] GREEK ACROPHONIC ATTIC ONE QUARTER..GREEK ACROPHONIC STRATIAN FIFTY MNAS
+10280..1029C ; ID_Start # Lo [29] LYCIAN LETTER A..LYCIAN LETTER X
+102A0..102D0 ; ID_Start # Lo [49] CARIAN LETTER A..CARIAN LETTER UUU3
+10300..1031E ; ID_Start # Lo [31] OLD ITALIC LETTER A..OLD ITALIC LETTER UU
+10330..10340 ; ID_Start # Lo [17] GOTHIC LETTER AHSA..GOTHIC LETTER PAIRTHRA
+10341 ; ID_Start # Nl GOTHIC LETTER NINETY
+10342..10349 ; ID_Start # Lo [8] GOTHIC LETTER RAIDA..GOTHIC LETTER OTHAL
+1034A ; ID_Start # Nl GOTHIC LETTER NINE HUNDRED
+10380..1039D ; ID_Start # Lo [30] UGARITIC LETTER ALPA..UGARITIC LETTER SSU
+103A0..103C3 ; ID_Start # Lo [36] OLD PERSIAN SIGN A..OLD PERSIAN SIGN HA
+103C8..103CF ; ID_Start # Lo [8] OLD PERSIAN SIGN AURAMAZDAA..OLD PERSIAN SIGN BUUMISH
+103D1..103D5 ; ID_Start # Nl [5] OLD PERSIAN NUMBER ONE..OLD PERSIAN NUMBER HUNDRED
+10400..1044F ; ID_Start # L& [80] DESERET CAPITAL LETTER LONG I..DESERET SMALL LETTER EW
+10450..1049D ; ID_Start # Lo [78] SHAVIAN LETTER PEEP..OSMANYA LETTER OO
+10800..10805 ; ID_Start # Lo [6] CYPRIOT SYLLABLE A..CYPRIOT SYLLABLE JA
+10808 ; ID_Start # Lo CYPRIOT SYLLABLE JO
+1080A..10835 ; ID_Start # Lo [44] CYPRIOT SYLLABLE KA..CYPRIOT SYLLABLE WO
+10837..10838 ; ID_Start # Lo [2] CYPRIOT SYLLABLE XA..CYPRIOT SYLLABLE XE
+1083C ; ID_Start # Lo CYPRIOT SYLLABLE ZA
+1083F..10855 ; ID_Start # Lo [23] CYPRIOT SYLLABLE ZO..IMPERIAL ARAMAIC LETTER TAW
+10900..10915 ; ID_Start # Lo [22] PHOENICIAN LETTER ALF..PHOENICIAN LETTER TAU
+10920..10939 ; ID_Start # Lo [26] LYDIAN LETTER A..LYDIAN LETTER C
+10A00 ; ID_Start # Lo KHAROSHTHI LETTER A
+10A10..10A13 ; ID_Start # Lo [4] KHAROSHTHI LETTER KA..KHAROSHTHI LETTER GHA
+10A15..10A17 ; ID_Start # Lo [3] KHAROSHTHI LETTER CA..KHAROSHTHI LETTER JA
+10A19..10A33 ; ID_Start # Lo [27] KHAROSHTHI LETTER NYA..KHAROSHTHI LETTER TTTHA
+10A60..10A7C ; ID_Start # Lo [29] OLD SOUTH ARABIAN LETTER HE..OLD SOUTH ARABIAN LETTER THETH
+10B00..10B35 ; ID_Start # Lo [54] AVESTAN LETTER A..AVESTAN LETTER HE
+10B40..10B55 ; ID_Start # Lo [22] INSCRIPTIONAL PARTHIAN LETTER ALEPH..INSCRIPTIONAL PARTHIAN LETTER TAW
+10B60..10B72 ; ID_Start # Lo [19] INSCRIPTIONAL PAHLAVI LETTER ALEPH..INSCRIPTIONAL PAHLAVI LETTER TAW
+10C00..10C48 ; ID_Start # Lo [73] OLD TURKIC LETTER ORKHON A..OLD TURKIC LETTER ORKHON BASH
+11083..110AF ; ID_Start # Lo [45] KAITHI LETTER A..KAITHI LETTER HA
+12000..1236E ; ID_Start # Lo [879] CUNEIFORM SIGN A..CUNEIFORM SIGN ZUM
+12400..12462 ; ID_Start # Nl [99] CUNEIFORM NUMERIC SIGN TWO ASH..CUNEIFORM NUMERIC SIGN OLD ASSYRIAN ONE QUARTER
+13000..1342E ; ID_Start # Lo [1071] EGYPTIAN HIEROGLYPH A001..EGYPTIAN HIEROGLYPH AA032
+1D400..1D454 ; ID_Start # L& [85] MATHEMATICAL BOLD CAPITAL A..MATHEMATICAL ITALIC SMALL G
+1D456..1D49C ; ID_Start # L& [71] MATHEMATICAL ITALIC SMALL I..MATHEMATICAL SCRIPT CAPITAL A
+1D49E..1D49F ; ID_Start # L& [2] MATHEMATICAL SCRIPT CAPITAL C..MATHEMATICAL SCRIPT CAPITAL D
+1D4A2 ; ID_Start # L& MATHEMATICAL SCRIPT CAPITAL G
+1D4A5..1D4A6 ; ID_Start # L& [2] MATHEMATICAL SCRIPT CAPITAL J..MATHEMATICAL SCRIPT CAPITAL K
+1D4A9..1D4AC ; ID_Start # L& [4] MATHEMATICAL SCRIPT CAPITAL N..MATHEMATICAL SCRIPT CAPITAL Q
+1D4AE..1D4B9 ; ID_Start # L& [12] MATHEMATICAL SCRIPT CAPITAL S..MATHEMATICAL SCRIPT SMALL D
+1D4BB ; ID_Start # L& MATHEMATICAL SCRIPT SMALL F
+1D4BD..1D4C3 ; ID_Start # L& [7] MATHEMATICAL SCRIPT SMALL H..MATHEMATICAL SCRIPT SMALL N
+1D4C5..1D505 ; ID_Start # L& [65] MATHEMATICAL SCRIPT SMALL P..MATHEMATICAL FRAKTUR CAPITAL B
+1D507..1D50A ; ID_Start # L& [4] MATHEMATICAL FRAKTUR CAPITAL D..MATHEMATICAL FRAKTUR CAPITAL G
+1D50D..1D514 ; ID_Start # L& [8] MATHEMATICAL FRAKTUR CAPITAL J..MATHEMATICAL FRAKTUR CAPITAL Q
+1D516..1D51C ; ID_Start # L& [7] MATHEMATICAL FRAKTUR CAPITAL S..MATHEMATICAL FRAKTUR CAPITAL Y
+1D51E..1D539 ; ID_Start # L& [28] MATHEMATICAL FRAKTUR SMALL A..MATHEMATICAL DOUBLE-STRUCK CAPITAL B
+1D53B..1D53E ; ID_Start # L& [4] MATHEMATICAL DOUBLE-STRUCK CAPITAL D..MATHEMATICAL DOUBLE-STRUCK CAPITAL G
+1D540..1D544 ; ID_Start # L& [5] MATHEMATICAL DOUBLE-STRUCK CAPITAL I..MATHEMATICAL DOUBLE-STRUCK CAPITAL M
+1D546 ; ID_Start # L& MATHEMATICAL DOUBLE-STRUCK CAPITAL O
+1D54A..1D550 ; ID_Start # L& [7] MATHEMATICAL DOUBLE-STRUCK CAPITAL S..MATHEMATICAL DOUBLE-STRUCK CAPITAL Y
+1D552..1D6A5 ; ID_Start # L& [340] MATHEMATICAL DOUBLE-STRUCK SMALL A..MATHEMATICAL ITALIC SMALL DOTLESS J
+1D6A8..1D6C0 ; ID_Start # L& [25] MATHEMATICAL BOLD CAPITAL ALPHA..MATHEMATICAL BOLD CAPITAL OMEGA
+1D6C2..1D6DA ; ID_Start # L& [25] MATHEMATICAL BOLD SMALL ALPHA..MATHEMATICAL BOLD SMALL OMEGA
+1D6DC..1D6FA ; ID_Start # L& [31] MATHEMATICAL BOLD EPSILON SYMBOL..MATHEMATICAL ITALIC CAPITAL OMEGA
+1D6FC..1D714 ; ID_Start # L& [25] MATHEMATICAL ITALIC SMALL ALPHA..MATHEMATICAL ITALIC SMALL OMEGA
+1D716..1D734 ; ID_Start # L& [31] MATHEMATICAL ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD ITALIC CAPITAL OMEGA
+1D736..1D74E ; ID_Start # L& [25] MATHEMATICAL BOLD ITALIC SMALL ALPHA..MATHEMATICAL BOLD ITALIC SMALL OMEGA
+1D750..1D76E ; ID_Start # L& [31] MATHEMATICAL BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD CAPITAL OMEGA
+1D770..1D788 ; ID_Start # L& [25] MATHEMATICAL SANS-SERIF BOLD SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD SMALL OMEGA
+1D78A..1D7A8 ; ID_Start # L& [31] MATHEMATICAL SANS-SERIF BOLD EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMEGA
+1D7AA..1D7C2 ; ID_Start # L& [25] MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL OMEGA
+1D7C4..1D7CB ; ID_Start # L& [8] MATHEMATICAL SANS-SERIF BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD SMALL DIGAMMA
+20000..2A6D6 ; ID_Start # Lo [42711] CJK UNIFIED IDEOGRAPH-20000..CJK UNIFIED IDEOGRAPH-2A6D6
+2A700..2B734 ; ID_Start # Lo [4149] CJK UNIFIED IDEOGRAPH-2A700..CJK UNIFIED IDEOGRAPH-2B734
+2F800..2FA1D ; ID_Start # Lo [542] CJK COMPATIBILITY IDEOGRAPH-2F800..CJK COMPATIBILITY IDEOGRAPH-2FA1D
+
+# Total code points: 99764
+
+# ================================================
+
+# Derived Property: ID_Continue
+# Characters that can continue an identifier.
+# Generated from:
+# ID_Start
+# + Mn + Mc + Nd + Pc
+# + Other_ID_Continue
+# - Pattern_Syntax
+# - Pattern_White_Space
+# NOTE: See UAX #31 for more information
+
+0030..0039 ; ID_Continue # Nd [10] DIGIT ZERO..DIGIT NINE
+0041..005A ; ID_Continue # L& [26] LATIN CAPITAL LETTER A..LATIN CAPITAL LETTER Z
+005F ; ID_Continue # Pc LOW LINE
+0061..007A ; ID_Continue # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z
+00AA ; ID_Continue # L& FEMININE ORDINAL INDICATOR
+00B5 ; ID_Continue # L& MICRO SIGN
+00B7 ; ID_Continue # Po MIDDLE DOT
+00BA ; ID_Continue # L& MASCULINE ORDINAL INDICATOR
+00C0..00D6 ; ID_Continue # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS
+00D8..00F6 ; ID_Continue # L& [31] LATIN CAPITAL LETTER O WITH STROKE..LATIN SMALL LETTER O WITH DIAERESIS
+00F8..01BA ; ID_Continue # L& [195] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER EZH WITH TAIL
+01BB ; ID_Continue # Lo LATIN LETTER TWO WITH STROKE
+01BC..01BF ; ID_Continue # L& [4] LATIN CAPITAL LETTER TONE FIVE..LATIN LETTER WYNN
+01C0..01C3 ; ID_Continue # Lo [4] LATIN LETTER DENTAL CLICK..LATIN LETTER RETROFLEX CLICK
+01C4..0293 ; ID_Continue # L& [208] LATIN CAPITAL LETTER DZ WITH CARON..LATIN SMALL LETTER EZH WITH CURL
+0294 ; ID_Continue # Lo LATIN LETTER GLOTTAL STOP
+0295..02AF ; ID_Continue # L& [27] LATIN LETTER PHARYNGEAL VOICED FRICATIVE..LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL
+02B0..02C1 ; ID_Continue # Lm [18] MODIFIER LETTER SMALL H..MODIFIER LETTER REVERSED GLOTTAL STOP
+02C6..02D1 ; ID_Continue # Lm [12] MODIFIER LETTER CIRCUMFLEX ACCENT..MODIFIER LETTER HALF TRIANGULAR COLON
+02E0..02E4 ; ID_Continue # Lm [5] MODIFIER LETTER SMALL GAMMA..MODIFIER LETTER SMALL REVERSED GLOTTAL STOP
+02EC ; ID_Continue # Lm MODIFIER LETTER VOICING
+02EE ; ID_Continue # Lm MODIFIER LETTER DOUBLE APOSTROPHE
+0300..036F ; ID_Continue # Mn [112] COMBINING GRAVE ACCENT..COMBINING LATIN SMALL LETTER X
+0370..0373 ; ID_Continue # L& [4] GREEK CAPITAL LETTER HETA..GREEK SMALL LETTER ARCHAIC SAMPI
+0374 ; ID_Continue # Lm GREEK NUMERAL SIGN
+0376..0377 ; ID_Continue # L& [2] GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA..GREEK SMALL LETTER PAMPHYLIAN DIGAMMA
+037A ; ID_Continue # Lm GREEK YPOGEGRAMMENI
+037B..037D ; ID_Continue # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL
+0386 ; ID_Continue # L& GREEK CAPITAL LETTER ALPHA WITH TONOS
+0387 ; ID_Continue # Po GREEK ANO TELEIA
+0388..038A ; ID_Continue # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS
+038C ; ID_Continue # L& GREEK CAPITAL LETTER OMICRON WITH TONOS
+038E..03A1 ; ID_Continue # L& [20] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER RHO
+03A3..03F5 ; ID_Continue # L& [83] GREEK CAPITAL LETTER SIGMA..GREEK LUNATE EPSILON SYMBOL
+03F7..0481 ; ID_Continue # L& [139] GREEK CAPITAL LETTER SHO..CYRILLIC SMALL LETTER KOPPA
+0483..0487 ; ID_Continue # Mn [5] COMBINING CYRILLIC TITLO..COMBINING CYRILLIC POKRYTIE
+048A..0525 ; ID_Continue # L& [156] CYRILLIC CAPITAL LETTER SHORT I WITH TAIL..CYRILLIC SMALL LETTER PE WITH DESCENDER
+0531..0556 ; ID_Continue # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH
+0559 ; ID_Continue # Lm ARMENIAN MODIFIER LETTER LEFT HALF RING
+0561..0587 ; ID_Continue # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN
+0591..05BD ; ID_Continue # Mn [45] HEBREW ACCENT ETNAHTA..HEBREW POINT METEG
+05BF ; ID_Continue # Mn HEBREW POINT RAFE
+05C1..05C2 ; ID_Continue # Mn [2] HEBREW POINT SHIN DOT..HEBREW POINT SIN DOT
+05C4..05C5 ; ID_Continue # Mn [2] HEBREW MARK UPPER DOT..HEBREW MARK LOWER DOT
+05C7 ; ID_Continue # Mn HEBREW POINT QAMATS QATAN
+05D0..05EA ; ID_Continue # Lo [27] HEBREW LETTER ALEF..HEBREW LETTER TAV
+05F0..05F2 ; ID_Continue # Lo [3] HEBREW LIGATURE YIDDISH DOUBLE VAV..HEBREW LIGATURE YIDDISH DOUBLE YOD
+0610..061A ; ID_Continue # Mn [11] ARABIC SIGN SALLALLAHOU ALAYHE WASSALLAM..ARABIC SMALL KASRA
+0621..063F ; ID_Continue # Lo [31] ARABIC LETTER HAMZA..ARABIC LETTER FARSI YEH WITH THREE DOTS ABOVE
+0640 ; ID_Continue # Lm ARABIC TATWEEL
+0641..064A ; ID_Continue # Lo [10] ARABIC LETTER FEH..ARABIC LETTER YEH
+064B..065E ; ID_Continue # Mn [20] ARABIC FATHATAN..ARABIC FATHA WITH TWO DOTS
+0660..0669 ; ID_Continue # Nd [10] ARABIC-INDIC DIGIT ZERO..ARABIC-INDIC DIGIT NINE
+066E..066F ; ID_Continue # Lo [2] ARABIC LETTER DOTLESS BEH..ARABIC LETTER DOTLESS QAF
+0670 ; ID_Continue # Mn ARABIC LETTER SUPERSCRIPT ALEF
+0671..06D3 ; ID_Continue # Lo [99] ARABIC LETTER ALEF WASLA..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE
+06D5 ; ID_Continue # Lo ARABIC LETTER AE
+06D6..06DC ; ID_Continue # Mn [7] ARABIC SMALL HIGH LIGATURE SAD WITH LAM WITH ALEF MAKSURA..ARABIC SMALL HIGH SEEN
+06DF..06E4 ; ID_Continue # Mn [6] ARABIC SMALL HIGH ROUNDED ZERO..ARABIC SMALL HIGH MADDA
+06E5..06E6 ; ID_Continue # Lm [2] ARABIC SMALL WAW..ARABIC SMALL YEH
+06E7..06E8 ; ID_Continue # Mn [2] ARABIC SMALL HIGH YEH..ARABIC SMALL HIGH NOON
+06EA..06ED ; ID_Continue # Mn [4] ARABIC EMPTY CENTRE LOW STOP..ARABIC SMALL LOW MEEM
+06EE..06EF ; ID_Continue # Lo [2] ARABIC LETTER DAL WITH INVERTED V..ARABIC LETTER REH WITH INVERTED V
+06F0..06F9 ; ID_Continue # Nd [10] EXTENDED ARABIC-INDIC DIGIT ZERO..EXTENDED ARABIC-INDIC DIGIT NINE
+06FA..06FC ; ID_Continue # Lo [3] ARABIC LETTER SHEEN WITH DOT BELOW..ARABIC LETTER GHAIN WITH DOT BELOW
+06FF ; ID_Continue # Lo ARABIC LETTER HEH WITH INVERTED V
+0710 ; ID_Continue # Lo SYRIAC LETTER ALAPH
+0711 ; ID_Continue # Mn SYRIAC LETTER SUPERSCRIPT ALAPH
+0712..072F ; ID_Continue # Lo [30] SYRIAC LETTER BETH..SYRIAC LETTER PERSIAN DHALATH
+0730..074A ; ID_Continue # Mn [27] SYRIAC PTHAHA ABOVE..SYRIAC BARREKH
+074D..07A5 ; ID_Continue # Lo [89] SYRIAC LETTER SOGDIAN ZHAIN..THAANA LETTER WAAVU
+07A6..07B0 ; ID_Continue # Mn [11] THAANA ABAFILI..THAANA SUKUN
+07B1 ; ID_Continue # Lo THAANA LETTER NAA
+07C0..07C9 ; ID_Continue # Nd [10] NKO DIGIT ZERO..NKO DIGIT NINE
+07CA..07EA ; ID_Continue # Lo [33] NKO LETTER A..NKO LETTER JONA RA
+07EB..07F3 ; ID_Continue # Mn [9] NKO COMBINING SHORT HIGH TONE..NKO COMBINING DOUBLE DOT ABOVE
+07F4..07F5 ; ID_Continue # Lm [2] NKO HIGH TONE APOSTROPHE..NKO LOW TONE APOSTROPHE
+07FA ; ID_Continue # Lm NKO LAJANYALAN
+0800..0815 ; ID_Continue # Lo [22] SAMARITAN LETTER ALAF..SAMARITAN LETTER TAAF
+0816..0819 ; ID_Continue # Mn [4] SAMARITAN MARK IN..SAMARITAN MARK DAGESH
+081A ; ID_Continue # Lm SAMARITAN MODIFIER LETTER EPENTHETIC YUT
+081B..0823 ; ID_Continue # Mn [9] SAMARITAN MARK EPENTHETIC YUT..SAMARITAN VOWEL SIGN A
+0824 ; ID_Continue # Lm SAMARITAN MODIFIER LETTER SHORT A
+0825..0827 ; ID_Continue # Mn [3] SAMARITAN VOWEL SIGN SHORT A..SAMARITAN VOWEL SIGN U
+0828 ; ID_Continue # Lm SAMARITAN MODIFIER LETTER I
+0829..082D ; ID_Continue # Mn [5] SAMARITAN VOWEL SIGN LONG I..SAMARITAN MARK NEQUDAA
+0900..0902 ; ID_Continue # Mn [3] DEVANAGARI SIGN INVERTED CANDRABINDU..DEVANAGARI SIGN ANUSVARA
+0903 ; ID_Continue # Mc DEVANAGARI SIGN VISARGA
+0904..0939 ; ID_Continue # Lo [54] DEVANAGARI LETTER SHORT A..DEVANAGARI LETTER HA
+093C ; ID_Continue # Mn DEVANAGARI SIGN NUKTA
+093D ; ID_Continue # Lo DEVANAGARI SIGN AVAGRAHA
+093E..0940 ; ID_Continue # Mc [3] DEVANAGARI VOWEL SIGN AA..DEVANAGARI VOWEL SIGN II
+0941..0948 ; ID_Continue # Mn [8] DEVANAGARI VOWEL SIGN U..DEVANAGARI VOWEL SIGN AI
+0949..094C ; ID_Continue # Mc [4] DEVANAGARI VOWEL SIGN CANDRA O..DEVANAGARI VOWEL SIGN AU
+094D ; ID_Continue # Mn DEVANAGARI SIGN VIRAMA
+094E ; ID_Continue # Mc DEVANAGARI VOWEL SIGN PRISHTHAMATRA E
+0950 ; ID_Continue # Lo DEVANAGARI OM
+0951..0955 ; ID_Continue # Mn [5] DEVANAGARI STRESS SIGN UDATTA..DEVANAGARI VOWEL SIGN CANDRA LONG E
+0958..0961 ; ID_Continue # Lo [10] DEVANAGARI LETTER QA..DEVANAGARI LETTER VOCALIC LL
+0962..0963 ; ID_Continue # Mn [2] DEVANAGARI VOWEL SIGN VOCALIC L..DEVANAGARI VOWEL SIGN VOCALIC LL
+0966..096F ; ID_Continue # Nd [10] DEVANAGARI DIGIT ZERO..DEVANAGARI DIGIT NINE
+0971 ; ID_Continue # Lm DEVANAGARI SIGN HIGH SPACING DOT
+0972 ; ID_Continue # Lo DEVANAGARI LETTER CANDRA A
+0979..097F ; ID_Continue # Lo [7] DEVANAGARI LETTER ZHA..DEVANAGARI LETTER BBA
+0981 ; ID_Continue # Mn BENGALI SIGN CANDRABINDU
+0982..0983 ; ID_Continue # Mc [2] BENGALI SIGN ANUSVARA..BENGALI SIGN VISARGA
+0985..098C ; ID_Continue # Lo [8] BENGALI LETTER A..BENGALI LETTER VOCALIC L
+098F..0990 ; ID_Continue # Lo [2] BENGALI LETTER E..BENGALI LETTER AI
+0993..09A8 ; ID_Continue # Lo [22] BENGALI LETTER O..BENGALI LETTER NA
+09AA..09B0 ; ID_Continue # Lo [7] BENGALI LETTER PA..BENGALI LETTER RA
+09B2 ; ID_Continue # Lo BENGALI LETTER LA
+09B6..09B9 ; ID_Continue # Lo [4] BENGALI LETTER SHA..BENGALI LETTER HA
+09BC ; ID_Continue # Mn BENGALI SIGN NUKTA
+09BD ; ID_Continue # Lo BENGALI SIGN AVAGRAHA
+09BE..09C0 ; ID_Continue # Mc [3] BENGALI VOWEL SIGN AA..BENGALI VOWEL SIGN II
+09C1..09C4 ; ID_Continue # Mn [4] BENGALI VOWEL SIGN U..BENGALI VOWEL SIGN VOCALIC RR
+09C7..09C8 ; ID_Continue # Mc [2] BENGALI VOWEL SIGN E..BENGALI VOWEL SIGN AI
+09CB..09CC ; ID_Continue # Mc [2] BENGALI VOWEL SIGN O..BENGALI VOWEL SIGN AU
+09CD ; ID_Continue # Mn BENGALI SIGN VIRAMA
+09CE ; ID_Continue # Lo BENGALI LETTER KHANDA TA
+09D7 ; ID_Continue # Mc BENGALI AU LENGTH MARK
+09DC..09DD ; ID_Continue # Lo [2] BENGALI LETTER RRA..BENGALI LETTER RHA
+09DF..09E1 ; ID_Continue # Lo [3] BENGALI LETTER YYA..BENGALI LETTER VOCALIC LL
+09E2..09E3 ; ID_Continue # Mn [2] BENGALI VOWEL SIGN VOCALIC L..BENGALI VOWEL SIGN VOCALIC LL
+09E6..09EF ; ID_Continue # Nd [10] BENGALI DIGIT ZERO..BENGALI DIGIT NINE
+09F0..09F1 ; ID_Continue # Lo [2] BENGALI LETTER RA WITH MIDDLE DIAGONAL..BENGALI LETTER RA WITH LOWER DIAGONAL
+0A01..0A02 ; ID_Continue # Mn [2] GURMUKHI SIGN ADAK BINDI..GURMUKHI SIGN BINDI
+0A03 ; ID_Continue # Mc GURMUKHI SIGN VISARGA
+0A05..0A0A ; ID_Continue # Lo [6] GURMUKHI LETTER A..GURMUKHI LETTER UU
+0A0F..0A10 ; ID_Continue # Lo [2] GURMUKHI LETTER EE..GURMUKHI LETTER AI
+0A13..0A28 ; ID_Continue # Lo [22] GURMUKHI LETTER OO..GURMUKHI LETTER NA
+0A2A..0A30 ; ID_Continue # Lo [7] GURMUKHI LETTER PA..GURMUKHI LETTER RA
+0A32..0A33 ; ID_Continue # Lo [2] GURMUKHI LETTER LA..GURMUKHI LETTER LLA
+0A35..0A36 ; ID_Continue # Lo [2] GURMUKHI LETTER VA..GURMUKHI LETTER SHA
+0A38..0A39 ; ID_Continue # Lo [2] GURMUKHI LETTER SA..GURMUKHI LETTER HA
+0A3C ; ID_Continue # Mn GURMUKHI SIGN NUKTA
+0A3E..0A40 ; ID_Continue # Mc [3] GURMUKHI VOWEL SIGN AA..GURMUKHI VOWEL SIGN II
+0A41..0A42 ; ID_Continue # Mn [2] GURMUKHI VOWEL SIGN U..GURMUKHI VOWEL SIGN UU
+0A47..0A48 ; ID_Continue # Mn [2] GURMUKHI VOWEL SIGN EE..GURMUKHI VOWEL SIGN AI
+0A4B..0A4D ; ID_Continue # Mn [3] GURMUKHI VOWEL SIGN OO..GURMUKHI SIGN VIRAMA
+0A51 ; ID_Continue # Mn GURMUKHI SIGN UDAAT
+0A59..0A5C ; ID_Continue # Lo [4] GURMUKHI LETTER KHHA..GURMUKHI LETTER RRA
+0A5E ; ID_Continue # Lo GURMUKHI LETTER FA
+0A66..0A6F ; ID_Continue # Nd [10] GURMUKHI DIGIT ZERO..GURMUKHI DIGIT NINE
+0A70..0A71 ; ID_Continue # Mn [2] GURMUKHI TIPPI..GURMUKHI ADDAK
+0A72..0A74 ; ID_Continue # Lo [3] GURMUKHI IRI..GURMUKHI EK ONKAR
+0A75 ; ID_Continue # Mn GURMUKHI SIGN YAKASH
+0A81..0A82 ; ID_Continue # Mn [2] GUJARATI SIGN CANDRABINDU..GUJARATI SIGN ANUSVARA
+0A83 ; ID_Continue # Mc GUJARATI SIGN VISARGA
+0A85..0A8D ; ID_Continue # Lo [9] GUJARATI LETTER A..GUJARATI VOWEL CANDRA E
+0A8F..0A91 ; ID_Continue # Lo [3] GUJARATI LETTER E..GUJARATI VOWEL CANDRA O
+0A93..0AA8 ; ID_Continue # Lo [22] GUJARATI LETTER O..GUJARATI LETTER NA
+0AAA..0AB0 ; ID_Continue # Lo [7] GUJARATI LETTER PA..GUJARATI LETTER RA
+0AB2..0AB3 ; ID_Continue # Lo [2] GUJARATI LETTER LA..GUJARATI LETTER LLA
+0AB5..0AB9 ; ID_Continue # Lo [5] GUJARATI LETTER VA..GUJARATI LETTER HA
+0ABC ; ID_Continue # Mn GUJARATI SIGN NUKTA
+0ABD ; ID_Continue # Lo GUJARATI SIGN AVAGRAHA
+0ABE..0AC0 ; ID_Continue # Mc [3] GUJARATI VOWEL SIGN AA..GUJARATI VOWEL SIGN II
+0AC1..0AC5 ; ID_Continue # Mn [5] GUJARATI VOWEL SIGN U..GUJARATI VOWEL SIGN CANDRA E
+0AC7..0AC8 ; ID_Continue # Mn [2] GUJARATI VOWEL SIGN E..GUJARATI VOWEL SIGN AI
+0AC9 ; ID_Continue # Mc GUJARATI VOWEL SIGN CANDRA O
+0ACB..0ACC ; ID_Continue # Mc [2] GUJARATI VOWEL SIGN O..GUJARATI VOWEL SIGN AU
+0ACD ; ID_Continue # Mn GUJARATI SIGN VIRAMA
+0AD0 ; ID_Continue # Lo GUJARATI OM
+0AE0..0AE1 ; ID_Continue # Lo [2] GUJARATI LETTER VOCALIC RR..GUJARATI LETTER VOCALIC LL
+0AE2..0AE3 ; ID_Continue # Mn [2] GUJARATI VOWEL SIGN VOCALIC L..GUJARATI VOWEL SIGN VOCALIC LL
+0AE6..0AEF ; ID_Continue # Nd [10] GUJARATI DIGIT ZERO..GUJARATI DIGIT NINE
+0B01 ; ID_Continue # Mn ORIYA SIGN CANDRABINDU
+0B02..0B03 ; ID_Continue # Mc [2] ORIYA SIGN ANUSVARA..ORIYA SIGN VISARGA
+0B05..0B0C ; ID_Continue # Lo [8] ORIYA LETTER A..ORIYA LETTER VOCALIC L
+0B0F..0B10 ; ID_Continue # Lo [2] ORIYA LETTER E..ORIYA LETTER AI
+0B13..0B28 ; ID_Continue # Lo [22] ORIYA LETTER O..ORIYA LETTER NA
+0B2A..0B30 ; ID_Continue # Lo [7] ORIYA LETTER PA..ORIYA LETTER RA
+0B32..0B33 ; ID_Continue # Lo [2] ORIYA LETTER LA..ORIYA LETTER LLA
+0B35..0B39 ; ID_Continue # Lo [5] ORIYA LETTER VA..ORIYA LETTER HA
+0B3C ; ID_Continue # Mn ORIYA SIGN NUKTA
+0B3D ; ID_Continue # Lo ORIYA SIGN AVAGRAHA
+0B3E ; ID_Continue # Mc ORIYA VOWEL SIGN AA
+0B3F ; ID_Continue # Mn ORIYA VOWEL SIGN I
+0B40 ; ID_Continue # Mc ORIYA VOWEL SIGN II
+0B41..0B44 ; ID_Continue # Mn [4] ORIYA VOWEL SIGN U..ORIYA VOWEL SIGN VOCALIC RR
+0B47..0B48 ; ID_Continue # Mc [2] ORIYA VOWEL SIGN E..ORIYA VOWEL SIGN AI
+0B4B..0B4C ; ID_Continue # Mc [2] ORIYA VOWEL SIGN O..ORIYA VOWEL SIGN AU
+0B4D ; ID_Continue # Mn ORIYA SIGN VIRAMA
+0B56 ; ID_Continue # Mn ORIYA AI LENGTH MARK
+0B57 ; ID_Continue # Mc ORIYA AU LENGTH MARK
+0B5C..0B5D ; ID_Continue # Lo [2] ORIYA LETTER RRA..ORIYA LETTER RHA
+0B5F..0B61 ; ID_Continue # Lo [3] ORIYA LETTER YYA..ORIYA LETTER VOCALIC LL
+0B62..0B63 ; ID_Continue # Mn [2] ORIYA VOWEL SIGN VOCALIC L..ORIYA VOWEL SIGN VOCALIC LL
+0B66..0B6F ; ID_Continue # Nd [10] ORIYA DIGIT ZERO..ORIYA DIGIT NINE
+0B71 ; ID_Continue # Lo ORIYA LETTER WA
+0B82 ; ID_Continue # Mn TAMIL SIGN ANUSVARA
+0B83 ; ID_Continue # Lo TAMIL SIGN VISARGA
+0B85..0B8A ; ID_Continue # Lo [6] TAMIL LETTER A..TAMIL LETTER UU
+0B8E..0B90 ; ID_Continue # Lo [3] TAMIL LETTER E..TAMIL LETTER AI
+0B92..0B95 ; ID_Continue # Lo [4] TAMIL LETTER O..TAMIL LETTER KA
+0B99..0B9A ; ID_Continue # Lo [2] TAMIL LETTER NGA..TAMIL LETTER CA
+0B9C ; ID_Continue # Lo TAMIL LETTER JA
+0B9E..0B9F ; ID_Continue # Lo [2] TAMIL LETTER NYA..TAMIL LETTER TTA
+0BA3..0BA4 ; ID_Continue # Lo [2] TAMIL LETTER NNA..TAMIL LETTER TA
+0BA8..0BAA ; ID_Continue # Lo [3] TAMIL LETTER NA..TAMIL LETTER PA
+0BAE..0BB9 ; ID_Continue # Lo [12] TAMIL LETTER MA..TAMIL LETTER HA
+0BBE..0BBF ; ID_Continue # Mc [2] TAMIL VOWEL SIGN AA..TAMIL VOWEL SIGN I
+0BC0 ; ID_Continue # Mn TAMIL VOWEL SIGN II
+0BC1..0BC2 ; ID_Continue # Mc [2] TAMIL VOWEL SIGN U..TAMIL VOWEL SIGN UU
+0BC6..0BC8 ; ID_Continue # Mc [3] TAMIL VOWEL SIGN E..TAMIL VOWEL SIGN AI
+0BCA..0BCC ; ID_Continue # Mc [3] TAMIL VOWEL SIGN O..TAMIL VOWEL SIGN AU
+0BCD ; ID_Continue # Mn TAMIL SIGN VIRAMA
+0BD0 ; ID_Continue # Lo TAMIL OM
+0BD7 ; ID_Continue # Mc TAMIL AU LENGTH MARK
+0BE6..0BEF ; ID_Continue # Nd [10] TAMIL DIGIT ZERO..TAMIL DIGIT NINE
+0C01..0C03 ; ID_Continue # Mc [3] TELUGU SIGN CANDRABINDU..TELUGU SIGN VISARGA
+0C05..0C0C ; ID_Continue # Lo [8] TELUGU LETTER A..TELUGU LETTER VOCALIC L
+0C0E..0C10 ; ID_Continue # Lo [3] TELUGU LETTER E..TELUGU LETTER AI
+0C12..0C28 ; ID_Continue # Lo [23] TELUGU LETTER O..TELUGU LETTER NA
+0C2A..0C33 ; ID_Continue # Lo [10] TELUGU LETTER PA..TELUGU LETTER LLA
+0C35..0C39 ; ID_Continue # Lo [5] TELUGU LETTER VA..TELUGU LETTER HA
+0C3D ; ID_Continue # Lo TELUGU SIGN AVAGRAHA
+0C3E..0C40 ; ID_Continue # Mn [3] TELUGU VOWEL SIGN AA..TELUGU VOWEL SIGN II
+0C41..0C44 ; ID_Continue # Mc [4] TELUGU VOWEL SIGN U..TELUGU VOWEL SIGN VOCALIC RR
+0C46..0C48 ; ID_Continue # Mn [3] TELUGU VOWEL SIGN E..TELUGU VOWEL SIGN AI
+0C4A..0C4D ; ID_Continue # Mn [4] TELUGU VOWEL SIGN O..TELUGU SIGN VIRAMA
+0C55..0C56 ; ID_Continue # Mn [2] TELUGU LENGTH MARK..TELUGU AI LENGTH MARK
+0C58..0C59 ; ID_Continue # Lo [2] TELUGU LETTER TSA..TELUGU LETTER DZA
+0C60..0C61 ; ID_Continue # Lo [2] TELUGU LETTER VOCALIC RR..TELUGU LETTER VOCALIC LL
+0C62..0C63 ; ID_Continue # Mn [2] TELUGU VOWEL SIGN VOCALIC L..TELUGU VOWEL SIGN VOCALIC LL
+0C66..0C6F ; ID_Continue # Nd [10] TELUGU DIGIT ZERO..TELUGU DIGIT NINE
+0C82..0C83 ; ID_Continue # Mc [2] KANNADA SIGN ANUSVARA..KANNADA SIGN VISARGA
+0C85..0C8C ; ID_Continue # Lo [8] KANNADA LETTER A..KANNADA LETTER VOCALIC L
+0C8E..0C90 ; ID_Continue # Lo [3] KANNADA LETTER E..KANNADA LETTER AI
+0C92..0CA8 ; ID_Continue # Lo [23] KANNADA LETTER O..KANNADA LETTER NA
+0CAA..0CB3 ; ID_Continue # Lo [10] KANNADA LETTER PA..KANNADA LETTER LLA
+0CB5..0CB9 ; ID_Continue # Lo [5] KANNADA LETTER VA..KANNADA LETTER HA
+0CBC ; ID_Continue # Mn KANNADA SIGN NUKTA
+0CBD ; ID_Continue # Lo KANNADA SIGN AVAGRAHA
+0CBE ; ID_Continue # Mc KANNADA VOWEL SIGN AA
+0CBF ; ID_Continue # Mn KANNADA VOWEL SIGN I
+0CC0..0CC4 ; ID_Continue # Mc [5] KANNADA VOWEL SIGN II..KANNADA VOWEL SIGN VOCALIC RR
+0CC6 ; ID_Continue # Mn KANNADA VOWEL SIGN E
+0CC7..0CC8 ; ID_Continue # Mc [2] KANNADA VOWEL SIGN EE..KANNADA VOWEL SIGN AI
+0CCA..0CCB ; ID_Continue # Mc [2] KANNADA VOWEL SIGN O..KANNADA VOWEL SIGN OO
+0CCC..0CCD ; ID_Continue # Mn [2] KANNADA VOWEL SIGN AU..KANNADA SIGN VIRAMA
+0CD5..0CD6 ; ID_Continue # Mc [2] KANNADA LENGTH MARK..KANNADA AI LENGTH MARK
+0CDE ; ID_Continue # Lo KANNADA LETTER FA
+0CE0..0CE1 ; ID_Continue # Lo [2] KANNADA LETTER VOCALIC RR..KANNADA LETTER VOCALIC LL
+0CE2..0CE3 ; ID_Continue # Mn [2] KANNADA VOWEL SIGN VOCALIC L..KANNADA VOWEL SIGN VOCALIC LL
+0CE6..0CEF ; ID_Continue # Nd [10] KANNADA DIGIT ZERO..KANNADA DIGIT NINE
+0D02..0D03 ; ID_Continue # Mc [2] MALAYALAM SIGN ANUSVARA..MALAYALAM SIGN VISARGA
+0D05..0D0C ; ID_Continue # Lo [8] MALAYALAM LETTER A..MALAYALAM LETTER VOCALIC L
+0D0E..0D10 ; ID_Continue # Lo [3] MALAYALAM LETTER E..MALAYALAM LETTER AI
+0D12..0D28 ; ID_Continue # Lo [23] MALAYALAM LETTER O..MALAYALAM LETTER NA
+0D2A..0D39 ; ID_Continue # Lo [16] MALAYALAM LETTER PA..MALAYALAM LETTER HA
+0D3D ; ID_Continue # Lo MALAYALAM SIGN AVAGRAHA
+0D3E..0D40 ; ID_Continue # Mc [3] MALAYALAM VOWEL SIGN AA..MALAYALAM VOWEL SIGN II
+0D41..0D44 ; ID_Continue # Mn [4] MALAYALAM VOWEL SIGN U..MALAYALAM VOWEL SIGN VOCALIC RR
+0D46..0D48 ; ID_Continue # Mc [3] MALAYALAM VOWEL SIGN E..MALAYALAM VOWEL SIGN AI
+0D4A..0D4C ; ID_Continue # Mc [3] MALAYALAM VOWEL SIGN O..MALAYALAM VOWEL SIGN AU
+0D4D ; ID_Continue # Mn MALAYALAM SIGN VIRAMA
+0D57 ; ID_Continue # Mc MALAYALAM AU LENGTH MARK
+0D60..0D61 ; ID_Continue # Lo [2] MALAYALAM LETTER VOCALIC RR..MALAYALAM LETTER VOCALIC LL
+0D62..0D63 ; ID_Continue # Mn [2] MALAYALAM VOWEL SIGN VOCALIC L..MALAYALAM VOWEL SIGN VOCALIC LL
+0D66..0D6F ; ID_Continue # Nd [10] MALAYALAM DIGIT ZERO..MALAYALAM DIGIT NINE
+0D7A..0D7F ; ID_Continue # Lo [6] MALAYALAM LETTER CHILLU NN..MALAYALAM LETTER CHILLU K
+0D82..0D83 ; ID_Continue # Mc [2] SINHALA SIGN ANUSVARAYA..SINHALA SIGN VISARGAYA
+0D85..0D96 ; ID_Continue # Lo [18] SINHALA LETTER AYANNA..SINHALA LETTER AUYANNA
+0D9A..0DB1 ; ID_Continue # Lo [24] SINHALA LETTER ALPAPRAANA KAYANNA..SINHALA LETTER DANTAJA NAYANNA
+0DB3..0DBB ; ID_Continue # Lo [9] SINHALA LETTER SANYAKA DAYANNA..SINHALA LETTER RAYANNA
+0DBD ; ID_Continue # Lo SINHALA LETTER DANTAJA LAYANNA
+0DC0..0DC6 ; ID_Continue # Lo [7] SINHALA LETTER VAYANNA..SINHALA LETTER FAYANNA
+0DCA ; ID_Continue # Mn SINHALA SIGN AL-LAKUNA
+0DCF..0DD1 ; ID_Continue # Mc [3] SINHALA VOWEL SIGN AELA-PILLA..SINHALA VOWEL SIGN DIGA AEDA-PILLA
+0DD2..0DD4 ; ID_Continue # Mn [3] SINHALA VOWEL SIGN KETTI IS-PILLA..SINHALA VOWEL SIGN KETTI PAA-PILLA
+0DD6 ; ID_Continue # Mn SINHALA VOWEL SIGN DIGA PAA-PILLA
+0DD8..0DDF ; ID_Continue # Mc [8] SINHALA VOWEL SIGN GAETTA-PILLA..SINHALA VOWEL SIGN GAYANUKITTA
+0DF2..0DF3 ; ID_Continue # Mc [2] SINHALA VOWEL SIGN DIGA GAETTA-PILLA..SINHALA VOWEL SIGN DIGA GAYANUKITTA
+0E01..0E30 ; ID_Continue # Lo [48] THAI CHARACTER KO KAI..THAI CHARACTER SARA A
+0E31 ; ID_Continue # Mn THAI CHARACTER MAI HAN-AKAT
+0E32..0E33 ; ID_Continue # Lo [2] THAI CHARACTER SARA AA..THAI CHARACTER SARA AM
+0E34..0E3A ; ID_Continue # Mn [7] THAI CHARACTER SARA I..THAI CHARACTER PHINTHU
+0E40..0E45 ; ID_Continue # Lo [6] THAI CHARACTER SARA E..THAI CHARACTER LAKKHANGYAO
+0E46 ; ID_Continue # Lm THAI CHARACTER MAIYAMOK
+0E47..0E4E ; ID_Continue # Mn [8] THAI CHARACTER MAITAIKHU..THAI CHARACTER YAMAKKAN
+0E50..0E59 ; ID_Continue # Nd [10] THAI DIGIT ZERO..THAI DIGIT NINE
+0E81..0E82 ; ID_Continue # Lo [2] LAO LETTER KO..LAO LETTER KHO SUNG
+0E84 ; ID_Continue # Lo LAO LETTER KHO TAM
+0E87..0E88 ; ID_Continue # Lo [2] LAO LETTER NGO..LAO LETTER CO
+0E8A ; ID_Continue # Lo LAO LETTER SO TAM
+0E8D ; ID_Continue # Lo LAO LETTER NYO
+0E94..0E97 ; ID_Continue # Lo [4] LAO LETTER DO..LAO LETTER THO TAM
+0E99..0E9F ; ID_Continue # Lo [7] LAO LETTER NO..LAO LETTER FO SUNG
+0EA1..0EA3 ; ID_Continue # Lo [3] LAO LETTER MO..LAO LETTER LO LING
+0EA5 ; ID_Continue # Lo LAO LETTER LO LOOT
+0EA7 ; ID_Continue # Lo LAO LETTER WO
+0EAA..0EAB ; ID_Continue # Lo [2] LAO LETTER SO SUNG..LAO LETTER HO SUNG
+0EAD..0EB0 ; ID_Continue # Lo [4] LAO LETTER O..LAO VOWEL SIGN A
+0EB1 ; ID_Continue # Mn LAO VOWEL SIGN MAI KAN
+0EB2..0EB3 ; ID_Continue # Lo [2] LAO VOWEL SIGN AA..LAO VOWEL SIGN AM
+0EB4..0EB9 ; ID_Continue # Mn [6] LAO VOWEL SIGN I..LAO VOWEL SIGN UU
+0EBB..0EBC ; ID_Continue # Mn [2] LAO VOWEL SIGN MAI KON..LAO SEMIVOWEL SIGN LO
+0EBD ; ID_Continue # Lo LAO SEMIVOWEL SIGN NYO
+0EC0..0EC4 ; ID_Continue # Lo [5] LAO VOWEL SIGN E..LAO VOWEL SIGN AI
+0EC6 ; ID_Continue # Lm LAO KO LA
+0EC8..0ECD ; ID_Continue # Mn [6] LAO TONE MAI EK..LAO NIGGAHITA
+0ED0..0ED9 ; ID_Continue # Nd [10] LAO DIGIT ZERO..LAO DIGIT NINE
+0EDC..0EDD ; ID_Continue # Lo [2] LAO HO NO..LAO HO MO
+0F00 ; ID_Continue # Lo TIBETAN SYLLABLE OM
+0F18..0F19 ; ID_Continue # Mn [2] TIBETAN ASTROLOGICAL SIGN -KHYUD PA..TIBETAN ASTROLOGICAL SIGN SDONG TSHUGS
+0F20..0F29 ; ID_Continue # Nd [10] TIBETAN DIGIT ZERO..TIBETAN DIGIT NINE
+0F35 ; ID_Continue # Mn TIBETAN MARK NGAS BZUNG NYI ZLA
+0F37 ; ID_Continue # Mn TIBETAN MARK NGAS BZUNG SGOR RTAGS
+0F39 ; ID_Continue # Mn TIBETAN MARK TSA -PHRU
+0F3E..0F3F ; ID_Continue # Mc [2] TIBETAN SIGN YAR TSHES..TIBETAN SIGN MAR TSHES
+0F40..0F47 ; ID_Continue # Lo [8] TIBETAN LETTER KA..TIBETAN LETTER JA
+0F49..0F6C ; ID_Continue # Lo [36] TIBETAN LETTER NYA..TIBETAN LETTER RRA
+0F71..0F7E ; ID_Continue # Mn [14] TIBETAN VOWEL SIGN AA..TIBETAN SIGN RJES SU NGA RO
+0F7F ; ID_Continue # Mc TIBETAN SIGN RNAM BCAD
+0F80..0F84 ; ID_Continue # Mn [5] TIBETAN VOWEL SIGN REVERSED I..TIBETAN MARK HALANTA
+0F86..0F87 ; ID_Continue # Mn [2] TIBETAN SIGN LCI RTAGS..TIBETAN SIGN YANG RTAGS
+0F88..0F8B ; ID_Continue # Lo [4] TIBETAN SIGN LCE TSA CAN..TIBETAN SIGN GRU MED RGYINGS
+0F90..0F97 ; ID_Continue # Mn [8] TIBETAN SUBJOINED LETTER KA..TIBETAN SUBJOINED LETTER JA
+0F99..0FBC ; ID_Continue # Mn [36] TIBETAN SUBJOINED LETTER NYA..TIBETAN SUBJOINED LETTER FIXED-FORM RA
+0FC6 ; ID_Continue # Mn TIBETAN SYMBOL PADMA GDAN
+1000..102A ; ID_Continue # Lo [43] MYANMAR LETTER KA..MYANMAR LETTER AU
+102B..102C ; ID_Continue # Mc [2] MYANMAR VOWEL SIGN TALL AA..MYANMAR VOWEL SIGN AA
+102D..1030 ; ID_Continue # Mn [4] MYANMAR VOWEL SIGN I..MYANMAR VOWEL SIGN UU
+1031 ; ID_Continue # Mc MYANMAR VOWEL SIGN E
+1032..1037 ; ID_Continue # Mn [6] MYANMAR VOWEL SIGN AI..MYANMAR SIGN DOT BELOW
+1038 ; ID_Continue # Mc MYANMAR SIGN VISARGA
+1039..103A ; ID_Continue # Mn [2] MYANMAR SIGN VIRAMA..MYANMAR SIGN ASAT
+103B..103C ; ID_Continue # Mc [2] MYANMAR CONSONANT SIGN MEDIAL YA..MYANMAR CONSONANT SIGN MEDIAL RA
+103D..103E ; ID_Continue # Mn [2] MYANMAR CONSONANT SIGN MEDIAL WA..MYANMAR CONSONANT SIGN MEDIAL HA
+103F ; ID_Continue # Lo MYANMAR LETTER GREAT SA
+1040..1049 ; ID_Continue # Nd [10] MYANMAR DIGIT ZERO..MYANMAR DIGIT NINE
+1050..1055 ; ID_Continue # Lo [6] MYANMAR LETTER SHA..MYANMAR LETTER VOCALIC LL
+1056..1057 ; ID_Continue # Mc [2] MYANMAR VOWEL SIGN VOCALIC R..MYANMAR VOWEL SIGN VOCALIC RR
+1058..1059 ; ID_Continue # Mn [2] MYANMAR VOWEL SIGN VOCALIC L..MYANMAR VOWEL SIGN VOCALIC LL
+105A..105D ; ID_Continue # Lo [4] MYANMAR LETTER MON NGA..MYANMAR LETTER MON BBE
+105E..1060 ; ID_Continue # Mn [3] MYANMAR CONSONANT SIGN MON MEDIAL NA..MYANMAR CONSONANT SIGN MON MEDIAL LA
+1061 ; ID_Continue # Lo MYANMAR LETTER SGAW KAREN SHA
+1062..1064 ; ID_Continue # Mc [3] MYANMAR VOWEL SIGN SGAW KAREN EU..MYANMAR TONE MARK SGAW KAREN KE PHO
+1065..1066 ; ID_Continue # Lo [2] MYANMAR LETTER WESTERN PWO KAREN THA..MYANMAR LETTER WESTERN PWO KAREN PWA
+1067..106D ; ID_Continue # Mc [7] MYANMAR VOWEL SIGN WESTERN PWO KAREN EU..MYANMAR SIGN WESTERN PWO KAREN TONE-5
+106E..1070 ; ID_Continue # Lo [3] MYANMAR LETTER EASTERN PWO KAREN NNA..MYANMAR LETTER EASTERN PWO KAREN GHWA
+1071..1074 ; ID_Continue # Mn [4] MYANMAR VOWEL SIGN GEBA KAREN I..MYANMAR VOWEL SIGN KAYAH EE
+1075..1081 ; ID_Continue # Lo [13] MYANMAR LETTER SHAN KA..MYANMAR LETTER SHAN HA
+1082 ; ID_Continue # Mn MYANMAR CONSONANT SIGN SHAN MEDIAL WA
+1083..1084 ; ID_Continue # Mc [2] MYANMAR VOWEL SIGN SHAN AA..MYANMAR VOWEL SIGN SHAN E
+1085..1086 ; ID_Continue # Mn [2] MYANMAR VOWEL SIGN SHAN E ABOVE..MYANMAR VOWEL SIGN SHAN FINAL Y
+1087..108C ; ID_Continue # Mc [6] MYANMAR SIGN SHAN TONE-2..MYANMAR SIGN SHAN COUNCIL TONE-3
+108D ; ID_Continue # Mn MYANMAR SIGN SHAN COUNCIL EMPHATIC TONE
+108E ; ID_Continue # Lo MYANMAR LETTER RUMAI PALAUNG FA
+108F ; ID_Continue # Mc MYANMAR SIGN RUMAI PALAUNG TONE-5
+1090..1099 ; ID_Continue # Nd [10] MYANMAR SHAN DIGIT ZERO..MYANMAR SHAN DIGIT NINE
+109A..109C ; ID_Continue # Mc [3] MYANMAR SIGN KHAMTI TONE-1..MYANMAR VOWEL SIGN AITON A
+109D ; ID_Continue # Mn MYANMAR VOWEL SIGN AITON AI
+10A0..10C5 ; ID_Continue # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE
+10D0..10FA ; ID_Continue # Lo [43] GEORGIAN LETTER AN..GEORGIAN LETTER AIN
+10FC ; ID_Continue # Lm MODIFIER LETTER GEORGIAN NAR
+1100..1248 ; ID_Continue # Lo [329] HANGUL CHOSEONG KIYEOK..ETHIOPIC SYLLABLE QWA
+124A..124D ; ID_Continue # Lo [4] ETHIOPIC SYLLABLE QWI..ETHIOPIC SYLLABLE QWE
+1250..1256 ; ID_Continue # Lo [7] ETHIOPIC SYLLABLE QHA..ETHIOPIC SYLLABLE QHO
+1258 ; ID_Continue # Lo ETHIOPIC SYLLABLE QHWA
+125A..125D ; ID_Continue # Lo [4] ETHIOPIC SYLLABLE QHWI..ETHIOPIC SYLLABLE QHWE
+1260..1288 ; ID_Continue # Lo [41] ETHIOPIC SYLLABLE BA..ETHIOPIC SYLLABLE XWA
+128A..128D ; ID_Continue # Lo [4] ETHIOPIC SYLLABLE XWI..ETHIOPIC SYLLABLE XWE
+1290..12B0 ; ID_Continue # Lo [33] ETHIOPIC SYLLABLE NA..ETHIOPIC SYLLABLE KWA
+12B2..12B5 ; ID_Continue # Lo [4] ETHIOPIC SYLLABLE KWI..ETHIOPIC SYLLABLE KWE
+12B8..12BE ; ID_Continue # Lo [7] ETHIOPIC SYLLABLE KXA..ETHIOPIC SYLLABLE KXO
+12C0 ; ID_Continue # Lo ETHIOPIC SYLLABLE KXWA
+12C2..12C5 ; ID_Continue # Lo [4] ETHIOPIC SYLLABLE KXWI..ETHIOPIC SYLLABLE KXWE
+12C8..12D6 ; ID_Continue # Lo [15] ETHIOPIC SYLLABLE WA..ETHIOPIC SYLLABLE PHARYNGEAL O
+12D8..1310 ; ID_Continue # Lo [57] ETHIOPIC SYLLABLE ZA..ETHIOPIC SYLLABLE GWA
+1312..1315 ; ID_Continue # Lo [4] ETHIOPIC SYLLABLE GWI..ETHIOPIC SYLLABLE GWE
+1318..135A ; ID_Continue # Lo [67] ETHIOPIC SYLLABLE GGA..ETHIOPIC SYLLABLE FYA
+135F ; ID_Continue # Mn ETHIOPIC COMBINING GEMINATION MARK
+1369..1371 ; ID_Continue # No [9] ETHIOPIC DIGIT ONE..ETHIOPIC DIGIT NINE
+1380..138F ; ID_Continue # Lo [16] ETHIOPIC SYLLABLE SEBATBEIT MWA..ETHIOPIC SYLLABLE PWE
+13A0..13F4 ; ID_Continue # Lo [85] CHEROKEE LETTER A..CHEROKEE LETTER YV
+1401..166C ; ID_Continue # Lo [620] CANADIAN SYLLABICS E..CANADIAN SYLLABICS CARRIER TTSA
+166F..167F ; ID_Continue # Lo [17] CANADIAN SYLLABICS QAI..CANADIAN SYLLABICS BLACKFOOT W
+1681..169A ; ID_Continue # Lo
[26] OGHAM LETTER BEITH..OGHAM LETTER PEITH +16A0..16EA ; ID_Continue # Lo [75] RUNIC LETTER FEHU FEOH FE F..RUNIC LETTER X +16EE..16F0 ; ID_Continue # Nl [3] RUNIC ARLAUG SYMBOL..RUNIC BELGTHOR SYMBOL +1700..170C ; ID_Continue # Lo [13] TAGALOG LETTER A..TAGALOG LETTER YA +170E..1711 ; ID_Continue # Lo [4] TAGALOG LETTER LA..TAGALOG LETTER HA +1712..1714 ; ID_Continue # Mn [3] TAGALOG VOWEL SIGN I..TAGALOG SIGN VIRAMA +1720..1731 ; ID_Continue # Lo [18] HANUNOO LETTER A..HANUNOO LETTER HA +1732..1734 ; ID_Continue # Mn [3] HANUNOO VOWEL SIGN I..HANUNOO SIGN PAMUDPOD +1740..1751 ; ID_Continue # Lo [18] BUHID LETTER A..BUHID LETTER HA +1752..1753 ; ID_Continue # Mn [2] BUHID VOWEL SIGN I..BUHID VOWEL SIGN U +1760..176C ; ID_Continue # Lo [13] TAGBANWA LETTER A..TAGBANWA LETTER YA +176E..1770 ; ID_Continue # Lo [3] TAGBANWA LETTER LA..TAGBANWA LETTER SA +1772..1773 ; ID_Continue # Mn [2] TAGBANWA VOWEL SIGN I..TAGBANWA VOWEL SIGN U +1780..17B3 ; ID_Continue # Lo [52] KHMER LETTER KA..KHMER INDEPENDENT VOWEL QAU +17B6 ; ID_Continue # Mc KHMER VOWEL SIGN AA +17B7..17BD ; ID_Continue # Mn [7] KHMER VOWEL SIGN I..KHMER VOWEL SIGN UA +17BE..17C5 ; ID_Continue # Mc [8] KHMER VOWEL SIGN OE..KHMER VOWEL SIGN AU +17C6 ; ID_Continue # Mn KHMER SIGN NIKAHIT +17C7..17C8 ; ID_Continue # Mc [2] KHMER SIGN REAHMUK..KHMER SIGN YUUKALEAPINTU +17C9..17D3 ; ID_Continue # Mn [11] KHMER SIGN MUUSIKATOAN..KHMER SIGN BATHAMASAT +17D7 ; ID_Continue # Lm KHMER SIGN LEK TOO +17DC ; ID_Continue # Lo KHMER SIGN AVAKRAHASANYA +17DD ; ID_Continue # Mn KHMER SIGN ATTHACAN +17E0..17E9 ; ID_Continue # Nd [10] KHMER DIGIT ZERO..KHMER DIGIT NINE +180B..180D ; ID_Continue # Mn [3] MONGOLIAN FREE VARIATION SELECTOR ONE..MONGOLIAN FREE VARIATION SELECTOR THREE +1810..1819 ; ID_Continue # Nd [10] MONGOLIAN DIGIT ZERO..MONGOLIAN DIGIT NINE +1820..1842 ; ID_Continue # Lo [35] MONGOLIAN LETTER A..MONGOLIAN LETTER CHI +1843 ; ID_Continue # Lm MONGOLIAN LETTER TODO LONG VOWEL SIGN +1844..1877 ; ID_Continue # 
Lo [52] MONGOLIAN LETTER TODO E..MONGOLIAN LETTER MANCHU ZHA +1880..18A8 ; ID_Continue # Lo [41] MONGOLIAN LETTER ALI GALI ANUSVARA ONE..MONGOLIAN LETTER MANCHU ALI GALI BHA +18A9 ; ID_Continue # Mn MONGOLIAN LETTER ALI GALI DAGALGA +18AA ; ID_Continue # Lo MONGOLIAN LETTER MANCHU ALI GALI LHA +18B0..18F5 ; ID_Continue # Lo [70] CANADIAN SYLLABICS OY..CANADIAN SYLLABICS CARRIER DENTAL S +1900..191C ; ID_Continue # Lo [29] LIMBU VOWEL-CARRIER LETTER..LIMBU LETTER HA +1920..1922 ; ID_Continue # Mn [3] LIMBU VOWEL SIGN A..LIMBU VOWEL SIGN U +1923..1926 ; ID_Continue # Mc [4] LIMBU VOWEL SIGN EE..LIMBU VOWEL SIGN AU +1927..1928 ; ID_Continue # Mn [2] LIMBU VOWEL SIGN E..LIMBU VOWEL SIGN O +1929..192B ; ID_Continue # Mc [3] LIMBU SUBJOINED LETTER YA..LIMBU SUBJOINED LETTER WA +1930..1931 ; ID_Continue # Mc [2] LIMBU SMALL LETTER KA..LIMBU SMALL LETTER NGA +1932 ; ID_Continue # Mn LIMBU SMALL LETTER ANUSVARA +1933..1938 ; ID_Continue # Mc [6] LIMBU SMALL LETTER TA..LIMBU SMALL LETTER LA +1939..193B ; ID_Continue # Mn [3] LIMBU SIGN MUKPHRENG..LIMBU SIGN SA-I +1946..194F ; ID_Continue # Nd [10] LIMBU DIGIT ZERO..LIMBU DIGIT NINE +1950..196D ; ID_Continue # Lo [30] TAI LE LETTER KA..TAI LE LETTER AI +1970..1974 ; ID_Continue # Lo [5] TAI LE LETTER TONE-2..TAI LE LETTER TONE-6 +1980..19AB ; ID_Continue # Lo [44] NEW TAI LUE LETTER HIGH QA..NEW TAI LUE LETTER LOW SUA +19B0..19C0 ; ID_Continue # Mc [17] NEW TAI LUE VOWEL SIGN VOWEL SHORTENER..NEW TAI LUE VOWEL SIGN IY +19C1..19C7 ; ID_Continue # Lo [7] NEW TAI LUE LETTER FINAL V..NEW TAI LUE LETTER FINAL B +19C8..19C9 ; ID_Continue # Mc [2] NEW TAI LUE TONE MARK-1..NEW TAI LUE TONE MARK-2 +19D0..19DA ; ID_Continue # Nd [11] NEW TAI LUE DIGIT ZERO..NEW TAI LUE THAM DIGIT ONE +1A00..1A16 ; ID_Continue # Lo [23] BUGINESE LETTER KA..BUGINESE LETTER HA +1A17..1A18 ; ID_Continue # Mn [2] BUGINESE VOWEL SIGN I..BUGINESE VOWEL SIGN U +1A19..1A1B ; ID_Continue # Mc [3] BUGINESE VOWEL SIGN E..BUGINESE VOWEL SIGN AE +1A20..1A54 ; 
ID_Continue # Lo [53] TAI THAM LETTER HIGH KA..TAI THAM LETTER GREAT SA +1A55 ; ID_Continue # Mc TAI THAM CONSONANT SIGN MEDIAL RA +1A56 ; ID_Continue # Mn TAI THAM CONSONANT SIGN MEDIAL LA +1A57 ; ID_Continue # Mc TAI THAM CONSONANT SIGN LA TANG LAI +1A58..1A5E ; ID_Continue # Mn [7] TAI THAM SIGN MAI KANG LAI..TAI THAM CONSONANT SIGN SA +1A60 ; ID_Continue # Mn TAI THAM SIGN SAKOT +1A61 ; ID_Continue # Mc TAI THAM VOWEL SIGN A +1A62 ; ID_Continue # Mn TAI THAM VOWEL SIGN MAI SAT +1A63..1A64 ; ID_Continue # Mc [2] TAI THAM VOWEL SIGN AA..TAI THAM VOWEL SIGN TALL AA +1A65..1A6C ; ID_Continue # Mn [8] TAI THAM VOWEL SIGN I..TAI THAM VOWEL SIGN OA BELOW +1A6D..1A72 ; ID_Continue # Mc [6] TAI THAM VOWEL SIGN OY..TAI THAM VOWEL SIGN THAM AI +1A73..1A7C ; ID_Continue # Mn [10] TAI THAM VOWEL SIGN OA ABOVE..TAI THAM SIGN KHUEN-LUE KARAN +1A7F ; ID_Continue # Mn TAI THAM COMBINING CRYPTOGRAMMIC DOT +1A80..1A89 ; ID_Continue # Nd [10] TAI THAM HORA DIGIT ZERO..TAI THAM HORA DIGIT NINE +1A90..1A99 ; ID_Continue # Nd [10] TAI THAM THAM DIGIT ZERO..TAI THAM THAM DIGIT NINE +1AA7 ; ID_Continue # Lm TAI THAM SIGN MAI YAMOK +1B00..1B03 ; ID_Continue # Mn [4] BALINESE SIGN ULU RICEM..BALINESE SIGN SURANG +1B04 ; ID_Continue # Mc BALINESE SIGN BISAH +1B05..1B33 ; ID_Continue # Lo [47] BALINESE LETTER AKARA..BALINESE LETTER HA +1B34 ; ID_Continue # Mn BALINESE SIGN REREKAN +1B35 ; ID_Continue # Mc BALINESE VOWEL SIGN TEDUNG +1B36..1B3A ; ID_Continue # Mn [5] BALINESE VOWEL SIGN ULU..BALINESE VOWEL SIGN RA REPA +1B3B ; ID_Continue # Mc BALINESE VOWEL SIGN RA REPA TEDUNG +1B3C ; ID_Continue # Mn BALINESE VOWEL SIGN LA LENGA +1B3D..1B41 ; ID_Continue # Mc [5] BALINESE VOWEL SIGN LA LENGA TEDUNG..BALINESE VOWEL SIGN TALING REPA TEDUNG +1B42 ; ID_Continue # Mn BALINESE VOWEL SIGN PEPET +1B43..1B44 ; ID_Continue # Mc [2] BALINESE VOWEL SIGN PEPET TEDUNG..BALINESE ADEG ADEG +1B45..1B4B ; ID_Continue # Lo [7] BALINESE LETTER KAF SASAK..BALINESE LETTER ASYURA SASAK +1B50..1B59 ; ID_Continue 
# Nd [10] BALINESE DIGIT ZERO..BALINESE DIGIT NINE +1B6B..1B73 ; ID_Continue # Mn [9] BALINESE MUSICAL SYMBOL COMBINING TEGEH..BALINESE MUSICAL SYMBOL COMBINING GONG +1B80..1B81 ; ID_Continue # Mn [2] SUNDANESE SIGN PANYECEK..SUNDANESE SIGN PANGLAYAR +1B82 ; ID_Continue # Mc SUNDANESE SIGN PANGWISAD +1B83..1BA0 ; ID_Continue # Lo [30] SUNDANESE LETTER A..SUNDANESE LETTER HA +1BA1 ; ID_Continue # Mc SUNDANESE CONSONANT SIGN PAMINGKAL +1BA2..1BA5 ; ID_Continue # Mn [4] SUNDANESE CONSONANT SIGN PANYAKRA..SUNDANESE VOWEL SIGN PANYUKU +1BA6..1BA7 ; ID_Continue # Mc [2] SUNDANESE VOWEL SIGN PANAELAENG..SUNDANESE VOWEL SIGN PANOLONG +1BA8..1BA9 ; ID_Continue # Mn [2] SUNDANESE VOWEL SIGN PAMEPET..SUNDANESE VOWEL SIGN PANEULEUNG +1BAA ; ID_Continue # Mc SUNDANESE SIGN PAMAAEH +1BAE..1BAF ; ID_Continue # Lo [2] SUNDANESE LETTER KHA..SUNDANESE LETTER SYA +1BB0..1BB9 ; ID_Continue # Nd [10] SUNDANESE DIGIT ZERO..SUNDANESE DIGIT NINE +1C00..1C23 ; ID_Continue # Lo [36] LEPCHA LETTER KA..LEPCHA LETTER A +1C24..1C2B ; ID_Continue # Mc [8] LEPCHA SUBJOINED LETTER YA..LEPCHA VOWEL SIGN UU +1C2C..1C33 ; ID_Continue # Mn [8] LEPCHA VOWEL SIGN E..LEPCHA CONSONANT SIGN T +1C34..1C35 ; ID_Continue # Mc [2] LEPCHA CONSONANT SIGN NYIN-DO..LEPCHA CONSONANT SIGN KANG +1C36..1C37 ; ID_Continue # Mn [2] LEPCHA SIGN RAN..LEPCHA SIGN NUKTA +1C40..1C49 ; ID_Continue # Nd [10] LEPCHA DIGIT ZERO..LEPCHA DIGIT NINE +1C4D..1C4F ; ID_Continue # Lo [3] LEPCHA LETTER TTA..LEPCHA LETTER DDA +1C50..1C59 ; ID_Continue # Nd [10] OL CHIKI DIGIT ZERO..OL CHIKI DIGIT NINE +1C5A..1C77 ; ID_Continue # Lo [30] OL CHIKI LETTER LA..OL CHIKI LETTER OH +1C78..1C7D ; ID_Continue # Lm [6] OL CHIKI MU TTUDDAG..OL CHIKI AHAD +1CD0..1CD2 ; ID_Continue # Mn [3] VEDIC TONE KARSHANA..VEDIC TONE PRENKHA +1CD4..1CE0 ; ID_Continue # Mn [13] VEDIC SIGN YAJURVEDIC MIDLINE SVARITA..VEDIC TONE RIGVEDIC KASHMIRI INDEPENDENT SVARITA +1CE1 ; ID_Continue # Mc VEDIC TONE ATHARVAVEDIC INDEPENDENT SVARITA +1CE2..1CE8 ; ID_Continue # Mn 
[7] VEDIC SIGN VISARGA SVARITA..VEDIC SIGN VISARGA ANUDATTA WITH TAIL +1CE9..1CEC ; ID_Continue # Lo [4] VEDIC SIGN ANUSVARA ANTARGOMUKHA..VEDIC SIGN ANUSVARA VAMAGOMUKHA WITH TAIL +1CED ; ID_Continue # Mn VEDIC SIGN TIRYAK +1CEE..1CF1 ; ID_Continue # Lo [4] VEDIC SIGN HEXIFORM LONG ANUSVARA..VEDIC SIGN ANUSVARA UBHAYATO MUKHA +1CF2 ; ID_Continue # Mc VEDIC SIGN ARDHAVISARGA +1D00..1D2B ; ID_Continue # L& [44] LATIN LETTER SMALL CAPITAL A..CYRILLIC LETTER SMALL CAPITAL EL +1D2C..1D61 ; ID_Continue # Lm [54] MODIFIER LETTER CAPITAL A..MODIFIER LETTER SMALL CHI +1D62..1D77 ; ID_Continue # L& [22] LATIN SUBSCRIPT SMALL LETTER I..LATIN SMALL LETTER TURNED G +1D78 ; ID_Continue # Lm MODIFIER LETTER CYRILLIC EN +1D79..1D9A ; ID_Continue # L& [34] LATIN SMALL LETTER INSULAR G..LATIN SMALL LETTER EZH WITH RETROFLEX HOOK +1D9B..1DBF ; ID_Continue # Lm [37] MODIFIER LETTER SMALL TURNED ALPHA..MODIFIER LETTER SMALL THETA +1DC0..1DE6 ; ID_Continue # Mn [39] COMBINING DOTTED GRAVE ACCENT..COMBINING LATIN SMALL LETTER Z +1DFD..1DFF ; ID_Continue # Mn [3] COMBINING ALMOST EQUAL TO BELOW..COMBINING RIGHT ARROWHEAD AND DOWN ARROWHEAD BELOW +1E00..1F15 ; ID_Continue # L& [278] LATIN CAPITAL LETTER A WITH RING BELOW..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA +1F18..1F1D ; ID_Continue # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA +1F20..1F45 ; ID_Continue # L& [38] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA +1F48..1F4D ; ID_Continue # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F50..1F57 ; ID_Continue # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F59 ; ID_Continue # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; ID_Continue # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; ID_Continue # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F..1F7D ; 
ID_Continue # L& [31] GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI..GREEK SMALL LETTER OMEGA WITH OXIA +1F80..1FB4 ; ID_Continue # L& [53] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB6..1FBC ; ID_Continue # L& [7] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI +1FBE ; ID_Continue # L& GREEK PROSGEGRAMMENI +1FC2..1FC4 ; ID_Continue # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC6..1FCC ; ID_Continue # L& [7] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK CAPITAL LETTER ETA WITH PROSGEGRAMMENI +1FD0..1FD3 ; ID_Continue # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA +1FD6..1FDB ; ID_Continue # L& [6] GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK CAPITAL LETTER IOTA WITH OXIA +1FE0..1FEC ; ID_Continue # L& [13] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FF2..1FF4 ; ID_Continue # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF6..1FFC ; ID_Continue # L& [7] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI +203F..2040 ; ID_Continue # Pc [2] UNDERTIE..CHARACTER TIE +2054 ; ID_Continue # Pc INVERTED UNDERTIE +2071 ; ID_Continue # Lm SUPERSCRIPT LATIN SMALL LETTER I +207F ; ID_Continue # Lm SUPERSCRIPT LATIN SMALL LETTER N +2090..2094 ; ID_Continue # Lm [5] LATIN SUBSCRIPT SMALL LETTER A..LATIN SUBSCRIPT SMALL LETTER SCHWA +20D0..20DC ; ID_Continue # Mn [13] COMBINING LEFT HARPOON ABOVE..COMBINING FOUR DOTS ABOVE +20E1 ; ID_Continue # Mn COMBINING LEFT RIGHT ARROW ABOVE +20E5..20F0 ; ID_Continue # Mn [12] COMBINING REVERSE SOLIDUS OVERLAY..COMBINING ASTERISK ABOVE +2102 ; ID_Continue # L& DOUBLE-STRUCK CAPITAL C +2107 ; ID_Continue # L& EULER CONSTANT +210A..2113 ; ID_Continue # L& [10] SCRIPT 
SMALL G..SCRIPT SMALL L +2115 ; ID_Continue # L& DOUBLE-STRUCK CAPITAL N +2118 ; ID_Continue # So SCRIPT CAPITAL P +2119..211D ; ID_Continue # L& [5] DOUBLE-STRUCK CAPITAL P..DOUBLE-STRUCK CAPITAL R +2124 ; ID_Continue # L& DOUBLE-STRUCK CAPITAL Z +2126 ; ID_Continue # L& OHM SIGN +2128 ; ID_Continue # L& BLACK-LETTER CAPITAL Z +212A..212D ; ID_Continue # L& [4] KELVIN SIGN..BLACK-LETTER CAPITAL C +212E ; ID_Continue # So ESTIMATED SYMBOL +212F..2134 ; ID_Continue # L& [6] SCRIPT SMALL E..SCRIPT SMALL O +2135..2138 ; ID_Continue # Lo [4] ALEF SYMBOL..DALET SYMBOL +2139 ; ID_Continue # L& INFORMATION SOURCE +213C..213F ; ID_Continue # L& [4] DOUBLE-STRUCK SMALL PI..DOUBLE-STRUCK CAPITAL PI +2145..2149 ; ID_Continue # L& [5] DOUBLE-STRUCK ITALIC CAPITAL D..DOUBLE-STRUCK ITALIC SMALL J +214E ; ID_Continue # L& TURNED SMALL F +2160..2182 ; ID_Continue # Nl [35] ROMAN NUMERAL ONE..ROMAN NUMERAL TEN THOUSAND +2183..2184 ; ID_Continue # L& [2] ROMAN NUMERAL REVERSED ONE HUNDRED..LATIN SMALL LETTER REVERSED C +2185..2188 ; ID_Continue # Nl [4] ROMAN NUMERAL SIX LATE FORM..ROMAN NUMERAL ONE HUNDRED THOUSAND +2C00..2C2E ; ID_Continue # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC CAPITAL LETTER LATINATE MYSLITE +2C30..2C5E ; ID_Continue # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE +2C60..2C7C ; ID_Continue # L& [29] LATIN CAPITAL LETTER L WITH DOUBLE BAR..LATIN SUBSCRIPT SMALL LETTER J +2C7D ; ID_Continue # Lm MODIFIER LETTER CAPITAL V +2C7E..2CE4 ; ID_Continue # L& [103] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC SYMBOL KAI +2CEB..2CEE ; ID_Continue # L& [4] COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI..COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA +2CEF..2CF1 ; ID_Continue # Mn [3] COPTIC COMBINING NI ABOVE..COPTIC COMBINING SPIRITUS LENIS +2D00..2D25 ; ID_Continue # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE +2D30..2D65 ; ID_Continue # Lo [54] TIFINAGH LETTER YA..TIFINAGH LETTER YAZZ +2D6F ; ID_Continue # Lm TIFINAGH 
MODIFIER LETTER LABIALIZATION MARK +2D80..2D96 ; ID_Continue # Lo [23] ETHIOPIC SYLLABLE LOA..ETHIOPIC SYLLABLE GGWE +2DA0..2DA6 ; ID_Continue # Lo [7] ETHIOPIC SYLLABLE SSA..ETHIOPIC SYLLABLE SSO +2DA8..2DAE ; ID_Continue # Lo [7] ETHIOPIC SYLLABLE CCA..ETHIOPIC SYLLABLE CCO +2DB0..2DB6 ; ID_Continue # Lo [7] ETHIOPIC SYLLABLE ZZA..ETHIOPIC SYLLABLE ZZO +2DB8..2DBE ; ID_Continue # Lo [7] ETHIOPIC SYLLABLE CCHA..ETHIOPIC SYLLABLE CCHO +2DC0..2DC6 ; ID_Continue # Lo [7] ETHIOPIC SYLLABLE QYA..ETHIOPIC SYLLABLE QYO +2DC8..2DCE ; ID_Continue # Lo [7] ETHIOPIC SYLLABLE KYA..ETHIOPIC SYLLABLE KYO +2DD0..2DD6 ; ID_Continue # Lo [7] ETHIOPIC SYLLABLE XYA..ETHIOPIC SYLLABLE XYO +2DD8..2DDE ; ID_Continue # Lo [7] ETHIOPIC SYLLABLE GYA..ETHIOPIC SYLLABLE GYO +2DE0..2DFF ; ID_Continue # Mn [32] COMBINING CYRILLIC LETTER BE..COMBINING CYRILLIC LETTER IOTIFIED BIG YUS +3005 ; ID_Continue # Lm IDEOGRAPHIC ITERATION MARK +3006 ; ID_Continue # Lo IDEOGRAPHIC CLOSING MARK +3007 ; ID_Continue # Nl IDEOGRAPHIC NUMBER ZERO +3021..3029 ; ID_Continue # Nl [9] HANGZHOU NUMERAL ONE..HANGZHOU NUMERAL NINE +302A..302F ; ID_Continue # Mn [6] IDEOGRAPHIC LEVEL TONE MARK..HANGUL DOUBLE DOT TONE MARK +3031..3035 ; ID_Continue # Lm [5] VERTICAL KANA REPEAT MARK..VERTICAL KANA REPEAT MARK LOWER HALF +3038..303A ; ID_Continue # Nl [3] HANGZHOU NUMERAL TEN..HANGZHOU NUMERAL THIRTY +303B ; ID_Continue # Lm VERTICAL IDEOGRAPHIC ITERATION MARK +303C ; ID_Continue # Lo MASU MARK +3041..3096 ; ID_Continue # Lo [86] HIRAGANA LETTER SMALL A..HIRAGANA LETTER SMALL KE +3099..309A ; ID_Continue # Mn [2] COMBINING KATAKANA-HIRAGANA VOICED SOUND MARK..COMBINING KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK +309B..309C ; ID_Continue # Sk [2] KATAKANA-HIRAGANA VOICED SOUND MARK..KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK +309D..309E ; ID_Continue # Lm [2] HIRAGANA ITERATION MARK..HIRAGANA VOICED ITERATION MARK +309F ; ID_Continue # Lo HIRAGANA DIGRAPH YORI +30A1..30FA ; ID_Continue # Lo [90] KATAKANA LETTER SMALL 
A..KATAKANA LETTER VO +30FC..30FE ; ID_Continue # Lm [3] KATAKANA-HIRAGANA PROLONGED SOUND MARK..KATAKANA VOICED ITERATION MARK +30FF ; ID_Continue # Lo KATAKANA DIGRAPH KOTO +3105..312D ; ID_Continue # Lo [41] BOPOMOFO LETTER B..BOPOMOFO LETTER IH +3131..318E ; ID_Continue # Lo [94] HANGUL LETTER KIYEOK..HANGUL LETTER ARAEAE +31A0..31B7 ; ID_Continue # Lo [24] BOPOMOFO LETTER BU..BOPOMOFO FINAL LETTER H +31F0..31FF ; ID_Continue # Lo [16] KATAKANA LETTER SMALL KU..KATAKANA LETTER SMALL RO +3400..4DB5 ; ID_Continue # Lo [6582] CJK UNIFIED IDEOGRAPH-3400..CJK UNIFIED IDEOGRAPH-4DB5 +4E00..9FCB ; ID_Continue # Lo [20940] CJK UNIFIED IDEOGRAPH-4E00..CJK UNIFIED IDEOGRAPH-9FCB +A000..A014 ; ID_Continue # Lo [21] YI SYLLABLE IT..YI SYLLABLE E +A015 ; ID_Continue # Lm YI SYLLABLE WU +A016..A48C ; ID_Continue # Lo [1143] YI SYLLABLE BIT..YI SYLLABLE YYR +A4D0..A4F7 ; ID_Continue # Lo [40] LISU LETTER BA..LISU LETTER OE +A4F8..A4FD ; ID_Continue # Lm [6] LISU LETTER TONE MYA TI..LISU LETTER TONE MYA JEU +A500..A60B ; ID_Continue # Lo [268] VAI SYLLABLE EE..VAI SYLLABLE NG +A60C ; ID_Continue # Lm VAI SYLLABLE LENGTHENER +A610..A61F ; ID_Continue # Lo [16] VAI SYLLABLE NDOLE FA..VAI SYMBOL JONG +A620..A629 ; ID_Continue # Nd [10] VAI DIGIT ZERO..VAI DIGIT NINE +A62A..A62B ; ID_Continue # Lo [2] VAI SYLLABLE NDOLE MA..VAI SYLLABLE NDOLE DO +A640..A65F ; ID_Continue # L& [32] CYRILLIC CAPITAL LETTER ZEMLYA..CYRILLIC SMALL LETTER YN +A662..A66D ; ID_Continue # L& [12] CYRILLIC CAPITAL LETTER SOFT DE..CYRILLIC SMALL LETTER DOUBLE MONOCULAR O +A66E ; ID_Continue # Lo CYRILLIC LETTER MULTIOCULAR O +A66F ; ID_Continue # Mn COMBINING CYRILLIC VZMET +A67C..A67D ; ID_Continue # Mn [2] COMBINING CYRILLIC KAVYKA..COMBINING CYRILLIC PAYEROK +A67F ; ID_Continue # Lm CYRILLIC PAYEROK +A680..A697 ; ID_Continue # L& [24] CYRILLIC CAPITAL LETTER DWE..CYRILLIC SMALL LETTER SHWE +A6A0..A6E5 ; ID_Continue # Lo [70] BAMUM LETTER A..BAMUM LETTER KI +A6E6..A6EF ; ID_Continue # Nl [10] BAMUM LETTER 
MO..BAMUM LETTER KOGHOM +A6F0..A6F1 ; ID_Continue # Mn [2] BAMUM COMBINING MARK KOQNDON..BAMUM COMBINING MARK TUKWENTIS +A717..A71F ; ID_Continue # Lm [9] MODIFIER LETTER DOT VERTICAL BAR..MODIFIER LETTER LOW INVERTED EXCLAMATION MARK +A722..A76F ; ID_Continue # L& [78] LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF..LATIN SMALL LETTER CON +A770 ; ID_Continue # Lm MODIFIER LETTER US +A771..A787 ; ID_Continue # L& [23] LATIN SMALL LETTER DUM..LATIN SMALL LETTER INSULAR T +A788 ; ID_Continue # Lm MODIFIER LETTER LOW CIRCUMFLEX ACCENT +A78B..A78C ; ID_Continue # L& [2] LATIN CAPITAL LETTER SALTILLO..LATIN SMALL LETTER SALTILLO +A7FB..A801 ; ID_Continue # Lo [7] LATIN EPIGRAPHIC LETTER REVERSED F..SYLOTI NAGRI LETTER I +A802 ; ID_Continue # Mn SYLOTI NAGRI SIGN DVISVARA +A803..A805 ; ID_Continue # Lo [3] SYLOTI NAGRI LETTER U..SYLOTI NAGRI LETTER O +A806 ; ID_Continue # Mn SYLOTI NAGRI SIGN HASANTA +A807..A80A ; ID_Continue # Lo [4] SYLOTI NAGRI LETTER KO..SYLOTI NAGRI LETTER GHO +A80B ; ID_Continue # Mn SYLOTI NAGRI SIGN ANUSVARA +A80C..A822 ; ID_Continue # Lo [23] SYLOTI NAGRI LETTER CO..SYLOTI NAGRI LETTER HO +A823..A824 ; ID_Continue # Mc [2] SYLOTI NAGRI VOWEL SIGN A..SYLOTI NAGRI VOWEL SIGN I +A825..A826 ; ID_Continue # Mn [2] SYLOTI NAGRI VOWEL SIGN U..SYLOTI NAGRI VOWEL SIGN E +A827 ; ID_Continue # Mc SYLOTI NAGRI VOWEL SIGN OO +A840..A873 ; ID_Continue # Lo [52] PHAGS-PA LETTER KA..PHAGS-PA LETTER CANDRABINDU +A880..A881 ; ID_Continue # Mc [2] SAURASHTRA SIGN ANUSVARA..SAURASHTRA SIGN VISARGA +A882..A8B3 ; ID_Continue # Lo [50] SAURASHTRA LETTER A..SAURASHTRA LETTER LLA +A8B4..A8C3 ; ID_Continue # Mc [16] SAURASHTRA CONSONANT SIGN HAARU..SAURASHTRA VOWEL SIGN AU +A8C4 ; ID_Continue # Mn SAURASHTRA SIGN VIRAMA +A8D0..A8D9 ; ID_Continue # Nd [10] SAURASHTRA DIGIT ZERO..SAURASHTRA DIGIT NINE +A8E0..A8F1 ; ID_Continue # Mn [18] COMBINING DEVANAGARI DIGIT ZERO..COMBINING DEVANAGARI SIGN AVAGRAHA +A8F2..A8F7 ; ID_Continue # Lo [6] DEVANAGARI SIGN SPACING 
CANDRABINDU..DEVANAGARI SIGN CANDRABINDU AVAGRAHA +A8FB ; ID_Continue # Lo DEVANAGARI HEADSTROKE +A900..A909 ; ID_Continue # Nd [10] KAYAH LI DIGIT ZERO..KAYAH LI DIGIT NINE +A90A..A925 ; ID_Continue # Lo [28] KAYAH LI LETTER KA..KAYAH LI LETTER OO +A926..A92D ; ID_Continue # Mn [8] KAYAH LI VOWEL UE..KAYAH LI TONE CALYA PLOPHU +A930..A946 ; ID_Continue # Lo [23] REJANG LETTER KA..REJANG LETTER A +A947..A951 ; ID_Continue # Mn [11] REJANG VOWEL SIGN I..REJANG CONSONANT SIGN R +A952..A953 ; ID_Continue # Mc [2] REJANG CONSONANT SIGN H..REJANG VIRAMA +A960..A97C ; ID_Continue # Lo [29] HANGUL CHOSEONG TIKEUT-MIEUM..HANGUL CHOSEONG SSANGYEORINHIEUH +A980..A982 ; ID_Continue # Mn [3] JAVANESE SIGN PANYANGGA..JAVANESE SIGN LAYAR +A983 ; ID_Continue # Mc JAVANESE SIGN WIGNYAN +A984..A9B2 ; ID_Continue # Lo [47] JAVANESE LETTER A..JAVANESE LETTER HA +A9B3 ; ID_Continue # Mn JAVANESE SIGN CECAK TELU +A9B4..A9B5 ; ID_Continue # Mc [2] JAVANESE VOWEL SIGN TARUNG..JAVANESE VOWEL SIGN TOLONG +A9B6..A9B9 ; ID_Continue # Mn [4] JAVANESE VOWEL SIGN WULU..JAVANESE VOWEL SIGN SUKU MENDUT +A9BA..A9BB ; ID_Continue # Mc [2] JAVANESE VOWEL SIGN TALING..JAVANESE VOWEL SIGN DIRGA MURE +A9BC ; ID_Continue # Mn JAVANESE VOWEL SIGN PEPET +A9BD..A9C0 ; ID_Continue # Mc [4] JAVANESE CONSONANT SIGN KERET..JAVANESE PANGKON +A9CF ; ID_Continue # Lm JAVANESE PANGRANGKEP +A9D0..A9D9 ; ID_Continue # Nd [10] JAVANESE DIGIT ZERO..JAVANESE DIGIT NINE +AA00..AA28 ; ID_Continue # Lo [41] CHAM LETTER A..CHAM LETTER HA +AA29..AA2E ; ID_Continue # Mn [6] CHAM VOWEL SIGN AA..CHAM VOWEL SIGN OE +AA2F..AA30 ; ID_Continue # Mc [2] CHAM VOWEL SIGN O..CHAM VOWEL SIGN AI +AA31..AA32 ; ID_Continue # Mn [2] CHAM VOWEL SIGN AU..CHAM VOWEL SIGN UE +AA33..AA34 ; ID_Continue # Mc [2] CHAM CONSONANT SIGN YA..CHAM CONSONANT SIGN RA +AA35..AA36 ; ID_Continue # Mn [2] CHAM CONSONANT SIGN LA..CHAM CONSONANT SIGN WA +AA40..AA42 ; ID_Continue # Lo [3] CHAM LETTER FINAL K..CHAM LETTER FINAL NG +AA43 ; ID_Continue # Mn CHAM 
CONSONANT SIGN FINAL NG +AA44..AA4B ; ID_Continue # Lo [8] CHAM LETTER FINAL CH..CHAM LETTER FINAL SS +AA4C ; ID_Continue # Mn CHAM CONSONANT SIGN FINAL M +AA4D ; ID_Continue # Mc CHAM CONSONANT SIGN FINAL H +AA50..AA59 ; ID_Continue # Nd [10] CHAM DIGIT ZERO..CHAM DIGIT NINE +AA60..AA6F ; ID_Continue # Lo [16] MYANMAR LETTER KHAMTI GA..MYANMAR LETTER KHAMTI FA +AA70 ; ID_Continue # Lm MYANMAR MODIFIER LETTER KHAMTI REDUPLICATION +AA71..AA76 ; ID_Continue # Lo [6] MYANMAR LETTER KHAMTI XA..MYANMAR LOGOGRAM KHAMTI HM +AA7A ; ID_Continue # Lo MYANMAR LETTER AITON RA +AA7B ; ID_Continue # Mc MYANMAR SIGN PAO KAREN TONE +AA80..AAAF ; ID_Continue # Lo [48] TAI VIET LETTER LOW KO..TAI VIET LETTER HIGH O +AAB0 ; ID_Continue # Mn TAI VIET MAI KANG +AAB1 ; ID_Continue # Lo TAI VIET VOWEL AA +AAB2..AAB4 ; ID_Continue # Mn [3] TAI VIET VOWEL I..TAI VIET VOWEL U +AAB5..AAB6 ; ID_Continue # Lo [2] TAI VIET VOWEL E..TAI VIET VOWEL O +AAB7..AAB8 ; ID_Continue # Mn [2] TAI VIET MAI KHIT..TAI VIET VOWEL IA +AAB9..AABD ; ID_Continue # Lo [5] TAI VIET VOWEL UEA..TAI VIET VOWEL AN +AABE..AABF ; ID_Continue # Mn [2] TAI VIET VOWEL AM..TAI VIET TONE MAI EK +AAC0 ; ID_Continue # Lo TAI VIET TONE MAI NUENG +AAC1 ; ID_Continue # Mn TAI VIET TONE MAI THO +AAC2 ; ID_Continue # Lo TAI VIET TONE MAI SONG +AADB..AADC ; ID_Continue # Lo [2] TAI VIET SYMBOL KON..TAI VIET SYMBOL NUENG +AADD ; ID_Continue # Lm TAI VIET SYMBOL SAM +ABC0..ABE2 ; ID_Continue # Lo [35] MEETEI MAYEK LETTER KOK..MEETEI MAYEK LETTER I LONSUM +ABE3..ABE4 ; ID_Continue # Mc [2] MEETEI MAYEK VOWEL SIGN ONAP..MEETEI MAYEK VOWEL SIGN INAP +ABE5 ; ID_Continue # Mn MEETEI MAYEK VOWEL SIGN ANAP +ABE6..ABE7 ; ID_Continue # Mc [2] MEETEI MAYEK VOWEL SIGN YENAP..MEETEI MAYEK VOWEL SIGN SOUNAP +ABE8 ; ID_Continue # Mn MEETEI MAYEK VOWEL SIGN UNAP +ABE9..ABEA ; ID_Continue # Mc [2] MEETEI MAYEK VOWEL SIGN CHEINAP..MEETEI MAYEK VOWEL SIGN NUNG +ABEC ; ID_Continue # Mc MEETEI MAYEK LUM IYEK +ABED ; ID_Continue # Mn MEETEI MAYEK APUN 
IYEK +ABF0..ABF9 ; ID_Continue # Nd [10] MEETEI MAYEK DIGIT ZERO..MEETEI MAYEK DIGIT NINE +AC00..D7A3 ; ID_Continue # Lo [11172] HANGUL SYLLABLE GA..HANGUL SYLLABLE HIH +D7B0..D7C6 ; ID_Continue # Lo [23] HANGUL JUNGSEONG O-YEO..HANGUL JUNGSEONG ARAEA-E +D7CB..D7FB ; ID_Continue # Lo [49] HANGUL JONGSEONG NIEUN-RIEUL..HANGUL JONGSEONG PHIEUPH-THIEUTH +F900..FA2D ; ID_Continue # Lo [302] CJK COMPATIBILITY IDEOGRAPH-F900..CJK COMPATIBILITY IDEOGRAPH-FA2D +FA30..FA6D ; ID_Continue # Lo [62] CJK COMPATIBILITY IDEOGRAPH-FA30..CJK COMPATIBILITY IDEOGRAPH-FA6D +FA70..FAD9 ; ID_Continue # Lo [106] CJK COMPATIBILITY IDEOGRAPH-FA70..CJK COMPATIBILITY IDEOGRAPH-FAD9 +FB00..FB06 ; ID_Continue # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; ID_Continue # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FB1D ; ID_Continue # Lo HEBREW LETTER YOD WITH HIRIQ +FB1E ; ID_Continue # Mn HEBREW POINT JUDEO-SPANISH VARIKA +FB1F..FB28 ; ID_Continue # Lo [10] HEBREW LIGATURE YIDDISH YOD YOD PATAH..HEBREW LETTER WIDE TAV +FB2A..FB36 ; ID_Continue # Lo [13] HEBREW LETTER SHIN WITH SHIN DOT..HEBREW LETTER ZAYIN WITH DAGESH +FB38..FB3C ; ID_Continue # Lo [5] HEBREW LETTER TET WITH DAGESH..HEBREW LETTER LAMED WITH DAGESH +FB3E ; ID_Continue # Lo HEBREW LETTER MEM WITH DAGESH +FB40..FB41 ; ID_Continue # Lo [2] HEBREW LETTER NUN WITH DAGESH..HEBREW LETTER SAMEKH WITH DAGESH +FB43..FB44 ; ID_Continue # Lo [2] HEBREW LETTER FINAL PE WITH DAGESH..HEBREW LETTER PE WITH DAGESH +FB46..FBB1 ; ID_Continue # Lo [108] HEBREW LETTER TSADI WITH DAGESH..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE FINAL FORM +FBD3..FD3D ; ID_Continue # Lo [363] ARABIC LETTER NG ISOLATED FORM..ARABIC LIGATURE ALEF WITH FATHATAN ISOLATED FORM +FD50..FD8F ; ID_Continue # Lo [64] ARABIC LIGATURE TEH WITH JEEM WITH MEEM INITIAL FORM..ARABIC LIGATURE MEEM WITH KHAH WITH MEEM INITIAL FORM +FD92..FDC7 ; ID_Continue # Lo [54] ARABIC LIGATURE MEEM WITH JEEM WITH KHAH INITIAL FORM..ARABIC 
LIGATURE NOON WITH JEEM WITH YEH FINAL FORM +FDF0..FDFB ; ID_Continue # Lo [12] ARABIC LIGATURE SALLA USED AS KORANIC STOP SIGN ISOLATED FORM..ARABIC LIGATURE JALLAJALALOUHOU +FE00..FE0F ; ID_Continue # Mn [16] VARIATION SELECTOR-1..VARIATION SELECTOR-16 +FE20..FE26 ; ID_Continue # Mn [7] COMBINING LIGATURE LEFT HALF..COMBINING CONJOINING MACRON +FE33..FE34 ; ID_Continue # Pc [2] PRESENTATION FORM FOR VERTICAL LOW LINE..PRESENTATION FORM FOR VERTICAL WAVY LOW LINE +FE4D..FE4F ; ID_Continue # Pc [3] DASHED LOW LINE..WAVY LOW LINE +FE70..FE74 ; ID_Continue # Lo [5] ARABIC FATHATAN ISOLATED FORM..ARABIC KASRATAN ISOLATED FORM +FE76..FEFC ; ID_Continue # Lo [135] ARABIC FATHA ISOLATED FORM..ARABIC LIGATURE LAM WITH ALEF FINAL FORM +FF10..FF19 ; ID_Continue # Nd [10] FULLWIDTH DIGIT ZERO..FULLWIDTH DIGIT NINE +FF21..FF3A ; ID_Continue # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN CAPITAL LETTER Z +FF3F ; ID_Continue # Pc FULLWIDTH LOW LINE +FF41..FF5A ; ID_Continue # L& [26] FULLWIDTH LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z +FF66..FF6F ; ID_Continue # Lo [10] HALFWIDTH KATAKANA LETTER WO..HALFWIDTH KATAKANA LETTER SMALL TU +FF70 ; ID_Continue # Lm HALFWIDTH KATAKANA-HIRAGANA PROLONGED SOUND MARK +FF71..FF9D ; ID_Continue # Lo [45] HALFWIDTH KATAKANA LETTER A..HALFWIDTH KATAKANA LETTER N +FF9E..FF9F ; ID_Continue # Lm [2] HALFWIDTH KATAKANA VOICED SOUND MARK..HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK +FFA0..FFBE ; ID_Continue # Lo [31] HALFWIDTH HANGUL FILLER..HALFWIDTH HANGUL LETTER HIEUH +FFC2..FFC7 ; ID_Continue # Lo [6] HALFWIDTH HANGUL LETTER A..HALFWIDTH HANGUL LETTER E +FFCA..FFCF ; ID_Continue # Lo [6] HALFWIDTH HANGUL LETTER YEO..HALFWIDTH HANGUL LETTER OE +FFD2..FFD7 ; ID_Continue # Lo [6] HALFWIDTH HANGUL LETTER YO..HALFWIDTH HANGUL LETTER YU +FFDA..FFDC ; ID_Continue # Lo [3] HALFWIDTH HANGUL LETTER EU..HALFWIDTH HANGUL LETTER I +10000..1000B ; ID_Continue # Lo [12] LINEAR B SYLLABLE B008 A..LINEAR B SYLLABLE B046 JE +1000D..10026 ; 
ID_Continue # Lo [26] LINEAR B SYLLABLE B036 JO..LINEAR B SYLLABLE B032 QO +10028..1003A ; ID_Continue # Lo [19] LINEAR B SYLLABLE B060 RA..LINEAR B SYLLABLE B042 WO +1003C..1003D ; ID_Continue # Lo [2] LINEAR B SYLLABLE B017 ZA..LINEAR B SYLLABLE B074 ZE +1003F..1004D ; ID_Continue # Lo [15] LINEAR B SYLLABLE B020 ZO..LINEAR B SYLLABLE B091 TWO +10050..1005D ; ID_Continue # Lo [14] LINEAR B SYMBOL B018..LINEAR B SYMBOL B089 +10080..100FA ; ID_Continue # Lo [123] LINEAR B IDEOGRAM B100 MAN..LINEAR B IDEOGRAM VESSEL B305 +10140..10174 ; ID_Continue # Nl [53] GREEK ACROPHONIC ATTIC ONE QUARTER..GREEK ACROPHONIC STRATIAN FIFTY MNAS +101FD ; ID_Continue # Mn PHAISTOS DISC SIGN COMBINING OBLIQUE STROKE +10280..1029C ; ID_Continue # Lo [29] LYCIAN LETTER A..LYCIAN LETTER X +102A0..102D0 ; ID_Continue # Lo [49] CARIAN LETTER A..CARIAN LETTER UUU3 +10300..1031E ; ID_Continue # Lo [31] OLD ITALIC LETTER A..OLD ITALIC LETTER UU +10330..10340 ; ID_Continue # Lo [17] GOTHIC LETTER AHSA..GOTHIC LETTER PAIRTHRA +10341 ; ID_Continue # Nl GOTHIC LETTER NINETY +10342..10349 ; ID_Continue # Lo [8] GOTHIC LETTER RAIDA..GOTHIC LETTER OTHAL +1034A ; ID_Continue # Nl GOTHIC LETTER NINE HUNDRED +10380..1039D ; ID_Continue # Lo [30] UGARITIC LETTER ALPA..UGARITIC LETTER SSU +103A0..103C3 ; ID_Continue # Lo [36] OLD PERSIAN SIGN A..OLD PERSIAN SIGN HA +103C8..103CF ; ID_Continue # Lo [8] OLD PERSIAN SIGN AURAMAZDAA..OLD PERSIAN SIGN BUUMISH +103D1..103D5 ; ID_Continue # Nl [5] OLD PERSIAN NUMBER ONE..OLD PERSIAN NUMBER HUNDRED +10400..1044F ; ID_Continue # L& [80] DESERET CAPITAL LETTER LONG I..DESERET SMALL LETTER EW +10450..1049D ; ID_Continue # Lo [78] SHAVIAN LETTER PEEP..OSMANYA LETTER OO +104A0..104A9 ; ID_Continue # Nd [10] OSMANYA DIGIT ZERO..OSMANYA DIGIT NINE +10800..10805 ; ID_Continue # Lo [6] CYPRIOT SYLLABLE A..CYPRIOT SYLLABLE JA +10808 ; ID_Continue # Lo CYPRIOT SYLLABLE JO +1080A..10835 ; ID_Continue # Lo [44] CYPRIOT SYLLABLE KA..CYPRIOT SYLLABLE WO +10837..10838 ; 
ID_Continue # Lo [2] CYPRIOT SYLLABLE XA..CYPRIOT SYLLABLE XE +1083C ; ID_Continue # Lo CYPRIOT SYLLABLE ZA +1083F..10855 ; ID_Continue # Lo [23] CYPRIOT SYLLABLE ZO..IMPERIAL ARAMAIC LETTER TAW +10900..10915 ; ID_Continue # Lo [22] PHOENICIAN LETTER ALF..PHOENICIAN LETTER TAU +10920..10939 ; ID_Continue # Lo [26] LYDIAN LETTER A..LYDIAN LETTER C +10A00 ; ID_Continue # Lo KHAROSHTHI LETTER A +10A01..10A03 ; ID_Continue # Mn [3] KHAROSHTHI VOWEL SIGN I..KHAROSHTHI VOWEL SIGN VOCALIC R +10A05..10A06 ; ID_Continue # Mn [2] KHAROSHTHI VOWEL SIGN E..KHAROSHTHI VOWEL SIGN O +10A0C..10A0F ; ID_Continue # Mn [4] KHAROSHTHI VOWEL LENGTH MARK..KHAROSHTHI SIGN VISARGA +10A10..10A13 ; ID_Continue # Lo [4] KHAROSHTHI LETTER KA..KHAROSHTHI LETTER GHA +10A15..10A17 ; ID_Continue # Lo [3] KHAROSHTHI LETTER CA..KHAROSHTHI LETTER JA +10A19..10A33 ; ID_Continue # Lo [27] KHAROSHTHI LETTER NYA..KHAROSHTHI LETTER TTTHA +10A38..10A3A ; ID_Continue # Mn [3] KHAROSHTHI SIGN BAR ABOVE..KHAROSHTHI SIGN DOT BELOW +10A3F ; ID_Continue # Mn KHAROSHTHI VIRAMA +10A60..10A7C ; ID_Continue # Lo [29] OLD SOUTH ARABIAN LETTER HE..OLD SOUTH ARABIAN LETTER THETH +10B00..10B35 ; ID_Continue # Lo [54] AVESTAN LETTER A..AVESTAN LETTER HE +10B40..10B55 ; ID_Continue # Lo [22] INSCRIPTIONAL PARTHIAN LETTER ALEPH..INSCRIPTIONAL PARTHIAN LETTER TAW +10B60..10B72 ; ID_Continue # Lo [19] INSCRIPTIONAL PAHLAVI LETTER ALEPH..INSCRIPTIONAL PAHLAVI LETTER TAW +10C00..10C48 ; ID_Continue # Lo [73] OLD TURKIC LETTER ORKHON A..OLD TURKIC LETTER ORKHON BASH +11080..11081 ; ID_Continue # Mn [2] KAITHI SIGN CANDRABINDU..KAITHI SIGN ANUSVARA +11082 ; ID_Continue # Mc KAITHI SIGN VISARGA +11083..110AF ; ID_Continue # Lo [45] KAITHI LETTER A..KAITHI LETTER HA +110B0..110B2 ; ID_Continue # Mc [3] KAITHI VOWEL SIGN AA..KAITHI VOWEL SIGN II +110B3..110B6 ; ID_Continue # Mn [4] KAITHI VOWEL SIGN U..KAITHI VOWEL SIGN AI +110B7..110B8 ; ID_Continue # Mc [2] KAITHI VOWEL SIGN O..KAITHI VOWEL SIGN AU +110B9..110BA ; ID_Continue # 
Mn [2] KAITHI SIGN VIRAMA..KAITHI SIGN NUKTA +12000..1236E ; ID_Continue # Lo [879] CUNEIFORM SIGN A..CUNEIFORM SIGN ZUM +12400..12462 ; ID_Continue # Nl [99] CUNEIFORM NUMERIC SIGN TWO ASH..CUNEIFORM NUMERIC SIGN OLD ASSYRIAN ONE QUARTER +13000..1342E ; ID_Continue # Lo [1071] EGYPTIAN HIEROGLYPH A001..EGYPTIAN HIEROGLYPH AA032 +1D165..1D166 ; ID_Continue # Mc [2] MUSICAL SYMBOL COMBINING STEM..MUSICAL SYMBOL COMBINING SPRECHGESANG STEM +1D167..1D169 ; ID_Continue # Mn [3] MUSICAL SYMBOL COMBINING TREMOLO-1..MUSICAL SYMBOL COMBINING TREMOLO-3 +1D16D..1D172 ; ID_Continue # Mc [6] MUSICAL SYMBOL COMBINING AUGMENTATION DOT..MUSICAL SYMBOL COMBINING FLAG-5 +1D17B..1D182 ; ID_Continue # Mn [8] MUSICAL SYMBOL COMBINING ACCENT..MUSICAL SYMBOL COMBINING LOURE +1D185..1D18B ; ID_Continue # Mn [7] MUSICAL SYMBOL COMBINING DOIT..MUSICAL SYMBOL COMBINING TRIPLE TONGUE +1D1AA..1D1AD ; ID_Continue # Mn [4] MUSICAL SYMBOL COMBINING DOWN BOW..MUSICAL SYMBOL COMBINING SNAP PIZZICATO +1D242..1D244 ; ID_Continue # Mn [3] COMBINING GREEK MUSICAL TRISEME..COMBINING GREEK MUSICAL PENTASEME +1D400..1D454 ; ID_Continue # L& [85] MATHEMATICAL BOLD CAPITAL A..MATHEMATICAL ITALIC SMALL G +1D456..1D49C ; ID_Continue # L& [71] MATHEMATICAL ITALIC SMALL I..MATHEMATICAL SCRIPT CAPITAL A +1D49E..1D49F ; ID_Continue # L& [2] MATHEMATICAL SCRIPT CAPITAL C..MATHEMATICAL SCRIPT CAPITAL D +1D4A2 ; ID_Continue # L& MATHEMATICAL SCRIPT CAPITAL G +1D4A5..1D4A6 ; ID_Continue # L& [2] MATHEMATICAL SCRIPT CAPITAL J..MATHEMATICAL SCRIPT CAPITAL K +1D4A9..1D4AC ; ID_Continue # L& [4] MATHEMATICAL SCRIPT CAPITAL N..MATHEMATICAL SCRIPT CAPITAL Q +1D4AE..1D4B9 ; ID_Continue # L& [12] MATHEMATICAL SCRIPT CAPITAL S..MATHEMATICAL SCRIPT SMALL D +1D4BB ; ID_Continue # L& MATHEMATICAL SCRIPT SMALL F +1D4BD..1D4C3 ; ID_Continue # L& [7] MATHEMATICAL SCRIPT SMALL H..MATHEMATICAL SCRIPT SMALL N +1D4C5..1D505 ; ID_Continue # L& [65] MATHEMATICAL SCRIPT SMALL P..MATHEMATICAL FRAKTUR CAPITAL B +1D507..1D50A ; ID_Continue 
# L& [4] MATHEMATICAL FRAKTUR CAPITAL D..MATHEMATICAL FRAKTUR CAPITAL G +1D50D..1D514 ; ID_Continue # L& [8] MATHEMATICAL FRAKTUR CAPITAL J..MATHEMATICAL FRAKTUR CAPITAL Q +1D516..1D51C ; ID_Continue # L& [7] MATHEMATICAL FRAKTUR CAPITAL S..MATHEMATICAL FRAKTUR CAPITAL Y +1D51E..1D539 ; ID_Continue # L& [28] MATHEMATICAL FRAKTUR SMALL A..MATHEMATICAL DOUBLE-STRUCK CAPITAL B +1D53B..1D53E ; ID_Continue # L& [4] MATHEMATICAL DOUBLE-STRUCK CAPITAL D..MATHEMATICAL DOUBLE-STRUCK CAPITAL G +1D540..1D544 ; ID_Continue # L& [5] MATHEMATICAL DOUBLE-STRUCK CAPITAL I..MATHEMATICAL DOUBLE-STRUCK CAPITAL M +1D546 ; ID_Continue # L& MATHEMATICAL DOUBLE-STRUCK CAPITAL O +1D54A..1D550 ; ID_Continue # L& [7] MATHEMATICAL DOUBLE-STRUCK CAPITAL S..MATHEMATICAL DOUBLE-STRUCK CAPITAL Y +1D552..1D6A5 ; ID_Continue # L& [340] MATHEMATICAL DOUBLE-STRUCK SMALL A..MATHEMATICAL ITALIC SMALL DOTLESS J +1D6A8..1D6C0 ; ID_Continue # L& [25] MATHEMATICAL BOLD CAPITAL ALPHA..MATHEMATICAL BOLD CAPITAL OMEGA +1D6C2..1D6DA ; ID_Continue # L& [25] MATHEMATICAL BOLD SMALL ALPHA..MATHEMATICAL BOLD SMALL OMEGA +1D6DC..1D6FA ; ID_Continue # L& [31] MATHEMATICAL BOLD EPSILON SYMBOL..MATHEMATICAL ITALIC CAPITAL OMEGA +1D6FC..1D714 ; ID_Continue # L& [25] MATHEMATICAL ITALIC SMALL ALPHA..MATHEMATICAL ITALIC SMALL OMEGA +1D716..1D734 ; ID_Continue # L& [31] MATHEMATICAL ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD ITALIC CAPITAL OMEGA +1D736..1D74E ; ID_Continue # L& [25] MATHEMATICAL BOLD ITALIC SMALL ALPHA..MATHEMATICAL BOLD ITALIC SMALL OMEGA +1D750..1D76E ; ID_Continue # L& [31] MATHEMATICAL BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD CAPITAL OMEGA +1D770..1D788 ; ID_Continue # L& [25] MATHEMATICAL SANS-SERIF BOLD SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD SMALL OMEGA +1D78A..1D7A8 ; ID_Continue # L& [31] MATHEMATICAL SANS-SERIF BOLD EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMEGA +1D7AA..1D7C2 ; ID_Continue # L& [25] MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA..MATHEMATICAL 
SANS-SERIF BOLD ITALIC SMALL OMEGA
+1D7C4..1D7CB ; ID_Continue # L& [8] MATHEMATICAL SANS-SERIF BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD SMALL DIGAMMA
+1D7CE..1D7FF ; ID_Continue # Nd [50] MATHEMATICAL BOLD DIGIT ZERO..MATHEMATICAL MONOSPACE DIGIT NINE
+20000..2A6D6 ; ID_Continue # Lo [42711] CJK UNIFIED IDEOGRAPH-20000..CJK UNIFIED IDEOGRAPH-2A6D6
+2A700..2B734 ; ID_Continue # Lo [4149] CJK UNIFIED IDEOGRAPH-2A700..CJK UNIFIED IDEOGRAPH-2B734
+2F800..2FA1D ; ID_Continue # Lo [542] CJK COMPATIBILITY IDEOGRAPH-2F800..CJK COMPATIBILITY IDEOGRAPH-2FA1D
+E0100..E01EF ; ID_Continue # Mn [240] VARIATION SELECTOR-17..VARIATION SELECTOR-256
+
+# Total code points: 101634
+
+# ================================================
+
+# Derived Property: XID_Start
+# ID_Start modified for closure under NFKx
+# Modified as described in UAX #15
+# NOTE: Does NOT remove the non-NFKx characters.
+# Merely ensures that if isIdentifier(string) then isIdentifier(NFKx(string))
+# NOTE: See UAX #31 for more information
+
+0041..005A ; XID_Start # L& [26] LATIN CAPITAL LETTER A..LATIN CAPITAL LETTER Z
+0061..007A ; XID_Start # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z
+00AA ; XID_Start # L& FEMININE ORDINAL INDICATOR
+00B5 ; XID_Start # L& MICRO SIGN
+00BA ; XID_Start # L& MASCULINE ORDINAL INDICATOR
+00C0..00D6 ; XID_Start # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS
+00D8..00F6 ; XID_Start # L& [31] LATIN CAPITAL LETTER O WITH STROKE..LATIN SMALL LETTER O WITH DIAERESIS
+00F8..01BA ; XID_Start # L& [195] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER EZH WITH TAIL
+01BB ; XID_Start # Lo LATIN LETTER TWO WITH STROKE
+01BC..01BF ; XID_Start # L& [4] LATIN CAPITAL LETTER TONE FIVE..LATIN LETTER WYNN
+01C0..01C3 ; XID_Start # Lo [4] LATIN LETTER DENTAL CLICK..LATIN LETTER RETROFLEX CLICK
+01C4..0293 ; XID_Start # L& [208] LATIN CAPITAL LETTER DZ WITH CARON..LATIN SMALL LETTER EZH WITH CURL
+0294 ; XID_Start # Lo LATIN LETTER GLOTTAL
STOP +0295..02AF ; XID_Start # L& [27] LATIN LETTER PHARYNGEAL VOICED FRICATIVE..LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL +02B0..02C1 ; XID_Start # Lm [18] MODIFIER LETTER SMALL H..MODIFIER LETTER REVERSED GLOTTAL STOP +02C6..02D1 ; XID_Start # Lm [12] MODIFIER LETTER CIRCUMFLEX ACCENT..MODIFIER LETTER HALF TRIANGULAR COLON +02E0..02E4 ; XID_Start # Lm [5] MODIFIER LETTER SMALL GAMMA..MODIFIER LETTER SMALL REVERSED GLOTTAL STOP +02EC ; XID_Start # Lm MODIFIER LETTER VOICING +02EE ; XID_Start # Lm MODIFIER LETTER DOUBLE APOSTROPHE +0370..0373 ; XID_Start # L& [4] GREEK CAPITAL LETTER HETA..GREEK SMALL LETTER ARCHAIC SAMPI +0374 ; XID_Start # Lm GREEK NUMERAL SIGN +0376..0377 ; XID_Start # L& [2] GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA..GREEK SMALL LETTER PAMPHYLIAN DIGAMMA +037B..037D ; XID_Start # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL +0386 ; XID_Start # L& GREEK CAPITAL LETTER ALPHA WITH TONOS +0388..038A ; XID_Start # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS +038C ; XID_Start # L& GREEK CAPITAL LETTER OMICRON WITH TONOS +038E..03A1 ; XID_Start # L& [20] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER RHO +03A3..03F5 ; XID_Start # L& [83] GREEK CAPITAL LETTER SIGMA..GREEK LUNATE EPSILON SYMBOL +03F7..0481 ; XID_Start # L& [139] GREEK CAPITAL LETTER SHO..CYRILLIC SMALL LETTER KOPPA +048A..0525 ; XID_Start # L& [156] CYRILLIC CAPITAL LETTER SHORT I WITH TAIL..CYRILLIC SMALL LETTER PE WITH DESCENDER +0531..0556 ; XID_Start # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH +0559 ; XID_Start # Lm ARMENIAN MODIFIER LETTER LEFT HALF RING +0561..0587 ; XID_Start # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN +05D0..05EA ; XID_Start # Lo [27] HEBREW LETTER ALEF..HEBREW LETTER TAV +05F0..05F2 ; XID_Start # Lo [3] HEBREW LIGATURE YIDDISH DOUBLE VAV..HEBREW LIGATURE YIDDISH DOUBLE YOD +0621..063F ; XID_Start # 
Lo [31] ARABIC LETTER HAMZA..ARABIC LETTER FARSI YEH WITH THREE DOTS ABOVE +0640 ; XID_Start # Lm ARABIC TATWEEL +0641..064A ; XID_Start # Lo [10] ARABIC LETTER FEH..ARABIC LETTER YEH +066E..066F ; XID_Start # Lo [2] ARABIC LETTER DOTLESS BEH..ARABIC LETTER DOTLESS QAF +0671..06D3 ; XID_Start # Lo [99] ARABIC LETTER ALEF WASLA..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE +06D5 ; XID_Start # Lo ARABIC LETTER AE +06E5..06E6 ; XID_Start # Lm [2] ARABIC SMALL WAW..ARABIC SMALL YEH +06EE..06EF ; XID_Start # Lo [2] ARABIC LETTER DAL WITH INVERTED V..ARABIC LETTER REH WITH INVERTED V +06FA..06FC ; XID_Start # Lo [3] ARABIC LETTER SHEEN WITH DOT BELOW..ARABIC LETTER GHAIN WITH DOT BELOW +06FF ; XID_Start # Lo ARABIC LETTER HEH WITH INVERTED V +0710 ; XID_Start # Lo SYRIAC LETTER ALAPH +0712..072F ; XID_Start # Lo [30] SYRIAC LETTER BETH..SYRIAC LETTER PERSIAN DHALATH +074D..07A5 ; XID_Start # Lo [89] SYRIAC LETTER SOGDIAN ZHAIN..THAANA LETTER WAAVU +07B1 ; XID_Start # Lo THAANA LETTER NAA +07CA..07EA ; XID_Start # Lo [33] NKO LETTER A..NKO LETTER JONA RA +07F4..07F5 ; XID_Start # Lm [2] NKO HIGH TONE APOSTROPHE..NKO LOW TONE APOSTROPHE +07FA ; XID_Start # Lm NKO LAJANYALAN +0800..0815 ; XID_Start # Lo [22] SAMARITAN LETTER ALAF..SAMARITAN LETTER TAAF +081A ; XID_Start # Lm SAMARITAN MODIFIER LETTER EPENTHETIC YUT +0824 ; XID_Start # Lm SAMARITAN MODIFIER LETTER SHORT A +0828 ; XID_Start # Lm SAMARITAN MODIFIER LETTER I +0904..0939 ; XID_Start # Lo [54] DEVANAGARI LETTER SHORT A..DEVANAGARI LETTER HA +093D ; XID_Start # Lo DEVANAGARI SIGN AVAGRAHA +0950 ; XID_Start # Lo DEVANAGARI OM +0958..0961 ; XID_Start # Lo [10] DEVANAGARI LETTER QA..DEVANAGARI LETTER VOCALIC LL +0971 ; XID_Start # Lm DEVANAGARI SIGN HIGH SPACING DOT +0972 ; XID_Start # Lo DEVANAGARI LETTER CANDRA A +0979..097F ; XID_Start # Lo [7] DEVANAGARI LETTER ZHA..DEVANAGARI LETTER BBA +0985..098C ; XID_Start # Lo [8] BENGALI LETTER A..BENGALI LETTER VOCALIC L +098F..0990 ; XID_Start # Lo [2] BENGALI LETTER 
E..BENGALI LETTER AI +0993..09A8 ; XID_Start # Lo [22] BENGALI LETTER O..BENGALI LETTER NA +09AA..09B0 ; XID_Start # Lo [7] BENGALI LETTER PA..BENGALI LETTER RA +09B2 ; XID_Start # Lo BENGALI LETTER LA +09B6..09B9 ; XID_Start # Lo [4] BENGALI LETTER SHA..BENGALI LETTER HA +09BD ; XID_Start # Lo BENGALI SIGN AVAGRAHA +09CE ; XID_Start # Lo BENGALI LETTER KHANDA TA +09DC..09DD ; XID_Start # Lo [2] BENGALI LETTER RRA..BENGALI LETTER RHA +09DF..09E1 ; XID_Start # Lo [3] BENGALI LETTER YYA..BENGALI LETTER VOCALIC LL +09F0..09F1 ; XID_Start # Lo [2] BENGALI LETTER RA WITH MIDDLE DIAGONAL..BENGALI LETTER RA WITH LOWER DIAGONAL +0A05..0A0A ; XID_Start # Lo [6] GURMUKHI LETTER A..GURMUKHI LETTER UU +0A0F..0A10 ; XID_Start # Lo [2] GURMUKHI LETTER EE..GURMUKHI LETTER AI +0A13..0A28 ; XID_Start # Lo [22] GURMUKHI LETTER OO..GURMUKHI LETTER NA +0A2A..0A30 ; XID_Start # Lo [7] GURMUKHI LETTER PA..GURMUKHI LETTER RA +0A32..0A33 ; XID_Start # Lo [2] GURMUKHI LETTER LA..GURMUKHI LETTER LLA +0A35..0A36 ; XID_Start # Lo [2] GURMUKHI LETTER VA..GURMUKHI LETTER SHA +0A38..0A39 ; XID_Start # Lo [2] GURMUKHI LETTER SA..GURMUKHI LETTER HA +0A59..0A5C ; XID_Start # Lo [4] GURMUKHI LETTER KHHA..GURMUKHI LETTER RRA +0A5E ; XID_Start # Lo GURMUKHI LETTER FA +0A72..0A74 ; XID_Start # Lo [3] GURMUKHI IRI..GURMUKHI EK ONKAR +0A85..0A8D ; XID_Start # Lo [9] GUJARATI LETTER A..GUJARATI VOWEL CANDRA E +0A8F..0A91 ; XID_Start # Lo [3] GUJARATI LETTER E..GUJARATI VOWEL CANDRA O +0A93..0AA8 ; XID_Start # Lo [22] GUJARATI LETTER O..GUJARATI LETTER NA +0AAA..0AB0 ; XID_Start # Lo [7] GUJARATI LETTER PA..GUJARATI LETTER RA +0AB2..0AB3 ; XID_Start # Lo [2] GUJARATI LETTER LA..GUJARATI LETTER LLA +0AB5..0AB9 ; XID_Start # Lo [5] GUJARATI LETTER VA..GUJARATI LETTER HA +0ABD ; XID_Start # Lo GUJARATI SIGN AVAGRAHA +0AD0 ; XID_Start # Lo GUJARATI OM +0AE0..0AE1 ; XID_Start # Lo [2] GUJARATI LETTER VOCALIC RR..GUJARATI LETTER VOCALIC LL +0B05..0B0C ; XID_Start # Lo [8] ORIYA LETTER A..ORIYA LETTER VOCALIC L 
+0B0F..0B10 ; XID_Start # Lo [2] ORIYA LETTER E..ORIYA LETTER AI +0B13..0B28 ; XID_Start # Lo [22] ORIYA LETTER O..ORIYA LETTER NA +0B2A..0B30 ; XID_Start # Lo [7] ORIYA LETTER PA..ORIYA LETTER RA +0B32..0B33 ; XID_Start # Lo [2] ORIYA LETTER LA..ORIYA LETTER LLA +0B35..0B39 ; XID_Start # Lo [5] ORIYA LETTER VA..ORIYA LETTER HA +0B3D ; XID_Start # Lo ORIYA SIGN AVAGRAHA +0B5C..0B5D ; XID_Start # Lo [2] ORIYA LETTER RRA..ORIYA LETTER RHA +0B5F..0B61 ; XID_Start # Lo [3] ORIYA LETTER YYA..ORIYA LETTER VOCALIC LL +0B71 ; XID_Start # Lo ORIYA LETTER WA +0B83 ; XID_Start # Lo TAMIL SIGN VISARGA +0B85..0B8A ; XID_Start # Lo [6] TAMIL LETTER A..TAMIL LETTER UU +0B8E..0B90 ; XID_Start # Lo [3] TAMIL LETTER E..TAMIL LETTER AI +0B92..0B95 ; XID_Start # Lo [4] TAMIL LETTER O..TAMIL LETTER KA +0B99..0B9A ; XID_Start # Lo [2] TAMIL LETTER NGA..TAMIL LETTER CA +0B9C ; XID_Start # Lo TAMIL LETTER JA +0B9E..0B9F ; XID_Start # Lo [2] TAMIL LETTER NYA..TAMIL LETTER TTA +0BA3..0BA4 ; XID_Start # Lo [2] TAMIL LETTER NNA..TAMIL LETTER TA +0BA8..0BAA ; XID_Start # Lo [3] TAMIL LETTER NA..TAMIL LETTER PA +0BAE..0BB9 ; XID_Start # Lo [12] TAMIL LETTER MA..TAMIL LETTER HA +0BD0 ; XID_Start # Lo TAMIL OM +0C05..0C0C ; XID_Start # Lo [8] TELUGU LETTER A..TELUGU LETTER VOCALIC L +0C0E..0C10 ; XID_Start # Lo [3] TELUGU LETTER E..TELUGU LETTER AI +0C12..0C28 ; XID_Start # Lo [23] TELUGU LETTER O..TELUGU LETTER NA +0C2A..0C33 ; XID_Start # Lo [10] TELUGU LETTER PA..TELUGU LETTER LLA +0C35..0C39 ; XID_Start # Lo [5] TELUGU LETTER VA..TELUGU LETTER HA +0C3D ; XID_Start # Lo TELUGU SIGN AVAGRAHA +0C58..0C59 ; XID_Start # Lo [2] TELUGU LETTER TSA..TELUGU LETTER DZA +0C60..0C61 ; XID_Start # Lo [2] TELUGU LETTER VOCALIC RR..TELUGU LETTER VOCALIC LL +0C85..0C8C ; XID_Start # Lo [8] KANNADA LETTER A..KANNADA LETTER VOCALIC L +0C8E..0C90 ; XID_Start # Lo [3] KANNADA LETTER E..KANNADA LETTER AI +0C92..0CA8 ; XID_Start # Lo [23] KANNADA LETTER O..KANNADA LETTER NA +0CAA..0CB3 ; XID_Start # Lo [10] KANNADA 
LETTER PA..KANNADA LETTER LLA +0CB5..0CB9 ; XID_Start # Lo [5] KANNADA LETTER VA..KANNADA LETTER HA +0CBD ; XID_Start # Lo KANNADA SIGN AVAGRAHA +0CDE ; XID_Start # Lo KANNADA LETTER FA +0CE0..0CE1 ; XID_Start # Lo [2] KANNADA LETTER VOCALIC RR..KANNADA LETTER VOCALIC LL +0D05..0D0C ; XID_Start # Lo [8] MALAYALAM LETTER A..MALAYALAM LETTER VOCALIC L +0D0E..0D10 ; XID_Start # Lo [3] MALAYALAM LETTER E..MALAYALAM LETTER AI +0D12..0D28 ; XID_Start # Lo [23] MALAYALAM LETTER O..MALAYALAM LETTER NA +0D2A..0D39 ; XID_Start # Lo [16] MALAYALAM LETTER PA..MALAYALAM LETTER HA +0D3D ; XID_Start # Lo MALAYALAM SIGN AVAGRAHA +0D60..0D61 ; XID_Start # Lo [2] MALAYALAM LETTER VOCALIC RR..MALAYALAM LETTER VOCALIC LL +0D7A..0D7F ; XID_Start # Lo [6] MALAYALAM LETTER CHILLU NN..MALAYALAM LETTER CHILLU K +0D85..0D96 ; XID_Start # Lo [18] SINHALA LETTER AYANNA..SINHALA LETTER AUYANNA +0D9A..0DB1 ; XID_Start # Lo [24] SINHALA LETTER ALPAPRAANA KAYANNA..SINHALA LETTER DANTAJA NAYANNA +0DB3..0DBB ; XID_Start # Lo [9] SINHALA LETTER SANYAKA DAYANNA..SINHALA LETTER RAYANNA +0DBD ; XID_Start # Lo SINHALA LETTER DANTAJA LAYANNA +0DC0..0DC6 ; XID_Start # Lo [7] SINHALA LETTER VAYANNA..SINHALA LETTER FAYANNA +0E01..0E30 ; XID_Start # Lo [48] THAI CHARACTER KO KAI..THAI CHARACTER SARA A +0E32 ; XID_Start # Lo THAI CHARACTER SARA AA +0E40..0E45 ; XID_Start # Lo [6] THAI CHARACTER SARA E..THAI CHARACTER LAKKHANGYAO +0E46 ; XID_Start # Lm THAI CHARACTER MAIYAMOK +0E81..0E82 ; XID_Start # Lo [2] LAO LETTER KO..LAO LETTER KHO SUNG +0E84 ; XID_Start # Lo LAO LETTER KHO TAM +0E87..0E88 ; XID_Start # Lo [2] LAO LETTER NGO..LAO LETTER CO +0E8A ; XID_Start # Lo LAO LETTER SO TAM +0E8D ; XID_Start # Lo LAO LETTER NYO +0E94..0E97 ; XID_Start # Lo [4] LAO LETTER DO..LAO LETTER THO TAM +0E99..0E9F ; XID_Start # Lo [7] LAO LETTER NO..LAO LETTER FO SUNG +0EA1..0EA3 ; XID_Start # Lo [3] LAO LETTER MO..LAO LETTER LO LING +0EA5 ; XID_Start # Lo LAO LETTER LO LOOT +0EA7 ; XID_Start # Lo LAO LETTER WO +0EAA..0EAB 
; XID_Start # Lo [2] LAO LETTER SO SUNG..LAO LETTER HO SUNG +0EAD..0EB0 ; XID_Start # Lo [4] LAO LETTER O..LAO VOWEL SIGN A +0EB2 ; XID_Start # Lo LAO VOWEL SIGN AA +0EBD ; XID_Start # Lo LAO SEMIVOWEL SIGN NYO +0EC0..0EC4 ; XID_Start # Lo [5] LAO VOWEL SIGN E..LAO VOWEL SIGN AI +0EC6 ; XID_Start # Lm LAO KO LA +0EDC..0EDD ; XID_Start # Lo [2] LAO HO NO..LAO HO MO +0F00 ; XID_Start # Lo TIBETAN SYLLABLE OM +0F40..0F47 ; XID_Start # Lo [8] TIBETAN LETTER KA..TIBETAN LETTER JA +0F49..0F6C ; XID_Start # Lo [36] TIBETAN LETTER NYA..TIBETAN LETTER RRA +0F88..0F8B ; XID_Start # Lo [4] TIBETAN SIGN LCE TSA CAN..TIBETAN SIGN GRU MED RGYINGS +1000..102A ; XID_Start # Lo [43] MYANMAR LETTER KA..MYANMAR LETTER AU +103F ; XID_Start # Lo MYANMAR LETTER GREAT SA +1050..1055 ; XID_Start # Lo [6] MYANMAR LETTER SHA..MYANMAR LETTER VOCALIC LL +105A..105D ; XID_Start # Lo [4] MYANMAR LETTER MON NGA..MYANMAR LETTER MON BBE +1061 ; XID_Start # Lo MYANMAR LETTER SGAW KAREN SHA +1065..1066 ; XID_Start # Lo [2] MYANMAR LETTER WESTERN PWO KAREN THA..MYANMAR LETTER WESTERN PWO KAREN PWA +106E..1070 ; XID_Start # Lo [3] MYANMAR LETTER EASTERN PWO KAREN NNA..MYANMAR LETTER EASTERN PWO KAREN GHWA +1075..1081 ; XID_Start # Lo [13] MYANMAR LETTER SHAN KA..MYANMAR LETTER SHAN HA +108E ; XID_Start # Lo MYANMAR LETTER RUMAI PALAUNG FA +10A0..10C5 ; XID_Start # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE +10D0..10FA ; XID_Start # Lo [43] GEORGIAN LETTER AN..GEORGIAN LETTER AIN +10FC ; XID_Start # Lm MODIFIER LETTER GEORGIAN NAR +1100..1248 ; XID_Start # Lo [329] HANGUL CHOSEONG KIYEOK..ETHIOPIC SYLLABLE QWA +124A..124D ; XID_Start # Lo [4] ETHIOPIC SYLLABLE QWI..ETHIOPIC SYLLABLE QWE +1250..1256 ; XID_Start # Lo [7] ETHIOPIC SYLLABLE QHA..ETHIOPIC SYLLABLE QHO +1258 ; XID_Start # Lo ETHIOPIC SYLLABLE QHWA +125A..125D ; XID_Start # Lo [4] ETHIOPIC SYLLABLE QHWI..ETHIOPIC SYLLABLE QHWE +1260..1288 ; XID_Start # Lo [41] ETHIOPIC SYLLABLE BA..ETHIOPIC SYLLABLE XWA +128A..128D ; 
XID_Start # Lo [4] ETHIOPIC SYLLABLE XWI..ETHIOPIC SYLLABLE XWE +1290..12B0 ; XID_Start # Lo [33] ETHIOPIC SYLLABLE NA..ETHIOPIC SYLLABLE KWA +12B2..12B5 ; XID_Start # Lo [4] ETHIOPIC SYLLABLE KWI..ETHIOPIC SYLLABLE KWE +12B8..12BE ; XID_Start # Lo [7] ETHIOPIC SYLLABLE KXA..ETHIOPIC SYLLABLE KXO +12C0 ; XID_Start # Lo ETHIOPIC SYLLABLE KXWA +12C2..12C5 ; XID_Start # Lo [4] ETHIOPIC SYLLABLE KXWI..ETHIOPIC SYLLABLE KXWE +12C8..12D6 ; XID_Start # Lo [15] ETHIOPIC SYLLABLE WA..ETHIOPIC SYLLABLE PHARYNGEAL O +12D8..1310 ; XID_Start # Lo [57] ETHIOPIC SYLLABLE ZA..ETHIOPIC SYLLABLE GWA +1312..1315 ; XID_Start # Lo [4] ETHIOPIC SYLLABLE GWI..ETHIOPIC SYLLABLE GWE +1318..135A ; XID_Start # Lo [67] ETHIOPIC SYLLABLE GGA..ETHIOPIC SYLLABLE FYA +1380..138F ; XID_Start # Lo [16] ETHIOPIC SYLLABLE SEBATBEIT MWA..ETHIOPIC SYLLABLE PWE +13A0..13F4 ; XID_Start # Lo [85] CHEROKEE LETTER A..CHEROKEE LETTER YV +1401..166C ; XID_Start # Lo [620] CANADIAN SYLLABICS E..CANADIAN SYLLABICS CARRIER TTSA +166F..167F ; XID_Start # Lo [17] CANADIAN SYLLABICS QAI..CANADIAN SYLLABICS BLACKFOOT W +1681..169A ; XID_Start # Lo [26] OGHAM LETTER BEITH..OGHAM LETTER PEITH +16A0..16EA ; XID_Start # Lo [75] RUNIC LETTER FEHU FEOH FE F..RUNIC LETTER X +16EE..16F0 ; XID_Start # Nl [3] RUNIC ARLAUG SYMBOL..RUNIC BELGTHOR SYMBOL +1700..170C ; XID_Start # Lo [13] TAGALOG LETTER A..TAGALOG LETTER YA +170E..1711 ; XID_Start # Lo [4] TAGALOG LETTER LA..TAGALOG LETTER HA +1720..1731 ; XID_Start # Lo [18] HANUNOO LETTER A..HANUNOO LETTER HA +1740..1751 ; XID_Start # Lo [18] BUHID LETTER A..BUHID LETTER HA +1760..176C ; XID_Start # Lo [13] TAGBANWA LETTER A..TAGBANWA LETTER YA +176E..1770 ; XID_Start # Lo [3] TAGBANWA LETTER LA..TAGBANWA LETTER SA +1780..17B3 ; XID_Start # Lo [52] KHMER LETTER KA..KHMER INDEPENDENT VOWEL QAU +17D7 ; XID_Start # Lm KHMER SIGN LEK TOO +17DC ; XID_Start # Lo KHMER SIGN AVAKRAHASANYA +1820..1842 ; XID_Start # Lo [35] MONGOLIAN LETTER A..MONGOLIAN LETTER CHI +1843 ; XID_Start # Lm 
MONGOLIAN LETTER TODO LONG VOWEL SIGN +1844..1877 ; XID_Start # Lo [52] MONGOLIAN LETTER TODO E..MONGOLIAN LETTER MANCHU ZHA +1880..18A8 ; XID_Start # Lo [41] MONGOLIAN LETTER ALI GALI ANUSVARA ONE..MONGOLIAN LETTER MANCHU ALI GALI BHA +18AA ; XID_Start # Lo MONGOLIAN LETTER MANCHU ALI GALI LHA +18B0..18F5 ; XID_Start # Lo [70] CANADIAN SYLLABICS OY..CANADIAN SYLLABICS CARRIER DENTAL S +1900..191C ; XID_Start # Lo [29] LIMBU VOWEL-CARRIER LETTER..LIMBU LETTER HA +1950..196D ; XID_Start # Lo [30] TAI LE LETTER KA..TAI LE LETTER AI +1970..1974 ; XID_Start # Lo [5] TAI LE LETTER TONE-2..TAI LE LETTER TONE-6 +1980..19AB ; XID_Start # Lo [44] NEW TAI LUE LETTER HIGH QA..NEW TAI LUE LETTER LOW SUA +19C1..19C7 ; XID_Start # Lo [7] NEW TAI LUE LETTER FINAL V..NEW TAI LUE LETTER FINAL B +1A00..1A16 ; XID_Start # Lo [23] BUGINESE LETTER KA..BUGINESE LETTER HA +1A20..1A54 ; XID_Start # Lo [53] TAI THAM LETTER HIGH KA..TAI THAM LETTER GREAT SA +1AA7 ; XID_Start # Lm TAI THAM SIGN MAI YAMOK +1B05..1B33 ; XID_Start # Lo [47] BALINESE LETTER AKARA..BALINESE LETTER HA +1B45..1B4B ; XID_Start # Lo [7] BALINESE LETTER KAF SASAK..BALINESE LETTER ASYURA SASAK +1B83..1BA0 ; XID_Start # Lo [30] SUNDANESE LETTER A..SUNDANESE LETTER HA +1BAE..1BAF ; XID_Start # Lo [2] SUNDANESE LETTER KHA..SUNDANESE LETTER SYA +1C00..1C23 ; XID_Start # Lo [36] LEPCHA LETTER KA..LEPCHA LETTER A +1C4D..1C4F ; XID_Start # Lo [3] LEPCHA LETTER TTA..LEPCHA LETTER DDA +1C5A..1C77 ; XID_Start # Lo [30] OL CHIKI LETTER LA..OL CHIKI LETTER OH +1C78..1C7D ; XID_Start # Lm [6] OL CHIKI MU TTUDDAG..OL CHIKI AHAD +1CE9..1CEC ; XID_Start # Lo [4] VEDIC SIGN ANUSVARA ANTARGOMUKHA..VEDIC SIGN ANUSVARA VAMAGOMUKHA WITH TAIL +1CEE..1CF1 ; XID_Start # Lo [4] VEDIC SIGN HEXIFORM LONG ANUSVARA..VEDIC SIGN ANUSVARA UBHAYATO MUKHA +1D00..1D2B ; XID_Start # L& [44] LATIN LETTER SMALL CAPITAL A..CYRILLIC LETTER SMALL CAPITAL EL +1D2C..1D61 ; XID_Start # Lm [54] MODIFIER LETTER CAPITAL A..MODIFIER LETTER SMALL CHI +1D62..1D77 ; 
XID_Start # L& [22] LATIN SUBSCRIPT SMALL LETTER I..LATIN SMALL LETTER TURNED G +1D78 ; XID_Start # Lm MODIFIER LETTER CYRILLIC EN +1D79..1D9A ; XID_Start # L& [34] LATIN SMALL LETTER INSULAR G..LATIN SMALL LETTER EZH WITH RETROFLEX HOOK +1D9B..1DBF ; XID_Start # Lm [37] MODIFIER LETTER SMALL TURNED ALPHA..MODIFIER LETTER SMALL THETA +1E00..1F15 ; XID_Start # L& [278] LATIN CAPITAL LETTER A WITH RING BELOW..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA +1F18..1F1D ; XID_Start # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA +1F20..1F45 ; XID_Start # L& [38] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA +1F48..1F4D ; XID_Start # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F50..1F57 ; XID_Start # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F59 ; XID_Start # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; XID_Start # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; XID_Start # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F..1F7D ; XID_Start # L& [31] GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI..GREEK SMALL LETTER OMEGA WITH OXIA +1F80..1FB4 ; XID_Start # L& [53] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB6..1FBC ; XID_Start # L& [7] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI +1FBE ; XID_Start # L& GREEK PROSGEGRAMMENI +1FC2..1FC4 ; XID_Start # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC6..1FCC ; XID_Start # L& [7] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK CAPITAL LETTER ETA WITH PROSGEGRAMMENI +1FD0..1FD3 ; XID_Start # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA +1FD6..1FDB ; XID_Start # L& [6] 
GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK CAPITAL LETTER IOTA WITH OXIA +1FE0..1FEC ; XID_Start # L& [13] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FF2..1FF4 ; XID_Start # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF6..1FFC ; XID_Start # L& [7] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI +2071 ; XID_Start # Lm SUPERSCRIPT LATIN SMALL LETTER I +207F ; XID_Start # Lm SUPERSCRIPT LATIN SMALL LETTER N +2090..2094 ; XID_Start # Lm [5] LATIN SUBSCRIPT SMALL LETTER A..LATIN SUBSCRIPT SMALL LETTER SCHWA +2102 ; XID_Start # L& DOUBLE-STRUCK CAPITAL C +2107 ; XID_Start # L& EULER CONSTANT +210A..2113 ; XID_Start # L& [10] SCRIPT SMALL G..SCRIPT SMALL L +2115 ; XID_Start # L& DOUBLE-STRUCK CAPITAL N +2118 ; XID_Start # So SCRIPT CAPITAL P +2119..211D ; XID_Start # L& [5] DOUBLE-STRUCK CAPITAL P..DOUBLE-STRUCK CAPITAL R +2124 ; XID_Start # L& DOUBLE-STRUCK CAPITAL Z +2126 ; XID_Start # L& OHM SIGN +2128 ; XID_Start # L& BLACK-LETTER CAPITAL Z +212A..212D ; XID_Start # L& [4] KELVIN SIGN..BLACK-LETTER CAPITAL C +212E ; XID_Start # So ESTIMATED SYMBOL +212F..2134 ; XID_Start # L& [6] SCRIPT SMALL E..SCRIPT SMALL O +2135..2138 ; XID_Start # Lo [4] ALEF SYMBOL..DALET SYMBOL +2139 ; XID_Start # L& INFORMATION SOURCE +213C..213F ; XID_Start # L& [4] DOUBLE-STRUCK SMALL PI..DOUBLE-STRUCK CAPITAL PI +2145..2149 ; XID_Start # L& [5] DOUBLE-STRUCK ITALIC CAPITAL D..DOUBLE-STRUCK ITALIC SMALL J +214E ; XID_Start # L& TURNED SMALL F +2160..2182 ; XID_Start # Nl [35] ROMAN NUMERAL ONE..ROMAN NUMERAL TEN THOUSAND +2183..2184 ; XID_Start # L& [2] ROMAN NUMERAL REVERSED ONE HUNDRED..LATIN SMALL LETTER REVERSED C +2185..2188 ; XID_Start # Nl [4] ROMAN NUMERAL SIX LATE FORM..ROMAN NUMERAL ONE HUNDRED THOUSAND +2C00..2C2E ; XID_Start # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC CAPITAL LETTER LATINATE MYSLITE +2C30..2C5E ; 
XID_Start # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE +2C60..2C7C ; XID_Start # L& [29] LATIN CAPITAL LETTER L WITH DOUBLE BAR..LATIN SUBSCRIPT SMALL LETTER J +2C7D ; XID_Start # Lm MODIFIER LETTER CAPITAL V +2C7E..2CE4 ; XID_Start # L& [103] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC SYMBOL KAI +2CEB..2CEE ; XID_Start # L& [4] COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI..COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA +2D00..2D25 ; XID_Start # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE +2D30..2D65 ; XID_Start # Lo [54] TIFINAGH LETTER YA..TIFINAGH LETTER YAZZ +2D6F ; XID_Start # Lm TIFINAGH MODIFIER LETTER LABIALIZATION MARK +2D80..2D96 ; XID_Start # Lo [23] ETHIOPIC SYLLABLE LOA..ETHIOPIC SYLLABLE GGWE +2DA0..2DA6 ; XID_Start # Lo [7] ETHIOPIC SYLLABLE SSA..ETHIOPIC SYLLABLE SSO +2DA8..2DAE ; XID_Start # Lo [7] ETHIOPIC SYLLABLE CCA..ETHIOPIC SYLLABLE CCO +2DB0..2DB6 ; XID_Start # Lo [7] ETHIOPIC SYLLABLE ZZA..ETHIOPIC SYLLABLE ZZO +2DB8..2DBE ; XID_Start # Lo [7] ETHIOPIC SYLLABLE CCHA..ETHIOPIC SYLLABLE CCHO +2DC0..2DC6 ; XID_Start # Lo [7] ETHIOPIC SYLLABLE QYA..ETHIOPIC SYLLABLE QYO +2DC8..2DCE ; XID_Start # Lo [7] ETHIOPIC SYLLABLE KYA..ETHIOPIC SYLLABLE KYO +2DD0..2DD6 ; XID_Start # Lo [7] ETHIOPIC SYLLABLE XYA..ETHIOPIC SYLLABLE XYO +2DD8..2DDE ; XID_Start # Lo [7] ETHIOPIC SYLLABLE GYA..ETHIOPIC SYLLABLE GYO +3005 ; XID_Start # Lm IDEOGRAPHIC ITERATION MARK +3006 ; XID_Start # Lo IDEOGRAPHIC CLOSING MARK +3007 ; XID_Start # Nl IDEOGRAPHIC NUMBER ZERO +3021..3029 ; XID_Start # Nl [9] HANGZHOU NUMERAL ONE..HANGZHOU NUMERAL NINE +3031..3035 ; XID_Start # Lm [5] VERTICAL KANA REPEAT MARK..VERTICAL KANA REPEAT MARK LOWER HALF +3038..303A ; XID_Start # Nl [3] HANGZHOU NUMERAL TEN..HANGZHOU NUMERAL THIRTY +303B ; XID_Start # Lm VERTICAL IDEOGRAPHIC ITERATION MARK +303C ; XID_Start # Lo MASU MARK +3041..3096 ; XID_Start # Lo [86] HIRAGANA LETTER SMALL A..HIRAGANA LETTER SMALL KE +309D..309E ; XID_Start # Lm [2] 
HIRAGANA ITERATION MARK..HIRAGANA VOICED ITERATION MARK +309F ; XID_Start # Lo HIRAGANA DIGRAPH YORI +30A1..30FA ; XID_Start # Lo [90] KATAKANA LETTER SMALL A..KATAKANA LETTER VO +30FC..30FE ; XID_Start # Lm [3] KATAKANA-HIRAGANA PROLONGED SOUND MARK..KATAKANA VOICED ITERATION MARK +30FF ; XID_Start # Lo KATAKANA DIGRAPH KOTO +3105..312D ; XID_Start # Lo [41] BOPOMOFO LETTER B..BOPOMOFO LETTER IH +3131..318E ; XID_Start # Lo [94] HANGUL LETTER KIYEOK..HANGUL LETTER ARAEAE +31A0..31B7 ; XID_Start # Lo [24] BOPOMOFO LETTER BU..BOPOMOFO FINAL LETTER H +31F0..31FF ; XID_Start # Lo [16] KATAKANA LETTER SMALL KU..KATAKANA LETTER SMALL RO +3400..4DB5 ; XID_Start # Lo [6582] CJK UNIFIED IDEOGRAPH-3400..CJK UNIFIED IDEOGRAPH-4DB5 +4E00..9FCB ; XID_Start # Lo [20940] CJK UNIFIED IDEOGRAPH-4E00..CJK UNIFIED IDEOGRAPH-9FCB +A000..A014 ; XID_Start # Lo [21] YI SYLLABLE IT..YI SYLLABLE E +A015 ; XID_Start # Lm YI SYLLABLE WU +A016..A48C ; XID_Start # Lo [1143] YI SYLLABLE BIT..YI SYLLABLE YYR +A4D0..A4F7 ; XID_Start # Lo [40] LISU LETTER BA..LISU LETTER OE +A4F8..A4FD ; XID_Start # Lm [6] LISU LETTER TONE MYA TI..LISU LETTER TONE MYA JEU +A500..A60B ; XID_Start # Lo [268] VAI SYLLABLE EE..VAI SYLLABLE NG +A60C ; XID_Start # Lm VAI SYLLABLE LENGTHENER +A610..A61F ; XID_Start # Lo [16] VAI SYLLABLE NDOLE FA..VAI SYMBOL JONG +A62A..A62B ; XID_Start # Lo [2] VAI SYLLABLE NDOLE MA..VAI SYLLABLE NDOLE DO +A640..A65F ; XID_Start # L& [32] CYRILLIC CAPITAL LETTER ZEMLYA..CYRILLIC SMALL LETTER YN +A662..A66D ; XID_Start # L& [12] CYRILLIC CAPITAL LETTER SOFT DE..CYRILLIC SMALL LETTER DOUBLE MONOCULAR O +A66E ; XID_Start # Lo CYRILLIC LETTER MULTIOCULAR O +A67F ; XID_Start # Lm CYRILLIC PAYEROK +A680..A697 ; XID_Start # L& [24] CYRILLIC CAPITAL LETTER DWE..CYRILLIC SMALL LETTER SHWE +A6A0..A6E5 ; XID_Start # Lo [70] BAMUM LETTER A..BAMUM LETTER KI +A6E6..A6EF ; XID_Start # Nl [10] BAMUM LETTER MO..BAMUM LETTER KOGHOM +A717..A71F ; XID_Start # Lm [9] MODIFIER LETTER DOT VERTICAL 
BAR..MODIFIER LETTER LOW INVERTED EXCLAMATION MARK +A722..A76F ; XID_Start # L& [78] LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF..LATIN SMALL LETTER CON +A770 ; XID_Start # Lm MODIFIER LETTER US +A771..A787 ; XID_Start # L& [23] LATIN SMALL LETTER DUM..LATIN SMALL LETTER INSULAR T +A788 ; XID_Start # Lm MODIFIER LETTER LOW CIRCUMFLEX ACCENT +A78B..A78C ; XID_Start # L& [2] LATIN CAPITAL LETTER SALTILLO..LATIN SMALL LETTER SALTILLO +A7FB..A801 ; XID_Start # Lo [7] LATIN EPIGRAPHIC LETTER REVERSED F..SYLOTI NAGRI LETTER I +A803..A805 ; XID_Start # Lo [3] SYLOTI NAGRI LETTER U..SYLOTI NAGRI LETTER O +A807..A80A ; XID_Start # Lo [4] SYLOTI NAGRI LETTER KO..SYLOTI NAGRI LETTER GHO +A80C..A822 ; XID_Start # Lo [23] SYLOTI NAGRI LETTER CO..SYLOTI NAGRI LETTER HO +A840..A873 ; XID_Start # Lo [52] PHAGS-PA LETTER KA..PHAGS-PA LETTER CANDRABINDU +A882..A8B3 ; XID_Start # Lo [50] SAURASHTRA LETTER A..SAURASHTRA LETTER LLA +A8F2..A8F7 ; XID_Start # Lo [6] DEVANAGARI SIGN SPACING CANDRABINDU..DEVANAGARI SIGN CANDRABINDU AVAGRAHA +A8FB ; XID_Start # Lo DEVANAGARI HEADSTROKE +A90A..A925 ; XID_Start # Lo [28] KAYAH LI LETTER KA..KAYAH LI LETTER OO +A930..A946 ; XID_Start # Lo [23] REJANG LETTER KA..REJANG LETTER A +A960..A97C ; XID_Start # Lo [29] HANGUL CHOSEONG TIKEUT-MIEUM..HANGUL CHOSEONG SSANGYEORINHIEUH +A984..A9B2 ; XID_Start # Lo [47] JAVANESE LETTER A..JAVANESE LETTER HA +A9CF ; XID_Start # Lm JAVANESE PANGRANGKEP +AA00..AA28 ; XID_Start # Lo [41] CHAM LETTER A..CHAM LETTER HA +AA40..AA42 ; XID_Start # Lo [3] CHAM LETTER FINAL K..CHAM LETTER FINAL NG +AA44..AA4B ; XID_Start # Lo [8] CHAM LETTER FINAL CH..CHAM LETTER FINAL SS +AA60..AA6F ; XID_Start # Lo [16] MYANMAR LETTER KHAMTI GA..MYANMAR LETTER KHAMTI FA +AA70 ; XID_Start # Lm MYANMAR MODIFIER LETTER KHAMTI REDUPLICATION +AA71..AA76 ; XID_Start # Lo [6] MYANMAR LETTER KHAMTI XA..MYANMAR LOGOGRAM KHAMTI HM +AA7A ; XID_Start # Lo MYANMAR LETTER AITON RA +AA80..AAAF ; XID_Start # Lo [48] TAI VIET LETTER LOW KO..TAI VIET 
LETTER HIGH O +AAB1 ; XID_Start # Lo TAI VIET VOWEL AA +AAB5..AAB6 ; XID_Start # Lo [2] TAI VIET VOWEL E..TAI VIET VOWEL O +AAB9..AABD ; XID_Start # Lo [5] TAI VIET VOWEL UEA..TAI VIET VOWEL AN +AAC0 ; XID_Start # Lo TAI VIET TONE MAI NUENG +AAC2 ; XID_Start # Lo TAI VIET TONE MAI SONG +AADB..AADC ; XID_Start # Lo [2] TAI VIET SYMBOL KON..TAI VIET SYMBOL NUENG +AADD ; XID_Start # Lm TAI VIET SYMBOL SAM +ABC0..ABE2 ; XID_Start # Lo [35] MEETEI MAYEK LETTER KOK..MEETEI MAYEK LETTER I LONSUM +AC00..D7A3 ; XID_Start # Lo [11172] HANGUL SYLLABLE GA..HANGUL SYLLABLE HIH +D7B0..D7C6 ; XID_Start # Lo [23] HANGUL JUNGSEONG O-YEO..HANGUL JUNGSEONG ARAEA-E +D7CB..D7FB ; XID_Start # Lo [49] HANGUL JONGSEONG NIEUN-RIEUL..HANGUL JONGSEONG PHIEUPH-THIEUTH +F900..FA2D ; XID_Start # Lo [302] CJK COMPATIBILITY IDEOGRAPH-F900..CJK COMPATIBILITY IDEOGRAPH-FA2D +FA30..FA6D ; XID_Start # Lo [62] CJK COMPATIBILITY IDEOGRAPH-FA30..CJK COMPATIBILITY IDEOGRAPH-FA6D +FA70..FAD9 ; XID_Start # Lo [106] CJK COMPATIBILITY IDEOGRAPH-FA70..CJK COMPATIBILITY IDEOGRAPH-FAD9 +FB00..FB06 ; XID_Start # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; XID_Start # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FB1D ; XID_Start # Lo HEBREW LETTER YOD WITH HIRIQ +FB1F..FB28 ; XID_Start # Lo [10] HEBREW LIGATURE YIDDISH YOD YOD PATAH..HEBREW LETTER WIDE TAV +FB2A..FB36 ; XID_Start # Lo [13] HEBREW LETTER SHIN WITH SHIN DOT..HEBREW LETTER ZAYIN WITH DAGESH +FB38..FB3C ; XID_Start # Lo [5] HEBREW LETTER TET WITH DAGESH..HEBREW LETTER LAMED WITH DAGESH +FB3E ; XID_Start # Lo HEBREW LETTER MEM WITH DAGESH +FB40..FB41 ; XID_Start # Lo [2] HEBREW LETTER NUN WITH DAGESH..HEBREW LETTER SAMEKH WITH DAGESH +FB43..FB44 ; XID_Start # Lo [2] HEBREW LETTER FINAL PE WITH DAGESH..HEBREW LETTER PE WITH DAGESH +FB46..FBB1 ; XID_Start # Lo [108] HEBREW LETTER TSADI WITH DAGESH..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE FINAL FORM +FBD3..FC5D ; XID_Start # Lo [139] ARABIC LETTER 
NG ISOLATED FORM..ARABIC LIGATURE ALEF MAKSURA WITH SUPERSCRIPT ALEF ISOLATED FORM +FC64..FD3D ; XID_Start # Lo [218] ARABIC LIGATURE YEH WITH HAMZA ABOVE WITH REH FINAL FORM..ARABIC LIGATURE ALEF WITH FATHATAN ISOLATED FORM +FD50..FD8F ; XID_Start # Lo [64] ARABIC LIGATURE TEH WITH JEEM WITH MEEM INITIAL FORM..ARABIC LIGATURE MEEM WITH KHAH WITH MEEM INITIAL FORM +FD92..FDC7 ; XID_Start # Lo [54] ARABIC LIGATURE MEEM WITH JEEM WITH KHAH INITIAL FORM..ARABIC LIGATURE NOON WITH JEEM WITH YEH FINAL FORM +FDF0..FDF9 ; XID_Start # Lo [10] ARABIC LIGATURE SALLA USED AS KORANIC STOP SIGN ISOLATED FORM..ARABIC LIGATURE SALLA ISOLATED FORM +FE71 ; XID_Start # Lo ARABIC TATWEEL WITH FATHATAN ABOVE +FE73 ; XID_Start # Lo ARABIC TAIL FRAGMENT +FE77 ; XID_Start # Lo ARABIC FATHA MEDIAL FORM +FE79 ; XID_Start # Lo ARABIC DAMMA MEDIAL FORM +FE7B ; XID_Start # Lo ARABIC KASRA MEDIAL FORM +FE7D ; XID_Start # Lo ARABIC SHADDA MEDIAL FORM +FE7F..FEFC ; XID_Start # Lo [126] ARABIC SUKUN MEDIAL FORM..ARABIC LIGATURE LAM WITH ALEF FINAL FORM +FF21..FF3A ; XID_Start # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN CAPITAL LETTER Z +FF41..FF5A ; XID_Start # L& [26] FULLWIDTH LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z +FF66..FF6F ; XID_Start # Lo [10] HALFWIDTH KATAKANA LETTER WO..HALFWIDTH KATAKANA LETTER SMALL TU +FF70 ; XID_Start # Lm HALFWIDTH KATAKANA-HIRAGANA PROLONGED SOUND MARK +FF71..FF9D ; XID_Start # Lo [45] HALFWIDTH KATAKANA LETTER A..HALFWIDTH KATAKANA LETTER N +FFA0..FFBE ; XID_Start # Lo [31] HALFWIDTH HANGUL FILLER..HALFWIDTH HANGUL LETTER HIEUH +FFC2..FFC7 ; XID_Start # Lo [6] HALFWIDTH HANGUL LETTER A..HALFWIDTH HANGUL LETTER E +FFCA..FFCF ; XID_Start # Lo [6] HALFWIDTH HANGUL LETTER YEO..HALFWIDTH HANGUL LETTER OE +FFD2..FFD7 ; XID_Start # Lo [6] HALFWIDTH HANGUL LETTER YO..HALFWIDTH HANGUL LETTER YU +FFDA..FFDC ; XID_Start # Lo [3] HALFWIDTH HANGUL LETTER EU..HALFWIDTH HANGUL LETTER I +10000..1000B ; XID_Start # Lo [12] LINEAR B SYLLABLE B008 
A..LINEAR B SYLLABLE B046 JE +1000D..10026 ; XID_Start # Lo [26] LINEAR B SYLLABLE B036 JO..LINEAR B SYLLABLE B032 QO +10028..1003A ; XID_Start # Lo [19] LINEAR B SYLLABLE B060 RA..LINEAR B SYLLABLE B042 WO +1003C..1003D ; XID_Start # Lo [2] LINEAR B SYLLABLE B017 ZA..LINEAR B SYLLABLE B074 ZE +1003F..1004D ; XID_Start # Lo [15] LINEAR B SYLLABLE B020 ZO..LINEAR B SYLLABLE B091 TWO +10050..1005D ; XID_Start # Lo [14] LINEAR B SYMBOL B018..LINEAR B SYMBOL B089 +10080..100FA ; XID_Start # Lo [123] LINEAR B IDEOGRAM B100 MAN..LINEAR B IDEOGRAM VESSEL B305 +10140..10174 ; XID_Start # Nl [53] GREEK ACROPHONIC ATTIC ONE QUARTER..GREEK ACROPHONIC STRATIAN FIFTY MNAS +10280..1029C ; XID_Start # Lo [29] LYCIAN LETTER A..LYCIAN LETTER X +102A0..102D0 ; XID_Start # Lo [49] CARIAN LETTER A..CARIAN LETTER UUU3 +10300..1031E ; XID_Start # Lo [31] OLD ITALIC LETTER A..OLD ITALIC LETTER UU +10330..10340 ; XID_Start # Lo [17] GOTHIC LETTER AHSA..GOTHIC LETTER PAIRTHRA +10341 ; XID_Start # Nl GOTHIC LETTER NINETY +10342..10349 ; XID_Start # Lo [8] GOTHIC LETTER RAIDA..GOTHIC LETTER OTHAL +1034A ; XID_Start # Nl GOTHIC LETTER NINE HUNDRED +10380..1039D ; XID_Start # Lo [30] UGARITIC LETTER ALPA..UGARITIC LETTER SSU +103A0..103C3 ; XID_Start # Lo [36] OLD PERSIAN SIGN A..OLD PERSIAN SIGN HA +103C8..103CF ; XID_Start # Lo [8] OLD PERSIAN SIGN AURAMAZDAA..OLD PERSIAN SIGN BUUMISH +103D1..103D5 ; XID_Start # Nl [5] OLD PERSIAN NUMBER ONE..OLD PERSIAN NUMBER HUNDRED +10400..1044F ; XID_Start # L& [80] DESERET CAPITAL LETTER LONG I..DESERET SMALL LETTER EW +10450..1049D ; XID_Start # Lo [78] SHAVIAN LETTER PEEP..OSMANYA LETTER OO +10800..10805 ; XID_Start # Lo [6] CYPRIOT SYLLABLE A..CYPRIOT SYLLABLE JA +10808 ; XID_Start # Lo CYPRIOT SYLLABLE JO +1080A..10835 ; XID_Start # Lo [44] CYPRIOT SYLLABLE KA..CYPRIOT SYLLABLE WO +10837..10838 ; XID_Start # Lo [2] CYPRIOT SYLLABLE XA..CYPRIOT SYLLABLE XE +1083C ; XID_Start # Lo CYPRIOT SYLLABLE ZA +1083F..10855 ; XID_Start # Lo [23] CYPRIOT 
SYLLABLE ZO..IMPERIAL ARAMAIC LETTER TAW +10900..10915 ; XID_Start # Lo [22] PHOENICIAN LETTER ALF..PHOENICIAN LETTER TAU +10920..10939 ; XID_Start # Lo [26] LYDIAN LETTER A..LYDIAN LETTER C +10A00 ; XID_Start # Lo KHAROSHTHI LETTER A +10A10..10A13 ; XID_Start # Lo [4] KHAROSHTHI LETTER KA..KHAROSHTHI LETTER GHA +10A15..10A17 ; XID_Start # Lo [3] KHAROSHTHI LETTER CA..KHAROSHTHI LETTER JA +10A19..10A33 ; XID_Start # Lo [27] KHAROSHTHI LETTER NYA..KHAROSHTHI LETTER TTTHA +10A60..10A7C ; XID_Start # Lo [29] OLD SOUTH ARABIAN LETTER HE..OLD SOUTH ARABIAN LETTER THETH +10B00..10B35 ; XID_Start # Lo [54] AVESTAN LETTER A..AVESTAN LETTER HE +10B40..10B55 ; XID_Start # Lo [22] INSCRIPTIONAL PARTHIAN LETTER ALEPH..INSCRIPTIONAL PARTHIAN LETTER TAW +10B60..10B72 ; XID_Start # Lo [19] INSCRIPTIONAL PAHLAVI LETTER ALEPH..INSCRIPTIONAL PAHLAVI LETTER TAW +10C00..10C48 ; XID_Start # Lo [73] OLD TURKIC LETTER ORKHON A..OLD TURKIC LETTER ORKHON BASH +11083..110AF ; XID_Start # Lo [45] KAITHI LETTER A..KAITHI LETTER HA +12000..1236E ; XID_Start # Lo [879] CUNEIFORM SIGN A..CUNEIFORM SIGN ZUM +12400..12462 ; XID_Start # Nl [99] CUNEIFORM NUMERIC SIGN TWO ASH..CUNEIFORM NUMERIC SIGN OLD ASSYRIAN ONE QUARTER +13000..1342E ; XID_Start # Lo [1071] EGYPTIAN HIEROGLYPH A001..EGYPTIAN HIEROGLYPH AA032 +1D400..1D454 ; XID_Start # L& [85] MATHEMATICAL BOLD CAPITAL A..MATHEMATICAL ITALIC SMALL G +1D456..1D49C ; XID_Start # L& [71] MATHEMATICAL ITALIC SMALL I..MATHEMATICAL SCRIPT CAPITAL A +1D49E..1D49F ; XID_Start # L& [2] MATHEMATICAL SCRIPT CAPITAL C..MATHEMATICAL SCRIPT CAPITAL D +1D4A2 ; XID_Start # L& MATHEMATICAL SCRIPT CAPITAL G +1D4A5..1D4A6 ; XID_Start # L& [2] MATHEMATICAL SCRIPT CAPITAL J..MATHEMATICAL SCRIPT CAPITAL K +1D4A9..1D4AC ; XID_Start # L& [4] MATHEMATICAL SCRIPT CAPITAL N..MATHEMATICAL SCRIPT CAPITAL Q +1D4AE..1D4B9 ; XID_Start # L& [12] MATHEMATICAL SCRIPT CAPITAL S..MATHEMATICAL SCRIPT SMALL D +1D4BB ; XID_Start # L& MATHEMATICAL SCRIPT SMALL F +1D4BD..1D4C3 ; 
XID_Start # L& [7] MATHEMATICAL SCRIPT SMALL H..MATHEMATICAL SCRIPT SMALL N +1D4C5..1D505 ; XID_Start # L& [65] MATHEMATICAL SCRIPT SMALL P..MATHEMATICAL FRAKTUR CAPITAL B +1D507..1D50A ; XID_Start # L& [4] MATHEMATICAL FRAKTUR CAPITAL D..MATHEMATICAL FRAKTUR CAPITAL G +1D50D..1D514 ; XID_Start # L& [8] MATHEMATICAL FRAKTUR CAPITAL J..MATHEMATICAL FRAKTUR CAPITAL Q +1D516..1D51C ; XID_Start # L& [7] MATHEMATICAL FRAKTUR CAPITAL S..MATHEMATICAL FRAKTUR CAPITAL Y +1D51E..1D539 ; XID_Start # L& [28] MATHEMATICAL FRAKTUR SMALL A..MATHEMATICAL DOUBLE-STRUCK CAPITAL B +1D53B..1D53E ; XID_Start # L& [4] MATHEMATICAL DOUBLE-STRUCK CAPITAL D..MATHEMATICAL DOUBLE-STRUCK CAPITAL G +1D540..1D544 ; XID_Start # L& [5] MATHEMATICAL DOUBLE-STRUCK CAPITAL I..MATHEMATICAL DOUBLE-STRUCK CAPITAL M +1D546 ; XID_Start # L& MATHEMATICAL DOUBLE-STRUCK CAPITAL O +1D54A..1D550 ; XID_Start # L& [7] MATHEMATICAL DOUBLE-STRUCK CAPITAL S..MATHEMATICAL DOUBLE-STRUCK CAPITAL Y +1D552..1D6A5 ; XID_Start # L& [340] MATHEMATICAL DOUBLE-STRUCK SMALL A..MATHEMATICAL ITALIC SMALL DOTLESS J +1D6A8..1D6C0 ; XID_Start # L& [25] MATHEMATICAL BOLD CAPITAL ALPHA..MATHEMATICAL BOLD CAPITAL OMEGA +1D6C2..1D6DA ; XID_Start # L& [25] MATHEMATICAL BOLD SMALL ALPHA..MATHEMATICAL BOLD SMALL OMEGA +1D6DC..1D6FA ; XID_Start # L& [31] MATHEMATICAL BOLD EPSILON SYMBOL..MATHEMATICAL ITALIC CAPITAL OMEGA +1D6FC..1D714 ; XID_Start # L& [25] MATHEMATICAL ITALIC SMALL ALPHA..MATHEMATICAL ITALIC SMALL OMEGA +1D716..1D734 ; XID_Start # L& [31] MATHEMATICAL ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD ITALIC CAPITAL OMEGA +1D736..1D74E ; XID_Start # L& [25] MATHEMATICAL BOLD ITALIC SMALL ALPHA..MATHEMATICAL BOLD ITALIC SMALL OMEGA +1D750..1D76E ; XID_Start # L& [31] MATHEMATICAL BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD CAPITAL OMEGA +1D770..1D788 ; XID_Start # L& [25] MATHEMATICAL SANS-SERIF BOLD SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD SMALL OMEGA +1D78A..1D7A8 ; XID_Start # L& [31] MATHEMATICAL SANS-SERIF BOLD 
EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMEGA +1D7AA..1D7C2 ; XID_Start # L& [25] MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL OMEGA +1D7C4..1D7CB ; XID_Start # L& [8] MATHEMATICAL SANS-SERIF BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD SMALL DIGAMMA +20000..2A6D6 ; XID_Start # Lo [42711] CJK UNIFIED IDEOGRAPH-20000..CJK UNIFIED IDEOGRAPH-2A6D6 +2A700..2B734 ; XID_Start # Lo [4149] CJK UNIFIED IDEOGRAPH-2A700..CJK UNIFIED IDEOGRAPH-2B734 +2F800..2FA1D ; XID_Start # Lo [542] CJK COMPATIBILITY IDEOGRAPH-2F800..CJK COMPATIBILITY IDEOGRAPH-2FA1D + +# Total code points: 99741 + +# ================================================ + +# Derived Property: XID_Continue +# Mod_ID_Continue modified for closure under NFKx +# Modified as described in UAX #15 +# NOTE: Cf characters should be filtered out. +# NOTE: Does NOT remove the non-NFKx characters. +# Merely ensures that if isIdentifier(string) then isIdentifier(NFKx(string)) +# NOTE: See UAX #31 for more information + +0030..0039 ; XID_Continue # Nd [10] DIGIT ZERO..DIGIT NINE +0041..005A ; XID_Continue # L& [26] LATIN CAPITAL LETTER A..LATIN CAPITAL LETTER Z +005F ; XID_Continue # Pc LOW LINE +0061..007A ; XID_Continue # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z +00AA ; XID_Continue # L& FEMININE ORDINAL INDICATOR +00B5 ; XID_Continue # L& MICRO SIGN +00B7 ; XID_Continue # Po MIDDLE DOT +00BA ; XID_Continue # L& MASCULINE ORDINAL INDICATOR +00C0..00D6 ; XID_Continue # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS +00D8..00F6 ; XID_Continue # L& [31] LATIN CAPITAL LETTER O WITH STROKE..LATIN SMALL LETTER O WITH DIAERESIS +00F8..01BA ; XID_Continue # L& [195] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER EZH WITH TAIL +01BB ; XID_Continue # Lo LATIN LETTER TWO WITH STROKE +01BC..01BF ; XID_Continue # L& [4] LATIN CAPITAL LETTER TONE FIVE..LATIN LETTER WYNN +01C0..01C3 ; XID_Continue # Lo [4] LATIN LETTER
DENTAL CLICK..LATIN LETTER RETROFLEX CLICK +01C4..0293 ; XID_Continue # L& [208] LATIN CAPITAL LETTER DZ WITH CARON..LATIN SMALL LETTER EZH WITH CURL +0294 ; XID_Continue # Lo LATIN LETTER GLOTTAL STOP +0295..02AF ; XID_Continue # L& [27] LATIN LETTER PHARYNGEAL VOICED FRICATIVE..LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL +02B0..02C1 ; XID_Continue # Lm [18] MODIFIER LETTER SMALL H..MODIFIER LETTER REVERSED GLOTTAL STOP +02C6..02D1 ; XID_Continue # Lm [12] MODIFIER LETTER CIRCUMFLEX ACCENT..MODIFIER LETTER HALF TRIANGULAR COLON +02E0..02E4 ; XID_Continue # Lm [5] MODIFIER LETTER SMALL GAMMA..MODIFIER LETTER SMALL REVERSED GLOTTAL STOP +02EC ; XID_Continue # Lm MODIFIER LETTER VOICING +02EE ; XID_Continue # Lm MODIFIER LETTER DOUBLE APOSTROPHE +0300..036F ; XID_Continue # Mn [112] COMBINING GRAVE ACCENT..COMBINING LATIN SMALL LETTER X +0370..0373 ; XID_Continue # L& [4] GREEK CAPITAL LETTER HETA..GREEK SMALL LETTER ARCHAIC SAMPI +0374 ; XID_Continue # Lm GREEK NUMERAL SIGN +0376..0377 ; XID_Continue # L& [2] GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA..GREEK SMALL LETTER PAMPHYLIAN DIGAMMA +037B..037D ; XID_Continue # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL +0386 ; XID_Continue # L& GREEK CAPITAL LETTER ALPHA WITH TONOS +0387 ; XID_Continue # Po GREEK ANO TELEIA +0388..038A ; XID_Continue # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS +038C ; XID_Continue # L& GREEK CAPITAL LETTER OMICRON WITH TONOS +038E..03A1 ; XID_Continue # L& [20] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER RHO +03A3..03F5 ; XID_Continue # L& [83] GREEK CAPITAL LETTER SIGMA..GREEK LUNATE EPSILON SYMBOL +03F7..0481 ; XID_Continue # L& [139] GREEK CAPITAL LETTER SHO..CYRILLIC SMALL LETTER KOPPA +0483..0487 ; XID_Continue # Mn [5] COMBINING CYRILLIC TITLO..COMBINING CYRILLIC POKRYTIE +048A..0525 ; XID_Continue # L& [156] CYRILLIC CAPITAL LETTER SHORT I WITH TAIL..CYRILLIC SMALL 
LETTER PE WITH DESCENDER +0531..0556 ; XID_Continue # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH +0559 ; XID_Continue # Lm ARMENIAN MODIFIER LETTER LEFT HALF RING +0561..0587 ; XID_Continue # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN +0591..05BD ; XID_Continue # Mn [45] HEBREW ACCENT ETNAHTA..HEBREW POINT METEG +05BF ; XID_Continue # Mn HEBREW POINT RAFE +05C1..05C2 ; XID_Continue # Mn [2] HEBREW POINT SHIN DOT..HEBREW POINT SIN DOT +05C4..05C5 ; XID_Continue # Mn [2] HEBREW MARK UPPER DOT..HEBREW MARK LOWER DOT +05C7 ; XID_Continue # Mn HEBREW POINT QAMATS QATAN +05D0..05EA ; XID_Continue # Lo [27] HEBREW LETTER ALEF..HEBREW LETTER TAV +05F0..05F2 ; XID_Continue # Lo [3] HEBREW LIGATURE YIDDISH DOUBLE VAV..HEBREW LIGATURE YIDDISH DOUBLE YOD +0610..061A ; XID_Continue # Mn [11] ARABIC SIGN SALLALLAHOU ALAYHE WASSALLAM..ARABIC SMALL KASRA +0621..063F ; XID_Continue # Lo [31] ARABIC LETTER HAMZA..ARABIC LETTER FARSI YEH WITH THREE DOTS ABOVE +0640 ; XID_Continue # Lm ARABIC TATWEEL +0641..064A ; XID_Continue # Lo [10] ARABIC LETTER FEH..ARABIC LETTER YEH +064B..065E ; XID_Continue # Mn [20] ARABIC FATHATAN..ARABIC FATHA WITH TWO DOTS +0660..0669 ; XID_Continue # Nd [10] ARABIC-INDIC DIGIT ZERO..ARABIC-INDIC DIGIT NINE +066E..066F ; XID_Continue # Lo [2] ARABIC LETTER DOTLESS BEH..ARABIC LETTER DOTLESS QAF +0670 ; XID_Continue # Mn ARABIC LETTER SUPERSCRIPT ALEF +0671..06D3 ; XID_Continue # Lo [99] ARABIC LETTER ALEF WASLA..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE +06D5 ; XID_Continue # Lo ARABIC LETTER AE +06D6..06DC ; XID_Continue # Mn [7] ARABIC SMALL HIGH LIGATURE SAD WITH LAM WITH ALEF MAKSURA..ARABIC SMALL HIGH SEEN +06DF..06E4 ; XID_Continue # Mn [6] ARABIC SMALL HIGH ROUNDED ZERO..ARABIC SMALL HIGH MADDA +06E5..06E6 ; XID_Continue # Lm [2] ARABIC SMALL WAW..ARABIC SMALL YEH +06E7..06E8 ; XID_Continue # Mn [2] ARABIC SMALL HIGH YEH..ARABIC SMALL HIGH NOON +06EA..06ED ; XID_Continue # Mn [4] ARABIC EMPTY CENTRE 
LOW STOP..ARABIC SMALL LOW MEEM +06EE..06EF ; XID_Continue # Lo [2] ARABIC LETTER DAL WITH INVERTED V..ARABIC LETTER REH WITH INVERTED V +06F0..06F9 ; XID_Continue # Nd [10] EXTENDED ARABIC-INDIC DIGIT ZERO..EXTENDED ARABIC-INDIC DIGIT NINE +06FA..06FC ; XID_Continue # Lo [3] ARABIC LETTER SHEEN WITH DOT BELOW..ARABIC LETTER GHAIN WITH DOT BELOW +06FF ; XID_Continue # Lo ARABIC LETTER HEH WITH INVERTED V +0710 ; XID_Continue # Lo SYRIAC LETTER ALAPH +0711 ; XID_Continue # Mn SYRIAC LETTER SUPERSCRIPT ALAPH +0712..072F ; XID_Continue # Lo [30] SYRIAC LETTER BETH..SYRIAC LETTER PERSIAN DHALATH +0730..074A ; XID_Continue # Mn [27] SYRIAC PTHAHA ABOVE..SYRIAC BARREKH +074D..07A5 ; XID_Continue # Lo [89] SYRIAC LETTER SOGDIAN ZHAIN..THAANA LETTER WAAVU +07A6..07B0 ; XID_Continue # Mn [11] THAANA ABAFILI..THAANA SUKUN +07B1 ; XID_Continue # Lo THAANA LETTER NAA +07C0..07C9 ; XID_Continue # Nd [10] NKO DIGIT ZERO..NKO DIGIT NINE +07CA..07EA ; XID_Continue # Lo [33] NKO LETTER A..NKO LETTER JONA RA +07EB..07F3 ; XID_Continue # Mn [9] NKO COMBINING SHORT HIGH TONE..NKO COMBINING DOUBLE DOT ABOVE +07F4..07F5 ; XID_Continue # Lm [2] NKO HIGH TONE APOSTROPHE..NKO LOW TONE APOSTROPHE +07FA ; XID_Continue # Lm NKO LAJANYALAN +0800..0815 ; XID_Continue # Lo [22] SAMARITAN LETTER ALAF..SAMARITAN LETTER TAAF +0816..0819 ; XID_Continue # Mn [4] SAMARITAN MARK IN..SAMARITAN MARK DAGESH +081A ; XID_Continue # Lm SAMARITAN MODIFIER LETTER EPENTHETIC YUT +081B..0823 ; XID_Continue # Mn [9] SAMARITAN MARK EPENTHETIC YUT..SAMARITAN VOWEL SIGN A +0824 ; XID_Continue # Lm SAMARITAN MODIFIER LETTER SHORT A +0825..0827 ; XID_Continue # Mn [3] SAMARITAN VOWEL SIGN SHORT A..SAMARITAN VOWEL SIGN U +0828 ; XID_Continue # Lm SAMARITAN MODIFIER LETTER I +0829..082D ; XID_Continue # Mn [5] SAMARITAN VOWEL SIGN LONG I..SAMARITAN MARK NEQUDAA +0900..0902 ; XID_Continue # Mn [3] DEVANAGARI SIGN INVERTED CANDRABINDU..DEVANAGARI SIGN ANUSVARA +0903 ; XID_Continue # Mc DEVANAGARI SIGN VISARGA +0904..0939 
; XID_Continue # Lo [54] DEVANAGARI LETTER SHORT A..DEVANAGARI LETTER HA +093C ; XID_Continue # Mn DEVANAGARI SIGN NUKTA +093D ; XID_Continue # Lo DEVANAGARI SIGN AVAGRAHA +093E..0940 ; XID_Continue # Mc [3] DEVANAGARI VOWEL SIGN AA..DEVANAGARI VOWEL SIGN II +0941..0948 ; XID_Continue # Mn [8] DEVANAGARI VOWEL SIGN U..DEVANAGARI VOWEL SIGN AI +0949..094C ; XID_Continue # Mc [4] DEVANAGARI VOWEL SIGN CANDRA O..DEVANAGARI VOWEL SIGN AU +094D ; XID_Continue # Mn DEVANAGARI SIGN VIRAMA +094E ; XID_Continue # Mc DEVANAGARI VOWEL SIGN PRISHTHAMATRA E +0950 ; XID_Continue # Lo DEVANAGARI OM +0951..0955 ; XID_Continue # Mn [5] DEVANAGARI STRESS SIGN UDATTA..DEVANAGARI VOWEL SIGN CANDRA LONG E +0958..0961 ; XID_Continue # Lo [10] DEVANAGARI LETTER QA..DEVANAGARI LETTER VOCALIC LL +0962..0963 ; XID_Continue # Mn [2] DEVANAGARI VOWEL SIGN VOCALIC L..DEVANAGARI VOWEL SIGN VOCALIC LL +0966..096F ; XID_Continue # Nd [10] DEVANAGARI DIGIT ZERO..DEVANAGARI DIGIT NINE +0971 ; XID_Continue # Lm DEVANAGARI SIGN HIGH SPACING DOT +0972 ; XID_Continue # Lo DEVANAGARI LETTER CANDRA A +0979..097F ; XID_Continue # Lo [7] DEVANAGARI LETTER ZHA..DEVANAGARI LETTER BBA +0981 ; XID_Continue # Mn BENGALI SIGN CANDRABINDU +0982..0983 ; XID_Continue # Mc [2] BENGALI SIGN ANUSVARA..BENGALI SIGN VISARGA +0985..098C ; XID_Continue # Lo [8] BENGALI LETTER A..BENGALI LETTER VOCALIC L +098F..0990 ; XID_Continue # Lo [2] BENGALI LETTER E..BENGALI LETTER AI +0993..09A8 ; XID_Continue # Lo [22] BENGALI LETTER O..BENGALI LETTER NA +09AA..09B0 ; XID_Continue # Lo [7] BENGALI LETTER PA..BENGALI LETTER RA +09B2 ; XID_Continue # Lo BENGALI LETTER LA +09B6..09B9 ; XID_Continue # Lo [4] BENGALI LETTER SHA..BENGALI LETTER HA +09BC ; XID_Continue # Mn BENGALI SIGN NUKTA +09BD ; XID_Continue # Lo BENGALI SIGN AVAGRAHA +09BE..09C0 ; XID_Continue # Mc [3] BENGALI VOWEL SIGN AA..BENGALI VOWEL SIGN II +09C1..09C4 ; XID_Continue # Mn [4] BENGALI VOWEL SIGN U..BENGALI VOWEL SIGN VOCALIC RR +09C7..09C8 ; XID_Continue # Mc 
[2] BENGALI VOWEL SIGN E..BENGALI VOWEL SIGN AI +09CB..09CC ; XID_Continue # Mc [2] BENGALI VOWEL SIGN O..BENGALI VOWEL SIGN AU +09CD ; XID_Continue # Mn BENGALI SIGN VIRAMA +09CE ; XID_Continue # Lo BENGALI LETTER KHANDA TA +09D7 ; XID_Continue # Mc BENGALI AU LENGTH MARK +09DC..09DD ; XID_Continue # Lo [2] BENGALI LETTER RRA..BENGALI LETTER RHA +09DF..09E1 ; XID_Continue # Lo [3] BENGALI LETTER YYA..BENGALI LETTER VOCALIC LL +09E2..09E3 ; XID_Continue # Mn [2] BENGALI VOWEL SIGN VOCALIC L..BENGALI VOWEL SIGN VOCALIC LL +09E6..09EF ; XID_Continue # Nd [10] BENGALI DIGIT ZERO..BENGALI DIGIT NINE +09F0..09F1 ; XID_Continue # Lo [2] BENGALI LETTER RA WITH MIDDLE DIAGONAL..BENGALI LETTER RA WITH LOWER DIAGONAL +0A01..0A02 ; XID_Continue # Mn [2] GURMUKHI SIGN ADAK BINDI..GURMUKHI SIGN BINDI +0A03 ; XID_Continue # Mc GURMUKHI SIGN VISARGA +0A05..0A0A ; XID_Continue # Lo [6] GURMUKHI LETTER A..GURMUKHI LETTER UU +0A0F..0A10 ; XID_Continue # Lo [2] GURMUKHI LETTER EE..GURMUKHI LETTER AI +0A13..0A28 ; XID_Continue # Lo [22] GURMUKHI LETTER OO..GURMUKHI LETTER NA +0A2A..0A30 ; XID_Continue # Lo [7] GURMUKHI LETTER PA..GURMUKHI LETTER RA +0A32..0A33 ; XID_Continue # Lo [2] GURMUKHI LETTER LA..GURMUKHI LETTER LLA +0A35..0A36 ; XID_Continue # Lo [2] GURMUKHI LETTER VA..GURMUKHI LETTER SHA +0A38..0A39 ; XID_Continue # Lo [2] GURMUKHI LETTER SA..GURMUKHI LETTER HA +0A3C ; XID_Continue # Mn GURMUKHI SIGN NUKTA +0A3E..0A40 ; XID_Continue # Mc [3] GURMUKHI VOWEL SIGN AA..GURMUKHI VOWEL SIGN II +0A41..0A42 ; XID_Continue # Mn [2] GURMUKHI VOWEL SIGN U..GURMUKHI VOWEL SIGN UU +0A47..0A48 ; XID_Continue # Mn [2] GURMUKHI VOWEL SIGN EE..GURMUKHI VOWEL SIGN AI +0A4B..0A4D ; XID_Continue # Mn [3] GURMUKHI VOWEL SIGN OO..GURMUKHI SIGN VIRAMA +0A51 ; XID_Continue # Mn GURMUKHI SIGN UDAAT +0A59..0A5C ; XID_Continue # Lo [4] GURMUKHI LETTER KHHA..GURMUKHI LETTER RRA +0A5E ; XID_Continue # Lo GURMUKHI LETTER FA +0A66..0A6F ; XID_Continue # Nd [10] GURMUKHI DIGIT ZERO..GURMUKHI DIGIT NINE 
+0A70..0A71 ; XID_Continue # Mn [2] GURMUKHI TIPPI..GURMUKHI ADDAK +0A72..0A74 ; XID_Continue # Lo [3] GURMUKHI IRI..GURMUKHI EK ONKAR +0A75 ; XID_Continue # Mn GURMUKHI SIGN YAKASH +0A81..0A82 ; XID_Continue # Mn [2] GUJARATI SIGN CANDRABINDU..GUJARATI SIGN ANUSVARA +0A83 ; XID_Continue # Mc GUJARATI SIGN VISARGA +0A85..0A8D ; XID_Continue # Lo [9] GUJARATI LETTER A..GUJARATI VOWEL CANDRA E +0A8F..0A91 ; XID_Continue # Lo [3] GUJARATI LETTER E..GUJARATI VOWEL CANDRA O +0A93..0AA8 ; XID_Continue # Lo [22] GUJARATI LETTER O..GUJARATI LETTER NA +0AAA..0AB0 ; XID_Continue # Lo [7] GUJARATI LETTER PA..GUJARATI LETTER RA +0AB2..0AB3 ; XID_Continue # Lo [2] GUJARATI LETTER LA..GUJARATI LETTER LLA +0AB5..0AB9 ; XID_Continue # Lo [5] GUJARATI LETTER VA..GUJARATI LETTER HA +0ABC ; XID_Continue # Mn GUJARATI SIGN NUKTA +0ABD ; XID_Continue # Lo GUJARATI SIGN AVAGRAHA +0ABE..0AC0 ; XID_Continue # Mc [3] GUJARATI VOWEL SIGN AA..GUJARATI VOWEL SIGN II +0AC1..0AC5 ; XID_Continue # Mn [5] GUJARATI VOWEL SIGN U..GUJARATI VOWEL SIGN CANDRA E +0AC7..0AC8 ; XID_Continue # Mn [2] GUJARATI VOWEL SIGN E..GUJARATI VOWEL SIGN AI +0AC9 ; XID_Continue # Mc GUJARATI VOWEL SIGN CANDRA O +0ACB..0ACC ; XID_Continue # Mc [2] GUJARATI VOWEL SIGN O..GUJARATI VOWEL SIGN AU +0ACD ; XID_Continue # Mn GUJARATI SIGN VIRAMA +0AD0 ; XID_Continue # Lo GUJARATI OM +0AE0..0AE1 ; XID_Continue # Lo [2] GUJARATI LETTER VOCALIC RR..GUJARATI LETTER VOCALIC LL +0AE2..0AE3 ; XID_Continue # Mn [2] GUJARATI VOWEL SIGN VOCALIC L..GUJARATI VOWEL SIGN VOCALIC LL +0AE6..0AEF ; XID_Continue # Nd [10] GUJARATI DIGIT ZERO..GUJARATI DIGIT NINE +0B01 ; XID_Continue # Mn ORIYA SIGN CANDRABINDU +0B02..0B03 ; XID_Continue # Mc [2] ORIYA SIGN ANUSVARA..ORIYA SIGN VISARGA +0B05..0B0C ; XID_Continue # Lo [8] ORIYA LETTER A..ORIYA LETTER VOCALIC L +0B0F..0B10 ; XID_Continue # Lo [2] ORIYA LETTER E..ORIYA LETTER AI +0B13..0B28 ; XID_Continue # Lo [22] ORIYA LETTER O..ORIYA LETTER NA +0B2A..0B30 ; XID_Continue # Lo [7] ORIYA LETTER 
PA..ORIYA LETTER RA +0B32..0B33 ; XID_Continue # Lo [2] ORIYA LETTER LA..ORIYA LETTER LLA +0B35..0B39 ; XID_Continue # Lo [5] ORIYA LETTER VA..ORIYA LETTER HA +0B3C ; XID_Continue # Mn ORIYA SIGN NUKTA +0B3D ; XID_Continue # Lo ORIYA SIGN AVAGRAHA +0B3E ; XID_Continue # Mc ORIYA VOWEL SIGN AA +0B3F ; XID_Continue # Mn ORIYA VOWEL SIGN I +0B40 ; XID_Continue # Mc ORIYA VOWEL SIGN II +0B41..0B44 ; XID_Continue # Mn [4] ORIYA VOWEL SIGN U..ORIYA VOWEL SIGN VOCALIC RR +0B47..0B48 ; XID_Continue # Mc [2] ORIYA VOWEL SIGN E..ORIYA VOWEL SIGN AI +0B4B..0B4C ; XID_Continue # Mc [2] ORIYA VOWEL SIGN O..ORIYA VOWEL SIGN AU +0B4D ; XID_Continue # Mn ORIYA SIGN VIRAMA +0B56 ; XID_Continue # Mn ORIYA AI LENGTH MARK +0B57 ; XID_Continue # Mc ORIYA AU LENGTH MARK +0B5C..0B5D ; XID_Continue # Lo [2] ORIYA LETTER RRA..ORIYA LETTER RHA +0B5F..0B61 ; XID_Continue # Lo [3] ORIYA LETTER YYA..ORIYA LETTER VOCALIC LL +0B62..0B63 ; XID_Continue # Mn [2] ORIYA VOWEL SIGN VOCALIC L..ORIYA VOWEL SIGN VOCALIC LL +0B66..0B6F ; XID_Continue # Nd [10] ORIYA DIGIT ZERO..ORIYA DIGIT NINE +0B71 ; XID_Continue # Lo ORIYA LETTER WA +0B82 ; XID_Continue # Mn TAMIL SIGN ANUSVARA +0B83 ; XID_Continue # Lo TAMIL SIGN VISARGA +0B85..0B8A ; XID_Continue # Lo [6] TAMIL LETTER A..TAMIL LETTER UU +0B8E..0B90 ; XID_Continue # Lo [3] TAMIL LETTER E..TAMIL LETTER AI +0B92..0B95 ; XID_Continue # Lo [4] TAMIL LETTER O..TAMIL LETTER KA +0B99..0B9A ; XID_Continue # Lo [2] TAMIL LETTER NGA..TAMIL LETTER CA +0B9C ; XID_Continue # Lo TAMIL LETTER JA +0B9E..0B9F ; XID_Continue # Lo [2] TAMIL LETTER NYA..TAMIL LETTER TTA +0BA3..0BA4 ; XID_Continue # Lo [2] TAMIL LETTER NNA..TAMIL LETTER TA +0BA8..0BAA ; XID_Continue # Lo [3] TAMIL LETTER NA..TAMIL LETTER PA +0BAE..0BB9 ; XID_Continue # Lo [12] TAMIL LETTER MA..TAMIL LETTER HA +0BBE..0BBF ; XID_Continue # Mc [2] TAMIL VOWEL SIGN AA..TAMIL VOWEL SIGN I +0BC0 ; XID_Continue # Mn TAMIL VOWEL SIGN II +0BC1..0BC2 ; XID_Continue # Mc [2] TAMIL VOWEL SIGN U..TAMIL VOWEL SIGN UU 
+0BC6..0BC8 ; XID_Continue # Mc [3] TAMIL VOWEL SIGN E..TAMIL VOWEL SIGN AI +0BCA..0BCC ; XID_Continue # Mc [3] TAMIL VOWEL SIGN O..TAMIL VOWEL SIGN AU +0BCD ; XID_Continue # Mn TAMIL SIGN VIRAMA +0BD0 ; XID_Continue # Lo TAMIL OM +0BD7 ; XID_Continue # Mc TAMIL AU LENGTH MARK +0BE6..0BEF ; XID_Continue # Nd [10] TAMIL DIGIT ZERO..TAMIL DIGIT NINE +0C01..0C03 ; XID_Continue # Mc [3] TELUGU SIGN CANDRABINDU..TELUGU SIGN VISARGA +0C05..0C0C ; XID_Continue # Lo [8] TELUGU LETTER A..TELUGU LETTER VOCALIC L +0C0E..0C10 ; XID_Continue # Lo [3] TELUGU LETTER E..TELUGU LETTER AI +0C12..0C28 ; XID_Continue # Lo [23] TELUGU LETTER O..TELUGU LETTER NA +0C2A..0C33 ; XID_Continue # Lo [10] TELUGU LETTER PA..TELUGU LETTER LLA +0C35..0C39 ; XID_Continue # Lo [5] TELUGU LETTER VA..TELUGU LETTER HA +0C3D ; XID_Continue # Lo TELUGU SIGN AVAGRAHA +0C3E..0C40 ; XID_Continue # Mn [3] TELUGU VOWEL SIGN AA..TELUGU VOWEL SIGN II +0C41..0C44 ; XID_Continue # Mc [4] TELUGU VOWEL SIGN U..TELUGU VOWEL SIGN VOCALIC RR +0C46..0C48 ; XID_Continue # Mn [3] TELUGU VOWEL SIGN E..TELUGU VOWEL SIGN AI +0C4A..0C4D ; XID_Continue # Mn [4] TELUGU VOWEL SIGN O..TELUGU SIGN VIRAMA +0C55..0C56 ; XID_Continue # Mn [2] TELUGU LENGTH MARK..TELUGU AI LENGTH MARK +0C58..0C59 ; XID_Continue # Lo [2] TELUGU LETTER TSA..TELUGU LETTER DZA +0C60..0C61 ; XID_Continue # Lo [2] TELUGU LETTER VOCALIC RR..TELUGU LETTER VOCALIC LL +0C62..0C63 ; XID_Continue # Mn [2] TELUGU VOWEL SIGN VOCALIC L..TELUGU VOWEL SIGN VOCALIC LL +0C66..0C6F ; XID_Continue # Nd [10] TELUGU DIGIT ZERO..TELUGU DIGIT NINE +0C82..0C83 ; XID_Continue # Mc [2] KANNADA SIGN ANUSVARA..KANNADA SIGN VISARGA +0C85..0C8C ; XID_Continue # Lo [8] KANNADA LETTER A..KANNADA LETTER VOCALIC L +0C8E..0C90 ; XID_Continue # Lo [3] KANNADA LETTER E..KANNADA LETTER AI +0C92..0CA8 ; XID_Continue # Lo [23] KANNADA LETTER O..KANNADA LETTER NA +0CAA..0CB3 ; XID_Continue # Lo [10] KANNADA LETTER PA..KANNADA LETTER LLA +0CB5..0CB9 ; XID_Continue # Lo [5] KANNADA LETTER 
VA..KANNADA LETTER HA +0CBC ; XID_Continue # Mn KANNADA SIGN NUKTA +0CBD ; XID_Continue # Lo KANNADA SIGN AVAGRAHA +0CBE ; XID_Continue # Mc KANNADA VOWEL SIGN AA +0CBF ; XID_Continue # Mn KANNADA VOWEL SIGN I +0CC0..0CC4 ; XID_Continue # Mc [5] KANNADA VOWEL SIGN II..KANNADA VOWEL SIGN VOCALIC RR +0CC6 ; XID_Continue # Mn KANNADA VOWEL SIGN E +0CC7..0CC8 ; XID_Continue # Mc [2] KANNADA VOWEL SIGN EE..KANNADA VOWEL SIGN AI +0CCA..0CCB ; XID_Continue # Mc [2] KANNADA VOWEL SIGN O..KANNADA VOWEL SIGN OO +0CCC..0CCD ; XID_Continue # Mn [2] KANNADA VOWEL SIGN AU..KANNADA SIGN VIRAMA +0CD5..0CD6 ; XID_Continue # Mc [2] KANNADA LENGTH MARK..KANNADA AI LENGTH MARK +0CDE ; XID_Continue # Lo KANNADA LETTER FA +0CE0..0CE1 ; XID_Continue # Lo [2] KANNADA LETTER VOCALIC RR..KANNADA LETTER VOCALIC LL +0CE2..0CE3 ; XID_Continue # Mn [2] KANNADA VOWEL SIGN VOCALIC L..KANNADA VOWEL SIGN VOCALIC LL +0CE6..0CEF ; XID_Continue # Nd [10] KANNADA DIGIT ZERO..KANNADA DIGIT NINE +0D02..0D03 ; XID_Continue # Mc [2] MALAYALAM SIGN ANUSVARA..MALAYALAM SIGN VISARGA +0D05..0D0C ; XID_Continue # Lo [8] MALAYALAM LETTER A..MALAYALAM LETTER VOCALIC L +0D0E..0D10 ; XID_Continue # Lo [3] MALAYALAM LETTER E..MALAYALAM LETTER AI +0D12..0D28 ; XID_Continue # Lo [23] MALAYALAM LETTER O..MALAYALAM LETTER NA +0D2A..0D39 ; XID_Continue # Lo [16] MALAYALAM LETTER PA..MALAYALAM LETTER HA +0D3D ; XID_Continue # Lo MALAYALAM SIGN AVAGRAHA +0D3E..0D40 ; XID_Continue # Mc [3] MALAYALAM VOWEL SIGN AA..MALAYALAM VOWEL SIGN II +0D41..0D44 ; XID_Continue # Mn [4] MALAYALAM VOWEL SIGN U..MALAYALAM VOWEL SIGN VOCALIC RR +0D46..0D48 ; XID_Continue # Mc [3] MALAYALAM VOWEL SIGN E..MALAYALAM VOWEL SIGN AI +0D4A..0D4C ; XID_Continue # Mc [3] MALAYALAM VOWEL SIGN O..MALAYALAM VOWEL SIGN AU +0D4D ; XID_Continue # Mn MALAYALAM SIGN VIRAMA +0D57 ; XID_Continue # Mc MALAYALAM AU LENGTH MARK +0D60..0D61 ; XID_Continue # Lo [2] MALAYALAM LETTER VOCALIC RR..MALAYALAM LETTER VOCALIC LL +0D62..0D63 ; XID_Continue # Mn [2] 
MALAYALAM VOWEL SIGN VOCALIC L..MALAYALAM VOWEL SIGN VOCALIC LL +0D66..0D6F ; XID_Continue # Nd [10] MALAYALAM DIGIT ZERO..MALAYALAM DIGIT NINE +0D7A..0D7F ; XID_Continue # Lo [6] MALAYALAM LETTER CHILLU NN..MALAYALAM LETTER CHILLU K +0D82..0D83 ; XID_Continue # Mc [2] SINHALA SIGN ANUSVARAYA..SINHALA SIGN VISARGAYA +0D85..0D96 ; XID_Continue # Lo [18] SINHALA LETTER AYANNA..SINHALA LETTER AUYANNA +0D9A..0DB1 ; XID_Continue # Lo [24] SINHALA LETTER ALPAPRAANA KAYANNA..SINHALA LETTER DANTAJA NAYANNA +0DB3..0DBB ; XID_Continue # Lo [9] SINHALA LETTER SANYAKA DAYANNA..SINHALA LETTER RAYANNA +0DBD ; XID_Continue # Lo SINHALA LETTER DANTAJA LAYANNA +0DC0..0DC6 ; XID_Continue # Lo [7] SINHALA LETTER VAYANNA..SINHALA LETTER FAYANNA +0DCA ; XID_Continue # Mn SINHALA SIGN AL-LAKUNA +0DCF..0DD1 ; XID_Continue # Mc [3] SINHALA VOWEL SIGN AELA-PILLA..SINHALA VOWEL SIGN DIGA AEDA-PILLA +0DD2..0DD4 ; XID_Continue # Mn [3] SINHALA VOWEL SIGN KETTI IS-PILLA..SINHALA VOWEL SIGN KETTI PAA-PILLA +0DD6 ; XID_Continue # Mn SINHALA VOWEL SIGN DIGA PAA-PILLA +0DD8..0DDF ; XID_Continue # Mc [8] SINHALA VOWEL SIGN GAETTA-PILLA..SINHALA VOWEL SIGN GAYANUKITTA +0DF2..0DF3 ; XID_Continue # Mc [2] SINHALA VOWEL SIGN DIGA GAETTA-PILLA..SINHALA VOWEL SIGN DIGA GAYANUKITTA +0E01..0E30 ; XID_Continue # Lo [48] THAI CHARACTER KO KAI..THAI CHARACTER SARA A +0E31 ; XID_Continue # Mn THAI CHARACTER MAI HAN-AKAT +0E32..0E33 ; XID_Continue # Lo [2] THAI CHARACTER SARA AA..THAI CHARACTER SARA AM +0E34..0E3A ; XID_Continue # Mn [7] THAI CHARACTER SARA I..THAI CHARACTER PHINTHU +0E40..0E45 ; XID_Continue # Lo [6] THAI CHARACTER SARA E..THAI CHARACTER LAKKHANGYAO +0E46 ; XID_Continue # Lm THAI CHARACTER MAIYAMOK +0E47..0E4E ; XID_Continue # Mn [8] THAI CHARACTER MAITAIKHU..THAI CHARACTER YAMAKKAN +0E50..0E59 ; XID_Continue # Nd [10] THAI DIGIT ZERO..THAI DIGIT NINE +0E81..0E82 ; XID_Continue # Lo [2] LAO LETTER KO..LAO LETTER KHO SUNG +0E84 ; XID_Continue # Lo LAO LETTER KHO TAM +0E87..0E88 ; XID_Continue # 
Lo [2] LAO LETTER NGO..LAO LETTER CO +0E8A ; XID_Continue # Lo LAO LETTER SO TAM +0E8D ; XID_Continue # Lo LAO LETTER NYO +0E94..0E97 ; XID_Continue # Lo [4] LAO LETTER DO..LAO LETTER THO TAM +0E99..0E9F ; XID_Continue # Lo [7] LAO LETTER NO..LAO LETTER FO SUNG +0EA1..0EA3 ; XID_Continue # Lo [3] LAO LETTER MO..LAO LETTER LO LING +0EA5 ; XID_Continue # Lo LAO LETTER LO LOOT +0EA7 ; XID_Continue # Lo LAO LETTER WO +0EAA..0EAB ; XID_Continue # Lo [2] LAO LETTER SO SUNG..LAO LETTER HO SUNG +0EAD..0EB0 ; XID_Continue # Lo [4] LAO LETTER O..LAO VOWEL SIGN A +0EB1 ; XID_Continue # Mn LAO VOWEL SIGN MAI KAN +0EB2..0EB3 ; XID_Continue # Lo [2] LAO VOWEL SIGN AA..LAO VOWEL SIGN AM +0EB4..0EB9 ; XID_Continue # Mn [6] LAO VOWEL SIGN I..LAO VOWEL SIGN UU +0EBB..0EBC ; XID_Continue # Mn [2] LAO VOWEL SIGN MAI KON..LAO SEMIVOWEL SIGN LO +0EBD ; XID_Continue # Lo LAO SEMIVOWEL SIGN NYO +0EC0..0EC4 ; XID_Continue # Lo [5] LAO VOWEL SIGN E..LAO VOWEL SIGN AI +0EC6 ; XID_Continue # Lm LAO KO LA +0EC8..0ECD ; XID_Continue # Mn [6] LAO TONE MAI EK..LAO NIGGAHITA +0ED0..0ED9 ; XID_Continue # Nd [10] LAO DIGIT ZERO..LAO DIGIT NINE +0EDC..0EDD ; XID_Continue # Lo [2] LAO HO NO..LAO HO MO +0F00 ; XID_Continue # Lo TIBETAN SYLLABLE OM +0F18..0F19 ; XID_Continue # Mn [2] TIBETAN ASTROLOGICAL SIGN -KHYUD PA..TIBETAN ASTROLOGICAL SIGN SDONG TSHUGS +0F20..0F29 ; XID_Continue # Nd [10] TIBETAN DIGIT ZERO..TIBETAN DIGIT NINE +0F35 ; XID_Continue # Mn TIBETAN MARK NGAS BZUNG NYI ZLA +0F37 ; XID_Continue # Mn TIBETAN MARK NGAS BZUNG SGOR RTAGS +0F39 ; XID_Continue # Mn TIBETAN MARK TSA -PHRU +0F3E..0F3F ; XID_Continue # Mc [2] TIBETAN SIGN YAR TSHES..TIBETAN SIGN MAR TSHES +0F40..0F47 ; XID_Continue # Lo [8] TIBETAN LETTER KA..TIBETAN LETTER JA +0F49..0F6C ; XID_Continue # Lo [36] TIBETAN LETTER NYA..TIBETAN LETTER RRA +0F71..0F7E ; XID_Continue # Mn [14] TIBETAN VOWEL SIGN AA..TIBETAN SIGN RJES SU NGA RO +0F7F ; XID_Continue # Mc TIBETAN SIGN RNAM BCAD +0F80..0F84 ; XID_Continue # Mn [5] TIBETAN 
VOWEL SIGN REVERSED I..TIBETAN MARK HALANTA +0F86..0F87 ; XID_Continue # Mn [2] TIBETAN SIGN LCI RTAGS..TIBETAN SIGN YANG RTAGS +0F88..0F8B ; XID_Continue # Lo [4] TIBETAN SIGN LCE TSA CAN..TIBETAN SIGN GRU MED RGYINGS +0F90..0F97 ; XID_Continue # Mn [8] TIBETAN SUBJOINED LETTER KA..TIBETAN SUBJOINED LETTER JA +0F99..0FBC ; XID_Continue # Mn [36] TIBETAN SUBJOINED LETTER NYA..TIBETAN SUBJOINED LETTER FIXED-FORM RA +0FC6 ; XID_Continue # Mn TIBETAN SYMBOL PADMA GDAN +1000..102A ; XID_Continue # Lo [43] MYANMAR LETTER KA..MYANMAR LETTER AU +102B..102C ; XID_Continue # Mc [2] MYANMAR VOWEL SIGN TALL AA..MYANMAR VOWEL SIGN AA +102D..1030 ; XID_Continue # Mn [4] MYANMAR VOWEL SIGN I..MYANMAR VOWEL SIGN UU +1031 ; XID_Continue # Mc MYANMAR VOWEL SIGN E +1032..1037 ; XID_Continue # Mn [6] MYANMAR VOWEL SIGN AI..MYANMAR SIGN DOT BELOW +1038 ; XID_Continue # Mc MYANMAR SIGN VISARGA +1039..103A ; XID_Continue # Mn [2] MYANMAR SIGN VIRAMA..MYANMAR SIGN ASAT +103B..103C ; XID_Continue # Mc [2] MYANMAR CONSONANT SIGN MEDIAL YA..MYANMAR CONSONANT SIGN MEDIAL RA +103D..103E ; XID_Continue # Mn [2] MYANMAR CONSONANT SIGN MEDIAL WA..MYANMAR CONSONANT SIGN MEDIAL HA +103F ; XID_Continue # Lo MYANMAR LETTER GREAT SA +1040..1049 ; XID_Continue # Nd [10] MYANMAR DIGIT ZERO..MYANMAR DIGIT NINE +1050..1055 ; XID_Continue # Lo [6] MYANMAR LETTER SHA..MYANMAR LETTER VOCALIC LL +1056..1057 ; XID_Continue # Mc [2] MYANMAR VOWEL SIGN VOCALIC R..MYANMAR VOWEL SIGN VOCALIC RR +1058..1059 ; XID_Continue # Mn [2] MYANMAR VOWEL SIGN VOCALIC L..MYANMAR VOWEL SIGN VOCALIC LL +105A..105D ; XID_Continue # Lo [4] MYANMAR LETTER MON NGA..MYANMAR LETTER MON BBE +105E..1060 ; XID_Continue # Mn [3] MYANMAR CONSONANT SIGN MON MEDIAL NA..MYANMAR CONSONANT SIGN MON MEDIAL LA +1061 ; XID_Continue # Lo MYANMAR LETTER SGAW KAREN SHA +1062..1064 ; XID_Continue # Mc [3] MYANMAR VOWEL SIGN SGAW KAREN EU..MYANMAR TONE MARK SGAW KAREN KE PHO +1065..1066 ; XID_Continue # Lo [2] MYANMAR LETTER WESTERN PWO KAREN 
THA..MYANMAR LETTER WESTERN PWO KAREN PWA +1067..106D ; XID_Continue # Mc [7] MYANMAR VOWEL SIGN WESTERN PWO KAREN EU..MYANMAR SIGN WESTERN PWO KAREN TONE-5 +106E..1070 ; XID_Continue # Lo [3] MYANMAR LETTER EASTERN PWO KAREN NNA..MYANMAR LETTER EASTERN PWO KAREN GHWA +1071..1074 ; XID_Continue # Mn [4] MYANMAR VOWEL SIGN GEBA KAREN I..MYANMAR VOWEL SIGN KAYAH EE +1075..1081 ; XID_Continue # Lo [13] MYANMAR LETTER SHAN KA..MYANMAR LETTER SHAN HA +1082 ; XID_Continue # Mn MYANMAR CONSONANT SIGN SHAN MEDIAL WA +1083..1084 ; XID_Continue # Mc [2] MYANMAR VOWEL SIGN SHAN AA..MYANMAR VOWEL SIGN SHAN E +1085..1086 ; XID_Continue # Mn [2] MYANMAR VOWEL SIGN SHAN E ABOVE..MYANMAR VOWEL SIGN SHAN FINAL Y +1087..108C ; XID_Continue # Mc [6] MYANMAR SIGN SHAN TONE-2..MYANMAR SIGN SHAN COUNCIL TONE-3 +108D ; XID_Continue # Mn MYANMAR SIGN SHAN COUNCIL EMPHATIC TONE +108E ; XID_Continue # Lo MYANMAR LETTER RUMAI PALAUNG FA +108F ; XID_Continue # Mc MYANMAR SIGN RUMAI PALAUNG TONE-5 +1090..1099 ; XID_Continue # Nd [10] MYANMAR SHAN DIGIT ZERO..MYANMAR SHAN DIGIT NINE +109A..109C ; XID_Continue # Mc [3] MYANMAR SIGN KHAMTI TONE-1..MYANMAR VOWEL SIGN AITON A +109D ; XID_Continue # Mn MYANMAR VOWEL SIGN AITON AI +10A0..10C5 ; XID_Continue # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE +10D0..10FA ; XID_Continue # Lo [43] GEORGIAN LETTER AN..GEORGIAN LETTER AIN +10FC ; XID_Continue # Lm MODIFIER LETTER GEORGIAN NAR +1100..1248 ; XID_Continue # Lo [329] HANGUL CHOSEONG KIYEOK..ETHIOPIC SYLLABLE QWA +124A..124D ; XID_Continue # Lo [4] ETHIOPIC SYLLABLE QWI..ETHIOPIC SYLLABLE QWE +1250..1256 ; XID_Continue # Lo [7] ETHIOPIC SYLLABLE QHA..ETHIOPIC SYLLABLE QHO +1258 ; XID_Continue # Lo ETHIOPIC SYLLABLE QHWA +125A..125D ; XID_Continue # Lo [4] ETHIOPIC SYLLABLE QHWI..ETHIOPIC SYLLABLE QHWE +1260..1288 ; XID_Continue # Lo [41] ETHIOPIC SYLLABLE BA..ETHIOPIC SYLLABLE XWA +128A..128D ; XID_Continue # Lo [4] ETHIOPIC SYLLABLE XWI..ETHIOPIC SYLLABLE XWE +1290..12B0 ; 
XID_Continue # Lo [33] ETHIOPIC SYLLABLE NA..ETHIOPIC SYLLABLE KWA +12B2..12B5 ; XID_Continue # Lo [4] ETHIOPIC SYLLABLE KWI..ETHIOPIC SYLLABLE KWE +12B8..12BE ; XID_Continue # Lo [7] ETHIOPIC SYLLABLE KXA..ETHIOPIC SYLLABLE KXO +12C0 ; XID_Continue # Lo ETHIOPIC SYLLABLE KXWA +12C2..12C5 ; XID_Continue # Lo [4] ETHIOPIC SYLLABLE KXWI..ETHIOPIC SYLLABLE KXWE +12C8..12D6 ; XID_Continue # Lo [15] ETHIOPIC SYLLABLE WA..ETHIOPIC SYLLABLE PHARYNGEAL O +12D8..1310 ; XID_Continue # Lo [57] ETHIOPIC SYLLABLE ZA..ETHIOPIC SYLLABLE GWA +1312..1315 ; XID_Continue # Lo [4] ETHIOPIC SYLLABLE GWI..ETHIOPIC SYLLABLE GWE +1318..135A ; XID_Continue # Lo [67] ETHIOPIC SYLLABLE GGA..ETHIOPIC SYLLABLE FYA +135F ; XID_Continue # Mn ETHIOPIC COMBINING GEMINATION MARK +1369..1371 ; XID_Continue # No [9] ETHIOPIC DIGIT ONE..ETHIOPIC DIGIT NINE +1380..138F ; XID_Continue # Lo [16] ETHIOPIC SYLLABLE SEBATBEIT MWA..ETHIOPIC SYLLABLE PWE +13A0..13F4 ; XID_Continue # Lo [85] CHEROKEE LETTER A..CHEROKEE LETTER YV +1401..166C ; XID_Continue # Lo [620] CANADIAN SYLLABICS E..CANADIAN SYLLABICS CARRIER TTSA +166F..167F ; XID_Continue # Lo [17] CANADIAN SYLLABICS QAI..CANADIAN SYLLABICS BLACKFOOT W +1681..169A ; XID_Continue # Lo [26] OGHAM LETTER BEITH..OGHAM LETTER PEITH +16A0..16EA ; XID_Continue # Lo [75] RUNIC LETTER FEHU FEOH FE F..RUNIC LETTER X +16EE..16F0 ; XID_Continue # Nl [3] RUNIC ARLAUG SYMBOL..RUNIC BELGTHOR SYMBOL +1700..170C ; XID_Continue # Lo [13] TAGALOG LETTER A..TAGALOG LETTER YA +170E..1711 ; XID_Continue # Lo [4] TAGALOG LETTER LA..TAGALOG LETTER HA +1712..1714 ; XID_Continue # Mn [3] TAGALOG VOWEL SIGN I..TAGALOG SIGN VIRAMA +1720..1731 ; XID_Continue # Lo [18] HANUNOO LETTER A..HANUNOO LETTER HA +1732..1734 ; XID_Continue # Mn [3] HANUNOO VOWEL SIGN I..HANUNOO SIGN PAMUDPOD +1740..1751 ; XID_Continue # Lo [18] BUHID LETTER A..BUHID LETTER HA +1752..1753 ; XID_Continue # Mn [2] BUHID VOWEL SIGN I..BUHID VOWEL SIGN U +1760..176C ; XID_Continue # Lo [13] TAGBANWA LETTER 
A..TAGBANWA LETTER YA +176E..1770 ; XID_Continue # Lo [3] TAGBANWA LETTER LA..TAGBANWA LETTER SA +1772..1773 ; XID_Continue # Mn [2] TAGBANWA VOWEL SIGN I..TAGBANWA VOWEL SIGN U +1780..17B3 ; XID_Continue # Lo [52] KHMER LETTER KA..KHMER INDEPENDENT VOWEL QAU +17B6 ; XID_Continue # Mc KHMER VOWEL SIGN AA +17B7..17BD ; XID_Continue # Mn [7] KHMER VOWEL SIGN I..KHMER VOWEL SIGN UA +17BE..17C5 ; XID_Continue # Mc [8] KHMER VOWEL SIGN OE..KHMER VOWEL SIGN AU +17C6 ; XID_Continue # Mn KHMER SIGN NIKAHIT +17C7..17C8 ; XID_Continue # Mc [2] KHMER SIGN REAHMUK..KHMER SIGN YUUKALEAPINTU +17C9..17D3 ; XID_Continue # Mn [11] KHMER SIGN MUUSIKATOAN..KHMER SIGN BATHAMASAT +17D7 ; XID_Continue # Lm KHMER SIGN LEK TOO +17DC ; XID_Continue # Lo KHMER SIGN AVAKRAHASANYA +17DD ; XID_Continue # Mn KHMER SIGN ATTHACAN +17E0..17E9 ; XID_Continue # Nd [10] KHMER DIGIT ZERO..KHMER DIGIT NINE +180B..180D ; XID_Continue # Mn [3] MONGOLIAN FREE VARIATION SELECTOR ONE..MONGOLIAN FREE VARIATION SELECTOR THREE +1810..1819 ; XID_Continue # Nd [10] MONGOLIAN DIGIT ZERO..MONGOLIAN DIGIT NINE +1820..1842 ; XID_Continue # Lo [35] MONGOLIAN LETTER A..MONGOLIAN LETTER CHI +1843 ; XID_Continue # Lm MONGOLIAN LETTER TODO LONG VOWEL SIGN +1844..1877 ; XID_Continue # Lo [52] MONGOLIAN LETTER TODO E..MONGOLIAN LETTER MANCHU ZHA +1880..18A8 ; XID_Continue # Lo [41] MONGOLIAN LETTER ALI GALI ANUSVARA ONE..MONGOLIAN LETTER MANCHU ALI GALI BHA +18A9 ; XID_Continue # Mn MONGOLIAN LETTER ALI GALI DAGALGA +18AA ; XID_Continue # Lo MONGOLIAN LETTER MANCHU ALI GALI LHA +18B0..18F5 ; XID_Continue # Lo [70] CANADIAN SYLLABICS OY..CANADIAN SYLLABICS CARRIER DENTAL S +1900..191C ; XID_Continue # Lo [29] LIMBU VOWEL-CARRIER LETTER..LIMBU LETTER HA +1920..1922 ; XID_Continue # Mn [3] LIMBU VOWEL SIGN A..LIMBU VOWEL SIGN U +1923..1926 ; XID_Continue # Mc [4] LIMBU VOWEL SIGN EE..LIMBU VOWEL SIGN AU +1927..1928 ; XID_Continue # Mn [2] LIMBU VOWEL SIGN E..LIMBU VOWEL SIGN O +1929..192B ; XID_Continue # Mc [3] LIMBU 
SUBJOINED LETTER YA..LIMBU SUBJOINED LETTER WA +1930..1931 ; XID_Continue # Mc [2] LIMBU SMALL LETTER KA..LIMBU SMALL LETTER NGA +1932 ; XID_Continue # Mn LIMBU SMALL LETTER ANUSVARA +1933..1938 ; XID_Continue # Mc [6] LIMBU SMALL LETTER TA..LIMBU SMALL LETTER LA +1939..193B ; XID_Continue # Mn [3] LIMBU SIGN MUKPHRENG..LIMBU SIGN SA-I +1946..194F ; XID_Continue # Nd [10] LIMBU DIGIT ZERO..LIMBU DIGIT NINE +1950..196D ; XID_Continue # Lo [30] TAI LE LETTER KA..TAI LE LETTER AI +1970..1974 ; XID_Continue # Lo [5] TAI LE LETTER TONE-2..TAI LE LETTER TONE-6 +1980..19AB ; XID_Continue # Lo [44] NEW TAI LUE LETTER HIGH QA..NEW TAI LUE LETTER LOW SUA +19B0..19C0 ; XID_Continue # Mc [17] NEW TAI LUE VOWEL SIGN VOWEL SHORTENER..NEW TAI LUE VOWEL SIGN IY +19C1..19C7 ; XID_Continue # Lo [7] NEW TAI LUE LETTER FINAL V..NEW TAI LUE LETTER FINAL B +19C8..19C9 ; XID_Continue # Mc [2] NEW TAI LUE TONE MARK-1..NEW TAI LUE TONE MARK-2 +19D0..19DA ; XID_Continue # Nd [11] NEW TAI LUE DIGIT ZERO..NEW TAI LUE THAM DIGIT ONE +1A00..1A16 ; XID_Continue # Lo [23] BUGINESE LETTER KA..BUGINESE LETTER HA +1A17..1A18 ; XID_Continue # Mn [2] BUGINESE VOWEL SIGN I..BUGINESE VOWEL SIGN U +1A19..1A1B ; XID_Continue # Mc [3] BUGINESE VOWEL SIGN E..BUGINESE VOWEL SIGN AE +1A20..1A54 ; XID_Continue # Lo [53] TAI THAM LETTER HIGH KA..TAI THAM LETTER GREAT SA +1A55 ; XID_Continue # Mc TAI THAM CONSONANT SIGN MEDIAL RA +1A56 ; XID_Continue # Mn TAI THAM CONSONANT SIGN MEDIAL LA +1A57 ; XID_Continue # Mc TAI THAM CONSONANT SIGN LA TANG LAI +1A58..1A5E ; XID_Continue # Mn [7] TAI THAM SIGN MAI KANG LAI..TAI THAM CONSONANT SIGN SA +1A60 ; XID_Continue # Mn TAI THAM SIGN SAKOT +1A61 ; XID_Continue # Mc TAI THAM VOWEL SIGN A +1A62 ; XID_Continue # Mn TAI THAM VOWEL SIGN MAI SAT +1A63..1A64 ; XID_Continue # Mc [2] TAI THAM VOWEL SIGN AA..TAI THAM VOWEL SIGN TALL AA +1A65..1A6C ; XID_Continue # Mn [8] TAI THAM VOWEL SIGN I..TAI THAM VOWEL SIGN OA BELOW +1A6D..1A72 ; XID_Continue # Mc [6] TAI THAM VOWEL SIGN 
OY..TAI THAM VOWEL SIGN THAM AI +1A73..1A7C ; XID_Continue # Mn [10] TAI THAM VOWEL SIGN OA ABOVE..TAI THAM SIGN KHUEN-LUE KARAN +1A7F ; XID_Continue # Mn TAI THAM COMBINING CRYPTOGRAMMIC DOT +1A80..1A89 ; XID_Continue # Nd [10] TAI THAM HORA DIGIT ZERO..TAI THAM HORA DIGIT NINE +1A90..1A99 ; XID_Continue # Nd [10] TAI THAM THAM DIGIT ZERO..TAI THAM THAM DIGIT NINE +1AA7 ; XID_Continue # Lm TAI THAM SIGN MAI YAMOK +1B00..1B03 ; XID_Continue # Mn [4] BALINESE SIGN ULU RICEM..BALINESE SIGN SURANG +1B04 ; XID_Continue # Mc BALINESE SIGN BISAH +1B05..1B33 ; XID_Continue # Lo [47] BALINESE LETTER AKARA..BALINESE LETTER HA +1B34 ; XID_Continue # Mn BALINESE SIGN REREKAN +1B35 ; XID_Continue # Mc BALINESE VOWEL SIGN TEDUNG +1B36..1B3A ; XID_Continue # Mn [5] BALINESE VOWEL SIGN ULU..BALINESE VOWEL SIGN RA REPA +1B3B ; XID_Continue # Mc BALINESE VOWEL SIGN RA REPA TEDUNG +1B3C ; XID_Continue # Mn BALINESE VOWEL SIGN LA LENGA +1B3D..1B41 ; XID_Continue # Mc [5] BALINESE VOWEL SIGN LA LENGA TEDUNG..BALINESE VOWEL SIGN TALING REPA TEDUNG +1B42 ; XID_Continue # Mn BALINESE VOWEL SIGN PEPET +1B43..1B44 ; XID_Continue # Mc [2] BALINESE VOWEL SIGN PEPET TEDUNG..BALINESE ADEG ADEG +1B45..1B4B ; XID_Continue # Lo [7] BALINESE LETTER KAF SASAK..BALINESE LETTER ASYURA SASAK +1B50..1B59 ; XID_Continue # Nd [10] BALINESE DIGIT ZERO..BALINESE DIGIT NINE +1B6B..1B73 ; XID_Continue # Mn [9] BALINESE MUSICAL SYMBOL COMBINING TEGEH..BALINESE MUSICAL SYMBOL COMBINING GONG +1B80..1B81 ; XID_Continue # Mn [2] SUNDANESE SIGN PANYECEK..SUNDANESE SIGN PANGLAYAR +1B82 ; XID_Continue # Mc SUNDANESE SIGN PANGWISAD +1B83..1BA0 ; XID_Continue # Lo [30] SUNDANESE LETTER A..SUNDANESE LETTER HA +1BA1 ; XID_Continue # Mc SUNDANESE CONSONANT SIGN PAMINGKAL +1BA2..1BA5 ; XID_Continue # Mn [4] SUNDANESE CONSONANT SIGN PANYAKRA..SUNDANESE VOWEL SIGN PANYUKU +1BA6..1BA7 ; XID_Continue # Mc [2] SUNDANESE VOWEL SIGN PANAELAENG..SUNDANESE VOWEL SIGN PANOLONG +1BA8..1BA9 ; XID_Continue # Mn [2] SUNDANESE VOWEL 
SIGN PAMEPET..SUNDANESE VOWEL SIGN PANEULEUNG +1BAA ; XID_Continue # Mc SUNDANESE SIGN PAMAAEH +1BAE..1BAF ; XID_Continue # Lo [2] SUNDANESE LETTER KHA..SUNDANESE LETTER SYA +1BB0..1BB9 ; XID_Continue # Nd [10] SUNDANESE DIGIT ZERO..SUNDANESE DIGIT NINE +1C00..1C23 ; XID_Continue # Lo [36] LEPCHA LETTER KA..LEPCHA LETTER A +1C24..1C2B ; XID_Continue # Mc [8] LEPCHA SUBJOINED LETTER YA..LEPCHA VOWEL SIGN UU +1C2C..1C33 ; XID_Continue # Mn [8] LEPCHA VOWEL SIGN E..LEPCHA CONSONANT SIGN T +1C34..1C35 ; XID_Continue # Mc [2] LEPCHA CONSONANT SIGN NYIN-DO..LEPCHA CONSONANT SIGN KANG +1C36..1C37 ; XID_Continue # Mn [2] LEPCHA SIGN RAN..LEPCHA SIGN NUKTA +1C40..1C49 ; XID_Continue # Nd [10] LEPCHA DIGIT ZERO..LEPCHA DIGIT NINE +1C4D..1C4F ; XID_Continue # Lo [3] LEPCHA LETTER TTA..LEPCHA LETTER DDA +1C50..1C59 ; XID_Continue # Nd [10] OL CHIKI DIGIT ZERO..OL CHIKI DIGIT NINE +1C5A..1C77 ; XID_Continue # Lo [30] OL CHIKI LETTER LA..OL CHIKI LETTER OH +1C78..1C7D ; XID_Continue # Lm [6] OL CHIKI MU TTUDDAG..OL CHIKI AHAD +1CD0..1CD2 ; XID_Continue # Mn [3] VEDIC TONE KARSHANA..VEDIC TONE PRENKHA +1CD4..1CE0 ; XID_Continue # Mn [13] VEDIC SIGN YAJURVEDIC MIDLINE SVARITA..VEDIC TONE RIGVEDIC KASHMIRI INDEPENDENT SVARITA +1CE1 ; XID_Continue # Mc VEDIC TONE ATHARVAVEDIC INDEPENDENT SVARITA +1CE2..1CE8 ; XID_Continue # Mn [7] VEDIC SIGN VISARGA SVARITA..VEDIC SIGN VISARGA ANUDATTA WITH TAIL +1CE9..1CEC ; XID_Continue # Lo [4] VEDIC SIGN ANUSVARA ANTARGOMUKHA..VEDIC SIGN ANUSVARA VAMAGOMUKHA WITH TAIL +1CED ; XID_Continue # Mn VEDIC SIGN TIRYAK +1CEE..1CF1 ; XID_Continue # Lo [4] VEDIC SIGN HEXIFORM LONG ANUSVARA..VEDIC SIGN ANUSVARA UBHAYATO MUKHA +1CF2 ; XID_Continue # Mc VEDIC SIGN ARDHAVISARGA +1D00..1D2B ; XID_Continue # L& [44] LATIN LETTER SMALL CAPITAL A..CYRILLIC LETTER SMALL CAPITAL EL +1D2C..1D61 ; XID_Continue # Lm [54] MODIFIER LETTER CAPITAL A..MODIFIER LETTER SMALL CHI +1D62..1D77 ; XID_Continue # L& [22] LATIN SUBSCRIPT SMALL LETTER I..LATIN SMALL LETTER TURNED G 
+1D78 ; XID_Continue # Lm MODIFIER LETTER CYRILLIC EN +1D79..1D9A ; XID_Continue # L& [34] LATIN SMALL LETTER INSULAR G..LATIN SMALL LETTER EZH WITH RETROFLEX HOOK +1D9B..1DBF ; XID_Continue # Lm [37] MODIFIER LETTER SMALL TURNED ALPHA..MODIFIER LETTER SMALL THETA +1DC0..1DE6 ; XID_Continue # Mn [39] COMBINING DOTTED GRAVE ACCENT..COMBINING LATIN SMALL LETTER Z +1DFD..1DFF ; XID_Continue # Mn [3] COMBINING ALMOST EQUAL TO BELOW..COMBINING RIGHT ARROWHEAD AND DOWN ARROWHEAD BELOW +1E00..1F15 ; XID_Continue # L& [278] LATIN CAPITAL LETTER A WITH RING BELOW..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA +1F18..1F1D ; XID_Continue # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA +1F20..1F45 ; XID_Continue # L& [38] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA +1F48..1F4D ; XID_Continue # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F50..1F57 ; XID_Continue # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F59 ; XID_Continue # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; XID_Continue # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; XID_Continue # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F..1F7D ; XID_Continue # L& [31] GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI..GREEK SMALL LETTER OMEGA WITH OXIA +1F80..1FB4 ; XID_Continue # L& [53] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB6..1FBC ; XID_Continue # L& [7] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI +1FBE ; XID_Continue # L& GREEK PROSGEGRAMMENI +1FC2..1FC4 ; XID_Continue # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC6..1FCC ; XID_Continue # L& [7] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK 
CAPITAL LETTER ETA WITH PROSGEGRAMMENI +1FD0..1FD3 ; XID_Continue # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA +1FD6..1FDB ; XID_Continue # L& [6] GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK CAPITAL LETTER IOTA WITH OXIA +1FE0..1FEC ; XID_Continue # L& [13] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FF2..1FF4 ; XID_Continue # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF6..1FFC ; XID_Continue # L& [7] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI +203F..2040 ; XID_Continue # Pc [2] UNDERTIE..CHARACTER TIE +2054 ; XID_Continue # Pc INVERTED UNDERTIE +2071 ; XID_Continue # Lm SUPERSCRIPT LATIN SMALL LETTER I +207F ; XID_Continue # Lm SUPERSCRIPT LATIN SMALL LETTER N +2090..2094 ; XID_Continue # Lm [5] LATIN SUBSCRIPT SMALL LETTER A..LATIN SUBSCRIPT SMALL LETTER SCHWA +20D0..20DC ; XID_Continue # Mn [13] COMBINING LEFT HARPOON ABOVE..COMBINING FOUR DOTS ABOVE +20E1 ; XID_Continue # Mn COMBINING LEFT RIGHT ARROW ABOVE +20E5..20F0 ; XID_Continue # Mn [12] COMBINING REVERSE SOLIDUS OVERLAY..COMBINING ASTERISK ABOVE +2102 ; XID_Continue # L& DOUBLE-STRUCK CAPITAL C +2107 ; XID_Continue # L& EULER CONSTANT +210A..2113 ; XID_Continue # L& [10] SCRIPT SMALL G..SCRIPT SMALL L +2115 ; XID_Continue # L& DOUBLE-STRUCK CAPITAL N +2118 ; XID_Continue # So SCRIPT CAPITAL P +2119..211D ; XID_Continue # L& [5] DOUBLE-STRUCK CAPITAL P..DOUBLE-STRUCK CAPITAL R +2124 ; XID_Continue # L& DOUBLE-STRUCK CAPITAL Z +2126 ; XID_Continue # L& OHM SIGN +2128 ; XID_Continue # L& BLACK-LETTER CAPITAL Z +212A..212D ; XID_Continue # L& [4] KELVIN SIGN..BLACK-LETTER CAPITAL C +212E ; XID_Continue # So ESTIMATED SYMBOL +212F..2134 ; XID_Continue # L& [6] SCRIPT SMALL E..SCRIPT SMALL O +2135..2138 ; XID_Continue # Lo [4] ALEF SYMBOL..DALET SYMBOL +2139 ; XID_Continue # L& INFORMATION SOURCE 
+213C..213F ; XID_Continue # L& [4] DOUBLE-STRUCK SMALL PI..DOUBLE-STRUCK CAPITAL PI +2145..2149 ; XID_Continue # L& [5] DOUBLE-STRUCK ITALIC CAPITAL D..DOUBLE-STRUCK ITALIC SMALL J +214E ; XID_Continue # L& TURNED SMALL F +2160..2182 ; XID_Continue # Nl [35] ROMAN NUMERAL ONE..ROMAN NUMERAL TEN THOUSAND +2183..2184 ; XID_Continue # L& [2] ROMAN NUMERAL REVERSED ONE HUNDRED..LATIN SMALL LETTER REVERSED C +2185..2188 ; XID_Continue # Nl [4] ROMAN NUMERAL SIX LATE FORM..ROMAN NUMERAL ONE HUNDRED THOUSAND +2C00..2C2E ; XID_Continue # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC CAPITAL LETTER LATINATE MYSLITE +2C30..2C5E ; XID_Continue # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE +2C60..2C7C ; XID_Continue # L& [29] LATIN CAPITAL LETTER L WITH DOUBLE BAR..LATIN SUBSCRIPT SMALL LETTER J +2C7D ; XID_Continue # Lm MODIFIER LETTER CAPITAL V +2C7E..2CE4 ; XID_Continue # L& [103] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC SYMBOL KAI +2CEB..2CEE ; XID_Continue # L& [4] COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI..COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA +2CEF..2CF1 ; XID_Continue # Mn [3] COPTIC COMBINING NI ABOVE..COPTIC COMBINING SPIRITUS LENIS +2D00..2D25 ; XID_Continue # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE +2D30..2D65 ; XID_Continue # Lo [54] TIFINAGH LETTER YA..TIFINAGH LETTER YAZZ +2D6F ; XID_Continue # Lm TIFINAGH MODIFIER LETTER LABIALIZATION MARK +2D80..2D96 ; XID_Continue # Lo [23] ETHIOPIC SYLLABLE LOA..ETHIOPIC SYLLABLE GGWE +2DA0..2DA6 ; XID_Continue # Lo [7] ETHIOPIC SYLLABLE SSA..ETHIOPIC SYLLABLE SSO +2DA8..2DAE ; XID_Continue # Lo [7] ETHIOPIC SYLLABLE CCA..ETHIOPIC SYLLABLE CCO +2DB0..2DB6 ; XID_Continue # Lo [7] ETHIOPIC SYLLABLE ZZA..ETHIOPIC SYLLABLE ZZO +2DB8..2DBE ; XID_Continue # Lo [7] ETHIOPIC SYLLABLE CCHA..ETHIOPIC SYLLABLE CCHO +2DC0..2DC6 ; XID_Continue # Lo [7] ETHIOPIC SYLLABLE QYA..ETHIOPIC SYLLABLE QYO +2DC8..2DCE ; XID_Continue # Lo [7] ETHIOPIC SYLLABLE KYA..ETHIOPIC SYLLABLE 
KYO +2DD0..2DD6 ; XID_Continue # Lo [7] ETHIOPIC SYLLABLE XYA..ETHIOPIC SYLLABLE XYO +2DD8..2DDE ; XID_Continue # Lo [7] ETHIOPIC SYLLABLE GYA..ETHIOPIC SYLLABLE GYO +2DE0..2DFF ; XID_Continue # Mn [32] COMBINING CYRILLIC LETTER BE..COMBINING CYRILLIC LETTER IOTIFIED BIG YUS +3005 ; XID_Continue # Lm IDEOGRAPHIC ITERATION MARK +3006 ; XID_Continue # Lo IDEOGRAPHIC CLOSING MARK +3007 ; XID_Continue # Nl IDEOGRAPHIC NUMBER ZERO +3021..3029 ; XID_Continue # Nl [9] HANGZHOU NUMERAL ONE..HANGZHOU NUMERAL NINE +302A..302F ; XID_Continue # Mn [6] IDEOGRAPHIC LEVEL TONE MARK..HANGUL DOUBLE DOT TONE MARK +3031..3035 ; XID_Continue # Lm [5] VERTICAL KANA REPEAT MARK..VERTICAL KANA REPEAT MARK LOWER HALF +3038..303A ; XID_Continue # Nl [3] HANGZHOU NUMERAL TEN..HANGZHOU NUMERAL THIRTY +303B ; XID_Continue # Lm VERTICAL IDEOGRAPHIC ITERATION MARK +303C ; XID_Continue # Lo MASU MARK +3041..3096 ; XID_Continue # Lo [86] HIRAGANA LETTER SMALL A..HIRAGANA LETTER SMALL KE +3099..309A ; XID_Continue # Mn [2] COMBINING KATAKANA-HIRAGANA VOICED SOUND MARK..COMBINING KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK +309D..309E ; XID_Continue # Lm [2] HIRAGANA ITERATION MARK..HIRAGANA VOICED ITERATION MARK +309F ; XID_Continue # Lo HIRAGANA DIGRAPH YORI +30A1..30FA ; XID_Continue # Lo [90] KATAKANA LETTER SMALL A..KATAKANA LETTER VO +30FC..30FE ; XID_Continue # Lm [3] KATAKANA-HIRAGANA PROLONGED SOUND MARK..KATAKANA VOICED ITERATION MARK +30FF ; XID_Continue # Lo KATAKANA DIGRAPH KOTO +3105..312D ; XID_Continue # Lo [41] BOPOMOFO LETTER B..BOPOMOFO LETTER IH +3131..318E ; XID_Continue # Lo [94] HANGUL LETTER KIYEOK..HANGUL LETTER ARAEAE +31A0..31B7 ; XID_Continue # Lo [24] BOPOMOFO LETTER BU..BOPOMOFO FINAL LETTER H +31F0..31FF ; XID_Continue # Lo [16] KATAKANA LETTER SMALL KU..KATAKANA LETTER SMALL RO +3400..4DB5 ; XID_Continue # Lo [6582] CJK UNIFIED IDEOGRAPH-3400..CJK UNIFIED IDEOGRAPH-4DB5 +4E00..9FCB ; XID_Continue # Lo [20940] CJK UNIFIED IDEOGRAPH-4E00..CJK UNIFIED IDEOGRAPH-9FCB 
+A000..A014 ; XID_Continue # Lo [21] YI SYLLABLE IT..YI SYLLABLE E +A015 ; XID_Continue # Lm YI SYLLABLE WU +A016..A48C ; XID_Continue # Lo [1143] YI SYLLABLE BIT..YI SYLLABLE YYR +A4D0..A4F7 ; XID_Continue # Lo [40] LISU LETTER BA..LISU LETTER OE +A4F8..A4FD ; XID_Continue # Lm [6] LISU LETTER TONE MYA TI..LISU LETTER TONE MYA JEU +A500..A60B ; XID_Continue # Lo [268] VAI SYLLABLE EE..VAI SYLLABLE NG +A60C ; XID_Continue # Lm VAI SYLLABLE LENGTHENER +A610..A61F ; XID_Continue # Lo [16] VAI SYLLABLE NDOLE FA..VAI SYMBOL JONG +A620..A629 ; XID_Continue # Nd [10] VAI DIGIT ZERO..VAI DIGIT NINE +A62A..A62B ; XID_Continue # Lo [2] VAI SYLLABLE NDOLE MA..VAI SYLLABLE NDOLE DO +A640..A65F ; XID_Continue # L& [32] CYRILLIC CAPITAL LETTER ZEMLYA..CYRILLIC SMALL LETTER YN +A662..A66D ; XID_Continue # L& [12] CYRILLIC CAPITAL LETTER SOFT DE..CYRILLIC SMALL LETTER DOUBLE MONOCULAR O +A66E ; XID_Continue # Lo CYRILLIC LETTER MULTIOCULAR O +A66F ; XID_Continue # Mn COMBINING CYRILLIC VZMET +A67C..A67D ; XID_Continue # Mn [2] COMBINING CYRILLIC KAVYKA..COMBINING CYRILLIC PAYEROK +A67F ; XID_Continue # Lm CYRILLIC PAYEROK +A680..A697 ; XID_Continue # L& [24] CYRILLIC CAPITAL LETTER DWE..CYRILLIC SMALL LETTER SHWE +A6A0..A6E5 ; XID_Continue # Lo [70] BAMUM LETTER A..BAMUM LETTER KI +A6E6..A6EF ; XID_Continue # Nl [10] BAMUM LETTER MO..BAMUM LETTER KOGHOM +A6F0..A6F1 ; XID_Continue # Mn [2] BAMUM COMBINING MARK KOQNDON..BAMUM COMBINING MARK TUKWENTIS +A717..A71F ; XID_Continue # Lm [9] MODIFIER LETTER DOT VERTICAL BAR..MODIFIER LETTER LOW INVERTED EXCLAMATION MARK +A722..A76F ; XID_Continue # L& [78] LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF..LATIN SMALL LETTER CON +A770 ; XID_Continue # Lm MODIFIER LETTER US +A771..A787 ; XID_Continue # L& [23] LATIN SMALL LETTER DUM..LATIN SMALL LETTER INSULAR T +A788 ; XID_Continue # Lm MODIFIER LETTER LOW CIRCUMFLEX ACCENT +A78B..A78C ; XID_Continue # L& [2] LATIN CAPITAL LETTER SALTILLO..LATIN SMALL LETTER SALTILLO +A7FB..A801 ; XID_Continue # 
Lo [7] LATIN EPIGRAPHIC LETTER REVERSED F..SYLOTI NAGRI LETTER I +A802 ; XID_Continue # Mn SYLOTI NAGRI SIGN DVISVARA +A803..A805 ; XID_Continue # Lo [3] SYLOTI NAGRI LETTER U..SYLOTI NAGRI LETTER O +A806 ; XID_Continue # Mn SYLOTI NAGRI SIGN HASANTA +A807..A80A ; XID_Continue # Lo [4] SYLOTI NAGRI LETTER KO..SYLOTI NAGRI LETTER GHO +A80B ; XID_Continue # Mn SYLOTI NAGRI SIGN ANUSVARA +A80C..A822 ; XID_Continue # Lo [23] SYLOTI NAGRI LETTER CO..SYLOTI NAGRI LETTER HO +A823..A824 ; XID_Continue # Mc [2] SYLOTI NAGRI VOWEL SIGN A..SYLOTI NAGRI VOWEL SIGN I +A825..A826 ; XID_Continue # Mn [2] SYLOTI NAGRI VOWEL SIGN U..SYLOTI NAGRI VOWEL SIGN E +A827 ; XID_Continue # Mc SYLOTI NAGRI VOWEL SIGN OO +A840..A873 ; XID_Continue # Lo [52] PHAGS-PA LETTER KA..PHAGS-PA LETTER CANDRABINDU +A880..A881 ; XID_Continue # Mc [2] SAURASHTRA SIGN ANUSVARA..SAURASHTRA SIGN VISARGA +A882..A8B3 ; XID_Continue # Lo [50] SAURASHTRA LETTER A..SAURASHTRA LETTER LLA +A8B4..A8C3 ; XID_Continue # Mc [16] SAURASHTRA CONSONANT SIGN HAARU..SAURASHTRA VOWEL SIGN AU +A8C4 ; XID_Continue # Mn SAURASHTRA SIGN VIRAMA +A8D0..A8D9 ; XID_Continue # Nd [10] SAURASHTRA DIGIT ZERO..SAURASHTRA DIGIT NINE +A8E0..A8F1 ; XID_Continue # Mn [18] COMBINING DEVANAGARI DIGIT ZERO..COMBINING DEVANAGARI SIGN AVAGRAHA +A8F2..A8F7 ; XID_Continue # Lo [6] DEVANAGARI SIGN SPACING CANDRABINDU..DEVANAGARI SIGN CANDRABINDU AVAGRAHA +A8FB ; XID_Continue # Lo DEVANAGARI HEADSTROKE +A900..A909 ; XID_Continue # Nd [10] KAYAH LI DIGIT ZERO..KAYAH LI DIGIT NINE +A90A..A925 ; XID_Continue # Lo [28] KAYAH LI LETTER KA..KAYAH LI LETTER OO +A926..A92D ; XID_Continue # Mn [8] KAYAH LI VOWEL UE..KAYAH LI TONE CALYA PLOPHU +A930..A946 ; XID_Continue # Lo [23] REJANG LETTER KA..REJANG LETTER A +A947..A951 ; XID_Continue # Mn [11] REJANG VOWEL SIGN I..REJANG CONSONANT SIGN R +A952..A953 ; XID_Continue # Mc [2] REJANG CONSONANT SIGN H..REJANG VIRAMA +A960..A97C ; XID_Continue # Lo [29] HANGUL CHOSEONG TIKEUT-MIEUM..HANGUL CHOSEONG 
SSANGYEORINHIEUH +A980..A982 ; XID_Continue # Mn [3] JAVANESE SIGN PANYANGGA..JAVANESE SIGN LAYAR +A983 ; XID_Continue # Mc JAVANESE SIGN WIGNYAN +A984..A9B2 ; XID_Continue # Lo [47] JAVANESE LETTER A..JAVANESE LETTER HA +A9B3 ; XID_Continue # Mn JAVANESE SIGN CECAK TELU +A9B4..A9B5 ; XID_Continue # Mc [2] JAVANESE VOWEL SIGN TARUNG..JAVANESE VOWEL SIGN TOLONG +A9B6..A9B9 ; XID_Continue # Mn [4] JAVANESE VOWEL SIGN WULU..JAVANESE VOWEL SIGN SUKU MENDUT +A9BA..A9BB ; XID_Continue # Mc [2] JAVANESE VOWEL SIGN TALING..JAVANESE VOWEL SIGN DIRGA MURE +A9BC ; XID_Continue # Mn JAVANESE VOWEL SIGN PEPET +A9BD..A9C0 ; XID_Continue # Mc [4] JAVANESE CONSONANT SIGN KERET..JAVANESE PANGKON +A9CF ; XID_Continue # Lm JAVANESE PANGRANGKEP +A9D0..A9D9 ; XID_Continue # Nd [10] JAVANESE DIGIT ZERO..JAVANESE DIGIT NINE +AA00..AA28 ; XID_Continue # Lo [41] CHAM LETTER A..CHAM LETTER HA +AA29..AA2E ; XID_Continue # Mn [6] CHAM VOWEL SIGN AA..CHAM VOWEL SIGN OE +AA2F..AA30 ; XID_Continue # Mc [2] CHAM VOWEL SIGN O..CHAM VOWEL SIGN AI +AA31..AA32 ; XID_Continue # Mn [2] CHAM VOWEL SIGN AU..CHAM VOWEL SIGN UE +AA33..AA34 ; XID_Continue # Mc [2] CHAM CONSONANT SIGN YA..CHAM CONSONANT SIGN RA +AA35..AA36 ; XID_Continue # Mn [2] CHAM CONSONANT SIGN LA..CHAM CONSONANT SIGN WA +AA40..AA42 ; XID_Continue # Lo [3] CHAM LETTER FINAL K..CHAM LETTER FINAL NG +AA43 ; XID_Continue # Mn CHAM CONSONANT SIGN FINAL NG +AA44..AA4B ; XID_Continue # Lo [8] CHAM LETTER FINAL CH..CHAM LETTER FINAL SS +AA4C ; XID_Continue # Mn CHAM CONSONANT SIGN FINAL M +AA4D ; XID_Continue # Mc CHAM CONSONANT SIGN FINAL H +AA50..AA59 ; XID_Continue # Nd [10] CHAM DIGIT ZERO..CHAM DIGIT NINE +AA60..AA6F ; XID_Continue # Lo [16] MYANMAR LETTER KHAMTI GA..MYANMAR LETTER KHAMTI FA +AA70 ; XID_Continue # Lm MYANMAR MODIFIER LETTER KHAMTI REDUPLICATION +AA71..AA76 ; XID_Continue # Lo [6] MYANMAR LETTER KHAMTI XA..MYANMAR LOGOGRAM KHAMTI HM +AA7A ; XID_Continue # Lo MYANMAR LETTER AITON RA +AA7B ; XID_Continue # Mc MYANMAR SIGN 
PAO KAREN TONE +AA80..AAAF ; XID_Continue # Lo [48] TAI VIET LETTER LOW KO..TAI VIET LETTER HIGH O +AAB0 ; XID_Continue # Mn TAI VIET MAI KANG +AAB1 ; XID_Continue # Lo TAI VIET VOWEL AA +AAB2..AAB4 ; XID_Continue # Mn [3] TAI VIET VOWEL I..TAI VIET VOWEL U +AAB5..AAB6 ; XID_Continue # Lo [2] TAI VIET VOWEL E..TAI VIET VOWEL O +AAB7..AAB8 ; XID_Continue # Mn [2] TAI VIET MAI KHIT..TAI VIET VOWEL IA +AAB9..AABD ; XID_Continue # Lo [5] TAI VIET VOWEL UEA..TAI VIET VOWEL AN +AABE..AABF ; XID_Continue # Mn [2] TAI VIET VOWEL AM..TAI VIET TONE MAI EK +AAC0 ; XID_Continue # Lo TAI VIET TONE MAI NUENG +AAC1 ; XID_Continue # Mn TAI VIET TONE MAI THO +AAC2 ; XID_Continue # Lo TAI VIET TONE MAI SONG +AADB..AADC ; XID_Continue # Lo [2] TAI VIET SYMBOL KON..TAI VIET SYMBOL NUENG +AADD ; XID_Continue # Lm TAI VIET SYMBOL SAM +ABC0..ABE2 ; XID_Continue # Lo [35] MEETEI MAYEK LETTER KOK..MEETEI MAYEK LETTER I LONSUM +ABE3..ABE4 ; XID_Continue # Mc [2] MEETEI MAYEK VOWEL SIGN ONAP..MEETEI MAYEK VOWEL SIGN INAP +ABE5 ; XID_Continue # Mn MEETEI MAYEK VOWEL SIGN ANAP +ABE6..ABE7 ; XID_Continue # Mc [2] MEETEI MAYEK VOWEL SIGN YENAP..MEETEI MAYEK VOWEL SIGN SOUNAP +ABE8 ; XID_Continue # Mn MEETEI MAYEK VOWEL SIGN UNAP +ABE9..ABEA ; XID_Continue # Mc [2] MEETEI MAYEK VOWEL SIGN CHEINAP..MEETEI MAYEK VOWEL SIGN NUNG +ABEC ; XID_Continue # Mc MEETEI MAYEK LUM IYEK +ABED ; XID_Continue # Mn MEETEI MAYEK APUN IYEK +ABF0..ABF9 ; XID_Continue # Nd [10] MEETEI MAYEK DIGIT ZERO..MEETEI MAYEK DIGIT NINE +AC00..D7A3 ; XID_Continue # Lo [11172] HANGUL SYLLABLE GA..HANGUL SYLLABLE HIH +D7B0..D7C6 ; XID_Continue # Lo [23] HANGUL JUNGSEONG O-YEO..HANGUL JUNGSEONG ARAEA-E +D7CB..D7FB ; XID_Continue # Lo [49] HANGUL JONGSEONG NIEUN-RIEUL..HANGUL JONGSEONG PHIEUPH-THIEUTH +F900..FA2D ; XID_Continue # Lo [302] CJK COMPATIBILITY IDEOGRAPH-F900..CJK COMPATIBILITY IDEOGRAPH-FA2D +FA30..FA6D ; XID_Continue # Lo [62] CJK COMPATIBILITY IDEOGRAPH-FA30..CJK COMPATIBILITY IDEOGRAPH-FA6D +FA70..FAD9 ; 
XID_Continue # Lo [106] CJK COMPATIBILITY IDEOGRAPH-FA70..CJK COMPATIBILITY IDEOGRAPH-FAD9 +FB00..FB06 ; XID_Continue # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; XID_Continue # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FB1D ; XID_Continue # Lo HEBREW LETTER YOD WITH HIRIQ +FB1E ; XID_Continue # Mn HEBREW POINT JUDEO-SPANISH VARIKA +FB1F..FB28 ; XID_Continue # Lo [10] HEBREW LIGATURE YIDDISH YOD YOD PATAH..HEBREW LETTER WIDE TAV +FB2A..FB36 ; XID_Continue # Lo [13] HEBREW LETTER SHIN WITH SHIN DOT..HEBREW LETTER ZAYIN WITH DAGESH +FB38..FB3C ; XID_Continue # Lo [5] HEBREW LETTER TET WITH DAGESH..HEBREW LETTER LAMED WITH DAGESH +FB3E ; XID_Continue # Lo HEBREW LETTER MEM WITH DAGESH +FB40..FB41 ; XID_Continue # Lo [2] HEBREW LETTER NUN WITH DAGESH..HEBREW LETTER SAMEKH WITH DAGESH +FB43..FB44 ; XID_Continue # Lo [2] HEBREW LETTER FINAL PE WITH DAGESH..HEBREW LETTER PE WITH DAGESH +FB46..FBB1 ; XID_Continue # Lo [108] HEBREW LETTER TSADI WITH DAGESH..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE FINAL FORM +FBD3..FC5D ; XID_Continue # Lo [139] ARABIC LETTER NG ISOLATED FORM..ARABIC LIGATURE ALEF MAKSURA WITH SUPERSCRIPT ALEF ISOLATED FORM +FC64..FD3D ; XID_Continue # Lo [218] ARABIC LIGATURE YEH WITH HAMZA ABOVE WITH REH FINAL FORM..ARABIC LIGATURE ALEF WITH FATHATAN ISOLATED FORM +FD50..FD8F ; XID_Continue # Lo [64] ARABIC LIGATURE TEH WITH JEEM WITH MEEM INITIAL FORM..ARABIC LIGATURE MEEM WITH KHAH WITH MEEM INITIAL FORM +FD92..FDC7 ; XID_Continue # Lo [54] ARABIC LIGATURE MEEM WITH JEEM WITH KHAH INITIAL FORM..ARABIC LIGATURE NOON WITH JEEM WITH YEH FINAL FORM +FDF0..FDF9 ; XID_Continue # Lo [10] ARABIC LIGATURE SALLA USED AS KORANIC STOP SIGN ISOLATED FORM..ARABIC LIGATURE SALLA ISOLATED FORM +FE00..FE0F ; XID_Continue # Mn [16] VARIATION SELECTOR-1..VARIATION SELECTOR-16 +FE20..FE26 ; XID_Continue # Mn [7] COMBINING LIGATURE LEFT HALF..COMBINING CONJOINING MACRON +FE33..FE34 ; XID_Continue # Pc [2] 
PRESENTATION FORM FOR VERTICAL LOW LINE..PRESENTATION FORM FOR VERTICAL WAVY LOW LINE
+FE4D..FE4F ; XID_Continue # Pc [3] DASHED LOW LINE..WAVY LOW LINE
+FE71 ; XID_Continue # Lo ARABIC TATWEEL WITH FATHATAN ABOVE
+FE73 ; XID_Continue # Lo ARABIC TAIL FRAGMENT
+FE77 ; XID_Continue # Lo ARABIC FATHA MEDIAL FORM
+FE79 ; XID_Continue # Lo ARABIC DAMMA MEDIAL FORM
+FE7B ; XID_Continue # Lo ARABIC KASRA MEDIAL FORM
+FE7D ; XID_Continue # Lo ARABIC SHADDA MEDIAL FORM
+FE7F..FEFC ; XID_Continue # Lo [126] ARABIC SUKUN MEDIAL FORM..ARABIC LIGATURE LAM WITH ALEF FINAL FORM
+FF10..FF19 ; XID_Continue # Nd [10] FULLWIDTH DIGIT ZERO..FULLWIDTH DIGIT NINE
+FF21..FF3A ; XID_Continue # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN CAPITAL LETTER Z
+FF3F ; XID_Continue # Pc FULLWIDTH LOW LINE
+FF41..FF5A ; XID_Continue # L& [26] FULLWIDTH LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z
+FF66..FF6F ; XID_Continue # Lo [10] HALFWIDTH KATAKANA LETTER WO..HALFWIDTH KATAKANA LETTER SMALL TU
+FF70 ; XID_Continue # Lm HALFWIDTH KATAKANA-HIRAGANA PROLONGED SOUND MARK
+FF71..FF9D ; XID_Continue # Lo [45] HALFWIDTH KATAKANA LETTER A..HALFWIDTH KATAKANA LETTER N
+FF9E..FF9F ; XID_Continue # Lm [2] HALFWIDTH KATAKANA VOICED SOUND MARK..HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK
+FFA0..FFBE ; XID_Continue # Lo [31] HALFWIDTH HANGUL FILLER..HALFWIDTH HANGUL LETTER HIEUH
+FFC2..FFC7 ; XID_Continue # Lo [6] HALFWIDTH HANGUL LETTER A..HALFWIDTH HANGUL LETTER E
+FFCA..FFCF ; XID_Continue # Lo [6] HALFWIDTH HANGUL LETTER YEO..HALFWIDTH HANGUL LETTER OE
+FFD2..FFD7 ; XID_Continue # Lo [6] HALFWIDTH HANGUL LETTER YO..HALFWIDTH HANGUL LETTER YU
+FFDA..FFDC ; XID_Continue # Lo [3] HALFWIDTH HANGUL LETTER EU..HALFWIDTH HANGUL LETTER I
+10000..1000B ; XID_Continue # Lo [12] LINEAR B SYLLABLE B008 A..LINEAR B SYLLABLE B046 JE
+1000D..10026 ; XID_Continue # Lo [26] LINEAR B SYLLABLE B036 JO..LINEAR B SYLLABLE B032 QO
+10028..1003A ; XID_Continue # Lo [19] LINEAR B SYLLABLE B060 RA..LINEAR B SYLLABLE B042 WO
+1003C..1003D ; XID_Continue # Lo [2] LINEAR B SYLLABLE B017 ZA..LINEAR B SYLLABLE B074 ZE
+1003F..1004D ; XID_Continue # Lo [15] LINEAR B SYLLABLE B020 ZO..LINEAR B SYLLABLE B091 TWO
+10050..1005D ; XID_Continue # Lo [14] LINEAR B SYMBOL B018..LINEAR B SYMBOL B089
+10080..100FA ; XID_Continue # Lo [123] LINEAR B IDEOGRAM B100 MAN..LINEAR B IDEOGRAM VESSEL B305
+10140..10174 ; XID_Continue # Nl [53] GREEK ACROPHONIC ATTIC ONE QUARTER..GREEK ACROPHONIC STRATIAN FIFTY MNAS
+101FD ; XID_Continue # Mn PHAISTOS DISC SIGN COMBINING OBLIQUE STROKE
+10280..1029C ; XID_Continue # Lo [29] LYCIAN LETTER A..LYCIAN LETTER X
+102A0..102D0 ; XID_Continue # Lo [49] CARIAN LETTER A..CARIAN LETTER UUU3
+10300..1031E ; XID_Continue # Lo [31] OLD ITALIC LETTER A..OLD ITALIC LETTER UU
+10330..10340 ; XID_Continue # Lo [17] GOTHIC LETTER AHSA..GOTHIC LETTER PAIRTHRA
+10341 ; XID_Continue # Nl GOTHIC LETTER NINETY
+10342..10349 ; XID_Continue # Lo [8] GOTHIC LETTER RAIDA..GOTHIC LETTER OTHAL
+1034A ; XID_Continue # Nl GOTHIC LETTER NINE HUNDRED
+10380..1039D ; XID_Continue # Lo [30] UGARITIC LETTER ALPA..UGARITIC LETTER SSU
+103A0..103C3 ; XID_Continue # Lo [36] OLD PERSIAN SIGN A..OLD PERSIAN SIGN HA
+103C8..103CF ; XID_Continue # Lo [8] OLD PERSIAN SIGN AURAMAZDAA..OLD PERSIAN SIGN BUUMISH
+103D1..103D5 ; XID_Continue # Nl [5] OLD PERSIAN NUMBER ONE..OLD PERSIAN NUMBER HUNDRED
+10400..1044F ; XID_Continue # L& [80] DESERET CAPITAL LETTER LONG I..DESERET SMALL LETTER EW
+10450..1049D ; XID_Continue # Lo [78] SHAVIAN LETTER PEEP..OSMANYA LETTER OO
+104A0..104A9 ; XID_Continue # Nd [10] OSMANYA DIGIT ZERO..OSMANYA DIGIT NINE
+10800..10805 ; XID_Continue # Lo [6] CYPRIOT SYLLABLE A..CYPRIOT SYLLABLE JA
+10808 ; XID_Continue # Lo CYPRIOT SYLLABLE JO
+1080A..10835 ; XID_Continue # Lo [44] CYPRIOT SYLLABLE KA..CYPRIOT SYLLABLE WO
+10837..10838 ; XID_Continue # Lo [2] CYPRIOT SYLLABLE XA..CYPRIOT SYLLABLE XE
+1083C ; XID_Continue # Lo CYPRIOT SYLLABLE ZA
+1083F..10855 ; XID_Continue # Lo [23] CYPRIOT SYLLABLE ZO..IMPERIAL ARAMAIC LETTER TAW
+10900..10915 ; XID_Continue # Lo [22] PHOENICIAN LETTER ALF..PHOENICIAN LETTER TAU
+10920..10939 ; XID_Continue # Lo [26] LYDIAN LETTER A..LYDIAN LETTER C
+10A00 ; XID_Continue # Lo KHAROSHTHI LETTER A
+10A01..10A03 ; XID_Continue # Mn [3] KHAROSHTHI VOWEL SIGN I..KHAROSHTHI VOWEL SIGN VOCALIC R
+10A05..10A06 ; XID_Continue # Mn [2] KHAROSHTHI VOWEL SIGN E..KHAROSHTHI VOWEL SIGN O
+10A0C..10A0F ; XID_Continue # Mn [4] KHAROSHTHI VOWEL LENGTH MARK..KHAROSHTHI SIGN VISARGA
+10A10..10A13 ; XID_Continue # Lo [4] KHAROSHTHI LETTER KA..KHAROSHTHI LETTER GHA
+10A15..10A17 ; XID_Continue # Lo [3] KHAROSHTHI LETTER CA..KHAROSHTHI LETTER JA
+10A19..10A33 ; XID_Continue # Lo [27] KHAROSHTHI LETTER NYA..KHAROSHTHI LETTER TTTHA
+10A38..10A3A ; XID_Continue # Mn [3] KHAROSHTHI SIGN BAR ABOVE..KHAROSHTHI SIGN DOT BELOW
+10A3F ; XID_Continue # Mn KHAROSHTHI VIRAMA
+10A60..10A7C ; XID_Continue # Lo [29] OLD SOUTH ARABIAN LETTER HE..OLD SOUTH ARABIAN LETTER THETH
+10B00..10B35 ; XID_Continue # Lo [54] AVESTAN LETTER A..AVESTAN LETTER HE
+10B40..10B55 ; XID_Continue # Lo [22] INSCRIPTIONAL PARTHIAN LETTER ALEPH..INSCRIPTIONAL PARTHIAN LETTER TAW
+10B60..10B72 ; XID_Continue # Lo [19] INSCRIPTIONAL PAHLAVI LETTER ALEPH..INSCRIPTIONAL PAHLAVI LETTER TAW
+10C00..10C48 ; XID_Continue # Lo [73] OLD TURKIC LETTER ORKHON A..OLD TURKIC LETTER ORKHON BASH
+11080..11081 ; XID_Continue # Mn [2] KAITHI SIGN CANDRABINDU..KAITHI SIGN ANUSVARA
+11082 ; XID_Continue # Mc KAITHI SIGN VISARGA
+11083..110AF ; XID_Continue # Lo [45] KAITHI LETTER A..KAITHI LETTER HA
+110B0..110B2 ; XID_Continue # Mc [3] KAITHI VOWEL SIGN AA..KAITHI VOWEL SIGN II
+110B3..110B6 ; XID_Continue # Mn [4] KAITHI VOWEL SIGN U..KAITHI VOWEL SIGN AI
+110B7..110B8 ; XID_Continue # Mc [2] KAITHI VOWEL SIGN O..KAITHI VOWEL SIGN AU
+110B9..110BA ; XID_Continue # Mn [2] KAITHI SIGN VIRAMA..KAITHI SIGN NUKTA
+12000..1236E ; XID_Continue # Lo [879] CUNEIFORM SIGN A..CUNEIFORM SIGN ZUM
+12400..12462 ; XID_Continue # Nl [99] CUNEIFORM NUMERIC SIGN TWO ASH..CUNEIFORM NUMERIC SIGN OLD ASSYRIAN ONE QUARTER
+13000..1342E ; XID_Continue # Lo [1071] EGYPTIAN HIEROGLYPH A001..EGYPTIAN HIEROGLYPH AA032
+1D165..1D166 ; XID_Continue # Mc [2] MUSICAL SYMBOL COMBINING STEM..MUSICAL SYMBOL COMBINING SPRECHGESANG STEM
+1D167..1D169 ; XID_Continue # Mn [3] MUSICAL SYMBOL COMBINING TREMOLO-1..MUSICAL SYMBOL COMBINING TREMOLO-3
+1D16D..1D172 ; XID_Continue # Mc [6] MUSICAL SYMBOL COMBINING AUGMENTATION DOT..MUSICAL SYMBOL COMBINING FLAG-5
+1D17B..1D182 ; XID_Continue # Mn [8] MUSICAL SYMBOL COMBINING ACCENT..MUSICAL SYMBOL COMBINING LOURE
+1D185..1D18B ; XID_Continue # Mn [7] MUSICAL SYMBOL COMBINING DOIT..MUSICAL SYMBOL COMBINING TRIPLE TONGUE
+1D1AA..1D1AD ; XID_Continue # Mn [4] MUSICAL SYMBOL COMBINING DOWN BOW..MUSICAL SYMBOL COMBINING SNAP PIZZICATO
+1D242..1D244 ; XID_Continue # Mn [3] COMBINING GREEK MUSICAL TRISEME..COMBINING GREEK MUSICAL PENTASEME
+1D400..1D454 ; XID_Continue # L& [85] MATHEMATICAL BOLD CAPITAL A..MATHEMATICAL ITALIC SMALL G
+1D456..1D49C ; XID_Continue # L& [71] MATHEMATICAL ITALIC SMALL I..MATHEMATICAL SCRIPT CAPITAL A
+1D49E..1D49F ; XID_Continue # L& [2] MATHEMATICAL SCRIPT CAPITAL C..MATHEMATICAL SCRIPT CAPITAL D
+1D4A2 ; XID_Continue # L& MATHEMATICAL SCRIPT CAPITAL G
+1D4A5..1D4A6 ; XID_Continue # L& [2] MATHEMATICAL SCRIPT CAPITAL J..MATHEMATICAL SCRIPT CAPITAL K
+1D4A9..1D4AC ; XID_Continue # L& [4] MATHEMATICAL SCRIPT CAPITAL N..MATHEMATICAL SCRIPT CAPITAL Q
+1D4AE..1D4B9 ; XID_Continue # L& [12] MATHEMATICAL SCRIPT CAPITAL S..MATHEMATICAL SCRIPT SMALL D
+1D4BB ; XID_Continue # L& MATHEMATICAL SCRIPT SMALL F
+1D4BD..1D4C3 ; XID_Continue # L& [7] MATHEMATICAL SCRIPT SMALL H..MATHEMATICAL SCRIPT SMALL N
+1D4C5..1D505 ; XID_Continue # L& [65] MATHEMATICAL SCRIPT SMALL P..MATHEMATICAL FRAKTUR CAPITAL B
+1D507..1D50A ; XID_Continue # L& [4] MATHEMATICAL FRAKTUR CAPITAL D..MATHEMATICAL FRAKTUR CAPITAL G
+1D50D..1D514 ; XID_Continue # L& [8] MATHEMATICAL FRAKTUR CAPITAL J..MATHEMATICAL FRAKTUR CAPITAL Q
+1D516..1D51C ; XID_Continue # L& [7] MATHEMATICAL FRAKTUR CAPITAL S..MATHEMATICAL FRAKTUR CAPITAL Y
+1D51E..1D539 ; XID_Continue # L& [28] MATHEMATICAL FRAKTUR SMALL A..MATHEMATICAL DOUBLE-STRUCK CAPITAL B
+1D53B..1D53E ; XID_Continue # L& [4] MATHEMATICAL DOUBLE-STRUCK CAPITAL D..MATHEMATICAL DOUBLE-STRUCK CAPITAL G
+1D540..1D544 ; XID_Continue # L& [5] MATHEMATICAL DOUBLE-STRUCK CAPITAL I..MATHEMATICAL DOUBLE-STRUCK CAPITAL M
+1D546 ; XID_Continue # L& MATHEMATICAL DOUBLE-STRUCK CAPITAL O
+1D54A..1D550 ; XID_Continue # L& [7] MATHEMATICAL DOUBLE-STRUCK CAPITAL S..MATHEMATICAL DOUBLE-STRUCK CAPITAL Y
+1D552..1D6A5 ; XID_Continue # L& [340] MATHEMATICAL DOUBLE-STRUCK SMALL A..MATHEMATICAL ITALIC SMALL DOTLESS J
+1D6A8..1D6C0 ; XID_Continue # L& [25] MATHEMATICAL BOLD CAPITAL ALPHA..MATHEMATICAL BOLD CAPITAL OMEGA
+1D6C2..1D6DA ; XID_Continue # L& [25] MATHEMATICAL BOLD SMALL ALPHA..MATHEMATICAL BOLD SMALL OMEGA
+1D6DC..1D6FA ; XID_Continue # L& [31] MATHEMATICAL BOLD EPSILON SYMBOL..MATHEMATICAL ITALIC CAPITAL OMEGA
+1D6FC..1D714 ; XID_Continue # L& [25] MATHEMATICAL ITALIC SMALL ALPHA..MATHEMATICAL ITALIC SMALL OMEGA
+1D716..1D734 ; XID_Continue # L& [31] MATHEMATICAL ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD ITALIC CAPITAL OMEGA
+1D736..1D74E ; XID_Continue # L& [25] MATHEMATICAL BOLD ITALIC SMALL ALPHA..MATHEMATICAL BOLD ITALIC SMALL OMEGA
+1D750..1D76E ; XID_Continue # L& [31] MATHEMATICAL BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD CAPITAL OMEGA
+1D770..1D788 ; XID_Continue # L& [25] MATHEMATICAL SANS-SERIF BOLD SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD SMALL OMEGA
+1D78A..1D7A8 ; XID_Continue # L& [31] MATHEMATICAL SANS-SERIF BOLD EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMEGA
+1D7AA..1D7C2 ; XID_Continue # L& [25] MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL OMEGA
+1D7C4..1D7CB ; XID_Continue # L& [8] MATHEMATICAL SANS-SERIF BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD SMALL DIGAMMA
+1D7CE..1D7FF ; XID_Continue # Nd [50] MATHEMATICAL BOLD DIGIT ZERO..MATHEMATICAL MONOSPACE DIGIT NINE
+20000..2A6D6 ; XID_Continue # Lo [42711] CJK UNIFIED IDEOGRAPH-20000..CJK UNIFIED IDEOGRAPH-2A6D6
+2A700..2B734 ; XID_Continue # Lo [4149] CJK UNIFIED IDEOGRAPH-2A700..CJK UNIFIED IDEOGRAPH-2B734
+2F800..2FA1D ; XID_Continue # Lo [542] CJK COMPATIBILITY IDEOGRAPH-2F800..CJK COMPATIBILITY IDEOGRAPH-2FA1D
+E0100..E01EF ; XID_Continue # Mn [240] VARIATION SELECTOR-17..VARIATION SELECTOR-256
+
+# Total code points: 101615
+
+# ================================================
+
+# Derived Property: Default_Ignorable_Code_Point
+# Generated from
+# Other_Default_Ignorable_Code_Point
+# + Cf (Format characters)
+# + Variation_Selector
+# - White_Space
+# - FFF9..FFFB (Annotation Characters)
+# - 0600..0603, 06DD, 070F (exceptional Cf characters that should be visible)
+
+00AD ; Default_Ignorable_Code_Point # Cf SOFT HYPHEN
+034F ; Default_Ignorable_Code_Point # Mn COMBINING GRAPHEME JOINER
+115F..1160 ; Default_Ignorable_Code_Point # Lo [2] HANGUL CHOSEONG FILLER..HANGUL JUNGSEONG FILLER
+17B4..17B5 ; Default_Ignorable_Code_Point # Cf [2] KHMER VOWEL INHERENT AQ..KHMER VOWEL INHERENT AA
+180B..180D ; Default_Ignorable_Code_Point # Mn [3] MONGOLIAN FREE VARIATION SELECTOR ONE..MONGOLIAN FREE VARIATION SELECTOR THREE
+200B..200F ; Default_Ignorable_Code_Point # Cf [5] ZERO WIDTH SPACE..RIGHT-TO-LEFT MARK
+202A..202E ; Default_Ignorable_Code_Point # Cf [5] LEFT-TO-RIGHT EMBEDDING..RIGHT-TO-LEFT OVERRIDE
+2060..2064 ; Default_Ignorable_Code_Point # Cf [5] WORD JOINER..INVISIBLE PLUS
+2065..2069 ; Default_Ignorable_Code_Point # Cn [5] ..
+206A..206F ; Default_Ignorable_Code_Point # Cf [6] INHIBIT SYMMETRIC SWAPPING..NOMINAL DIGIT SHAPES
+3164 ; Default_Ignorable_Code_Point # Lo HANGUL FILLER
+FE00..FE0F ; Default_Ignorable_Code_Point # Mn [16] VARIATION SELECTOR-1..VARIATION SELECTOR-16
+FEFF ; Default_Ignorable_Code_Point # Cf ZERO WIDTH NO-BREAK SPACE
+FFA0 ; Default_Ignorable_Code_Point # Lo HALFWIDTH HANGUL FILLER
+FFF0..FFF8 ; Default_Ignorable_Code_Point # Cn [9] ..
+1D173..1D17A ; Default_Ignorable_Code_Point # Cf [8] MUSICAL SYMBOL BEGIN BEAM..MUSICAL SYMBOL END PHRASE
+E0000 ; Default_Ignorable_Code_Point # Cn
+E0001 ; Default_Ignorable_Code_Point # Cf LANGUAGE TAG
+E0002..E001F ; Default_Ignorable_Code_Point # Cn [30] ..
+E0020..E007F ; Default_Ignorable_Code_Point # Cf [96] TAG SPACE..CANCEL TAG
+E0080..E00FF ; Default_Ignorable_Code_Point # Cn [128] ..
+E0100..E01EF ; Default_Ignorable_Code_Point # Mn [240] VARIATION SELECTOR-17..VARIATION SELECTOR-256
+E01F0..E0FFF ; Default_Ignorable_Code_Point # Cn [3600] ..
+
+# Total code points: 4167
+
+# ================================================
+
+# Derived Property: Grapheme_Extend
+# Generated from: Me + Mn + Other_Grapheme_Extend
+# Note: depending on an application's interpretation of Co (private use),
+# they may be either in Grapheme_Base, or in Grapheme_Extend, or in neither.
+
+0300..036F ; Grapheme_Extend # Mn [112] COMBINING GRAVE ACCENT..COMBINING LATIN SMALL LETTER X
+0483..0487 ; Grapheme_Extend # Mn [5] COMBINING CYRILLIC TITLO..COMBINING CYRILLIC POKRYTIE
+0488..0489 ; Grapheme_Extend # Me [2] COMBINING CYRILLIC HUNDRED THOUSANDS SIGN..COMBINING CYRILLIC MILLIONS SIGN
+0591..05BD ; Grapheme_Extend # Mn [45] HEBREW ACCENT ETNAHTA..HEBREW POINT METEG
+05BF ; Grapheme_Extend # Mn HEBREW POINT RAFE
+05C1..05C2 ; Grapheme_Extend # Mn [2] HEBREW POINT SHIN DOT..HEBREW POINT SIN DOT
+05C4..05C5 ; Grapheme_Extend # Mn [2] HEBREW MARK UPPER DOT..HEBREW MARK LOWER DOT
+05C7 ; Grapheme_Extend # Mn HEBREW POINT QAMATS QATAN
+0610..061A ; Grapheme_Extend # Mn [11] ARABIC SIGN SALLALLAHOU ALAYHE WASSALLAM..ARABIC SMALL KASRA
+064B..065E ; Grapheme_Extend # Mn [20] ARABIC FATHATAN..ARABIC FATHA WITH TWO DOTS
+0670 ; Grapheme_Extend # Mn ARABIC LETTER SUPERSCRIPT ALEF
+06D6..06DC ; Grapheme_Extend # Mn [7] ARABIC SMALL HIGH LIGATURE SAD WITH LAM WITH ALEF MAKSURA..ARABIC SMALL HIGH SEEN
+06DE ; Grapheme_Extend # Me ARABIC START OF RUB EL HIZB
+06DF..06E4 ; Grapheme_Extend # Mn [6] ARABIC SMALL HIGH ROUNDED ZERO..ARABIC SMALL HIGH MADDA
+06E7..06E8 ; Grapheme_Extend # Mn [2] ARABIC SMALL HIGH YEH..ARABIC SMALL HIGH NOON
+06EA..06ED ; Grapheme_Extend # Mn [4] ARABIC EMPTY CENTRE LOW STOP..ARABIC SMALL LOW MEEM
+0711 ; Grapheme_Extend # Mn SYRIAC LETTER SUPERSCRIPT ALAPH
+0730..074A ; Grapheme_Extend # Mn [27] SYRIAC PTHAHA ABOVE..SYRIAC BARREKH
+07A6..07B0 ; Grapheme_Extend # Mn [11] THAANA ABAFILI..THAANA SUKUN
+07EB..07F3 ; Grapheme_Extend # Mn [9] NKO COMBINING SHORT HIGH TONE..NKO COMBINING DOUBLE DOT ABOVE
+0816..0819 ; Grapheme_Extend # Mn [4] SAMARITAN MARK IN..SAMARITAN MARK DAGESH
+081B..0823 ; Grapheme_Extend # Mn [9] SAMARITAN MARK EPENTHETIC YUT..SAMARITAN VOWEL SIGN A
+0825..0827 ; Grapheme_Extend # Mn [3] SAMARITAN VOWEL SIGN SHORT A..SAMARITAN VOWEL SIGN U
+0829..082D ; Grapheme_Extend # Mn [5] SAMARITAN VOWEL SIGN LONG I..SAMARITAN MARK NEQUDAA
+0900..0902 ; Grapheme_Extend # Mn [3] DEVANAGARI SIGN INVERTED CANDRABINDU..DEVANAGARI SIGN ANUSVARA
+093C ; Grapheme_Extend # Mn DEVANAGARI SIGN NUKTA
+0941..0948 ; Grapheme_Extend # Mn [8] DEVANAGARI VOWEL SIGN U..DEVANAGARI VOWEL SIGN AI
+094D ; Grapheme_Extend # Mn DEVANAGARI SIGN VIRAMA
+0951..0955 ; Grapheme_Extend # Mn [5] DEVANAGARI STRESS SIGN UDATTA..DEVANAGARI VOWEL SIGN CANDRA LONG E
+0962..0963 ; Grapheme_Extend # Mn [2] DEVANAGARI VOWEL SIGN VOCALIC L..DEVANAGARI VOWEL SIGN VOCALIC LL
+0981 ; Grapheme_Extend # Mn BENGALI SIGN CANDRABINDU
+09BC ; Grapheme_Extend # Mn BENGALI SIGN NUKTA
+09BE ; Grapheme_Extend # Mc BENGALI VOWEL SIGN AA
+09C1..09C4 ; Grapheme_Extend # Mn [4] BENGALI VOWEL SIGN U..BENGALI VOWEL SIGN VOCALIC RR
+09CD ; Grapheme_Extend # Mn BENGALI SIGN VIRAMA
+09D7 ; Grapheme_Extend # Mc BENGALI AU LENGTH MARK
+09E2..09E3 ; Grapheme_Extend # Mn [2] BENGALI VOWEL SIGN VOCALIC L..BENGALI VOWEL SIGN VOCALIC LL
+0A01..0A02 ; Grapheme_Extend # Mn [2] GURMUKHI SIGN ADAK BINDI..GURMUKHI SIGN BINDI
+0A3C ; Grapheme_Extend # Mn GURMUKHI SIGN NUKTA
+0A41..0A42 ; Grapheme_Extend # Mn [2] GURMUKHI VOWEL SIGN U..GURMUKHI VOWEL SIGN UU
+0A47..0A48 ; Grapheme_Extend # Mn [2] GURMUKHI VOWEL SIGN EE..GURMUKHI VOWEL SIGN AI
+0A4B..0A4D ; Grapheme_Extend # Mn [3] GURMUKHI VOWEL SIGN OO..GURMUKHI SIGN VIRAMA
+0A51 ; Grapheme_Extend # Mn GURMUKHI SIGN UDAAT
+0A70..0A71 ; Grapheme_Extend # Mn [2] GURMUKHI TIPPI..GURMUKHI ADDAK
+0A75 ; Grapheme_Extend # Mn GURMUKHI SIGN YAKASH
+0A81..0A82 ; Grapheme_Extend # Mn [2] GUJARATI SIGN CANDRABINDU..GUJARATI SIGN ANUSVARA
+0ABC ; Grapheme_Extend # Mn GUJARATI SIGN NUKTA
+0AC1..0AC5 ; Grapheme_Extend # Mn [5] GUJARATI VOWEL SIGN U..GUJARATI VOWEL SIGN CANDRA E
+0AC7..0AC8 ; Grapheme_Extend # Mn [2] GUJARATI VOWEL SIGN E..GUJARATI VOWEL SIGN AI
+0ACD ; Grapheme_Extend # Mn GUJARATI SIGN VIRAMA
+0AE2..0AE3 ; Grapheme_Extend # Mn [2] GUJARATI VOWEL SIGN VOCALIC L..GUJARATI VOWEL SIGN VOCALIC LL
+0B01 ; Grapheme_Extend # Mn ORIYA SIGN CANDRABINDU
+0B3C ; Grapheme_Extend # Mn ORIYA SIGN NUKTA
+0B3E ; Grapheme_Extend # Mc ORIYA VOWEL SIGN AA
+0B3F ; Grapheme_Extend # Mn ORIYA VOWEL SIGN I
+0B41..0B44 ; Grapheme_Extend # Mn [4] ORIYA VOWEL SIGN U..ORIYA VOWEL SIGN VOCALIC RR
+0B4D ; Grapheme_Extend # Mn ORIYA SIGN VIRAMA
+0B56 ; Grapheme_Extend # Mn ORIYA AI LENGTH MARK
+0B57 ; Grapheme_Extend # Mc ORIYA AU LENGTH MARK
+0B62..0B63 ; Grapheme_Extend # Mn [2] ORIYA VOWEL SIGN VOCALIC L..ORIYA VOWEL SIGN VOCALIC LL
+0B82 ; Grapheme_Extend # Mn TAMIL SIGN ANUSVARA
+0BBE ; Grapheme_Extend # Mc TAMIL VOWEL SIGN AA
+0BC0 ; Grapheme_Extend # Mn TAMIL VOWEL SIGN II
+0BCD ; Grapheme_Extend # Mn TAMIL SIGN VIRAMA
+0BD7 ; Grapheme_Extend # Mc TAMIL AU LENGTH MARK
+0C3E..0C40 ; Grapheme_Extend # Mn [3] TELUGU VOWEL SIGN AA..TELUGU VOWEL SIGN II
+0C46..0C48 ; Grapheme_Extend # Mn [3] TELUGU VOWEL SIGN E..TELUGU VOWEL SIGN AI
+0C4A..0C4D ; Grapheme_Extend # Mn [4] TELUGU VOWEL SIGN O..TELUGU SIGN VIRAMA
+0C55..0C56 ; Grapheme_Extend # Mn [2] TELUGU LENGTH MARK..TELUGU AI LENGTH MARK
+0C62..0C63 ; Grapheme_Extend # Mn [2] TELUGU VOWEL SIGN VOCALIC L..TELUGU VOWEL SIGN VOCALIC LL
+0CBC ; Grapheme_Extend # Mn KANNADA SIGN NUKTA
+0CBF ; Grapheme_Extend # Mn KANNADA VOWEL SIGN I
+0CC2 ; Grapheme_Extend # Mc KANNADA VOWEL SIGN UU
+0CC6 ; Grapheme_Extend # Mn KANNADA VOWEL SIGN E
+0CCC..0CCD ; Grapheme_Extend # Mn [2] KANNADA VOWEL SIGN AU..KANNADA SIGN VIRAMA
+0CD5..0CD6 ; Grapheme_Extend # Mc [2] KANNADA LENGTH MARK..KANNADA AI LENGTH MARK
+0CE2..0CE3 ; Grapheme_Extend # Mn [2] KANNADA VOWEL SIGN VOCALIC L..KANNADA VOWEL SIGN VOCALIC LL
+0D3E ; Grapheme_Extend # Mc MALAYALAM VOWEL SIGN AA
+0D41..0D44 ; Grapheme_Extend # Mn [4] MALAYALAM VOWEL SIGN U..MALAYALAM VOWEL SIGN VOCALIC RR
+0D4D ; Grapheme_Extend # Mn MALAYALAM SIGN VIRAMA
+0D57 ; Grapheme_Extend # Mc MALAYALAM AU LENGTH MARK
+0D62..0D63 ; Grapheme_Extend # Mn [2] MALAYALAM VOWEL SIGN VOCALIC L..MALAYALAM VOWEL SIGN VOCALIC LL
+0DCA ; Grapheme_Extend # Mn SINHALA SIGN AL-LAKUNA
+0DCF ; Grapheme_Extend # Mc SINHALA VOWEL SIGN AELA-PILLA
+0DD2..0DD4 ; Grapheme_Extend # Mn [3] SINHALA VOWEL SIGN KETTI IS-PILLA..SINHALA VOWEL SIGN KETTI PAA-PILLA
+0DD6 ; Grapheme_Extend # Mn SINHALA VOWEL SIGN DIGA PAA-PILLA
+0DDF ; Grapheme_Extend # Mc SINHALA VOWEL SIGN GAYANUKITTA
+0E31 ; Grapheme_Extend # Mn THAI CHARACTER MAI HAN-AKAT
+0E34..0E3A ; Grapheme_Extend # Mn [7] THAI CHARACTER SARA I..THAI CHARACTER PHINTHU
+0E47..0E4E ; Grapheme_Extend # Mn [8] THAI CHARACTER MAITAIKHU..THAI CHARACTER YAMAKKAN
+0EB1 ; Grapheme_Extend # Mn LAO VOWEL SIGN MAI KAN
+0EB4..0EB9 ; Grapheme_Extend # Mn [6] LAO VOWEL SIGN I..LAO VOWEL SIGN UU
+0EBB..0EBC ; Grapheme_Extend # Mn [2] LAO VOWEL SIGN MAI KON..LAO SEMIVOWEL SIGN LO
+0EC8..0ECD ; Grapheme_Extend # Mn [6] LAO TONE MAI EK..LAO NIGGAHITA
+0F18..0F19 ; Grapheme_Extend # Mn [2] TIBETAN ASTROLOGICAL SIGN -KHYUD PA..TIBETAN ASTROLOGICAL SIGN SDONG TSHUGS
+0F35 ; Grapheme_Extend # Mn TIBETAN MARK NGAS BZUNG NYI ZLA
+0F37 ; Grapheme_Extend # Mn TIBETAN MARK NGAS BZUNG SGOR RTAGS
+0F39 ; Grapheme_Extend # Mn TIBETAN MARK TSA -PHRU
+0F71..0F7E ; Grapheme_Extend # Mn [14] TIBETAN VOWEL SIGN AA..TIBETAN SIGN RJES SU NGA RO
+0F80..0F84 ; Grapheme_Extend # Mn [5] TIBETAN VOWEL SIGN REVERSED I..TIBETAN MARK HALANTA
+0F86..0F87 ; Grapheme_Extend # Mn [2] TIBETAN SIGN LCI RTAGS..TIBETAN SIGN YANG RTAGS
+0F90..0F97 ; Grapheme_Extend # Mn [8] TIBETAN SUBJOINED LETTER KA..TIBETAN SUBJOINED LETTER JA
+0F99..0FBC ; Grapheme_Extend # Mn [36] TIBETAN SUBJOINED LETTER NYA..TIBETAN SUBJOINED LETTER FIXED-FORM RA
+0FC6 ; Grapheme_Extend # Mn TIBETAN SYMBOL PADMA GDAN
+102D..1030 ; Grapheme_Extend # Mn [4] MYANMAR VOWEL SIGN I..MYANMAR VOWEL SIGN UU
+1032..1037 ; Grapheme_Extend # Mn [6] MYANMAR VOWEL SIGN AI..MYANMAR SIGN DOT BELOW
+1039..103A ; Grapheme_Extend # Mn [2] MYANMAR SIGN VIRAMA..MYANMAR SIGN ASAT
+103D..103E ; Grapheme_Extend # Mn [2] MYANMAR CONSONANT SIGN MEDIAL WA..MYANMAR CONSONANT SIGN MEDIAL HA
+1058..1059 ; Grapheme_Extend # Mn [2] MYANMAR VOWEL SIGN VOCALIC L..MYANMAR VOWEL SIGN VOCALIC LL
+105E..1060 ; Grapheme_Extend # Mn [3] MYANMAR CONSONANT SIGN MON MEDIAL NA..MYANMAR CONSONANT SIGN MON MEDIAL LA
+1071..1074 ; Grapheme_Extend # Mn [4] MYANMAR VOWEL SIGN GEBA KAREN I..MYANMAR VOWEL SIGN KAYAH EE
+1082 ; Grapheme_Extend # Mn MYANMAR CONSONANT SIGN SHAN MEDIAL WA
+1085..1086 ; Grapheme_Extend # Mn [2] MYANMAR VOWEL SIGN SHAN E ABOVE..MYANMAR VOWEL SIGN SHAN FINAL Y
+108D ; Grapheme_Extend # Mn MYANMAR SIGN SHAN COUNCIL EMPHATIC TONE
+109D ; Grapheme_Extend # Mn MYANMAR VOWEL SIGN AITON AI
+135F ; Grapheme_Extend # Mn ETHIOPIC COMBINING GEMINATION MARK
+1712..1714 ; Grapheme_Extend # Mn [3] TAGALOG VOWEL SIGN I..TAGALOG SIGN VIRAMA
+1732..1734 ; Grapheme_Extend # Mn [3] HANUNOO VOWEL SIGN I..HANUNOO SIGN PAMUDPOD
+1752..1753 ; Grapheme_Extend # Mn [2] BUHID VOWEL SIGN I..BUHID VOWEL SIGN U
+1772..1773 ; Grapheme_Extend # Mn [2] TAGBANWA VOWEL SIGN I..TAGBANWA VOWEL SIGN U
+17B7..17BD ; Grapheme_Extend # Mn [7] KHMER VOWEL SIGN I..KHMER VOWEL SIGN UA
+17C6 ; Grapheme_Extend # Mn KHMER SIGN NIKAHIT
+17C9..17D3 ; Grapheme_Extend # Mn [11] KHMER SIGN MUUSIKATOAN..KHMER SIGN BATHAMASAT
+17DD ; Grapheme_Extend # Mn KHMER SIGN ATTHACAN
+180B..180D ; Grapheme_Extend # Mn [3] MONGOLIAN FREE VARIATION SELECTOR ONE..MONGOLIAN FREE VARIATION SELECTOR THREE
+18A9 ; Grapheme_Extend # Mn MONGOLIAN LETTER ALI GALI DAGALGA
+1920..1922 ; Grapheme_Extend # Mn [3] LIMBU VOWEL SIGN A..LIMBU VOWEL SIGN U
+1927..1928 ; Grapheme_Extend # Mn [2] LIMBU VOWEL SIGN E..LIMBU VOWEL SIGN O
+1932 ; Grapheme_Extend # Mn LIMBU SMALL LETTER ANUSVARA
+1939..193B ; Grapheme_Extend # Mn [3] LIMBU SIGN MUKPHRENG..LIMBU SIGN SA-I
+1A17..1A18 ; Grapheme_Extend # Mn [2] BUGINESE VOWEL SIGN I..BUGINESE VOWEL SIGN U
+1A56 ; Grapheme_Extend # Mn TAI THAM CONSONANT SIGN MEDIAL LA
+1A58..1A5E ; Grapheme_Extend # Mn [7] TAI THAM SIGN MAI KANG LAI..TAI THAM CONSONANT SIGN SA
+1A60 ; Grapheme_Extend # Mn TAI THAM SIGN SAKOT
+1A62 ; Grapheme_Extend # Mn TAI THAM VOWEL SIGN MAI SAT
+1A65..1A6C ; Grapheme_Extend # Mn [8] TAI THAM VOWEL SIGN I..TAI THAM VOWEL SIGN OA BELOW
+1A73..1A7C ; Grapheme_Extend # Mn [10] TAI THAM VOWEL SIGN OA ABOVE..TAI THAM SIGN KHUEN-LUE KARAN
+1A7F ; Grapheme_Extend # Mn TAI THAM COMBINING CRYPTOGRAMMIC DOT
+1B00..1B03 ; Grapheme_Extend # Mn [4] BALINESE SIGN ULU RICEM..BALINESE SIGN SURANG
+1B34 ; Grapheme_Extend # Mn BALINESE SIGN REREKAN
+1B36..1B3A ; Grapheme_Extend # Mn [5] BALINESE VOWEL SIGN ULU..BALINESE VOWEL SIGN RA REPA
+1B3C ; Grapheme_Extend # Mn BALINESE VOWEL SIGN LA LENGA
+1B42 ; Grapheme_Extend # Mn BALINESE VOWEL SIGN PEPET
+1B6B..1B73 ; Grapheme_Extend # Mn [9] BALINESE MUSICAL SYMBOL COMBINING TEGEH..BALINESE MUSICAL SYMBOL COMBINING GONG
+1B80..1B81 ; Grapheme_Extend # Mn [2] SUNDANESE SIGN PANYECEK..SUNDANESE SIGN PANGLAYAR
+1BA2..1BA5 ; Grapheme_Extend # Mn [4] SUNDANESE CONSONANT SIGN PANYAKRA..SUNDANESE VOWEL SIGN PANYUKU
+1BA8..1BA9 ; Grapheme_Extend # Mn [2] SUNDANESE VOWEL SIGN PAMEPET..SUNDANESE VOWEL SIGN PANEULEUNG
+1C2C..1C33 ; Grapheme_Extend # Mn [8] LEPCHA VOWEL SIGN E..LEPCHA CONSONANT SIGN T
+1C36..1C37 ; Grapheme_Extend # Mn [2] LEPCHA SIGN RAN..LEPCHA SIGN NUKTA
+1CD0..1CD2 ; Grapheme_Extend # Mn [3] VEDIC TONE KARSHANA..VEDIC TONE PRENKHA
+1CD4..1CE0 ; Grapheme_Extend # Mn [13] VEDIC SIGN YAJURVEDIC MIDLINE SVARITA..VEDIC TONE RIGVEDIC KASHMIRI INDEPENDENT SVARITA
+1CE2..1CE8 ; Grapheme_Extend # Mn [7] VEDIC SIGN VISARGA SVARITA..VEDIC SIGN VISARGA ANUDATTA WITH TAIL
+1CED ; Grapheme_Extend # Mn VEDIC SIGN TIRYAK
+1DC0..1DE6 ; Grapheme_Extend # Mn [39] COMBINING DOTTED GRAVE ACCENT..COMBINING LATIN SMALL LETTER Z
+1DFD..1DFF ; Grapheme_Extend # Mn [3] COMBINING ALMOST EQUAL TO BELOW..COMBINING RIGHT ARROWHEAD AND DOWN ARROWHEAD BELOW
+200C..200D ; Grapheme_Extend # Cf [2] ZERO WIDTH NON-JOINER..ZERO WIDTH JOINER
+20D0..20DC ; Grapheme_Extend # Mn [13] COMBINING LEFT HARPOON ABOVE..COMBINING FOUR DOTS ABOVE
+20DD..20E0 ; Grapheme_Extend # Me [4] COMBINING ENCLOSING CIRCLE..COMBINING ENCLOSING CIRCLE BACKSLASH
+20E1 ; Grapheme_Extend # Mn COMBINING LEFT RIGHT ARROW ABOVE
+20E2..20E4 ; Grapheme_Extend # Me [3] COMBINING ENCLOSING SCREEN..COMBINING ENCLOSING UPWARD POINTING TRIANGLE
+20E5..20F0 ; Grapheme_Extend # Mn [12] COMBINING REVERSE SOLIDUS OVERLAY..COMBINING ASTERISK ABOVE
+2CEF..2CF1 ; Grapheme_Extend # Mn [3] COPTIC COMBINING NI ABOVE..COPTIC COMBINING SPIRITUS LENIS
+2DE0..2DFF ; Grapheme_Extend # Mn [32] COMBINING CYRILLIC LETTER BE..COMBINING CYRILLIC LETTER IOTIFIED BIG YUS
+302A..302F ; Grapheme_Extend # Mn [6] IDEOGRAPHIC LEVEL TONE MARK..HANGUL DOUBLE DOT TONE MARK
+3099..309A ; Grapheme_Extend # Mn [2] COMBINING KATAKANA-HIRAGANA VOICED SOUND MARK..COMBINING KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK
+A66F ; Grapheme_Extend # Mn COMBINING CYRILLIC VZMET
+A670..A672 ; Grapheme_Extend # Me [3] COMBINING CYRILLIC TEN MILLIONS SIGN..COMBINING CYRILLIC THOUSAND MILLIONS SIGN
+A67C..A67D ; Grapheme_Extend # Mn [2] COMBINING CYRILLIC KAVYKA..COMBINING CYRILLIC PAYEROK
+A6F0..A6F1 ; Grapheme_Extend # Mn [2] BAMUM COMBINING MARK KOQNDON..BAMUM COMBINING MARK TUKWENTIS
+A802 ; Grapheme_Extend # Mn SYLOTI NAGRI SIGN DVISVARA
+A806 ; Grapheme_Extend # Mn SYLOTI NAGRI SIGN HASANTA
+A80B ; Grapheme_Extend # Mn SYLOTI NAGRI SIGN ANUSVARA
+A825..A826 ; Grapheme_Extend # Mn [2] SYLOTI NAGRI VOWEL SIGN U..SYLOTI NAGRI VOWEL SIGN E
+A8C4 ; Grapheme_Extend # Mn SAURASHTRA SIGN VIRAMA
+A8E0..A8F1 ; Grapheme_Extend # Mn [18] COMBINING DEVANAGARI DIGIT ZERO..COMBINING DEVANAGARI SIGN AVAGRAHA
+A926..A92D ; Grapheme_Extend # Mn [8] KAYAH LI VOWEL UE..KAYAH LI TONE CALYA PLOPHU
+A947..A951 ; Grapheme_Extend # Mn [11] REJANG VOWEL SIGN I..REJANG CONSONANT SIGN R
+A980..A982 ; Grapheme_Extend # Mn [3] JAVANESE SIGN PANYANGGA..JAVANESE SIGN LAYAR
+A9B3 ; Grapheme_Extend # Mn JAVANESE SIGN CECAK TELU
+A9B6..A9B9 ; Grapheme_Extend # Mn [4] JAVANESE VOWEL SIGN WULU..JAVANESE VOWEL SIGN SUKU MENDUT
+A9BC ; Grapheme_Extend # Mn JAVANESE VOWEL SIGN PEPET
+AA29..AA2E ; Grapheme_Extend # Mn [6] CHAM VOWEL SIGN AA..CHAM VOWEL SIGN OE
+AA31..AA32 ; Grapheme_Extend # Mn [2] CHAM VOWEL SIGN AU..CHAM VOWEL SIGN UE
+AA35..AA36 ; Grapheme_Extend # Mn [2] CHAM CONSONANT SIGN LA..CHAM CONSONANT SIGN WA
+AA43 ; Grapheme_Extend # Mn CHAM CONSONANT SIGN FINAL NG
+AA4C ; Grapheme_Extend # Mn CHAM CONSONANT SIGN FINAL M
+AAB0 ; Grapheme_Extend # Mn TAI VIET MAI KANG
+AAB2..AAB4 ; Grapheme_Extend # Mn [3] TAI VIET VOWEL I..TAI VIET VOWEL U
+AAB7..AAB8 ; Grapheme_Extend # Mn [2] TAI VIET MAI KHIT..TAI VIET VOWEL IA
+AABE..AABF ; Grapheme_Extend # Mn [2] TAI VIET VOWEL AM..TAI VIET TONE MAI EK
+AAC1 ; Grapheme_Extend # Mn TAI VIET TONE MAI THO
+ABE5 ; Grapheme_Extend # Mn MEETEI MAYEK VOWEL SIGN ANAP
+ABE8 ; Grapheme_Extend # Mn MEETEI MAYEK VOWEL SIGN UNAP
+ABED ; Grapheme_Extend # Mn MEETEI MAYEK APUN IYEK
+FB1E ; Grapheme_Extend # Mn HEBREW POINT JUDEO-SPANISH VARIKA
+FE00..FE0F ; Grapheme_Extend # Mn [16] VARIATION SELECTOR-1..VARIATION SELECTOR-16
+FE20..FE26 ; Grapheme_Extend # Mn [7] COMBINING LIGATURE LEFT HALF..COMBINING CONJOINING MACRON
+FF9E..FF9F ; Grapheme_Extend # Lm [2] HALFWIDTH KATAKANA VOICED SOUND MARK..HALFWIDTH KATAKANA SEMI-VOICED SOUND MARK
+101FD ; Grapheme_Extend # Mn PHAISTOS DISC SIGN COMBINING OBLIQUE STROKE
+10A01..10A03 ; Grapheme_Extend # Mn [3] KHAROSHTHI VOWEL SIGN I..KHAROSHTHI VOWEL SIGN VOCALIC R
+10A05..10A06 ; Grapheme_Extend # Mn [2] KHAROSHTHI VOWEL SIGN E..KHAROSHTHI VOWEL SIGN O
+10A0C..10A0F ; Grapheme_Extend # Mn [4] KHAROSHTHI VOWEL LENGTH MARK..KHAROSHTHI SIGN VISARGA
+10A38..10A3A ; Grapheme_Extend # Mn [3] KHAROSHTHI SIGN BAR ABOVE..KHAROSHTHI SIGN DOT BELOW
+10A3F ; Grapheme_Extend # Mn KHAROSHTHI VIRAMA
+11080..11081 ; Grapheme_Extend # Mn [2] KAITHI SIGN CANDRABINDU..KAITHI SIGN ANUSVARA
+110B3..110B6 ; Grapheme_Extend # Mn [4] KAITHI VOWEL SIGN U..KAITHI VOWEL SIGN AI
+110B9..110BA ; Grapheme_Extend # Mn [2] KAITHI SIGN VIRAMA..KAITHI SIGN NUKTA
+1D165 ; Grapheme_Extend # Mc MUSICAL SYMBOL COMBINING STEM
+1D167..1D169 ; Grapheme_Extend # Mn [3] MUSICAL SYMBOL COMBINING TREMOLO-1..MUSICAL SYMBOL COMBINING TREMOLO-3
+1D16E..1D172 ; Grapheme_Extend # Mc [5] MUSICAL SYMBOL COMBINING FLAG-1..MUSICAL SYMBOL COMBINING FLAG-5
+1D17B..1D182 ; Grapheme_Extend # Mn [8] MUSICAL SYMBOL COMBINING ACCENT..MUSICAL SYMBOL COMBINING LOURE
+1D185..1D18B ; Grapheme_Extend # Mn [7] MUSICAL SYMBOL COMBINING DOIT..MUSICAL SYMBOL COMBINING TRIPLE TONGUE
+1D1AA..1D1AD ; Grapheme_Extend # Mn [4] MUSICAL SYMBOL COMBINING DOWN BOW..MUSICAL SYMBOL COMBINING SNAP PIZZICATO
+1D242..1D244 ; Grapheme_Extend # Mn [3] COMBINING GREEK MUSICAL TRISEME..COMBINING GREEK MUSICAL PENTASEME
+E0100..E01EF ; Grapheme_Extend # Mn [240] VARIATION SELECTOR-17..VARIATION SELECTOR-256
+
+# Total code points: 1198
+
+# ================================================
+
+# Derived Property: Grapheme_Base
+# Generated from: [0..10FFFF] - Cc - Cf - Cs - Co - Cn - Zl - Zp - Grapheme_Extend
+# Note: depending on an application's interpretation of Co (private use),
+# they may be either in Grapheme_Base, or in Grapheme_Extend, or in neither.
+
+0020 ; Grapheme_Base # Zs SPACE
+0021..0023 ; Grapheme_Base # Po [3] EXCLAMATION MARK..NUMBER SIGN
+0024 ; Grapheme_Base # Sc DOLLAR SIGN
+0025..0027 ; Grapheme_Base # Po [3] PERCENT SIGN..APOSTROPHE
+0028 ; Grapheme_Base # Ps LEFT PARENTHESIS
+0029 ; Grapheme_Base # Pe RIGHT PARENTHESIS
+002A ; Grapheme_Base # Po ASTERISK
+002B ; Grapheme_Base # Sm PLUS SIGN
+002C ; Grapheme_Base # Po COMMA
+002D ; Grapheme_Base # Pd HYPHEN-MINUS
+002E..002F ; Grapheme_Base # Po [2] FULL STOP..SOLIDUS
+0030..0039 ; Grapheme_Base # Nd [10] DIGIT ZERO..DIGIT NINE
+003A..003B ; Grapheme_Base # Po [2] COLON..SEMICOLON
+003C..003E ; Grapheme_Base # Sm [3] LESS-THAN SIGN..GREATER-THAN SIGN
+003F..0040 ; Grapheme_Base # Po [2] QUESTION MARK..COMMERCIAL AT
+0041..005A ; Grapheme_Base # L& [26] LATIN CAPITAL LETTER A..LATIN CAPITAL LETTER Z
+005B ; Grapheme_Base # Ps LEFT SQUARE BRACKET
+005C ; Grapheme_Base # Po REVERSE SOLIDUS
+005D ; Grapheme_Base # Pe RIGHT SQUARE BRACKET
+005E ; Grapheme_Base # Sk CIRCUMFLEX ACCENT
+005F ; Grapheme_Base # Pc LOW LINE
+0060 ; Grapheme_Base # Sk GRAVE ACCENT
+0061..007A ; Grapheme_Base # L& [26] LATIN SMALL LETTER A..LATIN SMALL LETTER Z
+007B ; Grapheme_Base # Ps LEFT CURLY BRACKET
+007C ; Grapheme_Base # Sm VERTICAL LINE
+007D ; Grapheme_Base # Pe RIGHT CURLY BRACKET
+007E ; Grapheme_Base # Sm TILDE
+00A0 ; Grapheme_Base # Zs NO-BREAK SPACE
+00A1 ; Grapheme_Base # Po INVERTED EXCLAMATION MARK
+00A2..00A5 ; Grapheme_Base # Sc [4] CENT SIGN..YEN SIGN
+00A6..00A7 ; Grapheme_Base # So [2] BROKEN BAR..SECTION SIGN
+00A8 ; Grapheme_Base # Sk DIAERESIS
+00A9 ; Grapheme_Base # So COPYRIGHT SIGN
+00AA ; Grapheme_Base # L& FEMININE ORDINAL INDICATOR
+00AB ; Grapheme_Base # Pi LEFT-POINTING DOUBLE ANGLE QUOTATION MARK
+00AC ; Grapheme_Base # Sm NOT SIGN
+00AE ; Grapheme_Base # So REGISTERED SIGN
+00AF ; Grapheme_Base # Sk MACRON
+00B0 ; Grapheme_Base # So DEGREE SIGN
+00B1 ; Grapheme_Base # Sm PLUS-MINUS SIGN
+00B2..00B3 ; Grapheme_Base # No [2] SUPERSCRIPT TWO..SUPERSCRIPT THREE
+00B4 ; Grapheme_Base # Sk ACUTE ACCENT
+00B5 ; Grapheme_Base # L& MICRO SIGN
+00B6 ; Grapheme_Base # So PILCROW SIGN
+00B7 ; Grapheme_Base # Po MIDDLE DOT
+00B8 ; Grapheme_Base # Sk CEDILLA
+00B9 ; Grapheme_Base # No SUPERSCRIPT ONE
+00BA ; Grapheme_Base # L& MASCULINE ORDINAL INDICATOR
+00BB ; Grapheme_Base # Pf RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
+00BC..00BE ; Grapheme_Base # No [3] VULGAR FRACTION ONE QUARTER..VULGAR FRACTION THREE QUARTERS
+00BF ; Grapheme_Base # Po INVERTED QUESTION MARK
+00C0..00D6 ; Grapheme_Base # L& [23] LATIN CAPITAL LETTER A WITH GRAVE..LATIN CAPITAL LETTER O WITH DIAERESIS
+00D7 ; Grapheme_Base # Sm MULTIPLICATION SIGN
+00D8..00F6 ; Grapheme_Base # L& [31] LATIN CAPITAL LETTER O WITH STROKE..LATIN SMALL LETTER O WITH DIAERESIS
+00F7 ; Grapheme_Base # Sm DIVISION SIGN
+00F8..01BA ; Grapheme_Base # L& [195] LATIN SMALL LETTER O WITH STROKE..LATIN SMALL LETTER EZH WITH TAIL
+01BB ; Grapheme_Base # Lo LATIN LETTER TWO WITH STROKE
+01BC..01BF ; Grapheme_Base # L& [4] LATIN CAPITAL LETTER TONE FIVE..LATIN LETTER WYNN
+01C0..01C3 ; Grapheme_Base # Lo [4] LATIN LETTER DENTAL CLICK..LATIN LETTER RETROFLEX CLICK
+01C4..0293 ; Grapheme_Base # L& [208] LATIN CAPITAL LETTER DZ WITH CARON..LATIN SMALL LETTER EZH WITH CURL
+0294 ; Grapheme_Base # Lo LATIN LETTER GLOTTAL STOP
+0295..02AF ; Grapheme_Base # L& [27] LATIN LETTER PHARYNGEAL VOICED FRICATIVE..LATIN SMALL LETTER TURNED H WITH FISHHOOK AND TAIL
+02B0..02C1 ; Grapheme_Base # Lm [18] MODIFIER LETTER SMALL H..MODIFIER LETTER REVERSED GLOTTAL STOP
+02C2..02C5 ; Grapheme_Base # Sk [4] MODIFIER LETTER LEFT ARROWHEAD..MODIFIER LETTER DOWN ARROWHEAD
+02C6..02D1 ; Grapheme_Base # Lm [12] MODIFIER LETTER CIRCUMFLEX ACCENT..MODIFIER LETTER HALF TRIANGULAR COLON
+02D2..02DF ; Grapheme_Base # Sk [14] MODIFIER LETTER CENTRED RIGHT HALF RING..MODIFIER LETTER CROSS ACCENT
+02E0..02E4 ; Grapheme_Base # Lm [5] MODIFIER LETTER SMALL GAMMA..MODIFIER LETTER SMALL REVERSED GLOTTAL STOP
+02E5..02EB ; Grapheme_Base # Sk [7] MODIFIER LETTER EXTRA-HIGH TONE BAR..MODIFIER LETTER YANG DEPARTING TONE MARK
+02EC ; Grapheme_Base # Lm MODIFIER LETTER VOICING
+02ED ; Grapheme_Base # Sk MODIFIER LETTER UNASPIRATED
+02EE ; Grapheme_Base # Lm MODIFIER LETTER DOUBLE APOSTROPHE
+02EF..02FF ; Grapheme_Base # Sk [17] MODIFIER LETTER LOW DOWN ARROWHEAD..MODIFIER LETTER LOW LEFT ARROW
+0370..0373 ; Grapheme_Base # L& [4] GREEK CAPITAL LETTER HETA..GREEK SMALL LETTER ARCHAIC SAMPI
+0374 ; Grapheme_Base # Lm GREEK NUMERAL SIGN
+0375 ; Grapheme_Base # Sk GREEK LOWER NUMERAL SIGN
+0376..0377 ; Grapheme_Base # L& [2] GREEK CAPITAL LETTER PAMPHYLIAN DIGAMMA..GREEK SMALL LETTER PAMPHYLIAN DIGAMMA
+037A ; Grapheme_Base # Lm GREEK YPOGEGRAMMENI
+037B..037D ; Grapheme_Base # L& [3] GREEK SMALL REVERSED LUNATE SIGMA SYMBOL..GREEK SMALL REVERSED DOTTED LUNATE SIGMA SYMBOL
+037E ; Grapheme_Base # Po GREEK QUESTION MARK
+0384..0385 ; Grapheme_Base # Sk [2] GREEK TONOS..GREEK DIALYTIKA TONOS
+0386 ; Grapheme_Base # L& GREEK CAPITAL LETTER ALPHA WITH TONOS
+0387 ; Grapheme_Base # Po GREEK ANO TELEIA
+0388..038A ; Grapheme_Base # L& [3] GREEK CAPITAL LETTER EPSILON WITH TONOS..GREEK CAPITAL LETTER IOTA WITH TONOS
+038C ; Grapheme_Base # L& GREEK CAPITAL LETTER OMICRON WITH TONOS
+038E..03A1 ; Grapheme_Base # L& [20] GREEK CAPITAL LETTER UPSILON WITH TONOS..GREEK CAPITAL LETTER RHO
+03A3..03F5 ; Grapheme_Base # L& [83] GREEK CAPITAL LETTER SIGMA..GREEK LUNATE EPSILON SYMBOL
+03F6 ; Grapheme_Base # Sm GREEK REVERSED LUNATE EPSILON SYMBOL
+03F7..0481 ; Grapheme_Base # L& [139] GREEK CAPITAL LETTER SHO..CYRILLIC SMALL LETTER KOPPA
+0482 ; Grapheme_Base # So CYRILLIC THOUSANDS SIGN
+048A..0525 ; Grapheme_Base # L& [156] CYRILLIC CAPITAL LETTER SHORT I WITH TAIL..CYRILLIC SMALL LETTER PE WITH DESCENDER
+0531..0556 ; Grapheme_Base # L& [38] ARMENIAN CAPITAL LETTER AYB..ARMENIAN CAPITAL LETTER FEH
+0559 ; Grapheme_Base # Lm ARMENIAN MODIFIER LETTER LEFT HALF RING
+055A..055F ; Grapheme_Base # Po [6] ARMENIAN APOSTROPHE..ARMENIAN ABBREVIATION MARK
+0561..0587 ; Grapheme_Base # L& [39] ARMENIAN SMALL LETTER AYB..ARMENIAN SMALL LIGATURE ECH YIWN
+0589 ; Grapheme_Base # Po ARMENIAN FULL STOP
+058A ; Grapheme_Base # Pd ARMENIAN HYPHEN
+05BE ; Grapheme_Base # Pd HEBREW PUNCTUATION MAQAF
+05C0 ; Grapheme_Base # Po HEBREW PUNCTUATION PASEQ
+05C3 ; Grapheme_Base # Po HEBREW PUNCTUATION SOF PASUQ
+05C6 ; Grapheme_Base # Po HEBREW PUNCTUATION NUN HAFUKHA
+05D0..05EA ; Grapheme_Base # Lo [27] HEBREW LETTER ALEF..HEBREW LETTER TAV
+05F0..05F2 ; Grapheme_Base # Lo [3] HEBREW LIGATURE YIDDISH DOUBLE VAV..HEBREW LIGATURE YIDDISH DOUBLE YOD
+05F3..05F4 ; Grapheme_Base # Po [2] HEBREW PUNCTUATION GERESH..HEBREW PUNCTUATION GERSHAYIM
+0606..0608 ; Grapheme_Base # Sm [3] ARABIC-INDIC CUBE ROOT..ARABIC RAY
+0609..060A ; Grapheme_Base # Po [2] ARABIC-INDIC PER MILLE SIGN..ARABIC-INDIC PER TEN THOUSAND SIGN
+060B ; Grapheme_Base # Sc AFGHANI SIGN
+060C..060D ; Grapheme_Base # Po [2] ARABIC COMMA..ARABIC DATE SEPARATOR
+060E..060F ; Grapheme_Base # So [2] ARABIC POETIC VERSE SIGN..ARABIC SIGN MISRA
+061B ; Grapheme_Base # Po ARABIC SEMICOLON
+061E..061F ; Grapheme_Base # Po [2] ARABIC TRIPLE DOT PUNCTUATION MARK..ARABIC QUESTION MARK
+0621..063F ; Grapheme_Base # Lo [31] ARABIC LETTER HAMZA..ARABIC LETTER FARSI YEH WITH THREE DOTS ABOVE
+0640 ; Grapheme_Base # Lm ARABIC TATWEEL
+0641..064A ; Grapheme_Base # Lo [10] ARABIC LETTER FEH..ARABIC LETTER YEH
+0660..0669 ; Grapheme_Base # Nd [10] ARABIC-INDIC DIGIT ZERO..ARABIC-INDIC DIGIT NINE
+066A..066D ; Grapheme_Base # Po [4] ARABIC PERCENT SIGN..ARABIC FIVE POINTED STAR
+066E..066F ; Grapheme_Base # Lo [2] ARABIC LETTER DOTLESS BEH..ARABIC LETTER DOTLESS QAF
+0671..06D3 ; Grapheme_Base # Lo [99] ARABIC LETTER ALEF WASLA..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE
+06D4 ; Grapheme_Base # Po ARABIC FULL STOP
+06D5 ; Grapheme_Base # Lo ARABIC LETTER AE
+06E5..06E6 ; Grapheme_Base # Lm [2] ARABIC SMALL WAW..ARABIC SMALL YEH
+06E9 ; Grapheme_Base # So ARABIC PLACE OF SAJDAH
+06EE..06EF ; Grapheme_Base # Lo [2] ARABIC LETTER DAL WITH INVERTED V..ARABIC LETTER REH WITH INVERTED V
+06F0..06F9 ; Grapheme_Base # Nd [10] EXTENDED ARABIC-INDIC DIGIT ZERO..EXTENDED ARABIC-INDIC DIGIT NINE
+06FA..06FC ; Grapheme_Base # Lo [3] ARABIC LETTER SHEEN WITH DOT BELOW..ARABIC LETTER GHAIN WITH DOT BELOW
+06FD..06FE ; Grapheme_Base # So [2] ARABIC SIGN SINDHI AMPERSAND..ARABIC SIGN SINDHI POSTPOSITION MEN
+06FF ; Grapheme_Base # Lo ARABIC LETTER HEH WITH INVERTED V
+0700..070D ; Grapheme_Base # Po [14] SYRIAC END OF PARAGRAPH..SYRIAC HARKLEAN ASTERISCUS
+0710 ; Grapheme_Base # Lo SYRIAC LETTER ALAPH
+0712..072F ; Grapheme_Base # Lo [30] SYRIAC LETTER BETH..SYRIAC LETTER PERSIAN DHALATH
+074D..07A5 ; Grapheme_Base # Lo [89] SYRIAC LETTER SOGDIAN ZHAIN..THAANA LETTER WAAVU
+07B1 ; Grapheme_Base # Lo THAANA LETTER NAA
+07C0..07C9 ; Grapheme_Base # Nd [10] NKO DIGIT ZERO..NKO DIGIT NINE
+07CA..07EA ; Grapheme_Base # Lo [33] NKO LETTER A..NKO LETTER JONA RA
+07F4..07F5 ; Grapheme_Base # Lm [2] NKO HIGH TONE APOSTROPHE..NKO LOW TONE APOSTROPHE
+07F6 ; Grapheme_Base # So NKO SYMBOL OO DENNEN
+07F7..07F9 ; Grapheme_Base # Po [3] NKO SYMBOL GBAKURUNEN..NKO EXCLAMATION MARK
+07FA ; Grapheme_Base # Lm NKO LAJANYALAN
+0800..0815 ; Grapheme_Base # Lo [22] SAMARITAN LETTER ALAF..SAMARITAN LETTER TAAF
+081A ; Grapheme_Base # Lm SAMARITAN MODIFIER LETTER EPENTHETIC YUT
+0824 ; Grapheme_Base # Lm SAMARITAN MODIFIER LETTER SHORT A
+0828 ; Grapheme_Base # Lm SAMARITAN MODIFIER LETTER I
+0830..083E ; Grapheme_Base # Po [15] SAMARITAN PUNCTUATION NEQUDAA..SAMARITAN PUNCTUATION ANNAAU
+0903 ; Grapheme_Base # Mc DEVANAGARI SIGN VISARGA
+0904..0939 ; Grapheme_Base # Lo [54] DEVANAGARI LETTER SHORT A..DEVANAGARI LETTER HA
+093D ; Grapheme_Base # Lo DEVANAGARI SIGN AVAGRAHA
+093E..0940 ; Grapheme_Base # Mc [3] DEVANAGARI VOWEL SIGN AA..DEVANAGARI VOWEL SIGN II
+0949..094C ; Grapheme_Base # Mc [4] DEVANAGARI VOWEL SIGN
CANDRA O..DEVANAGARI VOWEL SIGN AU +094E ; Grapheme_Base # Mc DEVANAGARI VOWEL SIGN PRISHTHAMATRA E +0950 ; Grapheme_Base # Lo DEVANAGARI OM +0958..0961 ; Grapheme_Base # Lo [10] DEVANAGARI LETTER QA..DEVANAGARI LETTER VOCALIC LL +0964..0965 ; Grapheme_Base # Po [2] DEVANAGARI DANDA..DEVANAGARI DOUBLE DANDA +0966..096F ; Grapheme_Base # Nd [10] DEVANAGARI DIGIT ZERO..DEVANAGARI DIGIT NINE +0970 ; Grapheme_Base # Po DEVANAGARI ABBREVIATION SIGN +0971 ; Grapheme_Base # Lm DEVANAGARI SIGN HIGH SPACING DOT +0972 ; Grapheme_Base # Lo DEVANAGARI LETTER CANDRA A +0979..097F ; Grapheme_Base # Lo [7] DEVANAGARI LETTER ZHA..DEVANAGARI LETTER BBA +0982..0983 ; Grapheme_Base # Mc [2] BENGALI SIGN ANUSVARA..BENGALI SIGN VISARGA +0985..098C ; Grapheme_Base # Lo [8] BENGALI LETTER A..BENGALI LETTER VOCALIC L +098F..0990 ; Grapheme_Base # Lo [2] BENGALI LETTER E..BENGALI LETTER AI +0993..09A8 ; Grapheme_Base # Lo [22] BENGALI LETTER O..BENGALI LETTER NA +09AA..09B0 ; Grapheme_Base # Lo [7] BENGALI LETTER PA..BENGALI LETTER RA +09B2 ; Grapheme_Base # Lo BENGALI LETTER LA +09B6..09B9 ; Grapheme_Base # Lo [4] BENGALI LETTER SHA..BENGALI LETTER HA +09BD ; Grapheme_Base # Lo BENGALI SIGN AVAGRAHA +09BF..09C0 ; Grapheme_Base # Mc [2] BENGALI VOWEL SIGN I..BENGALI VOWEL SIGN II +09C7..09C8 ; Grapheme_Base # Mc [2] BENGALI VOWEL SIGN E..BENGALI VOWEL SIGN AI +09CB..09CC ; Grapheme_Base # Mc [2] BENGALI VOWEL SIGN O..BENGALI VOWEL SIGN AU +09CE ; Grapheme_Base # Lo BENGALI LETTER KHANDA TA +09DC..09DD ; Grapheme_Base # Lo [2] BENGALI LETTER RRA..BENGALI LETTER RHA +09DF..09E1 ; Grapheme_Base # Lo [3] BENGALI LETTER YYA..BENGALI LETTER VOCALIC LL +09E6..09EF ; Grapheme_Base # Nd [10] BENGALI DIGIT ZERO..BENGALI DIGIT NINE +09F0..09F1 ; Grapheme_Base # Lo [2] BENGALI LETTER RA WITH MIDDLE DIAGONAL..BENGALI LETTER RA WITH LOWER DIAGONAL +09F2..09F3 ; Grapheme_Base # Sc [2] BENGALI RUPEE MARK..BENGALI RUPEE SIGN +09F4..09F9 ; Grapheme_Base # No [6] BENGALI CURRENCY NUMERATOR ONE..BENGALI 
CURRENCY DENOMINATOR SIXTEEN +09FA ; Grapheme_Base # So BENGALI ISSHAR +09FB ; Grapheme_Base # Sc BENGALI GANDA MARK +0A03 ; Grapheme_Base # Mc GURMUKHI SIGN VISARGA +0A05..0A0A ; Grapheme_Base # Lo [6] GURMUKHI LETTER A..GURMUKHI LETTER UU +0A0F..0A10 ; Grapheme_Base # Lo [2] GURMUKHI LETTER EE..GURMUKHI LETTER AI +0A13..0A28 ; Grapheme_Base # Lo [22] GURMUKHI LETTER OO..GURMUKHI LETTER NA +0A2A..0A30 ; Grapheme_Base # Lo [7] GURMUKHI LETTER PA..GURMUKHI LETTER RA +0A32..0A33 ; Grapheme_Base # Lo [2] GURMUKHI LETTER LA..GURMUKHI LETTER LLA +0A35..0A36 ; Grapheme_Base # Lo [2] GURMUKHI LETTER VA..GURMUKHI LETTER SHA +0A38..0A39 ; Grapheme_Base # Lo [2] GURMUKHI LETTER SA..GURMUKHI LETTER HA +0A3E..0A40 ; Grapheme_Base # Mc [3] GURMUKHI VOWEL SIGN AA..GURMUKHI VOWEL SIGN II +0A59..0A5C ; Grapheme_Base # Lo [4] GURMUKHI LETTER KHHA..GURMUKHI LETTER RRA +0A5E ; Grapheme_Base # Lo GURMUKHI LETTER FA +0A66..0A6F ; Grapheme_Base # Nd [10] GURMUKHI DIGIT ZERO..GURMUKHI DIGIT NINE +0A72..0A74 ; Grapheme_Base # Lo [3] GURMUKHI IRI..GURMUKHI EK ONKAR +0A83 ; Grapheme_Base # Mc GUJARATI SIGN VISARGA +0A85..0A8D ; Grapheme_Base # Lo [9] GUJARATI LETTER A..GUJARATI VOWEL CANDRA E +0A8F..0A91 ; Grapheme_Base # Lo [3] GUJARATI LETTER E..GUJARATI VOWEL CANDRA O +0A93..0AA8 ; Grapheme_Base # Lo [22] GUJARATI LETTER O..GUJARATI LETTER NA +0AAA..0AB0 ; Grapheme_Base # Lo [7] GUJARATI LETTER PA..GUJARATI LETTER RA +0AB2..0AB3 ; Grapheme_Base # Lo [2] GUJARATI LETTER LA..GUJARATI LETTER LLA +0AB5..0AB9 ; Grapheme_Base # Lo [5] GUJARATI LETTER VA..GUJARATI LETTER HA +0ABD ; Grapheme_Base # Lo GUJARATI SIGN AVAGRAHA +0ABE..0AC0 ; Grapheme_Base # Mc [3] GUJARATI VOWEL SIGN AA..GUJARATI VOWEL SIGN II +0AC9 ; Grapheme_Base # Mc GUJARATI VOWEL SIGN CANDRA O +0ACB..0ACC ; Grapheme_Base # Mc [2] GUJARATI VOWEL SIGN O..GUJARATI VOWEL SIGN AU +0AD0 ; Grapheme_Base # Lo GUJARATI OM +0AE0..0AE1 ; Grapheme_Base # Lo [2] GUJARATI LETTER VOCALIC RR..GUJARATI LETTER VOCALIC LL +0AE6..0AEF ; 
Grapheme_Base # Nd [10] GUJARATI DIGIT ZERO..GUJARATI DIGIT NINE +0AF1 ; Grapheme_Base # Sc GUJARATI RUPEE SIGN +0B02..0B03 ; Grapheme_Base # Mc [2] ORIYA SIGN ANUSVARA..ORIYA SIGN VISARGA +0B05..0B0C ; Grapheme_Base # Lo [8] ORIYA LETTER A..ORIYA LETTER VOCALIC L +0B0F..0B10 ; Grapheme_Base # Lo [2] ORIYA LETTER E..ORIYA LETTER AI +0B13..0B28 ; Grapheme_Base # Lo [22] ORIYA LETTER O..ORIYA LETTER NA +0B2A..0B30 ; Grapheme_Base # Lo [7] ORIYA LETTER PA..ORIYA LETTER RA +0B32..0B33 ; Grapheme_Base # Lo [2] ORIYA LETTER LA..ORIYA LETTER LLA +0B35..0B39 ; Grapheme_Base # Lo [5] ORIYA LETTER VA..ORIYA LETTER HA +0B3D ; Grapheme_Base # Lo ORIYA SIGN AVAGRAHA +0B40 ; Grapheme_Base # Mc ORIYA VOWEL SIGN II +0B47..0B48 ; Grapheme_Base # Mc [2] ORIYA VOWEL SIGN E..ORIYA VOWEL SIGN AI +0B4B..0B4C ; Grapheme_Base # Mc [2] ORIYA VOWEL SIGN O..ORIYA VOWEL SIGN AU +0B5C..0B5D ; Grapheme_Base # Lo [2] ORIYA LETTER RRA..ORIYA LETTER RHA +0B5F..0B61 ; Grapheme_Base # Lo [3] ORIYA LETTER YYA..ORIYA LETTER VOCALIC LL +0B66..0B6F ; Grapheme_Base # Nd [10] ORIYA DIGIT ZERO..ORIYA DIGIT NINE +0B70 ; Grapheme_Base # So ORIYA ISSHAR +0B71 ; Grapheme_Base # Lo ORIYA LETTER WA +0B83 ; Grapheme_Base # Lo TAMIL SIGN VISARGA +0B85..0B8A ; Grapheme_Base # Lo [6] TAMIL LETTER A..TAMIL LETTER UU +0B8E..0B90 ; Grapheme_Base # Lo [3] TAMIL LETTER E..TAMIL LETTER AI +0B92..0B95 ; Grapheme_Base # Lo [4] TAMIL LETTER O..TAMIL LETTER KA +0B99..0B9A ; Grapheme_Base # Lo [2] TAMIL LETTER NGA..TAMIL LETTER CA +0B9C ; Grapheme_Base # Lo TAMIL LETTER JA +0B9E..0B9F ; Grapheme_Base # Lo [2] TAMIL LETTER NYA..TAMIL LETTER TTA +0BA3..0BA4 ; Grapheme_Base # Lo [2] TAMIL LETTER NNA..TAMIL LETTER TA +0BA8..0BAA ; Grapheme_Base # Lo [3] TAMIL LETTER NA..TAMIL LETTER PA +0BAE..0BB9 ; Grapheme_Base # Lo [12] TAMIL LETTER MA..TAMIL LETTER HA +0BBF ; Grapheme_Base # Mc TAMIL VOWEL SIGN I +0BC1..0BC2 ; Grapheme_Base # Mc [2] TAMIL VOWEL SIGN U..TAMIL VOWEL SIGN UU +0BC6..0BC8 ; Grapheme_Base # Mc [3] TAMIL VOWEL SIGN 
E..TAMIL VOWEL SIGN AI +0BCA..0BCC ; Grapheme_Base # Mc [3] TAMIL VOWEL SIGN O..TAMIL VOWEL SIGN AU +0BD0 ; Grapheme_Base # Lo TAMIL OM +0BE6..0BEF ; Grapheme_Base # Nd [10] TAMIL DIGIT ZERO..TAMIL DIGIT NINE +0BF0..0BF2 ; Grapheme_Base # No [3] TAMIL NUMBER TEN..TAMIL NUMBER ONE THOUSAND +0BF3..0BF8 ; Grapheme_Base # So [6] TAMIL DAY SIGN..TAMIL AS ABOVE SIGN +0BF9 ; Grapheme_Base # Sc TAMIL RUPEE SIGN +0BFA ; Grapheme_Base # So TAMIL NUMBER SIGN +0C01..0C03 ; Grapheme_Base # Mc [3] TELUGU SIGN CANDRABINDU..TELUGU SIGN VISARGA +0C05..0C0C ; Grapheme_Base # Lo [8] TELUGU LETTER A..TELUGU LETTER VOCALIC L +0C0E..0C10 ; Grapheme_Base # Lo [3] TELUGU LETTER E..TELUGU LETTER AI +0C12..0C28 ; Grapheme_Base # Lo [23] TELUGU LETTER O..TELUGU LETTER NA +0C2A..0C33 ; Grapheme_Base # Lo [10] TELUGU LETTER PA..TELUGU LETTER LLA +0C35..0C39 ; Grapheme_Base # Lo [5] TELUGU LETTER VA..TELUGU LETTER HA +0C3D ; Grapheme_Base # Lo TELUGU SIGN AVAGRAHA +0C41..0C44 ; Grapheme_Base # Mc [4] TELUGU VOWEL SIGN U..TELUGU VOWEL SIGN VOCALIC RR +0C58..0C59 ; Grapheme_Base # Lo [2] TELUGU LETTER TSA..TELUGU LETTER DZA +0C60..0C61 ; Grapheme_Base # Lo [2] TELUGU LETTER VOCALIC RR..TELUGU LETTER VOCALIC LL +0C66..0C6F ; Grapheme_Base # Nd [10] TELUGU DIGIT ZERO..TELUGU DIGIT NINE +0C78..0C7E ; Grapheme_Base # No [7] TELUGU FRACTION DIGIT ZERO FOR ODD POWERS OF FOUR..TELUGU FRACTION DIGIT THREE FOR EVEN POWERS OF FOUR +0C7F ; Grapheme_Base # So TELUGU SIGN TUUMU +0C82..0C83 ; Grapheme_Base # Mc [2] KANNADA SIGN ANUSVARA..KANNADA SIGN VISARGA +0C85..0C8C ; Grapheme_Base # Lo [8] KANNADA LETTER A..KANNADA LETTER VOCALIC L +0C8E..0C90 ; Grapheme_Base # Lo [3] KANNADA LETTER E..KANNADA LETTER AI +0C92..0CA8 ; Grapheme_Base # Lo [23] KANNADA LETTER O..KANNADA LETTER NA +0CAA..0CB3 ; Grapheme_Base # Lo [10] KANNADA LETTER PA..KANNADA LETTER LLA +0CB5..0CB9 ; Grapheme_Base # Lo [5] KANNADA LETTER VA..KANNADA LETTER HA +0CBD ; Grapheme_Base # Lo KANNADA SIGN AVAGRAHA +0CBE ; Grapheme_Base # Mc KANNADA 
VOWEL SIGN AA +0CC0..0CC1 ; Grapheme_Base # Mc [2] KANNADA VOWEL SIGN II..KANNADA VOWEL SIGN U +0CC3..0CC4 ; Grapheme_Base # Mc [2] KANNADA VOWEL SIGN VOCALIC R..KANNADA VOWEL SIGN VOCALIC RR +0CC7..0CC8 ; Grapheme_Base # Mc [2] KANNADA VOWEL SIGN EE..KANNADA VOWEL SIGN AI +0CCA..0CCB ; Grapheme_Base # Mc [2] KANNADA VOWEL SIGN O..KANNADA VOWEL SIGN OO +0CDE ; Grapheme_Base # Lo KANNADA LETTER FA +0CE0..0CE1 ; Grapheme_Base # Lo [2] KANNADA LETTER VOCALIC RR..KANNADA LETTER VOCALIC LL +0CE6..0CEF ; Grapheme_Base # Nd [10] KANNADA DIGIT ZERO..KANNADA DIGIT NINE +0CF1..0CF2 ; Grapheme_Base # So [2] KANNADA SIGN JIHVAMULIYA..KANNADA SIGN UPADHMANIYA +0D02..0D03 ; Grapheme_Base # Mc [2] MALAYALAM SIGN ANUSVARA..MALAYALAM SIGN VISARGA +0D05..0D0C ; Grapheme_Base # Lo [8] MALAYALAM LETTER A..MALAYALAM LETTER VOCALIC L +0D0E..0D10 ; Grapheme_Base # Lo [3] MALAYALAM LETTER E..MALAYALAM LETTER AI +0D12..0D28 ; Grapheme_Base # Lo [23] MALAYALAM LETTER O..MALAYALAM LETTER NA +0D2A..0D39 ; Grapheme_Base # Lo [16] MALAYALAM LETTER PA..MALAYALAM LETTER HA +0D3D ; Grapheme_Base # Lo MALAYALAM SIGN AVAGRAHA +0D3F..0D40 ; Grapheme_Base # Mc [2] MALAYALAM VOWEL SIGN I..MALAYALAM VOWEL SIGN II +0D46..0D48 ; Grapheme_Base # Mc [3] MALAYALAM VOWEL SIGN E..MALAYALAM VOWEL SIGN AI +0D4A..0D4C ; Grapheme_Base # Mc [3] MALAYALAM VOWEL SIGN O..MALAYALAM VOWEL SIGN AU +0D60..0D61 ; Grapheme_Base # Lo [2] MALAYALAM LETTER VOCALIC RR..MALAYALAM LETTER VOCALIC LL +0D66..0D6F ; Grapheme_Base # Nd [10] MALAYALAM DIGIT ZERO..MALAYALAM DIGIT NINE +0D70..0D75 ; Grapheme_Base # No [6] MALAYALAM NUMBER TEN..MALAYALAM FRACTION THREE QUARTERS +0D79 ; Grapheme_Base # So MALAYALAM DATE MARK +0D7A..0D7F ; Grapheme_Base # Lo [6] MALAYALAM LETTER CHILLU NN..MALAYALAM LETTER CHILLU K +0D82..0D83 ; Grapheme_Base # Mc [2] SINHALA SIGN ANUSVARAYA..SINHALA SIGN VISARGAYA +0D85..0D96 ; Grapheme_Base # Lo [18] SINHALA LETTER AYANNA..SINHALA LETTER AUYANNA +0D9A..0DB1 ; Grapheme_Base # Lo [24] SINHALA LETTER 
ALPAPRAANA KAYANNA..SINHALA LETTER DANTAJA NAYANNA +0DB3..0DBB ; Grapheme_Base # Lo [9] SINHALA LETTER SANYAKA DAYANNA..SINHALA LETTER RAYANNA +0DBD ; Grapheme_Base # Lo SINHALA LETTER DANTAJA LAYANNA +0DC0..0DC6 ; Grapheme_Base # Lo [7] SINHALA LETTER VAYANNA..SINHALA LETTER FAYANNA +0DD0..0DD1 ; Grapheme_Base # Mc [2] SINHALA VOWEL SIGN KETTI AEDA-PILLA..SINHALA VOWEL SIGN DIGA AEDA-PILLA +0DD8..0DDE ; Grapheme_Base # Mc [7] SINHALA VOWEL SIGN GAETTA-PILLA..SINHALA VOWEL SIGN KOMBUVA HAA GAYANUKITTA +0DF2..0DF3 ; Grapheme_Base # Mc [2] SINHALA VOWEL SIGN DIGA GAETTA-PILLA..SINHALA VOWEL SIGN DIGA GAYANUKITTA +0DF4 ; Grapheme_Base # Po SINHALA PUNCTUATION KUNDDALIYA +0E01..0E30 ; Grapheme_Base # Lo [48] THAI CHARACTER KO KAI..THAI CHARACTER SARA A +0E32..0E33 ; Grapheme_Base # Lo [2] THAI CHARACTER SARA AA..THAI CHARACTER SARA AM +0E3F ; Grapheme_Base # Sc THAI CURRENCY SYMBOL BAHT +0E40..0E45 ; Grapheme_Base # Lo [6] THAI CHARACTER SARA E..THAI CHARACTER LAKKHANGYAO +0E46 ; Grapheme_Base # Lm THAI CHARACTER MAIYAMOK +0E4F ; Grapheme_Base # Po THAI CHARACTER FONGMAN +0E50..0E59 ; Grapheme_Base # Nd [10] THAI DIGIT ZERO..THAI DIGIT NINE +0E5A..0E5B ; Grapheme_Base # Po [2] THAI CHARACTER ANGKHANKHU..THAI CHARACTER KHOMUT +0E81..0E82 ; Grapheme_Base # Lo [2] LAO LETTER KO..LAO LETTER KHO SUNG +0E84 ; Grapheme_Base # Lo LAO LETTER KHO TAM +0E87..0E88 ; Grapheme_Base # Lo [2] LAO LETTER NGO..LAO LETTER CO +0E8A ; Grapheme_Base # Lo LAO LETTER SO TAM +0E8D ; Grapheme_Base # Lo LAO LETTER NYO +0E94..0E97 ; Grapheme_Base # Lo [4] LAO LETTER DO..LAO LETTER THO TAM +0E99..0E9F ; Grapheme_Base # Lo [7] LAO LETTER NO..LAO LETTER FO SUNG +0EA1..0EA3 ; Grapheme_Base # Lo [3] LAO LETTER MO..LAO LETTER LO LING +0EA5 ; Grapheme_Base # Lo LAO LETTER LO LOOT +0EA7 ; Grapheme_Base # Lo LAO LETTER WO +0EAA..0EAB ; Grapheme_Base # Lo [2] LAO LETTER SO SUNG..LAO LETTER HO SUNG +0EAD..0EB0 ; Grapheme_Base # Lo [4] LAO LETTER O..LAO VOWEL SIGN A +0EB2..0EB3 ; Grapheme_Base # Lo [2] LAO 
VOWEL SIGN AA..LAO VOWEL SIGN AM +0EBD ; Grapheme_Base # Lo LAO SEMIVOWEL SIGN NYO +0EC0..0EC4 ; Grapheme_Base # Lo [5] LAO VOWEL SIGN E..LAO VOWEL SIGN AI +0EC6 ; Grapheme_Base # Lm LAO KO LA +0ED0..0ED9 ; Grapheme_Base # Nd [10] LAO DIGIT ZERO..LAO DIGIT NINE +0EDC..0EDD ; Grapheme_Base # Lo [2] LAO HO NO..LAO HO MO +0F00 ; Grapheme_Base # Lo TIBETAN SYLLABLE OM +0F01..0F03 ; Grapheme_Base # So [3] TIBETAN MARK GTER YIG MGO TRUNCATED A..TIBETAN MARK GTER YIG MGO -UM GTER TSHEG MA +0F04..0F12 ; Grapheme_Base # Po [15] TIBETAN MARK INITIAL YIG MGO MDUN MA..TIBETAN MARK RGYA GRAM SHAD +0F13..0F17 ; Grapheme_Base # So [5] TIBETAN MARK CARET -DZUD RTAGS ME LONG CAN..TIBETAN ASTROLOGICAL SIGN SGRA GCAN -CHAR RTAGS +0F1A..0F1F ; Grapheme_Base # So [6] TIBETAN SIGN RDEL DKAR GCIG..TIBETAN SIGN RDEL DKAR RDEL NAG +0F20..0F29 ; Grapheme_Base # Nd [10] TIBETAN DIGIT ZERO..TIBETAN DIGIT NINE +0F2A..0F33 ; Grapheme_Base # No [10] TIBETAN DIGIT HALF ONE..TIBETAN DIGIT HALF ZERO +0F34 ; Grapheme_Base # So TIBETAN MARK BSDUS RTAGS +0F36 ; Grapheme_Base # So TIBETAN MARK CARET -DZUD RTAGS BZHI MIG CAN +0F38 ; Grapheme_Base # So TIBETAN MARK CHE MGO +0F3A ; Grapheme_Base # Ps TIBETAN MARK GUG RTAGS GYON +0F3B ; Grapheme_Base # Pe TIBETAN MARK GUG RTAGS GYAS +0F3C ; Grapheme_Base # Ps TIBETAN MARK ANG KHANG GYON +0F3D ; Grapheme_Base # Pe TIBETAN MARK ANG KHANG GYAS +0F3E..0F3F ; Grapheme_Base # Mc [2] TIBETAN SIGN YAR TSHES..TIBETAN SIGN MAR TSHES +0F40..0F47 ; Grapheme_Base # Lo [8] TIBETAN LETTER KA..TIBETAN LETTER JA +0F49..0F6C ; Grapheme_Base # Lo [36] TIBETAN LETTER NYA..TIBETAN LETTER RRA +0F7F ; Grapheme_Base # Mc TIBETAN SIGN RNAM BCAD +0F85 ; Grapheme_Base # Po TIBETAN MARK PALUTA +0F88..0F8B ; Grapheme_Base # Lo [4] TIBETAN SIGN LCE TSA CAN..TIBETAN SIGN GRU MED RGYINGS +0FBE..0FC5 ; Grapheme_Base # So [8] TIBETAN KU RU KHA..TIBETAN SYMBOL RDO RJE +0FC7..0FCC ; Grapheme_Base # So [6] TIBETAN SYMBOL RDO RJE RGYA GRAM..TIBETAN SYMBOL NOR BU BZHI -KHYIL +0FCE..0FCF ; 
Grapheme_Base # So [2] TIBETAN SIGN RDEL NAG RDEL DKAR..TIBETAN SIGN RDEL NAG GSUM +0FD0..0FD4 ; Grapheme_Base # Po [5] TIBETAN MARK BSKA- SHOG GI MGO RGYAN..TIBETAN MARK CLOSING BRDA RNYING YIG MGO SGAB MA +0FD5..0FD8 ; Grapheme_Base # So [4] RIGHT-FACING SVASTI SIGN..LEFT-FACING SVASTI SIGN WITH DOTS +1000..102A ; Grapheme_Base # Lo [43] MYANMAR LETTER KA..MYANMAR LETTER AU +102B..102C ; Grapheme_Base # Mc [2] MYANMAR VOWEL SIGN TALL AA..MYANMAR VOWEL SIGN AA +1031 ; Grapheme_Base # Mc MYANMAR VOWEL SIGN E +1038 ; Grapheme_Base # Mc MYANMAR SIGN VISARGA +103B..103C ; Grapheme_Base # Mc [2] MYANMAR CONSONANT SIGN MEDIAL YA..MYANMAR CONSONANT SIGN MEDIAL RA +103F ; Grapheme_Base # Lo MYANMAR LETTER GREAT SA +1040..1049 ; Grapheme_Base # Nd [10] MYANMAR DIGIT ZERO..MYANMAR DIGIT NINE +104A..104F ; Grapheme_Base # Po [6] MYANMAR SIGN LITTLE SECTION..MYANMAR SYMBOL GENITIVE +1050..1055 ; Grapheme_Base # Lo [6] MYANMAR LETTER SHA..MYANMAR LETTER VOCALIC LL +1056..1057 ; Grapheme_Base # Mc [2] MYANMAR VOWEL SIGN VOCALIC R..MYANMAR VOWEL SIGN VOCALIC RR +105A..105D ; Grapheme_Base # Lo [4] MYANMAR LETTER MON NGA..MYANMAR LETTER MON BBE +1061 ; Grapheme_Base # Lo MYANMAR LETTER SGAW KAREN SHA +1062..1064 ; Grapheme_Base # Mc [3] MYANMAR VOWEL SIGN SGAW KAREN EU..MYANMAR TONE MARK SGAW KAREN KE PHO +1065..1066 ; Grapheme_Base # Lo [2] MYANMAR LETTER WESTERN PWO KAREN THA..MYANMAR LETTER WESTERN PWO KAREN PWA +1067..106D ; Grapheme_Base # Mc [7] MYANMAR VOWEL SIGN WESTERN PWO KAREN EU..MYANMAR SIGN WESTERN PWO KAREN TONE-5 +106E..1070 ; Grapheme_Base # Lo [3] MYANMAR LETTER EASTERN PWO KAREN NNA..MYANMAR LETTER EASTERN PWO KAREN GHWA +1075..1081 ; Grapheme_Base # Lo [13] MYANMAR LETTER SHAN KA..MYANMAR LETTER SHAN HA +1083..1084 ; Grapheme_Base # Mc [2] MYANMAR VOWEL SIGN SHAN AA..MYANMAR VOWEL SIGN SHAN E +1087..108C ; Grapheme_Base # Mc [6] MYANMAR SIGN SHAN TONE-2..MYANMAR SIGN SHAN COUNCIL TONE-3 +108E ; Grapheme_Base # Lo MYANMAR LETTER RUMAI PALAUNG FA +108F ; 
Grapheme_Base # Mc MYANMAR SIGN RUMAI PALAUNG TONE-5 +1090..1099 ; Grapheme_Base # Nd [10] MYANMAR SHAN DIGIT ZERO..MYANMAR SHAN DIGIT NINE +109A..109C ; Grapheme_Base # Mc [3] MYANMAR SIGN KHAMTI TONE-1..MYANMAR VOWEL SIGN AITON A +109E..109F ; Grapheme_Base # So [2] MYANMAR SYMBOL SHAN ONE..MYANMAR SYMBOL SHAN EXCLAMATION +10A0..10C5 ; Grapheme_Base # L& [38] GEORGIAN CAPITAL LETTER AN..GEORGIAN CAPITAL LETTER HOE +10D0..10FA ; Grapheme_Base # Lo [43] GEORGIAN LETTER AN..GEORGIAN LETTER AIN +10FB ; Grapheme_Base # Po GEORGIAN PARAGRAPH SEPARATOR +10FC ; Grapheme_Base # Lm MODIFIER LETTER GEORGIAN NAR +1100..1248 ; Grapheme_Base # Lo [329] HANGUL CHOSEONG KIYEOK..ETHIOPIC SYLLABLE QWA +124A..124D ; Grapheme_Base # Lo [4] ETHIOPIC SYLLABLE QWI..ETHIOPIC SYLLABLE QWE +1250..1256 ; Grapheme_Base # Lo [7] ETHIOPIC SYLLABLE QHA..ETHIOPIC SYLLABLE QHO +1258 ; Grapheme_Base # Lo ETHIOPIC SYLLABLE QHWA +125A..125D ; Grapheme_Base # Lo [4] ETHIOPIC SYLLABLE QHWI..ETHIOPIC SYLLABLE QHWE +1260..1288 ; Grapheme_Base # Lo [41] ETHIOPIC SYLLABLE BA..ETHIOPIC SYLLABLE XWA +128A..128D ; Grapheme_Base # Lo [4] ETHIOPIC SYLLABLE XWI..ETHIOPIC SYLLABLE XWE +1290..12B0 ; Grapheme_Base # Lo [33] ETHIOPIC SYLLABLE NA..ETHIOPIC SYLLABLE KWA +12B2..12B5 ; Grapheme_Base # Lo [4] ETHIOPIC SYLLABLE KWI..ETHIOPIC SYLLABLE KWE +12B8..12BE ; Grapheme_Base # Lo [7] ETHIOPIC SYLLABLE KXA..ETHIOPIC SYLLABLE KXO +12C0 ; Grapheme_Base # Lo ETHIOPIC SYLLABLE KXWA +12C2..12C5 ; Grapheme_Base # Lo [4] ETHIOPIC SYLLABLE KXWI..ETHIOPIC SYLLABLE KXWE +12C8..12D6 ; Grapheme_Base # Lo [15] ETHIOPIC SYLLABLE WA..ETHIOPIC SYLLABLE PHARYNGEAL O +12D8..1310 ; Grapheme_Base # Lo [57] ETHIOPIC SYLLABLE ZA..ETHIOPIC SYLLABLE GWA +1312..1315 ; Grapheme_Base # Lo [4] ETHIOPIC SYLLABLE GWI..ETHIOPIC SYLLABLE GWE +1318..135A ; Grapheme_Base # Lo [67] ETHIOPIC SYLLABLE GGA..ETHIOPIC SYLLABLE FYA +1360 ; Grapheme_Base # So ETHIOPIC SECTION MARK +1361..1368 ; Grapheme_Base # Po [8] ETHIOPIC WORDSPACE..ETHIOPIC PARAGRAPH 
SEPARATOR +1369..137C ; Grapheme_Base # No [20] ETHIOPIC DIGIT ONE..ETHIOPIC NUMBER TEN THOUSAND +1380..138F ; Grapheme_Base # Lo [16] ETHIOPIC SYLLABLE SEBATBEIT MWA..ETHIOPIC SYLLABLE PWE +1390..1399 ; Grapheme_Base # So [10] ETHIOPIC TONAL MARK YIZET..ETHIOPIC TONAL MARK KURT +13A0..13F4 ; Grapheme_Base # Lo [85] CHEROKEE LETTER A..CHEROKEE LETTER YV +1400 ; Grapheme_Base # Pd CANADIAN SYLLABICS HYPHEN +1401..166C ; Grapheme_Base # Lo [620] CANADIAN SYLLABICS E..CANADIAN SYLLABICS CARRIER TTSA +166D..166E ; Grapheme_Base # Po [2] CANADIAN SYLLABICS CHI SIGN..CANADIAN SYLLABICS FULL STOP +166F..167F ; Grapheme_Base # Lo [17] CANADIAN SYLLABICS QAI..CANADIAN SYLLABICS BLACKFOOT W +1680 ; Grapheme_Base # Zs OGHAM SPACE MARK +1681..169A ; Grapheme_Base # Lo [26] OGHAM LETTER BEITH..OGHAM LETTER PEITH +169B ; Grapheme_Base # Ps OGHAM FEATHER MARK +169C ; Grapheme_Base # Pe OGHAM REVERSED FEATHER MARK +16A0..16EA ; Grapheme_Base # Lo [75] RUNIC LETTER FEHU FEOH FE F..RUNIC LETTER X +16EB..16ED ; Grapheme_Base # Po [3] RUNIC SINGLE PUNCTUATION..RUNIC CROSS PUNCTUATION +16EE..16F0 ; Grapheme_Base # Nl [3] RUNIC ARLAUG SYMBOL..RUNIC BELGTHOR SYMBOL +1700..170C ; Grapheme_Base # Lo [13] TAGALOG LETTER A..TAGALOG LETTER YA +170E..1711 ; Grapheme_Base # Lo [4] TAGALOG LETTER LA..TAGALOG LETTER HA +1720..1731 ; Grapheme_Base # Lo [18] HANUNOO LETTER A..HANUNOO LETTER HA +1735..1736 ; Grapheme_Base # Po [2] PHILIPPINE SINGLE PUNCTUATION..PHILIPPINE DOUBLE PUNCTUATION +1740..1751 ; Grapheme_Base # Lo [18] BUHID LETTER A..BUHID LETTER HA +1760..176C ; Grapheme_Base # Lo [13] TAGBANWA LETTER A..TAGBANWA LETTER YA +176E..1770 ; Grapheme_Base # Lo [3] TAGBANWA LETTER LA..TAGBANWA LETTER SA +1780..17B3 ; Grapheme_Base # Lo [52] KHMER LETTER KA..KHMER INDEPENDENT VOWEL QAU +17B6 ; Grapheme_Base # Mc KHMER VOWEL SIGN AA +17BE..17C5 ; Grapheme_Base # Mc [8] KHMER VOWEL SIGN OE..KHMER VOWEL SIGN AU +17C7..17C8 ; Grapheme_Base # Mc [2] KHMER SIGN REAHMUK..KHMER SIGN YUUKALEAPINTU 
+17D4..17D6 ; Grapheme_Base # Po [3] KHMER SIGN KHAN..KHMER SIGN CAMNUC PII KUUH +17D7 ; Grapheme_Base # Lm KHMER SIGN LEK TOO +17D8..17DA ; Grapheme_Base # Po [3] KHMER SIGN BEYYAL..KHMER SIGN KOOMUUT +17DB ; Grapheme_Base # Sc KHMER CURRENCY SYMBOL RIEL +17DC ; Grapheme_Base # Lo KHMER SIGN AVAKRAHASANYA +17E0..17E9 ; Grapheme_Base # Nd [10] KHMER DIGIT ZERO..KHMER DIGIT NINE +17F0..17F9 ; Grapheme_Base # No [10] KHMER SYMBOL LEK ATTAK SON..KHMER SYMBOL LEK ATTAK PRAM-BUON +1800..1805 ; Grapheme_Base # Po [6] MONGOLIAN BIRGA..MONGOLIAN FOUR DOTS +1806 ; Grapheme_Base # Pd MONGOLIAN TODO SOFT HYPHEN +1807..180A ; Grapheme_Base # Po [4] MONGOLIAN SIBE SYLLABLE BOUNDARY MARKER..MONGOLIAN NIRUGU +180E ; Grapheme_Base # Zs MONGOLIAN VOWEL SEPARATOR +1810..1819 ; Grapheme_Base # Nd [10] MONGOLIAN DIGIT ZERO..MONGOLIAN DIGIT NINE +1820..1842 ; Grapheme_Base # Lo [35] MONGOLIAN LETTER A..MONGOLIAN LETTER CHI +1843 ; Grapheme_Base # Lm MONGOLIAN LETTER TODO LONG VOWEL SIGN +1844..1877 ; Grapheme_Base # Lo [52] MONGOLIAN LETTER TODO E..MONGOLIAN LETTER MANCHU ZHA +1880..18A8 ; Grapheme_Base # Lo [41] MONGOLIAN LETTER ALI GALI ANUSVARA ONE..MONGOLIAN LETTER MANCHU ALI GALI BHA +18AA ; Grapheme_Base # Lo MONGOLIAN LETTER MANCHU ALI GALI LHA +18B0..18F5 ; Grapheme_Base # Lo [70] CANADIAN SYLLABICS OY..CANADIAN SYLLABICS CARRIER DENTAL S +1900..191C ; Grapheme_Base # Lo [29] LIMBU VOWEL-CARRIER LETTER..LIMBU LETTER HA +1923..1926 ; Grapheme_Base # Mc [4] LIMBU VOWEL SIGN EE..LIMBU VOWEL SIGN AU +1929..192B ; Grapheme_Base # Mc [3] LIMBU SUBJOINED LETTER YA..LIMBU SUBJOINED LETTER WA +1930..1931 ; Grapheme_Base # Mc [2] LIMBU SMALL LETTER KA..LIMBU SMALL LETTER NGA +1933..1938 ; Grapheme_Base # Mc [6] LIMBU SMALL LETTER TA..LIMBU SMALL LETTER LA +1940 ; Grapheme_Base # So LIMBU SIGN LOO +1944..1945 ; Grapheme_Base # Po [2] LIMBU EXCLAMATION MARK..LIMBU QUESTION MARK +1946..194F ; Grapheme_Base # Nd [10] LIMBU DIGIT ZERO..LIMBU DIGIT NINE +1950..196D ; Grapheme_Base # Lo [30] 
TAI LE LETTER KA..TAI LE LETTER AI +1970..1974 ; Grapheme_Base # Lo [5] TAI LE LETTER TONE-2..TAI LE LETTER TONE-6 +1980..19AB ; Grapheme_Base # Lo [44] NEW TAI LUE LETTER HIGH QA..NEW TAI LUE LETTER LOW SUA +19B0..19C0 ; Grapheme_Base # Mc [17] NEW TAI LUE VOWEL SIGN VOWEL SHORTENER..NEW TAI LUE VOWEL SIGN IY +19C1..19C7 ; Grapheme_Base # Lo [7] NEW TAI LUE LETTER FINAL V..NEW TAI LUE LETTER FINAL B +19C8..19C9 ; Grapheme_Base # Mc [2] NEW TAI LUE TONE MARK-1..NEW TAI LUE TONE MARK-2 +19D0..19DA ; Grapheme_Base # Nd [11] NEW TAI LUE DIGIT ZERO..NEW TAI LUE THAM DIGIT ONE +19DE..19DF ; Grapheme_Base # Po [2] NEW TAI LUE SIGN LAE..NEW TAI LUE SIGN LAEV +19E0..19FF ; Grapheme_Base # So [32] KHMER SYMBOL PATHAMASAT..KHMER SYMBOL DAP-PRAM ROC +1A00..1A16 ; Grapheme_Base # Lo [23] BUGINESE LETTER KA..BUGINESE LETTER HA +1A19..1A1B ; Grapheme_Base # Mc [3] BUGINESE VOWEL SIGN E..BUGINESE VOWEL SIGN AE +1A1E..1A1F ; Grapheme_Base # Po [2] BUGINESE PALLAWA..BUGINESE END OF SECTION +1A20..1A54 ; Grapheme_Base # Lo [53] TAI THAM LETTER HIGH KA..TAI THAM LETTER GREAT SA +1A55 ; Grapheme_Base # Mc TAI THAM CONSONANT SIGN MEDIAL RA +1A57 ; Grapheme_Base # Mc TAI THAM CONSONANT SIGN LA TANG LAI +1A61 ; Grapheme_Base # Mc TAI THAM VOWEL SIGN A +1A63..1A64 ; Grapheme_Base # Mc [2] TAI THAM VOWEL SIGN AA..TAI THAM VOWEL SIGN TALL AA +1A6D..1A72 ; Grapheme_Base # Mc [6] TAI THAM VOWEL SIGN OY..TAI THAM VOWEL SIGN THAM AI +1A80..1A89 ; Grapheme_Base # Nd [10] TAI THAM HORA DIGIT ZERO..TAI THAM HORA DIGIT NINE +1A90..1A99 ; Grapheme_Base # Nd [10] TAI THAM THAM DIGIT ZERO..TAI THAM THAM DIGIT NINE +1AA0..1AA6 ; Grapheme_Base # Po [7] TAI THAM SIGN WIANG..TAI THAM SIGN REVERSED ROTATED RANA +1AA7 ; Grapheme_Base # Lm TAI THAM SIGN MAI YAMOK +1AA8..1AAD ; Grapheme_Base # Po [6] TAI THAM SIGN KAAN..TAI THAM SIGN CAANG +1B04 ; Grapheme_Base # Mc BALINESE SIGN BISAH +1B05..1B33 ; Grapheme_Base # Lo [47] BALINESE LETTER AKARA..BALINESE LETTER HA +1B35 ; Grapheme_Base # Mc BALINESE VOWEL 
SIGN TEDUNG +1B3B ; Grapheme_Base # Mc BALINESE VOWEL SIGN RA REPA TEDUNG +1B3D..1B41 ; Grapheme_Base # Mc [5] BALINESE VOWEL SIGN LA LENGA TEDUNG..BALINESE VOWEL SIGN TALING REPA TEDUNG +1B43..1B44 ; Grapheme_Base # Mc [2] BALINESE VOWEL SIGN PEPET TEDUNG..BALINESE ADEG ADEG +1B45..1B4B ; Grapheme_Base # Lo [7] BALINESE LETTER KAF SASAK..BALINESE LETTER ASYURA SASAK +1B50..1B59 ; Grapheme_Base # Nd [10] BALINESE DIGIT ZERO..BALINESE DIGIT NINE +1B5A..1B60 ; Grapheme_Base # Po [7] BALINESE PANTI..BALINESE PAMENENG +1B61..1B6A ; Grapheme_Base # So [10] BALINESE MUSICAL SYMBOL DONG..BALINESE MUSICAL SYMBOL DANG GEDE +1B74..1B7C ; Grapheme_Base # So [9] BALINESE MUSICAL SYMBOL RIGHT-HAND OPEN DUG..BALINESE MUSICAL SYMBOL LEFT-HAND OPEN PING +1B82 ; Grapheme_Base # Mc SUNDANESE SIGN PANGWISAD +1B83..1BA0 ; Grapheme_Base # Lo [30] SUNDANESE LETTER A..SUNDANESE LETTER HA +1BA1 ; Grapheme_Base # Mc SUNDANESE CONSONANT SIGN PAMINGKAL +1BA6..1BA7 ; Grapheme_Base # Mc [2] SUNDANESE VOWEL SIGN PANAELAENG..SUNDANESE VOWEL SIGN PANOLONG +1BAA ; Grapheme_Base # Mc SUNDANESE SIGN PAMAAEH +1BAE..1BAF ; Grapheme_Base # Lo [2] SUNDANESE LETTER KHA..SUNDANESE LETTER SYA +1BB0..1BB9 ; Grapheme_Base # Nd [10] SUNDANESE DIGIT ZERO..SUNDANESE DIGIT NINE +1C00..1C23 ; Grapheme_Base # Lo [36] LEPCHA LETTER KA..LEPCHA LETTER A +1C24..1C2B ; Grapheme_Base # Mc [8] LEPCHA SUBJOINED LETTER YA..LEPCHA VOWEL SIGN UU +1C34..1C35 ; Grapheme_Base # Mc [2] LEPCHA CONSONANT SIGN NYIN-DO..LEPCHA CONSONANT SIGN KANG +1C3B..1C3F ; Grapheme_Base # Po [5] LEPCHA PUNCTUATION TA-ROL..LEPCHA PUNCTUATION TSHOOK +1C40..1C49 ; Grapheme_Base # Nd [10] LEPCHA DIGIT ZERO..LEPCHA DIGIT NINE +1C4D..1C4F ; Grapheme_Base # Lo [3] LEPCHA LETTER TTA..LEPCHA LETTER DDA +1C50..1C59 ; Grapheme_Base # Nd [10] OL CHIKI DIGIT ZERO..OL CHIKI DIGIT NINE +1C5A..1C77 ; Grapheme_Base # Lo [30] OL CHIKI LETTER LA..OL CHIKI LETTER OH +1C78..1C7D ; Grapheme_Base # Lm [6] OL CHIKI MU TTUDDAG..OL CHIKI AHAD +1C7E..1C7F ; Grapheme_Base 
# Po [2] OL CHIKI PUNCTUATION MUCAAD..OL CHIKI PUNCTUATION DOUBLE MUCAAD +1CD3 ; Grapheme_Base # Po VEDIC SIGN NIHSHVASA +1CE1 ; Grapheme_Base # Mc VEDIC TONE ATHARVAVEDIC INDEPENDENT SVARITA +1CE9..1CEC ; Grapheme_Base # Lo [4] VEDIC SIGN ANUSVARA ANTARGOMUKHA..VEDIC SIGN ANUSVARA VAMAGOMUKHA WITH TAIL +1CEE..1CF1 ; Grapheme_Base # Lo [4] VEDIC SIGN HEXIFORM LONG ANUSVARA..VEDIC SIGN ANUSVARA UBHAYATO MUKHA +1CF2 ; Grapheme_Base # Mc VEDIC SIGN ARDHAVISARGA +1D00..1D2B ; Grapheme_Base # L& [44] LATIN LETTER SMALL CAPITAL A..CYRILLIC LETTER SMALL CAPITAL EL +1D2C..1D61 ; Grapheme_Base # Lm [54] MODIFIER LETTER CAPITAL A..MODIFIER LETTER SMALL CHI +1D62..1D77 ; Grapheme_Base # L& [22] LATIN SUBSCRIPT SMALL LETTER I..LATIN SMALL LETTER TURNED G +1D78 ; Grapheme_Base # Lm MODIFIER LETTER CYRILLIC EN +1D79..1D9A ; Grapheme_Base # L& [34] LATIN SMALL LETTER INSULAR G..LATIN SMALL LETTER EZH WITH RETROFLEX HOOK +1D9B..1DBF ; Grapheme_Base # Lm [37] MODIFIER LETTER SMALL TURNED ALPHA..MODIFIER LETTER SMALL THETA +1E00..1F15 ; Grapheme_Base # L& [278] LATIN CAPITAL LETTER A WITH RING BELOW..GREEK SMALL LETTER EPSILON WITH DASIA AND OXIA +1F18..1F1D ; Grapheme_Base # L& [6] GREEK CAPITAL LETTER EPSILON WITH PSILI..GREEK CAPITAL LETTER EPSILON WITH DASIA AND OXIA +1F20..1F45 ; Grapheme_Base # L& [38] GREEK SMALL LETTER ETA WITH PSILI..GREEK SMALL LETTER OMICRON WITH DASIA AND OXIA +1F48..1F4D ; Grapheme_Base # L& [6] GREEK CAPITAL LETTER OMICRON WITH PSILI..GREEK CAPITAL LETTER OMICRON WITH DASIA AND OXIA +1F50..1F57 ; Grapheme_Base # L& [8] GREEK SMALL LETTER UPSILON WITH PSILI..GREEK SMALL LETTER UPSILON WITH DASIA AND PERISPOMENI +1F59 ; Grapheme_Base # L& GREEK CAPITAL LETTER UPSILON WITH DASIA +1F5B ; Grapheme_Base # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND VARIA +1F5D ; Grapheme_Base # L& GREEK CAPITAL LETTER UPSILON WITH DASIA AND OXIA +1F5F..1F7D ; Grapheme_Base # L& [31] GREEK CAPITAL LETTER UPSILON WITH DASIA AND PERISPOMENI..GREEK SMALL LETTER OMEGA WITH 
OXIA +1F80..1FB4 ; Grapheme_Base # L& [53] GREEK SMALL LETTER ALPHA WITH PSILI AND YPOGEGRAMMENI..GREEK SMALL LETTER ALPHA WITH OXIA AND YPOGEGRAMMENI +1FB6..1FBC ; Grapheme_Base # L& [7] GREEK SMALL LETTER ALPHA WITH PERISPOMENI..GREEK CAPITAL LETTER ALPHA WITH PROSGEGRAMMENI +1FBD ; Grapheme_Base # Sk GREEK KORONIS +1FBE ; Grapheme_Base # L& GREEK PROSGEGRAMMENI +1FBF..1FC1 ; Grapheme_Base # Sk [3] GREEK PSILI..GREEK DIALYTIKA AND PERISPOMENI +1FC2..1FC4 ; Grapheme_Base # L& [3] GREEK SMALL LETTER ETA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER ETA WITH OXIA AND YPOGEGRAMMENI +1FC6..1FCC ; Grapheme_Base # L& [7] GREEK SMALL LETTER ETA WITH PERISPOMENI..GREEK CAPITAL LETTER ETA WITH PROSGEGRAMMENI +1FCD..1FCF ; Grapheme_Base # Sk [3] GREEK PSILI AND VARIA..GREEK PSILI AND PERISPOMENI +1FD0..1FD3 ; Grapheme_Base # L& [4] GREEK SMALL LETTER IOTA WITH VRACHY..GREEK SMALL LETTER IOTA WITH DIALYTIKA AND OXIA +1FD6..1FDB ; Grapheme_Base # L& [6] GREEK SMALL LETTER IOTA WITH PERISPOMENI..GREEK CAPITAL LETTER IOTA WITH OXIA +1FDD..1FDF ; Grapheme_Base # Sk [3] GREEK DASIA AND VARIA..GREEK DASIA AND PERISPOMENI +1FE0..1FEC ; Grapheme_Base # L& [13] GREEK SMALL LETTER UPSILON WITH VRACHY..GREEK CAPITAL LETTER RHO WITH DASIA +1FED..1FEF ; Grapheme_Base # Sk [3] GREEK DIALYTIKA AND VARIA..GREEK VARIA +1FF2..1FF4 ; Grapheme_Base # L& [3] GREEK SMALL LETTER OMEGA WITH VARIA AND YPOGEGRAMMENI..GREEK SMALL LETTER OMEGA WITH OXIA AND YPOGEGRAMMENI +1FF6..1FFC ; Grapheme_Base # L& [7] GREEK SMALL LETTER OMEGA WITH PERISPOMENI..GREEK CAPITAL LETTER OMEGA WITH PROSGEGRAMMENI +1FFD..1FFE ; Grapheme_Base # Sk [2] GREEK OXIA..GREEK DASIA +2000..200A ; Grapheme_Base # Zs [11] EN QUAD..HAIR SPACE +2010..2015 ; Grapheme_Base # Pd [6] HYPHEN..HORIZONTAL BAR +2016..2017 ; Grapheme_Base # Po [2] DOUBLE VERTICAL LINE..DOUBLE LOW LINE +2018 ; Grapheme_Base # Pi LEFT SINGLE QUOTATION MARK +2019 ; Grapheme_Base # Pf RIGHT SINGLE QUOTATION MARK +201A ; Grapheme_Base # Ps SINGLE LOW-9 
QUOTATION MARK +201B..201C ; Grapheme_Base # Pi [2] SINGLE HIGH-REVERSED-9 QUOTATION MARK..LEFT DOUBLE QUOTATION MARK +201D ; Grapheme_Base # Pf RIGHT DOUBLE QUOTATION MARK +201E ; Grapheme_Base # Ps DOUBLE LOW-9 QUOTATION MARK +201F ; Grapheme_Base # Pi DOUBLE HIGH-REVERSED-9 QUOTATION MARK +2020..2027 ; Grapheme_Base # Po [8] DAGGER..HYPHENATION POINT +202F ; Grapheme_Base # Zs NARROW NO-BREAK SPACE +2030..2038 ; Grapheme_Base # Po [9] PER MILLE SIGN..CARET +2039 ; Grapheme_Base # Pi SINGLE LEFT-POINTING ANGLE QUOTATION MARK +203A ; Grapheme_Base # Pf SINGLE RIGHT-POINTING ANGLE QUOTATION MARK +203B..203E ; Grapheme_Base # Po [4] REFERENCE MARK..OVERLINE +203F..2040 ; Grapheme_Base # Pc [2] UNDERTIE..CHARACTER TIE +2041..2043 ; Grapheme_Base # Po [3] CARET INSERTION POINT..HYPHEN BULLET +2044 ; Grapheme_Base # Sm FRACTION SLASH +2045 ; Grapheme_Base # Ps LEFT SQUARE BRACKET WITH QUILL +2046 ; Grapheme_Base # Pe RIGHT SQUARE BRACKET WITH QUILL +2047..2051 ; Grapheme_Base # Po [11] DOUBLE QUESTION MARK..TWO ASTERISKS ALIGNED VERTICALLY +2052 ; Grapheme_Base # Sm COMMERCIAL MINUS SIGN +2053 ; Grapheme_Base # Po SWUNG DASH +2054 ; Grapheme_Base # Pc INVERTED UNDERTIE +2055..205E ; Grapheme_Base # Po [10] FLOWER PUNCTUATION MARK..VERTICAL FOUR DOTS +205F ; Grapheme_Base # Zs MEDIUM MATHEMATICAL SPACE +2070 ; Grapheme_Base # No SUPERSCRIPT ZERO +2071 ; Grapheme_Base # Lm SUPERSCRIPT LATIN SMALL LETTER I +2074..2079 ; Grapheme_Base # No [6] SUPERSCRIPT FOUR..SUPERSCRIPT NINE +207A..207C ; Grapheme_Base # Sm [3] SUPERSCRIPT PLUS SIGN..SUPERSCRIPT EQUALS SIGN +207D ; Grapheme_Base # Ps SUPERSCRIPT LEFT PARENTHESIS +207E ; Grapheme_Base # Pe SUPERSCRIPT RIGHT PARENTHESIS +207F ; Grapheme_Base # Lm SUPERSCRIPT LATIN SMALL LETTER N +2080..2089 ; Grapheme_Base # No [10] SUBSCRIPT ZERO..SUBSCRIPT NINE +208A..208C ; Grapheme_Base # Sm [3] SUBSCRIPT PLUS SIGN..SUBSCRIPT EQUALS SIGN +208D ; Grapheme_Base # Ps SUBSCRIPT LEFT PARENTHESIS +208E ; Grapheme_Base # Pe SUBSCRIPT RIGHT 
PARENTHESIS +2090..2094 ; Grapheme_Base # Lm [5] LATIN SUBSCRIPT SMALL LETTER A..LATIN SUBSCRIPT SMALL LETTER SCHWA +20A0..20B8 ; Grapheme_Base # Sc [25] EURO-CURRENCY SIGN..TENGE SIGN +2100..2101 ; Grapheme_Base # So [2] ACCOUNT OF..ADDRESSED TO THE SUBJECT +2102 ; Grapheme_Base # L& DOUBLE-STRUCK CAPITAL C +2103..2106 ; Grapheme_Base # So [4] DEGREE CELSIUS..CADA UNA +2107 ; Grapheme_Base # L& EULER CONSTANT +2108..2109 ; Grapheme_Base # So [2] SCRUPLE..DEGREE FAHRENHEIT +210A..2113 ; Grapheme_Base # L& [10] SCRIPT SMALL G..SCRIPT SMALL L +2114 ; Grapheme_Base # So L B BAR SYMBOL +2115 ; Grapheme_Base # L& DOUBLE-STRUCK CAPITAL N +2116..2118 ; Grapheme_Base # So [3] NUMERO SIGN..SCRIPT CAPITAL P +2119..211D ; Grapheme_Base # L& [5] DOUBLE-STRUCK CAPITAL P..DOUBLE-STRUCK CAPITAL R +211E..2123 ; Grapheme_Base # So [6] PRESCRIPTION TAKE..VERSICLE +2124 ; Grapheme_Base # L& DOUBLE-STRUCK CAPITAL Z +2125 ; Grapheme_Base # So OUNCE SIGN +2126 ; Grapheme_Base # L& OHM SIGN +2127 ; Grapheme_Base # So INVERTED OHM SIGN +2128 ; Grapheme_Base # L& BLACK-LETTER CAPITAL Z +2129 ; Grapheme_Base # So TURNED GREEK SMALL LETTER IOTA +212A..212D ; Grapheme_Base # L& [4] KELVIN SIGN..BLACK-LETTER CAPITAL C +212E ; Grapheme_Base # So ESTIMATED SYMBOL +212F..2134 ; Grapheme_Base # L& [6] SCRIPT SMALL E..SCRIPT SMALL O +2135..2138 ; Grapheme_Base # Lo [4] ALEF SYMBOL..DALET SYMBOL +2139 ; Grapheme_Base # L& INFORMATION SOURCE +213A..213B ; Grapheme_Base # So [2] ROTATED CAPITAL Q..FACSIMILE SIGN +213C..213F ; Grapheme_Base # L& [4] DOUBLE-STRUCK SMALL PI..DOUBLE-STRUCK CAPITAL PI +2140..2144 ; Grapheme_Base # Sm [5] DOUBLE-STRUCK N-ARY SUMMATION..TURNED SANS-SERIF CAPITAL Y +2145..2149 ; Grapheme_Base # L& [5] DOUBLE-STRUCK ITALIC CAPITAL D..DOUBLE-STRUCK ITALIC SMALL J +214A ; Grapheme_Base # So PROPERTY LINE +214B ; Grapheme_Base # Sm TURNED AMPERSAND +214C..214D ; Grapheme_Base # So [2] PER SIGN..AKTIESELSKAB +214E ; Grapheme_Base # L& TURNED SMALL F +214F ; Grapheme_Base # So 
SYMBOL FOR SAMARITAN SOURCE +2150..215F ; Grapheme_Base # No [16] VULGAR FRACTION ONE SEVENTH..FRACTION NUMERATOR ONE +2160..2182 ; Grapheme_Base # Nl [35] ROMAN NUMERAL ONE..ROMAN NUMERAL TEN THOUSAND +2183..2184 ; Grapheme_Base # L& [2] ROMAN NUMERAL REVERSED ONE HUNDRED..LATIN SMALL LETTER REVERSED C +2185..2188 ; Grapheme_Base # Nl [4] ROMAN NUMERAL SIX LATE FORM..ROMAN NUMERAL ONE HUNDRED THOUSAND +2189 ; Grapheme_Base # No VULGAR FRACTION ZERO THIRDS +2190..2194 ; Grapheme_Base # Sm [5] LEFTWARDS ARROW..LEFT RIGHT ARROW +2195..2199 ; Grapheme_Base # So [5] UP DOWN ARROW..SOUTH WEST ARROW +219A..219B ; Grapheme_Base # Sm [2] LEFTWARDS ARROW WITH STROKE..RIGHTWARDS ARROW WITH STROKE +219C..219F ; Grapheme_Base # So [4] LEFTWARDS WAVE ARROW..UPWARDS TWO HEADED ARROW +21A0 ; Grapheme_Base # Sm RIGHTWARDS TWO HEADED ARROW +21A1..21A2 ; Grapheme_Base # So [2] DOWNWARDS TWO HEADED ARROW..LEFTWARDS ARROW WITH TAIL +21A3 ; Grapheme_Base # Sm RIGHTWARDS ARROW WITH TAIL +21A4..21A5 ; Grapheme_Base # So [2] LEFTWARDS ARROW FROM BAR..UPWARDS ARROW FROM BAR +21A6 ; Grapheme_Base # Sm RIGHTWARDS ARROW FROM BAR +21A7..21AD ; Grapheme_Base # So [7] DOWNWARDS ARROW FROM BAR..LEFT RIGHT WAVE ARROW +21AE ; Grapheme_Base # Sm LEFT RIGHT ARROW WITH STROKE +21AF..21CD ; Grapheme_Base # So [31] DOWNWARDS ZIGZAG ARROW..LEFTWARDS DOUBLE ARROW WITH STROKE +21CE..21CF ; Grapheme_Base # Sm [2] LEFT RIGHT DOUBLE ARROW WITH STROKE..RIGHTWARDS DOUBLE ARROW WITH STROKE +21D0..21D1 ; Grapheme_Base # So [2] LEFTWARDS DOUBLE ARROW..UPWARDS DOUBLE ARROW +21D2 ; Grapheme_Base # Sm RIGHTWARDS DOUBLE ARROW +21D3 ; Grapheme_Base # So DOWNWARDS DOUBLE ARROW +21D4 ; Grapheme_Base # Sm LEFT RIGHT DOUBLE ARROW +21D5..21F3 ; Grapheme_Base # So [31] UP DOWN DOUBLE ARROW..UP DOWN WHITE ARROW +21F4..22FF ; Grapheme_Base # Sm [268] RIGHT ARROW WITH SMALL CIRCLE..Z NOTATION BAG MEMBERSHIP +2300..2307 ; Grapheme_Base # So [8] DIAMETER SIGN..WAVY LINE +2308..230B ; Grapheme_Base # Sm [4] LEFT CEILING..RIGHT 
FLOOR +230C..231F ; Grapheme_Base # So [20] BOTTOM RIGHT CROP..BOTTOM RIGHT CORNER +2320..2321 ; Grapheme_Base # Sm [2] TOP HALF INTEGRAL..BOTTOM HALF INTEGRAL +2322..2328 ; Grapheme_Base # So [7] FROWN..KEYBOARD +2329 ; Grapheme_Base # Ps LEFT-POINTING ANGLE BRACKET +232A ; Grapheme_Base # Pe RIGHT-POINTING ANGLE BRACKET +232B..237B ; Grapheme_Base # So [81] ERASE TO THE LEFT..NOT CHECK MARK +237C ; Grapheme_Base # Sm RIGHT ANGLE WITH DOWNWARDS ZIGZAG ARROW +237D..239A ; Grapheme_Base # So [30] SHOULDERED OPEN BOX..CLEAR SCREEN SYMBOL +239B..23B3 ; Grapheme_Base # Sm [25] LEFT PARENTHESIS UPPER HOOK..SUMMATION BOTTOM +23B4..23DB ; Grapheme_Base # So [40] TOP SQUARE BRACKET..FUSE +23DC..23E1 ; Grapheme_Base # Sm [6] TOP PARENTHESIS..BOTTOM TORTOISE SHELL BRACKET +23E2..23E8 ; Grapheme_Base # So [7] WHITE TRAPEZIUM..DECIMAL EXPONENT SYMBOL +2400..2426 ; Grapheme_Base # So [39] SYMBOL FOR NULL..SYMBOL FOR SUBSTITUTE FORM TWO +2440..244A ; Grapheme_Base # So [11] OCR HOOK..OCR DOUBLE BACKSLASH +2460..249B ; Grapheme_Base # No [60] CIRCLED DIGIT ONE..NUMBER TWENTY FULL STOP +249C..24E9 ; Grapheme_Base # So [78] PARENTHESIZED LATIN SMALL LETTER A..CIRCLED LATIN SMALL LETTER Z +24EA..24FF ; Grapheme_Base # No [22] CIRCLED DIGIT ZERO..NEGATIVE CIRCLED DIGIT ZERO +2500..25B6 ; Grapheme_Base # So [183] BOX DRAWINGS LIGHT HORIZONTAL..BLACK RIGHT-POINTING TRIANGLE +25B7 ; Grapheme_Base # Sm WHITE RIGHT-POINTING TRIANGLE +25B8..25C0 ; Grapheme_Base # So [9] BLACK RIGHT-POINTING SMALL TRIANGLE..BLACK LEFT-POINTING TRIANGLE +25C1 ; Grapheme_Base # Sm WHITE LEFT-POINTING TRIANGLE +25C2..25F7 ; Grapheme_Base # So [54] BLACK LEFT-POINTING SMALL TRIANGLE..WHITE CIRCLE WITH UPPER RIGHT QUADRANT +25F8..25FF ; Grapheme_Base # Sm [8] UPPER LEFT TRIANGLE..LOWER RIGHT TRIANGLE +2600..266E ; Grapheme_Base # So [111] BLACK SUN WITH RAYS..MUSIC NATURAL SIGN +266F ; Grapheme_Base # Sm MUSIC SHARP SIGN +2670..26CD ; Grapheme_Base # So [94] WEST SYRIAC CROSS..DISABLED CAR +26CF..26E1 ; 
Grapheme_Base # So [19] PICK..RESTRICTED LEFT ENTRY-2 +26E3 ; Grapheme_Base # So HEAVY CIRCLE WITH STROKE AND TWO DOTS ABOVE +26E8..26FF ; Grapheme_Base # So [24] BLACK CROSS ON SHIELD..WHITE FLAG WITH HORIZONTAL MIDDLE BLACK STRIPE +2701..2704 ; Grapheme_Base # So [4] UPPER BLADE SCISSORS..WHITE SCISSORS +2706..2709 ; Grapheme_Base # So [4] TELEPHONE LOCATION SIGN..ENVELOPE +270C..2727 ; Grapheme_Base # So [28] VICTORY HAND..WHITE FOUR POINTED STAR +2729..274B ; Grapheme_Base # So [35] STRESS OUTLINED WHITE STAR..HEAVY EIGHT TEARDROP-SPOKED PROPELLER ASTERISK +274D ; Grapheme_Base # So SHADOWED WHITE CIRCLE +274F..2752 ; Grapheme_Base # So [4] LOWER RIGHT DROP-SHADOWED WHITE SQUARE..UPPER RIGHT SHADOWED WHITE SQUARE +2756..275E ; Grapheme_Base # So [9] BLACK DIAMOND MINUS WHITE X..HEAVY DOUBLE COMMA QUOTATION MARK ORNAMENT +2761..2767 ; Grapheme_Base # So [7] CURVED STEM PARAGRAPH SIGN ORNAMENT..ROTATED FLORAL HEART BULLET +2768 ; Grapheme_Base # Ps MEDIUM LEFT PARENTHESIS ORNAMENT +2769 ; Grapheme_Base # Pe MEDIUM RIGHT PARENTHESIS ORNAMENT +276A ; Grapheme_Base # Ps MEDIUM FLATTENED LEFT PARENTHESIS ORNAMENT +276B ; Grapheme_Base # Pe MEDIUM FLATTENED RIGHT PARENTHESIS ORNAMENT +276C ; Grapheme_Base # Ps MEDIUM LEFT-POINTING ANGLE BRACKET ORNAMENT +276D ; Grapheme_Base # Pe MEDIUM RIGHT-POINTING ANGLE BRACKET ORNAMENT +276E ; Grapheme_Base # Ps HEAVY LEFT-POINTING ANGLE QUOTATION MARK ORNAMENT +276F ; Grapheme_Base # Pe HEAVY RIGHT-POINTING ANGLE QUOTATION MARK ORNAMENT +2770 ; Grapheme_Base # Ps HEAVY LEFT-POINTING ANGLE BRACKET ORNAMENT +2771 ; Grapheme_Base # Pe HEAVY RIGHT-POINTING ANGLE BRACKET ORNAMENT +2772 ; Grapheme_Base # Ps LIGHT LEFT TORTOISE SHELL BRACKET ORNAMENT +2773 ; Grapheme_Base # Pe LIGHT RIGHT TORTOISE SHELL BRACKET ORNAMENT +2774 ; Grapheme_Base # Ps MEDIUM LEFT CURLY BRACKET ORNAMENT +2775 ; Grapheme_Base # Pe MEDIUM RIGHT CURLY BRACKET ORNAMENT +2776..2793 ; Grapheme_Base # No [30] DINGBAT NEGATIVE CIRCLED DIGIT ONE..DINGBAT NEGATIVE 
CIRCLED SANS-SERIF NUMBER TEN +2794 ; Grapheme_Base # So HEAVY WIDE-HEADED RIGHTWARDS ARROW +2798..27AF ; Grapheme_Base # So [24] HEAVY SOUTH EAST ARROW..NOTCHED LOWER RIGHT-SHADOWED WHITE RIGHTWARDS ARROW +27B1..27BE ; Grapheme_Base # So [14] NOTCHED UPPER RIGHT-SHADOWED WHITE RIGHTWARDS ARROW..OPEN-OUTLINED RIGHTWARDS ARROW +27C0..27C4 ; Grapheme_Base # Sm [5] THREE DIMENSIONAL ANGLE..OPEN SUPERSET +27C5 ; Grapheme_Base # Ps LEFT S-SHAPED BAG DELIMITER +27C6 ; Grapheme_Base # Pe RIGHT S-SHAPED BAG DELIMITER +27C7..27CA ; Grapheme_Base # Sm [4] OR WITH DOT INSIDE..VERTICAL BAR WITH HORIZONTAL STROKE +27CC ; Grapheme_Base # Sm LONG DIVISION +27D0..27E5 ; Grapheme_Base # Sm [22] WHITE DIAMOND WITH CENTRED DOT..WHITE SQUARE WITH RIGHTWARDS TICK +27E6 ; Grapheme_Base # Ps MATHEMATICAL LEFT WHITE SQUARE BRACKET +27E7 ; Grapheme_Base # Pe MATHEMATICAL RIGHT WHITE SQUARE BRACKET +27E8 ; Grapheme_Base # Ps MATHEMATICAL LEFT ANGLE BRACKET +27E9 ; Grapheme_Base # Pe MATHEMATICAL RIGHT ANGLE BRACKET +27EA ; Grapheme_Base # Ps MATHEMATICAL LEFT DOUBLE ANGLE BRACKET +27EB ; Grapheme_Base # Pe MATHEMATICAL RIGHT DOUBLE ANGLE BRACKET +27EC ; Grapheme_Base # Ps MATHEMATICAL LEFT WHITE TORTOISE SHELL BRACKET +27ED ; Grapheme_Base # Pe MATHEMATICAL RIGHT WHITE TORTOISE SHELL BRACKET +27EE ; Grapheme_Base # Ps MATHEMATICAL LEFT FLATTENED PARENTHESIS +27EF ; Grapheme_Base # Pe MATHEMATICAL RIGHT FLATTENED PARENTHESIS +27F0..27FF ; Grapheme_Base # Sm [16] UPWARDS QUADRUPLE ARROW..LONG RIGHTWARDS SQUIGGLE ARROW +2800..28FF ; Grapheme_Base # So [256] BRAILLE PATTERN BLANK..BRAILLE PATTERN DOTS-12345678 +2900..2982 ; Grapheme_Base # Sm [131] RIGHTWARDS TWO-HEADED ARROW WITH VERTICAL STROKE..Z NOTATION TYPE COLON +2983 ; Grapheme_Base # Ps LEFT WHITE CURLY BRACKET +2984 ; Grapheme_Base # Pe RIGHT WHITE CURLY BRACKET +2985 ; Grapheme_Base # Ps LEFT WHITE PARENTHESIS +2986 ; Grapheme_Base # Pe RIGHT WHITE PARENTHESIS +2987 ; Grapheme_Base # Ps Z NOTATION LEFT IMAGE BRACKET +2988 ; 
Grapheme_Base # Pe Z NOTATION RIGHT IMAGE BRACKET +2989 ; Grapheme_Base # Ps Z NOTATION LEFT BINDING BRACKET +298A ; Grapheme_Base # Pe Z NOTATION RIGHT BINDING BRACKET +298B ; Grapheme_Base # Ps LEFT SQUARE BRACKET WITH UNDERBAR +298C ; Grapheme_Base # Pe RIGHT SQUARE BRACKET WITH UNDERBAR +298D ; Grapheme_Base # Ps LEFT SQUARE BRACKET WITH TICK IN TOP CORNER +298E ; Grapheme_Base # Pe RIGHT SQUARE BRACKET WITH TICK IN BOTTOM CORNER +298F ; Grapheme_Base # Ps LEFT SQUARE BRACKET WITH TICK IN BOTTOM CORNER +2990 ; Grapheme_Base # Pe RIGHT SQUARE BRACKET WITH TICK IN TOP CORNER +2991 ; Grapheme_Base # Ps LEFT ANGLE BRACKET WITH DOT +2992 ; Grapheme_Base # Pe RIGHT ANGLE BRACKET WITH DOT +2993 ; Grapheme_Base # Ps LEFT ARC LESS-THAN BRACKET +2994 ; Grapheme_Base # Pe RIGHT ARC GREATER-THAN BRACKET +2995 ; Grapheme_Base # Ps DOUBLE LEFT ARC GREATER-THAN BRACKET +2996 ; Grapheme_Base # Pe DOUBLE RIGHT ARC LESS-THAN BRACKET +2997 ; Grapheme_Base # Ps LEFT BLACK TORTOISE SHELL BRACKET +2998 ; Grapheme_Base # Pe RIGHT BLACK TORTOISE SHELL BRACKET +2999..29D7 ; Grapheme_Base # Sm [63] DOTTED FENCE..BLACK HOURGLASS +29D8 ; Grapheme_Base # Ps LEFT WIGGLY FENCE +29D9 ; Grapheme_Base # Pe RIGHT WIGGLY FENCE +29DA ; Grapheme_Base # Ps LEFT DOUBLE WIGGLY FENCE +29DB ; Grapheme_Base # Pe RIGHT DOUBLE WIGGLY FENCE +29DC..29FB ; Grapheme_Base # Sm [32] INCOMPLETE INFINITY..TRIPLE PLUS +29FC ; Grapheme_Base # Ps LEFT-POINTING CURVED ANGLE BRACKET +29FD ; Grapheme_Base # Pe RIGHT-POINTING CURVED ANGLE BRACKET +29FE..2AFF ; Grapheme_Base # Sm [258] TINY..N-ARY WHITE VERTICAL BAR +2B00..2B2F ; Grapheme_Base # So [48] NORTH EAST WHITE ARROW..WHITE VERTICAL ELLIPSE +2B30..2B44 ; Grapheme_Base # Sm [21] LEFT ARROW WITH SMALL CIRCLE..RIGHTWARDS ARROW THROUGH SUPERSET +2B45..2B46 ; Grapheme_Base # So [2] LEFTWARDS QUADRUPLE ARROW..RIGHTWARDS QUADRUPLE ARROW +2B47..2B4C ; Grapheme_Base # Sm [6] REVERSE TILDE OPERATOR ABOVE RIGHTWARDS ARROW..RIGHTWARDS ARROW ABOVE REVERSE TILDE OPERATOR 
+2B50..2B59 ; Grapheme_Base # So [10] WHITE MEDIUM STAR..HEAVY CIRCLED SALTIRE +2C00..2C2E ; Grapheme_Base # L& [47] GLAGOLITIC CAPITAL LETTER AZU..GLAGOLITIC CAPITAL LETTER LATINATE MYSLITE +2C30..2C5E ; Grapheme_Base # L& [47] GLAGOLITIC SMALL LETTER AZU..GLAGOLITIC SMALL LETTER LATINATE MYSLITE +2C60..2C7C ; Grapheme_Base # L& [29] LATIN CAPITAL LETTER L WITH DOUBLE BAR..LATIN SUBSCRIPT SMALL LETTER J +2C7D ; Grapheme_Base # Lm MODIFIER LETTER CAPITAL V +2C7E..2CE4 ; Grapheme_Base # L& [103] LATIN CAPITAL LETTER S WITH SWASH TAIL..COPTIC SYMBOL KAI +2CE5..2CEA ; Grapheme_Base # So [6] COPTIC SYMBOL MI RO..COPTIC SYMBOL SHIMA SIMA +2CEB..2CEE ; Grapheme_Base # L& [4] COPTIC CAPITAL LETTER CRYPTOGRAMMIC SHEI..COPTIC SMALL LETTER CRYPTOGRAMMIC GANGIA +2CF9..2CFC ; Grapheme_Base # Po [4] COPTIC OLD NUBIAN FULL STOP..COPTIC OLD NUBIAN VERSE DIVIDER +2CFD ; Grapheme_Base # No COPTIC FRACTION ONE HALF +2CFE..2CFF ; Grapheme_Base # Po [2] COPTIC FULL STOP..COPTIC MORPHOLOGICAL DIVIDER +2D00..2D25 ; Grapheme_Base # L& [38] GEORGIAN SMALL LETTER AN..GEORGIAN SMALL LETTER HOE +2D30..2D65 ; Grapheme_Base # Lo [54] TIFINAGH LETTER YA..TIFINAGH LETTER YAZZ +2D6F ; Grapheme_Base # Lm TIFINAGH MODIFIER LETTER LABIALIZATION MARK +2D80..2D96 ; Grapheme_Base # Lo [23] ETHIOPIC SYLLABLE LOA..ETHIOPIC SYLLABLE GGWE +2DA0..2DA6 ; Grapheme_Base # Lo [7] ETHIOPIC SYLLABLE SSA..ETHIOPIC SYLLABLE SSO +2DA8..2DAE ; Grapheme_Base # Lo [7] ETHIOPIC SYLLABLE CCA..ETHIOPIC SYLLABLE CCO +2DB0..2DB6 ; Grapheme_Base # Lo [7] ETHIOPIC SYLLABLE ZZA..ETHIOPIC SYLLABLE ZZO +2DB8..2DBE ; Grapheme_Base # Lo [7] ETHIOPIC SYLLABLE CCHA..ETHIOPIC SYLLABLE CCHO +2DC0..2DC6 ; Grapheme_Base # Lo [7] ETHIOPIC SYLLABLE QYA..ETHIOPIC SYLLABLE QYO +2DC8..2DCE ; Grapheme_Base # Lo [7] ETHIOPIC SYLLABLE KYA..ETHIOPIC SYLLABLE KYO +2DD0..2DD6 ; Grapheme_Base # Lo [7] ETHIOPIC SYLLABLE XYA..ETHIOPIC SYLLABLE XYO +2DD8..2DDE ; Grapheme_Base # Lo [7] ETHIOPIC SYLLABLE GYA..ETHIOPIC SYLLABLE GYO +2E00..2E01 ; 
Grapheme_Base # Po [2] RIGHT ANGLE SUBSTITUTION MARKER..RIGHT ANGLE DOTTED SUBSTITUTION MARKER +2E02 ; Grapheme_Base # Pi LEFT SUBSTITUTION BRACKET +2E03 ; Grapheme_Base # Pf RIGHT SUBSTITUTION BRACKET +2E04 ; Grapheme_Base # Pi LEFT DOTTED SUBSTITUTION BRACKET +2E05 ; Grapheme_Base # Pf RIGHT DOTTED SUBSTITUTION BRACKET +2E06..2E08 ; Grapheme_Base # Po [3] RAISED INTERPOLATION MARKER..DOTTED TRANSPOSITION MARKER +2E09 ; Grapheme_Base # Pi LEFT TRANSPOSITION BRACKET +2E0A ; Grapheme_Base # Pf RIGHT TRANSPOSITION BRACKET +2E0B ; Grapheme_Base # Po RAISED SQUARE +2E0C ; Grapheme_Base # Pi LEFT RAISED OMISSION BRACKET +2E0D ; Grapheme_Base # Pf RIGHT RAISED OMISSION BRACKET +2E0E..2E16 ; Grapheme_Base # Po [9] EDITORIAL CORONIS..DOTTED RIGHT-POINTING ANGLE +2E17 ; Grapheme_Base # Pd DOUBLE OBLIQUE HYPHEN +2E18..2E19 ; Grapheme_Base # Po [2] INVERTED INTERROBANG..PALM BRANCH +2E1A ; Grapheme_Base # Pd HYPHEN WITH DIAERESIS +2E1B ; Grapheme_Base # Po TILDE WITH RING ABOVE +2E1C ; Grapheme_Base # Pi LEFT LOW PARAPHRASE BRACKET +2E1D ; Grapheme_Base # Pf RIGHT LOW PARAPHRASE BRACKET +2E1E..2E1F ; Grapheme_Base # Po [2] TILDE WITH DOT ABOVE..TILDE WITH DOT BELOW +2E20 ; Grapheme_Base # Pi LEFT VERTICAL BAR WITH QUILL +2E21 ; Grapheme_Base # Pf RIGHT VERTICAL BAR WITH QUILL +2E22 ; Grapheme_Base # Ps TOP LEFT HALF BRACKET +2E23 ; Grapheme_Base # Pe TOP RIGHT HALF BRACKET +2E24 ; Grapheme_Base # Ps BOTTOM LEFT HALF BRACKET +2E25 ; Grapheme_Base # Pe BOTTOM RIGHT HALF BRACKET +2E26 ; Grapheme_Base # Ps LEFT SIDEWAYS U BRACKET +2E27 ; Grapheme_Base # Pe RIGHT SIDEWAYS U BRACKET +2E28 ; Grapheme_Base # Ps LEFT DOUBLE PARENTHESIS +2E29 ; Grapheme_Base # Pe RIGHT DOUBLE PARENTHESIS +2E2A..2E2E ; Grapheme_Base # Po [5] TWO DOTS OVER ONE DOT PUNCTUATION..REVERSED QUESTION MARK +2E2F ; Grapheme_Base # Lm VERTICAL TILDE +2E30..2E31 ; Grapheme_Base # Po [2] RING POINT..WORD SEPARATOR MIDDLE DOT +2E80..2E99 ; Grapheme_Base # So [26] CJK RADICAL REPEAT..CJK RADICAL RAP +2E9B..2EF3 ; 
Grapheme_Base # So [89] CJK RADICAL CHOKE..CJK RADICAL C-SIMPLIFIED TURTLE +2F00..2FD5 ; Grapheme_Base # So [214] KANGXI RADICAL ONE..KANGXI RADICAL FLUTE +2FF0..2FFB ; Grapheme_Base # So [12] IDEOGRAPHIC DESCRIPTION CHARACTER LEFT TO RIGHT..IDEOGRAPHIC DESCRIPTION CHARACTER OVERLAID +3000 ; Grapheme_Base # Zs IDEOGRAPHIC SPACE +3001..3003 ; Grapheme_Base # Po [3] IDEOGRAPHIC COMMA..DITTO MARK +3004 ; Grapheme_Base # So JAPANESE INDUSTRIAL STANDARD SYMBOL +3005 ; Grapheme_Base # Lm IDEOGRAPHIC ITERATION MARK +3006 ; Grapheme_Base # Lo IDEOGRAPHIC CLOSING MARK +3007 ; Grapheme_Base # Nl IDEOGRAPHIC NUMBER ZERO +3008 ; Grapheme_Base # Ps LEFT ANGLE BRACKET +3009 ; Grapheme_Base # Pe RIGHT ANGLE BRACKET +300A ; Grapheme_Base # Ps LEFT DOUBLE ANGLE BRACKET +300B ; Grapheme_Base # Pe RIGHT DOUBLE ANGLE BRACKET +300C ; Grapheme_Base # Ps LEFT CORNER BRACKET +300D ; Grapheme_Base # Pe RIGHT CORNER BRACKET +300E ; Grapheme_Base # Ps LEFT WHITE CORNER BRACKET +300F ; Grapheme_Base # Pe RIGHT WHITE CORNER BRACKET +3010 ; Grapheme_Base # Ps LEFT BLACK LENTICULAR BRACKET +3011 ; Grapheme_Base # Pe RIGHT BLACK LENTICULAR BRACKET +3012..3013 ; Grapheme_Base # So [2] POSTAL MARK..GETA MARK +3014 ; Grapheme_Base # Ps LEFT TORTOISE SHELL BRACKET +3015 ; Grapheme_Base # Pe RIGHT TORTOISE SHELL BRACKET +3016 ; Grapheme_Base # Ps LEFT WHITE LENTICULAR BRACKET +3017 ; Grapheme_Base # Pe RIGHT WHITE LENTICULAR BRACKET +3018 ; Grapheme_Base # Ps LEFT WHITE TORTOISE SHELL BRACKET +3019 ; Grapheme_Base # Pe RIGHT WHITE TORTOISE SHELL BRACKET +301A ; Grapheme_Base # Ps LEFT WHITE SQUARE BRACKET +301B ; Grapheme_Base # Pe RIGHT WHITE SQUARE BRACKET +301C ; Grapheme_Base # Pd WAVE DASH +301D ; Grapheme_Base # Ps REVERSED DOUBLE PRIME QUOTATION MARK +301E..301F ; Grapheme_Base # Pe [2] DOUBLE PRIME QUOTATION MARK..LOW DOUBLE PRIME QUOTATION MARK +3020 ; Grapheme_Base # So POSTAL MARK FACE +3021..3029 ; Grapheme_Base # Nl [9] HANGZHOU NUMERAL ONE..HANGZHOU NUMERAL NINE +3030 ; Grapheme_Base # 
Pd WAVY DASH +3031..3035 ; Grapheme_Base # Lm [5] VERTICAL KANA REPEAT MARK..VERTICAL KANA REPEAT MARK LOWER HALF +3036..3037 ; Grapheme_Base # So [2] CIRCLED POSTAL MARK..IDEOGRAPHIC TELEGRAPH LINE FEED SEPARATOR SYMBOL +3038..303A ; Grapheme_Base # Nl [3] HANGZHOU NUMERAL TEN..HANGZHOU NUMERAL THIRTY +303B ; Grapheme_Base # Lm VERTICAL IDEOGRAPHIC ITERATION MARK +303C ; Grapheme_Base # Lo MASU MARK +303D ; Grapheme_Base # Po PART ALTERNATION MARK +303E..303F ; Grapheme_Base # So [2] IDEOGRAPHIC VARIATION INDICATOR..IDEOGRAPHIC HALF FILL SPACE +3041..3096 ; Grapheme_Base # Lo [86] HIRAGANA LETTER SMALL A..HIRAGANA LETTER SMALL KE +309B..309C ; Grapheme_Base # Sk [2] KATAKANA-HIRAGANA VOICED SOUND MARK..KATAKANA-HIRAGANA SEMI-VOICED SOUND MARK +309D..309E ; Grapheme_Base # Lm [2] HIRAGANA ITERATION MARK..HIRAGANA VOICED ITERATION MARK +309F ; Grapheme_Base # Lo HIRAGANA DIGRAPH YORI +30A0 ; Grapheme_Base # Pd KATAKANA-HIRAGANA DOUBLE HYPHEN +30A1..30FA ; Grapheme_Base # Lo [90] KATAKANA LETTER SMALL A..KATAKANA LETTER VO +30FB ; Grapheme_Base # Po KATAKANA MIDDLE DOT +30FC..30FE ; Grapheme_Base # Lm [3] KATAKANA-HIRAGANA PROLONGED SOUND MARK..KATAKANA VOICED ITERATION MARK +30FF ; Grapheme_Base # Lo KATAKANA DIGRAPH KOTO +3105..312D ; Grapheme_Base # Lo [41] BOPOMOFO LETTER B..BOPOMOFO LETTER IH +3131..318E ; Grapheme_Base # Lo [94] HANGUL LETTER KIYEOK..HANGUL LETTER ARAEAE +3190..3191 ; Grapheme_Base # So [2] IDEOGRAPHIC ANNOTATION LINKING MARK..IDEOGRAPHIC ANNOTATION REVERSE MARK +3192..3195 ; Grapheme_Base # No [4] IDEOGRAPHIC ANNOTATION ONE MARK..IDEOGRAPHIC ANNOTATION FOUR MARK +3196..319F ; Grapheme_Base # So [10] IDEOGRAPHIC ANNOTATION TOP MARK..IDEOGRAPHIC ANNOTATION MAN MARK +31A0..31B7 ; Grapheme_Base # Lo [24] BOPOMOFO LETTER BU..BOPOMOFO FINAL LETTER H +31C0..31E3 ; Grapheme_Base # So [36] CJK STROKE T..CJK STROKE Q +31F0..31FF ; Grapheme_Base # Lo [16] KATAKANA LETTER SMALL KU..KATAKANA LETTER SMALL RO +3200..321E ; Grapheme_Base # So [31] 
PARENTHESIZED HANGUL KIYEOK..PARENTHESIZED KOREAN CHARACTER O HU +3220..3229 ; Grapheme_Base # No [10] PARENTHESIZED IDEOGRAPH ONE..PARENTHESIZED IDEOGRAPH TEN +322A..3250 ; Grapheme_Base # So [39] PARENTHESIZED IDEOGRAPH MOON..PARTNERSHIP SIGN +3251..325F ; Grapheme_Base # No [15] CIRCLED NUMBER TWENTY ONE..CIRCLED NUMBER THIRTY FIVE +3260..327F ; Grapheme_Base # So [32] CIRCLED HANGUL KIYEOK..KOREAN STANDARD SYMBOL +3280..3289 ; Grapheme_Base # No [10] CIRCLED IDEOGRAPH ONE..CIRCLED IDEOGRAPH TEN +328A..32B0 ; Grapheme_Base # So [39] CIRCLED IDEOGRAPH MOON..CIRCLED IDEOGRAPH NIGHT +32B1..32BF ; Grapheme_Base # No [15] CIRCLED NUMBER THIRTY SIX..CIRCLED NUMBER FIFTY +32C0..32FE ; Grapheme_Base # So [63] IDEOGRAPHIC TELEGRAPH SYMBOL FOR JANUARY..CIRCLED KATAKANA WO +3300..33FF ; Grapheme_Base # So [256] SQUARE APAATO..SQUARE GAL +3400..4DB5 ; Grapheme_Base # Lo [6582] CJK UNIFIED IDEOGRAPH-3400..CJK UNIFIED IDEOGRAPH-4DB5 +4DC0..4DFF ; Grapheme_Base # So [64] HEXAGRAM FOR THE CREATIVE HEAVEN..HEXAGRAM FOR BEFORE COMPLETION +4E00..9FCB ; Grapheme_Base # Lo [20940] CJK UNIFIED IDEOGRAPH-4E00..CJK UNIFIED IDEOGRAPH-9FCB +A000..A014 ; Grapheme_Base # Lo [21] YI SYLLABLE IT..YI SYLLABLE E +A015 ; Grapheme_Base # Lm YI SYLLABLE WU +A016..A48C ; Grapheme_Base # Lo [1143] YI SYLLABLE BIT..YI SYLLABLE YYR +A490..A4C6 ; Grapheme_Base # So [55] YI RADICAL QOT..YI RADICAL KE +A4D0..A4F7 ; Grapheme_Base # Lo [40] LISU LETTER BA..LISU LETTER OE +A4F8..A4FD ; Grapheme_Base # Lm [6] LISU LETTER TONE MYA TI..LISU LETTER TONE MYA JEU +A4FE..A4FF ; Grapheme_Base # Po [2] LISU PUNCTUATION COMMA..LISU PUNCTUATION FULL STOP +A500..A60B ; Grapheme_Base # Lo [268] VAI SYLLABLE EE..VAI SYLLABLE NG +A60C ; Grapheme_Base # Lm VAI SYLLABLE LENGTHENER +A60D..A60F ; Grapheme_Base # Po [3] VAI COMMA..VAI QUESTION MARK +A610..A61F ; Grapheme_Base # Lo [16] VAI SYLLABLE NDOLE FA..VAI SYMBOL JONG +A620..A629 ; Grapheme_Base # Nd [10] VAI DIGIT ZERO..VAI DIGIT NINE +A62A..A62B ; Grapheme_Base # Lo 
[2] VAI SYLLABLE NDOLE MA..VAI SYLLABLE NDOLE DO +A640..A65F ; Grapheme_Base # L& [32] CYRILLIC CAPITAL LETTER ZEMLYA..CYRILLIC SMALL LETTER YN +A662..A66D ; Grapheme_Base # L& [12] CYRILLIC CAPITAL LETTER SOFT DE..CYRILLIC SMALL LETTER DOUBLE MONOCULAR O +A66E ; Grapheme_Base # Lo CYRILLIC LETTER MULTIOCULAR O +A673 ; Grapheme_Base # Po SLAVONIC ASTERISK +A67E ; Grapheme_Base # Po CYRILLIC KAVYKA +A67F ; Grapheme_Base # Lm CYRILLIC PAYEROK +A680..A697 ; Grapheme_Base # L& [24] CYRILLIC CAPITAL LETTER DWE..CYRILLIC SMALL LETTER SHWE +A6A0..A6E5 ; Grapheme_Base # Lo [70] BAMUM LETTER A..BAMUM LETTER KI +A6E6..A6EF ; Grapheme_Base # Nl [10] BAMUM LETTER MO..BAMUM LETTER KOGHOM +A6F2..A6F7 ; Grapheme_Base # Po [6] BAMUM NJAEMLI..BAMUM QUESTION MARK +A700..A716 ; Grapheme_Base # Sk [23] MODIFIER LETTER CHINESE TONE YIN PING..MODIFIER LETTER EXTRA-LOW LEFT-STEM TONE BAR +A717..A71F ; Grapheme_Base # Lm [9] MODIFIER LETTER DOT VERTICAL BAR..MODIFIER LETTER LOW INVERTED EXCLAMATION MARK +A720..A721 ; Grapheme_Base # Sk [2] MODIFIER LETTER STRESS AND HIGH TONE..MODIFIER LETTER STRESS AND LOW TONE +A722..A76F ; Grapheme_Base # L& [78] LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF..LATIN SMALL LETTER CON +A770 ; Grapheme_Base # Lm MODIFIER LETTER US +A771..A787 ; Grapheme_Base # L& [23] LATIN SMALL LETTER DUM..LATIN SMALL LETTER INSULAR T +A788 ; Grapheme_Base # Lm MODIFIER LETTER LOW CIRCUMFLEX ACCENT +A789..A78A ; Grapheme_Base # Sk [2] MODIFIER LETTER COLON..MODIFIER LETTER SHORT EQUALS SIGN +A78B..A78C ; Grapheme_Base # L& [2] LATIN CAPITAL LETTER SALTILLO..LATIN SMALL LETTER SALTILLO +A7FB..A801 ; Grapheme_Base # Lo [7] LATIN EPIGRAPHIC LETTER REVERSED F..SYLOTI NAGRI LETTER I +A803..A805 ; Grapheme_Base # Lo [3] SYLOTI NAGRI LETTER U..SYLOTI NAGRI LETTER O +A807..A80A ; Grapheme_Base # Lo [4] SYLOTI NAGRI LETTER KO..SYLOTI NAGRI LETTER GHO +A80C..A822 ; Grapheme_Base # Lo [23] SYLOTI NAGRI LETTER CO..SYLOTI NAGRI LETTER HO +A823..A824 ; Grapheme_Base # Mc [2] SYLOTI NAGRI 
VOWEL SIGN A..SYLOTI NAGRI VOWEL SIGN I +A827 ; Grapheme_Base # Mc SYLOTI NAGRI VOWEL SIGN OO +A828..A82B ; Grapheme_Base # So [4] SYLOTI NAGRI POETRY MARK-1..SYLOTI NAGRI POETRY MARK-4 +A830..A835 ; Grapheme_Base # No [6] NORTH INDIC FRACTION ONE QUARTER..NORTH INDIC FRACTION THREE SIXTEENTHS +A836..A837 ; Grapheme_Base # So [2] NORTH INDIC QUARTER MARK..NORTH INDIC PLACEHOLDER MARK +A838 ; Grapheme_Base # Sc NORTH INDIC RUPEE MARK +A839 ; Grapheme_Base # So NORTH INDIC QUANTITY MARK +A840..A873 ; Grapheme_Base # Lo [52] PHAGS-PA LETTER KA..PHAGS-PA LETTER CANDRABINDU +A874..A877 ; Grapheme_Base # Po [4] PHAGS-PA SINGLE HEAD MARK..PHAGS-PA MARK DOUBLE SHAD +A880..A881 ; Grapheme_Base # Mc [2] SAURASHTRA SIGN ANUSVARA..SAURASHTRA SIGN VISARGA +A882..A8B3 ; Grapheme_Base # Lo [50] SAURASHTRA LETTER A..SAURASHTRA LETTER LLA +A8B4..A8C3 ; Grapheme_Base # Mc [16] SAURASHTRA CONSONANT SIGN HAARU..SAURASHTRA VOWEL SIGN AU +A8CE..A8CF ; Grapheme_Base # Po [2] SAURASHTRA DANDA..SAURASHTRA DOUBLE DANDA +A8D0..A8D9 ; Grapheme_Base # Nd [10] SAURASHTRA DIGIT ZERO..SAURASHTRA DIGIT NINE +A8F2..A8F7 ; Grapheme_Base # Lo [6] DEVANAGARI SIGN SPACING CANDRABINDU..DEVANAGARI SIGN CANDRABINDU AVAGRAHA +A8F8..A8FA ; Grapheme_Base # Po [3] DEVANAGARI SIGN PUSHPIKA..DEVANAGARI CARET +A8FB ; Grapheme_Base # Lo DEVANAGARI HEADSTROKE +A900..A909 ; Grapheme_Base # Nd [10] KAYAH LI DIGIT ZERO..KAYAH LI DIGIT NINE +A90A..A925 ; Grapheme_Base # Lo [28] KAYAH LI LETTER KA..KAYAH LI LETTER OO +A92E..A92F ; Grapheme_Base # Po [2] KAYAH LI SIGN CWI..KAYAH LI SIGN SHYA +A930..A946 ; Grapheme_Base # Lo [23] REJANG LETTER KA..REJANG LETTER A +A952..A953 ; Grapheme_Base # Mc [2] REJANG CONSONANT SIGN H..REJANG VIRAMA +A95F ; Grapheme_Base # Po REJANG SECTION MARK +A960..A97C ; Grapheme_Base # Lo [29] HANGUL CHOSEONG TIKEUT-MIEUM..HANGUL CHOSEONG SSANGYEORINHIEUH +A983 ; Grapheme_Base # Mc JAVANESE SIGN WIGNYAN +A984..A9B2 ; Grapheme_Base # Lo [47] JAVANESE LETTER A..JAVANESE LETTER HA +A9B4..A9B5 ; 
Grapheme_Base # Mc [2] JAVANESE VOWEL SIGN TARUNG..JAVANESE VOWEL SIGN TOLONG +A9BA..A9BB ; Grapheme_Base # Mc [2] JAVANESE VOWEL SIGN TALING..JAVANESE VOWEL SIGN DIRGA MURE +A9BD..A9C0 ; Grapheme_Base # Mc [4] JAVANESE CONSONANT SIGN KERET..JAVANESE PANGKON +A9C1..A9CD ; Grapheme_Base # Po [13] JAVANESE LEFT RERENGGAN..JAVANESE TURNED PADA PISELEH +A9CF ; Grapheme_Base # Lm JAVANESE PANGRANGKEP +A9D0..A9D9 ; Grapheme_Base # Nd [10] JAVANESE DIGIT ZERO..JAVANESE DIGIT NINE +A9DE..A9DF ; Grapheme_Base # Po [2] JAVANESE PADA TIRTA TUMETES..JAVANESE PADA ISEN-ISEN +AA00..AA28 ; Grapheme_Base # Lo [41] CHAM LETTER A..CHAM LETTER HA +AA2F..AA30 ; Grapheme_Base # Mc [2] CHAM VOWEL SIGN O..CHAM VOWEL SIGN AI +AA33..AA34 ; Grapheme_Base # Mc [2] CHAM CONSONANT SIGN YA..CHAM CONSONANT SIGN RA +AA40..AA42 ; Grapheme_Base # Lo [3] CHAM LETTER FINAL K..CHAM LETTER FINAL NG +AA44..AA4B ; Grapheme_Base # Lo [8] CHAM LETTER FINAL CH..CHAM LETTER FINAL SS +AA4D ; Grapheme_Base # Mc CHAM CONSONANT SIGN FINAL H +AA50..AA59 ; Grapheme_Base # Nd [10] CHAM DIGIT ZERO..CHAM DIGIT NINE +AA5C..AA5F ; Grapheme_Base # Po [4] CHAM PUNCTUATION SPIRAL..CHAM PUNCTUATION TRIPLE DANDA +AA60..AA6F ; Grapheme_Base # Lo [16] MYANMAR LETTER KHAMTI GA..MYANMAR LETTER KHAMTI FA +AA70 ; Grapheme_Base # Lm MYANMAR MODIFIER LETTER KHAMTI REDUPLICATION +AA71..AA76 ; Grapheme_Base # Lo [6] MYANMAR LETTER KHAMTI XA..MYANMAR LOGOGRAM KHAMTI HM +AA77..AA79 ; Grapheme_Base # So [3] MYANMAR SYMBOL AITON EXCLAMATION..MYANMAR SYMBOL AITON TWO +AA7A ; Grapheme_Base # Lo MYANMAR LETTER AITON RA +AA7B ; Grapheme_Base # Mc MYANMAR SIGN PAO KAREN TONE +AA80..AAAF ; Grapheme_Base # Lo [48] TAI VIET LETTER LOW KO..TAI VIET LETTER HIGH O +AAB1 ; Grapheme_Base # Lo TAI VIET VOWEL AA +AAB5..AAB6 ; Grapheme_Base # Lo [2] TAI VIET VOWEL E..TAI VIET VOWEL O +AAB9..AABD ; Grapheme_Base # Lo [5] TAI VIET VOWEL UEA..TAI VIET VOWEL AN +AAC0 ; Grapheme_Base # Lo TAI VIET TONE MAI NUENG +AAC2 ; Grapheme_Base # Lo TAI VIET TONE MAI 
SONG +AADB..AADC ; Grapheme_Base # Lo [2] TAI VIET SYMBOL KON..TAI VIET SYMBOL NUENG +AADD ; Grapheme_Base # Lm TAI VIET SYMBOL SAM +AADE..AADF ; Grapheme_Base # Po [2] TAI VIET SYMBOL HO HOI..TAI VIET SYMBOL KOI KOI +ABC0..ABE2 ; Grapheme_Base # Lo [35] MEETEI MAYEK LETTER KOK..MEETEI MAYEK LETTER I LONSUM +ABE3..ABE4 ; Grapheme_Base # Mc [2] MEETEI MAYEK VOWEL SIGN ONAP..MEETEI MAYEK VOWEL SIGN INAP +ABE6..ABE7 ; Grapheme_Base # Mc [2] MEETEI MAYEK VOWEL SIGN YENAP..MEETEI MAYEK VOWEL SIGN SOUNAP +ABE9..ABEA ; Grapheme_Base # Mc [2] MEETEI MAYEK VOWEL SIGN CHEINAP..MEETEI MAYEK VOWEL SIGN NUNG +ABEB ; Grapheme_Base # Po MEETEI MAYEK CHEIKHEI +ABEC ; Grapheme_Base # Mc MEETEI MAYEK LUM IYEK +ABF0..ABF9 ; Grapheme_Base # Nd [10] MEETEI MAYEK DIGIT ZERO..MEETEI MAYEK DIGIT NINE +AC00..D7A3 ; Grapheme_Base # Lo [11172] HANGUL SYLLABLE GA..HANGUL SYLLABLE HIH +D7B0..D7C6 ; Grapheme_Base # Lo [23] HANGUL JUNGSEONG O-YEO..HANGUL JUNGSEONG ARAEA-E +D7CB..D7FB ; Grapheme_Base # Lo [49] HANGUL JONGSEONG NIEUN-RIEUL..HANGUL JONGSEONG PHIEUPH-THIEUTH +F900..FA2D ; Grapheme_Base # Lo [302] CJK COMPATIBILITY IDEOGRAPH-F900..CJK COMPATIBILITY IDEOGRAPH-FA2D +FA30..FA6D ; Grapheme_Base # Lo [62] CJK COMPATIBILITY IDEOGRAPH-FA30..CJK COMPATIBILITY IDEOGRAPH-FA6D +FA70..FAD9 ; Grapheme_Base # Lo [106] CJK COMPATIBILITY IDEOGRAPH-FA70..CJK COMPATIBILITY IDEOGRAPH-FAD9 +FB00..FB06 ; Grapheme_Base # L& [7] LATIN SMALL LIGATURE FF..LATIN SMALL LIGATURE ST +FB13..FB17 ; Grapheme_Base # L& [5] ARMENIAN SMALL LIGATURE MEN NOW..ARMENIAN SMALL LIGATURE MEN XEH +FB1D ; Grapheme_Base # Lo HEBREW LETTER YOD WITH HIRIQ +FB1F..FB28 ; Grapheme_Base # Lo [10] HEBREW LIGATURE YIDDISH YOD YOD PATAH..HEBREW LETTER WIDE TAV +FB29 ; Grapheme_Base # Sm HEBREW LETTER ALTERNATIVE PLUS SIGN +FB2A..FB36 ; Grapheme_Base # Lo [13] HEBREW LETTER SHIN WITH SHIN DOT..HEBREW LETTER ZAYIN WITH DAGESH +FB38..FB3C ; Grapheme_Base # Lo [5] HEBREW LETTER TET WITH DAGESH..HEBREW LETTER LAMED WITH DAGESH +FB3E ; 
Grapheme_Base # Lo HEBREW LETTER MEM WITH DAGESH +FB40..FB41 ; Grapheme_Base # Lo [2] HEBREW LETTER NUN WITH DAGESH..HEBREW LETTER SAMEKH WITH DAGESH +FB43..FB44 ; Grapheme_Base # Lo [2] HEBREW LETTER FINAL PE WITH DAGESH..HEBREW LETTER PE WITH DAGESH +FB46..FBB1 ; Grapheme_Base # Lo [108] HEBREW LETTER TSADI WITH DAGESH..ARABIC LETTER YEH BARREE WITH HAMZA ABOVE FINAL FORM +FBD3..FD3D ; Grapheme_Base # Lo [363] ARABIC LETTER NG ISOLATED FORM..ARABIC LIGATURE ALEF WITH FATHATAN ISOLATED FORM +FD3E ; Grapheme_Base # Ps ORNATE LEFT PARENTHESIS +FD3F ; Grapheme_Base # Pe ORNATE RIGHT PARENTHESIS +FD50..FD8F ; Grapheme_Base # Lo [64] ARABIC LIGATURE TEH WITH JEEM WITH MEEM INITIAL FORM..ARABIC LIGATURE MEEM WITH KHAH WITH MEEM INITIAL FORM +FD92..FDC7 ; Grapheme_Base # Lo [54] ARABIC LIGATURE MEEM WITH JEEM WITH KHAH INITIAL FORM..ARABIC LIGATURE NOON WITH JEEM WITH YEH FINAL FORM +FDF0..FDFB ; Grapheme_Base # Lo [12] ARABIC LIGATURE SALLA USED AS KORANIC STOP SIGN ISOLATED FORM..ARABIC LIGATURE JALLAJALALOUHOU +FDFC ; Grapheme_Base # Sc RIAL SIGN +FDFD ; Grapheme_Base # So ARABIC LIGATURE BISMILLAH AR-RAHMAN AR-RAHEEM +FE10..FE16 ; Grapheme_Base # Po [7] PRESENTATION FORM FOR VERTICAL COMMA..PRESENTATION FORM FOR VERTICAL QUESTION MARK +FE17 ; Grapheme_Base # Ps PRESENTATION FORM FOR VERTICAL LEFT WHITE LENTICULAR BRACKET +FE18 ; Grapheme_Base # Pe PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRAKCET +FE19 ; Grapheme_Base # Po PRESENTATION FORM FOR VERTICAL HORIZONTAL ELLIPSIS +FE30 ; Grapheme_Base # Po PRESENTATION FORM FOR VERTICAL TWO DOT LEADER +FE31..FE32 ; Grapheme_Base # Pd [2] PRESENTATION FORM FOR VERTICAL EM DASH..PRESENTATION FORM FOR VERTICAL EN DASH +FE33..FE34 ; Grapheme_Base # Pc [2] PRESENTATION FORM FOR VERTICAL LOW LINE..PRESENTATION FORM FOR VERTICAL WAVY LOW LINE +FE35 ; Grapheme_Base # Ps PRESENTATION FORM FOR VERTICAL LEFT PARENTHESIS +FE36 ; Grapheme_Base # Pe PRESENTATION FORM FOR VERTICAL RIGHT PARENTHESIS +FE37 ; Grapheme_Base # Ps 
PRESENTATION FORM FOR VERTICAL LEFT CURLY BRACKET +FE38 ; Grapheme_Base # Pe PRESENTATION FORM FOR VERTICAL RIGHT CURLY BRACKET +FE39 ; Grapheme_Base # Ps PRESENTATION FORM FOR VERTICAL LEFT TORTOISE SHELL BRACKET +FE3A ; Grapheme_Base # Pe PRESENTATION FORM FOR VERTICAL RIGHT TORTOISE SHELL BRACKET +FE3B ; Grapheme_Base # Ps PRESENTATION FORM FOR VERTICAL LEFT BLACK LENTICULAR BRACKET +FE3C ; Grapheme_Base # Pe PRESENTATION FORM FOR VERTICAL RIGHT BLACK LENTICULAR BRACKET +FE3D ; Grapheme_Base # Ps PRESENTATION FORM FOR VERTICAL LEFT DOUBLE ANGLE BRACKET +FE3E ; Grapheme_Base # Pe PRESENTATION FORM FOR VERTICAL RIGHT DOUBLE ANGLE BRACKET +FE3F ; Grapheme_Base # Ps PRESENTATION FORM FOR VERTICAL LEFT ANGLE BRACKET +FE40 ; Grapheme_Base # Pe PRESENTATION FORM FOR VERTICAL RIGHT ANGLE BRACKET +FE41 ; Grapheme_Base # Ps PRESENTATION FORM FOR VERTICAL LEFT CORNER BRACKET +FE42 ; Grapheme_Base # Pe PRESENTATION FORM FOR VERTICAL RIGHT CORNER BRACKET +FE43 ; Grapheme_Base # Ps PRESENTATION FORM FOR VERTICAL LEFT WHITE CORNER BRACKET +FE44 ; Grapheme_Base # Pe PRESENTATION FORM FOR VERTICAL RIGHT WHITE CORNER BRACKET +FE45..FE46 ; Grapheme_Base # Po [2] SESAME DOT..WHITE SESAME DOT +FE47 ; Grapheme_Base # Ps PRESENTATION FORM FOR VERTICAL LEFT SQUARE BRACKET +FE48 ; Grapheme_Base # Pe PRESENTATION FORM FOR VERTICAL RIGHT SQUARE BRACKET +FE49..FE4C ; Grapheme_Base # Po [4] DASHED OVERLINE..DOUBLE WAVY OVERLINE +FE4D..FE4F ; Grapheme_Base # Pc [3] DASHED LOW LINE..WAVY LOW LINE +FE50..FE52 ; Grapheme_Base # Po [3] SMALL COMMA..SMALL FULL STOP +FE54..FE57 ; Grapheme_Base # Po [4] SMALL SEMICOLON..SMALL EXCLAMATION MARK +FE58 ; Grapheme_Base # Pd SMALL EM DASH +FE59 ; Grapheme_Base # Ps SMALL LEFT PARENTHESIS +FE5A ; Grapheme_Base # Pe SMALL RIGHT PARENTHESIS +FE5B ; Grapheme_Base # Ps SMALL LEFT CURLY BRACKET +FE5C ; Grapheme_Base # Pe SMALL RIGHT CURLY BRACKET +FE5D ; Grapheme_Base # Ps SMALL LEFT TORTOISE SHELL BRACKET +FE5E ; Grapheme_Base # Pe SMALL RIGHT TORTOISE SHELL 
BRACKET +FE5F..FE61 ; Grapheme_Base # Po [3] SMALL NUMBER SIGN..SMALL ASTERISK +FE62 ; Grapheme_Base # Sm SMALL PLUS SIGN +FE63 ; Grapheme_Base # Pd SMALL HYPHEN-MINUS +FE64..FE66 ; Grapheme_Base # Sm [3] SMALL LESS-THAN SIGN..SMALL EQUALS SIGN +FE68 ; Grapheme_Base # Po SMALL REVERSE SOLIDUS +FE69 ; Grapheme_Base # Sc SMALL DOLLAR SIGN +FE6A..FE6B ; Grapheme_Base # Po [2] SMALL PERCENT SIGN..SMALL COMMERCIAL AT +FE70..FE74 ; Grapheme_Base # Lo [5] ARABIC FATHATAN ISOLATED FORM..ARABIC KASRATAN ISOLATED FORM +FE76..FEFC ; Grapheme_Base # Lo [135] ARABIC FATHA ISOLATED FORM..ARABIC LIGATURE LAM WITH ALEF FINAL FORM +FF01..FF03 ; Grapheme_Base # Po [3] FULLWIDTH EXCLAMATION MARK..FULLWIDTH NUMBER SIGN +FF04 ; Grapheme_Base # Sc FULLWIDTH DOLLAR SIGN +FF05..FF07 ; Grapheme_Base # Po [3] FULLWIDTH PERCENT SIGN..FULLWIDTH APOSTROPHE +FF08 ; Grapheme_Base # Ps FULLWIDTH LEFT PARENTHESIS +FF09 ; Grapheme_Base # Pe FULLWIDTH RIGHT PARENTHESIS +FF0A ; Grapheme_Base # Po FULLWIDTH ASTERISK +FF0B ; Grapheme_Base # Sm FULLWIDTH PLUS SIGN +FF0C ; Grapheme_Base # Po FULLWIDTH COMMA +FF0D ; Grapheme_Base # Pd FULLWIDTH HYPHEN-MINUS +FF0E..FF0F ; Grapheme_Base # Po [2] FULLWIDTH FULL STOP..FULLWIDTH SOLIDUS +FF10..FF19 ; Grapheme_Base # Nd [10] FULLWIDTH DIGIT ZERO..FULLWIDTH DIGIT NINE +FF1A..FF1B ; Grapheme_Base # Po [2] FULLWIDTH COLON..FULLWIDTH SEMICOLON +FF1C..FF1E ; Grapheme_Base # Sm [3] FULLWIDTH LESS-THAN SIGN..FULLWIDTH GREATER-THAN SIGN +FF1F..FF20 ; Grapheme_Base # Po [2] FULLWIDTH QUESTION MARK..FULLWIDTH COMMERCIAL AT +FF21..FF3A ; Grapheme_Base # L& [26] FULLWIDTH LATIN CAPITAL LETTER A..FULLWIDTH LATIN CAPITAL LETTER Z +FF3B ; Grapheme_Base # Ps FULLWIDTH LEFT SQUARE BRACKET +FF3C ; Grapheme_Base # Po FULLWIDTH REVERSE SOLIDUS +FF3D ; Grapheme_Base # Pe FULLWIDTH RIGHT SQUARE BRACKET +FF3E ; Grapheme_Base # Sk FULLWIDTH CIRCUMFLEX ACCENT +FF3F ; Grapheme_Base # Pc FULLWIDTH LOW LINE +FF40 ; Grapheme_Base # Sk FULLWIDTH GRAVE ACCENT +FF41..FF5A ; Grapheme_Base # L& 
[26] FULLWIDTH LATIN SMALL LETTER A..FULLWIDTH LATIN SMALL LETTER Z +FF5B ; Grapheme_Base # Ps FULLWIDTH LEFT CURLY BRACKET +FF5C ; Grapheme_Base # Sm FULLWIDTH VERTICAL LINE +FF5D ; Grapheme_Base # Pe FULLWIDTH RIGHT CURLY BRACKET +FF5E ; Grapheme_Base # Sm FULLWIDTH TILDE +FF5F ; Grapheme_Base # Ps FULLWIDTH LEFT WHITE PARENTHESIS +FF60 ; Grapheme_Base # Pe FULLWIDTH RIGHT WHITE PARENTHESIS +FF61 ; Grapheme_Base # Po HALFWIDTH IDEOGRAPHIC FULL STOP +FF62 ; Grapheme_Base # Ps HALFWIDTH LEFT CORNER BRACKET +FF63 ; Grapheme_Base # Pe HALFWIDTH RIGHT CORNER BRACKET +FF64..FF65 ; Grapheme_Base # Po [2] HALFWIDTH IDEOGRAPHIC COMMA..HALFWIDTH KATAKANA MIDDLE DOT +FF66..FF6F ; Grapheme_Base # Lo [10] HALFWIDTH KATAKANA LETTER WO..HALFWIDTH KATAKANA LETTER SMALL TU +FF70 ; Grapheme_Base # Lm HALFWIDTH KATAKANA-HIRAGANA PROLONGED SOUND MARK +FF71..FF9D ; Grapheme_Base # Lo [45] HALFWIDTH KATAKANA LETTER A..HALFWIDTH KATAKANA LETTER N +FFA0..FFBE ; Grapheme_Base # Lo [31] HALFWIDTH HANGUL FILLER..HALFWIDTH HANGUL LETTER HIEUH +FFC2..FFC7 ; Grapheme_Base # Lo [6] HALFWIDTH HANGUL LETTER A..HALFWIDTH HANGUL LETTER E +FFCA..FFCF ; Grapheme_Base # Lo [6] HALFWIDTH HANGUL LETTER YEO..HALFWIDTH HANGUL LETTER OE +FFD2..FFD7 ; Grapheme_Base # Lo [6] HALFWIDTH HANGUL LETTER YO..HALFWIDTH HANGUL LETTER YU +FFDA..FFDC ; Grapheme_Base # Lo [3] HALFWIDTH HANGUL LETTER EU..HALFWIDTH HANGUL LETTER I +FFE0..FFE1 ; Grapheme_Base # Sc [2] FULLWIDTH CENT SIGN..FULLWIDTH POUND SIGN +FFE2 ; Grapheme_Base # Sm FULLWIDTH NOT SIGN +FFE3 ; Grapheme_Base # Sk FULLWIDTH MACRON +FFE4 ; Grapheme_Base # So FULLWIDTH BROKEN BAR +FFE5..FFE6 ; Grapheme_Base # Sc [2] FULLWIDTH YEN SIGN..FULLWIDTH WON SIGN +FFE8 ; Grapheme_Base # So HALFWIDTH FORMS LIGHT VERTICAL +FFE9..FFEC ; Grapheme_Base # Sm [4] HALFWIDTH LEFTWARDS ARROW..HALFWIDTH DOWNWARDS ARROW +FFED..FFEE ; Grapheme_Base # So [2] HALFWIDTH BLACK SQUARE..HALFWIDTH WHITE CIRCLE +FFFC..FFFD ; Grapheme_Base # So [2] OBJECT REPLACEMENT 
CHARACTER..REPLACEMENT CHARACTER +10000..1000B ; Grapheme_Base # Lo [12] LINEAR B SYLLABLE B008 A..LINEAR B SYLLABLE B046 JE +1000D..10026 ; Grapheme_Base # Lo [26] LINEAR B SYLLABLE B036 JO..LINEAR B SYLLABLE B032 QO +10028..1003A ; Grapheme_Base # Lo [19] LINEAR B SYLLABLE B060 RA..LINEAR B SYLLABLE B042 WO +1003C..1003D ; Grapheme_Base # Lo [2] LINEAR B SYLLABLE B017 ZA..LINEAR B SYLLABLE B074 ZE +1003F..1004D ; Grapheme_Base # Lo [15] LINEAR B SYLLABLE B020 ZO..LINEAR B SYLLABLE B091 TWO +10050..1005D ; Grapheme_Base # Lo [14] LINEAR B SYMBOL B018..LINEAR B SYMBOL B089 +10080..100FA ; Grapheme_Base # Lo [123] LINEAR B IDEOGRAM B100 MAN..LINEAR B IDEOGRAM VESSEL B305 +10100..10101 ; Grapheme_Base # Po [2] AEGEAN WORD SEPARATOR LINE..AEGEAN WORD SEPARATOR DOT +10102 ; Grapheme_Base # So AEGEAN CHECK MARK +10107..10133 ; Grapheme_Base # No [45] AEGEAN NUMBER ONE..AEGEAN NUMBER NINETY THOUSAND +10137..1013F ; Grapheme_Base # So [9] AEGEAN WEIGHT BASE UNIT..AEGEAN MEASURE THIRD SUBUNIT +10140..10174 ; Grapheme_Base # Nl [53] GREEK ACROPHONIC ATTIC ONE QUARTER..GREEK ACROPHONIC STRATIAN FIFTY MNAS +10175..10178 ; Grapheme_Base # No [4] GREEK ONE HALF SIGN..GREEK THREE QUARTERS SIGN +10179..10189 ; Grapheme_Base # So [17] GREEK YEAR SIGN..GREEK TRYBLION BASE SIGN +1018A ; Grapheme_Base # No GREEK ZERO SIGN +10190..1019B ; Grapheme_Base # So [12] ROMAN SEXTANS SIGN..ROMAN CENTURIAL SIGN +101D0..101FC ; Grapheme_Base # So [45] PHAISTOS DISC SIGN PEDESTRIAN..PHAISTOS DISC SIGN WAVY BAND +10280..1029C ; Grapheme_Base # Lo [29] LYCIAN LETTER A..LYCIAN LETTER X +102A0..102D0 ; Grapheme_Base # Lo [49] CARIAN LETTER A..CARIAN LETTER UUU3 +10300..1031E ; Grapheme_Base # Lo [31] OLD ITALIC LETTER A..OLD ITALIC LETTER UU +10320..10323 ; Grapheme_Base # No [4] OLD ITALIC NUMERAL ONE..OLD ITALIC NUMERAL FIFTY +10330..10340 ; Grapheme_Base # Lo [17] GOTHIC LETTER AHSA..GOTHIC LETTER PAIRTHRA +10341 ; Grapheme_Base # Nl GOTHIC LETTER NINETY +10342..10349 ; Grapheme_Base # Lo [8] 
GOTHIC LETTER RAIDA..GOTHIC LETTER OTHAL +1034A ; Grapheme_Base # Nl GOTHIC LETTER NINE HUNDRED +10380..1039D ; Grapheme_Base # Lo [30] UGARITIC LETTER ALPA..UGARITIC LETTER SSU +1039F ; Grapheme_Base # Po UGARITIC WORD DIVIDER +103A0..103C3 ; Grapheme_Base # Lo [36] OLD PERSIAN SIGN A..OLD PERSIAN SIGN HA +103C8..103CF ; Grapheme_Base # Lo [8] OLD PERSIAN SIGN AURAMAZDAA..OLD PERSIAN SIGN BUUMISH +103D0 ; Grapheme_Base # Po OLD PERSIAN WORD DIVIDER +103D1..103D5 ; Grapheme_Base # Nl [5] OLD PERSIAN NUMBER ONE..OLD PERSIAN NUMBER HUNDRED +10400..1044F ; Grapheme_Base # L& [80] DESERET CAPITAL LETTER LONG I..DESERET SMALL LETTER EW +10450..1049D ; Grapheme_Base # Lo [78] SHAVIAN LETTER PEEP..OSMANYA LETTER OO +104A0..104A9 ; Grapheme_Base # Nd [10] OSMANYA DIGIT ZERO..OSMANYA DIGIT NINE +10800..10805 ; Grapheme_Base # Lo [6] CYPRIOT SYLLABLE A..CYPRIOT SYLLABLE JA +10808 ; Grapheme_Base # Lo CYPRIOT SYLLABLE JO +1080A..10835 ; Grapheme_Base # Lo [44] CYPRIOT SYLLABLE KA..CYPRIOT SYLLABLE WO +10837..10838 ; Grapheme_Base # Lo [2] CYPRIOT SYLLABLE XA..CYPRIOT SYLLABLE XE +1083C ; Grapheme_Base # Lo CYPRIOT SYLLABLE ZA +1083F..10855 ; Grapheme_Base # Lo [23] CYPRIOT SYLLABLE ZO..IMPERIAL ARAMAIC LETTER TAW +10857 ; Grapheme_Base # Po IMPERIAL ARAMAIC SECTION SIGN +10858..1085F ; Grapheme_Base # No [8] IMPERIAL ARAMAIC NUMBER ONE..IMPERIAL ARAMAIC NUMBER TEN THOUSAND +10900..10915 ; Grapheme_Base # Lo [22] PHOENICIAN LETTER ALF..PHOENICIAN LETTER TAU +10916..1091B ; Grapheme_Base # No [6] PHOENICIAN NUMBER ONE..PHOENICIAN NUMBER THREE +1091F ; Grapheme_Base # Po PHOENICIAN WORD SEPARATOR +10920..10939 ; Grapheme_Base # Lo [26] LYDIAN LETTER A..LYDIAN LETTER C +1093F ; Grapheme_Base # Po LYDIAN TRIANGULAR MARK +10A00 ; Grapheme_Base # Lo KHAROSHTHI LETTER A +10A10..10A13 ; Grapheme_Base # Lo [4] KHAROSHTHI LETTER KA..KHAROSHTHI LETTER GHA +10A15..10A17 ; Grapheme_Base # Lo [3] KHAROSHTHI LETTER CA..KHAROSHTHI LETTER JA +10A19..10A33 ; Grapheme_Base # Lo [27] KHAROSHTHI 
LETTER NYA..KHAROSHTHI LETTER TTTHA +10A40..10A47 ; Grapheme_Base # No [8] KHAROSHTHI DIGIT ONE..KHAROSHTHI NUMBER ONE THOUSAND +10A50..10A58 ; Grapheme_Base # Po [9] KHAROSHTHI PUNCTUATION DOT..KHAROSHTHI PUNCTUATION LINES +10A60..10A7C ; Grapheme_Base # Lo [29] OLD SOUTH ARABIAN LETTER HE..OLD SOUTH ARABIAN LETTER THETH +10A7D..10A7E ; Grapheme_Base # No [2] OLD SOUTH ARABIAN NUMBER ONE..OLD SOUTH ARABIAN NUMBER FIFTY +10A7F ; Grapheme_Base # Po OLD SOUTH ARABIAN NUMERIC INDICATOR +10B00..10B35 ; Grapheme_Base # Lo [54] AVESTAN LETTER A..AVESTAN LETTER HE +10B39..10B3F ; Grapheme_Base # Po [7] AVESTAN ABBREVIATION MARK..LARGE ONE RING OVER TWO RINGS PUNCTUATION +10B40..10B55 ; Grapheme_Base # Lo [22] INSCRIPTIONAL PARTHIAN LETTER ALEPH..INSCRIPTIONAL PARTHIAN LETTER TAW +10B58..10B5F ; Grapheme_Base # No [8] INSCRIPTIONAL PARTHIAN NUMBER ONE..INSCRIPTIONAL PARTHIAN NUMBER ONE THOUSAND +10B60..10B72 ; Grapheme_Base # Lo [19] INSCRIPTIONAL PAHLAVI LETTER ALEPH..INSCRIPTIONAL PAHLAVI LETTER TAW +10B78..10B7F ; Grapheme_Base # No [8] INSCRIPTIONAL PAHLAVI NUMBER ONE..INSCRIPTIONAL PAHLAVI NUMBER ONE THOUSAND +10C00..10C48 ; Grapheme_Base # Lo [73] OLD TURKIC LETTER ORKHON A..OLD TURKIC LETTER ORKHON BASH +10E60..10E7E ; Grapheme_Base # No [31] RUMI DIGIT ONE..RUMI FRACTION TWO THIRDS +11082 ; Grapheme_Base # Mc KAITHI SIGN VISARGA +11083..110AF ; Grapheme_Base # Lo [45] KAITHI LETTER A..KAITHI LETTER HA +110B0..110B2 ; Grapheme_Base # Mc [3] KAITHI VOWEL SIGN AA..KAITHI VOWEL SIGN II +110B7..110B8 ; Grapheme_Base # Mc [2] KAITHI VOWEL SIGN O..KAITHI VOWEL SIGN AU +110BB..110BC ; Grapheme_Base # Po [2] KAITHI ABBREVIATION SIGN..KAITHI ENUMERATION SIGN +110BE..110C1 ; Grapheme_Base # Po [4] KAITHI SECTION MARK..KAITHI DOUBLE DANDA +12000..1236E ; Grapheme_Base # Lo [879] CUNEIFORM SIGN A..CUNEIFORM SIGN ZUM +12400..12462 ; Grapheme_Base # Nl [99] CUNEIFORM NUMERIC SIGN TWO ASH..CUNEIFORM NUMERIC SIGN OLD ASSYRIAN ONE QUARTER +12470..12473 ; Grapheme_Base # Po [4] 
CUNEIFORM PUNCTUATION SIGN OLD ASSYRIAN WORD DIVIDER..CUNEIFORM PUNCTUATION SIGN DIAGONAL TRICOLON +13000..1342E ; Grapheme_Base # Lo [1071] EGYPTIAN HIEROGLYPH A001..EGYPTIAN HIEROGLYPH AA032 +1D000..1D0F5 ; Grapheme_Base # So [246] BYZANTINE MUSICAL SYMBOL PSILI..BYZANTINE MUSICAL SYMBOL GORGON NEO KATO +1D100..1D126 ; Grapheme_Base # So [39] MUSICAL SYMBOL SINGLE BARLINE..MUSICAL SYMBOL DRUM CLEF-2 +1D129..1D164 ; Grapheme_Base # So [60] MUSICAL SYMBOL MULTIPLE MEASURE REST..MUSICAL SYMBOL ONE HUNDRED TWENTY-EIGHTH NOTE +1D166 ; Grapheme_Base # Mc MUSICAL SYMBOL COMBINING SPRECHGESANG STEM +1D16A..1D16C ; Grapheme_Base # So [3] MUSICAL SYMBOL FINGERED TREMOLO-1..MUSICAL SYMBOL FINGERED TREMOLO-3 +1D16D ; Grapheme_Base # Mc MUSICAL SYMBOL COMBINING AUGMENTATION DOT +1D183..1D184 ; Grapheme_Base # So [2] MUSICAL SYMBOL ARPEGGIATO UP..MUSICAL SYMBOL ARPEGGIATO DOWN +1D18C..1D1A9 ; Grapheme_Base # So [30] MUSICAL SYMBOL RINFORZANDO..MUSICAL SYMBOL DEGREE SLASH +1D1AE..1D1DD ; Grapheme_Base # So [48] MUSICAL SYMBOL PEDAL MARK..MUSICAL SYMBOL PES SUBPUNCTIS +1D200..1D241 ; Grapheme_Base # So [66] GREEK VOCAL NOTATION SYMBOL-1..GREEK INSTRUMENTAL NOTATION SYMBOL-54 +1D245 ; Grapheme_Base # So GREEK MUSICAL LEIMMA +1D300..1D356 ; Grapheme_Base # So [87] MONOGRAM FOR EARTH..TETRAGRAM FOR FOSTERING +1D360..1D371 ; Grapheme_Base # No [18] COUNTING ROD UNIT DIGIT ONE..COUNTING ROD TENS DIGIT NINE +1D400..1D454 ; Grapheme_Base # L& [85] MATHEMATICAL BOLD CAPITAL A..MATHEMATICAL ITALIC SMALL G +1D456..1D49C ; Grapheme_Base # L& [71] MATHEMATICAL ITALIC SMALL I..MATHEMATICAL SCRIPT CAPITAL A +1D49E..1D49F ; Grapheme_Base # L& [2] MATHEMATICAL SCRIPT CAPITAL C..MATHEMATICAL SCRIPT CAPITAL D +1D4A2 ; Grapheme_Base # L& MATHEMATICAL SCRIPT CAPITAL G +1D4A5..1D4A6 ; Grapheme_Base # L& [2] MATHEMATICAL SCRIPT CAPITAL J..MATHEMATICAL SCRIPT CAPITAL K +1D4A9..1D4AC ; Grapheme_Base # L& [4] MATHEMATICAL SCRIPT CAPITAL N..MATHEMATICAL SCRIPT CAPITAL Q +1D4AE..1D4B9 ; Grapheme_Base # L& 
[12] MATHEMATICAL SCRIPT CAPITAL S..MATHEMATICAL SCRIPT SMALL D +1D4BB ; Grapheme_Base # L& MATHEMATICAL SCRIPT SMALL F +1D4BD..1D4C3 ; Grapheme_Base # L& [7] MATHEMATICAL SCRIPT SMALL H..MATHEMATICAL SCRIPT SMALL N +1D4C5..1D505 ; Grapheme_Base # L& [65] MATHEMATICAL SCRIPT SMALL P..MATHEMATICAL FRAKTUR CAPITAL B +1D507..1D50A ; Grapheme_Base # L& [4] MATHEMATICAL FRAKTUR CAPITAL D..MATHEMATICAL FRAKTUR CAPITAL G +1D50D..1D514 ; Grapheme_Base # L& [8] MATHEMATICAL FRAKTUR CAPITAL J..MATHEMATICAL FRAKTUR CAPITAL Q +1D516..1D51C ; Grapheme_Base # L& [7] MATHEMATICAL FRAKTUR CAPITAL S..MATHEMATICAL FRAKTUR CAPITAL Y +1D51E..1D539 ; Grapheme_Base # L& [28] MATHEMATICAL FRAKTUR SMALL A..MATHEMATICAL DOUBLE-STRUCK CAPITAL B +1D53B..1D53E ; Grapheme_Base # L& [4] MATHEMATICAL DOUBLE-STRUCK CAPITAL D..MATHEMATICAL DOUBLE-STRUCK CAPITAL G +1D540..1D544 ; Grapheme_Base # L& [5] MATHEMATICAL DOUBLE-STRUCK CAPITAL I..MATHEMATICAL DOUBLE-STRUCK CAPITAL M +1D546 ; Grapheme_Base # L& MATHEMATICAL DOUBLE-STRUCK CAPITAL O +1D54A..1D550 ; Grapheme_Base # L& [7] MATHEMATICAL DOUBLE-STRUCK CAPITAL S..MATHEMATICAL DOUBLE-STRUCK CAPITAL Y +1D552..1D6A5 ; Grapheme_Base # L& [340] MATHEMATICAL DOUBLE-STRUCK SMALL A..MATHEMATICAL ITALIC SMALL DOTLESS J +1D6A8..1D6C0 ; Grapheme_Base # L& [25] MATHEMATICAL BOLD CAPITAL ALPHA..MATHEMATICAL BOLD CAPITAL OMEGA +1D6C1 ; Grapheme_Base # Sm MATHEMATICAL BOLD NABLA +1D6C2..1D6DA ; Grapheme_Base # L& [25] MATHEMATICAL BOLD SMALL ALPHA..MATHEMATICAL BOLD SMALL OMEGA +1D6DB ; Grapheme_Base # Sm MATHEMATICAL BOLD PARTIAL DIFFERENTIAL +1D6DC..1D6FA ; Grapheme_Base # L& [31] MATHEMATICAL BOLD EPSILON SYMBOL..MATHEMATICAL ITALIC CAPITAL OMEGA +1D6FB ; Grapheme_Base # Sm MATHEMATICAL ITALIC NABLA +1D6FC..1D714 ; Grapheme_Base # L& [25] MATHEMATICAL ITALIC SMALL ALPHA..MATHEMATICAL ITALIC SMALL OMEGA +1D715 ; Grapheme_Base # Sm MATHEMATICAL ITALIC PARTIAL DIFFERENTIAL +1D716..1D734 ; Grapheme_Base # L& [31] MATHEMATICAL ITALIC EPSILON SYMBOL..MATHEMATICAL 
BOLD ITALIC CAPITAL OMEGA +1D735 ; Grapheme_Base # Sm MATHEMATICAL BOLD ITALIC NABLA +1D736..1D74E ; Grapheme_Base # L& [25] MATHEMATICAL BOLD ITALIC SMALL ALPHA..MATHEMATICAL BOLD ITALIC SMALL OMEGA +1D74F ; Grapheme_Base # Sm MATHEMATICAL BOLD ITALIC PARTIAL DIFFERENTIAL +1D750..1D76E ; Grapheme_Base # L& [31] MATHEMATICAL BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD CAPITAL OMEGA +1D76F ; Grapheme_Base # Sm MATHEMATICAL SANS-SERIF BOLD NABLA +1D770..1D788 ; Grapheme_Base # L& [25] MATHEMATICAL SANS-SERIF BOLD SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD SMALL OMEGA +1D789 ; Grapheme_Base # Sm MATHEMATICAL SANS-SERIF BOLD PARTIAL DIFFERENTIAL +1D78A..1D7A8 ; Grapheme_Base # L& [31] MATHEMATICAL SANS-SERIF BOLD EPSILON SYMBOL..MATHEMATICAL SANS-SERIF BOLD ITALIC CAPITAL OMEGA +1D7A9 ; Grapheme_Base # Sm MATHEMATICAL SANS-SERIF BOLD ITALIC NABLA +1D7AA..1D7C2 ; Grapheme_Base # L& [25] MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL ALPHA..MATHEMATICAL SANS-SERIF BOLD ITALIC SMALL OMEGA +1D7C3 ; Grapheme_Base # Sm MATHEMATICAL SANS-SERIF BOLD ITALIC PARTIAL DIFFERENTIAL +1D7C4..1D7CB ; Grapheme_Base # L& [8] MATHEMATICAL SANS-SERIF BOLD ITALIC EPSILON SYMBOL..MATHEMATICAL BOLD SMALL DIGAMMA +1D7CE..1D7FF ; Grapheme_Base # Nd [50] MATHEMATICAL BOLD DIGIT ZERO..MATHEMATICAL MONOSPACE DIGIT NINE +1F000..1F02B ; Grapheme_Base # So [44] MAHJONG TILE EAST WIND..MAHJONG TILE BACK +1F030..1F093 ; Grapheme_Base # So [100] DOMINO TILE HORIZONTAL BACK..DOMINO TILE VERTICAL-06-06 +1F100..1F10A ; Grapheme_Base # No [11] DIGIT ZERO FULL STOP..DIGIT NINE COMMA +1F110..1F12E ; Grapheme_Base # So [31] PARENTHESIZED LATIN CAPITAL LETTER A..CIRCLED WZ +1F131 ; Grapheme_Base # So SQUARED LATIN CAPITAL LETTER B +1F13D ; Grapheme_Base # So SQUARED LATIN CAPITAL LETTER N +1F13F ; Grapheme_Base # So SQUARED LATIN CAPITAL LETTER P +1F142 ; Grapheme_Base # So SQUARED LATIN CAPITAL LETTER S +1F146 ; Grapheme_Base # So SQUARED LATIN CAPITAL LETTER W +1F14A..1F14E ; Grapheme_Base # So [5] 
SQUARED HV..SQUARED PPV +1F157 ; Grapheme_Base # So NEGATIVE CIRCLED LATIN CAPITAL LETTER H +1F15F ; Grapheme_Base # So NEGATIVE CIRCLED LATIN CAPITAL LETTER P +1F179 ; Grapheme_Base # So NEGATIVE SQUARED LATIN CAPITAL LETTER J +1F17B..1F17C ; Grapheme_Base # So [2] NEGATIVE SQUARED LATIN CAPITAL LETTER L..NEGATIVE SQUARED LATIN CAPITAL LETTER M +1F17F ; Grapheme_Base # So NEGATIVE SQUARED LATIN CAPITAL LETTER P +1F18A..1F18D ; Grapheme_Base # So [4] CROSSED NEGATIVE SQUARED LATIN CAPITAL LETTER P..NEGATIVE SQUARED SA +1F190 ; Grapheme_Base # So SQUARE DJ +1F200 ; Grapheme_Base # So SQUARE HIRAGANA HOKA +1F210..1F231 ; Grapheme_Base # So [34] SQUARED CJK UNIFIED IDEOGRAPH-624B..SQUARED CJK UNIFIED IDEOGRAPH-6253 +1F240..1F248 ; Grapheme_Base # So [9] TORTOISE SHELL BRACKETED CJK UNIFIED IDEOGRAPH-672C..TORTOISE SHELL BRACKETED CJK UNIFIED IDEOGRAPH-6557 +20000..2A6D6 ; Grapheme_Base # Lo [42711] CJK UNIFIED IDEOGRAPH-20000..CJK UNIFIED IDEOGRAPH-2A6D6 +2A700..2B734 ; Grapheme_Base # Lo [4149] CJK UNIFIED IDEOGRAPH-2A700..CJK UNIFIED IDEOGRAPH-2B734 +2F800..2FA1D ; Grapheme_Base # Lo [542] CJK COMPATIBILITY IDEOGRAPH-2F800..CJK COMPATIBILITY IDEOGRAPH-2FA1D + +# Total code points: 105958 + +# ================================================ + +# Derived Property: Grapheme_Link (deprecated) +# Generated from: Canonical_Combining_Class=Virama +# Use Canonical_Combining_Class=Virama directly instead + +094D ; Grapheme_Link # Mn DEVANAGARI SIGN VIRAMA +09CD ; Grapheme_Link # Mn BENGALI SIGN VIRAMA +0A4D ; Grapheme_Link # Mn GURMUKHI SIGN VIRAMA +0ACD ; Grapheme_Link # Mn GUJARATI SIGN VIRAMA +0B4D ; Grapheme_Link # Mn ORIYA SIGN VIRAMA +0BCD ; Grapheme_Link # Mn TAMIL SIGN VIRAMA +0C4D ; Grapheme_Link # Mn TELUGU SIGN VIRAMA +0CCD ; Grapheme_Link # Mn KANNADA SIGN VIRAMA +0D4D ; Grapheme_Link # Mn MALAYALAM SIGN VIRAMA +0DCA ; Grapheme_Link # Mn SINHALA SIGN AL-LAKUNA +0E3A ; Grapheme_Link # Mn THAI CHARACTER PHINTHU +0F84 ; Grapheme_Link # Mn TIBETAN MARK HALANTA 
+1039..103A ; Grapheme_Link # Mn [2] MYANMAR SIGN VIRAMA..MYANMAR SIGN ASAT +1714 ; Grapheme_Link # Mn TAGALOG SIGN VIRAMA +1734 ; Grapheme_Link # Mn HANUNOO SIGN PAMUDPOD +17D2 ; Grapheme_Link # Mn KHMER SIGN COENG +1A60 ; Grapheme_Link # Mn TAI THAM SIGN SAKOT +1B44 ; Grapheme_Link # Mc BALINESE ADEG ADEG +1BAA ; Grapheme_Link # Mc SUNDANESE SIGN PAMAAEH +A806 ; Grapheme_Link # Mn SYLOTI NAGRI SIGN HASANTA +A8C4 ; Grapheme_Link # Mn SAURASHTRA SIGN VIRAMA +A953 ; Grapheme_Link # Mc REJANG VIRAMA +A9C0 ; Grapheme_Link # Mc JAVANESE PANGKON +ABED ; Grapheme_Link # Mn MEETEI MAYEK APUN IYEK +10A3F ; Grapheme_Link # Mn KHAROSHTHI VIRAMA +110B9 ; Grapheme_Link # Mn KAITHI SIGN VIRAMA + +# Total code points: 27 + +# EOF diff --git a/pypy/module/unicodedata/generate_unicodedb.py b/pypy/module/unicodedata/generate_unicodedb.py --- a/pypy/module/unicodedata/generate_unicodedb.py +++ b/pypy/module/unicodedata/generate_unicodedb.py @@ -37,6 +37,7 @@ self.excluded = False self.linebreak = False self.decompositionTag = '' + self.properties = () if data[5]: self.raw_decomposition = data[5] if data[5][0] == '<': @@ -95,7 +96,8 @@ return table[code].canonical_decomp def read_unicodedata(unicodedata_file, exclusions_file, east_asian_width_file, - unihan_file=None, linebreak_file=None): + unihan_file=None, linebreak_file=None, + derived_core_properties_file=None): rangeFirst = {} rangeLast = {} table = [None] * (MAXUNICODE + 1) @@ -164,6 +166,27 @@ else: table[int(code, 16)].east_asian_width = width + # Read Derived Core Properties: + for line in derived_core_properties_file: + line = line.split('#', 1)[0].strip() + if not line: + continue + + r, p = line.split(";") + r = r.strip() + p = p.strip() + if ".." in r: + first, last = [int(c, 16) for c in r.split('..')] + chars = list(range(first, last+1)) + else: + chars = [int(r, 16)] + for char in chars: + if not table[char]: + # Some properties (e.g. 
Default_Ignorable_Code_Point) + # apply to unassigned code points; ignore them + continue + table[char].properties += (p,) + # Expand ranges for (first, last), char in ranges.iteritems(): for code in range(first, last + 1): @@ -257,6 +280,8 @@ IS_DIGIT = 128 IS_DECIMAL = 256 IS_MIRRORED = 512 + IS_XID_START = 1024 + IS_XID_CONTINUE = 2048 # Create the records db_records = {} for code in range(len(table)): @@ -282,6 +307,10 @@ flags |= IS_LOWER if char.mirrored: flags |= IS_MIRRORED + if "XID_Start" in char.properties: + flags |= IS_XID_START + if "XID_Continue" in char.properties: + flags |= IS_XID_CONTINUE char.db_record = (char.category, char.bidirectional, char.east_asian_width, flags, char.combining) db_records[char.db_record] = 1 db_records = db_records.keys() @@ -335,6 +364,8 @@ print >> outfile, 'def istitle(code): return _get_record(code)[3] & %d != 0'% IS_TITLE print >> outfile, 'def islower(code): return _get_record(code)[3] & %d != 0'% IS_LOWER print >> outfile, 'def iscased(code): return _get_record(code)[3] & %d != 0'% (IS_UPPER | IS_TITLE | IS_LOWER) + print >> outfile, 'def isxidstart(code): return _get_record(code)[3] & %d != 0'% (IS_XID_START) + print >> outfile, 'def isxidcontinue(code): return _get_record(code)[3] & %d != 0'% (IS_XID_CONTINUE) print >> outfile, 'def mirrored(code): return _get_record(code)[3] & %d != 0'% IS_MIRRORED print >> outfile, 'def combining(code): return _get_record(code)[4]' @@ -681,9 +712,11 @@ east_asian_width = open('EastAsianWidth-%s.txt' % options.unidata_version) unihan = open('UnihanNumeric-%s.txt' % options.unidata_version) linebreak = open('LineBreak-%s.txt' % options.unidata_version) + derived_core_properties = open('DerivedCoreProperties-%s.txt' % + options.unidata_version) table = read_unicodedata(infile, exclusions, east_asian_width, unihan, - linebreak) + linebreak, derived_core_properties) print >> outfile, '# UNICODE CHARACTER DATABASE' print >> outfile, '# This file was generated with the command:' print 
>> outfile, '# ', ' '.join(sys.argv) diff --git a/pypy/module/unicodedata/unicodedb_5_2_0.py b/pypy/module/unicodedata/unicodedb_5_2_0.py --- a/pypy/module/unicodedata/unicodedb_5_2_0.py +++ b/pypy/module/unicodedata/unicodedb_5_2_0.py @@ -221,7 +221,7 @@ '\x043 RI' '\x05MUOY ' '\x05HEAD ' -'\x08AVY BAND' +'\x08RDEL NAG' '\x07R TSHES' '\x12KBAR ISOLATED FORM' '\x07KYLISMA' @@ -304,13 +304,12 @@ '\x06WIDTH ' '\x0cN ELEMENT OF' '\x0fDAGESH OR MAPIQ' -'\x08CANDICUS' +'!RIGHTWARDS HARPOON WITH BARB DOWN' '\x0cTHODOX CROSS' '\x10AM ISOLATED FORM' '\x08ULL STOP' '\x0eDEYTEROS ICHOS' '\x04ZERO' -'!ALL BUT UPPER LEFT QUADRANT BLACK' '\x05OSS O' '\x04ERET' '\x10VERTICALLY ABOVE' @@ -617,7 +616,6 @@ '\x05EAGLE' '\x05TIMES' '\x05BREW ' -'\x04AMMA' '\x0eWITH DIAERESIS' '\x06OR EQU' '\x08CRO SIGN' @@ -697,7 +695,7 @@ '\x03IRT' '\x03IRU' '\tNTERPRISE' -'\x03IRI' +'\x07DISIMOU' '\x1bKATHAKA INDEPENDENT SVARITA' '\x05TYR T' '\x03IRO' @@ -766,7 +764,7 @@ '\x02NG' '\x02NY' '\tIRCUMFLEX' -'\x02NT' +'\x07AS SIGN' '\x02NU' '\x02NW' '\x0fFOUR DOTS ABOVE' @@ -804,7 +802,6 @@ '\x0cMBELLISHMENT' '\tNCLOSING ' '\x07MINIMA ' -'\x07SHORT A' '\rEAVENLY EARTH' '\x0cEMISOFT SIGN' '\x043 MI' @@ -887,7 +884,7 @@ '\x1dDOTS OVER ONE DOT PUNCTUATION' '\x04IXTH' '\x03UR-' -'\x08RDEL NAG' +'\x08AVY BAND' '\x0bBLACK ARROW' '\x073 AREPA' '\x03URE' @@ -918,7 +915,7 @@ '\x04 ALL' '\x05THIRD' '\x04GIBA' -'\nLET SYMBOL' +'\x03OVE' '\rFULL SURROUND' '\n TIMES LAL' '\x18RIGHTWARDS THEN CURVING ' @@ -1067,7 +1064,6 @@ '\x0eGYPTOLOGICAL A' '\x04GHWA' '\x15ALATALIZED HOOK BELOW' -'\x13TRICTED LEFT ENTRY-' '\x070 SPEAR' '\x16HEXIFORM LONG ANUSVARA' '\x02OO' @@ -1189,7 +1185,7 @@ '\x06RGLASS' '\x05CREEN' '\x04MON ' -'\x08NEIFORM ' +'\x12 THROUGH DESCENDER' '\x05LEFT ' '\x05A-ROL' '\x0fQUADRUPLE ARROW' @@ -1197,7 +1193,7 @@ '\x08NG RTAGS' '\x07 TEDUNG' '\x0cNUMERAL SIGN' -'\x03OVE' +'\nLET SYMBOL' '\x06TO BAR' '\x11TAN ISOLATED FORM' '\x03D70' @@ -1232,7 +1228,7 @@ '\rGYA GRAM SHAD' '\x07RESILLO' '\x10FALLING 
DIAGONAL' -'\x12 THROUGH DESCENDER' +'\x13TRICTED LEFT ENTRY-' '\x08THAMASAT' '\x14SINGLE DOT TONE MARK' '\x08 STATERS' @@ -1391,6 +1387,7 @@ '\x06TAUROS' '\x03CR ' '\x03OPO' +'\x12LEFT-STEM TONE BAR' '\x05 FACE' '\x08FROM BAR' '\x17YELORUSSIAN-UKRAINIAN I' @@ -1472,7 +1469,7 @@ '\x03O B' '\x03O N' '\x18LINE HORIZONTAL ELLIPSIS' -'!RIGHTWARDS HARPOON WITH BARB DOWN' +'\x08CANDICUS' '\x03O Y' '\x04MINI' '\x0cENOS CHRONOU' @@ -1642,7 +1639,7 @@ '\x05SHARA' '\x05ESIS ' '\rWITH UNDERBAR' -'\x19 WITH DOUBLE GRAVE ACCENT' +'\x19OUTLINED RIGHTWARDS ARROW' '\x05SHARP' '\x05SHARU' "'USED AS KORANIC STOP SIGN ISOLATED FORM" @@ -2369,7 +2366,9 @@ '\x05CHECK' '\x07SIGN PA' '\x08BOL FOR ' +'\x08NEIFORM ' '\x05 HAA ' +'\x04 RHO' '\nPLUS NAGA ' '\x04GRU ' '\x05TONE ' @@ -2968,8 +2967,8 @@ '\x03H00' '\x02U0' '\x02U3' -'\x05NADA ' -'\x03EEL' +'\x02U2' +'\x0cCIAN LETTER ' '\x10CORNER DOWNWARDS' '\x03EEN' '\x03EEE' @@ -3750,7 +3749,7 @@ '\x10SLANTED EQUAL TO' '\x0bIAN LETTER ' '\x12KOREAN CHARACTER O' -'\x16OVER LEFTWARDS HARPOON' +'\x0cGEBA KAREN I' '\tLVIN SIGN' '\x06BISHOP' '\x0cPER-EM SPACE' @@ -3772,7 +3771,7 @@ '\x03TAS' '\x05LAMED' '\x08EQUAL TO' -'\x03TAA' +'\rAWELLEMET YAZ' '\x03TAB' '\x03TAL' '\x03TAM' @@ -3782,7 +3781,7 @@ '\x05PRIME' '\x0cUN WITH RAYS' '\x04KAKO' -'\x19OUTLINED RIGHTWARDS ARROW' +'\x19 WITH DOUBLE GRAVE ACCENT' '\x18E PLUS A PLUS SU PLUS NA' '\x0bQUARTER ASH' '\x03DJA' @@ -3937,7 +3936,7 @@ '\x02EP' '\x02ER' '\x04AMIL' -'\x07DISIMOU' +'\x03IRI' '\tERCIAL AT' '\x1aMARRIED PARTNERSHIP SYMBOL' '\x06NTOGEN' @@ -4084,6 +4083,7 @@ '\x18VOICED LARYNGEAL SPIRANT' '\t PLUS GUD' '\x05TAL S' +'\x0fEFTWARDS ARROWS' '\x0cRAH BEN YOMO' '\t2 GARMENT' '\x11NAUDIZ NYD NAUD N' @@ -4318,7 +4318,7 @@ '\x07STROKE ' '\r-OR-PLUS SIGN' '\x0eJOINED SQUARES' -'\x07AS SIGN' +'\x02NT' '\nRUPEE SIGN' '\x1fUPWARDS HARPOON WITH BARB RIGHT' '\x08 CURRENT' @@ -4342,7 +4342,7 @@ '\x15INVERTED GLOTTAL STOP' '\x05RIGHT' '\nCTION MARK' -'\x12LEFT-STEM TONE BAR' +'\x04AMMA' '\x0bERCENT SIGN' 
'\x07HIN DOT' '\x08OVER SAG' @@ -4459,7 +4459,7 @@ '\x04ARGI' '\x05DIAL ' '\x04ARGA' -'\x0cGEBA KAREN I' +'\x16OVER LEFTWARDS HARPOON' '\x04KING' '\x05ICHON' '\x03OSS' @@ -4597,7 +4597,7 @@ '\x0bUNCTUATION ' '\x06ANCHOR' '\x07PRICORN' -'\rAWELLEMET YAZ' +'\x03TAA' '\t PLUS ASH' '\x05NGLE ' '\x0bTHEMATICAL ' @@ -4655,7 +4655,6 @@ '\x0eTHAKA ANUDATTA' '\x15AKIA TELOUS ICHIMATOS' '\x04HORN' -'\x04 RHO' '\x05OKARA' '\x03MUG' '\x10HEAVY AND RIGHT ' @@ -4869,7 +4868,7 @@ '\x06WN BOX' '\x1cLIQUID MEASURE FIRST SUBUNIT' '\x0cMETA STAVROU' -'\x08MAKSURA ' +'\x0cINITIAL FORM' '\x07 LONSUM' '\x04TIKI' '\x15VOICED ITERATION MARK' @@ -4917,7 +4916,7 @@ '\x0bTRUNCATED A' '\x05JEEM ' '\x07OVERBAR' -'\x0fEFTWARDS ARROWS' +'!ALL BUT UPPER LEFT QUADRANT BLACK' '\x19INVERTED EXCLAMATION MARK' '\x06-IEUNG' '\t DRACHMAS' @@ -5041,6 +5040,7 @@ '\tCAPITAL D' '\tCAPITAL F' '\x1dDOWN HEAVY AND RIGHT UP LIGHT' +'\x0fGH VOLTAGE SIGN' '\nETA SYMBOL' '\x03GOU' '\x03AY ' @@ -5066,7 +5066,7 @@ '\x13VARIANT FORM ILIMMU' '\x01H' '\x03EQU' -'\x0cINITIAL FORM' +'\x08MAKSURA ' '\x04ADDA' '\x05SHED ' '\x08MEM-QOPH' @@ -5136,7 +5136,7 @@ '\x02IE' '\x02ID' '\x02IG' -'\x0fGH VOLTAGE SIGN' +'\x07SHORT A' '\x13ARTIAL DIFFERENTIAL' '\x04AME ' '\x04ZATA' @@ -5560,10 +5560,10 @@ '\x0bIOR PRODUCT' '\x06IGAMMA' '\x04MUIN' -'\x02U2' +'\x05NADA ' '\x0bRIPLE PRIME' '\x0bMIDDLE HOOK' -'\x0cCIAN LETTER ' +'\x03EEL' '\x05MAI K' '\x04NGIA' '\x16HORT HORIZONTAL STROKE' @@ -5985,3286 +5985,3286 @@ '\x06ALENTS' ) _charnodes =[70758, - -33936, + -33944, -1, 132371, - 34747, + 34739, -1, 197694, - 104236, + 104224, -1, 262727, - 169245, + 169237, -1, 327957, - 251140, + 251146, -1, 393238, - 280647, + 280619, -1, -65529, - 346183, + 346155, 195071, -65528, - 396588, + 396579, 195070, -65527, - 506107, + 506087, 195069, -65526, - 555888, + 555880, 195068, -65525, - 605632, + 605604, 195067, -65524, 656179, 195066, -65523, - 775428, + 775434, 195065, -65522, - 825353, + 825341, 195064, -65521, - 874626, + 874612, 195063, 
-65520, - 925298, + 925285, 195062, -65519, - 1034822, + 1034828, 195061, -65518, - 1084557, + 1084549, 195060, -65517, - 1134189, + 1134161, 195059, -65516, - 1184784, + 1184775, 195058, -65515, - 1293992, + 1293994, 195057, -1, - 1344078, + 1344070, 195056, 1507367, - 331052, + 331043, -1, -65512, - 1460295, + 1460267, 195055, -65511, - 1510700, + 1510691, 195054, -65510, - 1620219, + 1620199, 195053, -65509, - 1670000, + 1669992, 195052, -65508, - 1719744, + 1719716, 195051, -65507, 1770291, 195050, -65506, - 1889540, + 1889546, 195049, -65505, - 1939465, + 1939453, 195048, -65504, - 1988738, + 1988724, 195047, -65503, - 2039410, + 2039397, 195046, -65502, - 2148934, + 2148940, 195045, -65501, - 2198669, + 2198661, 195044, -65500, - 2248301, + 2248273, 195043, -65499, - 2298896, + 2298887, 195042, -65498, - 2408104, + 2408106, 195041, -1, - 2458190, + 2458182, 195040, 2621496, - 1489147, + 1489127, -1, -65495, - 2574407, + 2574379, 195039, -65494, - 2624812, + 2624803, 195038, -65493, - 2734331, + 2734311, 195037, -65492, - 2784112, + 2784104, 195036, -65491, - 2833856, + 2833828, 195035, -65490, 2884403, 195034, -65489, - 3003652, + 3003658, 195033, -65488, - 3053577, + 3053565, 195032, -65487, - 3102850, + 3102836, 195031, -65486, - 3153522, + 3153509, 195030, -65485, - 3263046, + 3263052, 195029, -65484, - 3312781, + 3312773, 195028, -65483, - 3362413, + 3362385, 195027, -65482, - 3413008, + 3412999, 195026, -65481, - 3522216, + 3522218, 195025, -1, - 3572302, + 3572294, 195024, 3735625, - 2587504, + 2587496, -1, -65478, - 3688519, + 3688491, 195023, -65477, - 3738924, + 3738915, 195022, -65476, - 3848443, + 3848423, 195021, -65475, - 3898224, + 3898216, 195020, -65474, - 3947968, + 3947940, 195019, -65473, 3998515, 195018, -65472, - 4117764, + 4117770, 195017, -65471, - 4167689, + 4167677, 195016, -65470, - 4216962, + 4216948, 195015, -65469, - 4267634, + 4267621, 195014, -65468, - 4377158, + 4377164, 195013, -65467, - 4426893, + 4426885, 195012, -65466, - 
4476525, + 4476497, 195011, -65465, - 4527120, + 4527111, 195010, -65464, - 4636328, + 4636330, 195009, -1, - 4686414, + 4686406, 195008, 4849754, - 3685824, + 3685796, -1, -65461, - 4802631, + 4802603, 195007, -65460, - 4853036, + 4853027, 195006, -65459, - 4962555, + 4962535, 195005, -65458, - 5012336, + 5012328, 195004, -65457, - 5062080, + 5062052, 195003, -65456, 5112627, 195002, -65455, - 5231876, + 5231882, 195001, -65454, - 5281801, + 5281789, 195000, -65453, - 5331074, + 5331060, 194999, -65452, - 5381746, + 5381733, 194998, -65451, - 5491270, + 5491276, 194997, -65450, - 5541005, + 5540997, 194996, -65449, - 5590637, + 5590609, 194995, -65448, - 5641232, + 5641223, 194994, -65447, - 5750440, + 5750442, 194993, -1, From noreply at buildbot.pypy.org Sat Jan 14 21:48:26 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:26 +0100 (CET) Subject: [pypy-commit] pypy py3k: Implement str.isidentifier() Message-ID: <20120114204826.EEDDE82B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51318:c863145453cb Date: 2011-12-22 10:44 +0100 http://bitbucket.org/pypy/pypy/changeset/c863145453cb/ Log: Implement str.isidentifier() diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -162,6 +162,13 @@ assert "!Brown Fox".istitle() == True assert "Brow&&&&N Fox".istitle() == True assert "!Brow&&&&n Fox".istitle() == False + + def test_isidentifier(self): + assert "".isidentifier() is False + assert "a4".isidentifier() is True + assert "_var".isidentifier() is True + assert "_!var".isidentifier() is False + assert "3abc".isidentifier() is False def test_capitalize(self): assert "brown fox".capitalize() == "Brown fox" diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ 
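The `read_unicodedata` change in the generate_unicodedb.py diff above parses `DerivedCoreProperties.txt` lines of the form `XXXX[..YYYY] ; Property # comment` and attaches each property to every code point in the range. The sketch below isolates that parsing step as a standalone function; the function name `parse_derived_properties` and the dict-of-lists return shape are illustrative choices, not part of the actual PyPy code, which instead appends to per-character `properties` tuples in its `table`.

```python
def parse_derived_properties(lines):
    """Parse 'XXXX[..YYYY] ; Prop # ...' lines into {codepoint: [properties]}.

    Mirrors the loop added to read_unicodedata(): strip the trailing
    '#' comment, skip blank lines, split on ';', and expand 'first..last'
    hex ranges into individual code points.
    """
    props = {}
    for line in lines:
        line = line.split('#', 1)[0].strip()  # drop the trailing comment
        if not line:
            continue
        r, p = (part.strip() for part in line.split(';'))
        if '..' in r:
            first, last = (int(c, 16) for c in r.split('..'))
        else:
            first = last = int(r, 16)
        for char in range(first, last + 1):
            props.setdefault(char, []).append(p)
    return props
```

The real generator additionally skips code points that are unassigned in `UnicodeData.txt` (some properties, e.g. Default_Ignorable_Code_Point, apply to unassigned code points); this sketch keeps them all.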
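The py3k commit above implements `str.isidentifier()` on top of the new `isxidstart`/`isxidcontinue` flags: per PEP 3131, the first character must be in XID_Start (with `_` allowed as well) and every later character in XID_Continue. CPython 3's built-in `str.isidentifier()` applies the same rule, so the behavior the new test file expects can be checked directly against it:

```python
# Expected results taken from the test_isidentifier cases in the diff;
# str.isidentifier() here is CPython 3's built-in, assumed to follow
# the same PEP 3131 rule as the PyPy implementation.
checks = {
    "": False,      # empty string is never an identifier
    "a4": True,     # letter start, digit continue
    "_var": True,   # '_' is explicitly allowed as a start character
    "_!var": False, # '!' is not in XID_Continue
    "3abc": False,  # digits are not in XID_Start
}
for s, expected in checks.items():
    assert s.isidentifier() is expected
```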
b/pypy/objspace/std/unicodeobject.py @@ -279,6 +279,27 @@ previous_is_cased = False return space.newbool(cased) +def unicode_isidentifier__Unicode(space, w_unicode): + v = w_unicode._value + if len(v) == 0: + return space.w_False + + # PEP 3131 says that the first character must be in XID_Start and + # subsequent characters in XID_Continue, and for the ASCII range, + # the 2.x rules apply (i.e start with letters and underscore, + # continue with letters, digits, underscore). However, given the + # current definition of XID_Start and XID_Continue, it is + # sufficient to check just for these, except that _ must be + # allowed as starting an identifier. + first = v[0] + if not (unicodedb.isxidstart(ord(first)) or first == u'_'): + return space.w_False + + for i in range(1, len(v)): + if not unicodedb.isxidcontinue(ord(v[i])): + return space.w_False + return space.w_True + def _strip(space, w_self, w_chars, left, right): "internal function called by str_xstrip methods" u_self = w_self._value diff --git a/pypy/objspace/std/unicodetype.py b/pypy/objspace/std/unicodetype.py --- a/pypy/objspace/std/unicodetype.py +++ b/pypy/objspace/std/unicodetype.py @@ -107,6 +107,10 @@ ' characters in S are uppercase and there is\nat' ' least one cased character in S, False' ' otherwise.') +unicode_isidentifier = SMM('isidentifier', 1, + doc='S.isidentifier() -> bool\n\nReturn True if S is' + ' a valid identifier according\nto the language' + ' definition.') unicode_join = SMM('join', 2, doc='S.join(sequence) -> unicode\n\nReturn a string' ' which is the concatenation of the strings in' From noreply at buildbot.pypy.org Sat Jan 14 21:48:28 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:28 +0100 (CET) Subject: [pypy-commit] pypy py3k: fixes in pure-Python implementation of sha1 and md5: Message-ID: <20120114204828.3926482B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51319:5f39f911a130 Date: 2011-12-22 11:06 +0100 
http://bitbucket.org/pypy/pypy/changeset/5f39f911a130/ Log: fixes in pure-Python implementation of sha1 and md5: they accept bytes, not strings diff --git a/lib_pypy/_md5.py b/lib_pypy/_md5.py --- a/lib_pypy/_md5.py +++ b/lib_pypy/_md5.py @@ -47,16 +47,16 @@ def _bytelist2long(list): "Transform a list of characters into a list of longs." - imax = len(list)/4 + imax = len(list) // 4 hl = [0L] * imax j = 0 i = 0 while i < imax: - b0 = long(ord(list[j])) - b1 = (long(ord(list[j+1]))) << 8 - b2 = (long(ord(list[j+2]))) << 16 - b3 = (long(ord(list[j+3]))) << 24 + b0 = list[j] + b1 = list[j+1] << 8 + b2 = list[j+2] << 16 + b3 = list[j+3] << 24 hl[i] = b0 | b1 |b2 | b3 i = i+1 j = j+4 @@ -118,7 +118,7 @@ digest_size = digestsize = 16 block_size = 64 - def __init__(self): + def __init__(self, arg=None): "Initialisation." # Initial message length in bits(!). @@ -132,6 +132,9 @@ # to start from scratch on the same object. self.init() + if arg: + self.update(arg) + def init(self): "Initialize the message-digest and set all fields to zero." @@ -316,7 +319,7 @@ else: padLen = 120 - index - padding = ['\200'] + ['\000'] * 63 + padding = [0o200] + [0] * 63 self.update(padding[:padLen]) # Append length (before padding). @@ -346,7 +349,7 @@ binary environments. """ - return ''.join(['%02x' % ord(c) for c in self.digest()]) + return ''.join(['%02x' % c for c in self.digest()]) def copy(self): """Return a clone object. diff --git a/lib_pypy/_sha1.py b/lib_pypy/_sha1.py --- a/lib_pypy/_sha1.py +++ b/lib_pypy/_sha1.py @@ -35,7 +35,7 @@ """ # After much testing, this algorithm was deemed to be the fastest. 
- s = '' + s = b'' pack = struct.pack while n > 0: s = pack('>I', n & 0xffffffffL) + s @@ -69,10 +69,10 @@ j = 0 i = 0 while i < imax: - b0 = long(ord(list[j])) << 24 - b1 = long(ord(list[j+1])) << 16 - b2 = long(ord(list[j+2])) << 8 - b3 = long(ord(list[j+3])) + b0 = list[j] << 24 + b1 = list[j+1] << 16 + b2 = list[j+2] << 8 + b3 = list[j+3] hl[i] = b0 | b1 | b2 | b3 i = i+1 j = j+4 @@ -280,7 +280,7 @@ else: padLen = 120 - index - padding = ['\200'] + ['\000'] * 63 + padding = [0o200] + [0] * 63 self.update(padding[:padLen]) # Append length (before padding). @@ -314,7 +314,7 @@ used to exchange the value safely in email or other non- binary environments. """ - return ''.join(['%02x' % ord(c) for c in self.digest()]) + return ''.join(['%02x' % c for c in self.digest()]) def copy(self): """Return a clone object. From noreply at buildbot.pypy.org Sat Jan 14 21:48:29 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:29 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix tests in module/thread Message-ID: <20120114204829.7C32F82B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51320:36af87a5ee11 Date: 2011-12-22 23:52 +0100 http://bitbucket.org/pypy/pypy/changeset/36af87a5ee11/ Log: Fix tests in module/thread diff --git a/pypy/module/_io/interp_stringio.py b/pypy/module/_io/interp_stringio.py --- a/pypy/module/_io/interp_stringio.py +++ b/pypy/module/_io/interp_stringio.py @@ -12,7 +12,7 @@ self.buf = [] self.pos = 0 - def descr_init(self, space, w_initvalue=None, w_newline="\n"): + def descr_init(self, space, w_initvalue=None, w_newline=u"\n"): # In case __init__ is called multiple times self.buf = [] self.pos = 0 diff --git a/pypy/module/thread/test/test_import_lock.py b/pypy/module/thread/test/test_import_lock.py --- a/pypy/module/thread/test/test_import_lock.py +++ b/pypy/module/thread/test/test_import_lock.py @@ -66,9 +66,6 @@ def test_lock(self, space, monkeypatch): from pypy.module.imp.importing 
import getimportlock, importhook - # Force importing the module _file now - space.builtin.get('file') - # Monkeypatch the import lock and add a counter importlock = getimportlock(space) original_acquire = importlock.acquire_lock @@ -82,9 +79,9 @@ importhook(space, 'sys') assert importlock.count == 0 # A new module - importhook(space, 're') - assert importlock.count == 7 + importhook(space, 'pprint') + assert importlock.count == 1 # Import it again previous_count = importlock.count - importhook(space, 're') + importhook(space, 'pprint') assert importlock.count == previous_count From noreply at buildbot.pypy.org Sat Jan 14 21:48:30 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:30 +0100 (CET) Subject: [pypy-commit] pypy py3k: Add _thread.RLock Message-ID: <20120114204830.AB7EB82B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51321:ae023502cd1a Date: 2011-12-26 17:19 +0100 http://bitbucket.org/pypy/pypy/changeset/ae023502cd1a/ Log: Add _thread.RLock diff --git a/pypy/module/thread/__init__.py b/pypy/module/thread/__init__.py --- a/pypy/module/thread/__init__.py +++ b/pypy/module/thread/__init__.py @@ -20,6 +20,7 @@ 'allocate_lock': 'os_lock.allocate_lock', 'allocate': 'os_lock.allocate_lock', # obsolete synonym 'LockType': 'os_lock.Lock', + 'RLock': 'os_lock.W_RLock', '_local': 'os_local.Local', 'TIMEOUT_MAX': 'space.wrap(float(os_lock.TIMEOUT_MAX) / 1000000.0)', 'error': 'space.fromcache(error.Cache).w_error', diff --git a/pypy/module/thread/os_lock.py b/pypy/module/thread/os_lock.py --- a/pypy/module/thread/os_lock.py +++ b/pypy/module/thread/os_lock.py @@ -6,9 +6,9 @@ from pypy.module.thread.error import wrap_thread_error from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import interp2app, unwrap_spec -from pypy.interpreter.typedef import TypeDef +from pypy.interpreter.typedef import TypeDef, make_weakref_descr from pypy.interpreter.error import OperationError 
-from pypy.rlib.rarithmetic import r_longlong +from pypy.rlib.rarithmetic import r_longlong, r_uint, ovfcheck # Force the declaration of the type 'thread.LockType' for RPython #import pypy.module.thread.rpython.exttable @@ -100,14 +100,8 @@ def __exit__(self, *args): self.descr_lock_release(self.space) -descr_acquire = interp2app(Lock.descr_lock_acquire) -descr_release = interp2app(Lock.descr_lock_release) -descr_locked = interp2app(Lock.descr_lock_locked) -descr__enter__ = interp2app(Lock.descr__enter__) -descr__exit__ = interp2app(Lock.descr__exit__) - - -Lock.typedef = TypeDef("thread.lock", +Lock.typedef = TypeDef( + "_thread.lock", __doc__ = """\ A lock object is a synchronization primitive. To create a lock, call the thread.allocate_lock() function. Methods are: @@ -119,15 +113,15 @@ A lock is not owned by the thread that locked it; another thread may unlock it. A thread attempting to lock a lock that it has already locked will block until another thread unlocks it. Deadlocks may ensue.""", - acquire = descr_acquire, - release = descr_release, - locked = descr_locked, - __enter__ = descr__enter__, - __exit__ = descr__exit__, + acquire = interp2app(Lock.descr_lock_acquire), + release = interp2app(Lock.descr_lock_release), + locked = interp2app(Lock.descr_lock_locked), + __enter__ = interp2app(Lock.descr__enter__), + __exit__ = interp2app(Lock.descr__exit__), # Obsolete synonyms - acquire_lock = descr_acquire, - release_lock = descr_release, - locked_lock = descr_locked, + acquire_lock = interp2app(Lock.descr_lock_acquire), + release_lock = interp2app(Lock.descr_lock_release), + locked_lock = interp2app(Lock.descr_lock_locked), ) @@ -135,3 +129,123 @@ """Create a new lock object. (allocate() is an obsolete synonym.) 
See LockType.__doc__ for information about locks.""" return space.wrap(Lock(space)) + + +class W_RLock(Wrappable): + def __init__(self, space): + self.rlock_count = 0 + self.rlock_owner = 0 + try: + self.lock = thread.allocate_lock() + except thread.error: + raise wrap_thread_error(space, "cannot allocate lock") + + def descr__new__(space, w_subtype): + self = space.allocate_instance(W_RLock, w_subtype) + W_RLock.__init__(self, space) + return space.wrap(self) + + def descr__repr__(self): + typename = space.type(self).getname(space) + return space.wrap("<%s owner=%d count=%d>" % ( + typename, self.rlock_owner, self.rlock_count)) + + @unwrap_spec(blocking=bool) + def acquire_w(self, space, blocking=True): + """Lock the lock. `blocking` indicates whether we should wait + for the lock to be available or not. If `blocking` is False + and another thread holds the lock, the method will return False + immediately. If `blocking` is True and another thread holds + the lock, the method will wait for the lock to be released, + take it and then return True. + (note: the blocking operation is not interruptible.) + + In all other cases, the method will return True immediately. + Precisely, if the current thread already holds the lock, its + internal counter is simply incremented. 
If nobody holds the lock, + the lock is taken and its internal counter initialized to 1.""" + tid = thread.get_ident() + if self.rlock_count > 0 and tid == self.rlock_owner: + try: + self.rlock_count = ovfcheck(self.rlock_count + 1) + except OverflowError: + raise OperationError(space.w_OverflowError, space.wrap( + 'internal lock count overflowed')) + return space.w_True + + r = True + if self.rlock_count > 0 or not self.lock.acquire(False): + if not blocking: + return space.w_False + r = self.lock.acquire(True) + if r: + assert self.rlock_count == 0 + self.rlock_owner = tid + self.rlock_count = 1 + + return space.wrap(r) + + + def release_w(self, space): + """Release the lock, allowing another thread that is blocked waiting for + the lock to acquire the lock. The lock must be in the locked state, + and must be locked by the same thread that unlocks it; otherwise a + `RuntimeError` is raised. + + Do note that if the lock was acquire()d several times in a row by the + current thread, release() needs to be called as many times for the lock + to be available for other threads.""" + tid = thread.get_ident() + if self.rlock_count == 0 or self.rlock_owner != tid: + raise OperationError(space.w_RuntimeError, space.wrap( + "cannot release un-acquired lock")) + self.rlock_count -= 1 + if self.rlock_count == 0: + self.rlock_owner = 0 + self.lock.release() + + def is_owned_w(self, space): + """For internal use by `threading.Condition`.""" + tid = thread.get_ident() + if self.rlock_count > 0 and self.rlock_owner == tid: + return space.w_True + else: + return space.w_False + + @unwrap_spec(count=r_uint, owner=int) + def acquire_restore_w(self, space, count, owner): + """For internal use by `threading.Condition`.""" + r = True + if not self.lock.acquire(False): + r = self.lock.acquire(True) + if not r: + raise wrap_thread_error(space, "could not acquire lock") + assert self.rlock_count == 0 + self.rlock_owner = owner + self.rlock_count = count + + def release_save_w(self,
space): + """For internal use by `threading.Condition`.""" + count, self.rlock_count = self.rlock_count, 0 + owner, self.rlock_owner = self.rlock_owner, 0 + return space.newtuple([space.wrap(count), space.wrap(owner)]) + + def descr__enter__(self, space): + self.acquire_w(space) + return self + + def descr__exit__(self, space, *args): + self.release_w(space) + +W_RLock.typedef = TypeDef( + "_thread.RLock", + __new__ = interp2app(W_RLock.descr__new__.im_func), + acquire = interp2app(W_RLock.acquire_w), + release = interp2app(W_RLock.release_w), + _is_owned = interp2app(W_RLock.is_owned_w), + _acquire_restore = interp2app(W_RLock.acquire_restore_w), + _release_save = interp2app(W_RLock.release_save_w), + __enter__ = interp2app(W_RLock.descr__enter__), + __exit__ = interp2app(W_RLock.descr__exit__), + __weakref__ = make_weakref_descr(W_RLock), + ) diff --git a/pypy/module/thread/test/test_lock.py b/pypy/module/thread/test/test_lock.py --- a/pypy/module/thread/test/test_lock.py +++ b/pypy/module/thread/test/test_lock.py @@ -80,3 +80,45 @@ class AppTestLockAgain(GenericTestThread): # test it at app-level again to detect strange interactions test_lock_again = AppTestLock.test_lock.im_func + + +class AppTestRLock(GenericTestThread): + """ + Tests for recursive locks. 
+ """ + def test_reacquire(self): + import _thread + lock = _thread.RLock() + lock.acquire() + lock.acquire() + lock.release() + lock.acquire() + lock.release() + lock.release() + + def test_release_unacquired(self): + # Cannot release an unacquired lock + import _thread + lock = _thread.RLock() + raises(RuntimeError, lock.release) + lock.acquire() + lock.acquire() + lock.release() + lock.acquire() + lock.release() + lock.release() + raises(RuntimeError, lock.release) + + def test__is_owned(self): + import _thread + lock = _thread.RLock() + assert lock._is_owned() is False + lock.acquire() + assert lock._is_owned() is True + lock.acquire() + assert lock._is_owned() is True + lock.release() + assert lock._is_owned() is True + lock.release() + assert lock._is_owned() is False + From noreply at buildbot.pypy.org Sat Jan 14 21:48:31 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:31 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix for repr of the empty set: set(), not {} which is a dict... Message-ID: <20120114204831.DCC7482B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51322:a69b99cf0083 Date: 2011-12-26 17:26 +0100 http://bitbucket.org/pypy/pypy/changeset/a69b99cf0083/ Log: Fix for repr of the empty set: set(), not {} which is a dict... 
diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -640,6 +640,8 @@ if set_id in currently_in_repr: return '%s(...)' % (s.__class__.__name__,) currently_in_repr[set_id] = 1 + if not s: + return '%s()' % (s.__class__.__name__,) try: listrepr = repr([x for x in s]) if type(s) is set: diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -52,7 +52,7 @@ def test_space_newset(self): s = self.space.newset() - assert self.space.str_w(self.space.repr(s)) == '{}' + assert self.space.str_w(self.space.repr(s)) == 'set()' class AppTestAppSetTest: def test_subtype(self): From noreply at buildbot.pypy.org Sat Jan 14 21:48:33 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:33 +0100 (CET) Subject: [pypy-commit] pypy py3k: CPython issue #1721812: Binary operations and copy operations on Message-ID: <20120114204833.1C54582B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51323:3924c71a5c0a Date: 2011-12-26 17:49 +0100 http://bitbucket.org/pypy/pypy/changeset/3924c71a5c0a/ Log: CPython issue #1721812: Binary operations and copy operations on set/frozenset subclasses need to return the base type, not the subclass itself. 
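The CPython 3 semantics adopted here can be sketched at app level (illustrative only, not part of the changeset): binary and copy operations on set/frozenset subclasses produce the base type, while the subclass instance itself is unchanged.

```python
class MySet(set):
    pass

s = MySet("abc")
assert type(s | set("d")) is set     # union returns a plain set, not MySet
assert type(s.copy()) is set         # so does copy()
assert type(s) is MySet              # the instance keeps its own type
```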
diff --git a/pypy/objspace/std/setobject.py b/pypy/objspace/std/setobject.py --- a/pypy/objspace/std/setobject.py +++ b/pypy/objspace/std/setobject.py @@ -44,12 +44,7 @@ def _newobj(w_self, space, rdict_w): """Make a new set by taking ownership of 'rdict_w'.""" - if type(w_self) is W_SetObject: - return W_SetObject(space, rdict_w) - w_type = space.type(w_self) - w_obj = space.allocate_instance(W_SetObject, w_type) - W_SetObject.__init__(w_obj, space, rdict_w) - return w_obj + return W_SetObject(space, rdict_w) class W_FrozensetObject(W_BaseSetObject): from pypy.objspace.std.frozensettype import frozenset_typedef as typedef @@ -57,12 +52,7 @@ def _newobj(w_self, space, rdict_w): """Make a new frozenset by taking ownership of 'rdict_w'.""" - if type(w_self) is W_FrozensetObject: - return W_FrozensetObject(space, rdict_w) - w_type = space.type(w_self) - w_obj = space.allocate_instance(W_FrozensetObject, w_type) - W_FrozensetObject.__init__(w_obj, space, rdict_w) - return w_obj + return W_FrozensetObject(space, rdict_w) registerimplementation(W_BaseSetObject) registerimplementation(W_SetObject) diff --git a/pypy/objspace/std/test/test_setobject.py b/pypy/objspace/std/test/test_setobject.py --- a/pypy/objspace/std/test/test_setobject.py +++ b/pypy/objspace/std/test/test_setobject.py @@ -59,7 +59,7 @@ class subset(set):pass a = subset() b = a | set('abc') - assert type(b) is subset + assert type(b) is set def test_init_new_behavior(self): s = set.__new__(set, 'abc') @@ -77,14 +77,14 @@ b = subset('abc') subset.__new__ = lambda *args: foobar # not called b = b.copy() - assert type(b) is subset + assert type(b) is set assert set(b) == set('abc') # class frozensubset(frozenset): pass b = frozensubset('abc') frozensubset.__new__ = lambda *args: foobar # not called b = b.copy() - assert type(b) is frozensubset + assert type(b) is frozenset assert frozenset(b) == frozenset('abc') def test_union(self): @@ -160,7 +160,7 @@ s2 = s1.copy() assert s1 is not s2 assert s1 == s2 - 
assert type(s2) is myfrozen + assert type(s2) is frozenset def test_update(self): s1 = set('abc') @@ -317,8 +317,7 @@ s = subset([2]) assert s.x == ([2],) t = s | base([5]) - # obscure CPython behavior: - assert type(t) is subset + assert type(t) is base assert not hasattr(t, 'x') def test_isdisjoint(self): From noreply at buildbot.pypy.org Sat Jan 14 21:48:34 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:34 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix checkmodule.py, and run it on the thread module. Message-ID: <20120114204834.50DDB82B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51324:65597104c440 Date: 2011-12-26 18:39 +0100 http://bitbucket.org/pypy/pypy/changeset/65597104c440/ Log: Fix checkmodule.py, and run it on the thread module. diff --git a/pypy/module/thread/os_lock.py b/pypy/module/thread/os_lock.py --- a/pypy/module/thread/os_lock.py +++ b/pypy/module/thread/os_lock.py @@ -8,7 +8,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, make_weakref_descr from pypy.interpreter.error import OperationError -from pypy.rlib.rarithmetic import r_longlong, r_uint, ovfcheck +from pypy.rlib.rarithmetic import r_longlong, ovfcheck # Force the declaration of the type 'thread.LockType' for RPython #import pypy.module.thread.rpython.exttable @@ -212,7 +212,7 @@ else: return space.w_False - @unwrap_spec(count=r_uint, owner=int) + @unwrap_spec(count=int, owner=int) def acquire_restore_w(self, space, count, owner): """For internal use by `threading.Condition`.""" r = True diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -267,7 +267,7 @@ ObjSpace.ExceptionTable + ['int', 'str', 'float', 'long', 'tuple', 'list', 'dict', 'unicode', 'complex', 'slice', 'bool', - 'type', 'basestring']): + 'type', 'text']): setattr(FakeObjSpace, 'w_' + 
name, w_some_obj()) # for (name, _, arity, _) in ObjSpace.MethodTable: From noreply at buildbot.pypy.org Sat Jan 14 21:48:35 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:35 +0100 (CET) Subject: [pypy-commit] pypy py3k: Break everything and unify int and long! Message-ID: <20120114204835.AACF182B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51325:f4af6649c12f Date: 2011-12-27 17:22 +0100 http://bitbucket.org/pypy/pypy/changeset/f4af6649c12f/ Log: Break everything and unify int and long! diff --git a/pypy/interpreter/astcompiler/astbuilder.py b/pypy/interpreter/astcompiler/astbuilder.py --- a/pypy/interpreter/astcompiler/astbuilder.py +++ b/pypy/interpreter/astcompiler/astbuilder.py @@ -1047,10 +1047,7 @@ w_num_str = self.space.wrap(raw) w_index = None w_base = self.space.wrap(base) - if raw[-1] in "lL": - tp = self.space.w_long - return self.space.call_function(tp, w_num_str, w_base) - elif raw[-1] in "jJ": + if raw[-1] in "jJ": tp = self.space.w_complex return self.space.call_function(tp, w_num_str) try: diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1409,8 +1409,7 @@ # not os.close(). It's likely designed for 'select'. It's irregular # in the sense that it expects either a real int/long or an object # with a fileno(), but not an object with an __int__(). 
- if (not self.isinstance_w(w_fd, self.w_int) and - not self.isinstance_w(w_fd, self.w_long)): + if not self.isinstance_w(w_fd, self.w_int): try: w_fileno = self.getattr(w_fd, self.wrap("fileno")) except OperationError, e: @@ -1519,7 +1518,6 @@ ('int', 'int', 1, ['__int__']), ('index', 'index', 1, ['__index__']), ('float', 'float', 1, ['__float__']), - ('long', 'long', 1, ['__long__']), ('inplace_add', '+=', 2, ['__iadd__']), ('inplace_sub', '-=', 2, ['__isub__']), ('inplace_mul', '*=', 2, ['__imul__']), diff --git a/pypy/module/_io/interp_io.py b/pypy/module/_io/interp_io.py --- a/pypy/module/_io/interp_io.py +++ b/pypy/module/_io/interp_io.py @@ -41,8 +41,7 @@ if not (space.isinstance_w(w_file, space.w_unicode) or space.isinstance_w(w_file, space.w_str) or - space.isinstance_w(w_file, space.w_int) or - space.isinstance_w(w_file, space.w_long)): + space.isinstance_w(w_file, space.w_int)): raise operationerrfmt(space.w_TypeError, "invalid file: %s", space.str_w(space.repr(w_file)) ) diff --git a/pypy/module/_random/interp_random.py b/pypy/module/_random/interp_random.py --- a/pypy/module/_random/interp_random.py +++ b/pypy/module/_random/interp_random.py @@ -28,8 +28,6 @@ else: if space.is_true(space.isinstance(w_n, space.w_int)): w_n = space.abs(w_n) - elif space.is_true(space.isinstance(w_n, space.w_long)): - w_n = space.abs(w_n) else: # XXX not perfectly like CPython w_n = space.abs(space.hash(w_n)) @@ -76,11 +74,13 @@ self._rnd.index = space.int_w(w_item) def jumpahead(self, space, w_n): - if space.is_true(space.isinstance(w_n, space.w_long)): + try: + n = space.int_w(w_n) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise num = space.bigint_w(w_n) n = intmask(num.uintmask()) - else: - n = space.int_w(w_n) self._rnd.jumpahead(n) assert rbigint.SHIFT <= 32 diff --git a/pypy/module/_winreg/interp_winreg.py b/pypy/module/_winreg/interp_winreg.py --- a/pypy/module/_winreg/interp_winreg.py +++ b/pypy/module/_winreg/interp_winreg.py @@ 
-109,9 +109,14 @@ elif isinstance(w_hkey, W_HKEY): return w_hkey.hkey elif space.is_true(space.isinstance(w_hkey, space.w_int)): - return rffi.cast(rwinreg.HKEY, space.int_w(w_hkey)) - elif space.is_true(space.isinstance(w_hkey, space.w_long)): - return rffi.cast(rwinreg.HKEY, space.uint_w(w_hkey)) + try: + value = space.int_w(w_hkey) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + return rffi.cast(rwinreg.HKEY, space.uint_w(w_hkey)) + else: + return rffi.cast(rwinreg.HKEY, space.int_w(w_hkey)) else: errstring = space.wrap("The object is not a PyHKEY object") raise OperationError(space.w_TypeError, errstring) diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -377,10 +377,9 @@ "Dict": "space.w_dict", "Tuple": "space.w_tuple", "List": "space.w_list", - "Int": "space.w_int", "Bool": "space.w_bool", "Float": "space.w_float", - "Long": "space.w_long", + "Long": "space.w_int", "Complex": "space.w_complex", "BaseObject": "space.w_object", 'None': 'space.type(space.w_None)', diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py --- a/pypy/module/cpyext/longobject.py +++ b/pypy/module/cpyext/longobject.py @@ -126,7 +126,7 @@ @cpython_api([lltype.Float], PyObject) def PyLong_FromDouble(space, val): """Return a new PyLongObject object from v, or NULL on failure.""" - return space.long(space.wrap(val)) + return space.int(space.wrap(val)) @cpython_api([PyObject], lltype.Float, error=-1.0) def PyLong_AsDouble(space, w_long): @@ -151,7 +151,7 @@ w_base = space.wrap(rffi.cast(lltype.Signed, base)) if pend: pend[0] = rffi.ptradd(str, len(s)) - return space.call_function(space.w_long, w_str, w_base) + return space.call_function(space.w_int, w_str, w_base) @cpython_api([rffi.VOIDP], PyObject) def PyLong_FromVoidPtr(space, p): diff --git a/pypy/module/cpyext/number.py b/pypy/module/cpyext/number.py --- a/pypy/module/cpyext/number.py +++ 
b/pypy/module/cpyext/number.py @@ -47,7 +47,7 @@ def PyNumber_Long(space, w_obj): """Returns the o converted to a long integer object on success, or NULL on failure. This is the equivalent of the Python expression long(o).""" - return space.long(w_obj) + return space.int(w_obj) @cpython_api([PyObject], PyObject) def PyNumber_Index(space, w_obj): diff --git a/pypy/module/math/interp_math.py b/pypy/module/math/interp_math.py --- a/pypy/module/math/interp_math.py +++ b/pypy/module/math/interp_math.py @@ -101,8 +101,7 @@ """ldexp(x, i) -> x * (2**i) """ x = _get_double(space, w_x) - if (space.isinstance_w(w_i, space.w_int) or - space.isinstance_w(w_i, space.w_long)): + if space.isinstance_w(w_i, space.w_int): try: exp = space.int_w(w_i) except OperationError, e: @@ -189,7 +188,7 @@ def _log_any(space, w_x, base): # base is supposed to be positive or 0.0, which means we use e try: - if space.is_true(space.isinstance(w_x, space.w_long)): + if space.is_true(space.isinstance(w_x, space.w_int)): # special case to support log(extremely-large-long) num = space.bigint_w(w_x) result = num.log(base) diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -187,7 +187,7 @@ name="int64", char="q", w_box_type = space.gettypefor(interp_boxes.W_Int64Box), - alternate_constructors=[space.w_long], + alternate_constructors=[space.w_int], ) self.w_uint64dtype = W_Dtype( types.UInt64(), @@ -228,4 +228,4 @@ ) def get_dtype_cache(space): - return space.fromcache(DtypeCache) \ No newline at end of file + return space.fromcache(DtypeCache) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -270,11 +270,6 @@ return current_guess elif space.isinstance_w(w_obj, space.w_int): if (current_guess is None or current_guess is bool_dtype 
or - current_guess is long_dtype): - return long_dtype - return current_guess - elif space.isinstance_w(w_obj, space.w_long): - if (current_guess is None or current_guess is bool_dtype or current_guess is long_dtype or current_guess is int64_dtype): return int64_dtype return current_guess diff --git a/pypy/module/operator/interp_operator.py b/pypy/module/operator/interp_operator.py --- a/pypy/module/operator/interp_operator.py +++ b/pypy/module/operator/interp_operator.py @@ -226,8 +226,7 @@ raise OperationError(space.w_TypeError, space.wrap("non-sequence object can't be repeated")) - if not (space.is_true(space.isinstance(w_obj2, space.w_int)) or \ - space.is_true(space.isinstance(w_obj2, space.w_long))): + if not space.is_true(space.isinstance(w_obj2, space.w_int)): # second arg has to be int/long raise OperationError(space.w_TypeError, space.wrap('an integer is required')) diff --git a/pypy/module/struct/formatiterator.py b/pypy/module/struct/formatiterator.py --- a/pypy/module/struct/formatiterator.py +++ b/pypy/module/struct/formatiterator.py @@ -64,8 +64,7 @@ def _accept_integral(self, meth): space = self.space w_obj = self.accept_obj_arg() - if (space.isinstance_w(w_obj, space.w_int) or - space.isinstance_w(w_obj, space.w_long)): + if space.isinstance_w(w_obj, space.w_int): w_index = w_obj else: w_index = None diff --git a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py --- a/pypy/objspace/descroperation.py +++ b/pypy/objspace/descroperation.py @@ -428,18 +428,12 @@ w_resulttype = space.type(w_result) if space.is_w(w_resulttype, space.w_int): return w_result - elif space.is_w(w_resulttype, space.w_long): - return space.hash(w_result) elif space.is_true(space.isinstance(w_result, space.w_int)): # be careful about subclasses of 'int'... return space.wrap(space.int_w(w_result)) - elif space.is_true(space.isinstance(w_result, space.w_long)): - # be careful about subclasses of 'long'... 
- bigint = space.bigint_w(w_result) - return space.wrap(bigint.hash()) else: raise OperationError(space.w_TypeError, - space.wrap("__hash__() should return an int or long")) + space.wrap("__hash__() should return an int")) def userdel(space, w_obj): w_del = space.lookup(w_obj, '__del__') @@ -724,9 +718,8 @@ # more of the above manually-coded operations as well) for targetname, specialname, checkerspec in [ - ('int', '__int__', ("space.w_int", "space.w_long")), - ('index', '__index__', ("space.w_int", "space.w_long")), - ('long', '__long__', ("space.w_int", "space.w_long")), + ('int', '__int__', ("space.w_int",)), + ('index', '__index__', ("space.w_int",)), ('float', '__float__', ("space.w_float",))]: l = ["space.is_true(space.isinstance(w_result, %s))" % x diff --git a/pypy/objspace/std/booltype.py b/pypy/objspace/std/booltype.py --- a/pypy/objspace/std/booltype.py +++ b/pypy/objspace/std/booltype.py @@ -1,6 +1,6 @@ from pypy.interpreter import gateway from pypy.objspace.std.stdtypedef import StdTypeDef -from pypy.objspace.std.inttype import int_typedef +from pypy.objspace.std.longtype import long_typedef def descr__new__(space, w_booltype, w_obj=None): space.w_bool.check_user_subclass(w_booltype) @@ -11,7 +11,7 @@ # ____________________________________________________________ -bool_typedef = StdTypeDef("bool", int_typedef, +bool_typedef = StdTypeDef("bool", long_typedef, __doc__ = '''bool(x) -> bool Returns True when the argument x is true, False otherwise. 
diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py --- a/pypy/objspace/std/floatobject.py +++ b/pypy/objspace/std/floatobject.py @@ -93,15 +93,13 @@ try: value = ovfcheck_float_to_int(w_value.floatval) except OverflowError: - return space.long(w_value) + pass else: return space.newint(value) - -def long__Float(space, w_floatobj): try: - return W_LongObject.fromfloat(space, w_floatobj.floatval) + return W_LongObject.fromfloat(space, w_value.floatval) except OverflowError: - if isnan(w_floatobj.floatval): + if isnan(w_value.floatval): raise OperationError( space.w_ValueError, space.wrap("cannot convert float NaN to integer")) @@ -112,7 +110,7 @@ try: value = ovfcheck_float_to_int(whole) except OverflowError: - return long__Float(space, w_floatobj) + return int__Float(space, w_floatobj) else: return space.newint(value) diff --git a/pypy/objspace/std/intobject.py b/pypy/objspace/std/intobject.py --- a/pypy/objspace/std/intobject.py +++ b/pypy/objspace/std/intobject.py @@ -24,7 +24,7 @@ return False if self.user_overridden_class or w_other.user_overridden_class: return self is w_other - return space.int_w(self) == space.int_w(w_other) + return space.bigint_w(self).eq(space.bigint_w(w_other)) def immutable_unique_id(self, space): if self.user_overridden_class: @@ -39,7 +39,7 @@ __slots__ = 'intval' _immutable_fields_ = ['intval'] - from pypy.objspace.std.inttype import int_typedef as typedef + from pypy.objspace.std.longtype import long_typedef as typedef def __init__(w_self, intval): w_self.intval = intval diff --git a/pypy/objspace/std/inttype.py b/pypy/objspace/std/inttype.py --- a/pypy/objspace/std/inttype.py +++ b/pypy/objspace/std/inttype.py @@ -36,7 +36,7 @@ @gateway.unwrap_spec(s='bufferstr', byteorder=str) def descr_from_bytes(space, w_cls, s, byteorder): from pypy.objspace.std.longtype import descr_from_bytes - return descr_from_bytes(space, space.w_long, s, byteorder) + return descr_from_bytes(space, space.w_int, s, byteorder) def 
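The target Python 3 semantics of this unification, sketched at app level (not PyPy-internal code): a single arbitrary-precision `int` type, with `bool` as its subclass.

```python
big = 2 ** 100
assert isinstance(big, int)          # no separate `long` type in py3k
assert type(big) is type(1)          # small and large ints share one type
assert issubclass(bool, int)         # bool now derives from int
assert True + True == 2
```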
wrapint(space, x): if space.config.objspace.std.withsmallint: diff --git a/pypy/objspace/std/longobject.py b/pypy/objspace/std/longobject.py --- a/pypy/objspace/std/longobject.py +++ b/pypy/objspace/std/longobject.py @@ -4,30 +4,12 @@ from pypy.objspace.std.model import registerimplementation, W_Object from pypy.objspace.std.register_all import register_all from pypy.objspace.std.multimethod import FailedToImplementArgs -from pypy.objspace.std.intobject import W_IntObject +from pypy.objspace.std.intobject import W_IntObject, W_AbstractIntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.rlib.rbigint import rbigint, SHIFT -class W_AbstractLongObject(W_Object): - __slots__ = () - def is_w(self, space, w_other): - if not isinstance(w_other, W_AbstractLongObject): - return False - if self.user_overridden_class or w_other.user_overridden_class: - return self is w_other - return space.bigint_w(self).eq(space.bigint_w(w_other)) - - def immutable_unique_id(self, space): - if self.user_overridden_class: - return None - from pypy.objspace.std.model import IDTAG_LONG as tag - b = space.bigint_w(self) - b = b.lshift(3).or_(rbigint.fromint(tag)) - return space.newlong_from_rbigint(b) - - -class W_LongObject(W_AbstractLongObject): +class W_LongObject(W_AbstractIntObject): """This is a wrapper of rbigint.""" from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['num'] @@ -114,27 +96,18 @@ return W_LongObject.fromint(space, w_intobj.intval) -# long__Long is supposed to do nothing, unless it has +# int__Long is supposed to do nothing, unless it has # a derived long object, where it should return # an exact one. 
-def long__Long(space, w_long1): - if space.is_w(space.type(w_long1), space.w_long): +def int__Long(space, w_long1): + if space.is_w(space.type(w_long1), space.w_int): return w_long1 l = w_long1.num return W_LongObject(l) -trunc__Long = long__Long - -def long__Int(space, w_intobj): - return space.newlong(w_intobj.intval) - -def int__Long(space, w_value): - try: - return space.newint(w_value.num.toint()) - except OverflowError: - return long__Long(space, w_value) +trunc__Long = int__Long def index__Long(space, w_value): - return long__Long(space, w_value) + return int__Long(space, w_value) def float__Long(space, w_longobj): try: @@ -275,7 +248,7 @@ return W_LongObject(w_long1.num.neg()) def pos__Long(space, w_long): - return long__Long(space, w_long) + return int__Long(space, w_long) def abs__Long(space, w_long): return W_LongObject(w_long.num.abs()) diff --git a/pypy/objspace/std/longtype.py b/pypy/objspace/std/longtype.py --- a/pypy/objspace/std/longtype.py +++ b/pypy/objspace/std/longtype.py @@ -5,7 +5,7 @@ from pypy.objspace.std.strutil import string_to_bigint, ParseStringError def descr_conjugate(space, w_int): - return space.long(w_int) + return space.int(w_int) def descr__new__(space, w_longtype, w_x=0, w_base=gateway.NoneNotWrapped): @@ -20,7 +20,7 @@ if w_base is None: # check for easy cases if (W_SmallLongObject and type(w_value) is W_SmallLongObject - and space.is_w(w_longtype, space.w_long)): + and space.is_w(w_longtype, space.w_int)): return w_value elif type(w_value) is W_LongObject: return newbigint(space, w_longtype, w_value.num) @@ -34,20 +34,13 @@ return string_to_w_long(space, w_longtype, unicode_to_decimal_w(space, w_value)) else: - # otherwise, use the __long__() or the __trunc__ methods + # otherwise, use the __int__() or the __trunc__ methods w_obj = w_value - if (space.lookup(w_obj, '__long__') is not None or - space.lookup(w_obj, '__int__') is not None): - w_obj = space.long(w_obj) + if space.lookup(w_obj, '__int__') is not None: + w_obj = 
space.int(w_obj) else: w_obj = space.trunc(w_obj) - # :-( blame CPython 2.7 - if space.lookup(w_obj, '__long__') is not None: - w_obj = space.long(w_obj) - else: - w_obj = space.int(w_obj) - bigint = space.bigint_w(w_obj) - return newbigint(space, w_longtype, bigint) + return w_obj else: base = space.int_w(w_base) @@ -80,7 +73,7 @@ longobject.py, but takes an explicit w_longtype argument. """ if (space.config.objspace.std.withsmalllong - and space.is_w(w_longtype, space.w_long)): + and space.is_w(w_longtype, space.w_int)): try: z = bigint.tolonglong() except OverflowError: @@ -94,13 +87,13 @@ return w_obj def descr_get_numerator(space, w_obj): - return space.long(w_obj) + return space.int(w_obj) def descr_get_denominator(space, w_obj): return space.newlong(1) def descr_get_real(space, w_obj): - return space.long(w_obj) + return space.int(w_obj) def descr_get_imag(space, w_obj): return space.newlong(0) @@ -124,8 +117,8 @@ # ____________________________________________________________ -long_typedef = StdTypeDef("long", - __doc__ = '''long(x[, base]) -> integer +long_typedef = StdTypeDef("int", + __doc__ = '''int(x[, base]) -> integer Convert a string or number to a long integer, if possible. 
A floating point argument will be truncated towards zero (this does not include a diff --git a/pypy/objspace/std/model.py b/pypy/objspace/std/model.py --- a/pypy/objspace/std/model.py +++ b/pypy/objspace/std/model.py @@ -31,7 +31,7 @@ } IDTAG_INT = 1 -IDTAG_LONG = 3 +#IDTAG_LONG = 3 IDTAG_FLOAT = 5 IDTAG_COMPLEX = 7 @@ -44,7 +44,7 @@ class result: from pypy.objspace.std.objecttype import object_typedef from pypy.objspace.std.booltype import bool_typedef - from pypy.objspace.std.inttype import int_typedef + #from pypy.objspace.std.inttype import int_typedef from pypy.objspace.std.floattype import float_typedef from pypy.objspace.std.complextype import complex_typedef from pypy.objspace.std.settype import set_typedef @@ -184,8 +184,8 @@ (complexobject.W_ComplexObject, complexobject.delegate_Bool2Complex), ] self.typeorder[intobject.W_IntObject] += [ + (longobject.W_LongObject, longobject.delegate_Int2Long), (floatobject.W_FloatObject, floatobject.delegate_Int2Float), - (longobject.W_LongObject, longobject.delegate_Int2Long), (complexobject.W_ComplexObject, complexobject.delegate_Int2Complex), ] if config.objspace.std.withsmalllong: diff --git a/pypy/objspace/std/smallintobject.py b/pypy/objspace/std/smallintobject.py --- a/pypy/objspace/std/smallintobject.py +++ b/pypy/objspace/std/smallintobject.py @@ -16,7 +16,7 @@ class W_SmallIntObject(W_AbstractIntObject, UnboxedValue): __slots__ = 'intval' - from pypy.objspace.std.inttype import int_typedef as typedef + from pypy.objspace.std.longtype import long_typedef as typedef def unwrap(w_self, space): return int(w_self.intval) diff --git a/pypy/objspace/std/smalllongobject.py b/pypy/objspace/std/smalllongobject.py --- a/pypy/objspace/std/smalllongobject.py +++ b/pypy/objspace/std/smalllongobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import r_longlong, r_int, r_uint from pypy.rlib.rarithmetic import intmask, LONGLONG_BIT from pypy.rlib.rbigint import rbigint -from pypy.objspace.std.longobject import 
W_AbstractLongObject, W_LongObject +from pypy.objspace.std.longobject import W_AbstractIntObject, W_LongObject from pypy.objspace.std.intobject import W_IntObject from pypy.objspace.std.noneobject import W_NoneObject from pypy.interpreter.error import OperationError @@ -17,7 +17,7 @@ LONGLONG_MIN = r_longlong((-1) << (LONGLONG_BIT-1)) -class W_SmallLongObject(W_AbstractLongObject): +class W_SmallLongObject(W_AbstractIntObject): from pypy.objspace.std.longtype import long_typedef as typedef _immutable_fields_ = ['longlong'] @@ -109,9 +109,6 @@ return space.newcomplex(float(w_small.longlong), 0.0) -def long__SmallLong(space, w_value): - return w_value - def int__SmallLong(space, w_value): a = w_value.longlong b = intmask(a) diff --git a/pypy/objspace/std/test/test_intobject.py b/pypy/objspace/std/test/test_intobject.py --- a/pypy/objspace/std/test/test_intobject.py +++ b/pypy/objspace/std/test/test_intobject.py @@ -314,10 +314,7 @@ def test_int_string(self): assert 42 == int("42") - assert 10000000000 == long("10000000000") - - def test_int_unicode(self): - assert 42 == int(unicode('42')) + assert 10000000000 == int("10000000000") def test_int_float(self): assert 4 == int(4.2) @@ -348,7 +345,7 @@ def test_overflow(self): import sys n = sys.maxint + 1 - assert isinstance(n, long) + assert isinstance(n, int) def test_pow(self): assert pow(2, -10) == 1/1024. 
@@ -369,8 +366,6 @@ assert j("100") == 100 assert j("100",2) == 4 assert isinstance(j("100",2),j) - raises(OverflowError,j,sys.maxint+1) - raises(OverflowError,j,str(sys.maxint+1)) def test_int_subclass_ops(self): import sys @@ -402,7 +397,7 @@ assert (5 + j(100), type(5 + j(100))) == ( 105, int) assert (5 - j(100), type(5 - j(100))) == ( -95, int) assert (5 * j(100), type(5 * j(100))) == ( 500, int) - assert (5 << j(100), type(5 << j(100))) == (5 << 100, long) + assert (5 << j(100), type(5 << j(100))) == (5 << 100, int) assert (j(100) >> 2, type(j(100) >> 2)) == ( 25, int) def test_int_subclass_int(self): @@ -413,17 +408,15 @@ return '' class subint(int): pass - class sublong(long): - pass value = 42L assert int(j()) == 42 value = 4200000000000000000000000000000000L assert int(j()) == 4200000000000000000000000000000000L value = subint(42) assert int(j()) == 42 and type(int(j())) is subint - value = sublong(4200000000000000000000000000000000L) + value = subint(4200000000000000000000000000000000L) assert (int(j()) == 4200000000000000000000000000000000L - and type(int(j())) is sublong) + and type(int(j())) is subint) value = 42.0 raises(TypeError, int, j()) value = "foo" @@ -444,16 +437,16 @@ def test_special_long(self): class a(object): - def __long__(self): + def __int__(self): self.ar = True return None inst = a() - raises(TypeError, long, inst) + raises(TypeError, int, inst) assert inst.ar == True class b(object): pass - raises((AttributeError,TypeError), long, b()) + raises((AttributeError,TypeError), int, b()) def test_just_trunc(self): class myint(object): diff --git a/pypy/objspace/std/test/test_longobject.py b/pypy/objspace/std/test/test_longobject.py --- a/pypy/objspace/std/test/test_longobject.py +++ b/pypy/objspace/std/test/test_longobject.py @@ -63,7 +63,7 @@ assert x * 2 ** 40 == x << 40 def test_truediv(self): - exec "from __future__ import division; a = 31415926L / 10000000L" + a = 31415926L / 10000000L assert a == 3.1415926 def test_floordiv(self): 
@@ -155,22 +155,22 @@ assert not (int(BIG)-1 >= BIG) def test_conversion(self): - class long2(long): + class long2(int): pass x = 1L x = long2(x<<100) y = int(x) - assert type(y) == long - assert type(+long2(5)) is long - assert type(long2(5) << 0) is long - assert type(long2(5) >> 0) is long - assert type(long2(5) + 0) is long - assert type(long2(5) - 0) is long - assert type(long2(5) * 1) is long - assert type(1 * long2(5)) is long - assert type(0 + long2(5)) is long - assert type(-long2(0)) is long - assert type(long2(5) // 1) is long + assert type(y) == int + assert type(+long2(5)) is int + assert type(long2(5) << 0) is int + assert type(long2(5) >> 0) is int + assert type(long2(5) + 0) is int + assert type(long2(5) - 0) is int + assert type(long2(5) * 1) is int + assert type(1 * long2(5)) is int + assert type(0 + long2(5)) is int + assert type(-long2(0)) is int + assert type(long2(5) // 1) is int def test_pow(self): x = 0L @@ -194,7 +194,7 @@ assert y < r <= 0 for x in [-1L, 0L, 1L, 2L ** 100 - 1, -2L ** 100 - 1]: for y in [-105566530L, -1L, 1L, 1034522340L]: - print "checking division for %s, %s" % (x, y) + print("checking division for %s, %s" % (x, y)) check_division(x, y) # special case from python tests: s1 = 33 @@ -209,7 +209,7 @@ raises(ZeroDivisionError, "x // 0L") def test_format(self): - assert repr(12345678901234567890) == '12345678901234567890L' + assert repr(12345678901234567890) == '12345678901234567890' assert str(12345678901234567890) == '12345678901234567890' assert hex(0x1234567890ABCDEFL) == '0x1234567890abcdefL' assert oct(01234567012345670L) == '01234567012345670L' @@ -227,7 +227,7 @@ def test_hash(self): # ints have the same hash as equal longs for i in range(-4, 14): - assert hash(i) == hash(long(i)) + assert hash(i) == hash(int(i)) # might check too much -- it's ok to change the hashing algorithm assert hash(123456789L) == 123456789 assert hash(1234567890123456789L) in ( @@ -247,8 +247,8 @@ def test_long(self): import sys n = 
-sys.maxint-1 - assert long(n) == n - assert str(long(n)) == str(n) + assert int(n) == n + assert str(int(n)) == str(n) def test_huge_longs(self): import operator @@ -262,45 +262,41 @@ class myint(object): def __trunc__(self): return 42 - assert long(myint()) == 42 + assert int(myint()) == 42 - def test_override___long__(self): - class mylong(long): - def __long__(self): + def test_override___int__(self): + class myint(int): + def __int__(self): return 42L - assert long(mylong(21)) == 42L - class myotherlong(long): + assert int(myint(21)) == 42L + class myotherint(int): pass - assert long(myotherlong(21)) == 21L + assert int(myotherint(21)) == 21L - def test___long__(self): + def test___int__(self): class A(object): - def __long__(self): - return 42 - assert long(A()) == 42L - class B(object): def __int__(self): return 42 - raises(TypeError, long, B()) + assert int(A()) == 42L # but!: (blame CPython 2.7) class Integral(object): def __int__(self): return 42 - class TruncReturnsNonLong(object): + class TruncReturnsNonInt(object): def __trunc__(self): return Integral() - assert long(TruncReturnsNonLong()) == 42 + assert int(TruncReturnsNonInt()) == 42 def test_conjugate(self): assert (7L).conjugate() == 7L assert (-7L).conjugate() == -7L - class L(long): + class L(int): pass - assert type(L(7).conjugate()) is long + assert type(L(7).conjugate()) is int - class L(long): + class L(int): def __pos__(self): return 43 assert L(7).conjugate() == 7L @@ -311,27 +307,14 @@ assert ((2**31)-1).bit_length() == 31 def test_from_bytes(self): - assert long.from_bytes(b'c', 'little') == 99 - assert long.from_bytes(b'\x01\x01', 'little') == 257 + assert int.from_bytes(b'c', 'little') == 99 + assert int.from_bytes(b'\x01\x01', 'little') == 257 def test_negative_zero(self): x = eval("-0L") assert x == 0L - def test_mix_int_and_long(self): - class IntLongMixClass(object): - def __int__(self): - return 42L - - def __long__(self): - return 64 - - mixIntAndLong = IntLongMixClass() - as_long 
= long(mixIntAndLong) - assert type(as_long) is long - assert as_long == 64 - def test_long_real(self): - class A(long): pass + class A(int): pass b = A(5).real - assert type(b) is long + assert type(b) is int
diff --git a/pypy/objspace/std/test/test_newformat.py b/pypy/objspace/std/test/test_newformat.py
--- a/pypy/objspace/std/test/test_newformat.py
+++ b/pypy/objspace/std/test/test_newformat.py
@@ -269,12 +269,6 @@ cls.w_i = cls.space.w_int -class AppTestLongFormatting(BaseIntegralFormattingTest): - - def setup_class(cls): - cls.w_i = cls.space.w_long - - class AppTestFloatFormatting: def setup_class(cls): cls.space = gettestobjspace(usemodules=('_locale',))
diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py
--- a/pypy/rlib/rbigint.py
+++ b/pypy/rlib/rbigint.py
@@ -293,7 +293,7 @@ return _format(self, digits, prefix, suffix) def repr(self): - return _format(self, BASE10, '', 'L') + return _format(self, BASE10) def str(self): return _format(self, BASE10)
diff --git a/pypy/translator/geninterplevel.py b/pypy/translator/geninterplevel.py
--- a/pypy/translator/geninterplevel.py
+++ b/pypy/translator/geninterplevel.py
@@ -844,7 +844,7 @@ typename_mapping = { object: 'space.w_object', int: 'space.w_int', - long: 'space.w_long', + long: 'space.w_int', bool: 'space.w_bool', list: 'space.w_list', tuple: 'space.w_tuple',

From noreply at buildbot.pypy.org Sat Jan 14 21:48:36 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sat, 14 Jan 2012 21:48:36 +0100 (CET)
Subject: [pypy-commit] pypy py3k: Remove 'L' suffix from the output of hex() and oct()
Message-ID: <20120114204836.DEB9A82B12@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51326:83fc38544bca
Date: 2011-12-26 23:48 +0100
http://bitbucket.org/pypy/pypy/changeset/83fc38544bca/

Log: Remove 'L' suffix from the output of hex() and oct()

diff --git a/pypy/objspace/std/test/test_longobject.py b/pypy/objspace/std/test/test_longobject.py
--- 
a/pypy/objspace/std/test/test_longobject.py +++ b/pypy/objspace/std/test/test_longobject.py @@ -211,8 +211,8 @@ def test_format(self): assert repr(12345678901234567890) == '12345678901234567890' assert str(12345678901234567890) == '12345678901234567890' - assert hex(0x1234567890ABCDEFL) == '0x1234567890abcdefL' - assert oct(01234567012345670L) == '01234567012345670L' + assert hex(0x1234567890ABCDEF) == '0x1234567890abcdef' + assert oct(01234567012345670) == '01234567012345670' def test_bits(self): x = 0xAAAAAAAAL diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -287,10 +287,10 @@ def tofloat(self): return _AsDouble(self) - def format(self, digits, prefix='', suffix=''): + def format(self, digits, prefix=''): # 'digits' is a string whose length is the base to use, # and where each character is the corresponding digit. - return _format(self, digits, prefix, suffix) + return _format(self, digits, prefix) def repr(self): return _format(self, BASE10) @@ -603,10 +603,10 @@ if self.sign == 0: return '0L' else: - return _format(self, BASE8, '0', 'L') + return _format(self, BASE8, '0') def hex(self): - return _format(self, BASE16, '0x', 'L') + return _format(self, BASE16, '0x') def log(self, base): # base is supposed to be positive or 0.0, which means we use e @@ -1586,7 +1586,7 @@ BASE10 = '0123456789' BASE16 = '0123456789abcdef' -def _format(a, digits, prefix='', suffix=''): +def _format(a, digits, prefix=''): """ Convert a bigint object to a string, using a given conversion base. Return a string object. 
@@ -1602,14 +1602,9 @@ while i > 1: bits += 1 i >>= 1 - i = 5 + len(prefix) + len(suffix) + (size_a*SHIFT + bits-1) // bits + i = 5 + len(prefix) + (size_a*SHIFT + bits-1) // bits s = [chr(0)] * i p = i - j = len(suffix) - while j > 0: - p -= 1 - j -= 1 - s[p] = suffix[j] if a.sign == 0: p -= 1

From noreply at buildbot.pypy.org Sat Jan 14 21:48:38 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sat, 14 Jan 2012 21:48:38 +0100 (CET)
Subject: [pypy-commit] pypy py3k: don't parse '0L' anymore
Message-ID: <20120114204838.1DC7882B12@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51327:40196a1cc504
Date: 2011-12-27 00:03 +0100
http://bitbucket.org/pypy/pypy/changeset/40196a1cc504/

Log: don't parse '0L' anymore

diff --git a/pypy/interpreter/pyparser/genpytokenize.py b/pypy/interpreter/pyparser/genpytokenize.py
--- a/pypy/interpreter/pyparser/genpytokenize.py
+++ b/pypy/interpreter/pyparser/genpytokenize.py
@@ -60,25 +60,21 @@ newArcPair(states, "0"), groupStr(states, "xX"), atleastonce(states, - groupStr(states, "0123456789abcdefABCDEF")), - maybe(states, groupStr(states, "lL"))) + groupStr(states, "0123456789abcdefABCDEF"))) octNumber = chain(states, newArcPair(states, "0"), maybe(states, chain(states, groupStr(states, "oO"), groupStr(states, "01234567"))), - any(states, groupStr(states, "01234567")), - maybe(states, groupStr(states, "lL"))) + any(states, groupStr(states, "01234567"))) binNumber = chain(states, newArcPair(states, "0"), groupStr(states, "bB"), - atleastonce(states, groupStr(states, "01")), - maybe(states, groupStr(states, "lL"))) + atleastonce(states, groupStr(states, "01"))) decNumber = chain(states, groupStr(states, "123456789"), - any(states, makeDigits()), - maybe(states, groupStr(states, "lL"))) + any(states, makeDigits())) intNumber = group(states, hexNumber, octNumber, binNumber, decNumber) # ____________________________________________________________ # Exponents
diff --git 
a/pypy/interpreter/pyparser/pytokenize.py b/pypy/interpreter/pyparser/pytokenize.py --- a/pypy/interpreter/pyparser/pytokenize.py +++ b/pypy/interpreter/pyparser/pytokenize.py @@ -113,15 +113,14 @@ {'.': 24, '0': 21, '1': 21, '2': 21, '3': 21, '4': 21, '5': 21, '6': 21, '7': 21, '8': 23, '9': 23, 'B': 22, - 'E': 25, 'J': 13, 'L': 13, 'O': 20, - 'X': 19, 'b': 22, 'e': 25, 'j': 13, - 'l': 13, 'o': 20, 'x': 19}, + 'E': 25, 'J': 13, 'O': 20, 'X': 19, + 'b': 22, 'e': 25, 'j': 13, 'o': 20, + 'x': 19}, # 5 {'.': 24, '0': 5, '1': 5, '2': 5, '3': 5, '4': 5, '5': 5, '6': 5, '7': 5, '8': 5, '9': 5, 'E': 25, - 'J': 13, 'L': 13, 'e': 25, 'j': 13, - 'l': 13}, + 'J': 13, 'e': 25, 'j': 13}, # 6 {'0': 26, '1': 26, '2': 26, '3': 26, '4': 26, '5': 26, '6': 26, '7': 26, @@ -166,8 +165,7 @@ {'.': 24, '0': 21, '1': 21, '2': 21, '3': 21, '4': 21, '5': 21, '6': 21, '7': 21, '8': 23, '9': 23, 'E': 25, - 'J': 13, 'L': 13, 'e': 25, 'j': 13, - 'l': 13}, + 'J': 13, 'e': 25, 'j': 13}, # 22 {'0': 36, '1': 36}, # 23 @@ -210,14 +208,13 @@ '4': 34, '5': 34, '6': 34, '7': 34, '8': 34, '9': 34, 'A': 34, 'B': 34, 'C': 34, 'D': 34, 'E': 34, 'F': 34, - 'L': 13, 'a': 34, 'b': 34, 'c': 34, - 'd': 34, 'e': 34, 'f': 34, 'l': 13}, + 'a': 34, 'b': 34, 'c': 34, 'd': 34, + 'e': 34, 'f': 34}, # 35 {'0': 35, '1': 35, '2': 35, '3': 35, - '4': 35, '5': 35, '6': 35, '7': 35, - 'L': 13, 'l': 13}, + '4': 35, '5': 35, '6': 35, '7': 35}, # 36 - {'0': 36, '1': 36, 'L': 13, 'l': 13}, + {'0': 36, '1': 36}, # 37 {'+': 42, '-': 42, '0': 43, '1': 43, '2': 43, '3': 43, '4': 43, '5': 43, diff --git a/pypy/objspace/std/test/test_intobject.py b/pypy/objspace/std/test/test_intobject.py --- a/pypy/objspace/std/test/test_intobject.py +++ b/pypy/objspace/std/test/test_intobject.py @@ -351,8 +351,8 @@ assert pow(2, -10) == 1/1024. 
def test_int_w_long_arg(self): - assert int(10000000000) == 10000000000L - assert int("10000000000") == 10000000000l + assert int(10000000000) == 10000000000 + assert int("10000000000") == 10000000000 raises(ValueError, int, "10000000000JUNK") raises(ValueError, int, "10000000000JUNK", 10) @@ -362,7 +362,7 @@ pass assert j(100) == 100 assert isinstance(j(100),j) - assert j(100L) == 100 + assert j(100) == 100 assert j("100") == 100 assert j("100",2) == 4 assert isinstance(j("100",2),j) @@ -408,14 +408,14 @@ return '' class subint(int): pass - value = 42L + value = 42 assert int(j()) == 42 - value = 4200000000000000000000000000000000L - assert int(j()) == 4200000000000000000000000000000000L + value = 4200000000000000000000000000000000 + assert int(j()) == 4200000000000000000000000000000000 value = subint(42) assert int(j()) == 42 and type(int(j())) is subint - value = subint(4200000000000000000000000000000000L) - assert (int(j()) == 4200000000000000000000000000000000L + value = subint(4200000000000000000000000000000000) + assert (int(j()) == 4200000000000000000000000000000000 and type(int(j())) is subint) value = 42.0 raises(TypeError, int, j()) diff --git a/pypy/objspace/std/test/test_longobject.py b/pypy/objspace/std/test/test_longobject.py --- a/pypy/objspace/std/test/test_longobject.py +++ b/pypy/objspace/std/test/test_longobject.py @@ -43,44 +43,44 @@ def test_trunc(self): import math - assert math.trunc(1L) == 1L - assert math.trunc(-1L) == -1L + assert math.trunc(1) == 1 + assert math.trunc(-1) == -1 def test_add(self): - x = 123L - assert int(x + 12443L) == 123 + 12443 + x = 123 + assert int(x + 12443) == 123 + 12443 x = -20 - assert x + 2 + 3L + True == -14L + assert x + 2 + 3 + True == -14 def test_sub(self): - x = 58543L - assert int(x - 12332L) == 58543 - 12332 - x = 237123838281233L - assert x * 12 == x * 12L + x = 58543 + assert int(x - 12332) == 58543 - 12332 + x = 237123838281233 + assert x * 12 == x * 12 def test_mul(self): - x = 363L + x = 363 
assert x * 2 ** 40 == x << 40 def test_truediv(self): - a = 31415926L / 10000000L + a = 31415926 / 10000000 assert a == 3.1415926 def test_floordiv(self): - x = 31415926L - a = x // 10000000L - assert a == 3L + x = 31415926 + a = x // 10000000 + assert a == 3 def test_numerator_denominator(self): - assert (1L).numerator == 1L - assert (1L).denominator == 1L - assert (42L).numerator == 42L - assert (42L).denominator == 1L + assert (1).numerator == 1 + assert (1).denominator == 1 + assert (42).numerator == 42 + assert (42).denominator == 1 def test_compare(self): Z = 0 - ZL = 0L - for BIG in (1L, 1L << 62, 1L << 9999): + ZL = 0 + for BIG in (1, 1 << 62, 1 << 9999): assert Z == ZL assert not (Z != ZL) assert ZL == Z @@ -157,7 +157,7 @@ def test_conversion(self): class long2(int): pass - x = 1L + x = 1 x = long2(x<<100) y = int(x) assert type(y) == int @@ -173,12 +173,12 @@ assert type(long2(5) // 1) is int def test_pow(self): - x = 0L - assert pow(x, 0L, 1L) == 0L + x = 0 + assert pow(x, 0, 1) == 0 def test_getnewargs(self): - assert 0L .__getnewargs__() == (0L,) - assert (-1L) .__getnewargs__() == (-1L,) + assert 0 .__getnewargs__() == (0,) + assert (-1) .__getnewargs__() == (-1,) def test_divmod(self): def check_division(x, y): @@ -192,8 +192,8 @@ assert 0 <= r < y else: assert y < r <= 0 - for x in [-1L, 0L, 1L, 2L ** 100 - 1, -2L ** 100 - 1]: - for y in [-105566530L, -1L, 1L, 1034522340L]: + for x in [-1, 0, 1, 2 ** 100 - 1, -2 ** 100 - 1]: + for y in [-105566530, -1, 1, 1034522340]: print("checking division for %s, %s" % (x, y)) check_division(x, y) # special case from python tests: @@ -203,10 +203,10 @@ x >>= s1*16 y = 10953035502453784575 y >>= s2*16 - x = 0x3FE0003FFFFC0001FFFL - y = 0x9800FFC1L + x = 0x3FE0003FFFFC0001FFF + y = 0x9800FFC1 check_division(x, y) - raises(ZeroDivisionError, "x // 0L") + raises(ZeroDivisionError, "x // 0") def test_format(self): assert repr(12345678901234567890) == '12345678901234567890' @@ -215,31 +215,31 @@ assert 
oct(01234567012345670) == '01234567012345670' def test_bits(self): - x = 0xAAAAAAAAL - assert x | 0x55555555L == 0xFFFFFFFFL - assert x & 0x55555555L == 0x00000000L - assert x ^ 0x55555555L == 0xFFFFFFFFL - assert -x | 0x55555555L == -0xAAAAAAA9L - assert x | 0x555555555L == 0x5FFFFFFFFL - assert x & 0x555555555L == 0x000000000L - assert x ^ 0x555555555L == 0x5FFFFFFFFL + x = 0xAAAAAAAA + assert x | 0x55555555 == 0xFFFFFFFF + assert x & 0x55555555 == 0x00000000 + assert x ^ 0x55555555 == 0xFFFFFFFF + assert -x | 0x55555555 == -0xAAAAAAA9 + assert x | 0x555555555 == 0x5FFFFFFFF + assert x & 0x555555555 == 0x000000000 + assert x ^ 0x555555555 == 0x5FFFFFFFF def test_hash(self): # ints have the same hash as equal longs for i in range(-4, 14): assert hash(i) == hash(int(i)) # might check too much -- it's ok to change the hashing algorithm - assert hash(123456789L) == 123456789 - assert hash(1234567890123456789L) in ( + assert hash(123456789) == 123456789 + assert hash(1234567890123456789) in ( -1895067127, # with 32-bit platforms 1234567890123456789) # with 64-bit platforms def test_math_log(self): import math - raises(ValueError, math.log, 0L) - raises(ValueError, math.log, -1L) - raises(ValueError, math.log, -2L) - raises(ValueError, math.log, -(1L << 10000)) + raises(ValueError, math.log, 0) + raises(ValueError, math.log, -1) + raises(ValueError, math.log, -2) + raises(ValueError, math.log, -(1 << 10000)) #raises(ValueError, math.log, 0) raises(ValueError, math.log, -1) raises(ValueError, math.log, -2) @@ -252,11 +252,11 @@ def test_huge_longs(self): import operator - x = 1L - huge = x << 40000L + x = 1 + huge = x << 40000 raises(OverflowError, float, huge) raises(OverflowError, operator.truediv, huge, 3) - raises(OverflowError, operator.truediv, huge, 3L) + raises(OverflowError, operator.truediv, huge, 3) def test_just_trunc(self): class myint(object): @@ -267,17 +267,17 @@ def test_override___int__(self): class myint(int): def __int__(self): - return 42L - assert 
int(myint(21)) == 42L + return 42 + assert int(myint(21)) == 42 class myotherint(int): pass - assert int(myotherint(21)) == 21L + assert int(myotherint(21)) == 21 def test___int__(self): class A(object): def __int__(self): return 42 - assert int(A()) == 42L + assert int(A()) == 42 # but!: (blame CPython 2.7) class Integral(object): def __int__(self):
@@ -288,8 +288,8 @@ assert int(TruncReturnsNonInt()) == 42 def test_conjugate(self): - assert (7L).conjugate() == 7L - assert (-7L).conjugate() == -7L + assert (7).conjugate() == 7 + assert (-7).conjugate() == -7 class L(int): pass
@@ -299,10 +299,10 @@ class L(int): def __pos__(self): return 43 - assert L(7).conjugate() == 7L + assert L(7).conjugate() == 7 def test_bit_length(self): - assert 8L.bit_length() == 4 + assert (8).bit_length() == 4 assert (-1<<40).bit_length() == 41 assert ((2**31)-1).bit_length() == 31
@@ -311,8 +311,8 @@ assert int.from_bytes(b'c', 'little') == 99 assert int.from_bytes(b'\x01\x01', 'little') == 257 def test_negative_zero(self): - x = eval("-0L") - assert x == 0L + x = eval("-0") + assert x == 0 def test_long_real(self): class A(int): pass

From noreply at buildbot.pypy.org Sat Jan 14 21:48:39 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sat, 14 Jan 2012 21:48:39 +0100 (CET)
Subject: [pypy-commit] pypy py3k: Now reject u'' literals,
Message-ID: <20120114204839.4DFA082B12@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51328:662eb2c58644
Date: 2011-12-27 00:29 +0100
http://bitbucket.org/pypy/pypy/changeset/662eb2c58644/

Log: Now reject u'' literals, expect to break many tests here and there...
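The tokenizer change in this changeset shrinks the set of accepted string-literal prefixes: `makeStrPrefix` now allows an optional `b`/`B` followed by an optional `r`/`R`, and the `u`/`U` (and `ur`-style) spellings disappear from the DFA. As a rough, hypothetical model of just the prefix set (ignoring the real state-machine plumbing in `genpytokenize.py`):

```python
# Toy model (not PyPy code): the string-literal prefixes the py3k
# tokenizer accepts after this commit -- optional b/B, then optional r/R.
def legal_prefixes():
    return {b + r for b in ("", "b", "B") for r in ("", "r", "R")}

assert "br" in legal_prefixes()      # bytes literals, raw or not, survive
assert "u" not in legal_prefixes()   # u'' is now a syntax error
assert "ur" not in legal_prefixes()  # so are the 2.x ur'' combinations
assert len(legal_prefixes()) == 9    # 3 byte-prefix choices x 3 raw choices
```

For context, CPython 3.0-3.2 rejected `u''` the same way; CPython 3.3 later reinstated the `u` prefix for 2.x compatibility (PEP 414), which is why this assertion holds only for the Python 3 of this commit's era.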
diff --git a/pypy/interpreter/pyparser/genpytokenize.py b/pypy/interpreter/pyparser/genpytokenize.py --- a/pypy/interpreter/pyparser/genpytokenize.py +++ b/pypy/interpreter/pyparser/genpytokenize.py @@ -141,7 +141,7 @@ # ____________________________________________________________ def makeStrPrefix (): return chain(states, - maybe(states, groupStr(states, "uUbB")), + maybe(states, groupStr(states, "bB")), maybe(states, groupStr(states, "rR"))) # ____________________________________________________________ contStr = group(states, diff --git a/pypy/interpreter/pyparser/parsestring.py b/pypy/interpreter/pyparser/parsestring.py --- a/pypy/interpreter/pyparser/parsestring.py +++ b/pypy/interpreter/pyparser/parsestring.py @@ -21,10 +21,6 @@ ps += 1 quote = s[ps] unicode = False - elif quote == 'u' or quote == 'U': - ps += 1 - quote = s[ps] - unicode = True if quote == 'r' or quote == 'R': ps += 1 quote = s[ps] diff --git a/pypy/interpreter/pyparser/pytokenize.py b/pypy/interpreter/pyparser/pytokenize.py --- a/pypy/interpreter/pyparser/pytokenize.py +++ b/pypy/interpreter/pyparser/pytokenize.py @@ -45,7 +45,7 @@ 'I': 1, 'J': 1, 'K': 1, 'L': 1, 'M': 1, 'N': 1, 'O': 1, 'P': 1, 'Q': 1, 'R': 3, 'S': 1, 'T': 1, - 'U': 2, 'V': 1, 'W': 1, 'X': 1, + 'U': 1, 'V': 1, 'W': 1, 'X': 1, 'Y': 1, 'Z': 1, '[': 13, '\\': 17, ']': 13, '^': 12, '_': 1, '`': 13, 'a': 1, 'b': 2, 'c': 1, 'd': 1, @@ -53,7 +53,7 @@ 'i': 1, 'j': 1, 'k': 1, 'l': 1, 'm': 1, 'n': 1, 'o': 1, 'p': 1, 'q': 1, 'r': 3, 's': 1, 't': 1, - 'u': 2, 'v': 1, 'w': 1, 'x': 1, + 'u': 1, 'v': 1, 'w': 1, 'x': 1, 'y': 1, 'z': 1, '{': 13, '|': 12, '}': 13, '~': 13}, # 1 @@ -311,12 +311,10 @@ '"' : doubleDFA, 'r' : None, 'R' : None, - 'u' : None, - 'U' : None, 'b' : None, 'B' : None} -for uniPrefix in ("", "u", "U", "b", "B"): +for uniPrefix in ("", "b", "B"): for rawPrefix in ("", "r", "R"): prefix = uniPrefix + rawPrefix endDFAs[prefix + "'''"] = single3DFA @@ -332,20 +330,14 @@ triple_quoted = {} for t in ("'''", '"""', "r'''", 
'r"""', "R'''", 'R"""', - "u'''", 'u"""', "U'''", 'U"""', "b'''", 'b"""', "B'''", 'B"""', - "ur'''", 'ur"""', "Ur'''", 'Ur"""', - "uR'''", 'uR"""', "UR'''", 'UR"""', "br'''", 'br"""', "Br'''", 'Br"""', "bR'''", 'bR"""', "BR'''", 'BR"""'): triple_quoted[t] = t single_quoted = {} for t in ("'", '"', "r'", 'r"', "R'", 'R"', - "u'", 'u"', "U'", 'U"', "b'", 'b"', "B'", 'B"', - "ur'", 'ur"', "Ur'", 'Ur"', - "uR'", 'uR"', "UR'", 'UR"', "br'", 'br"', "Br'", 'Br"', "bR'", 'bR"', "BR'", 'BR"'): single_quoted[t] = t diff --git a/pypy/interpreter/pyparser/test/test_parsestring.py b/pypy/interpreter/pyparser/test/test_parsestring.py --- a/pypy/interpreter/pyparser/test/test_parsestring.py +++ b/pypy/interpreter/pyparser/test/test_parsestring.py @@ -6,8 +6,8 @@ space = self.space w_ret = parsestring.parsestr(space, None, literal) if isinstance(value, str): - assert space.type(w_ret) == space.w_str - assert space.str_w(w_ret) == value + assert space.type(w_ret) == space.w_bytes + assert space.bytes_w(w_ret) == value elif isinstance(value, unicode): assert space.type(w_ret) == space.w_unicode assert space.unicode_w(w_ret) == value @@ -17,49 +17,49 @@ def test_simple(self): space = self.space for s in ['hello world', 'hello\n world']: - self.parse_and_compare(repr(s), s) + self.parse_and_compare('b' + repr(s), s) - self.parse_and_compare("'''hello\\x42 world'''", 'hello\x42 world') + self.parse_and_compare("b'''hello\\x42 world'''", 'hello\x42 world') # octal - self.parse_and_compare(r'"\0"', chr(0)) - self.parse_and_compare(r'"\07"', chr(7)) - self.parse_and_compare(r'"\123"', chr(0123)) - self.parse_and_compare(r'"\400"', chr(0)) - self.parse_and_compare(r'"\9"', '\\' + '9') - self.parse_and_compare(r'"\08"', chr(0) + '8') + self.parse_and_compare(r'b"\0"', chr(0)) + self.parse_and_compare(r'b"\07"', chr(7)) + self.parse_and_compare(r'b"\123"', chr(0123)) + self.parse_and_compare(r'b"\400"', chr(0)) + self.parse_and_compare(r'b"\9"', '\\' + '9') + 
self.parse_and_compare(r'b"\08"', chr(0) + '8') # hexadecimal - self.parse_and_compare(r'"\xfF"', chr(0xFF)) - self.parse_and_compare(r'"\""', '"') - self.parse_and_compare(r"'\''", "'") - for s in (r'"\x"', r'"\x7"', r'"\x7g"'): + self.parse_and_compare(r'b"\xfF"', chr(0xFF)) + self.parse_and_compare(r'b"\""', '"') + self.parse_and_compare(r"b'\''", "'") + for s in (r'b"\x"', r'b"\x7"', r'b"\x7g"'): space.raises_w(space.w_ValueError, parsestring.parsestr, space, None, s) def test_unicode(self): space = self.space - for s in [u'hello world', u'hello\n world']: - self.parse_and_compare(repr(s), s) + for s in ['hello world', 'hello\n world']: + self.parse_and_compare(repr(s), unicode(s)) - self.parse_and_compare("u'''hello\\x42 world'''", + self.parse_and_compare("'''hello\\x42 world'''", u'hello\x42 world') - self.parse_and_compare("u'''hello\\u0842 world'''", + self.parse_and_compare("'''hello\\u0842 world'''", u'hello\u0842 world') s = "u'\x81'" - s = s.decode("koi8-u").encode("utf8") + s = s.decode("koi8-u").encode("utf8")[1:] w_ret = parsestring.parsestr(self.space, 'koi8-u', s) ret = space.unwrap(w_ret) assert ret == eval("# -*- coding: koi8-u -*-\nu'\x81'") def test_unicode_literals(self): space = self.space - w_ret = parsestring.parsestr(space, None, repr("hello"), True) + w_ret = parsestring.parsestr(space, None, repr("hello")) assert space.isinstance_w(w_ret, space.w_unicode) - w_ret = parsestring.parsestr(space, None, "b'hi'", True) + w_ret = parsestring.parsestr(space, None, "b'hi'") assert space.isinstance_w(w_ret, space.w_str) - w_ret = parsestring.parsestr(space, None, "r'hi'", True) + w_ret = parsestring.parsestr(space, None, "r'hi'") assert space.isinstance_w(w_ret, space.w_unicode) def test_bytes(self): @@ -77,7 +77,7 @@ s = s.decode("koi8-u").encode("utf8") w_ret = parsestring.parsestr(self.space, 'koi8-u', s) ret = space.unwrap(w_ret) - assert ret == eval("# -*- coding: koi8-u -*-\n'\x81'") + assert ret == eval("# -*- coding: koi8-u -*-\nu'\x81'") 
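The parsestring.py hunk deletes the `u`/`U` branch from the prefix-scanning loop, leaving only the `b`/`B` and `r`/`R` checks. A standalone sketch of the surviving scan (a hypothetical simplification of `parsestr`, not its real signature):

```python
def scan_prefix(s):
    """Classify a py3k string literal's prefix.

    Returns (is_bytes, rawmode, index_of_opening_quote).  The u/U
    branch that py2 accepted is gone, mirroring the diff above.
    """
    ps = 0
    quote = s[ps]
    is_bytes = False
    rawmode = False
    if quote in 'bB':
        ps += 1
        quote = s[ps]
        is_bytes = True
    if quote in 'rR':
        ps += 1
        quote = s[ps]
        rawmode = True
    if quote not in '\'"':
        raise ValueError("invalid string prefix")
    return is_bytes, rawmode, ps
```

A `u'...'` literal now falls through both branches and fails the quote check, which is why the tests above expect a plain `'...'` to parse as unicode and `b'...'` as bytes.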
def test_multiline_unicode_strings_with_backslash(self): space = self.space From noreply at buildbot.pypy.org Sat Jan 14 21:48:40 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:40 +0100 (CET) Subject: [pypy-commit] pypy py3k: intern() is now in the sys module Message-ID: <20120114204840.7F1CC82B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51329:cb7edd67143d Date: 2011-12-27 17:41 +0100 http://bitbucket.org/pypy/pypy/changeset/cb7edd67143d/ Log: intern() is now in the sys module diff --git a/pypy/module/__builtin__/__init__.py b/pypy/module/__builtin__/__init__.py --- a/pypy/module/__builtin__/__init__.py +++ b/pypy/module/__builtin__/__init__.py @@ -70,7 +70,6 @@ 'iter' : 'operation.iter', 'next' : 'operation.next', 'id' : 'operation.id', - 'intern' : 'operation.intern', 'callable' : 'operation.callable', 'compile' : 'compiling.compile', diff --git a/pypy/module/__builtin__/operation.py b/pypy/module/__builtin__/operation.py --- a/pypy/module/__builtin__/operation.py +++ b/pypy/module/__builtin__/operation.py @@ -220,15 +220,6 @@ space.setattr(w_object, w_name, w_val) return space.w_None -def intern(space, w_str): - """``Intern'' the given string. This enters the string in the (global) -table of interned strings whose purpose is to speed up dictionary lookups. -Return the string itself or the previously interned string object with the -same value.""" - if space.is_w(space.type(w_str), space.w_unicode): - return space.new_interned_w_str(w_str) - raise OperationError(space.w_TypeError, space.wrap("intern() argument must be string.")) - def callable(space, w_object): """Check whether the object appears to be callable (i.e., some kind of function). 
Note that classes are callable.""" diff --git a/pypy/module/__builtin__/test/test_builtin.py b/pypy/module/__builtin__/test/test_builtin.py --- a/pypy/module/__builtin__/test/test_builtin.py +++ b/pypy/module/__builtin__/test/test_builtin.py @@ -93,20 +93,6 @@ raises(ValueError, chr, -1) raises(ValueError, chr, sys.maxunicode+1) - def test_intern(self): - raises(TypeError, intern) - raises(TypeError, intern, 1) - class S(str): - pass - raises(TypeError, intern, S("hello")) - s = "never interned before" - s2 = intern(s) - assert s == s2 - s3 = s.swapcase() - assert s3 != s2 - s4 = s3.swapcase() - assert intern(s4) is s2 - def test_globals(self): d = {"foo":"bar"} exec("def f(): return globals()", d) diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py --- a/pypy/module/sys/__init__.py +++ b/pypy/module/sys/__init__.py @@ -55,6 +55,7 @@ 'getprofile' : 'vm.getprofile', 'call_tracing' : 'vm.call_tracing', 'getsizeof' : 'vm.getsizeof', + 'intern' : 'vm.intern', 'executable' : 'space.wrap("py.py")', 'api_version' : 'version.get_api_version(space)', diff --git a/pypy/module/sys/test/test_sysmodule.py b/pypy/module/sys/test/test_sysmodule.py --- a/pypy/module/sys/test/test_sysmodule.py +++ b/pypy/module/sys/test/test_sysmodule.py @@ -607,3 +607,19 @@ assert len(frames) == 1 _, other_frame = frames.popitem() assert other_frame.f_code.co_name in ('other_thread', '?') + + def test_intern(self): + from sys import intern + raises(TypeError, intern) + raises(TypeError, intern, 1) + class S(str): + pass + raises(TypeError, intern, S("hello")) + s = "never interned before" + s2 = intern(s) + assert s == s2 + s3 = s.swapcase() + assert s3 != s2 + s4 = s3.swapcase() + assert intern(s4) is s2 + diff --git a/pypy/module/sys/vm.py b/pypy/module/sys/vm.py --- a/pypy/module/sys/vm.py +++ b/pypy/module/sys/vm.py @@ -201,3 +201,13 @@ raise OperationError(space.w_TypeError, space.wrap("sys.getsizeof() not implemented on PyPy")) return w_default + +def intern(space, w_str): 
+ """``Intern'' the given string. This enters the string in the (global) +table of interned strings whose purpose is to speed up dictionary lookups. +Return the string itself or the previously interned string object with the +same value.""" + if space.is_w(space.type(w_str), space.w_unicode): + return space.new_interned_w_str(w_str) + raise OperationError(space.w_TypeError, space.wrap("intern() argument must be string.")) + From noreply at buildbot.pypy.org Sat Jan 14 21:48:46 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 14 Jan 2012 21:48:46 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120114204846.C7B5082B12@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51330:86189f3364a6 Date: 2012-01-14 21:45 +0100 http://bitbucket.org/pypy/pypy/changeset/86189f3364a6/ Log: hg merge default diff too long, truncating to 10000 out of 21522 lines diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. 
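The moved function behaves the same from applevel, only its home changes. A quick demonstration against the py3k-style API (this matches how the relocated test exercises it):

```python
import sys

# In py3k, intern() lives in the sys module and accepts only exact str.
a = "never interned before".replace(" ", "_")
b = "never interned before".replace(" ", "_")
ia = sys.intern(a)
ib = sys.intern(b)
assert ia is ib  # equal strings intern to one shared object

# str subclasses are rejected with TypeError, as the test asserts.
class S(str):
    pass
```

The `replace` calls build the strings at runtime so the interpreter's compile-time constant folding cannot pre-intern them.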
-PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -84,34 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. 
Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -73,8 +73,12 @@ class Field(object): def __init__(self, name, offset, size, ctype, num, is_bitfield): - for k in ('name', 'offset', 'size', 'ctype', 'num', 'is_bitfield'): - self.__dict__[k] = locals()[k] + self.__dict__['name'] = name + self.__dict__['offset'] = offset + self.__dict__['size'] = size + self.__dict__['ctype'] = ctype + self.__dict__['num'] = num + self.__dict__['is_bitfield'] = is_bitfield def __setattr__(self, name, value): raise AttributeError(name) diff --git a/lib_pypy/_sqlite3.py b/lib_pypy/_sqlite3.py --- a/lib_pypy/_sqlite3.py +++ b/lib_pypy/_sqlite3.py @@ -231,8 +231,10 @@ sqlite.sqlite3_result_text.argtypes = [c_void_p, c_char_p, c_int, c_void_p] sqlite.sqlite3_result_text.restype = None 
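The `_ctypes/structure.py` hunk unrolls a `locals()`-driven loop into explicit `self.__dict__[...]` stores, presumably because the explicit form is friendlier to RPython translation. The underlying pattern, a read-only record whose `__init__` writes through `__dict__` to bypass its own `__setattr__` guard, can be sketched on its own (trimmed hypothetical version of the diff's `Field`):

```python
class Field(object):
    """Read-only record: attributes are set via direct __dict__ stores
    in __init__, which sidestep __setattr__; all later mutation fails."""

    def __init__(self, name, offset):
        # self.name = name would raise AttributeError via __setattr__,
        # so write the instance dict directly.
        self.__dict__['name'] = name
        self.__dict__['offset'] = offset

    def __setattr__(self, name, value):
        raise AttributeError(name)
```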
-sqlite.sqlite3_enable_load_extension.argtypes = [c_void_p, c_int] -sqlite.sqlite3_enable_load_extension.restype = c_int +HAS_LOAD_EXTENSION = hasattr(sqlite, "sqlite3_enable_load_extension") +if HAS_LOAD_EXTENSION: + sqlite.sqlite3_enable_load_extension.argtypes = [c_void_p, c_int] + sqlite.sqlite3_enable_load_extension.restype = c_int ########################################## # END Wrapped SQLite C API and constants @@ -708,13 +710,14 @@ from sqlite3.dump import _iterdump return _iterdump(self) - def enable_load_extension(self, enabled): - self._check_thread() - self._check_closed() + if HAS_LOAD_EXTENSION: + def enable_load_extension(self, enabled): + self._check_thread() + self._check_closed() - rc = sqlite.sqlite3_enable_load_extension(self.db, int(enabled)) - if rc != SQLITE_OK: - raise OperationalError("Error enabling load extension") + rc = sqlite.sqlite3_enable_load_extension(self.db, int(enabled)) + if rc != SQLITE_OK: + raise OperationalError("Error enabling load extension") DML, DQL, DDL = range(3) diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/__init__.py @@ -0,0 +1,2 @@ +from _numpypy import * +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/fromnumeric.py @@ -0,0 +1,2400 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. +# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) 
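The `_sqlite3.py` change probes the loaded C library with `hasattr` and only defines `enable_load_extension` when the symbol exists, since some SQLite builds omit it. The same feature-detection pattern in miniature, using `math.log2` purely as an arbitrary stand-in for an optional symbol:

```python
import math

# Probe once at import time for an optional symbol, then conditionally
# define the wrapper method -- as _sqlite3 now does for
# sqlite3_enable_load_extension.
HAS_LOG2 = hasattr(math, "log2")

class Wrapper(object):
    if HAS_LOG2:
        def log2(self, x):
            return math.log2(x)
```

Callers on a build without the feature get a clean `AttributeError` on the method itself, rather than a confusing failure deep inside the C-level call.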
+# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. +__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. + + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. + indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. + out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. 
+ + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. + + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplemented('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. + order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. + + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. + + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raise if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose make the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modiying the + # initial object. 
+ >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties. Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. 
Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. 
+ + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... ) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. 
+ axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. + + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. 
+ axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. + + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. 
+ + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. + lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. 
+ + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '>> x + array([(1, 0), (0, 1)], + dtype=[('x', '>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed. 
+ + See Also + -------- + ndarray.argmax, argmin + amax : The maximum value along a given axis. + unravel_index : Convert a flat index into an index tuple. + + Notes + ----- + In case of multiple occurrences of the maximum values, the indices + corresponding to the first occurrence are returned. + + Examples + -------- + >>> a = np.arange(6).reshape(2,3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> np.argmax(a) + 5 + >>> np.argmax(a, axis=0) + array([1, 1, 1]) + >>> np.argmax(a, axis=1) + array([2, 2]) + + >>> b = np.arange(6) + >>> b[1] = 5 + >>> b + array([0, 5, 2, 3, 4, 5]) + >>> np.argmax(b) # Only the first occurrence is returned. + 1 + + """ + if not hasattr(a, 'argmax'): + a = numpypy.array(a) + return a.argmax() + + +def argmin(a, axis=None): + """ + Return the indices of the minimum values along an axis. + + See Also + -------- + argmax : Similar function. Please refer to `numpy.argmax` for detailed + documentation. + + """ + if not hasattr(a, 'argmin'): + a = numpypy.array(a) + return a.argmin() + + +def searchsorted(a, v, side='left'): + """ + Find indices where elements should be inserted to maintain order. + + Find the indices into a sorted array `a` such that, if the corresponding + elements in `v` were inserted before the indices, the order of `a` would + be preserved. + + Parameters + ---------- + a : 1-D array_like + Input array, sorted in ascending order. + v : array_like + Values to insert into `a`. + side : {'left', 'right'}, optional + If 'left', the index of the first suitable location found is given. If + 'right', return the last such index. If there is no suitable + index, return either 0 or N (where N is the length of `a`). + + Returns + ------- + indices : array of ints + Array of insertion points with the same shape as `v`. + + See Also + -------- + sort : Return a sorted copy of an array. + histogram : Produce histogram from 1-D data. + + Notes + ----- + Binary search is used to find the required insertion points. 
+ + As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing + `nan` values. The enhanced sort order is documented in `sort`. + + Examples + -------- + >>> np.searchsorted([1,2,3,4,5], 3) + 2 + >>> np.searchsorted([1,2,3,4,5], 3, side='right') + 3 + >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]) + array([0, 5, 1, 2]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def resize(a, new_shape): + """ + Return a new array with the specified shape. + + If the new array is larger than the original array, then the new + array is filled with repeated copies of `a`. Note that this behavior + is different from a.resize(new_shape) which fills with zeros instead + of repeated copies of `a`. + + Parameters + ---------- + a : array_like + Array to be resized. + + new_shape : int or tuple of int + Shape of resized array. + + Returns + ------- + reshaped_array : ndarray + The new array is formed from the data in the old array, repeated + if necessary to fill out the required number of elements. The + data are repeated in the order that they are stored in memory. + + See Also + -------- + ndarray.resize : resize an array in-place. + + Examples + -------- + >>> a=np.array([[0,1],[2,3]]) + >>> np.resize(a,(1,4)) + array([[0, 1, 2, 3]]) + >>> np.resize(a,(2,4)) + array([[0, 1, 2, 3], + [0, 1, 2, 3]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def squeeze(a): + """ + Remove single-dimensional entries from the shape of an array. + + Parameters + ---------- + a : array_like + Input data. + + Returns + ------- + squeezed : ndarray + The input array, but with with all dimensions of length 1 + removed. Whenever possible, a view on `a` is returned. + + Examples + -------- + >>> x = np.array([[[0], [1], [2]]]) + >>> x.shape + (1, 3, 1) + >>> np.squeeze(x).shape + (3,) + + """ + raise NotImplemented('Waiting on interp level method') + + +def diagonal(a, offset=0, axis1=0, axis2=1): + """ + Return specified diagonals. 
+ + If `a` is 2-D, returns the diagonal of `a` with the given offset, + i.e., the collection of elements of the form ``a[i, i+offset]``. If + `a` has more than two dimensions, then the axes specified by `axis1` + and `axis2` are used to determine the 2-D sub-array whose diagonal is + returned. The shape of the resulting array can be determined by + removing `axis1` and `axis2` and appending an index to the right equal + to the size of the resulting diagonals. + + Parameters + ---------- + a : array_like + Array from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be positive or + negative. Defaults to main diagonal (0). + axis1 : int, optional + Axis to be used as the first axis of the 2-D sub-arrays from which + the diagonals should be taken. Defaults to first axis (0). + axis2 : int, optional + Axis to be used as the second axis of the 2-D sub-arrays from + which the diagonals should be taken. Defaults to second axis (1). + + Returns + ------- + array_of_diagonals : ndarray + If `a` is 2-D, a 1-D array containing the diagonal is returned. + If the dimension of `a` is larger, then an array of diagonals is + returned, "packed" from left-most dimension to right-most (e.g., + if `a` is 3-D, then the diagonals are "packed" along rows). + + Raises + ------ + ValueError + If the dimension of `a` is less than 2. + + See Also + -------- + diag : MATLAB work-a-like for 1-D and 2-D arrays. + diagflat : Create diagonal arrays. + trace : Sum along diagonals. + + Examples + -------- + >>> a = np.arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + A 3-D example: + + >>> a = np.arange(8).reshape(2,2,2); a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0, # Main diagonals of two arrays created by skipping + ... 0, # across the outer(left)-most axis last and + ... 1) # the "middle" (row) axis first. 
+ array([[0, 6], + [1, 7]]) + + The sub-arrays whose main diagonals we just obtained; note that each + corresponds to fixing the right-most (column) axis, and that the + diagonals are "packed" in rows. + + >>> a[:,:,0] # main diagonal is [0 6] + array([[0, 2], + [4, 6]]) + >>> a[:,:,1] # main diagonal is [1 7] + array([[1, 3], + [5, 7]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): + """ + Return the sum along diagonals of the array. + + If `a` is 2-D, the sum along its diagonal with the given offset + is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i. + + If `a` has more than two dimensions, then the axes specified by axis1 and + axis2 are used to determine the 2-D sub-arrays whose traces are returned. + The shape of the resulting array is the same as that of `a` with `axis1` + and `axis2` removed. + + Parameters + ---------- + a : array_like + Input array, from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to 0. + axis1, axis2 : int, optional + Axes to be used as the first and second axis of the 2-D sub-arrays + from which the diagonals should be taken. Defaults are the first two + axes of `a`. + dtype : dtype, optional + Determines the data-type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and `a` is + of integer type of precision less than the default integer + precision, then the default integer precision is used. Otherwise, + the precision is the same as that of `a`. + out : ndarray, optional + Array into which the output is placed. Its type is preserved and + it must be of the right shape to hold the output. + + Returns + ------- + sum_along_diagonals : ndarray + If `a` is 2-D, the sum along the diagonal is returned. 
If `a` has + larger dimensions, then an array of sums along diagonals is returned. + + See Also + -------- + diag, diagonal, diagflat + + Examples + -------- + >>> np.trace(np.eye(3)) + 3.0 + >>> a = np.arange(8).reshape((2,2,2)) + >>> np.trace(a) + array([6, 8]) + + >>> a = np.arange(24).reshape((2,2,2,3)) + >>> np.trace(a).shape + (2, 3) + + """ + raise NotImplemented('Waiting on interp level method') + +def ravel(a, order='C'): + """ + Return a flattened array. + + A 1-D array, containing the elements of the input, is returned. A copy is + made only if needed. + + Parameters + ---------- + a : array_like + Input array. The elements in ``a`` are read in the order specified by + `order`, and packed as a 1-D array. + order : {'C','F', 'A', 'K'}, optional + The elements of ``a`` are read in this order. 'C' means to view + the elements in C (row-major) order. 'F' means to view the elements + in Fortran (column-major) order. 'A' means to view the elements + in 'F' order if a is Fortran contiguous, 'C' order otherwise. + 'K' means to view the elements in the order they occur in memory, + except for reversing the data when strides are negative. + By default, 'C' order is used. + + Returns + ------- + 1d_array : ndarray + Output of the same dtype as `a`, and of shape ``(a.size(),)``. + + See Also + -------- + ndarray.flat : 1-D iterator over an array. + ndarray.flatten : 1-D array copy of the elements of an array + in row-major order. + + Notes + ----- + In row-major order, the row index varies the slowest, and the column + index the quickest. This can be generalized to multiple dimensions, + where row-major order implies that the index along the first axis + varies slowest, and the index along the last quickest. The opposite holds + for Fortran-, or column-major, mode. + + Examples + -------- + It is equivalent to ``reshape(-1, order=order)``. 
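The C-order (row-major) flattening that `ravel` performs can be sketched in pure Python for nested lists. This hypothetical helper covers only the default 'C' order, always copies, and knows nothing about 'F'/'A'/'K' or strides:

```python
def ravel_c(a):
    """Flatten a nested-list 'array' in C (row-major) order.

    Sketch only: the real ravel returns a view when it can and
    supports the 'F', 'A' and 'K' orders described above.
    """
    if not isinstance(a, list):
        return [a]                 # 0-d input becomes a 1-element list
    out = []
    for item in a:
        out.extend(ravel_c(item))  # recurse row by row
    return out
```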
+ + >>> x = np.array([[1, 2, 3], [4, 5, 6]]) + >>> print np.ravel(x) + [1 2 3 4 5 6] + + >>> print x.reshape(-1) + [1 2 3 4 5 6] + + >>> print np.ravel(x, order='F') + [1 4 2 5 3 6] + + When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering: + + >>> print np.ravel(x.T) + [1 4 2 5 3 6] + >>> print np.ravel(x.T, order='A') + [1 2 3 4 5 6] + + When ``order`` is 'K', it will preserve orderings that are neither 'C' + nor 'F', but won't reverse axes: + + >>> a = np.arange(3)[::-1]; a + array([2, 1, 0]) + >>> a.ravel(order='C') + array([2, 1, 0]) + >>> a.ravel(order='K') + array([2, 1, 0]) + + >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a + array([[[ 0, 2, 4], + [ 1, 3, 5]], + [[ 6, 8, 10], + [ 7, 9, 11]]]) + >>> a.ravel(order='C') + array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11]) + >>> a.ravel(order='K') + array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def nonzero(a): + """ + Return the indices of the elements that are non-zero. + + Returns a tuple of arrays, one for each dimension of `a`, containing + the indices of the non-zero elements in that dimension. The + corresponding non-zero values can be obtained with:: + + a[nonzero(a)] + + To group the indices by element, rather than dimension, use:: + + transpose(nonzero(a)) + + The result of this is always a 2-D array, with a row for + each non-zero element. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + tuple_of_arrays : tuple + Indices of elements that are non-zero. + + See Also + -------- + flatnonzero : + Return indices that are non-zero in the flattened version of the input + array. + ndarray.nonzero : + Equivalent ndarray method. + count_nonzero : + Counts the number of non-zero elements in the input array. 
+ + Examples + -------- + >>> x = np.eye(3) + >>> x + array([[ 1., 0., 0.], + [ 0., 1., 0.], + [ 0., 0., 1.]]) + >>> np.nonzero(x) + (array([0, 1, 2]), array([0, 1, 2])) + + >>> x[np.nonzero(x)] + array([ 1., 1., 1.]) + >>> np.transpose(np.nonzero(x)) + array([[0, 0], + [1, 1], + [2, 2]]) + + A common use for ``nonzero`` is to find the indices of an array, where + a condition is True. Given an array `a`, the condition `a` > 3 is a + boolean array and since False is interpreted as 0, np.nonzero(a > 3) + yields the indices of the `a` where the condition is true. + + >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]]) + >>> a > 3 + array([[False, False, False], + [ True, True, True], + [ True, True, True]], dtype=bool) + >>> np.nonzero(a > 3) + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + The ``nonzero`` method of the boolean array can also be called. + + >>> (a > 3).nonzero() + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + """ + raise NotImplemented('Waiting on interp level method') + + +def shape(a): + """ + Return the shape of an array. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + shape : tuple of ints + The elements of the shape tuple give the lengths of the + corresponding array dimensions. + + See Also + -------- + alen + ndarray.shape : Equivalent array method. + + Examples + -------- + >>> np.shape(np.eye(3)) + (3, 3) + >>> np.shape([[1, 2]]) + (1, 2) + >>> np.shape([0]) + (1,) + >>> np.shape(0) + () + + >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + >>> np.shape(a) + (2,) + >>> a.shape + (2,) + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape + + +def compress(condition, a, axis=None, out=None): + """ + Return selected slices of an array along given axis. + + When working along a given axis, a slice along that axis is returned in + `output` for each index where `condition` evaluates to True. 
When + working on a 1-D array, `compress` is equivalent to `extract`. + + Parameters + ---------- + condition : 1-D array of bools + Array that selects which entries to return. If len(condition) + is less than the size of `a` along the given axis, then output is + truncated to the length of the condition array. + a : array_like + Array from which to extract a part. + axis : int, optional + Axis along which to take slices. If None (default), work on the + flattened array. + out : ndarray, optional + Output array. Its type is preserved and it must be of the right + shape to hold the output. + + Returns + ------- + compressed_array : ndarray + A copy of `a` without the slices along axis for which `condition` + is false. + + See Also + -------- + take, choose, diag, diagonal, select + ndarray.compress : Equivalent method. + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4], [5, 6]]) + >>> a + array([[1, 2], + [3, 4], + [5, 6]]) + >>> np.compress([0, 1], a, axis=0) + array([[3, 4]]) + >>> np.compress([False, True, True], a, axis=0) + array([[3, 4], + [5, 6]]) + >>> np.compress([False, True], a, axis=1) + array([[2], + [4], + [6]]) + + Working on the flattened array does not return slices along an axis but + selects elements. + + >>> np.compress([False, True], a) + array([2]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def clip(a, a_min, a_max, out=None): + """ + Clip (limit) the values in an array. + + Given an interval, values outside the interval are clipped to + the interval edges. For example, if an interval of ``[0, 1]`` + is specified, values smaller than 0 become 0, and values larger + than 1 become 1. + + Parameters + ---------- + a : array_like + Array containing elements to clip. + a_min : scalar or array_like + Minimum value. + a_max : scalar or array_like + Maximum value. If `a_min` or `a_max` are array_like, then they will + be broadcasted to the shape of `a`. 
+ out : ndarray, optional + The results will be placed in this array. It may be the input + array for in-place clipping. `out` must be of the right shape + to hold the output. Its type is preserved. + + Returns + ------- + clipped_array : ndarray + An array with the elements of `a`, but where values + < `a_min` are replaced with `a_min`, and those > `a_max` + with `a_max`. + + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.arange(10) + >>> np.clip(a, 1, 8) + array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, 3, 6, out=a) + array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) + >>> a = np.arange(10) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8) + array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sum(a, axis=None, dtype=None, out=None): + """ + Sum of array elements over a given axis. + + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + dtype : dtype, optional + The type of the returned array and of the accumulator in which + the elements are summed. By default, the dtype of `a` is used. + An exception is when `a` has an integer type with less precision + than the default platform integer. In that case, the default + platform integer is used instead. + out : ndarray, optional + Array into which the output is placed. By default, a new array is + created. If `out` is given, it must be of the appropriate shape + (the shape of `a` with `axis` removed, i.e., + ``numpy.delete(a.shape, axis)``). Its type is preserved. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. 
If `a` is a 0-d array, or if `axis` is None, a scalar + is returned. If an output array is specified, a reference to + `out` is returned. + + See Also + -------- + ndarray.sum : Equivalent method. + + cumsum : Cumulative sum of array elements. + + trapz : Integration of array values using the composite trapezoidal rule. + + mean, average + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> np.sum([0.5, 1.5]) + 2.0 + >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32) + 1 + >>> np.sum([[0, 1], [0, 5]]) + 6 + >>> np.sum([[0, 1], [0, 5]], axis=0) + array([0, 6]) + >>> np.sum([[0, 1], [0, 5]], axis=1) + array([1, 5]) + + If the accumulator is too small, overflow occurs: + + >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8) + -128 + + """ + if not hasattr(a, "sum"): + a = numpypy.array(a) + return a.sum() + + +def product (a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + See Also + -------- + prod : equivalent function; see for details. + + """ + raise NotImplemented('Waiting on interp level method') + + +def sometrue(a, axis=None, out=None): + """ + Check whether some values are true. + + Refer to `any` for full documentation. + + See Also + -------- + any : equivalent function + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def alltrue (a, axis=None, out=None): + """ + Check if all elements of input array are true. + + See Also + -------- + numpy.all : Equivalent function; see for details. + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + +def any(a,axis=None, out=None): + """ + Test whether any array element along a given axis evaluates to True. + + Returns single boolean unless `axis` is not ``None`` + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical OR is performed. 
The default + (`axis` = `None`) is to perform a logical OR over a flattened + input array. `axis` may be negative, in which case it counts + from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output and its type is preserved + (e.g., if it is of type float, then it will remain so, returning + 1.0 for True and 0.0 for False, regardless of the type of `a`). + See `doc.ufuncs` (Section "Output arguments") for details. + + Returns + ------- + any : bool or ndarray + A new boolean or `ndarray` is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.any : equivalent method + + all : Test whether all elements along a given axis evaluate to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity evaluate + to `True` because these are not equal to zero. + + Examples + -------- + >>> np.any([[True, False], [True, True]]) + True + + >>> np.any([[True, False], [False, False]], axis=0) + array([ True, False], dtype=bool) + + >>> np.any([-1, 0, 5]) + True + + >>> np.any(np.nan) + True + + >>> o=np.array([False]) + >>> z=np.any([-1, 4, 5], out=o) + >>> z, o + (array([ True], dtype=bool), array([ True], dtype=bool)) + >>> # Check now that z is a reference to o + >>> z is o + True + >>> id(z), id(o) # identity of z and o # doctest: +SKIP + (191614240, 191614240) + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def all(a,axis=None, out=None): + """ + Test whether all array elements along a given axis evaluate to True. + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical AND is performed. + The default (`axis` = `None`) is to perform a logical AND + over a flattened input array. 
`axis` may be negative, in which + case it counts from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. + It must have the same shape as the expected output and its + type is preserved (e.g., if ``dtype(out)`` is float, the result + will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section + "Output arguments") for more details. + + Returns + ------- + all : ndarray, bool + A new boolean or array is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.all : equivalent method + + any : Test whether any element along a given axis evaluates to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity + evaluate to `True` because these are not equal to zero. + + Examples + -------- + >>> np.all([[True,False],[True,True]]) + False + + >>> np.all([[True,False],[True,True]], axis=0) + array([ True, False], dtype=bool) + + >>> np.all([-1, 4, 5]) + True + + >>> np.all([1.0, np.nan]) + True + + >>> o=np.array([False]) + >>> z=np.all([-1, 4, 5], out=o) + >>> id(z), id(o), z # doctest: +SKIP + (28293632, 28293632, array([ True], dtype=bool)) + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + + +def cumsum (a, axis=None, dtype=None, out=None): + """ + Return the cumulative sum of the elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative sum is computed. The default + (None) is to compute the cumsum over the flattened array. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults + to the dtype of `a`, unless `a` has an integer dtype with a + precision less than that of the default platform integer. In + that case, the default platform integer is used. 
+ out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. See `doc.ufuncs` + (Section "Output arguments") for more details. + + Returns + ------- + cumsum_along_axis : ndarray. + A new array holding the result is returned unless `out` is + specified, in which case a reference to `out` is returned. The + result has the same size as `a`, and the same shape as `a` if + `axis` is not None or `a` is a 1-d array. + + + See Also + -------- + sum : Sum array elements. + + trapz : Integration of array values using the composite trapezoidal rule. + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> a + array([[1, 2, 3], + [4, 5, 6]]) + >>> np.cumsum(a) + array([ 1, 3, 6, 10, 15, 21]) + >>> np.cumsum(a, dtype=float) # specifies type of output value(s) + array([ 1., 3., 6., 10., 15., 21.]) + + >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns + array([[1, 2, 3], + [5, 7, 9]]) + >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows + array([[ 1, 3, 6], + [ 4, 9, 15]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def cumproduct(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product over the given axis. + + + See Also + -------- + cumprod : equivalent function; see for details. + + """ + raise NotImplemented('Waiting on interp level method') + + +def ptp(a, axis=None, out=None): + """ + Range of values (maximum - minimum) along an axis. + + The name of the function comes from the acronym for 'peak to peak'. + + Parameters + ---------- + a : array_like + Input values. + axis : int, optional + Axis along which to find the peaks. By default, flatten the + array. + out : array_like + Alternative output array in which to place the result. 
It must + have the same shape and buffer length as the expected output, + but the type of the output values will be cast if necessary. + + Returns + ------- + ptp : ndarray + A new array holding the result, unless `out` was + specified, in which case a reference to `out` is returned. + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.ptp(x, axis=0) + array([2, 2]) + + >>> np.ptp(x, axis=1) + array([1, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def amax(a, axis=None, out=None): + """ + Return the maximum of an array or maximum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default flattened input is used. + out : ndarray, optional + Alternate output array in which to place the result. Must be of + the same shape and buffer length as the expected output. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amax : ndarray or scalar + Maximum of `a`. If `axis` is None, the result is a scalar value. + If `axis` is given, the result is an array of dimension + ``a.ndim - 1``. + + See Also + -------- + nanmax : NaN values are ignored instead of being propagated. + fmax : same behavior as the C99 fmax function. + argmax : indices of the maximum values. + + Notes + ----- + NaN values are propagated, that is if at least one item is NaN, the + corresponding max value will be NaN as well. To ignore NaN values + (MATLAB behavior), please use nanmax. 
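The NaN-propagation rule in the note above can be illustrated with a pure-Python sketch on flat sequences; both helper names are hypothetical and ignore `axis`/`out`:

```python
import math

def amax_flat(values):
    """Maximum of a flat sequence, propagating NaN like amax."""
    result = values[0]
    for v in values[1:]:
        # a single NaN poisons the whole reduction
        if math.isnan(v) or math.isnan(result):
            return float('nan')
        if v > result:
            result = v
    return result

def nanmax_flat(values):
    """Maximum of a flat sequence, ignoring NaNs like nanmax."""
    return max(v for v in values if not math.isnan(v))
```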
+ + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amax(a) + 3 + >>> np.amax(a, axis=0) + array([2, 3]) + >>> np.amax(a, axis=1) + array([1, 3]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amax(b) + nan + >>> np.nanmax(b) + 4.0 + + """ + if not hasattr(a, "max"): + a = numpypy.array(a) + return a.max() + + +def amin(a, axis=None, out=None): + """ + Return the minimum of an array or minimum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default a flattened input is used. + out : ndarray, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + See `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amin : ndarray + A new array or a scalar array with the result. + + See Also + -------- + nanmin: nan values are ignored instead of being propagated + fmin: same behavior as the C99 fmin function + argmin: Return the indices of the minimum values. + + amax, nanmax, fmax + + Notes + ----- + NaN values are propagated, that is if at least one item is nan, the + corresponding min value will be nan as well. To ignore NaN values (matlab + behavior), please use nanmin. + + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amin(a) # Minimum of the flattened array + 0 + >>> np.amin(a, axis=0) # Minima along the first axis + array([0, 1]) + >>> np.amin(a, axis=1) # Minima along the second axis + array([0, 2]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amin(b) + nan + >>> np.nanmin(b) + 0.0 + + """ + # amin() is equivalent to min() + if not hasattr(a, 'min'): + a = numpypy.array(a) + return a.min() + +def alen(a): + """ + Return the length of the first dimension of the input array. 
+ + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + l : int + Length of the first dimension of `a`. + + See Also + -------- + shape, size + + Examples + -------- + >>> a = np.zeros((7,4,5)) + >>> a.shape[0] + 7 + >>> np.alen(a) + 7 + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape[0] + + +def prod(a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis over which the product is taken. By default, the product + of all elements is calculated. + dtype : data-type, optional + The data-type of the returned array, as well as of the accumulator + in which the elements are multiplied. By default, if `a` is of + integer type, `dtype` is the default platform integer. (Note: if + the type of `a` is unsigned, then so is `dtype`.) Otherwise, + the dtype is the same as that of `a`. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the + output values will be cast if necessary. + + Returns + ------- + product_along_axis : ndarray, see `dtype` parameter above. + An array shaped as `a` but with the specified axis removed. + Returns a reference to `out` if specified. + + See Also + -------- + ndarray.prod : equivalent method + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. 
That means that, on a 32-bit platform: + + >>> x = np.array([536870910, 536870910, 536870910, 536870910]) + >>> np.prod(x) #random + 16 + + Examples + -------- + By default, calculate the product of all elements: + + >>> np.prod([1.,2.]) + 2.0 + + Even when the input array is two-dimensional: + + >>> np.prod([[1.,2.],[3.,4.]]) + 24.0 + + But we can also specify the axis over which to multiply: + + >>> np.prod([[1.,2.],[3.,4.]], axis=1) + array([ 2., 12.]) + + If the type of `x` is unsigned, then the output type is + the unsigned platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.uint8) + >>> np.prod(x).dtype == np.uint + True + + If `x` is of a signed integer type, then the output type + is the default platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.int8) + >>> np.prod(x).dtype == np.int + True + + """ + raise NotImplemented('Waiting on interp level method') + + +def cumprod(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product of elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative product is computed. By default + the input is flattened. + dtype : dtype, optional + Type of the returned array, as well as of the accumulator in which + the elements are multiplied. If *dtype* is not specified, it + defaults to the dtype of `a`, unless `a` has an integer dtype with + a precision less than that of the default platform integer. In + that case, the default platform integer is used instead. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type of the resulting values will be cast if necessary. + + Returns + ------- + cumprod : ndarray + A new array holding the result is returned unless `out` is + specified, in which case a reference to out is returned. 
+ + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> a = np.array([1,2,3]) + >>> np.cumprod(a) # intermediate results 1, 1*2 + ... # total product 1*2*3 = 6 + array([1, 2, 6]) + >>> a = np.array([[1, 2, 3], [4, 5, 6]]) + >>> np.cumprod(a, dtype=float) # specify type of output + array([ 1., 2., 6., 24., 120., 720.]) + + The cumulative product for each column (i.e., over the rows) of `a`: + + >>> np.cumprod(a, axis=0) + array([[ 1, 2, 3], + [ 4, 10, 18]]) + + The cumulative product for each row (i.e. over the columns) of `a`: + + >>> np.cumprod(a,axis=1) + array([[ 1, 2, 6], + [ 4, 20, 120]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def ndim(a): + """ + Return the number of dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. If it is not already an ndarray, a conversion is + attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in `a`. Scalars are zero-dimensional. + + See Also + -------- + ndarray.ndim : equivalent method + shape : dimensions of array + ndarray.shape : dimensions of array + + Examples + -------- + >>> np.ndim([[1,2,3],[4,5,6]]) + 2 + >>> np.ndim(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.ndim(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def rank(a): + """ + Return the number of dimensions of an array. + + If `a` is not already an array, a conversion is attempted. + Scalars are zero dimensional. + + Parameters + ---------- + a : array_like + Array whose number of dimensions is desired. If `a` is not an array, + a conversion is attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in the array. 
+ + See Also + -------- + ndim : equivalent function + ndarray.ndim : equivalent property + shape : dimensions of array + ndarray.shape : dimensions of array + + Notes + ----- + In the old Numeric package, `rank` was the term used for the number of + dimensions, but in Numpy `ndim` is used instead. + + Examples + -------- + >>> np.rank([1,2,3]) + 1 + >>> np.rank(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.rank(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def size(a, axis=None): + """ + Return the number of elements along a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which the elements are counted. By default, give + the total number of elements. + + Returns + ------- + element_count : int + Number of elements along the specified axis. + + See Also + -------- + shape : dimensions of array + ndarray.shape : dimensions of array + ndarray.size : number of elements in array + + Examples + -------- + >>> a = np.array([[1,2,3],[4,5,6]]) + >>> np.size(a) + 6 + >>> np.size(a,1) + 3 + >>> np.size(a,0) + 2 + + """ + raise NotImplemented('Waiting on interp level method') + + +def around(a, decimals=0, out=None): + """ + Evenly round to the given number of decimals. + + Parameters + ---------- + a : array_like + Input data. + decimals : int, optional + Number of decimal places to round to (default: 0). If + decimals is negative, it specifies the number of positions to + the left of the decimal point. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the output + values will be cast if necessary. See `doc.ufuncs` (Section + "Output arguments") for details. + + Returns + ------- + rounded_array : ndarray + An array of the same type as `a`, containing the rounded values. + Unless `out` was specified, a new array is created. A reference to + the result is returned. 
+ + The real and imaginary parts of complex numbers are rounded + separately. The result of rounding a float is a float. + + See Also + -------- + ndarray.round : equivalent method + + ceil, fix, floor, rint, trunc + + + Notes + ----- + For values exactly halfway between rounded decimal values, Numpy + rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, + -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due + to the inexact representation of decimal fractions in the IEEE + floating point standard [1]_ and errors introduced when scaling + by powers of ten. + + References + ---------- + .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan, + http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF + .. [2] "How Futile are Mindless Assessments of + Roundoff in Floating-Point Computation?", William Kahan, + http://www.cs.berkeley.edu/~wkahan/Mindless.pdf + + Examples + -------- + >>> np.around([0.37, 1.64]) + array([ 0., 2.]) + >>> np.around([0.37, 1.64], decimals=1) + array([ 0.4, 1.6]) + >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value + array([ 0., 2., 2., 4., 4.]) + >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned + array([ 1, 2, 3, 11]) + >>> np.around([1,2,3,11], decimals=-1) + array([ 0, 0, 0, 10]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def round_(a, decimals=0, out=None): + """ + Round an array to the given number of decimals. + + Refer to `around` for full documentation. + + See Also + -------- + around : equivalent function + + """ + raise NotImplementedError('Waiting on interp level method') + + +def mean(a, axis=None, dtype=None, out=None): + """ + Compute the arithmetic mean along the specified axis. + + Returns the average of the array elements. The average is taken over + the flattened array by default, otherwise over the specified axis. + `float64` intermediate and return values are used for integer inputs.
+ + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. 
+ + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean() + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. 
+ + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. 
+ + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Array containing numbers whose variance is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the variance is computed. The default is to compute + the variance of the flattened array. + dtype : data-type, optional + Type to use in computing the variance. For arrays of integer type + the default is `float32`; for arrays of float types it is the same as + the array type. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output, but the type is cast if + necessary. + ddof : int, optional + "Delta Degrees of Freedom": the divisor used in the calculation is + ``N - ddof``, where ``N`` represents the number of elements. By + default `ddof` is zero. + + Returns + ------- + variance : ndarray, see dtype parameter above + If ``out=None``, returns a new array containing the variance; + otherwise, a reference to the output array is returned. + + See Also + -------- + std : Standard deviation + mean : Average + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e., ``var = mean(abs(x - x.mean())**2)``. + + The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. + If, however, `ddof` is specified, the divisor ``N - ddof`` is used + instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of a hypothetical infinite population. + ``ddof=0`` provides a maximum likelihood estimate of the variance for + normally distributed variables. + + Note that for complex numbers, the absolute value is taken before + squaring, so that the result is always real and nonnegative. 
+ + For floating-point input, the variance is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for `float32` (see example + below). Specifying a higher-accuracy accumulator using the ``dtype`` + keyword can alleviate this issue. + + Examples + -------- + >>> a = np.array([[1,2],[3,4]]) + >>> np.var(a) + 1.25 + >>> np.var(a,0) + array([ 1., 1.]) + >>> np.var(a,1) + array([ 0.25, 0.25]) + + In single precision, var() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.var(a) + 0.20405951142311096 + + Computing the variance in float64 is more accurate: + + >>> np.var(a, dtype=np.float64) + 0.20249999932997387 + >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 + 0.20250000000000001 + + """ + if not hasattr(a, "var"): + a = numpypy.array(a) + return a.var() diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/test/test_fromnumeric.py @@ -0,0 +1,109 @@ + +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + +class AppTestFromNumeric(BaseNumpyAppTest): + def test_argmax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, argmax + a = arange(6).reshape((2,3)) + assert argmax(a) == 5 + # assert (argmax(a, axis=0) == array([1, 1, 1])).all() + # assert (argmax(a, axis=1) == array([2, 2])).all() + b = arange(6) + b[1] = 5 + assert argmax(b) == 1 + + def test_argmin(self): + # tests adapted from test_argmax + from numpypy import array, arange, argmin + a = arange(6).reshape((2,3)) + assert argmin(a) == 0 + # assert (argmin(a, axis=0) == array([0, 0, 0])).all() + # assert (argmin(a, axis=1) == array([0, 0])).all() + b = arange(6) + b[1] = 0 + assert argmin(b) == 0 + + def test_shape(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy
import array, identity, shape + assert shape(identity(3)) == (3, 3) + assert shape([[1, 2]]) == (1, 2) + assert shape([0]) == (1,) + assert shape(0) == () + # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + # assert shape(a) == (2,) + + def test_sum(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, sum, ones + assert sum([0.5, 1.5])== 2.0 + assert sum([[0, 1], [0, 5]]) == 6 + # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 + # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + # If the accumulator is too small, overflow occurs: + # assert ones(128, dtype=int8).sum(dtype=int8) == -128 + + def test_amin(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amin + a = arange(4).reshape((2,2)) + assert amin(a) == 0 + # # Minima along the first axis + # assert (amin(a, axis=0) == array([0, 1])).all() + # # Minima along the second axis + # assert (amin(a, axis=1) == array([0, 2])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amin(b) == nan + # assert nanmin(b) == 0.0 + + def test_amax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amax + a = arange(4).reshape((2,2)) + assert amax(a) == 3 + # assert (amax(a, axis=0) == array([2, 3])).all() + # assert (amax(a, axis=1) == array([1, 3])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amax(b) == nan + # assert nanmax(b) == 4.0 + + def test_alen(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, zeros, alen + a = zeros((7,4,5)) + assert a.shape[0] == 7 + assert alen(a) == 7 + + def test_ndim(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, ndim + assert ndim([[1,2,3],[4,5,6]]) == 2 + assert ndim(array([[1,2,3],[4,5,6]])) == 2 + 
assert ndim(1) == 0 + + def test_rank(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, rank + assert rank([[1,2,3],[4,5,6]]) == 2 + assert rank(array([[1,2,3],[4,5,6]])) == 2 + assert rank(1) == 0 + + def test_var(self): + from numpypy import array, var + a = array([[1,2],[3,4]]) + assert var(a) == 1.25 + # assert (var(a,0) == array([ 1., 1.])).all() + # assert (var(a,1) == array([ 0.25, 0.25])).all() + + def test_std(self): + from numpypy import array, std + a = array([[1, 2], [3, 4]]) + assert std(a) == 1.1180339887498949 + # assert (std(a, axis=0) == array([ 1., 1.])).all() + # assert (std(a, axis=1) == array([ 0.5, 0.5])).all() diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -180,7 +180,12 @@ if name is None: name = pyobj.func_name if signature is None: - signature = cpython_code_signature(pyobj.func_code) + if hasattr(pyobj, '_generator_next_method_of_'): + from pypy.interpreter.argument import Signature + signature = Signature(['entry']) # haaaaaack + defaults = () + else: + signature = cpython_code_signature(pyobj.func_code) if defaults is None: defaults = pyobj.func_defaults self.name = name @@ -252,7 +257,8 @@ try: inputcells = args.match_signature(signature, defs_s) except ArgErr, e: - raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) + raise TypeError("signature mismatch: %s() %s" % + (self.name, e.getmsg())) return inputcells def specialize(self, inputcells, op=None): diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -12,7 +12,7 @@ PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest +.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @@ -23,6 +23,7 @@ @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @@ -79,6 +80,11 @@ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man" + changes: python config/generate.py $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -175,15 +175,15 @@ RPython ================= -RPython Definition, not ------------------------ +RPython Definition +------------------ -The list and exact details of the "RPython" restrictions are a somewhat -evolving topic. In particular, we have no formal language definition -as we find it more practical to discuss and evolve the set of -restrictions while working on the whole program analysis. If you -have any questions about the restrictions below then please feel -free to mail us at pypy-dev at codespeak net. +RPython is a restricted subset of Python that is amenable to static analysis. +Although there are additions to the language and some things might surprisingly +work, this is a rough list of restrictions that should be considered. 
Note +that there are tons of special cased restrictions that you'll encounter +as you go. The exact definition is "RPython is everything that our translation +toolchain can accept" :) .. _`wrapped object`: coding-guide.html#wrapping-rules @@ -198,7 +198,7 @@ contain both a string and a int must be avoided. It is allowed to mix None (basically with the role of a null pointer) with many other types: `wrapped objects`, class instances, lists, dicts, strings, etc. - but *not* with int and floats. + but *not* with int, floats or tuples. **constants** @@ -209,9 +209,12 @@ have this restriction, so if you need mutable global state, store it in the attributes of some prebuilt singleton instance. + + **control structures** - all allowed but yield, ``for`` loops restricted to builtin types + all allowed, ``for`` loops restricted to builtin types, generators + very restricted. **range** @@ -226,7 +229,8 @@ **generators** - generators are not supported. + generators are supported, but their exact scope is very limited. You can't + merge two different generators in one control point. **exceptions** @@ -245,22 +249,27 @@ **strings** - a lot of, but not all string methods are supported. Indexes can be + a lot of, but not all string methods are supported, and those that are + supported do not necessarily accept all arguments. Indexes can be negative. In case they are not, then you get slightly more efficient code if the translator can prove that they are non-negative. When slicing a string it is necessary to prove that the slice start and - stop indexes are non-negative. + stop indexes are non-negative. There is no implicit str-to-unicode cast + anywhere. **tuples** no variable-length tuples; use them to store or return pairs or n-tuples of - values. Each combination of types for elements and length constitute a separate - and not mixable type. + values. Each combination of types for elements and length constitutes + a separate and not mixable type.
**lists** lists are used as an allocated array. Lists are over-allocated, so list.append() - is reasonably fast. Negative or out-of-bound indexes are only allowed for the + is reasonably fast. However, if you use a fixed-size list, the code + is more efficient. Annotator can figure out most of the time that your + list is fixed-size, even when you use list comprehension. + Negative or out-of-bound indexes are only allowed for the most common operations, as follows: - *indexing*: @@ -287,16 +296,14 @@ **dicts** - dicts with a unique key type only, provided it is hashable. - String keys have been the only allowed key types for a while, but this was generalized. - After some re-optimization, - the implementation could safely decide that all string dict keys should be interned. + dicts with a unique key type only, provided it is hashable. Custom + hash functions and custom equality will not be honored. + Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions. **list comprehensions** - may be used to create allocated, initialized arrays. - After list over-allocation was introduced, there is no longer any restriction. + May be used to create allocated, initialized arrays. **functions** @@ -334,9 +341,8 @@ **objects** - in PyPy, wrapped objects are borrowed from the object space. Just like - in CPython, code that needs e.g. a dictionary can use a wrapped dict - and the object space operations on it. + Normal rules apply. Special methods are not honoured, except ``__init__`` and + ``__del__``. This layout makes the number of types to take care about quite limited. diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -197,3 +197,10 @@ # Example configuration for intersphinx: refer to the Python standard library. 
intersphinx_mapping = {'http://docs.python.org/': None} +# -- Options for manpage output------------------------------------------------- + +man_pages = [ + ('man/pypy.1', 'pypy', + u'fast, compliant alternative implementation of the Python language', + u'The PyPy Project', 1) +] diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst --- a/pypy/doc/extradoc.rst +++ b/pypy/doc/extradoc.rst @@ -8,6 +8,9 @@ *Articles about PyPy published so far, most recent first:* (bibtex_ file) +* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_, + C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo + * `Allocation Removal by Partial Evaluation in a Tracing JIT`_, C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo @@ -50,6 +53,9 @@ *Other research using PyPy (as far as we know it):* +* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_, + N. Riley and C. Zilles + * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_, C. Bruni and T. Verwaest @@ -65,6 +71,7 @@ .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib +.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf .. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf @@ -74,6 +81,7 @@ .. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf .. 
_`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07 .. _`EU Reports`: index-report.html +.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz .. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7 diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/man/pypy.1.rst @@ -0,0 +1,90 @@ +====== + pypy +====== + +SYNOPSIS +======== + +``pypy`` [*options*] +[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ] +[*arg*\ ...] + +OPTIONS +======= + +-i + Inspect interactively after running script. + +-O + Dummy optimization flag for compatibility with C Python. + +-c *cmd* + Program passed in as CMD (terminates option list). + +-S + Do not ``import site`` on initialization. + +-u + Unbuffered binary ``stdout`` and ``stderr``. + +-h, --help + Show a help message and exit. + +-m *mod* + Library module to be run as a script (terminates option list). + +-W *arg* + Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*). + +-E + Ignore environment variables (such as ``PYTHONPATH``). + +--version + Print the PyPy version. + +--info + Print translation information about this PyPy executable. + +--jit *arg* + Low level JIT parameters. Format is + *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...] + + ``off`` + Disable the JIT. + + ``threshold=``\ *value* + Number of times a loop has to run for it to become hot. 
+ + ``function_threshold=``\ *value* + Number of times a function must run for it to become traced from + start. + + ``inlining=``\ *value* + Inline Python functions or not (``1``/``0``). + + ``loop_longevity=``\ *value* + A parameter controlling how long loops will be kept before being + freed, an estimate. + + ``max_retrace_guards=``\ *value* + Number of extra guards a retrace can cause. + + ``retrace_limit=``\ *value* + How many times we can try retracing before giving up. + + ``trace_eagerness=``\ *value* + Number of times a guard has to fail before we start compiling a + bridge. + + ``trace_limit=``\ *value* + Number of recorded operations before we abort tracing with + ``ABORT_TRACE_TOO_LONG``. + + ``enable_opts=``\ *value* + Optimizations to enable, or ``all``. + Warning, this option is dangerous, and should be avoided. + +SEE ALSO +======== + +**python**\ (1) diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py deleted file mode 100644 --- a/pypy/doc/tool/makecontributor.py +++ /dev/null @@ -1,47 +0,0 @@ -""" - -generates a contributor list - -""" -import py - -# this file is useless, use the following commandline instead: -# hg churn -c -t "{author}" | sed -e 's/ <.*//' - -try: - path = py.std.sys.argv[1] -except IndexError: - print "usage: %s ROOTPATH" %(py.std.sys.argv[0]) - raise SystemExit, 1 - -d = {} - -for logentry in py.path.svnwc(path).log(): - a = logentry.author - if a in d: - d[a] += 1 - else: - d[a] = 1 - -items = d.items() -items.sort(lambda x,y: -cmp(x[1], y[1])) - -import uconf # http://codespeak.net/svn/uconf/dist/uconf - -# Authors that don't want to be listed -excluded = set("anna gintas ignas".split()) -cutoff = 5 # cutoff for authors in the LICENSE file -mark = False -for author, count in items: - if author in excluded: - continue - user = uconf.system.User(author) - try: - realname = user.realname.strip() - except KeyError: - realname = author - if not mark and count < cutoff: - mark = True - print '-'*60 -
print " ", realname - #print count, " ", author diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - 
msg = "%s() got an unexpected keyword argument '%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1567,12 +1567,15 @@ 'ArithmeticError', 'AssertionError', 'AttributeError', + 'BaseException', + 'DeprecationWarning', 'EOFError', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', + 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', @@ -1593,7 +1596,10 @@ 'TabError', 'TypeError', 'UnboundLocalError', + 'UnicodeDecodeError', 'UnicodeError', + 'UnicodeEncodeError', + 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', 'UnicodeEncodeError', diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py --- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -2,7 +2,6 @@ This module defines the abstract base classes that support execution: Code and Frame. """ -from pypy.rlib import jit from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import Wrappable @@ -98,7 +97,6 @@ "Abstract. Get the expected number of locals." 
raise TypeError, "abstract" - @jit.dont_look_inside def fast2locals(self): # Copy values from the fastlocals to self.w_locals if self.w_locals is None: @@ -112,7 +110,6 @@ w_name = self.space.wrap(name) self.space.setitem(self.w_locals, w_name, w_value) - @jit.dont_look_inside def locals2fast(self): # Copy values from self.w_locals to the fastlocals assert self.w_locals is not None diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -619,7 +619,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -655,7 +656,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -674,7 +676,8 @@ self.descr_reqcls, args.prepend(w_obj)) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -690,7 +693,8 @@ raise OperationError(space.w_SystemError, space.wrap("unexpected DescrMismatch error")) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -708,7 +712,8 @@ self.descr_reqcls, Arguments(space, [w1])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -726,7 +731,8 @@ self.descr_reqcls, Arguments(space, [w1, w2])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -744,7 +750,8 @@ self.descr_reqcls, 
Arguments(space, [w1, w2, w3])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -763,7 +770,8 @@ Arguments(space, [w1, w2, w3, w4])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -162,7 +162,8 @@ # generate 2 versions of the function and 2 jit drivers. def _create_unpack_into(): jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results']) + reds=['self', 'frame', 'results'], + name='unpack_into') def unpack_into(self, results): """This is a hack for performance: runs the generator and collects all produced items in a list.""" @@ -196,4 +197,4 @@ self.frame = None return unpack_into unpack_into = _create_unpack_into() - unpack_into_w = _create_unpack_into() \ No newline at end of file + unpack_into_w = _create_unpack_into() diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise 
FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = 
err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -37,7 +37,7 @@ def 
get_arg_types(self): return self.arg_types - def get_return_type(self): + def get_result_type(self): return self.typeinfo def get_extra_info(self): diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -5,11 +5,7 @@ from pypy.jit.metainterp.history import AbstractDescr, getkind from pypy.jit.metainterp import history from pypy.jit.codewriter import heaptracker, longlong - -# The point of the class organization in this file is to make instances -# as compact as possible. This is done by not storing the field size or -# the 'is_pointer_field' flag in the instance itself but in the class -# (in methods actually) using a few classes instead of just one. +from pypy.jit.codewriter.longlong import is_longlong class GcCache(object): @@ -19,6 +15,7 @@ self._cache_size = {} self._cache_field = {} self._cache_array = {} + self._cache_arraylen = {} self._cache_call = {} self._cache_interiorfield = {} @@ -26,24 +23,15 @@ assert isinstance(STRUCT, lltype.GcStruct) def init_array_descr(self, ARRAY, arraydescr): - assert isinstance(ARRAY, lltype.GcArray) + assert (isinstance(ARRAY, lltype.GcArray) or + isinstance(ARRAY, lltype.GcStruct) and ARRAY._arrayfld) -if lltype.SignedLongLong is lltype.Signed: - def is_longlong(TYPE): - return False -else: - assert rffi.sizeof(lltype.SignedLongLong) == rffi.sizeof(lltype.Float) - def is_longlong(TYPE): - return TYPE in (lltype.SignedLongLong, lltype.UnsignedLongLong) - # ____________________________________________________________ # SizeDescrs class SizeDescr(AbstractDescr): size = 0 # help translation - is_immutable = False - tid = llop.combine_ushort(lltype.Signed, 0, 0) def __init__(self, size, count_fields_if_immut=-1): @@ -77,265 +65,247 @@ cache[STRUCT] = sizedescr return sizedescr + # ____________________________________________________________ # FieldDescrs -class BaseFieldDescr(AbstractDescr): +FLAG_POINTER = 'P' 
+FLAG_FLOAT = 'F' +FLAG_UNSIGNED = 'U' +FLAG_SIGNED = 'S' +FLAG_STRUCT = 'X' +FLAG_VOID = 'V' + +class FieldDescr(AbstractDescr): + name = '' offset = 0 # help translation - name = '' - _clsname = '' + field_size = 0 + flag = '\x00' - def __init__(self, name, offset): + def __init__(self, name, offset, field_size, flag): self.name = name self.offset = offset + self.field_size = field_size + self.flag = flag + + def is_pointer_field(self): + return self.flag == FLAG_POINTER + + def is_float_field(self): + return self.flag == FLAG_FLOAT + + def is_field_signed(self): + return self.flag == FLAG_SIGNED def sort_key(self): return self.offset - def get_field_size(self, translate_support_code): - raise NotImplementedError + def repr_of_descr(self): + return '' % (self.flag, self.name, self.offset) - _is_pointer_field = False # unless overridden by GcPtrFieldDescr - _is_float_field = False # unless overridden by FloatFieldDescr - _is_field_signed = False # unless overridden by XxxFieldDescr - - def is_pointer_field(self): - return self._is_pointer_field - - def is_float_field(self): - return self._is_float_field - - def is_field_signed(self): - return self._is_field_signed - - def repr_of_descr(self): - return '<%s %s %s>' % (self._clsname, self.name, self.offset) - -class DynamicFieldDescr(BaseFieldDescr): - def __init__(self, offset, fieldsize, is_pointer, is_float, is_signed): - self.offset = offset - self._fieldsize = fieldsize - self._is_pointer_field = is_pointer - self._is_float_field = is_float - self._is_field_signed = is_signed - - def get_field_size(self, translate_support_code): - return self._fieldsize - -class NonGcPtrFieldDescr(BaseFieldDescr): - _clsname = 'NonGcPtrFieldDescr' - def get_field_size(self, translate_support_code): - return symbolic.get_size_of_ptr(translate_support_code) - -class GcPtrFieldDescr(NonGcPtrFieldDescr): - _clsname = 'GcPtrFieldDescr' - _is_pointer_field = True - -def getFieldDescrClass(TYPE): - return getDescrClass(TYPE, 
BaseFieldDescr, GcPtrFieldDescr, - NonGcPtrFieldDescr, 'Field', 'get_field_size', - '_is_float_field', '_is_field_signed') def get_field_descr(gccache, STRUCT, fieldname): cache = gccache._cache_field try: return cache[STRUCT][fieldname] except KeyError: - offset, _ = symbolic.get_field_token(STRUCT, fieldname, - gccache.translate_support_code) + offset, size = symbolic.get_field_token(STRUCT, fieldname, + gccache.translate_support_code) FIELDTYPE = getattr(STRUCT, fieldname) + flag = get_type_flag(FIELDTYPE) name = '%s.%s' % (STRUCT._name, fieldname) - fielddescr = getFieldDescrClass(FIELDTYPE)(name, offset) + fielddescr = FieldDescr(name, offset, size, flag) cachedict = cache.setdefault(STRUCT, {}) cachedict[fieldname] = fielddescr return fielddescr +def get_type_flag(TYPE): + if isinstance(TYPE, lltype.Ptr): + if TYPE.TO._gckind == 'gc': + return FLAG_POINTER + else: + return FLAG_UNSIGNED + if isinstance(TYPE, lltype.Struct): + return FLAG_STRUCT + if TYPE is lltype.Float or is_longlong(TYPE): + return FLAG_FLOAT + if (TYPE is not lltype.Bool and isinstance(TYPE, lltype.Number) and + rffi.cast(TYPE, -1) == -1): + return FLAG_SIGNED + return FLAG_UNSIGNED + +def get_field_arraylen_descr(gccache, ARRAY_OR_STRUCT): + cache = gccache._cache_arraylen + try: + return cache[ARRAY_OR_STRUCT] + except KeyError: + tsc = gccache.translate_support_code + (_, _, ofs) = symbolic.get_array_token(ARRAY_OR_STRUCT, tsc) + size = symbolic.get_size(lltype.Signed, tsc) + result = FieldDescr("len", ofs, size, get_type_flag(lltype.Signed)) + cache[ARRAY_OR_STRUCT] = result + return result + + # ____________________________________________________________ # ArrayDescrs -_A = lltype.GcArray(lltype.Signed) # a random gcarray -_AF = lltype.GcArray(lltype.Float) # an array of C doubles +class ArrayDescr(AbstractDescr): + tid = 0 + basesize = 0 # workaround for the annotator + itemsize = 0 + lendescr = None + flag = '\x00' - -class BaseArrayDescr(AbstractDescr): - _clsname = '' - tid = 
llop.combine_ushort(lltype.Signed, 0, 0) - - def get_base_size(self, translate_support_code): - basesize, _, _ = symbolic.get_array_token(_A, translate_support_code) - return basesize - - def get_ofs_length(self, translate_support_code): - _, _, ofslength = symbolic.get_array_token(_A, translate_support_code) - return ofslength - - def get_item_size(self, translate_support_code): - raise NotImplementedError - - _is_array_of_pointers = False # unless overridden by GcPtrArrayDescr - _is_array_of_floats = False # unless overridden by FloatArrayDescr - _is_array_of_structs = False # unless overridden by StructArrayDescr - _is_item_signed = False # unless overridden by XxxArrayDescr + def __init__(self, basesize, itemsize, lendescr, flag): + self.basesize = basesize + self.itemsize = itemsize + self.lendescr = lendescr # or None, if no length + self.flag = flag def is_array_of_pointers(self): - return self._is_array_of_pointers + return self.flag == FLAG_POINTER def is_array_of_floats(self): - return self._is_array_of_floats + return self.flag == FLAG_FLOAT + + def is_item_signed(self): + return self.flag == FLAG_SIGNED def is_array_of_structs(self): - return self._is_array_of_structs - - def is_item_signed(self): - return self._is_item_signed + return self.flag == FLAG_STRUCT def repr_of_descr(self): - return '<%s>' % self._clsname + return '' % (self.flag, self.itemsize) -class NonGcPtrArrayDescr(BaseArrayDescr): - _clsname = 'NonGcPtrArrayDescr' - def get_item_size(self, translate_support_code): - return symbolic.get_size_of_ptr(translate_support_code) - -class GcPtrArrayDescr(NonGcPtrArrayDescr): - _clsname = 'GcPtrArrayDescr' - _is_array_of_pointers = True - -class FloatArrayDescr(BaseArrayDescr): - _clsname = 'FloatArrayDescr' - _is_array_of_floats = True - def get_base_size(self, translate_support_code): - basesize, _, _ = symbolic.get_array_token(_AF, translate_support_code) - return basesize - def get_item_size(self, translate_support_code): - return 
symbolic.get_size(lltype.Float, translate_support_code) - -class StructArrayDescr(BaseArrayDescr): - _clsname = 'StructArrayDescr' - _is_array_of_structs = True - -class BaseArrayNoLengthDescr(BaseArrayDescr): - def get_base_size(self, translate_support_code): - return 0 - - def get_ofs_length(self, translate_support_code): - return -1 - -class DynamicArrayNoLengthDescr(BaseArrayNoLengthDescr): - def __init__(self, itemsize): - self.itemsize = itemsize - - def get_item_size(self, translate_support_code): - return self.itemsize - -class NonGcPtrArrayNoLengthDescr(BaseArrayNoLengthDescr): - _clsname = 'NonGcPtrArrayNoLengthDescr' - def get_item_size(self, translate_support_code): - return symbolic.get_size_of_ptr(translate_support_code) - -class GcPtrArrayNoLengthDescr(NonGcPtrArrayNoLengthDescr): - _clsname = 'GcPtrArrayNoLengthDescr' - _is_array_of_pointers = True - -def getArrayDescrClass(ARRAY): - if ARRAY.OF is lltype.Float: - return FloatArrayDescr - elif isinstance(ARRAY.OF, lltype.Struct): - class Descr(StructArrayDescr): - _clsname = '%sArrayDescr' % ARRAY.OF._name - def get_item_size(self, translate_support_code): - return symbolic.get_size(ARRAY.OF, translate_support_code) - Descr.__name__ = Descr._clsname - return Descr - return getDescrClass(ARRAY.OF, BaseArrayDescr, GcPtrArrayDescr, - NonGcPtrArrayDescr, 'Array', 'get_item_size', - '_is_array_of_floats', '_is_item_signed') - -def getArrayNoLengthDescrClass(ARRAY): - return getDescrClass(ARRAY.OF, BaseArrayNoLengthDescr, GcPtrArrayNoLengthDescr, - NonGcPtrArrayNoLengthDescr, 'ArrayNoLength', 'get_item_size', - '_is_array_of_floats', '_is_item_signed') - -def get_array_descr(gccache, ARRAY): +def get_array_descr(gccache, ARRAY_OR_STRUCT): cache = gccache._cache_array try: - return cache[ARRAY] + return cache[ARRAY_OR_STRUCT] except KeyError: - # we only support Arrays that are either GcArrays, or raw no-length - # non-gc Arrays. 
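[editor's note: the descr.py diff above collapses the old one-subclass-per-type hierarchy (GcPtrArrayDescr, FloatArrayDescr, StructArrayDescr, ...) into a single descriptor class tagged with a one-character flag. A minimal, self-contained sketch of that pattern follows; the FLAG_* names and method names mirror the diff, but the class itself is a hypothetical illustration, not PyPy's actual FieldDescr.]

```python
# Sketch of the flag-based descriptor pattern from this commit.
# One class + one-character flag replaces a family of subclasses,
# which keeps instances small and avoids per-type class generation.

FLAG_POINTER = 'P'
FLAG_FLOAT = 'F'
FLAG_UNSIGNED = 'U'
FLAG_SIGNED = 'S'

class FieldDescr(object):
    def __init__(self, name, offset, field_size, flag):
        self.name = name
        self.offset = offset
        self.field_size = field_size
        self.flag = flag

    def is_pointer_field(self):
        return self.flag == FLAG_POINTER

    def is_float_field(self):
        return self.flag == FLAG_FLOAT

    def is_field_signed(self):
        return self.flag == FLAG_SIGNED

    def repr_of_descr(self):
        # format string is our own choice, for illustration only
        return '<Field%s %s %s>' % (self.flag, self.name, self.offset)
```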
- if ARRAY._hints.get('nolength', False): - assert not isinstance(ARRAY, lltype.GcArray) - arraydescr = getArrayNoLengthDescrClass(ARRAY)() + tsc = gccache.translate_support_code + basesize, itemsize, _ = symbolic.get_array_token(ARRAY_OR_STRUCT, tsc) + if isinstance(ARRAY_OR_STRUCT, lltype.Array): + ARRAY_INSIDE = ARRAY_OR_STRUCT else: - assert isinstance(ARRAY, lltype.GcArray) - arraydescr = getArrayDescrClass(ARRAY)() - # verify basic assumption that all arrays' basesize and ofslength - # are equal - basesize, itemsize, ofslength = symbolic.get_array_token(ARRAY, False) - assert basesize == arraydescr.get_base_size(False) - assert itemsize == arraydescr.get_item_size(False) - if not ARRAY._hints.get('nolength', False): - assert ofslength == arraydescr.get_ofs_length(False) - if isinstance(ARRAY, lltype.GcArray): - gccache.init_array_descr(ARRAY, arraydescr) - cache[ARRAY] = arraydescr + ARRAY_INSIDE = ARRAY_OR_STRUCT._flds[ARRAY_OR_STRUCT._arrayfld] + if ARRAY_INSIDE._hints.get('nolength', False): + lendescr = None + else: + lendescr = get_field_arraylen_descr(gccache, ARRAY_OR_STRUCT) + flag = get_type_flag(ARRAY_INSIDE.OF) + arraydescr = ArrayDescr(basesize, itemsize, lendescr, flag) + if ARRAY_OR_STRUCT._gckind == 'gc': + gccache.init_array_descr(ARRAY_OR_STRUCT, arraydescr) + cache[ARRAY_OR_STRUCT] = arraydescr return arraydescr + # ____________________________________________________________ # InteriorFieldDescr class InteriorFieldDescr(AbstractDescr): - arraydescr = BaseArrayDescr() # workaround for the annotator - fielddescr = BaseFieldDescr('', 0) + arraydescr = ArrayDescr(0, 0, None, '\x00') # workaround for the annotator + fielddescr = FieldDescr('', 0, 0, '\x00') def __init__(self, arraydescr, fielddescr): + assert arraydescr.flag == FLAG_STRUCT self.arraydescr = arraydescr self.fielddescr = fielddescr + def sort_key(self): + return self.fielddescr.sort_key() + def is_pointer_field(self): return self.fielddescr.is_pointer_field() def 
is_float_field(self): return self.fielddescr.is_float_field() - def sort_key(self): - return self.fielddescr.sort_key() - def repr_of_descr(self): return '' % self.fielddescr.repr_of_descr() -def get_interiorfield_descr(gc_ll_descr, ARRAY, FIELDTP, name): +def get_interiorfield_descr(gc_ll_descr, ARRAY, name): cache = gc_ll_descr._cache_interiorfield try: - return cache[(ARRAY, FIELDTP, name)] + return cache[(ARRAY, name)] except KeyError: arraydescr = get_array_descr(gc_ll_descr, ARRAY) - fielddescr = get_field_descr(gc_ll_descr, FIELDTP, name) + fielddescr = get_field_descr(gc_ll_descr, ARRAY.OF, name) descr = InteriorFieldDescr(arraydescr, fielddescr) - cache[(ARRAY, FIELDTP, name)] = descr + cache[(ARRAY, name)] = descr return descr +def get_dynamic_interiorfield_descr(gc_ll_descr, offset, width, fieldsize, + is_pointer, is_float, is_signed): + arraydescr = ArrayDescr(0, width, None, FLAG_STRUCT) + if is_pointer: + assert not is_float + flag = FLAG_POINTER + elif is_float: + flag = FLAG_FLOAT + elif is_signed: + flag = FLAG_SIGNED + else: + flag = FLAG_UNSIGNED + fielddescr = FieldDescr('dynamic', offset, fieldsize, flag) + return InteriorFieldDescr(arraydescr, fielddescr) + + # ____________________________________________________________ # CallDescrs -class BaseCallDescr(AbstractDescr): - _clsname = '' - loop_token = None +class CallDescr(AbstractDescr): arg_classes = '' # <-- annotation hack + result_type = '\x00' + result_flag = '\x00' ffi_flags = 1 + call_stub_i = staticmethod(lambda func, args_i, args_r, args_f: + 0) + call_stub_r = staticmethod(lambda func, args_i, args_r, args_f: + lltype.nullptr(llmemory.GCREF.TO)) + call_stub_f = staticmethod(lambda func,args_i,args_r,args_f: + longlong.ZEROF) - def __init__(self, arg_classes, extrainfo=None, ffi_flags=1): - self.arg_classes = arg_classes # string of "r" and "i" (ref/int) + def __init__(self, arg_classes, result_type, result_signed, result_size, + extrainfo=None, ffi_flags=1): + """ + 'arg_classes' is 
a string of characters, one per argument: + 'i', 'r', 'f', 'L', 'S' + + 'result_type' is one character from the same list or 'v' + + 'result_signed' is a boolean True/False + """ + self.arg_classes = arg_classes + self.result_type = result_type + self.result_size = result_size self.extrainfo = extrainfo self.ffi_flags = ffi_flags # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which # makes sense on Windows as it's the one for all the C functions # we are compiling together with the JIT. On non-Windows platforms # it is just ignored anyway. + if result_type == 'v': + result_flag = FLAG_VOID + elif result_type == 'i': + if result_signed: + result_flag = FLAG_SIGNED + else: + result_flag = FLAG_UNSIGNED + elif result_type == history.REF: + result_flag = FLAG_POINTER + elif result_type == history.FLOAT or result_type == 'L': + result_flag = FLAG_FLOAT + elif result_type == 'S': + result_flag = FLAG_UNSIGNED + else: + raise NotImplementedError("result_type = '%s'" % (result_type,)) + self.result_flag = result_flag def __repr__(self): - res = '%s(%s)' % (self.__class__.__name__, self.arg_classes) + res = 'CallDescr(%s)' % (self.arg_classes,) extraeffect = getattr(self.extrainfo, 'extraeffect', None) if extraeffect is not None: res += ' EF=%r' % extraeffect @@ -363,14 +333,14 @@ def get_arg_types(self): return self.arg_classes - def get_return_type(self): - return self._return_type + def get_result_type(self): + return self.result_type - def get_result_size(self, translate_support_code): - raise NotImplementedError + def get_result_size(self): + return self.result_size def is_result_signed(self): - return False # unless overridden + return self.result_flag == FLAG_SIGNED def create_call_stub(self, rtyper, RESULT): from pypy.rlib.clibffi import FFI_DEFAULT_ABI @@ -408,18 +378,26 @@ seen = {'i': 0, 'r': 0, 'f': 0} args = ", ".join([process(c) for c in self.arg_classes]) - if self.get_return_type() == history.INT: + result_type = self.get_result_type() + if 
result_type == history.INT: result = 'rffi.cast(lltype.Signed, res)' - elif self.get_return_type() == history.REF: + category = 'i' + elif result_type == history.REF: + assert RESULT == llmemory.GCREF # should be ensured by the caller result = 'lltype.cast_opaque_ptr(llmemory.GCREF, res)' - elif self.get_return_type() == history.FLOAT: + category = 'r' + elif result_type == history.FLOAT: result = 'longlong.getfloatstorage(res)' - elif self.get_return_type() == 'L': + category = 'f' + elif result_type == 'L': result = 'rffi.cast(lltype.SignedLongLong, res)' - elif self.get_return_type() == history.VOID: - result = 'None' - elif self.get_return_type() == 'S': + category = 'f' + elif result_type == history.VOID: + result = '0' + category = 'i' + elif result_type == 'S': result = 'longlong.singlefloat2int(res)' + category = 'i' else: assert 0 source = py.code.Source(""" @@ -433,10 +411,13 @@ d = globals().copy() d.update(locals()) exec source.compile() in d - self.call_stub = d['call_stub'] + call_stub = d['call_stub'] + # store the function into one of three attributes, to preserve + # type-correctness of the return value + setattr(self, 'call_stub_%s' % category, call_stub) def verify_types(self, args_i, args_r, args_f, return_type): - assert self._return_type in return_type + assert self.result_type in return_type assert (self.arg_classes.count('i') + self.arg_classes.count('S')) == len(args_i or ()) assert self.arg_classes.count('r') == len(args_r or ()) @@ -444,161 +425,56 @@ self.arg_classes.count('L')) == len(args_f or ()) def repr_of_descr(self): - return '<%s>' % self._clsname + res = 'Call%s %d' % (self.result_type, self.result_size) + if self.arg_classes: + res += ' ' + self.arg_classes + if self.extrainfo: + res += ' EF=%d' % self.extrainfo.extraeffect + oopspecindex = self.extrainfo.oopspecindex + if oopspecindex: + res += ' OS=%d' % oopspecindex + return '<%s>' % res -class BaseIntCallDescr(BaseCallDescr): - # Base class of the various subclasses of 
descrs corresponding to - # calls having a return kind of 'int' (including non-gc pointers). - # The inheritance hierarchy is a bit different than with other Descr - # classes because of the 'call_stub' attribute, which is of type - # - # lambda func, args_i, args_r, args_f --> int/ref/float/void - # - # The purpose of BaseIntCallDescr is to be the parent of all classes - # in which 'call_stub' has a return kind of 'int'. - _return_type = history.INT - call_stub = staticmethod(lambda func, args_i, args_r, args_f: 0) - - _is_result_signed = False # can be overridden in XxxCallDescr - def is_result_signed(self): - return self._is_result_signed - -class DynamicIntCallDescr(BaseIntCallDescr): - """ - calldescr that works for every integer type, by explicitly passing it the - size of the result. Used only by get_call_descr_dynamic - """ - _clsname = 'DynamicIntCallDescr' - - def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags): - BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags) - assert isinstance(result_sign, bool) - self._result_size = chr(result_size) - self._result_sign = result_sign - - def get_result_size(self, translate_support_code): - return ord(self._result_size) - - def is_result_signed(self): - return self._result_sign - - -class NonGcPtrCallDescr(BaseIntCallDescr): - _clsname = 'NonGcPtrCallDescr' - def get_result_size(self, translate_support_code): - return symbolic.get_size_of_ptr(translate_support_code) - -class GcPtrCallDescr(BaseCallDescr): - _clsname = 'GcPtrCallDescr' - _return_type = history.REF - call_stub = staticmethod(lambda func, args_i, args_r, args_f: - lltype.nullptr(llmemory.GCREF.TO)) - def get_result_size(self, translate_support_code): - return symbolic.get_size_of_ptr(translate_support_code) - -class FloatCallDescr(BaseCallDescr): - _clsname = 'FloatCallDescr' - _return_type = history.FLOAT - call_stub = staticmethod(lambda func,args_i,args_r,args_f: longlong.ZEROF) - def get_result_size(self, 
translate_support_code): - return symbolic.get_size(lltype.Float, translate_support_code) - -class LongLongCallDescr(FloatCallDescr): - _clsname = 'LongLongCallDescr' - _return_type = 'L' - -class VoidCallDescr(BaseCallDescr): - _clsname = 'VoidCallDescr' - _return_type = history.VOID - call_stub = staticmethod(lambda func, args_i, args_r, args_f: None) - def get_result_size(self, translate_support_code): - return 0 - -_SingleFloatCallDescr = None # built lazily - -def getCallDescrClass(RESULT): - if RESULT is lltype.Void: - return VoidCallDescr - if RESULT is lltype.Float: - return FloatCallDescr - if RESULT is lltype.SingleFloat: - global _SingleFloatCallDescr - if _SingleFloatCallDescr is None: - assert rffi.sizeof(rffi.UINT) == rffi.sizeof(RESULT) - class SingleFloatCallDescr(getCallDescrClass(rffi.UINT)): - _clsname = 'SingleFloatCallDescr' - _return_type = 'S' - _SingleFloatCallDescr = SingleFloatCallDescr - return _SingleFloatCallDescr - if is_longlong(RESULT): - return LongLongCallDescr - return getDescrClass(RESULT, BaseIntCallDescr, GcPtrCallDescr, - NonGcPtrCallDescr, 'Call', 'get_result_size', - Ellipsis, # <= floatattrname should not be used here - '_is_result_signed') -getCallDescrClass._annspecialcase_ = 'specialize:memo' +def map_type_to_argclass(ARG, accept_void=False): + kind = getkind(ARG) + if kind == 'int': + if ARG is lltype.SingleFloat: return 'S' + else: return 'i' + elif kind == 'ref': return 'r' + elif kind == 'float': + if is_longlong(ARG): return 'L' + else: return 'f' + elif kind == 'void': + if accept_void: return 'v' + raise NotImplementedError('ARG = %r' % (ARG,)) def get_call_descr(gccache, ARGS, RESULT, extrainfo=None): - arg_classes = [] - for ARG in ARGS: - kind = getkind(ARG) - if kind == 'int': - if ARG is lltype.SingleFloat: - arg_classes.append('S') + arg_classes = map(map_type_to_argclass, ARGS) + arg_classes = ''.join(arg_classes) + result_type = map_type_to_argclass(RESULT, accept_void=True) + RESULT_ERASED = RESULT + if 
RESULT is lltype.Void: + result_size = 0 + result_signed = False + else: + if isinstance(RESULT, lltype.Ptr): + # avoid too many CallDescrs + if result_type == 'r': + RESULT_ERASED = llmemory.GCREF else: - arg_classes.append('i') - elif kind == 'ref': arg_classes.append('r') - elif kind == 'float': - if is_longlong(ARG): - arg_classes.append('L') - else: - arg_classes.append('f') - else: - raise NotImplementedError('ARG = %r' % (ARG,)) - arg_classes = ''.join(arg_classes) - cls = getCallDescrClass(RESULT) - key = (cls, arg_classes, extrainfo) + RESULT_ERASED = llmemory.Address + result_size = symbolic.get_size(RESULT_ERASED, + gccache.translate_support_code) + result_signed = get_type_flag(RESULT) == FLAG_SIGNED + key = (arg_classes, result_type, result_signed, RESULT_ERASED, extrainfo) cache = gccache._cache_call try: - return cache[key] + calldescr = cache[key] except KeyError: - calldescr = cls(arg_classes, extrainfo) - calldescr.create_call_stub(gccache.rtyper, RESULT) + calldescr = CallDescr(arg_classes, result_type, result_signed, + result_size, extrainfo) + calldescr.create_call_stub(gccache.rtyper, RESULT_ERASED) cache[key] = calldescr - return calldescr - - -# ____________________________________________________________ - -def getDescrClass(TYPE, BaseDescr, GcPtrDescr, NonGcPtrDescr, - nameprefix, methodname, floatattrname, signedattrname, - _cache={}): - if isinstance(TYPE, lltype.Ptr): - if TYPE.TO._gckind == 'gc': - return GcPtrDescr - else: - return NonGcPtrDescr - if TYPE is lltype.SingleFloat: - assert rffi.sizeof(rffi.UINT) == rffi.sizeof(TYPE) - TYPE = rffi.UINT - try: - return _cache[nameprefix, TYPE] - except KeyError: - # - class Descr(BaseDescr): - _clsname = '%s%sDescr' % (TYPE._name, nameprefix) - Descr.__name__ = Descr._clsname - # - def method(self, translate_support_code): - return symbolic.get_size(TYPE, translate_support_code) - setattr(Descr, methodname, method) - # - if TYPE is lltype.Float or is_longlong(TYPE): - setattr(Descr, 
floatattrname, True) - elif (TYPE is not lltype.Bool and isinstance(TYPE, lltype.Number) and - rffi.cast(TYPE, -1) == -1): - setattr(Descr, signedattrname, True) - # - _cache[nameprefix, TYPE] = Descr - return Descr + assert repr(calldescr.result_size) == repr(result_size) + return calldescr diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py --- a/pypy/jit/backend/llsupport/ffisupport.py +++ b/pypy/jit/backend/llsupport/ffisupport.py @@ -1,9 +1,7 @@ from pypy.rlib.rarithmetic import intmask from pypy.jit.metainterp import history from pypy.rpython.lltypesystem import rffi -from pypy.jit.backend.llsupport.descr import ( - DynamicIntCallDescr, NonGcPtrCallDescr, FloatCallDescr, VoidCallDescr, - LongLongCallDescr, getCallDescrClass) +from pypy.jit.backend.llsupport.descr import CallDescr class UnsupportedKind(Exception): pass @@ -16,29 +14,13 @@ argkinds = [get_ffi_type_kind(cpu, arg) for arg in ffi_args] except UnsupportedKind: return None - arg_classes = ''.join(argkinds) - if reskind == history.INT: - size = intmask(ffi_result.c_size) - signed = is_ffi_type_signed(ffi_result) - return DynamicIntCallDescr(arg_classes, size, signed, extrainfo, - ffi_flags=ffi_flags) - elif reskind == history.REF: - return NonGcPtrCallDescr(arg_classes, extrainfo, - ffi_flags=ffi_flags) - elif reskind == history.FLOAT: - return FloatCallDescr(arg_classes, extrainfo, - ffi_flags=ffi_flags) - elif reskind == history.VOID: - return VoidCallDescr(arg_classes, extrainfo, - ffi_flags=ffi_flags) - elif reskind == 'L': - return LongLongCallDescr(arg_classes, extrainfo, - ffi_flags=ffi_flags) - elif reskind == 'S': - SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT) - return SingleFloatCallDescr(arg_classes, extrainfo, - ffi_flags=ffi_flags) - assert False + if reskind == history.VOID: + result_size = 0 + else: + result_size = intmask(ffi_result.c_size) + argkinds = ''.join(argkinds) + return CallDescr(argkinds, reskind, 
is_ffi_type_signed(ffi_result), + result_size, extrainfo, ffi_flags=ffi_flags) def get_ffi_type_kind(cpu, ffi_type): from pypy.rlib.libffi import types diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py --- a/pypy/jit/backend/llsupport/gc.py +++ b/pypy/jit/backend/llsupport/gc.py @@ -1,6 +1,6 @@ import os from pypy.rlib import rgc -from pypy.rlib.objectmodel import we_are_translated +from pypy.rlib.objectmodel import we_are_translated, specialize from pypy.rlib.debug import fatalerror from pypy.rlib.rarithmetic import ovfcheck from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass, rstr @@ -8,52 +8,93 @@ from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper from pypy.translator.tool.cbuild import ExternalCompilationInfo -from pypy.jit.metainterp.history import BoxInt, BoxPtr, ConstInt, ConstPtr -from pypy.jit.metainterp.history import AbstractDescr +from pypy.jit.codewriter import heaptracker +from pypy.jit.metainterp.history import ConstPtr, AbstractDescr from pypy.jit.metainterp.resoperation import ResOperation, rop from pypy.jit.backend.llsupport import symbolic from pypy.jit.backend.llsupport.symbolic import WORD -from pypy.jit.backend.llsupport.descr import BaseSizeDescr, BaseArrayDescr +from pypy.jit.backend.llsupport.descr import SizeDescr, ArrayDescr from pypy.jit.backend.llsupport.descr import GcCache, get_field_descr -from pypy.jit.backend.llsupport.descr import GcPtrFieldDescr +from pypy.jit.backend.llsupport.descr import get_array_descr from pypy.jit.backend.llsupport.descr import get_call_descr +from pypy.jit.backend.llsupport.rewrite import GcRewriterAssembler from pypy.rpython.memory.gctransform import asmgcroot # ____________________________________________________________ class GcLLDescription(GcCache): - minimal_size_in_nursery = 0 - get_malloc_slowpath_addr = None def __init__(self, gcdescr, translator=None, rtyper=None): GcCache.__init__(self, translator 
is not None, rtyper) self.gcdescr = gcdescr + if translator and translator.config.translation.gcremovetypeptr: + self.fielddescr_vtable = None + else: + self.fielddescr_vtable = get_field_descr(self, rclass.OBJECT, + 'typeptr') + self._generated_functions = [] + + def _setup_str(self): + self.str_descr = get_array_descr(self, rstr.STR) + self.unicode_descr = get_array_descr(self, rstr.UNICODE) + + def generate_function(self, funcname, func, ARGS, RESULT=llmemory.GCREF): + """Generates a variant of malloc with the given name and the given + arguments. It should return NULL if out of memory. If it raises + anything, it must be an optional MemoryError. + """ + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT)) + descr = get_call_descr(self, ARGS, RESULT) + setattr(self, funcname, func) + setattr(self, funcname + '_FUNCPTR', FUNCPTR) + setattr(self, funcname + '_descr', descr) + self._generated_functions.append(funcname) + + @specialize.arg(1) + def get_malloc_fn(self, funcname): + func = getattr(self, funcname) + FUNC = getattr(self, funcname + '_FUNCPTR') + return llhelper(FUNC, func) + + @specialize.arg(1) + def get_malloc_fn_addr(self, funcname): + ll_func = self.get_malloc_fn(funcname) + return heaptracker.adr2int(llmemory.cast_ptr_to_adr(ll_func)) + def _freeze_(self): return True def initialize(self): pass def do_write_barrier(self, gcref_struct, gcref_newptr): pass - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - return operations - def can_inline_malloc(self, descr): - return False - def can_inline_malloc_varsize(self, descr, num_elem): + def can_use_nursery_malloc(self, size): return False def has_write_barrier_class(self): return None def freeing_block(self, start, stop): pass + def get_nursery_free_addr(self): + raise NotImplementedError + def get_nursery_top_addr(self): + raise NotImplementedError - def get_funcptr_for_newarray(self): - return llhelper(self.GC_MALLOC_ARRAY, self.malloc_array) - def get_funcptr_for_newstr(self): - 
return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_str) - def get_funcptr_for_newunicode(self): - return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_unicode) + def gc_malloc(self, sizedescr): + """Blackhole: do a 'bh_new'. Also used for 'bh_new_with_vtable', + with the vtable pointer set manually afterwards.""" + assert isinstance(sizedescr, SizeDescr) + return self._bh_malloc(sizedescr) + def gc_malloc_array(self, arraydescr, num_elem): + assert isinstance(arraydescr, ArrayDescr) + return self._bh_malloc_array(arraydescr, num_elem) - def record_constptrs(self, op, gcrefs_output_list): + def gc_malloc_str(self, num_elem): + return self._bh_malloc_array(self.str_descr, num_elem) + + def gc_malloc_unicode(self, num_elem): + return self._bh_malloc_array(self.unicode_descr, num_elem) + + def _record_constptrs(self, op, gcrefs_output_list): for i in range(op.numargs()): v = op.getarg(i) if isinstance(v, ConstPtr) and bool(v.value): @@ -61,11 +102,27 @@ rgc._make_sure_does_not_move(p) gcrefs_output_list.append(p) + def rewrite_assembler(self, cpu, operations, gcrefs_output_list): + rewriter = GcRewriterAssembler(self, cpu) + newops = rewriter.rewrite(operations) + # record all GCREFs, because the GC (or Boehm) cannot see them and + # keep them alive if they end up as constants in the assembler + for op in newops: + self._record_constptrs(op, gcrefs_output_list) + return newops + # ____________________________________________________________ class GcLLDescr_boehm(GcLLDescription): - moving_gc = False - gcrootmap = None + kind = 'boehm' + moving_gc = False + round_up = False + gcrootmap = None + write_barrier_descr = None + fielddescr_tid = None + str_type_id = 0 + unicode_type_id = 0 + get_malloc_slowpath_addr = None @classmethod def configure_boehm_once(cls): @@ -76,6 +133,16 @@ from pypy.rpython.tool import rffi_platform compilation_info = rffi_platform.configure_boehm() + # on some platform GC_init is required before any other + # GC_* functions, call it here 
for the benefit of tests + # XXX move this to tests + init_fn_ptr = rffi.llexternal("GC_init", + [], lltype.Void, + compilation_info=compilation_info, + sandboxsafe=True, + _nowrapper=True) + init_fn_ptr() + # Versions 6.x of libgc needs to use GC_local_malloc(). # Versions 7.x of libgc removed this function; GC_malloc() has # the same behavior if libgc was compiled with @@ -95,96 +162,42 @@ sandboxsafe=True, _nowrapper=True) cls.malloc_fn_ptr = malloc_fn_ptr - cls.compilation_info = compilation_info return malloc_fn_ptr def __init__(self, gcdescr, translator, rtyper): GcLLDescription.__init__(self, gcdescr, translator, rtyper) # grab a pointer to the Boehm 'malloc' function - malloc_fn_ptr = self.configure_boehm_once() - self.funcptr_for_new = malloc_fn_ptr + self.malloc_fn_ptr = self.configure_boehm_once() + self._setup_str() + self._make_functions() - def malloc_array(basesize, itemsize, ofs_length, num_elem): + def _make_functions(self): + + def malloc_fixedsize(size): + return self.malloc_fn_ptr(size) + self.generate_function('malloc_fixedsize', malloc_fixedsize, + [lltype.Signed]) + + def malloc_array(basesize, num_elem, itemsize, ofs_length): try: - size = ovfcheck(basesize + ovfcheck(itemsize * num_elem)) + totalsize = ovfcheck(basesize + ovfcheck(itemsize * num_elem)) except OverflowError: return lltype.nullptr(llmemory.GCREF.TO) - res = self.funcptr_for_new(size) - if not res: - return res - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem + res = self.malloc_fn_ptr(totalsize) + if res: + arrayptr = rffi.cast(rffi.CArrayPtr(lltype.Signed), res) + arrayptr[ofs_length/WORD] = num_elem return res - self.malloc_array = malloc_array - self.GC_MALLOC_ARRAY = lltype.Ptr(lltype.FuncType( - [lltype.Signed] * 4, llmemory.GCREF)) + self.generate_function('malloc_array', malloc_array, + [lltype.Signed] * 4) + def _bh_malloc(self, sizedescr): + return self.malloc_fixedsize(sizedescr.size) - (str_basesize, str_itemsize, str_ofs_length - ) = 
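The Boehm `malloc_array` above computes `totalsize = ovfcheck(basesize + ovfcheck(itemsize * num_elem))` and, rather than propagating the `OverflowError`, returns a NULL GCREF so the assembler can test the result cheaply. A sketch of that overflow-checked size computation; since plain Python integers never overflow, the check against a machine-word maximum is emulated explicitly here:

```python
import sys

def checked_array_size(basesize, itemsize, num_elem, maxint=sys.maxsize):
    """Compute basesize + itemsize * num_elem, returning None on
    overflow -- mirroring the nested ovfcheck() pattern in
    malloc_array(), which returns NULL instead of raising."""
    total = itemsize * num_elem
    if total > maxint:          # inner ovfcheck()
        return None
    total += basesize
    if total > maxint:          # outer ovfcheck()
        return None
    return total

checked_array_size(8, 8, 10)           # -> 88
checked_array_size(8, 8, sys.maxsize)  # -> None: would overflow
```

Returning a sentinel instead of raising matches the comment elsewhere in this file: checking for a NULL return in generated code is easier and faster than going through the exception transformer.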
symbolic.get_array_token(rstr.STR, self.translate_support_code) - (unicode_basesize, unicode_itemsize, unicode_ofs_length - ) = symbolic.get_array_token(rstr.UNICODE, self.translate_support_code) - def malloc_str(length): - return self.malloc_array( - str_basesize, str_itemsize, str_ofs_length, length - ) - def malloc_unicode(length): - return self.malloc_array( - unicode_basesize, unicode_itemsize, unicode_ofs_length, length - ) - self.malloc_str = malloc_str - self.malloc_unicode = malloc_unicode - self.GC_MALLOC_STR_UNICODE = lltype.Ptr(lltype.FuncType( - [lltype.Signed], llmemory.GCREF)) - - - # on some platform GC_init is required before any other - # GC_* functions, call it here for the benefit of tests - # XXX move this to tests - init_fn_ptr = rffi.llexternal("GC_init", - [], lltype.Void, - compilation_info=self.compilation_info, - sandboxsafe=True, - _nowrapper=True) - - init_fn_ptr() - - def gc_malloc(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return self.funcptr_for_new(sizedescr.size) - - def gc_malloc_array(self, arraydescr, num_elem): - assert isinstance(arraydescr, BaseArrayDescr) - ofs_length = arraydescr.get_ofs_length(self.translate_support_code) - basesize = arraydescr.get_base_size(self.translate_support_code) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return self.malloc_array(basesize, itemsize, ofs_length, num_elem) - - def gc_malloc_str(self, num_elem): - return self.malloc_str(num_elem) - - def gc_malloc_unicode(self, num_elem): - return self.malloc_unicode(num_elem) - - def args_for_new(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return [sizedescr.size] - - def args_for_new_array(self, arraydescr): - ofs_length = arraydescr.get_ofs_length(self.translate_support_code) - basesize = arraydescr.get_base_size(self.translate_support_code) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return [basesize, itemsize, ofs_length] - - def 
get_funcptr_for_new(self): - return self.funcptr_for_new - - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - # record all GCREFs too, because Boehm cannot see them and keep them - # alive if they end up as constants in the assembler - for op in operations: - self.record_constptrs(op, gcrefs_output_list) - return GcLLDescription.rewrite_assembler(self, cpu, operations, - gcrefs_output_list) + def _bh_malloc_array(self, arraydescr, num_elem): + return self.malloc_array(arraydescr.basesize, num_elem, + arraydescr.itemsize, + arraydescr.lendescr.offset) # ____________________________________________________________ @@ -554,12 +567,14 @@ class WriteBarrierDescr(AbstractDescr): def __init__(self, gc_ll_descr): - GCClass = gc_ll_descr.GCClass self.llop1 = gc_ll_descr.llop1 self.WB_FUNCPTR = gc_ll_descr.WB_FUNCPTR self.WB_ARRAY_FUNCPTR = gc_ll_descr.WB_ARRAY_FUNCPTR - self.fielddescr_tid = get_field_descr(gc_ll_descr, GCClass.HDR, 'tid') + self.fielddescr_tid = gc_ll_descr.fielddescr_tid # + GCClass = gc_ll_descr.GCClass + if GCClass is None: # for tests + return self.jit_wb_if_flag = GCClass.JIT_WB_IF_FLAG self.jit_wb_if_flag_byteofs, self.jit_wb_if_flag_singlebyte = ( self.extract_flag_byte(self.jit_wb_if_flag)) @@ -596,48 +611,74 @@ funcaddr = llmemory.cast_ptr_to_adr(funcptr) return cpu.cast_adr_to_int(funcaddr) # this may return 0 + def has_write_barrier_from_array(self, cpu): + return self.get_write_barrier_from_array_fn(cpu) != 0 + class GcLLDescr_framework(GcLLDescription): DEBUG = False # forced to True by x86/test/test_zrpy_gc.py + kind = 'framework' + round_up = True - def __init__(self, gcdescr, translator, rtyper, llop1=llop): - from pypy.rpython.memory.gctypelayout import check_typeid - from pypy.rpython.memory.gcheader import GCHeaderBuilder - from pypy.rpython.memory.gctransform import framework + def __init__(self, gcdescr, translator, rtyper, llop1=llop, + really_not_translated=False): GcLLDescription.__init__(self, gcdescr, 
translator, rtyper) - assert self.translate_support_code, "required with the framework GC" self.translator = translator self.llop1 = llop1 + if really_not_translated: + assert not self.translate_support_code # but half does not work + self._initialize_for_tests() + else: + assert self.translate_support_code,"required with the framework GC" + self._check_valid_gc() + self._make_gcrootmap() + self._make_layoutbuilder() + self._setup_gcclass() + self._setup_tid() + self._setup_write_barrier() + self._setup_str() + self._make_functions(really_not_translated) + def _initialize_for_tests(self): + self.layoutbuilder = None + self.fielddescr_tid = AbstractDescr() + self.max_size_of_young_obj = 1000 + self.GCClass = None + + def _check_valid_gc(self): # we need the hybrid or minimark GC for rgc._make_sure_does_not_move() # to work - if gcdescr.config.translation.gc not in ('hybrid', 'minimark'): + if self.gcdescr.config.translation.gc not in ('hybrid', 'minimark'): raise NotImplementedError("--gc=%s not implemented with the JIT" % (gcdescr.config.translation.gc,)) + def _make_gcrootmap(self): # to find roots in the assembler, make a GcRootMap - name = gcdescr.config.translation.gcrootfinder + name = self.gcdescr.config.translation.gcrootfinder try: cls = globals()['GcRootMap_' + name] except KeyError: raise NotImplementedError("--gcrootfinder=%s not implemented" " with the JIT" % (name,)) - gcrootmap = cls(gcdescr) + gcrootmap = cls(self.gcdescr) self.gcrootmap = gcrootmap + def _make_layoutbuilder(self): # make a TransformerLayoutBuilder and save it on the translator # where it can be fished and reused by the FrameworkGCTransformer + from pypy.rpython.memory.gctransform import framework + translator = self.translator self.layoutbuilder = framework.TransformerLayoutBuilder(translator) self.layoutbuilder.delay_encoding() - self.translator._jit2gc = {'layoutbuilder': self.layoutbuilder} - gcrootmap.add_jit2gc_hooks(self.translator._jit2gc) + translator._jit2gc = 
{'layoutbuilder': self.layoutbuilder} + self.gcrootmap.add_jit2gc_hooks(translator._jit2gc) + def _setup_gcclass(self): + from pypy.rpython.memory.gcheader import GCHeaderBuilder self.GCClass = self.layoutbuilder.GCClass self.moving_gc = self.GCClass.moving_gc self.HDRPTR = lltype.Ptr(self.GCClass.HDR) self.gcheaderbuilder = GCHeaderBuilder(self.HDRPTR.TO) - (self.array_basesize, _, self.array_length_ofs) = \ - symbolic.get_array_token(lltype.GcArray(lltype.Signed), True) self.max_size_of_young_obj = self.GCClass.JIT_max_size_of_young_obj() self.minimal_size_in_nursery=self.GCClass.JIT_minimal_size_in_nursery() @@ -645,87 +686,124 @@ assert self.GCClass.inline_simple_malloc assert self.GCClass.inline_simple_malloc_varsize - # make a malloc function, with two arguments - def malloc_basic(size, tid): - type_id = llop.extract_ushort(llgroup.HALFWORD, tid) - check_typeid(type_id) - res = llop1.do_malloc_fixedsize_clear(llmemory.GCREF, - type_id, size, - False, False, False) - # In case the operation above failed, we are returning NULL - # from this function to assembler. There is also an RPython - # exception set, typically MemoryError; but it's easier and - # faster to check for the NULL return value, as done by - # translator/exceptiontransform.py. 
- #llop.debug_print(lltype.Void, "\tmalloc_basic", size, type_id, - # "-->", res) - return res - self.malloc_basic = malloc_basic - self.GC_MALLOC_BASIC = lltype.Ptr(lltype.FuncType( - [lltype.Signed, lltype.Signed], llmemory.GCREF)) + def _setup_tid(self): + self.fielddescr_tid = get_field_descr(self, self.GCClass.HDR, 'tid') + + def _setup_write_barrier(self): self.WB_FUNCPTR = lltype.Ptr(lltype.FuncType( [llmemory.Address, llmemory.Address], lltype.Void)) self.WB_ARRAY_FUNCPTR = lltype.Ptr(lltype.FuncType( [llmemory.Address, lltype.Signed, llmemory.Address], lltype.Void)) self.write_barrier_descr = WriteBarrierDescr(self) - # + + def _make_functions(self, really_not_translated): + from pypy.rpython.memory.gctypelayout import check_typeid + llop1 = self.llop1 + (self.standard_array_basesize, _, self.standard_array_length_ofs) = \ + symbolic.get_array_token(lltype.GcArray(lltype.Signed), + not really_not_translated) + + def malloc_nursery_slowpath(size): + """Allocate 'size' null bytes out of the nursery. + Note that the fast path is typically inlined by the backend.""" + if self.DEBUG: + self._random_usage_of_xmm_registers() + type_id = rffi.cast(llgroup.HALFWORD, 0) # missing here + return llop1.do_malloc_fixedsize_clear(llmemory.GCREF, + type_id, size, + False, False, False) + self.generate_function('malloc_nursery', malloc_nursery_slowpath, + [lltype.Signed]) + def malloc_array(itemsize, tid, num_elem): + """Allocate an array with a variable-size num_elem. 
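`malloc_nursery_slowpath()` above is only reached when the bump-pointer fast path, which the backend typically inlines using `get_nursery_free_addr()` and `get_nursery_top_addr()`, runs out of nursery space. A toy model of that fast/slow split, with addresses as plain integers and the slow path simply resetting the nursery instead of running a real minor collection (all names and behavior here are a sketch, not the RPython implementation):

```python
class Nursery:
    """Toy bump-pointer nursery: 'free' and 'top' stand in for the
    addresses exposed to the JIT backend."""
    def __init__(self, size):
        self.free = 0
        self.top = size
        self.slowpath_calls = 0

    def malloc_nursery_slowpath(self, size):
        # the real version triggers a minor collection; here we just
        # count the call and reset the nursery
        self.slowpath_calls += 1
        self.free = 0
        result = self.free
        self.free += size
        return result

    def malloc(self, size):
        result = self.free
        new_free = result + size
        if new_free > self.top:        # inlined fast-path check fails
            return self.malloc_nursery_slowpath(size)
        self.free = new_free           # fast path: just bump the pointer
        return result

n = Nursery(64)
a = n.malloc(32)   # fast path -> offset 0
b = n.malloc(32)   # fast path -> offset 32
c = n.malloc(32)   # exceeds top -> slow path, nursery reset
```

This is why the slow path passes `type_id = 0` ("missing here"): the fast path only reserves bytes, and the tid and possibly the length are filled in afterwards by the generated code.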
+ Only works for standard arrays.""" type_id = llop.extract_ushort(llgroup.HALFWORD, tid) check_typeid(type_id) return llop1.do_malloc_varsize_clear( llmemory.GCREF, - type_id, num_elem, self.array_basesize, itemsize, - self.array_length_ofs) - self.malloc_array = malloc_array - self.GC_MALLOC_ARRAY = lltype.Ptr(lltype.FuncType( - [lltype.Signed] * 3, llmemory.GCREF)) - # - (str_basesize, str_itemsize, str_ofs_length - ) = symbolic.get_array_token(rstr.STR, True) - (unicode_basesize, unicode_itemsize, unicode_ofs_length - ) = symbolic.get_array_token(rstr.UNICODE, True) - str_type_id = self.layoutbuilder.get_type_id(rstr.STR) - unicode_type_id = self.layoutbuilder.get_type_id(rstr.UNICODE) - # + type_id, num_elem, self.standard_array_basesize, itemsize, + self.standard_array_length_ofs) + self.generate_function('malloc_array', malloc_array, + [lltype.Signed] * 3) + + def malloc_array_nonstandard(basesize, itemsize, lengthofs, tid, + num_elem): + """For the rare case of non-standard arrays, i.e. arrays where + self.standard_array_{basesize,length_ofs} is wrong. It can + occur e.g. 
with arrays of floats on Win32.""" + type_id = llop.extract_ushort(llgroup.HALFWORD, tid) + check_typeid(type_id) + return llop1.do_malloc_varsize_clear( + llmemory.GCREF, + type_id, num_elem, basesize, itemsize, lengthofs) + self.generate_function('malloc_array_nonstandard', + malloc_array_nonstandard, + [lltype.Signed] * 5) + + str_type_id = self.str_descr.tid + str_basesize = self.str_descr.basesize + str_itemsize = self.str_descr.itemsize + str_ofs_length = self.str_descr.lendescr.offset + unicode_type_id = self.unicode_descr.tid + unicode_basesize = self.unicode_descr.basesize + unicode_itemsize = self.unicode_descr.itemsize + unicode_ofs_length = self.unicode_descr.lendescr.offset + def malloc_str(length): return llop1.do_malloc_varsize_clear( llmemory.GCREF, str_type_id, length, str_basesize, str_itemsize, str_ofs_length) + self.generate_function('malloc_str', malloc_str, + [lltype.Signed]) + def malloc_unicode(length): return llop1.do_malloc_varsize_clear( llmemory.GCREF, - unicode_type_id, length, unicode_basesize,unicode_itemsize, + unicode_type_id, length, unicode_basesize, unicode_itemsize, unicode_ofs_length) - self.malloc_str = malloc_str - self.malloc_unicode = malloc_unicode - self.GC_MALLOC_STR_UNICODE = lltype.Ptr(lltype.FuncType( - [lltype.Signed], llmemory.GCREF)) - # - class ForTestOnly: - pass - for_test_only = ForTestOnly() - for_test_only.x = 1.23 - def random_usage_of_xmm_registers(): - x0 = for_test_only.x - x1 = x0 * 0.1 - x2 = x0 * 0.2 - x3 = x0 * 0.3 - for_test_only.x = x0 + x1 + x2 + x3 - # - def malloc_slowpath(size): - if self.DEBUG: - random_usage_of_xmm_registers() - assert size >= self.minimal_size_in_nursery - # NB. although we call do_malloc_fixedsize_clear() here, - # it's a bit of a hack because we set tid to 0 and may - # also use it to allocate varsized objects. The tid - # and possibly the length are both set afterward. 
- gcref = llop1.do_malloc_fixedsize_clear(llmemory.GCREF, - 0, size, False, False, False) - return rffi.cast(lltype.Signed, gcref) - self.malloc_slowpath = malloc_slowpath - self.MALLOC_SLOWPATH = lltype.FuncType([lltype.Signed], lltype.Signed) + self.generate_function('malloc_unicode', malloc_unicode, + [lltype.Signed]) + + # Rarely called: allocate a fixed-size amount of bytes, but + # not in the nursery, because it is too big. Implemented like + # malloc_nursery_slowpath() above. + self.generate_function('malloc_fixedsize', malloc_nursery_slowpath, + [lltype.Signed]) + + def _bh_malloc(self, sizedescr): + from pypy.rpython.memory.gctypelayout import check_typeid + llop1 = self.llop1 + type_id = llop.extract_ushort(llgroup.HALFWORD, sizedescr.tid) + check_typeid(type_id) + return llop1.do_malloc_fixedsize_clear(llmemory.GCREF, + type_id, sizedescr.size, + False, False, False) + + def _bh_malloc_array(self, arraydescr, num_elem): + from pypy.rpython.memory.gctypelayout import check_typeid + llop1 = self.llop1 + type_id = llop.extract_ushort(llgroup.HALFWORD, arraydescr.tid) + check_typeid(type_id) + return llop1.do_malloc_varsize_clear(llmemory.GCREF, + type_id, num_elem, + arraydescr.basesize, + arraydescr.itemsize, + arraydescr.lendescr.offset) + + + class ForTestOnly: + pass + for_test_only = ForTestOnly() + for_test_only.x = 1.23 + + def _random_usage_of_xmm_registers(self): + x0 = self.for_test_only.x + x1 = x0 * 0.1 + x2 = x0 * 0.2 + x3 = x0 * 0.3 + self.for_test_only.x = x0 + x1 + x2 + x3 def get_nursery_free_addr(self): nurs_addr = llop.gc_adr_of_nursery_free(llmemory.Address) @@ -735,49 +813,26 @@ nurs_top_addr = llop.gc_adr_of_nursery_top(llmemory.Address) return rffi.cast(lltype.Signed, nurs_top_addr) - def get_malloc_slowpath_addr(self): - fptr = llhelper(lltype.Ptr(self.MALLOC_SLOWPATH), self.malloc_slowpath) - return rffi.cast(lltype.Signed, fptr) - def initialize(self): self.gcrootmap.initialize() def init_size_descr(self, S, descr): - type_id = 
self.layoutbuilder.get_type_id(S) - assert not self.layoutbuilder.is_weakref_type(S) - assert not self.layoutbuilder.has_finalizer(S) - descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) + if self.layoutbuilder is not None: + type_id = self.layoutbuilder.get_type_id(S) + assert not self.layoutbuilder.is_weakref_type(S) + assert not self.layoutbuilder.has_finalizer(S) + descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) def init_array_descr(self, A, descr): - type_id = self.layoutbuilder.get_type_id(A) - descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) + if self.layoutbuilder is not None: + type_id = self.layoutbuilder.get_type_id(A) + descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) - def gc_malloc(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return self.malloc_basic(sizedescr.size, sizedescr.tid) - - def gc_malloc_array(self, arraydescr, num_elem): - assert isinstance(arraydescr, BaseArrayDescr) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return self.malloc_array(itemsize, arraydescr.tid, num_elem) - - def gc_malloc_str(self, num_elem): - return self.malloc_str(num_elem) - - def gc_malloc_unicode(self, num_elem): - return self.malloc_unicode(num_elem) - - def args_for_new(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return [sizedescr.size, sizedescr.tid] - - def args_for_new_array(self, arraydescr): - assert isinstance(arraydescr, BaseArrayDescr) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return [itemsize, arraydescr.tid] - - def get_funcptr_for_new(self): - return llhelper(self.GC_MALLOC_BASIC, self.malloc_basic) + def _set_tid(self, gcptr, tid): + hdr_addr = llmemory.cast_ptr_to_adr(gcptr) + hdr_addr -= self.gcheaderbuilder.size_gc_header + hdr = llmemory.cast_adr_to_ptr(hdr_addr, self.HDRPTR) + hdr.tid = tid def do_write_barrier(self, gcref_struct, gcref_newptr): hdr_addr = llmemory.cast_ptr_to_adr(gcref_struct) @@ -791,108 +846,8 @@ 
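The new `_set_tid()` above reaches the GC header by subtracting `size_gc_header` from the object pointer, since the header is laid out immediately before the object payload. A toy model of that address arithmetic, using integers for addresses and a dict for memory (the `WORD`-sized header is an assumption for illustration):

```python
WORD = 8  # assumed header size for this sketch

class FakeHeap:
    """Toy model of _set_tid(): the tid lives in a header placed
    size_gc_header bytes before the object payload."""
    def __init__(self, size_gc_header=WORD):
        self.size_gc_header = size_gc_header
        self.memory = {}

    def set_tid(self, gcptr, tid):
        hdr_addr = gcptr - self.size_gc_header   # step back to the header
        self.memory[hdr_addr] = tid

    def get_tid(self, gcptr):
        return self.memory[gcptr - self.size_gc_header]

heap = FakeHeap()
heap.set_tid(0x1000, 42)
```

In the real code the header address is then cast to `HDRPTR` so that `hdr.tid = tid` writes through the proper structure type.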
funcptr(llmemory.cast_ptr_to_adr(gcref_struct), llmemory.cast_ptr_to_adr(gcref_newptr)) - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - # Perform two kinds of rewrites in parallel: - # - # - Add COND_CALLs to the write barrier before SETFIELD_GC and - # SETARRAYITEM_GC operations. - # - # - Record the ConstPtrs from the assembler. - # - newops = [] - known_lengths = {} - # we can only remember one malloc since the next malloc can possibly - # collect - last_malloc = None - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - continue - # ---------- record the ConstPtrs ---------- - self.record_constptrs(op, gcrefs_output_list) - if op.is_malloc(): - last_malloc = op.result - elif op.can_malloc(): - last_malloc = None - # ---------- write barrier for SETFIELD_GC ---------- - if op.getopnum() == rop.SETFIELD_GC: - val = op.getarg(0) - # no need for a write barrier in the case of previous malloc - if val is not last_malloc: - v = op.getarg(1) - if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and - bool(v.value)): # store a non-NULL - self._gen_write_barrier(newops, op.getarg(0), v) - op = op.copy_and_change(rop.SETFIELD_RAW) - # ---------- write barrier for SETINTERIORFIELD_GC ------ - if op.getopnum() == rop.SETINTERIORFIELD_GC: - val = op.getarg(0) - if val is not last_malloc: - v = op.getarg(2) - if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and - bool(v.value)): # store a non-NULL - self._gen_write_barrier(newops, op.getarg(0), v) - op = op.copy_and_change(rop.SETINTERIORFIELD_RAW) - # ---------- write barrier for SETARRAYITEM_GC ---------- - if op.getopnum() == rop.SETARRAYITEM_GC: - val = op.getarg(0) - # no need for a write barrier in the case of previous malloc - if val is not last_malloc: - v = op.getarg(2) - if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and - bool(v.value)): # store a non-NULL - self._gen_write_barrier_array(newops, op.getarg(0), - op.getarg(1), v, - cpu, known_lengths) - op = 
op.copy_and_change(rop.SETARRAYITEM_RAW) - elif op.getopnum() == rop.NEW_ARRAY: - v_length = op.getarg(0) - if isinstance(v_length, ConstInt): - known_lengths[op.result] = v_length.getint() - # ---------- - newops.append(op) - return newops - - def _gen_write_barrier(self, newops, v_base, v_value): - args = [v_base, v_value] - newops.append(ResOperation(rop.COND_CALL_GC_WB, args, None, - descr=self.write_barrier_descr)) - - def _gen_write_barrier_array(self, newops, v_base, v_index, v_value, - cpu, known_lengths): - if self.write_barrier_descr.get_write_barrier_from_array_fn(cpu) != 0: - # If we know statically the length of 'v', and it is not too - # big, then produce a regular write_barrier. If it's unknown or - # too big, produce instead a write_barrier_from_array. - LARGE = 130 - length = known_lengths.get(v_base, LARGE) - if length >= LARGE: - # unknown or too big: produce a write_barrier_from_array - args = [v_base, v_index, v_value] - newops.append(ResOperation(rop.COND_CALL_GC_WB_ARRAY, args, - None, - descr=self.write_barrier_descr)) - return - # fall-back case: produce a write_barrier - self._gen_write_barrier(newops, v_base, v_value) - - def can_inline_malloc(self, descr): - assert isinstance(descr, BaseSizeDescr) - if descr.size < self.max_size_of_young_obj: - has_finalizer = bool(descr.tid & (1<= LARGE: + # unknown or too big: produce a write_barrier_from_array + args = [v_base, v_index, v_value] + self.newops.append( + ResOperation(rop.COND_CALL_GC_WB_ARRAY, args, None, + descr=write_barrier_descr)) + return + # fall-back case: produce a write_barrier + self.gen_write_barrier(v_base, v_value) + + def round_up_for_allocation(self, size): + if not self.gc_ll_descr.round_up: + return size + if self.gc_ll_descr.translate_support_code: + from pypy.rpython.lltypesystem import llarena + return llarena.round_up_for_allocation( + size, self.gc_ll_descr.minimal_size_in_nursery) + else: + # non-translated: do it manually + # assume that 
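The `gen_write_barrier_array()` logic above picks between two barriers: if the array's length is statically known and small, the cheap per-object write barrier suffices; if it is unknown or at least `LARGE` elements, the card-marking `write_barrier_from_array` variant is emitted instead. The decision alone can be sketched as:

```python
LARGE = 130  # threshold taken from the source above

def choose_write_barrier(known_lengths, v_base):
    """Mirror the heuristic in gen_write_barrier_array(): a
    statically-known small array gets the plain write barrier,
    anything unknown or large gets the array variant."""
    length = known_lengths.get(v_base, LARGE)   # unknown counts as large
    if length >= LARGE:
        return 'COND_CALL_GC_WB_ARRAY'
    return 'COND_CALL_GC_WB'

choose_write_barrier({'arr1': 10}, 'arr1')   # small, known -> plain barrier
choose_write_barrier({'arr1': 10}, 'arr2')   # unknown length -> array barrier
```

The `known_lengths` dict is populated by watching `NEW_ARRAY` operations with a constant length, which is why the heuristic only ever fires for arrays allocated in the same trace.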
"self.gc_ll_descr.minimal_size_in_nursery" is 2 WORDs + size = max(size, 2 * WORD) + return (size + WORD-1) & ~(WORD-1) # round up diff --git a/pypy/jit/backend/llsupport/test/test_descr.py b/pypy/jit/backend/llsupport/test/test_descr.py --- a/pypy/jit/backend/llsupport/test/test_descr.py +++ b/pypy/jit/backend/llsupport/test/test_descr.py @@ -1,4 +1,4 @@ -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, rffi, rstr from pypy.jit.backend.llsupport.descr import * from pypy.jit.backend.llsupport import symbolic from pypy.rlib.objectmodel import Symbolic @@ -53,18 +53,6 @@ ('z', lltype.Ptr(U)), ('f', lltype.Float), ('s', lltype.SingleFloat)) - assert getFieldDescrClass(lltype.Ptr(T)) is GcPtrFieldDescr - assert getFieldDescrClass(lltype.Ptr(U)) is NonGcPtrFieldDescr - cls = getFieldDescrClass(lltype.Char) - assert cls != getFieldDescrClass(lltype.Signed) - assert cls == getFieldDescrClass(lltype.Char) - clsf = getFieldDescrClass(lltype.Float) - assert clsf != cls - assert clsf == getFieldDescrClass(lltype.Float) - clss = getFieldDescrClass(lltype.SingleFloat) - assert clss not in (cls, clsf) - assert clss == getFieldDescrClass(lltype.SingleFloat) - assert clss == getFieldDescrClass(rffi.UINT) # for now # c0 = GcCache(False) c1 = GcCache(True) @@ -77,11 +65,7 @@ descr_z = get_field_descr(c2, S, 'z') descr_f = get_field_descr(c2, S, 'f') descr_s = get_field_descr(c2, S, 's') - assert descr_x.__class__ is cls - assert descr_y.__class__ is GcPtrFieldDescr - assert descr_z.__class__ is NonGcPtrFieldDescr - assert descr_f.__class__ is clsf - assert descr_s.__class__ is clss + assert isinstance(descr_x, FieldDescr) assert descr_x.name == 'S.x' assert descr_y.name == 'S.y' assert descr_z.name == 'S.z' @@ -90,33 +74,27 @@ if not tsc: assert descr_x.offset < descr_y.offset < descr_z.offset assert descr_x.sort_key() < descr_y.sort_key() < descr_z.sort_key() - assert descr_x.get_field_size(False) == rffi.sizeof(lltype.Char) - assert 
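The non-translated branch of `round_up_for_allocation()` above enforces the assumed two-word minimum object size and then rounds up to a word boundary with the classic bitmask trick. A standalone version, assuming a 64-bit word for concreteness:

```python
WORD = 8  # assumed: size of a machine word on a 64-bit host

def round_up_for_allocation(size, word=WORD):
    """Non-translated fallback from round_up_for_allocation():
    enforce the assumed 2-word minimal object size, then round up
    to a word boundary with (size + WORD-1) & ~(WORD-1)."""
    size = max(size, 2 * word)
    return (size + word - 1) & ~(word - 1)

round_up_for_allocation(1)    # -> 16 (minimum of two words)
round_up_for_allocation(17)   # -> 24 (next multiple of WORD)
```

The mask trick works because `word` is a power of two: adding `word - 1` carries any nonzero remainder into the next multiple, and `~(word - 1)` clears the low bits.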
descr_y.get_field_size(False) == rffi.sizeof(lltype.Ptr(T)) - assert descr_z.get_field_size(False) == rffi.sizeof(lltype.Ptr(U)) - assert descr_f.get_field_size(False) == rffi.sizeof(lltype.Float) - assert descr_s.get_field_size(False) == rffi.sizeof( - lltype.SingleFloat) + assert descr_x.field_size == rffi.sizeof(lltype.Char) + assert descr_y.field_size == rffi.sizeof(lltype.Ptr(T)) + assert descr_z.field_size == rffi.sizeof(lltype.Ptr(U)) + assert descr_f.field_size == rffi.sizeof(lltype.Float) + assert descr_s.field_size == rffi.sizeof(lltype.SingleFloat) else: assert isinstance(descr_x.offset, Symbolic) assert isinstance(descr_y.offset, Symbolic) assert isinstance(descr_z.offset, Symbolic) assert isinstance(descr_f.offset, Symbolic) assert isinstance(descr_s.offset, Symbolic) - assert isinstance(descr_x.get_field_size(True), Symbolic) - assert isinstance(descr_y.get_field_size(True), Symbolic) - assert isinstance(descr_z.get_field_size(True), Symbolic) - assert isinstance(descr_f.get_field_size(True), Symbolic) - assert isinstance(descr_s.get_field_size(True), Symbolic) - assert not descr_x.is_pointer_field() - assert descr_y.is_pointer_field() - assert not descr_z.is_pointer_field() - assert not descr_f.is_pointer_field() - assert not descr_s.is_pointer_field() - assert not descr_x.is_float_field() - assert not descr_y.is_float_field() - assert not descr_z.is_float_field() - assert descr_f.is_float_field() - assert not descr_s.is_float_field() + assert isinstance(descr_x.field_size, Symbolic) + assert isinstance(descr_y.field_size, Symbolic) + assert isinstance(descr_z.field_size, Symbolic) + assert isinstance(descr_f.field_size, Symbolic) + assert isinstance(descr_s.field_size, Symbolic) + assert descr_x.flag == FLAG_UNSIGNED + assert descr_y.flag == FLAG_POINTER + assert descr_z.flag == FLAG_UNSIGNED + assert descr_f.flag == FLAG_FLOAT + assert descr_s.flag == FLAG_UNSIGNED def test_get_field_descr_sign(): @@ -128,7 +106,8 @@ for tsc in [False, True]: c2 = 
GcCache(tsc) descr_x = get_field_descr(c2, S, 'x') - assert descr_x.is_field_signed() == signed + assert descr_x.flag == {False: FLAG_UNSIGNED, + True: FLAG_SIGNED }[signed] def test_get_field_descr_longlong(): if sys.maxint > 2147483647: @@ -136,9 +115,8 @@ c0 = GcCache(False) S = lltype.GcStruct('S', ('y', lltype.UnsignedLongLong)) descr = get_field_descr(c0, S, 'y') - assert not descr.is_pointer_field() - assert descr.is_float_field() - assert descr.get_field_size(False) == 8 + assert descr.flag == FLAG_FLOAT + assert descr.field_size == 8 def test_get_array_descr(): @@ -149,19 +127,8 @@ A3 = lltype.GcArray(lltype.Ptr(U)) A4 = lltype.GcArray(lltype.Float) A5 = lltype.GcArray(lltype.Struct('x', ('v', lltype.Signed), - ('k', lltype.Signed))) + ('k', lltype.Signed))) A6 = lltype.GcArray(lltype.SingleFloat) - assert getArrayDescrClass(A2) is GcPtrArrayDescr - assert getArrayDescrClass(A3) is NonGcPtrArrayDescr - cls = getArrayDescrClass(A1) - assert cls != getArrayDescrClass(lltype.GcArray(lltype.Signed)) - assert cls == getArrayDescrClass(lltype.GcArray(lltype.Char)) - clsf = getArrayDescrClass(A4) - assert clsf != cls - assert clsf == getArrayDescrClass(lltype.GcArray(lltype.Float)) - clss = getArrayDescrClass(A6) - assert clss not in (clsf, cls) - assert clss == getArrayDescrClass(lltype.GcArray(rffi.UINT)) # c0 = GcCache(False) descr1 = get_array_descr(c0, A1) @@ -170,82 +137,61 @@ descr4 = get_array_descr(c0, A4) descr5 = get_array_descr(c0, A5) descr6 = get_array_descr(c0, A6) - assert descr1.__class__ is cls - assert descr2.__class__ is GcPtrArrayDescr - assert descr3.__class__ is NonGcPtrArrayDescr - assert descr4.__class__ is clsf - assert descr6.__class__ is clss + assert isinstance(descr1, ArrayDescr) assert descr1 == get_array_descr(c0, lltype.GcArray(lltype.Char)) - assert not descr1.is_array_of_pointers() - assert descr2.is_array_of_pointers() - assert not descr3.is_array_of_pointers() - assert not descr4.is_array_of_pointers() - assert not 
descr5.is_array_of_pointers() - assert not descr1.is_array_of_floats() - assert not descr2.is_array_of_floats() - assert not descr3.is_array_of_floats() - assert descr4.is_array_of_floats() - assert not descr5.is_array_of_floats() + assert descr1.flag == FLAG_UNSIGNED + assert descr2.flag == FLAG_POINTER + assert descr3.flag == FLAG_UNSIGNED + assert descr4.flag == FLAG_FLOAT + assert descr5.flag == FLAG_STRUCT + assert descr6.flag == FLAG_UNSIGNED # def get_alignment(code): # Retrieve default alignment for the compiler/platform return struct.calcsize('l' + code) - struct.calcsize(code) - assert descr1.get_base_size(False) == get_alignment('c') - assert descr2.get_base_size(False) == get_alignment('p') - assert descr3.get_base_size(False) == get_alignment('p') - assert descr4.get_base_size(False) == get_alignment('d') - assert descr5.get_base_size(False) == get_alignment('f') - assert descr1.get_ofs_length(False) == 0 - assert descr2.get_ofs_length(False) == 0 - assert descr3.get_ofs_length(False) == 0 - assert descr4.get_ofs_length(False) == 0 - assert descr5.get_ofs_length(False) == 0 - assert descr1.get_item_size(False) == rffi.sizeof(lltype.Char) - assert descr2.get_item_size(False) == rffi.sizeof(lltype.Ptr(T)) - assert descr3.get_item_size(False) == rffi.sizeof(lltype.Ptr(U)) - assert descr4.get_item_size(False) == rffi.sizeof(lltype.Float) - assert descr5.get_item_size(False) == rffi.sizeof(lltype.Signed) * 2 - assert descr6.get_item_size(False) == rffi.sizeof(lltype.SingleFloat) + assert descr1.basesize == get_alignment('c') + assert descr2.basesize == get_alignment('p') + assert descr3.basesize == get_alignment('p') + assert descr4.basesize == get_alignment('d') + assert descr5.basesize == get_alignment('f') + assert descr1.lendescr.offset == 0 + assert descr2.lendescr.offset == 0 + assert descr3.lendescr.offset == 0 + assert descr4.lendescr.offset == 0 + assert descr5.lendescr.offset == 0 + assert descr1.itemsize == rffi.sizeof(lltype.Char) + assert 
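The `get_alignment()` helper in the test above probes the host ABI with `struct.calcsize`: packing a field of type `code` after a `long` and subtracting the field's own size yields the offset the compiler would place it at, i.e. the array base size the descr tests compare against. It can be run on its own:

```python
import struct

def get_alignment(code):
    """The probing trick from test_get_array_descr(): the offset of a
    field of type `code` placed after a native long reveals the
    padding (alignment) the host ABI inserts."""
    return struct.calcsize('l' + code) - struct.calcsize(code)

get_alignment('c')   # typically the size of a native long on this host
get_alignment('d')   # may differ on 32-bit ABIs that 4-align doubles
```

This works because `struct`'s native mode applies the C compiler's layout rules, so no hard-coded per-platform constants are needed in the tests.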
descr2.itemsize == rffi.sizeof(lltype.Ptr(T)) + assert descr3.itemsize == rffi.sizeof(lltype.Ptr(U)) + assert descr4.itemsize == rffi.sizeof(lltype.Float) + assert descr5.itemsize == rffi.sizeof(lltype.Signed) * 2 + assert descr6.itemsize == rffi.sizeof(lltype.SingleFloat) # - assert isinstance(descr1.get_base_size(True), Symbolic) - assert isinstance(descr2.get_base_size(True), Symbolic) - assert isinstance(descr3.get_base_size(True), Symbolic) - assert isinstance(descr4.get_base_size(True), Symbolic) - assert isinstance(descr5.get_base_size(True), Symbolic) - assert isinstance(descr1.get_ofs_length(True), Symbolic) - assert isinstance(descr2.get_ofs_length(True), Symbolic) - assert isinstance(descr3.get_ofs_length(True), Symbolic) - assert isinstance(descr4.get_ofs_length(True), Symbolic) - assert isinstance(descr5.get_ofs_length(True), Symbolic) - assert isinstance(descr1.get_item_size(True), Symbolic) - assert isinstance(descr2.get_item_size(True), Symbolic) - assert isinstance(descr3.get_item_size(True), Symbolic) - assert isinstance(descr4.get_item_size(True), Symbolic) - assert isinstance(descr5.get_item_size(True), Symbolic) CA = rffi.CArray(lltype.Signed) descr = get_array_descr(c0, CA) - assert not descr.is_array_of_floats() - assert descr.get_base_size(False) == 0 - assert descr.get_ofs_length(False) == -1 + assert descr.flag == FLAG_SIGNED + assert descr.basesize == 0 + assert descr.lendescr is None CA = rffi.CArray(lltype.Ptr(lltype.GcStruct('S'))) descr = get_array_descr(c0, CA) - assert descr.is_array_of_pointers() - assert descr.get_base_size(False) == 0 - assert descr.get_ofs_length(False) == -1 + assert descr.flag == FLAG_POINTER + assert descr.basesize == 0 + assert descr.lendescr is None CA = rffi.CArray(lltype.Ptr(lltype.Struct('S'))) descr = get_array_descr(c0, CA) - assert descr.get_base_size(False) == 0 - assert descr.get_ofs_length(False) == -1 + assert descr.flag == FLAG_UNSIGNED + assert descr.basesize == 0 + assert descr.lendescr is None 
     CA = rffi.CArray(lltype.Float)
     descr = get_array_descr(c0, CA)
-    assert descr.is_array_of_floats()
-    assert descr.get_base_size(False) == 0
-    assert descr.get_ofs_length(False) == -1
+    assert descr.flag == FLAG_FLOAT
+    assert descr.basesize == 0
+    assert descr.lendescr is None
     CA = rffi.CArray(rffi.FLOAT)
     descr = get_array_descr(c0, CA)
-    assert not descr.is_array_of_floats()
-    assert descr.get_base_size(False) == 0
-    assert descr.get_ofs_length(False) == -1
+    assert descr.flag == FLAG_UNSIGNED
+    assert descr.basesize == 0
+    assert descr.itemsize == rffi.sizeof(lltype.SingleFloat)
+    assert descr.lendescr is None

 def test_get_array_descr_sign():
@@ -257,46 +203,55 @@
     for tsc in [False, True]:
         c2 = GcCache(tsc)
         arraydescr = get_array_descr(c2, A)
-        assert arraydescr.is_item_signed() == signed
+        assert arraydescr.flag == {False: FLAG_UNSIGNED,
+                                   True:  FLAG_SIGNED }[signed]
     #
     RA = rffi.CArray(RESTYPE)
     for tsc in [False, True]:
         c2 = GcCache(tsc)
         arraydescr = get_array_descr(c2, RA)
-        assert arraydescr.is_item_signed() == signed
+        assert arraydescr.flag == {False: FLAG_UNSIGNED,
+                                   True:  FLAG_SIGNED }[signed]
+
+
+def test_get_array_descr_str():
+    c0 = GcCache(False)
+    descr1 = get_array_descr(c0, rstr.STR)
+    assert descr1.itemsize == rffi.sizeof(lltype.Char)
+    assert descr1.flag == FLAG_UNSIGNED

 def test_get_call_descr_not_translated():
     c0 = GcCache(False)
     descr1 = get_call_descr(c0, [lltype.Char, lltype.Signed], lltype.Char)
-    assert descr1.get_result_size(False) == rffi.sizeof(lltype.Char)
-    assert descr1.get_return_type() == history.INT
+    assert descr1.get_result_size() == rffi.sizeof(lltype.Char)
+    assert descr1.get_result_type() == history.INT
     assert descr1.arg_classes == "ii"
     #
     T = lltype.GcStruct('T')
     descr2 = get_call_descr(c0, [lltype.Ptr(T)], lltype.Ptr(T))
-    assert descr2.get_result_size(False) == rffi.sizeof(lltype.Ptr(T))
-    assert descr2.get_return_type() == history.REF
+    assert descr2.get_result_size() == rffi.sizeof(lltype.Ptr(T))
+    assert descr2.get_result_type() == history.REF
     assert descr2.arg_classes == "r"
     #
     U = lltype.GcStruct('U', ('x', lltype.Signed))
     assert descr2 == get_call_descr(c0, [lltype.Ptr(U)], lltype.Ptr(U))
     #
     V = lltype.Struct('V', ('x', lltype.Signed))
-    assert (get_call_descr(c0, [], lltype.Ptr(V)).get_return_type() ==
+    assert (get_call_descr(c0, [], lltype.Ptr(V)).get_result_type() ==
             history.INT)
     #
-    assert (get_call_descr(c0, [], lltype.Void).get_return_type() ==
+    assert (get_call_descr(c0, [], lltype.Void).get_result_type() ==
             history.VOID)
     #
     descr4 = get_call_descr(c0, [lltype.Float, lltype.Float], lltype.Float)
-    assert descr4.get_result_size(False) == rffi.sizeof(lltype.Float)
-    assert descr4.get_return_type() == history.FLOAT
+    assert descr4.get_result_size() == rffi.sizeof(lltype.Float)
+    assert descr4.get_result_type() == history.FLOAT
     assert descr4.arg_classes == "ff"
     #
     descr5 = get_call_descr(c0, [lltype.SingleFloat], lltype.SingleFloat)
-    assert descr5.get_result_size(False) == rffi.sizeof(lltype.SingleFloat)
-    assert descr5.get_return_type() == "S"
+    assert descr5.get_result_size() == rffi.sizeof(lltype.SingleFloat)
+    assert descr5.get_result_type() == "S"
     assert descr5.arg_classes == "S"

 def test_get_call_descr_not_translated_longlong():
@@ -305,13 +260,13 @@
     c0 = GcCache(False)
     #
     descr5 = get_call_descr(c0, [lltype.SignedLongLong], lltype.Signed)
-    assert descr5.get_result_size(False) == 4
-    assert descr5.get_return_type() == history.INT
+    assert descr5.get_result_size() == 4
+    assert descr5.get_result_type() == history.INT
     assert descr5.arg_classes == "L"
     #
     descr6 = get_call_descr(c0, [lltype.Signed], lltype.SignedLongLong)
-    assert descr6.get_result_size(False) == 8
-    assert descr6.get_return_type() == "L"
+    assert descr6.get_result_size() == 8
+    assert descr6.get_result_type() == "L"
     assert descr6.arg_classes == "i"

 def test_get_call_descr_translated():
@@ -319,18 +274,18 @@
     T = lltype.GcStruct('T')
     U = lltype.GcStruct('U', ('x', lltype.Signed))
     descr3 = get_call_descr(c1, [lltype.Ptr(T)], lltype.Ptr(U))
-    assert isinstance(descr3.get_result_size(True), Symbolic)
-    assert descr3.get_return_type() == history.REF
+    assert isinstance(descr3.get_result_size(), Symbolic)
+    assert descr3.get_result_type() == history.REF
     assert descr3.arg_classes == "r"
     #
     descr4 = get_call_descr(c1, [lltype.Float, lltype.Float], lltype.Float)
-    assert isinstance(descr4.get_result_size(True), Symbolic)
-    assert descr4.get_return_type() == history.FLOAT
+    assert isinstance(descr4.get_result_size(), Symbolic)
+    assert descr4.get_result_type() == history.FLOAT
     assert descr4.arg_classes == "ff"
     #
     descr5 = get_call_descr(c1, [lltype.SingleFloat], lltype.SingleFloat)
-    assert isinstance(descr5.get_result_size(True), Symbolic)
-    assert descr5.get_return_type() == "S"
+    assert isinstance(descr5.get_result_size(), Symbolic)
+    assert descr5.get_result_type() == "S"
     assert descr5.arg_classes == "S"

 def test_call_descr_extra_info():
@@ -358,6 +313,10 @@

 def test_repr_of_descr():
+    def repr_of_descr(descr):
+        s = descr.repr_of_descr()
+        assert ',' not in s  # makes the life easier for pypy.tool.jitlogparser
+        return s
     c0 = GcCache(False)
     T = lltype.GcStruct('T')
     S = lltype.GcStruct('S', ('x', lltype.Char),
@@ -365,33 +324,34 @@
                         ('z', lltype.Ptr(T)))
     descr1 = get_size_descr(c0, S)
     s = symbolic.get_size(S, False)
-    assert descr1.repr_of_descr() == '' % s
+    assert repr_of_descr(descr1) == '' % s
     #
     descr2 = get_field_descr(c0, S, 'y')
     o, _ = symbolic.get_field_token(S, 'y', False)
-    assert descr2.repr_of_descr() == '' % o
+    assert repr_of_descr(descr2) == '' % o
     #
     descr2i = get_field_descr(c0, S, 'x')
     o, _ = symbolic.get_field_token(S, 'x', False)
-    assert descr2i.repr_of_descr() == '' % o
+    assert repr_of_descr(descr2i) == '' % o
     #
     descr3 = get_array_descr(c0, lltype.GcArray(lltype.Ptr(S)))
-    assert descr3.repr_of_descr() == ''
+    o = symbolic.get_size(lltype.Ptr(S), False)
+    assert repr_of_descr(descr3) == '' % o
     #
     descr3i = get_array_descr(c0, lltype.GcArray(lltype.Char))
-    assert descr3i.repr_of_descr() == ''
+    assert repr_of_descr(descr3i) == ''
     #
     descr4 = get_call_descr(c0, [lltype.Char, lltype.Ptr(S)], lltype.Ptr(S))
-    assert 'GcPtrCallDescr' in descr4.repr_of_descr()
+    assert repr_of_descr(descr4) == '' % o
     #
     descr4i = get_call_descr(c0, [lltype.Char, lltype.Ptr(S)], lltype.Char)
-    assert 'CharCallDescr' in descr4i.repr_of_descr()
+    assert repr_of_descr(descr4i) == ''
    #
     descr4f = get_call_descr(c0, [lltype.Char, lltype.Ptr(S)], lltype.Float)
-    assert 'FloatCallDescr' in descr4f.repr_of_descr()
+    assert repr_of_descr(descr4f) == ''
     #
     descr5f = get_call_descr(c0, [lltype.Char], lltype.SingleFloat)
-    assert 'SingleFloatCallDescr' in descr5f.repr_of_descr()
+    assert repr_of_descr(descr5f) == ''

 def test_call_stubs_1():
     c0 = GcCache(False)
@@ -401,10 +361,10 @@
     def f(a, b):
         return 'c'

-    call_stub = descr1.call_stub
     fnptr = llhelper(lltype.Ptr(lltype.FuncType(ARGS, RES)), f)

-    res = call_stub(rffi.cast(lltype.Signed, fnptr), [1, 2], None, None)
+    res = descr1.call_stub_i(rffi.cast(lltype.Signed, fnptr),
+                             [1, 2], None, None)
     assert res == ord('c')

 def test_call_stubs_2():
@@ -421,8 +381,8 @@
     a = lltype.malloc(ARRAY, 3)
     opaquea = lltype.cast_opaque_ptr(llmemory.GCREF, a)
     a[0] = 1
-    res = descr2.call_stub(rffi.cast(lltype.Signed, fnptr),
-                           [], [opaquea], [longlong.getfloatstorage(3.5)])
+    res = descr2.call_stub_f(rffi.cast(lltype.Signed, fnptr),
+                             [], [opaquea], [longlong.getfloatstorage(3.5)])
     assert longlong.getrealfloat(res) == 4.5

 def test_call_stubs_single_float():
@@ -445,6 +405,22 @@
     a = intmask(singlefloat2uint(r_singlefloat(-10.0)))
     b = intmask(singlefloat2uint(r_singlefloat(3.0)))
     c = intmask(singlefloat2uint(r_singlefloat(2.0)))
-    res = descr2.call_stub(rffi.cast(lltype.Signed, fnptr),
-                           [a, b, c], [], [])
+    res = descr2.call_stub_i(rffi.cast(lltype.Signed, fnptr),
+                             [a, b, c], [], [])
     assert float(uint2singlefloat(rffi.r_uint(res))) == -11.5
+
+def test_field_arraylen_descr():
+    c0 = GcCache(True)
+    A1 = lltype.GcArray(lltype.Signed)
+    fielddescr = get_field_arraylen_descr(c0, A1)
+    assert isinstance(fielddescr, FieldDescr)
+    ofs = fielddescr.offset
+    assert repr(ofs) == '< ArrayLengthOffset >'
+    #
+    fielddescr = get_field_arraylen_descr(c0, rstr.STR)
+    ofs = fielddescr.offset
+    assert repr(ofs) == ("< "
+                         " 'chars'> + < ArrayLengthOffset"
+                         " > >")
+    # caching:
+    assert fielddescr is get_field_arraylen_descr(c0, rstr.STR)
diff --git a/pypy/jit/backend/llsupport/test/test_ffisupport.py b/pypy/jit/backend/llsupport/test/test_ffisupport.py
--- a/pypy/jit/backend/llsupport/test/test_ffisupport.py
+++ b/pypy/jit/backend/llsupport/test/test_ffisupport.py
@@ -1,5 +1,6 @@
 from pypy.rlib.libffi import types
 from pypy.jit.codewriter.longlong import is_64_bit
+from pypy.jit.backend.llsupport.descr import *
 from pypy.jit.backend.llsupport.ffisupport import *
@@ -15,7 +16,9 @@
     args = [types.sint, types.pointer]
     descr = get_call_descr_dynamic(FakeCPU(), args, types.sint, None,
                                    ffi_flags=42)
-    assert isinstance(descr, DynamicIntCallDescr)
+    assert isinstance(descr, CallDescr)
+    assert descr.result_type == 'i'
+    assert descr.result_flag == FLAG_SIGNED
     assert descr.arg_classes == 'ii'
     assert descr.get_ffi_flags() == 42
@@ -24,18 +27,20 @@
     assert descr is None    # missing floats
     descr = get_call_descr_dynamic(FakeCPU(supports_floats=True),
                                    args, types.void, None, ffi_flags=43)
-    assert isinstance(descr, VoidCallDescr)
+    assert descr.result_type == 'v'
+    assert descr.result_flag == FLAG_VOID
     assert descr.arg_classes == 'ifi'
     assert descr.get_ffi_flags() == 43

     descr = get_call_descr_dynamic(FakeCPU(), [], types.sint8, None, 42)
-    assert isinstance(descr, DynamicIntCallDescr)
-    assert descr.get_result_size(False) == 1
+    assert descr.get_result_size() == 1
+    assert descr.result_flag == FLAG_SIGNED
     assert descr.is_result_signed() == True

     descr = get_call_descr_dynamic(FakeCPU(), [], types.uint8, None, 42)
-    assert isinstance(descr, DynamicIntCallDescr)
-    assert descr.get_result_size(False) == 1
+    assert isinstance(descr, CallDescr)
+    assert descr.get_result_size() == 1
+    assert descr.result_flag == FLAG_UNSIGNED
     assert descr.is_result_signed() == False

     if not is_64_bit:
@@ -44,7 +49,9 @@
         assert descr is None   # missing longlongs
         descr = get_call_descr_dynamic(FakeCPU(supports_longlong=True),
                                        [], types.slonglong, None, ffi_flags=43)
-        assert isinstance(descr, LongLongCallDescr)
+        assert isinstance(descr, CallDescr)
+        assert descr.result_flag == FLAG_FLOAT
+        assert descr.result_type == 'L'
         assert descr.get_ffi_flags() == 43
     else:
         assert types.slonglong is types.slong
@@ -53,6 +60,6 @@
     assert descr is None   # missing singlefloats
     descr = get_call_descr_dynamic(FakeCPU(supports_singlefloats=True),
                                    [], types.float, None, ffi_flags=44)
-    SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT)
-    assert isinstance(descr, SingleFloatCallDescr)
+    assert descr.result_flag == FLAG_UNSIGNED
+    assert descr.result_type == 'S'
     assert descr.get_ffi_flags() == 44
diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py
--- a/pypy/jit/backend/llsupport/test/test_gc.py
+++ b/pypy/jit/backend/llsupport/test/test_gc.py
@@ -6,6 +6,7 @@
 from pypy.jit.backend.llsupport.gc import *
 from pypy.jit.backend.llsupport import symbolic
 from pypy.jit.metainterp.gc import get_description
+from pypy.jit.metainterp.history import BoxPtr, BoxInt, ConstPtr
 from pypy.jit.metainterp.resoperation import get_deep_immutable_oplist
 from pypy.jit.tool.oparser import parse
 from pypy.rpython.lltypesystem.rclass import OBJECT, OBJECT_VTABLE
@@ -15,12 +16,12 @@
     gc_ll_descr = GcLLDescr_boehm(None, None, None)
     #
     record = []
-    prev_funcptr_for_new = gc_ll_descr.funcptr_for_new
-    def my_funcptr_for_new(size):
-        p = prev_funcptr_for_new(size)
+    prev_malloc_fn_ptr = gc_ll_descr.malloc_fn_ptr
+    def my_malloc_fn_ptr(size):
+        p = prev_malloc_fn_ptr(size)
         record.append((size, p))
         return p
-    gc_ll_descr.funcptr_for_new = my_funcptr_for_new
+    gc_ll_descr.malloc_fn_ptr = my_malloc_fn_ptr
     #
     # ---------- gc_malloc ----------
     S = lltype.GcStruct('S', ('x', lltype.Signed))
@@ -32,8 +33,8 @@
     A = lltype.GcArray(lltype.Signed)
     arraydescr = get_array_descr(gc_ll_descr, A)
     p = gc_ll_descr.gc_malloc_array(arraydescr, 10)
-    assert record == [(arraydescr.get_base_size(False) +
-                       10 * arraydescr.get_item_size(False), p)]
+    assert record == [(arraydescr.basesize +
+                       10 * arraydescr.itemsize, p)]
     del record[:]
     # ---------- gc_malloc_str ----------
     p = gc_ll_descr.gc_malloc_str(10)
@@ -246,24 +247,28 @@
     def __init__(self):
         self.record = []

+    def _malloc(self, type_id, size):
+        tid = llop.combine_ushort(lltype.Signed, type_id, 0)
+        x = llmemory.raw_malloc(self.gcheaderbuilder.size_gc_header + size)
+        x += self.gcheaderbuilder.size_gc_header
+        return x, tid
+
     def do_malloc_fixedsize_clear(self, RESTYPE, type_id, size,
                                   has_finalizer, has_light_finalizer,
                                   contains_weakptr):
         assert not contains_weakptr
-        assert not has_finalizer           # in these tests
-        assert not has_light_finalizer     # in these tests
-        p = llmemory.raw_malloc(size)
+        assert not has_finalizer
+        assert not has_light_finalizer
+        p, tid = self._malloc(type_id, size)
         p = llmemory.cast_adr_to_ptr(p, RESTYPE)
-        tid = llop.combine_ushort(lltype.Signed, type_id, 0)
         self.record.append(("fixedsize", repr(size), tid, p))
         return p

     def do_malloc_varsize_clear(self, RESTYPE, type_id, length, size,
                                 itemsize, offset_to_length):
-        p = llmemory.raw_malloc(size + itemsize * length)
+        p, tid = self._malloc(type_id, size + itemsize * length)
         (p + offset_to_length).signed[0] = length
         p = llmemory.cast_adr_to_ptr(p, RESTYPE)
-        tid = llop.combine_ushort(lltype.Signed, type_id, 0)
         self.record.append(("varsize", tid, length,
                             repr(size), repr(itemsize),
                             repr(offset_to_length), p))
@@ -322,43 +327,40 @@
         gc_ll_descr = GcLLDescr_framework(gcdescr, FakeTranslator(), None,
                                           llop1)
         gc_ll_descr.initialize()
+        llop1.gcheaderbuilder = gc_ll_descr.gcheaderbuilder
         self.llop1 = llop1
         self.gc_ll_descr = gc_ll_descr
         self.fake_cpu = FakeCPU()

-    def test_args_for_new(self):
-        S = lltype.GcStruct('S', ('x', lltype.Signed))
-        sizedescr = get_size_descr(self.gc_ll_descr, S)
-        args = self.gc_ll_descr.args_for_new(sizedescr)
-        for x in args:
-            assert lltype.typeOf(x) == lltype.Signed
-        A = lltype.GcArray(lltype.Signed)
-        arraydescr = get_array_descr(self.gc_ll_descr, A)
-        args = self.gc_ll_descr.args_for_new(sizedescr)
-        for x in args:
-            assert lltype.typeOf(x) == lltype.Signed
+##    def test_args_for_new(self):
+##        S = lltype.GcStruct('S', ('x', lltype.Signed))
+##        sizedescr = get_size_descr(self.gc_ll_descr, S)
+##        args = self.gc_ll_descr.args_for_new(sizedescr)
+##        for x in args:
+##            assert lltype.typeOf(x) == lltype.Signed
+##        A = lltype.GcArray(lltype.Signed)
+##        arraydescr = get_array_descr(self.gc_ll_descr, A)
+##        args = self.gc_ll_descr.args_for_new(sizedescr)
+##        for x in args:
+##            assert lltype.typeOf(x) == lltype.Signed

     def test_gc_malloc(self):
         S = lltype.GcStruct('S', ('x', lltype.Signed))
         sizedescr = get_size_descr(self.gc_ll_descr, S)
         p = self.gc_ll_descr.gc_malloc(sizedescr)
-        assert self.llop1.record == [("fixedsize",
-                                      repr(sizedescr.size),
+        assert lltype.typeOf(p) == llmemory.GCREF
+        assert self.llop1.record == [("fixedsize", repr(sizedescr.size),
                                       sizedescr.tid, p)]
-        assert repr(self.gc_ll_descr.args_for_new(sizedescr)) == repr(
-            [sizedescr.size, sizedescr.tid])

     def test_gc_malloc_array(self):
         A = lltype.GcArray(lltype.Signed)
         arraydescr = get_array_descr(self.gc_ll_descr, A)
         p = self.gc_ll_descr.gc_malloc_array(arraydescr, 10)
         assert self.llop1.record == [("varsize", arraydescr.tid, 10,
-                                      repr(arraydescr.get_base_size(True)),
-                                      repr(arraydescr.get_item_size(True)),
-                                      repr(arraydescr.get_ofs_length(True)),
+                                      repr(arraydescr.basesize),
+                                      repr(arraydescr.itemsize),
+                                      repr(arraydescr.lendescr.offset),
                                       p)]
-        assert repr(self.gc_ll_descr.args_for_new_array(arraydescr)) == repr(
-            [arraydescr.get_item_size(True), arraydescr.tid])

     def test_gc_malloc_str(self):
         p = self.gc_ll_descr.gc_malloc_str(10)
@@ -404,10 +406,11 @@
         gc_ll_descr = self.gc_ll_descr
         llop1 = self.llop1
         #
-        newops = []
+        rewriter = GcRewriterAssembler(gc_ll_descr, None)
+        newops = rewriter.newops
         v_base = BoxPtr()
         v_value = BoxPtr()
-        gc_ll_descr._gen_write_barrier(newops, v_base, v_value)
+        rewriter.gen_write_barrier(v_base, v_value)
         assert llop1.record == []
         assert len(newops) == 1
         assert newops[0].getopnum() == rop.COND_CALL_GC_WB
@@ -427,8 +430,7 @@
         operations = gc_ll_descr.rewrite_assembler(None, operations, [])
         assert len(operations) == 0

-    def test_rewrite_assembler_1(self):
-        # check recording of ConstPtrs
+    def test_record_constptrs(self):
         class MyFakeCPU(object):
             def cast_adr_to_int(self, adr):
                 assert adr == "some fake address"
@@ -455,211 +457,6 @@
         assert operations2 == operations
         assert gcrefs == [s_gcref]

-    def test_rewrite_assembler_2(self):
-        # check write barriers before SETFIELD_GC
-        v_base = BoxPtr()
-        v_value = BoxPtr()
-        field_descr = AbstractDescr()
-        operations = [
-            ResOperation(rop.SETFIELD_GC, [v_base, v_value], None,
-                         descr=field_descr),
-            ]
-        gc_ll_descr = self.gc_ll_descr
-        operations = get_deep_immutable_oplist(operations)
-        operations = gc_ll_descr.rewrite_assembler(self.fake_cpu, operations,
-                                                   [])
-        assert len(operations) == 2
-        #
-        assert operations[0].getopnum() == rop.COND_CALL_GC_WB
-        assert operations[0].getarg(0) == v_base
-        assert operations[0].getarg(1) == v_value
-        assert operations[0].result is None
-        #
-        assert operations[1].getopnum() == rop.SETFIELD_RAW
-        assert operations[1].getarg(0) == v_base
-        assert operations[1].getarg(1) == v_value
-        assert operations[1].getdescr() == field_descr

-    def test_rewrite_assembler_3(self):
-        # check write barriers before SETARRAYITEM_GC
-        for v_new_length in (None, ConstInt(5), ConstInt(5000), BoxInt()):
-            v_base = BoxPtr()
-            v_index = BoxInt()
-            v_value = BoxPtr()
-            array_descr = AbstractDescr()
-            operations = [
-                ResOperation(rop.SETARRAYITEM_GC, [v_base, v_index, v_value],
-                             None, descr=array_descr),
-                ]
-            if v_new_length is not None:
-                operations.insert(0, ResOperation(rop.NEW_ARRAY,
                                                  [v_new_length], v_base,
-                                                  descr=array_descr))
-                # we need to insert another, unrelated NEW_ARRAY here
-                # to prevent the initialization_store optimization
-                operations.insert(1, ResOperation(rop.NEW_ARRAY,
-                                                  [ConstInt(12)], BoxPtr(),
-                                                  descr=array_descr))
-            gc_ll_descr = self.gc_ll_descr
-            operations = get_deep_immutable_oplist(operations)
-            operations = gc_ll_descr.rewrite_assembler(self.fake_cpu,
-                                                       operations, [])
-            if v_new_length is not None:
-                assert operations[0].getopnum() == rop.NEW_ARRAY
-                assert operations[1].getopnum() == rop.NEW_ARRAY
-                del operations[:2]
-            assert len(operations) == 2
-            #
-            assert operations[0].getopnum() == rop.COND_CALL_GC_WB
-            assert operations[0].getarg(0) == v_base
-            assert operations[0].getarg(1) == v_value
-            assert operations[0].result is None
-            #
-            assert operations[1].getopnum() == rop.SETARRAYITEM_RAW
-            assert operations[1].getarg(0) == v_base
-            assert operations[1].getarg(1) == v_index
-            assert operations[1].getarg(2) == v_value
-            assert operations[1].getdescr() == array_descr

-    def test_rewrite_assembler_4(self):
-        # check write barriers before SETARRAYITEM_GC,
-        # if we have actually a write_barrier_from_array.
-        self.llop1._have_wb_from_array = True
-        for v_new_length in (None, ConstInt(5), ConstInt(5000), BoxInt()):
-            v_base = BoxPtr()
-            v_index = BoxInt()
-            v_value = BoxPtr()
-            array_descr = AbstractDescr()
-            operations = [
-                ResOperation(rop.SETARRAYITEM_GC, [v_base, v_index, v_value],
-                             None, descr=array_descr),
-                ]
-            if v_new_length is not None:
-                operations.insert(0, ResOperation(rop.NEW_ARRAY,
-                                                  [v_new_length], v_base,
-                                                  descr=array_descr))
-                # we need to insert another, unrelated NEW_ARRAY here
-                # to prevent the initialization_store optimization
-                operations.insert(1, ResOperation(rop.NEW_ARRAY,
-                                                  [ConstInt(12)], BoxPtr(),
-                                                  descr=array_descr))
-            gc_ll_descr = self.gc_ll_descr
-            operations = get_deep_immutable_oplist(operations)
-            operations = gc_ll_descr.rewrite_assembler(self.fake_cpu,
-                                                       operations, [])
-            if v_new_length is not None:
-                assert operations[0].getopnum() == rop.NEW_ARRAY
-                assert operations[1].getopnum() == rop.NEW_ARRAY
-                del operations[:2]
-            assert len(operations) == 2
-            #
-            if isinstance(v_new_length, ConstInt) and v_new_length.value < 130:
-                assert operations[0].getopnum() == rop.COND_CALL_GC_WB
-                assert operations[0].getarg(0) == v_base
-                assert operations[0].getarg(1) == v_value
-            else:
-                assert operations[0].getopnum() == rop.COND_CALL_GC_WB_ARRAY
-                assert operations[0].getarg(0) == v_base
-                assert operations[0].getarg(1) == v_index
-                assert operations[0].getarg(2) == v_value
-            assert operations[0].result is None
-            #
-            assert operations[1].getopnum() == rop.SETARRAYITEM_RAW
-            assert operations[1].getarg(0) == v_base
-            assert operations[1].getarg(1) == v_index
-            assert operations[1].getarg(2) == v_value
-            assert operations[1].getdescr() == array_descr

-    def test_rewrite_assembler_5(self):
-        S = lltype.GcStruct('S')
-        A = lltype.GcArray(lltype.Struct('A', ('x', lltype.Ptr(S))))
-        interiordescr = get_interiorfield_descr(self.gc_ll_descr, A,
-                                                A.OF, 'x')
-        wbdescr = self.gc_ll_descr.write_barrier_descr
-        ops = parse("""
-            [p1, p2]
-            setinteriorfield_gc(p1, 0, p2, descr=interiordescr)
-            jump(p1, p2)
-            """, namespace=locals())
-        expected = parse("""
-            [p1, p2]
-            cond_call_gc_wb(p1, p2, descr=wbdescr)
-            setinteriorfield_raw(p1, 0, p2, descr=interiordescr)
-            jump(p1, p2)
-            """, namespace=locals())
-        operations = get_deep_immutable_oplist(ops.operations)
-        operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu,
-                                                        operations, [])
-        equaloplists(operations, expected.operations)

-    def test_rewrite_assembler_initialization_store(self):
-        S = lltype.GcStruct('S', ('parent', OBJECT),
-                            ('x', lltype.Signed))
-        s_vtable = lltype.malloc(OBJECT_VTABLE, immortal=True)
-        xdescr = get_field_descr(self.gc_ll_descr, S, 'x')
-        ops = parse("""
-            [p1]
-            p0 = new_with_vtable(ConstClass(s_vtable))
-            setfield_gc(p0, p1, descr=xdescr)
-            jump()
-            """, namespace=locals())
-        expected = parse("""
-            [p1]
-            p0 = new_with_vtable(ConstClass(s_vtable))
-            # no write barrier
-            setfield_gc(p0, p1, descr=xdescr)
-            jump()
-            """, namespace=locals())
-        operations = get_deep_immutable_oplist(ops.operations)
-        operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu,
-                                                        operations, [])
-        equaloplists(operations, expected.operations)

-    def test_rewrite_assembler_initialization_store_2(self):
-        S = lltype.GcStruct('S', ('parent', OBJECT),
-                            ('x', lltype.Signed))
-        s_vtable = lltype.malloc(OBJECT_VTABLE, immortal=True)
-        wbdescr = self.gc_ll_descr.write_barrier_descr
-        xdescr = get_field_descr(self.gc_ll_descr, S, 'x')
-        ops = parse("""
-            [p1]
-            p0 = new_with_vtable(ConstClass(s_vtable))
-            p3 = new_with_vtable(ConstClass(s_vtable))
-            setfield_gc(p0, p1, descr=xdescr)
-            jump()
-            """, namespace=locals())
-        expected = parse("""
-            [p1]
-            p0 = new_with_vtable(ConstClass(s_vtable))
-            p3 = new_with_vtable(ConstClass(s_vtable))
-            cond_call_gc_wb(p0, p1, descr=wbdescr)
-            setfield_raw(p0, p1, descr=xdescr)
-            jump()
-            """, namespace=locals())
-        operations = get_deep_immutable_oplist(ops.operations)
-        operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu,
-                                                        operations, [])
-        equaloplists(operations, expected.operations)

-    def test_rewrite_assembler_initialization_store_3(self):
-        A = lltype.GcArray(lltype.Ptr(lltype.GcStruct('S')))
-        arraydescr = get_array_descr(self.gc_ll_descr, A)
-        ops = parse("""
-            [p1]
-            p0 = new_array(3, descr=arraydescr)
-            setarrayitem_gc(p0, 0, p1, descr=arraydescr)
-            jump()
-            """, namespace=locals())
-        expected = parse("""
-            [p1]
-            p0 = new_array(3, descr=arraydescr)
-            setarrayitem_gc(p0, 0, p1, descr=arraydescr)
-            jump()
-            """, namespace=locals())
-        operations = get_deep_immutable_oplist(ops.operations)
-        operations = self.gc_ll_descr.rewrite_assembler(self.fake_cpu,
-                                                        operations, [])
-        equaloplists(operations, expected.operations)

 class TestFrameworkMiniMark(TestFramework):
     gc = 'minimark'
diff --git a/pypy/jit/backend/llsupport/test/test_rewrite.py b/pypy/jit/backend/llsupport/test/test_rewrite.py
new file mode 100644
--- /dev/null
+++ b/pypy/jit/backend/llsupport/test/test_rewrite.py
@@ -0,0 +1,668 @@
+from pypy.jit.backend.llsupport.descr import *
+from pypy.jit.backend.llsupport.gc import *
+from pypy.jit.metainterp.gc import get_description
+from pypy.jit.tool.oparser import parse
+from pypy.jit.metainterp.optimizeopt.util import equaloplists
+from pypy.jit.codewriter.heaptracker import register_known_gctype
+
+
+class Evaluator(object):
+    def __init__(self, scope):
+        self.scope = scope
+    def __getitem__(self, key):
+        return eval(key, self.scope)
+
+
+class RewriteTests(object):
+    def check_rewrite(self, frm_operations, to_operations, **namespace):
+        S = lltype.GcStruct('S', ('x', lltype.Signed),
+                                 ('y', lltype.Signed))
+        sdescr = get_size_descr(self.gc_ll_descr, S)
+        sdescr.tid = 1234
+        #
+        T = lltype.GcStruct('T', ('y', lltype.Signed),
+                                 ('z', lltype.Ptr(S)),
+                                 ('t', lltype.Signed))
+        tdescr = get_size_descr(self.gc_ll_descr, T)
+        tdescr.tid = 5678
+        tzdescr = get_field_descr(self.gc_ll_descr, T, 'z')
+        #
+        A = lltype.GcArray(lltype.Signed)
+        adescr = get_array_descr(self.gc_ll_descr, A)
+        adescr.tid = 4321
+        alendescr = adescr.lendescr
+        #
+        B = lltype.GcArray(lltype.Char)
+        bdescr = get_array_descr(self.gc_ll_descr, B)
+        bdescr.tid = 8765
+        blendescr = bdescr.lendescr
+        #
+        C = lltype.GcArray(lltype.Ptr(S))
+        cdescr = get_array_descr(self.gc_ll_descr, C)
+        cdescr.tid = 8111
+        clendescr = cdescr.lendescr
+        #
+        E = lltype.GcStruct('Empty')
+        edescr = get_size_descr(self.gc_ll_descr, E)
+        edescr.tid = 9000
+        #
+        vtable_descr = self.gc_ll_descr.fielddescr_vtable
+        O = lltype.GcStruct('O', ('parent', rclass.OBJECT),
+                                 ('x', lltype.Signed))
+        o_vtable = lltype.malloc(rclass.OBJECT_VTABLE, immortal=True)
+        register_known_gctype(self.cpu, o_vtable, O)
+        #
+        tiddescr = self.gc_ll_descr.fielddescr_tid
+        wbdescr = self.gc_ll_descr.write_barrier_descr
+        WORD = globals()['WORD']
+        #
+        strdescr = self.gc_ll_descr.str_descr
+        unicodedescr = self.gc_ll_descr.unicode_descr
+        strlendescr = strdescr.lendescr
+        unicodelendescr = unicodedescr.lendescr
+        #
+        namespace.update(locals())
+        #
+        for funcname in self.gc_ll_descr._generated_functions:
+            namespace[funcname] = self.gc_ll_descr.get_malloc_fn(funcname)
+            namespace[funcname + '_descr'] = getattr(self.gc_ll_descr,
+                                                     '%s_descr' % funcname)
+        #
+        ops = parse(frm_operations, namespace=namespace)
+        expected = parse(to_operations % Evaluator(namespace),
+                         namespace=namespace)
+        operations = self.gc_ll_descr.rewrite_assembler(self.cpu,
+                                                        ops.operations,
+                                                        [])
+        equaloplists(operations, expected.operations)
+
+
+class TestBoehm(RewriteTests):
+    def setup_method(self, meth):
+        class FakeCPU(object):
+            def sizeof(self, STRUCT):
+                return SizeDescrWithVTable(102)
+        self.cpu = FakeCPU()
+        self.gc_ll_descr = GcLLDescr_boehm(None, None, None)
+
+    def test_new(self):
+        self.check_rewrite("""
+            []
+            p0 = new(descr=sdescr)
+            jump()
+        """, """
+            [p1]
+            p0 = call_malloc_gc(ConstClass(malloc_fixedsize), %(sdescr.size)d,\
+                                descr=malloc_fixedsize_descr)
+            jump()
+        """)
+
+    def test_no_collapsing(self):
+        self.check_rewrite("""
+            []
+            p0 = new(descr=sdescr)
+            p1 = new(descr=sdescr)
+            jump()
+        """, """
+            []
+            p0 = call_malloc_gc(ConstClass(malloc_fixedsize), %(sdescr.size)d,\
+                                descr=malloc_fixedsize_descr)
+            p1 = call_malloc_gc(ConstClass(malloc_fixedsize), %(sdescr.size)d,\
+                                descr=malloc_fixedsize_descr)
+            jump()
+        """)
+
+    def test_new_array_fixed(self):
+        self.check_rewrite("""
+            []
+            p0 = new_array(10, descr=adescr)
+            jump()
+        """, """
+            []
+            p0 = call_malloc_gc(ConstClass(malloc_fixedsize),           \
+                                %(adescr.basesize + 10 * adescr.itemsize)d, \
+                                descr=malloc_fixedsize_descr)
+            setfield_gc(p0, 10, descr=alendescr)
+            jump()
+        """)
+
+    def test_new_array_variable(self):
+        self.check_rewrite("""
+            [i1]
+            p0 = new_array(i1, descr=adescr)
+            jump()
+        """, """
+            [i1]
+            p0 = call_malloc_gc(ConstClass(malloc_array),   \
+                                %(adescr.basesize)d,        \
+                                i1,                         \
+                                %(adescr.itemsize)d,        \
+                                %(adescr.lendescr.offset)d, \
+                                descr=malloc_array_descr)
+            jump()
+        """)
+
+    def test_new_with_vtable(self):
+        self.check_rewrite("""
+            []
+            p0 = new_with_vtable(ConstClass(o_vtable))
+            jump()
+        """, """
+            [p1]
+            p0 = call_malloc_gc(ConstClass(malloc_fixedsize), 102, \
+                                descr=malloc_fixedsize_descr)
+            setfield_gc(p0, ConstClass(o_vtable), descr=vtable_descr)
+            jump()
+        """)
+
+    def test_newstr(self):
+        self.check_rewrite("""
+            [i1]
+            p0 = newstr(i1)
+            jump()
+        """, """
+            [i1]
+            p0 = call_malloc_gc(ConstClass(malloc_array), \
+                                %(strdescr.basesize)d,    \
+                                i1,                       \
+                                %(strdescr.itemsize)d,    \
+                                %(strlendescr.offset)d,   \
+                                descr=malloc_array_descr)
+            jump()
+        """)
+
+    def test_newunicode(self):
+        self.check_rewrite("""
+            [i1]
+            p0 = newunicode(10)
+            jump()
+        """, """
+            [i1]
+            p0 = call_malloc_gc(ConstClass(malloc_fixedsize),   \
+                                %(unicodedescr.basesize +       \
+                                  10 * unicodedescr.itemsize)d, \
+                                descr=malloc_fixedsize_descr)
+            setfield_gc(p0, 10, descr=unicodelendescr)
+            jump()
+        """)
+
+
+class TestFramework(RewriteTests):
+    def setup_method(self, meth):
+        class config_(object):
+            class translation(object):
+                gc = 'hybrid'
+                gcrootfinder = 'asmgcc'
+                gctransformer = 'framework'
+                gcremovetypeptr = False
+        gcdescr = get_description(config_)
+        self.gc_ll_descr = GcLLDescr_framework(gcdescr, None, None, None,
+                                               really_not_translated=True)
+        self.gc_ll_descr.write_barrier_descr.has_write_barrier_from_array = (
+            lambda cpu: True)
+        #
+        class FakeCPU(object):
+            def sizeof(self, STRUCT):
+                descr = SizeDescrWithVTable(102)
+                descr.tid = 9315
+                return descr
+        self.cpu = FakeCPU()
+
+    def test_rewrite_assembler_new_to_malloc(self):
+        self.check_rewrite("""
+            [p1]
+            p0 = new(descr=sdescr)
+            jump()
+        """, """
+            [p1]
+            p0 = call_malloc_nursery(%(sdescr.size)d)
+            setfield_gc(p0, 1234, descr=tiddescr)
+            jump()
+        """)
+
+    def test_rewrite_assembler_new3_to_malloc(self):
+        self.check_rewrite("""
+            []
+            p0 = new(descr=sdescr)
+            p1 = new(descr=tdescr)
+            p2 = new(descr=sdescr)
+            jump()
+        """, """
+            []
+            p0 = call_malloc_nursery( \
+                                %(sdescr.size + tdescr.size + sdescr.size)d)
+            setfield_gc(p0, 1234, descr=tiddescr)
+            p1 = int_add(p0, %(sdescr.size)d)
+            setfield_gc(p1, 5678, descr=tiddescr)
+            p2 = int_add(p1, %(tdescr.size)d)
+            setfield_gc(p2, 1234, descr=tiddescr)
+            jump()
+        """)
+
+    def test_rewrite_assembler_new_array_fixed_to_malloc(self):
+        self.check_rewrite("""
+            []
+            p0 = new_array(10, descr=adescr)
+            jump()
+        """, """
+            []
+            p0 = call_malloc_nursery( \
+                                %(adescr.basesize + 10 * adescr.itemsize)d)
+            setfield_gc(p0, 4321, descr=tiddescr)
+            setfield_gc(p0, 10, descr=alendescr)
+            jump()
+        """)
+
+    def test_rewrite_assembler_new_and_new_array_fixed_to_malloc(self):
+        self.check_rewrite("""
+            []
+            p0 = new(descr=sdescr)
+            p1 = new_array(10, descr=adescr)
+            jump()
+        """, """
+            []
+            p0 = call_malloc_nursery( \
+                                %(sdescr.size +                            \
+                                  adescr.basesize + 10 * adescr.itemsize)d)
+            setfield_gc(p0, 1234, descr=tiddescr)
+            p1 = int_add(p0, %(sdescr.size)d)
+            setfield_gc(p1, 4321, descr=tiddescr)
+            setfield_gc(p1, 10, descr=alendescr)
+            jump()
+        """)
+
+    def test_rewrite_assembler_round_up(self):
+        self.check_rewrite("""
+            []
+            p0 = new_array(6, descr=bdescr)
+            jump()
+        """, """
+            []
+            p0 = call_malloc_nursery(%(bdescr.basesize + 8)d)
+            setfield_gc(p0, 8765, descr=tiddescr)
+            setfield_gc(p0, 6, descr=blendescr)
+            jump()
+        """)
+
+    def test_rewrite_assembler_round_up_always(self):
+        self.check_rewrite("""
+            []
+            p0 = new_array(5, descr=bdescr)
+            p1 = new_array(5, descr=bdescr)
+            p2 = new_array(5, descr=bdescr)
+            p3 = new_array(5, descr=bdescr)
+            jump()
+        """, """
+            []
+            p0 = call_malloc_nursery(%(4 * (bdescr.basesize + 8))d)
+            setfield_gc(p0, 8765, descr=tiddescr)
+            setfield_gc(p0, 5, descr=blendescr)
+            p1 = int_add(p0, %(bdescr.basesize + 8)d)
+            setfield_gc(p1, 8765, descr=tiddescr)
+            setfield_gc(p1, 5, descr=blendescr)
+            p2 = int_add(p1, %(bdescr.basesize + 8)d)
+            setfield_gc(p2, 8765, descr=tiddescr)
+            setfield_gc(p2, 5, descr=blendescr)
+            p3 = int_add(p2, %(bdescr.basesize + 8)d)
+            setfield_gc(p3, 8765, descr=tiddescr)
+            setfield_gc(p3, 5, descr=blendescr)
+            jump()
+        """)
+
+    def test_rewrite_assembler_minimal_size(self):
+        self.check_rewrite("""
+            []
+            p0 = new(descr=edescr)
+            p1 = new(descr=edescr)
+            jump()
+        """, """
+            []
+            p0 = call_malloc_nursery(%(4*WORD)d)
+            setfield_gc(p0, 9000, descr=tiddescr)
+            p1 = int_add(p0, %(2*WORD)d)
+            setfield_gc(p1, 9000, descr=tiddescr)
+            jump()
+        """)
+
+    def test_rewrite_assembler_variable_size(self):
+        self.check_rewrite("""
+            [i0]
+            p0 = new_array(i0, descr=bdescr)
+            jump(i0)
+        """, """
+            [i0]
+            p0 = call_malloc_gc(ConstClass(malloc_array), 1, \
+                                %(bdescr.tid)d, i0,          \
+                                descr=malloc_array_descr)
+            jump(i0)
+        """)
+
+    def test_rewrite_assembler_nonstandard_array(self):
+        # a non-standard array is a bit hard to get; e.g. GcArray(Float)
+        # is like that on Win32, but not on Linux.  Build one manually...
+        NONSTD = lltype.GcArray(lltype.Float)
+        nonstd_descr = get_array_descr(self.gc_ll_descr, NONSTD)
+        nonstd_descr.tid = 6464
+        nonstd_descr.basesize = 64      # <= hacked
+        nonstd_descr.itemsize = 8
+        nonstd_descr_gcref = 123
+        self.check_rewrite("""
+            [i0]
+            p0 = new_array(i0, descr=nonstd_descr)
+            jump(i0)
+        """, """
+            [i0]
+            p0 = call_malloc_gc(ConstClass(malloc_array_nonstandard), \
+                                64, 8,                                \
+                                %(nonstd_descr.lendescr.offset)d,     \
+                                6464, i0,                             \
+                                descr=malloc_array_nonstandard_descr)
+            jump(i0)
+        """, nonstd_descr=nonstd_descr)
+
+    def test_rewrite_assembler_maximal_size_1(self):
+        self.gc_ll_descr.max_size_of_young_obj = 100
+        self.check_rewrite("""
+            []
+            p0 = new_array(103, descr=bdescr)
+            jump()
+        """, """
+            []
+            p0 = call_malloc_gc(ConstClass(malloc_fixedsize), \
+                                %(bdescr.basesize + 104)d,    \
+                                descr=malloc_fixedsize_descr)
+            setfield_gc(p0, 8765, descr=tiddescr)
+            setfield_gc(p0, 103, descr=blendescr)
+            jump()
+        """)
+
+    def test_rewrite_assembler_maximal_size_2(self):
+        self.gc_ll_descr.max_size_of_young_obj = 300
+        self.check_rewrite("""
+            []
+            p0 = new_array(101, descr=bdescr)
+            p1 = new_array(102, descr=bdescr)  # two new_arrays can be combined
+            p2 = new_array(103, descr=bdescr)  # but not all three
+            jump()
+        """, """
+            []
+            p0 = call_malloc_nursery( \
+                                %(2 * (bdescr.basesize + 104))d)
+            setfield_gc(p0, 8765, descr=tiddescr)
+            setfield_gc(p0, 101, descr=blendescr)
+            p1 = int_add(p0, %(bdescr.basesize + 104)d)
+            setfield_gc(p1, 8765, descr=tiddescr)
+            setfield_gc(p1, 102, descr=blendescr)
+            p2 = call_malloc_nursery( \
+                                %(bdescr.basesize + 104)d)
+            setfield_gc(p2, 8765, descr=tiddescr)
+            setfield_gc(p2, 103, descr=blendescr)
+            jump()
+        """)
+
+    def test_rewrite_assembler_huge_size(self):
+        # "huge" is defined as "larger than 0xffffff bytes, or 16MB"
+        self.check_rewrite("""
+            []
+            p0 = new_array(20000000, descr=bdescr)
+            jump()
+        """, """
+            []
+            p0 = call_malloc_gc(ConstClass(malloc_array), 1, \
+                                %(bdescr.tid)d, 20000000,    \
+                                descr=malloc_array_descr)
+
jump() + """) + + def test_new_with_vtable(self): + self.check_rewrite(""" + [] + p0 = new_with_vtable(ConstClass(o_vtable)) + jump() + """, """ + [p1] + p0 = call_malloc_nursery(104) # rounded up + setfield_gc(p0, 9315, descr=tiddescr) + setfield_gc(p0, ConstClass(o_vtable), descr=vtable_descr) + jump() + """) + + def test_new_with_vtable_too_big(self): + self.gc_ll_descr.max_size_of_young_obj = 100 + self.check_rewrite(""" + [] + p0 = new_with_vtable(ConstClass(o_vtable)) + jump() + """, """ + [p1] + p0 = call_malloc_gc(ConstClass(malloc_fixedsize), 104, \ + descr=malloc_fixedsize_descr) + setfield_gc(p0, 9315, descr=tiddescr) + setfield_gc(p0, ConstClass(o_vtable), descr=vtable_descr) + jump() + """) + + def test_rewrite_assembler_newstr_newunicode(self): + self.check_rewrite(""" + [i2] + p0 = newstr(14) + p1 = newunicode(10) + p2 = newunicode(i2) + p3 = newstr(i2) + jump() + """, """ + [i2] + p0 = call_malloc_nursery( \ + %(strdescr.basesize + 16 * strdescr.itemsize + \ + unicodedescr.basesize + 10 * unicodedescr.itemsize)d) + setfield_gc(p0, %(strdescr.tid)d, descr=tiddescr) + setfield_gc(p0, 14, descr=strlendescr) + p1 = int_add(p0, %(strdescr.basesize + 16 * strdescr.itemsize)d) + setfield_gc(p1, %(unicodedescr.tid)d, descr=tiddescr) + setfield_gc(p1, 10, descr=unicodelendescr) + p2 = call_malloc_gc(ConstClass(malloc_unicode), i2, \ + descr=malloc_unicode_descr) + p3 = call_malloc_gc(ConstClass(malloc_str), i2, \ + descr=malloc_str_descr) + jump() + """) + + def test_write_barrier_before_setfield_gc(self): + self.check_rewrite(""" + [p1, p2] + setfield_gc(p1, p2, descr=tzdescr) + jump() + """, """ + [p1, p2] + cond_call_gc_wb(p1, p2, descr=wbdescr) + setfield_raw(p1, p2, descr=tzdescr) + jump() + """) + + def test_write_barrier_before_array_without_from_array(self): + self.gc_ll_descr.write_barrier_descr.has_write_barrier_from_array = ( + lambda cpu: False) + self.check_rewrite(""" + [p1, i2, p3] + setarrayitem_gc(p1, i2, p3, descr=cdescr) + jump() + """, 
""" + [p1, i2, p3] + cond_call_gc_wb(p1, p3, descr=wbdescr) + setarrayitem_raw(p1, i2, p3, descr=cdescr) + jump() + """) + + def test_write_barrier_before_short_array(self): + self.gc_ll_descr.max_size_of_young_obj = 2000 + self.check_rewrite(""" + [i2, p3] + p1 = new_array(129, descr=cdescr) + call(123456) + setarrayitem_gc(p1, i2, p3, descr=cdescr) + jump() + """, """ + [i2, p3] + p1 = call_malloc_nursery( \ + %(cdescr.basesize + 129 * cdescr.itemsize)d) + setfield_gc(p1, 8111, descr=tiddescr) + setfield_gc(p1, 129, descr=clendescr) + call(123456) + cond_call_gc_wb(p1, p3, descr=wbdescr) + setarrayitem_raw(p1, i2, p3, descr=cdescr) + jump() + """) + + def test_write_barrier_before_long_array(self): + # the limit of "being too long" is fixed, arbitrarily, at 130 + self.gc_ll_descr.max_size_of_young_obj = 2000 + self.check_rewrite(""" + [i2, p3] + p1 = new_array(130, descr=cdescr) + call(123456) + setarrayitem_gc(p1, i2, p3, descr=cdescr) + jump() + """, """ + [i2, p3] + p1 = call_malloc_nursery( \ + %(cdescr.basesize + 130 * cdescr.itemsize)d) + setfield_gc(p1, 8111, descr=tiddescr) + setfield_gc(p1, 130, descr=clendescr) + call(123456) + cond_call_gc_wb_array(p1, i2, p3, descr=wbdescr) + setarrayitem_raw(p1, i2, p3, descr=cdescr) + jump() + """) + + def test_write_barrier_before_unknown_array(self): + self.check_rewrite(""" + [p1, i2, p3] + setarrayitem_gc(p1, i2, p3, descr=cdescr) + jump() + """, """ + [p1, i2, p3] + cond_call_gc_wb_array(p1, i2, p3, descr=wbdescr) + setarrayitem_raw(p1, i2, p3, descr=cdescr) + jump() + """) + + def test_label_makes_size_unknown(self): + self.check_rewrite(""" + [i2, p3] + p1 = new_array(5, descr=cdescr) + label(p1, i2, p3) + setarrayitem_gc(p1, i2, p3, descr=cdescr) + jump() + """, """ + [i2, p3] + p1 = call_malloc_nursery( \ + %(cdescr.basesize + 5 * cdescr.itemsize)d) + setfield_gc(p1, 8111, descr=tiddescr) + setfield_gc(p1, 5, descr=clendescr) + label(p1, i2, p3) + cond_call_gc_wb_array(p1, i2, p3, descr=wbdescr) + 
setarrayitem_raw(p1, i2, p3, descr=cdescr) + jump() + """) + + def test_write_barrier_before_setinteriorfield_gc(self): + S1 = lltype.GcStruct('S1') + INTERIOR = lltype.GcArray(('z', lltype.Ptr(S1))) + interiordescr = get_array_descr(self.gc_ll_descr, INTERIOR) + interiordescr.tid = 1291 + interiorlendescr = interiordescr.lendescr + interiorzdescr = get_interiorfield_descr(self.gc_ll_descr, + INTERIOR, 'z') + self.check_rewrite(""" + [p1, p2] + setinteriorfield_gc(p1, 0, p2, descr=interiorzdescr) + jump(p1, p2) + """, """ + [p1, p2] + cond_call_gc_wb(p1, p2, descr=wbdescr) + setinteriorfield_raw(p1, 0, p2, descr=interiorzdescr) + jump(p1, p2) + """, interiorzdescr=interiorzdescr) + + def test_initialization_store(self): + self.check_rewrite(""" + [p1] + p0 = new(descr=tdescr) + setfield_gc(p0, p1, descr=tzdescr) + jump() + """, """ + [p1] + p0 = call_malloc_nursery(%(tdescr.size)d) + setfield_gc(p0, 5678, descr=tiddescr) + setfield_gc(p0, p1, descr=tzdescr) + jump() + """) + + def test_initialization_store_2(self): + self.check_rewrite(""" + [] + p0 = new(descr=tdescr) + p1 = new(descr=sdescr) + setfield_gc(p0, p1, descr=tzdescr) + jump() + """, """ + [] + p0 = call_malloc_nursery(%(tdescr.size + sdescr.size)d) + setfield_gc(p0, 5678, descr=tiddescr) + p1 = int_add(p0, %(tdescr.size)d) + setfield_gc(p1, 1234, descr=tiddescr) + # <<<no cond_call_gc_wb here>>> + setfield_gc(p0, p1, descr=tzdescr) + jump() + """) + + def test_initialization_store_array(self): + self.check_rewrite(""" + [p1, i2] + p0 = new_array(5, descr=cdescr) + setarrayitem_gc(p0, i2, p1, descr=cdescr) + jump() + """, """ + [p1, i2] + p0 = call_malloc_nursery( \ + %(cdescr.basesize + 5 * cdescr.itemsize)d) + setfield_gc(p0, 8111, descr=tiddescr) + setfield_gc(p0, 5, descr=clendescr) + setarrayitem_gc(p0, i2, p1, descr=cdescr) + jump() + """) + + def test_non_initialization_store(self): + self.check_rewrite(""" + [i0] + p0 = new(descr=tdescr) + p1 = newstr(i0) + setfield_gc(p0, p1, descr=tzdescr) + jump() + """, """ + [i0] + 
p0 = call_malloc_nursery(%(tdescr.size)d) + setfield_gc(p0, 5678, descr=tiddescr) + p1 = call_malloc_gc(ConstClass(malloc_str), i0, \ + descr=malloc_str_descr) + cond_call_gc_wb(p0, p1, descr=wbdescr) + setfield_raw(p0, p1, descr=tzdescr) + jump() + """) + + def test_non_initialization_store_label(self): + self.check_rewrite(""" + [p1] + p0 = new(descr=tdescr) + label(p0, p1) + setfield_gc(p0, p1, descr=tzdescr) + jump() + """, """ + [p1] + p0 = call_malloc_nursery(%(tdescr.size)d) + setfield_gc(p0, 5678, descr=tiddescr) + label(p0, p1) + cond_call_gc_wb(p0, p1, descr=wbdescr) + setfield_raw(p0, p1, descr=tzdescr) + jump() + """) diff --git a/pypy/jit/backend/llsupport/test/test_runner.py b/pypy/jit/backend/llsupport/test/test_runner.py --- a/pypy/jit/backend/llsupport/test/test_runner.py +++ b/pypy/jit/backend/llsupport/test/test_runner.py @@ -8,6 +8,12 @@ class MyLLCPU(AbstractLLCPU): supports_floats = True + + class assembler(object): + @staticmethod + def set_debug(flag): + pass + def compile_loop(self, inputargs, operations, looptoken): py.test.skip("llsupport test: cannot compile operations") diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -17,6 +17,7 @@ from pypy.rpython.llinterp import LLException from pypy.jit.codewriter import heaptracker, longlong from pypy.rlib.rarithmetic import intmask +from pypy.jit.backend.detect_cpu import autodetect_main_model_and_size def boxfloat(x): return BoxFloat(longlong.getfloatstorage(x)) @@ -27,6 +28,9 @@ class Runner(object): + add_loop_instructions = ['overload for a specific cpu'] + bridge_loop_instructions = ['overload for a specific cpu'] + def execute_operation(self, opname, valueboxes, result_type, descr=None): inputargs, operations = self._get_single_operation_list(opname, result_type, @@ -2930,6 +2934,8 @@ # overflowing value: fail = self.cpu.execute_token(looptoken, sys.maxint // 4 + 1) 
assert fail.identifier == excdescr.identifier + exc = self.cpu.grab_exc_value() + assert exc == "memoryerror!" def test_compile_loop_with_target(self): i0 = BoxInt() @@ -2972,6 +2978,56 @@ res = self.cpu.get_latest_value_int(0) assert res == -10 + def test_compile_asmlen(self): + from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU + if not isinstance(self.cpu, AbstractLLCPU): + py.test.skip("pointless test on non-asm") + from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + import ctypes + ops = """ + [i2] + i0 = same_as(i2) # but forced to be in a register + label(i0, descr=1) + i1 = int_add(i0, i0) + guard_true(i1, descr=faildescr) [i1] + jump(i1, descr=1) + """ + faildescr = BasicFailDescr(2) + loop = parse(ops, self.cpu, namespace=locals()) + faildescr = loop.operations[-2].getdescr() + jumpdescr = loop.operations[-1].getdescr() + bridge_ops = """ + [i0] + jump(i0, descr=jumpdescr) + """ + bridge = parse(bridge_ops, self.cpu, namespace=locals()) + looptoken = JitCellToken() + self.cpu.assembler.set_debug(False) + info = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs, + bridge.operations, + looptoken) + self.cpu.assembler.set_debug(True) # always on untranslated + assert info.asmlen != 0 + cpuname = autodetect_main_model_and_size() + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + + def checkops(mc, ops): + assert len(mc) == len(ops) + for i in range(len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i]) + + data = ctypes.string_at(info.asmaddr, info.asmlen) + mc = list(machine_code_dump(data, info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.add_loop_instructions) + data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) + mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) + lines = [line for line in mc if 
line.count('\t') == 2] + checkops(lines, self.bridge_loop_instructions) + + def test_compile_bridge_with_target(self): # This test creates a loopy piece of code in a bridge, and builds another # unrelated loop that ends in a jump directly to this loopy bit of code. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -7,6 +7,7 @@ from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper +from pypy.rlib.jit import AsmInfo from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, gpr_reg_mgr_cls, _valid_addressing_size) @@ -39,6 +40,7 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.codewriter import longlong from pypy.rlib.rarithmetic import intmask +from pypy.rlib.objectmodel import compute_unique_id # darwin requires the stack to be 16 bytes aligned on calls. 
Same for gcc 4.5.0, # better safe than sorry @@ -58,7 +60,8 @@ self.is_guard_not_invalidated = is_guard_not_invalidated DEBUG_COUNTER = lltype.Struct('DEBUG_COUNTER', ('i', lltype.Signed), - ('bridge', lltype.Signed), # 0 or 1 + ('type', lltype.Char), # 'b'ridge, 'l'abel or + # 'e'ntry point ('number', lltype.Signed)) class Assembler386(object): @@ -70,10 +73,6 @@ self.cpu = cpu self.verbose = False self.rtyper = cpu.rtyper - self.malloc_func_addr = 0 - self.malloc_array_func_addr = 0 - self.malloc_str_func_addr = 0 - self.malloc_unicode_func_addr = 0 self.fail_boxes_int = values_array(lltype.Signed, failargs_limit) self.fail_boxes_ptr = values_array(llmemory.GCREF, failargs_limit) self.fail_boxes_float = values_array(longlong.FLOATSTORAGE, @@ -108,20 +107,6 @@ # the address of the function called by 'new' gc_ll_descr = self.cpu.gc_ll_descr gc_ll_descr.initialize() - ll_new = gc_ll_descr.get_funcptr_for_new() - self.malloc_func_addr = rffi.cast(lltype.Signed, ll_new) - if gc_ll_descr.get_funcptr_for_newarray is not None: - ll_new_array = gc_ll_descr.get_funcptr_for_newarray() - self.malloc_array_func_addr = rffi.cast(lltype.Signed, - ll_new_array) - if gc_ll_descr.get_funcptr_for_newstr is not None: - ll_new_str = gc_ll_descr.get_funcptr_for_newstr() - self.malloc_str_func_addr = rffi.cast(lltype.Signed, - ll_new_str) - if gc_ll_descr.get_funcptr_for_newunicode is not None: - ll_new_unicode = gc_ll_descr.get_funcptr_for_newunicode() - self.malloc_unicode_func_addr = rffi.cast(lltype.Signed, - ll_new_unicode) self.memcpy_addr = self.cpu.cast_ptr_to_int(support.memcpy_fn) self._build_failure_recovery(False) self._build_failure_recovery(True) @@ -165,12 +150,15 @@ def finish_once(self): if self._debug: debug_start('jit-backend-counts') - for struct in self.loop_run_counters: - if struct.bridge: - prefix = 'bridge ' + for i in range(len(self.loop_run_counters)): + struct = self.loop_run_counters[i] + if struct.type == 'l': + prefix = 'TargetToken(%d)' % struct.number + 
elif struct.type == 'b': + prefix = 'bridge ' + str(struct.number) else: - prefix = 'loop ' - debug_print(prefix + str(struct.number) + ':' + str(struct.i)) + prefix = 'entry ' + str(struct.number) + debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') def _build_float_constants(self): @@ -275,7 +263,8 @@ # self.mc = codebuf.MachineCodeBlockWrapper() # call on_leave_jitted_save_exc() - addr = self.cpu.get_on_leave_jitted_int(save_exception=True) + addr = self.cpu.get_on_leave_jitted_int(save_exception=True, + default_to_memoryerror=True) self.mc.CALL(imm(addr)) self.mc.MOV_ri(eax.value, self.cpu.propagate_exception_v) self._call_footer() @@ -423,6 +412,7 @@ '''adds the following attributes to looptoken: _x86_function_addr (address of the generated func, as an int) _x86_loop_code (debug: addr of the start of the ResOps) + _x86_fullsize (debug: full size including failure) _x86_debug_checksum ''' # XXX this function is too longish and contains some code @@ -439,8 +429,8 @@ self.setup(looptoken) if log: - self._register_counter(False, looptoken.number) - operations = self._inject_debugging_code(looptoken, operations) + operations = self._inject_debugging_code(looptoken, operations, + 'e', looptoken.number) regalloc = RegAlloc(self, self.cpu.translate_support_code) # @@ -488,7 +478,8 @@ name = "Loop # %s: %s" % (looptoken.number, loopname) self.cpu.profile_agent.native_code_written(name, rawstart, full_size) - return ops_offset + return AsmInfo(ops_offset, rawstart + looppos, + size_excluding_failure_stuff - looppos) def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): @@ -497,17 +488,12 @@ assert len(set(inputargs)) == len(inputargs) descr_number = self.cpu.get_fail_descr_number(faildescr) - try: - failure_recovery = self._find_failure_recovery_bytecode(faildescr) - except ValueError: - debug_print("Bridge out of guard", descr_number, - "was already compiled!") - return + failure_recovery = 
self._find_failure_recovery_bytecode(faildescr) self.setup(original_loop_token) if log: - self._register_counter(True, descr_number) - operations = self._inject_debugging_code(faildescr, operations) + operations = self._inject_debugging_code(faildescr, operations, + 'b', descr_number) arglocs = self.rebuild_faillocs_from_descr(failure_recovery) if not we_are_translated(): @@ -515,6 +501,7 @@ [loc.assembler() for loc in faildescr._x86_debug_faillocs]) regalloc = RegAlloc(self, self.cpu.translate_support_code) fail_depths = faildescr._x86_current_depths + startpos = self.mc.get_relative_pos() operations = regalloc.prepare_bridge(fail_depths, inputargs, arglocs, operations, self.current_clt.allgcrefs) @@ -549,7 +536,7 @@ name = "Bridge # %s" % (descr_number,) self.cpu.profile_agent.native_code_written(name, rawstart, fullsize) - return ops_offset + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def write_pending_failure_recoveries(self): # for each pending guard, generate the code of the recovery stub @@ -614,22 +601,29 @@ return self.mc.materialize(self.cpu.asmmemmgr, allblocks, self.cpu.gc_ll_descr.gcrootmap) - def _register_counter(self, bridge, number): - if self._debug: - # YYY very minor leak -- we need the counters to stay alive - # forever, just because we want to report them at the end - # of the process - struct = lltype.malloc(DEBUG_COUNTER, flavor='raw', - track_allocation=False) - struct.i = 0 - struct.bridge = int(bridge) + def _register_counter(self, tp, number, token): + # YYY very minor leak -- we need the counters to stay alive + # forever, just because we want to report them at the end + # of the process + struct = lltype.malloc(DEBUG_COUNTER, flavor='raw', + track_allocation=False) + struct.i = 0 + struct.type = tp + if tp == 'b' or tp == 'e': struct.number = number - self.loop_run_counters.append(struct) + else: + assert token + struct.number = compute_unique_id(token) + self.loop_run_counters.append(struct) + return struct 
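The _register_counter and finish_once changes above keep one raw counter record per compiled entry point ('e'), bridge ('b') or label ('l'), labels being keyed by the unique id of their TargetToken. A plain-Python sketch of that bookkeeping, with dicts standing in for the raw DEBUG_COUNTER structs:

```python
# Counter records live forever (the "very minor leak" noted above) so
# they can still be reported when the process shuts down.
counters = []

def register_counter(tp, number, token=None):
    assert tp in ('e', 'b', 'l')
    if tp == 'l':
        assert token is not None
        number = id(token)  # stand-in for compute_unique_id(token)
    struct = {'i': 0, 'type': tp, 'number': number}
    counters.append(struct)
    return struct

def finish_once():
    # Mirrors the new prefix logic: label / bridge / entry counters.
    lines = []
    for s in counters:
        if s['type'] == 'l':
            prefix = 'TargetToken(%d)' % s['number']
        elif s['type'] == 'b':
            prefix = 'bridge %d' % s['number']
        else:
            prefix = 'entry %d' % s['number']
        lines.append(prefix + ':' + str(s['i']))
    return lines
```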
def _find_failure_recovery_bytecode(self, faildescr): adr_jump_offset = faildescr._x86_adr_jump_offset if adr_jump_offset == 0: - raise ValueError + # This case should be prevented by the logic in compile.py: + # look for CNT_BUSY_FLAG, which disables tracing from a guard + # when another tracing from the same guard is already in progress. + raise BridgeAlreadyCompiled # follow the JMP/Jcond p = rffi.cast(rffi.INTP, adr_jump_offset) adr_target = adr_jump_offset + 4 + rffi.cast(lltype.Signed, p[0]) @@ -668,27 +662,36 @@ targettoken._x86_loop_code += rawstart self.target_tokens_currently_compiling = None + def _append_debugging_code(self, operations, tp, number, token): + counter = self._register_counter(tp, number, token) + c_adr = ConstInt(rffi.cast(lltype.Signed, counter)) + box = BoxInt() + box2 = BoxInt() + ops = [ResOperation(rop.GETFIELD_RAW, [c_adr], + box, descr=self.debug_counter_descr), + ResOperation(rop.INT_ADD, [box, ConstInt(1)], box2), + ResOperation(rop.SETFIELD_RAW, [c_adr, box2], + None, descr=self.debug_counter_descr)] + operations.extend(ops) + @specialize.argtype(1) - def _inject_debugging_code(self, looptoken, operations): + def _inject_debugging_code(self, looptoken, operations, tp, number): if self._debug: # before doing anything, let's increase a counter s = 0 for op in operations: s += op.getopnum() looptoken._x86_debug_checksum = s - c_adr = ConstInt(rffi.cast(lltype.Signed, - self.loop_run_counters[-1])) - box = BoxInt() - box2 = BoxInt() - ops = [ResOperation(rop.GETFIELD_RAW, [c_adr], - box, descr=self.debug_counter_descr), - ResOperation(rop.INT_ADD, [box, ConstInt(1)], box2), - ResOperation(rop.SETFIELD_RAW, [c_adr, box2], - None, descr=self.debug_counter_descr)] - if operations[0].getopnum() == rop.LABEL: - operations = [operations[0]] + ops + operations[1:] - else: - operations = ops + operations + + newoperations = [] + self._append_debugging_code(newoperations, tp, number, + None) + for op in operations: + newoperations.append(op) 
+ if op.getopnum() == rop.LABEL: + self._append_debugging_code(newoperations, 'l', number, + op.getdescr()) + operations = newoperations return operations def _assemble(self, regalloc, operations): @@ -809,7 +812,10 @@ target = newlooptoken._x86_function_addr mc = codebuf.MachineCodeBlockWrapper() mc.JMP(imm(target)) - assert mc.get_relative_pos() <= 13 # keep in sync with prepare_loop() + if WORD == 4: # keep in sync with prepare_loop() + assert mc.get_relative_pos() == 5 + else: + assert mc.get_relative_pos() <= 13 mc.copy_to_raw_memory(oldadr) def dump(self, text): @@ -865,8 +871,8 @@ high_part = rffi.cast(rffi.CArrayPtr(rffi.INT), from_loc.value)[1] low_part = intmask(low_part) high_part = intmask(high_part) - self.mc.MOV_bi(to_loc.value, low_part) - self.mc.MOV_bi(to_loc.value + 4, high_part) + self.mc.MOV32_bi(to_loc.value, low_part) + self.mc.MOV32_bi(to_loc.value + 4, high_part) def regalloc_perform(self, op, arglocs, resloc): genop_list[op.getopnum()](self, op, arglocs, resloc) @@ -1357,46 +1363,10 @@ self.mc.SHR_ri(resloc.value, 7) self.mc.AND_ri(resloc.value, 1) - def genop_new_with_vtable(self, op, arglocs, result_loc): - assert result_loc is eax - loc_vtable = arglocs[-1] - assert isinstance(loc_vtable, ImmedLoc) - arglocs = arglocs[:-1] - self.call(self.malloc_func_addr, arglocs, eax) - self.propagate_memoryerror_if_eax_is_null() - self.set_vtable(eax, loc_vtable) + # ---------- - def set_vtable(self, loc, loc_vtable): - if self.cpu.vtable_offset is not None: - assert isinstance(loc, RegLoc) - assert isinstance(loc_vtable, ImmedLoc) - self.mc.MOV(mem(loc, self.cpu.vtable_offset), loc_vtable) - - def set_new_array_length(self, loc, ofs_length, loc_num_elem): - assert isinstance(loc, RegLoc) - assert isinstance(loc_num_elem, ImmedLoc) - self.mc.MOV(mem(loc, ofs_length), loc_num_elem) - - # XXX genop_new is abused for all varsized mallocs with Boehm, for now - # (instead of genop_new_array, genop_newstr, genop_newunicode) - def genop_new(self, op, 
arglocs, result_loc): - assert result_loc is eax - self.call(self.malloc_func_addr, arglocs, eax) - self.propagate_memoryerror_if_eax_is_null() - - def genop_new_array(self, op, arglocs, result_loc): - assert result_loc is eax - self.call(self.malloc_array_func_addr, arglocs, eax) - self.propagate_memoryerror_if_eax_is_null() - - def genop_newstr(self, op, arglocs, result_loc): - assert result_loc is eax - self.call(self.malloc_str_func_addr, arglocs, eax) - self.propagate_memoryerror_if_eax_is_null() - - def genop_newunicode(self, op, arglocs, result_loc): - assert result_loc is eax - self.call(self.malloc_unicode_func_addr, arglocs, eax) + def genop_call_malloc_gc(self, op, arglocs, result_loc): + self.genop_call(op, arglocs, result_loc) self.propagate_memoryerror_if_eax_is_null() def propagate_memoryerror_if_eax_is_null(self): @@ -2065,6 +2035,8 @@ self._genop_call(op, arglocs, resloc, force_index) def _genop_call(self, op, arglocs, resloc, force_index): + from pypy.jit.backend.llsupport.descr import CallDescr + sizeloc = arglocs[0] assert isinstance(sizeloc, ImmedLoc) size = sizeloc.value @@ -2079,13 +2051,16 @@ else: tmp = eax + descr = op.getdescr() + assert isinstance(descr, CallDescr) + self._emit_call(force_index, x, arglocs, 3, tmp=tmp, - argtypes=op.getdescr().get_arg_types(), - callconv=op.getdescr().get_call_conv()) + argtypes=descr.get_arg_types(), + callconv=descr.get_call_conv()) if IS_X86_32 and isinstance(resloc, StackLoc) and resloc.type == FLOAT: # a float or a long long return - if op.getdescr().get_return_type() == 'L': + if descr.get_result_type() == 'L': self.mc.MOV_br(resloc.value, eax.value) # long long self.mc.MOV_br(resloc.value + 4, edx.value) # XXX should ideally not move the result on the stack, @@ -2094,7 +2069,7 @@ # can just be always a stack location else: self.mc.FSTPL_b(resloc.value) # float return - elif op.getdescr().get_return_type() == 'S': + elif descr.get_result_type() == 'S': # singlefloat return assert resloc is eax if 
IS_X86_32: @@ -2292,9 +2267,9 @@ # # Reset the vable token --- XXX really too much special logic here:-( if jd.index_of_virtualizable >= 0: - from pypy.jit.backend.llsupport.descr import BaseFieldDescr + from pypy.jit.backend.llsupport.descr import FieldDescr fielddescr = jd.vable_token_descr - assert isinstance(fielddescr, BaseFieldDescr) + assert isinstance(fielddescr, FieldDescr) ofs = fielddescr.offset self.mc.MOV(eax, arglocs[1]) self.mc.MOV_mi((eax.value, ofs), 0) @@ -2497,9 +2472,8 @@ else: self.mc.JMP(imm(target)) - def malloc_cond(self, nursery_free_adr, nursery_top_adr, size, tid): - size = max(size, self.cpu.gc_ll_descr.minimal_size_in_nursery) - size = (size + WORD-1) & ~(WORD-1) # round up + def malloc_cond(self, nursery_free_adr, nursery_top_adr, size): + assert size & (WORD-1) == 0 # must be correctly aligned self.mc.MOV(eax, heap(nursery_free_adr)) self.mc.LEA_rm(edx.value, (eax.value, size)) self.mc.CMP(edx, heap(nursery_top_adr)) @@ -2535,9 +2509,6 @@ offset = self.mc.get_relative_pos() - jmp_adr assert 0 < offset <= 127 self.mc.overwrite(jmp_adr-1, chr(offset)) - # on 64-bits, 'tid' is a value that fits in 31 bits - assert rx86.fits_in_32bits(tid) - self.mc.MOV_mi((eax.value, 0), tid) self.mc.MOV(heap(nursery_free_adr), edx) genop_discard_list = [Assembler386.not_implemented_op_discard] * rop._LAST @@ -2584,3 +2555,6 @@ def not_implemented(msg): os.write(2, '[x86/asm] %s\n' % msg) raise NotImplementedError(msg) + +class BridgeAlreadyCompiled(Exception): + pass diff --git a/pypy/jit/backend/x86/jump.py b/pypy/jit/backend/x86/jump.py --- a/pypy/jit/backend/x86/jump.py +++ b/pypy/jit/backend/x86/jump.py @@ -17,7 +17,10 @@ key = src._getregkey() if key in srccount: if key == dst_locations[i]._getregkey(): - srccount[key] = -sys.maxint # ignore a move "x = x" + # ignore a move "x = x" + # setting any "large enough" negative value is ok, but + # be careful of overflows, don't use -sys.maxint + srccount[key] = -len(dst_locations) - 1 pending_dests -= 1 
else: srccount[key] += 1 diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -16,8 +16,8 @@ from pypy.jit.codewriter import heaptracker, longlong from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.metainterp.resoperation import rop -from pypy.jit.backend.llsupport.descr import BaseFieldDescr, BaseArrayDescr -from pypy.jit.backend.llsupport.descr import BaseCallDescr, BaseSizeDescr +from pypy.jit.backend.llsupport.descr import FieldDescr, ArrayDescr +from pypy.jit.backend.llsupport.descr import CallDescr, SizeDescr from pypy.jit.backend.llsupport.descr import InteriorFieldDescr from pypy.jit.backend.llsupport.regalloc import FrameManager, RegisterManager,\ TempBox @@ -188,7 +188,10 @@ # note: we need to make a copy of inputargs because possibly_free_vars # is also used on op args, which is a non-resizable list self.possibly_free_vars(list(inputargs)) - self.min_bytes_before_label = 13 + if WORD == 4: # see redirect_call_assembler() + self.min_bytes_before_label = 5 + else: + self.min_bytes_before_label = 13 return operations def prepare_bridge(self, prev_depths, inputargs, arglocs, operations, @@ -741,7 +744,7 @@ self.xrm.possibly_free_var(op.getarg(0)) def consider_cast_int_to_float(self, op): - loc0 = self.rm.loc(op.getarg(0)) + loc0 = self.rm.make_sure_var_in_reg(op.getarg(0)) loc1 = self.xrm.force_allocate_reg(op.result) self.Perform(op, [loc0], loc1) self.rm.possibly_free_var(op.getarg(0)) @@ -870,9 +873,9 @@ def _consider_call(self, op, guard_not_forced_op=None): calldescr = op.getdescr() - assert isinstance(calldescr, BaseCallDescr) + assert isinstance(calldescr, CallDescr) assert len(calldescr.arg_classes) == op.numargs() - 1 - size = calldescr.get_result_size(self.translate_support_code) + size = calldescr.get_result_size() sign = calldescr.is_result_signed() if sign: sign_loc = imm1 @@ -917,12 +920,15 @@ consider_call_release_gil = 
consider_call_may_force + def consider_call_malloc_gc(self, op): + self._consider_call(op) + def consider_call_assembler(self, op, guard_op): descr = op.getdescr() assert isinstance(descr, JitCellToken) jd = descr.outermost_jitdriver_sd assert jd is not None - size = jd.portal_calldescr.get_result_size(self.translate_support_code) + size = jd.portal_calldescr.get_result_size() vable_index = jd.index_of_virtualizable if vable_index >= 0: self.rm._sync_var(op.getarg(vable_index)) @@ -957,21 +963,10 @@ consider_cond_call_gc_wb_array = consider_cond_call_gc_wb - def fastpath_malloc_fixedsize(self, op, descr): - assert isinstance(descr, BaseSizeDescr) - self._do_fastpath_malloc(op, descr.size, descr.tid) - - def fastpath_malloc_varsize(self, op, arraydescr, num_elem): - assert isinstance(arraydescr, BaseArrayDescr) - ofs_length = arraydescr.get_ofs_length(self.translate_support_code) - basesize = arraydescr.get_base_size(self.translate_support_code) - itemsize = arraydescr.get_item_size(self.translate_support_code) - size = basesize + itemsize * num_elem - self._do_fastpath_malloc(op, size, arraydescr.tid) - self.assembler.set_new_array_length(eax, ofs_length, imm(num_elem)) - - def _do_fastpath_malloc(self, op, size, tid): - gc_ll_descr = self.assembler.cpu.gc_ll_descr + def consider_call_malloc_nursery(self, op): + size_box = op.getarg(0) + assert isinstance(size_box, ConstInt) + size = size_box.getint() self.rm.force_allocate_reg(op.result, selected_reg=eax) # # We need edx as a temporary, but otherwise don't save any more @@ -980,86 +975,39 @@ self.rm.force_allocate_reg(tmp_box, selected_reg=edx) self.rm.possibly_free_var(tmp_box) # + gc_ll_descr = self.assembler.cpu.gc_ll_descr self.assembler.malloc_cond( gc_ll_descr.get_nursery_free_addr(), gc_ll_descr.get_nursery_top_addr(), - size, tid, - ) - - def consider_new(self, op): - gc_ll_descr = self.assembler.cpu.gc_ll_descr - if gc_ll_descr.can_inline_malloc(op.getdescr()): - self.fastpath_malloc_fixedsize(op, 
op.getdescr()) - else: - args = gc_ll_descr.args_for_new(op.getdescr()) - arglocs = [imm(x) for x in args] - return self._call(op, arglocs) - - def consider_new_with_vtable(self, op): - classint = op.getarg(0).getint() - descrsize = heaptracker.vtable2descr(self.assembler.cpu, classint) - if self.assembler.cpu.gc_ll_descr.can_inline_malloc(descrsize): - self.fastpath_malloc_fixedsize(op, descrsize) - self.assembler.set_vtable(eax, imm(classint)) - # result of fastpath malloc is in eax - else: - args = self.assembler.cpu.gc_ll_descr.args_for_new(descrsize) - arglocs = [imm(x) for x in args] - arglocs.append(self.loc(op.getarg(0))) - return self._call(op, arglocs) - - def consider_newstr(self, op): - loc = self.loc(op.getarg(0)) - return self._call(op, [loc]) - - def consider_newunicode(self, op): - loc = self.loc(op.getarg(0)) - return self._call(op, [loc]) - - def consider_new_array(self, op): - gc_ll_descr = self.assembler.cpu.gc_ll_descr - box_num_elem = op.getarg(0) - if isinstance(box_num_elem, ConstInt): - num_elem = box_num_elem.value - if gc_ll_descr.can_inline_malloc_varsize(op.getdescr(), - num_elem): - self.fastpath_malloc_varsize(op, op.getdescr(), num_elem) - return - args = self.assembler.cpu.gc_ll_descr.args_for_new_array( - op.getdescr()) - arglocs = [imm(x) for x in args] - arglocs.append(self.loc(box_num_elem)) - self._call(op, arglocs) + size) def _unpack_arraydescr(self, arraydescr): - assert isinstance(arraydescr, BaseArrayDescr) - ofs_length = arraydescr.get_ofs_length(self.translate_support_code) - ofs = arraydescr.get_base_size(self.translate_support_code) - size = arraydescr.get_item_size(self.translate_support_code) - ptr = arraydescr.is_array_of_pointers() + assert isinstance(arraydescr, ArrayDescr) + ofs = arraydescr.basesize + size = arraydescr.itemsize sign = arraydescr.is_item_signed() - return size, ofs, ofs_length, ptr, sign + return size, ofs, sign def _unpack_fielddescr(self, fielddescr): - assert isinstance(fielddescr, 
BaseFieldDescr) + assert isinstance(fielddescr, FieldDescr) ofs = fielddescr.offset - size = fielddescr.get_field_size(self.translate_support_code) - ptr = fielddescr.is_pointer_field() + size = fielddescr.field_size sign = fielddescr.is_field_signed() - return imm(ofs), imm(size), ptr, sign + return imm(ofs), imm(size), sign + _unpack_fielddescr._always_inline_ = True def _unpack_interiorfielddescr(self, descr): assert isinstance(descr, InteriorFieldDescr) arraydescr = descr.arraydescr - ofs = arraydescr.get_base_size(self.translate_support_code) - itemsize = arraydescr.get_item_size(self.translate_support_code) - fieldsize = descr.fielddescr.get_field_size(self.translate_support_code) + ofs = arraydescr.basesize + itemsize = arraydescr.itemsize + fieldsize = descr.fielddescr.field_size sign = descr.fielddescr.is_field_signed() ofs += descr.fielddescr.offset return imm(ofs), imm(itemsize), imm(fieldsize), sign def consider_setfield_gc(self, op): - ofs_loc, size_loc, _, _ = self._unpack_fielddescr(op.getdescr()) + ofs_loc, size_loc, _ = self._unpack_fielddescr(op.getdescr()) assert isinstance(size_loc, ImmedLoc) if size_loc.value == 1: need_lower_byte = True @@ -1117,7 +1065,7 @@ consider_unicodesetitem = consider_strsetitem def consider_setarrayitem_gc(self, op): - itemsize, ofs, _, _, _ = self._unpack_arraydescr(op.getdescr()) + itemsize, ofs, _ = self._unpack_arraydescr(op.getdescr()) args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) if itemsize == 1: @@ -1134,7 +1082,7 @@ consider_setarrayitem_raw = consider_setarrayitem_gc def consider_getfield_gc(self, op): - ofs_loc, size_loc, _, sign = self._unpack_fielddescr(op.getdescr()) + ofs_loc, size_loc, sign = self._unpack_fielddescr(op.getdescr()) args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) self.rm.possibly_free_vars(args) @@ -1150,7 +1098,7 @@ consider_getfield_gc_pure = consider_getfield_gc def consider_getarrayitem_gc(self, op): - itemsize, 
ofs, _, _, sign = self._unpack_arraydescr(op.getdescr()) + itemsize, ofs, sign = self._unpack_arraydescr(op.getdescr()) args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) ofs_loc = self.rm.make_sure_var_in_reg(op.getarg(1), args) @@ -1229,8 +1177,8 @@ def consider_arraylen_gc(self, op): arraydescr = op.getdescr() - assert isinstance(arraydescr, BaseArrayDescr) - ofs = arraydescr.get_ofs_length(self.translate_support_code) + assert isinstance(arraydescr, ArrayDescr) + ofs = arraydescr.lendescr.offset args = op.getarglist() base_loc = self.rm.make_sure_var_in_reg(op.getarg(0), args) self.rm.possibly_free_vars_for_op(op) diff --git a/pypy/jit/backend/x86/test/test_gc_integration.py b/pypy/jit/backend/x86/test/test_gc_integration.py --- a/pypy/jit/backend/x86/test/test_gc_integration.py +++ b/pypy/jit/backend/x86/test/test_gc_integration.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.jit.codewriter import heaptracker from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.jit.backend.llsupport.descr import GcCache +from pypy.jit.backend.llsupport.descr import GcCache, FieldDescr, FLAG_SIGNED from pypy.jit.backend.llsupport.gc import GcLLDescription from pypy.jit.backend.detect_cpu import getcpuclass from pypy.jit.backend.x86.regalloc import RegAlloc @@ -17,7 +17,7 @@ from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.annlowlevel import llhelper from pypy.rpython.lltypesystem import rclass, rstr -from pypy.jit.backend.llsupport.gc import GcLLDescr_framework, GcPtrFieldDescr +from pypy.jit.backend.llsupport.gc import GcLLDescr_framework from pypy.jit.backend.x86.test.test_regalloc import MockAssembler from pypy.jit.backend.x86.test.test_regalloc import BaseTestRegalloc @@ -41,20 +41,15 @@ return ['compressed'] + shape[1:] class MockGcDescr(GcCache): - def get_funcptr_for_new(self): - return 123 - get_funcptr_for_newarray = get_funcptr_for_new - 
get_funcptr_for_newstr = get_funcptr_for_new - get_funcptr_for_newunicode = get_funcptr_for_new get_malloc_slowpath_addr = None - + write_barrier_descr = None moving_gc = True gcrootmap = MockGcRootMap() def initialize(self): pass - record_constptrs = GcLLDescr_framework.record_constptrs.im_func + _record_constptrs = GcLLDescr_framework._record_constptrs.im_func rewrite_assembler = GcLLDescr_framework.rewrite_assembler.im_func class TestRegallocDirectGcIntegration(object): @@ -170,42 +165,32 @@ ''' self.interpret(ops, [0, 0, 0, 0, 0, 0, 0, 0, 0], run=False) +NOT_INITIALIZED = chr(0xdd) + class GCDescrFastpathMalloc(GcLLDescription): gcrootmap = None - expected_malloc_slowpath_size = WORD*2 + write_barrier_descr = None def __init__(self): - GcCache.__init__(self, False) + GcLLDescription.__init__(self, None) # create a nursery - NTP = rffi.CArray(lltype.Signed) - self.nursery = lltype.malloc(NTP, 16, flavor='raw') - self.addrs = lltype.malloc(rffi.CArray(lltype.Signed), 3, + NTP = rffi.CArray(lltype.Char) + self.nursery = lltype.malloc(NTP, 64, flavor='raw') + for i in range(64): + self.nursery[i] = NOT_INITIALIZED + self.addrs = lltype.malloc(rffi.CArray(lltype.Signed), 2, flavor='raw') self.addrs[0] = rffi.cast(lltype.Signed, self.nursery) - self.addrs[1] = self.addrs[0] + 16*WORD - self.addrs[2] = 0 - # 16 WORDs + self.addrs[1] = self.addrs[0] + 64 + self.calls = [] def malloc_slowpath(size): - assert size == self.expected_malloc_slowpath_size + self.calls.append(size) + # reset the nursery nadr = rffi.cast(lltype.Signed, self.nursery) self.addrs[0] = nadr + size - self.addrs[2] += 1 return nadr - self.malloc_slowpath = malloc_slowpath - self.MALLOC_SLOWPATH = lltype.FuncType([lltype.Signed], - lltype.Signed) - self._counter = 123000 - - def can_inline_malloc(self, descr): - return True - - def get_funcptr_for_new(self): - return 42 -# return llhelper(lltype.Ptr(self.NEW_TP), self.new) - - def init_size_descr(self, S, descr): - descr.tid = self._counter - 
self._counter += 1 + self.generate_function('malloc_nursery', malloc_slowpath, + [lltype.Signed], lltype.Signed) def get_nursery_free_addr(self): return rffi.cast(lltype.Signed, self.addrs) @@ -214,204 +199,61 @@ return rffi.cast(lltype.Signed, self.addrs) + WORD def get_malloc_slowpath_addr(self): - fptr = llhelper(lltype.Ptr(self.MALLOC_SLOWPATH), self.malloc_slowpath) - return rffi.cast(lltype.Signed, fptr) + return self.get_malloc_fn_addr('malloc_nursery') - get_funcptr_for_newarray = None - get_funcptr_for_newstr = None - get_funcptr_for_newunicode = None + def check_nothing_in_nursery(self): + # CALL_MALLOC_NURSERY should not write anything in the nursery + for i in range(64): + assert self.nursery[i] == NOT_INITIALIZED class TestMallocFastpath(BaseTestRegalloc): def setup_method(self, method): cpu = CPU(None, None) - cpu.vtable_offset = WORD cpu.gc_ll_descr = GCDescrFastpathMalloc() cpu.setup_once() + self.cpu = cpu - # hack: specify 'tid' explicitly, because this test is not running - # with the gc transformer - NODE = lltype.GcStruct('node', ('tid', lltype.Signed), - ('value', lltype.Signed)) - nodedescr = cpu.sizeof(NODE) - valuedescr = cpu.fielddescrof(NODE, 'value') - - self.cpu = cpu - self.nodedescr = nodedescr - vtable = lltype.malloc(rclass.OBJECT_VTABLE, immortal=True) - vtable_int = cpu.cast_adr_to_int(llmemory.cast_ptr_to_adr(vtable)) - NODE2 = lltype.GcStruct('node2', - ('parent', rclass.OBJECT), - ('tid', lltype.Signed), - ('vtable', lltype.Ptr(rclass.OBJECT_VTABLE))) - descrsize = cpu.sizeof(NODE2) - heaptracker.register_known_gctype(cpu, vtable, NODE2) - self.descrsize = descrsize - self.vtable_int = vtable_int - - self.namespace = locals().copy() - def test_malloc_fastpath(self): ops = ''' - [i0] - p0 = new(descr=nodedescr) - setfield_gc(p0, i0, descr=valuedescr) - finish(p0) + [] + p0 = call_malloc_nursery(16) + p1 = call_malloc_nursery(32) + p2 = call_malloc_nursery(16) + finish(p0, p1, p2) ''' - self.interpret(ops, [42]) - # check the 
nursery + self.interpret(ops, []) + # check the returned pointers gc_ll_descr = self.cpu.gc_ll_descr - assert gc_ll_descr.nursery[0] == self.nodedescr.tid - assert gc_ll_descr.nursery[1] == 42 nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) - assert gc_ll_descr.addrs[0] == nurs_adr + (WORD*2) - assert gc_ll_descr.addrs[2] == 0 # slowpath never called + ref = self.cpu.get_latest_value_ref + assert rffi.cast(lltype.Signed, ref(0)) == nurs_adr + 0 + assert rffi.cast(lltype.Signed, ref(1)) == nurs_adr + 16 + assert rffi.cast(lltype.Signed, ref(2)) == nurs_adr + 48 + # check the nursery content and state + gc_ll_descr.check_nothing_in_nursery() + assert gc_ll_descr.addrs[0] == nurs_adr + 64 + # slowpath never called + assert gc_ll_descr.calls == [] def test_malloc_slowpath(self): ops = ''' [] - p0 = new(descr=nodedescr) - p1 = new(descr=nodedescr) - p2 = new(descr=nodedescr) - p3 = new(descr=nodedescr) - p4 = new(descr=nodedescr) - p5 = new(descr=nodedescr) - p6 = new(descr=nodedescr) - p7 = new(descr=nodedescr) - p8 = new(descr=nodedescr) - finish(p0, p1, p2, p3, p4, p5, p6, p7, p8) + p0 = call_malloc_nursery(16) + p1 = call_malloc_nursery(32) + p2 = call_malloc_nursery(24) # overflow + finish(p0, p1, p2) ''' self.interpret(ops, []) + # check the returned pointers + gc_ll_descr = self.cpu.gc_ll_descr + nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) + ref = self.cpu.get_latest_value_ref + assert rffi.cast(lltype.Signed, ref(0)) == nurs_adr + 0 + assert rffi.cast(lltype.Signed, ref(1)) == nurs_adr + 16 + assert rffi.cast(lltype.Signed, ref(2)) == nurs_adr + 0 + # check the nursery content and state + gc_ll_descr.check_nothing_in_nursery() + assert gc_ll_descr.addrs[0] == nurs_adr + 24 # this should call slow path once - gc_ll_descr = self.cpu.gc_ll_descr - nadr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) - assert gc_ll_descr.addrs[0] == nadr + (WORD*2) - assert gc_ll_descr.addrs[2] == 1 # slowpath called once - - def test_new_with_vtable(self): - 
ops = ''' - [i0, i1] - p0 = new_with_vtable(ConstClass(vtable)) - guard_class(p0, ConstClass(vtable)) [i0] - finish(i1) - ''' - self.interpret(ops, [0, 1]) - assert self.getint(0) == 1 - gc_ll_descr = self.cpu.gc_ll_descr - assert gc_ll_descr.nursery[0] == self.descrsize.tid - assert gc_ll_descr.nursery[1] == self.vtable_int - nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) - assert gc_ll_descr.addrs[0] == nurs_adr + (WORD*3) - assert gc_ll_descr.addrs[2] == 0 # slowpath never called - - -class Seen(Exception): - pass - -class GCDescrFastpathMallocVarsize(GCDescrFastpathMalloc): - def can_inline_malloc_varsize(self, arraydescr, num_elem): - return num_elem < 5 - def get_funcptr_for_newarray(self): - return 52 - def init_array_descr(self, A, descr): - descr.tid = self._counter - self._counter += 1 - def args_for_new_array(self, descr): - raise Seen("args_for_new_array") - -class TestMallocVarsizeFastpath(BaseTestRegalloc): - def setup_method(self, method): - cpu = CPU(None, None) - cpu.vtable_offset = WORD - cpu.gc_ll_descr = GCDescrFastpathMallocVarsize() - cpu.setup_once() - self.cpu = cpu - - ARRAY = lltype.GcArray(lltype.Signed) - arraydescr = cpu.arraydescrof(ARRAY) - self.arraydescr = arraydescr - ARRAYCHAR = lltype.GcArray(lltype.Char) - arraychardescr = cpu.arraydescrof(ARRAYCHAR) - - self.namespace = locals().copy() - - def test_malloc_varsize_fastpath(self): - # Hack. Running the GcLLDescr_framework without really having - # a complete GC means that we end up with both the tid and the - # length being at offset 0. In this case, so the length overwrites - # the tid. This is of course only the case in this test class. 
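[Editor's note: the tests above exercise CALL_MALLOC_NURSERY, which is a plain bump-pointer allocation with a slow-path fallback when the nursery would overflow. A minimal Python model of that behavior follows; all names (`Nursery`, `malloc`, `malloc_slowpath`) are illustrative, not PyPy's actual API, and the slow path here merely resets the nursery instead of collecting it.]

```python
# Illustrative model of the nursery fast path: bump a "free" pointer
# until the allocation would cross "top", then fall back to a slow path.
class Nursery:
    def __init__(self, size):
        self.free = 0             # like nursery_free: next offset
        self.top = size           # like nursery_top: end of the nursery
        self.slowpath_calls = []  # sizes that took the slow path

    def malloc_slowpath(self, size):
        # Stand-in for the real slow path (which would run a minor
        # collection); here we just record the call and restart at 0.
        self.slowpath_calls.append(size)
        self.free = size
        return 0

    def malloc(self, size):
        result = self.free
        if result + size > self.top:   # the malloc_cond check
            return self.malloc_slowpath(size)
        self.free = result + size      # fast path: bump the pointer
        return result

n = Nursery(64)
p0 = n.malloc(16)   # -> 0
p1 = n.malloc(32)   # -> 16
p2 = n.malloc(24)   # 48 + 24 > 64: slow path, restarts at 0
```

This mirrors the `test_malloc_slowpath` scenario: two fast-path allocations, then one that overflows and calls the slow path exactly once.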
- ops = ''' - [] - p0 = new_array(4, descr=arraydescr) - setarrayitem_gc(p0, 0, 142, descr=arraydescr) - setarrayitem_gc(p0, 3, 143, descr=arraydescr) - finish(p0) - ''' - self.interpret(ops, []) - # check the nursery - gc_ll_descr = self.cpu.gc_ll_descr - assert gc_ll_descr.nursery[0] == 4 - assert gc_ll_descr.nursery[1] == 142 - assert gc_ll_descr.nursery[4] == 143 - nurs_adr = rffi.cast(lltype.Signed, gc_ll_descr.nursery) - assert gc_ll_descr.addrs[0] == nurs_adr + (WORD*5) - assert gc_ll_descr.addrs[2] == 0 # slowpath never called - - def test_malloc_varsize_slowpath(self): - ops = ''' - [] - p0 = new_array(4, descr=arraydescr) - setarrayitem_gc(p0, 0, 420, descr=arraydescr) - setarrayitem_gc(p0, 3, 430, descr=arraydescr) - p1 = new_array(4, descr=arraydescr) - setarrayitem_gc(p1, 0, 421, descr=arraydescr) - setarrayitem_gc(p1, 3, 431, descr=arraydescr) - p2 = new_array(4, descr=arraydescr) - setarrayitem_gc(p2, 0, 422, descr=arraydescr) - setarrayitem_gc(p2, 3, 432, descr=arraydescr) - p3 = new_array(4, descr=arraydescr) - setarrayitem_gc(p3, 0, 423, descr=arraydescr) - setarrayitem_gc(p3, 3, 433, descr=arraydescr) - finish(p0, p1, p2, p3) - ''' - gc_ll_descr = self.cpu.gc_ll_descr - gc_ll_descr.expected_malloc_slowpath_size = 5*WORD - self.interpret(ops, []) - assert gc_ll_descr.addrs[2] == 1 # slowpath called once - - def test_malloc_varsize_too_big(self): - ops = ''' - [] - p0 = new_array(5, descr=arraydescr) - finish(p0) - ''' - py.test.raises(Seen, self.interpret, ops, []) - - def test_malloc_varsize_variable(self): - ops = ''' - [i0] - p0 = new_array(i0, descr=arraydescr) - finish(p0) - ''' - py.test.raises(Seen, self.interpret, ops, []) - - def test_malloc_array_of_char(self): - # check that fastpath_malloc_varsize() respects the alignment - # of the pointer in the nursery - ops = ''' - [] - p1 = new_array(1, descr=arraychardescr) - p2 = new_array(2, descr=arraychardescr) - p3 = new_array(3, descr=arraychardescr) - p4 = new_array(4, 
descr=arraychardescr) - finish(p1, p2, p3, p4) - ''' - self.interpret(ops, []) - p1 = self.getptr(0, llmemory.GCREF) - p2 = self.getptr(1, llmemory.GCREF) - p3 = self.getptr(2, llmemory.GCREF) - p4 = self.getptr(3, llmemory.GCREF) - assert p1._obj.intval & (WORD-1) == 0 # aligned - assert p2._obj.intval & (WORD-1) == 0 # aligned - assert p3._obj.intval & (WORD-1) == 0 # aligned - assert p4._obj.intval & (WORD-1) == 0 # aligned + assert gc_ll_descr.calls == [24] diff --git a/pypy/jit/backend/x86/test/test_jump.py b/pypy/jit/backend/x86/test/test_jump.py --- a/pypy/jit/backend/x86/test/test_jump.py +++ b/pypy/jit/backend/x86/test/test_jump.py @@ -20,6 +20,11 @@ def regalloc_pop(self, loc): self.ops.append(('pop', loc)) + def regalloc_immedmem2mem(self, from_loc, to_loc): + assert isinstance(from_loc, ConstFloatLoc) + assert isinstance(to_loc, StackLoc) + self.ops.append(('immedmem2mem', from_loc, to_loc)) + def got(self, expected): print '------------------------ comparing ---------------------------' for op1, op2 in zip(self.ops, expected): @@ -244,6 +249,13 @@ else: return pick1() # + def pick2c(): + n = random.randrange(-2000, 500) + if n >= 0: + return ConstFloatLoc(n) # n is the address, not really used here + else: + return pick2() + # def pick_dst(fn, count, seen): result = [] while len(result) < count: @@ -280,12 +292,12 @@ if loc.get_width() > WORD: stack[loc.value+WORD] = 'value-hiword-%d' % i else: - assert isinstance(loc, ImmedLoc) + assert isinstance(loc, (ImmedLoc, ConstFloatLoc)) return regs1, regs2, stack # for i in range(500): seen = {} - src_locations2 = [pick2() for i in range(4)] + src_locations2 = [pick2c() for i in range(4)] dst_locations2 = pick_dst(pick2, 4, seen) src_locations1 = [pick1c() for i in range(5)] dst_locations1 = pick_dst(pick1, 5, seen) @@ -312,9 +324,15 @@ return got if isinstance(loc, ImmedLoc): return 'const-%d' % loc.value + if isinstance(loc, ConstFloatLoc): + got = 'constfloat-@%d' % loc.value + if loc.get_width() > WORD: + 
got = (got, 'constfloat-next-@%d' % loc.value) + return got assert 0, loc # def write(loc, newvalue): + assert (type(newvalue) is tuple) == (loc.get_width() > WORD) if isinstance(loc, RegLoc): if loc.is_xmm: regs2[loc.value] = newvalue @@ -337,10 +355,14 @@ for op in assembler.ops: if op[0] == 'mov': src, dst = op[1:] - assert isinstance(src, (RegLoc, StackLoc, ImmedLoc)) - assert isinstance(dst, (RegLoc, StackLoc)) - assert not (isinstance(src, StackLoc) and - isinstance(dst, StackLoc)) + if isinstance(src, ConstFloatLoc): + assert isinstance(dst, RegLoc) + assert dst.is_xmm + else: + assert isinstance(src, (RegLoc, StackLoc, ImmedLoc)) + assert isinstance(dst, (RegLoc, StackLoc)) + assert not (isinstance(src, StackLoc) and + isinstance(dst, StackLoc)) write(dst, read(src)) elif op[0] == 'push': src, = op[1:] @@ -350,6 +372,11 @@ dst, = op[1:] assert isinstance(dst, (RegLoc, StackLoc)) write(dst, extrapushes.pop()) + elif op[0] == 'immedmem2mem': + src, dst = op[1:] + assert isinstance(src, ConstFloatLoc) + assert isinstance(dst, StackLoc) + write(dst, read(src, 8)) else: assert 0, "unknown op: %r" % (op,) assert not extrapushes @@ -358,3 +385,32 @@ assert read(loc, WORD) == src_values1[i] for i, loc in enumerate(dst_locations2): assert read(loc, 8) == src_values2[i] + + +def test_overflow_bug(): + CASE = [ + (-144, -248), # \ cycle + (-248, -144), # / + (-488, -416), # \ two usages of -488 + (-488, -480), # / + (-488, -488), # - one self-application of -488 + ] + class FakeAssembler: + def regalloc_mov(self, src, dst): + print "mov", src, dst + def regalloc_push(self, x): + print "push", x + def regalloc_pop(self, x): + print "pop", x + def regalloc_immedmem2mem(self, x, y): + print "?????????????????????????" 
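[Editor's note: `test_overflow_bug` above feeds `remap_frame_layout` a set of moves containing a two-element cycle, a duplicated source, and a self-move. As background, a "parallel move" (every destination must receive its source's value as of *before* any move) can be sequentialized with a single temporary to break cycles. The sketch below is a rough, generic version of that algorithm, not PyPy's implementation; `TMP` and all names are hypothetical.]

```python
TMP = 'tmp'

def sequentialize(moves):
    """Order parallel moves [(src, dst), ...] into sequential ('mov', s, d)
    steps, using one temporary location to break cycles."""
    moves = [list(m) for m in moves if m[0] != m[1]]  # drop self-moves
    status = ['to_move'] * len(moves)
    steps = []

    def move_one(i):
        status[i] = 'being_moved'
        dst = moves[i][1]
        # Before clobbering dst, serve every move that still reads dst.
        for j in range(len(moves)):
            if moves[j][0] == dst:
                if status[j] == 'to_move':
                    move_one(j)
                elif status[j] == 'being_moved':
                    # Cycle: save the value to TMP, re-point that move.
                    steps.append(('mov', dst, TMP))
                    moves[j][0] = TMP
        steps.append(('mov', moves[i][0], dst))
        status[i] = 'moved'

    for i in range(len(moves)):
        if status[i] == 'to_move':
            move_one(i)
    return steps

# A two-element cycle plus a shared source, like the test's CASE data:
steps = sequentialize([('a', 'b'), ('b', 'a'), ('c', 'd'), ('c', 'e')])
```

Simulating the emitted steps on a value map shows each destination ends up with its source's original value, which is the invariant the test's `read`/`write` machinery checks.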
+ def main(): + srclocs = [StackLoc(9999, x, 'i') for x,y in CASE] + dstlocs = [StackLoc(9999, y, 'i') for x,y in CASE] + remap_frame_layout(FakeAssembler(), srclocs, dstlocs, eax) + # it works when run directly + main() + # but it used to crash when translated, + # because of a -sys.maxint-2 overflowing to sys.maxint + from pypy.rpython.test.test_llinterp import interpret + interpret(main, []) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -33,6 +33,13 @@ # for the individual tests see # ====> ../../test/runner_test.py + add_loop_instructions = ['mov', 'add', 'test', 'je', 'jmp'] + if WORD == 4: + bridge_loop_instructions = ['lea', 'jmp'] + else: + # the 'mov' is part of the 'jmp' so far + bridge_loop_instructions = ['lea', 'mov', 'jmp'] + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -69,6 +76,7 @@ return ctypes.cast(res.value._obj.intval, ctypes.POINTER(item_tp)) def test_allocations(self): + py.test.skip("rewrite or kill") from pypy.rpython.lltypesystem import rstr allocs = [None] @@ -415,12 +423,13 @@ ] inputargs = [i0] debug._log = dlog = debug.DebugLog() - ops_offset = self.cpu.compile_loop(inputargs, operations, looptoken) + info = self.cpu.compile_loop(inputargs, operations, looptoken) + ops_offset = info.ops_offset debug._log = None # assert ops_offset is looptoken._x86_ops_offset - # getfield_raw/int_add/setfield_raw + ops + None - assert len(ops_offset) == 3 + len(operations) + 1 + # 2*(getfield_raw/int_add/setfield_raw) + ops + None + assert len(ops_offset) == 2*3 + len(operations) + 1 assert (ops_offset[operations[0]] <= ops_offset[operations[1]] <= ops_offset[operations[2]] <= @@ -518,16 +527,23 @@ from pypy.tool.logparser import parse_log_file, extract_category from pypy.rlib import debug + targettoken, preambletoken = TargetToken(), TargetToken() 
loop = """ [i0] - label(i0, descr=targettoken) + label(i0, descr=preambletoken) debug_merge_point('xyz', 0) i1 = int_add(i0, 1) i2 = int_ge(i1, 10) guard_false(i2) [] - jump(i1, descr=targettoken) + label(i1, descr=targettoken) + debug_merge_point('xyz', 0) + i11 = int_add(i1, 1) + i12 = int_ge(i11, 10) + guard_false(i12) [] + jump(i11, descr=targettoken) """ - ops = parse(loop, namespace={'targettoken': TargetToken()}) + ops = parse(loop, namespace={'targettoken': targettoken, + 'preambletoken': preambletoken}) debug._log = dlog = debug.DebugLog() try: self.cpu.assembler.set_debug(True) @@ -536,11 +552,18 @@ self.cpu.execute_token(looptoken, 0) # check debugging info struct = self.cpu.assembler.loop_run_counters[0] - assert struct.i == 10 + assert struct.i == 1 + struct = self.cpu.assembler.loop_run_counters[1] + assert struct.i == 1 + struct = self.cpu.assembler.loop_run_counters[2] + assert struct.i == 9 self.cpu.finish_once() finally: debug._log = None - assert ('jit-backend-counts', [('debug_print', 'loop -1:10')]) in dlog + l0 = ('debug_print', 'entry -1:1') + l1 = ('debug_print', preambletoken.repr_of_descr() + ':1') + l2 = ('debug_print', targettoken.repr_of_descr() + ':9') + assert ('jit-backend-counts', [l0, l1, l2]) in dlog def test_debugger_checksum(self): loop = """ diff --git a/pypy/jit/backend/x86/test/test_zrpy_gc.py b/pypy/jit/backend/x86/test/test_zrpy_gc.py --- a/pypy/jit/backend/x86/test/test_zrpy_gc.py +++ b/pypy/jit/backend/x86/test/test_zrpy_gc.py @@ -69,16 +69,17 @@ def get_functions_to_patch(): from pypy.jit.backend.llsupport import gc # - can_inline_malloc1 = gc.GcLLDescr_framework.can_inline_malloc - def can_inline_malloc2(*args): + can_use_nursery_malloc1 = gc.GcLLDescr_framework.can_use_nursery_malloc + def can_use_nursery_malloc2(*args): try: if os.environ['PYPY_NO_INLINE_MALLOC']: return False except KeyError: pass - return can_inline_malloc1(*args) + return can_use_nursery_malloc1(*args) # - return {(gc.GcLLDescr_framework, 
'can_inline_malloc'): can_inline_malloc2} + return {(gc.GcLLDescr_framework, 'can_use_nursery_malloc'): + can_use_nursery_malloc2} def compile(f, gc, enable_opts='', **kwds): from pypy.annotation.listdef import s_list_of_strings diff --git a/pypy/jit/backend/x86/test/test_zrpy_platform.py b/pypy/jit/backend/x86/test/test_zrpy_platform.py --- a/pypy/jit/backend/x86/test/test_zrpy_platform.py +++ b/pypy/jit/backend/x86/test/test_zrpy_platform.py @@ -74,8 +74,8 @@ myjitdriver = jit.JitDriver(greens = [], reds = ['n']) def entrypoint(argv): - myjitdriver.set_param('threshold', 2) - myjitdriver.set_param('trace_eagerness', 0) + jit.set_param(myjitdriver, 'threshold', 2) + jit.set_param(myjitdriver, 'trace_eagerness', 0) n = 16 while n > 0: myjitdriver.can_enter_jit(n=n) diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -39,6 +39,7 @@ def machine_code_dump(data, originaddr, backend_name, label_list=None): objdump_backend_option = { 'x86': 'i386', + 'x86_32': 'i386', 'x86_64': 'x86-64', 'i386': 'i386', } diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -42,8 +42,7 @@ except AttributeError: pass - def is_candidate(graph): - return policy.look_inside_graph(graph) + is_candidate = policy.look_inside_graph assert len(self.jitdrivers_sd) > 0 todo = [jd.portal_graph for jd in self.jitdrivers_sd] diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -8,11 +8,15 @@ class JitPolicy(object): - def __init__(self): + def __init__(self, jithookiface=None): self.unsafe_loopy_graphs = set() self.supports_floats = False self.supports_longlong = False self.supports_singlefloats = False + if jithookiface is None: + from pypy.rlib.jit import JitHookInterface + jithookiface = 
JitHookInterface() + self.jithookiface = jithookiface def set_supports_floats(self, flag): self.supports_floats = flag diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -162,7 +162,6 @@ _ll_4_list_setslice = rlist.ll_listsetslice _ll_2_list_delslice_startonly = rlist.ll_listdelslice_startonly _ll_3_list_delslice_startstop = rlist.ll_listdelslice_startstop -_ll_1_list_list2fixed = lltypesystem_rlist.ll_list2fixed _ll_2_list_inplace_mul = rlist.ll_inplace_mul _ll_2_list_getitem_foldable = _ll_2_list_getitem diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -5,6 +5,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import debug_start, debug_stop, debug_print from pypy.rlib import rstack +from pypy.rlib.jit import JitDebugInfo from pypy.conftest import option from pypy.tool.sourcetools import func_with_new_name @@ -38,7 +39,8 @@ else: extraprocedures = [procedure] metainterp_sd.stats.view(errmsg=errmsg, - extraprocedures=extraprocedures) + extraprocedures=extraprocedures, + metainterp_sd=metainterp_sd) def create_empty_loop(metainterp, name_prefix=''): name = metainterp.staticdata.stats.name_for_new_loop() @@ -74,7 +76,7 @@ if descr is not original_jitcell_token: original_jitcell_token.record_jump_to(descr) descr.exported_state = None - op._descr = None # clear reference, mostly for tests + op.cleardescr() # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential jump. 
# (the following test is not enough to prevent more complicated @@ -89,8 +91,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op._descr = None # clear reference to prevent the history.Stats - # from keeping the loop alive during tests + op.cleardescr() # clear reference to prevent the history.Stats + # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -105,38 +107,32 @@ def compile_loop(metainterp, greenkey, start, inputargs, jumpargs, - start_resumedescr, full_preamble_needed=True): + resume_at_jump_descr, full_preamble_needed=True): """Try to compile a new procedure by closing the current history back to the first operation. """ from pypy.jit.metainterp.optimizeopt import optimize_trace - history = metainterp.history metainterp_sd = metainterp.staticdata jitdriver_sd = metainterp.jitdriver_sd + history = metainterp.history - if False: - part = partial_trace - assert False - procedur_token = metainterp.get_procedure_token(greenkey) - assert procedure_token - all_target_tokens = [] - else: - jitcell_token = make_jitcell_token(jitdriver_sd) - part = create_empty_loop(metainterp) - part.inputargs = inputargs[:] - h_ops = history.operations - part.start_resumedescr = start_resumedescr - part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(jitcell_token))] + \ - [h_ops[i].clone() for i in range(start, len(h_ops))] + \ - [ResOperation(rop.JUMP, jumpargs, None, descr=jitcell_token)] - try: - optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) - except InvalidLoop: - return None - target_token = part.operations[0].getdescr() - assert isinstance(target_token, TargetToken) - all_target_tokens = [target_token] + jitcell_token = make_jitcell_token(jitdriver_sd) + part = create_empty_loop(metainterp) + part.inputargs = inputargs[:] + h_ops 
= history.operations + part.resume_at_jump_descr = resume_at_jump_descr + part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(jitcell_token))] + \ + [h_ops[i].clone() for i in range(start, len(h_ops))] + \ + [ResOperation(rop.LABEL, jumpargs, None, descr=jitcell_token)] + + try: + optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) + except InvalidLoop: + return None + target_token = part.operations[0].getdescr() + assert isinstance(target_token, TargetToken) + all_target_tokens = [target_token] loop = create_empty_loop(metainterp) loop.inputargs = part.inputargs @@ -174,17 +170,17 @@ loop.original_jitcell_token = jitcell_token for label in all_target_tokens: assert isinstance(label, TargetToken) - label.original_jitcell_token = jitcell_token if label.virtual_state and label.short_preamble: metainterp_sd.logger_ops.log_short_preamble([], label.short_preamble) jitcell_token.target_tokens = all_target_tokens + propagate_original_jitcell_token(loop) send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop") record_loop_or_bridge(metainterp_sd, loop) return all_target_tokens[0] def compile_retrace(metainterp, greenkey, start, inputargs, jumpargs, - start_resumedescr, partial_trace, resumekey): + resume_at_jump_descr, partial_trace, resumekey): """Try to compile a new procedure by closing the current history back to the first operation. 
""" @@ -200,7 +196,7 @@ part = create_empty_loop(metainterp) part.inputargs = inputargs[:] - part.start_resumedescr = start_resumedescr + part.resume_at_jump_descr = resume_at_jump_descr h_ops = history.operations part.operations = [partial_trace.operations[-1]] + \ @@ -212,13 +208,12 @@ try: optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) except InvalidLoop: - #return None # XXX: Dissable for now # Fall back on jumping to preamble target_token = label.getdescr() assert isinstance(target_token, TargetToken) assert target_token.exported_state part.operations = [orignial_label] + \ - [ResOperation(rop.JUMP, target_token.exported_state.jump_args, + [ResOperation(rop.JUMP, inputargs[:], None, descr=loop_jitcell_token)] try: optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts, @@ -246,11 +241,11 @@ for box in loop.inputargs: assert isinstance(box, Box) - target_token = loop.operations[-1].getdescr() + target_token = loop.operations[-1].getdescr() resumekey.compile_and_attach(metainterp, loop) + target_token = label.getdescr() assert isinstance(target_token, TargetToken) - target_token.original_jitcell_token = loop.original_jitcell_token record_loop_or_bridge(metainterp_sd, loop) return target_token @@ -287,14 +282,21 @@ assert i == len(inputargs) loop.operations = extra_ops + loop.operations +def propagate_original_jitcell_token(trace): + for op in trace.operations: + if op.getopnum() == rop.LABEL: + token = op.getdescr() + assert isinstance(token, TargetToken) + assert token.original_jitcell_token is None + token.original_jitcell_token = trace.original_jitcell_token + + def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type): vinfo = jitdriver_sd.virtualizable_info if vinfo is not None: patch_new_loop_to_load_virtualizable_fields(loop, jitdriver_sd) original_jitcell_token = loop.original_jitcell_token - jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, - loop.operations, type, 
greenkey) loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata original_jitcell_token.number = n = globaldata.loopnumbering @@ -304,21 +306,41 @@ show_procedures(metainterp_sd, loop) loop.check_consistency() + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, + type, greenkey) + hooks.before_compile(debug_info) + else: + debug_info = None + hooks = None operations = get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, name=loopname) + asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, + original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile(debug_info) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # - metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset) + loopname = jitdriver_sd.warmstate.get_location_str(greenkey) + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None + metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, + type, ops_offset, + name=loopname) # if metainterp_sd.warmrunnerdesc is not None: # for tests metainterp_sd.warmrunnerdesc.memory_manager.keep_loop_alive(original_jitcell_token) @@ -326,25 +348,40 @@ def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): n = metainterp_sd.cpu.get_fail_descr_number(faildescr) - jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, - original_loop_token, operations, n) if not 
we_are_translated(): show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_loop_token, operations, 'bridge', + fail_descr_no=n) + hooks.before_compile_bridge(debug_info) + else: + hooks = None + debug_info = None + operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() - operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, - original_loop_token) + asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile_bridge(debug_info) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") # + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset) # #if metainterp_sd.warmrunnerdesc is not None: # for tests @@ -557,6 +594,7 @@ inputargs = metainterp.history.inputargs if not we_are_translated(): self._debug_suboperations = new_loop.operations + propagate_original_jitcell_token(new_loop) send_bridge_to_backend(metainterp.jitdriver_sd, metainterp.staticdata, self, inputargs, new_loop.operations, new_loop.original_jitcell_token) @@ -743,6 +781,7 @@ jitdriver_sd = metainterp.jitdriver_sd redargs = new_loop.inputargs new_loop.original_jitcell_token = jitcell_token = make_jitcell_token(jitdriver_sd) + propagate_original_jitcell_token(new_loop) send_loop_to_backend(self.original_greenkey, metainterp.jitdriver_sd, metainterp_sd, new_loop, "entry bridge") # send the new_loop to 
warmspot.py, to be called directly the next time @@ -751,7 +790,7 @@ metainterp_sd.stats.add_jitcell_token(jitcell_token) -def compile_trace(metainterp, resumekey, start_resumedescr=None): +def compile_trace(metainterp, resumekey, resume_at_jump_descr=None): """Try to compile a new bridge leading from the beginning of the history to some existing place. """ @@ -767,7 +806,7 @@ # clone ops, as optimize_bridge can mutate the ops new_trace.operations = [op.clone() for op in metainterp.history.operations] - new_trace.start_resumedescr = start_resumedescr + new_trace.resume_at_jump_descr = resume_at_jump_descr metainterp_sd = metainterp.staticdata state = metainterp.jitdriver_sd.warmstate if isinstance(resumekey, ResumeAtPositionDescr): diff --git a/pypy/jit/metainterp/executor.py b/pypy/jit/metainterp/executor.py --- a/pypy/jit/metainterp/executor.py +++ b/pypy/jit/metainterp/executor.py @@ -46,7 +46,7 @@ # get the function address as an integer func = argboxes[0].getint() # do the call using the correct function from the cpu - rettype = descr.get_return_type() + rettype = descr.get_result_type() if rettype == INT or rettype == 'S': # *S*ingle float try: result = cpu.bh_call_i(func, descr, args_i, args_r, args_f) @@ -344,6 +344,8 @@ rop.SETINTERIORFIELD_RAW, rop.CALL_RELEASE_GIL, rop.QUASIIMMUT_FIELD, + rop.CALL_MALLOC_GC, + rop.CALL_MALLOC_NURSERY, rop.LABEL, ): # list of opcodes never executed by pyjitpl continue diff --git a/pypy/jit/metainterp/graphpage.py b/pypy/jit/metainterp/graphpage.py --- a/pypy/jit/metainterp/graphpage.py +++ b/pypy/jit/metainterp/graphpage.py @@ -12,7 +12,7 @@ def get_display_text(self): return None -def display_procedures(procedures, errmsg=None, highlight_procedures={}): +def display_procedures(procedures, errmsg=None, highlight_procedures={}, metainterp_sd=None): graphs = [(procedure, highlight_procedures.get(procedure, 0)) for procedure in procedures] for graph, highlight in graphs: @@ -20,7 +20,7 @@ if is_interesting_guard(op): 
graphs.append((SubGraph(op.getdescr()._debug_suboperations), highlight)) - graphpage = ResOpGraphPage(graphs, errmsg) + graphpage = ResOpGraphPage(graphs, errmsg, metainterp_sd) graphpage.display() def is_interesting_guard(op): @@ -36,8 +36,8 @@ class ResOpGraphPage(GraphPage): - def compute(self, graphs, errmsg=None): - resopgen = ResOpGen() + def compute(self, graphs, errmsg=None, metainterp_sd=None): + resopgen = ResOpGen(metainterp_sd) for graph, highlight in graphs: resopgen.add_graph(graph, highlight) if errmsg: @@ -50,13 +50,14 @@ CLUSTERING = True BOX_COLOR = (128, 0, 96) - def __init__(self): + def __init__(self, metainterp_sd=None): self.graphs = [] self.highlight_graphs = {} self.block_starters = {} # {graphindex: {set-of-operation-indices}} self.all_operations = {} self.errmsg = None self.target_tokens = {} + self.metainterp_sd = metainterp_sd def op_name(self, graphindex, opindex): return 'g%dop%d' % (graphindex, opindex) @@ -164,7 +165,14 @@ opindex = opstartindex while True: op = operations[opindex] - lines.append(op.repr(graytext=True)) + op_repr = op.repr(graytext=True) + if op.getopnum() == rop.DEBUG_MERGE_POINT: + jd_sd = self.metainterp_sd.jitdrivers_sd[op.getarg(0).getint()] + if jd_sd._get_printable_location_ptr: + s = jd_sd.warmstate.get_location_str(op.getarglist()[2:]) + s = s.replace(',', '.') # we use comma for argument splitting + op_repr = "debug_merge_point(%d, '%s')" % (op.getarg(1).getint(), s) + lines.append(op_repr) if is_interesting_guard(op): tgt = op.getdescr()._debug_suboperations[0] tgt_g, tgt_i = self.all_operations[tgt] diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -79,9 +79,9 @@ opnum == rop.COPYSTRCONTENT or opnum == rop.COPYUNICODECONTENT): return - if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: - return - if rop._NOSIDEEFFECT_FIRST <= opnum <= rop._NOSIDEEFFECT_LAST: + if (rop._OVF_FIRST <= opnum <= 
rop._OVF_LAST or + rop._NOSIDEEFFECT_FIRST <= opnum <= rop._NOSIDEEFFECT_LAST or + rop._GUARD_FIRST <= opnum <= rop._GUARD_LAST): return if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: effectinfo = descr.get_extra_info() diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -142,59 +142,6 @@ def repr_of_descr(self): return '%r' % (self,) - def get_arg_types(self): - """ Implement in call descr. - Must return a string of INT, REF and FLOAT ('i', 'r', 'f'). - """ - raise NotImplementedError - - def get_return_type(self): - """ Implement in call descr. - Must return INT, REF, FLOAT, or 'v' for void. - On 32-bit (hack) it can also be 'L' for longlongs. - Additionally it can be 'S' for singlefloats. - """ - raise NotImplementedError - - def get_extra_info(self): - """ Implement in call descr - """ - raise NotImplementedError - - def is_array_of_pointers(self): - """ Implement for array descr - """ - raise NotImplementedError - - def is_array_of_floats(self): - """ Implement for array descr - """ - raise NotImplementedError - - def is_array_of_structs(self): - """ Implement for array descr - """ - raise NotImplementedError - - def is_pointer_field(self): - """ Implement for field descr - """ - raise NotImplementedError - - def is_float_field(self): - """ Implement for field descr - """ - raise NotImplementedError - - def as_vtable_size_descr(self): - """ Implement for size descr representing objects with vtables. - Returns self. 
(it's an annotation hack) - """ - raise NotImplementedError - - def count_fields_if_immutable(self): - return -1 - def _clone_if_mutable(self): return self def clone_if_mutable(self): @@ -758,6 +705,9 @@ self.virtual_state = None self.exported_state = None + + def repr_of_descr(self): + return 'TargetToken(%d)' % compute_unique_id(self) class TreeLoop(object): inputargs = None @@ -765,7 +715,7 @@ call_pure_results = None logops = None quasi_immutable_deps = None - start_resumedescr = None + resume_at_jump_descr = None def _token(*args): raise Exception("TreeLoop.token is killed") @@ -1053,35 +1003,16 @@ return insns def check_simple_loop(self, expected=None, **check): - # Usefull in the simplest case when we have only one trace ending with - # a jump back to itself and possibly a few bridges ending with finnish. - # Only the operations within the loop formed by that single jump will - # be counted. - - # XXX hacked version, ignore and remove me when jit-targets is merged. - loops = self.get_all_loops() - loops = [loop for loop in loops if 'Preamble' not in repr(loop)] #XXX - assert len(loops) == 1 - loop, = loops - jumpop = loop.operations[-1] - assert jumpop.getopnum() == rop.JUMP - insns = {} - for op in loop.operations: - opname = op.getopname() - insns[opname] = insns.get(opname, 0) + 1 - return self._check_insns(insns, expected, check) - - def check_simple_loop(self, expected=None, **check): - # Usefull in the simplest case when we have only one trace ending with - # a jump back to itself and possibly a few bridges ending with finnish. - # Only the operations within the loop formed by that single jump will - # be counted. + """ Usefull in the simplest case when we have only one trace ending with + a jump back to itself and possibly a few bridges. + Only the operations within the loop formed by that single jump will + be counted. 
+ """ loops = self.get_all_loops() assert len(loops) == 1 loop = loops[0] jumpop = loop.operations[-1] assert jumpop.getopnum() == rop.JUMP - assert self.check_resops(jump=1) labels = [op for op in loop.operations if op.getopnum() == rop.LABEL] targets = [op._descr_wref() for op in labels] assert None not in targets # TargetToken was freed, give up @@ -1134,7 +1065,7 @@ if option.view: self.view() - def view(self, errmsg=None, extraprocedures=[]): + def view(self, errmsg=None, extraprocedures=[], metainterp_sd=None): from pypy.jit.metainterp.graphpage import display_procedures procedures = self.get_all_loops()[:] for procedure in extraprocedures: @@ -1146,7 +1077,7 @@ if hasattr(procedure, '_looptoken_number') and ( procedure._looptoken_number in self.invalidated_token_numbers): highlight_procedures.setdefault(procedure, 2) - display_procedures(procedures, errmsg, highlight_procedures) + display_procedures(procedures, errmsg, highlight_procedures, metainterp_sd) # ---------------------------------------------------------------- diff --git a/pypy/jit/metainterp/jitdriver.py b/pypy/jit/metainterp/jitdriver.py --- a/pypy/jit/metainterp/jitdriver.py +++ b/pypy/jit/metainterp/jitdriver.py @@ -21,7 +21,6 @@ # self.portal_finishtoken... pypy.jit.metainterp.pyjitpl # self.index ... pypy.jit.codewriter.call # self.mainjitcode ... pypy.jit.codewriter.call - # self.on_compile ... 
pypy.jit.metainterp.warmstate # These attributes are read by the backend in CALL_ASSEMBLER: # self.assembler_helper_adr diff --git a/pypy/jit/metainterp/jitprof.py b/pypy/jit/metainterp/jitprof.py --- a/pypy/jit/metainterp/jitprof.py +++ b/pypy/jit/metainterp/jitprof.py @@ -18,8 +18,8 @@ OPT_FORCINGS ABORT_TOO_LONG ABORT_BRIDGE +ABORT_BAD_LOOP ABORT_ESCAPE -ABORT_BAD_LOOP ABORT_FORCE_QUASIIMMUT NVIRTUALS NVHOLES @@ -30,10 +30,13 @@ TOTAL_FREED_BRIDGES """ +counter_names = [] + def _setup(): names = counters.split() for i, name in enumerate(names): globals()[name] = i + counter_names.append(name) global ncounters ncounters = len(names) _setup() diff --git a/pypy/jit/metainterp/logger.py b/pypy/jit/metainterp/logger.py --- a/pypy/jit/metainterp/logger.py +++ b/pypy/jit/metainterp/logger.py @@ -5,7 +5,7 @@ from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.history import Const, ConstInt, Box, \ - BoxInt, ConstFloat, BoxFloat, AbstractFailDescr + BoxInt, ConstFloat, BoxFloat, AbstractFailDescr, TargetToken class Logger(object): @@ -13,14 +13,14 @@ self.metainterp_sd = metainterp_sd self.guard_number = guard_number - def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None): + def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None, name=''): if type is None: debug_start("jit-log-noopt-loop") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-loop") else: debug_start("jit-log-opt-loop") - debug_print("# Loop", number, ":", type, + debug_print("# Loop", number, '(%s)' % name , ":", type, "with", len(operations), "ops") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-opt-loop") @@ -135,6 +135,13 @@ fail_args = '' return s_offset + res + op.getopname() + '(' + args + ')' + fail_args + def _log_inputarg_setup_ops(self, op): + target_token = op.getdescr() + if 
isinstance(target_token, TargetToken): + if target_token.exported_state: + for op in target_token.exported_state.inputarg_setup_ops: + debug_print(' ' + self.repr_of_resop(op)) + def _log_operations(self, inputargs, operations, ops_offset): if not have_debug_prints(): return @@ -146,6 +153,8 @@ for i in range(len(operations)): op = operations[i] debug_print(self.repr_of_resop(operations[i], ops_offset)) + if op.getopnum() == rop.LABEL: + self._log_inputarg_setup_ops(op) if ops_offset and None in ops_offset: offset = ops_offset[None] debug_print("+%d: --end of the loop--" % offset) diff --git a/pypy/jit/metainterp/memmgr.py b/pypy/jit/metainterp/memmgr.py --- a/pypy/jit/metainterp/memmgr.py +++ b/pypy/jit/metainterp/memmgr.py @@ -1,5 +1,5 @@ import math -from pypy.rlib.rarithmetic import r_int64, r_uint +from pypy.rlib.rarithmetic import r_int64 from pypy.rlib.debug import debug_start, debug_print, debug_stop from pypy.rlib.objectmodel import we_are_translated @@ -21,7 +21,6 @@ # class MemoryManager(object): - NO_NEXT_CHECK = r_int64(2 ** 63 - 1) def __init__(self): self.check_frequency = -1 @@ -37,13 +36,12 @@ # According to my estimates it's about 5e9 years given 1000 loops # per second self.current_generation = r_int64(1) - self.next_check = self.NO_NEXT_CHECK + self.next_check = r_int64(-1) self.alive_loops = {} - self._cleanup_jitcell_dicts = lambda: None def set_max_age(self, max_age, check_frequency=0): if max_age <= 0: - self.next_check = self.NO_NEXT_CHECK + self.next_check = r_int64(-1) else: self.max_age = max_age if check_frequency <= 0: @@ -51,11 +49,10 @@ self.check_frequency = check_frequency self.next_check = self.current_generation + 1 - def next_generation(self, do_cleanups_now=True): + def next_generation(self): self.current_generation += 1 - if do_cleanups_now and self.current_generation >= self.next_check: + if self.current_generation == self.next_check: self._kill_old_loops_now() - self._cleanup_jitcell_dicts() self.next_check = 
self.current_generation + self.check_frequency def keep_loop_alive(self, looptoken): @@ -84,22 +81,3 @@ # a single one is not enough for all tests :-( rgc.collect(); rgc.collect(); rgc.collect() debug_stop("jit-mem-collect") - - def get_current_generation_uint(self): - """Return the current generation, possibly truncated to a uint. - To use only as an approximation for decaying counters.""" - return r_uint(self.current_generation) - - def record_jitcell_dict(self, callback): - """NOT_RPYTHON. The given jitcell_dict is a dict that needs - occasional clean-ups of old cells. A cell is old if it never - reached the threshold, and its counter decayed to a tiny value.""" - # note that the various jitcell_dicts have different RPython types, - # so we have to make a different function for each one. These - # functions are chained to each other: each calls the previous one. - def cleanup_dict(): - callback() - cleanup_previous() - # - cleanup_previous = self._cleanup_jitcell_dicts - self._cleanup_jitcell_dicts = cleanup_dict diff --git a/pypy/jit/metainterp/optimize.py b/pypy/jit/metainterp/optimize.py --- a/pypy/jit/metainterp/optimize.py +++ b/pypy/jit/metainterp/optimize.py @@ -5,58 +5,3 @@ """Raised when the optimize*.py detect that the loop that we are trying to build cannot possibly make sense as a long-running loop (e.g. it cannot run 2 complete iterations).""" - -class RetraceLoop(JitException): - """ Raised when inlining a short preamble resulted in an - InvalidLoop. This means the optimized loop is too specialized - to be useful here, so we trace it again and produced a second - copy specialized in some different way. 
- """ - -# ____________________________________________________________ - -def optimize_loop(metainterp_sd, old_loop_tokens, loop, enable_opts): - debug_start("jit-optimize") - try: - return _optimize_loop(metainterp_sd, old_loop_tokens, loop, - enable_opts) - finally: - debug_stop("jit-optimize") - -def _optimize_loop(metainterp_sd, old_loop_tokens, loop, enable_opts): - from pypy.jit.metainterp.optimizeopt import optimize_loop_1 - loop.logops = metainterp_sd.logger_noopt.log_loop(loop.inputargs, - loop.operations) - # XXX do we really still need a list? - if old_loop_tokens: - return old_loop_tokens[0] - optimize_loop_1(metainterp_sd, loop, enable_opts) - return None - -# ____________________________________________________________ - -def optimize_bridge(metainterp_sd, old_loop_tokens, bridge, enable_opts, - inline_short_preamble=True, retraced=False): - debug_start("jit-optimize") - try: - return _optimize_bridge(metainterp_sd, old_loop_tokens, bridge, - enable_opts, - inline_short_preamble, retraced) - finally: - debug_stop("jit-optimize") - -def _optimize_bridge(metainterp_sd, old_loop_tokens, bridge, enable_opts, - inline_short_preamble, retraced=False): - from pypy.jit.metainterp.optimizeopt import optimize_bridge_1 - bridge.logops = metainterp_sd.logger_noopt.log_loop(bridge.inputargs, - bridge.operations) - if old_loop_tokens: - old_loop_token = old_loop_tokens[0] - bridge.operations[-1].setdescr(old_loop_token) # patch jump target - optimize_bridge_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced) - return old_loop_tokens[0] - #return bridge.operations[-1].getdescr() - return None - -# ____________________________________________________________ diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -51,34 +51,6 @@ return optimizations, unroll - -def optimize_loop_1(metainterp_sd, 
loop, enable_opts, - inline_short_preamble=True, retraced=False): - """Optimize loop.operations to remove internal overheadish operations. - """ - - optimizations, unroll = build_opt_chain(metainterp_sd, enable_opts, - inline_short_preamble, retraced) - if unroll: - optimize_unroll(metainterp_sd, loop, optimizations) - else: - optimizer = Optimizer(metainterp_sd, loop, optimizations) - optimizer.propagate_all_forward() - -def optimize_bridge_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble=True, retraced=False): - """The same, but for a bridge. """ - enable_opts = enable_opts.copy() - try: - del enable_opts['unroll'] - except KeyError: - pass - optimize_loop_1(metainterp_sd, bridge, enable_opts, - inline_short_preamble, retraced) - -if __name__ == '__main__': - print ALL_OPTS_NAMES - def optimize_trace(metainterp_sd, loop, enable_opts, inline_short_preamble=True): """Optimize loop.operations to remove internal overheadish operations. """ @@ -96,3 +68,6 @@ finally: debug_stop("jit-optimize") +if __name__ == '__main__': + print ALL_OPTS_NAMES + diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -234,11 +234,11 @@ # longlongs are treated as floats, see # e.g. 
llsupport/descr.py:getDescrClass is_float = True - elif kind == 'u': + elif kind == 'u' or kind == 's': # they're all False pass else: - assert False, "unsupported ffitype or kind" + raise NotImplementedError("unsupported ffitype or kind: %s" % kind) # fieldsize = rffi.getintfield(ffitype, 'c_size') return self.optimizer.cpu.interiorfielddescrof_dynamic( diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -453,6 +453,7 @@ def clear_newoperations(self): self._newoperations = [] + self.seen_results = {} def make_equal_to(self, box, value, replace=False): assert isinstance(value, OptValue) diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -35,6 +35,9 @@ pass def optimize_LABEL(self, op): + descr = op.getdescr() + if isinstance(descr, JitCellToken): + return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) self.last_label_descr = op.getdescr() self.emit_operation(op) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -1,21 +1,25 @@ +from __future__ import with_statement from pypy.jit.metainterp.optimizeopt.test.test_util import ( - LLtypeMixin, BaseTest, Storage, _sortboxes, FakeDescrWithSnapshot) + LLtypeMixin, BaseTest, Storage, _sortboxes, FakeDescrWithSnapshot, + FakeMetaInterpStaticData) from pypy.jit.metainterp.history import TreeLoop, JitCellToken, TargetToken from pypy.jit.metainterp.resoperation import rop, opname, ResOperation from pypy.jit.metainterp.optimize import InvalidLoop from py.test import raises +from pypy.jit.metainterp.optimizeopt.optimizer 
import Optimization +from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method class BaseTestMultiLabel(BaseTest): enable_opts = "intbounds:rewrite:virtualize:string:earlyforce:pure:heap:unroll" - def optimize_loop(self, ops, expected): + def optimize_loop(self, ops, expected, expected_shorts=None): loop = self.parse(ops) if expected != "crash!": expected = self.parse(expected) part = TreeLoop('part') part.inputargs = loop.inputargs - part.start_resumedescr = FakeDescrWithSnapshot() + part.resume_at_jump_descr = FakeDescrWithSnapshot() token = loop.original_jitcell_token optimized = TreeLoop('optimized') @@ -32,15 +36,17 @@ if nxt < len(loop.operations): label = loop.operations[nxt] assert label.getopnum() == rop.LABEL - jumpop = ResOperation(rop.JUMP, label.getarglist(), - None, descr=token) - operations.append(jumpop) + if label.getdescr() is None: + label.setdescr(token) + operations.append(label) part.operations = operations + self._do_optimize_loop(part, None) if part.operations[-1].getopnum() == rop.LABEL: last_label = [part.operations.pop()] else: last_label = [] + optimized.operations.extend(part.operations) prv = nxt + 1 @@ -53,11 +59,36 @@ print 'Failed!' 
print + shorts = [op.getdescr().short_preamble + for op in optimized.operations + if op.getopnum() == rop.LABEL] + + if expected_shorts: + for short in shorts: + print + print "Short preamble:" + print '\n'.join([str(o) for o in short]) + + assert expected != "crash!", "should have raised an exception" self.assert_equal(optimized, expected) + if expected_shorts: + assert len(shorts) == len(expected_shorts) + for short, expected_short in zip(shorts, expected_shorts): + expected_short = self.parse(expected_short) + short_preamble = TreeLoop('short preamble') + assert short[0].getopnum() == rop.LABEL + short_preamble.inputargs = short[0].getarglist() + short_preamble.operations = short + self.assert_equal(short_preamble, expected_short, + text_right='expected short preamble') + + return optimized +class OptimizeoptTestMultiLabel(BaseTestMultiLabel): + def test_simple(self): ops = """ [i1] @@ -193,8 +224,244 @@ """ with raises(InvalidLoop): self.optimize_loop(ops, ops) + + def test_two_intermediate_labels_basic_1(self): + ops = """ + [p1, i1] + i2 = getfield_gc(p1, descr=valuedescr) + label(p1, i1) + i3 = getfield_gc(p1, descr=valuedescr) + i4 = int_add(i1, i3) + label(p1, i4) + i5 = int_add(i4, 1) + jump(p1, i5) + """ + expected = """ + [p1, i1] + i2 = getfield_gc(p1, descr=valuedescr) + label(p1, i1, i2) + i4 = int_add(i1, i2) + label(p1, i4) + i5 = int_add(i4, 1) + jump(p1, i5) + """ + short1 = """ + [p1, i1] + label(p1, i1) + i2 = getfield_gc(p1, descr=valuedescr) + jump(p1, i1, i2) + """ + short2 = """ + [p1, i1] + label(p1, i1) + jump(p1, i1) + """ + self.optimize_loop(ops, expected, expected_shorts=[short1, short2]) + + def test_two_intermediate_labels_basic_2(self): + ops = """ + [p1, i1] + i2 = int_add(i1, 1) + label(p1, i1) + i3 = getfield_gc(p1, descr=valuedescr) + i4 = int_add(i1, i3) + label(p1, i4) + i5 = getfield_gc(p1, descr=valuedescr) + i6 = int_add(i4, i5) + jump(p1, i6) + """ + expected = """ + [p1, i1] + i2 = int_add(i1, 1) + label(p1, i1) + i3 = 
getfield_gc(p1, descr=valuedescr) + i4 = int_add(i1, i3) + label(p1, i4, i3) + i6 = int_add(i4, i3) + jump(p1, i6, i3) + """ + short1 = """ + [p1, i1] + label(p1, i1) + jump(p1, i1) + """ + short2 = """ + [p1, i1] + label(p1, i1) + i2 = getfield_gc(p1, descr=valuedescr) + jump(p1, i1, i2) + """ + self.optimize_loop(ops, expected, expected_shorts=[short1, short2]) + + def test_two_intermediate_labels_both(self): + ops = """ + [p1, i1] + i2 = getfield_gc(p1, descr=valuedescr) + label(p1, i1) + i3 = getfield_gc(p1, descr=valuedescr) + i4 = int_add(i1, i3) + label(p1, i4) + i5 = getfield_gc(p1, descr=valuedescr) + i6 = int_mul(i4, i5) + jump(p1, i6) + """ + expected = """ + [p1, i1] + i2 = getfield_gc(p1, descr=valuedescr) + label(p1, i1, i2) + i4 = int_add(i1, i2) + label(p1, i4, i2) + i6 = int_mul(i4, i2) + jump(p1, i6, i2) + """ + short = """ + [p1, i1] + label(p1, i1) + i2 = getfield_gc(p1, descr=valuedescr) + jump(p1, i1, i2) + """ + self.optimize_loop(ops, expected, expected_shorts=[short, short]) + + def test_import_across_multiple_labels_basic(self): + # Not supported, juts make sure we get a functional trace + ops = """ + [p1, i1] + i2 = getfield_gc(p1, descr=valuedescr) + label(p1, i1) + i3 = int_add(i1, 1) + label(p1, i1) + i4 = getfield_gc(p1, descr=valuedescr) + i5 = int_add(i4, 1) + jump(p1, i5) + """ + self.optimize_loop(ops, ops) + + def test_import_across_multiple_labels_with_duplication(self): + # Not supported, juts make sure we get a functional trace + ops = """ + [p1, i1] + i2 = getfield_gc(p1, descr=valuedescr) + label(p1, i2) + i3 = int_add(i2, 1) + label(p1, i2) + i4 = getfield_gc(p1, descr=valuedescr) + i5 = int_add(i4, 1) + jump(p1, i5) + """ + exported = """ + [p1, i1] + i2 = getfield_gc(p1, descr=valuedescr) + i6 = same_as(i2) + label(p1, i2) + i3 = int_add(i2, 1) + label(p1, i2) + i4 = getfield_gc(p1, descr=valuedescr) + i5 = int_add(i4, 1) + jump(p1, i5) + """ + self.optimize_loop(ops, exported) + + def 
test_import_virtual_across_multiple_labels(self): + ops = """ + [p0, i1] + i1a = int_add(i1, 1) + pv = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(pv, i1a, descr=valuedescr) + label(pv, i1) + i2 = int_mul(i1, 3) + label(pv, i2) + i3 = getfield_gc(pv, descr=valuedescr) + i4 = int_add(i3, i2) + jump(pv, i4) + """ + expected = """ + [p0, i1] + i1a = int_add(i1, 1) + i5 = same_as(i1a) + label(i1a, i1) + i2 = int_mul(i1, 3) + label(i1a, i2) + i4 = int_add(i1a, i2) + jump(i1a, i4) + """ + self.optimize_loop(ops, expected) + + def test_virtual_as_field_of_forced_box(self): + ops = """ + [p0] + pv1 = new_with_vtable(ConstClass(node_vtable)) + label(pv1, p0) + pv2 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(pv2, pv1, descr=valuedescr) + jump(pv1, pv2) + """ + with raises(InvalidLoop): + self.optimize_loop(ops, ops) + +class OptRenameStrlen(Optimization): + def propagate_forward(self, op): + dispatch_opt(self, op) + + def optimize_STRLEN(self, op): + newop = op.clone() + newop.result = op.result.clonebox() + self.emit_operation(newop) + self.make_equal_to(op.result, self.getvalue(newop.result)) + +dispatch_opt = make_dispatcher_method(OptRenameStrlen, 'optimize_', + default=OptRenameStrlen.emit_operation) + +class BaseTestOptimizerRenamingBoxes(BaseTestMultiLabel): + + def _do_optimize_loop(self, loop, call_pure_results): + from pypy.jit.metainterp.optimizeopt.unroll import optimize_unroll + from pypy.jit.metainterp.optimizeopt.util import args_dict + from pypy.jit.metainterp.optimizeopt.pure import OptPure + + self.loop = loop + loop.call_pure_results = args_dict() + metainterp_sd = FakeMetaInterpStaticData(self.cpu) + optimize_unroll(metainterp_sd, loop, [OptRenameStrlen(), OptPure()], True) + + def test_optimizer_renaming_boxes(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + i2 = strlen(p1) + i3 = int_add(i2, 7) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + i2 = int_add(i11, 7) + jump(p1, 
i11) + """ + self.optimize_loop(ops, expected) + + def test_optimizer_renaming_boxes_not_imported(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + jump(p1, i11) + """ + self.optimize_loop(ops, expected) + - -class TestLLtype(BaseTestMultiLabel, LLtypeMixin): + +class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): pass +class TestOptimizerRenamingBoxesLLtype(BaseTestOptimizerRenamingBoxes, LLtypeMixin): + pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -117,7 +117,7 @@ def optimize_loop(self, ops, optops, call_pure_results=None): loop = self.parse(ops) - token = JitCellToken() + token = JitCellToken() loop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=TargetToken(token))] + \ loop.operations if loop.operations[-1].getopnum() == rop.JUMP: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -4,7 +4,7 @@ LLtypeMixin, BaseTest, Storage, _sortboxes, convert_old_style_to_targets) import pypy.jit.metainterp.optimizeopt.optimizer as optimizeopt import pypy.jit.metainterp.optimizeopt.virtualize as virtualize -from pypy.jit.metainterp.optimizeopt import optimize_loop_1, ALL_OPTS_DICT, build_opt_chain +from pypy.jit.metainterp.optimizeopt import ALL_OPTS_DICT, build_opt_chain from pypy.jit.metainterp.optimize import InvalidLoop from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt from pypy.jit.metainterp.history import TreeLoop, JitCellToken, TargetToken @@ -4211,7 +4211,6 @@ preamble = """ [p0] i0 = strlen(p0) 
- i3 = same_as(i0) # Should be killed by backend jump(p0) """ expected = """ @@ -5668,8 +5667,7 @@ p3 = newstr(i3) copystrcontent(p1, p3, 0, 0, i1) copystrcontent(p2, p3, 0, i1, i2) - i7 = same_as(i2) - jump(p2, p3, i7) + jump(p2, p3, i2) """ expected = """ [p1, p2, i1] @@ -5744,9 +5742,7 @@ copystrcontent(p1, p5, 0, 0, i1) copystrcontent(p2, p5, 0, i1, i2) copystrcontent(p3, p5, 0, i12, i3) - i129 = same_as(i2) - i130 = same_as(i3) - jump(p2, p3, p5, i129, i130) + jump(p2, p3, p5, i2, i3) """ expected = """ [p1, p2, p3, i1, i2] @@ -5959,8 +5955,7 @@ p4 = newstr(i5) copystrcontent(p1, p4, i1, 0, i3) copystrcontent(p2, p4, 0, i3, i4) - i9 = same_as(i4) - jump(p4, i1, i2, p2, i5, i3, i9) + jump(p4, i1, i2, p2, i5, i3, i4) """ expected = """ [p1, i1, i2, p2, i5, i3, i4] @@ -6082,9 +6077,7 @@ copystrcontent(p2, p4, 0, i1, i2) i0 = call(0, p3, p4, descr=strequaldescr) escape(i0) - i11 = same_as(i1) - i12 = same_as(i2) - jump(p1, p2, p3, i3, i11, i12) + jump(p1, p2, p3, i3, i1, i2) """ expected = """ [p1, p2, p3, i3, i1, i2] @@ -6304,7 +6297,6 @@ i1 = strlen(p1) i0 = int_eq(i1, 0) escape(i0) - i3 = same_as(i1) jump(p1, i0) """ self.optimize_strunicode_loop_extradescrs(ops, expected, preamble) @@ -6350,9 +6342,7 @@ copystrcontent(p2, p4, 0, i1, i2) i0 = call(0, s"hello world", p4, descr=streq_nonnull_descr) escape(i0) - i11 = same_as(i1) - i12 = same_as(i2) - jump(p1, p2, i3, i11, i12) + jump(p1, p2, i3, i1, i2) """ expected = """ [p1, p2, i3, i1, i2] @@ -6925,8 +6915,7 @@ [p9] i843 = strlen(p9) call(i843, descr=nonwritedescr) - i0 = same_as(i843) - jump(p9, i0) + jump(p9, i843) """ short = """ [p9] @@ -7770,7 +7759,7 @@ jump(i0, p0, i2) """ self.optimize_loop(ops, expected) - + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -430,18 +430,18 
@@ preamble = TreeLoop('preamble') preamble.inputargs = inputargs - preamble.start_resumedescr = FakeDescrWithSnapshot() + preamble.resume_at_jump_descr = FakeDescrWithSnapshot() token = JitCellToken() preamble.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(token))] + \ operations + \ - [ResOperation(rop.JUMP, jump_args, None, descr=token)] + [ResOperation(rop.LABEL, jump_args, None, descr=token)] self._do_optimize_loop(preamble, call_pure_results) assert preamble.operations[-1].getopnum() == rop.LABEL inliner = Inliner(inputargs, jump_args) - loop.start_resumedescr = preamble.start_resumedescr + loop.resume_at_jump_descr = preamble.resume_at_jump_descr loop.operations = [preamble.operations[-1]] + \ [inliner.inline_op(op, clone=False) for op in cloned_operations] + \ [ResOperation(rop.JUMP, [inliner.inline_arg(a) for a in jump_args], diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -3,7 +3,7 @@ from pypy.jit.metainterp.compile import ResumeGuardDescr from pypy.jit.metainterp.history import TreeLoop, TargetToken, JitCellToken from pypy.jit.metainterp.jitexc import JitException -from pypy.jit.metainterp.optimize import InvalidLoop, RetraceLoop +from pypy.jit.metainterp.optimize import InvalidLoop from pypy.jit.metainterp.optimizeopt.optimizer import * from pypy.jit.metainterp.optimizeopt.generalize import KillHugeIntBounds from pypy.jit.metainterp.inliner import Inliner @@ -51,10 +51,10 @@ distinction anymore)""" inline_short_preamble = True - did_import = False def __init__(self, metainterp_sd, loop, optimizations): self.optimizer = UnrollableOptimizer(metainterp_sd, loop, optimizations) + self.boxes_created_this_iteration = None def fix_snapshot(self, jump_args, snapshot): if snapshot is None: @@ -71,7 +71,6 @@ loop = self.optimizer.loop self.optimizer.clear_newoperations() - start_label = 
loop.operations[0] if start_label.getopnum() == rop.LABEL: loop.operations = loop.operations[1:] @@ -82,7 +81,7 @@ start_label = None jumpop = loop.operations[-1] - if jumpop.getopnum() == rop.JUMP: + if jumpop.getopnum() == rop.JUMP or jumpop.getopnum() == rop.LABEL: loop.operations = loop.operations[:-1] else: jumpop = None @@ -91,48 +90,87 @@ self.optimizer.propagate_all_forward(clear=False) if not jumpop: - return - if self.jump_to_already_compiled_trace(jumpop): - # Found a compiled trace to jump to - if self.did_import: - - self.close_bridge(start_label) - self.finilize_short_preamble(start_label) return cell_token = jumpop.getdescr() assert isinstance(cell_token, JitCellToken) stop_label = ResOperation(rop.LABEL, jumpop.getarglist(), None, TargetToken(cell_token)) - if not self.did_import: # Enforce the previous behaviour of always peeling exactly one iteration (for now) - self.optimizer.flush() - KillHugeIntBounds(self.optimizer).apply() + From pullrequests-noreply at bitbucket.org Sat Jan 14 22:14:07 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Sat, 14 Jan 2012 21:14:07 -0000 Subject: [pypy-commit] [OPEN] Pull request #19 for pypy/pypy: Improvements to the JVM backend Message-ID: A new pull request has been opened by Michał Bendowski. benol/pypy has changes to be pulled into pypy/pypy. https://bitbucket.org/pypy/pypy/pull-request/19/improvements-to-the-jvm-backend Title: Improvements to the JVM backend The translation process now finishes successfully, although it produces invalid bytecode. Changes to be pulled: 2e1b33862b2c by benol: "Add files generated by PyCharm to .hgignore" 372628723c37 by benol: "Declare oo_primitives that should implement some rffi operations. For now the a?" 30e6d59d4333 by benol: "Add a missing cast from Unsigned to UnsignedLongLong." 0ab09a670d4a by benol: "Handle the 'jit_is_virtual' opcode by always returning False" 9031e9c4ad78 by benol: "Fix compute_unique_id to support built-ins."
Otherwise the translation fails bec?" ee14653b5fa0 by benol: "Fix userspace builders in ootype Implement the ll_getlength() method of StringB?" -- This is an issue notification from bitbucket.org. You are receiving this either because you are participating in the pull request, or you are following it. From noreply at buildbot.pypy.org Sat Jan 14 22:35:56 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sat, 14 Jan 2012 22:35:56 +0100 (CET) Subject: [pypy-commit] pypy matrixmath-dot: merge from default Message-ID: <20120114213556.0B89C71025D@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: matrixmath-dot Changeset: r51331:ba2aa2febde8 Date: 2012-01-14 22:17 +0200 http://bitbucket.org/pypy/pypy/changeset/ba2aa2febde8/ Log: merge from default diff too long, truncating to 10000 out of 12004 lines diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. -PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -84,34 +88,36
@@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -73,8 +73,12 @@ class Field(object): def __init__(self, name, offset, size, ctype, num, is_bitfield): - for k in ('name', 'offset', 'size', 'ctype', 'num', 'is_bitfield'): - self.__dict__[k] = locals()[k] + self.__dict__['name'] = name + self.__dict__['offset'] = offset + self.__dict__['size'] = size + self.__dict__['ctype'] = ctype + self.__dict__['num'] = num + self.__dict__['is_bitfield'] = is_bitfield def __setattr__(self, name, value): raise AttributeError(name) diff --git 
a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/__init__.py @@ -0,0 +1,2 @@ +from _numpypy import * +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/fromnumeric.py @@ -0,0 +1,2400 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. +# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) +# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. +__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. + + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. 
+ indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. + out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. + + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. + + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplemented('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. + order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. 
+ + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. + + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raise if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose make the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modiying the + # initial object. + >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties. Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. 
Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. 
+ + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... ) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. 
+ axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. + + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. 
+ axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. + + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. 
+ + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. + lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. 
+ + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '>> x + array([(1, 0), (0, 1)], + dtype=[('x', '>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed. 
+ + See Also + -------- + ndarray.argmax, argmin + amax : The maximum value along a given axis. + unravel_index : Convert a flat index into an index tuple. + + Notes + ----- + In case of multiple occurrences of the maximum values, the indices + corresponding to the first occurrence are returned. + + Examples + -------- + >>> a = np.arange(6).reshape(2,3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> np.argmax(a) + 5 + >>> np.argmax(a, axis=0) + array([1, 1, 1]) + >>> np.argmax(a, axis=1) + array([2, 2]) + + >>> b = np.arange(6) + >>> b[1] = 5 + >>> b + array([0, 5, 2, 3, 4, 5]) + >>> np.argmax(b) # Only the first occurrence is returned. + 1 + + """ + if not hasattr(a, 'argmax'): + a = numpypy.array(a) + return a.argmax() + + +def argmin(a, axis=None): + """ + Return the indices of the minimum values along an axis. + + See Also + -------- + argmax : Similar function. Please refer to `numpy.argmax` for detailed + documentation. + + """ + if not hasattr(a, 'argmin'): + a = numpypy.array(a) + return a.argmin() + + +def searchsorted(a, v, side='left'): + """ + Find indices where elements should be inserted to maintain order. + + Find the indices into a sorted array `a` such that, if the corresponding + elements in `v` were inserted before the indices, the order of `a` would + be preserved. + + Parameters + ---------- + a : 1-D array_like + Input array, sorted in ascending order. + v : array_like + Values to insert into `a`. + side : {'left', 'right'}, optional + If 'left', the index of the first suitable location found is given. If + 'right', return the last such index. If there is no suitable + index, return either 0 or N (where N is the length of `a`). + + Returns + ------- + indices : array of ints + Array of insertion points with the same shape as `v`. + + See Also + -------- + sort : Return a sorted copy of an array. + histogram : Produce histogram from 1-D data. + + Notes + ----- + Binary search is used to find the required insertion points. 
+ + As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing + `nan` values. The enhanced sort order is documented in `sort`. + + Examples + -------- + >>> np.searchsorted([1,2,3,4,5], 3) + 2 + >>> np.searchsorted([1,2,3,4,5], 3, side='right') + 3 + >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]) + array([0, 5, 1, 2]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def resize(a, new_shape): + """ + Return a new array with the specified shape. + + If the new array is larger than the original array, then the new + array is filled with repeated copies of `a`. Note that this behavior + is different from a.resize(new_shape) which fills with zeros instead + of repeated copies of `a`. + + Parameters + ---------- + a : array_like + Array to be resized. + + new_shape : int or tuple of int + Shape of resized array. + + Returns + ------- + reshaped_array : ndarray + The new array is formed from the data in the old array, repeated + if necessary to fill out the required number of elements. The + data are repeated in the order that they are stored in memory. + + See Also + -------- + ndarray.resize : resize an array in-place. + + Examples + -------- + >>> a=np.array([[0,1],[2,3]]) + >>> np.resize(a,(1,4)) + array([[0, 1, 2, 3]]) + >>> np.resize(a,(2,4)) + array([[0, 1, 2, 3], + [0, 1, 2, 3]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def squeeze(a): + """ + Remove single-dimensional entries from the shape of an array. + + Parameters + ---------- + a : array_like + Input data. + + Returns + ------- + squeezed : ndarray + The input array, but with with all dimensions of length 1 + removed. Whenever possible, a view on `a` is returned. + + Examples + -------- + >>> x = np.array([[[0], [1], [2]]]) + >>> x.shape + (1, 3, 1) + >>> np.squeeze(x).shape + (3,) + + """ + raise NotImplemented('Waiting on interp level method') + + +def diagonal(a, offset=0, axis1=0, axis2=1): + """ + Return specified diagonals. 
+ + If `a` is 2-D, returns the diagonal of `a` with the given offset, + i.e., the collection of elements of the form ``a[i, i+offset]``. If + `a` has more than two dimensions, then the axes specified by `axis1` + and `axis2` are used to determine the 2-D sub-array whose diagonal is + returned. The shape of the resulting array can be determined by + removing `axis1` and `axis2` and appending an index to the right equal + to the size of the resulting diagonals. + + Parameters + ---------- + a : array_like + Array from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be positive or + negative. Defaults to main diagonal (0). + axis1 : int, optional + Axis to be used as the first axis of the 2-D sub-arrays from which + the diagonals should be taken. Defaults to first axis (0). + axis2 : int, optional + Axis to be used as the second axis of the 2-D sub-arrays from + which the diagonals should be taken. Defaults to second axis (1). + + Returns + ------- + array_of_diagonals : ndarray + If `a` is 2-D, a 1-D array containing the diagonal is returned. + If the dimension of `a` is larger, then an array of diagonals is + returned, "packed" from left-most dimension to right-most (e.g., + if `a` is 3-D, then the diagonals are "packed" along rows). + + Raises + ------ + ValueError + If the dimension of `a` is less than 2. + + See Also + -------- + diag : MATLAB work-a-like for 1-D and 2-D arrays. + diagflat : Create diagonal arrays. + trace : Sum along diagonals. + + Examples + -------- + >>> a = np.arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + A 3-D example: + + >>> a = np.arange(8).reshape(2,2,2); a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0, # Main diagonals of two arrays created by skipping + ... 0, # across the outer(left)-most axis last and + ... 1) # the "middle" (row) axis first. 
+ array([[0, 6], + [1, 7]]) + + The sub-arrays whose main diagonals we just obtained; note that each + corresponds to fixing the right-most (column) axis, and that the + diagonals are "packed" in rows. + + >>> a[:,:,0] # main diagonal is [0 6] + array([[0, 2], + [4, 6]]) + >>> a[:,:,1] # main diagonal is [1 7] + array([[1, 3], + [5, 7]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): + """ + Return the sum along diagonals of the array. + + If `a` is 2-D, the sum along its diagonal with the given offset + is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i. + + If `a` has more than two dimensions, then the axes specified by axis1 and + axis2 are used to determine the 2-D sub-arrays whose traces are returned. + The shape of the resulting array is the same as that of `a` with `axis1` + and `axis2` removed. + + Parameters + ---------- + a : array_like + Input array, from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to 0. + axis1, axis2 : int, optional + Axes to be used as the first and second axis of the 2-D sub-arrays + from which the diagonals should be taken. Defaults are the first two + axes of `a`. + dtype : dtype, optional + Determines the data-type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and `a` is + of integer type of precision less than the default integer + precision, then the default integer precision is used. Otherwise, + the precision is the same as that of `a`. + out : ndarray, optional + Array into which the output is placed. Its type is preserved and + it must be of the right shape to hold the output. + + Returns + ------- + sum_along_diagonals : ndarray + If `a` is 2-D, the sum along the diagonal is returned. 
If `a` has
+ larger dimensions, then an array of sums along diagonals is returned.
+
+ See Also
+ --------
+ diag, diagonal, diagflat
+
+ Examples
+ --------
+ >>> np.trace(np.eye(3))
+ 3.0
+ >>> a = np.arange(8).reshape((2,2,2))
+ >>> np.trace(a)
+ array([6, 8])
+
+ >>> a = np.arange(24).reshape((2,2,2,3))
+ >>> np.trace(a).shape
+ (2, 3)
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+def ravel(a, order='C'):
+ """
+ Return a flattened array.
+
+ A 1-D array, containing the elements of the input, is returned. A copy is
+ made only if needed.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array. The elements in ``a`` are read in the order specified by
+ `order`, and packed as a 1-D array.
+ order : {'C','F', 'A', 'K'}, optional
+ The elements of ``a`` are read in this order. 'C' means to view
+ the elements in C (row-major) order. 'F' means to view the elements
+ in Fortran (column-major) order. 'A' means to view the elements
+ in 'F' order if a is Fortran contiguous, 'C' order otherwise.
+ 'K' means to view the elements in the order they occur in memory,
+ except for reversing the data when strides are negative.
+ By default, 'C' order is used.
+
+ Returns
+ -------
+ 1d_array : ndarray
+ Output of the same dtype as `a`, and of shape ``(a.size(),)``.
+
+ See Also
+ --------
+ ndarray.flat : 1-D iterator over an array.
+ ndarray.flatten : 1-D array copy of the elements of an array
+ in row-major order.
+
+ Notes
+ -----
+ In row-major order, the row index varies the slowest, and the column
+ index the quickest. This can be generalized to multiple dimensions,
+ where row-major order implies that the index along the first axis
+ varies slowest, and the index along the last quickest. The opposite holds
+ for Fortran-, or column-major, mode.
+
+ Examples
+ --------
+ It is equivalent to ``reshape(-1, order=order)``.
+
+ >>> x = np.array([[1, 2, 3], [4, 5, 6]])
+ >>> print np.ravel(x)
+ [1 2 3 4 5 6]
+
+ >>> print x.reshape(-1)
+ [1 2 3 4 5 6]
+
+ >>> print np.ravel(x, order='F')
+ [1 4 2 5 3 6]
+
+ When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering:
+
+ >>> print np.ravel(x.T)
+ [1 4 2 5 3 6]
+ >>> print np.ravel(x.T, order='A')
+ [1 2 3 4 5 6]
+
+ When ``order`` is 'K', it will preserve orderings that are neither 'C'
+ nor 'F', but won't reverse axes:
+
+ >>> a = np.arange(3)[::-1]; a
+ array([2, 1, 0])
+ >>> a.ravel(order='C')
+ array([2, 1, 0])
+ >>> a.ravel(order='K')
+ array([2, 1, 0])
+
+ >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a
+ array([[[ 0, 2, 4],
+ [ 1, 3, 5]],
+ [[ 6, 8, 10],
+ [ 7, 9, 11]]])
+ >>> a.ravel(order='C')
+ array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11])
+ >>> a.ravel(order='K')
+ array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def nonzero(a):
+ """
+ Return the indices of the elements that are non-zero.
+
+ Returns a tuple of arrays, one for each dimension of `a`, containing
+ the indices of the non-zero elements in that dimension. The
+ corresponding non-zero values can be obtained with::
+
+ a[nonzero(a)]
+
+ To group the indices by element, rather than dimension, use::
+
+ transpose(nonzero(a))
+
+ The result of this is always a 2-D array, with a row for
+ each non-zero element.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ tuple_of_arrays : tuple
+ Indices of elements that are non-zero.
+
+ See Also
+ --------
+ flatnonzero :
+ Return indices that are non-zero in the flattened version of the input
+ array.
+ ndarray.nonzero :
+ Equivalent ndarray method.
+ count_nonzero :
+ Counts the number of non-zero elements in the input array.
+
+ Examples
+ --------
+ >>> x = np.eye(3)
+ >>> x
+ array([[ 1., 0., 0.],
+ [ 0., 1., 0.],
+ [ 0., 0., 1.]])
+ >>> np.nonzero(x)
+ (array([0, 1, 2]), array([0, 1, 2]))
+
+ >>> x[np.nonzero(x)]
+ array([ 1., 1., 1.])
+ >>> np.transpose(np.nonzero(x))
+ array([[0, 0],
+ [1, 1],
+ [2, 2]])
+
+ A common use for ``nonzero`` is to find the indices of an array, where
+ a condition is True. Given an array `a`, the condition `a` > 3 is a
+ boolean array and since False is interpreted as 0, np.nonzero(a > 3)
+ yields the indices of the `a` where the condition is true.
+
+ >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]])
+ >>> a > 3
+ array([[False, False, False],
+ [ True, True, True],
+ [ True, True, True]], dtype=bool)
+ >>> np.nonzero(a > 3)
+ (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+ The ``nonzero`` method of the boolean array can also be called.
+
+ >>> (a > 3).nonzero()
+ (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def shape(a):
+ """
+ Return the shape of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+
+ Returns
+ -------
+ shape : tuple of ints
+ The elements of the shape tuple give the lengths of the
+ corresponding array dimensions.
+
+ See Also
+ --------
+ alen
+ ndarray.shape : Equivalent array method.
+
+ Examples
+ --------
+ >>> np.shape(np.eye(3))
+ (3, 3)
+ >>> np.shape([[1, 2]])
+ (1, 2)
+ >>> np.shape([0])
+ (1,)
+ >>> np.shape(0)
+ ()
+
+ >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
+ >>> np.shape(a)
+ (2,)
+ >>> a.shape
+ (2,)
+
+ """
+ if not hasattr(a, 'shape'):
+ a = numpypy.array(a)
+ return a.shape
+
+
+def compress(condition, a, axis=None, out=None):
+ """
+ Return selected slices of an array along given axis.
+
+ When working along a given axis, a slice along that axis is returned in
+ `output` for each index where `condition` evaluates to True.
When
+ working on a 1-D array, `compress` is equivalent to `extract`.
+
+ Parameters
+ ----------
+ condition : 1-D array of bools
+ Array that selects which entries to return. If len(condition)
+ is less than the size of `a` along the given axis, then output is
+ truncated to the length of the condition array.
+ a : array_like
+ Array from which to extract a part.
+ axis : int, optional
+ Axis along which to take slices. If None (default), work on the
+ flattened array.
+ out : ndarray, optional
+ Output array. Its type is preserved and it must be of the right
+ shape to hold the output.
+
+ Returns
+ -------
+ compressed_array : ndarray
+ A copy of `a` without the slices along axis for which `condition`
+ is false.
+
+ See Also
+ --------
+ take, choose, diag, diagonal, select
+ ndarray.compress : Equivalent method.
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Examples
+ --------
+ >>> a = np.array([[1, 2], [3, 4], [5, 6]])
+ >>> a
+ array([[1, 2],
+ [3, 4],
+ [5, 6]])
+ >>> np.compress([0, 1], a, axis=0)
+ array([[3, 4]])
+ >>> np.compress([False, True, True], a, axis=0)
+ array([[3, 4],
+ [5, 6]])
+ >>> np.compress([False, True], a, axis=1)
+ array([[2],
+ [4],
+ [6]])
+
+ Working on the flattened array does not return slices along an axis but
+ selects elements.
+
+ >>> np.compress([False, True], a)
+ array([2])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def clip(a, a_min, a_max, out=None):
+ """
+ Clip (limit) the values in an array.
+
+ Given an interval, values outside the interval are clipped to
+ the interval edges. For example, if an interval of ``[0, 1]``
+ is specified, values smaller than 0 become 0, and values larger
+ than 1 become 1.
+
+ Parameters
+ ----------
+ a : array_like
+ Array containing elements to clip.
+ a_min : scalar or array_like
+ Minimum value.
+ a_max : scalar or array_like
+ Maximum value. If `a_min` or `a_max` are array_like, then they will
+ be broadcasted to the shape of `a`.
+
+ out : ndarray, optional
+ The results will be placed in this array. It may be the input
+ array for in-place clipping. `out` must be of the right shape
+ to hold the output. Its type is preserved.
+
+ Returns
+ -------
+ clipped_array : ndarray
+ An array with the elements of `a`, but where values
+ < `a_min` are replaced with `a_min`, and those > `a_max`
+ with `a_max`.
+
+ See Also
+ --------
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Examples
+ --------
+ >>> a = np.arange(10)
+ >>> np.clip(a, 1, 8)
+ array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8])
+ >>> a
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.clip(a, 3, 6, out=a)
+ array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
+ >>> a = np.arange(10)
+ >>> a
+ array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+ >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8)
+ array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def sum(a, axis=None, dtype=None, out=None):
+ """
+ Sum of array elements over a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Elements to sum.
+ axis : integer, optional
+ Axis over which the sum is taken. By default `axis` is None,
+ and all elements are summed.
+ dtype : dtype, optional
+ The type of the returned array and of the accumulator in which
+ the elements are summed. By default, the dtype of `a` is used.
+ An exception is when `a` has an integer type with less precision
+ than the default platform integer. In that case, the default
+ platform integer is used instead.
+ out : ndarray, optional
+ Array into which the output is placed. By default, a new array is
+ created. If `out` is given, it must be of the appropriate shape
+ (the shape of `a` with `axis` removed, i.e.,
+ ``numpy.delete(a.shape, axis)``). Its type is preserved. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ sum_along_axis : ndarray
+ An array with the same shape as `a`, with the specified
+ axis removed.
If `a` is a 0-d array, or if `axis` is None, a scalar
+ is returned. If an output array is specified, a reference to
+ `out` is returned.
+
+ See Also
+ --------
+ ndarray.sum : Equivalent method.
+
+ cumsum : Cumulative sum of array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ mean, average
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> np.sum([0.5, 1.5])
+ 2.0
+ >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
+ 1
+ >>> np.sum([[0, 1], [0, 5]])
+ 6
+ >>> np.sum([[0, 1], [0, 5]], axis=0)
+ array([0, 6])
+ >>> np.sum([[0, 1], [0, 5]], axis=1)
+ array([1, 5])
+
+ If the accumulator is too small, overflow occurs:
+
+ >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
+ -128
+
+ """
+ if not hasattr(a, "sum"):
+ a = numpypy.array(a)
+ # Note: the axis, dtype and out arguments are not yet supported by
+ # this app-level fallback; it always sums the flattened array.
+ return a.sum()
+
+
+def product(a, axis=None, dtype=None, out=None):
+ """
+ Return the product of array elements over a given axis.
+
+ See Also
+ --------
+ prod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def sometrue(a, axis=None, out=None):
+ """
+ Check whether some values are true.
+
+ Refer to `any` for full documentation.
+
+ See Also
+ --------
+ any : equivalent function
+
+ """
+ if not hasattr(a, 'any'):
+ a = numpypy.array(a)
+ return a.any()
+
+
+def alltrue(a, axis=None, out=None):
+ """
+ Check if all elements of input array are true.
+
+ See Also
+ --------
+ numpy.all : Equivalent function; see for details.
+
+ """
+ if not hasattr(a, 'all'):
+ a = numpypy.array(a)
+ return a.all()
+
+def any(a, axis=None, out=None):
+ """
+ Test whether any array element along a given axis evaluates to True.
+
+ Returns a single boolean unless `axis` is not ``None``.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array or object that can be converted to an array.
+ axis : int, optional
+ Axis along which a logical OR is performed.
The default + (`axis` = `None`) is to perform a logical OR over a flattened + input array. `axis` may be negative, in which case it counts + from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output and its type is preserved + (e.g., if it is of type float, then it will remain so, returning + 1.0 for True and 0.0 for False, regardless of the type of `a`). + See `doc.ufuncs` (Section "Output arguments") for details. + + Returns + ------- + any : bool or ndarray + A new boolean or `ndarray` is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.any : equivalent method + + all : Test whether all elements along a given axis evaluate to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity evaluate + to `True` because these are not equal to zero. + + Examples + -------- + >>> np.any([[True, False], [True, True]]) + True + + >>> np.any([[True, False], [False, False]], axis=0) + array([ True, False], dtype=bool) + + >>> np.any([-1, 0, 5]) + True + + >>> np.any(np.nan) + True + + >>> o=np.array([False]) + >>> z=np.any([-1, 4, 5], out=o) + >>> z, o + (array([ True], dtype=bool), array([ True], dtype=bool)) + >>> # Check now that z is a reference to o + >>> z is o + True + >>> id(z), id(o) # identity of z and o # doctest: +SKIP + (191614240, 191614240) + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def all(a,axis=None, out=None): + """ + Test whether all array elements along a given axis evaluate to True. + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical AND is performed. + The default (`axis` = `None`) is to perform a logical AND + over a flattened input array. 
`axis` may be negative, in which + case it counts from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. + It must have the same shape as the expected output and its + type is preserved (e.g., if ``dtype(out)`` is float, the result + will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section + "Output arguments") for more details. + + Returns + ------- + all : ndarray, bool + A new boolean or array is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.all : equivalent method + + any : Test whether any element along a given axis evaluates to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity + evaluate to `True` because these are not equal to zero. + + Examples + -------- + >>> np.all([[True,False],[True,True]]) + False + + >>> np.all([[True,False],[True,True]], axis=0) + array([ True, False], dtype=bool) + + >>> np.all([-1, 4, 5]) + True + + >>> np.all([1.0, np.nan]) + True + + >>> o=np.array([False]) + >>> z=np.all([-1, 4, 5], out=o) + >>> id(z), id(o), z # doctest: +SKIP + (28293632, 28293632, array([ True], dtype=bool)) + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + + +def cumsum (a, axis=None, dtype=None, out=None): + """ + Return the cumulative sum of the elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative sum is computed. The default + (None) is to compute the cumsum over the flattened array. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults + to the dtype of `a`, unless `a` has an integer dtype with a + precision less than that of the default platform integer. In + that case, the default platform integer is used. 
+
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type will be cast if necessary. See `doc.ufuncs`
+ (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ cumsum_along_axis : ndarray.
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to `out` is returned. The
+ result has the same size as `a`, and the same shape as `a` if
+ `axis` is not None or `a` is a 1-d array.
+
+
+ See Also
+ --------
+ sum : Sum array elements.
+
+ trapz : Integration of array values using the composite trapezoidal rule.
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3], [4,5,6]])
+ >>> a
+ array([[1, 2, 3],
+ [4, 5, 6]])
+ >>> np.cumsum(a)
+ array([ 1, 3, 6, 10, 15, 21])
+ >>> np.cumsum(a, dtype=float) # specifies type of output value(s)
+ array([ 1., 3., 6., 10., 15., 21.])
+
+ >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns
+ array([[1, 2, 3],
+ [5, 7, 9]])
+ >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows
+ array([[ 1, 3, 6],
+ [ 4, 9, 15]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def cumproduct(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product over the given axis.
+
+
+ See Also
+ --------
+ cumprod : equivalent function; see for details.
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ptp(a, axis=None, out=None):
+ """
+ Range of values (maximum - minimum) along an axis.
+
+ The name of the function comes from the acronym for 'peak to peak'.
+
+ Parameters
+ ----------
+ a : array_like
+ Input values.
+ axis : int, optional
+ Axis along which to find the peaks. By default, flatten the
+ array.
+ out : array_like
+ Alternative output array in which to place the result.
It must
+ have the same shape and buffer length as the expected output,
+ but the type of the output values will be cast if necessary.
+
+ Returns
+ -------
+ ptp : ndarray
+ A new array holding the result, unless `out` was
+ specified, in which case a reference to `out` is returned.
+
+ Examples
+ --------
+ >>> x = np.arange(4).reshape((2,2))
+ >>> x
+ array([[0, 1],
+ [2, 3]])
+
+ >>> np.ptp(x, axis=0)
+ array([2, 2])
+
+ >>> np.ptp(x, axis=1)
+ array([1, 1])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def amax(a, axis=None, out=None):
+ """
+ Return the maximum of an array or maximum along an axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which to operate. By default flattened input is used.
+ out : ndarray, optional
+ Alternate output array in which to place the result. Must be of
+ the same shape and buffer length as the expected output. See
+ `doc.ufuncs` (Section "Output arguments") for more details.
+
+ Returns
+ -------
+ amax : ndarray or scalar
+ Maximum of `a`. If `axis` is None, the result is a scalar value.
+ If `axis` is given, the result is an array of dimension
+ ``a.ndim - 1``.
+
+ See Also
+ --------
+ nanmax : NaN values are ignored instead of being propagated.
+ fmax : same behavior as the C99 fmax function.
+ argmax : indices of the maximum values.
+
+ Notes
+ -----
+ NaN values are propagated, that is if at least one item is NaN, the
+ corresponding max value will be NaN as well. To ignore NaN values
+ (MATLAB behavior), please use nanmax.
+ + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amax(a) + 3 + >>> np.amax(a, axis=0) + array([2, 3]) + >>> np.amax(a, axis=1) + array([1, 3]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amax(b) + nan + >>> np.nanmax(b) + 4.0 + + """ + if not hasattr(a, "max"): + a = numpypy.array(a) + return a.max() + + +def amin(a, axis=None, out=None): + """ + Return the minimum of an array or minimum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default a flattened input is used. + out : ndarray, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + See `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amin : ndarray + A new array or a scalar array with the result. + + See Also + -------- + nanmin: nan values are ignored instead of being propagated + fmin: same behavior as the C99 fmin function + argmin: Return the indices of the minimum values. + + amax, nanmax, fmax + + Notes + ----- + NaN values are propagated, that is if at least one item is nan, the + corresponding min value will be nan as well. To ignore NaN values (matlab + behavior), please use nanmin. + + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amin(a) # Minimum of the flattened array + 0 + >>> np.amin(a, axis=0) # Minima along the first axis + array([0, 1]) + >>> np.amin(a, axis=1) # Minima along the second axis + array([0, 2]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amin(b) + nan + >>> np.nanmin(b) + 0.0 + + """ + # amin() is equivalent to min() + if not hasattr(a, 'min'): + a = numpypy.array(a) + return a.min() + +def alen(a): + """ + Return the length of the first dimension of the input array. 
+ + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + l : int + Length of the first dimension of `a`. + + See Also + -------- + shape, size + + Examples + -------- + >>> a = np.zeros((7,4,5)) + >>> a.shape[0] + 7 + >>> np.alen(a) + 7 + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape[0] + + +def prod(a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis over which the product is taken. By default, the product + of all elements is calculated. + dtype : data-type, optional + The data-type of the returned array, as well as of the accumulator + in which the elements are multiplied. By default, if `a` is of + integer type, `dtype` is the default platform integer. (Note: if + the type of `a` is unsigned, then so is `dtype`.) Otherwise, + the dtype is the same as that of `a`. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the + output values will be cast if necessary. + + Returns + ------- + product_along_axis : ndarray, see `dtype` parameter above. + An array shaped as `a` but with the specified axis removed. + Returns a reference to `out` if specified. + + See Also + -------- + ndarray.prod : equivalent method + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. 
That means that, on a 32-bit platform:
+
+ >>> x = np.array([536870910, 536870910, 536870910, 536870910])
+ >>> np.prod(x) #random
+ 16
+
+ Examples
+ --------
+ By default, calculate the product of all elements:
+
+ >>> np.prod([1.,2.])
+ 2.0
+
+ Even when the input array is two-dimensional:
+
+ >>> np.prod([[1.,2.],[3.,4.]])
+ 24.0
+
+ But we can also specify the axis over which to multiply:
+
+ >>> np.prod([[1.,2.],[3.,4.]], axis=1)
+ array([ 2., 12.])
+
+ If the type of `x` is unsigned, then the output type is
+ the unsigned platform integer:
+
+ >>> x = np.array([1, 2, 3], dtype=np.uint8)
+ >>> np.prod(x).dtype == np.uint
+ True
+
+ If `x` is of a signed integer type, then the output type
+ is the default platform integer:
+
+ >>> x = np.array([1, 2, 3], dtype=np.int8)
+ >>> np.prod(x).dtype == np.int
+ True
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def cumprod(a, axis=None, dtype=None, out=None):
+ """
+ Return the cumulative product of elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array.
+ axis : int, optional
+ Axis along which the cumulative product is computed. By default
+ the input is flattened.
+ dtype : dtype, optional
+ Type of the returned array, as well as of the accumulator in which
+ the elements are multiplied. If *dtype* is not specified, it
+ defaults to the dtype of `a`, unless `a` has an integer dtype with
+ a precision less than that of the default platform integer. In
+ that case, the default platform integer is used instead.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must
+ have the same shape and buffer length as the expected output
+ but the type of the resulting values will be cast if necessary.
+
+ Returns
+ -------
+ cumprod : ndarray
+ A new array holding the result is returned unless `out` is
+ specified, in which case a reference to out is returned.
+
+ See Also
+ --------
+ numpy.doc.ufuncs : Section "Output arguments"
+
+ Notes
+ -----
+ Arithmetic is modular when using integer types, and no error is
+ raised on overflow.
+
+ Examples
+ --------
+ >>> a = np.array([1,2,3])
+ >>> np.cumprod(a) # intermediate results 1, 1*2
+ ... # total product 1*2*3 = 6
+ array([1, 2, 6])
+ >>> a = np.array([[1, 2, 3], [4, 5, 6]])
+ >>> np.cumprod(a, dtype=float) # specify type of output
+ array([ 1., 2., 6., 24., 120., 720.])
+
+ The cumulative product for each column (i.e., over the rows) of `a`:
+
+ >>> np.cumprod(a, axis=0)
+ array([[ 1, 2, 3],
+ [ 4, 10, 18]])
+
+ The cumulative product for each row (i.e. over the columns) of `a`:
+
+ >>> np.cumprod(a,axis=1)
+ array([[ 1, 2, 6],
+ [ 4, 20, 120]])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def ndim(a):
+ """
+ Return the number of dimensions of an array.
+
+ Parameters
+ ----------
+ a : array_like
+ Input array. If it is not already an ndarray, a conversion is
+ attempted.
+
+ Returns
+ -------
+ number_of_dimensions : int
+ The number of dimensions in `a`. Scalars are zero-dimensional.
+
+ See Also
+ --------
+ ndarray.ndim : equivalent method
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+
+ Examples
+ --------
+ >>> np.ndim([[1,2,3],[4,5,6]])
+ 2
+ >>> np.ndim(np.array([[1,2,3],[4,5,6]]))
+ 2
+ >>> np.ndim(1)
+ 0
+
+ """
+ if not hasattr(a, 'ndim'):
+ a = numpypy.array(a)
+ return a.ndim
+
+
+def rank(a):
+ """
+ Return the number of dimensions of an array.
+
+ If `a` is not already an array, a conversion is attempted.
+ Scalars are zero dimensional.
+
+ Parameters
+ ----------
+ a : array_like
+ Array whose number of dimensions is desired. If `a` is not an array,
+ a conversion is attempted.
+
+ Returns
+ -------
+ number_of_dimensions : int
+ The number of dimensions in the array.
+
+ See Also
+ --------
+ ndim : equivalent function
+ ndarray.ndim : equivalent property
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+
+ Notes
+ -----
+ In the old Numeric package, `rank` was the term used for the number of
+ dimensions, but in Numpy `ndim` is used instead.
+
+ Examples
+ --------
+ >>> np.rank([1,2,3])
+ 1
+ >>> np.rank(np.array([[1,2,3],[4,5,6]]))
+ 2
+ >>> np.rank(1)
+ 0
+
+ """
+ if not hasattr(a, 'ndim'):
+ a = numpypy.array(a)
+ return a.ndim
+
+
+def size(a, axis=None):
+ """
+ Return the number of elements along a given axis.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ axis : int, optional
+ Axis along which the elements are counted. By default, give
+ the total number of elements.
+
+ Returns
+ -------
+ element_count : int
+ Number of elements along the specified axis.
+
+ See Also
+ --------
+ shape : dimensions of array
+ ndarray.shape : dimensions of array
+ ndarray.size : number of elements in array
+
+ Examples
+ --------
+ >>> a = np.array([[1,2,3],[4,5,6]])
+ >>> np.size(a)
+ 6
+ >>> np.size(a,1)
+ 3
+ >>> np.size(a,0)
+ 2
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def around(a, decimals=0, out=None):
+ """
+ Evenly round to the given number of decimals.
+
+ Parameters
+ ----------
+ a : array_like
+ Input data.
+ decimals : int, optional
+ Number of decimal places to round to (default: 0). If
+ decimals is negative, it specifies the number of positions to
+ the left of the decimal point.
+ out : ndarray, optional
+ Alternative output array in which to place the result. It must have
+ the same shape as the expected output, but the type of the output
+ values will be cast if necessary. See `doc.ufuncs` (Section
+ "Output arguments") for details.
+
+ Returns
+ -------
+ rounded_array : ndarray
+ An array of the same type as `a`, containing the rounded values.
+ Unless `out` was specified, a new array is created. A reference to
+ the result is returned.
+
+ The real and imaginary parts of complex numbers are rounded
+ separately. The result of rounding a float is a float.
+
+ See Also
+ --------
+ ndarray.round : equivalent method
+
+ ceil, fix, floor, rint, trunc
+
+
+ Notes
+ -----
+ For values exactly halfway between rounded decimal values, Numpy
+ rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0,
+ -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due
+ to the inexact representation of decimal fractions in the IEEE
+ floating point standard [1]_ and errors introduced when scaling
+ by powers of ten.
+
+ References
+ ----------
+ .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
+ .. [2] "How Futile are Mindless Assessments of
+ Roundoff in Floating-Point Computation?", William Kahan,
+ http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
+
+ Examples
+ --------
+ >>> np.around([0.37, 1.64])
+ array([ 0., 2.])
+ >>> np.around([0.37, 1.64], decimals=1)
+ array([ 0.4, 1.6])
+ >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value
+ array([ 0., 2., 2., 4., 4.])
+ >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned
+ array([ 1, 2, 3, 11])
+ >>> np.around([1,2,3,11], decimals=-1)
+ array([ 0, 0, 0, 10])
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def round_(a, decimals=0, out=None):
+ """
+ Round an array to the given number of decimals.
+
+ Refer to `around` for full documentation.
+
+ See Also
+ --------
+ around : equivalent function
+
+ """
+ raise NotImplementedError('Waiting on interp level method')
+
+
+def mean(a, axis=None, dtype=None, out=None):
+ """
+ Compute the arithmetic mean along the specified axis.
+
+ Returns the average of the array elements. The average is taken over
+ the flattened array by default, otherwise over the specified axis.
+ `float64` intermediate and return values are used for integer inputs.
+ + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. 
+ + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean() + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. 
+ + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. 
+ + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Array containing numbers whose variance is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the variance is computed. The default is to compute + the variance of the flattened array. + dtype : data-type, optional + Type to use in computing the variance. For arrays of integer type + the default is `float64`; for arrays of float types it is the same as + the array type. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output, but the type is cast if + necessary. + ddof : int, optional + "Delta Degrees of Freedom": the divisor used in the calculation is + ``N - ddof``, where ``N`` represents the number of elements. By + default `ddof` is zero. + + Returns + ------- + variance : ndarray, see dtype parameter above + If ``out=None``, returns a new array containing the variance; + otherwise, a reference to the output array is returned. + + See Also + -------- + std : Standard deviation + mean : Average + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e., ``var = mean(abs(x - x.mean())**2)``. + + The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. + If, however, `ddof` is specified, the divisor ``N - ddof`` is used + instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of a hypothetical infinite population. + ``ddof=0`` provides a maximum likelihood estimate of the variance for + normally distributed variables. + + Note that for complex numbers, the absolute value is taken before + squaring, so that the result is always real and nonnegative.
+ + For floating-point input, the variance is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for `float32` (see example + below). Specifying a higher-accuracy accumulator using the ``dtype`` + keyword can alleviate this issue. + + Examples + -------- + >>> a = np.array([[1,2],[3,4]]) + >>> np.var(a) + 1.25 + >>> np.var(a,0) + array([ 1., 1.]) + >>> np.var(a,1) + array([ 0.25, 0.25]) + + In single precision, var() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.var(a) + 0.20405951142311096 + + Computing the variance in float64 is more accurate: + + >>> np.var(a, dtype=np.float64) + 0.20249999932997387 + >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 + 0.20250000000000001 + + """ + if not hasattr(a, "var"): + a = numpypy.array(a) + return a.var() diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/test/test_fromnumeric.py @@ -0,0 +1,109 @@ + +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + +class AppTestFromNumeric(BaseNumpyAppTest): + def test_argmax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, argmax + a = arange(6).reshape((2,3)) + assert argmax(a) == 5 + # assert (argmax(a, axis=0) == array([1, 1, 1])).all() + # assert (argmax(a, axis=1) == array([2, 2])).all() + b = arange(6) + b[1] = 5 + assert argmax(b) == 1 + + def test_argmin(self): + # tests adapted from test_argmax + from numpypy import array, arange, argmin + a = arange(6).reshape((2,3)) + assert argmin(a) == 0 + # assert (argmin(a, axis=0) == array([0, 0, 0])).all() + # assert (argmin(a, axis=1) == array([0, 0])).all() + b = arange(6) + b[1] = 0 + assert argmin(b) == 0 + + def test_shape(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy
import array, identity, shape + assert shape(identity(3)) == (3, 3) + assert shape([[1, 2]]) == (1, 2) + assert shape([0]) == (1,) + assert shape(0) == () + # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + # assert shape(a) == (2,) + + def test_sum(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, sum, ones + assert sum([0.5, 1.5])== 2.0 + assert sum([[0, 1], [0, 5]]) == 6 + # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 + # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + # If the accumulator is too small, overflow occurs: + # assert ones(128, dtype=int8).sum(dtype=int8) == -128 + + def test_amin(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amin + a = arange(4).reshape((2,2)) + assert amin(a) == 0 + # # Minima along the first axis + # assert (amin(a, axis=0) == array([0, 1])).all() + # # Minima along the second axis + # assert (amin(a, axis=1) == array([0, 2])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amin(b) == nan + # assert nanmin(b) == 0.0 + + def test_amax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amax + a = arange(4).reshape((2,2)) + assert amax(a) == 3 + # assert (amax(a, axis=0) == array([2, 3])).all() + # assert (amax(a, axis=1) == array([1, 3])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amax(b) == nan + # assert nanmax(b) == 4.0 + + def test_alen(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, zeros, alen + a = zeros((7,4,5)) + assert a.shape[0] == 7 + assert alen(a) == 7 + + def test_ndim(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, ndim + assert ndim([[1,2,3],[4,5,6]]) == 2 + assert ndim(array([[1,2,3],[4,5,6]])) == 2 + 
assert ndim(1) == 0 + + def test_rank(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, rank + assert rank([[1,2,3],[4,5,6]]) == 2 + assert rank(array([[1,2,3],[4,5,6]])) == 2 + assert rank(1) == 0 + + def test_var(self): + from numpypy import array, var + a = array([[1,2],[3,4]]) + assert var(a) == 1.25 + # assert (np.var(a,0) == array([ 1., 1.])).all() + # assert (np.var(a,1) == array([ 0.25, 0.25])).all() + + def test_std(self): + from numpypy import array, std + a = array([[1, 2], [3, 4]]) + assert std(a) == 1.1180339887498949 + # assert (std(a, axis=0) == array([ 1., 1.])).all() + # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -180,7 +180,12 @@ if name is None: name = pyobj.func_name if signature is None: - signature = cpython_code_signature(pyobj.func_code) + if hasattr(pyobj, '_generator_next_method_of_'): + from pypy.interpreter.argument import Signature + signature = Signature(['entry']) # haaaaaack + defaults = () + else: + signature = cpython_code_signature(pyobj.func_code) if defaults is None: defaults = pyobj.func_defaults self.name = name @@ -252,7 +257,8 @@ try: inputcells = args.match_signature(signature, defs_s) except ArgErr, e: - raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) + raise TypeError("signature mismatch: %s() %s" % + (self.name, e.getmsg())) return inputcells def specialize(self, inputcells, op=None): diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -12,7 +12,7 @@ PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest +.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest help: @echo "Please use \`make <target>' where <target> is one of" @@ -23,6 +23,7 @@ @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @@ -79,6 +80,11 @@ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man" + changes: python config/generate.py $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -175,15 +175,15 @@ RPython ================= -RPython Definition, not ----------------------- +RPython Definition +------------------ -The list and exact details of the "RPython" restrictions are a somewhat -evolving topic. In particular, we have no formal language definition -as we find it more practical to discuss and evolve the set of -restrictions while working on the whole program analysis. If you -have any questions about the restrictions below then please feel -free to mail us at pypy-dev at codespeak net. +RPython is a restricted subset of Python that is amenable to static analysis. +Although there are additions to the language and some things might surprisingly +work, this is a rough list of restrictions that should be considered.
Note +that there are tons of special cased restrictions that you'll encounter +as you go. The exact definition is "RPython is everything that our translation +toolchain can accept" :) .. _`wrapped object`: coding-guide.html#wrapping-rules @@ -198,7 +198,7 @@ contain both a string and a int must be avoided. It is allowed to mix None (basically with the role of a null pointer) with many other types: `wrapped objects`, class instances, lists, dicts, strings, etc. - but *not* with int and floats. + but *not* with int, floats or tuples. **constants** @@ -209,9 +209,12 @@ have this restriction, so if you need mutable global state, store it in the attributes of some prebuilt singleton instance. + + **control structures** - all allowed but yield, ``for`` loops restricted to builtin types + all allowed, ``for`` loops restricted to builtin types, generators + very restricted. **range** @@ -226,7 +229,8 @@ **generators** - generators are not supported. + generators are supported, but their exact scope is very limited. You can't + merge two different generators at one control point. **exceptions** @@ -245,22 +249,27 @@ **strings** - a lot of, but not all string methods are supported. Indexes can be + a lot of, but not all string methods are supported and those that are + supported do not necessarily accept all arguments. Indexes can be negative. In case they are not, then you get slightly more efficient code if the translator can prove that they are non-negative. When slicing a string it is necessary to prove that the slice start and - stop indexes are non-negative. + stop indexes are non-negative. There is no implicit str-to-unicode cast + anywhere. **tuples** no variable-length tuples; use them to store or return pairs or n-tuples of - values. Each combination of types for elements and length constitute a separate - and not mixable type. + values. Each combination of types for elements and length constitutes + a separate and not mixable type.
**lists** lists are used as an allocated array. Lists are over-allocated, so list.append() - is reasonably fast. Negative or out-of-bound indexes are only allowed for the + is reasonably fast. However, if you use a fixed-size list, the code + is more efficient. The annotator can figure out most of the time that your + list is fixed-size, even when you use a list comprehension. + Negative or out-of-bound indexes are only allowed for the most common operations, as follows: - *indexing*: @@ -287,16 +296,14 @@ **dicts** - dicts with a unique key type only, provided it is hashable. - String keys have been the only allowed key types for a while, but this was generalized. - After some re-optimization, - the implementation could safely decide that all string dict keys should be interned. + dicts with a unique key type only, provided it is hashable. Custom + hash functions and custom equality will not be honored. + Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions. **list comprehensions** - may be used to create allocated, initialized arrays. - After list over-allocation was introduced, there is no longer any restriction. + May be used to create allocated, initialized arrays. **functions** @@ -334,9 +341,8 @@ **objects** - in PyPy, wrapped objects are borrowed from the object space. Just like - in CPython, code that needs e.g. a dictionary can use a wrapped dict - and the object space operations on it. + Normal rules apply. Special methods are not honored, except ``__init__`` and + ``__del__``. This layout makes the number of types to take care about quite limited. diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -197,3 +197,10 @@ # Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'http://docs.python.org/': None} +# -- Options for manpage output------------------------------------------------- + +man_pages = [ + ('man/pypy.1', 'pypy', + u'fast, compliant alternative implementation of the Python language', + u'The PyPy Project', 1) +] diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst --- a/pypy/doc/extradoc.rst +++ b/pypy/doc/extradoc.rst @@ -8,6 +8,9 @@ *Articles about PyPy published so far, most recent first:* (bibtex_ file) +* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_, + C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo + * `Allocation Removal by Partial Evaluation in a Tracing JIT`_, C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo @@ -50,6 +53,9 @@ *Other research using PyPy (as far as we know it):* +* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_, + N. Riley and C. Zilles + * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_, C. Bruni and T. Verwaest @@ -65,6 +71,7 @@ .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib +.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf .. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf @@ -74,6 +81,7 @@ .. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf .. 
_`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07 .. _`EU Reports`: index-report.html +.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz .. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7 diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/man/pypy.1.rst @@ -0,0 +1,90 @@ +====== + pypy +====== + +SYNOPSIS +======== + +``pypy`` [*options*] +[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ] +[*arg*\ ...] + +OPTIONS +======= + +-i + Inspect interactively after running script. + +-O + Dummy optimization flag for compatibility with C Python. + +-c *cmd* + Program passed in as CMD (terminates option list). + +-S + Do not ``import site`` on initialization. + +-u + Unbuffered binary ``stdout`` and ``stderr``. + +-h, --help + Show a help message and exit. + +-m *mod* + Library module to be run as a script (terminates option list). + +-W *arg* + Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*). + +-E + Ignore environment variables (such as ``PYTHONPATH``). + +--version + Print the PyPy version. + +--info + Print translation information about this PyPy executable. + +--jit *arg* + Low level JIT parameters. Format is + *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...] + + ``off`` + Disable the JIT. + + ``threshold=``\ *value* + Number of times a loop has to run for it to become hot. 
+ + ``function_threshold=``\ *value* + Number of times a function must run for it to become traced from + start. + + ``inlining=``\ *value* + Inline python functions or not (``1``/``0``). + + ``loop_longevity=``\ *value* + A parameter controlling how long loops will be kept before being + freed, an estimate. + + ``max_retrace_guards=``\ *value* + Number of extra guards a retrace can cause. + + ``retrace_limit=``\ *value* + How many times we can try retracing before giving up. + + ``trace_eagerness=``\ *value* + Number of times a guard has to fail before we start compiling a + bridge. + + ``trace_limit=``\ *value* + Number of recorded operations before we abort tracing with + ``ABORT_TRACE_TOO_LONG``. + + ``enable_opts=``\ *value* + Optimizations to enable, or ``all``. + Warning: this option is dangerous and should be avoided. + +SEE ALSO +======== + +**python**\ (1) diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py deleted file mode 100644 --- a/pypy/doc/tool/makecontributor.py +++ /dev/null @@ -1,47 +0,0 @@ -""" - -generates a contributor list - -""" -import py - -# this file is useless, use the following commandline instead: -# hg churn -c -t "{author}" | sed -e 's/ <.*//' - -try: - path = py.std.sys.argv[1] -except IndexError: - print "usage: %s ROOTPATH" %(py.std.sys.argv[0]) - raise SystemExit, 1 - -d = {} - -for logentry in py.path.svnwc(path).log(): - a = logentry.author - if a in d: - d[a] += 1 - else: - d[a] = 1 - -items = d.items() -items.sort(lambda x,y: -cmp(x[1], y[1])) - -import uconf # http://codespeak.net/svn/uconf/dist/uconf - -# Authors that don't want to be listed -excluded = set("anna gintas ignas".split()) -cutoff = 5 # cutoff for authors in the LICENSE file -mark = False -for author, count in items: - if author in excluded: - continue - user = uconf.system.User(author) - try: - realname = user.realname.strip() - except KeyError: - realname = author - if not mark and count < cutoff: - mark = True - print '-'*60 -
print " ", realname - #print count, " ", author diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - 
msg = "%s() got an unexpected keyword argument '%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1591,12 +1591,15 @@ 'ArithmeticError', 'AssertionError', 'AttributeError', + 'BaseException', + 'DeprecationWarning', 'EOFError', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', + 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', @@ -1617,7 +1620,10 @@ 'TabError', 'TypeError', 'UnboundLocalError', + 'UnicodeDecodeError', 'UnicodeError', + 'UnicodeEncodeError', + 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', 'UnicodeEncodeError', diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py --- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -2,7 +2,6 @@ This module defines the abstract base classes that support execution: Code and Frame. """ -from pypy.rlib import jit from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import Wrappable @@ -98,7 +97,6 @@ "Abstract. Get the expected number of locals." 
raise TypeError, "abstract" - @jit.dont_look_inside def fast2locals(self): # Copy values from the fastlocals to self.w_locals if self.w_locals is None: @@ -112,7 +110,6 @@ w_name = self.space.wrap(name) self.space.setitem(self.w_locals, w_name, w_value) - @jit.dont_look_inside def locals2fast(self): # Copy values from self.w_locals to the fastlocals assert self.w_locals is not None diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -619,7 +619,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -655,7 +656,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -674,7 +676,8 @@ self.descr_reqcls, args.prepend(w_obj)) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -690,7 +693,8 @@ raise OperationError(space.w_SystemError, space.wrap("unexpected DescrMismatch error")) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -708,7 +712,8 @@ self.descr_reqcls, Arguments(space, [w1])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -726,7 +731,8 @@ self.descr_reqcls, Arguments(space, [w1, w2])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -744,7 +750,8 @@ self.descr_reqcls, 
Arguments(space, [w1, w2, w3])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -763,7 +770,8 @@ Arguments(space, [w1, w2, w3, w4])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -162,7 +162,8 @@ # generate 2 versions of the function and 2 jit drivers. def _create_unpack_into(): jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results']) + reds=['self', 'frame', 'results'], + name='unpack_into') def unpack_into(self, results): """This is a hack for performance: runs the generator and collects all produced items in a list.""" @@ -196,4 +197,4 @@ self.frame = None return unpack_into unpack_into = _create_unpack_into() - unpack_into_w = _create_unpack_into() \ No newline at end of file + unpack_into_w = _create_unpack_into() diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise 
FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = 
err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): diff --git a/pypy/jit/backend/llsupport/test/test_runner.py b/pypy/jit/backend/llsupport/test/test_runner.py --- a/pypy/jit/backend/llsupport/test/test_runner.py +++ 
b/pypy/jit/backend/llsupport/test/test_runner.py @@ -8,6 +8,12 @@ class MyLLCPU(AbstractLLCPU): supports_floats = True + + class assembler(object): + @staticmethod + def set_debug(flag): + pass + def compile_loop(self, inputargs, operations, looptoken): py.test.skip("llsupport test: cannot compile operations") diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -17,6 +17,7 @@ from pypy.rpython.llinterp import LLException from pypy.jit.codewriter import heaptracker, longlong from pypy.rlib.rarithmetic import intmask +from pypy.jit.backend.detect_cpu import autodetect_main_model_and_size def boxfloat(x): return BoxFloat(longlong.getfloatstorage(x)) @@ -27,6 +28,9 @@ class Runner(object): + add_loop_instruction = ['overload for a specific cpu'] + bridge_loop_instruction = ['overload for a specific cpu'] + def execute_operation(self, opname, valueboxes, result_type, descr=None): inputargs, operations = self._get_single_operation_list(opname, result_type, @@ -2974,6 +2978,56 @@ res = self.cpu.get_latest_value_int(0) assert res == -10 + def test_compile_asmlen(self): + from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU + if not isinstance(self.cpu, AbstractLLCPU): + py.test.skip("pointless test on non-asm") + from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + import ctypes + ops = """ + [i2] + i0 = same_as(i2) # but forced to be in a register + label(i0, descr=1) + i1 = int_add(i0, i0) + guard_true(i1, descr=faildesr) [i1] + jump(i1, descr=1) + """ + faildescr = BasicFailDescr(2) + loop = parse(ops, self.cpu, namespace=locals()) + faildescr = loop.operations[-2].getdescr() + jumpdescr = loop.operations[-1].getdescr() + bridge_ops = """ + [i0] + jump(i0, descr=jumpdescr) + """ + bridge = parse(bridge_ops, self.cpu, namespace=locals()) + looptoken = JitCellToken() + self.cpu.assembler.set_debug(False) + info = 
self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs, + bridge.operations, + looptoken) + self.cpu.assembler.set_debug(True) # always on untranslated + assert info.asmlen != 0 + cpuname = autodetect_main_model_and_size() + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + + def checkops(mc, ops): + assert len(mc) == len(ops) + for i in range(len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i]) + + data = ctypes.string_at(info.asmaddr, info.asmlen) + mc = list(machine_code_dump(data, info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.add_loop_instructions) + data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) + mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.bridge_loop_instructions) + + def test_compile_bridge_with_target(self): # This test creates a loopy piece of code in a bridge, and builds another # unrelated loop that ends in a jump directly to this loopy bit of code. 
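The `test_compile_asmlen` test above verifies the compiled loop by disassembling the emitted machine code and comparing each instruction's mnemonic against an expected list (`add_loop_instructions` / `bridge_loop_instructions`). The prefix-matching idea can be sketched in plain Python; note that the objdump-style lines below are made-up stand-ins, not real disassembler output from a compiled trace:

```python
# Sketch of the mnemonic-prefix check used by test_compile_asmlen.
# The disassembly lines and expected mnemonics here are illustrative
# stand-ins, not actual objdump output.

def checkops(mc, ops):
    """Each disassembled line must start with the expected mnemonic."""
    assert len(mc) == len(ops)
    for line, expected in zip(mc, ops):
        # objdump-style lines look like "addr:\t<bytes>\t<mnemonic> <operands>";
        # the mnemonic is the first token after the last tab.
        assert line.split("\t")[-1].startswith(expected)

# Hypothetical disassembly of a tiny add loop (x86-64 flavour):
lines = [
    "0x10:\t48 89 c8\tmov    %rcx,%rax",
    "0x13:\t48 01 c0\tadd    %rax,%rax",
    "0x16:\t48 85 c0\ttest   %rax,%rax",
    "0x19:\t74 05\tje     0x20",
    "0x1b:\teb f6\tjmp    0x13",
]
checkops(lines, ['mov', 'add', 'test', 'je', 'jmp'])
```

Matching only the mnemonic prefix, rather than full operand text, keeps the test stable across addresses and register-allocation details while still pinning down the instruction sequence.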
diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -7,6 +7,7 @@ from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper +from pypy.rlib.jit import AsmInfo from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, gpr_reg_mgr_cls, _valid_addressing_size) @@ -39,6 +40,7 @@ from pypy.jit.codewriter.effectinfo import EffectInfo from pypy.jit.codewriter import longlong from pypy.rlib.rarithmetic import intmask +from pypy.rlib.objectmodel import compute_unique_id # darwin requires the stack to be 16 bytes aligned on calls. Same for gcc 4.5.0, # better safe than sorry @@ -58,7 +60,8 @@ self.is_guard_not_invalidated = is_guard_not_invalidated DEBUG_COUNTER = lltype.Struct('DEBUG_COUNTER', ('i', lltype.Signed), - ('bridge', lltype.Signed), # 0 or 1 + ('type', lltype.Char), # 'b'ridge, 'l'abel or + # 'e'ntry point ('number', lltype.Signed)) class Assembler386(object): @@ -147,12 +150,15 @@ def finish_once(self): if self._debug: debug_start('jit-backend-counts') - for struct in self.loop_run_counters: - if struct.bridge: - prefix = 'bridge ' + for i in range(len(self.loop_run_counters)): + struct = self.loop_run_counters[i] + if struct.type == 'l': + prefix = 'TargetToken(%d)' % struct.number + elif struct.type == 'b': + prefix = 'bridge ' + str(struct.number) else: - prefix = 'loop ' - debug_print(prefix + str(struct.number) + ':' + str(struct.i)) + prefix = 'entry ' + str(struct.number) + debug_print(prefix + ':' + str(struct.i)) debug_stop('jit-backend-counts') def _build_float_constants(self): @@ -406,6 +412,7 @@ '''adds the following attributes to looptoken: _x86_function_addr (address of the generated func, as an int) _x86_loop_code (debug: addr of the start of the ResOps) + 
_x86_fullsize (debug: full size including failure) _x86_debug_checksum ''' # XXX this function is too longish and contains some code @@ -422,8 +429,8 @@ self.setup(looptoken) if log: - self._register_counter(False, looptoken.number) - operations = self._inject_debugging_code(looptoken, operations) + operations = self._inject_debugging_code(looptoken, operations, + 'e', looptoken.number) regalloc = RegAlloc(self, self.cpu.translate_support_code) # @@ -471,7 +478,8 @@ name = "Loop # %s: %s" % (looptoken.number, loopname) self.cpu.profile_agent.native_code_written(name, rawstart, full_size) - return ops_offset + return AsmInfo(ops_offset, rawstart + looppos, + size_excluding_failure_stuff - looppos) def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): @@ -480,17 +488,12 @@ assert len(set(inputargs)) == len(inputargs) descr_number = self.cpu.get_fail_descr_number(faildescr) - try: - failure_recovery = self._find_failure_recovery_bytecode(faildescr) - except ValueError: - debug_print("Bridge out of guard", descr_number, - "was already compiled!") - return + failure_recovery = self._find_failure_recovery_bytecode(faildescr) self.setup(original_loop_token) if log: - self._register_counter(True, descr_number) - operations = self._inject_debugging_code(faildescr, operations) + operations = self._inject_debugging_code(faildescr, operations, + 'b', descr_number) arglocs = self.rebuild_faillocs_from_descr(failure_recovery) if not we_are_translated(): @@ -498,6 +501,7 @@ [loc.assembler() for loc in faildescr._x86_debug_faillocs]) regalloc = RegAlloc(self, self.cpu.translate_support_code) fail_depths = faildescr._x86_current_depths + startpos = self.mc.get_relative_pos() operations = regalloc.prepare_bridge(fail_depths, inputargs, arglocs, operations, self.current_clt.allgcrefs) @@ -532,7 +536,7 @@ name = "Bridge # %s" % (descr_number,) self.cpu.profile_agent.native_code_written(name, rawstart, fullsize) - return ops_offset + return 
AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def write_pending_failure_recoveries(self): # for each pending guard, generate the code of the recovery stub @@ -597,22 +601,29 @@ return self.mc.materialize(self.cpu.asmmemmgr, allblocks, self.cpu.gc_ll_descr.gcrootmap) - def _register_counter(self, bridge, number): - if self._debug: - # YYY very minor leak -- we need the counters to stay alive - # forever, just because we want to report them at the end - # of the process - struct = lltype.malloc(DEBUG_COUNTER, flavor='raw', - track_allocation=False) - struct.i = 0 - struct.bridge = int(bridge) + def _register_counter(self, tp, number, token): + # YYY very minor leak -- we need the counters to stay alive + # forever, just because we want to report them at the end + # of the process + struct = lltype.malloc(DEBUG_COUNTER, flavor='raw', + track_allocation=False) + struct.i = 0 + struct.type = tp + if tp == 'b' or tp == 'e': struct.number = number - self.loop_run_counters.append(struct) + else: + assert token + struct.number = compute_unique_id(token) + self.loop_run_counters.append(struct) + return struct def _find_failure_recovery_bytecode(self, faildescr): adr_jump_offset = faildescr._x86_adr_jump_offset if adr_jump_offset == 0: - raise ValueError + # This case should be prevented by the logic in compile.py: + # look for CNT_BUSY_FLAG, which disables tracing from a guard + # when another tracing from the same guard is already in progress. 
+ raise BridgeAlreadyCompiled # follow the JMP/Jcond p = rffi.cast(rffi.INTP, adr_jump_offset) adr_target = adr_jump_offset + 4 + rffi.cast(lltype.Signed, p[0]) @@ -651,27 +662,36 @@ targettoken._x86_loop_code += rawstart self.target_tokens_currently_compiling = None + def _append_debugging_code(self, operations, tp, number, token): + counter = self._register_counter(tp, number, token) + c_adr = ConstInt(rffi.cast(lltype.Signed, counter)) + box = BoxInt() + box2 = BoxInt() + ops = [ResOperation(rop.GETFIELD_RAW, [c_adr], + box, descr=self.debug_counter_descr), + ResOperation(rop.INT_ADD, [box, ConstInt(1)], box2), + ResOperation(rop.SETFIELD_RAW, [c_adr, box2], + None, descr=self.debug_counter_descr)] + operations.extend(ops) + @specialize.argtype(1) - def _inject_debugging_code(self, looptoken, operations): + def _inject_debugging_code(self, looptoken, operations, tp, number): if self._debug: # before doing anything, let's increase a counter s = 0 for op in operations: s += op.getopnum() looptoken._x86_debug_checksum = s - c_adr = ConstInt(rffi.cast(lltype.Signed, - self.loop_run_counters[-1])) - box = BoxInt() - box2 = BoxInt() - ops = [ResOperation(rop.GETFIELD_RAW, [c_adr], - box, descr=self.debug_counter_descr), - ResOperation(rop.INT_ADD, [box, ConstInt(1)], box2), - ResOperation(rop.SETFIELD_RAW, [c_adr, box2], - None, descr=self.debug_counter_descr)] - if operations[0].getopnum() == rop.LABEL: - operations = [operations[0]] + ops + operations[1:] - else: - operations = ops + operations + + newoperations = [] + self._append_debugging_code(newoperations, tp, number, + None) + for op in operations: + newoperations.append(op) + if op.getopnum() == rop.LABEL: + self._append_debugging_code(newoperations, 'l', number, + op.getdescr()) + operations = newoperations return operations def _assemble(self, regalloc, operations): @@ -792,7 +812,10 @@ target = newlooptoken._x86_function_addr mc = codebuf.MachineCodeBlockWrapper() mc.JMP(imm(target)) - assert 
mc.get_relative_pos() <= 13 # keep in sync with prepare_loop() + if WORD == 4: # keep in sync with prepare_loop() + assert mc.get_relative_pos() == 5 + else: + assert mc.get_relative_pos() <= 13 mc.copy_to_raw_memory(oldadr) def dump(self, text): @@ -2532,3 +2555,6 @@ def not_implemented(msg): os.write(2, '[x86/asm] %s\n' % msg) raise NotImplementedError(msg) + +class BridgeAlreadyCompiled(Exception): + pass diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -188,7 +188,10 @@ # note: we need to make a copy of inputargs because possibly_free_vars # is also used on op args, which is a non-resizable list self.possibly_free_vars(list(inputargs)) - self.min_bytes_before_label = 13 + if WORD == 4: # see redirect_call_assembler() + self.min_bytes_before_label = 5 + else: + self.min_bytes_before_label = 13 return operations def prepare_bridge(self, prev_depths, inputargs, arglocs, operations, @@ -741,7 +744,7 @@ self.xrm.possibly_free_var(op.getarg(0)) def consider_cast_int_to_float(self, op): - loc0 = self.rm.loc(op.getarg(0)) + loc0 = self.rm.make_sure_var_in_reg(op.getarg(0)) loc1 = self.xrm.force_allocate_reg(op.result) self.Perform(op, [loc0], loc1) self.rm.possibly_free_var(op.getarg(0)) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -33,6 +33,13 @@ # for the individual tests see # ====> ../../test/runner_test.py + add_loop_instructions = ['mov', 'add', 'test', 'je', 'jmp'] + if WORD == 4: + bridge_loop_instructions = ['lea', 'jmp'] + else: + # the 'mov' is part of the 'jmp' so far + bridge_loop_instructions = ['lea', 'mov', 'jmp'] + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -416,12 +423,13 @@ ] inputargs = [i0] debug._log = dlog = debug.DebugLog() - 
ops_offset = self.cpu.compile_loop(inputargs, operations, looptoken) + info = self.cpu.compile_loop(inputargs, operations, looptoken) + ops_offset = info.ops_offset debug._log = None # assert ops_offset is looptoken._x86_ops_offset - # getfield_raw/int_add/setfield_raw + ops + None - assert len(ops_offset) == 3 + len(operations) + 1 + # 2*(getfield_raw/int_add/setfield_raw) + ops + None + assert len(ops_offset) == 2*3 + len(operations) + 1 assert (ops_offset[operations[0]] <= ops_offset[operations[1]] <= ops_offset[operations[2]] <= @@ -519,6 +527,7 @@ from pypy.tool.logparser import parse_log_file, extract_category from pypy.rlib import debug + targettoken, preambletoken = TargetToken(), TargetToken() loop = """ [i0] label(i0, descr=preambletoken) @@ -533,8 +542,8 @@ guard_false(i12) [] jump(i11, descr=targettoken) """ - ops = parse(loop, namespace={'targettoken': TargetToken(), - 'preambletoken': TargetToken()}) + ops = parse(loop, namespace={'targettoken': targettoken, + 'preambletoken': preambletoken}) debug._log = dlog = debug.DebugLog() try: self.cpu.assembler.set_debug(True) @@ -545,11 +554,16 @@ struct = self.cpu.assembler.loop_run_counters[0] assert struct.i == 1 struct = self.cpu.assembler.loop_run_counters[1] - assert struct.i == 10 + assert struct.i == 1 + struct = self.cpu.assembler.loop_run_counters[2] + assert struct.i == 9 self.cpu.finish_once() finally: debug._log = None - assert ('jit-backend-counts', [('debug_print', 'loop -1:10')]) in dlog + l0 = ('debug_print', 'entry -1:1') + l1 = ('debug_print', preambletoken.repr_of_descr() + ':1') + l2 = ('debug_print', targettoken.repr_of_descr() + ':9') + assert ('jit-backend-counts', [l0, l1, l2]) in dlog def test_debugger_checksum(self): loop = """ diff --git a/pypy/jit/backend/x86/test/test_zrpy_platform.py b/pypy/jit/backend/x86/test/test_zrpy_platform.py --- a/pypy/jit/backend/x86/test/test_zrpy_platform.py +++ b/pypy/jit/backend/x86/test/test_zrpy_platform.py @@ -74,8 +74,8 @@ myjitdriver = 
jit.JitDriver(greens = [], reds = ['n']) def entrypoint(argv): - myjitdriver.set_param('threshold', 2) - myjitdriver.set_param('trace_eagerness', 0) + jit.set_param(myjitdriver, 'threshold', 2) + jit.set_param(myjitdriver, 'trace_eagerness', 0) n = 16 while n > 0: myjitdriver.can_enter_jit(n=n) diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -39,6 +39,7 @@ def machine_code_dump(data, originaddr, backend_name, label_list=None): objdump_backend_option = { 'x86': 'i386', + 'x86_32': 'i386', 'x86_64': 'x86-64', 'i386': 'i386', } diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py --- a/pypy/jit/codewriter/call.py +++ b/pypy/jit/codewriter/call.py @@ -42,8 +42,7 @@ except AttributeError: pass - def is_candidate(graph): - return policy.look_inside_graph(graph) + is_candidate = policy.look_inside_graph assert len(self.jitdrivers_sd) > 0 todo = [jd.portal_graph for jd in self.jitdrivers_sd] diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -8,11 +8,15 @@ class JitPolicy(object): - def __init__(self): + def __init__(self, jithookiface=None): self.unsafe_loopy_graphs = set() self.supports_floats = False self.supports_longlong = False self.supports_singlefloats = False + if jithookiface is None: + from pypy.rlib.jit import JitHookInterface + jithookiface = JitHookInterface() + self.jithookiface = jithookiface def set_supports_floats(self, flag): self.supports_floats = flag diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -162,7 +162,6 @@ _ll_4_list_setslice = rlist.ll_listsetslice _ll_2_list_delslice_startonly = rlist.ll_listdelslice_startonly _ll_3_list_delslice_startstop = rlist.ll_listdelslice_startstop 
-_ll_1_list_list2fixed = lltypesystem_rlist.ll_list2fixed _ll_2_list_inplace_mul = rlist.ll_inplace_mul _ll_2_list_getitem_foldable = _ll_2_list_getitem diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -5,6 +5,7 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.debug import debug_start, debug_stop, debug_print from pypy.rlib import rstack +from pypy.rlib.jit import JitDebugInfo from pypy.conftest import option from pypy.tool.sourcetools import func_with_new_name @@ -75,7 +76,7 @@ if descr is not original_jitcell_token: original_jitcell_token.record_jump_to(descr) descr.exported_state = None - op._descr = None # clear reference, mostly for tests + op.cleardescr() # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential jump. # (the following test is not enough to prevent more complicated @@ -90,8 +91,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op._descr = None # clear reference to prevent the history.Stats - # from keeping the loop alive during tests + op.cleardescr() # clear reference to prevent the history.Stats + # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -112,33 +113,26 @@ """ from pypy.jit.metainterp.optimizeopt import optimize_trace - history = metainterp.history metainterp_sd = metainterp.staticdata jitdriver_sd = metainterp.jitdriver_sd + history = metainterp.history - if False: - part = partial_trace - assert False - procedur_token = metainterp.get_procedure_token(greenkey) - assert procedure_token - all_target_tokens = [] - else: - jitcell_token = make_jitcell_token(jitdriver_sd) - part = create_empty_loop(metainterp) - part.inputargs = inputargs[:] - h_ops = 
history.operations - part.resume_at_jump_descr = resume_at_jump_descr - part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(jitcell_token))] + \ - [h_ops[i].clone() for i in range(start, len(h_ops))] + \ - [ResOperation(rop.LABEL, jumpargs, None, descr=jitcell_token)] + jitcell_token = make_jitcell_token(jitdriver_sd) + part = create_empty_loop(metainterp) + part.inputargs = inputargs[:] + h_ops = history.operations + part.resume_at_jump_descr = resume_at_jump_descr + part.operations = [ResOperation(rop.LABEL, inputargs, None, descr=TargetToken(jitcell_token))] + \ + [h_ops[i].clone() for i in range(start, len(h_ops))] + \ + [ResOperation(rop.LABEL, jumpargs, None, descr=jitcell_token)] - try: - optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) - except InvalidLoop: - return None - target_token = part.operations[0].getdescr() - assert isinstance(target_token, TargetToken) - all_target_tokens = [target_token] + try: + optimize_trace(metainterp_sd, part, jitdriver_sd.warmstate.enable_opts) + except InvalidLoop: + return None + target_token = part.operations[0].getdescr() + assert isinstance(target_token, TargetToken) + all_target_tokens = [target_token] loop = create_empty_loop(metainterp) loop.inputargs = part.inputargs @@ -176,10 +170,10 @@ loop.original_jitcell_token = jitcell_token for label in all_target_tokens: assert isinstance(label, TargetToken) - label.original_jitcell_token = jitcell_token if label.virtual_state and label.short_preamble: metainterp_sd.logger_ops.log_short_preamble([], label.short_preamble) jitcell_token.target_tokens = all_target_tokens + propagate_original_jitcell_token(loop) send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, "loop") record_loop_or_bridge(metainterp_sd, loop) return all_target_tokens[0] @@ -247,11 +241,11 @@ for box in loop.inputargs: assert isinstance(box, Box) - target_token = loop.operations[-1].getdescr() + target_token = loop.operations[-1].getdescr() 
resumekey.compile_and_attach(metainterp, loop) + target_token = label.getdescr() assert isinstance(target_token, TargetToken) - target_token.original_jitcell_token = loop.original_jitcell_token record_loop_or_bridge(metainterp_sd, loop) return target_token @@ -288,14 +282,21 @@ assert i == len(inputargs) loop.operations = extra_ops + loop.operations +def propagate_original_jitcell_token(trace): + for op in trace.operations: + if op.getopnum() == rop.LABEL: + token = op.getdescr() + assert isinstance(token, TargetToken) + assert token.original_jitcell_token is None + token.original_jitcell_token = trace.original_jitcell_token + + def send_loop_to_backend(greenkey, jitdriver_sd, metainterp_sd, loop, type): vinfo = jitdriver_sd.virtualizable_info if vinfo is not None: patch_new_loop_to_load_virtualizable_fields(loop, jitdriver_sd) original_jitcell_token = loop.original_jitcell_token - jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, - loop.operations, type, greenkey) loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata original_jitcell_token.number = n = globaldata.loopnumbering @@ -305,21 +306,41 @@ show_procedures(metainterp_sd, loop) loop.check_consistency() + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, + type, greenkey) + hooks.before_compile(debug_info) + else: + debug_info = None + hooks = None operations = get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, name=loopname) + asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, + original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: 
+ debug_info.asminfo = asminfo + hooks.after_compile(debug_info) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # - metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset) + loopname = jitdriver_sd.warmstate.get_location_str(greenkey) + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None + metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, + type, ops_offset, + name=loopname) # if metainterp_sd.warmrunnerdesc is not None: # for tests metainterp_sd.warmrunnerdesc.memory_manager.keep_loop_alive(original_jitcell_token) @@ -327,25 +348,40 @@ def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): n = metainterp_sd.cpu.get_fail_descr_number(faildescr) - jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, - original_loop_token, operations, n) if not we_are_translated(): show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_loop_token, operations, 'bridge', + fail_descr_no=n) + hooks.before_compile_bridge(debug_info) + else: + hooks = None + debug_info = None + operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() - operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, - original_loop_token) + asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile_bridge(debug_info) if 
not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") # + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset) # #if metainterp_sd.warmrunnerdesc is not None: # for tests @@ -558,6 +594,7 @@ inputargs = metainterp.history.inputargs if not we_are_translated(): self._debug_suboperations = new_loop.operations + propagate_original_jitcell_token(new_loop) send_bridge_to_backend(metainterp.jitdriver_sd, metainterp.staticdata, self, inputargs, new_loop.operations, new_loop.original_jitcell_token) @@ -744,6 +781,7 @@ jitdriver_sd = metainterp.jitdriver_sd redargs = new_loop.inputargs new_loop.original_jitcell_token = jitcell_token = make_jitcell_token(jitdriver_sd) + propagate_original_jitcell_token(new_loop) send_loop_to_backend(self.original_greenkey, metainterp.jitdriver_sd, metainterp_sd, new_loop, "entry bridge") # send the new_loop to warmspot.py, to be called directly the next time diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -79,9 +79,9 @@ opnum == rop.COPYSTRCONTENT or opnum == rop.COPYUNICODECONTENT): return - if rop._OVF_FIRST <= opnum <= rop._OVF_LAST: - return - if rop._NOSIDEEFFECT_FIRST <= opnum <= rop._NOSIDEEFFECT_LAST: + if (rop._OVF_FIRST <= opnum <= rop._OVF_LAST or + rop._NOSIDEEFFECT_FIRST <= opnum <= rop._NOSIDEEFFECT_LAST or + rop._GUARD_FIRST <= opnum <= rop._GUARD_LAST): return if opnum == rop.CALL or opnum == rop.CALL_LOOPINVARIANT: effectinfo = descr.get_extra_info() diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -1003,35 +1003,16 @@ return insns def check_simple_loop(self, expected=None, **check): - # Usefull in the simplest case when we have only one trace ending with - # a 
jump back to itself and possibly a few bridges ending with finnish. - # Only the operations within the loop formed by that single jump will - # be counted. - - # XXX hacked version, ignore and remove me when jit-targets is merged. - loops = self.get_all_loops() - loops = [loop for loop in loops if 'Preamble' not in repr(loop)] #XXX - assert len(loops) == 1 - loop, = loops - jumpop = loop.operations[-1] - assert jumpop.getopnum() == rop.JUMP - insns = {} - for op in loop.operations: - opname = op.getopname() - insns[opname] = insns.get(opname, 0) + 1 - return self._check_insns(insns, expected, check) - - def check_simple_loop(self, expected=None, **check): - # Usefull in the simplest case when we have only one trace ending with - # a jump back to itself and possibly a few bridges ending with finnish. - # Only the operations within the loop formed by that single jump will - # be counted. + """ Usefull in the simplest case when we have only one trace ending with + a jump back to itself and possibly a few bridges. + Only the operations within the loop formed by that single jump will + be counted. + """ loops = self.get_all_loops() assert len(loops) == 1 loop = loops[0] jumpop = loop.operations[-1] assert jumpop.getopnum() == rop.JUMP - assert self.check_resops(jump=1) labels = [op for op in loop.operations if op.getopnum() == rop.LABEL] targets = [op._descr_wref() for op in labels] assert None not in targets # TargetToken was freed, give up diff --git a/pypy/jit/metainterp/jitdriver.py b/pypy/jit/metainterp/jitdriver.py --- a/pypy/jit/metainterp/jitdriver.py +++ b/pypy/jit/metainterp/jitdriver.py @@ -21,7 +21,6 @@ # self.portal_finishtoken... pypy.jit.metainterp.pyjitpl # self.index ... pypy.jit.codewriter.call # self.mainjitcode ... pypy.jit.codewriter.call - # self.on_compile ... 
pypy.jit.metainterp.warmstate # These attributes are read by the backend in CALL_ASSEMBLER: # self.assembler_helper_adr diff --git a/pypy/jit/metainterp/jitprof.py b/pypy/jit/metainterp/jitprof.py --- a/pypy/jit/metainterp/jitprof.py +++ b/pypy/jit/metainterp/jitprof.py @@ -18,8 +18,8 @@ OPT_FORCINGS ABORT_TOO_LONG ABORT_BRIDGE +ABORT_BAD_LOOP ABORT_ESCAPE -ABORT_BAD_LOOP ABORT_FORCE_QUASIIMMUT NVIRTUALS NVHOLES @@ -30,10 +30,13 @@ TOTAL_FREED_BRIDGES """ +counter_names = [] + def _setup(): names = counters.split() for i, name in enumerate(names): globals()[name] = i + counter_names.append(name) global ncounters ncounters = len(names) _setup() diff --git a/pypy/jit/metainterp/logger.py b/pypy/jit/metainterp/logger.py --- a/pypy/jit/metainterp/logger.py +++ b/pypy/jit/metainterp/logger.py @@ -13,14 +13,14 @@ self.metainterp_sd = metainterp_sd self.guard_number = guard_number - def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None): + def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None, name=''): if type is None: debug_start("jit-log-noopt-loop") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-noopt-loop") else: debug_start("jit-log-opt-loop") - debug_print("# Loop", number, ":", type, + debug_print("# Loop", number, '(%s)' % name , ":", type, "with", len(operations), "ops") logops = self._log_operations(inputargs, operations, ops_offset) debug_stop("jit-log-opt-loop") diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -234,11 +234,11 @@ # longlongs are treated as floats, see # e.g. 
llsupport/descr.py:getDescrClass is_float = True - elif kind == 'u': + elif kind == 'u' or kind == 's': # they're all False pass else: - assert False, "unsupported ffitype or kind" + raise NotImplementedError("unsupported ffitype or kind: %s" % kind) # fieldsize = rffi.getintfield(ffitype, 'c_size') return self.optimizer.cpu.interiorfielddescrof_dynamic( diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -1,10 +1,13 @@ from __future__ import with_statement from pypy.jit.metainterp.optimizeopt.test.test_util import ( - LLtypeMixin, BaseTest, Storage, _sortboxes, FakeDescrWithSnapshot) + LLtypeMixin, BaseTest, Storage, _sortboxes, FakeDescrWithSnapshot, + FakeMetaInterpStaticData) from pypy.jit.metainterp.history import TreeLoop, JitCellToken, TargetToken from pypy.jit.metainterp.resoperation import rop, opname, ResOperation from pypy.jit.metainterp.optimize import InvalidLoop from py.test import raises +from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method class BaseTestMultiLabel(BaseTest): enable_opts = "intbounds:rewrite:virtualize:string:earlyforce:pure:heap:unroll" @@ -84,6 +87,8 @@ return optimized +class OptimizeoptTestMultiLabel(BaseTestMultiLabel): + def test_simple(self): ops = """ [i1] @@ -381,6 +386,82 @@ """ self.optimize_loop(ops, expected) -class TestLLtype(BaseTestMultiLabel, LLtypeMixin): + def test_virtual_as_field_of_forced_box(self): + ops = """ + [p0] + pv1 = new_with_vtable(ConstClass(node_vtable)) + label(pv1, p0) + pv2 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(pv2, pv1, descr=valuedescr) + jump(pv1, pv2) + """ + with raises(InvalidLoop): + self.optimize_loop(ops, ops) + +class OptRenameStrlen(Optimization): + def propagate_forward(self, op): + 
dispatch_opt(self, op) + + def optimize_STRLEN(self, op): + newop = op.clone() + newop.result = op.result.clonebox() + self.emit_operation(newop) + self.make_equal_to(op.result, self.getvalue(newop.result)) + +dispatch_opt = make_dispatcher_method(OptRenameStrlen, 'optimize_', + default=OptRenameStrlen.emit_operation) + +class BaseTestOptimizerRenamingBoxes(BaseTestMultiLabel): + + def _do_optimize_loop(self, loop, call_pure_results): + from pypy.jit.metainterp.optimizeopt.unroll import optimize_unroll + from pypy.jit.metainterp.optimizeopt.util import args_dict + from pypy.jit.metainterp.optimizeopt.pure import OptPure + + self.loop = loop + loop.call_pure_results = args_dict() + metainterp_sd = FakeMetaInterpStaticData(self.cpu) + optimize_unroll(metainterp_sd, loop, [OptRenameStrlen(), OptPure()], True) + + def test_optimizer_renaming_boxes(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + i2 = strlen(p1) + i3 = int_add(i2, 7) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + i2 = int_add(i11, 7) + jump(p1, i11) + """ + self.optimize_loop(ops, expected) + + def test_optimizer_renaming_boxes_not_imported(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + jump(p1, i11) + """ + self.optimize_loop(ops, expected) + + + +class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): pass +class TestOptimizerRenamingBoxesLLtype(BaseTestOptimizerRenamingBoxes, LLtypeMixin): + pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -117,7 +117,7 @@ def optimize_loop(self, ops, optops, call_pure_results=None): loop = self.parse(ops) - token = JitCellToken() + token = JitCellToken() loop.operations = 
[ResOperation(rop.LABEL, loop.inputargs, None, descr=TargetToken(token))] + \ loop.operations if loop.operations[-1].getopnum() == rop.JUMP: diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -7759,7 +7759,7 @@ jump(i0, p0, i2) """ self.optimize_loop(ops, expected) - + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -265,7 +265,16 @@ self.optimizer.importable_values[value] = imp newvalue = self.optimizer.getvalue(op.result) newresult = newvalue.get_key_box() - assert newresult is op.result or newvalue.is_constant() + # note that emitting here SAME_AS should not happen, but + # in case it does, we would prefer to be suboptimal in asm + # to a fatal RPython exception. 
+ if newresult is not op.result and not newvalue.is_constant(): + op = ResOperation(rop.SAME_AS, [op.result], newresult) + self.optimizer._newoperations.append(op) + if self.optimizer.loop.logops: + debug_print(' Falling back to add extra: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + self.optimizer.flush() self.optimizer.emitting_dissabled = False @@ -430,7 +439,13 @@ return for a in op.getarglist(): if not isinstance(a, Const) and a not in seen: - self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, seen) + self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, + seen) + + if self.optimizer.loop.logops: + debug_print(' Emitting short op: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + optimizer.send_extra_operation(op) seen[op.result] = True if op.is_ovf(): diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -409,7 +409,13 @@ if self.level == LEVEL_CONSTANT: return assert 0 <= self.position_in_notvirtuals - boxes[self.position_in_notvirtuals] = value.force_box(optimizer) + if optimizer: + box = value.force_box(optimizer) + else: + if value.is_virtual(): + raise BadVirtualState + box = value.get_key_box() + boxes[self.position_in_notvirtuals] = box def _enum(self, virtual_state): if self.level == LEVEL_CONSTANT: @@ -471,8 +477,14 @@ optimizer = optimizer.optearlyforce assert len(values) == len(self.state) inputargs = [None] * len(self.notvirtuals) + + # We try twice. The first time around we allow boxes to be forced + # which might change the virtual state if the box appear in more + # than one place among the inputargs. 
for i in range(len(values)): self.state[i].enum_forced_boxes(inputargs, values[i], optimizer) + for i in range(len(values)): + self.state[i].enum_forced_boxes(inputargs, values[i], None) if keyboxes: for i in range(len(values)): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -976,10 +976,13 @@ self.verify_green_args(jitdriver_sd, greenboxes) self.debug_merge_point(jitdriver_sd, jdindex, self.metainterp.in_recursion, greenboxes) - + if self.metainterp.seen_loop_header_for_jdindex < 0: - if not jitdriver_sd.no_loop_header or not any_operation: + if not any_operation: return + if self.metainterp.in_recursion or not self.metainterp.get_procedure_token(greenboxes, True): + if not jitdriver_sd.no_loop_header: + return # automatically add a loop_header if there is none self.metainterp.seen_loop_header_for_jdindex = jdindex # @@ -1550,6 +1553,7 @@ class MetaInterp(object): in_recursion = 0 + cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): self.staticdata = staticdata @@ -1790,6 +1794,15 @@ def aborted_tracing(self, reason): self.staticdata.profiler.count(reason) debug_print('~~~ ABORTING TRACING') + jd_sd = self.jitdriver_sd + if not self.current_merge_points: + greenkey = None # we're in the bridge + else: + greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args] + self.staticdata.warmrunnerdesc.hooks.on_abort(reason, + jd_sd.jitdriver, + greenkey, + jd_sd.warmstate.get_location_str(greenkey)) self.staticdata.stats.aborted() def blackhole_if_trace_too_long(self): @@ -1963,9 +1976,14 @@ raise SwitchToBlackhole(ABORT_BAD_LOOP) # For now self.compile_loop(original_boxes, live_arg_boxes, start, resumedescr) # creation of the loop was cancelled! 
+ self.cancel_count += 1 + if self.staticdata.warmrunnerdesc: + memmgr = self.staticdata.warmrunnerdesc.memory_manager + if memmgr: + if self.cancel_count > memmgr.max_unroll_loops: + self.staticdata.log('cancelled too many times!') + raise SwitchToBlackhole(ABORT_BAD_LOOP) self.staticdata.log('cancelled, tracing more...') - #self.staticdata.log('cancelled, stopping tracing') - #raise SwitchToBlackhole(ABORT_BAD_LOOP) # Otherwise, no loop found so far, so continue tracing. start = len(self.history.operations) @@ -2053,9 +2071,15 @@ from pypy.jit.metainterp.resoperation import opname raise NotImplementedError(opname[opnum]) - def get_procedure_token(self, greenkey): + def get_procedure_token(self, greenkey, with_compiled_targets=False): cell = self.jitdriver_sd.warmstate.jit_cell_at_key(greenkey) - return cell.get_procedure_token() + token = cell.get_procedure_token() + if with_compiled_targets: + if not token: + return None + if not token.target_tokens: + return None + return token def compile_loop(self, original_boxes, live_arg_boxes, start, resume_at_jump_descr): num_green_args = self.jitdriver_sd.num_green_args @@ -2088,11 +2112,9 @@ def compile_trace(self, live_arg_boxes, resume_at_jump_descr): num_green_args = self.jitdriver_sd.num_green_args greenkey = live_arg_boxes[:num_green_args] - target_jitcell_token = self.get_procedure_token(greenkey) + target_jitcell_token = self.get_procedure_token(greenkey, True) if not target_jitcell_token: return - if not target_jitcell_token.target_tokens: - return self.history.record(rop.JUMP, live_arg_boxes[num_green_args:], None, descr=target_jitcell_token) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -16,15 +16,15 @@ # debug name = "" pc = 0 + opnum = 0 + + _attrs_ = ('result',) def __init__(self, result): self.result = result - # methods implemented by each concrete class - # 
------------------------------------------ - def getopnum(self): - raise NotImplementedError + return self.opnum # methods implemented by the arity mixins # --------------------------------------- @@ -64,6 +64,9 @@ def setdescr(self, descr): raise NotImplementedError + def cleardescr(self): + pass + # common methods # -------------- @@ -196,6 +199,9 @@ self._check_descr(descr) self._descr = descr + def cleardescr(self): + self._descr = None + def _check_descr(self, descr): if not we_are_translated() and getattr(descr, 'I_am_a_descr', False): return # needed for the mock case in oparser_model @@ -590,12 +596,9 @@ baseclass = PlainResOp mixin = arity2mixin.get(arity, N_aryOp) - def getopnum(self): - return opnum - cls_name = '%s_OP' % name bases = (get_base_class(mixin, baseclass),) - dic = {'getopnum': getopnum} + dic = {'opnum': opnum} return type(cls_name, bases, dic) setup(__name__ == '__main__') # print out the table when run directly diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -56,8 +56,6 @@ greenfield_info = None result_type = result_kind portal_runner_ptr = "???" 
- on_compile = lambda *args: None - on_compile_bridge = lambda *args: None stats = history.Stats() cpu = CPUClass(rtyper, stats, None, False) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2629,6 +2629,38 @@ self.check_jitcell_token_count(1) self.check_target_token_count(5) + def test_max_unroll_loops(self): + from pypy.jit.metainterp.optimize import InvalidLoop + from pypy.jit.metainterp import optimizeopt + myjitdriver = JitDriver(greens = [], reds = ['n', 'i']) + # + def f(n, limit): + set_param(myjitdriver, 'threshold', 5) + set_param(myjitdriver, 'max_unroll_loops', limit) + i = 0 + while i < n: + myjitdriver.jit_merge_point(n=n, i=i) + print i + i += 1 + return i + # + def my_optimize_trace(*args, **kwds): + raise InvalidLoop + old_optimize_trace = optimizeopt.optimize_trace + optimizeopt.optimize_trace = my_optimize_trace + try: + res = self.meta_interp(f, [23, 4]) + assert res == 23 + self.check_trace_count(0) + self.check_aborted_count(3) + # + res = self.meta_interp(f, [23, 20]) + assert res == 23 + self.check_trace_count(0) + self.check_aborted_count(2) + finally: + optimizeopt.optimize_trace = old_optimize_trace + def test_retrace_limit_with_extra_guards(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) @@ -2697,7 +2729,7 @@ # bridge back to the preamble of the first loop is produced. A guard in # this bridge is later traced resulting in a failed attempt of retracing # the second loop. - self.check_trace_count(8) + self.check_trace_count(9) # FIXME: Add a gloabl retrace counter and test that we are not trying more than 5 times. 
diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -18,7 +18,7 @@ self.seen.append((inputargs, operations, token)) class FakeLogger(object): - def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None): + def log_loop(self, inputargs, operations, number=0, type=None, ops_offset=None, name=''): pass def repr_of_resop(self, op): @@ -53,8 +53,6 @@ call_pure_results = {} class jitdriver_sd: warmstate = FakeState() - on_compile = staticmethod(lambda *args: None) - on_compile_bridge = staticmethod(lambda *args: None) virtualizable_info = None def test_compile_loop(): diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -148,28 +148,38 @@ self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4, 'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2}) - def test_array_getitem_uint8(self): + def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE): + reds = ["n", "i", "s", "data"] + if COMPUTE_TYPE is lltype.Float: + # Move the float var to the back. 
+ reds.remove("s") + reds.append("s") myjitdriver = JitDriver( greens = [], - reds = ["n", "i", "s", "data"], + reds = reds, ) def f(data, n): - i = s = 0 + i = 0 + s = rffi.cast(COMPUTE_TYPE, 0) while i < n: myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data) - s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0)) + s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0)) i += 1 return s + def main(n): + with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data: + data[0] = rffi.cast(TYPE, 200) + return f(data, n) + assert self.meta_interp(main, [10]) == 2000 - def main(n): - with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data: - data[0] = rffi.cast(rffi.UCHAR, 200) - return f(data, n) - - assert self.meta_interp(main, [10]) == 2000 + def test_array_getitem_uint8(self): + self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed) self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2, 'guard_true': 2, 'int_add': 4}) + def test_array_getitem_float(self): + self._test_getitem_type(rffi.FLOAT, types.float, lltype.Float) + class TestFfiCall(FfiCallTests, LLJitMixin): supports_all = False diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py --- a/pypy/jit/metainterp/test/test_heapcache.py +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -255,6 +255,11 @@ assert h.getarrayitem(box1, descr1, index1) is box2 assert h.getarrayitem(box1, descr1, index2) is box4 + h.invalidate_caches(rop.GUARD_TRUE, None, []) + assert h.getfield(box1, descr1) is box2 + assert h.getarrayitem(box1, descr1, index1) is box2 + assert h.getarrayitem(box1, descr1, index2) is box4 + h.invalidate_caches( rop.CALL_LOOPINVARIANT, FakeCallDescr(FakeEffektinfo.EF_LOOPINVARIANT), []) diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ 
-10,57 +10,6 @@ def getloc2(g): return "in jitdriver2, with g=%d" % g -class JitDriverTests(object): - def test_on_compile(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = looptoken - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - i += 1 - - self.meta_interp(loop, [1, 4]) - assert sorted(called.keys()) == [(4, 1, "loop")] - self.meta_interp(loop, [2, 4]) - assert sorted(called.keys()) == [(4, 1, "loop"), - (4, 2, "loop")] - - def test_on_compile_bridge(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = loop - def on_compile_bridge(self, logger, orig_token, operations, n): - assert 'bridge' not in called - called['bridge'] = orig_token - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - if i >= 4: - i += 2 - i += 1 - - self.meta_interp(loop, [1, 10]) - assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] - - -class TestLLtypeSingle(JitDriverTests, LLJitMixin): - pass - class MultipleJitDriversTests(object): def test_simple(self): diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/test/test_jitiface.py @@ -0,0 +1,148 @@ + +from pypy.rlib.jit import JitDriver, JitHookInterface +from pypy.rlib import jit_hooks +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.jit.codewriter.policy import JitPolicy +from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT +from pypy.jit.metainterp.resoperation import rop +from pypy.rpython.annlowlevel import hlstr + +class 
TestJitHookInterface(LLJitMixin): + def test_abort_quasi_immut(self): + reasons = [] + + class MyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + assert jitdriver is myjitdriver + assert len(greenkey) == 1 + reasons.append(reason) + assert greenkey_repr == 'blah' + + iface = MyJitIface() + + myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'], + get_printable_location=lambda *args: 'blah') + + class Foo: + _immutable_fields_ = ['a?'] + def __init__(self, a): + self.a = a + def f(a, x): + foo = Foo(a) + total = 0 + while x > 0: + myjitdriver.jit_merge_point(foo=foo, x=x, total=total) + # read a quasi-immutable field out of a Constant + total += foo.a + foo.a += 1 + x -= 1 + return total + # + assert f(100, 7) == 721 + res = self.meta_interp(f, [100, 7], policy=JitPolicy(iface)) + assert res == 721 + assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 + + def test_on_compile(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append(("compile", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + def before_compile(self, di): + called.append(("optimize", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + #def before_optimize(self, jitdriver, logger, looptoken, oeprations, + # type, greenkey): + # called.append(("trace", greenkey[1].getint(), + # greenkey[0].getint(), type)) + + iface = MyJitIface() + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + i += 1 + + self.meta_interp(loop, [1, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop")] + self.meta_interp(loop, [2, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop"), + #("trace", 4, 2, "loop"), + 
("optimize", 4, 2, "loop"), + ("compile", 4, 2, "loop")] + + def test_on_compile_bridge(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append("compile") + + def after_compile_bridge(self, di): + called.append("compile_bridge") + + def before_compile_bridge(self, di): + called.append("before_compile_bridge") + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + if i >= 4: + i += 2 + i += 1 + + self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitIface())) + assert called == ["compile", "before_compile_bridge", "compile_bridge"] + + def test_resop_interface(self): + driver = JitDriver(greens = [], reds = ['i']) + + def loop(i): + while i > 0: + driver.jit_merge_point(i=i) + i -= 1 + + def main(): + loop(1) + op = jit_hooks.resop_new(rop.INT_ADD, + [jit_hooks.boxint_new(3), + jit_hooks.boxint_new(4)], + jit_hooks.boxint_new(1)) + assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add' + assert jit_hooks.resop_getopnum(op) == rop.INT_ADD + box = jit_hooks.resop_getarg(op, 0) + assert jit_hooks.box_getint(box) == 3 + box2 = jit_hooks.box_clone(box) + assert box2 != box + assert jit_hooks.box_getint(box2) == 3 + assert not jit_hooks.box_isconst(box2) + box3 = jit_hooks.box_constbox(box) + assert jit_hooks.box_getint(box) == 3 + assert jit_hooks.box_isconst(box3) + box4 = jit_hooks.box_nonconstbox(box) + assert not jit_hooks.box_isconst(box4) + box5 = jit_hooks.boxint_new(18) + jit_hooks.resop_setarg(op, 0, box5) + assert jit_hooks.resop_getarg(op, 0) == box5 + box6 = jit_hooks.resop_getresult(op) + assert jit_hooks.box_getint(box6) == 1 + jit_hooks.resop_setresult(op, box5) + assert jit_hooks.resop_getresult(op) == box5 + + self.meta_interp(main, []) diff --git a/pypy/jit/metainterp/test/test_logger.py b/pypy/jit/metainterp/test/test_logger.py --- 
a/pypy/jit/metainterp/test/test_logger.py +++ b/pypy/jit/metainterp/test/test_logger.py @@ -180,7 +180,7 @@ def test_intro_loop(self): bare_logger = logger.Logger(self.make_metainterp_sd()) output = capturing(bare_logger.log_loop, [], [], 1, "foo") - assert output.splitlines()[0] == "# Loop 1 : foo with 0 ops" + assert output.splitlines()[0] == "# Loop 1 () : foo with 0 ops" pure_parse(output) def test_intro_bridge(self): diff --git a/pypy/jit/metainterp/test/test_loop.py b/pypy/jit/metainterp/test/test_loop.py --- a/pypy/jit/metainterp/test/test_loop.py +++ b/pypy/jit/metainterp/test/test_loop.py @@ -756,7 +756,7 @@ res = self.meta_interp(interpret, [1]) assert res == interpret(1) # XXX it's unsure how many loops should be there - self.check_trace_count(3) + self.check_trace_count(2) def test_path_with_operations_not_from_start(self): jitdriver = JitDriver(greens = ['k'], reds = ['n', 'z']) diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py --- a/pypy/jit/metainterp/test/test_resoperation.py +++ b/pypy/jit/metainterp/test/test_resoperation.py @@ -30,17 +30,17 @@ cls = rop.opclasses[rop.rop.INT_ADD] assert issubclass(cls, rop.PlainResOp) assert issubclass(cls, rop.BinaryOp) - assert cls.getopnum.im_func(None) == rop.rop.INT_ADD + assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD cls = rop.opclasses[rop.rop.CALL] assert issubclass(cls, rop.ResOpWithDescr) assert issubclass(cls, rop.N_aryOp) - assert cls.getopnum.im_func(None) == rop.rop.CALL + assert cls.getopnum.im_func(cls) == rop.rop.CALL cls = rop.opclasses[rop.rop.GUARD_TRUE] assert issubclass(cls, rop.GuardResOp) assert issubclass(cls, rop.UnaryOp) - assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE + assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE def test_mixins_in_common_base(): INT_ADD = rop.opclasses[rop.rop.INT_ADD] diff --git a/pypy/jit/metainterp/test/test_virtual.py b/pypy/jit/metainterp/test/test_virtual.py --- 
a/pypy/jit/metainterp/test/test_virtual.py +++ b/pypy/jit/metainterp/test/test_virtual.py @@ -880,7 +880,7 @@ elif op == 'j': j = Int(0) elif op == '+': - sa += i.val * j.val + sa += (i.val + 2) * (j.val + 2) elif op == 'a': i = Int(i.val + 1) elif op == 'b': @@ -902,6 +902,7 @@ assert res == f(10) self.check_aborted_count(0) self.check_target_token_count(3) + self.check_resops(int_mul=2) def test_nested_loops_bridge(self): class Int(object): diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py --- a/pypy/jit/metainterp/test/test_virtualstate.py +++ b/pypy/jit/metainterp/test/test_virtualstate.py @@ -5,7 +5,7 @@ VArrayStateInfo, NotVirtualStateInfo, VirtualState, ShortBoxes from pypy.jit.metainterp.optimizeopt.optimizer import OptValue from pypy.jit.metainterp.history import BoxInt, BoxFloat, BoxPtr, ConstInt, ConstPtr -from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem import lltype, llmemory from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin, BaseTest, \ equaloplists, FakeDescrWithSnapshot from pypy.jit.metainterp.optimizeopt.intutils import IntBound @@ -82,6 +82,13 @@ assert isgeneral(value1, value2) assert not isgeneral(value2, value1) + assert isgeneral(OptValue(ConstInt(7)), OptValue(ConstInt(7))) + S = lltype.GcStruct('S') + foo = lltype.malloc(S) + fooref = lltype.cast_opaque_ptr(llmemory.GCREF, foo) + assert isgeneral(OptValue(ConstPtr(fooref)), + OptValue(ConstPtr(fooref))) + def test_field_matching_generalization(self): const1 = NotVirtualStateInfo(OptValue(ConstInt(1))) const2 = NotVirtualStateInfo(OptValue(ConstInt(2))) diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py --- a/pypy/jit/metainterp/test/test_ztranslation.py +++ b/pypy/jit/metainterp/test/test_ztranslation.py @@ -3,7 +3,9 @@ from pypy.jit.backend.llgraph import runner from pypy.rlib.jit import JitDriver, unroll_parameters, set_param 
from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint +from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_getopnum from pypy.jit.metainterp.jitprof import Profiler +from pypy.jit.metainterp.resoperation import rop from pypy.rpython.lltypesystem import lltype, llmemory class TranslationTest: @@ -22,6 +24,7 @@ # - jitdriver hooks # - two JITs # - string concatenation, slicing and comparison + # - jit hooks interface class Frame(object): _virtualizable2_ = ['l[*]'] @@ -91,7 +94,9 @@ return f.i # def main(i, j): - return f(i) - f2(i+j, i, j) + op = resop_new(rop.INT_ADD, [boxint_new(3), boxint_new(5)], + boxint_new(8)) + return f(i) - f2(i+j, i, j) + resop_getopnum(op) res = ll_meta_interp(main, [40, 5], CPUClass=self.CPUClass, type_system=self.type_system, listops=True) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -1,4 +1,5 @@ import sys, py +from pypy.tool.sourcetools import func_with_new_name from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.annlowlevel import llhelper, MixLevelHelperAnnotator,\ cast_base_ptr_to_instance, hlstr @@ -112,7 +113,7 @@ return ll_meta_interp(function, args, backendopt=backendopt, translate_support_code=True, **kwds) -def _find_jit_marker(graphs, marker_name): +def _find_jit_marker(graphs, marker_name, check_driver=True): results = [] for graph in graphs: for block in graph.iterblocks(): @@ -120,8 +121,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - (op.args[1].value is None or - op.args[1].value.active)): # the jitdriver + (not check_driver or op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results @@ -140,6 +141,9 @@ "found several jit_merge_points in the same graph") return results +def find_access_helpers(graphs): + return _find_jit_marker(graphs, 'access_helper', 
False) + def locate_jit_merge_point(graph): [(graph, block, pos)] = find_jit_merge_points([graph]) return block, pos, block.operations[pos] @@ -206,6 +210,7 @@ vrefinfo = VirtualRefInfo(self) self.codewriter.setup_vrefinfo(vrefinfo) # + self.hooks = policy.jithookiface self.make_virtualizable_infos() self.make_exception_classes() self.make_driverhook_graphs() @@ -213,6 +218,7 @@ self.rewrite_jit_merge_points(policy) verbose = False # not self.cpu.translate_support_code + self.rewrite_access_helpers() self.codewriter.make_jitcodes(verbose=verbose) self.rewrite_can_enter_jits() self.rewrite_set_param() @@ -619,6 +625,24 @@ graph = self.annhelper.getgraph(func, args_s, s_result) return self.annhelper.graph2delayed(graph, FUNC) + def rewrite_access_helpers(self): + ah = find_access_helpers(self.translator.graphs) + for graph, block, index in ah: + op = block.operations[index] + self.rewrite_access_helper(op) + + def rewrite_access_helper(self, op): + ARGS = [arg.concretetype for arg in op.args[2:]] + RESULT = op.result.concretetype + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT)) + # make sure we make a copy of function so it no longer belongs + # to extregistry + func = op.args[1].value + func = func_with_new_name(func, func.func_name + '_compiled') + ptr = self.helper_func(FUNCPTR, func) + op.opname = 'direct_call' + op.args = [Constant(ptr, FUNCPTR)] + op.args[2:] + def rewrite_jit_merge_points(self, policy): for jd in self.jitdrivers_sd: self.rewrite_jit_merge_point(jd, policy) diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -244,6 +244,11 @@ if self.warmrunnerdesc.memory_manager: self.warmrunnerdesc.memory_manager.max_retrace_guards = value + def set_param_max_unroll_loops(self, value): + if self.warmrunnerdesc: + if self.warmrunnerdesc.memory_manager: + self.warmrunnerdesc.memory_manager.max_unroll_loops = value + def 
disable_noninlinable_function(self, greenkey): cell = self.jit_cell_at_key(greenkey) cell.dont_trace_here = True @@ -596,20 +601,6 @@ return fn(*greenargs) self.should_unroll_one_iteration = should_unroll_one_iteration - if hasattr(jd.jitdriver, 'on_compile'): - def on_compile(logger, token, operations, type, greenkey): - greenargs = unwrap_greenkey(greenkey) - return jd.jitdriver.on_compile(logger, token, operations, type, - *greenargs) - def on_compile_bridge(logger, orig_token, operations, n): - return jd.jitdriver.on_compile_bridge(logger, orig_token, - operations, n) - jd.on_compile = on_compile - jd.on_compile_bridge = on_compile_bridge - else: - jd.on_compile = lambda *args: None - jd.on_compile_bridge = lambda *args: None - redargtypes = ''.join([kind[0] for kind in jd.red_args_types]) def get_assembler_token(greenkey): diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -89,11 +89,18 @@ assert typ == 'class' return self.model.ConstObj(ootype.cast_to_object(obj)) - def get_descr(self, poss_descr): + def get_descr(self, poss_descr, allow_invent): if poss_descr.startswith('<'): return None - else: + try: return self._consts[poss_descr] + except KeyError: + if allow_invent: + int(poss_descr) + token = self.model.JitCellToken() + tt = self.model.TargetToken(token) + self._consts[poss_descr] = tt + return tt def box_for_var(self, elem): try: @@ -186,7 +193,8 @@ poss_descr = allargs[-1].strip() if poss_descr.startswith('descr='): - descr = self.get_descr(poss_descr[len('descr='):]) + descr = self.get_descr(poss_descr[len('descr='):], + opname == 'label') allargs = allargs[:-1] for arg in allargs: arg = arg.strip() diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py +++ b/pypy/jit/tool/oparser_model.py @@ -6,7 +6,7 @@ from pypy.jit.metainterp.history import TreeLoop, JitCellToken from pypy.jit.metainterp.history import Box, 
BoxInt, BoxFloat from pypy.jit.metainterp.history import ConstInt, ConstObj, ConstPtr, ConstFloat - from pypy.jit.metainterp.history import BasicFailDescr + from pypy.jit.metainterp.history import BasicFailDescr, TargetToken from pypy.jit.metainterp.typesystem import llhelper from pypy.jit.metainterp.history import get_const_ptr_for_string @@ -42,6 +42,10 @@ class JitCellToken(object): I_am_a_descr = True + class TargetToken(object): + def __init__(self, jct): + pass + class BasicFailDescr(object): I_am_a_descr = True diff --git a/pypy/jit/tool/pypytrace.vim b/pypy/jit/tool/pypytrace.vim --- a/pypy/jit/tool/pypytrace.vim +++ b/pypy/jit/tool/pypytrace.vim @@ -19,6 +19,7 @@ syn match pypyLoopArgs '^[[].*' syn match pypyLoopStart '^#.*' syn match pypyDebugMergePoint '^debug_merge_point(.\+)' +syn match pypyLogBoundary '[[][0-9a-f]\+[]] \([{].\+\|.\+[}]\)$' hi def link pypyLoopStart Structure "hi def link pypyLoopArgs PreProc @@ -29,3 +30,4 @@ hi def link pypyNumber Number hi def link pypyDescr PreProc hi def link pypyDescrField Label +hi def link pypyLogBoundary Statement diff --git a/pypy/jit/tool/test/test_oparser.py b/pypy/jit/tool/test/test_oparser.py --- a/pypy/jit/tool/test/test_oparser.py +++ b/pypy/jit/tool/test/test_oparser.py @@ -4,7 +4,8 @@ from pypy.jit.tool.oparser import parse, OpParser from pypy.jit.metainterp.resoperation import rop -from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken +from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken,\ + TargetToken class BaseTestOparser(object): @@ -243,6 +244,16 @@ b = loop.getboxes() assert isinstance(b.sum0, BoxInt) + def test_label(self): + x = """ + [i0] + label(i0, descr=1) + jump(i0, descr=1) + """ + loop = self.parse(x) + assert loop.operations[0].getdescr() is loop.operations[1].getdescr() + assert isinstance(loop.operations[0].getdescr(), TargetToken) + class ForbiddenModule(object): def __init__(self, name, old_mod): diff --git 
a/pypy/module/_codecs/test/test_codecs.py b/pypy/module/_codecs/test/test_codecs.py --- a/pypy/module/_codecs/test/test_codecs.py +++ b/pypy/module/_codecs/test/test_codecs.py @@ -588,10 +588,18 @@ raises(UnicodeDecodeError, '+3ADYAA-'.decode, 'utf-7') def test_utf_16_encode_decode(self): - import codecs + import codecs, sys x = u'123abc' - assert codecs.getencoder('utf-16')(x) == ('\xff\xfe1\x002\x003\x00a\x00b\x00c\x00', 6) - assert codecs.getdecoder('utf-16')('\xff\xfe1\x002\x003\x00a\x00b\x00c\x00') == (x, 14) + if sys.byteorder == 'big': + assert codecs.getencoder('utf-16')(x) == ( + '\xfe\xff\x001\x002\x003\x00a\x00b\x00c', 6) + assert codecs.getdecoder('utf-16')( + '\xfe\xff\x001\x002\x003\x00a\x00b\x00c') == (x, 14) + else: + assert codecs.getencoder('utf-16')(x) == ( + '\xff\xfe1\x002\x003\x00a\x00b\x00c\x00', 6) + assert codecs.getdecoder('utf-16')( + '\xff\xfe1\x002\x003\x00a\x00b\x00c\x00') == (x, 14) def test_unicode_escape(self): assert u'\\'.encode('unicode-escape') == '\\\\' diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py --- a/pypy/module/_lsprof/interp_lsprof.py +++ b/pypy/module/_lsprof/interp_lsprof.py @@ -19,8 +19,9 @@ # cpu affinity settings srcdir = py.path.local(pypydir).join('translator', 'c', 'src') -eci = ExternalCompilationInfo(separate_module_files= - [srcdir.join('profiling.c')]) +eci = ExternalCompilationInfo( + separate_module_files=[srcdir.join('profiling.c')], + export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling']) c_setup_profiling = rffi.llexternal('pypy_setup_profiling', [], lltype.Void, diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -23,6 +23,7 @@ from pypy.interpreter.function import StaticMethod from pypy.objspace.std.sliceobject import W_SliceObject from pypy.module.__builtin__.descriptor import W_Property +from pypy.module.__builtin__.interp_memoryview import W_MemoryView from 
pypy.rlib.entrypoint import entrypoint from pypy.rlib.unroll import unrolling_iterable from pypy.rlib.objectmodel import specialize @@ -387,6 +388,8 @@ "Float": "space.w_float", "Long": "space.w_long", "Complex": "space.w_complex", + "ByteArray": "space.w_bytearray", + "MemoryView": "space.gettypeobject(W_MemoryView.typedef)", "BaseObject": "space.w_object", 'None': 'space.type(space.w_None)', 'NotImplemented': 'space.type(space.w_NotImplemented)', diff --git a/pypy/module/cpyext/buffer.py b/pypy/module/cpyext/buffer.py --- a/pypy/module/cpyext/buffer.py +++ b/pypy/module/cpyext/buffer.py @@ -1,6 +1,36 @@ +from pypy.interpreter.error import OperationError from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( cpython_api, CANNOT_FAIL, Py_buffer) +from pypy.module.cpyext.pyobject import PyObject + + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def PyObject_CheckBuffer(space, w_obj): + """Return 1 if obj supports the buffer interface otherwise 0.""" + return 0 # the bf_getbuffer field is never filled by cpyext + + at cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real], + rffi.INT_real, error=-1) +def PyObject_GetBuffer(space, w_obj, view, flags): + """Export obj into a Py_buffer, view. These arguments must + never be NULL. The flags argument is a bit field indicating what + kind of buffer the caller is prepared to deal with and therefore what + kind of buffer the exporter is allowed to return. The buffer interface + allows for complicated memory sharing possibilities, but some caller may + not be able to handle all the complexity but may want to see if the + exporter will let them take a simpler view to its memory. + + Some exporters may not be able to share memory in every possible way and + may need to raise errors to signal to some consumers that something is + just not possible. These errors should be a BufferError unless + there is another error that is actually causing the problem. 
The + exporter can use flags information to simplify how much of the + Py_buffer structure is filled in with non-default values and/or + raise an error if the object can't support a simpler view of its memory. + + 0 is returned on success and -1 on error.""" + raise OperationError(space.w_TypeError, space.wrap( + 'PyPy does not yet implement the new buffer interface')) @cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real, error=CANNOT_FAIL) def PyBuffer_IsContiguous(space, view, fortran): diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -123,10 +123,6 @@ typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *); typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **); -typedef int (*objobjproc)(PyObject *, PyObject *); -typedef int (*visitproc)(PyObject *, void *); -typedef int (*traverseproc)(PyObject *, visitproc, void *); - /* Py3k buffer interface */ typedef struct bufferinfo { void *buf; @@ -153,6 +149,41 @@ typedef int (*getbufferproc)(PyObject *, Py_buffer *, int); typedef void (*releasebufferproc)(PyObject *, Py_buffer *); + /* Flags for getting buffers */ +#define PyBUF_SIMPLE 0 +#define PyBUF_WRITABLE 0x0001 +/* we used to include an E, backwards compatible alias */ +#define PyBUF_WRITEABLE PyBUF_WRITABLE +#define PyBUF_FORMAT 0x0004 +#define PyBUF_ND 0x0008 +#define PyBUF_STRIDES (0x0010 | PyBUF_ND) +#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) +#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) +#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) +#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE) +#define PyBUF_CONTIG_RO (PyBUF_ND) + +#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE) +#define PyBUF_STRIDED_RO (PyBUF_STRIDES) + +#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_RECORDS_RO (PyBUF_STRIDES | 
PyBUF_FORMAT) + +#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT) +#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT) + + +#define PyBUF_READ 0x100 +#define PyBUF_WRITE 0x200 +#define PyBUF_SHADOW 0x400 +/* end Py3k buffer interface */ + +typedef int (*objobjproc)(PyObject *, PyObject *); +typedef int (*visitproc)(PyObject *, void *); +typedef int (*traverseproc)(PyObject *, visitproc, void *); + typedef struct { /* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all arguments are guaranteed to be of the object's type (modulo diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h --- a/pypy/module/cpyext/include/pystate.h +++ b/pypy/module/cpyext/include/pystate.h @@ -5,7 +5,7 @@ struct _is; /* Forward */ typedef struct _is { - int _foo; + struct _is *next; } PyInterpreterState; typedef struct _ts { diff --git a/pypy/module/cpyext/methodobject.py b/pypy/module/cpyext/methodobject.py --- a/pypy/module/cpyext/methodobject.py +++ b/pypy/module/cpyext/methodobject.py @@ -58,6 +58,7 @@ class W_PyCFunctionObject(Wrappable): def __init__(self, space, ml, w_self, w_module=None): self.ml = ml + self.name = rffi.charp2str(self.ml.c_ml_name) self.w_self = w_self self.w_module = w_module @@ -69,7 +70,7 @@ flags &= ~(METH_CLASS | METH_STATIC | METH_COEXIST) if space.is_true(w_kw) and not flags & METH_KEYWORDS: raise OperationError(space.w_TypeError, space.wrap( - rffi.charp2str(self.ml.c_ml_name) + "() takes no keyword arguments")) + self.name + "() takes no keyword arguments")) func = rffi.cast(PyCFunction, self.ml.c_ml_meth) length = space.int_w(space.len(w_args)) @@ -80,13 +81,12 @@ if length == 0: return generic_cpy_call(space, func, w_self, None) raise OperationError(space.w_TypeError, space.wrap( - rffi.charp2str(self.ml.c_ml_name) + "() takes no arguments")) + self.name + "() takes no arguments")) elif flags & METH_O: if length != 1: raise OperationError(space.w_TypeError, space.wrap("%s() takes exactly one 
argument (%d given)" % ( - rffi.charp2str(self.ml.c_ml_name), - length))) + self.name, length))) w_arg = space.getitem(w_args, space.wrap(0)) return generic_cpy_call(space, func, w_self, w_arg) elif flags & METH_VARARGS: @@ -199,6 +199,7 @@ __call__ = interp2app(cfunction_descr_call), __doc__ = GetSetProperty(W_PyCFunctionObject.get_doc), __module__ = interp_attrproperty_w('w_module', cls=W_PyCFunctionObject), + __name__ = interp_attrproperty('name', cls=W_PyCFunctionObject), ) W_PyCFunctionObject.typedef.acceptable_as_base_class = False diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py --- a/pypy/module/cpyext/pystate.py +++ b/pypy/module/cpyext/pystate.py @@ -2,7 +2,10 @@ cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct) from pypy.rpython.lltypesystem import rffi, lltype -PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ())) +PyInterpreterStateStruct = lltype.ForwardReference() +PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct) +cpython_struct( + "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct) PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)])) @cpython_api([], PyThreadState, error=CANNOT_FAIL) @@ -54,7 +57,8 @@ class InterpreterState(object): def __init__(self, space): - self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True) + self.interpreter_state = lltype.malloc( + PyInterpreterState.TO, flavor='raw', zero=True, immortal=True) def new_thread_state(self): capsule = ThreadStateCapsule() diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -34,141 +34,6 @@ @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) def PyObject_CheckBuffer(space, obj): - """Return 1 if obj supports the buffer interface otherwise 0.""" - raise NotImplementedError - - at cpython_api([PyObject, Py_buffer, 
rffi.INT_real], rffi.INT_real, error=-1) -def PyObject_GetBuffer(space, obj, view, flags): - """Export obj into a Py_buffer, view. These arguments must - never be NULL. The flags argument is a bit field indicating what - kind of buffer the caller is prepared to deal with and therefore what - kind of buffer the exporter is allowed to return. The buffer interface - allows for complicated memory sharing possibilities, but some caller may - not be able to handle all the complexity but may want to see if the - exporter will let them take a simpler view to its memory. - - Some exporters may not be able to share memory in every possible way and - may need to raise errors to signal to some consumers that something is - just not possible. These errors should be a BufferError unless - there is another error that is actually causing the problem. The - exporter can use flags information to simplify how much of the - Py_buffer structure is filled in with non-default values and/or - raise an error if the object can't support a simpler view of its memory. - - 0 is returned on success and -1 on error. - - The following table gives possible values to the flags arguments. - - Flag - - Description - - PyBUF_SIMPLE - - This is the default flag state. The returned - buffer may or may not have writable memory. The - format of the data will be assumed to be unsigned - bytes. This is a "stand-alone" flag constant. It - never needs to be '|'d to the others. The exporter - will raise an error if it cannot provide such a - contiguous buffer of bytes. - - PyBUF_WRITABLE - - The returned buffer must be writable. If it is - not writable, then raise an error. - - PyBUF_STRIDES - - This implies PyBUF_ND. The returned - buffer must provide strides information (i.e. the - strides cannot be NULL). This would be used when - the consumer can handle strided, discontiguous - arrays. Handling strides automatically assumes - you can handle shape. 
The exporter can raise an - error if a strided representation of the data is - not possible (i.e. without the suboffsets). - - PyBUF_ND - - The returned buffer must provide shape - information. The memory will be assumed C-style - contiguous (last dimension varies the - fastest). The exporter may raise an error if it - cannot provide this kind of contiguous buffer. If - this is not given then shape will be NULL. - - PyBUF_C_CONTIGUOUS - PyBUF_F_CONTIGUOUS - PyBUF_ANY_CONTIGUOUS - - These flags indicate that the contiguity returned - buffer must be respectively, C-contiguous (last - dimension varies the fastest), Fortran contiguous - (first dimension varies the fastest) or either - one. All of these flags imply - PyBUF_STRIDES and guarantee that the - strides buffer info structure will be filled in - correctly. - - PyBUF_INDIRECT - - This flag indicates the returned buffer must have - suboffsets information (which can be NULL if no - suboffsets are needed). This can be used when - the consumer can handle indirect array - referencing implied by these suboffsets. This - implies PyBUF_STRIDES. - - PyBUF_FORMAT - - The returned buffer must have true format - information if this flag is provided. This would - be used when the consumer is going to be checking - for what 'kind' of data is actually stored. An - exporter should always be able to provide this - information if requested. If format is not - explicitly requested then the format must be - returned as NULL (which means 'B', or - unsigned bytes) - - PyBUF_STRIDED - - This is equivalent to (PyBUF_STRIDES | - PyBUF_WRITABLE). - - PyBUF_STRIDED_RO - - This is equivalent to (PyBUF_STRIDES). - - PyBUF_RECORDS - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_RECORDS_RO - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT). - - PyBUF_FULL - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT | PyBUF_WRITABLE). 
- - PyBUF_FULL_RO - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT). - - PyBUF_CONTIG - - This is equivalent to (PyBUF_ND | - PyBUF_WRITABLE). - - PyBUF_CONTIG_RO - - This is equivalent to (PyBUF_ND).""" raise NotImplementedError @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -63,6 +63,7 @@ ), ]) assert mod.getarg_O(1) == 1 + assert mod.getarg_O.__name__ == "getarg_O" raises(TypeError, mod.getarg_O) raises(TypeError, mod.getarg_O, 1, 1) diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -37,6 +37,7 @@ def test_thread_state_interp(self, space, api): ts = api.PyThreadState_Get() assert ts.c_interp == api.PyInterpreterState_Head() + assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO) def test_basic_threadstate_dance(self, space, api): # Let extension modules call these functions, diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -4,11 +4,12 @@ class PyPyModule(MixedModule): interpleveldefs = { 'debug_repr': 'interp_extras.debug_repr', + 'remove_invalidates': 'interp_extras.remove_invalidates', } appleveldefs = {} class Module(MixedModule): - applevel_name = 'numpypy' + applevel_name = '_numpypy' submodules = { 'pypy': PyPyModule @@ -47,6 +48,7 @@ 'int_': 'interp_boxes.W_LongBox', 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', + 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- 
a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpypy +import _numpypy inf = float("inf") @@ -14,30 +14,54 @@ return mean(a) def identity(n, dtype=None): - a = numpypy.zeros((n,n), dtype=dtype) + a = _numpypy.zeros((n,n), dtype=dtype) for i in range(n): a[i][i] = 1 return a -def mean(a): +def mean(a, axis=None): if not hasattr(a, "mean"): - a = numpypy.array(a) - return a.mean() + a = _numpypy.array(a) + return a.mean(axis) -def sum(a): +def sum(a,axis=None): + '''sum(a, axis=None) + Sum of array elements over a given axis. + + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar + is returned. If an output array is specified, a reference to + `out` is returned. + + See Also + -------- + ndarray.sum : Equivalent method. + ''' + # TODO: add to doc (once it's implemented): cumsum : Cumulative sum of array elements. 
if not hasattr(a, "sum"): - a = numpypy.array(a) - return a.sum() + a = _numpypy.array(a) + return a.sum(axis) -def min(a): +def min(a, axis=None): if not hasattr(a, "min"): - a = numpypy.array(a) - return a.min() + a = _numpypy.array(a) + return a.min(axis) -def max(a): +def max(a, axis=None): if not hasattr(a, "max"): - a = numpypy.array(a) - return a.max() + a = _numpypy.array(a) + return a.max(axis) def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) @@ -47,9 +71,9 @@ stop = start start = 0 if dtype is None: - test = numpypy.array([start, stop, step, 0]) + test = _numpypy.array([start, stop, step, 0]) dtype = test.dtype - arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) + arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) i = start for j in range(arr.size): arr[j] = i @@ -90,5 +114,5 @@ you should assign the new shape to the shape attribute of the array ''' if not hasattr(a, 'reshape'): - a = numpypy.array(a) + a = _numpypy.array(a) return a.reshape(shape) diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -373,13 +373,17 @@ def execute(self, interp): if self.name in SINGLE_ARG_FUNCTIONS: - if len(self.args) != 1: + if len(self.args) != 1 and self.name != 'sum': raise ArgumentMismatch arr = self.args[0].execute(interp) if not isinstance(arr, BaseArray): raise ArgumentNotAnArray if self.name == "sum": - w_res = arr.descr_sum(interp.space) + if len(self.args)>1: + w_res = arr.descr_sum(interp.space, + self.args[1].execute(interp)) + else: + w_res = arr.descr_sum(interp.space) elif self.name == "prod": w_res = arr.descr_prod(interp.space) elif self.name == "max": @@ -430,7 +434,7 @@ ('\]', 'array_right'), ('(->)|[\+\-\*\/]', 'operator'), ('=', 'assign'), - (',', 'coma'), + (',', 'comma'), ('\|', 'pipe'), ('\(', 'paren_left'), ('\)', 'paren_right'), @@ -518,7 +522,7 
@@ return SliceConstant(start, stop, step) - def parse_expression(self, tokens): + def parse_expression(self, tokens, accept_comma=False): stack = [] while tokens.remaining(): token = tokens.pop() @@ -538,9 +542,13 @@ stack.append(RangeConstant(tokens.pop().v)) end = tokens.pop() assert end.name == 'pipe' + elif accept_comma and token.name == 'comma': + continue else: tokens.push() break + if accept_comma: + return stack stack.reverse() lhs = stack.pop() while stack: @@ -554,10 +562,7 @@ args = [] tokens.pop() # lparen while tokens.get(0).name != 'paren_right': - if tokens.get(0).name == 'coma': - tokens.pop() - continue - args.append(self.parse_expression(tokens)) + args += self.parse_expression(tokens, accept_comma=True) return FunctionCall(name, args) def parse_array_const(self, tokens): @@ -573,7 +578,7 @@ token = tokens.pop() if token.name == 'array_right': return elems - assert token.name == 'coma' + assert token.name == 'comma' def parse_statement(self, tokens): if (tokens.get(0).name == 'identifier' and diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -78,6 +78,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_pow = _binop_impl("power") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -170,6 +171,7 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __pow__ = interp2app(W_GenericBox.descr_pow), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), @@ -245,6 +247,7 @@ long_name = "int64" W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,), __module__ = "numpypy", + __new__ = interp2app(W_LongBox.descr__new__.im_func), ) W_ULongBox.typedef = 
TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef, diff --git a/pypy/module/micronumpy/interp_extras.py b/pypy/module/micronumpy/interp_extras.py --- a/pypy/module/micronumpy/interp_extras.py +++ b/pypy/module/micronumpy/interp_extras.py @@ -5,3 +5,11 @@ @unwrap_spec(array=BaseArray) def debug_repr(space, array): return space.wrap(array.find_sig().debug_repr()) + + at unwrap_spec(array=BaseArray) +def remove_invalidates(space, array): + """ Array modification will no longer invalidate any of it's + potential children. Use only for performance debugging + """ + del array.invalidates[:] + return space.w_None diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -1,19 +1,20 @@ from pypy.rlib import jit from pypy.rlib.objectmodel import instantiate -from pypy.module.micronumpy.strides import calculate_broadcast_strides +from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ + calculate_slice_strides -# Iterators for arrays -# -------------------- -# all those iterators with the exception of BroadcastIterator iterate over the -# entire array in C order (the last index changes the fastest). This will -# yield all elements. Views iterate over indices and look towards strides and -# backstrides to find the correct position. Notably the offset between -# x[..., i + 1] and x[..., i] will be strides[-1]. Offset between -# x[..., k + 1, 0] and x[..., k, i_max] will be backstrides[-2] etc. 
+class BaseTransform(object): + pass -# BroadcastIterator works like that, but for indexes that don't change source -# in the original array, strides[i] == backstrides[i] == 0 +class ViewTransform(BaseTransform): + def __init__(self, chunks): + # 4-tuple specifying slicing + self.chunks = chunks + +class BroadcastTransform(BaseTransform): + def __init__(self, res_shape): + self.res_shape = res_shape class BaseIterator(object): def next(self, shapelen): @@ -22,6 +23,15 @@ def done(self): raise NotImplementedError + def apply_transformations(self, arr, transformations): + v = self + for transform in transformations: + v = v.transform(arr, transform) + return v + + def transform(self, arr, t): + raise NotImplementedError + class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 @@ -36,6 +46,10 @@ def done(self): return self.offset >= self.size + def transform(self, arr, t): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).transform(arr, t) + class OneDimIterator(BaseIterator): def __init__(self, start, step, stop): self.offset = start @@ -52,26 +66,29 @@ def done(self): return self.offset == self.size -def view_iter_from_arr(arr): - return ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape) - class ViewIterator(BaseIterator): - def __init__(self, start, strides, backstrides, shape, res_shape=None): + def __init__(self, start, strides, backstrides, shape): self.offset = start self._done = False - if res_shape is not None and res_shape != shape: - r = calculate_broadcast_strides(strides, backstrides, - shape, res_shape) - self.strides, self.backstrides = r - self.res_shape = res_shape - else: - self.strides = strides - self.backstrides = backstrides - self.res_shape = shape + self.strides = strides + self.backstrides = backstrides + self.res_shape = shape self.indices = [0] * len(self.res_shape) + def transform(self, arr, t): + if isinstance(t, BroadcastTransform): + r = 
calculate_broadcast_strides(self.strides, self.backstrides, + self.res_shape, t.res_shape) + return ViewIterator(self.offset, r[0], r[1], t.res_shape) + elif isinstance(t, ViewTransform): + r = calculate_slice_strides(self.res_shape, self.offset, + self.strides, + self.backstrides, t.chunks) + return ViewIterator(r[1], r[2], r[3], r[0]) + @jit.unroll_safe def next(self, shapelen): + shapelen = jit.promote(len(self.res_shape)) offset = self.offset indices = [0] * shapelen for i in range(shapelen): @@ -96,6 +113,13 @@ res._done = done return res + def apply_transformations(self, arr, transformations): + v = BaseIterator.apply_transformations(self, arr, transformations) + if len(arr.shape) == 1: + return OneDimIterator(self.offset, self.strides[0], + self.res_shape[0]) + return v + def done(self): return self._done @@ -103,11 +127,57 @@ def next(self, shapelen): return self + def transform(self, arr, t): + pass + +class AxisIterator(BaseIterator): + def __init__(self, start, dim, shape, strides, backstrides): + self.res_shape = shape[:] + self.strides = strides[:dim] + [0] + strides[dim:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] + self.first_line = True + self.indices = [0] * len(shape) + self._done = False + self.offset = start + self.dim = dim + + @jit.unroll_safe + def next(self, shapelen): + offset = self.offset + first_line = self.first_line + indices = [0] * shapelen + for i in range(shapelen): + indices[i] = self.indices[i] + done = False + for i in range(shapelen - 1, -1, -1): + if indices[i] < self.res_shape[i] - 1: + if i == self.dim: + first_line = False + indices[i] += 1 + offset += self.strides[i] + break + else: + indices[i] = 0 + offset -= self.backstrides[i] + else: + done = True + res = instantiate(AxisIterator) + res.offset = offset + res.indices = indices + res.strides = self.strides + res.backstrides = self.backstrides + res.res_shape = self.res_shape + res._done = done + res.first_line = first_line + res.dim = self.dim + 
return res + + def done(self): + return self._done + # ------ other iterators that are not part of the computation frame ---------- - -class AxisIterator(object): - """ This object will return offsets of each start of the last stride - """ + +class SkipLastAxisIterator(object): def __init__(self, arr): self.arr = arr self.indices = [0] * (len(arr.shape) - 1) @@ -125,4 +195,3 @@ self.offset -= self.arr.backstrides[i] else: self.done = True - diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped +from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature from pypy.module.micronumpy.strides import calculate_slice_strides @@ -8,30 +8,39 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import ArrayIterator,\ - view_iter_from_arr, OneDimIterator, AxisIterator +from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ + SkipLastAxisIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['result_size', 'frame', 'ri', 'self', 'result'] + reds=['result_size', 'frame', 'ri', 'self', 'result'], + get_printable_location=signature.new_printable_location('numpy'), + name='numpy', ) all_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['frame', 'self', 'dtype'] + reds=['frame', 'self', 'dtype'], + 
get_printable_location=signature.new_printable_location('all'), + name='numpy_all', ) any_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['frame', 'self', 'dtype'] + reds=['frame', 'self', 'dtype'], + get_printable_location=signature.new_printable_location('any'), + name='numpy_any', ) slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self', 'frame', 'source', 'res_iter'] + reds=['self', 'frame', 'arr'], + get_printable_location=signature.new_printable_location('slice'), + name='numpy_slice', ) + def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] batch = space.listview(w_iterable) @@ -148,9 +157,6 @@ # (meaning that the realignment of elements crosses from one step into another) # return None so that the caller can raise an exception. def calc_new_strides(new_shape, old_shape, old_strides): - # Return the proper strides for new_shape, or None if the mapping crosses - # stepping boundaries - # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and # len(new_shape) > 0 steps = [] @@ -158,6 +164,7 @@ oldI = 0 new_strides = [] if old_strides[0] < old_strides[-1]: + #Start at old_shape[0], old_stides[0] for i in range(len(old_shape)): steps.append(old_strides[i] / last_step) last_step *= old_shape[i] @@ -175,10 +182,11 @@ if n_new_elems_used == n_old_elems_to_use: oldI += 1 if oldI >= len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] else: + #Start at old_shape[-1], old_strides[-1] for i in range(len(old_shape) - 1, -1, -1): steps.insert(0, old_strides[i] / last_step) last_step *= old_shape[i] @@ -197,7 +205,7 @@ if n_new_elems_used == n_old_elems_to_use: oldI -= 1 if oldI < -len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] return new_strides @@ -277,21 +285,26 @@ descr_rpow = _binop_right_impl("power") descr_rmod = _binop_right_impl("mod") - def 
_reduce_ufunc_impl(ufunc_name): - def impl(self, space): + def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): + def impl(self, space, w_dim=None): + if space.is_w(w_dim, space.w_None): + w_dim = space.wrap(-1) return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, - self, multidim=True) + self, True, promote_to_largest, w_dim) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") - descr_prod = _reduce_ufunc_impl("multiply") + descr_sum_promote = _reduce_ufunc_impl("add", True) + descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") def _reduce_argmax_argmin_impl(op_name): reduce_driver = jit.JitDriver( greens=['shapelen', 'sig'], - reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'] + reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'], + get_printable_location=signature.new_printable_location(op_name), + name='numpy_' + op_name, ) def loop(self): sig = self.find_sig() @@ -377,7 +390,7 @@ elif len(self.shape) < 2 and len(w_other.shape) < 2: w_res = self.descr_mul(space, w_other) assert isinstance(w_res, BaseArray) - return w_res.descr_sum(space) + return w_res.descr_sum(space, space.wrap(-1)) dtype = interp_ufuncs.find_binop_result_dtype(space, self.find_dtype(), w_other.find_dtype()) if self.size < 1 and w_other.size < 1: @@ -438,6 +451,10 @@ def descr_get_dtype(self, space): return space.wrap(self.find_dtype()) + def descr_get_ndim(self, space): + return space.wrap(len(self.shape)) + + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -466,7 +483,7 @@ def descr_repr(self, space): res = StringBuilder() res.append("array(") - concrete = self.get_concrete() + concrete = self.get_concrete_or_scalar() dtype = concrete.find_dtype() if not concrete.size: res.append('[]') @@ -479,8 +496,9 @@ else: concrete.to_str(space, 1, res, indent=' ') if 
(dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \ - not self.size: + not (dtype.kind == interp_dtype.SIGNEDLTR and + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or + not self.size): res.append(", dtype=" + dtype.name) res.append(")") return space.wrap(res.build()) @@ -613,9 +631,26 @@ ) return w_result - def descr_mean(self, space): - return space.div(self.descr_sum(space), - space.wrap(self.size)) + def descr_mean(self, space, w_dim=None): + if space.is_w(w_dim, space.w_None): + w_dim = space.wrap(-1) + w_denom = space.wrap(self.size) + else: + dim = space.int_w(w_dim) + w_denom = space.wrap(self.shape[dim]) + return space.div(self.descr_sum_promote(space, w_dim), w_denom) + + def descr_var(self, space): + # var = mean((values - mean(values)) ** 2) + w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) + assert isinstance(w_res, BaseArray) + w_res = w_res.descr_pow(space, space.wrap(2)) + assert isinstance(w_res, BaseArray) + return w_res.descr_mean(space, space.w_None) + + def descr_std(self, space): + # std(v) = sqrt(var(v)) + return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) def descr_nonzero(self, space): if self.size > 1: @@ -641,8 +676,8 @@ strides.append(concrete.strides[i]) backstrides.append(concrete.backstrides[i]) shape.append(concrete.shape[i]) - return space.wrap(W_NDimSlice(concrete.start, strides[:], - backstrides[:], shape[:], concrete)) + return space.wrap(W_NDimSlice(concrete.start, strides, + backstrides, shape, concrete)) def descr_get_flatiter(self, space): return space.wrap(W_FlatIterator(self)) @@ -650,11 +685,12 @@ def getitem(self, item): raise NotImplementedError - def find_sig(self, res_shape=None): + def find_sig(self, res_shape=None, arr=None): """ find a correct signature for the array """ res_shape = res_shape or self.shape - return signature.find_sig(self.create_sig(res_shape), self) + 
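The new `descr_var` and `descr_std` are defined compositionally on top of `mean`, exactly as the comments in the hunk state: var is the mean of squared deviations, std is its square root. A self-contained sketch of that composition (plain floats, not ndarray machinery):

```python
import math

def mean(values):
    return sum(values) / float(len(values))

def var(values):
    # var = mean((values - mean(values)) ** 2), as in descr_var above
    m = mean(values)
    return mean([(v - m) ** 2 for v in values])

def std(values):
    # std(v) = sqrt(var(v)), as in descr_std above
    return math.sqrt(var(values))

data = [1.0, 2.0, 3.0, 4.0]
print(var(data))  # 1.25
print(std(data))  # ~1.1180
```

Note this is the population variance (no ddof correction), matching the lazy-expression formulation in the patch.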
arr = arr or self + return signature.find_sig(self.create_sig(), arr) def descr_array_iface(self, space): if not self.shape: @@ -708,7 +744,7 @@ def copy(self, space): return Scalar(self.dtype, self.value) - def create_sig(self, res_shape): + def create_sig(self): return signature.ScalarSignature(self.dtype) def get_concrete_or_scalar(self): @@ -726,7 +762,8 @@ self.name = name def _del_sources(self): - # Function for deleting references to source arrays, to allow garbage-collecting them + # Function for deleting references to source arrays, + # to allow garbage-collecting them raise NotImplementedError def compute(self): @@ -778,11 +815,11 @@ self.size = size VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() return signature.VirtualSliceSignature( - self.child.create_sig(res_shape)) + self.child.create_sig()) def force_if_needed(self): if self.forced_result is None: @@ -792,6 +829,7 @@ def _del_sources(self): self.child = None + class Call1(VirtualArray): def __init__(self, ufunc, name, shape, res_dtype, values): VirtualArray.__init__(self, name, shape, res_dtype) @@ -802,16 +840,17 @@ def _del_sources(self): self.values = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) - return signature.Call1(self.ufunc, self.name, - self.values.create_sig(res_shape)) + return self.forced_result.create_sig() + return signature.Call1(self.ufunc, self.name, self.values.create_sig()) class Call2(VirtualArray): """ Intermediate class for performing binary operations. 
""" + _immutable_fields_ = ['left', 'right'] + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -826,12 +865,55 @@ self.left = None self.right = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() + if self.shape != self.left.shape and self.shape != self.right.shape: + return signature.BroadcastBoth(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.left.shape: + return signature.BroadcastLeft(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.right.shape: + return signature.BroadcastRight(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) return signature.Call2(self.ufunc, self.name, self.calc_dtype, - self.left.create_sig(res_shape), - self.right.create_sig(res_shape)) + self.left.create_sig(), self.right.create_sig()) + +class SliceArray(Call2): + def __init__(self, shape, dtype, left, right): + Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, + right) + + def create_sig(self): + lsig = self.left.create_sig() + rsig = self.right.create_sig() + if self.shape != self.right.shape: + return signature.SliceloopBroadcastSignature(self.ufunc, + self.name, + self.calc_dtype, + lsig, rsig) + return signature.SliceloopSignature(self.ufunc, self.name, + self.calc_dtype, + lsig, rsig) + +class AxisReduce(Call2): + """ NOTE: this is only used as a container, you should never + encounter such things in the wild. 
Remove this comment + when we'll make AxisReduce lazy + """ + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not @@ -883,13 +965,8 @@ if self.order == 'C': strides.reverse() backstrides.reverse() - self.strides = strides[:] - self.backstrides = backstrides[:] - - def array_sig(self, res_shape): - if res_shape is not None and self.shape != res_shape: - return signature.ViewSignature(self.dtype) - return signature.ArraySignature(self.dtype) + self.strides = strides + self.backstrides = backstrides def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): '''Modifies builder with a representation of the array/slice @@ -898,76 +975,80 @@ each line will begin with indent. ''' size = self.size + ccomma = ',' * comma + ncomma = ',' * (1 - comma) + dtype = self.find_dtype() if size < 1: builder.append('[]') return + elif size == 1: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True - dtype = self.find_dtype() ndims = len(self.shape) i = 0 - start = True builder.append('[') if ndims > 1: if use_ellipsis: - for i in range(3): - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + for i in range(min(3, self.shape[0])): + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) view = self.create_slice([(i, 0, 0, 1)]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) - builder.append('\n' + indent + '..., ') - i = self.shape[0] - 3 + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) + if i < self.shape[0] - 1: + 
builder.append(ccomma + '\n' + indent + '...' + ncomma) + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) + # create_slice requires len(chunks) > 1 in order to reduce + # shape view = self.create_slice([(i, 0, 0, 1)]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: - spacer = ',' * comma + ' ' + spacer = ccomma + ' ' item = self.start # An iterator would be a nicer way to walk along the 1d array, but # how do I reset it if printing ellipsis? iterators have no # "set_offset()" i = 0 if use_ellipsis: - for i in range(3): - if start: - start = False - else: + for i in range(min(3, self.shape[0])): + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] - # Add a comma only if comma is False - this prevents adding two - # commas - builder.append(spacer + '...' + ',' * (1 - comma)) - # Ugly, but can this be done with an iterator? - item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + # Add a comma only if comma is False - this prevents adding + # two commas + builder.append(spacer + '...' + ncomma) + # Ugly, but can this be done with an iterator? 
+ item = self.start + self.backstrides[0] - 2 * self.strides[0] + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] i += 1 - else: - builder.append('[') builder.append(']') @jit.unroll_safe @@ -1001,20 +1082,22 @@ self.dtype is w_value.find_dtype()): self._fast_setslice(space, w_value) else: - self._sliceloop(w_value, res_shape) + arr = SliceArray(self.shape, self.dtype, self, w_value) + self._sliceloop(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) itemsize = self.dtype.itemtype.get_element_size() - if len(self.shape) == 1: + shapelen = len(self.shape) + if shapelen == 1: rffi.c_memcpy( rffi.ptradd(self.storage, self.start * itemsize), rffi.ptradd(w_value.storage, w_value.start * itemsize), self.size * itemsize ) else: - dest = AxisIterator(self) - source = AxisIterator(w_value) + dest = SkipLastAxisIterator(self) + source = SkipLastAxisIterator(w_value) while not dest.done: rffi.c_memcpy( rffi.ptradd(self.storage, dest.offset * itemsize), @@ -1024,21 +1107,16 @@ source.next() dest.next() - def _sliceloop(self, source, res_shape): - sig = source.find_sig(res_shape) - frame = sig.create_frame(source, res_shape) - res_iter = view_iter_from_arr(self) - shapelen = len(res_shape) - while not res_iter.done(): - slice_driver.jit_merge_point(sig=sig, - frame=frame, - shapelen=shapelen, - self=self, source=source, - res_iter=res_iter) - self.setitem(res_iter.offset, sig.eval(frame, source).convert_to( - self.find_dtype())) + def _sliceloop(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(arr) + shapelen = len(self.shape) + while not frame.done(): + slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, + arr=arr, + shapelen=shapelen) + sig.eval(frame, arr) frame.next(shapelen) - res_iter = res_iter.next(shapelen) def copy(self, space): array = 
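The reworked `to_str()` above prints the first and last few elements around an ellipsis once the array is large. A sketch of that 1-D formatting policy on plain lists (threshold and edge count follow the hunk's `size > 1000` / three-element convention; the function itself is hypothetical):

```python
def format_1d(values, threshold=1000, edgeitems=3):
    """Sketch of the ellipsis logic in to_str(): for long arrays,
    show the first and last `edgeitems` elements around '...'."""
    if len(values) > threshold:
        shown = [str(v) for v in values[:edgeitems]]
        shown.append('...')
        shown.extend(str(v) for v in values[-edgeitems:])
    else:
        shown = [str(v) for v in values]
    return '[' + ', '.join(shown) + ']'

print(format_1d(list(range(4))))     # [0, 1, 2, 3]
print(format_1d(list(range(2000))))  # [0, 1, 2, ..., 1997, 1998, 1999]
```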
W_NDimArray(self.size, self.shape[:], self.dtype, self.order) @@ -1047,7 +1125,7 @@ class ViewArray(ConcreteArray): - def create_sig(self, res_shape): + def create_sig(self): return signature.ViewSignature(self.dtype) @@ -1084,9 +1162,9 @@ strides.reverse() backstrides.reverse() new_shape.reverse() - self.strides = strides[:] - self.backstrides = backstrides[:] - self.shape = new_shape[:] + self.strides = strides + self.backstrides = backstrides + self.shape = new_shape return new_strides = calc_new_strides(new_shape, self.shape, self.strides) if new_strides is None: @@ -1096,7 +1174,7 @@ for nd in range(len(new_shape)): new_backstrides[nd] = (new_shape[nd] - 1) * new_strides[nd] self.strides = new_strides[:] - self.backstrides = new_backstrides[:] + self.backstrides = new_backstrides self.shape = new_shape[:] class W_NDimArray(ConcreteArray): @@ -1111,8 +1189,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_sig(self, res_shape): - return self.array_sig(res_shape) + def create_sig(self): + return signature.ArraySignature(self.dtype) def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) @@ -1239,6 +1317,7 @@ shape = GetSetProperty(BaseArray.descr_get_shape, BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), + ndim = GetSetProperty(BaseArray.descr_get_ndim), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), @@ -1253,6 +1332,8 @@ all = interp2app(BaseArray.descr_all), any = interp2app(BaseArray.descr_any), dot = interp2app(BaseArray.descr_dot), + var = interp2app(BaseArray.descr_var), + std = interp2app(BaseArray.descr_std), copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), @@ -1279,7 +1360,7 @@ def descr_next(self, space): if self.iter.done(): raise OperationError(space.w_StopIteration, space.w_None) - result = self.eval(self.iter) + result = self.getitem(self.iter.offset) self.iter = 
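The reshape hunk above recomputes backstrides as `(new_shape[nd] - 1) * new_strides[nd]`: the distance from the first to the last element along one dimension. A minimal sketch of that recomputation (unit itemsize assumed for readability):

```python
def calc_backstrides(shape, strides):
    """Sketch of the backstride formula used in descr_reshape():
    backstride[nd] = (shape[nd] - 1) * strides[nd]."""
    return [(shape[nd] - 1) * strides[nd] for nd in range(len(shape))]

# C-order (4, 3) array with strides [3, 1]:
print(calc_backstrides([4, 3], [3, 1]))  # [9, 2]
```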
self.iter.next(self.shapelen) return result diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -1,19 +1,31 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, unwrap_spec +from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty -from pypy.module.micronumpy import interp_boxes, interp_dtype, types -from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature, find_sig +from pypy.module.micronumpy import interp_boxes, interp_dtype +from pypy.module.micronumpy.signature import ReduceSignature,\ + find_sig, new_printable_location, AxisReduceSignature, ScalarSignature from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name reduce_driver = jit.JitDriver( - greens = ['shapelen', "sig"], - virtualizables = ["frame"], - reds = ["frame", "self", "dtype", "value", "obj"] + greens=['shapelen', "sig"], + virtualizables=["frame"], + reds=["frame", "self", "dtype", "value", "obj"], + get_printable_location=new_printable_location('reduce'), + name='numpy_reduce', ) +axisreduce_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['self','arr', 'identity', 'frame'], + name='numpy_axisreduce', + get_printable_location=new_printable_location('axisreduce'), +) + + class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -46,18 +58,72 @@ ) return self.call(space, __args__.arguments_w) - def descr_reduce(self, space, w_obj): - return self.reduce(space, w_obj, multidim=False) + def descr_reduce(self, space, w_obj, w_dim=0): + 
"""reduce(...) + reduce(a, axis=0) - def reduce(self, space, w_obj, multidim): - from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar - + Reduces `a`'s dimension by one, by applying ufunc along one axis. + + Let :math:`a.shape = (N_0, ..., N_i, ..., N_{M-1})`. Then + :math:`ufunc.reduce(a, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]` = + the result of iterating `j` over :math:`range(N_i)`, cumulatively applying + ufunc to each :math:`a[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]`. + For a one-dimensional array, reduce produces results equivalent to: + :: + + r = op.identity # op = ufunc + for i in xrange(len(A)): + r = op(r, A[i]) + return r + + For example, add.reduce() is equivalent to sum(). + + Parameters + ---------- + a : array_like + The array to act on. + axis : int, optional + The axis along which to apply the reduction. + + Examples + -------- + >>> np.multiply.reduce([2,3,5]) + 30 + + A multi-dimensional array example: + + >>> X = np.arange(8).reshape((2,2,2)) + >>> X + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> np.add.reduce(X, 0) + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X) # confirm: default axis value is 0 + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X, 1) + array([[ 2, 4], + [10, 12]]) + >>> np.add.reduce(X, 2) + array([[ 1, 5], + [ 9, 13]]) + """ + return self.reduce(space, w_obj, False, False, w_dim) + + def reduce(self, space, w_obj, multidim, promote_to_largest, w_dim): + from pypy.module.micronumpy.interp_numarray import convert_to_array, \ + Scalar if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) - + dim = space.int_w(w_dim) assert isinstance(self, W_Ufunc2) obj = convert_to_array(space, w_obj) + if dim >= len(obj.shape): + raise OperationError(space.w_ValueError, space.wrap("axis(=%d) out of bounds" % dim)) if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) 
@@ -65,26 +131,80 @@ size = obj.size dtype = find_unaryop_result_dtype( space, obj.find_dtype(), - promote_to_largest=True + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True ) shapelen = len(obj.shape) + if self.identity is None and size == 0: + raise operationerrfmt(space.w_ValueError, "zero-size array to " + "%s.reduce without identity", self.name) + if shapelen > 1 and dim >= 0: + res = self.do_axis_reduce(obj, dtype, dim) + return space.wrap(res) + scalarsig = ScalarSignature(dtype) sig = find_sig(ReduceSignature(self.func, self.name, dtype, - ScalarSignature(dtype), - obj.create_sig(obj.shape)), obj) + scalarsig, + obj.create_sig()), obj) frame = sig.create_frame(obj) - if shapelen > 1 and not multidim: - raise OperationError(space.w_NotImplementedError, - space.wrap("not implemented yet")) if self.identity is None: - if size == 0: - raise operationerrfmt(space.w_ValueError, "zero-size array to " - "%s.reduce without identity", self.name) value = sig.eval(frame, obj).convert_to(dtype) frame.next(shapelen) else: value = self.identity.convert_to(dtype) return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + def do_axis_reduce(self, obj, dtype, dim): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + W_NDimArray + + shape = obj.shape[0:dim] + obj.shape[dim + 1:len(obj.shape)] + size = 1 + for s in shape: + size *= s + result = W_NDimArray(size, shape, dtype) + rightsig = obj.create_sig() + # note - this is just a wrapper so signature can fetch + # both left and right, nothing more, especially + # this is not a true virtual array, because shapes + # don't quite match + arr = AxisReduce(self.func, self.name, obj.shape, dtype, + result, obj, dim) + scalarsig = ScalarSignature(dtype) + sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, + scalarsig, rightsig), arr) + assert isinstance(sig, AxisReduceSignature) + frame = sig.create_frame(arr) + shapelen = len(obj.shape) + if 
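`do_axis_reduce()` above builds the result geometry by dropping the reduced dimension and multiplying out what remains. That computation, isolated as a sketch:

```python
def axis_reduce_shape(shape, dim):
    """Sketch of do_axis_reduce()'s result geometry: drop the reduced
    dimension; the result size is the product of the remaining ones."""
    out_shape = shape[0:dim] + shape[dim + 1:len(shape)]
    size = 1
    for s in out_shape:
        size *= s
    return out_shape, size

print(axis_reduce_shape([2, 2, 2], 0))  # ([2, 2], 4)
print(axis_reduce_shape([2, 3, 4], 1))  # ([2, 4], 8)
```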
self.identity is not None: + identity = self.identity.convert_to(dtype) + else: + identity = None + self.reduce_axis_loop(frame, sig, shapelen, arr, identity) + return result + + def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): + # note - we can be advanterous here, depending on the exact field + # layout. For now let's say we iterate the original way and + # simply follow the original iteration order + while not frame.done(): + axisreduce_driver.jit_merge_point(frame=frame, self=self, + sig=sig, + identity=identity, + shapelen=shapelen, arr=arr) + iter = frame.get_final_iter() + v = sig.eval(frame, arr).convert_to(sig.calc_dtype) + if iter.first_line: + if identity is not None: + value = self.func(sig.calc_dtype, identity, v) + else: + value = v + else: + cur = arr.left.getitem(iter.offset) + value = self.func(sig.calc_dtype, cur, v) + arr.left.setitem(iter.offset, value) + frame.next(shapelen) + def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): while not frame.done(): reduce_driver.jit_merge_point(sig=sig, @@ -92,10 +212,12 @@ value=value, obj=obj, frame=frame, dtype=dtype) assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, sig.eval(frame, obj).convert_to(dtype)) + value = sig.binfunc(dtype, value, + sig.eval(frame, obj).convert_to(dtype)) frame.next(shapelen) return value + class W_Ufunc1(W_Ufunc): argcount = 1 @@ -180,6 +302,7 @@ reduce = interp2app(W_Ufunc.descr_reduce), ) + def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, promote_bools=False): # dt1.num should be <= dt2.num @@ -228,6 +351,7 @@ dtypenum += 3 return interp_dtype.get_dtype_cache(space).builtin_dtypes[dtypenum] + def find_unaryop_result_dtype(space, dt, promote_to_float=False, promote_bools=False, promote_to_largest=False): if promote_bools and (dt.kind == interp_dtype.BOOLLTR): @@ -252,6 +376,7 @@ assert False return dt + def find_dtype_for_scalar(space, w_obj, current_guess=None): bool_dtype = 
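`reduce_axis_loop()` above seeds each output cell on the first element of its line (from the identity when one exists, otherwise from the value itself) and folds later elements in with the ufunc. The accumulation pattern, sketched over axis 1 of a 2-D list (the `j == 0` test stands in for `iter.first_line`):

```python
def axis_reduce(func, rows, identity=None):
    """Sketch of reduce_axis_loop() over axis 1: seed from the
    identity (or the first value), then fold the rest in with func."""
    out = []
    for row in rows:
        acc = None
        for j, v in enumerate(row):
            if j == 0:  # corresponds to iter.first_line in the hunk
                acc = func(identity, v) if identity is not None else v
            else:
                acc = func(acc, v)
        out.append(acc)
    return out

print(axis_reduce(lambda a, b: a + b, [[0, 1], [2, 3]], identity=0))  # [1, 5]
print(axis_reduce(max, [[4, 1], [2, 7]]))                             # [4, 7]
```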
interp_dtype.get_dtype_cache(space).w_booldtype long_dtype = interp_dtype.get_dtype_cache(space).w_longdtype @@ -345,7 +470,8 @@ identity = extra_kwargs.get("identity") if identity is not None: - identity = interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) + identity = \ + interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,10 +1,37 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - OneDimIterator, ConstantIterator -from pypy.module.micronumpy.strides import calculate_slice_strides + ConstantIterator, AxisIterator, ViewTransform,\ + BroadcastTransform from pypy.rlib.jit import hint, unroll_safe, promote +""" Signature specifies both the numpy expression that has been constructed +and the assembler to be compiled. This is a very important observation - +Two expressions will be using the same assembler if and only if they are +compiled to the same signature. + +This is also a very convinient tool for specializations. For example +a + a and a + b (where a != b) will compile to different assembler because +we specialize on the same array access. + +When evaluating, signatures will create iterators per signature node, +potentially sharing some of them. Iterators depend also on the actual +expression, they're not only dependant on the array itself. For example +a + b where a is dim 2 and b is dim 1 would create a broadcasted iterator for +the array b. + +Such iterator changes are called Transformations. An actual iterator would +be a combination of array and various transformation, like view, broadcast, +dimension swapping etc. 
+ +See interp_iter for transformations +""" + +def new_printable_location(driver_name): + def get_printable_location(shapelen, sig): + return 'numpy ' + sig.debug_repr() + ' [%d dims,%s]' % (shapelen, driver_name) + return get_printable_location + def sigeq(one, two): return one.eq(two) @@ -28,7 +55,8 @@ return sig class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]'] + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity'] @unroll_safe def __init__(self, iterators, arrays): @@ -46,7 +74,7 @@ def done(self): final_iter = promote(self.final_iter) if final_iter < 0: - return False + assert False return self.iterators[final_iter].done() @unroll_safe @@ -54,6 +82,12 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -65,6 +99,9 @@ cache.append(ptr) return res +def new_cache(): + return r_dict(sigeq_no_numbering, sighash) + class Signature(object): _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -73,7 +110,7 @@ iter_no = 0 def invent_numbering(self): - cache = r_dict(sigeq_no_numbering, sighash) + cache = new_cache() allnumbers = [] self._invent_numbering(cache, allnumbers) @@ -90,13 +127,13 @@ allnumbers.append(no) self.iter_no = no - def create_frame(self, arr, res_shape=None): - res_shape = res_shape or arr.shape + def create_frame(self, arr): iterlist = [] arraylist = [] - self._create_iter(iterlist, arraylist, arr, res_shape, []) + self._create_iter(iterlist, arraylist, arr, []) return NumpyEvalFrame(iterlist, arraylist) + class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -115,16 +152,6 @@ def hash(self): return compute_identity_hash(self.dtype) - def allocate_view_iter(self, arr, res_shape, 
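The module docstring added above says two expressions reuse the same compiled assembler iff they produce the same signature, and that `a + a` and `a + b` differ because the signature records which leaves are the *same* array. A toy sketch of that keying idea (using `id()` where the real code numbers storage pointers through a cache):

```python
def sig_key(expr):
    """Sketch of signature-keyed caching: the key records the
    operation tree plus the identity of each leaf array, so
    'a + a' and 'a + b' hash to different compiled loops."""
    if isinstance(expr, tuple):  # (op, left, right)
        op, left, right = expr
        return (op, sig_key(left), sig_key(right))
    return id(expr)  # leaf array: identity, not value, matters

a, b = [1.0], [2.0]
print(sig_key(('add', a, a)) == sig_key(('add', a, a)))  # True
print(sig_key(('add', a, a)) == sig_key(('add', a, b)))  # False
```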
chunklist): - r = arr.shape, arr.start, arr.strides, arr.backstrides - if chunklist: - for chunkelem in chunklist: - r = calculate_slice_strides(r[0], r[1], r[2], r[3], chunkelem) - shape, start, strides, backstrides = r - if len(res_shape) == 1: - return OneDimIterator(start, strides[0], res_shape[0]) - return ViewIterator(start, strides, backstrides, shape, res_shape) - class ArraySignature(ConcreteSignature): def debug_repr(self): return 'Array' @@ -132,23 +159,25 @@ def _invent_array_numbering(self, arr, cache): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() + # this get_concrete never forces assembler. If we're here and array + # is not of a concrete class it means that we have a _forced_result, + # otherwise the signature would not match assert isinstance(concr, ConcreteArray) + assert concr.dtype is self.dtype self.array_no = _add_ptr_to_cache(concr.storage, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, res_shape, chunklist)) + iterlist.append(self.allocate_iter(concr, transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, res_shape, chunklist): - if chunklist: - return self.allocate_view_iter(arr, res_shape, chunklist) - return ArrayIterator(arr.size) + def allocate_iter(self, arr, transforms): + return ArrayIterator(arr.size).apply_transformations(arr, transforms) def eval(self, frame, arr): iter = frame.iterators[self.iter_no] @@ -161,7 +190,7 @@ def _invent_array_numbering(self, arr, cache): pass - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): 
if self.iter_no >= len(iterlist): iter = ConstantIterator() iterlist.append(iter) @@ -181,8 +210,9 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, res_shape, chunklist): - return self.allocate_view_iter(arr, res_shape, chunklist) + def allocate_iter(self, arr, transforms): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).apply_transformations(arr, transforms) class VirtualSliceSignature(Signature): def __init__(self, child): @@ -193,6 +223,9 @@ assert isinstance(arr, VirtualSlice) self.child._invent_array_numbering(arr.child, cache) + def _invent_numbering(self, cache, allnumbers): + self.child._invent_numbering(new_cache(), allnumbers) + def hash(self): return intmask(self.child.hash() ^ 1234) @@ -202,12 +235,11 @@ assert isinstance(other, VirtualSliceSignature) return self.child.eq(other.child, compare_array_no) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import VirtualSlice assert isinstance(arr, VirtualSlice) - chunklist.append(arr.chunks) - self.child._create_iter(iterlist, arraylist, arr.child, res_shape, - chunklist) + transforms = transforms + [ViewTransform(arr.chunks)] + self.child._create_iter(iterlist, arraylist, arr.child, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import VirtualSlice @@ -243,11 +275,10 @@ assert isinstance(arr, Call1) self.child._invent_array_numbering(arr.values, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call1 assert isinstance(arr, Call1) - self.child._create_iter(iterlist, arraylist, arr.values, res_shape, - chunklist) + self.child._create_iter(iterlist, arraylist, arr.values, transforms) def eval(self, frame, arr): from 
pypy.module.micronumpy.interp_numarray import Call1 @@ -288,29 +319,68 @@ self.left._invent_numbering(cache, allnumbers) self.right._invent_numbering(cache, allnumbers) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) - self.left._create_iter(iterlist, arraylist, arr.left, res_shape, - chunklist) - self.right._create_iter(iterlist, arraylist, arr.right, res_shape, - chunklist) + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) lhs = self.left.eval(frame, arr.left).convert_to(self.calc_dtype) rhs = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + return self.binfunc(self.calc_dtype, lhs, rhs) def debug_repr(self): return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class BroadcastLeft(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + +class BroadcastRight(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(cache, allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + 
rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class BroadcastBoth(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): - self.right._create_iter(iterlist, arraylist, arr, res_shape, chunklist) + def _create_iter(self, iterlist, arraylist, arr, transforms): + self.right._create_iter(iterlist, arraylist, arr, transforms) def _invent_numbering(self, cache, allnumbers): self.right._invent_numbering(cache, allnumbers) @@ -320,3 +390,63 @@ def eval(self, frame, arr): return self.right.eval(frame, arr) + + def debug_repr(self): + return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + +class SliceloopSignature(Call2): + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ofs = frame.iterators[0].offset + arr.left.setitem(ofs, self.right.eval(frame, arr.right).convert_to( + self.calc_dtype)) + + def debug_repr(self): + return 'SliceLoop(%s, %s, %s)' % (self.name, self.left.debug_repr(), + self.right.debug_repr()) + +class SliceloopBroadcastSignature(SliceloopSignature): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + 
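The `BroadcastTransform` applied by the signature classes above effectively turns broadcast axes into stride-0 axes, so the same elements are revisited on every step along them. A sketch of that stride rewrite (C order and unit itemsize assumed; the helper is illustrative, not the interp_iter API):

```python
def broadcast_strides(shape, strides, target_shape):
    """Sketch of a broadcast transform: missing leading dimensions
    get stride 0, and existing length-1 dimensions also become
    stride 0, so the iterator repeats elements along those axes."""
    extra = len(target_shape) - len(shape)
    return [0] * extra + [
        (s if d != 1 else 0) for s, d in zip(strides, shape)
    ]

# a 1-D array of 3 elements broadcast over a (2, 3) result:
print(broadcast_strides([3], [1], [2, 3]))        # [0, 1]
# a (1, 3) array: the length-1 axis also becomes stride 0
print(broadcast_strides([1, 3], [3, 1], [2, 3]))  # [0, 1]
```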
self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import SliceArray + + assert isinstance(arr, SliceArray) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class AxisReduceSignature(Call2): + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + ConcreteArray + + assert isinstance(arr, AxisReduce) + left = arr.left + assert isinstance(left, ConcreteArray) + iterlist.append(AxisIterator(left.start, arr.dim, arr.shape, + left.strides, left.backstrides)) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + + def _invent_numbering(self, cache, allnumbers): + allnumbers.append(0) + self.right._invent_numbering(cache, allnumbers) + + def _invent_array_numbering(self, arr, cache): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + self.right._invent_array_numbering(arr.right, cache) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + + def debug_repr(self): + return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -1,4 +1,9 @@ +from pypy.rlib import jit + + at jit.look_inside_iff(lambda shape, start, strides, backstrides, chunks: + jit.isconstant(len(chunks)) +) def calculate_slice_strides(shape, start, strides, backstrides, chunks): rstrides = [] rbackstrides = [] diff --git a/pypy/module/micronumpy/test/test_dtypes.py 
b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpypy import dtype + from _numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert dtype(float).num == 12 def test_array_dtype_attr(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpypy import dtype + from _numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,7 +36,7 @@ assert str(d) == "bool" def test_bool_array(self): - from numpypy import array, False_, True_ + from _numpypy import array, False_, True_ a = array([0, 1, 2, 2.5], dtype='?') assert a[0] is False_ @@ -44,7 +44,7 @@ assert a[i] is True_ def test_copy_array_with_dtype(self): - from numpypy import array, False_, True_, int64 + from _numpypy import array, False_, True_, int64 a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit @@ -58,35 +58,35 @@ assert b[0] is False_ def test_zeros_bool(self): - from numpypy import zeros, False_ + from _numpypy import zeros, False_ a = zeros(10, dtype=bool) for i in range(10): assert a[i] is False_ def test_ones_bool(self): - from numpypy import ones, True_ + from _numpypy import ones, True_ a = ones(10, dtype=bool) for i in range(10): assert a[i] is True_ def test_zeros_long(self): - from numpypy import zeros, int64 + from _numpypy import zeros, int64 a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 0 def test_ones_long(self): - from numpypy import ones, int64 + from _numpypy import ones, int64 a = ones(10, dtype=long) for i in range(10): 
assert isinstance(a[i], int64) assert a[1] == 1 def test_overflow(self): - from numpypy import array, dtype + from _numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,19 +156,28 @@ assert b[i] == i * 2 def test_shape(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpypy import dtype + from _numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): 
- import numpypy as numpy + import _numpypy as numpy raises(TypeError, numpy.generic, 0) raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) @@ -181,7 +190,7 @@ raises(TypeError, numpy.inexact, 0) def test_bool(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object] assert numpy.bool_(3) is numpy.True_ @@ -196,7 +205,7 @@ assert numpy.bool_("False") is numpy.True_ def test_int8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -218,7 +227,7 @@ assert numpy.int8('128') == -128 def test_uint8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -241,7 +250,7 @@ assert numpy.uint8('256') == 0 def test_int16(self): - import numpypy as numpy + import _numpypy as numpy x = numpy.int16(3) assert x == 3 @@ -251,7 +260,7 @@ assert numpy.int16('32768') == -32768 def test_uint16(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint16(65535) == 65535 assert numpy.uint16(65536) == 0 @@ -260,7 +269,7 @@ def test_int32(self): import sys - import numpypy as numpy + import _numpypy as numpy x = numpy.int32(23) assert x == 23 @@ -275,7 +284,7 @@ def test_uint32(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint32(10) == 10 @@ -286,14 +295,14 @@ assert numpy.uint32('4294967296') == 0 def test_int_(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int_ is numpy.dtype(int).type assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] def test_int64(self): import sys - import numpypy as numpy + import _numpypy as numpy if sys.maxint == 2 ** 63 -1: assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, 
numpy.integer, numpy.number, numpy.generic, int, object] @@ -315,7 +324,7 @@ def test_uint64(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -330,7 +339,7 @@ raises(OverflowError, numpy.uint64(18446744073709551616)) def test_float32(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, numpy.generic, object] @@ -339,7 +348,7 @@ raises(ValueError, numpy.float32, '23.2df') def test_float64(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] @@ -352,7 +361,7 @@ raises(ValueError, numpy.float64, '23.2df') def test_subclass_type(self): - import numpypy as numpy + import _numpypy as numpy class X(numpy.float64): def m(self): diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -3,33 +3,33 @@ class AppTestNumPyModule(BaseNumpyAppTest): def test_mean(self): - from numpypy import array, mean + from _numpypy import array, mean assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 def test_average(self): - from numpypy import array, average + from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 def test_sum(self): - from numpypy import array, sum + from _numpypy import array, sum assert sum(range(10)) == 45 assert sum(array(range(10))) == 45 def test_min(self): - from numpypy import array, min + from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 def test_max(self): - from numpypy import array, max + from _numpypy import array, max assert max(range(10)) == 9 
assert max(array(range(10))) == 9 def test_constants(self): import math - from numpypy import inf, e, pi + from _numpypy import inf, e, pi assert type(inf) is float assert inf == float("inf") assert e == math.e diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -157,10 +157,13 @@ assert calc_new_strides([2, 3, 4], [8, 3], [1, 16]) is None assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + assert calc_new_strides([105, 1], [3, 5, 7], [35, 7, 1]) == [1, 1] + assert calc_new_strides([1, 105], [3, 5, 7], [35, 7, 1]) == [105, 1] + class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): - from numpypy import ndarray, array, dtype + from _numpypy import ndarray, array, dtype assert type(ndarray) is type assert type(array) is not type @@ -175,12 +178,26 @@ assert a.dtype is dtype(int) def test_type(self): - from numpypy import array + from _numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) + def test_ndim(self): + from _numpypy import array + x = array(0.2) + assert x.ndim == 0 + x = array([1, 2]) + assert x.ndim == 1 + x = array([[1, 2], [3, 4]]) + assert x.ndim == 2 + x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert x.ndim == 3 + # numpy actually raises an AttributeError, but _numpypy raises an + # TypeError + raises(TypeError, 'x.ndim = 3') + def test_init(self): - from numpypy import zeros + from _numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. assert a[10] == 0.0 @@ -189,7 +206,7 @@ assert a[13] == 5.3 def test_size(self): - from numpypy import array + from _numpypy import array assert array(3).size == 1 a = array([1, 2, 3]) assert a.size == 3 @@ -200,13 +217,13 @@ Test that empty() works. 
""" - from numpypy import empty + from _numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpypy import ones + from _numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -215,7 +232,7 @@ assert a[2] == 4 def test_copy(self): - from numpypy import arange, array + from _numpypy import arange, array a = arange(5) b = a.copy() for i in xrange(5): @@ -231,13 +248,17 @@ c = b.copy() assert (c == b).all() + a = arange(15).reshape(5,3) + b = a.copy() + assert (b == a).all() + def test_iterator_init(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a[3] == 3 def test_getitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -246,7 +267,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -256,7 +277,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -264,7 +285,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -275,7 +296,7 @@ assert a[i] == i def test_setslice_array(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -286,7 +307,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -305,7 +326,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = [0., 1.] 
a[1:4:2] = b @@ -313,14 +334,14 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. def test_scalar(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(3) raises(IndexError, "a[0]") raises(IndexError, "a[0] = 5") @@ -329,13 +350,13 @@ assert a.dtype is dtype(int) def test_len(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -344,7 +365,7 @@ assert c.shape == (3,) def test_set_shape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array([]) a.shape = [] a = array(range(12)) @@ -364,7 +385,7 @@ a.shape = (1,) def test_reshape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(12)) exc = raises(ValueError, "b = a.reshape((3, 10))") assert str(exc.value) == "total size of new array must be unchanged" @@ -377,7 +398,7 @@ a.shape = (12, 2) def test_slice_reshape(self): - from numpypy import zeros, arange + from _numpypy import zeros, arange a = zeros((4, 2, 3)) b = a[::2, :, :] b.shape = (2, 6) @@ -413,13 +434,13 @@ raises(ValueError, arange(10).reshape, (5, -1, -1)) def test_reshape_varargs(self): - from numpypy import arange + from _numpypy import arange z = arange(96).reshape(12, -1) y = z.reshape(4, 3, 8) assert y.shape == (4, 3, 8) def test_add(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -432,7 +453,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([i for i in reversed(range(5))]) c = a + b @@ -440,20 +461,20 @@ assert c[i] == 4 def 
test_add_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpypy import array + from _numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpypy import array, ndarray + from _numpypy import array, ndarray a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -462,14 +483,14 @@ assert c[i] == 4 def test_subtract(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -477,35 +498,35 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_scalar_subtract(self): - from numpypy import int32 + from _numpypy import int32 assert int32(2) - 1 == 1 assert 1 - int32(2) == -1 def test_mul(self): - import numpypy + import _numpypy - a = numpypy.array(range(5)) + a = _numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i assert b.dtype is numpypy.dtype(int) - a = numpypy.array(range(5), dtype=bool) + a = _numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpypy.dtype(bool) - assert b[0] is numpypy.False_ + assert b.dtype is _numpypy.dtype(bool) + assert b[0] is _numpypy.False_ for i in range(1, 5): - assert b[i] is numpypy.True_ + assert b[i] is _numpypy.True_ def test_mul_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -513,7 +534,7 @@ def test_div(self): from math import isnan - from numpypy import array, dtype, inf + from _numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -545,7 +566,7 @@ assert 
c[2] == -inf def test_div_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -553,14 +574,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -570,7 +591,7 @@ assert (a ** 2 == a * a).all() def test_pow_other(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -578,14 +599,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) b = a % a for i in range(5): @@ -598,7 +619,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -606,14 +627,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = +a for i in range(5): @@ -624,7 +645,7 @@ assert a[i] == i def test_neg(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = -a for i in range(5): @@ -635,7 +656,7 @@ assert a[i] == -i def test_abs(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = abs(a) for i in range(5): @@ -646,7 +667,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpypy import 
array + from _numpypy import array a = array(range(5)) b = a - 1 a[2] = 3 @@ -660,7 +681,7 @@ assert c[1] == 4 def test_getslice(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -674,7 +695,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpypy import array + from _numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -682,7 +703,7 @@ assert s[i] == a[2 * i + 1] def test_slice_update(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -692,7 +713,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:2] b = array([10, 11]) @@ -706,13 +727,18 @@ assert d[1] == 12 def test_mean(self): - from numpypy import array + from _numpypy import array, mean a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 + a = array(range(105)).reshape(3, 5, 7) + b = mean(a, axis=0) + assert b[0, 0] == 35.
+ assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() + assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() def test_sum(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a.sum() assert b == 10 @@ -721,53 +747,78 @@ a = array([True] * 5, bool) assert a.sum() == 5 + raises(TypeError, 'a.sum(2, 3)') + + def test_reduce_nd(self): + from _numpypy import arange, array, multiply + a = arange(15).reshape(5, 3) + assert a.sum() == 105 + assert a.max() == 14 + assert array([]).sum() == 0.0 + raises(ValueError, 'array([]).max()') + assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(1) == [3, 12, 21, 30, 39]).all() + assert (a.max(0) == [12, 13, 14]).all() + assert (a.max(1) == [2, 5, 8, 11, 14]).all() + assert ((a + a).max() == 28) + assert ((a + a).max(0) == [24, 26, 28]).all() + assert ((a + a).sum(1) == [6, 24, 42, 60, 78]).all() + assert (multiply.reduce(a) == array([0, 3640, 12320])).all() + a = array(range(105)).reshape(3, 5, 7) + assert (a[:, 1, :].sum(0) == [126, 129, 132, 135, 138, 141, 144]).all() + assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() + raises (ValueError, 'a[:, 1, :].sum(2)') + assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() + assert (a.reshape(1,-1).sum(0) == range(105)).all() + assert (a.reshape(1,-1).sum(1) == 5460) + def test_identity(self): - from numpypy import identity, array - from numpypy import int32, float64, dtype + from _numpypy import identity, array + from _numpypy import int32, float64, dtype a = identity(0) assert len(a) == 0 assert a.dtype == dtype('float64') - assert a.shape == (0,0) + assert a.shape == (0, 0) b = identity(1, dtype=int32) assert len(b) == 1 assert b[0][0] == 1 - assert b.shape == (1,1) + assert b.shape == (1, 1) assert b.dtype == dtype('int32') c = identity(2) - assert c.shape == (2,2) - assert (c == [[1,0],[0,1]]).all() + assert c.shape == (2, 2) + assert (c == [[1, 0], [0, 1]]).all() d = identity(3, dtype='int32') - 
assert d.shape == (3,3) + assert d.shape == (3, 3) assert d.dtype == dtype('int32') - assert (d == [[1,0,0],[0,1,0],[0,0,1]]).all() + assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all() def test_prod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a + a).max() == 11.4 def test_min(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) r = a.argmax() assert r == 2 @@ -788,14 +839,14 @@ assert a.argmax() == 2 def test_argmin(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -804,7 +855,7 @@ assert b.all() == True def test_any(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -813,7 +864,7 @@ assert c.any() == False def test_dot(self): - from numpypy import array, dot + from _numpypy import array, dot a = array(range(5)) assert a.dot(a) == 30.0 @@ -839,14 +890,14 @@ [[86, 302, 518], [110, 390, 670], [134, 478, 822]]]).all() def test_dot_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def 
test_dtype_guessing(self): - from numpypy import array, dtype, float64, int8, bool_ + from _numpypy import array, dtype, float64, int8, bool_ assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -863,7 +914,7 @@ def test_comparison(self): import operator - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5)) b = array(range(5), float) @@ -882,7 +933,7 @@ assert c[i] == func(b[i], 3) def test_nonzero(self): - from numpypy import array + from _numpypy import array a = array([1, 2]) raises(ValueError, bool, a) raises(ValueError, bool, a == a) @@ -892,7 +943,7 @@ assert not bool(array([0])) def test_slice_assignment(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[::-1] = a assert (a == [0, 1, 2, 1, 0]).all() @@ -902,8 +953,8 @@ assert (a == [8, 6, 4, 2, 0]).all() def test_debug_repr(self): - from numpypy import zeros, sin - from numpypy.pypy import debug_repr + from _numpypy import zeros, sin + from _numpypy.pypy import debug_repr a = zeros(1) assert debug_repr(a) == 'Array' assert debug_repr(a + a) == 'Call2(add, Array, Array)' @@ -916,8 +967,17 @@ b[0] = 3 assert debug_repr(b) == 'Array' + def test_remove_invalidates(self): + from _numpypy import array + from _numpypy.pypy import remove_invalidates + a = array([1, 2, 3]) + b = a + a + remove_invalidates(a) + a[0] = 14 + assert b[0] == 28 + def test_virtual_views(self): - from numpypy import arange + from _numpypy import arange a = arange(15) c = (a + a) d = c[::2] @@ -935,7 +995,7 @@ assert b[1] == 2 def test_tolist_scalar(self): - from numpypy import int32, bool_ + from _numpypy import int32, bool_ x = int32(23) assert x.tolist() == 23 assert type(x.tolist()) is int @@ -943,13 +1003,13 @@ assert y.tolist() is True def test_tolist_zerodim(self): - from numpypy import array + from _numpypy import array x = array(3) assert x.tolist() == 3 assert type(x.tolist()) is int def test_tolist_singledim(self): - 
from numpypy import array + from _numpypy import array a = array(range(5)) assert a.tolist() == [0, 1, 2, 3, 4] assert type(a.tolist()[0]) is int @@ -957,41 +1017,55 @@ assert b.tolist() == [0.2, 0.4, 0.6] def test_tolist_multidim(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4]]) assert a.tolist() == [[1, 2], [3, 4]] def test_tolist_view(self): - from numpypy import array - a = array([[1,2],[3,4]]) + from _numpypy import array + a = array([[1, 2], [3, 4]]) assert (a + a).tolist() == [[2, 4], [6, 8]] def test_tolist_slice(self): - from numpypy import array + from _numpypy import array a = array([[17.1, 27.2], [40.3, 50.3]]) - assert a[:,0].tolist() == [17.1, 40.3] + assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] + def test_var(self): + from _numpypy import array + a = array(range(10)) + assert a.var() == 8.25 + a = array([5.0]) + assert a.var() == 0.0 + + def test_std(self): + from _numpypy import array + a = array(range(10)) + assert a.std() == 2.8722813232690143 + a = array([5.0]) + assert a.std() == 0.0 + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): - import numpypy - a = numpypy.zeros((2, 2)) + import _numpypy + a = _numpypy.zeros((2, 2)) assert len(a) == 2 def test_shape(self): - import numpypy - assert numpypy.zeros(1).shape == (1,) - assert numpypy.zeros((2, 2)).shape == (2, 2) - assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) - assert numpypy.array([[1], [2], [3]]).shape == (3, 1) - assert len(numpypy.zeros((3, 1, 2))) == 3 - raises(TypeError, len, numpypy.zeros(())) - raises(ValueError, numpypy.array, [[1, 2], 3]) + import _numpypy + assert _numpypy.zeros(1).shape == (1,) + assert _numpypy.zeros((2, 2)).shape == (2, 2) + assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) + assert _numpypy.array([[1], [2], [3]]).shape == (3, 1) + assert len(_numpypy.zeros((3, 1, 2))) == 3 + raises(TypeError, len, _numpypy.zeros(())) + raises(ValueError, _numpypy.array, [[1, 2], 3]) 
def test_getsetitem(self): - import numpypy - a = numpypy.zeros((2, 3, 1)) + import _numpypy + a = _numpypy.zeros((2, 3, 1)) raises(IndexError, a.__getitem__, (2, 0, 0)) raises(IndexError, a.__getitem__, (0, 3, 0)) raises(IndexError, a.__getitem__, (0, 0, 1)) @@ -1002,8 +1076,8 @@ assert a[1, -1, 0] == 3 def test_slices(self): - import numpypy - a = numpypy.zeros((4, 3, 2)) + import _numpypy + a = _numpypy.zeros((4, 3, 2)) raises(IndexError, a.__getitem__, (4,)) raises(IndexError, a.__getitem__, (3, 3)) raises(IndexError, a.__getitem__, (slice(None), 3)) @@ -1036,51 +1110,51 @@ assert a[1][2][1] == 15 def test_init_2(self): - import numpypy - raises(ValueError, numpypy.array, [[1], 2]) - raises(ValueError, numpypy.array, [[1, 2], [3]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]]) - a = numpypy.array([[1, 2], [4, 5]]) + import _numpypy + raises(ValueError, _numpypy.array, [[1], 2]) + raises(ValueError, _numpypy.array, [[1, 2], [3]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]]) + a = _numpypy.array([[1, 2], [4, 5]]) assert a[0, 1] == 2 assert a[0][1] == 2 - a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) + a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) assert (a[0, 1] == [3, 4]).all() def test_setitem_slice(self): - import numpypy - a = numpypy.zeros((3, 4)) + import _numpypy + a = _numpypy.zeros((3, 4)) a[1] = [1, 2, 3, 4] assert a[1, 2] == 3 raises(TypeError, a[1].__setitem__, [1, 2, 3]) - a = numpypy.array([[1, 2], [3, 4]]) + a = _numpypy.array([[1, 2], [3, 4]]) assert (a == [[1, 2], [3, 4]]).all() - a[1] = numpypy.array([5, 6]) + a[1] = _numpypy.array([5, 6]) assert (a == [[1, 2], [5, 6]]).all() - a[:, 1] = numpypy.array([8, 10]) + a[:, 1] = _numpypy.array([8, 10]) assert (a == [[1, 8], [5, 10]]).all() - a[0, :: -1] = numpypy.array([11, 12]) + a[0, :: -1] = _numpypy.array([11, 12]) assert (a == [[12, 11], [5, 
10]]).all() def test_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert ((a + a) == \ array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all() def test_getitem_add(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) assert (a + a)[1, 1] == 8 def test_ufunc_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([[1, 2], [3, 4]]) b = negative(a + a) assert (b == [[-2, -4], [-6, -8]]).all() def test_getitem_3(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]) b = a[::2] @@ -1091,37 +1165,37 @@ assert c[1][1] == 12 def test_multidim_ones(self): - from numpypy import ones + from _numpypy import ones a = ones((1, 2, 3)) assert a[0, 1, 2] == 1.0 def test_multidim_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((3, 3)) b = ones((3, 3)) - a[:,1:3] = b[:,1:3] + a[:, 1:3] = b[:, 1:3] assert (a == [[0, 1, 1], [0, 1, 1], [0, 1, 1]]).all() a = zeros((3, 3)) b = ones((3, 3)) - a[:,::2] = b[:,::2] + a[:, ::2] = b[:, ::2] assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all() def test_broadcast_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) b = array([5, 6]) c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]]) assert c.all() def test_broadcast_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((10, 10)) b = ones(10) a[:, :] = b assert a[3, 5] == 1 def test_broadcast_shape_agreement(self): - from numpypy import zeros, array + from _numpypy import zeros, array a = zeros((3, 1, 3)) b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32))) c = ((a + b) == [b, b, b]) @@ -1135,7 +1209,7 @@ assert c.all() def test_broadcast_scalar(self): - from numpypy import zeros + from 
_numpypy import zeros a = zeros((4, 5), 'd') a[:, 1] = 3 assert a[2, 1] == 3 @@ -1146,14 +1220,14 @@ assert a[3, 2] == 0 def test_broadcast_call2(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((4, 1, 5)) b = ones((4, 3, 5)) b[:] = (a + a) assert (b == zeros((4, 3, 5))).all() def test_broadcast_virtualview(self): - from numpypy import arange, zeros + from _numpypy import arange, zeros a = arange(8).reshape([2, 2, 2]) b = (a + a)[1, 1] c = zeros((2, 2, 2)) @@ -1161,13 +1235,13 @@ assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all() def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert a.argmax() == 5 assert a[:2, ].argmax() == 3 def test_broadcast_wrong_shapes(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((4, 3, 2)) b = zeros((4, 2)) exc = raises(ValueError, lambda: a + b) @@ -1175,7 +1249,7 @@ " together with shapes (4,3,2) (4,2)" def test_reduce(self): - from numpypy import array + from _numpypy import array a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) assert a.sum() == (13 * 12) / 2 b = a[1:, 1::2] @@ -1183,7 +1257,7 @@ assert c.sum() == (6 + 8 + 10 + 12) * 2 def test_transpose(self): - from numpypy import array + from _numpypy import array a = array(((range(3), range(3, 6)), (range(6, 9), range(9, 12)), (range(12, 15), range(15, 18)), @@ -1202,7 +1276,7 @@ assert(b[:, 0] == a[0, :]).all() def test_flatiter(self): - from numpypy import array, flatiter + from _numpypy import array, flatiter a = array([[10, 30], [40, 60]]) f_iter = a.flat assert f_iter.next() == 10 @@ -1217,23 +1291,23 @@ assert s == 140 def test_flatiter_array_conv(self): - from numpypy import array, dot + from _numpypy import array, dot a = array([1, 2, 3]) assert dot(a.flat, a.flat) == 14 def test_flatiter_varray(self): - from numpypy import ones + from _numpypy import ones a = ones((2, 2)) assert list(((a + a).flat)) == [2, 2, 2, 2] def 
test_slice_copy(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((10, 10)) b = a[0].copy() assert (b == zeros(10)).all() def test_array_interface(self): - from numpypy import array + from _numpypy import array a = array([1, 2, 3]) i = a.__array_interface__ assert isinstance(i['data'][0], int) @@ -1242,6 +1316,7 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct @@ -1254,7 +1329,7 @@ def test_fromstring(self): import sys - from numpypy import fromstring, array, uint8, float32, int32 + from _numpypy import fromstring, array, uint8, float32, int32 a = fromstring(self.data) for i in range(4): @@ -1284,17 +1359,17 @@ assert g[1] == 2 assert g[2] == 3 h = fromstring("1, , 2, 3", dtype=uint8, sep=",") - assert (h == [1,0,2,3]).all() + assert (h == [1, 0, 2, 3]).all() i = fromstring("1 2 3", dtype=uint8, sep=" ") - assert (i == [1,2,3]).all() + assert (i == [1, 2, 3]).all() j = fromstring("1\t\t\t\t2\t3", dtype=uint8, sep="\t") - assert (j == [1,2,3]).all() + assert (j == [1, 2, 3]).all() k = fromstring("1,x,2,3", dtype=uint8, sep=",") - assert (k == [1,0]).all() + assert (k == [1, 0]).all() l = fromstring("1,x,2,3", dtype='float32', sep=",") - assert (l == [1.0,-1.0]).all() + assert (l == [1.0, -1.0]).all() m = fromstring("1,,2,3", sep=",") - assert (m == [1.0,-1.0,2.0,3.0]).all() + assert (m == [1.0, -1.0, 2.0, 3.0]).all() n = fromstring("3.4 2.0 3.8 2.2", dtype=int32, sep=" ") assert (n == [3]).all() o = fromstring("1.0 2f.0f 3.8 2.2", dtype=float32, sep=" ") @@ -1318,7 +1393,7 @@ assert (u == [1, 0]).all() def test_fromstring_types(self): - from numpypy import (fromstring, int8, int16, int32, int64, uint8, + from _numpypy import (fromstring, int8, int16, int32, int64, uint8, uint16, uint32, float32, float64) a = fromstring('\xFF', dtype=int8) @@ -1342,9 +1417,8 @@ j = fromstring(self.ulongval, dtype='L') assert j[0] == 
12 - def test_fromstring_invalid(self): - from numpypy import fromstring, uint16, uint8, int32 + from _numpypy import fromstring, uint16, uint8, int32 #default dtype is 64-bit float, so 3 bytes should fail raises(ValueError, fromstring, "\x01\x02\x03") #3 bytes is not modulo 2 bytes (int16) @@ -1355,7 +1429,8 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): - from numpypy import array, zeros + from _numpypy import array, zeros + int_size = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -1363,14 +1438,26 @@ a = zeros(1001) assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" a = array(range(5), long) - assert repr(a) == "array([0, 1, 2, 3, 4])" + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" + a = array(range(5), 'int32') + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" a = array([], long) assert repr(a) == "array([], dtype=int64)" a = array([True, False, True, False], "?") assert repr(a) == "array([True, False, True, False], dtype=bool)" + a = zeros([]) + assert repr(a) == "array(0.0)" + a = array(0.2) + assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import array, zeros + from _numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1383,9 +1470,19 @@ [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]])''' + a = arange(1002).reshape((2, 501)) + assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 500], + [501, 502, 503, ..., 999, 1000, 1001]])''' + assert repr(a.T) == '''array([[0, 501], + [1, 502], + [2, 503], + ..., + [498, 999], + [499, 1000], + [500, 1001]])''' def test_repr_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = 
array(range(5), float) b = a[1::2] assert repr(b) == "array([1.0, 3.0])" @@ -1400,7 +1497,7 @@ assert repr(b) == "array([], shape=(0, 5), dtype=int16)" def test_str(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -1426,14 +1523,14 @@ a = zeros((400, 400), dtype=int) assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \ + " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \ " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" a = zeros((2, 2, 2)) r = str(a) assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]' def test_str_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" @@ -1449,7 +1546,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_arange(self): - from numpypy import arange, array, dtype + from _numpypy import arange, array, dtype a = arange(3) assert (a == [0, 1, 2]).all() assert a.dtype is dtype(int) @@ -1471,10 +1568,14 @@ class AppTestRanges(BaseNumpyAppTest): def test_app_reshape(self): - from numpypy import arange, array, dtype, reshape + from _numpypy import arange, array, dtype, reshape a = arange(12) b = reshape(a, (3, 4)) assert b.shape == (3, 4) a = range(12) b = reshape(a, (3, 4)) assert b.shape == (3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert a.reshape(1, -1).shape == (1, 105) + assert a.reshape(1, 1, -1).shape == (1, 1, 105) + assert a.reshape(-1, 1, 1).shape == (105, 1, 1) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpypy import add, ufunc + from _numpypy import 
add, ufunc assert isinstance(add, ufunc) assert repr(add) == "" assert repr(ufunc) == "" def test_ufunc_attrs(self): - from numpypy import add, multiply, sin + from _numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,7 +22,7 @@ assert sin.nin == 1 def test_wrong_arguments(self): - from numpypy import add, sin + from _numpypy import add, sin raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) @@ -30,14 +30,14 @@ raises(ValueError, sin) def test_single_item(self): - from numpypy import negative, sign, minimum + from _numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpypy import array, ndarray, negative, minimum + from _numpypy import array, ndarray, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpypy import array, absolute + from _numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from numpypy import array, add + from _numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpypy import array, divide + from _numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -114,7 +114,7 @@ assert (divide(array([-10]), array([2])) == array([-5])).all() def test_fabs(self): - from numpypy import array, fabs + from _numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -123,7 +123,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpypy import 
array, minimum + from _numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -132,7 +132,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpypy import array, maximum + from _numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -145,7 +145,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpypy import array, multiply + from _numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -154,7 +154,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpypy import array, sign, dtype + from _numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -173,7 +173,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpypy import array, reciprocal + from _numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -182,7 +182,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpypy import array, subtract + from _numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -191,7 +191,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpypy import array, floor + from _numpypy import array, floor reference = [-2.0, -1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -200,7 +200,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpypy import array, copysign + from _numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -216,7 +216,7 @@ def test_exp(self): import math - from numpypy import array, exp + from _numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), -float('inf'), -12343424.0]) @@ -230,7 +230,7 @@ def test_sin(self): import math - from numpypy import array, sin + from _numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, 
math.pi*2]) b = sin(a) @@ -243,7 +243,7 @@ def test_cos(self): import math - from numpypy import array, cos + from _numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -252,7 +252,7 @@ def test_tan(self): import math - from numpypy import array, tan + from _numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -262,7 +262,7 @@ def test_arcsin(self): import math - from numpypy import array, arcsin + from _numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -276,7 +276,7 @@ def test_arccos(self): import math - from numpypy import array, arccos + from _numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -291,20 +291,20 @@ def test_arctan(self): import math - from numpypy import array, arctan + from _numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) for i in range(len(a)): assert b[i] == math.atan(a[i]) - a = array([float('nan')]) + a = array([float('nan')]) b = arctan(a) assert math.isnan(b[0]) def test_arcsinh(self): import math - from numpypy import arcsinh, inf + from _numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -312,7 +312,7 @@ def test_arctanh(self): import math - from numpypy import arctanh + from _numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == arctanh(v) @@ -323,7 +323,7 @@ def test_sqrt(self): import math - from numpypy import sqrt + from _numpypy import sqrt nan, inf = float("nan"), float("inf") data = [1, 2, 3, inf] @@ -333,22 +333,28 @@ assert math.isnan(sqrt(nan)) def test_reduce_errors(self): - from numpypy import sin, add + from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) - raises(TypeError, add.reduce, 1) + raises(ValueError, add.reduce, 1) - def test_reduce(self): - from numpypy import add, maximum + def 
test_reduce_1d(self): + from _numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 assert maximum.reduce([1, 2, 3]) == 3 raises(ValueError, maximum.reduce, []) + def test_reduceND(self): + from numpypy import add, arange + a = arange(12).reshape(3, 4) + assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() + assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_comparisons(self): import operator - from numpypy import equal, not_equal, less, less_equal, greater, greater_equal + from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -47,6 +47,8 @@ def f(i): interp = InterpreterState(codes[i]) interp.run(space) + if not len(interp.results): + raise Exception("need results") w_res = interp.results[-1] if isinstance(w_res, BaseArray): concr = w_res.get_concrete_or_scalar() @@ -115,6 +117,28 @@ "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) + def define_axissum(): + return """ + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] + b = sum(a,0) + b -> 1 + """ + + def test_axissum(self): + result = self.run("axissum") + assert result == 30 + # XXX note - the bridge here is fairly crucial and yet it's pretty + # bogus. We need to improve the situation somehow. + self.check_simple_loop({'getinteriorfield_raw': 2, + 'setinteriorfield_raw': 1, + 'arraylen_gc': 1, + 'guard_true': 1, + 'int_lt': 1, + 'jump': 1, + 'float_add': 1, + 'int_add': 3, + }) + def define_prod(): return """ a = |30| @@ -193,9 +217,9 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. 
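The new test_reduceND above checks ufunc reduction along an axis, the feature this commit adds to micronumpy. The same semantics can be sketched with standard NumPy (used here only as a stand-in for PyPy's `_numpypy`, which is not importable outside a translated PyPy):

```python
import numpy as np  # stand-in for _numpypy; add.reduce behaves the same

a = np.arange(12).reshape(3, 4)
# axis 0 sums down the columns: 0+4+8, 1+5+9, 2+6+10, 3+7+11
col_sums = np.add.reduce(a, 0)
# axis 1 sums across the rows: 0+1+2+3, 4+5+6+7, 8+9+10+11
row_sums = np.add.reduce(a, 1)
print(col_sums)  # [12 15 18 21]
print(row_sums)  # [ 6 22 38]
```

These are exactly the expected values asserted in test_reduceND.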
- self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 26, + self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, - 'getfield_gc_pure': 4, + 'getfield_gc_pure': 8, 'guard_class': 8, 'int_add': 8, 'float_mul': 2, 'jump': 2, 'int_ge': 4, 'getinteriorfield_raw': 4, 'float_add': 2, @@ -212,7 +236,8 @@ def test_ufunc(self): result = self.run("ufunc") assert result == -6 - self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, + self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, + "float_neg": 1, "setinteriorfield_raw": 1, "int_add": 2, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -322,10 +347,9 @@ result = self.run("setslice") assert result == 11.0 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, - 'setinteriorfield_raw': 1, 'int_add': 3, - 'int_lt': 1, 'guard_true': 1, 'jump': 1, - 'arraylen_gc': 3}) + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, + 'setinteriorfield_raw': 1, 'int_add': 2, + 'int_eq': 1, 'guard_false': 1, 'jump': 1}) def define_virtual_slice(): return """ @@ -339,11 +363,12 @@ result = self.run("virtual_slice") assert result == 4 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) + class TestNumpyOld(LLJitMixin): def setup_class(cls): py.test.skip("old") @@ -377,4 +402,3 @@ result = self.meta_interp(f, [5], listops=True, backendopt=True) assert result == f(5) - diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -7,16 +7,21 @@ interpleveldefs = { 'set_param': 'interp_jit.set_param', 'residual_call': 'interp_jit.residual_call', - 
'set_compile_hook': 'interp_jit.set_compile_hook', - 'DebugMergePoint': 'interp_resop.W_DebugMergePoint', + 'set_compile_hook': 'interp_resop.set_compile_hook', + 'set_optimize_hook': 'interp_resop.set_optimize_hook', + 'set_abort_hook': 'interp_resop.set_abort_hook', + 'ResOperation': 'interp_resop.WrappedOp', + 'Box': 'interp_resop.WrappedBox', } def setup_after_space_initialization(self): # force the __extend__ hacks to occur early from pypy.module.pypyjit.interp_jit import pypyjitdriver + from pypy.module.pypyjit.policy import pypy_hooks # add the 'defaults' attribute from pypy.rlib.jit import PARAMETERS space = self.space pypyjitdriver.space = space w_obj = space.wrap(PARAMETERS) space.setattr(space.wrap(self), space.wrap('defaults'), w_obj) + pypy_hooks.space = space diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -13,11 +13,7 @@ from pypy.interpreter.pycode import PyCode, CO_GENERATOR from pypy.interpreter.pyframe import PyFrame from pypy.interpreter.pyopcode import ExitFrame -from pypy.interpreter.gateway import unwrap_spec from opcode import opmap -from pypy.rlib.nonconst import NonConstant -from pypy.jit.metainterp.resoperation import rop -from pypy.module.pypyjit.interp_resop import debug_merge_point_from_boxes PyFrame._virtualizable2_ = ['last_instr', 'pycode', 'valuestackdepth', 'locals_stack_w[*]', @@ -51,72 +47,19 @@ def should_unroll_one_iteration(next_instr, is_being_profiled, bytecode): return (bytecode.co_flags & CO_GENERATOR) != 0 -def wrap_oplist(space, logops, operations): - list_w = [] - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - list_w.append(space.wrap(debug_merge_point_from_boxes( - op.getarglist()))) - else: - list_w.append(space.wrap(logops.repr_of_resop(op))) - return list_w - class PyPyJitDriver(JitDriver): reds = ['frame', 'ec'] greens = ['next_instr', 'is_being_profiled', 'pycode'] 
virtualizables = ['frame'] - def on_compile(self, logger, looptoken, operations, type, next_instr, - is_being_profiled, ll_pycode): - from pypy.rpython.annlowlevel import cast_base_ptr_to_instance - - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - pycode = cast_base_ptr_to_instance(PyCode, ll_pycode) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap(type), - space.newtuple([pycode, - space.wrap(next_instr), - space.wrap(is_being_profiled)]), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap('bridge'), - space.wrap(n), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - pypyjitdriver = PyPyJitDriver(get_printable_location = get_printable_location, get_jitcell_at = get_jitcell_at, set_jitcell_at = set_jitcell_at, confirm_enter_jit = confirm_enter_jit, can_never_inline = can_never_inline, should_unroll_one_iteration = - should_unroll_one_iteration) + should_unroll_one_iteration, + name='pypyjit') class __extend__(PyFrame): @@ -223,34 +166,3 @@ '''For testing. 
Invokes callable(...), but without letting the JIT follow the call.''' return space.call_args(w_callable, __args__) - -class Cache(object): - in_recursion = False - - def __init__(self, space): - self.w_compile_hook = space.w_None - -def set_compile_hook(space, w_hook): - """ set_compile_hook(hook) - - Set a compiling hook that will be called each time a loop is compiled. - The hook will be called with the following signature: - hook(merge_point_type, loop_type, greenkey or guard_number, operations) - - for now merge point type is always `main` - - loop_type can be either `loop` `entry_bridge` or `bridge` - in case loop is not `bridge`, greenkey will be a set of constants - for jit merge point. in case it's `main` it'll be a tuple - (code, offset, is_being_profiled) - - Note that jit hook is not reentrant. It means that if the code - inside the jit hook is itself jitted, it will get compiled, but the - jit hook won't be called for that. - - XXX write down what else - """ - cache = space.fromcache(Cache) - cache.w_compile_hook = w_hook - cache.in_recursion = NonConstant(False) - return space.w_None diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,41 +1,197 @@ -from pypy.interpreter.typedef import TypeDef, interp_attrproperty +from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import unwrap_spec, interp2app +from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode -from pypy.rpython.lltypesystem import lltype -from pypy.rpython.annlowlevel import cast_base_ptr_to_instance +from pypy.interpreter.error import OperationError +from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.annlowlevel import cast_base_ptr_to_instance, hlstr from pypy.rpython.lltypesystem.rclass 
import OBJECT +from pypy.jit.metainterp.resoperation import rop, AbstractResOp +from pypy.rlib.nonconst import NonConstant +from pypy.rlib import jit_hooks -class W_DebugMergePoint(Wrappable): - """ A class representing debug_merge_point JIT operation +class Cache(object): + in_recursion = False + + def __init__(self, space): + self.w_compile_hook = space.w_None + self.w_abort_hook = space.w_None + self.w_optimize_hook = space.w_None + +def wrap_greenkey(space, jitdriver, greenkey, greenkey_repr): + if greenkey is None: + return space.w_None + jitdriver_name = jitdriver.name + if jitdriver_name == 'pypyjit': + next_instr = greenkey[0].getint() + is_being_profiled = greenkey[1].getint() + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + greenkey[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, ll_code) + return space.newtuple([space.wrap(pycode), space.wrap(next_instr), + space.newbool(bool(is_being_profiled))]) + else: + return space.wrap(greenkey_repr) + +def set_compile_hook(space, w_hook): + """ set_compile_hook(hook) + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop`, `entry_bridge` or `bridge`; + in case the loop is not a `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where the assembler starts, + can be accessed via ctypes, assembler_length is the length of the compiled + asm + + Note that the jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that.
""" + cache = space.fromcache(Cache) + cache.w_compile_hook = w_hook + cache.in_recursion = NonConstant(False) - def __init__(self, mp_no, offset, pycode): - self.mp_no = mp_no +def set_optimize_hook(space, w_hook): + """ set_optimize_hook(hook) + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows to add additional + optimizations on Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop` it'll be a tuple + (code, offset, is_being_profiled) + + Note that jit hook is not reentrant. It means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + Result value will be the resulting list of operations, or None + """ + cache = space.fromcache(Cache) + cache.w_optimize_hook = w_hook + cache.in_recursion = NonConstant(False) + +def set_abort_hook(space, w_hook): + """ set_abort_hook(hook) + + Set a hook (callable) that will be called each time there is tracing + aborted due to some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Where reason is the reason for abort, see documentation for set_compile_hook + for descriptions of other arguments. 
+ """ + cache = space.fromcache(Cache) + cache.w_abort_hook = w_hook + cache.in_recursion = NonConstant(False) + +def wrap_oplist(space, logops, operations, ops_offset=None): + l_w = [] + for op in operations: + if ops_offset is None: + ofs = -1 + else: + ofs = ops_offset.get(op, 0) + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) + return l_w + +class WrappedBox(Wrappable): + """ A class representing a single box + """ + def __init__(self, llbox): + self.llbox = llbox + + def descr_getint(self, space): + return space.wrap(jit_hooks.box_getint(self.llbox)) + + at unwrap_spec(no=int) +def descr_new_box(space, w_tp, no): + return WrappedBox(jit_hooks.boxint_new(no)) + +WrappedBox.typedef = TypeDef( + 'Box', + __new__ = interp2app(descr_new_box), + getint = interp2app(WrappedBox.descr_getint), +) + + at unwrap_spec(num=int, offset=int, repr=str, res=WrappedBox) +def descr_new_resop(space, w_tp, num, w_args, res, offset=-1, + repr=''): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + if res is None: + llres = jit_hooks.emptyval() + else: + llres = res.llbox + return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) + +class WrappedOp(Wrappable): + """ A class representing a single ResOperation, wrapped nicely + """ + def __init__(self, op, offset, repr_of_resop): + self.op = op self.offset = offset - self.pycode = pycode + self.repr_of_resop = repr_of_resop def descr_repr(self, space): - return space.wrap('DebugMergePoint()') + return space.wrap(self.repr_of_resop) - at unwrap_spec(mp_no=int, offset=int, pycode=PyCode) -def new_debug_merge_point(space, w_tp, mp_no, offset, pycode): - return W_DebugMergePoint(mp_no, offset, pycode) + def descr_num(self, space): + return space.wrap(jit_hooks.resop_getopnum(self.op)) -def debug_merge_point_from_boxes(boxes): - mp_no = boxes[0].getint() - offset = boxes[2].getint() - llcode = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), - 
boxes[4].getref_base()) - pycode = cast_base_ptr_to_instance(PyCode, llcode) - assert pycode is not None - return W_DebugMergePoint(mp_no, offset, pycode) + def descr_name(self, space): + return space.wrap(hlstr(jit_hooks.resop_getopname(self.op))) -W_DebugMergePoint.typedef = TypeDef( - 'DebugMergePoint', - __new__ = interp2app(new_debug_merge_point), - __doc__ = W_DebugMergePoint.__doc__, - __repr__ = interp2app(W_DebugMergePoint.descr_repr), - code = interp_attrproperty('pycode', W_DebugMergePoint), + @unwrap_spec(no=int) + def descr_getarg(self, space, no): + return WrappedBox(jit_hooks.resop_getarg(self.op, no)) + + @unwrap_spec(no=int, box=WrappedBox) + def descr_setarg(self, space, no, box): + jit_hooks.resop_setarg(self.op, no, box.llbox) + + def descr_getresult(self, space): + return WrappedBox(jit_hooks.resop_getresult(self.op)) + + def descr_setresult(self, space, w_box): + box = space.interp_w(WrappedBox, w_box) + jit_hooks.resop_setresult(self.op, box.llbox) + +WrappedOp.typedef = TypeDef( + 'ResOperation', + __doc__ = WrappedOp.__doc__, + __new__ = interp2app(descr_new_resop), + __repr__ = interp2app(WrappedOp.descr_repr), + num = GetSetProperty(WrappedOp.descr_num), + name = GetSetProperty(WrappedOp.descr_name), + getarg = interp2app(WrappedOp.descr_getarg), + setarg = interp2app(WrappedOp.descr_setarg), + result = GetSetProperty(WrappedOp.descr_getresult, + WrappedOp.descr_setresult) ) +WrappedOp.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -1,4 +1,112 @@ from pypy.jit.codewriter.policy import JitPolicy +from pypy.rlib.jit import JitHookInterface +from pypy.rlib import jit_hooks +from pypy.interpreter.error import OperationError +from pypy.jit.metainterp.jitprof import counter_names +from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\ + WrappedOp + +class 
PyPyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_abort_hook): + cache.in_recursion = True + try: + try: + space.call_function(cache.w_abort_hook, + space.wrap(jitdriver.name), + wrap_greenkey(space, jitdriver, + greenkey, greenkey_repr), + space.wrap(counter_names[reason])) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_abort_hook) + finally: + cache.in_recursion = False + + def after_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._compile_hook(debug_info, w_greenkey) + + def after_compile_bridge(self, debug_info): + self._compile_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) + + def before_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._optimize_hook(debug_info, w_greenkey) + + def before_compile_bridge(self, debug_info): + self._optimize_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) + + def _compile_hook(self, debug_info, w_arg): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_compile_hook): + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations, + debug_info.asminfo.ops_offset) + cache.in_recursion = True + try: + try: + jd_name = debug_info.get_jitdriver().name + asminfo = debug_info.asminfo + space.call_function(cache.w_compile_hook, + space.wrap(jd_name), + space.wrap(debug_info.type), + w_arg, + space.newlist(list_w), + space.wrap(asminfo.asmaddr), + space.wrap(asminfo.asmlen)) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + 
cache.in_recursion = False + + def _optimize_hook(self, debug_info, w_arg): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_optimize_hook): + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations) + cache.in_recursion = True + try: + try: + jd_name = debug_info.get_jitdriver().name + w_res = space.call_function(cache.w_optimize_hook, + space.wrap(jd_name), + space.wrap(debug_info.type), + w_arg, + space.newlist(list_w)) + if space.is_w(w_res, space.w_None): + return + l = [] + for w_item in space.listview(w_res): + item = space.interp_w(WrappedOp, w_item) + l.append(jit_hooks._cast_to_resop(item.op)) + del debug_info.operations[:] # modifying operations above is + # probably not a great idea since types may not work + # and we'll end up with half-working list and + # a segfault/fatal RPython error + for elem in l: + debug_info.operations.append(elem) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + cache.in_recursion = False + +pypy_hooks = PyPyJitIface() class PyPyJitPolicy(JitPolicy): @@ -12,12 +120,16 @@ modname == 'thread.os_thread'): return True if '.' 
in modname: - modname, _ = modname.split('.', 1) + modname, rest = modname.split('.', 1) + else: + rest = '' if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions', 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', 'mmap', 'marshal']: + if modname == 'pypyjit' and 'interp_resop' in rest: + return False return True return False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -1,22 +1,40 @@ import py from pypy.conftest import gettestobjspace, option +from pypy.interpreter.gateway import interp2app from pypy.interpreter.pycode import PyCode -from pypy.interpreter.gateway import interp2app -from pypy.jit.metainterp.history import JitCellToken -from pypy.jit.metainterp.resoperation import ResOperation, rop +from pypy.jit.metainterp.history import JitCellToken, ConstInt, ConstPtr +from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.logger import Logger from pypy.rpython.annlowlevel import (cast_instance_to_base_ptr, cast_base_ptr_to_instance) from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.lltypesystem.rclass import OBJECT from pypy.module.pypyjit.interp_jit import pypyjitdriver +from pypy.module.pypyjit.policy import pypy_hooks from pypy.jit.tool.oparser import parse from pypy.jit.metainterp.typesystem import llhelper +from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG +from pypy.rlib.jit import JitDebugInfo, AsmInfo + +class MockJitDriverSD(object): + class warmstate(object): + @staticmethod + def get_location_str(boxes): + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + boxes[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, ll_code) + return pycode.co_name + + jitdriver = pypyjitdriver + class MockSD(object): class cpu(object): 
ts = llhelper + jitdrivers_sd = [MockJitDriverSD] + class AppTestJitHook(object): def setup_class(cls): if option.runappdirect: @@ -24,9 +42,9 @@ space = gettestobjspace(usemodules=('pypyjit',)) cls.space = space w_f = space.appexec([], """(): - def f(): + def function(): pass - return f + return function """) cls.w_f = w_f ll_code = cast_instance_to_base_ptr(w_f.code) @@ -34,41 +52,73 @@ logger = Logger(MockSD()) oplist = parse(""" - [i1, i2] + [i1, i2, p2] i3 = int_add(i1, i2) debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0)) + guard_nonnull(p2) [] guard_true(i3) [] """, namespace={'ptr0': code_gcref}).operations + greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] + offset = {} + for i, op in enumerate(oplist): + if i != 1: + offset[op] = i + + di_loop = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop_optimize = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop.asminfo = AsmInfo(offset, 0, 0) + di_bridge = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'bridge', fail_descr_no=0) + di_bridge.asminfo = AsmInfo(offset, 0, 0) def interp_on_compile(): - pypyjitdriver.on_compile(logger, JitCellToken(), oplist, 'loop', - 0, False, ll_code) + di_loop.oplist = cls.oplist + pypy_hooks.after_compile(di_loop) def interp_on_compile_bridge(): - pypyjitdriver.on_compile_bridge(logger, JitCellToken(), oplist, 0) + pypy_hooks.after_compile_bridge(di_bridge) + + def interp_on_optimize(): + di_loop_optimize.oplist = cls.oplist + pypy_hooks.before_compile(di_loop_optimize) + + def interp_on_abort(): + pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey, + 'blah') cls.w_on_compile = space.wrap(interp2app(interp_on_compile)) cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) + cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) + cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) + 
cls.orig_oplist = oplist + + def setup_method(self, meth): + self.__class__.oplist = self.orig_oplist[:] def test_on_compile(self): import pypyjit all = [] - def hook(*args): - assert args[0] == 'main' - assert args[1] in ['loop', 'bridge'] - all.append(args[2:]) + def hook(name, looptype, tuple_or_guard_no, ops, asmstart, asmlen): + all.append((name, looptype, tuple_or_guard_no, ops)) From noreply at buildbot.pypy.org Sat Jan 14 22:35:57 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sat, 14 Jan 2012 22:35:57 +0100 (CET) Subject: [pypy-commit] pypy matrixmath-dot: fix merge Message-ID: <20120114213557.4AADB71025D@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: matrixmath-dot Changeset: r51332:cdee39d4ae23 Date: 2012-01-14 23:05 +0200 http://bitbucket.org/pypy/pypy/changeset/cdee39d4ae23/ Log: fix merge diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -194,7 +194,8 @@ n_new_elems_used = 1 oldI = -1 n_old_elems_to_use = old_shape[-1] - for s in new_shape[::-1]: + for i in range(len(new_shape) - 1, -1, -1): + s = new_shape[i] new_strides.insert(0, cur_step * n_new_elems_used) n_new_elems_used *= s while n_new_elems_used > n_old_elems_to_use: @@ -383,7 +384,6 @@ the second-to-last of `b`:: dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])''' - #numpy's doc string :) w_other = convert_to_array(space, w_other) if isinstance(w_other, Scalar): return self.descr_mul(space, w_other) @@ -420,10 +420,13 @@ for os in out_shape: out_size *= os out_ndims = len(out_shape) - #TODO: what should the order be? C or F? + # TODO: what should the order be? C or F? arr = W_NDimArray(out_size, out_shape, dtype=dtype) - out_iter = view_iter_from_arr(arr) - #TODO: invalidate self, w_other with arr ? 
+ # TODO: this is all a bogus mess of previous work, + # rework within the context of transformations + ''' + out_iter = ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape) + # TODO: invalidate self, w_other with arr ? while not out_iter.done(): my_index = self.start other_index = w_other.start @@ -443,6 +446,7 @@ value = w_res.descr_sum(space) arr.setitem(out_iter.get_offset(), value) out_iter = out_iter.next(out_ndims) + ''' return arr def get_concrete(self): diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -516,7 +516,7 @@ b = a * a for i in range(5): assert b[i] == i * i - assert b.dtype is numpypy.dtype(int) + assert b.dtype is a.dtype a = _numpypy.array(range(5), dtype=bool) b = a * a @@ -740,8 +740,7 @@ def test_sum(self): from _numpypy import array a = array(range(5)) - b = a.sum() - assert b == 10 + assert a.sum() == 10 assert a[:4].sum() == 6 a = array([True] * 5, bool) From noreply at buildbot.pypy.org Sat Jan 14 22:35:58 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sat, 14 Jan 2012 22:35:58 +0100 (CET) Subject: [pypy-commit] pypy default: numpypy: rename kwarg 'dim' to 'axis' Message-ID: <20120114213558.78E7671025D@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: Changeset: r51333:eb0269c21eec Date: 2012-01-14 23:25 +0200 http://bitbucket.org/pypy/pypy/changeset/eb0269c21eec/ Log: numpypy: rename kwarg 'dim' to 'axis' diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -287,11 +287,11 @@ descr_rmod = _binop_right_impl("mod") def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): - def impl(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def impl(self, space, w_axis=None): + if 
space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, - self, True, promote_to_largest, w_dim) + self, True, promote_to_largest, w_axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") @@ -569,14 +569,14 @@ ) return w_result - def descr_mean(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def descr_mean(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) w_denom = space.wrap(self.size) else: - dim = space.int_w(w_dim) + dim = space.int_w(w_axis) w_denom = space.wrap(self.shape[dim]) - return space.div(self.descr_sum_promote(space, w_dim), w_denom) + return space.div(self.descr_sum_promote(space, w_axis), w_denom) def descr_var(self, space): # var = mean((values - mean(values)) ** 2) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -733,6 +733,7 @@ a = array(range(105)).reshape(3, 5, 7) b = mean(a, axis=0) b[0,0]==35. 
+ assert a.mean(axis=0)[0, 0] == 35 assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() @@ -755,6 +756,7 @@ assert array([]).sum() == 0.0 raises(ValueError, 'array([]).max()') assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() From noreply at buildbot.pypy.org Sun Jan 15 13:40:31 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 15 Jan 2012 13:40:31 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays: close old branch will be redone Message-ID: <20120115124031.1D535820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays Changeset: r51334:08302909741e Date: 2012-01-14 22:59 +0200 http://bitbucket.org/pypy/pypy/changeset/08302909741e/ Log: close old branch will be redone From noreply at buildbot.pypy.org Sun Jan 15 13:40:32 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 15 Jan 2012 13:40:32 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: a branch to work on indexing by arrays - start by introducing new way of Message-ID: <20120115124032.7F4E382C03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51335:ad840d9d2a35 Date: 2012-01-15 14:36 +0200 http://bitbucket.org/pypy/pypy/changeset/ad840d9d2a35/ Log: a branch to work on indexing by arrays - start by introducing new way of describing shapes diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -84,6 +84,12 @@ def descr_get_shape(self, space): return space.newtuple([]) + def is_int_type(self): + return self.kind == SIGNEDLTR or self.kind == UNSIGNEDLTR + + def is_bool_type(self): 
+ return self.kind == BOOLLTR + W_Dtype.typedef = TypeDef("dtype", __module__ = "numpypy", __new__ = interp2app(W_Dtype.descr__new__.im_func), diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -4,6 +4,30 @@ from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ calculate_slice_strides +# structures to describe slicing + +class BaseChunk(object): + pass + +class Chunk(BaseChunk): + def __init__(self, start, stop, step, lgt): + self.start = start + self.stop = stop + self.step = step + self.lgt = lgt + + def extend_shape(self, shape): + if self.step != 0: + shape.append(self.lgt) + +class IntArrayChunk(BaseChunk): + def __init__(self, arr): + self.arr = arr.get_concrete() + +class BoolArrayChunk(BaseChunk): + def __init__(self, arr): + self.arr = arr.get_concrete() + class BaseTransform(object): pass diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -2,14 +2,15 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature +from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature,\ + interp_boxes from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator + SkipLastAxisIterator, Chunk numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -40,7 +41,6 @@ 
name='numpy_slice', ) - def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] batch = space.listview(w_iterable) @@ -479,8 +479,8 @@ def _prepare_slice_args(self, space, w_idx): if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [space.decode_index4(w_idx, self.shape[0])] - return [space.decode_index4(w_item, self.shape[i]) for i, w_item in + return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] + return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] def descr_getitem(self, space, w_idx): @@ -509,9 +509,8 @@ def create_slice(self, chunks): shape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - shape.append(lgt) + for i, chunk in enumerate(chunks): + chunk.extend_shape(shape) s = i + 1 assert s >= 0 shape += self.shape[s:] @@ -938,7 +937,7 @@ builder.append('\n' + indent) else: builder.append(indent) - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: @@ -955,7 +954,7 @@ builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) i += 1 diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -10,12 +10,12 @@ rstart = start rshape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - rstrides.append(strides[i] * step) - rbackstrides.append(strides[i] * (lgt - 1) * step) - rshape.append(lgt) - rstart += strides[i] * start_ + for i, chunk in 
enumerate(chunks): + if chunk.step != 0: + rstrides.append(strides[i] * chunk.step) + rbackstrides.append(strides[i] * (chunk.lgt - 1) * chunk.step) + rshape.append(chunk.lgt) + rstart += strides[i] * chunk.start # add a reminder s = i + 1 assert s >= 0 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -2,6 +2,7 @@ import py from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement +from pypy.module.micronumpy.interp_iter import Chunk from pypy.module.micronumpy import signature from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace @@ -37,53 +38,54 @@ def test_create_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 3 assert s.strides == [10, 50] assert s.backstrides == [40, 100] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 1 assert s.strides == [2, 10, 50] assert s.backstrides == [6, 40, 100] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.strides == [3, 10] assert s.backstrides == [3, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] assert s.backstrides == [12, 2] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start 
== 15 assert s.strides == [30, 3, 1] assert s.backstrides == [90, 12, 2] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), + Chunk(1, 0, 0, 1)]) assert s.start == 19 assert s.shape == [2, 1] assert s.strides == [45, 3] assert s.backstrides == [45, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [50] assert s2.parent is a assert s2.backstrides == [100] assert s2.start == 35 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [3, 50] assert s2.backstrides == [3, 100] @@ -91,16 +93,16 @@ def test_slice_of_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [1] assert s2.parent is a assert s2.backstrides == [2] assert s2.start == 5 * 15 + 3 * 3 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [45, 1] assert s2.backstrides == [45, 2] @@ -108,14 +110,14 @@ def test_negative_step_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = 
a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] @@ -124,7 +126,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -134,7 +136,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -1298,6 +1300,17 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_array_indexing_one_elem(self): + skip("not yet") + from _numpypy import array, arange + raises(IndexError, 'arange(3)[array([3.5])]') + a = arange(3)[array([1])] + assert a == 1 + assert a[0] == 1 + raises(IndexError,'arange(3)[array([15])]') + assert arange(3)[array([-3])] == 0 + raises(IndexError,'arange(3)[array([-15])]') + assert arange(3)[array(1)] == 1 class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): From pullrequests-noreply at 
bitbucket.org Sun Jan 15 22:25:55 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Sun, 15 Jan 2012 21:25:55 -0000 Subject: [pypy-commit] [OPEN] Pull request #20 for pypy/pypy: Improvements to the JVM backend, this time with tests Message-ID: A new pull request has been opened by Michał Bendowski. benol/pypy has changes to be pulled into pypy/pypy. https://bitbucket.org/pypy/pypy/pull-request/20/improvements-to-the-jvm-backend-this-time Title: Improvements to the JVM backend, this time with tests Changes to be pulled: 8efdb1515b2c by Michał Bendowski: "Handle the 'jit_is_virtual' opcode by always returning False" f1d52bd62e15 by Michał Bendowski: "Add a missing cast from Unsigned to UnsignedLongLong in the JVM backend." 8ff039cadd19 by Michał Bendowski: "Declare oo_primitives that should implement some rffi operations. For now the a…" 25d4d323cb5f by Michał Bendowski: "Fix compute_unique_id to support built-ins in ootype. Otherwise the translation…" 7f09531dd6a9 by Michał Bendowski: "Fix userspace builders in ootype Implement the getlength() method of StringBuil…" 578e69f273f9 by Michał Bendowski: "Add files generated by PyCharm to .hgignore" -- This is an issue notification from bitbucket.org. You are receiving this either because you are the participating in a pull request, or you are following it. From noreply at buildbot.pypy.org Sun Jan 15 23:03:30 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 15 Jan 2012 23:03:30 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add a blog draft Message-ID: <20120115220330.1A988820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4034:0d6b3b5ff0f7 Date: 2012-01-16 00:03 +0200 http://bitbucket.org/pypy/extradoc/changeset/0d6b3b5ff0f7/ Log: add a blog draft diff --git a/blog/draft/numpy-internship.rst b/blog/draft/numpy-internship.rst new file mode 100644 --- /dev/null +++ b/blog/draft/numpy-internship.rst @@ -0,0 +1,39 @@ + +Hello everyone.
+ +I would like to inform everyone that there is a very interesting opportunity +for doing an internship at `NCAR`_ in the lovely town of `Boulder`_, situated +on the foothils of Rocky Mountains. Before you read on, make sure you are: + +* A student of a US University, who is legally eglibale to work in the US. + +* At least finishing second year this year. + +* Apply before February 3rd. + +The internship itself is to focus on using PyPy (in some way) to provide +a high performance numeric kernel for an atmospheric model and measuring how +fast we can go. This is very well in line with what the current effort on +NumPy in PyPy is about. The internship will be mentored by `Davide del Vento`_ +and I hope to have some influence over where it goes myself :-) + +Few interesting links: + +* `program website`_ + +* `internship proposal`_ - note that the actual roadmap is very flexible, as + long as it's numeric kernel of an atmospheric model using PyPy. + +Fell free to contact Davide for details about the proposal and `pypy-dev`_ or +me directly for details about pypy + +.. _`Davide del Vento`: http://www.linkedin.com/in/delvento +.. _`NCAR`: http://ncar.ucar.edu/ +.. _`Boulder`: http://en.wikipedia.org/wiki/Boulder,_Colorado +.. _`program website`: http://www2.cisl.ucar.edu/siparcs/ +.. _`internship proposal`: http://www2.cisl.ucar.edu/siparcs/opportunities/ad +.. 
_`pypy-dev`: http://mail.python.org/mailman/listinfo/pypy-dev + +Cheers, +fijal + From noreply at buildbot.pypy.org Sun Jan 15 23:32:26 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 15 Jan 2012 23:32:26 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: (confluecne) typos Message-ID: <20120115223226.C8E96820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4035:15ad7db9d445 Date: 2012-01-16 00:32 +0200 http://bitbucket.org/pypy/extradoc/changeset/15ad7db9d445/ Log: (confluecne) typos diff --git a/blog/draft/numpy-internship.rst b/blog/draft/numpy-internship.rst --- a/blog/draft/numpy-internship.rst +++ b/blog/draft/numpy-internship.rst @@ -1,31 +1,31 @@ -Hello everyone. +Hello, everyone -I would like to inform everyone that there is a very interesting opportunity +I would like to inform you that there is a very interesting opportunity for doing an internship at `NCAR`_ in the lovely town of `Boulder`_, situated -on the foothils of Rocky Mountains. Before you read on, make sure you are: +on the foothils of Rocky Mountains. Before you read on, make sure: -* A student of a US University, who is legally eglibale to work in the US. +* are a student of a US University, who is legally egligible to work in the US -* At least finishing second year this year. +* are at least finishing second year this year, -* Apply before February 3rd. +* apply before February 3rd. -The internship itself is to focus on using PyPy (in some way) to provide -a high performance numeric kernel for an atmospheric model and measuring how -fast we can go. This is very well in line with what the current effort on +The internship itself will focus on using PyPy (in some way) to provide +a high performance numeric kernel for an atmospheric model, and measuring how +fast we can go. This is very much in line with what the current effort on NumPy in PyPy is about. 
The internship will be mentored by `Davide del Vento`_ and I hope to have some influence over where it goes myself :-) -Few interesting links: +A few interesting links: * `program website`_ * `internship proposal`_ - note that the actual roadmap is very flexible, as long as it's numeric kernel of an atmospheric model using PyPy. -Fell free to contact Davide for details about the proposal and `pypy-dev`_ or -me directly for details about pypy +Feel free to contact Davide for details about the proposal and `pypy-dev`_ or +me directly for details about PyPy. .. _`Davide del Vento`: http://www.linkedin.com/in/delvento .. _`NCAR`: http://ncar.ucar.edu/ From noreply at buildbot.pypy.org Sun Jan 15 23:36:19 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 15 Jan 2012 23:36:19 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: (confluence) more typos Message-ID: <20120115223619.01E45820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4036:469f9b257694 Date: 2012-01-16 00:35 +0200 http://bitbucket.org/pypy/extradoc/changeset/469f9b257694/ Log: (confluence) more typos diff --git a/blog/draft/numpy-internship.rst b/blog/draft/numpy-internship.rst --- a/blog/draft/numpy-internship.rst +++ b/blog/draft/numpy-internship.rst @@ -3,11 +3,11 @@ I would like to inform you that there is a very interesting opportunity for doing an internship at `NCAR`_ in the lovely town of `Boulder`_, situated -on the foothils of Rocky Mountains. Before you read on, make sure: +on the foothils of Rocky Mountains. Before you read on, make sure you: -* are a student of a US University, who is legally egligible to work in the US +* are a student of a US University, who is legally eligible to work in the US -* are at least finishing second year this year, +* are at least finishing second year this year * apply before February 3rd. 
@@ -22,7 +22,7 @@ * `program website`_ * `internship proposal`_ - note that the actual roadmap is very flexible, as - long as it's numeric kernel of an atmospheric model using PyPy. + long as it's a numeric kernel of an atmospheric model using PyPy. Feel free to contact Davide for details about the proposal and `pypy-dev`_ or me directly for details about PyPy. From noreply at buildbot.pypy.org Mon Jan 16 09:48:54 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 16 Jan 2012 09:48:54 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: add (set|get)_interiorfield_raw methods Message-ID: <20120116084854.7A170820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51336:6b5ffc5a9e85 Date: 2012-01-16 09:45 +0100 http://bitbucket.org/pypy/pypy/changeset/6b5ffc5a9e85/ Log: add (set|get)_interiorfield_raw methods diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -688,6 +688,7 @@ signed = op.getdescr().fielddescr.is_field_signed() self._ensure_result_bit_extension(res_loc, fieldsize.value, signed) return fcond + emit_op_getinteriorfield_raw = emit_op_getinteriorfield_gc def emit_op_setinteriorfield_gc(self, op, arglocs, regalloc, fcond): (base_loc, index_loc, value_loc, @@ -715,6 +716,7 @@ else: assert 0 return fcond + emit_op_setinteriorfield_raw = emit_op_setinteriorfield_gc class ArrayOpAssember(object): diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -772,6 +772,7 @@ result_loc = self.force_allocate_reg(op.result) return [base_loc, index_loc, result_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] + prepare_op_getinteriorfield_raw = prepare_op_getinteriorfield_gc def prepare_op_setinteriorfield_gc(self, op, fcond): t = unpack_interiorfielddescr(op.getdescr()) @@ -788,6 +789,7 @@ 
self.assembler.load(ofs_loc, immofs) return [base_loc, index_loc, value_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] + prepare_op_setinteriorfield_raw = prepare_op_setinteriorfield_gc def prepare_op_arraylen_gc(self, op, fcond): arraydescr = op.getdescr() From noreply at buildbot.pypy.org Mon Jan 16 09:48:55 2012 From: noreply at buildbot.pypy.org (bivab) Date: Mon, 16 Jan 2012 09:48:55 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: print some information when hitting a missing operation in the backend Message-ID: <20120116084855.AC102820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51337:a317879ec998 Date: 2012-01-16 09:48 +0100 http://bitbucket.org/pypy/pypy/changeset/a317879ec998/ Log: print some information when hitting a missing operation in the backend diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -1071,10 +1071,13 @@ def notimplemented_op(self, op, arglocs, regalloc, fcond): + print "[ARM/asm] %s not implemented" % op.getopname() raise NotImplementedError(op) def notimplemented_op_with_guard(self, op, guard_op, arglocs, regalloc, fcond): + print "[ARM/asm] %s with guard %s not implemented" % \ + (op.getopname(), guard_op.getopname()) raise NotImplementedError(op) asm_operations = [notimplemented_op] * (rop._LAST + 1) diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -1166,10 +1166,13 @@ def notimplemented(self, op, fcond): + print "[ARM/regalloc] %s not implemented" % op.getopname() raise NotImplementedError(op) def notimplemented_with_guard(self, op, guard_op, fcond): + print "[ARM/regalloc] %s with guard %s not implemented" % \ + (op.getopname(), guard_op.getopname()) raise NotImplementedError(op) operations = [notimplemented] * (rop._LAST + 1) From noreply at 
buildbot.pypy.org Mon Jan 16 11:21:59 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 11:21:59 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Planning session for today Message-ID: <20120116102159.2ECCE820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4037:c36311962789 Date: 2012-01-16 11:21 +0100 http://bitbucket.org/pypy/extradoc/changeset/c36311962789/ Log: Planning session for today diff --git a/sprintinfo/leysin-winter-2012/planning.txt b/sprintinfo/leysin-winter-2012/planning.txt new file mode 100644 --- /dev/null +++ b/sprintinfo/leysin-winter-2012/planning.txt @@ -0,0 +1,25 @@ + +People present +-------------- + +* Antonio Cuni +* Armin Rigo +* Romain Guillebert + +Things we want to do +-------------------- + +* some skiing + +* review the JVM backend pull request (anto) + +* py3k (romain) + +* ffistruct + +* Cython backend (romain) + +* STM + start work on the GC (anto, arigo) + +* concurrent-marksweep GC From noreply at buildbot.pypy.org Mon Jan 16 11:22:00 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 11:22:00 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: merge heads Message-ID: <20120116102200.5F3DC820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4038:588db0e8396b Date: 2012-01-16 11:21 +0100 http://bitbucket.org/pypy/extradoc/changeset/588db0e8396b/ Log: merge heads diff --git a/blog/draft/numpy-internship.rst b/blog/draft/numpy-internship.rst new file mode 100644 --- /dev/null +++ b/blog/draft/numpy-internship.rst @@ -0,0 +1,39 @@ + +Hello, everyone + +I would like to inform you that there is a very interesting opportunity +for doing an internship at `NCAR`_ in the lovely town of `Boulder`_, situated +on the foothills of Rocky Mountains.
Before you read on, make sure you: + +* are a student of a US University, who is legally eligible to work in the US + +* are at least finishing second year this year + +* apply before February 3rd. + +The internship itself will focus on using PyPy (in some way) to provide +a high performance numeric kernel for an atmospheric model, and measuring how +fast we can go. This is very much in line with what the current effort on +NumPy in PyPy is about. The internship will be mentored by `Davide del Vento`_ +and I hope to have some influence over where it goes myself :-) + +A few interesting links: + +* `program website`_ + +* `internship proposal`_ - note that the actual roadmap is very flexible, as + long as it's a numeric kernel of an atmospheric model using PyPy. + +Feel free to contact Davide for details about the proposal and `pypy-dev`_ or +me directly for details about PyPy. + +.. _`Davide del Vento`: http://www.linkedin.com/in/delvento +.. _`NCAR`: http://ncar.ucar.edu/ +.. _`Boulder`: http://en.wikipedia.org/wiki/Boulder,_Colorado +.. _`program website`: http://www2.cisl.ucar.edu/siparcs/ +.. _`internship proposal`: http://www2.cisl.ucar.edu/siparcs/opportunities/ad +.. 
_`pypy-dev`: http://mail.python.org/mailman/listinfo/pypy-dev + +Cheers, +fijal + diff --git a/planning/micronumpy.txt b/planning/micronumpy.txt --- a/planning/micronumpy.txt +++ b/planning/micronumpy.txt @@ -17,8 +17,6 @@ - more attributes/methods on numpy.flatiter -- axis= parameter to various methods - - expose ndarray.ctypes - subclassing ndarray (instantiating subcalsses curently returns the wrong type) From noreply at buildbot.pypy.org Mon Jan 16 14:05:02 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Mon, 16 Jan 2012 14:05:02 +0100 (CET) Subject: [pypy-commit] pypy py3k: Add a failing test for extended attribute unpacking Message-ID: <20120116130502.A0BC9820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51338:5500c9127aab Date: 2012-01-16 14:04 +0100 http://bitbucket.org/pypy/pypy/changeset/5500c9127aab/ Log: Add a failing test for extended attribute unpacking diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -153,3 +153,6 @@ self.parse('0b1101') self.parse('0b0l') py.test.raises(SyntaxError, self.parse, "0b112") + + def test_new_extended_unpacking(self): + self.parse('(a, *rest, b) = 1, 2, 3, 4, 5') From noreply at buildbot.pypy.org Mon Jan 16 14:18:02 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Mon, 16 Jan 2012 14:18:02 +0100 (CET) Subject: [pypy-commit] pypy py3k: Rename the test committed earlier, rewrite tests that should pass. Message-ID: <20120116131802.3EEEF820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51339:34ff01570492 Date: 2012-01-16 14:17 +0100 http://bitbucket.org/pypy/pypy/changeset/34ff01570492/ Log: Rename the test committed earlier, rewrite tests that should pass.
diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -146,13 +146,13 @@ def test_new_octal_literal(self): self.parse('0777') self.parse('0o777') - self.parse('0o777L') + py.test.raises(SyntaxError, self.parse, '0o777L') py.test.raises(SyntaxError, self.parse, "0o778") def test_new_binary_literal(self): self.parse('0b1101') - self.parse('0b0l') + py.test.raises(SyntaxError, self.parse, '0b0l') py.test.raises(SyntaxError, self.parse, "0b112") - def test_new_extended_unpacking(self): + def test_py3k_extended_unpacking(self): self.parse('(a, *rest, b) = 1, 2, 3, 4, 5') From noreply at buildbot.pypy.org Mon Jan 16 14:25:15 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Mon, 16 Jan 2012 14:25:15 +0100 (CET) Subject: [pypy-commit] pypy py3k: Parsing 0777 should fail on py3k Message-ID: <20120116132515.E2CD4820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51340:851536b685a6 Date: 2012-01-16 14:24 +0100 http://bitbucket.org/pypy/pypy/changeset/851536b685a6/ Log: Parsing 0777 should fail on py3k diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -144,7 +144,6 @@ py.test.raises(SyntaxError, self.parse, "b'a\\n") def test_new_octal_literal(self): - self.parse('0777') self.parse('0o777') py.test.raises(SyntaxError, self.parse, '0o777L') py.test.raises(SyntaxError, self.parse, "0o778") @@ -154,5 +153,8 @@ py.test.raises(SyntaxError, self.parse, '0b0l') py.test.raises(SyntaxError, self.parse, "0b112") + def test_py3k_reject_old_binary_literal(self): + py.test.raises(SyntaxError, self.parse, '0777') + def test_py3k_extended_unpacking(self): self.parse('(a, *rest, b) = 1, 2, 3, 4, 5') From noreply at 
buildbot.pypy.org Mon Jan 16 14:59:34 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 14:59:34 +0100 (CET) Subject: [pypy-commit] pypy stm: Add a (skipped) test about using the minimark GC. Message-ID: <20120116135934.7DC8A820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51341:91a0ee8fc4ab Date: 2012-01-16 11:53 +0100 http://bitbucket.org/pypy/pypy/changeset/91a0ee8fc4ab/ Log: Add a (skipped) test about using the minimark GC. diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -3,29 +3,35 @@ from pypy.translator.stm import rstm -NUM_THREADS = 4 -LENGTH = 5000 - - class Node: def __init__(self, value): self.value = value self.next = None +class Global: + NUM_THREADS = 4 + LENGTH = 5000 + USE_MEMORY = False + anchor = Node(-1) +glob = Global() + def add_at_end_of_chained_list(node, value): + x = Node(value) while node.next: node = node.next - newnode = Node(value) + if glob.USE_MEMORY: + x = Node(value) + newnode = x node.next = newnode def check_chained_list(node): - seen = [0] * (LENGTH+1) - seen[-1] = NUM_THREADS + seen = [0] * (glob.LENGTH+1) + seen[-1] = glob.NUM_THREADS while node is not None: value = node.value #print value - if not (0 <= value < LENGTH): + if not (0 <= value < glob.LENGTH): print "node.value out of bounds:", value raise AssertionError seen[value] += 1 @@ -34,19 +40,15 @@ value, seen[value]) raise AssertionError node = node.next - if seen[LENGTH-1] != NUM_THREADS: + if seen[glob.LENGTH-1] != glob.NUM_THREADS: print "seen[LENGTH-1] != NUM_THREADS" raise AssertionError print "check ok!" -class Global: - anchor = Node(-1) -glob = Global() - def run_me(): print "thread starting..." - for i in range(LENGTH): + for i in range(glob.LENGTH): add_at_end_of_chained_list(glob.anchor, i) rstm.transaction_boundary() print "thread done." 
@@ -57,11 +59,17 @@ def entry_point(argv): print "hello world" + if len(argv) > 1: + glob.NUM_THREADS = int(argv[1]) + if len(argv) > 2: + glob.LENGTH = int(argv[2]) + if len(argv) > 3: + glob.USE_MEMORY = bool(int(argv[3])) glob.done = 0 - for i in range(NUM_THREADS): + for i in range(glob.NUM_THREADS): ll_thread.start_new_thread(run_me, ()) print "sleeping..." - while glob.done < NUM_THREADS: # poor man's lock + while glob.done < glob.NUM_THREADS: # poor man's lock time.sleep(1) print "done sleeping." check_chained_list(glob.anchor.next) diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -183,12 +183,13 @@ # ____________________________________________________________ class CompiledSTMTests(StandaloneTests): + gc = "none" def compile(self, entry_point): from pypy.config.pypyoption import get_pypy_config self.config = get_pypy_config(translating=True) self.config.translation.stm = True - self.config.translation.gc = "none" + self.config.translation.gc = self.gc # # Prevent the RaiseAnalyzer from just emitting "WARNING: Unknown # operation". We want instead it to crash. diff --git a/pypy/translator/stm/test/test_ztranslated.py b/pypy/translator/stm/test/test_ztranslated.py --- a/pypy/translator/stm/test/test_ztranslated.py +++ b/pypy/translator/stm/test/test_ztranslated.py @@ -1,3 +1,4 @@ +import py from pypy.translator.stm.test.test_transform import CompiledSTMTests from pypy.translator.stm.test import targetdemo @@ -6,6 +7,18 @@ def test_hello_world(self): t, cbuilder = self.compile(targetdemo.entry_point) - data = cbuilder.cmdexec('') + data = cbuilder.cmdexec('4 5000') assert 'done sleeping.' in data assert 'check ok!' 
in data + + +class TestSTMFramework(CompiledSTMTests): + gc = "minimark" + + def test_hello_world(self): + py.test.skip("in-progress") + t, cbuilder = self.compile(targetdemo.entry_point) + data = cbuilder.cmdexec('4 5000 1') + # ^^^ should check that it doesn't take 1G of RAM + assert 'done sleeping.' in data + assert 'check ok!' in data From noreply at buildbot.pypy.org Mon Jan 16 14:59:35 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 14:59:35 +0100 (CET) Subject: [pypy-commit] pypy stm: Revert 0782958b144f. No longer needed. Message-ID: <20120116135935.AB3AF820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51342:382a5969beda Date: 2012-01-16 14:58 +0100 http://bitbucket.org/pypy/pypy/changeset/382a5969beda/ Log: Revert 0782958b144f. No longer needed. diff --git a/pypy/doc/discussion/stm_todo.txt b/pypy/doc/discussion/stm_todo.txt --- a/pypy/doc/discussion/stm_todo.txt +++ b/pypy/doc/discussion/stm_todo.txt @@ -7,4 +7,3 @@ e23ab2c195c1 Added a number of "# XXX --- custom version for STM ---" 31f2ed861176 One more - 0782958b144f Hard-coded the STM logic in rffi.aroundstate diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -313,21 +313,11 @@ AroundFnPtr = lltype.Ptr(lltype.FuncType([], lltype.Void)) class AroundState: - # XXX for stm with need to comment out this, and use a custom logic -## def _freeze_(self): -## self.before = None # or a regular RPython function -## self.after = None # or a regular RPython function -## return False - @staticmethod - def before(): - from pypy.translator.stm import rstm - rstm.commit_transaction() - @staticmethod - def after(): - from pypy.translator.stm import rstm - rstm.begin_inevitable_transaction() + _alloc_flavor_ = "raw" def _freeze_(self): - return True + self.before = None # or a regular RPython function + self.after = None # or a regular RPython function 
+ return False aroundstate = AroundState() aroundstate._freeze_() From noreply at buildbot.pypy.org Mon Jan 16 14:59:36 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 14:59:36 +0100 (CET) Subject: [pypy-commit] pypy stm: (antocuni, arigo) Message-ID: <20120116135936.F32A6820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51343:2965c13c2427 Date: 2012-01-16 14:59 +0100 http://bitbucket.org/pypy/pypy/changeset/2965c13c2427/ Log: (antocuni, arigo) Start to refactor the world. The idea is not to build on RPython threads any more. diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -401,13 +401,7 @@ 'stm_setarrayitem': LLOp(), 'stm_getinteriorfield': LLOp(sideeffects=False, canrun=True), 'stm_setinteriorfield': LLOp(), - - 'stm_begin_transaction': LLOp(), - 'stm_commit_transaction': LLOp(), - 'stm_begin_inevitable_transaction': LLOp(), - 'stm_transaction_boundary': LLOp(), - 'stm_declare_variable': LLOp(), - 'stm_try_inevitable': LLOp(), + 'stm_become_inevitable':LLOp(), # __________ address operations __________ diff --git a/pypy/translator/c/funcgen.py b/pypy/translator/c/funcgen.py --- a/pypy/translator/c/funcgen.py +++ b/pypy/translator/c/funcgen.py @@ -227,6 +227,7 @@ yield '\treturn NULL;' yield '}' if self.exception_policy == "stm": + xxxx yield 'STM_MAKE_INEVITABLE();' retval = self.expr(block.inputargs[0]) if self.exception_policy != "exc_helper": @@ -608,12 +609,6 @@ OP_STM_SETARRAYITEM = _OP_STM OP_STM_GETINTERIORFIELD = _OP_STM OP_STM_SETINTERIORFIELD = _OP_STM - OP_STM_BEGIN_TRANSACTION = _OP_STM - OP_STM_COMMIT_TRANSACTION = _OP_STM - OP_STM_BEGIN_INEVITABLE_TRANSACTION = _OP_STM - OP_STM_TRANSACTION_BOUNDARY = _OP_STM - OP_STM_DECLARE_VARIABLE = _OP_STM - OP_STM_TRY_INEVITABLE = _OP_STM def OP_PTR_NONZERO(self, op): diff --git a/pypy/translator/c/genc.py 
b/pypy/translator/c/genc.py --- a/pypy/translator/c/genc.py +++ b/pypy/translator/c/genc.py @@ -134,7 +134,7 @@ if self.config.translation.stm: from pypy.translator.stm import transform transformer = transform.STMTransformer(self.translator) - transformer.transform(self.getentrypointptr()) + transformer.transform() log.info("Software Transactional Memory transformation applied") gcpolicyclass = self.get_gcpolicyclass() diff --git a/pypy/translator/stm/_rffi_stm.py b/pypy/translator/stm/_rffi_stm.py --- a/pypy/translator/stm/_rffi_stm.py +++ b/pypy/translator/stm/_rffi_stm.py @@ -26,16 +26,16 @@ descriptor_init = llexternal('stm_descriptor_init', [], lltype.Void) descriptor_done = llexternal('stm_descriptor_done', [], lltype.Void) -begin_transaction = llexternal('STM_begin_transaction', [], lltype.Void) -begin_inevitable_transaction = llexternal('stm_begin_inevitable_transaction', - [], lltype.Void) -commit_transaction = llexternal('stm_commit_transaction', [], lltype.Signed) +##begin_transaction = llexternal('STM_begin_transaction', [], lltype.Void) +##begin_inevitable_transaction = llexternal('stm_begin_inevitable_transaction', +## [], lltype.Void) +##commit_transaction = llexternal('stm_commit_transaction', [], lltype.Signed) try_inevitable = llexternal('stm_try_inevitable', [], lltype.Void) -descriptor_init_and_being_inevitable_transaction = llexternal( - 'stm_descriptor_init_and_being_inevitable_transaction', [], lltype.Void) -commit_transaction_and_descriptor_done = llexternal( - 'stm_commit_transaction_and_descriptor_done', [], lltype.Void) +##descriptor_init_and_being_inevitable_transaction = llexternal( +## 'stm_descriptor_init_and_being_inevitable_transaction', [], lltype.Void) +##commit_transaction_and_descriptor_done = llexternal( +## 'stm_commit_transaction_and_descriptor_done', [], lltype.Void) stm_read_word = llexternal('stm_read_word', [SignedP], lltype.Signed) stm_write_word = llexternal('stm_write_word', [SignedP, lltype.Signed], diff --git 
a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -1,7 +1,7 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.objspace.flow.model import Constant from pypy.translator.c.support import cdecl, c_string_constant -from pypy.translator.stm.rstm import size_of_voidp +from pypy.translator.stm.llstm import size_of_voidp def _stm_generic_get(funcgen, op, expr, simple_struct=False): diff --git a/pypy/translator/stm/rstm.py b/pypy/translator/stm/llstm.py rename from pypy/translator/stm/rstm.py rename to pypy/translator/stm/llstm.py --- a/pypy/translator/stm/rstm.py +++ b/pypy/translator/stm/llstm.py @@ -1,11 +1,20 @@ +""" +This file is mostly here for testing. Usually, transform.py will transform +the getfields (etc) that occur in the graphs directly into stm_getfields, +which are operations that are recognized by the C backend. See funcgen.py +which contains similar logic, which is run by the C backend. +""" import sys -from pypy.rpython.lltypesystem import lltype, rffi +from pypy.rpython.lltypesystem import lltype, rffi, rclass from pypy.rpython.extregistry import ExtRegistryEntry from pypy.translator.stm import _rffi_stm from pypy.annotation import model as annmodel from pypy.objspace.flow.model import Constant from pypy.rlib.rarithmetic import r_uint, r_ulonglong from pypy.rlib import longlong2float +from pypy.rlib.objectmodel import specialize +from pypy.rpython.annlowlevel import cast_instance_to_base_ptr +from pypy.rpython.annlowlevel import cast_base_ptr_to_instance size_of_voidp = rffi.sizeof(rffi.VOIDP) assert size_of_voidp & (size_of_voidp - 1) == 0 @@ -83,21 +92,9 @@ "NOT_RPYTHON" raise NotImplementedError("sorry") -def begin_transaction(): - "NOT_RPYTHON. For tests only" - raise NotImplementedError("hard to really emulate") - -def commit_transaction(): - "NOT_RPYTHON. 
For tests only" - raise NotImplementedError("hard to really emulate") - -def begin_inevitable_transaction(): - "NOT_RPYTHON. For tests only, and at start up, but not in normal code." - raise NotImplementedError("hard to really emulate") - -def transaction_boundary(): - "NOT_RPYTHON. This is the one normally used" - raise NotImplementedError("hard to really emulate") +##def stm_setarrayitem(arrayptr, index, value): +## "NOT_RPYTHON" +## raise NotImplementedError("sorry") # ____________________________________________________________ @@ -149,13 +146,13 @@ resulttype = hop.r_result) -class ExtEntry(ExtRegistryEntry): - _about_ = (begin_transaction, commit_transaction, - begin_inevitable_transaction, transaction_boundary) +##class ExtEntry(ExtRegistryEntry): +## _about_ = (begin_transaction, commit_transaction, +## begin_inevitable_transaction, transaction_boundary) - def compute_result_annotation(self): - return None +## def compute_result_annotation(self): +## return None - def specialize_call(self, hop): - hop.exception_cannot_occur() - hop.genop("stm_" + self.instance.__name__, []) +## def specialize_call(self, hop): +## hop.exception_cannot_occur() +## hop.genop("stm_" + self.instance.__name__, []) diff --git a/pypy/translator/stm/llstminterp.py b/pypy/translator/stm/llstminterp.py --- a/pypy/translator/stm/llstminterp.py +++ b/pypy/translator/stm/llstminterp.py @@ -1,6 +1,6 @@ from pypy.rpython.lltypesystem import lltype from pypy.rpython.llinterp import LLFrame, LLException -from pypy.translator.stm import rstm +##from pypy.translator.stm import rstm from pypy.translator.stm.transform import op_in_set, ALWAYS_ALLOW_OPERATIONS diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -515,6 +515,8 @@ long stm_read_word(long* addr) { struct tx_descriptor *d = thread_descriptor; + if (!d) + return *addr; #ifdef RPY_STM_ASSERT assert(d->transaction_active); #endif 
@@ -579,6 +581,11 @@ void stm_write_word(long* addr, long val) { struct tx_descriptor *d = thread_descriptor; + if (!d) + { + *addr = val; + return; + } #ifdef RPY_STM_ASSERT assert(d->transaction_active); #endif diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -1,9 +1,55 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rarithmetic import r_longlong, r_singlefloat from pypy.translator.stm.test.test_transform import CompiledSTMTests -from pypy.translator.stm import rstm +#from pypy.rlib import rstm +from pypy.translator.c.test.test_standalone import StandaloneTests +from pypy.rlib.debug import debug_print +class TestRStm(object): + + def compile(self, entry_point): + from pypy.translator.translator import TranslationContext + from pypy.annotation.listdef import s_list_of_strings + from pypy.translator.c.genc import CStandaloneBuilder + from pypy.translator.tool.cbuild import ExternalCompilationInfo + t = TranslationContext() + t.config.translation.gc = 'boehm' + t.buildannotator().build_types(entry_point, [s_list_of_strings]) + t.buildrtyper().specialize() + t.stm_transformation_applied = True # not really, but for these tests + cbuilder = CStandaloneBuilder(t, entry_point, t.config) + force_debug = ExternalCompilationInfo(pre_include_bits=[ + "#define RPY_ASSERT 1\n" + "#define RPY_LL_ASSERT 1\n" + ]) + cbuilder.eci = cbuilder.eci.merge(force_debug) + cbuilder.generate_source() + cbuilder.compile() + return t, cbuilder + + def test_compiled_stm_getfield(self): + from pypy.translator.stm.test import test_llstm + def entry_point(argv): + test_llstm.test_stm_getfield() + debug_print('ok!') + return 0 + t, cbuilder = self.compile(entry_point) + _, data = cbuilder.cmdexec('', err=True) + assert data.endswith('ok!\n') + + def test_compiled_stm_setfield(self): + from pypy.translator.stm.test import test_llstm 
+ def entry_point(argv): + test_llstm.test_stm_setfield() + debug_print('ok!') + return 0 + t, cbuilder = self.compile(entry_point) + _, data = cbuilder.cmdexec('', err=True) + assert data.endswith('ok!\n') + +# ____________________________________________________________ + A = lltype.GcStruct('A', ('x', lltype.Signed), ('y', lltype.Signed), ('c1', lltype.Char), ('c2', lltype.Char), ('c3', lltype.Char), ('l', lltype.SignedLongLong), diff --git a/pypy/translator/stm/test/test_rstm.py b/pypy/translator/stm/test/test_llstm.py rename from pypy/translator/stm/test/test_rstm.py rename to pypy/translator/stm/test/test_llstm.py --- a/pypy/translator/stm/test/test_rstm.py +++ b/pypy/translator/stm/test/test_llstm.py @@ -1,9 +1,8 @@ import py from pypy.translator.stm._rffi_stm import * -from pypy.translator.stm.rstm import * +from pypy.translator.stm.llstm import * from pypy.rpython.annlowlevel import llhelper from pypy.rlib.rarithmetic import r_longlong, r_singlefloat -from pypy.rlib.debug import debug_print A = lltype.Struct('A', ('x', lltype.Signed), ('y', lltype.Signed), @@ -146,50 +145,3 @@ assert float(a.sb) == float(rs2b) assert a.y == 10 lltype.free(a, flavor='raw') - -# ____________________________________________________________ - -from pypy.translator.translator import TranslationContext -from pypy.annotation.listdef import s_list_of_strings -from pypy.translator.c.genc import CStandaloneBuilder -from pypy.translator.tool.cbuild import ExternalCompilationInfo - -class StmTests(object): - config = None - - def compile(self, entry_point): - t = TranslationContext(self.config) - t.config.translation.gc = 'boehm' - t.buildannotator().build_types(entry_point, [s_list_of_strings]) - t.buildrtyper().specialize() - t.stm_transformation_applied = True # not really, but for these tests - cbuilder = CStandaloneBuilder(t, entry_point, t.config) - force_debug = ExternalCompilationInfo(pre_include_bits=[ - "#define RPY_ASSERT 1\n" - "#define RPY_LL_ASSERT 1\n" - ]) - 
cbuilder.eci = cbuilder.eci.merge(force_debug) - cbuilder.generate_source() - cbuilder.compile() - return t, cbuilder - - -class TestRStm(StmTests): - - def test_compiled_stm_getfield(self): - def entry_point(argv): - test_stm_getfield() - debug_print('ok!') - return 0 - t, cbuilder = self.compile(entry_point) - _, data = cbuilder.cmdexec('', err=True) - assert data.endswith('ok!\n') - - def test_compiled_stm_setfield(self): - def entry_point(argv): - test_stm_setfield() - debug_print('ok!') - return 0 - t, cbuilder = self.compile(entry_point) - _, data = cbuilder.cmdexec('', err=True) - assert data.endswith('ok!\n') diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -3,7 +3,7 @@ from pypy.objspace.flow.model import summary from pypy.translator.stm.llstminterp import eval_stm_graph from pypy.translator.stm.transform import transform_graph -from pypy.translator.stm import rstm +##from pypy.translator.stm import rstm from pypy.translator.c.test.test_standalone import StandaloneTests from pypy.rlib.debug import debug_print from pypy.conftest import option @@ -198,7 +198,7 @@ try: res = StandaloneTests.compile(self, entry_point, debug=True) finally: - del RaiseAnalyzer.fail_on_unknown_operation + RaiseAnalyzer.fail_on_unknown_operation = False return res diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -23,18 +23,18 @@ def __init__(self, translator=None): self.translator = translator - def transform(self, entrypointptr): + def transform(self): ##, entrypointptr): assert not hasattr(self.translator, 'stm_transformation_applied') - entrypointgraph = entrypointptr._obj.graph +## entrypointgraph = entrypointptr._obj.graph for graph in self.translator.graphs: - self.seen_transaction_boundary = False - 
self.seen_gc_stack_bottom = False +## self.seen_transaction_boundary = False +## self.seen_gc_stack_bottom = False self.transform_graph(graph) - if self.seen_transaction_boundary: - self.add_stm_declare_variable(graph) - if self.seen_gc_stack_bottom: - self.add_descriptor_init_stuff(graph) - self.add_descriptor_init_stuff(entrypointgraph, main=True) +## if self.seen_transaction_boundary: +## self.add_stm_declare_variable(graph) +## if self.seen_gc_stack_bottom: +## self.add_descriptor_init_stuff(graph) +## self.add_descriptor_init_stuff(entrypointgraph, main=True) self.translator.stm_transformation_applied = True def transform_block(self, block): @@ -68,42 +68,42 @@ for block in graph.iterblocks(): self.transform_block(block) - def add_descriptor_init_stuff(self, graph, main=False): - if main: - self._add_calls_around(graph, - _rffi_stm.begin_inevitable_transaction, - _rffi_stm.commit_transaction) - self._add_calls_around(graph, - _rffi_stm.descriptor_init, - _rffi_stm.descriptor_done) +## def add_descriptor_init_stuff(self, graph, main=False): +## if main: +## self._add_calls_around(graph, +## _rffi_stm.begin_inevitable_transaction, +## _rffi_stm.commit_transaction) +## self._add_calls_around(graph, +## _rffi_stm.descriptor_init, +## _rffi_stm.descriptor_done) - def _add_calls_around(self, graph, f_init, f_done): - c_init = Constant(f_init, lltype.typeOf(f_init)) - c_done = Constant(f_done, lltype.typeOf(f_done)) - # - block = graph.startblock - v = varoftype(lltype.Void) - op = SpaceOperation('direct_call', [c_init], v) - block.operations.insert(0, op) - # - v = copyvar(self.translator.annotator, graph.getreturnvar()) - extrablock = Block([v]) - v_none = varoftype(lltype.Void) - newop = SpaceOperation('direct_call', [c_done], v_none) - extrablock.operations = [newop] - extrablock.closeblock(Link([v], graph.returnblock)) - for block in graph.iterblocks(): - if block is not extrablock: - for link in block.exits: - if link.target is graph.returnblock: - link.target 
= extrablock - checkgraph(graph) +## def _add_calls_around(self, graph, f_init, f_done): +## c_init = Constant(f_init, lltype.typeOf(f_init)) +## c_done = Constant(f_done, lltype.typeOf(f_done)) +## # +## block = graph.startblock +## v = varoftype(lltype.Void) +## op = SpaceOperation('direct_call', [c_init], v) +## block.operations.insert(0, op) +## # +## v = copyvar(self.translator.annotator, graph.getreturnvar()) +## extrablock = Block([v]) +## v_none = varoftype(lltype.Void) +## newop = SpaceOperation('direct_call', [c_done], v_none) +## extrablock.operations = [newop] +## extrablock.closeblock(Link([v], graph.returnblock)) +## for block in graph.iterblocks(): +## if block is not extrablock: +## for link in block.exits: +## if link.target is graph.returnblock: +## link.target = extrablock +## checkgraph(graph) - def add_stm_declare_variable(self, graph): - block = graph.startblock - v = varoftype(lltype.Void) - op = SpaceOperation('stm_declare_variable', [], v) - block.operations.insert(0, op) +## def add_stm_declare_variable(self, graph): +## block = graph.startblock +## v = varoftype(lltype.Void) +## op = SpaceOperation('stm_declare_variable', [], v) +## block.operations.insert(0, op) # ---------- @@ -173,22 +173,22 @@ op1 = SpaceOperation('stm_setinteriorfield', op.args, op.result) newoperations.append(op1) - def stt_stm_transaction_boundary(self, newoperations, op): - self.seen_transaction_boundary = True - v_result = op.result - # record in op.args the list of variables that are alive across - # this call - block = self.current_block - vars = set() - for op in block.operations[:self.current_op_index:-1]: - vars.discard(op.result) - vars.update(op.args) - for link in block.exits: - vars.update(link.args) - vars.update(link.getextravars()) - livevars = [v for v in vars if isinstance(v, Variable)] - newop = SpaceOperation('stm_transaction_boundary', livevars, v_result) - newoperations.append(newop) +## def stt_stm_transaction_boundary(self, newoperations, op): 
+## self.seen_transaction_boundary = True +## v_result = op.result +## # record in op.args the list of variables that are alive across +## # this call +## block = self.current_block +## vars = set() +## for op in block.operations[:self.current_op_index:-1]: +## vars.discard(op.result) +## vars.update(op.args) +## for link in block.exits: +## vars.update(link.args) +## vars.update(link.getextravars()) +## livevars = [v for v in vars if isinstance(v, Variable)] +## newop = SpaceOperation('stm_transaction_boundary', livevars, v_result) +## newoperations.append(newop) def stt_malloc(self, newoperations, op): flags = op.args[1].value @@ -199,7 +199,7 @@ return flags['flavor'] == 'gc' def stt_gc_stack_bottom(self, newoperations, op): - self.seen_gc_stack_bottom = True +## self.seen_gc_stack_bottom = True newoperations.append(op) @@ -210,7 +210,7 @@ def turn_inevitable(newoperations, info): c_info = Constant(info, lltype.Void) - op1 = SpaceOperation('stm_try_inevitable', [c_info], + op1 = SpaceOperation('stm_become_inevitable', [c_info], varoftype(lltype.Void)) newoperations.append(op1) From noreply at buildbot.pypy.org Mon Jan 16 15:43:48 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 15:43:48 +0100 (CET) Subject: [pypy-commit] pypy stm: (antocuni, arigo) Message-ID: <20120116144348.2D288820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51344:7dc69a93d5be Date: 2012-01-16 15:43 +0100 http://bitbucket.org/pypy/pypy/changeset/7dc69a93d5be/ Log: (antocuni, arigo) Fix the llstminterp. Wondering a bit what is its purpose... 
diff --git a/pypy/translator/stm/TODO.txt b/pypy/translator/stm/TODO.txt new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/TODO.txt @@ -0,0 +1,9 @@ + + +* turn calls to stm_read_word() to: + if (d) x = stm_read_word(a); else x = *a; + + this way, gcc will optimize: + if (d) x = stm_read_word(a); else x = *a; + if (d) y = stm_read_word(b); else y = *b; + to check !d only once if it is null diff --git a/pypy/translator/stm/llstm.py b/pypy/translator/stm/llstm.py --- a/pypy/translator/stm/llstm.py +++ b/pypy/translator/stm/llstm.py @@ -92,9 +92,13 @@ "NOT_RPYTHON" raise NotImplementedError("sorry") -##def stm_setarrayitem(arrayptr, index, value): -## "NOT_RPYTHON" -## raise NotImplementedError("sorry") +def stm_setarrayitem(arrayptr, index, value): + "NOT_RPYTHON" + raise NotImplementedError("sorry") + +def stm_become_inevitable(why): + "NOT_RPYTHON" + raise NotImplementedError("sorry") # ____________________________________________________________ @@ -146,13 +150,28 @@ resulttype = hop.r_result) -##class ExtEntry(ExtRegistryEntry): -## _about_ = (begin_transaction, commit_transaction, -## begin_inevitable_transaction, transaction_boundary) +class ExtEntry(ExtRegistryEntry): + _about_ = stm_setarrayitem -## def compute_result_annotation(self): -## return None + def compute_result_annotation(self, s_arrayptr, s_index, s_newvalue): + return None -## def specialize_call(self, hop): -## hop.exception_cannot_occur() -## hop.genop("stm_" + self.instance.__name__, []) + def specialize_call(self, hop): + r_arrayptr = hop.args_r[0] + v_arrayptr, v_index, v_newvalue = hop.inputargs(r_arrayptr, + lltype.Signed, + hop.args_r[2]) + hop.exception_cannot_occur() + hop.genop('stm_setarrayitem', [v_arrayptr, v_index, v_newvalue]) + + +class ExtEntry(ExtRegistryEntry): + _about_ = stm_become_inevitable + + def compute_result_annotation(self, s_why): + return None + + def specialize_call(self, hop): + hop.exception_cannot_occur() + c_why = hop.inputconst(lltype.Signed, 42) # 
XXX + hop.genop("stm_become_inevitable", [c_why]) diff --git a/pypy/translator/stm/llstminterp.py b/pypy/translator/stm/llstminterp.py --- a/pypy/translator/stm/llstminterp.py +++ b/pypy/translator/stm/llstminterp.py @@ -146,44 +146,38 @@ raise ForbiddenInstructionInSTMMode(stm_mode, self.graph) def opstm_stm_getfield(self, struct, fieldname): - self.check_stm_mode(lambda m: m != "not_in_transaction") return LLFrame.op_getfield(self, struct, fieldname) def opstm_stm_setfield(self, struct, fieldname, value): - self.check_stm_mode(lambda m: m != "not_in_transaction") LLFrame.op_setfield(self, struct, fieldname, value) def opstm_stm_getarrayitem(self, array, index): - self.check_stm_mode(lambda m: m != "not_in_transaction") return LLFrame.op_getarrayitem(self, array, index) def opstm_stm_setarrayitem(self, array, index, value): - self.check_stm_mode(lambda m: m != "not_in_transaction") LLFrame.op_setarrayitem(self, array, index, value) def opstm_stm_getinteriorfield(self, obj, *offsets): - self.check_stm_mode(lambda m: m != "not_in_transaction") return LLFrame.op_getinteriorfield(self, obj, *offsets) def opstm_stm_setinteriorfield(self, obj, *fieldnamesval): - self.check_stm_mode(lambda m: m != "not_in_transaction") LLFrame.op_setinteriorfield(self, obj, *fieldnamesval) - def opstm_stm_begin_transaction(self): - self.check_stm_mode(lambda m: m == "not_in_transaction") - self.llinterpreter.stm_mode = "regular_transaction" - self.llinterpreter.last_transaction_started_in_frame = self +## def opstm_stm_begin_transaction(self): +## self.check_stm_mode(lambda m: m == "not_in_transaction") +## self.llinterpreter.stm_mode = "regular_transaction" +## self.llinterpreter.last_transaction_started_in_frame = self - def opstm_stm_commit_transaction(self): - self.check_stm_mode(lambda m: m != "not_in_transaction") - self.llinterpreter.stm_mode = "not_in_transaction" +## def opstm_stm_commit_transaction(self): +## self.check_stm_mode(lambda m: m != "not_in_transaction") +## 
self.llinterpreter.stm_mode = "not_in_transaction" - def opstm_stm_transaction_boundary(self): - self.check_stm_mode(lambda m: m != "not_in_transaction") - self.llinterpreter.stm_mode = "regular_transaction" - self.llinterpreter.last_transaction_started_in_frame = self +## def opstm_stm_transaction_boundary(self): +## self.check_stm_mode(lambda m: m != "not_in_transaction") +## self.llinterpreter.stm_mode = "regular_transaction" +## self.llinterpreter.last_transaction_started_in_frame = self - def opstm_stm_try_inevitable(self, why): + def opstm_stm_become_inevitable(self, why): self.check_stm_mode(lambda m: m != "not_in_transaction") self.llinterpreter.stm_mode = "inevitable_transaction" print why diff --git a/pypy/translator/stm/test/test_llstminterp.py b/pypy/translator/stm/test/test_llstminterp.py --- a/pypy/translator/stm/test/test_llstminterp.py +++ b/pypy/translator/stm/test/test_llstminterp.py @@ -4,7 +4,7 @@ from pypy.translator.stm.llstminterp import eval_stm_graph from pypy.translator.stm.llstminterp import ForbiddenInstructionInSTMMode from pypy.translator.stm.llstminterp import ReturnWithTransactionActive -from pypy.translator.stm import rstm +from pypy.translator.stm import llstm ALL_STM_MODES = ["not_in_transaction", "regular_transaction", @@ -14,55 +14,59 @@ def func(n): return (n+1) * (n+2) interp, graph = get_interpreter(func, [5]) - res = eval_stm_graph(interp, graph, [5]) + res = eval_stm_graph(interp, graph, [5], + stm_mode="not_in_transaction", + final_stm_mode="not_in_transaction") assert res == 42 def test_forbidden(): S = lltype.GcStruct('S', ('x', lltype.Signed)) p = lltype.malloc(S, immortal=True) p.x = 42 - def func(p): + # + def funcget(p): return p.x - interp, graph = get_interpreter(func, [p]) + interp, graph = get_interpreter(funcget, [p]) + py.test.raises(ForbiddenInstructionInSTMMode, + eval_stm_graph, interp, graph, [p], + stm_mode="regular_transaction") + # + def funcset(p): + p.x = 43 + interp, graph = get_interpreter(funcset, 
[p]) py.test.raises(ForbiddenInstructionInSTMMode, eval_stm_graph, interp, graph, [p], stm_mode="regular_transaction") -def test_stm_getfield(): - S = lltype.GcStruct('S', ('x', lltype.Signed)) +def test_stm_getfield_stm_setfield(): + S = lltype.GcStruct('S', ('x', lltype.Signed), ('y', lltype.Signed)) p = lltype.malloc(S, immortal=True) p.x = 42 def func(p): - return rstm.stm_getfield(p, 'x') + llstm.stm_setfield(p, 'y', 43) + return llstm.stm_getfield(p, 'x') interp, graph = get_interpreter(func, [p]) - # forbidden in "not_in_transaction" mode - py.test.raises(ForbiddenInstructionInSTMMode, - eval_stm_graph, interp, graph, [p], - stm_mode="not_in_transaction") - # works in "regular_transaction" mode - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - assert res == 42 - # works in "inevitable_transaction" mode - res = eval_stm_graph(interp, graph, [p], stm_mode="inevitable_transaction") - assert res == 42 + # works in all modes + for mode in ALL_STM_MODES: + p.y = 0 + res = eval_stm_graph(interp, graph, [p], stm_mode=mode) + assert res == 42 + assert p.y == 43 -def test_stm_getarrayitem(): +def test_stm_getarrayitem_stm_setarrayitem(): A = lltype.GcArray(lltype.Signed) p = lltype.malloc(A, 5, immortal=True) p[3] = 42 def func(p): - return rstm.stm_getarrayitem(p, 3) + llstm.stm_setarrayitem(p, 2, 43) + return llstm.stm_getarrayitem(p, 3) interp, graph = get_interpreter(func, [p]) - # forbidden in "not_in_transaction" mode - py.test.raises(ForbiddenInstructionInSTMMode, - eval_stm_graph, interp, graph, [p], - stm_mode="not_in_transaction") - # works in "regular_transaction" mode - res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction") - assert res == 42 - # works in "inevitable_transaction" mode - res = eval_stm_graph(interp, graph, [p], stm_mode="inevitable_transaction") - assert res == 42 + # works in all modes + for mode in ALL_STM_MODES: + p[2] = 0 + res = eval_stm_graph(interp, graph, [p], stm_mode=mode) + assert res 
== 42 + assert p[2] == 43 def test_getfield_immutable(): S = lltype.GcStruct('S', ('x', lltype.Signed), hints = {'immutable': True}) @@ -76,66 +80,13 @@ res = eval_stm_graph(interp, graph, [p], stm_mode=mode) assert res == 42 -def test_begin_commit_transaction(): - S = lltype.GcStruct('S', ('x', lltype.Signed)) - p = lltype.malloc(S, immortal=True) - p.x = 42 - def func(p): - rstm.begin_transaction() - res = rstm.stm_getfield(p, 'x') - rstm.commit_transaction() - return res - interp, graph = get_interpreter(func, [p]) - res = eval_stm_graph(interp, graph, [p]) - assert res == 42 - -def test_call_and_return_with_regular_transaction(): - def g(): - pass - g._dont_inline_ = True +def test_become_inevitable(): def func(): - rstm.begin_transaction() - g() - rstm.commit_transaction() + llstm.stm_become_inevitable("foobar!") interp, graph = get_interpreter(func, []) - eval_stm_graph(interp, graph, []) - -def test_cannot_return_with_regular_transaction(): - def g(): - rstm.begin_transaction() - g._dont_inline_ = True - def func(): - g() - rstm.commit_transaction() - interp, graph = get_interpreter(func, []) - py.test.raises(ReturnWithTransactionActive, - eval_stm_graph, interp, graph, []) - -def test_cannot_raise_with_regular_transaction(): - def g(): - rstm.begin_transaction() - raise ValueError - g._dont_inline_ = True - def func(): - try: - g() - except ValueError: - pass - rstm.commit_transaction() - interp, graph = get_interpreter(func, []) - py.test.raises(ReturnWithTransactionActive, - eval_stm_graph, interp, graph, []) - -def test_transaction_boundary(): - def func(n): - if n > 5: - rstm.transaction_boundary() - interp, graph = get_interpreter(func, [2]) - eval_stm_graph(interp, graph, [10], - stm_mode="regular_transaction", - final_stm_mode="inevitable_transaction", - automatic_promotion=True) - eval_stm_graph(interp, graph, [1], - stm_mode="regular_transaction", - final_stm_mode="regular_transaction", - automatic_promotion=True) + 
py.test.raises(ForbiddenInstructionInSTMMode, + eval_stm_graph, interp, graph, [], + stm_mode="not_in_transaction") + eval_stm_graph(interp, graph, [], stm_mode="regular_transaction", + final_stm_mode="inevitable_transaction") + eval_stm_graph(interp, graph, [], stm_mode="inevitable_transaction") From noreply at buildbot.pypy.org Mon Jan 16 16:30:58 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 16 Jan 2012 16:30:58 +0100 (CET) Subject: [pypy-commit] pypy stm: (arigo, antocuni): fix test_getfield_all_sizes, and make sure we test both the cases of stm_getfield inside and outside a transaction Message-ID: <20120116153058.1C89D820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: stm Changeset: r51345:a81e51b21e06 Date: 2012-01-16 16:06 +0100 http://bitbucket.org/pypy/pypy/changeset/a81e51b21e06/ Log: (arigo, antocuni): fix test_getfield_all_sizes, and make sure we test both the cases of stm_getfield inside and outside a transaction diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -676,6 +676,10 @@ void* stm_perform_transaction(void*(*callback)(void*), void *arg) { void *result; +#ifdef RPY_STM_ASSERT + /* you need to call descriptor_init() before calling stm_perform_transaction */ + assert(thread_descriptor != NULL); +#endif STM_begin_transaction(); result = callback(arg); stm_commit_transaction(); diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -1,6 +1,6 @@ import time from pypy.module.thread import ll_thread -from pypy.translator.stm import rstm +#from pypy.translator.stm import rstm class Node: diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -2,8 
+2,11 @@ from pypy.rlib.rarithmetic import r_longlong, r_singlefloat from pypy.translator.stm.test.test_transform import CompiledSTMTests #from pypy.rlib import rstm +from pypy.translator.stm._rffi_stm import (CALLBACK, perform_transaction, + descriptor_init, descriptor_done) from pypy.translator.c.test.test_standalone import StandaloneTests from pypy.rlib.debug import debug_print +from pypy.rpython.annlowlevel import llhelper class TestRStm(object): @@ -78,7 +81,7 @@ return a a_prebuilt = make_a_1() -def do_stm_getfield(argv): +def _play_with_getfield(dummy_arg): a = a_prebuilt assert a.x == -611 assert a.c1 == '/' @@ -89,7 +92,7 @@ assert a.f == rf1 assert float(a.sa) == float(rs1a) assert float(a.sb) == float(rs1b) - return 0 + return lltype.nullptr(rffi.VOIDP.TO) def do_stm_setfield(argv): a = a_prebuilt @@ -241,6 +244,19 @@ class TestFuncGen(CompiledSTMTests): def test_getfield_all_sizes(self): + def do_stm_getfield(argv): + _play_with_getfield(None) + return 0 + t, cbuilder = self.compile(do_stm_getfield) + cbuilder.cmdexec('') + + def test_getfield_all_sizes_inside_transaction(self): + def do_stm_getfield(argv): + callback = llhelper(CALLBACK, _play_with_getfield) + descriptor_init() + perform_transaction(callback, lltype.nullptr(rffi.VOIDP.TO)) + descriptor_done() + return 0 t, cbuilder = self.compile(do_stm_getfield) cbuilder.cmdexec('') From noreply at buildbot.pypy.org Mon Jan 16 16:30:59 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 16 Jan 2012 16:30:59 +0100 (CET) Subject: [pypy-commit] pypy stm: (arigo, antocuni): fix test_setfield_all_sizes, and make sure we test both the cases of stm_setfield inside and outside a transaction Message-ID: <20120116153059.41A47820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: stm Changeset: r51346:6b3914441db1 Date: 2012-01-16 16:18 +0100 http://bitbucket.org/pypy/pypy/changeset/6b3914441db1/ Log: (arigo, antocuni): fix test_setfield_all_sizes, and make sure we test both the cases of 
stm_setfield inside and outside a transaction diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -67,6 +67,8 @@ rs1b = r_singlefloat(40.121) rs2b = r_singlefloat(-9e9) +NULL = lltype.nullptr(rffi.VOIDP.TO) + def make_a_1(): a = lltype.malloc(A, immortal=True) a.x = -611 @@ -92,9 +94,9 @@ assert a.f == rf1 assert float(a.sa) == float(rs1a) assert float(a.sb) == float(rs1b) - return lltype.nullptr(rffi.VOIDP.TO) - -def do_stm_setfield(argv): + return NULL + +def _play_with_setfields(dummy_arg): a = a_prebuilt # a.x = 12871981 @@ -111,7 +113,13 @@ a.f = rf2 a.sa = rs2a a.sb = rs2b - # + # read the values which have not been commited yet, but are local to the + # transaction + _check_values_of_fields(dummy_arg) + return NULL + +def _check_values_of_fields(dummy_arg): + a = a_prebuilt assert a.x == 12871981 assert a.c1 == '(' assert a.c2 == '?' @@ -120,19 +128,7 @@ assert a.f == rf2 assert float(a.sa) == float(rs2a) assert float(a.sb) == float(rs2b) - # - rstm.transaction_boundary() - # - assert a.x == 12871981 - assert a.c1 == '(' - assert a.c2 == '?' 
- assert a.c3 == ')' - assert a.l == rll2 - assert a.f == rf2 - assert float(a.sa) == float(rs2a) - assert float(a.sb) == float(rs2b) - # - return 0 + return NULL def make_array(OF): @@ -254,13 +250,29 @@ def do_stm_getfield(argv): callback = llhelper(CALLBACK, _play_with_getfield) descriptor_init() - perform_transaction(callback, lltype.nullptr(rffi.VOIDP.TO)) + perform_transaction(callback, NULL) descriptor_done() return 0 t, cbuilder = self.compile(do_stm_getfield) cbuilder.cmdexec('') def test_setfield_all_sizes(self): + def do_stm_setfield(argv): + _play_with_setfields(None) + return 0 + t, cbuilder = self.compile(do_stm_setfield) + cbuilder.cmdexec('') + + def test_setfield_all_sizes_inside_transaction(self): + def do_stm_setfield(argv): + callback1 = llhelper(CALLBACK, _play_with_setfields) + callback2 = llhelper(CALLBACK, _check_values_of_fields) + descriptor_init() + perform_transaction(callback1, NULL) + # read values which aren't local to the transaction + perform_transaction(callback2, NULL) + descriptor_done() + return 0 t, cbuilder = self.compile(do_stm_setfield) cbuilder.cmdexec('') From noreply at buildbot.pypy.org Mon Jan 16 16:31:00 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 16 Jan 2012 16:31:00 +0100 (CET) Subject: [pypy-commit] pypy stm: (arigo, antocuni): fix test_getarrayitem_all_sizes, and make sure we test both the cases of stm_getarrayitem inside and outside a transaction Message-ID: <20120116153100.65F91820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: stm Changeset: r51347:08f6815aba37 Date: 2012-01-16 16:20 +0100 http://bitbucket.org/pypy/pypy/changeset/08f6815aba37/ Log: (arigo, antocuni): fix test_getarrayitem_all_sizes, and make sure we test both the cases of stm_getarrayitem inside and outside a transaction diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ 
-152,11 +152,12 @@ array[i] = rffi.cast(lltype.typeOf(array).TO.OF, newvalues[i]) change._annspecialcase_ = 'specialize:ll' -def do_stm_getarrayitem(argv): +def _play_with_getarrayitem(dummy_arg): check(prebuilt_array_signed, [1, 10, -1, -10, 42]) check(prebuilt_array_char, [chr(1), chr(10), chr(255), chr(246), chr(42)]) - return 0 + return NULL + def do_stm_setarrayitem(argv): change(prebuilt_array_signed, [500000, -10000000, 3]) @@ -277,9 +278,23 @@ cbuilder.cmdexec('') def test_getarrayitem_all_sizes(self): + def do_stm_getarrayitem(argv): + _play_with_getarrayitem(None) + return 0 t, cbuilder = self.compile(do_stm_getarrayitem) cbuilder.cmdexec('') + def test_getarrayitem_all_sizes_inside_transaction(self): + def do_stm_getarrayitem(argv): + callback = llhelper(CALLBACK, _play_with_getarrayitem) + descriptor_init() + perform_transaction(callback, NULL) + descriptor_done() + return 0 + t, cbuilder = self.compile(do_stm_getarrayitem) + cbuilder.cmdexec('') + + def test_setarrayitem_all_sizes(self): t, cbuilder = self.compile(do_stm_setarrayitem) cbuilder.cmdexec('') From noreply at buildbot.pypy.org Mon Jan 16 16:31:01 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 16 Jan 2012 16:31:01 +0100 (CET) Subject: [pypy-commit] pypy stm: (arigo, antocuni): fix test_setarrayitem_all_sizes, and make sure we test both the cases of stm_setarrayitem inside and outside a transaction Message-ID: <20120116153101.88CF1820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: stm Changeset: r51348:da7457ea38cd Date: 2012-01-16 16:25 +0100 http://bitbucket.org/pypy/pypy/changeset/da7457ea38cd/ Log: (arigo, antocuni): fix test_setarrayitem_all_sizes, and make sure we test both the cases of stm_setarrayitem inside and outside a transaction diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -159,7 +159,7 @@ return NULL 
-def do_stm_setarrayitem(argv): +def _play_with_setarrayitem_1(dummy_arg): change(prebuilt_array_signed, [500000, -10000000, 3]) check(prebuilt_array_signed, [500000, -10000000, 3, -10, 42]) prebuilt_array_char[0] = 'A' @@ -168,19 +168,19 @@ check(prebuilt_array_char, ['A', chr(10), chr(255), 'B', chr(42)]) prebuilt_array_char[4] = 'C' check(prebuilt_array_char, ['A', chr(10), chr(255), 'B', 'C']) - # - rstm.transaction_boundary() - # + return NULL + +def _play_with_setarrayitem_2(dummy_arg): check(prebuilt_array_char, ['A', chr(10), chr(255), 'B', 'C']) prebuilt_array_char[1] = 'D' check(prebuilt_array_char, ['A', 'D', chr(255), 'B', 'C']) prebuilt_array_char[2] = 'E' check(prebuilt_array_char, ['A', 'D', 'E', 'B', 'C']) - # - rstm.transaction_boundary() - # + return NULL + +def _play_with_setarrayitem_3(dummy_arg): check(prebuilt_array_char, ['A', 'D', 'E', 'B', 'C']) - return 0 + return NULL def make_array_of_structs(T1, T2): @@ -296,6 +296,26 @@ def test_setarrayitem_all_sizes(self): + def do_stm_setarrayitem(argv): + _play_with_setarrayitem_1(None) + _play_with_setarrayitem_2(None) + _play_with_setarrayitem_3(None) + return 0 + t, cbuilder = self.compile(do_stm_setarrayitem) + cbuilder.cmdexec('') + + def test_setarrayitem_all_sizes_inside_transaction(self): + def do_stm_setarrayitem(argv): + callback1 = llhelper(CALLBACK, _play_with_setarrayitem_1) + callback2 = llhelper(CALLBACK, _play_with_setarrayitem_2) + callback3 = llhelper(CALLBACK, _play_with_setarrayitem_3) + # + descriptor_init() + perform_transaction(callback1, NULL) + perform_transaction(callback2, NULL) + perform_transaction(callback3, NULL) + descriptor_done() + return 0 t, cbuilder = self.compile(do_stm_setarrayitem) cbuilder.cmdexec('') From noreply at buildbot.pypy.org Mon Jan 16 16:31:02 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 16 Jan 2012 16:31:02 +0100 (CET) Subject: [pypy-commit] pypy stm: (arigo, antocuni): fix test_getinteriorfield_all_sizes, and make sure we test 
both the cases of stm_getinteriorfield inside and outside a transaction Message-ID: <20120116153102.CE994820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: stm Changeset: r51349:e7847828c029 Date: 2012-01-16 16:27 +0100 http://bitbucket.org/pypy/pypy/changeset/e7847828c029/ Log: (arigo, antocuni): fix test_getinteriorfield_all_sizes, and make sure we test both the cases of stm_getinteriorfield inside and outside a transaction diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -212,11 +212,12 @@ array[i].y = rffi.cast(lltype.typeOf(array).TO.OF.y, newvalues2[i]) change2._annspecialcase_ = 'specialize:ll' -def do_stm_getinteriorfield(argv): +def _play_with_getinteriorfield(dummy_arg): check2(prebuilt_array_signed_signed, [1, -1, -50], [10, 20, -30]) check2(prebuilt_array_char_char, [chr(1), chr(255), chr(206)], [chr(10), chr(20), chr(226)]) - return 0 + return NULL + def do_stm_setinteriorfield(argv): change2(prebuilt_array_signed_signed, [500000, -10000000], [102101202]) @@ -320,9 +321,23 @@ cbuilder.cmdexec('') def test_getinteriorfield_all_sizes(self): + def do_stm_getinteriorfield(argv): + _play_with_getinteriorfield(None) + return 0 t, cbuilder = self.compile(do_stm_getinteriorfield) cbuilder.cmdexec('') + def test_getinteriorfield_all_sizes_inside_transaction(self): + def do_stm_getinteriorfield(argv): + callback = llhelper(CALLBACK, _play_with_getinteriorfield) + descriptor_init() + perform_transaction(callback, NULL) + descriptor_done() + return 0 + t, cbuilder = self.compile(do_stm_getinteriorfield) + cbuilder.cmdexec('') + + def test_setinteriorfield_all_sizes(self): t, cbuilder = self.compile(do_stm_setinteriorfield) cbuilder.cmdexec('') From noreply at buildbot.pypy.org Mon Jan 16 16:31:04 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 16 Jan 2012 16:31:04 +0100 (CET) Subject: 
[pypy-commit] pypy stm: (arigo, antocuni): fix test_setinteriorfield_all_sizes, and make sure we test both the cases of stm_setinteriorfield inside and outside a transaction Message-ID: <20120116153104.00B64820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: stm Changeset: r51350:a2b3f2c9ea45 Date: 2012-01-16 16:29 +0100 http://bitbucket.org/pypy/pypy/changeset/a2b3f2c9ea45/ Log: (arigo, antocuni): fix test_setinteriorfield_all_sizes, and make sure we test both the cases of stm_setinteriorfield inside and outside a transaction diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -219,21 +219,21 @@ return NULL -def do_stm_setinteriorfield(argv): +def _play_with_setinteriorfield_1(dummy_arg): change2(prebuilt_array_signed_signed, [500000, -10000000], [102101202]) check2(prebuilt_array_signed_signed, [500000, -10000000, -50], [102101202, 20, -30]) change2(prebuilt_array_char_char, ['a'], ['b']) check2(prebuilt_array_char_char, ['a', chr(255), chr(206)], ['b', chr(20), chr(226)]) - # - rstm.transaction_boundary() - # + return NULL + +def _play_with_setinteriorfield_2(dummy_arg): check2(prebuilt_array_signed_signed, [500000, -10000000, -50], [102101202, 20, -30]) check2(prebuilt_array_char_char, ['a', chr(255), chr(206)], ['b', chr(20), chr(226)]) - return 0 + return NULL # ____________________________________________________________ @@ -339,5 +339,22 @@ def test_setinteriorfield_all_sizes(self): + def do_stm_setinteriorfield(argv): + _play_with_setinteriorfield_1(None) + _play_with_setinteriorfield_2(None) + return 0 t, cbuilder = self.compile(do_stm_setinteriorfield) cbuilder.cmdexec('') + + def test_setinteriorfield_all_sizes_inside_transaction(self): + def do_stm_setinteriorfield(argv): + callback1 = llhelper(CALLBACK, _play_with_setinteriorfield_1) + callback2 = llhelper(CALLBACK, _play_with_setinteriorfield_2) + 
# + descriptor_init() + perform_transaction(callback1, NULL) + perform_transaction(callback2, NULL) + descriptor_done() + return 0 + t, cbuilder = self.compile(do_stm_setinteriorfield) + cbuilder.cmdexec('') From noreply at buildbot.pypy.org Mon Jan 16 17:51:11 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 17:51:11 +0100 (CET) Subject: [pypy-commit] pypy default: (antocuni, arigo) Message-ID: <20120116165111.349D2820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51351:e244681d320e Date: 2012-01-16 17:49 +0100 http://bitbucket.org/pypy/pypy/changeset/e244681d320e/ Log: (antocuni, arigo) Add support for malloc'ing RPython instances non-movable, just by specifying "_alloc_nonmovable_ = True" on the class. diff --git a/pypy/rpython/lltypesystem/rclass.py b/pypy/rpython/lltypesystem/rclass.py --- a/pypy/rpython/lltypesystem/rclass.py +++ b/pypy/rpython/lltypesystem/rclass.py @@ -510,7 +510,13 @@ ctype = inputconst(Void, self.object_type) cflags = inputconst(Void, flags) vlist = [ctype, cflags] - vptr = llops.genop('malloc', vlist, + cnonmovable = self.classdef.classdesc.read_attribute( + '_alloc_nonmovable_', Constant(False)) + if cnonmovable.value: + opname = 'malloc_nonmovable' + else: + opname = 'malloc' + vptr = llops.genop(opname, vlist, resulttype = Ptr(self.object_type)) ctypeptr = inputconst(CLASSTYPE, self.rclass.getvtable()) self.setfield(vptr, '__class__', ctypeptr, llops) diff --git a/pypy/rpython/test/test_rclass.py b/pypy/rpython/test/test_rclass.py --- a/pypy/rpython/test/test_rclass.py +++ b/pypy/rpython/test/test_rclass.py @@ -1130,6 +1130,18 @@ assert sorted([u]) == [6] # 32-bit types assert sorted([i, r, d, l]) == [2, 3, 4, 5] # 64-bit types + def test_nonmovable(self): + for (nonmovable, opname) in [(True, 'malloc_nonmovable'), + (False, 'malloc')]: + class A(object): + _alloc_nonmovable_ = nonmovable + def f(): + return A() + t, typer, graph = self.gengraph(f, []) + assert summary(graph) == {opname: 
1, + 'cast_pointer': 1, + 'setfield': 1} + class TestOOtype(BaseTestRclass, OORtypeMixin): From noreply at buildbot.pypy.org Mon Jan 16 17:51:23 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 17:51:23 +0100 (CET) Subject: [pypy-commit] pypy stm: hg merge default Message-ID: <20120116165123.872CB820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51352:30f56b4fb5ea Date: 2012-01-16 17:50 +0100 http://bitbucket.org/pypy/pypy/changeset/30f56b4fb5ea/ Log: hg merge default diff too long, truncating to 10000 out of 44586 lines diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. -PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord Stefan Schwarzer @@ -84,34 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe 
St. Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/lib-python/modified-2.7/ctypes/__init__.py b/lib-python/modified-2.7/ctypes/__init__.py --- a/lib-python/modified-2.7/ctypes/__init__.py +++ b/lib-python/modified-2.7/ctypes/__init__.py @@ -351,7 +351,7 @@ self._FuncPtr = _FuncPtr if handle is None: - self._handle = _ffi.CDLL(name) + self._handle = _ffi.CDLL(name, mode) else: self._handle = handle diff --git a/lib-python/modified-2.7/ctypes/test/test_callbacks.py b/lib-python/modified-2.7/ctypes/test/test_callbacks.py --- a/lib-python/modified-2.7/ctypes/test/test_callbacks.py +++ b/lib-python/modified-2.7/ctypes/test/test_callbacks.py @@ -1,5 +1,6 @@ import unittest from ctypes import * +from ctypes.test import xfail import _ctypes_test class Callbacks(unittest.TestCase): @@ -98,6 +99,7 @@ ## self.check_type(c_char_p, "abc") ## self.check_type(c_char_p, "def") + @xfail def test_pyobject(self): o 
= () from sys import getrefcount as grc diff --git a/lib-python/modified-2.7/ctypes/test/test_libc.py b/lib-python/modified-2.7/ctypes/test/test_libc.py --- a/lib-python/modified-2.7/ctypes/test/test_libc.py +++ b/lib-python/modified-2.7/ctypes/test/test_libc.py @@ -25,7 +25,10 @@ lib.my_qsort(chars, len(chars)-1, sizeof(c_char), comparefunc(sort)) self.assertEqual(chars.raw, " ,,aaaadmmmnpppsss\x00") - def test_no_more_xfail(self): + def SKIPPED_test_no_more_xfail(self): + # We decided to not explicitly support the whole ctypes-2.7 + # and instead go for a case-by-case, demand-driven approach. + # So this test is skipped instead of failing. import socket import ctypes.test self.assertTrue(not hasattr(ctypes.test, 'xfail'), diff --git a/lib_pypy/_collections.py b/lib_pypy/_collections.py --- a/lib_pypy/_collections.py +++ b/lib_pypy/_collections.py @@ -379,12 +379,14 @@ class defaultdict(dict): def __init__(self, *args, **kwds): - self.default_factory = None - if 'default_factory' in kwds: - self.default_factory = kwds.pop('default_factory') - elif len(args) > 0 and (callable(args[0]) or args[0] is None): - self.default_factory = args[0] + if len(args) > 0: + default_factory = args[0] args = args[1:] + if not callable(default_factory) and default_factory is not None: + raise TypeError("first argument must be callable") + else: + default_factory = None + self.default_factory = default_factory super(defaultdict, self).__init__(*args, **kwds) def __missing__(self, key): @@ -404,7 +406,7 @@ recurse.remove(id(self)) def copy(self): - return type(self)(self, default_factory=self.default_factory) + return type(self)(self.default_factory, self) def __copy__(self): return self.copy() diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -73,8 +73,12 @@ class Field(object): def __init__(self, name, offset, size, ctype, num, is_bitfield): - for k in ('name', 'offset', 'size', 
'ctype', 'num', 'is_bitfield'): - self.__dict__[k] = locals()[k] + self.__dict__['name'] = name + self.__dict__['offset'] = offset + self.__dict__['size'] = size + self.__dict__['ctype'] = ctype + self.__dict__['num'] = num + self.__dict__['is_bitfield'] = is_bitfield def __setattr__(self, name, value): raise AttributeError(name) diff --git a/lib_pypy/_sha.py b/lib_pypy/_sha.py --- a/lib_pypy/_sha.py +++ b/lib_pypy/_sha.py @@ -1,5 +1,5 @@ #!/usr/bin/env python -# -*- coding: iso-8859-1 +# -*- coding: iso-8859-1 -*- # Note that PyPy contains also a built-in module 'sha' which will hide # this one if compiled in. diff --git a/lib_pypy/_sqlite3.py b/lib_pypy/_sqlite3.py --- a/lib_pypy/_sqlite3.py +++ b/lib_pypy/_sqlite3.py @@ -231,8 +231,10 @@ sqlite.sqlite3_result_text.argtypes = [c_void_p, c_char_p, c_int, c_void_p] sqlite.sqlite3_result_text.restype = None -sqlite.sqlite3_enable_load_extension.argtypes = [c_void_p, c_int] -sqlite.sqlite3_enable_load_extension.restype = c_int +HAS_LOAD_EXTENSION = hasattr(sqlite, "sqlite3_enable_load_extension") +if HAS_LOAD_EXTENSION: + sqlite.sqlite3_enable_load_extension.argtypes = [c_void_p, c_int] + sqlite.sqlite3_enable_load_extension.restype = c_int ########################################## # END Wrapped SQLite C API and constants @@ -708,13 +710,14 @@ from sqlite3.dump import _iterdump return _iterdump(self) - def enable_load_extension(self, enabled): - self._check_thread() - self._check_closed() + if HAS_LOAD_EXTENSION: + def enable_load_extension(self, enabled): + self._check_thread() + self._check_closed() - rc = sqlite.sqlite3_enable_load_extension(self.db, int(enabled)) - if rc != SQLITE_OK: - raise OperationalError("Error enabling load extension") + rc = sqlite.sqlite3_enable_load_extension(self.db, int(enabled)) + if rc != SQLITE_OK: + raise OperationalError("Error enabling load extension") DML, DQL, DDL = range(3) diff --git a/lib_pypy/distributed/socklayer.py b/lib_pypy/distributed/socklayer.py --- 
a/lib_pypy/distributed/socklayer.py +++ b/lib_pypy/distributed/socklayer.py @@ -2,7 +2,7 @@ import py from socket import socket -XXX needs import adaptation as 'green' is removed from py lib for years +raise ImportError("XXX needs import adaptation as 'green' is removed from py lib for years") from py.impl.green.msgstruct import decodemessage, message from socket import socket, AF_INET, SOCK_STREAM import marshal diff --git a/lib_pypy/itertools.py b/lib_pypy/itertools.py --- a/lib_pypy/itertools.py +++ b/lib_pypy/itertools.py @@ -25,7 +25,7 @@ __all__ = ['chain', 'count', 'cycle', 'dropwhile', 'groupby', 'ifilter', 'ifilterfalse', 'imap', 'islice', 'izip', 'repeat', 'starmap', - 'takewhile', 'tee'] + 'takewhile', 'tee', 'compress', 'product'] try: from __pypy__ import builtinify except ImportError: builtinify = lambda f: f diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/__init__.py @@ -0,0 +1,2 @@ +from _numpypy import * +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/fromnumeric.py @@ -0,0 +1,2400 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. +# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) +# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. 
+__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. + + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. + indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. + out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. + + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. 
+ + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. + order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. + + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. + + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raised if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose makes the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modifying the + # initial object.
+ >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties. Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. 
Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. 
+ + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... ) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. 
+ axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. + + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. 
+ axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. + + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + raise NotImplemented('Waiting on interp level method') + + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. 
+ + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. + lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. 
+ + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + Sort by age, then height if ages are equal: + + >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def argsort(a, axis=-1, kind='quicksort', order=None): + """ + Returns the indices that would sort an array. + + Perform an indirect sort along the given axis using the algorithm + specified by the `kind` keyword. It returns an array of indices of the + same shape as `a` that index data along the given axis in sorted order. + + Parameters + ---------- + a : array_like + Array to sort. + axis : int or None, optional + Axis along which to sort. The default is -1 (the last axis). If + None, the flattened array is used. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. + order : list, optional + When `a` is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. + + Returns + ------- + index_array : ndarray, int + Array of indices that sort `a` along the specified axis. + In other words, ``a[index_array]`` yields a sorted `a`. + + See Also + -------- + sort : Describes sorting algorithms used. + lexsort : Indirect stable sort with multiple keys. + ndarray.sort : Inplace sort. + + Notes + ----- + See `sort` for notes on the different sorting algorithms. + + As of NumPy 1.4.0 `argsort` works with real/complex arrays containing + nan values. The enhanced sort order is documented in `sort`. + + Examples + -------- + One dimensional array: + + >>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')]) + >>> x + array([(1, 0), (0, 1)], + dtype=[('x', '<i4'), ('y', '<i4')]) + + >>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed.
+ + See Also + -------- + ndarray.argmax, argmin + amax : The maximum value along a given axis. + unravel_index : Convert a flat index into an index tuple. + + Notes + ----- + In case of multiple occurrences of the maximum values, the indices + corresponding to the first occurrence are returned. + + Examples + -------- + >>> a = np.arange(6).reshape(2,3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> np.argmax(a) + 5 + >>> np.argmax(a, axis=0) + array([1, 1, 1]) + >>> np.argmax(a, axis=1) + array([2, 2]) + + >>> b = np.arange(6) + >>> b[1] = 5 + >>> b + array([0, 5, 2, 3, 4, 5]) + >>> np.argmax(b) # Only the first occurrence is returned. + 1 + + """ + if not hasattr(a, 'argmax'): + a = numpypy.array(a) + return a.argmax() + + +def argmin(a, axis=None): + """ + Return the indices of the minimum values along an axis. + + See Also + -------- + argmax : Similar function. Please refer to `numpy.argmax` for detailed + documentation. + + """ + if not hasattr(a, 'argmin'): + a = numpypy.array(a) + return a.argmin() + + +def searchsorted(a, v, side='left'): + """ + Find indices where elements should be inserted to maintain order. + + Find the indices into a sorted array `a` such that, if the corresponding + elements in `v` were inserted before the indices, the order of `a` would + be preserved. + + Parameters + ---------- + a : 1-D array_like + Input array, sorted in ascending order. + v : array_like + Values to insert into `a`. + side : {'left', 'right'}, optional + If 'left', the index of the first suitable location found is given. If + 'right', return the last such index. If there is no suitable + index, return either 0 or N (where N is the length of `a`). + + Returns + ------- + indices : array of ints + Array of insertion points with the same shape as `v`. + + See Also + -------- + sort : Return a sorted copy of an array. + histogram : Produce histogram from 1-D data. + + Notes + ----- + Binary search is used to find the required insertion points. 
+ + As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing + `nan` values. The enhanced sort order is documented in `sort`. + + Examples + -------- + >>> np.searchsorted([1,2,3,4,5], 3) + 2 + >>> np.searchsorted([1,2,3,4,5], 3, side='right') + 3 + >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]) + array([0, 5, 1, 2]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def resize(a, new_shape): + """ + Return a new array with the specified shape. + + If the new array is larger than the original array, then the new + array is filled with repeated copies of `a`. Note that this behavior + is different from a.resize(new_shape) which fills with zeros instead + of repeated copies of `a`. + + Parameters + ---------- + a : array_like + Array to be resized. + + new_shape : int or tuple of int + Shape of resized array. + + Returns + ------- + reshaped_array : ndarray + The new array is formed from the data in the old array, repeated + if necessary to fill out the required number of elements. The + data are repeated in the order that they are stored in memory. + + See Also + -------- + ndarray.resize : resize an array in-place. + + Examples + -------- + >>> a=np.array([[0,1],[2,3]]) + >>> np.resize(a,(1,4)) + array([[0, 1, 2, 3]]) + >>> np.resize(a,(2,4)) + array([[0, 1, 2, 3], + [0, 1, 2, 3]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def squeeze(a): + """ + Remove single-dimensional entries from the shape of an array. + + Parameters + ---------- + a : array_like + Input data. + + Returns + ------- + squeezed : ndarray + The input array, but with all dimensions of length 1 + removed. Whenever possible, a view on `a` is returned. + + Examples + -------- + >>> x = np.array([[[0], [1], [2]]]) + >>> x.shape + (1, 3, 1) + >>> np.squeeze(x).shape + (3,) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def diagonal(a, offset=0, axis1=0, axis2=1): + """ + Return specified diagonals.
+ + If `a` is 2-D, returns the diagonal of `a` with the given offset, + i.e., the collection of elements of the form ``a[i, i+offset]``. If + `a` has more than two dimensions, then the axes specified by `axis1` + and `axis2` are used to determine the 2-D sub-array whose diagonal is + returned. The shape of the resulting array can be determined by + removing `axis1` and `axis2` and appending an index to the right equal + to the size of the resulting diagonals. + + Parameters + ---------- + a : array_like + Array from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be positive or + negative. Defaults to main diagonal (0). + axis1 : int, optional + Axis to be used as the first axis of the 2-D sub-arrays from which + the diagonals should be taken. Defaults to first axis (0). + axis2 : int, optional + Axis to be used as the second axis of the 2-D sub-arrays from + which the diagonals should be taken. Defaults to second axis (1). + + Returns + ------- + array_of_diagonals : ndarray + If `a` is 2-D, a 1-D array containing the diagonal is returned. + If the dimension of `a` is larger, then an array of diagonals is + returned, "packed" from left-most dimension to right-most (e.g., + if `a` is 3-D, then the diagonals are "packed" along rows). + + Raises + ------ + ValueError + If the dimension of `a` is less than 2. + + See Also + -------- + diag : MATLAB work-a-like for 1-D and 2-D arrays. + diagflat : Create diagonal arrays. + trace : Sum along diagonals. + + Examples + -------- + >>> a = np.arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + A 3-D example: + + >>> a = np.arange(8).reshape(2,2,2); a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0, # Main diagonals of two arrays created by skipping + ... 0, # across the outer(left)-most axis last and + ... 1) # the "middle" (row) axis first. 
+ array([[0, 6], + [1, 7]]) + + The sub-arrays whose main diagonals we just obtained; note that each + corresponds to fixing the right-most (column) axis, and that the + diagonals are "packed" in rows. + + >>> a[:,:,0] # main diagonal is [0 6] + array([[0, 2], + [4, 6]]) + >>> a[:,:,1] # main diagonal is [1 7] + array([[1, 3], + [5, 7]]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): + """ + Return the sum along diagonals of the array. + + If `a` is 2-D, the sum along its diagonal with the given offset + is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i. + + If `a` has more than two dimensions, then the axes specified by axis1 and + axis2 are used to determine the 2-D sub-arrays whose traces are returned. + The shape of the resulting array is the same as that of `a` with `axis1` + and `axis2` removed. + + Parameters + ---------- + a : array_like + Input array, from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to 0. + axis1, axis2 : int, optional + Axes to be used as the first and second axis of the 2-D sub-arrays + from which the diagonals should be taken. Defaults are the first two + axes of `a`. + dtype : dtype, optional + Determines the data-type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and `a` is + of integer type of precision less than the default integer + precision, then the default integer precision is used. Otherwise, + the precision is the same as that of `a`. + out : ndarray, optional + Array into which the output is placed. Its type is preserved and + it must be of the right shape to hold the output. + + Returns + ------- + sum_along_diagonals : ndarray + If `a` is 2-D, the sum along the diagonal is returned. 
If `a` has + larger dimensions, then an array of sums along diagonals is returned. + + See Also + -------- + diag, diagonal, diagflat + + Examples + -------- + >>> np.trace(np.eye(3)) + 3.0 + >>> a = np.arange(8).reshape((2,2,2)) + >>> np.trace(a) + array([6, 8]) + + >>> a = np.arange(24).reshape((2,2,2,3)) + >>> np.trace(a).shape + (2, 3) + + """ + raise NotImplemented('Waiting on interp level method') + +def ravel(a, order='C'): + """ + Return a flattened array. + + A 1-D array, containing the elements of the input, is returned. A copy is + made only if needed. + + Parameters + ---------- + a : array_like + Input array. The elements in ``a`` are read in the order specified by + `order`, and packed as a 1-D array. + order : {'C','F', 'A', 'K'}, optional + The elements of ``a`` are read in this order. 'C' means to view + the elements in C (row-major) order. 'F' means to view the elements + in Fortran (column-major) order. 'A' means to view the elements + in 'F' order if a is Fortran contiguous, 'C' order otherwise. + 'K' means to view the elements in the order they occur in memory, + except for reversing the data when strides are negative. + By default, 'C' order is used. + + Returns + ------- + 1d_array : ndarray + Output of the same dtype as `a`, and of shape ``(a.size(),)``. + + See Also + -------- + ndarray.flat : 1-D iterator over an array. + ndarray.flatten : 1-D array copy of the elements of an array + in row-major order. + + Notes + ----- + In row-major order, the row index varies the slowest, and the column + index the quickest. This can be generalized to multiple dimensions, + where row-major order implies that the index along the first axis + varies slowest, and the index along the last quickest. The opposite holds + for Fortran-, or column-major, mode. + + Examples + -------- + It is equivalent to ``reshape(-1, order=order)``. 
+ + >>> x = np.array([[1, 2, 3], [4, 5, 6]]) + >>> print np.ravel(x) + [1 2 3 4 5 6] + + >>> print x.reshape(-1) + [1 2 3 4 5 6] + + >>> print np.ravel(x, order='F') + [1 4 2 5 3 6] + + When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering: + + >>> print np.ravel(x.T) + [1 4 2 5 3 6] + >>> print np.ravel(x.T, order='A') + [1 2 3 4 5 6] + + When ``order`` is 'K', it will preserve orderings that are neither 'C' + nor 'F', but won't reverse axes: + + >>> a = np.arange(3)[::-1]; a + array([2, 1, 0]) + >>> a.ravel(order='C') + array([2, 1, 0]) + >>> a.ravel(order='K') + array([2, 1, 0]) + + >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a + array([[[ 0, 2, 4], + [ 1, 3, 5]], + [[ 6, 8, 10], + [ 7, 9, 11]]]) + >>> a.ravel(order='C') + array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11]) + >>> a.ravel(order='K') + array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) + + """ + raise NotImplemented('Waiting on interp level method') + + +def nonzero(a): + """ + Return the indices of the elements that are non-zero. + + Returns a tuple of arrays, one for each dimension of `a`, containing + the indices of the non-zero elements in that dimension. The + corresponding non-zero values can be obtained with:: + + a[nonzero(a)] + + To group the indices by element, rather than dimension, use:: + + transpose(nonzero(a)) + + The result of this is always a 2-D array, with a row for + each non-zero element. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + tuple_of_arrays : tuple + Indices of elements that are non-zero. + + See Also + -------- + flatnonzero : + Return indices that are non-zero in the flattened version of the input + array. + ndarray.nonzero : + Equivalent ndarray method. + count_nonzero : + Counts the number of non-zero elements in the input array. 
+ + Examples + -------- + >>> x = np.eye(3) + >>> x + array([[ 1., 0., 0.], + [ 0., 1., 0.], + [ 0., 0., 1.]]) + >>> np.nonzero(x) + (array([0, 1, 2]), array([0, 1, 2])) + + >>> x[np.nonzero(x)] + array([ 1., 1., 1.]) + >>> np.transpose(np.nonzero(x)) + array([[0, 0], + [1, 1], + [2, 2]]) + + A common use for ``nonzero`` is to find the indices of an array, where + a condition is True. Given an array `a`, the condition `a` > 3 is a + boolean array and since False is interpreted as 0, np.nonzero(a > 3) + yields the indices of the `a` where the condition is true. + + >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]]) + >>> a > 3 + array([[False, False, False], + [ True, True, True], + [ True, True, True]], dtype=bool) + >>> np.nonzero(a > 3) + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + The ``nonzero`` method of the boolean array can also be called. + + >>> (a > 3).nonzero() + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + """ + raise NotImplemented('Waiting on interp level method') + + +def shape(a): + """ + Return the shape of an array. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + shape : tuple of ints + The elements of the shape tuple give the lengths of the + corresponding array dimensions. + + See Also + -------- + alen + ndarray.shape : Equivalent array method. + + Examples + -------- + >>> np.shape(np.eye(3)) + (3, 3) + >>> np.shape([[1, 2]]) + (1, 2) + >>> np.shape([0]) + (1,) + >>> np.shape(0) + () + + >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + >>> np.shape(a) + (2,) + >>> a.shape + (2,) + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape + + +def compress(condition, a, axis=None, out=None): + """ + Return selected slices of an array along given axis. + + When working along a given axis, a slice along that axis is returned in + `output` for each index where `condition` evaluates to True. 
+    When working on a 1-D array, `compress` is equivalent to `extract`.
+
+    Parameters
+    ----------
+    condition : 1-D array of bools
+        Array that selects which entries to return. If len(condition)
+        is less than the size of `a` along the given axis, then output is
+        truncated to the length of the condition array.
+    a : array_like
+        Array from which to extract a part.
+    axis : int, optional
+        Axis along which to take slices. If None (default), work on the
+        flattened array.
+    out : ndarray, optional
+        Output array. Its type is preserved and it must be of the right
+        shape to hold the output.
+
+    Returns
+    -------
+    compressed_array : ndarray
+        A copy of `a` without the slices along axis for which `condition`
+        is false.
+
+    See Also
+    --------
+    take, choose, diag, diagonal, select
+    ndarray.compress : Equivalent method.
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Examples
+    --------
+    >>> a = np.array([[1, 2], [3, 4], [5, 6]])
+    >>> a
+    array([[1, 2],
+           [3, 4],
+           [5, 6]])
+    >>> np.compress([0, 1], a, axis=0)
+    array([[3, 4]])
+    >>> np.compress([False, True, True], a, axis=0)
+    array([[3, 4],
+           [5, 6]])
+    >>> np.compress([False, True], a, axis=1)
+    array([[2],
+           [4],
+           [6]])
+
+    Working on the flattened array does not return slices along an axis but
+    selects elements.
+
+    >>> np.compress([False, True], a)
+    array([2])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def clip(a, a_min, a_max, out=None):
+    """
+    Clip (limit) the values in an array.
+
+    Given an interval, values outside the interval are clipped to
+    the interval edges. For example, if an interval of ``[0, 1]``
+    is specified, values smaller than 0 become 0, and values larger
+    than 1 become 1.
+
+    Parameters
+    ----------
+    a : array_like
+        Array containing elements to clip.
+    a_min : scalar or array_like
+        Minimum value.
+    a_max : scalar or array_like
+        Maximum value. If `a_min` or `a_max` are array_like, then they will
+        be broadcasted to the shape of `a`.
+    out : ndarray, optional
+        The results will be placed in this array. It may be the input
+        array for in-place clipping. `out` must be of the right shape
+        to hold the output. Its type is preserved.
+
+    Returns
+    -------
+    clipped_array : ndarray
+        An array with the elements of `a`, but where values
+        < `a_min` are replaced with `a_min`, and those > `a_max`
+        with `a_max`.
+
+    See Also
+    --------
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Examples
+    --------
+    >>> a = np.arange(10)
+    >>> np.clip(a, 1, 8)
+    array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8])
+    >>> a
+    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+    >>> np.clip(a, 3, 6, out=a)
+    array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
+    >>> a = np.arange(10)
+    >>> a
+    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+    >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8)
+    array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def sum(a, axis=None, dtype=None, out=None):
+    """
+    Sum of array elements over a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Elements to sum.
+    axis : integer, optional
+        Axis over which the sum is taken. By default `axis` is None,
+        and all elements are summed.
+    dtype : dtype, optional
+        The type of the returned array and of the accumulator in which
+        the elements are summed. By default, the dtype of `a` is used.
+        An exception is when `a` has an integer type with less precision
+        than the default platform integer. In that case, the default
+        platform integer is used instead.
+    out : ndarray, optional
+        Array into which the output is placed. By default, a new array is
+        created. If `out` is given, it must be of the appropriate shape
+        (the shape of `a` with `axis` removed, i.e.,
+        ``numpy.delete(a.shape, axis)``). Its type is preserved. See
+        `doc.ufuncs` (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    sum_along_axis : ndarray
+        An array with the same shape as `a`, with the specified
+        axis removed.
+        If `a` is a 0-d array, or if `axis` is None, a scalar
+        is returned. If an output array is specified, a reference to
+        `out` is returned.
+
+    See Also
+    --------
+    ndarray.sum : Equivalent method.
+
+    cumsum : Cumulative sum of array elements.
+
+    trapz : Integration of array values using the composite trapezoidal rule.
+
+    mean, average
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow.
+
+    Examples
+    --------
+    >>> np.sum([0.5, 1.5])
+    2.0
+    >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
+    1
+    >>> np.sum([[0, 1], [0, 5]])
+    6
+    >>> np.sum([[0, 1], [0, 5]], axis=0)
+    array([0, 6])
+    >>> np.sum([[0, 1], [0, 5]], axis=1)
+    array([1, 5])
+
+    If the accumulator is too small, overflow occurs:
+
+    >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
+    -128
+
+    """
+    if not hasattr(a, "sum"):
+        a = numpypy.array(a)
+    return a.sum()
+
+
+def product(a, axis=None, dtype=None, out=None):
+    """
+    Return the product of array elements over a given axis.
+
+    See Also
+    --------
+    prod : equivalent function; see for details.
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def sometrue(a, axis=None, out=None):
+    """
+    Check whether some values are true.
+
+    Refer to `any` for full documentation.
+
+    See Also
+    --------
+    any : equivalent function
+
+    """
+    if not hasattr(a, 'any'):
+        a = numpypy.array(a)
+    return a.any()
+
+
+def alltrue(a, axis=None, out=None):
+    """
+    Check if all elements of input array are true.
+
+    See Also
+    --------
+    numpy.all : Equivalent function; see for details.
+
+    """
+    if not hasattr(a, 'all'):
+        a = numpypy.array(a)
+    return a.all()
+
+
+def any(a, axis=None, out=None):
+    """
+    Test whether any array element along a given axis evaluates to True.
+
+    Returns a single boolean unless `axis` is not ``None``.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array or object that can be converted to an array.
+    axis : int, optional
+        Axis along which a logical OR is performed.
The default + (`axis` = `None`) is to perform a logical OR over a flattened + input array. `axis` may be negative, in which case it counts + from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output and its type is preserved + (e.g., if it is of type float, then it will remain so, returning + 1.0 for True and 0.0 for False, regardless of the type of `a`). + See `doc.ufuncs` (Section "Output arguments") for details. + + Returns + ------- + any : bool or ndarray + A new boolean or `ndarray` is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.any : equivalent method + + all : Test whether all elements along a given axis evaluate to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity evaluate + to `True` because these are not equal to zero. + + Examples + -------- + >>> np.any([[True, False], [True, True]]) + True + + >>> np.any([[True, False], [False, False]], axis=0) + array([ True, False], dtype=bool) + + >>> np.any([-1, 0, 5]) + True + + >>> np.any(np.nan) + True + + >>> o=np.array([False]) + >>> z=np.any([-1, 4, 5], out=o) + >>> z, o + (array([ True], dtype=bool), array([ True], dtype=bool)) + >>> # Check now that z is a reference to o + >>> z is o + True + >>> id(z), id(o) # identity of z and o # doctest: +SKIP + (191614240, 191614240) + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def all(a,axis=None, out=None): + """ + Test whether all array elements along a given axis evaluate to True. + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical AND is performed. + The default (`axis` = `None`) is to perform a logical AND + over a flattened input array. 
`axis` may be negative, in which + case it counts from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. + It must have the same shape as the expected output and its + type is preserved (e.g., if ``dtype(out)`` is float, the result + will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section + "Output arguments") for more details. + + Returns + ------- + all : ndarray, bool + A new boolean or array is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.all : equivalent method + + any : Test whether any element along a given axis evaluates to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity + evaluate to `True` because these are not equal to zero. + + Examples + -------- + >>> np.all([[True,False],[True,True]]) + False + + >>> np.all([[True,False],[True,True]], axis=0) + array([ True, False], dtype=bool) + + >>> np.all([-1, 4, 5]) + True + + >>> np.all([1.0, np.nan]) + True + + >>> o=np.array([False]) + >>> z=np.all([-1, 4, 5], out=o) + >>> id(z), id(o), z # doctest: +SKIP + (28293632, 28293632, array([ True], dtype=bool)) + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + + +def cumsum (a, axis=None, dtype=None, out=None): + """ + Return the cumulative sum of the elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative sum is computed. The default + (None) is to compute the cumsum over the flattened array. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults + to the dtype of `a`, unless `a` has an integer dtype with a + precision less than that of the default platform integer. In + that case, the default platform integer is used. 
+    out : ndarray, optional
+        Alternative output array in which to place the result. It must
+        have the same shape and buffer length as the expected output
+        but the type will be cast if necessary. See `doc.ufuncs`
+        (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    cumsum_along_axis : ndarray.
+        A new array holding the result is returned unless `out` is
+        specified, in which case a reference to `out` is returned. The
+        result has the same size as `a`, and the same shape as `a` if
+        `axis` is not None or `a` is a 1-d array.
+
+    See Also
+    --------
+    sum : Sum array elements.
+
+    trapz : Integration of array values using the composite trapezoidal rule.
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow.
+
+    Examples
+    --------
+    >>> a = np.array([[1,2,3], [4,5,6]])
+    >>> a
+    array([[1, 2, 3],
+           [4, 5, 6]])
+    >>> np.cumsum(a)
+    array([ 1,  3,  6, 10, 15, 21])
+    >>> np.cumsum(a, dtype=float)  # specifies type of output value(s)
+    array([  1.,   3.,   6.,  10.,  15.,  21.])
+
+    >>> np.cumsum(a,axis=0)  # sum over rows for each of the 3 columns
+    array([[1, 2, 3],
+           [5, 7, 9]])
+    >>> np.cumsum(a,axis=1)  # sum over columns for each of the 2 rows
+    array([[ 1,  3,  6],
+           [ 4,  9, 15]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def cumproduct(a, axis=None, dtype=None, out=None):
+    """
+    Return the cumulative product over the given axis.
+
+    See Also
+    --------
+    cumprod : equivalent function; see for details.
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def ptp(a, axis=None, out=None):
+    """
+    Range of values (maximum - minimum) along an axis.
+
+    The name of the function comes from the acronym for 'peak to peak'.
+
+    Parameters
+    ----------
+    a : array_like
+        Input values.
+    axis : int, optional
+        Axis along which to find the peaks. By default, flatten the
+        array.
+    out : array_like
+        Alternative output array in which to place the result.
+        It must have the same shape and buffer length as the expected
+        output, but the type of the output values will be cast if necessary.
+
+    Returns
+    -------
+    ptp : ndarray
+        A new array holding the result, unless `out` was
+        specified, in which case a reference to `out` is returned.
+
+    Examples
+    --------
+    >>> x = np.arange(4).reshape((2,2))
+    >>> x
+    array([[0, 1],
+           [2, 3]])
+
+    >>> np.ptp(x, axis=0)
+    array([2, 2])
+
+    >>> np.ptp(x, axis=1)
+    array([1, 1])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def amax(a, axis=None, out=None):
+    """
+    Return the maximum of an array or maximum along an axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    axis : int, optional
+        Axis along which to operate. By default flattened input is used.
+    out : ndarray, optional
+        Alternate output array in which to place the result. Must be of
+        the same shape and buffer length as the expected output. See
+        `doc.ufuncs` (Section "Output arguments") for more details.
+
+    Returns
+    -------
+    amax : ndarray or scalar
+        Maximum of `a`. If `axis` is None, the result is a scalar value.
+        If `axis` is given, the result is an array of dimension
+        ``a.ndim - 1``.
+
+    See Also
+    --------
+    nanmax : NaN values are ignored instead of being propagated.
+    fmax : same behavior as the C99 fmax function.
+    argmax : indices of the maximum values.
+
+    Notes
+    -----
+    NaN values are propagated, that is if at least one item is NaN, the
+    corresponding max value will be NaN as well. To ignore NaN values
+    (MATLAB behavior), please use nanmax.
+ + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amax(a) + 3 + >>> np.amax(a, axis=0) + array([2, 3]) + >>> np.amax(a, axis=1) + array([1, 3]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amax(b) + nan + >>> np.nanmax(b) + 4.0 + + """ + if not hasattr(a, "max"): + a = numpypy.array(a) + return a.max() + + +def amin(a, axis=None, out=None): + """ + Return the minimum of an array or minimum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default a flattened input is used. + out : ndarray, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + See `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amin : ndarray + A new array or a scalar array with the result. + + See Also + -------- + nanmin: nan values are ignored instead of being propagated + fmin: same behavior as the C99 fmin function + argmin: Return the indices of the minimum values. + + amax, nanmax, fmax + + Notes + ----- + NaN values are propagated, that is if at least one item is nan, the + corresponding min value will be nan as well. To ignore NaN values (matlab + behavior), please use nanmin. + + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amin(a) # Minimum of the flattened array + 0 + >>> np.amin(a, axis=0) # Minima along the first axis + array([0, 1]) + >>> np.amin(a, axis=1) # Minima along the second axis + array([0, 2]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amin(b) + nan + >>> np.nanmin(b) + 0.0 + + """ + # amin() is equivalent to min() + if not hasattr(a, 'min'): + a = numpypy.array(a) + return a.min() + +def alen(a): + """ + Return the length of the first dimension of the input array. 
+ + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + l : int + Length of the first dimension of `a`. + + See Also + -------- + shape, size + + Examples + -------- + >>> a = np.zeros((7,4,5)) + >>> a.shape[0] + 7 + >>> np.alen(a) + 7 + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape[0] + + +def prod(a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis over which the product is taken. By default, the product + of all elements is calculated. + dtype : data-type, optional + The data-type of the returned array, as well as of the accumulator + in which the elements are multiplied. By default, if `a` is of + integer type, `dtype` is the default platform integer. (Note: if + the type of `a` is unsigned, then so is `dtype`.) Otherwise, + the dtype is the same as that of `a`. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the + output values will be cast if necessary. + + Returns + ------- + product_along_axis : ndarray, see `dtype` parameter above. + An array shaped as `a` but with the specified axis removed. + Returns a reference to `out` if specified. + + See Also + -------- + ndarray.prod : equivalent method + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. 
+    That means that, on a 32-bit platform:
+
+    >>> x = np.array([536870910, 536870910, 536870910, 536870910])
+    >>> np.prod(x) #random
+    16
+
+    Examples
+    --------
+    By default, calculate the product of all elements:
+
+    >>> np.prod([1.,2.])
+    2.0
+
+    Even when the input array is two-dimensional:
+
+    >>> np.prod([[1.,2.],[3.,4.]])
+    24.0
+
+    But we can also specify the axis over which to multiply:
+
+    >>> np.prod([[1.,2.],[3.,4.]], axis=1)
+    array([  2.,  12.])
+
+    If the type of `x` is unsigned, then the output type is
+    the unsigned platform integer:
+
+    >>> x = np.array([1, 2, 3], dtype=np.uint8)
+    >>> np.prod(x).dtype == np.uint
+    True
+
+    If `x` is of a signed integer type, then the output type
+    is the default platform integer:
+
+    >>> x = np.array([1, 2, 3], dtype=np.int8)
+    >>> np.prod(x).dtype == np.int
+    True
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def cumprod(a, axis=None, dtype=None, out=None):
+    """
+    Return the cumulative product of elements along a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array.
+    axis : int, optional
+        Axis along which the cumulative product is computed. By default
+        the input is flattened.
+    dtype : dtype, optional
+        Type of the returned array, as well as of the accumulator in which
+        the elements are multiplied. If *dtype* is not specified, it
+        defaults to the dtype of `a`, unless `a` has an integer dtype with
+        a precision less than that of the default platform integer. In
+        that case, the default platform integer is used instead.
+    out : ndarray, optional
+        Alternative output array in which to place the result. It must
+        have the same shape and buffer length as the expected output
+        but the type of the resulting values will be cast if necessary.
+
+    Returns
+    -------
+    cumprod : ndarray
+        A new array holding the result is returned unless `out` is
+        specified, in which case a reference to out is returned.
+
+    See Also
+    --------
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Notes
+    -----
+    Arithmetic is modular when using integer types, and no error is
+    raised on overflow.
+
+    Examples
+    --------
+    >>> a = np.array([1,2,3])
+    >>> np.cumprod(a) # intermediate results 1, 1*2
+    ...               # total product 1*2*3 = 6
+    array([1, 2, 6])
+    >>> a = np.array([[1, 2, 3], [4, 5, 6]])
+    >>> np.cumprod(a, dtype=float) # specify type of output
+    array([   1.,    2.,    6.,   24.,  120.,  720.])
+
+    The cumulative product for each column (i.e., over the rows) of `a`:
+
+    >>> np.cumprod(a, axis=0)
+    array([[ 1,  2,  3],
+           [ 4, 10, 18]])
+
+    The cumulative product for each row (i.e. over the columns) of `a`:
+
+    >>> np.cumprod(a,axis=1)
+    array([[  1,   2,   6],
+           [  4,  20, 120]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def ndim(a):
+    """
+    Return the number of dimensions of an array.
+
+    Parameters
+    ----------
+    a : array_like
+        Input array. If it is not already an ndarray, a conversion is
+        attempted.
+
+    Returns
+    -------
+    number_of_dimensions : int
+        The number of dimensions in `a`. Scalars are zero-dimensional.
+
+    See Also
+    --------
+    ndarray.ndim : equivalent method
+    shape : dimensions of array
+    ndarray.shape : dimensions of array
+
+    Examples
+    --------
+    >>> np.ndim([[1,2,3],[4,5,6]])
+    2
+    >>> np.ndim(np.array([[1,2,3],[4,5,6]]))
+    2
+    >>> np.ndim(1)
+    0
+
+    """
+    if not hasattr(a, 'ndim'):
+        a = numpypy.array(a)
+    return a.ndim
+
+
+def rank(a):
+    """
+    Return the number of dimensions of an array.
+
+    If `a` is not already an array, a conversion is attempted.
+    Scalars are zero dimensional.
+
+    Parameters
+    ----------
+    a : array_like
+        Array whose number of dimensions is desired. If `a` is not an
+        array, a conversion is attempted.
+
+    Returns
+    -------
+    number_of_dimensions : int
+        The number of dimensions in the array.
+
+    See Also
+    --------
+    ndim : equivalent function
+    ndarray.ndim : equivalent property
+    shape : dimensions of array
+    ndarray.shape : dimensions of array
+
+    Notes
+    -----
+    In the old Numeric package, `rank` was the term used for the number of
+    dimensions, but in Numpy `ndim` is used instead.
+
+    Examples
+    --------
+    >>> np.rank([1,2,3])
+    1
+    >>> np.rank(np.array([[1,2,3],[4,5,6]]))
+    2
+    >>> np.rank(1)
+    0
+
+    """
+    if not hasattr(a, 'ndim'):
+        a = numpypy.array(a)
+    return a.ndim
+
+
+def size(a, axis=None):
+    """
+    Return the number of elements along a given axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    axis : int, optional
+        Axis along which the elements are counted. By default, give
+        the total number of elements.
+
+    Returns
+    -------
+    element_count : int
+        Number of elements along the specified axis.
+
+    See Also
+    --------
+    shape : dimensions of array
+    ndarray.shape : dimensions of array
+    ndarray.size : number of elements in array
+
+    Examples
+    --------
+    >>> a = np.array([[1,2,3],[4,5,6]])
+    >>> np.size(a)
+    6
+    >>> np.size(a,1)
+    3
+    >>> np.size(a,0)
+    2
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def around(a, decimals=0, out=None):
+    """
+    Evenly round to the given number of decimals.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+    decimals : int, optional
+        Number of decimal places to round to (default: 0). If
+        decimals is negative, it specifies the number of positions to
+        the left of the decimal point.
+    out : ndarray, optional
+        Alternative output array in which to place the result. It must have
+        the same shape as the expected output, but the type of the output
+        values will be cast if necessary. See `doc.ufuncs` (Section
+        "Output arguments") for details.
+
+    Returns
+    -------
+    rounded_array : ndarray
+        An array of the same type as `a`, containing the rounded values.
+        Unless `out` was specified, a new array is created. A reference to
+        the result is returned.
+
+        The real and imaginary parts of complex numbers are rounded
+        separately. The result of rounding a float is a float.
+
+    See Also
+    --------
+    ndarray.round : equivalent method
+
+    ceil, fix, floor, rint, trunc
+
+    Notes
+    -----
+    For values exactly halfway between rounded decimal values, Numpy
+    rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0,
+    -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due
+    to the inexact representation of decimal fractions in the IEEE
+    floating point standard [1]_ and errors introduced when scaling
+    by powers of ten.
+
+    References
+    ----------
+    .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan,
+           http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
+    .. [2] "How Futile are Mindless Assessments of
+           Roundoff in Floating-Point Computation?", William Kahan,
+           http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
+
+    Examples
+    --------
+    >>> np.around([0.37, 1.64])
+    array([ 0.,  2.])
+    >>> np.around([0.37, 1.64], decimals=1)
+    array([ 0.4,  1.6])
+    >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value
+    array([ 0.,  2.,  2.,  4.,  4.])
+    >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned
+    array([ 1,  2,  3, 11])
+    >>> np.around([1,2,3,11], decimals=-1)
+    array([ 0,  0,  0, 10])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def round_(a, decimals=0, out=None):
+    """
+    Round an array to the given number of decimals.
+
+    Refer to `around` for full documentation.
+
+    See Also
+    --------
+    around : equivalent function
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def mean(a, axis=None, dtype=None, out=None):
+    """
+    Compute the arithmetic mean along the specified axis.
+
+    Returns the average of the array elements. The average is taken over
+    the flattened array by default, otherwise over the specified axis.
+    `float64` intermediate and return values are used for integer inputs.
+ + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. 
+ + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean() + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. 
+ + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. 
+ + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Array containing numbers whose variance is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the variance is computed. The default is to compute + the variance of the flattened array. + dtype : data-type, optional + Type to use in computing the variance. For arrays of integer type + the default is `float32`; for arrays of float types it is the same as + the array type. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output, but the type is cast if + necessary. + ddof : int, optional + "Delta Degrees of Freedom": the divisor used in the calculation is + ``N - ddof``, where ``N`` represents the number of elements. By + default `ddof` is zero. + + Returns + ------- + variance : ndarray, see dtype parameter above + If ``out=None``, returns a new array containing the variance; + otherwise, a reference to the output array is returned. + + See Also + -------- + std : Standard deviation + mean : Average + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e., ``var = mean(abs(x - x.mean())**2)``. + + The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``. + If, however, `ddof` is specified, the divisor ``N - ddof`` is used + instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of a hypothetical infinite population. + ``ddof=0`` provides a maximum likelihood estimate of the variance for + normally distributed variables. + + Note that for complex numbers, the absolute value is taken before + squaring, so that the result is always real and nonnegative. 
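The ``N - ddof`` divisor rule described in the Notes above can be sketched in a few lines of plain Python. This is an illustrative model of the formula only, not the numpypy implementation (which at this point ignores `ddof`); the `var` helper below is hypothetical:

```python
# Illustrative model of the variance formula with Delta Degrees of Freedom.
# Pure Python, independent of numpy/numpypy; `var` here is a hypothetical
# helper, not numpypy's var().
def var(xs, ddof=0):
    n = len(xs)
    mean = sum(xs) / float(n)
    # the divisor is N - ddof, as described in the Notes section
    return sum((x - mean) ** 2 for x in xs) / (n - ddof)

assert var([1.0, 2.0, 3.0, 4.0]) == 1.25                # ddof=0: ML estimate
assert abs(var([1.0, 2.0, 3.0, 4.0], ddof=1) - 5.0 / 3.0) < 1e-12  # unbiased
```

With ``ddof=0`` the result is the maximum-likelihood estimate; ``ddof=1`` gives the unbiased estimator of the population variance.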
+ + For floating-point input, the variance is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for `float32` (see example + below). Specifying a higher-accuracy accumulator using the ``dtype`` + keyword can alleviate this issue. + + Examples + -------- + >>> a = np.array([[1,2],[3,4]]) + >>> np.var(a) + 1.25 + >>> np.var(a,0) + array([ 1., 1.]) + >>> np.var(a,1) + array([ 0.25, 0.25]) + + In single precision, var() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.var(a) + 0.20405951142311096 + + Computing the standard deviation in float64 is more accurate: + + >>> np.var(a, dtype=np.float64) + 0.20249999932997387 + >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 + 0.20250000000000001 + + """ + if not hasattr(a, "var"): + a = numpypy.array(a) + return a.var() diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/test/test_fromnumeric.py @@ -0,0 +1,109 @@ + +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + +class AppTestFromNumeric(BaseNumpyAppTest): + def test_argmax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, argmax + a = arange(6).reshape((2,3)) + assert argmax(a) == 5 + # assert (argmax(a, axis=0) == array([1, 1, 1])).all() + # assert (argmax(a, axis=1) == array([2, 2])).all() + b = arange(6) + b[1] = 5 + assert argmax(b) == 1 + + def test_argmin(self): + # tests adapted from test_argmax + from numpypy import array, arange, argmin + a = arange(6).reshape((2,3)) + assert argmin(a) == 0 + # assert (argmax(a, axis=0) == array([0, 0, 0])).all() + # assert (argmax(a, axis=1) == array([0, 0])).all() + b = arange(6) + b[1] = 0 + assert argmin(b) == 0 + + def test_shape(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy 
import array, identity, shape + assert shape(identity(3)) == (3, 3) + assert shape([[1, 2]]) == (1, 2) + assert shape([0]) == (1,) + assert shape(0) == () + # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + # assert shape(a) == (2,) + + def test_sum(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, sum, ones + assert sum([0.5, 1.5])== 2.0 + assert sum([[0, 1], [0, 5]]) == 6 + # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 + # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + # If the accumulator is too small, overflow occurs: + # assert ones(128, dtype=int8).sum(dtype=int8) == -128 + + def test_amin(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amin + a = arange(4).reshape((2,2)) + assert amin(a) == 0 + # # Minima along the first axis + # assert (amin(a, axis=0) == array([0, 1])).all() + # # Minima along the second axis + # assert (amin(a, axis=1) == array([0, 2])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amin(b) == nan + # assert nanmin(b) == 0.0 + + def test_amax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amax + a = arange(4).reshape((2,2)) + assert amax(a) == 3 + # assert (amax(a, axis=0) == array([2, 3])).all() + # assert (amax(a, axis=1) == array([1, 3])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amax(b) == nan + # assert nanmax(b) == 4.0 + + def test_alen(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, zeros, alen + a = zeros((7,4,5)) + assert a.shape[0] == 7 + assert alen(a) == 7 + + def test_ndim(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, ndim + assert ndim([[1,2,3],[4,5,6]]) == 2 + assert ndim(array([[1,2,3],[4,5,6]])) == 2 + 
assert ndim(1) == 0 + + def test_rank(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, rank + assert rank([[1,2,3],[4,5,6]]) == 2 + assert rank(array([[1,2,3],[4,5,6]])) == 2 + assert rank(1) == 0 + + def test_var(self): + from numpypy import array, var + a = array([[1,2],[3,4]]) + assert var(a) == 1.25 + # assert (np.var(a,0) == array([ 1., 1.])).all() + # assert (np.var(a,1) == array([ 0.25, 0.25])).all() + + def test_std(self): + from numpypy import array, std + a = array([[1, 2], [3, 4]]) + assert std(a) == 1.1180339887498949 + # assert (std(a, axis=0) == array([ 1., 1.])).all() + # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() diff --git a/py/_code/code.py b/py/_code/code.py --- a/py/_code/code.py +++ b/py/_code/code.py @@ -164,6 +164,7 @@ # if something: # assume this causes a NameError # # _this_ lines and the one # below we don't want from entry.getsource() + end = min(end, len(source)) for i in range(self.lineno, end): if source[i].rstrip().endswith(':'): end = i + 1 diff --git a/pypy/annotation/binaryop.py b/pypy/annotation/binaryop.py --- a/pypy/annotation/binaryop.py +++ b/pypy/annotation/binaryop.py @@ -252,7 +252,26 @@ # unsignedness is considered a rare and contagious disease def union((int1, int2)): - knowntype = rarithmetic.compute_restype(int1.knowntype, int2.knowntype) + if int1.unsigned == int2.unsigned: + knowntype = rarithmetic.compute_restype(int1.knowntype, int2.knowntype) + else: + t1 = int1.knowntype + if t1 is bool: + t1 = int + t2 = int2.knowntype + if t2 is bool: + t2 = int + + if t2 is int: + if int2.nonneg == False: + raise UnionError, "Merging %s and a possibly negative int is not allowed" % t1 + knowntype = t1 + elif t1 is int: + if int1.nonneg == False: + raise UnionError, "Merging %s and a possibly negative int is not allowed" % t2 + knowntype = t2 + else: + raise UnionError, "Merging these types (%s, %s) is not supported" % (t1, t2) return SomeInteger(nonneg=int1.nonneg and 
int2.nonneg, knowntype=knowntype) diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -180,7 +180,12 @@ if name is None: name = pyobj.func_name if signature is None: - signature = cpython_code_signature(pyobj.func_code) + if hasattr(pyobj, '_generator_next_method_of_'): + from pypy.interpreter.argument import Signature + signature = Signature(['entry']) # haaaaaack + defaults = () + else: + signature = cpython_code_signature(pyobj.func_code) if defaults is None: defaults = pyobj.func_defaults self.name = name @@ -252,7 +257,8 @@ try: inputcells = args.match_signature(signature, defs_s) except ArgErr, e: - raise TypeError, "signature mismatch: %s" % e.getmsg(self.name) + raise TypeError("signature mismatch: %s() %s" % + (self.name, e.getmsg())) return inputcells def specialize(self, inputcells, op=None): diff --git a/pypy/annotation/model.py b/pypy/annotation/model.py --- a/pypy/annotation/model.py +++ b/pypy/annotation/model.py @@ -591,13 +591,11 @@ immutable = True def __init__(self, method): self.method = method - -NUMBER = object() + annotation_to_ll_map = [ (SomeSingleFloat(), lltype.SingleFloat), (s_None, lltype.Void), # also matches SomeImpossibleValue() (s_Bool, lltype.Bool), - (SomeInteger(knowntype=r_ulonglong), NUMBER), (SomeFloat(), lltype.Float), (SomeLongFloat(), lltype.LongFloat), (SomeChar(), lltype.Char), @@ -623,10 +621,11 @@ return lltype.Ptr(p.PARENTTYPE) if isinstance(s_val, SomePtr): return s_val.ll_ptrtype + if type(s_val) is SomeInteger: + return lltype.build_number(None, s_val.knowntype) + for witness, T in annotation_to_ll_map: if witness.contains(s_val): - if T is NUMBER: - return lltype.build_number(None, s_val.knowntype) return T if info is None: info = '' @@ -635,7 +634,7 @@ raise ValueError("%sshould return a low-level type,\ngot instead %r" % ( info, s_val)) -ll_to_annotation_map = dict([(ll, ann) for ann, ll in annotation_to_ll_map 
if ll is not NUMBER]) +ll_to_annotation_map = dict([(ll, ann) for ann, ll in annotation_to_ll_map]) def lltype_to_annotation(T): try: diff --git a/pypy/annotation/specialize.py b/pypy/annotation/specialize.py --- a/pypy/annotation/specialize.py +++ b/pypy/annotation/specialize.py @@ -36,9 +36,7 @@ newtup = SpaceOperation('newtuple', starargs, argscopy[-1]) newstartblock.operations.append(newtup) newstartblock.closeblock(Link(argscopy, graph.startblock)) - graph.startblock.isstartblock = False graph.startblock = newstartblock - newstartblock.isstartblock = True argnames = argnames + ['.star%d' % i for i in range(nb_extra_args)] graph.signature = Signature(argnames) # note that we can mostly ignore defaults: if nb_extra_args > 0, diff --git a/pypy/annotation/test/test_annrpython.py b/pypy/annotation/test/test_annrpython.py --- a/pypy/annotation/test/test_annrpython.py +++ b/pypy/annotation/test/test_annrpython.py @@ -856,6 +856,46 @@ py.test.raises(Exception, a.build_types, f, []) # if you want to get a r_uint, you have to be explicit about it + def test_add_different_ints(self): + def f(a, b): + return a + b + a = self.RPythonAnnotator() + py.test.raises(Exception, a.build_types, f, [r_uint, int]) + + def test_merge_different_ints(self): + def f(a, b): + if a: + c = a + else: + c = b + return c + a = self.RPythonAnnotator() + py.test.raises(Exception, a.build_types, f, [r_uint, int]) + + def test_merge_ruint_zero(self): + def f(a): + if a: + c = a + else: + c = 0 + return c + a = self.RPythonAnnotator() + s = a.build_types(f, [r_uint]) + assert s == annmodel.SomeInteger(nonneg = True, unsigned = True) + + def test_merge_ruint_nonneg_signed(self): + def f(a, b): + if a: + c = a + else: + assert b >= 0 + c = b + return c + a = self.RPythonAnnotator() + s = a.build_types(f, [r_uint, int]) + assert s == annmodel.SomeInteger(nonneg = True, unsigned = True) + + def test_prebuilt_long_that_is_not_too_long(self): small_constant = 12L def f(): @@ -3029,7 +3069,7 @@ if g(x, 
y): g(x, r_uint(y)) a = self.RPythonAnnotator() - a.build_types(f, [int, int]) + py.test.raises(Exception, a.build_types, f, [int, int]) def test_compare_with_zero(self): def g(): diff --git a/pypy/bin/checkmodule.py b/pypy/bin/checkmodule.py --- a/pypy/bin/checkmodule.py +++ b/pypy/bin/checkmodule.py @@ -1,43 +1,45 @@ #! /usr/bin/env python """ -Usage: checkmodule.py [-b backend] +Usage: checkmodule.py -Compiles the PyPy extension module from pypy/module// -into a fake program which does nothing. Useful for testing whether a -modules compiles without doing a full translation. Default backend is cli. - -WARNING: this is still incomplete: there are chances that the -compilation fails with strange errors not due to the module. If a -module is known to compile during a translation but don't pass -checkmodule.py, please report the bug (or, better, correct it :-). +Check annotation and rtyping of the PyPy extension module from +pypy/module//. Useful for testing whether a +modules compiles without doing a full translation. 
""" import autopath -import sys +import sys, os from pypy.objspace.fake.checkmodule import checkmodule def main(argv): - try: - assert len(argv) in (2, 4) - if len(argv) == 2: - backend = 'cli' - modname = argv[1] - if modname in ('-h', '--help'): - print >> sys.stderr, __doc__ - sys.exit(0) - if modname.startswith('-'): - print >> sys.stderr, "Bad command line" - print >> sys.stderr, __doc__ - sys.exit(1) - else: - _, b, backend, modname = argv - assert b == '-b' - except AssertionError: + if len(argv) != 2: print >> sys.stderr, __doc__ sys.exit(2) + modname = argv[1] + if modname in ('-h', '--help'): + print >> sys.stderr, __doc__ + sys.exit(0) + if modname.startswith('-'): + print >> sys.stderr, "Bad command line" + print >> sys.stderr, __doc__ + sys.exit(1) + if os.path.sep in modname: + if os.path.basename(modname) == '': + modname = os.path.dirname(modname) + if os.path.basename(os.path.dirname(modname)) != 'module': + print >> sys.stderr, "Must give '../module/xxx', or just 'xxx'." + sys.exit(1) + modname = os.path.basename(modname) + try: + checkmodule(modname) + except Exception, e: + import traceback, pdb + traceback.print_exc() + pdb.post_mortem(sys.exc_info()[2]) + return 1 else: - checkmodule(modname, backend, interactive=True) - print 'Module compiled succesfully' + print 'Passed.' 
+ return 0 if __name__ == '__main__': - main(sys.argv) + sys.exit(main(sys.argv)) diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -252,6 +252,10 @@ "use small tuples", default=False), + BoolOption("withspecialisedtuple", + "use specialised tuples", + default=False), + BoolOption("withrope", "use ropes as the string implementation", default=False, requires=[("objspace.std.withstrslice", False), @@ -366,6 +370,7 @@ config.objspace.std.suggest(getattributeshortcut=True) config.objspace.std.suggest(newshortcut=True) config.objspace.std.suggest(withidentitydict=True) + config.objspace.std.suggest(withspecialisedtuple=True) #if not IS_64_BITS: # config.objspace.std.suggest(withsmalllong=True) diff --git a/pypy/conftest.py b/pypy/conftest.py --- a/pypy/conftest.py +++ b/pypy/conftest.py @@ -496,6 +496,17 @@ def setup(self): super(AppClassCollector, self).setup() cls = self.obj + # + # + for name in dir(cls): + if name.startswith('test_'): + func = getattr(cls, name, None) + code = getattr(func, 'func_code', None) + if code and code.co_flags & 32: + raise AssertionError("unsupported: %r is a generator " + "app-level test method" % (name,)) + # + # space = cls.space clsname = cls.__name__ if self.config.option.runappdirect: diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -12,7 +12,7 @@ PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
-.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest +.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @@ -23,6 +23,7 @@ @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @@ -79,6 +80,11 @@ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man" + changes: python config/generate.py $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -175,15 +175,15 @@ RPython ================= -RPython Definition, not ------------------------ +RPython Definition +------------------ -The list and exact details of the "RPython" restrictions are a somewhat -evolving topic. In particular, we have no formal language definition -as we find it more practical to discuss and evolve the set of -restrictions while working on the whole program analysis. If you -have any questions about the restrictions below then please feel -free to mail us at pypy-dev at codespeak net. +RPython is a restricted subset of Python that is amenable to static analysis. +Although there are additions to the language and some things might surprisingly +work, this is a rough list of restrictions that should be considered. 
Note
+that there are tons of special-cased restrictions that you'll encounter
+as you go. The exact definition is "RPython is everything that our translation
+toolchain can accept" :)
.. _`wrapped object`: coding-guide.html#wrapping-rules
@@ -198,7 +198,7 @@
contain both a string and an int must be avoided. It is allowed to
mix None (basically with the role of a null pointer) with many other
types: `wrapped objects`, class instances, lists, dicts, strings, etc.
- but *not* with int and floats.
+ but *not* with int, floats or tuples.
**constants**
@@ -209,9 +209,12 @@
have this restriction, so if you need mutable global state, store it
in the attributes of some prebuilt singleton instance.
+
+
**control structures**
- all allowed but yield, ``for`` loops restricted to builtin types
+ all allowed, ``for`` loops restricted to builtin types, generators
+ very restricted.
**range**
@@ -226,7 +229,8 @@
**generators**
- generators are not supported.
+ generators are supported, but their exact scope is very limited. You can't
+ merge two different generators at one control point.
**exceptions**
@@ -245,22 +249,27 @@
**strings**
- a lot of, but not all string methods are supported. Indexes can be
+ a lot of, but not all, string methods are supported, and those that are
+ supported do not necessarily accept all arguments. Indexes can be
negative. In case they are not, then you get slightly more efficient
code if the translator can prove that they are non-negative. When
slicing a string it is necessary to prove that the slice start and
- stop indexes are non-negative.
+ stop indexes are non-negative. There is no implicit str-to-unicode cast
+ anywhere.
**tuples**
no variable-length tuples; use them to store or return pairs or n-tuples of
- values. Each combination of types for elements and length constitute a separate
- and not mixable type.
+ values. Each combination of types for elements and length constitutes
+ a separate and not mixable type.
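The restriction that differently-typed values must not be merged at a control-flow join can be sketched with a toy union check. This is an illustration only, not PyPy's actual annotator (the real logic lives in ``pypy/annotation/binaryop.py``); the type names and the ``NONE_MIXABLE`` set are hypothetical simplifications:

```python
# Toy sketch of the type-union rule described above: at a join point,
# two branches must produce compatible annotations, and None is only
# mixable with pointer-like types, never with int/float/tuple.

class UnionError(Exception):
    pass

# Types that None may legally be merged with in this toy model
NONE_MIXABLE = {"instance", "list", "dict", "str"}

def union(t1, t2):
    """Merge two annotated types at a control-flow join point."""
    if t1 == t2:
        return t1
    if "none" in (t1, t2):
        other = t2 if t1 == "none" else t1
        if other in NONE_MIXABLE:
            return other  # None plays the role of a null pointer
        raise UnionError("cannot mix None with %s" % other)
    raise UnionError("cannot mix %s with %s" % (t1, t2))
```

This mirrors the spirit of the ``union()`` change in ``pypy/annotation/binaryop.py`` above, where merging an ``r_uint`` with a possibly negative ``int`` likewise raises ``UnionError``.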
**lists** lists are used as an allocated array. Lists are over-allocated, so list.append() - is reasonably fast. Negative or out-of-bound indexes are only allowed for the + is reasonably fast. However, if you use a fixed-size list, the code + is more efficient. Annotator can figure out most of the time that your + list is fixed-size, even when you use list comprehension. + Negative or out-of-bound indexes are only allowed for the most common operations, as follows: - *indexing*: @@ -287,16 +296,14 @@ **dicts** - dicts with a unique key type only, provided it is hashable. - String keys have been the only allowed key types for a while, but this was generalized. - After some re-optimization, - the implementation could safely decide that all string dict keys should be interned. + dicts with a unique key type only, provided it is hashable. Custom + hash functions and custom equality will not be honored. + Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions. **list comprehensions** - may be used to create allocated, initialized arrays. - After list over-allocation was introduced, there is no longer any restriction. + May be used to create allocated, initialized arrays. **functions** @@ -334,9 +341,8 @@ **objects** - in PyPy, wrapped objects are borrowed from the object space. Just like - in CPython, code that needs e.g. a dictionary can use a wrapped dict - and the object space operations on it. + Normal rules apply. Special methods are not honoured, except ``__init__`` and + ``__del__``. This layout makes the number of types to take care about quite limited. diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -45,9 +45,9 @@ # built documents. # # The short X.Y version. -version = '1.6' +version = '1.7' # The full version, including alpha/beta/rc tags. -release = '1.6' +release = '1.7' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. 
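The coding-guide hunk above says that RPython dicts do not honor custom hash or equality, and points to ``pypy.rlib.objectmodel.r_dict``, which takes explicit functions instead. A plain-Python sketch of that interface (this mimics only the calling convention, not PyPy's implementation):

```python
# Sketch of an r_dict-style mapping: equality and hashing are supplied
# as explicit functions rather than taken from __eq__/__hash__.

class RDict(object):
    def __init__(self, key_eq, key_hash):
        self.key_eq = key_eq
        self.key_hash = key_hash
        self.buckets = {}  # hash value -> list of (key, value) pairs

    def __setitem__(self, key, value):
        bucket = self.buckets.setdefault(self.key_hash(key), [])
        for i, (k, _) in enumerate(bucket):
            if self.key_eq(k, key):
                bucket[i] = (key, value)  # replace existing entry
                return
        bucket.append((key, value))

    def __getitem__(self, key):
        for k, v in self.buckets.get(self.key_hash(key), []):
            if self.key_eq(k, key):
                return v
        raise KeyError(key)

# Example: case-insensitive string keys
d = RDict(lambda a, b: a.lower() == b.lower(), lambda s: hash(s.lower()))
d["Foo"] = 1
```

With this setup, ``d["FOO"]`` and ``d["foo"]`` both find the entry stored under ``"Foo"``.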
@@ -197,3 +197,10 @@ # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = {'http://docs.python.org/': None} +# -- Options for manpage output------------------------------------------------- + +man_pages = [ + ('man/pypy.1', 'pypy', + u'fast, compliant alternative implementation of the Python language', + u'The PyPy Project', 1) +] diff --git a/pypy/doc/config/objspace.std.withspecialisedtuple.txt b/pypy/doc/config/objspace.std.withspecialisedtuple.txt new file mode 100644 --- /dev/null +++ b/pypy/doc/config/objspace.std.withspecialisedtuple.txt @@ -0,0 +1,3 @@ +Use "specialized tuples", a custom implementation for some common kinds +of tuples. Currently limited to tuples of length 2, in three variants: +(int, int), (float, float), and a generic (object, object). diff --git a/pypy/doc/cpython_differences.rst b/pypy/doc/cpython_differences.rst --- a/pypy/doc/cpython_differences.rst +++ b/pypy/doc/cpython_differences.rst @@ -304,5 +304,14 @@ never a dictionary as it sometimes is in CPython. Assigning to ``__builtins__`` has no effect. +* directly calling the internal magic methods of a few built-in types + with invalid arguments may have a slightly different result. For + example, ``[].__add__(None)`` and ``(2).__add__(None)`` both return + ``NotImplemented`` on PyPy; on CPython, only the later does, and the + former raises ``TypeError``. (Of course, ``[]+None`` and ``2+None`` + both raise ``TypeError`` everywhere.) This difference is an + implementation detail that shows up because of internal C-level slots + that PyPy does not have. + .. include:: _ref.txt diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst --- a/pypy/doc/extradoc.rst +++ b/pypy/doc/extradoc.rst @@ -8,6 +8,9 @@ *Articles about PyPy published so far, most recent first:* (bibtex_ file) +* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_, + C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. 
Rigo + * `Allocation Removal by Partial Evaluation in a Tracing JIT`_, C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo @@ -50,6 +53,9 @@ *Other research using PyPy (as far as we know it):* +* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_, + N. Riley and C. Zilles + * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_, C. Bruni and T. Verwaest @@ -65,6 +71,7 @@ .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib +.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf .. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf @@ -74,6 +81,7 @@ .. _`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf .. _`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07 .. _`EU Reports`: index-report.html +.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz .. 
_`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7
diff --git a/pypy/doc/faq.rst b/pypy/doc/faq.rst
--- a/pypy/doc/faq.rst
+++ b/pypy/doc/faq.rst
@@ -112,10 +112,32 @@
You might be interested in our `benchmarking site`_ and our `jit documentation`_.
+Note that the JIT has a very high warm-up cost, meaning that
+programs are slow at the beginning. If you want to compare the timings
+with CPython, even relatively simple programs need to run *at least* one
+second, preferably at least a few seconds. Large, complicated programs
+need even more time to warm up the JIT.
+
.. _`benchmarking site`: http://speed.pypy.org
.. _`jit documentation`: jit/index.html
+---------------------------------------------------------------
+Couldn't the JIT dump and reload already-compiled machine code?
+---------------------------------------------------------------
+
+No, we found no way of doing that. The JIT generates machine code
+containing a large number of constant addresses --- constant at the time
+the machine code is written. The vast majority are not
+constants that you would find in the executable with a nice linker name. E.g.
+the addresses of Python classes are used all the time, but Python
+classes don't come statically from the executable; they are created anew
+every time you restart your program. This makes saving and reloading
+machine code completely impossible without some very advanced way of
+mapping addresses in the old (now-dead) process to addresses in the new
+process, including checking that all the previous assumptions about the
+(now-dead) object are still true about the new object.
+
.. _`prolog and javascript`:
diff --git a/pypy/doc/index.rst b/pypy/doc/index.rst
--- a/pypy/doc/index.rst
+++ b/pypy/doc/index.rst
@@ -15,7 +15,7 @@
* `FAQ`_: some frequently asked questions.
-* `Release 1.6`_: the latest official release +* `Release 1.7`_: the latest official release * `PyPy Blog`_: news and status info about PyPy @@ -75,7 +75,7 @@ .. _`Getting Started`: getting-started.html .. _`Papers`: extradoc.html .. _`Videos`: video-index.html -.. _`Release 1.6`: http://pypy.org/download.html +.. _`Release 1.7`: http://pypy.org/download.html .. _`speed.pypy.org`: http://speed.pypy.org .. _`RPython toolchain`: translation.html .. _`potential project ideas`: project-ideas.html @@ -120,9 +120,9 @@ Windows, on top of .NET, and on top of Java. To dig into PyPy it is recommended to try out the current Mercurial default branch, which is always working or mostly working, -instead of the latest release, which is `1.6`__. +instead of the latest release, which is `1.7`__. -.. __: release-1.6.0.html +.. __: release-1.7.0.html PyPy is mainly developed on Linux and Mac OS X. Windows is supported, but platform-specific bugs tend to take longer before we notice and fix diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/man/pypy.1.rst @@ -0,0 +1,90 @@ +====== + pypy +====== + +SYNOPSIS +======== + +``pypy`` [*options*] +[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ] +[*arg*\ ...] + +OPTIONS +======= + +-i + Inspect interactively after running script. + +-O + Dummy optimization flag for compatibility with C Python. + +-c *cmd* + Program passed in as CMD (terminates option list). + +-S + Do not ``import site`` on initialization. + +-u + Unbuffered binary ``stdout`` and ``stderr``. + +-h, --help + Show a help message and exit. + +-m *mod* + Library module to be run as a script (terminates option list). + +-W *arg* + Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*). + +-E + Ignore environment variables (such as ``PYTHONPATH``). + +--version + Print the PyPy version. + +--info + Print translation information about this PyPy executable. 
+ +--jit *arg* + Low level JIT parameters. Format is + *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...] + + ``off`` + Disable the JIT. + + ``threshold=``\ *value* + Number of times a loop has to run for it to become hot. + + ``function_threshold=``\ *value* + Number of times a function must run for it to become traced from + start. + + ``inlining=``\ *value* + Inline python functions or not (``1``/``0``). + + ``loop_longevity=``\ *value* + A parameter controlling how long loops will be kept before being + freed, an estimate. + + ``max_retrace_guards=``\ *value* + Number of extra guards a retrace can cause. + + ``retrace_limit=``\ *value* + How many times we can try retracing before giving up. + + ``trace_eagerness=``\ *value* + Number of times a guard has to fail before we start compiling a + bridge. + + ``trace_limit=``\ *value* + Number of recorded operations before we abort tracing with + ``ABORT_TRACE_TOO_LONG``. + + ``enable_opts=``\ *value* + Optimizations to enabled or ``all``. + Warning, this option is dangerous, and should be avoided. 
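The ``--jit`` option documented above takes *arg*\ ``=``\ *value* pairs separated by commas. A small illustrative parser for that string format (a hypothetical helper for clarity, not PyPy's actual option-handling code):

```python
def parse_jit_args(spec):
    """Parse a --jit argument string like 'threshold=500,inlining=1'.

    'off' disables the JIT entirely; otherwise each entry must be
    name=value.  Values are kept as strings, mirroring how they arrive
    from the command line.
    """
    if spec == "off":
        return {"off": True}
    params = {}
    for entry in spec.split(","):
        name, sep, value = entry.partition("=")
        if not sep or not name:
            raise ValueError("malformed --jit entry: %r" % entry)
        params[name] = value
    return params
```

For example, ``pypy --jit threshold=500,trace_limit=10000 script.py`` would hand the string ``"threshold=500,trace_limit=10000"`` to logic of roughly this shape.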
+ +SEE ALSO +======== + +**python**\ (1) diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py deleted file mode 100644 --- a/pypy/doc/tool/makecontributor.py +++ /dev/null @@ -1,47 +0,0 @@ -""" - -generates a contributor list - -""" -import py - -# this file is useless, use the following commandline instead: -# hg churn -c -t "{author}" | sed -e 's/ <.*//' - -try: - path = py.std.sys.argv[1] -except IndexError: - print "usage: %s ROOTPATH" %(py.std.sys.argv[0]) - raise SystemExit, 1 - -d = {} - -for logentry in py.path.svnwc(path).log(): - a = logentry.author - if a in d: - d[a] += 1 - else: - d[a] = 1 - -items = d.items() -items.sort(lambda x,y: -cmp(x[1], y[1])) - -import uconf # http://codespeak.net/svn/uconf/dist/uconf - -# Authors that don't want to be listed -excluded = set("anna gintas ignas".split()) -cutoff = 5 # cutoff for authors in the LICENSE file -mark = False -for author, count in items: - if author in excluded: - continue - user = uconf.system.User(author) - try: - realname = user.realname.strip() - except KeyError: - realname = author - if not mark and count < cutoff: - mark = True - print '-'*60 - print " ", realname - #print count, " ", author diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise 
operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - msg = "%s() got an unexpected keyword argument '%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -51,6 +51,24 @@ space.setattr(self, w_name, space.getitem(w_state, w_name)) + def missing_field(self, space, required, host): + "Find which required field is missing." 
+ state = self.initialization_state + for i in range(len(required)): + if (state >> i) & 1: + continue # field is present + missing = required[i] + if missing is None: + continue # field is optional + w_obj = self.getdictvalue(space, missing) + if w_obj is None: + err = "required field \"%s\" missing from %s" + raise operationerrfmt(space.w_TypeError, err, missing, host) + else: + err = "incorrect type for field \"%s\" in %s" + raise operationerrfmt(space.w_TypeError, err, missing, host) + raise AssertionError("should not reach here") + class NodeVisitorNotImplemented(Exception): pass @@ -94,17 +112,6 @@ ) -def missing_field(space, state, required, host): - "Find which required field is missing." - for i in range(len(required)): - if not (state >> i) & 1: - missing = required[i] - if missing is not None: - err = "required field \"%s\" missing from %s" - err = err % (missing, host) - w_err = space.wrap(err) - raise OperationError(space.w_TypeError, w_err) - raise AssertionError("should not reach here") class mod(AST): @@ -112,7 +119,6 @@ class Module(mod): - def __init__(self, body): self.body = body self.w_body = None @@ -128,7 +134,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Module') + self.missing_field(space, ['body'], 'Module') else: pass w_list = self.w_body @@ -145,7 +151,6 @@ class Interactive(mod): - def __init__(self, body): self.body = body self.w_body = None @@ -161,7 +166,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Interactive') + self.missing_field(space, ['body'], 'Interactive') else: pass w_list = self.w_body @@ -178,7 +183,6 @@ class Expression(mod): - def __init__(self, body): self.body = body self.initialization_state = 1 @@ -192,7 +196,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, 
self.initialization_state, ['body'], 'Expression') + self.missing_field(space, ['body'], 'Expression') else: pass self.body.sync_app_attrs(space) @@ -200,7 +204,6 @@ class Suite(mod): - def __init__(self, body): self.body = body self.w_body = None @@ -216,7 +219,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['body'], 'Suite') + self.missing_field(space, ['body'], 'Suite') else: pass w_list = self.w_body @@ -232,15 +235,13 @@ class stmt(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class FunctionDef(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def __init__(self, name, args, body, decorator_list, lineno, col_offset): self.name = name self.args = args @@ -264,7 +265,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['name', 'args', 'body', 'decorator_list', 'lineno', 'col_offset'], 'FunctionDef') + self.missing_field(space, ['lineno', 'col_offset', 'name', 'args', 'body', 'decorator_list'], 'FunctionDef') else: pass self.args.sync_app_attrs(space) @@ -292,9 +293,6 @@ class ClassDef(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def __init__(self, name, bases, body, decorator_list, lineno, col_offset): self.name = name self.bases = bases @@ -320,7 +318,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['name', 'bases', 'body', 'decorator_list', 'lineno', 'col_offset'], 'ClassDef') + self.missing_field(space, ['lineno', 'col_offset', 'name', 'bases', 'body', 'decorator_list'], 'ClassDef') else: pass w_list = self.w_bases @@ -357,9 +355,6 @@ class Return(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value stmt.__init__(self, lineno, col_offset) @@ -374,10 +369,10 @@ return visitor.visit_Return(self) def 
sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 6: - missing_field(space, self.initialization_state, [None, 'lineno', 'col_offset'], 'Return') + if (self.initialization_state & ~4) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None], 'Return') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.value = None if self.value: self.value.sync_app_attrs(space) @@ -385,9 +380,6 @@ class Delete(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, targets, lineno, col_offset): self.targets = targets self.w_targets = None @@ -404,7 +396,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['targets', 'lineno', 'col_offset'], 'Delete') + self.missing_field(space, ['lineno', 'col_offset', 'targets'], 'Delete') else: pass w_list = self.w_targets @@ -421,9 +413,6 @@ class Assign(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, targets, value, lineno, col_offset): self.targets = targets self.w_targets = None @@ -442,7 +431,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['targets', 'value', 'lineno', 'col_offset'], 'Assign') + self.missing_field(space, ['lineno', 'col_offset', 'targets', 'value'], 'Assign') else: pass w_list = self.w_targets @@ -460,9 +449,6 @@ class AugAssign(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, target, op, value, lineno, col_offset): self.target = target self.op = op @@ -480,7 +466,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['target', 'op', 'value', 'lineno', 'col_offset'], 'AugAssign') + self.missing_field(space, ['lineno', 'col_offset', 'target', 'op', 'value'], 'AugAssign') else: pass self.target.sync_app_attrs(space) @@ -489,9 +475,6 @@ class Print(stmt): - _lineno_mask = 8 - 
_col_offset_mask = 16 - def __init__(self, dest, values, nl, lineno, col_offset): self.dest = dest self.values = values @@ -511,10 +494,10 @@ return visitor.visit_Print(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 30: - missing_field(space, self.initialization_state, [None, 'values', 'nl', 'lineno', 'col_offset'], 'Print') + if (self.initialization_state & ~4) ^ 27: + self.missing_field(space, ['lineno', 'col_offset', None, 'values', 'nl'], 'Print') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.dest = None if self.dest: self.dest.sync_app_attrs(space) @@ -532,9 +515,6 @@ class For(stmt): - _lineno_mask = 16 - _col_offset_mask = 32 - def __init__(self, target, iter, body, orelse, lineno, col_offset): self.target = target self.iter = iter @@ -559,7 +539,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 63: - missing_field(space, self.initialization_state, ['target', 'iter', 'body', 'orelse', 'lineno', 'col_offset'], 'For') + self.missing_field(space, ['lineno', 'col_offset', 'target', 'iter', 'body', 'orelse'], 'For') else: pass self.target.sync_app_attrs(space) @@ -588,9 +568,6 @@ class While(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -613,7 +590,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'While') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'While') else: pass self.test.sync_app_attrs(space) @@ -641,9 +618,6 @@ class If(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -666,7 +640,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, 
self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'If') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'If') else: pass self.test.sync_app_attrs(space) @@ -694,9 +668,6 @@ class With(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, context_expr, optional_vars, body, lineno, col_offset): self.context_expr = context_expr self.optional_vars = optional_vars @@ -717,10 +688,10 @@ return visitor.visit_With(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~2) ^ 29: - missing_field(space, self.initialization_state, ['context_expr', None, 'body', 'lineno', 'col_offset'], 'With') + if (self.initialization_state & ~8) ^ 23: + self.missing_field(space, ['lineno', 'col_offset', 'context_expr', None, 'body'], 'With') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.optional_vars = None self.context_expr.sync_app_attrs(space) if self.optional_vars: @@ -739,9 +710,6 @@ class Raise(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, type, inst, tback, lineno, col_offset): self.type = type self.inst = inst @@ -762,14 +730,14 @@ return visitor.visit_Raise(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~7) ^ 24: - missing_field(space, self.initialization_state, [None, None, None, 'lineno', 'col_offset'], 'Raise') + if (self.initialization_state & ~28) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None, None, None], 'Raise') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.type = None - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.inst = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.tback = None if self.type: self.type.sync_app_attrs(space) @@ -781,9 +749,6 @@ class TryExcept(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, body, 
handlers, orelse, lineno, col_offset): self.body = body self.w_body = None @@ -808,7 +773,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['body', 'handlers', 'orelse', 'lineno', 'col_offset'], 'TryExcept') + self.missing_field(space, ['lineno', 'col_offset', 'body', 'handlers', 'orelse'], 'TryExcept') else: pass w_list = self.w_body @@ -845,9 +810,6 @@ class TryFinally(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, body, finalbody, lineno, col_offset): self.body = body self.w_body = None @@ -868,7 +830,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['body', 'finalbody', 'lineno', 'col_offset'], 'TryFinally') + self.missing_field(space, ['lineno', 'col_offset', 'body', 'finalbody'], 'TryFinally') else: pass w_list = self.w_body @@ -895,9 +857,6 @@ class Assert(stmt): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, test, msg, lineno, col_offset): self.test = test self.msg = msg @@ -914,10 +873,10 @@ return visitor.visit_Assert(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~2) ^ 13: - missing_field(space, self.initialization_state, ['test', None, 'lineno', 'col_offset'], 'Assert') + if (self.initialization_state & ~8) ^ 7: + self.missing_field(space, ['lineno', 'col_offset', 'test', None], 'Assert') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.msg = None self.test.sync_app_attrs(space) if self.msg: @@ -926,9 +885,6 @@ class Import(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, names, lineno, col_offset): self.names = names self.w_names = None @@ -945,7 +901,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['names', 'lineno', 'col_offset'], 'Import') + self.missing_field(space, ['lineno', 
'col_offset', 'names'], 'Import') else: pass w_list = self.w_names @@ -962,9 +918,6 @@ class ImportFrom(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, module, names, level, lineno, col_offset): self.module = module self.names = names @@ -982,12 +935,12 @@ return visitor.visit_ImportFrom(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~5) ^ 26: - missing_field(space, self.initialization_state, [None, 'names', None, 'lineno', 'col_offset'], 'ImportFrom') + if (self.initialization_state & ~20) ^ 11: + self.missing_field(space, ['lineno', 'col_offset', None, 'names', None], 'ImportFrom') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.module = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.level = 0 w_list = self.w_names if w_list is not None: @@ -1003,9 +956,6 @@ class Exec(stmt): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, body, globals, locals, lineno, col_offset): self.body = body self.globals = globals @@ -1025,12 +975,12 @@ return visitor.visit_Exec(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~6) ^ 25: - missing_field(space, self.initialization_state, ['body', None, None, 'lineno', 'col_offset'], 'Exec') + if (self.initialization_state & ~24) ^ 7: + self.missing_field(space, ['lineno', 'col_offset', 'body', None, None], 'Exec') else: - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.globals = None - if not self.initialization_state & 4: + if not self.initialization_state & 16: self.locals = None self.body.sync_app_attrs(space) if self.globals: @@ -1041,9 +991,6 @@ class Global(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, names, lineno, col_offset): self.names = names self.w_names = None @@ -1058,7 +1005,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, 
self.initialization_state, ['names', 'lineno', 'col_offset'], 'Global') + self.missing_field(space, ['lineno', 'col_offset', 'names'], 'Global') else: pass w_list = self.w_names @@ -1072,9 +1019,6 @@ class Expr(stmt): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value stmt.__init__(self, lineno, col_offset) @@ -1089,7 +1033,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Expr') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Expr') else: pass self.value.sync_app_attrs(space) @@ -1097,9 +1041,6 @@ class Pass(stmt): - _lineno_mask = 1 - _col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1112,16 +1053,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Pass') + self.missing_field(space, ['lineno', 'col_offset'], 'Pass') else: pass class Break(stmt): - _lineno_mask = 1 - _col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1134,16 +1072,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Break') + self.missing_field(space, ['lineno', 'col_offset'], 'Break') else: pass class Continue(stmt): - _lineno_mask = 1 - _col_offset_mask = 2 - def __init__(self, lineno, col_offset): stmt.__init__(self, lineno, col_offset) self.initialization_state = 3 @@ -1156,21 +1091,19 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['lineno', 'col_offset'], 'Continue') + self.missing_field(space, ['lineno', 'col_offset'], 'Continue') else: pass class 
expr(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class BoolOp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, op, values, lineno, col_offset): self.op = op self.values = values @@ -1188,7 +1121,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['op', 'values', 'lineno', 'col_offset'], 'BoolOp') + self.missing_field(space, ['lineno', 'col_offset', 'op', 'values'], 'BoolOp') else: pass w_list = self.w_values @@ -1205,9 +1138,6 @@ class BinOp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, left, op, right, lineno, col_offset): self.left = left self.op = op @@ -1225,7 +1155,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['left', 'op', 'right', 'lineno', 'col_offset'], 'BinOp') + self.missing_field(space, ['lineno', 'col_offset', 'left', 'op', 'right'], 'BinOp') else: pass self.left.sync_app_attrs(space) @@ -1234,9 +1164,6 @@ class UnaryOp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, op, operand, lineno, col_offset): self.op = op self.operand = operand @@ -1252,7 +1179,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['op', 'operand', 'lineno', 'col_offset'], 'UnaryOp') + self.missing_field(space, ['lineno', 'col_offset', 'op', 'operand'], 'UnaryOp') else: pass self.operand.sync_app_attrs(space) @@ -1260,9 +1187,6 @@ class Lambda(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, args, body, lineno, col_offset): self.args = args self.body = body @@ -1279,7 +1203,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['args', 'body', 'lineno', 'col_offset'], 'Lambda') + self.missing_field(space, ['lineno', 
'col_offset', 'args', 'body'], 'Lambda') else: pass self.args.sync_app_attrs(space) @@ -1288,9 +1212,6 @@ class IfExp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, test, body, orelse, lineno, col_offset): self.test = test self.body = body @@ -1309,7 +1230,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['test', 'body', 'orelse', 'lineno', 'col_offset'], 'IfExp') + self.missing_field(space, ['lineno', 'col_offset', 'test', 'body', 'orelse'], 'IfExp') else: pass self.test.sync_app_attrs(space) @@ -1319,9 +1240,6 @@ class Dict(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, keys, values, lineno, col_offset): self.keys = keys self.w_keys = None @@ -1342,7 +1260,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['keys', 'values', 'lineno', 'col_offset'], 'Dict') + self.missing_field(space, ['lineno', 'col_offset', 'keys', 'values'], 'Dict') else: pass w_list = self.w_keys @@ -1369,9 +1287,6 @@ class Set(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, elts, lineno, col_offset): self.elts = elts self.w_elts = None @@ -1388,7 +1303,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['elts', 'lineno', 'col_offset'], 'Set') + self.missing_field(space, ['lineno', 'col_offset', 'elts'], 'Set') else: pass w_list = self.w_elts @@ -1405,9 +1320,6 @@ class ListComp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1426,7 +1338,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'ListComp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 
'generators'], 'ListComp') else: pass self.elt.sync_app_attrs(space) @@ -1444,9 +1356,6 @@ class SetComp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1465,7 +1374,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'SetComp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'SetComp') else: pass self.elt.sync_app_attrs(space) @@ -1483,9 +1392,6 @@ class DictComp(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, key, value, generators, lineno, col_offset): self.key = key self.value = value @@ -1506,7 +1412,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['key', 'value', 'generators', 'lineno', 'col_offset'], 'DictComp') + self.missing_field(space, ['lineno', 'col_offset', 'key', 'value', 'generators'], 'DictComp') else: pass self.key.sync_app_attrs(space) @@ -1525,9 +1431,6 @@ class GeneratorExp(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elt, generators, lineno, col_offset): self.elt = elt self.generators = generators @@ -1546,7 +1449,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elt', 'generators', 'lineno', 'col_offset'], 'GeneratorExp') + self.missing_field(space, ['lineno', 'col_offset', 'elt', 'generators'], 'GeneratorExp') else: pass self.elt.sync_app_attrs(space) @@ -1564,9 +1467,6 @@ class Yield(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1581,10 +1481,10 @@ return visitor.visit_Yield(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~1) ^ 6: - 
missing_field(space, self.initialization_state, [None, 'lineno', 'col_offset'], 'Yield') + if (self.initialization_state & ~4) ^ 3: + self.missing_field(space, ['lineno', 'col_offset', None], 'Yield') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.value = None if self.value: self.value.sync_app_attrs(space) @@ -1592,9 +1492,6 @@ class Compare(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, left, ops, comparators, lineno, col_offset): self.left = left self.ops = ops @@ -1615,7 +1512,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['left', 'ops', 'comparators', 'lineno', 'col_offset'], 'Compare') + self.missing_field(space, ['lineno', 'col_offset', 'left', 'ops', 'comparators'], 'Compare') else: pass self.left.sync_app_attrs(space) @@ -1640,9 +1537,6 @@ class Call(expr): - _lineno_mask = 32 - _col_offset_mask = 64 - def __init__(self, func, args, keywords, starargs, kwargs, lineno, col_offset): self.func = func self.args = args @@ -1670,12 +1564,12 @@ return visitor.visit_Call(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~24) ^ 103: - missing_field(space, self.initialization_state, ['func', 'args', 'keywords', None, None, 'lineno', 'col_offset'], 'Call') + if (self.initialization_state & ~96) ^ 31: + self.missing_field(space, ['lineno', 'col_offset', 'func', 'args', 'keywords', None, None], 'Call') else: - if not self.initialization_state & 8: + if not self.initialization_state & 32: self.starargs = None - if not self.initialization_state & 16: + if not self.initialization_state & 64: self.kwargs = None self.func.sync_app_attrs(space) w_list = self.w_args @@ -1706,9 +1600,6 @@ class Repr(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1723,7 +1614,7 @@ def sync_app_attrs(self, 
space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Repr') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Repr') else: pass self.value.sync_app_attrs(space) @@ -1731,9 +1622,6 @@ class Num(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, n, lineno, col_offset): self.n = n expr.__init__(self, lineno, col_offset) @@ -1747,16 +1635,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['n', 'lineno', 'col_offset'], 'Num') + self.missing_field(space, ['lineno', 'col_offset', 'n'], 'Num') else: pass class Str(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, s, lineno, col_offset): self.s = s expr.__init__(self, lineno, col_offset) @@ -1770,16 +1655,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['s', 'lineno', 'col_offset'], 'Str') + self.missing_field(space, ['lineno', 'col_offset', 's'], 'Str') else: pass class Attribute(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, value, attr, ctx, lineno, col_offset): self.value = value self.attr = attr @@ -1796,7 +1678,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['value', 'attr', 'ctx', 'lineno', 'col_offset'], 'Attribute') + self.missing_field(space, ['lineno', 'col_offset', 'value', 'attr', 'ctx'], 'Attribute') else: pass self.value.sync_app_attrs(space) @@ -1804,9 +1686,6 @@ class Subscript(expr): - _lineno_mask = 8 - _col_offset_mask = 16 - def __init__(self, value, slice, ctx, lineno, col_offset): self.value = value self.slice = slice @@ -1824,7 +1703,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 31: - missing_field(space, self.initialization_state, ['value', 'slice', 'ctx', 'lineno', 
'col_offset'], 'Subscript') + self.missing_field(space, ['lineno', 'col_offset', 'value', 'slice', 'ctx'], 'Subscript') else: pass self.value.sync_app_attrs(space) @@ -1833,9 +1712,6 @@ class Name(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, id, ctx, lineno, col_offset): self.id = id self.ctx = ctx @@ -1850,16 +1726,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['id', 'ctx', 'lineno', 'col_offset'], 'Name') + self.missing_field(space, ['lineno', 'col_offset', 'id', 'ctx'], 'Name') else: pass class List(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elts, ctx, lineno, col_offset): self.elts = elts self.w_elts = None @@ -1877,7 +1750,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elts', 'ctx', 'lineno', 'col_offset'], 'List') + self.missing_field(space, ['lineno', 'col_offset', 'elts', 'ctx'], 'List') else: pass w_list = self.w_elts @@ -1894,9 +1767,6 @@ class Tuple(expr): - _lineno_mask = 4 - _col_offset_mask = 8 - def __init__(self, elts, ctx, lineno, col_offset): self.elts = elts self.w_elts = None @@ -1914,7 +1784,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 15: - missing_field(space, self.initialization_state, ['elts', 'ctx', 'lineno', 'col_offset'], 'Tuple') + self.missing_field(space, ['lineno', 'col_offset', 'elts', 'ctx'], 'Tuple') else: pass w_list = self.w_elts @@ -1931,9 +1801,6 @@ class Const(expr): - _lineno_mask = 2 - _col_offset_mask = 4 - def __init__(self, value, lineno, col_offset): self.value = value expr.__init__(self, lineno, col_offset) @@ -1947,7 +1814,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['value', 'lineno', 'col_offset'], 'Const') + self.missing_field(space, ['lineno', 'col_offset', 'value'], 'Const') else: 
pass @@ -2009,7 +1876,6 @@ class Ellipsis(slice): - def __init__(self): self.initialization_state = 0 @@ -2021,14 +1887,13 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 0: - missing_field(space, self.initialization_state, [], 'Ellipsis') + self.missing_field(space, [], 'Ellipsis') else: pass class Slice(slice): - def __init__(self, lower, upper, step): self.lower = lower self.upper = upper @@ -2049,7 +1914,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~7) ^ 0: - missing_field(space, self.initialization_state, [None, None, None], 'Slice') + self.missing_field(space, [None, None, None], 'Slice') else: if not self.initialization_state & 1: self.lower = None @@ -2067,7 +1932,6 @@ class ExtSlice(slice): - def __init__(self, dims): self.dims = dims self.w_dims = None @@ -2083,7 +1947,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['dims'], 'ExtSlice') + self.missing_field(space, ['dims'], 'ExtSlice') else: pass w_list = self.w_dims @@ -2100,7 +1964,6 @@ class Index(slice): - def __init__(self, value): self.value = value self.initialization_state = 1 @@ -2114,7 +1977,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 1: - missing_field(space, self.initialization_state, ['value'], 'Index') + self.missing_field(space, ['value'], 'Index') else: pass self.value.sync_app_attrs(space) @@ -2377,7 +2240,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 7: - missing_field(space, self.initialization_state, ['target', 'iter', 'ifs'], 'comprehension') + self.missing_field(space, ['target', 'iter', 'ifs'], 'comprehension') else: pass self.target.sync_app_attrs(space) @@ -2394,15 +2257,13 @@ node.sync_app_attrs(space) class excepthandler(AST): + def __init__(self, lineno, col_offset): self.lineno = lineno self.col_offset = col_offset class ExceptHandler(excepthandler): - _lineno_mask = 8 - 
_col_offset_mask = 16 - def __init__(self, type, name, body, lineno, col_offset): self.type = type self.name = name @@ -2424,12 +2285,12 @@ return visitor.visit_ExceptHandler(self) def sync_app_attrs(self, space): - if (self.initialization_state & ~3) ^ 28: - missing_field(space, self.initialization_state, [None, None, 'body', 'lineno', 'col_offset'], 'ExceptHandler') + if (self.initialization_state & ~12) ^ 19: + self.missing_field(space, ['lineno', 'col_offset', None, None, 'body'], 'ExceptHandler') else: - if not self.initialization_state & 1: + if not self.initialization_state & 4: self.type = None - if not self.initialization_state & 2: + if not self.initialization_state & 8: self.name = None if self.type: self.type.sync_app_attrs(space) @@ -2470,7 +2331,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~6) ^ 9: - missing_field(space, self.initialization_state, ['args', None, None, 'defaults'], 'arguments') + self.missing_field(space, ['args', None, None, 'defaults'], 'arguments') else: if not self.initialization_state & 2: self.vararg = None @@ -2513,7 +2374,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~0) ^ 3: - missing_field(space, self.initialization_state, ['arg', 'value'], 'keyword') + self.missing_field(space, ['arg', 'value'], 'keyword') else: pass self.value.sync_app_attrs(space) @@ -2533,7 +2394,7 @@ def sync_app_attrs(self, space): if (self.initialization_state & ~2) ^ 1: - missing_field(space, self.initialization_state, ['name', None], 'alias') + self.missing_field(space, ['name', None], 'alias') else: if not self.initialization_state & 2: self.asname = None @@ -3019,6 +2880,8 @@ def Expression_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise @@ -3098,7 +2961,7 @@ w_obj = 
w_self.getdictvalue(space, 'lineno')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & w_self._lineno_mask:
+    if not w_self.initialization_state & 1:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno')
     return space.wrap(w_self.lineno)
@@ -3112,14 +2975,14 @@
         w_self.setdictvalue(space, 'lineno', w_new_value)
         return
     w_self.deldictvalue(space, 'lineno')
-    w_self.initialization_state |= w_self._lineno_mask
+    w_self.initialization_state |= 1

 def stmt_get_col_offset(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'col_offset')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & w_self._col_offset_mask:
+    if not w_self.initialization_state & 2:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset')
     return space.wrap(w_self.col_offset)
@@ -3133,7 +2996,7 @@
         w_self.setdictvalue(space, 'col_offset', w_new_value)
         return
     w_self.deldictvalue(space, 'col_offset')
-    w_self.initialization_state |= w_self._col_offset_mask
+    w_self.initialization_state |= 2

 stmt.typedef = typedef.TypeDef("stmt",
     AST.typedef,
@@ -3149,7 +3012,7 @@
         w_obj = w_self.getdictvalue(space, 'name')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name')
     return space.wrap(w_self.name)
@@ -3163,14 +3026,14 @@
         w_self.setdictvalue(space, 'name', w_new_value)
         return
     w_self.deldictvalue(space, 'name')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def FunctionDef_get_args(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'args')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args')
     return space.wrap(w_self.args)
@@ -3184,10 +3047,10 @@
         w_self.setdictvalue(space, 'args', w_new_value)
         return
     w_self.deldictvalue(space, 'args')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def FunctionDef_get_body(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     if w_self.w_body is None:
@@ -3201,10 +3064,10 @@
 def FunctionDef_set_body(space, w_self, w_new_value):
     w_self.w_body = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 def FunctionDef_get_decorator_list(space, w_self):
-    if not w_self.initialization_state & 8:
+    if not w_self.initialization_state & 32:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list')
     if w_self.w_decorator_list is None:
@@ -3218,7 +3081,7 @@
 def FunctionDef_set_decorator_list(space, w_self, w_new_value):
     w_self.w_decorator_list = w_new_value
-    w_self.initialization_state |= 8
+    w_self.initialization_state |= 32

 _FunctionDef_field_unroller = unrolling_iterable(['name', 'args', 'body', 'decorator_list'])

 def FunctionDef_init(space, w_self, __args__):
@@ -3254,7 +3117,7 @@
         w_obj = w_self.getdictvalue(space, 'name')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name')
     return space.wrap(w_self.name)
@@ -3268,10 +3131,10 @@
         w_self.setdictvalue(space, 'name', w_new_value)
         return
     w_self.deldictvalue(space, 'name')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4

 def ClassDef_get_bases(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'bases')
     if w_self.w_bases is None:
@@ -3285,10 +3148,10 @@
 def ClassDef_set_bases(space, w_self, w_new_value):
     w_self.w_bases = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8

 def ClassDef_get_body(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     if w_self.w_body is None:
@@ -3302,10 +3165,10 @@
 def ClassDef_set_body(space, w_self, w_new_value):
     w_self.w_body = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16

 def ClassDef_get_decorator_list(space, w_self):
-    if not w_self.initialization_state & 8:
+    if not w_self.initialization_state & 32:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'decorator_list')
     if w_self.w_decorator_list is None:
@@ -3319,7 +3182,7 @@
 def ClassDef_set_decorator_list(space, w_self, w_new_value):
     w_self.w_decorator_list = w_new_value
-    w_self.initialization_state |= 8
+    w_self.initialization_state |= 32

 _ClassDef_field_unroller = unrolling_iterable(['name', 'bases', 'body', 'decorator_list'])

 def ClassDef_init(space, w_self, __args__):
@@ -3356,7 +3219,7 @@
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s'
object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -3364,13 +3227,15 @@ def Return_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, True) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Return_field_unroller = unrolling_iterable(['value']) def Return_init(space, w_self, __args__): @@ -3397,7 +3262,7 @@ ) def Delete_get_targets(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: @@ -3411,7 +3276,7 @@ def Delete_set_targets(space, w_self, w_new_value): w_self.w_targets = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Delete_field_unroller = unrolling_iterable(['targets']) def Delete_init(space, w_self, __args__): @@ -3439,7 +3304,7 @@ ) def Assign_get_targets(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'targets') if w_self.w_targets is None: @@ -3453,14 +3318,14 @@ def Assign_set_targets(space, w_self, w_new_value): w_self.w_targets = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Assign_get_value(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = 
space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -3468,13 +3333,15 @@ def Assign_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Assign_field_unroller = unrolling_iterable(['targets', 'value']) def Assign_init(space, w_self, __args__): @@ -3507,7 +3374,7 @@ w_obj = w_self.getdictvalue(space, 'target') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) @@ -3515,20 +3382,22 @@ def AugAssign_set_target(space, w_self, w_new_value): try: w_self.target = space.interp_w(expr, w_new_value, False) + if type(w_self.target) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'target', w_new_value) return w_self.deldictvalue(space, 'target') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def AugAssign_get_op(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'op') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op') return operator_to_class[w_self.op - 1]() @@ -3544,14 
+3413,14 @@ return # need to save the original object too w_self.setdictvalue(space, 'op', w_new_value) - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def AugAssign_get_value(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'value') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value') return space.wrap(w_self.value) @@ -3559,13 +3428,15 @@ def AugAssign_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'value', w_new_value) return w_self.deldictvalue(space, 'value') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _AugAssign_field_unroller = unrolling_iterable(['target', 'op', 'value']) def AugAssign_init(space, w_self, __args__): @@ -3598,7 +3469,7 @@ w_obj = w_self.getdictvalue(space, 'dest') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'dest') return space.wrap(w_self.dest) @@ -3606,16 +3477,18 @@ def Print_set_dest(space, w_self, w_new_value): try: w_self.dest = space.interp_w(expr, w_new_value, True) + if type(w_self.dest) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'dest', w_new_value) return w_self.deldictvalue(space, 'dest') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def 
Print_get_values(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values') if w_self.w_values is None: @@ -3629,14 +3502,14 @@ def Print_set_values(space, w_self, w_new_value): w_self.w_values = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Print_get_nl(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'nl') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'nl') return space.wrap(w_self.nl) @@ -3650,7 +3523,7 @@ w_self.setdictvalue(space, 'nl', w_new_value) return w_self.deldictvalue(space, 'nl') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Print_field_unroller = unrolling_iterable(['dest', 'values', 'nl']) def Print_init(space, w_self, __args__): @@ -3684,7 +3557,7 @@ w_obj = w_self.getdictvalue(space, 'target') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'target') return space.wrap(w_self.target) @@ -3692,20 +3565,22 @@ def For_set_target(space, w_self, w_new_value): try: w_self.target = space.interp_w(expr, w_new_value, False) + if type(w_self.target) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'target', w_new_value) return w_self.deldictvalue(space, 'target') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def 
For_get_iter(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'iter') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'iter') return space.wrap(w_self.iter) @@ -3713,16 +3588,18 @@ def For_set_iter(space, w_self, w_new_value): try: w_self.iter = space.interp_w(expr, w_new_value, False) + if type(w_self.iter) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'iter', w_new_value) return w_self.deldictvalue(space, 'iter') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def For_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -3736,10 +3613,10 @@ def For_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 def For_get_orelse(space, w_self): - if not w_self.initialization_state & 8: + if not w_self.initialization_state & 32: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: @@ -3753,7 +3630,7 @@ def For_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 8 + w_self.initialization_state |= 32 _For_field_unroller = unrolling_iterable(['target', 'iter', 'body', 'orelse']) def For_init(space, w_self, __args__): @@ -3789,7 +3666,7 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if 
not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) @@ -3797,16 +3674,18 @@ def While_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def While_get_body(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -3820,10 +3699,10 @@ def While_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def While_get_orelse(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: @@ -3837,7 +3716,7 @@ def While_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _While_field_unroller = unrolling_iterable(['test', 'body', 'orelse']) def While_init(space, w_self, __args__): @@ -3872,7 +3751,7 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise 
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) @@ -3880,16 +3759,18 @@ def If_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def If_get_body(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -3903,10 +3784,10 @@ def If_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def If_get_orelse(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: @@ -3920,7 +3801,7 @@ def If_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _If_field_unroller = unrolling_iterable(['test', 'body', 'orelse']) def If_init(space, w_self, __args__): @@ -3955,7 +3836,7 @@ w_obj = w_self.getdictvalue(space, 'context_expr') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'context_expr') return space.wrap(w_self.context_expr) @@ -3963,20 
+3844,22 @@ def With_set_context_expr(space, w_self, w_new_value): try: w_self.context_expr = space.interp_w(expr, w_new_value, False) + if type(w_self.context_expr) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'context_expr', w_new_value) return w_self.deldictvalue(space, 'context_expr') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def With_get_optional_vars(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'optional_vars') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'optional_vars') return space.wrap(w_self.optional_vars) @@ -3984,16 +3867,18 @@ def With_set_optional_vars(space, w_self, w_new_value): try: w_self.optional_vars = space.interp_w(expr, w_new_value, True) + if type(w_self.optional_vars) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'optional_vars', w_new_value) return w_self.deldictvalue(space, 'optional_vars') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def With_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -4007,7 +3892,7 @@ def With_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _With_field_unroller = unrolling_iterable(['context_expr', 'optional_vars', 'body']) def With_init(space, w_self, 
__args__): @@ -4041,7 +3926,7 @@ w_obj = w_self.getdictvalue(space, 'type') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type') return space.wrap(w_self.type) @@ -4049,20 +3934,22 @@ def Raise_set_type(space, w_self, w_new_value): try: w_self.type = space.interp_w(expr, w_new_value, True) + if type(w_self.type) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'type', w_new_value) return w_self.deldictvalue(space, 'type') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Raise_get_inst(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'inst') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'inst') return space.wrap(w_self.inst) @@ -4070,20 +3957,22 @@ def Raise_set_inst(space, w_self, w_new_value): try: w_self.inst = space.interp_w(expr, w_new_value, True) + if type(w_self.inst) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'inst', w_new_value) return w_self.deldictvalue(space, 'inst') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Raise_get_tback(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'tback') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise 
operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'tback') return space.wrap(w_self.tback) @@ -4091,13 +3980,15 @@ def Raise_set_tback(space, w_self, w_new_value): try: w_self.tback = space.interp_w(expr, w_new_value, True) + if type(w_self.tback) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'tback', w_new_value) return w_self.deldictvalue(space, 'tback') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _Raise_field_unroller = unrolling_iterable(['type', 'inst', 'tback']) def Raise_init(space, w_self, __args__): @@ -4126,7 +4017,7 @@ ) def TryExcept_get_body(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -4140,10 +4031,10 @@ def TryExcept_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def TryExcept_get_handlers(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'handlers') if w_self.w_handlers is None: @@ -4157,10 +4048,10 @@ def TryExcept_set_handlers(space, w_self, w_new_value): w_self.w_handlers = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def TryExcept_get_orelse(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse') if w_self.w_orelse is None: @@ -4174,7 
+4065,7 @@ def TryExcept_set_orelse(space, w_self, w_new_value): w_self.w_orelse = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _TryExcept_field_unroller = unrolling_iterable(['body', 'handlers', 'orelse']) def TryExcept_init(space, w_self, __args__): @@ -4206,7 +4097,7 @@ ) def TryFinally_get_body(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -4220,10 +4111,10 @@ def TryFinally_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def TryFinally_get_finalbody(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'finalbody') if w_self.w_finalbody is None: @@ -4237,7 +4128,7 @@ def TryFinally_set_finalbody(space, w_self, w_new_value): w_self.w_finalbody = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _TryFinally_field_unroller = unrolling_iterable(['body', 'finalbody']) def TryFinally_init(space, w_self, __args__): @@ -4271,7 +4162,7 @@ w_obj = w_self.getdictvalue(space, 'test') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test') return space.wrap(w_self.test) @@ -4279,20 +4170,22 @@ def Assert_set_test(space, w_self, w_new_value): try: w_self.test = space.interp_w(expr, w_new_value, False) + if type(w_self.test) is expr: + raise OperationError(space.w_TypeError, space.w_None) except 
OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'test', w_new_value) return w_self.deldictvalue(space, 'test') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Assert_get_msg(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'msg') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'msg') return space.wrap(w_self.msg) @@ -4300,13 +4193,15 @@ def Assert_set_msg(space, w_self, w_new_value): try: w_self.msg = space.interp_w(expr, w_new_value, True) + if type(w_self.msg) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'msg', w_new_value) return w_self.deldictvalue(space, 'msg') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 _Assert_field_unroller = unrolling_iterable(['test', 'msg']) def Assert_init(space, w_self, __args__): @@ -4334,7 +4229,7 @@ ) def Import_get_names(space, w_self): - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: @@ -4348,7 +4243,7 @@ def Import_set_names(space, w_self, w_new_value): w_self.w_names = w_new_value - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 _Import_field_unroller = unrolling_iterable(['names']) def Import_init(space, w_self, __args__): @@ -4380,7 +4275,7 @@ w_obj = w_self.getdictvalue(space, 'module') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) 
raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'module') return space.wrap(w_self.module) @@ -4397,10 +4292,10 @@ w_self.setdictvalue(space, 'module', w_new_value) return w_self.deldictvalue(space, 'module') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def ImportFrom_get_names(space, w_self): - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names') if w_self.w_names is None: @@ -4414,14 +4309,14 @@ def ImportFrom_set_names(space, w_self, w_new_value): w_self.w_names = w_new_value - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def ImportFrom_get_level(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'level') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'level') return space.wrap(w_self.level) @@ -4435,7 +4330,7 @@ w_self.setdictvalue(space, 'level', w_new_value) return w_self.deldictvalue(space, 'level') - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _ImportFrom_field_unroller = unrolling_iterable(['module', 'names', 'level']) def ImportFrom_init(space, w_self, __args__): @@ -4469,7 +4364,7 @@ w_obj = w_self.getdictvalue(space, 'body') if w_obj is not None: return w_obj - if not w_self.initialization_state & 1: + if not w_self.initialization_state & 4: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') return space.wrap(w_self.body) @@ -4477,20 +4372,22 @@ def Exec_set_body(space, w_self, w_new_value): try: w_self.body = space.interp_w(expr, 
w_new_value, False) + if type(w_self.body) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'body', w_new_value) return w_self.deldictvalue(space, 'body') - w_self.initialization_state |= 1 + w_self.initialization_state |= 4 def Exec_get_globals(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'globals') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'globals') return space.wrap(w_self.globals) @@ -4498,20 +4395,22 @@ def Exec_set_globals(space, w_self, w_new_value): try: w_self.globals = space.interp_w(expr, w_new_value, True) + if type(w_self.globals) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'globals', w_new_value) return w_self.deldictvalue(space, 'globals') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def Exec_get_locals(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'locals') if w_obj is not None: return w_obj - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'locals') return space.wrap(w_self.locals) @@ -4519,13 +4418,15 @@ def Exec_set_locals(space, w_self, w_new_value): try: w_self.locals = space.interp_w(expr, w_new_value, True) + if type(w_self.locals) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'locals', w_new_value) return 
     w_self.deldictvalue(space, 'locals')
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16
 
 _Exec_field_unroller = unrolling_iterable(['body', 'globals', 'locals'])
 def Exec_init(space, w_self, __args__):
@@ -4554,7 +4455,7 @@
 )
 
 def Global_get_names(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'names')
     if w_self.w_names is None:
@@ -4568,7 +4469,7 @@
 
 def Global_set_names(space, w_self, w_new_value):
     w_self.w_names = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 _Global_field_unroller = unrolling_iterable(['names'])
 def Global_init(space, w_self, __args__):
@@ -4600,7 +4501,7 @@
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return space.wrap(w_self.value)
@@ -4608,13 +4509,15 @@
 def Expr_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, False)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 _Expr_field_unroller = unrolling_iterable(['value'])
 def Expr_init(space, w_self, __args__):
@@ -4696,7 +4599,7 @@
         w_obj = w_self.getdictvalue(space, 'lineno')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & w_self._lineno_mask:
+    if not w_self.initialization_state & 1:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno')
     return space.wrap(w_self.lineno)
@@ -4710,14 +4613,14 @@
         w_self.setdictvalue(space, 'lineno', w_new_value)
         return
     w_self.deldictvalue(space, 'lineno')
-    w_self.initialization_state |= w_self._lineno_mask
+    w_self.initialization_state |= 1
 
 def expr_get_col_offset(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'col_offset')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & w_self._col_offset_mask:
+    if not w_self.initialization_state & 2:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset')
     return space.wrap(w_self.col_offset)
@@ -4731,7 +4634,7 @@
         w_self.setdictvalue(space, 'col_offset', w_new_value)
         return
     w_self.deldictvalue(space, 'col_offset')
-    w_self.initialization_state |= w_self._col_offset_mask
+    w_self.initialization_state |= 2
 
 expr.typedef = typedef.TypeDef("expr",
     AST.typedef,
@@ -4747,7 +4650,7 @@
         w_obj = w_self.getdictvalue(space, 'op')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op')
     return boolop_to_class[w_self.op - 1]()
@@ -4763,10 +4666,10 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'op', w_new_value)
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def BoolOp_get_values(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values')
     if w_self.w_values is None:
@@ -4780,7 +4683,7 @@
 
 def BoolOp_set_values(space, w_self, w_new_value):
     w_self.w_values = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 _BoolOp_field_unroller = unrolling_iterable(['op', 'values'])
 def BoolOp_init(space, w_self, __args__):
@@ -4813,7 +4716,7 @@
         w_obj = w_self.getdictvalue(space, 'left')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left')
     return space.wrap(w_self.left)
@@ -4821,20 +4724,22 @@
 def BinOp_set_left(space, w_self, w_new_value):
     try:
         w_self.left = space.interp_w(expr, w_new_value, False)
+        if type(w_self.left) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'left', w_new_value)
         return
     w_self.deldictvalue(space, 'left')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def BinOp_get_op(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'op')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op')
     return operator_to_class[w_self.op - 1]()
@@ -4850,14 +4755,14 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'op', w_new_value)
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 def BinOp_get_right(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'right')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'right')
     return space.wrap(w_self.right)
@@ -4865,13 +4770,15 @@
 def BinOp_set_right(space, w_self, w_new_value):
     try:
         w_self.right = space.interp_w(expr, w_new_value, False)
+        if type(w_self.right) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'right', w_new_value)
         return
     w_self.deldictvalue(space, 'right')
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16
 
 _BinOp_field_unroller = unrolling_iterable(['left', 'op', 'right'])
 def BinOp_init(space, w_self, __args__):
@@ -4904,7 +4811,7 @@
         w_obj = w_self.getdictvalue(space, 'op')
         if w_obj is not None:
            return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'op')
     return unaryop_to_class[w_self.op - 1]()
@@ -4920,14 +4827,14 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'op', w_new_value)
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def UnaryOp_get_operand(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'operand')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'operand')
     return space.wrap(w_self.operand)
@@ -4935,13 +4842,15 @@
 def UnaryOp_set_operand(space, w_self, w_new_value):
     try:
         w_self.operand = space.interp_w(expr, w_new_value, False)
+        if type(w_self.operand) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'operand', w_new_value)
         return
     w_self.deldictvalue(space, 'operand')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 _UnaryOp_field_unroller = unrolling_iterable(['op', 'operand'])
 def UnaryOp_init(space, w_self, __args__):
@@ -4973,7 +4882,7 @@
         w_obj = w_self.getdictvalue(space, 'args')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args')
     return space.wrap(w_self.args)
@@ -4987,14 +4896,14 @@
         w_self.setdictvalue(space, 'args', w_new_value)
         return
     w_self.deldictvalue(space, 'args')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def Lambda_get_body(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'body')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     return space.wrap(w_self.body)
@@ -5002,13 +4911,15 @@
 def Lambda_set_body(space, w_self, w_new_value):
     try:
         w_self.body = space.interp_w(expr, w_new_value, False)
+        if type(w_self.body) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'body', w_new_value)
         return
     w_self.deldictvalue(space, 'body')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 _Lambda_field_unroller = unrolling_iterable(['args', 'body'])
 def Lambda_init(space, w_self, __args__):
@@ -5040,7 +4951,7 @@
         w_obj = w_self.getdictvalue(space, 'test')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'test')
     return space.wrap(w_self.test)
@@ -5048,20 +4959,22 @@
 def IfExp_set_test(space, w_self, w_new_value):
     try:
         w_self.test = space.interp_w(expr, w_new_value, False)
+        if type(w_self.test) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'test', w_new_value)
         return
     w_self.deldictvalue(space, 'test')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def IfExp_get_body(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'body')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body')
     return space.wrap(w_self.body)
@@ -5069,20 +4982,22 @@
 def IfExp_set_body(space, w_self, w_new_value):
     try:
         w_self.body = space.interp_w(expr, w_new_value, False)
+        if type(w_self.body) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'body', w_new_value)
         return
     w_self.deldictvalue(space, 'body')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 def IfExp_get_orelse(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'orelse')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'orelse')
     return space.wrap(w_self.orelse)
@@ -5090,13 +5005,15 @@
 def IfExp_set_orelse(space, w_self, w_new_value):
     try:
         w_self.orelse = space.interp_w(expr, w_new_value, False)
+        if type(w_self.orelse) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'orelse', w_new_value)
         return
     w_self.deldictvalue(space, 'orelse')
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16
 
 _IfExp_field_unroller = unrolling_iterable(['test', 'body', 'orelse'])
 def IfExp_init(space, w_self, __args__):
@@ -5125,7 +5042,7 @@
 )
 
 def Dict_get_keys(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keys')
     if w_self.w_keys is None:
@@ -5139,10 +5056,10 @@
 
 def Dict_set_keys(space, w_self, w_new_value):
     w_self.w_keys = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def Dict_get_values(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'values')
     if w_self.w_values is None:
@@ -5156,7 +5073,7 @@
 
 def Dict_set_values(space, w_self, w_new_value):
     w_self.w_values = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 _Dict_field_unroller = unrolling_iterable(['keys', 'values'])
 def Dict_init(space, w_self, __args__):
@@ -5186,7 +5103,7 @@
 )
 
 def Set_get_elts(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts')
     if w_self.w_elts is None:
@@ -5200,7 +5117,7 @@
 
 def Set_set_elts(space, w_self, w_new_value):
     w_self.w_elts = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 _Set_field_unroller = unrolling_iterable(['elts'])
 def Set_init(space, w_self, __args__):
@@ -5232,7 +5149,7 @@
         w_obj = w_self.getdictvalue(space, 'elt')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt')
     return space.wrap(w_self.elt)
@@ -5240,16 +5157,18 @@
 def ListComp_set_elt(space, w_self, w_new_value):
     try:
         w_self.elt = space.interp_w(expr, w_new_value, False)
+        if type(w_self.elt) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'elt', w_new_value)
         return
     w_self.deldictvalue(space, 'elt')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def ListComp_get_generators(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators')
     if w_self.w_generators is None:
@@ -5263,7 +5182,7 @@
 
 def ListComp_set_generators(space, w_self, w_new_value):
     w_self.w_generators = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 _ListComp_field_unroller = unrolling_iterable(['elt', 'generators'])
 def ListComp_init(space, w_self, __args__):
@@ -5296,7 +5215,7 @@
         w_obj = w_self.getdictvalue(space, 'elt')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt')
     return space.wrap(w_self.elt)
@@ -5304,16 +5223,18 @@
 def SetComp_set_elt(space, w_self, w_new_value):
     try:
         w_self.elt = space.interp_w(expr, w_new_value, False)
+        if type(w_self.elt) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'elt', w_new_value)
         return
     w_self.deldictvalue(space, 'elt')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def SetComp_get_generators(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators')
     if w_self.w_generators is None:
@@ -5327,7 +5248,7 @@
 
 def SetComp_set_generators(space, w_self, w_new_value):
     w_self.w_generators = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 _SetComp_field_unroller = unrolling_iterable(['elt', 'generators'])
 def SetComp_init(space, w_self, __args__):
@@ -5360,7 +5281,7 @@
         w_obj = w_self.getdictvalue(space, 'key')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'key')
     return space.wrap(w_self.key)
@@ -5368,20 +5289,22 @@
 def DictComp_set_key(space, w_self, w_new_value):
     try:
         w_self.key = space.interp_w(expr, w_new_value, False)
+        if type(w_self.key) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'key', w_new_value)
         return
     w_self.deldictvalue(space, 'key')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def DictComp_get_value(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return space.wrap(w_self.value)
@@ -5389,16 +5312,18 @@
 def DictComp_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, False)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 def DictComp_get_generators(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators')
     if w_self.w_generators is None:
@@ -5412,7 +5337,7 @@
 
 def DictComp_set_generators(space, w_self, w_new_value):
     w_self.w_generators = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16
 
 _DictComp_field_unroller = unrolling_iterable(['key', 'value', 'generators'])
 def DictComp_init(space, w_self, __args__):
@@ -5446,7 +5371,7 @@
         w_obj = w_self.getdictvalue(space, 'elt')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elt')
     return space.wrap(w_self.elt)
@@ -5454,16 +5379,18 @@
 def GeneratorExp_set_elt(space, w_self, w_new_value):
     try:
         w_self.elt = space.interp_w(expr, w_new_value, False)
+        if type(w_self.elt) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'elt', w_new_value)
         return
     w_self.deldictvalue(space, 'elt')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def GeneratorExp_get_generators(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'generators')
     if w_self.w_generators is None:
@@ -5477,7 +5404,7 @@
 
 def GeneratorExp_set_generators(space, w_self, w_new_value):
     w_self.w_generators = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 _GeneratorExp_field_unroller = unrolling_iterable(['elt', 'generators'])
 def GeneratorExp_init(space, w_self, __args__):
@@ -5510,7 +5437,7 @@
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return space.wrap(w_self.value)
@@ -5518,13 +5445,15 @@
 def Yield_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, True)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
            raise
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 _Yield_field_unroller = unrolling_iterable(['value'])
 def Yield_init(space, w_self, __args__):
@@ -5555,7 +5484,7 @@
         w_obj = w_self.getdictvalue(space, 'left')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'left')
     return space.wrap(w_self.left)
@@ -5563,16 +5492,18 @@
 def Compare_set_left(space, w_self, w_new_value):
     try:
         w_self.left = space.interp_w(expr, w_new_value, False)
+        if type(w_self.left) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'left', w_new_value)
         return
     w_self.deldictvalue(space, 'left')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def Compare_get_ops(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ops')
     if w_self.w_ops is None:
@@ -5586,10 +5517,10 @@
 
 def Compare_set_ops(space, w_self, w_new_value):
     w_self.w_ops = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 def Compare_get_comparators(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'comparators')
     if w_self.w_comparators is None:
@@ -5603,7 +5534,7 @@
 
 def Compare_set_comparators(space, w_self, w_new_value):
     w_self.w_comparators = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16
 
 _Compare_field_unroller = unrolling_iterable(['left', 'ops', 'comparators'])
 def Compare_init(space, w_self, __args__):
@@ -5638,7 +5569,7 @@
         w_obj = w_self.getdictvalue(space, 'func')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'func')
     return space.wrap(w_self.func)
@@ -5646,16 +5577,18 @@
 def Call_set_func(space, w_self, w_new_value):
     try:
         w_self.func = space.interp_w(expr, w_new_value, False)
+        if type(w_self.func) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'func', w_new_value)
         return
     w_self.deldictvalue(space, 'func')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def Call_get_args(space, w_self):
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'args')
     if w_self.w_args is None:
@@ -5669,10 +5602,10 @@
 
 def Call_set_args(space, w_self, w_new_value):
     w_self.w_args = w_new_value
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 def Call_get_keywords(space, w_self):
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'keywords')
     if w_self.w_keywords is None:
@@ -5686,14 +5619,14 @@
 
 def Call_set_keywords(space, w_self, w_new_value):
     w_self.w_keywords = w_new_value
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16
 
 def Call_get_starargs(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'starargs')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 8:
+    if not w_self.initialization_state & 32:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'starargs')
     return space.wrap(w_self.starargs)
@@ -5701,20 +5634,22 @@
 def Call_set_starargs(space, w_self, w_new_value):
     try:
         w_self.starargs = space.interp_w(expr, w_new_value, True)
+        if type(w_self.starargs) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'starargs', w_new_value)
         return
     w_self.deldictvalue(space, 'starargs')
-    w_self.initialization_state |= 8
+    w_self.initialization_state |= 32
 
 def Call_get_kwargs(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'kwargs')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 16:
+    if not w_self.initialization_state & 64:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwargs')
     return space.wrap(w_self.kwargs)
@@ -5722,13 +5657,15 @@
 def Call_set_kwargs(space, w_self, w_new_value):
     try:
         w_self.kwargs = space.interp_w(expr, w_new_value, True)
+        if type(w_self.kwargs) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'kwargs', w_new_value)
         return
     w_self.deldictvalue(space, 'kwargs')
-    w_self.initialization_state |= 16
+    w_self.initialization_state |= 64
 
 _Call_field_unroller = unrolling_iterable(['func', 'args', 'keywords', 'starargs', 'kwargs'])
 def Call_init(space, w_self, __args__):
@@ -5765,7 +5702,7 @@
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return space.wrap(w_self.value)
@@ -5773,13 +5710,15 @@
 def Repr_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, False)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 _Repr_field_unroller = unrolling_iterable(['value'])
 def Repr_init(space, w_self, __args__):
@@ -5810,7 +5749,7 @@
         w_obj = w_self.getdictvalue(space, 'n')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'n')
     return w_self.n
@@ -5824,7 +5763,7 @@
         w_self.setdictvalue(space, 'n', w_new_value)
         return
     w_self.deldictvalue(space, 'n')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 _Num_field_unroller = unrolling_iterable(['n'])
 def Num_init(space, w_self, __args__):
@@ -5855,7 +5794,7 @@
         w_obj = w_self.getdictvalue(space, 's')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 's')
     return w_self.s
@@ -5869,7 +5808,7 @@
         w_self.setdictvalue(space, 's', w_new_value)
         return
     w_self.deldictvalue(space, 's')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 _Str_field_unroller = unrolling_iterable(['s'])
 def Str_init(space, w_self, __args__):
@@ -5900,7 +5839,7 @@
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return space.wrap(w_self.value)
@@ -5908,20 +5847,22 @@
 def Attribute_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, False)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def Attribute_get_attr(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'attr')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'attr')
     return space.wrap(w_self.attr)
@@ -5935,14 +5876,14 @@
         w_self.setdictvalue(space, 'attr', w_new_value)
         return
     w_self.deldictvalue(space, 'attr')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 def Attribute_get_ctx(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'ctx')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx')
     return expr_context_to_class[w_self.ctx - 1]()
@@ -5958,7 +5899,7 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'ctx', w_new_value)
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16
 
 _Attribute_field_unroller = unrolling_iterable(['value', 'attr', 'ctx'])
 def Attribute_init(space, w_self, __args__):
@@ -5991,7 +5932,7 @@
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return space.wrap(w_self.value)
@@ -5999,20 +5940,22 @@
 def Subscript_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, False)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def Subscript_get_slice(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'slice')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'slice')
     return space.wrap(w_self.slice)
@@ -6020,20 +5963,22 @@
 def Subscript_set_slice(space, w_self, w_new_value):
     try:
         w_self.slice = space.interp_w(slice, w_new_value, False)
+        if type(w_self.slice) is slice:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'slice', w_new_value)
         return
     w_self.deldictvalue(space, 'slice')
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 def Subscript_get_ctx(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'ctx')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx')
     return expr_context_to_class[w_self.ctx - 1]()
@@ -6049,7 +5994,7 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'ctx', w_new_value)
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 16
 
 _Subscript_field_unroller = unrolling_iterable(['value', 'slice', 'ctx'])
 def Subscript_init(space, w_self, __args__):
@@ -6082,7 +6027,7 @@
         w_obj = w_self.getdictvalue(space, 'id')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'id')
     return space.wrap(w_self.id)
@@ -6096,14 +6041,14 @@
         w_self.setdictvalue(space, 'id', w_new_value)
         return
     w_self.deldictvalue(space, 'id')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def Name_get_ctx(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'ctx')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx')
     return expr_context_to_class[w_self.ctx - 1]()
@@ -6119,7 +6064,7 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'ctx', w_new_value)
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 _Name_field_unroller = unrolling_iterable(['id', 'ctx'])
 def Name_init(space, w_self, __args__):
@@ -6147,7 +6092,7 @@
 )
 
 def List_get_elts(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts')
     if w_self.w_elts is None:
@@ -6161,14 +6106,14 @@
 
 def List_set_elts(space, w_self, w_new_value):
     w_self.w_elts = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def List_get_ctx(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'ctx')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx')
     return expr_context_to_class[w_self.ctx - 1]()
@@ -6184,7 +6129,7 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'ctx', w_new_value)
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 _List_field_unroller = unrolling_iterable(['elts', 'ctx'])
 def List_init(space, w_self, __args__):
@@ -6213,7 +6158,7 @@
 )
 
 def Tuple_get_elts(space, w_self):
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'elts')
     if w_self.w_elts is None:
@@ -6227,14 +6172,14 @@
 
 def Tuple_set_elts(space, w_self, w_new_value):
     w_self.w_elts = w_new_value
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 def Tuple_get_ctx(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'ctx')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 2:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx')
     return expr_context_to_class[w_self.ctx - 1]()
@@ -6250,7 +6195,7 @@
         return
     # need to save the original object too
     w_self.setdictvalue(space, 'ctx', w_new_value)
-    w_self.initialization_state |= 2
+    w_self.initialization_state |= 8
 
 _Tuple_field_unroller = unrolling_iterable(['elts', 'ctx'])
 def Tuple_init(space, w_self, __args__):
@@ -6283,7 +6228,7 @@
         w_obj = w_self.getdictvalue(space, 'value')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
     return w_self.value
@@ -6297,7 +6242,7 @@
         w_self.setdictvalue(space, 'value', w_new_value)
         return
     w_self.deldictvalue(space, 'value')
-    w_self.initialization_state |= 1
+    w_self.initialization_state |= 4
 
 _Const_field_unroller = unrolling_iterable(['value'])
 def Const_init(space, w_self, __args__):
@@ -6409,6 +6354,8 @@
 def Slice_set_lower(space, w_self, w_new_value):
     try:
         w_self.lower = space.interp_w(expr, w_new_value, True)
+        if type(w_self.lower) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
@@ -6430,6 +6377,8 @@
 def Slice_set_upper(space, w_self, w_new_value):
     try:
         w_self.upper = space.interp_w(expr, w_new_value, True)
+        if type(w_self.upper) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
@@ -6451,6 +6400,8 @@
 def Slice_set_step(space, w_self, w_new_value):
     try:
         w_self.step = space.interp_w(expr, w_new_value, True)
+        if type(w_self.step) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
@@ -6540,6 +6491,8 @@
 def Index_set_value(space, w_self, w_new_value):
     try:
         w_self.value = space.interp_w(expr, w_new_value, False)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
@@ -6809,6 +6762,8 @@
 def comprehension_set_target(space, w_self, w_new_value):
     try:
         w_self.target = space.interp_w(expr, w_new_value, False)
+        if type(w_self.target) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
    except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
@@ -6830,6 +6785,8 @@
 def comprehension_set_iter(space, w_self, w_new_value):
     try:
         w_self.iter = space.interp_w(expr, w_new_value, False)
+        if type(w_self.iter) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
@@ -6887,7 +6844,7 @@
         w_obj = w_self.getdictvalue(space, 'lineno')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & w_self._lineno_mask:
+    if not w_self.initialization_state & 1:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'lineno')
     return space.wrap(w_self.lineno)
@@ -6901,14 +6858,14 @@
         w_self.setdictvalue(space, 'lineno', w_new_value)
         return
     w_self.deldictvalue(space, 'lineno')
-    w_self.initialization_state |= w_self._lineno_mask
+    w_self.initialization_state |= 1
 
 def excepthandler_get_col_offset(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'col_offset')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & w_self._col_offset_mask:
+    if not w_self.initialization_state & 2:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'col_offset')
     return space.wrap(w_self.col_offset)
@@ -6922,7 +6879,7 @@
         w_self.setdictvalue(space, 'col_offset', w_new_value)
         return
     w_self.deldictvalue(space, 'col_offset')
-    w_self.initialization_state |= w_self._col_offset_mask
+    w_self.initialization_state |= 2
 
 excepthandler.typedef = typedef.TypeDef("excepthandler",
     AST.typedef,
@@ -6938,7 +6895,7 @@
         w_obj = w_self.getdictvalue(space, 'type')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 1:
+    if not w_self.initialization_state & 4:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'type')
     return space.wrap(w_self.type)
@@ -6946,20 +6903,22 @@
 def ExceptHandler_set_type(space, w_self, w_new_value):
     try:
         w_self.type = space.interp_w(expr, w_new_value, True)
+        if type(w_self.type) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
     except OperationError, e:
         if not e.match(space, space.w_TypeError):
             raise
         w_self.setdictvalue(space, 'type', w_new_value)
         return
     w_self.deldictvalue(space, 'type')
-    w_self.initialization_state |= 1
+
w_self.initialization_state |= 4 def ExceptHandler_get_name(space, w_self): if w_self.w_dict is not None: w_obj = w_self.getdictvalue(space, 'name') if w_obj is not None: return w_obj - if not w_self.initialization_state & 2: + if not w_self.initialization_state & 8: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'name') return space.wrap(w_self.name) @@ -6967,16 +6926,18 @@ def ExceptHandler_set_name(space, w_self, w_new_value): try: w_self.name = space.interp_w(expr, w_new_value, True) + if type(w_self.name) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise w_self.setdictvalue(space, 'name', w_new_value) return w_self.deldictvalue(space, 'name') - w_self.initialization_state |= 2 + w_self.initialization_state |= 8 def ExceptHandler_get_body(space, w_self): - if not w_self.initialization_state & 4: + if not w_self.initialization_state & 16: typename = space.type(w_self).getname(space) raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'body') if w_self.w_body is None: @@ -6990,7 +6951,7 @@ def ExceptHandler_set_body(space, w_self, w_new_value): w_self.w_body = w_new_value - w_self.initialization_state |= 4 + w_self.initialization_state |= 16 _ExceptHandler_field_unroller = unrolling_iterable(['type', 'name', 'body']) def ExceptHandler_init(space, w_self, __args__): @@ -7164,6 +7125,8 @@ def keyword_set_value(space, w_self, w_new_value): try: w_self.value = space.interp_w(expr, w_new_value, False) + if type(w_self.value) is expr: + raise OperationError(space.w_TypeError, space.w_None) except OperationError, e: if not e.match(space, space.w_TypeError): raise diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ 
-1,6 +1,5 @@
 """codegen helpers and AST constant folding."""
 import sys
-import itertools

 from pypy.interpreter.astcompiler import ast, consts, misc
 from pypy.tool import stdlib_opcode as ops
@@ -146,8 +145,7 @@
 }
 unrolling_unary_folders = unrolling_iterable(unary_folders.items())

-for folder in itertools.chain(binary_folders.itervalues(),
-                              unary_folders.itervalues()):
+for folder in binary_folders.values() + unary_folders.values():
     folder._always_inline_ = True
 del folder

diff --git a/pypy/interpreter/astcompiler/tools/asdl_py.py b/pypy/interpreter/astcompiler/tools/asdl_py.py
--- a/pypy/interpreter/astcompiler/tools/asdl_py.py
+++ b/pypy/interpreter/astcompiler/tools/asdl_py.py
@@ -79,6 +79,7 @@
         else:
             self.emit("class %s(AST):" % (base,))
         if sum.attributes:
+            self.emit("")
             args = ", ".join(attr.name.value for attr in sum.attributes)
             self.emit("def __init__(self, %s):" % (args,), 1)
             for attr in sum.attributes:
@@ -114,7 +115,7 @@
             else:
                 names.append(repr(field.name.value))
             sub = (", ".join(names), name.value)
-            self.emit("missing_field(space, self.initialization_state, [%s], %r)"
+            self.emit("self.missing_field(space, [%s], %r)"
                       % sub, 3)
             self.emit("else:", 2)
             # Fill in all the default fields.
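The ast.py hunks earlier in this diff shift every field's `initialization_state` bit up by two positions (1 becomes 4, 2 becomes 8, and so on), so that the two low bits are uniformly reserved for the `lineno` and `col_offset` attributes shared by all nodes. A minimal sketch of such a presence-bitmask scheme — a toy class, not PyPy's actual `AST` base:

```python
class Node:
    """Toy AST node: attribute bits occupy the low positions of the
    presence mask, concrete fields start above them (here at bits 2
    and 3, i.e. mask values 4 and 8)."""
    _LINENO, _COL_OFFSET, _FIELD_A, _FIELD_B = 1, 2, 4, 8

    def __init__(self):
        self.initialization_state = 0
        self._values = {}

    def set_field(self, name, mask, value):
        # Setting a field records its value and marks its bit.
        self._values[name] = value
        self.initialization_state |= mask

    def get_field(self, name, mask):
        # Reading an unset field fails, mirroring the generated getters.
        if not self.initialization_state & mask:
            raise AttributeError("'Node' object has no attribute %r" % name)
        return self._values[name]


n = Node()
n.set_field("lineno", Node._LINENO, 3)
n.set_field("id", Node._FIELD_A, "x")
print(n.get_field("id", Node._FIELD_A))   # "x"
```

With all attribute bits in fixed low positions, the generated code no longer needs per-constructor `_lineno_mask`/`_col_offset_mask` class attributes, which is exactly what the asdl_py.py hunks above remove.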
@@ -195,17 +196,13 @@ def visitConstructor(self, cons, base, extra_attributes): self.emit("class %s(%s):" % (cons.name, base)) self.emit("") - for field in self.data.cons_attributes[cons]: - subst = (field.name, self.data.field_masks[field]) - self.emit("_%s_mask = %i" % subst, 1) - self.emit("") self.make_constructor(cons.fields, cons, extra_attributes, base) self.emit("") self.emit("def walkabout(self, visitor):", 1) self.emit("visitor.visit_%s(self)" % (cons.name,), 2) self.emit("") self.make_mutate_over(cons, cons.name) - self.make_var_syncer(cons.fields + self.data.cons_attributes[cons], + self.make_var_syncer(self.data.cons_attributes[cons] + cons.fields, cons, cons.name) def visitField(self, field): @@ -324,7 +321,7 @@ def visitSum(self, sum, name): for field in sum.attributes: - self.make_property(field, name, True) + self.make_property(field, name) self.make_typedef(name, "AST", sum.attributes, fields_name="_attributes") if not is_simple_sum(sum): @@ -400,13 +397,10 @@ def visitField(self, field, name): self.make_property(field, name) - def make_property(self, field, name, different_masks=False): + def make_property(self, field, name): func = "def %s_get_%s(space, w_self):" % (name, field.name) self.emit(func) - if different_masks: - flag = "w_self._%s_mask" % (field.name,) - else: - flag = self.data.field_masks[field] + flag = self.data.field_masks[field] if not field.seq: self.emit("if w_self.w_dict is not None:", 1) self.emit(" w_obj = w_self.getdictvalue(space, '%s')" % (field.name,), 1) @@ -458,6 +452,11 @@ config = (field.name, field.type, repr(field.opt)) self.emit("w_self.%s = space.interp_w(%s, w_new_value, %s)" % config, 2) + if field.type.value not in self.data.prod_simple: + self.emit("if type(w_self.%s) is %s:" % ( + field.name, field.type), 2) + self.emit("raise OperationError(space.w_TypeError, " + "space.w_None)", 3) else: level = 2 if field.opt and field.type.value != "int": @@ -505,7 +504,10 @@ optional_mask = 0 for i, field in 
enumerate(fields):
             flag = 1 << i
-            field_masks[field] = flag
+            if field not in field_masks:
+                field_masks[field] = flag
+            else:
+                assert field_masks[field] == flag
             if field.opt:
                 optional_mask |= flag
             else:
@@ -518,9 +520,9 @@
         if is_simple_sum(sum):
             simple_types.add(tp.name.value)
         else:
+            attrs = [field for field in sum.attributes]
             for cons in sum.types:
-                attrs = [copy_field(field) for field in sum.attributes]
-                add_masks(cons.fields + attrs, cons)
+                add_masks(attrs + cons.fields, cons)
                 cons_attributes[cons] = attrs
     else:
         prod = tp.value
@@ -588,6 +590,24 @@
             space.setattr(self, w_name, space.getitem(w_state, w_name))

+    def missing_field(self, space, required, host):
+        "Find which required field is missing."
+        state = self.initialization_state
+        for i in range(len(required)):
+            if (state >> i) & 1:
+                continue  # field is present
+            missing = required[i]
+            if missing is None:
+                continue  # field is optional
+            w_obj = self.getdictvalue(space, missing)
+            if w_obj is None:
+                err = "required field \\"%s\\" missing from %s"
+                raise operationerrfmt(space.w_TypeError, err, missing, host)
+            else:
+                err = "incorrect type for field \\"%s\\" in %s"
+                raise operationerrfmt(space.w_TypeError, err, missing, host)
+        raise AssertionError("should not reach here")
+
 class NodeVisitorNotImplemented(Exception):
     pass

@@ -631,15 +651,6 @@
 )

-def missing_field(space, state, required, host):
-    "Find which required field is missing."
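The `missing_field` helper added above (as a method on the generated `AST` base class) scans the presence bitmask for the first required field whose bit is unset. The core idea in plain Python, simplified and without the wrapped-object handling:

```python
def missing_field(state, required, host):
    """Return an error message naming the first required field whose
    presence bit in `state` is clear; None entries mark optional
    fields.  Simplified from the generated method in the diff above."""
    for i, name in enumerate(required):
        if (state >> i) & 1:
            continue  # bit set: field was initialized
        if name is None:
            continue  # optional field may legitimately stay unset
        return 'required field "%s" missing from %s' % (name, host)
    raise AssertionError("should not reach here")


# Bits 0 (lineno) and 2 (id) set; bit 1 is clear but optional;
# bit 3 (ctx) is clear and required:
print(missing_field(0b0101, ["lineno", None, "id", "ctx"], "Name"))
```

Making it a method also lets the real implementation consult `getdictvalue` to distinguish a truly missing field from one that was set to a value of the wrong type, which the old free function could not do.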
- for i in range(len(required)): - if not (state >> i) & 1: - missing = required[i] - if missing is not None: - err = "required field \\"%s\\" missing from %s" - raise operationerrfmt(space.w_TypeError, err, missing, host) - raise AssertionError("should not reach here") """ diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1,4 +1,3 @@ -import itertools import pypy from pypy.interpreter.executioncontext import ExecutionContext, ActionFlag from pypy.interpreter.executioncontext import UserDelAction, FrameTraceAction @@ -191,8 +190,8 @@ def is_w(self, space, w_other): return self is w_other - def unique_id(self, space): - return space.wrap(compute_unique_id(self)) + def immutable_unique_id(self, space): + return None def str_w(self, space): w_msg = typed_unwrap_error_msg(space, "string", self) @@ -488,6 +487,16 @@ 'parser', 'fcntl', '_codecs', 'binascii' ] + # These modules are treated like CPython treats built-in modules, + # i.e. they always shadow any xx.py. The other modules are treated + # like CPython treats extension modules, and are loaded in sys.path + # order by the fake entry '.../lib_pypy/__extensions__'. + MODULES_THAT_ALWAYS_SHADOW = dict.fromkeys([ + '__builtin__', '__pypy__', '_ast', '_codecs', '_sre', '_warnings', + '_weakref', 'errno', 'exceptions', 'gc', 'imp', 'marshal', + 'posix', 'nt', 'pwd', 'signal', 'sys', 'thread', 'zipimport', + ], None) + def make_builtins(self): "NOT_RPYTHON: only for initializing the space." @@ -519,8 +528,8 @@ exception_types_w = self.export_builtin_exceptions() # initialize with "bootstrap types" from objspace (e.g. 
w_None) - types_w = itertools.chain(self.get_builtin_types().iteritems(), - exception_types_w.iteritems()) + types_w = (self.get_builtin_types().items() + + exception_types_w.items()) for name, w_type in types_w: self.setitem(self.builtin.w_dict, self.wrap(name), w_type) @@ -697,7 +706,10 @@ return w_two.is_w(self, w_one) def id(self, w_obj): - return w_obj.unique_id(self) + w_result = w_obj.immutable_unique_id(self) + if w_result is None: + w_result = self.wrap(compute_unique_id(w_obj)) + return w_result def hash_w(self, w_obj): """shortcut for space.int_w(space.hash(w_obj))""" @@ -1579,12 +1591,15 @@ 'ArithmeticError', 'AssertionError', 'AttributeError', + 'BaseException', + 'DeprecationWarning', 'EOFError', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', + 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', @@ -1605,9 +1620,14 @@ 'TabError', 'TypeError', 'UnboundLocalError', + 'UnicodeDecodeError', 'UnicodeError', + 'UnicodeEncodeError', + 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', + 'UnicodeEncodeError', + 'UnicodeDecodeError', ] ## Irregular part of the interface: diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py --- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -2,7 +2,6 @@ This module defines the abstract base classes that support execution: Code and Frame. """ -from pypy.rlib import jit from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import Wrappable @@ -98,7 +97,6 @@ "Abstract. Get the expected number of locals." 
raise TypeError, "abstract" - @jit.dont_look_inside def fast2locals(self): # Copy values from the fastlocals to self.w_locals if self.w_locals is None: @@ -112,7 +110,6 @@ w_name = self.space.wrap(name) self.space.setitem(self.w_locals, w_name, w_value) - @jit.dont_look_inside def locals2fast(self): # Copy values from self.w_locals to the fastlocals assert self.w_locals is not None diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -619,7 +619,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -655,7 +656,8 @@ self.descr_reqcls, args) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -674,7 +676,8 @@ self.descr_reqcls, args.prepend(w_obj)) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -690,7 +693,8 @@ raise OperationError(space.w_SystemError, space.wrap("unexpected DescrMismatch error")) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -708,7 +712,8 @@ self.descr_reqcls, Arguments(space, [w1])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -726,7 +731,8 @@ self.descr_reqcls, Arguments(space, [w1, w2])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -744,7 +750,8 @@ self.descr_reqcls, 
Arguments(space, [w1, w2, w3])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result @@ -763,7 +770,8 @@ Arguments(space, [w1, w2, w3, w4])) except Exception, e: - raise self.handle_exception(space, e) + self.handle_exception(space, e) + w_result = None if w_result is None: w_result = space.w_None return w_result diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -162,7 +162,8 @@ # generate 2 versions of the function and 2 jit drivers. def _create_unpack_into(): jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results']) + reds=['self', 'frame', 'results'], + name='unpack_into') def unpack_into(self, results): """This is a hack for performance: runs the generator and collects all produced items in a list.""" @@ -196,4 +197,4 @@ self.frame = None return unpack_into unpack_into = _create_unpack_into() - unpack_into_w = _create_unpack_into() \ No newline at end of file + unpack_into_w = _create_unpack_into() diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py --- a/pypy/interpreter/pyframe.py +++ b/pypy/interpreter/pyframe.py @@ -10,7 +10,7 @@ from pypy.rlib.objectmodel import we_are_translated, instantiate from pypy.rlib.jit import hint from pypy.rlib.debug import make_sure_not_resized, check_nonneg -from pypy.rlib.rarithmetic import intmask +from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rlib import jit from pypy.tool import stdlib_opcode from pypy.tool.stdlib_opcode import host_bytecode_spec @@ -167,7 +167,7 @@ # Execution starts just after the last_instr. Initially, # last_instr is -1. After a generator suspends it points to # the YIELD_VALUE instruction. 
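The repeated gateway.py change above replaces `raise self.handle_exception(space, e)` with a plain call followed by `w_result = None`: the handler either raises a translated exception itself or returns, and the caller then falls through to the normal `w_None` default. A generic sketch of that call-then-normalize pattern (hypothetical stand-ins, not PyPy's real classes):

```python
def handle_exception(e):
    """Either raise a translated exception or return normally, in
    which case the caller substitutes a default result.  Hypothetical
    stand-in for BuiltinCode.handle_exception in gateway.py."""
    if isinstance(e, KeyError):
        raise LookupError("translated: %r" % (e.args,))
    # any other exception is swallowed; the caller supplies a default


def run(func):
    try:
        w_result = func()
    except Exception as e:
        handle_exception(e)   # may raise; otherwise fall through
        w_result = None
    if w_result is None:      # normalize, like 'w_result = space.w_None'
        w_result = "w_None"
    return w_result


print(run(lambda: 42))        # 42
```

The point of the change is that the handler no longer has to return an exception object just so the caller can re-raise it; control flow is explicit at the call site.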
- next_instr = self.last_instr + 1 + next_instr = r_uint(self.last_instr + 1) if next_instr != 0: self.pushvalue(w_inputvalue) # @@ -691,6 +691,7 @@ handlerposition = space.int_w(w_handlerposition) valuestackdepth = space.int_w(w_valuestackdepth) assert valuestackdepth >= 0 + assert handlerposition >= 0 blk = instantiate(get_block_class(opname)) blk.handlerposition = handlerposition blk.valuestackdepth = valuestackdepth diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -837,6 +837,7 @@ raise Yield def jump_absolute(self, jumpto, next_instr, ec): + check_nonneg(jumpto) return jumpto def JUMP_FORWARD(self, jumpby, next_instr): @@ -1278,7 +1279,7 @@ def handle(self, frame, unroller): next_instr = self.really_handle(frame, unroller) # JIT hack - return next_instr + return r_uint(next_instr) def really_handle(self, frame, unroller): """ Purely abstract method diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert 
excinfo.value.get_w_value(space) == "foo() msg" def test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword 
argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): diff --git a/pypy/interpreter/typedef.py b/pypy/interpreter/typedef.py --- a/pypy/interpreter/typedef.py +++ b/pypy/interpreter/typedef.py @@ -54,7 +54,11 @@ # Hash support def default_identity_hash(space, w_obj): - return space.wrap(compute_identity_hash(w_obj)) + w_unique_id = w_obj.immutable_unique_id(space) + if w_unique_id is None: # common case + return 
space.wrap(compute_identity_hash(w_obj)) + else: + return space.hash(w_unique_id) # ____________________________________________________________ # diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -8,6 +8,7 @@ from pypy.objspace.flow.model import Variable, Constant from pypy.annotation import model as annmodel from pypy.jit.metainterp.history import REF, INT, FLOAT +from pypy.jit.metainterp import history from pypy.jit.codewriter import heaptracker from pypy.rpython.lltypesystem import lltype, llmemory, rclass, rstr, rffi from pypy.rpython.ootypesystem import ootype @@ -20,7 +21,7 @@ from pypy.jit.backend.llgraph import symbolic from pypy.jit.codewriter import longlong -from pypy.rlib import libffi +from pypy.rlib import libffi, clibffi from pypy.rlib.objectmodel import ComputedIntSymbolic, we_are_translated from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib.rarithmetic import r_longlong, r_ulonglong, r_uint @@ -48,6 +49,11 @@ value._the_opaque_pointer = op return op +def _normalize(value): + if isinstance(value, lltype._ptr): + value = lltype.top_container(value._obj) + return value + def from_opaque_string(s): if isinstance(s, str): return s @@ -322,6 +328,14 @@ _variables.append(v) return r +def compile_started_vars(clt): + if not hasattr(clt, '_debug_argtypes'): # only when compiling the loop + argtypes = [v.concretetype for v in _variables] + try: + clt._debug_argtypes = argtypes + except AttributeError: # when 'clt' is actually a translated + pass # GcStruct + def compile_add(loop, opnum): loop = _from_opaque(loop) loop.operations.append(Operation(opnum)) @@ -347,6 +361,16 @@ op = loop.operations[-1] op.descr = weakref.ref(descr) +TARGET_TOKENS = weakref.WeakKeyDictionary() + +def compile_add_target_token(loop, descr, clt): + # here, 'clt' is the compiled_loop_token of the original loop that + # we are compiling + loop = 
_from_opaque(loop) + op = loop.operations[-1] + descrobj = _normalize(descr) + TARGET_TOKENS[descrobj] = loop, len(loop.operations), op.args, clt + def compile_add_var(loop, intvar): loop = _from_opaque(loop) op = loop.operations[-1] @@ -381,13 +405,25 @@ _variables.append(v) return r -def compile_add_jump_target(loop, loop_target): +def compile_add_jump_target(loop, targettoken, source_clt): loop = _from_opaque(loop) - loop_target = _from_opaque(loop_target) + descrobj = _normalize(targettoken) + (loop_target, target_opindex, target_inputargs, target_clt + ) = TARGET_TOKENS[descrobj] + # + try: + assert source_clt._debug_argtypes == target_clt._debug_argtypes + except AttributeError: # when translated + pass + # op = loop.operations[-1] op.jump_target = loop_target + op.jump_target_opindex = target_opindex + op.jump_target_inputargs = target_inputargs assert op.opnum == rop.JUMP - assert len(op.args) == len(loop_target.inputargs) + assert [v.concretetype for v in op.args] == ( + [v.concretetype for v in target_inputargs]) + # if loop_target == loop: log.info("compiling new loop") else: @@ -521,10 +557,11 @@ self.opindex += 1 continue if op.opnum == rop.JUMP: - assert len(op.jump_target.inputargs) == len(args) - self.env = dict(zip(op.jump_target.inputargs, args)) + inputargs = op.jump_target_inputargs + assert len(inputargs) == len(args) + self.env = dict(zip(inputargs, args)) self.loop = op.jump_target - self.opindex = 0 + self.opindex = op.jump_target_opindex _stats.exec_jumps += 1 elif op.opnum == rop.FINISH: if self.verbose: @@ -617,6 +654,15 @@ # return _op_default_implementation + def op_label(self, _, *args): + op = self.loop.operations[self.opindex] + assert op.opnum == rop.LABEL + assert len(op.args) == len(args) + newenv = {} + for v, value in zip(op.args, args): + newenv[v] = value + self.env = newenv + def op_debug_merge_point(self, _, *args): from pypy.jit.metainterp.warmspot import get_stats try: @@ -959,6 +1005,7 @@ self._may_force = self.opindex 
try: inpargs = _from_opaque(ctl.compiled_version).inputargs + assert len(inpargs) == len(args) for i, inparg in enumerate(inpargs): TYPE = inparg.concretetype if TYPE is lltype.Signed: @@ -1432,6 +1479,10 @@ res = _getinteriorfield_raw(libffi.types.slong, array, index, width, ofs) return res +def do_getinteriorfield_raw_float(array, index, width, ofs): + res = _getinteriorfield_raw(libffi.types.double, array, index, width, ofs) + return res + def _getfield_raw(struct, fieldnum): STRUCT, fieldname = symbolic.TokenToField[fieldnum] ptr = cast_from_int(lltype.Ptr(STRUCT), struct) @@ -1510,12 +1561,17 @@ do_setinteriorfield_gc_float = new_setinteriorfield_gc(cast_from_floatstorage) do_setinteriorfield_gc_ptr = new_setinteriorfield_gc(cast_from_ptr) -def new_setinteriorfield_raw(ffitype): +def new_setinteriorfield_raw(cast_func, ffitype): def do_setinteriorfield_raw(array, index, newvalue, width, ofs): addr = rffi.cast(rffi.VOIDP, array) + for TYPE, ffitype2 in clibffi.ffitype_map: + if ffitype2 is ffitype: + newvalue = cast_func(TYPE, newvalue) + break return libffi.array_setitem(ffitype, width, addr, index, ofs, newvalue) return do_setinteriorfield_raw -do_setinteriorfield_raw_int = new_setinteriorfield_raw(libffi.types.slong) +do_setinteriorfield_raw_int = new_setinteriorfield_raw(cast_from_int, libffi.types.slong) +do_setinteriorfield_raw_float = new_setinteriorfield_raw(cast_from_floatstorage, libffi.types.double) def do_setfield_raw_int(struct, fieldnum, newvalue): STRUCT, fieldname = symbolic.TokenToField[fieldnum] @@ -1779,9 +1835,11 @@ setannotation(compile_start_int_var, annmodel.SomeInteger()) setannotation(compile_start_ref_var, annmodel.SomeInteger()) setannotation(compile_start_float_var, annmodel.SomeInteger()) +setannotation(compile_started_vars, annmodel.s_None) setannotation(compile_add, annmodel.s_None) setannotation(compile_add_descr, annmodel.s_None) setannotation(compile_add_descr_arg, annmodel.s_None) +setannotation(compile_add_target_token, 
 annmodel.s_None)
 setannotation(compile_add_var, annmodel.s_None)
 setannotation(compile_add_int_const, annmodel.s_None)
 setannotation(compile_add_ref_const, annmodel.s_None)
diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py
--- a/pypy/jit/backend/llgraph/runner.py
+++ b/pypy/jit/backend/llgraph/runner.py
@@ -37,7 +37,7 @@
     def get_arg_types(self):
         return self.arg_types

-    def get_return_type(self):
+    def get_result_type(self):
         return self.typeinfo

     def get_extra_info(self):
@@ -138,29 +138,30 @@
         clt = original_loop_token.compiled_loop_token
         clt.loop_and_bridges.append(c)
         clt.compiling_a_bridge()
-        self._compile_loop_or_bridge(c, inputargs, operations)
+        self._compile_loop_or_bridge(c, inputargs, operations, clt)
         old, oldindex = faildescr._compiled_fail
         llimpl.compile_redirect_fail(old, oldindex, c)

-    def compile_loop(self, inputargs, operations, looptoken, log=True, name=''):
+    def compile_loop(self, inputargs, operations, jitcell_token,
+                     log=True, name=''):
         """In a real assembler backend, this should assemble the given
         list of operations.  Here we just generate a similar
         CompiledLoop instance.  The code here is RPython, whereas the
         code in llimpl is not.
         """
         c = llimpl.compile_start()
-        clt = model.CompiledLoopToken(self, looptoken.number)
+        clt = model.CompiledLoopToken(self, jitcell_token.number)
         clt.loop_and_bridges = [c]
         clt.compiled_version = c
-        looptoken.compiled_loop_token = clt
-        self._compile_loop_or_bridge(c, inputargs, operations)
+        jitcell_token.compiled_loop_token = clt
+        self._compile_loop_or_bridge(c, inputargs, operations, clt)

     def free_loop_and_bridges(self, compiled_loop_token):
         for c in compiled_loop_token.loop_and_bridges:
             llimpl.mark_as_free(c)
         model.AbstractCPU.free_loop_and_bridges(self, compiled_loop_token)

-    def _compile_loop_or_bridge(self, c, inputargs, operations):
+    def _compile_loop_or_bridge(self, c, inputargs, operations, clt):
         var2index = {}
         for box in inputargs:
             if isinstance(box, history.BoxInt):
@@ -172,10 +173,11 @@
                 var2index[box] = llimpl.compile_start_float_var(c)
             else:
                 raise Exception("box is: %r" % (box,))
-        self._compile_operations(c, operations, var2index)
+        llimpl.compile_started_vars(clt)
+        self._compile_operations(c, operations, var2index, clt)
         return c

-    def _compile_operations(self, c, operations, var2index):
+    def _compile_operations(self, c, operations, var2index, clt):
         for op in operations:
             llimpl.compile_add(c, op.getopnum())
             descr = op.getdescr()
@@ -183,9 +185,11 @@
                 llimpl.compile_add_descr(c, descr.ofs, descr.typeinfo,
                                          descr.arg_types, descr.extrainfo,
                                          descr.width)
-            if (isinstance(descr, history.LoopToken) and
-                op.getopnum() != rop.JUMP):
+            if isinstance(descr, history.JitCellToken):
+                assert op.getopnum() != rop.JUMP
                 llimpl.compile_add_loop_token(c, descr)
+            if isinstance(descr, history.TargetToken) and op.getopnum() == rop.LABEL:
+                llimpl.compile_add_target_token(c, descr, clt)
             if self.is_oo and isinstance(descr, (OODescr, MethDescr)):
                 # hack hack, not rpython
                 c._obj.externalobj.operations[-1].setdescr(descr)
@@ -239,9 +243,7 @@
         assert op.is_final()
         if op.getopnum() == rop.JUMP:
             targettoken = op.getdescr()
-            assert isinstance(targettoken, history.LoopToken)
-            compiled_version = targettoken.compiled_loop_token.compiled_version
-            llimpl.compile_add_jump_target(c, compiled_version)
+            llimpl.compile_add_jump_target(c, targettoken, clt)
         elif op.getopnum() == rop.FINISH:
             faildescr = op.getdescr()
             index = self.get_fail_descr_number(faildescr)
@@ -260,21 +262,28 @@
         self.latest_frame = frame
         return fail_index

-    def execute_token(self, loop_token):
-        """Calls the assembler generated for the given loop.
-        Returns the ResOperation that failed, of type rop.FAIL.
-        """
-        fail_index = self._execute_token(loop_token)
-        return self.get_fail_descr_from_number(fail_index)
-
-    def set_future_value_int(self, index, intvalue):
-        llimpl.set_future_value_int(index, intvalue)
-
-    def set_future_value_ref(self, index, objvalue):
-        llimpl.set_future_value_ref(index, objvalue)
-
-    def set_future_value_float(self, index, floatvalue):
-        llimpl.set_future_value_float(index, floatvalue)
+    def make_execute_token(self, *argtypes):
+        nb_args = len(argtypes)
+        unroll_argtypes = unrolling_iterable(list(enumerate(argtypes)))
+        #
+        def execute_token(loop_token, *args):
+            assert len(args) == nb_args
+            for index, TYPE in unroll_argtypes:
+                x = args[index]
+                assert TYPE == lltype.typeOf(x)
+                if TYPE == lltype.Signed:
+                    llimpl.set_future_value_int(index, x)
+                elif TYPE == llmemory.GCREF:
+                    llimpl.set_future_value_ref(index, x)
+                elif TYPE == longlong.FLOATSTORAGE:
+                    llimpl.set_future_value_float(index, x)
+                else:
+                    assert 0
+            #
+            fail_index = self._execute_token(loop_token)
+            return self.get_fail_descr_from_number(fail_index)
+        #
+        return execute_token

     def get_latest_value_int(self, index):
         return llimpl.frame_int_getvalue(self.latest_frame, index)
diff --git a/pypy/jit/backend/llsupport/asmmemmgr.py b/pypy/jit/backend/llsupport/asmmemmgr.py
--- a/pypy/jit/backend/llsupport/asmmemmgr.py
+++ b/pypy/jit/backend/llsupport/asmmemmgr.py
@@ -37,25 +37,25 @@
                 self._add_free_block(smaller_stop, stop)
                 stop = smaller_stop
             result = (start, stop)
-            self.total_mallocs += stop - start
+            self.total_mallocs += r_uint(stop - start)
             return result   # pair (start, stop)

     def free(self, start, stop):
         """Free a block (start, stop) returned by a previous malloc()."""
-        self.total_mallocs -= (stop - start)
+        self.total_mallocs -= r_uint(stop - start)
         self._add_free_block(start, stop)

     def open_malloc(self, minsize):
         """Allocate at least minsize bytes.  Returns (start, stop)."""
         result = self._allocate_block(minsize)
         (start, stop) = result
-        self.total_mallocs += stop - start
+        self.total_mallocs += r_uint(stop - start)
         return result

     def open_free(self, middle, stop):
         """Used for freeing the end of an open-allocated block of memory."""
         if stop - middle >= self.min_fragment:
-            self.total_mallocs -= (stop - middle)
+            self.total_mallocs -= r_uint(stop - middle)
             self._add_free_block(middle, stop)
             return True
         else:
@@ -77,7 +77,7 @@
             # Hack to make sure that mcs are not within 32-bits of one
             # another for testing purposes
             rmmap.hint.pos += 0x80000000 - size
-        self.total_memory_allocated += size
+        self.total_memory_allocated += r_uint(size)
         data = rffi.cast(lltype.Signed, data)
         return self._add_free_block(data, data + size)
diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py
--- a/pypy/jit/backend/llsupport/descr.py
+++ b/pypy/jit/backend/llsupport/descr.py
@@ -5,11 +5,7 @@
 from pypy.jit.metainterp.history import AbstractDescr, getkind
 from pypy.jit.metainterp import history
 from pypy.jit.codewriter import heaptracker, longlong
-
-# The point of the class organization in this file is to make instances
-# as compact as possible.  This is done by not storing the field size or
-# the 'is_pointer_field' flag in the instance itself but in the class
-# (in methods actually) using a few classes instead of just one.
+from pypy.jit.codewriter.longlong import is_longlong


 class GcCache(object):
@@ -19,6 +15,7 @@
         self._cache_size = {}
         self._cache_field = {}
         self._cache_array = {}
+        self._cache_arraylen = {}
         self._cache_call = {}
         self._cache_interiorfield = {}

@@ -26,24 +23,15 @@
         assert isinstance(STRUCT, lltype.GcStruct)

     def init_array_descr(self, ARRAY, arraydescr):
-        assert isinstance(ARRAY, lltype.GcArray)
+        assert (isinstance(ARRAY, lltype.GcArray) or
+                isinstance(ARRAY, lltype.GcStruct) and ARRAY._arrayfld)


-if lltype.SignedLongLong is lltype.Signed:
-    def is_longlong(TYPE):
-        return False
-else:
-    assert rffi.sizeof(lltype.SignedLongLong) == rffi.sizeof(lltype.Float)
-    def is_longlong(TYPE):
-        return TYPE in (lltype.SignedLongLong, lltype.UnsignedLongLong)
-
 # ____________________________________________________________
 # SizeDescrs

 class SizeDescr(AbstractDescr):
     size = 0      # help translation
-    is_immutable = False
-
     tid = llop.combine_ushort(lltype.Signed, 0, 0)

     def __init__(self, size, count_fields_if_immut=-1):
@@ -77,265 +65,247 @@
         cache[STRUCT] = sizedescr
         return sizedescr

+
 # ____________________________________________________________
 # FieldDescrs

-class BaseFieldDescr(AbstractDescr):
+FLAG_POINTER  = 'P'
+FLAG_FLOAT    = 'F'
+FLAG_UNSIGNED = 'U'
+FLAG_SIGNED   = 'S'
+FLAG_STRUCT   = 'X'
+FLAG_VOID     = 'V'
+
+class FieldDescr(AbstractDescr):
+    name = ''
     offset = 0      # help translation
-    name = ''
-    _clsname = ''
+    field_size = 0
+    flag = '\x00'

-    def __init__(self, name, offset):
+    def __init__(self, name, offset, field_size, flag):
         self.name = name
         self.offset = offset
+        self.field_size = field_size
+        self.flag = flag
+
+    def is_pointer_field(self):
+        return self.flag == FLAG_POINTER
+
+    def is_float_field(self):
+        return self.flag == FLAG_FLOAT
+
+    def is_field_signed(self):
+        return self.flag == FLAG_SIGNED

     def sort_key(self):
         return self.offset

-    def get_field_size(self, translate_support_code):
-        raise NotImplementedError
+    def repr_of_descr(self):
+        return '<Field%s %s %s>' % (self.flag, self.name, self.offset)

-    _is_pointer_field = False    # unless overridden by GcPtrFieldDescr
-    _is_float_field = False      # unless overridden by FloatFieldDescr
-    _is_field_signed = False     # unless overridden by XxxFieldDescr
-
-    def is_pointer_field(self):
-        return self._is_pointer_field
-
-    def is_float_field(self):
-        return self._is_float_field
-
-    def is_field_signed(self):
-        return self._is_field_signed
-
-    def repr_of_descr(self):
-        return '<%s %s %s>' % (self._clsname, self.name, self.offset)
-
-class DynamicFieldDescr(BaseFieldDescr):
-    def __init__(self, offset, fieldsize, is_pointer, is_float, is_signed):
-        self.offset = offset
-        self._fieldsize = fieldsize
-        self._is_pointer_field = is_pointer
-        self._is_float_field = is_float
-        self._is_field_signed = is_signed
-
-    def get_field_size(self, translate_support_code):
-        return self._fieldsize
-
-class NonGcPtrFieldDescr(BaseFieldDescr):
-    _clsname = 'NonGcPtrFieldDescr'
-    def get_field_size(self, translate_support_code):
-        return symbolic.get_size_of_ptr(translate_support_code)
-
-class GcPtrFieldDescr(NonGcPtrFieldDescr):
-    _clsname = 'GcPtrFieldDescr'
-    _is_pointer_field = True
-
-def getFieldDescrClass(TYPE):
-    return getDescrClass(TYPE, BaseFieldDescr, GcPtrFieldDescr,
-                         NonGcPtrFieldDescr, 'Field', 'get_field_size',
-                         '_is_float_field', '_is_field_signed')

 def get_field_descr(gccache, STRUCT, fieldname):
     cache = gccache._cache_field
     try:
         return cache[STRUCT][fieldname]
     except KeyError:
-        offset, _ = symbolic.get_field_token(STRUCT, fieldname,
-                                             gccache.translate_support_code)
+        offset, size = symbolic.get_field_token(STRUCT, fieldname,
+                                                gccache.translate_support_code)
         FIELDTYPE = getattr(STRUCT, fieldname)
+        flag = get_type_flag(FIELDTYPE)
         name = '%s.%s' % (STRUCT._name, fieldname)
-        fielddescr = getFieldDescrClass(FIELDTYPE)(name, offset)
+        fielddescr = FieldDescr(name, offset, size, flag)
         cachedict = cache.setdefault(STRUCT, {})
         cachedict[fieldname] = fielddescr
         return fielddescr

+def get_type_flag(TYPE):
+    if isinstance(TYPE, lltype.Ptr):
+        if TYPE.TO._gckind == 'gc':
+            return FLAG_POINTER
+        else:
+            return FLAG_UNSIGNED
+    if isinstance(TYPE, lltype.Struct):
+        return FLAG_STRUCT
+    if TYPE is lltype.Float or is_longlong(TYPE):
+        return FLAG_FLOAT
+    if (TYPE is not lltype.Bool and isinstance(TYPE, lltype.Number) and
+        rffi.cast(TYPE, -1) == -1):
+        return FLAG_SIGNED
+    return FLAG_UNSIGNED
+
+def get_field_arraylen_descr(gccache, ARRAY_OR_STRUCT):
+    cache = gccache._cache_arraylen
+    try:
+        return cache[ARRAY_OR_STRUCT]
+    except KeyError:
+        tsc = gccache.translate_support_code
+        (_, _, ofs) = symbolic.get_array_token(ARRAY_OR_STRUCT, tsc)
+        size = symbolic.get_size(lltype.Signed, tsc)
+        result = FieldDescr("len", ofs, size, get_type_flag(lltype.Signed))
+        cache[ARRAY_OR_STRUCT] = result
+        return result
+
+
 # ____________________________________________________________
 # ArrayDescrs

-_A = lltype.GcArray(lltype.Signed)     # a random gcarray
-_AF = lltype.GcArray(lltype.Float)     # an array of C doubles
+class ArrayDescr(AbstractDescr):
+    tid = 0
+    basesize = 0       # workaround for the annotator
+    itemsize = 0
+    lendescr = None
+    flag = '\x00'

-
-class BaseArrayDescr(AbstractDescr):
-    _clsname = ''
-    tid = llop.combine_ushort(lltype.Signed, 0, 0)
-
-    def get_base_size(self, translate_support_code):
-        basesize, _, _ = symbolic.get_array_token(_A, translate_support_code)
-        return basesize
-
-    def get_ofs_length(self, translate_support_code):
-        _, _, ofslength = symbolic.get_array_token(_A, translate_support_code)
-        return ofslength
-
-    def get_item_size(self, translate_support_code):
-        raise NotImplementedError
-
-    _is_array_of_pointers = False      # unless overridden by GcPtrArrayDescr
-    _is_array_of_floats   = False      # unless overridden by FloatArrayDescr
-    _is_array_of_structs  = False      # unless overridden by StructArrayDescr
-    _is_item_signed       = False      # unless overridden by XxxArrayDescr
+    def __init__(self, basesize, itemsize, lendescr, flag):
+        self.basesize = basesize
+        self.itemsize = itemsize
+        self.lendescr = lendescr    # or None, if no length
+        self.flag = flag

     def is_array_of_pointers(self):
-        return self._is_array_of_pointers
+        return self.flag == FLAG_POINTER

     def is_array_of_floats(self):
-        return self._is_array_of_floats
+        return self.flag == FLAG_FLOAT
+
+    def is_item_signed(self):
+        return self.flag == FLAG_SIGNED

     def is_array_of_structs(self):
-        return self._is_array_of_structs
-
-    def is_item_signed(self):
-        return self._is_item_signed
+        return self.flag == FLAG_STRUCT

     def repr_of_descr(self):
-        return '<%s>' % self._clsname
+        return '<Array%s %s>' % (self.flag, self.itemsize)

-class NonGcPtrArrayDescr(BaseArrayDescr):
-    _clsname = 'NonGcPtrArrayDescr'
-    def get_item_size(self, translate_support_code):
-        return symbolic.get_size_of_ptr(translate_support_code)
-
-class GcPtrArrayDescr(NonGcPtrArrayDescr):
-    _clsname = 'GcPtrArrayDescr'
-    _is_array_of_pointers = True
-
-class FloatArrayDescr(BaseArrayDescr):
-    _clsname = 'FloatArrayDescr'
-    _is_array_of_floats = True
-    def get_base_size(self, translate_support_code):
-        basesize, _, _ = symbolic.get_array_token(_AF, translate_support_code)
-        return basesize
-    def get_item_size(self, translate_support_code):
-        return symbolic.get_size(lltype.Float, translate_support_code)
-
-class StructArrayDescr(BaseArrayDescr):
-    _clsname = 'StructArrayDescr'
-    _is_array_of_structs = True
-
-class BaseArrayNoLengthDescr(BaseArrayDescr):
-    def get_base_size(self, translate_support_code):
-        return 0
-
-    def get_ofs_length(self, translate_support_code):
-        return -1
-
-class DynamicArrayNoLengthDescr(BaseArrayNoLengthDescr):
-    def __init__(self, itemsize):
-        self.itemsize = itemsize
-
-    def get_item_size(self, translate_support_code):
-        return self.itemsize
-
-class NonGcPtrArrayNoLengthDescr(BaseArrayNoLengthDescr):
-    _clsname = 'NonGcPtrArrayNoLengthDescr'
-    def get_item_size(self, translate_support_code):
-        return symbolic.get_size_of_ptr(translate_support_code)
-
-class GcPtrArrayNoLengthDescr(NonGcPtrArrayNoLengthDescr):
-    _clsname = 'GcPtrArrayNoLengthDescr'
-    _is_array_of_pointers = True
-
-def getArrayDescrClass(ARRAY):
-    if ARRAY.OF is lltype.Float:
-        return FloatArrayDescr
-    elif isinstance(ARRAY.OF, lltype.Struct):
-        class Descr(StructArrayDescr):
-            _clsname = '%sArrayDescr' % ARRAY.OF._name
-            def get_item_size(self, translate_support_code):
-                return symbolic.get_size(ARRAY.OF, translate_support_code)
-        Descr.__name__ = Descr._clsname
-        return Descr
-    return getDescrClass(ARRAY.OF, BaseArrayDescr, GcPtrArrayDescr,
-                         NonGcPtrArrayDescr, 'Array', 'get_item_size',
-                         '_is_array_of_floats', '_is_item_signed')
-
-def getArrayNoLengthDescrClass(ARRAY):
-    return getDescrClass(ARRAY.OF, BaseArrayNoLengthDescr, GcPtrArrayNoLengthDescr,
-                         NonGcPtrArrayNoLengthDescr, 'ArrayNoLength', 'get_item_size',
-                         '_is_array_of_floats', '_is_item_signed')
-
-def get_array_descr(gccache, ARRAY):
+def get_array_descr(gccache, ARRAY_OR_STRUCT):
     cache = gccache._cache_array
     try:
-        return cache[ARRAY]
+        return cache[ARRAY_OR_STRUCT]
     except KeyError:
-        # we only support Arrays that are either GcArrays, or raw no-length
-        # non-gc Arrays.
-        if ARRAY._hints.get('nolength', False):
-            assert not isinstance(ARRAY, lltype.GcArray)
-            arraydescr = getArrayNoLengthDescrClass(ARRAY)()
+        tsc = gccache.translate_support_code
+        basesize, itemsize, _ = symbolic.get_array_token(ARRAY_OR_STRUCT, tsc)
+        if isinstance(ARRAY_OR_STRUCT, lltype.Array):
+            ARRAY_INSIDE = ARRAY_OR_STRUCT
         else:
-            assert isinstance(ARRAY, lltype.GcArray)
-            arraydescr = getArrayDescrClass(ARRAY)()
-        # verify basic assumption that all arrays' basesize and ofslength
-        # are equal
-        basesize, itemsize, ofslength = symbolic.get_array_token(ARRAY, False)
-        assert basesize == arraydescr.get_base_size(False)
-        assert itemsize == arraydescr.get_item_size(False)
-        if not ARRAY._hints.get('nolength', False):
-            assert ofslength == arraydescr.get_ofs_length(False)
-        if isinstance(ARRAY, lltype.GcArray):
-            gccache.init_array_descr(ARRAY, arraydescr)
-        cache[ARRAY] = arraydescr
+            ARRAY_INSIDE = ARRAY_OR_STRUCT._flds[ARRAY_OR_STRUCT._arrayfld]
+        if ARRAY_INSIDE._hints.get('nolength', False):
+            lendescr = None
+        else:
+            lendescr = get_field_arraylen_descr(gccache, ARRAY_OR_STRUCT)
+        flag = get_type_flag(ARRAY_INSIDE.OF)
+        arraydescr = ArrayDescr(basesize, itemsize, lendescr, flag)
+        if ARRAY_OR_STRUCT._gckind == 'gc':
+            gccache.init_array_descr(ARRAY_OR_STRUCT, arraydescr)
        cache[ARRAY_OR_STRUCT] = arraydescr
         return arraydescr

+
 # ____________________________________________________________
 # InteriorFieldDescr

 class InteriorFieldDescr(AbstractDescr):
-    arraydescr = BaseArrayDescr()     # workaround for the annotator
-    fielddescr = BaseFieldDescr('', 0)
+    arraydescr = ArrayDescr(0, 0, None, '\x00')  # workaround for the annotator
+    fielddescr = FieldDescr('', 0, 0, '\x00')

     def __init__(self, arraydescr, fielddescr):
+        assert arraydescr.flag == FLAG_STRUCT
         self.arraydescr = arraydescr
         self.fielddescr = fielddescr

+    def sort_key(self):
+        return self.fielddescr.sort_key()
+
     def is_pointer_field(self):
         return self.fielddescr.is_pointer_field()

     def is_float_field(self):
         return self.fielddescr.is_float_field()

-    def sort_key(self):
-        return self.fielddescr.sort_key()
-
     def repr_of_descr(self):
         return '<InteriorFieldDescr %s>' % self.fielddescr.repr_of_descr()

-def get_interiorfield_descr(gc_ll_descr, ARRAY, FIELDTP, name):
+def get_interiorfield_descr(gc_ll_descr, ARRAY, name):
     cache = gc_ll_descr._cache_interiorfield
     try:
-        return cache[(ARRAY, FIELDTP, name)]
+        return cache[(ARRAY, name)]
     except KeyError:
         arraydescr = get_array_descr(gc_ll_descr, ARRAY)
-        fielddescr = get_field_descr(gc_ll_descr, FIELDTP, name)
+        fielddescr = get_field_descr(gc_ll_descr, ARRAY.OF, name)
         descr = InteriorFieldDescr(arraydescr, fielddescr)
-        cache[(ARRAY, FIELDTP, name)] = descr
+        cache[(ARRAY, name)] = descr
         return descr

+def get_dynamic_interiorfield_descr(gc_ll_descr, offset, width, fieldsize,
+                                    is_pointer, is_float, is_signed):
+    arraydescr = ArrayDescr(0, width, None, FLAG_STRUCT)
+    if is_pointer:
+        assert not is_float
+        flag = FLAG_POINTER
+    elif is_float:
+        flag = FLAG_FLOAT
+    elif is_signed:
+        flag = FLAG_SIGNED
+    else:
+        flag = FLAG_UNSIGNED
+    fielddescr = FieldDescr('dynamic', offset, fieldsize, flag)
+    return InteriorFieldDescr(arraydescr, fielddescr)
+
+
 # ____________________________________________________________
 # CallDescrs

-class BaseCallDescr(AbstractDescr):
-    _clsname = ''
-    loop_token = None
+class CallDescr(AbstractDescr):
     arg_classes = ''     # <-- annotation hack
+    result_type = '\x00'
+    result_flag = '\x00'
     ffi_flags = 1
+    call_stub_i = staticmethod(lambda func, args_i, args_r, args_f:
+                               0)
+    call_stub_r = staticmethod(lambda func, args_i, args_r, args_f:
+                               lltype.nullptr(llmemory.GCREF.TO))
+    call_stub_f = staticmethod(lambda func,args_i,args_r,args_f:
+                               longlong.ZEROF)

-    def __init__(self, arg_classes, extrainfo=None, ffi_flags=1):
-        self.arg_classes = arg_classes    # string of "r" and "i" (ref/int)
+    def __init__(self, arg_classes, result_type, result_signed, result_size,
+                 extrainfo=None, ffi_flags=1):
+        """
+        'arg_classes' is a string of characters, one per argument:
+            'i', 'r', 'f', 'L', 'S'
+
+        'result_type' is one character from the same list or 'v'
+
+        'result_signed' is a boolean True/False
+        """
+        self.arg_classes = arg_classes
+        self.result_type = result_type
+        self.result_size = result_size
         self.extrainfo = extrainfo
         self.ffi_flags = ffi_flags
         # NB. the default ffi_flags is 1, meaning FUNCFLAG_CDECL, which
         # makes sense on Windows as it's the one for all the C functions
         # we are compiling together with the JIT.  On non-Windows platforms
         # it is just ignored anyway.
+        if result_type == 'v':
+            result_flag = FLAG_VOID
+        elif result_type == 'i':
+            if result_signed:
+                result_flag = FLAG_SIGNED
+            else:
+                result_flag = FLAG_UNSIGNED
+        elif result_type == history.REF:
+            result_flag = FLAG_POINTER
+        elif result_type == history.FLOAT or result_type == 'L':
+            result_flag = FLAG_FLOAT
+        elif result_type == 'S':
+            result_flag = FLAG_UNSIGNED
+        else:
+            raise NotImplementedError("result_type = '%s'" % (result_type,))
+        self.result_flag = result_flag

     def __repr__(self):
-        res = '%s(%s)' % (self.__class__.__name__, self.arg_classes)
+        res = 'CallDescr(%s)' % (self.arg_classes,)
         extraeffect = getattr(self.extrainfo, 'extraeffect', None)
         if extraeffect is not None:
             res += ' EF=%r' % extraeffect
@@ -363,14 +333,14 @@
     def get_arg_types(self):
         return self.arg_classes

-    def get_return_type(self):
-        return self._return_type
+    def get_result_type(self):
+        return self.result_type

-    def get_result_size(self, translate_support_code):
-        raise NotImplementedError
+    def get_result_size(self):
+        return self.result_size

     def is_result_signed(self):
-        return False    # unless overridden
+        return self.result_flag == FLAG_SIGNED

     def create_call_stub(self, rtyper, RESULT):
         from pypy.rlib.clibffi import FFI_DEFAULT_ABI
@@ -408,18 +378,26 @@
         seen = {'i': 0, 'r': 0, 'f': 0}
         args = ", ".join([process(c) for c in self.arg_classes])

-        if self.get_return_type() == history.INT:
+        result_type = self.get_result_type()
+        if result_type == history.INT:
             result = 'rffi.cast(lltype.Signed, res)'
-        elif self.get_return_type() == history.REF:
+            category = 'i'
+        elif result_type == history.REF:
+            assert RESULT == llmemory.GCREF   # should be ensured by the caller
             result = 'lltype.cast_opaque_ptr(llmemory.GCREF, res)'
-        elif self.get_return_type() == history.FLOAT:
+            category = 'r'
+        elif result_type == history.FLOAT:
             result = 'longlong.getfloatstorage(res)'
-        elif self.get_return_type() == 'L':
+            category = 'f'
+        elif result_type == 'L':
             result = 'rffi.cast(lltype.SignedLongLong, res)'
-        elif self.get_return_type() == history.VOID:
-            result = 'None'
-        elif self.get_return_type() == 'S':
+            category = 'f'
+        elif result_type == history.VOID:
+            result = '0'
+            category = 'i'
+        elif result_type == 'S':
             result = 'longlong.singlefloat2int(res)'
+            category = 'i'
         else:
             assert 0
         source = py.code.Source("""
@@ -433,10 +411,13 @@
         d = globals().copy()
         d.update(locals())
         exec source.compile() in d
-        self.call_stub = d['call_stub']
+        call_stub = d['call_stub']
+        # store the function into one of three attributes, to preserve
+        # type-correctness of the return value
+        setattr(self, 'call_stub_%s' % category, call_stub)

     def verify_types(self, args_i, args_r, args_f, return_type):
-        assert self._return_type in return_type
+        assert self.result_type in return_type
         assert (self.arg_classes.count('i') +
                 self.arg_classes.count('S')) == len(args_i or ())
         assert self.arg_classes.count('r') == len(args_r or ())
@@ -444,161 +425,56 @@
                 self.arg_classes.count('L')) == len(args_f or ())

     def repr_of_descr(self):
-        return '<%s>' % self._clsname
+        res = 'Call%s %d' % (self.result_type, self.result_size)
+        if self.arg_classes:
+            res += ' ' + self.arg_classes
+        if self.extrainfo:
+            res += ' EF=%d' % self.extrainfo.extraeffect
+            oopspecindex = self.extrainfo.oopspecindex
+            if oopspecindex:
+                res += ' OS=%d' % oopspecindex
+        return '<%s>' % res

-class BaseIntCallDescr(BaseCallDescr):
-    # Base class of the various subclasses of descrs corresponding to
-    # calls having a return kind of 'int' (including non-gc pointers).
-    # The inheritance hierarchy is a bit different than with other Descr
-    # classes because of the 'call_stub' attribute, which is of type
-    #
-    #     lambda func, args_i, args_r, args_f --> int/ref/float/void
-    #
-    # The purpose of BaseIntCallDescr is to be the parent of all classes
-    # in which 'call_stub' has a return kind of 'int'.
-    _return_type = history.INT
-    call_stub = staticmethod(lambda func, args_i, args_r, args_f: 0)
-
-    _is_result_signed = False      # can be overridden in XxxCallDescr
-    def is_result_signed(self):
-        return self._is_result_signed
-
-class DynamicIntCallDescr(BaseIntCallDescr):
-    """
-    calldescr that works for every integer type, by explicitly passing it the
-    size of the result. Used only by get_call_descr_dynamic
-    """
-    _clsname = 'DynamicIntCallDescr'
-
-    def __init__(self, arg_classes, result_size, result_sign, extrainfo, ffi_flags):
-        BaseIntCallDescr.__init__(self, arg_classes, extrainfo, ffi_flags)
-        assert isinstance(result_sign, bool)
-        self._result_size = chr(result_size)
-        self._result_sign = result_sign
-
-    def get_result_size(self, translate_support_code):
-        return ord(self._result_size)
-
-    def is_result_signed(self):
-        return self._result_sign
-
-
-class NonGcPtrCallDescr(BaseIntCallDescr):
-    _clsname = 'NonGcPtrCallDescr'
-    def get_result_size(self, translate_support_code):
-        return symbolic.get_size_of_ptr(translate_support_code)
-
-class GcPtrCallDescr(BaseCallDescr):
-    _clsname = 'GcPtrCallDescr'
-    _return_type = history.REF
-    call_stub = staticmethod(lambda func, args_i, args_r, args_f:
-                             lltype.nullptr(llmemory.GCREF.TO))
-    def get_result_size(self, translate_support_code):
-        return symbolic.get_size_of_ptr(translate_support_code)
-
-class FloatCallDescr(BaseCallDescr):
-    _clsname = 'FloatCallDescr'
-    _return_type = history.FLOAT
-    call_stub = staticmethod(lambda func,args_i,args_r,args_f: longlong.ZEROF)
-    def get_result_size(self, translate_support_code):
-        return symbolic.get_size(lltype.Float, translate_support_code)
-
-class LongLongCallDescr(FloatCallDescr):
-    _clsname = 'LongLongCallDescr'
-    _return_type = 'L'
-
-class VoidCallDescr(BaseCallDescr):
-    _clsname = 'VoidCallDescr'
-    _return_type = history.VOID
-    call_stub = staticmethod(lambda func, args_i, args_r, args_f: None)
-    def get_result_size(self, translate_support_code):
-        return 0
-
-_SingleFloatCallDescr = None   # built lazily
-
-def getCallDescrClass(RESULT):
-    if RESULT is lltype.Void:
-        return VoidCallDescr
-    if RESULT is lltype.Float:
-        return FloatCallDescr
-    if RESULT is lltype.SingleFloat:
-        global _SingleFloatCallDescr
-        if _SingleFloatCallDescr is None:
-            assert rffi.sizeof(rffi.UINT) == rffi.sizeof(RESULT)
-            class SingleFloatCallDescr(getCallDescrClass(rffi.UINT)):
-                _clsname = 'SingleFloatCallDescr'
-                _return_type = 'S'
-            _SingleFloatCallDescr = SingleFloatCallDescr
-        return _SingleFloatCallDescr
-    if is_longlong(RESULT):
-        return LongLongCallDescr
-    return getDescrClass(RESULT, BaseIntCallDescr, GcPtrCallDescr,
-                         NonGcPtrCallDescr, 'Call', 'get_result_size',
-                         Ellipsis,  # <= floatattrname should not be used here
-                         '_is_result_signed')
-getCallDescrClass._annspecialcase_ = 'specialize:memo'
+def map_type_to_argclass(ARG, accept_void=False):
+    kind = getkind(ARG)
+    if kind == 'int':
+        if ARG is lltype.SingleFloat: return 'S'
+        else:                         return 'i'
+    elif kind == 'ref':               return 'r'
+    elif kind == 'float':
+        if is_longlong(ARG):          return 'L'
+        else:                         return 'f'
+    elif kind == 'void':
+        if accept_void:               return 'v'
+    raise NotImplementedError('ARG = %r' % (ARG,))

 def get_call_descr(gccache, ARGS, RESULT, extrainfo=None):
-    arg_classes = []
-    for ARG in ARGS:
-        kind = getkind(ARG)
-        if kind == 'int':
-            if ARG is lltype.SingleFloat:
-                arg_classes.append('S')
+    arg_classes = map(map_type_to_argclass, ARGS)
+    arg_classes = ''.join(arg_classes)
+    result_type = map_type_to_argclass(RESULT, accept_void=True)
+    RESULT_ERASED = RESULT
+    if RESULT is lltype.Void:
+        result_size = 0
+        result_signed = False
+    else:
+        if isinstance(RESULT, lltype.Ptr):
+            # avoid too many CallDescrs
+            if result_type == 'r':
+                RESULT_ERASED = llmemory.GCREF
             else:
-                arg_classes.append('i')
-        elif kind == 'ref': arg_classes.append('r')
-        elif kind == 'float':
-            if is_longlong(ARG):
-                arg_classes.append('L')
-            else:
-                arg_classes.append('f')
-        else:
-            raise NotImplementedError('ARG = %r' % (ARG,))
-    arg_classes = ''.join(arg_classes)
-    cls = getCallDescrClass(RESULT)
-    key = (cls, arg_classes, extrainfo)
+                RESULT_ERASED = llmemory.Address
+        result_size = symbolic.get_size(RESULT_ERASED,
+                                        gccache.translate_support_code)
+        result_signed = get_type_flag(RESULT) == FLAG_SIGNED
+    key = (arg_classes, result_type, result_signed, RESULT_ERASED, extrainfo)
     cache = gccache._cache_call
     try:
-        return cache[key]
+        calldescr = cache[key]
     except KeyError:
-        calldescr = cls(arg_classes, extrainfo)
-        calldescr.create_call_stub(gccache.rtyper, RESULT)
+        calldescr = CallDescr(arg_classes, result_type, result_signed,
+                              result_size, extrainfo)
+        calldescr.create_call_stub(gccache.rtyper, RESULT_ERASED)
         cache[key] = calldescr
-    return calldescr
-
-
-# ____________________________________________________________
-
-def getDescrClass(TYPE, BaseDescr, GcPtrDescr, NonGcPtrDescr,
-                  nameprefix, methodname, floatattrname, signedattrname,
-                  _cache={}):
-    if isinstance(TYPE, lltype.Ptr):
-        if TYPE.TO._gckind == 'gc':
-            return GcPtrDescr
-        else:
-            return NonGcPtrDescr
-    if TYPE is lltype.SingleFloat:
-        assert rffi.sizeof(rffi.UINT) == rffi.sizeof(TYPE)
-        TYPE = rffi.UINT
-    try:
-        return _cache[nameprefix, TYPE]
-    except KeyError:
-        #
-        class Descr(BaseDescr):
-            _clsname = '%s%sDescr' % (TYPE._name, nameprefix)
-        Descr.__name__ = Descr._clsname
-        #
-        def method(self, translate_support_code):
-            return symbolic.get_size(TYPE, translate_support_code)
-        setattr(Descr, methodname, method)
-        #
-        if TYPE is lltype.Float or is_longlong(TYPE):
-            setattr(Descr, floatattrname, True)
-        elif (TYPE is not lltype.Bool and isinstance(TYPE, lltype.Number) and
-              rffi.cast(TYPE, -1) == -1):
-            setattr(Descr, signedattrname, True)
-        #
-        _cache[nameprefix, TYPE] = Descr
-        return Descr
+    assert repr(calldescr.result_size) == repr(result_size)
+    return calldescr
diff --git a/pypy/jit/backend/llsupport/ffisupport.py b/pypy/jit/backend/llsupport/ffisupport.py
--- a/pypy/jit/backend/llsupport/ffisupport.py
+++ b/pypy/jit/backend/llsupport/ffisupport.py
@@ -1,9 +1,7 @@
 from pypy.rlib.rarithmetic import intmask
 from pypy.jit.metainterp import history
 from pypy.rpython.lltypesystem import rffi
-from pypy.jit.backend.llsupport.descr import (
-    DynamicIntCallDescr, NonGcPtrCallDescr, FloatCallDescr, VoidCallDescr,
-    LongLongCallDescr, getCallDescrClass)
+from pypy.jit.backend.llsupport.descr import CallDescr

 class UnsupportedKind(Exception):
     pass
@@ -16,29 +14,13 @@
         argkinds = [get_ffi_type_kind(cpu, arg) for arg in ffi_args]
     except UnsupportedKind:
         return None
-    arg_classes = ''.join(argkinds)
-    if reskind == history.INT:
-        size = intmask(ffi_result.c_size)
-        signed = is_ffi_type_signed(ffi_result)
-        return DynamicIntCallDescr(arg_classes, size, signed, extrainfo,
-                                   ffi_flags=ffi_flags)
-    elif reskind == history.REF:
-        return NonGcPtrCallDescr(arg_classes, extrainfo,
-                                 ffi_flags=ffi_flags)
-    elif reskind == history.FLOAT:
-        return FloatCallDescr(arg_classes, extrainfo,
-                              ffi_flags=ffi_flags)
-    elif reskind == history.VOID:
-        return VoidCallDescr(arg_classes, extrainfo,
-                             ffi_flags=ffi_flags)
-    elif reskind == 'L':
-        return LongLongCallDescr(arg_classes, extrainfo,
-                                 ffi_flags=ffi_flags)
-    elif reskind == 'S':
-        SingleFloatCallDescr = getCallDescrClass(rffi.FLOAT)
-        return SingleFloatCallDescr(arg_classes, extrainfo,
-                                    ffi_flags=ffi_flags)
-    assert False
+    if reskind == history.VOID:
+        result_size = 0
+    else:
+        result_size = intmask(ffi_result.c_size)
+    argkinds = ''.join(argkinds)
+    return CallDescr(argkinds, reskind, is_ffi_type_signed(ffi_result),
+                     result_size, extrainfo, ffi_flags=ffi_flags)

 def get_ffi_type_kind(cpu, ffi_type):
     from pypy.rlib.libffi import types
diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py
--- a/pypy/jit/backend/llsupport/gc.py
+++ b/pypy/jit/backend/llsupport/gc.py
@@ -1,6 +1,6 @@
 import os
 from pypy.rlib import rgc
-from pypy.rlib.objectmodel import we_are_translated
+from pypy.rlib.objectmodel import we_are_translated, specialize
 from pypy.rlib.debug import fatalerror
 from pypy.rlib.rarithmetic import ovfcheck
 from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass, rstr
@@ -8,52 +8,93 @@
 from pypy.rpython.lltypesystem.lloperation import llop
 from pypy.rpython.annlowlevel import llhelper
 from pypy.translator.tool.cbuild import ExternalCompilationInfo
-from pypy.jit.metainterp.history import BoxInt, BoxPtr, ConstInt, ConstPtr
-from pypy.jit.metainterp.history import AbstractDescr
+from pypy.jit.codewriter import heaptracker
+from pypy.jit.metainterp.history import ConstPtr, AbstractDescr
 from pypy.jit.metainterp.resoperation import ResOperation, rop
 from pypy.jit.backend.llsupport import symbolic
 from pypy.jit.backend.llsupport.symbolic import WORD
-from pypy.jit.backend.llsupport.descr import BaseSizeDescr, BaseArrayDescr
+from pypy.jit.backend.llsupport.descr import SizeDescr, ArrayDescr
 from pypy.jit.backend.llsupport.descr import GcCache, get_field_descr
-from pypy.jit.backend.llsupport.descr import GcPtrFieldDescr
+from pypy.jit.backend.llsupport.descr import get_array_descr
 from pypy.jit.backend.llsupport.descr import get_call_descr
+from pypy.jit.backend.llsupport.rewrite import GcRewriterAssembler
 from pypy.rpython.memory.gctransform import asmgcroot

 # ____________________________________________________________

 class GcLLDescription(GcCache):
-    minimal_size_in_nursery = 0
-    get_malloc_slowpath_addr = None

     def __init__(self, gcdescr, translator=None, rtyper=None):
         GcCache.__init__(self, translator is not None, rtyper)
         self.gcdescr = gcdescr
+        if translator and translator.config.translation.gcremovetypeptr:
+            self.fielddescr_vtable = None
+        else:
+            self.fielddescr_vtable = get_field_descr(self, rclass.OBJECT,
+                                                     'typeptr')
+        self._generated_functions = []
+
+    def _setup_str(self):
+        self.str_descr = get_array_descr(self, rstr.STR)
+        self.unicode_descr = get_array_descr(self, rstr.UNICODE)
+
+    def generate_function(self, funcname, func, ARGS, RESULT=llmemory.GCREF):
+        """Generates a variant of malloc with the given name and the given
+        arguments.  It should return NULL if out of memory.  If it raises
+        anything, it must be an optional MemoryError.
+        """
+        FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT))
+        descr = get_call_descr(self, ARGS, RESULT)
+        setattr(self, funcname, func)
+        setattr(self, funcname + '_FUNCPTR', FUNCPTR)
+        setattr(self, funcname + '_descr', descr)
+        self._generated_functions.append(funcname)
+
+    @specialize.arg(1)
+    def get_malloc_fn(self, funcname):
+        func = getattr(self, funcname)
+        FUNC = getattr(self, funcname + '_FUNCPTR')
+        return llhelper(FUNC, func)
+
+    @specialize.arg(1)
+    def get_malloc_fn_addr(self, funcname):
+        ll_func = self.get_malloc_fn(funcname)
+        return heaptracker.adr2int(llmemory.cast_ptr_to_adr(ll_func))
+
     def _freeze_(self):
         return True
     def initialize(self):
         pass
     def do_write_barrier(self, gcref_struct, gcref_newptr):
         pass
-    def rewrite_assembler(self, cpu, operations, gcrefs_output_list):
-        return operations
-    def can_inline_malloc(self, descr):
-        return False
-    def can_inline_malloc_varsize(self, descr, num_elem):
+    def can_use_nursery_malloc(self, size):
         return False
     def has_write_barrier_class(self):
         return None
     def freeing_block(self, start, stop):
         pass
+    def get_nursery_free_addr(self):
+        raise NotImplementedError
+    def get_nursery_top_addr(self):
+        raise NotImplementedError

-    def get_funcptr_for_newarray(self):
-        return llhelper(self.GC_MALLOC_ARRAY, self.malloc_array)
-    def get_funcptr_for_newstr(self):
-        return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_str)
-    def get_funcptr_for_newunicode(self):
-        return llhelper(self.GC_MALLOC_STR_UNICODE, self.malloc_unicode)
+    def gc_malloc(self, sizedescr):
+        """Blackhole: do a 'bh_new'.  Also used for 'bh_new_with_vtable',
+        with the vtable pointer set manually afterwards."""
+        assert isinstance(sizedescr, SizeDescr)
+        return self._bh_malloc(sizedescr)

+    def gc_malloc_array(self, arraydescr, num_elem):
+        assert isinstance(arraydescr, ArrayDescr)
+        return self._bh_malloc_array(arraydescr, num_elem)

-    def record_constptrs(self, op, gcrefs_output_list):
+    def gc_malloc_str(self, num_elem):
+        return self._bh_malloc_array(self.str_descr, num_elem)
+
+    def gc_malloc_unicode(self, num_elem):
+        return self._bh_malloc_array(self.unicode_descr, num_elem)
+
+    def _record_constptrs(self, op, gcrefs_output_list):
         for i in range(op.numargs()):
             v = op.getarg(i)
             if isinstance(v, ConstPtr) and bool(v.value):
@@ -61,11 +102,27 @@
                 rgc._make_sure_does_not_move(p)
                 gcrefs_output_list.append(p)

+    def rewrite_assembler(self, cpu, operations, gcrefs_output_list):
+        rewriter = GcRewriterAssembler(self, cpu)
+        newops = rewriter.rewrite(operations)
+        # record all GCREFs, because the GC (or Boehm) cannot see them and
+        # keep them alive if they end up as constants in the assembler
+        for op in newops:
+            self._record_constptrs(op, gcrefs_output_list)
+        return newops
+
 # ____________________________________________________________

 class GcLLDescr_boehm(GcLLDescription):
-    moving_gc = False
-    gcrootmap = None
+    kind                  = 'boehm'
+    moving_gc             = False
+    round_up              = False
+    gcrootmap             = None
+    write_barrier_descr   = None
+    fielddescr_tid        = None
+    str_type_id           = 0
+    unicode_type_id       = 0
+    get_malloc_slowpath_addr = None

     @classmethod
     def configure_boehm_once(cls):
@@ -76,6 +133,16 @@
         from pypy.rpython.tool import rffi_platform
         compilation_info = rffi_platform.configure_boehm()

+        # on some platform GC_init is required before any other
+        # GC_* functions, call it here
for the benefit of tests + # XXX move this to tests + init_fn_ptr = rffi.llexternal("GC_init", + [], lltype.Void, + compilation_info=compilation_info, + sandboxsafe=True, + _nowrapper=True) + init_fn_ptr() + # Versions 6.x of libgc needs to use GC_local_malloc(). # Versions 7.x of libgc removed this function; GC_malloc() has # the same behavior if libgc was compiled with @@ -95,96 +162,42 @@ sandboxsafe=True, _nowrapper=True) cls.malloc_fn_ptr = malloc_fn_ptr - cls.compilation_info = compilation_info return malloc_fn_ptr def __init__(self, gcdescr, translator, rtyper): GcLLDescription.__init__(self, gcdescr, translator, rtyper) # grab a pointer to the Boehm 'malloc' function - malloc_fn_ptr = self.configure_boehm_once() - self.funcptr_for_new = malloc_fn_ptr + self.malloc_fn_ptr = self.configure_boehm_once() + self._setup_str() + self._make_functions() - def malloc_array(basesize, itemsize, ofs_length, num_elem): + def _make_functions(self): + + def malloc_fixedsize(size): + return self.malloc_fn_ptr(size) + self.generate_function('malloc_fixedsize', malloc_fixedsize, + [lltype.Signed]) + + def malloc_array(basesize, num_elem, itemsize, ofs_length): try: - size = ovfcheck(basesize + ovfcheck(itemsize * num_elem)) + totalsize = ovfcheck(basesize + ovfcheck(itemsize * num_elem)) except OverflowError: return lltype.nullptr(llmemory.GCREF.TO) - res = self.funcptr_for_new(size) - if not res: - return res - rffi.cast(rffi.CArrayPtr(lltype.Signed), res)[ofs_length/WORD] = num_elem + res = self.malloc_fn_ptr(totalsize) + if res: + arrayptr = rffi.cast(rffi.CArrayPtr(lltype.Signed), res) + arrayptr[ofs_length/WORD] = num_elem return res - self.malloc_array = malloc_array - self.GC_MALLOC_ARRAY = lltype.Ptr(lltype.FuncType( - [lltype.Signed] * 4, llmemory.GCREF)) + self.generate_function('malloc_array', malloc_array, + [lltype.Signed] * 4) + def _bh_malloc(self, sizedescr): + return self.malloc_fixedsize(sizedescr.size) - (str_basesize, str_itemsize, str_ofs_length - ) = 
symbolic.get_array_token(rstr.STR, self.translate_support_code) - (unicode_basesize, unicode_itemsize, unicode_ofs_length - ) = symbolic.get_array_token(rstr.UNICODE, self.translate_support_code) - def malloc_str(length): - return self.malloc_array( - str_basesize, str_itemsize, str_ofs_length, length - ) - def malloc_unicode(length): - return self.malloc_array( - unicode_basesize, unicode_itemsize, unicode_ofs_length, length - ) - self.malloc_str = malloc_str - self.malloc_unicode = malloc_unicode - self.GC_MALLOC_STR_UNICODE = lltype.Ptr(lltype.FuncType( - [lltype.Signed], llmemory.GCREF)) - - - # on some platform GC_init is required before any other - # GC_* functions, call it here for the benefit of tests - # XXX move this to tests - init_fn_ptr = rffi.llexternal("GC_init", - [], lltype.Void, - compilation_info=self.compilation_info, - sandboxsafe=True, - _nowrapper=True) - - init_fn_ptr() - - def gc_malloc(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return self.funcptr_for_new(sizedescr.size) - - def gc_malloc_array(self, arraydescr, num_elem): - assert isinstance(arraydescr, BaseArrayDescr) - ofs_length = arraydescr.get_ofs_length(self.translate_support_code) - basesize = arraydescr.get_base_size(self.translate_support_code) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return self.malloc_array(basesize, itemsize, ofs_length, num_elem) - - def gc_malloc_str(self, num_elem): - return self.malloc_str(num_elem) - - def gc_malloc_unicode(self, num_elem): - return self.malloc_unicode(num_elem) - - def args_for_new(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return [sizedescr.size] - - def args_for_new_array(self, arraydescr): - ofs_length = arraydescr.get_ofs_length(self.translate_support_code) - basesize = arraydescr.get_base_size(self.translate_support_code) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return [basesize, itemsize, ofs_length] - - def 
get_funcptr_for_new(self): - return self.funcptr_for_new - - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - # record all GCREFs too, because Boehm cannot see them and keep them - # alive if they end up as constants in the assembler - for op in operations: - self.record_constptrs(op, gcrefs_output_list) - return GcLLDescription.rewrite_assembler(self, cpu, operations, - gcrefs_output_list) + def _bh_malloc_array(self, arraydescr, num_elem): + return self.malloc_array(arraydescr.basesize, num_elem, + arraydescr.itemsize, + arraydescr.lendescr.offset) # ____________________________________________________________ @@ -554,12 +567,14 @@ class WriteBarrierDescr(AbstractDescr): def __init__(self, gc_ll_descr): - GCClass = gc_ll_descr.GCClass self.llop1 = gc_ll_descr.llop1 self.WB_FUNCPTR = gc_ll_descr.WB_FUNCPTR self.WB_ARRAY_FUNCPTR = gc_ll_descr.WB_ARRAY_FUNCPTR - self.fielddescr_tid = get_field_descr(gc_ll_descr, GCClass.HDR, 'tid') + self.fielddescr_tid = gc_ll_descr.fielddescr_tid # + GCClass = gc_ll_descr.GCClass + if GCClass is None: # for tests + return self.jit_wb_if_flag = GCClass.JIT_WB_IF_FLAG self.jit_wb_if_flag_byteofs, self.jit_wb_if_flag_singlebyte = ( self.extract_flag_byte(self.jit_wb_if_flag)) @@ -596,48 +611,74 @@ funcaddr = llmemory.cast_ptr_to_adr(funcptr) return cpu.cast_adr_to_int(funcaddr) # this may return 0 + def has_write_barrier_from_array(self, cpu): + return self.get_write_barrier_from_array_fn(cpu) != 0 + class GcLLDescr_framework(GcLLDescription): DEBUG = False # forced to True by x86/test/test_zrpy_gc.py + kind = 'framework' + round_up = True - def __init__(self, gcdescr, translator, rtyper, llop1=llop): - from pypy.rpython.memory.gctypelayout import check_typeid - from pypy.rpython.memory.gcheader import GCHeaderBuilder - from pypy.rpython.memory.gctransform import framework + def __init__(self, gcdescr, translator, rtyper, llop1=llop, + really_not_translated=False): GcLLDescription.__init__(self, gcdescr, 
translator, rtyper) - assert self.translate_support_code, "required with the framework GC" self.translator = translator self.llop1 = llop1 + if really_not_translated: + assert not self.translate_support_code # but half does not work + self._initialize_for_tests() + else: + assert self.translate_support_code,"required with the framework GC" + self._check_valid_gc() + self._make_gcrootmap() + self._make_layoutbuilder() + self._setup_gcclass() + self._setup_tid() + self._setup_write_barrier() + self._setup_str() + self._make_functions(really_not_translated) + def _initialize_for_tests(self): + self.layoutbuilder = None + self.fielddescr_tid = AbstractDescr() + self.max_size_of_young_obj = 1000 + self.GCClass = None + + def _check_valid_gc(self): # we need the hybrid or minimark GC for rgc._make_sure_does_not_move() # to work - if gcdescr.config.translation.gc not in ('hybrid', 'minimark'): + if self.gcdescr.config.translation.gc not in ('hybrid', 'minimark'): raise NotImplementedError("--gc=%s not implemented with the JIT" % (gcdescr.config.translation.gc,)) + def _make_gcrootmap(self): # to find roots in the assembler, make a GcRootMap - name = gcdescr.config.translation.gcrootfinder + name = self.gcdescr.config.translation.gcrootfinder try: cls = globals()['GcRootMap_' + name] except KeyError: raise NotImplementedError("--gcrootfinder=%s not implemented" " with the JIT" % (name,)) - gcrootmap = cls(gcdescr) + gcrootmap = cls(self.gcdescr) self.gcrootmap = gcrootmap + def _make_layoutbuilder(self): # make a TransformerLayoutBuilder and save it on the translator # where it can be fished and reused by the FrameworkGCTransformer + from pypy.rpython.memory.gctransform import framework + translator = self.translator self.layoutbuilder = framework.TransformerLayoutBuilder(translator) self.layoutbuilder.delay_encoding() - self.translator._jit2gc = {'layoutbuilder': self.layoutbuilder} - gcrootmap.add_jit2gc_hooks(self.translator._jit2gc) + translator._jit2gc = 
{'layoutbuilder': self.layoutbuilder} + self.gcrootmap.add_jit2gc_hooks(translator._jit2gc) + def _setup_gcclass(self): + from pypy.rpython.memory.gcheader import GCHeaderBuilder self.GCClass = self.layoutbuilder.GCClass self.moving_gc = self.GCClass.moving_gc self.HDRPTR = lltype.Ptr(self.GCClass.HDR) self.gcheaderbuilder = GCHeaderBuilder(self.HDRPTR.TO) - (self.array_basesize, _, self.array_length_ofs) = \ - symbolic.get_array_token(lltype.GcArray(lltype.Signed), True) self.max_size_of_young_obj = self.GCClass.JIT_max_size_of_young_obj() self.minimal_size_in_nursery=self.GCClass.JIT_minimal_size_in_nursery() @@ -645,87 +686,124 @@ assert self.GCClass.inline_simple_malloc assert self.GCClass.inline_simple_malloc_varsize - # make a malloc function, with two arguments - def malloc_basic(size, tid): - type_id = llop.extract_ushort(llgroup.HALFWORD, tid) - check_typeid(type_id) - res = llop1.do_malloc_fixedsize_clear(llmemory.GCREF, - type_id, size, - False, False, False) - # In case the operation above failed, we are returning NULL - # from this function to assembler. There is also an RPython - # exception set, typically MemoryError; but it's easier and - # faster to check for the NULL return value, as done by - # translator/exceptiontransform.py. 
- #llop.debug_print(lltype.Void, "\tmalloc_basic", size, type_id, - # "-->", res) - return res - self.malloc_basic = malloc_basic - self.GC_MALLOC_BASIC = lltype.Ptr(lltype.FuncType( - [lltype.Signed, lltype.Signed], llmemory.GCREF)) + def _setup_tid(self): + self.fielddescr_tid = get_field_descr(self, self.GCClass.HDR, 'tid') + + def _setup_write_barrier(self): self.WB_FUNCPTR = lltype.Ptr(lltype.FuncType( [llmemory.Address, llmemory.Address], lltype.Void)) self.WB_ARRAY_FUNCPTR = lltype.Ptr(lltype.FuncType( [llmemory.Address, lltype.Signed, llmemory.Address], lltype.Void)) self.write_barrier_descr = WriteBarrierDescr(self) - # + + def _make_functions(self, really_not_translated): + from pypy.rpython.memory.gctypelayout import check_typeid + llop1 = self.llop1 + (self.standard_array_basesize, _, self.standard_array_length_ofs) = \ + symbolic.get_array_token(lltype.GcArray(lltype.Signed), + not really_not_translated) + + def malloc_nursery_slowpath(size): + """Allocate 'size' null bytes out of the nursery. + Note that the fast path is typically inlined by the backend.""" + if self.DEBUG: + self._random_usage_of_xmm_registers() + type_id = rffi.cast(llgroup.HALFWORD, 0) # missing here + return llop1.do_malloc_fixedsize_clear(llmemory.GCREF, + type_id, size, + False, False, False) + self.generate_function('malloc_nursery', malloc_nursery_slowpath, + [lltype.Signed]) + def malloc_array(itemsize, tid, num_elem): + """Allocate an array with a variable-size num_elem. 
+ Only works for standard arrays.""" type_id = llop.extract_ushort(llgroup.HALFWORD, tid) check_typeid(type_id) return llop1.do_malloc_varsize_clear( llmemory.GCREF, - type_id, num_elem, self.array_basesize, itemsize, - self.array_length_ofs) - self.malloc_array = malloc_array - self.GC_MALLOC_ARRAY = lltype.Ptr(lltype.FuncType( - [lltype.Signed] * 3, llmemory.GCREF)) - # - (str_basesize, str_itemsize, str_ofs_length - ) = symbolic.get_array_token(rstr.STR, True) - (unicode_basesize, unicode_itemsize, unicode_ofs_length - ) = symbolic.get_array_token(rstr.UNICODE, True) - str_type_id = self.layoutbuilder.get_type_id(rstr.STR) - unicode_type_id = self.layoutbuilder.get_type_id(rstr.UNICODE) - # + type_id, num_elem, self.standard_array_basesize, itemsize, + self.standard_array_length_ofs) + self.generate_function('malloc_array', malloc_array, + [lltype.Signed] * 3) + + def malloc_array_nonstandard(basesize, itemsize, lengthofs, tid, + num_elem): + """For the rare case of non-standard arrays, i.e. arrays where + self.standard_array_{basesize,length_ofs} is wrong. It can + occur e.g. 
with arrays of floats on Win32.""" + type_id = llop.extract_ushort(llgroup.HALFWORD, tid) + check_typeid(type_id) + return llop1.do_malloc_varsize_clear( + llmemory.GCREF, + type_id, num_elem, basesize, itemsize, lengthofs) + self.generate_function('malloc_array_nonstandard', + malloc_array_nonstandard, + [lltype.Signed] * 5) + + str_type_id = self.str_descr.tid + str_basesize = self.str_descr.basesize + str_itemsize = self.str_descr.itemsize + str_ofs_length = self.str_descr.lendescr.offset + unicode_type_id = self.unicode_descr.tid + unicode_basesize = self.unicode_descr.basesize + unicode_itemsize = self.unicode_descr.itemsize + unicode_ofs_length = self.unicode_descr.lendescr.offset + def malloc_str(length): return llop1.do_malloc_varsize_clear( llmemory.GCREF, str_type_id, length, str_basesize, str_itemsize, str_ofs_length) + self.generate_function('malloc_str', malloc_str, + [lltype.Signed]) + def malloc_unicode(length): return llop1.do_malloc_varsize_clear( llmemory.GCREF, - unicode_type_id, length, unicode_basesize,unicode_itemsize, + unicode_type_id, length, unicode_basesize, unicode_itemsize, unicode_ofs_length) - self.malloc_str = malloc_str - self.malloc_unicode = malloc_unicode - self.GC_MALLOC_STR_UNICODE = lltype.Ptr(lltype.FuncType( - [lltype.Signed], llmemory.GCREF)) - # - class ForTestOnly: - pass - for_test_only = ForTestOnly() - for_test_only.x = 1.23 - def random_usage_of_xmm_registers(): - x0 = for_test_only.x - x1 = x0 * 0.1 - x2 = x0 * 0.2 - x3 = x0 * 0.3 - for_test_only.x = x0 + x1 + x2 + x3 - # - def malloc_slowpath(size): - if self.DEBUG: - random_usage_of_xmm_registers() - assert size >= self.minimal_size_in_nursery - # NB. although we call do_malloc_fixedsize_clear() here, - # it's a bit of a hack because we set tid to 0 and may - # also use it to allocate varsized objects. The tid - # and possibly the length are both set afterward. 
- gcref = llop1.do_malloc_fixedsize_clear(llmemory.GCREF, - 0, size, False, False, False) - return rffi.cast(lltype.Signed, gcref) - self.malloc_slowpath = malloc_slowpath - self.MALLOC_SLOWPATH = lltype.FuncType([lltype.Signed], lltype.Signed) + self.generate_function('malloc_unicode', malloc_unicode, + [lltype.Signed]) + + # Rarely called: allocate a fixed-size amount of bytes, but + # not in the nursery, because it is too big. Implemented like + # malloc_nursery_slowpath() above. + self.generate_function('malloc_fixedsize', malloc_nursery_slowpath, + [lltype.Signed]) + + def _bh_malloc(self, sizedescr): + from pypy.rpython.memory.gctypelayout import check_typeid + llop1 = self.llop1 + type_id = llop.extract_ushort(llgroup.HALFWORD, sizedescr.tid) + check_typeid(type_id) + return llop1.do_malloc_fixedsize_clear(llmemory.GCREF, + type_id, sizedescr.size, + False, False, False) + + def _bh_malloc_array(self, arraydescr, num_elem): + from pypy.rpython.memory.gctypelayout import check_typeid + llop1 = self.llop1 + type_id = llop.extract_ushort(llgroup.HALFWORD, arraydescr.tid) + check_typeid(type_id) + return llop1.do_malloc_varsize_clear(llmemory.GCREF, + type_id, num_elem, + arraydescr.basesize, + arraydescr.itemsize, + arraydescr.lendescr.offset) + + + class ForTestOnly: + pass + for_test_only = ForTestOnly() + for_test_only.x = 1.23 + + def _random_usage_of_xmm_registers(self): + x0 = self.for_test_only.x + x1 = x0 * 0.1 + x2 = x0 * 0.2 + x3 = x0 * 0.3 + self.for_test_only.x = x0 + x1 + x2 + x3 def get_nursery_free_addr(self): nurs_addr = llop.gc_adr_of_nursery_free(llmemory.Address) @@ -735,49 +813,26 @@ nurs_top_addr = llop.gc_adr_of_nursery_top(llmemory.Address) return rffi.cast(lltype.Signed, nurs_top_addr) - def get_malloc_slowpath_addr(self): - fptr = llhelper(lltype.Ptr(self.MALLOC_SLOWPATH), self.malloc_slowpath) - return rffi.cast(lltype.Signed, fptr) - def initialize(self): self.gcrootmap.initialize() def init_size_descr(self, S, descr): - type_id = 
self.layoutbuilder.get_type_id(S) - assert not self.layoutbuilder.is_weakref_type(S) - assert not self.layoutbuilder.has_finalizer(S) - descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) + if self.layoutbuilder is not None: + type_id = self.layoutbuilder.get_type_id(S) + assert not self.layoutbuilder.is_weakref_type(S) + assert not self.layoutbuilder.has_finalizer(S) + descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) def init_array_descr(self, A, descr): - type_id = self.layoutbuilder.get_type_id(A) - descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) + if self.layoutbuilder is not None: + type_id = self.layoutbuilder.get_type_id(A) + descr.tid = llop.combine_ushort(lltype.Signed, type_id, 0) - def gc_malloc(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return self.malloc_basic(sizedescr.size, sizedescr.tid) - - def gc_malloc_array(self, arraydescr, num_elem): - assert isinstance(arraydescr, BaseArrayDescr) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return self.malloc_array(itemsize, arraydescr.tid, num_elem) - - def gc_malloc_str(self, num_elem): - return self.malloc_str(num_elem) - - def gc_malloc_unicode(self, num_elem): - return self.malloc_unicode(num_elem) - - def args_for_new(self, sizedescr): - assert isinstance(sizedescr, BaseSizeDescr) - return [sizedescr.size, sizedescr.tid] - - def args_for_new_array(self, arraydescr): - assert isinstance(arraydescr, BaseArrayDescr) - itemsize = arraydescr.get_item_size(self.translate_support_code) - return [itemsize, arraydescr.tid] - - def get_funcptr_for_new(self): - return llhelper(self.GC_MALLOC_BASIC, self.malloc_basic) + def _set_tid(self, gcptr, tid): + hdr_addr = llmemory.cast_ptr_to_adr(gcptr) + hdr_addr -= self.gcheaderbuilder.size_gc_header + hdr = llmemory.cast_adr_to_ptr(hdr_addr, self.HDRPTR) + hdr.tid = tid def do_write_barrier(self, gcref_struct, gcref_newptr): hdr_addr = llmemory.cast_ptr_to_adr(gcref_struct) @@ -791,99 +846,8 @@ 
funcptr(llmemory.cast_ptr_to_adr(gcref_struct), llmemory.cast_ptr_to_adr(gcref_newptr)) - def rewrite_assembler(self, cpu, operations, gcrefs_output_list): - # Perform two kinds of rewrites in parallel: - # - # - Add COND_CALLs to the write barrier before SETFIELD_GC and - # SETARRAYITEM_GC operations. - # - # - Record the ConstPtrs from the assembler. - # - newops = [] - known_lengths = {} - # we can only remember one malloc since the next malloc can possibly - # collect - last_malloc = None - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - continue - # ---------- record the ConstPtrs ---------- - self.record_constptrs(op, gcrefs_output_list) - if op.is_malloc(): - last_malloc = op.result - elif op.can_malloc(): - last_malloc = None - # ---------- write barrier for SETFIELD_GC ---------- - if op.getopnum() == rop.SETFIELD_GC: - val = op.getarg(0) - # no need for a write barrier in the case of previous malloc - if val is not last_malloc: - v = op.getarg(1) - if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and - bool(v.value)): # store a non-NULL - self._gen_write_barrier(newops, op.getarg(0), v) - op = op.copy_and_change(rop.SETFIELD_RAW) - # ---------- write barrier for SETARRAYITEM_GC ---------- - if op.getopnum() == rop.SETARRAYITEM_GC: - val = op.getarg(0) - # no need for a write barrier in the case of previous malloc - if val is not last_malloc: - v = op.getarg(2) - if isinstance(v, BoxPtr) or (isinstance(v, ConstPtr) and - bool(v.value)): # store a non-NULL - self._gen_write_barrier_array(newops, op.getarg(0), - op.getarg(1), v, - cpu, known_lengths) - op = op.copy_and_change(rop.SETARRAYITEM_RAW) - elif op.getopnum() == rop.NEW_ARRAY: - v_length = op.getarg(0) - if isinstance(v_length, ConstInt): - known_lengths[op.result] = v_length.getint() - # ---------- - newops.append(op) - return newops - - def _gen_write_barrier(self, newops, v_base, v_value): - args = [v_base, v_value] - newops.append(ResOperation(rop.COND_CALL_GC_WB, args, 
None, - descr=self.write_barrier_descr)) - - def _gen_write_barrier_array(self, newops, v_base, v_index, v_value, - cpu, known_lengths): - if self.write_barrier_descr.get_write_barrier_from_array_fn(cpu) != 0: - # If we know statically the length of 'v', and it is not too - # big, then produce a regular write_barrier. If it's unknown or - # too big, produce instead a write_barrier_from_array. - LARGE = 130 - length = known_lengths.get(v_base, LARGE) - if length >= LARGE: - # unknown or too big: produce a write_barrier_from_array - args = [v_base, v_index, v_value] - newops.append(ResOperation(rop.COND_CALL_GC_WB_ARRAY, args, - None, - descr=self.write_barrier_descr)) - return - # fall-back case: produce a write_barrier - self._gen_write_barrier(newops, v_base, v_value) - - def can_inline_malloc(self, descr): - assert isinstance(descr, BaseSizeDescr) - if descr.size < self.max_size_of_young_obj: - has_finalizer = bool(descr.tid & (1<= len(self.used): + self.used.append(False) + if self.used[index + i]: + return False # already in use + # good, we can reuse the location + for i in range(size): + self.used[index + i] = True + self.bindings[box] = loc + return True + # abstract methods that need to be overwritten for specific assemblers @staticmethod def frame_pos(loc, type): @@ -49,6 +123,10 @@ @staticmethod def frame_size(type): return 1 + @staticmethod + def get_loc_index(loc): + raise NotImplementedError("Purely abstract") + class RegisterManager(object): """ Class that keeps track of register allocations @@ -68,7 +146,14 @@ self.frame_manager = frame_manager self.assembler = assembler + def is_still_alive(self, v): + # Check if 'v' is alive at the current position. + # Return False if the last usage is strictly before. + return self.longevity[v][1] >= self.position + def stays_alive(self, v): + # Check if 'v' stays alive after the current position. + # Return False if the last usage is before or at position. 
return self.longevity[v][1] > self.position def next_instruction(self, incr=1): @@ -84,11 +169,14 @@ point for all variables that might be in registers. """ self._check_type(v) - if isinstance(v, Const) or v not in self.reg_bindings: + if isinstance(v, Const): return if v not in self.longevity or self.longevity[v][1] <= self.position: - self.free_regs.append(self.reg_bindings[v]) - del self.reg_bindings[v] + if v in self.reg_bindings: + self.free_regs.append(self.reg_bindings[v]) + del self.reg_bindings[v] + if self.frame_manager is not None: + self.frame_manager.mark_as_free(v) def possibly_free_vars(self, vars): """ Same as 'possibly_free_var', but for all v in vars. diff --git a/pypy/jit/backend/llsupport/rewrite.py b/pypy/jit/backend/llsupport/rewrite.py new file mode 100644 --- /dev/null +++ b/pypy/jit/backend/llsupport/rewrite.py @@ -0,0 +1,328 @@ +import sys +from pypy.rlib.rarithmetic import ovfcheck +from pypy.jit.metainterp.history import ConstInt, BoxPtr, ConstPtr +from pypy.jit.metainterp.resoperation import ResOperation, rop +from pypy.jit.codewriter import heaptracker +from pypy.jit.backend.llsupport.symbolic import WORD +from pypy.jit.backend.llsupport.descr import SizeDescr, ArrayDescr + + +class GcRewriterAssembler(object): + # This class performs the following rewrites on the list of operations: + # + # - Remove the DEBUG_MERGE_POINTs. + # + # - Turn all NEW_xxx to either a CALL_MALLOC_GC, or a CALL_MALLOC_NURSERY + # followed by SETFIELDs in order to initialize their GC fields. The + # two advantages of CALL_MALLOC_NURSERY is that it inlines the common + # path, and we need only one such operation to allocate several blocks + # of memory at once. + # + # - Add COND_CALLs to the write barrier before SETFIELD_GC and + # SETARRAYITEM_GC operations. 
+
+    _previous_size = -1
+    _op_malloc_nursery = None
+    _v_last_malloced_nursery = None
+    c_zero = ConstInt(0)
+
+    def __init__(self, gc_ll_descr, cpu):
+        self.gc_ll_descr = gc_ll_descr
+        self.cpu = cpu
+        self.newops = []
+        self.known_lengths = {}
+        self.recent_mallocs = {}     # set of variables
+
+    def rewrite(self, operations):
+        # we can only remember one malloc since the next malloc can possibly
+        # collect; but we can try to collapse several known-size mallocs into
+        # one, both for performance and to reduce the number of write
+        # barriers.  We do this on each "basic block" of operations, which in
+        # this case means between CALLs or unknown-size mallocs.
+        #
+        for op in operations:
+            if op.getopnum() == rop.DEBUG_MERGE_POINT:
+                continue
+            # ---------- turn NEWxxx into CALL_MALLOC_xxx ----------
+            if op.is_malloc():
+                self.handle_malloc_operation(op)
+                continue

From noreply at buildbot.pypy.org  Mon Jan 16 17:53:42 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Mon, 16 Jan 2012 17:53:42 +0100 (CET)
Subject: [pypy-commit] pypy stm: (arigo, antocuni) fix test which failed
 after the renaming of stm_become_inevitable
Message-ID: <20120116165342.AA6E3820D8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: stm
Changeset: r51353:c3ebb4f01089
Date: 2012-01-16 16:33 +0100
http://bitbucket.org/pypy/pypy/changeset/c3ebb4f01089/

Log:    (arigo, antocuni) fix test which failed after the renaming of
        stm_become_inevitable

diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py
--- a/pypy/translator/stm/test/test_transform.py
+++ b/pypy/translator/stm/test/test_transform.py
@@ -150,7 +150,7 @@
         return p.x
     interp, graph = get_interpreter(func, [p])
     transform_graph(graph)
-    assert summary(graph) == {'stm_try_inevitable': 1, 'getfield': 1}
+    assert summary(graph) == {'stm_become_inevitable': 1, 'getfield': 1}
     res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction",
                          final_stm_mode="inevitable_transaction")
     assert res == 42
@@ -163,7 +163,7 @@
         p.x = 43
     interp, graph = get_interpreter(func, [p])
     transform_graph(graph)
-    assert summary(graph) == {'stm_try_inevitable': 1, 'setfield': 1}
+    assert summary(graph) == {'stm_become_inevitable': 1, 'setfield': 1}
     eval_stm_graph(interp, graph, [p],
                    stm_mode="regular_transaction",
                    final_stm_mode="inevitable_transaction")
@@ -175,7 +175,7 @@
         return p[3]
     interp, graph = get_interpreter(func, [p])
     transform_graph(graph)
-    assert summary(graph) == {'stm_try_inevitable': 1, 'getarrayitem': 1}
+    assert summary(graph) == {'stm_become_inevitable': 1, 'getarrayitem': 1}
     res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction",
                          final_stm_mode="inevitable_transaction")
     assert res == 42

From noreply at buildbot.pypy.org  Mon Jan 16 17:53:43 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Mon, 16 Jan 2012 17:53:43 +0100 (CET)
Subject: [pypy-commit] pypy stm: hg merge
Message-ID: <20120116165343.CA6F0820D8@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm
Changeset: r51354:a14e0f148aad
Date: 2012-01-16 17:53 +0100
http://bitbucket.org/pypy/pypy/changeset/a14e0f148aad/

Log:    hg merge

diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py
--- a/pypy/translator/stm/test/test_transform.py
+++ b/pypy/translator/stm/test/test_transform.py
@@ -150,7 +150,7 @@
         return p.x
     interp, graph = get_interpreter(func, [p])
     transform_graph(graph)
-    assert summary(graph) == {'stm_try_inevitable': 1, 'getfield': 1}
+    assert summary(graph) == {'stm_become_inevitable': 1, 'getfield': 1}
     res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction",
                          final_stm_mode="inevitable_transaction")
     assert res == 42
@@ -163,7 +163,7 @@
         p.x = 43
     interp, graph = get_interpreter(func, [p])
     transform_graph(graph)
-    assert summary(graph) == {'stm_try_inevitable': 1, 'setfield': 1}
+    assert summary(graph) == {'stm_become_inevitable': 1, 'setfield': 1}
     eval_stm_graph(interp, graph, [p],
                   stm_mode="regular_transaction",
                   final_stm_mode="inevitable_transaction")
@@ -175,7 +175,7 @@
         return p[3]
     interp, graph = get_interpreter(func, [p])
     transform_graph(graph)
-    assert summary(graph) == {'stm_try_inevitable': 1, 'getarrayitem': 1}
+    assert summary(graph) == {'stm_become_inevitable': 1, 'getarrayitem': 1}
     res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction",
                          final_stm_mode="inevitable_transaction")
     assert res == 42

From noreply at buildbot.pypy.org  Mon Jan 16 18:15:39 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Mon, 16 Jan 2012 18:15:39 +0100 (CET)
Subject: [pypy-commit] pypy stm: (arigo, antocuni): start to write the
 RPython level interface for using transactions in rlib/rstm.py. Move the
 compiled tests from translator/stm/test_transform.py to
 rlib/test/test_rstm.py
Message-ID: <20120116171539.5FE89820D8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: stm
Changeset: r51355:1aa99f2f035f
Date: 2012-01-16 18:06 +0100
http://bitbucket.org/pypy/pypy/changeset/1aa99f2f035f/

Log:    (arigo, antocuni): start to write the RPython level interface for
        using transactions in rlib/rstm.py. Move the compiled tests from
        translator/stm/test_transform.py to rlib/test/test_rstm.py

diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py
new file mode 100644
--- /dev/null
+++ b/pypy/rlib/rstm.py
@@ -0,0 +1,38 @@
+from pypy.rlib.objectmodel import specialize, we_are_translated, keepalive_until_here
+from pypy.rpython.lltypesystem import rffi, lltype
+from pypy.rpython.annlowlevel import (cast_base_ptr_to_instance,
+                                      cast_instance_to_base_ptr,
+                                      llhelper)
+from pypy.translator.stm import _rffi_stm
+
+@specialize.memo()
+def _get_stm_callback(func, argcls):
+    def _stm_callback(llarg):
+        if we_are_translated():
+            llarg = rffi.cast(rclass.OBJECTPTR, llarg)
+            arg = cast_base_ptr_to_instance(argcls, llarg)
+        else:
+            arg = lltype.TLS.stm_callback_arg
+        res = func(arg)
+        assert res is None
+        return lltype.nullptr(rffi.VOIDP.TO)
+    return _stm_callback
+
+@specialize.arg(0, 1)
+def stm_perform_transaction(func, argcls, arg):
+    assert isinstance(arg, argcls)
+    assert argcls._alloc_nonmovable_
+    if we_are_translated():
+        llarg = cast_instance_to_base_ptr(arg)
+        llarg = rffi.cast(rffi.VOIDP, llarg)
+    else:
+        # only for tests
+        lltype.TLS.stm_callback_arg = arg
+        llarg = lltype.nullptr(rffi.VOIDP.TO)
+    callback = _get_stm_callback(func, argcls)
+    llcallback = llhelper(_rffi_stm.CALLBACK, callback)
+    _rffi_stm.perform_transaction(llcallback, llarg)
+    keepalive_until_here(arg)
+
+stm_descriptor_init = _rffi_stm.descriptor_init
+stm_descriptor_done = _rffi_stm.descriptor_done
diff --git a/pypy/rlib/test/test_rstm.py b/pypy/rlib/test/test_rstm.py
new file mode 100644
--- /dev/null
+++ b/pypy/rlib/test/test_rstm.py
@@ -0,0 +1,51 @@
+from pypy.rlib.debug import debug_print
+from pypy.rlib import rstm
+from pypy.translator.c.test.test_standalone import StandaloneTests
+
+def test_stm_perform_transaction():
+    class Arg(object):
+        _alloc_nonmovable_ = True
+
+    def setx(arg):
+        arg.x = 42
+
+    arg = Arg()
+    rstm.stm_descriptor_init()
+    rstm.stm_perform_transaction(setx, Arg, arg)
rstm.stm_descriptor_done() + assert arg.x == 42 + + +class CompiledSTMTests(StandaloneTests): + gc = "none" + + def compile(self, entry_point): + from pypy.config.pypyoption import get_pypy_config + self.config = get_pypy_config(translating=True) + self.config.translation.stm = True + self.config.translation.gc = self.gc + # + # Prevent the RaiseAnalyzer from just emitting "WARNING: Unknown + # operation". We want instead it to crash. + from pypy.translator.backendopt.canraise import RaiseAnalyzer + RaiseAnalyzer.fail_on_unknown_operation = True + try: + res = StandaloneTests.compile(self, entry_point, debug=True) + finally: + RaiseAnalyzer.fail_on_unknown_operation = False + return res + + +class TestTransformSingleThread(CompiledSTMTests): + + def test_no_pointer_operations(self): + def simplefunc(argv): + i = 0 + while i < 100: + i += 3 + debug_print(i) + return 0 + t, cbuilder = self.compile(simplefunc) + dataout, dataerr = cbuilder.cmdexec('', err=True) + assert dataout == '' + assert '102' in dataerr.splitlines() diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -676,10 +676,8 @@ void* stm_perform_transaction(void*(*callback)(void*), void *arg) { void *result; -#ifdef RPY_STM_ASSERT /* you need to call descriptor_init() before calling stm_perform_transaction */ assert(thread_descriptor != NULL); -#endif STM_begin_transaction(); result = callback(arg); stm_commit_transaction(); diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py --- a/pypy/translator/stm/test/test_transform.py +++ b/pypy/translator/stm/test/test_transform.py @@ -3,9 +3,6 @@ from pypy.objspace.flow.model import summary from pypy.translator.stm.llstminterp import eval_stm_graph from pypy.translator.stm.transform import transform_graph -##from pypy.translator.stm import rstm -from pypy.translator.c.test.test_standalone import 
StandaloneTests -from pypy.rlib.debug import debug_print from pypy.conftest import option @@ -179,96 +176,3 @@ res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction", final_stm_mode="inevitable_transaction") assert res == 42 - -# ____________________________________________________________ - -class CompiledSTMTests(StandaloneTests): - gc = "none" - - def compile(self, entry_point): - from pypy.config.pypyoption import get_pypy_config - self.config = get_pypy_config(translating=True) - self.config.translation.stm = True - self.config.translation.gc = self.gc - # - # Prevent the RaiseAnalyzer from just emitting "WARNING: Unknown - # operation". We want instead it to crash. - from pypy.translator.backendopt.canraise import RaiseAnalyzer - RaiseAnalyzer.fail_on_unknown_operation = True - try: - res = StandaloneTests.compile(self, entry_point, debug=True) - finally: - RaiseAnalyzer.fail_on_unknown_operation = False - return res - - -class TestTransformSingleThread(CompiledSTMTests): - - def test_no_pointer_operations(self): - def simplefunc(argv): - i = 0 - while i < 100: - i += 3 - debug_print(i) - return 0 - t, cbuilder = self.compile(simplefunc) - dataout, dataerr = cbuilder.cmdexec('', err=True) - assert dataout == '' - assert '102' in dataerr.splitlines() - - def test_fails_when_nonbalanced_begin(self): - def simplefunc(argv): - rstm.begin_transaction() - return 0 - t, cbuilder = self.compile(simplefunc) - cbuilder.cmdexec('', expect_crash=True) - - def test_fails_when_nonbalanced_commit(self): - def simplefunc(argv): - rstm.commit_transaction() - rstm.commit_transaction() - return 0 - t, cbuilder = self.compile(simplefunc) - cbuilder.cmdexec('', expect_crash=True) - - def test_begin_inevitable_transaction(self): - def simplefunc(argv): - rstm.commit_transaction() - rstm.begin_inevitable_transaction() - return 0 - t, cbuilder = self.compile(simplefunc) - cbuilder.cmdexec('') - - def test_transaction_boundary_1(self): - def simplefunc(argv): - 
rstm.transaction_boundary() - return 0 - t, cbuilder = self.compile(simplefunc) - cbuilder.cmdexec('') - - def test_transaction_boundary_2(self): - def simplefunc(argv): - rstm.transaction_boundary() - rstm.transaction_boundary() - rstm.transaction_boundary() - return 0 - t, cbuilder = self.compile(simplefunc) - cbuilder.cmdexec('') - - def test_transaction_boundary_3(self): - def simplefunc(argv): - s1 = argv[0] - debug_print('STEP1:', len(s1)) - rstm.transaction_boundary() - rstm.transaction_boundary() - rstm.transaction_boundary() - debug_print('STEP2:', len(s1)) - return 0 - t, cbuilder = self.compile(simplefunc) - data, err = cbuilder.cmdexec('', err=True) - lines = err.splitlines() - steps = [(line[:6], line[6:]) - for line in lines if line.startswith('STEP')] - steps = zip(*steps) - assert steps[0] == ('STEP1:', 'STEP2:') - assert steps[1][0] == steps[1][1] From noreply at buildbot.pypy.org Mon Jan 16 18:15:40 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 16 Jan 2012 18:15:40 +0100 (CET) Subject: [pypy-commit] pypy stm: (arigo, antocuni): rename _rffi_stm.* to _rffi_stm.stm_* and rstm.stm_* to rstm.* Message-ID: <20120116171540.926FE820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: stm Changeset: r51356:f19ffa0b7bb3 Date: 2012-01-16 18:14 +0100 http://bitbucket.org/pypy/pypy/changeset/f19ffa0b7bb3/ Log: (arigo, antocuni): rename _rffi_stm.* to _rffi_stm.stm_* and rstm.stm_* to rstm.* diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -19,7 +19,7 @@ return _stm_callback @specialize.arg(0, 1) -def stm_perform_transaction(func, argcls, arg): +def perform_transaction(func, argcls, arg): assert isinstance(arg, argcls) assert argcls._alloc_nonmovable_ if we_are_translated(): @@ -31,8 +31,8 @@ llarg = lltype.nullptr(rffi.VOIDP.TO) callback = _get_stm_callback(func, argcls) llcallback = llhelper(_rffi_stm.CALLBACK, callback) - _rffi_stm.perform_transaction(llcallback, llarg) + 
_rffi_stm.stm_perform_transaction(llcallback, llarg) keepalive_until_here(arg) -stm_descriptor_init = _rffi_stm.descriptor_init -stm_descriptor_done = _rffi_stm.descriptor_done +descriptor_init = _rffi_stm.stm_descriptor_init +descriptor_done = _rffi_stm.stm_descriptor_done diff --git a/pypy/rlib/test/test_rstm.py b/pypy/rlib/test/test_rstm.py --- a/pypy/rlib/test/test_rstm.py +++ b/pypy/rlib/test/test_rstm.py @@ -10,9 +10,9 @@ arg.x = 42 arg = Arg() - rstm.stm_descriptor_init() - rstm.stm_perform_transaction(setx, Arg, arg) - rstm.stm_descriptor_done() + rstm.descriptor_init() + rstm.perform_transaction(setx, Arg, arg) + rstm.descriptor_done() assert arg.x == 42 diff --git a/pypy/translator/stm/_rffi_stm.py b/pypy/translator/stm/_rffi_stm.py --- a/pypy/translator/stm/_rffi_stm.py +++ b/pypy/translator/stm/_rffi_stm.py @@ -23,14 +23,14 @@ SignedP = rffi.CArrayPtr(lltype.Signed) -descriptor_init = llexternal('stm_descriptor_init', [], lltype.Void) -descriptor_done = llexternal('stm_descriptor_done', [], lltype.Void) +stm_descriptor_init = llexternal('stm_descriptor_init', [], lltype.Void) +stm_descriptor_done = llexternal('stm_descriptor_done', [], lltype.Void) ##begin_transaction = llexternal('STM_begin_transaction', [], lltype.Void) ##begin_inevitable_transaction = llexternal('stm_begin_inevitable_transaction', ## [], lltype.Void) ##commit_transaction = llexternal('stm_commit_transaction', [], lltype.Signed) -try_inevitable = llexternal('stm_try_inevitable', [], lltype.Void) +stm_try_inevitable = llexternal('stm_try_inevitable', [], lltype.Void) ##descriptor_init_and_being_inevitable_transaction = llexternal( ## 'stm_descriptor_init_and_being_inevitable_transaction', [], lltype.Void) @@ -42,7 +42,7 @@ lltype.Void) CALLBACK = lltype.Ptr(lltype.FuncType([rffi.VOIDP], rffi.VOIDP)) -perform_transaction = llexternal('stm_perform_transaction', - [CALLBACK, rffi.VOIDP], rffi.VOIDP) +stm_perform_transaction = llexternal('stm_perform_transaction', + [CALLBACK, rffi.VOIDP], 
rffi.VOIDP) -abort_and_retry = llexternal('stm_abort_and_retry', [], lltype.Void) +stm_abort_and_retry = llexternal('stm_abort_and_retry', [], lltype.Void) diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -1,9 +1,9 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rarithmetic import r_longlong, r_singlefloat -from pypy.translator.stm.test.test_transform import CompiledSTMTests +from pypy.rlib.test.test_rstm import CompiledSTMTests #from pypy.rlib import rstm -from pypy.translator.stm._rffi_stm import (CALLBACK, perform_transaction, - descriptor_init, descriptor_done) +from pypy.translator.stm._rffi_stm import (CALLBACK, stm_perform_transaction, + stm_descriptor_init, stm_descriptor_done) from pypy.translator.c.test.test_standalone import StandaloneTests from pypy.rlib.debug import debug_print from pypy.rpython.annlowlevel import llhelper @@ -251,9 +251,9 @@ def test_getfield_all_sizes_inside_transaction(self): def do_stm_getfield(argv): callback = llhelper(CALLBACK, _play_with_getfield) - descriptor_init() - perform_transaction(callback, NULL) - descriptor_done() + stm_descriptor_init() + stm_perform_transaction(callback, NULL) + stm_descriptor_done() return 0 t, cbuilder = self.compile(do_stm_getfield) cbuilder.cmdexec('') @@ -269,11 +269,11 @@ def do_stm_setfield(argv): callback1 = llhelper(CALLBACK, _play_with_setfields) callback2 = llhelper(CALLBACK, _check_values_of_fields) - descriptor_init() - perform_transaction(callback1, NULL) + stm_descriptor_init() + stm_perform_transaction(callback1, NULL) # read values which aren't local to the transaction - perform_transaction(callback2, NULL) - descriptor_done() + stm_perform_transaction(callback2, NULL) + stm_descriptor_done() return 0 t, cbuilder = self.compile(do_stm_setfield) cbuilder.cmdexec('') @@ -288,9 +288,9 @@ def 
test_getarrayitem_all_sizes_inside_transaction(self): def do_stm_getarrayitem(argv): callback = llhelper(CALLBACK, _play_with_getarrayitem) - descriptor_init() - perform_transaction(callback, NULL) - descriptor_done() + stm_descriptor_init() + stm_perform_transaction(callback, NULL) + stm_descriptor_done() return 0 t, cbuilder = self.compile(do_stm_getarrayitem) cbuilder.cmdexec('') @@ -311,11 +311,11 @@ callback2 = llhelper(CALLBACK, _play_with_setarrayitem_2) callback3 = llhelper(CALLBACK, _play_with_setarrayitem_3) # - descriptor_init() - perform_transaction(callback1, NULL) - perform_transaction(callback2, NULL) - perform_transaction(callback3, NULL) - descriptor_done() + stm_descriptor_init() + stm_perform_transaction(callback1, NULL) + stm_perform_transaction(callback2, NULL) + stm_perform_transaction(callback3, NULL) + stm_descriptor_done() return 0 t, cbuilder = self.compile(do_stm_setarrayitem) cbuilder.cmdexec('') @@ -330,9 +330,9 @@ def test_getinteriorfield_all_sizes_inside_transaction(self): def do_stm_getinteriorfield(argv): callback = llhelper(CALLBACK, _play_with_getinteriorfield) - descriptor_init() - perform_transaction(callback, NULL) - descriptor_done() + stm_descriptor_init() + stm_perform_transaction(callback, NULL) + stm_descriptor_done() return 0 t, cbuilder = self.compile(do_stm_getinteriorfield) cbuilder.cmdexec('') @@ -351,10 +351,10 @@ callback1 = llhelper(CALLBACK, _play_with_setinteriorfield_1) callback2 = llhelper(CALLBACK, _play_with_setinteriorfield_2) # - descriptor_init() - perform_transaction(callback1, NULL) - perform_transaction(callback2, NULL) - descriptor_done() + stm_descriptor_init() + stm_perform_transaction(callback1, NULL) + stm_perform_transaction(callback2, NULL) + stm_descriptor_done() return 0 t, cbuilder = self.compile(do_stm_setinteriorfield) cbuilder.cmdexec('') diff --git a/pypy/translator/stm/test/test_llstm.py b/pypy/translator/stm/test/test_llstm.py --- a/pypy/translator/stm/test/test_llstm.py +++ 
b/pypy/translator/stm/test/test_llstm.py @@ -44,7 +44,7 @@ assert a.x == -611 # xxx still the old value when reading non-transact. if a.y < 10: a.y += 1 # non-transactionally - abort_and_retry() + stm_abort_and_retry() return lltype.nullptr(rffi.VOIDP.TO) def test_stm_getfield(): @@ -58,10 +58,10 @@ a.f = rf1 a.sa = rs1a a.sb = rs1b - descriptor_init() - perform_transaction(llhelper(CALLBACK, callback1), + stm_descriptor_init() + stm_perform_transaction(llhelper(CALLBACK, callback1), rffi.cast(rffi.VOIDP, a)) - descriptor_done() + stm_descriptor_done() assert a.x == 420 assert a.c1 == '/' assert a.c2 == '\\' @@ -117,7 +117,7 @@ assert float(a.sb) == float(rs1b) if a.y < 10: a.y += 1 # non-transactionally - abort_and_retry() + stm_abort_and_retry() return lltype.nullptr(rffi.VOIDP.TO) def test_stm_setfield(): @@ -131,10 +131,10 @@ a.f = rf1 a.sa = rs1a a.sb = rs1b - descriptor_init() - perform_transaction(llhelper(CALLBACK, callback2), + stm_descriptor_init() + stm_perform_transaction(llhelper(CALLBACK, callback2), rffi.cast(rffi.VOIDP, a)) - descriptor_done() + stm_descriptor_done() assert a.x == 420 assert a.c1 == '(' assert a.c2 == '?' 
diff --git a/pypy/translator/stm/test/test_rffi_stm.py b/pypy/translator/stm/test/test_rffi_stm.py --- a/pypy/translator/stm/test/test_rffi_stm.py +++ b/pypy/translator/stm/test/test_rffi_stm.py @@ -2,35 +2,35 @@ from pypy.rpython.annlowlevel import llhelper def test_descriptor(): - descriptor_init() - descriptor_done() + stm_descriptor_init() + stm_descriptor_done() -def test_perform_transaction(): +def test_stm_perform_transaction(): def callback1(x): return lltype.nullptr(rffi.VOIDP.TO) - descriptor_init() - perform_transaction(llhelper(CALLBACK, callback1), + stm_descriptor_init() + stm_perform_transaction(llhelper(CALLBACK, callback1), lltype.nullptr(rffi.VOIDP.TO)) - descriptor_done() + stm_descriptor_done() -def test_abort_and_retry(): +def test_stm_abort_and_retry(): A = lltype.Struct('A', ('x', lltype.Signed), ('y', lltype.Signed)) a = lltype.malloc(A, immortal=True, flavor='raw') a.y = 0 def callback1(x): if a.y < 10: a.y += 1 # non-transactionally - abort_and_retry() + stm_abort_and_retry() else: a.x = 42 * a.y return lltype.nullptr(rffi.VOIDP.TO) - descriptor_init() - perform_transaction(llhelper(CALLBACK, callback1), + stm_descriptor_init() + stm_perform_transaction(llhelper(CALLBACK, callback1), lltype.nullptr(rffi.VOIDP.TO)) - descriptor_done() + stm_descriptor_done() assert a.x == 420 -def test_abort_and_retry_transactionally(): +def test_stm_abort_and_retry_transactionally(): A = lltype.Struct('A', ('x', lltype.Signed), ('y', lltype.Signed)) a = lltype.malloc(A, immortal=True, flavor='raw') a.x = -611 @@ -45,11 +45,11 @@ assert a.x == -611 # xxx still the old value when reading non-transact. 
if a.y < 10: a.y += 1 # non-transactionally - abort_and_retry() + stm_abort_and_retry() else: return lltype.nullptr(rffi.VOIDP.TO) - descriptor_init() - perform_transaction(llhelper(CALLBACK, callback1), + stm_descriptor_init() + stm_perform_transaction(llhelper(CALLBACK, callback1), lltype.nullptr(rffi.VOIDP.TO)) - descriptor_done() + stm_descriptor_done() assert a.x == 420 diff --git a/pypy/translator/stm/test/test_ztranslated.py b/pypy/translator/stm/test/test_ztranslated.py --- a/pypy/translator/stm/test/test_ztranslated.py +++ b/pypy/translator/stm/test/test_ztranslated.py @@ -1,5 +1,5 @@ import py -from pypy.translator.stm.test.test_transform import CompiledSTMTests +from pypy.rlib.test.test_rstm import CompiledSTMTests from pypy.translator.stm.test import targetdemo From noreply at buildbot.pypy.org Mon Jan 16 18:16:24 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 16 Jan 2012 18:16:24 +0100 (CET) Subject: [pypy-commit] pypy default: allow inlining into _codecs, makes simple decoding ~3x faster Message-ID: <20120116171624.357B0820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51357:af6b237eaecf Date: 2012-01-16 18:15 +0100 http://bitbucket.org/pypy/pypy/changeset/af6b237eaecf/ Log: allow inlining into _codecs, makes simple decoding ~3x faster diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -127,7 +127,7 @@ 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', '_codecs']: if modname == 'pypyjit' and 'interp_resop' in rest: return False return True From noreply at buildbot.pypy.org Mon Jan 16 18:16:25 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Mon, 16 Jan 2012 18:16:25 +0100 (CET) Subject: [pypy-commit] pypy default: merged upstream Message-ID: 
<20120116171625.5B8DD820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51358:8688ce42472e Date: 2012-01-16 18:16 +0100 http://bitbucket.org/pypy/pypy/changeset/8688ce42472e/ Log: merged upstream diff --git a/pypy/rpython/lltypesystem/rclass.py b/pypy/rpython/lltypesystem/rclass.py --- a/pypy/rpython/lltypesystem/rclass.py +++ b/pypy/rpython/lltypesystem/rclass.py @@ -510,7 +510,13 @@ ctype = inputconst(Void, self.object_type) cflags = inputconst(Void, flags) vlist = [ctype, cflags] - vptr = llops.genop('malloc', vlist, + cnonmovable = self.classdef.classdesc.read_attribute( + '_alloc_nonmovable_', Constant(False)) + if cnonmovable.value: + opname = 'malloc_nonmovable' + else: + opname = 'malloc' + vptr = llops.genop(opname, vlist, resulttype = Ptr(self.object_type)) ctypeptr = inputconst(CLASSTYPE, self.rclass.getvtable()) self.setfield(vptr, '__class__', ctypeptr, llops) diff --git a/pypy/rpython/test/test_rclass.py b/pypy/rpython/test/test_rclass.py --- a/pypy/rpython/test/test_rclass.py +++ b/pypy/rpython/test/test_rclass.py @@ -1130,6 +1130,18 @@ assert sorted([u]) == [6] # 32-bit types assert sorted([i, r, d, l]) == [2, 3, 4, 5] # 64-bit types + def test_nonmovable(self): + for (nonmovable, opname) in [(True, 'malloc_nonmovable'), + (False, 'malloc')]: + class A(object): + _alloc_nonmovable_ = nonmovable + def f(): + return A() + t, typer, graph = self.gengraph(f, []) + assert summary(graph) == {opname: 1, + 'cast_pointer': 1, + 'setfield': 1} + class TestOOtype(BaseTestRclass, OORtypeMixin): From noreply at buildbot.pypy.org Mon Jan 16 18:19:04 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 18:19:04 +0100 (CET) Subject: [pypy-commit] pypy stm: (antocuni, arigo) Message-ID: <20120116171904.34B72820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51359:e2645247a9c2 Date: 2012-01-16 18:18 +0100 http://bitbucket.org/pypy/pypy/changeset/e2645247a9c2/ Log: (antocuni, arigo) Move 
CompiledSTMTests to its own file in translator/stm/test/support.py. diff --git a/pypy/rlib/test/test_rstm.py b/pypy/rlib/test/test_rstm.py --- a/pypy/rlib/test/test_rstm.py +++ b/pypy/rlib/test/test_rstm.py @@ -1,6 +1,6 @@ from pypy.rlib.debug import debug_print from pypy.rlib import rstm -from pypy.translator.c.test.test_standalone import StandaloneTests +from pypy.translator.stm.test.support import CompiledSTMTests def test_stm_perform_transaction(): class Arg(object): @@ -16,26 +16,6 @@ assert arg.x == 42 -class CompiledSTMTests(StandaloneTests): - gc = "none" - - def compile(self, entry_point): - from pypy.config.pypyoption import get_pypy_config - self.config = get_pypy_config(translating=True) - self.config.translation.stm = True - self.config.translation.gc = self.gc - # - # Prevent the RaiseAnalyzer from just emitting "WARNING: Unknown - # operation". We want instead it to crash. - from pypy.translator.backendopt.canraise import RaiseAnalyzer - RaiseAnalyzer.fail_on_unknown_operation = True - try: - res = StandaloneTests.compile(self, entry_point, debug=True) - finally: - RaiseAnalyzer.fail_on_unknown_operation = False - return res - - class TestTransformSingleThread(CompiledSTMTests): def test_no_pointer_operations(self): diff --git a/pypy/translator/stm/test/support.py b/pypy/translator/stm/test/support.py new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/test/support.py @@ -0,0 +1,23 @@ +"""CompiledSTMTests, a support class for translated tests with STM""" + +from pypy.translator.c.test.test_standalone import StandaloneTests + + +class CompiledSTMTests(StandaloneTests): + gc = "none" + + def compile(self, entry_point): + from pypy.config.pypyoption import get_pypy_config + self.config = get_pypy_config(translating=True) + self.config.translation.stm = True + self.config.translation.gc = self.gc + # + # Prevent the RaiseAnalyzer from just emitting "WARNING: Unknown + # operation". We want instead it to crash. 
+ from pypy.translator.backendopt.canraise import RaiseAnalyzer + RaiseAnalyzer.fail_on_unknown_operation = True + try: + res = StandaloneTests.compile(self, entry_point, debug=True) + finally: + RaiseAnalyzer.fail_on_unknown_operation = False + return res diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -1,7 +1,6 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rarithmetic import r_longlong, r_singlefloat -from pypy.rlib.test.test_rstm import CompiledSTMTests -#from pypy.rlib import rstm +from pypy.translator.stm.test.support import CompiledSTMTests from pypy.translator.stm._rffi_stm import (CALLBACK, stm_perform_transaction, stm_descriptor_init, stm_descriptor_done) from pypy.translator.c.test.test_standalone import StandaloneTests diff --git a/pypy/translator/stm/test/test_ztranslated.py b/pypy/translator/stm/test/test_ztranslated.py --- a/pypy/translator/stm/test/test_ztranslated.py +++ b/pypy/translator/stm/test/test_ztranslated.py @@ -1,5 +1,5 @@ import py -from pypy.rlib.test.test_rstm import CompiledSTMTests +from pypy.translator.stm.test.support import CompiledSTMTests from pypy.translator.stm.test import targetdemo From noreply at buildbot.pypy.org Mon Jan 16 18:29:45 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 18:29:45 +0100 (CET) Subject: [pypy-commit] pypy stm: (antocuni, arigo) Message-ID: <20120116172945.04325820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51360:dbadc138e9b5 Date: 2012-01-16 18:29 +0100 http://bitbucket.org/pypy/pypy/changeset/dbadc138e9b5/ Log: (antocuni, arigo) Add an integration test to test_rstm.py: really call rstm.perform_transaction() in a C-compiled test. 
diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -1,5 +1,5 @@ from pypy.rlib.objectmodel import specialize, we_are_translated, keepalive_until_here -from pypy.rpython.lltypesystem import rffi, lltype +from pypy.rpython.lltypesystem import rffi, lltype, rclass from pypy.rpython.annlowlevel import (cast_base_ptr_to_instance, cast_instance_to_base_ptr, llhelper) diff --git a/pypy/rlib/test/test_rstm.py b/pypy/rlib/test/test_rstm.py --- a/pypy/rlib/test/test_rstm.py +++ b/pypy/rlib/test/test_rstm.py @@ -2,14 +2,18 @@ from pypy.rlib import rstm from pypy.translator.stm.test.support import CompiledSTMTests + +class Arg(object): + _alloc_nonmovable_ = True + +def setx(arg): + debug_print(arg.x) + arg.x = 42 + + def test_stm_perform_transaction(): - class Arg(object): - _alloc_nonmovable_ = True - - def setx(arg): - arg.x = 42 - arg = Arg() + arg.x = 202 rstm.descriptor_init() rstm.perform_transaction(setx, Arg, arg) rstm.descriptor_done() @@ -29,3 +33,12 @@ dataout, dataerr = cbuilder.cmdexec('', err=True) assert dataout == '' assert '102' in dataerr.splitlines() + + def test_perform_transaction(self): + def f(argv): + test_stm_perform_transaction() + return 0 + t, cbuilder = self.compile(f) + dataout, dataerr = cbuilder.cmdexec('', err=True) + assert dataout == '' + assert '202' in dataerr.splitlines() diff --git a/pypy/translator/c/funcgen.py b/pypy/translator/c/funcgen.py --- a/pypy/translator/c/funcgen.py +++ b/pypy/translator/c/funcgen.py @@ -226,9 +226,7 @@ yield '\tRPyConvertExceptionToCPython();' yield '\treturn NULL;' yield '}' - if self.exception_policy == "stm": - xxxx - yield 'STM_MAKE_INEVITABLE();' + assert self.exception_policy != "stm", "old code" retval = self.expr(block.inputargs[0]) if self.exception_policy != "exc_helper": yield 'RPY_DEBUG_RETURN();' @@ -609,6 +607,7 @@ OP_STM_SETARRAYITEM = _OP_STM OP_STM_GETINTERIORFIELD = _OP_STM OP_STM_SETINTERIORFIELD = _OP_STM + OP_STM_BECOME_INEVITABLE = 
_OP_STM def OP_PTR_NONZERO(self, op): diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -118,49 +118,7 @@ return _stm_generic_set(funcgen, op, expr, T) -def stm_begin_transaction(funcgen, op): - return 'STM_begin_transaction();' - -def stm_commit_transaction(funcgen, op): - return 'stm_commit_transaction();' - -def stm_begin_inevitable_transaction(funcgen, op): - return 'stm_begin_inevitable_transaction();' - -def stm_declare_variable(funcgen, op): - # this operation occurs only once at the start of a function if - # it uses stm_transaction_boundary - assert funcgen.exception_policy is None - funcgen.exception_policy = 'stm' - return 'STM_DECLARE_VARIABLE();' - -def stm_transaction_boundary(funcgen, op): - assert funcgen.exception_policy == 'stm' - # make code looking like this: - # - # stm_commit_transaction(); - # { - # volatile long tmp_123 = l_123; - # setjmp(jmpbuf); - # l_123 = tmp_123; - # } - # stm_begin_transaction(&jmpbuf); - # - lines = ['\tsetjmp(jmpbuf);'] - TMPVAR = 'tmp_%s' - for v in op.args: - tmpname = TMPVAR % v.name - cdeclname = cdecl(funcgen.lltypename(v), 'volatile ' + tmpname) - realname = funcgen.expr(v) - lines.insert(0, '\t%s = %s;' % (cdeclname, realname)) - lines.append('\t%s = %s;' % (realname, tmpname)) - lines.insert(0, '{') - lines.insert(0, 'stm_commit_transaction();') - lines.append('}') - lines.append('stm_begin_transaction(&jmpbuf);') - return '\n'.join(lines) - -def stm_try_inevitable(funcgen, op): +def stm_become_inevitable(funcgen, op): info = op.args[0].value string_literal = c_string_constant(info) return 'stm_try_inevitable(STM_EXPLAIN1(%s));' % (string_literal,) diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -775,6 +775,8 @@ by another thread. We set the lowest bit in global_timestamp to 1. 
*/ struct tx_descriptor *d = thread_descriptor; + if (!d) + return; #ifdef RPY_STM_ASSERT PYPY_DEBUG_START("stm-inevitable"); From noreply at buildbot.pypy.org Mon Jan 16 18:56:27 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 18:56:27 +0100 (CET) Subject: [pypy-commit] pypy stm: (antocuni, arigo) Message-ID: <20120116175627.1DA9B820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51361:1e52821ea7bc Date: 2012-01-16 18:44 +0100 http://bitbucket.org/pypy/pypy/changeset/1e52821ea7bc/ Log: (antocuni, arigo) Test (maybe) for inevitable transactions. diff --git a/pypy/rlib/test/test_rstm.py b/pypy/rlib/test/test_rstm.py --- a/pypy/rlib/test/test_rstm.py +++ b/pypy/rlib/test/test_rstm.py @@ -1,3 +1,4 @@ +import os from pypy.rlib.debug import debug_print from pypy.rlib import rstm from pypy.translator.stm.test.support import CompiledSTMTests @@ -8,12 +9,15 @@ def setx(arg): debug_print(arg.x) + if arg.x == 303: + # this will trigger stm_become_inevitable() + os.write(1, "hello\n") arg.x = 42 -def test_stm_perform_transaction(): +def test_stm_perform_transaction(initial_x=202): arg = Arg() - arg.x = 202 + arg.x = initial_x rstm.descriptor_init() rstm.perform_transaction(setx, Arg, arg) rstm.descriptor_done() @@ -42,3 +46,12 @@ dataout, dataerr = cbuilder.cmdexec('', err=True) assert dataout == '' assert '202' in dataerr.splitlines() + + def test_perform_transaction_inevitable(self): + def f(argv): + test_stm_perform_transaction(303) + return 0 + t, cbuilder = self.compile(f) + dataout, dataerr = cbuilder.cmdexec('', err=True) + assert 'hello' in dataout.splitlines() + assert '303' in dataerr.splitlines() From noreply at buildbot.pypy.org Mon Jan 16 18:56:28 2012 From: noreply at buildbot.pypy.org (arigo) Date: Mon, 16 Jan 2012 18:56:28 +0100 (CET) Subject: [pypy-commit] pypy stm: (antocuni, arigo) Message-ID: <20120116175628.49899820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: 
r51362:f565b7971e14 Date: 2012-01-16 18:55 +0100 http://bitbucket.org/pypy/pypy/changeset/f565b7971e14/ Log: (antocuni, arigo) A way to get and test the current transaction mode, for debugging. diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -36,3 +36,4 @@ descriptor_init = _rffi_stm.stm_descriptor_init descriptor_done = _rffi_stm.stm_descriptor_done +debug_get_state = _rffi_stm.stm_debug_get_state diff --git a/pypy/rlib/test/test_rstm.py b/pypy/rlib/test/test_rstm.py --- a/pypy/rlib/test/test_rstm.py +++ b/pypy/rlib/test/test_rstm.py @@ -9,18 +9,24 @@ def setx(arg): debug_print(arg.x) + assert rstm.debug_get_state() == 1 if arg.x == 303: # this will trigger stm_become_inevitable() os.write(1, "hello\n") + assert rstm.debug_get_state() == 2 arg.x = 42 def test_stm_perform_transaction(initial_x=202): arg = Arg() arg.x = initial_x + assert rstm.debug_get_state() == -1 rstm.descriptor_init() + assert rstm.debug_get_state() == 0 rstm.perform_transaction(setx, Arg, arg) + assert rstm.debug_get_state() == 0 rstm.descriptor_done() + assert rstm.debug_get_state() == -1 assert arg.x == 42 diff --git a/pypy/translator/stm/_rffi_stm.py b/pypy/translator/stm/_rffi_stm.py --- a/pypy/translator/stm/_rffi_stm.py +++ b/pypy/translator/stm/_rffi_stm.py @@ -46,3 +46,4 @@ [CALLBACK, rffi.VOIDP], rffi.VOIDP) stm_abort_and_retry = llexternal('stm_abort_and_retry', [], lltype.Void) +stm_debug_get_state = llexternal('stm_debug_get_state', [], lltype.Signed) diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -82,9 +82,7 @@ owner_version_t my_lock_word; unsigned init_counter; struct RedoLog redolog; /* last item, because it's the biggest one */ -#ifdef RPY_STM_ASSERT int transaction_active; -#endif }; /* global_timestamp contains in its lowest bit a flag equal to 1 @@ -148,9 +146,7 @@ static _Bool is_inevitable(struct 
tx_descriptor *d) { -#ifdef RPY_STM_ASSERT assert(d->transaction_active); -#endif return is_inevitable_or_inactive(d); } @@ -254,10 +250,8 @@ { d->reads.size = 0; redolog_clear(&d->redolog); -#ifdef RPY_STM_ASSERT assert(d->transaction_active); d->transaction_active = 0; -#endif d->setjmp_buf = NULL; } @@ -687,10 +681,8 @@ void stm_begin_transaction(jmp_buf* buf) { struct tx_descriptor *d = thread_descriptor; -#ifdef RPY_STM_ASSERT assert(!d->transaction_active); d->transaction_active = 1; -#endif d->setjmp_buf = buf; d->start_time = d->last_known_global_timestamp & ~1; } @@ -842,9 +834,7 @@ struct tx_descriptor *d = thread_descriptor; unsigned long curtime; -#ifdef RPY_STM_ASSERT assert(!d->transaction_active); -#endif retry: mutex_lock(); /* possibly waiting here */ @@ -861,10 +851,8 @@ if (bool_cas(&global_timestamp, curtime, curtime + 1)) break; } -#ifdef RPY_STM_ASSERT assert(!d->transaction_active); d->transaction_active = 1; -#endif d->setjmp_buf = NULL; d->start_time = curtime; #ifdef COMMIT_OTHER_INEV @@ -974,4 +962,17 @@ #endif } +long stm_debug_get_state(void) +{ + struct tx_descriptor *d = thread_descriptor; + if (!d) + return -1; + if (!d->transaction_active) + return 0; + if (!is_inevitable(d)) + return 1; + else + return 2; +} + #endif /* PYPY_NOT_MAIN_FILE */ diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -37,6 +37,10 @@ void stm_abort_and_retry(void); void stm_descriptor_init_and_being_inevitable_transaction(void); void stm_commit_transaction_and_descriptor_done(void); +long stm_debug_get_state(void); /* -1: descriptor_init() was not called + 0: not in a transaction + 1: in a regular transaction + 2: in an inevitable transaction */ /* for testing only: */ #define STM_begin_transaction() ; \ diff --git a/pypy/translator/stm/test/test_rffi_stm.py b/pypy/translator/stm/test/test_rffi_stm.py --- a/pypy/translator/stm/test/test_rffi_stm.py 
+++ b/pypy/translator/stm/test/test_rffi_stm.py @@ -50,6 +50,19 @@ return lltype.nullptr(rffi.VOIDP.TO) stm_descriptor_init() stm_perform_transaction(llhelper(CALLBACK, callback1), - lltype.nullptr(rffi.VOIDP.TO)) + lltype.nullptr(rffi.VOIDP.TO)) stm_descriptor_done() assert a.x == 420 + +def test_stm_debug_get_state(): + def callback1(x): + assert stm_debug_get_state() == 1 + stm_try_inevitable() + assert stm_debug_get_state() == 2 + return lltype.nullptr(rffi.VOIDP.TO) + assert stm_debug_get_state() == -1 + stm_descriptor_init() + assert stm_debug_get_state() == 0 + stm_perform_transaction(llhelper(CALLBACK, callback1), + lltype.nullptr(rffi.VOIDP.TO)) + stm_descriptor_done() From noreply at buildbot.pypy.org Mon Jan 16 20:11:35 2012 From: noreply at buildbot.pypy.org (mattip) Date: Mon, 16 Jan 2012 20:11:35 +0100 (CET) Subject: [pypy-commit] pypy default: add failing test, add fix Message-ID: <20120116191135.CFC46820D8@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: Changeset: r51363:ee1b4ea67ed9 Date: 2012-01-16 21:10 +0200 http://bitbucket.org/pypy/pypy/changeset/ee1b4ea67ed9/ Log: add failing test, add fix diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -157,6 +157,8 @@ offset += self.strides[i] break else: + if i == self.dim: + first_line = True indices[i] = 0 offset -= self.backstrides[i] else: diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -771,6 +771,8 @@ assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() def test_identity(self): from _numpypy import 
identity, array From noreply at buildbot.pypy.org Mon Jan 16 21:16:09 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 16 Jan 2012 21:16:09 +0100 (CET) Subject: [pypy-commit] pypy default: A bit experimental - try to preallocate the size of unicode join and remove Message-ID: <20120116201610.009C0820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51364:f6b8525d8a10 Date: 2012-01-16 22:14 +0200 http://bitbucket.org/pypy/pypy/changeset/f6b8525d8a10/ Log: A bit experimental - try to preallocate the size of unicode join and remove a pointless performance hack (the general optimization should work already) diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -201,7 +201,7 @@ return space.newbool(container.find(item) != -1) def unicode_join__Unicode_ANY(space, w_self, w_list): - list_w = space.unpackiterable(w_list) + list_w = space.listview(w_list) size = len(list_w) if size == 0: @@ -216,22 +216,21 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - sb = UnicodeBuilder() + prealloc_size = 0 + for i in range(size): + prealloc_size += len(space.unicode_w(list_w[i])) + sb = UnicodeBuilder(prealloc_size) for i in range(size): if self and i != 0: sb.append(self) w_s = list_w[i] - if isinstance(w_s, W_UnicodeObject): - # shortcut for performance - sb.append(w_s._value) - else: - try: - sb.append(space.unicode_w(w_s)) - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - raise operationerrfmt(space.w_TypeError, - "sequence item %d: expected string or Unicode", i) + try: + sb.append(space.unicode_w(w_s)) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + raise operationerrfmt(space.w_TypeError, + "sequence item %d: expected string or Unicode", i) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): From noreply at 
buildbot.pypy.org Mon Jan 16 21:16:11 2012 From: noreply at buildbot.pypy.org (fijal) Date: Mon, 16 Jan 2012 21:16:11 +0100 (CET) Subject: [pypy-commit] pypy default: mere Message-ID: <20120116201611.60CE7820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51365:5e43d79c76a7 Date: 2012-01-16 22:15 +0200 http://bitbucket.org/pypy/pypy/changeset/5e43d79c76a7/ Log: mere diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -157,6 +157,8 @@ offset += self.strides[i] break else: + if i == self.dim: + first_line = True indices[i] = 0 offset -= self.backstrides[i] else: diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -287,11 +287,11 @@ descr_rmod = _binop_right_impl("mod") def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): - def impl(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def impl(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, - self, True, promote_to_largest, w_dim) + self, True, promote_to_largest, w_axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") @@ -569,14 +569,14 @@ ) return w_result - def descr_mean(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def descr_mean(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) w_denom = space.wrap(self.size) else: - dim = space.int_w(w_dim) + dim = space.int_w(w_axis) w_denom = space.wrap(self.shape[dim]) - return space.div(self.descr_sum_promote(space, w_dim), w_denom) + return 
space.div(self.descr_sum_promote(space, w_axis), w_denom) def descr_var(self, space): # var = mean((values - mean(values)) ** 2) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -733,6 +733,7 @@ a = array(range(105)).reshape(3, 5, 7) b = mean(a, axis=0) b[0,0]==35. + assert a.mean(axis=0)[0, 0] == 35 assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() @@ -755,6 +756,7 @@ assert array([]).sum() == 0.0 raises(ValueError, 'array([]).max()') assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() @@ -769,6 +771,8 @@ assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() def test_identity(self): from _numpypy import identity, array diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -127,7 +127,7 @@ 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', '_codecs']: if modname == 'pypyjit' and 'interp_resop' in rest: return False return True diff --git a/pypy/rpython/lltypesystem/rclass.py b/pypy/rpython/lltypesystem/rclass.py --- a/pypy/rpython/lltypesystem/rclass.py +++ b/pypy/rpython/lltypesystem/rclass.py @@ -510,7 +510,13 @@ ctype = inputconst(Void, self.object_type) cflags = inputconst(Void, flags) vlist = 
[ctype, cflags] - vptr = llops.genop('malloc', vlist, + cnonmovable = self.classdef.classdesc.read_attribute( + '_alloc_nonmovable_', Constant(False)) + if cnonmovable.value: + opname = 'malloc_nonmovable' + else: + opname = 'malloc' + vptr = llops.genop(opname, vlist, resulttype = Ptr(self.object_type)) ctypeptr = inputconst(CLASSTYPE, self.rclass.getvtable()) self.setfield(vptr, '__class__', ctypeptr, llops) diff --git a/pypy/rpython/test/test_rclass.py b/pypy/rpython/test/test_rclass.py --- a/pypy/rpython/test/test_rclass.py +++ b/pypy/rpython/test/test_rclass.py @@ -1130,6 +1130,18 @@ assert sorted([u]) == [6] # 32-bit types assert sorted([i, r, d, l]) == [2, 3, 4, 5] # 64-bit types + def test_nonmovable(self): + for (nonmovable, opname) in [(True, 'malloc_nonmovable'), + (False, 'malloc')]: + class A(object): + _alloc_nonmovable_ = nonmovable + def f(): + return A() + t, typer, graph = self.gengraph(f, []) + assert summary(graph) == {opname: 1, + 'cast_pointer': 1, + 'setfield': 1} + class TestOOtype(BaseTestRclass, OORtypeMixin): From notifications-noreply at bitbucket.org Mon Jan 16 23:03:37 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Mon, 16 Jan 2012 22:03:37 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120116220337.18609.7903@bitbucket05.managed.contegix.com> You have received a notification from Romain Guillebert. Hi, I forked pypy. My fork is at https://bitbucket.org/rguillebert/pypy. 
-- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Tue Jan 17 00:17:03 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Tue, 17 Jan 2012 00:17:03 +0100 (CET) Subject: [pypy-commit] pypy py3k: Reject old octal literals Message-ID: <20120116231703.0C4B5820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51366:d745dbc657b4 Date: 2012-01-16 18:25 +0100 http://bitbucket.org/pypy/pypy/changeset/d745dbc657b4/ Log: Reject old octal literals diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -143,7 +143,7 @@ elif opcode == opcodedesc.LOAD_NAME.index: self.co_flags |= CO_CONTAINSGLOBALS - co_names = property(lambda self: [self.space.unwrap(w_name) for w_name in self.co_names_w]) # for trace + co_names = property(lambda self: [self.space.str_w(w_name) for w_name in self.co_names_w]) # for trace def signature(self): return self._signature diff --git a/pypy/interpreter/pyparser/genpytokenize.py b/pypy/interpreter/pyparser/genpytokenize.py --- a/pypy/interpreter/pyparser/genpytokenize.py +++ b/pypy/interpreter/pyparser/genpytokenize.py @@ -63,10 +63,8 @@ groupStr(states, "0123456789abcdefABCDEF"))) octNumber = chain(states, newArcPair(states, "0"), - maybe(states, - chain(states, - groupStr(states, "oO"), - groupStr(states, "01234567"))), + groupStr(states, "oO"), + groupStr(states, "01234567"), any(states, groupStr(states, "01234567"))) binNumber = chain(states, newArcPair(states, "0"), @@ -75,7 +73,8 @@ decNumber = chain(states, groupStr(states, "123456789"), any(states, makeDigits())) - intNumber = group(states, hexNumber, octNumber, binNumber, decNumber) + zero = newArcPair(states, "0") + intNumber = group(states, hexNumber, octNumber, binNumber, decNumber, zero) # ____________________________________________________________ # Exponents def makeExp (): diff --git 
a/pypy/interpreter/pyparser/pytokenize.py b/pypy/interpreter/pyparser/pytokenize.py --- a/pypy/interpreter/pyparser/pytokenize.py +++ b/pypy/interpreter/pyparser/pytokenize.py @@ -25,10 +25,10 @@ accepts = [True, True, True, True, True, True, True, True, True, True, False, True, True, True, True, False, - False, False, True, False, False, True, False, - False, True, False, True, False, True, False, - False, True, False, False, True, True, True, - False, False, True, False, False, False, True] + False, False, True, False, False, False, False, + True, False, True, False, True, False, False, + True, False, False, True, True, True, False, + False, True, False, False, False, True] states = [ # 0 {'\t': 0, '\n': 13, '\x0c': 0, @@ -110,21 +110,21 @@ 'v': 1, 'w': 1, 'x': 1, 'y': 1, 'z': 1}, # 4 - {'.': 24, '0': 21, '1': 21, '2': 21, - '3': 21, '4': 21, '5': 21, '6': 21, - '7': 21, '8': 23, '9': 23, 'B': 22, - 'E': 25, 'J': 13, 'O': 20, 'X': 19, - 'b': 22, 'e': 25, 'j': 13, 'o': 20, + {'.': 23, '0': 22, '1': 22, '2': 22, + '3': 22, '4': 22, '5': 22, '6': 22, + '7': 22, '8': 22, '9': 22, 'B': 21, + 'E': 24, 'J': 13, 'O': 20, 'X': 19, + 'b': 21, 'e': 24, 'j': 13, 'o': 20, 'x': 19}, # 5 - {'.': 24, '0': 5, '1': 5, '2': 5, + {'.': 23, '0': 5, '1': 5, '2': 5, '3': 5, '4': 5, '5': 5, '6': 5, - '7': 5, '8': 5, '9': 5, 'E': 25, - 'J': 13, 'e': 25, 'j': 13}, + '7': 5, '8': 5, '9': 5, 'E': 24, + 'J': 13, 'e': 24, 'j': 13}, # 6 - {'0': 26, '1': 26, '2': 26, '3': 26, - '4': 26, '5': 26, '6': 26, '7': 26, - '8': 26, '9': 26}, + {'0': 25, '1': 25, '2': 25, '3': 25, + '4': 25, '5': 25, '6': 25, '7': 25, + '8': 25, '9': 25}, # 7 {'*': 12, '=': 13}, # 8 @@ -142,105 +142,100 @@ # 14 {'\n': 13}, # 15 - {automata.DEFAULT: 30, '\n': 27, - '\r': 27, "'": 28, '\\': 29}, + {automata.DEFAULT: 29, '\n': 26, + '\r': 26, "'": 27, '\\': 28}, # 16 - {automata.DEFAULT: 33, '\n': 27, - '\r': 27, '"': 31, '\\': 32}, + {automata.DEFAULT: 32, '\n': 26, + '\r': 26, '"': 30, '\\': 31}, # 17 {'\n': 13, '\r': 
14}, # 18 - {automata.DEFAULT: 18, '\n': 27, '\r': 27}, + {automata.DEFAULT: 18, '\n': 26, '\r': 26}, # 19 + {'0': 33, '1': 33, '2': 33, '3': 33, + '4': 33, '5': 33, '6': 33, '7': 33, + '8': 33, '9': 33, 'A': 33, 'B': 33, + 'C': 33, 'D': 33, 'E': 33, 'F': 33, + 'a': 33, 'b': 33, 'c': 33, 'd': 33, + 'e': 33, 'f': 33}, + # 20 {'0': 34, '1': 34, '2': 34, '3': 34, - '4': 34, '5': 34, '6': 34, '7': 34, - '8': 34, '9': 34, 'A': 34, 'B': 34, - 'C': 34, 'D': 34, 'E': 34, 'F': 34, - 'a': 34, 'b': 34, 'c': 34, 'd': 34, - 'e': 34, 'f': 34}, - # 20 - {'0': 35, '1': 35, '2': 35, '3': 35, - '4': 35, '5': 35, '6': 35, '7': 35}, + '4': 34, '5': 34, '6': 34, '7': 34}, # 21 - {'.': 24, '0': 21, '1': 21, '2': 21, - '3': 21, '4': 21, '5': 21, '6': 21, - '7': 21, '8': 23, '9': 23, 'E': 25, - 'J': 13, 'e': 25, 'j': 13}, + {'0': 35, '1': 35}, # 22 - {'0': 36, '1': 36}, + {'.': 23, '0': 22, '1': 22, '2': 22, + '3': 22, '4': 22, '5': 22, '6': 22, + '7': 22, '8': 22, '9': 22, 'E': 24, + 'J': 13, 'e': 24, 'j': 13}, # 23 - {'.': 24, '0': 23, '1': 23, '2': 23, - '3': 23, '4': 23, '5': 23, '6': 23, - '7': 23, '8': 23, '9': 23, 'E': 25, - 'J': 13, 'e': 25, 'j': 13}, + {'0': 23, '1': 23, '2': 23, '3': 23, + '4': 23, '5': 23, '6': 23, '7': 23, + '8': 23, '9': 23, 'E': 36, 'J': 13, + 'e': 36, 'j': 13}, # 24 - {'0': 24, '1': 24, '2': 24, '3': 24, - '4': 24, '5': 24, '6': 24, '7': 24, - '8': 24, '9': 24, 'E': 37, 'J': 13, - 'e': 37, 'j': 13}, + {'+': 37, '-': 37, '0': 38, '1': 38, + '2': 38, '3': 38, '4': 38, '5': 38, + '6': 38, '7': 38, '8': 38, '9': 38}, # 25 - {'+': 38, '-': 38, '0': 39, '1': 39, - '2': 39, '3': 39, '4': 39, '5': 39, - '6': 39, '7': 39, '8': 39, '9': 39}, + {'0': 25, '1': 25, '2': 25, '3': 25, + '4': 25, '5': 25, '6': 25, '7': 25, + '8': 25, '9': 25, 'E': 36, 'J': 13, + 'e': 36, 'j': 13}, # 26 - {'0': 26, '1': 26, '2': 26, '3': 26, - '4': 26, '5': 26, '6': 26, '7': 26, - '8': 26, '9': 26, 'E': 37, 'J': 13, - 'e': 37, 'j': 13}, + {}, # 27 - {}, + {"'": 13}, # 28 - {"'": 13}, + 
{automata.DEFAULT: 39, '\n': 13, '\r': 14}, # 29 + {automata.DEFAULT: 29, '\n': 26, + '\r': 26, "'": 13, '\\': 28}, + # 30 + {'"': 13}, + # 31 {automata.DEFAULT: 40, '\n': 13, '\r': 14}, - # 30 - {automata.DEFAULT: 30, '\n': 27, - '\r': 27, "'": 13, '\\': 29}, - # 31 - {'"': 13}, # 32 - {automata.DEFAULT: 41, '\n': 13, '\r': 14}, + {automata.DEFAULT: 32, '\n': 26, + '\r': 26, '"': 13, '\\': 31}, # 33 - {automata.DEFAULT: 33, '\n': 27, - '\r': 27, '"': 13, '\\': 32}, + {'0': 33, '1': 33, '2': 33, '3': 33, + '4': 33, '5': 33, '6': 33, '7': 33, + '8': 33, '9': 33, 'A': 33, 'B': 33, + 'C': 33, 'D': 33, 'E': 33, 'F': 33, + 'a': 33, 'b': 33, 'c': 33, 'd': 33, + 'e': 33, 'f': 33}, # 34 {'0': 34, '1': 34, '2': 34, '3': 34, - '4': 34, '5': 34, '6': 34, '7': 34, - '8': 34, '9': 34, 'A': 34, 'B': 34, - 'C': 34, 'D': 34, 'E': 34, 'F': 34, - 'a': 34, 'b': 34, 'c': 34, 'd': 34, - 'e': 34, 'f': 34}, + '4': 34, '5': 34, '6': 34, '7': 34}, # 35 - {'0': 35, '1': 35, '2': 35, '3': 35, - '4': 35, '5': 35, '6': 35, '7': 35}, + {'0': 35, '1': 35}, # 36 - {'0': 36, '1': 36}, + {'+': 41, '-': 41, '0': 42, '1': 42, + '2': 42, '3': 42, '4': 42, '5': 42, + '6': 42, '7': 42, '8': 42, '9': 42}, # 37 - {'+': 42, '-': 42, '0': 43, '1': 43, - '2': 43, '3': 43, '4': 43, '5': 43, - '6': 43, '7': 43, '8': 43, '9': 43}, + {'0': 38, '1': 38, '2': 38, '3': 38, + '4': 38, '5': 38, '6': 38, '7': 38, + '8': 38, '9': 38}, # 38 - {'0': 39, '1': 39, '2': 39, '3': 39, - '4': 39, '5': 39, '6': 39, '7': 39, - '8': 39, '9': 39}, + {'0': 38, '1': 38, '2': 38, '3': 38, + '4': 38, '5': 38, '6': 38, '7': 38, + '8': 38, '9': 38, 'J': 13, 'j': 13}, # 39 - {'0': 39, '1': 39, '2': 39, '3': 39, - '4': 39, '5': 39, '6': 39, '7': 39, - '8': 39, '9': 39, 'J': 13, 'j': 13}, + {automata.DEFAULT: 39, '\n': 26, + '\r': 26, "'": 13, '\\': 28}, # 40 - {automata.DEFAULT: 40, '\n': 27, - '\r': 27, "'": 13, '\\': 29}, + {automata.DEFAULT: 40, '\n': 26, + '\r': 26, '"': 13, '\\': 31}, # 41 - {automata.DEFAULT: 41, '\n': 27, - '\r': 
27, '"': 13, '\\': 32}, + {'0': 42, '1': 42, '2': 42, '3': 42, + '4': 42, '5': 42, '6': 42, '7': 42, + '8': 42, '9': 42}, # 42 - {'0': 43, '1': 43, '2': 43, '3': 43, - '4': 43, '5': 43, '6': 43, '7': 43, - '8': 43, '9': 43}, - # 43 - {'0': 43, '1': 43, '2': 43, '3': 43, - '4': 43, '5': 43, '6': 43, '7': 43, - '8': 43, '9': 43, 'J': 13, 'j': 13}, + {'0': 42, '1': 42, '2': 42, '3': 42, + '4': 42, '5': 42, '6': 42, '7': 42, + '8': 42, '9': 42, 'J': 13, 'j': 13}, ] pseudoDFA = automata.DFA(states, accepts) @@ -304,6 +299,8 @@ ] doubleDFA = automata.DFA(states, accepts) + + #_______________________________________________________________________ # End of automatically generated DFA's From noreply at buildbot.pypy.org Tue Jan 17 00:17:04 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Tue, 17 Jan 2012 00:17:04 +0100 (CET) Subject: [pypy-commit] pypy py3k: Use new octal literals Message-ID: <20120116231704.33DCC820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51367:4714f110a1ae Date: 2012-01-16 18:58 +0100 http://bitbucket.org/pypy/pypy/changeset/4714f110a1ae/ Log: Use new octal literals diff --git a/pypy/translator/goal/nanos.py b/pypy/translator/goal/nanos.py --- a/pypy/translator/goal/nanos.py +++ b/pypy/translator/goal/nanos.py @@ -85,7 +85,7 @@ st = posix.stat(path) except posix.error: return False - return (st.st_mode & 0170000) == 0100000 # S_ISREG + return (st.st_mode & 0o170000) == 0o100000 # S_ISREG def islink(path): import posix @@ -93,7 +93,7 @@ st = posix.lstat(path) except posix.error: return False - return (st.st_mode & 0170000) == 0120000 # S_ISLNK + return (st.st_mode & 0o170000) == 0o120000 # S_ISLNK """, filename=__file__) @@ -249,7 +249,7 @@ st = nt.stat(path) except nt.error: return False - return (st.st_mode & 0170000) == 0100000 # S_ISREG + return (st.st_mode & 0o170000) == 0o100000 # S_ISREG def islink(path): return False From noreply at buildbot.pypy.org Tue Jan 17 00:26:32 2012 From: noreply at 
buildbot.pypy.org (amauryfa) Date: Tue, 17 Jan 2012 00:26:32 +0100 (CET) Subject: [pypy-commit] pypy py3k: PEP3147: .pyc files are now named ./__pycache__/foo.pypy-17.pyc Message-ID: <20120116232632.6BF12820D8@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51368:a20df1bb1bb8 Date: 2012-01-16 23:53 +0100 http://bitbucket.org/pypy/pypy/changeset/a20df1bb1bb8/ Log: PEP3147: .pyc files are now named ./__pycache__/foo.pypy-17.pyc (Note: the "nolonepycfile" option makes less sense now) diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -34,6 +34,8 @@ DEFAULT_SOABI = 'pypy-%d%d' % PYPY_VERSION[:2] CHECK_FOR_PYW = sys.platform == 'win32' +PYC_TAG = 'pypy-%d%d' % PYPY_VERSION[:2] + @specialize.memo() def get_so_extension(space): if space.config.objspace.soabi is not None: @@ -91,10 +93,7 @@ else: # XXX that's slow def case_ok(filename): - index = filename.rfind(os.sep) - if os.altsep is not None: - index2 = filename.rfind(os.altsep) - index = max(index, index2) + index = rightmost_sep(filename) if index < 0: directory = os.curdir else: @@ -862,17 +861,63 @@ space.wrap(space.builtin)) code_w.exec_code(space, w_dict, w_dict) +def rightmost_sep(filename): + "Like filename.rfind('/'), but also search for \\." + index = filename.rfind(os.sep) + if os.altsep is not None: + index2 = filename.rfind(os.altsep) + index = max(index, index2) + return index + def make_compiled_pathname(pathname): "Given the path to a .py file, return the path to its .pyc file." - return pathname + 'c' + # foo.py -> __pycache__/foo..pyc + + lastpos = rightmost_sep(pathname) + 1 + assert lastpos >= 0 # zero when slash, takes the full name + fname = pathname[lastpos:] + if lastpos > 0: + # Windows: re-use the last separator character (/ or \\) when + # appending the __pycache__ path. 
+ lastsep = pathname[lastpos-1] + else: + lastsep = os.sep + ext = fname + for i in range(len(fname)): + if fname[i] == '.': + ext = fname[:i + 1] + + result = (pathname[:lastpos] + "__pycache__" + lastsep + + ext + PYC_TAG + '.pyc') + return result def make_source_pathname(pathname): - pos_extension = len(pathname) - 4 # len('.pyc') - if pos_extension < 0: - raise ValueError("path is too short") - if pathname[pos_extension:] != '.pyc': - raise ValueError("not a .pyc path name") - return pathname[:pos_extension + 3] + "Given the path to a .pyc file, return the path to its .py file." + # (...)/__pycache__/foo..pyc -> (...)/foo.py + + right = rightmost_sep(pathname) + if right < 0: + raise ValueError() + left = rightmost_sep(pathname[:right]) + 1 + assert left >= 0 + if pathname[left:right] != '__pycache__': + raise ValueError() + + # Now verify that the path component to the right of the last + # slash has two dots in it. + rightpart = pathname[right + 1:] + dot0 = rightpart.find('.') + 1 + if dot0 <= 0: + raise ValueError + dot1 = rightpart[dot0:].find('.') + 1 + if dot1 <= 0: + raise ValueError + # Too many dots? + if rightpart[dot0 + dot1:].find('.') >= 0: + raise ValueError + + result = pathname[:left] + rightpart[:dot0] + 'py' + return result @jit.dont_look_inside def load_source_module(space, w_modulename, w_mod, pathname, source, @@ -1027,6 +1072,17 @@ Errors are ignored, if a write error occurs an attempt is made to remove the file. 
""" + # Ensure that the __pycache__ directory exists + dirsep = rightmost_sep(cpathname) + if dirsep < 0: + return + dirname = cpathname[:dirsep] + mode = src_mode | 0333 # +wx + try: + os.mkdir(dirname, mode) + except OSError: + pass + w_marshal = space.getbuiltinmodule('marshal') try: w_bytes = space.call_method(w_marshal, 'dumps', space.wrap(co), diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -38,7 +38,7 @@ test_reload = "def test():\n raise ValueError\n", infinite_reload = "import infinite_reload; reload(infinite_reload)", del_sys_module = "import sys\ndel sys.modules['del_sys_module']\n", - itertools = "hello_world = 42\n", + queue = "hello_world = 42\n", gc = "should_never_be_seen = 42\n", ) root.ensure("notapackage", dir=1) # empty, no __init__.py @@ -75,7 +75,7 @@ ) setuppkg("pkg_substituting", __init__ = "import sys, pkg_substituted\n" - "print 'TOTO', __name__\n" + "print('TOTO', __name__)\n" "sys.modules[__name__] = pkg_substituted") setuppkg("pkg_substituted", mod='') setuppkg("evil_pkg", @@ -102,6 +102,7 @@ w = space.wrap w_modname = w("compiled.x") filename = str(p.join("x.py")) + pycname = importing.make_compiled_pathname("x.py") stream = streamio.open_file_as_stream(filename, "r") try: importing.load_source_module(space, @@ -113,7 +114,7 @@ stream.close() if space.config.objspace.usepycfiles: # also create a lone .pyc file - p.join('lone.pyc').write(p.join('x.pyc').read(mode='rb'), + p.join('lone.pyc').write(p.join(pycname).read(mode='rb'), mode='wb') # create a .pyw file @@ -150,7 +151,7 @@ class AppTestImport: def setup_class(cls): # interpreter-level - cls.space = gettestobjspace(usemodules=['itertools']) + #cls.space = gettestobjspace(usemodules=['itertools']) cls.w_runappdirect = cls.space.wrap(conftest.option.runappdirect) cls.saved_modules = _setup(cls.space) #XXX Compile class @@ -159,6 +160,9 @@ def 
teardown_class(cls): # interpreter-level _teardown(cls.space, cls.saved_modules) + def w_exec_(self, cmd, ns): + exec(cmd, ns) + def test_set_sys_modules_during_import(self): from evil_pkg import evil assert evil.a == 42 @@ -373,7 +377,7 @@ def test_invalid__name__(self): glob = {} - exec "__name__ = None; import sys" in glob + exec("__name__ = None; import sys", glob) import sys assert glob['sys'] is sys @@ -418,16 +422,18 @@ assert pkg.pkg1.__package__ == 'pkg.pkg1' def test_future_relative_import_error_when_in_non_package(self): - exec """def imp(): + ns = {} + exec("""def imp(): from .string import inpackage - """.rstrip() - raises(ValueError, imp) + """.rstrip(), ns) + raises(ValueError, ns['imp']) def test_future_relative_import_error_when_in_non_package2(self): - exec """def imp(): + ns = {} + exec("""def imp(): from .. import inpackage - """.rstrip() - raises(ValueError, imp) + """.rstrip(), ns) + raises(ValueError, ns['imp']) def test_relative_import_with___name__(self): import sys @@ -456,9 +462,9 @@ def test__package__(self): # Regression test for http://bugs.python.org/issue3221. def check_absolute(): - exec "from os import path" in ns + self.exec_("from os import path", ns) def check_relative(): - exec "from . import a" in ns + self.exec_("from . 
import a", ns) # Check both OK with __package__ and __name__ correct ns = dict(__package__='pkg', __name__='pkg.notarealmodule') @@ -578,8 +584,11 @@ def test_cache_from_source(self): import imp - assert imp.cache_from_source('a/b/c.py') == 'a/b/c.pyc' - assert imp.source_from_cache('a/b/c.pyc') == 'a/b/c.py' + pycfile = imp.cache_from_source('a/b/c.py') + assert pycfile.startswith('a/b/__pycache__/c.pypy-') + assert pycfile.endswith('.pyc') + assert imp.source_from_cache('a/b/__pycache__/c.pypy-17.pyc' + ) == 'a/b/c.py' raises(ValueError, imp.source_from_cache, 'a/b/c.py') def test_shadow_builtin(self): @@ -596,34 +605,32 @@ def test_shadow_extension_1(self): if self.runappdirect: skip("hard to test: module is already imported") - # 'import itertools' is supposed to find itertools.py if there is + # 'import queue' is supposed to find queue.py if there is # one in sys.path. import sys - assert 'itertools' not in sys.modules - import itertools - assert hasattr(itertools, 'hello_world') - assert not hasattr(itertools, 'count') - assert '(built-in)' not in repr(itertools) - del sys.modules['itertools'] + assert 'queue' not in sys.modules + import queue + assert hasattr(queue, 'hello_world') + assert not hasattr(queue, 'count') + assert '(built-in)' not in repr(queue) + del sys.modules['queue'] def test_shadow_extension_2(self): if self.runappdirect: skip("hard to test: module is already imported") - # 'import itertools' is supposed to find the built-in module even + # 'import queue' is supposed to find the built-in module even # if there is also one in sys.path as long as it is *after* the - # special entry '.../lib_pypy/__extensions__'. (Note that for now - # there is one in lib_pypy/itertools.py, which should not be seen - # either; hence the (built-in) test below.) + # special entry '.../lib_pypy/__extensions__'. 
import sys - assert 'itertools' not in sys.modules + assert 'queue' not in sys.modules sys.path.append(sys.path.pop(0)) try: - import itertools - assert not hasattr(itertools, 'hello_world') - assert hasattr(itertools, 'izip') - assert '(built-in)' in repr(itertools) + import queue + assert not hasattr(queue, 'hello_world') + assert hasattr(queue, 'izip') + assert '(built-in)' in repr(queue) finally: sys.path.insert(0, sys.path.pop()) - del sys.modules['itertools'] + del sys.modules['queue'] class TestAbi: @@ -885,7 +892,7 @@ stream.close() # And the .pyc has been generated - cpathname = udir.join('test.pyc') + cpathname = udir.join(importing.make_compiled_pathname('test.py')) assert cpathname.check() def test_write_compiled_module(self): @@ -979,7 +986,7 @@ py.test.skip("unresolved issues with win32 shell quoting rules") from pypy.interpreter.test.test_zpy import pypypath extrapath = udir.ensure("pythonpath", dir=1) - extrapath.join("urllib.py").write("print 42\n") + extrapath.join("urllib.py").write("print(42)\n") old = os.environ.get('PYTHONPATH', None) oldlang = os.environ.pop('LANG', None) try: @@ -1024,7 +1031,7 @@ if fullname in self.namestoblock: return self def load_module(self, fullname): - raise ImportError, "blocked" + raise ImportError("blocked") import sys, imp modname = "errno" # an arbitrary harmless builtin module @@ -1094,7 +1101,7 @@ path = [self.path] try: file, filename, stuff = imp.find_module(subname, path) - except ImportError, e: + except ImportError: return None return ImpLoader(file, filename, stuff) @@ -1139,8 +1146,8 @@ def test_run_compiled_module(self): # XXX minimal test only - import imp, new - module = new.module('foobar') + import imp, types + module = types.ModuleType('foobar') raises(IOError, imp._run_compiled_module, 'foobar', 'this_file_does_not_exist', None, module) @@ -1161,14 +1168,14 @@ # a mostly-empty zip file path = os.path.join(self.udir, 'test_getimporter.zip') f = open(path, 'wb') - 
f.write('PK\x03\x04\n\x00\x00\x00\x00\x00P\x9eN>\x00\x00\x00\x00\x00' - '\x00\x00\x00\x00\x00\x00\x00\x05\x00\x15\x00emptyUT\t\x00' - '\x03wyYMwyYMUx\x04\x00\xf4\x01d\x00PK\x01\x02\x17\x03\n\x00' - '\x00\x00\x00\x00P\x9eN>\x00\x00\x00\x00\x00\x00\x00\x00\x00' - '\x00\x00\x00\x05\x00\r\x00\x00\x00\x00\x00\x00\x00\x00\x00' - '\xa4\x81\x00\x00\x00\x00emptyUT\x05\x00\x03wyYMUx\x00\x00PK' - '\x05\x06\x00\x00\x00\x00\x01\x00\x01\x00@\x00\x00\x008\x00' - '\x00\x00\x00\x00') + f.write(b'PK\x03\x04\n\x00\x00\x00\x00\x00P\x9eN>\x00\x00\x00\x00\x00' + b'\x00\x00\x00\x00\x00\x00\x00\x05\x00\x15\x00emptyUT\t\x00' + b'\x03wyYMwyYMUx\x04\x00\xf4\x01d\x00PK\x01\x02\x17\x03\n\x00' + b'\x00\x00\x00\x00P\x9eN>\x00\x00\x00\x00\x00\x00\x00\x00\x00' + b'\x00\x00\x00\x05\x00\r\x00\x00\x00\x00\x00\x00\x00\x00\x00' + b'\xa4\x81\x00\x00\x00\x00emptyUT\x05\x00\x03wyYMUx\x00\x00PK' + b'\x05\x06\x00\x00\x00\x00\x01\x00\x01\x00@\x00\x00\x008\x00' + b'\x00\x00\x00\x00') f.close() importer = imp._getimporter(path) import zipimport @@ -1193,16 +1200,16 @@ def test_import_possibly_from_pyc(self): from compiled import x if self.usepycfiles: - assert x.__file__.endswith('x.pyc') + assert x.__file__.endswith('.pyc') else: - assert x.__file__.endswith('x.py') + assert x.__file__.endswith('.py') try: from compiled import lone except ImportError: assert not self.lonepycfiles, "should have found 'lone.pyc'" else: assert self.lonepycfiles, "should not have found 'lone.pyc'" - assert lone.__file__.endswith('lone.pyc') + assert lone.__file__.endswith('.pyc') class AppTestNoLonePycFile(AppTestNoPycFile): spaceconfig = { From noreply at buildbot.pypy.org Tue Jan 17 00:26:33 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 17 Jan 2012 00:26:33 +0100 (CET) Subject: [pypy-commit] pypy py3k: Remove long() and L suffix from binascii.py Message-ID: <20120116232633.92827820D8@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51369:708194fb76ef Date: 2012-01-16 20:32 
+0100 http://bitbucket.org/pypy/pypy/changeset/708194fb76ef/ Log: Remove long() and L suffix from binascii.py diff --git a/lib_pypy/binascii.py b/lib_pypy/binascii.py --- a/lib_pypy/binascii.py +++ b/lib_pypy/binascii.py @@ -598,68 +598,68 @@ return ''.join(result) crc_32_tab = [ - 0x00000000L, 0x77073096L, 0xee0e612cL, 0x990951baL, 0x076dc419L, - 0x706af48fL, 0xe963a535L, 0x9e6495a3L, 0x0edb8832L, 0x79dcb8a4L, - 0xe0d5e91eL, 0x97d2d988L, 0x09b64c2bL, 0x7eb17cbdL, 0xe7b82d07L, - 0x90bf1d91L, 0x1db71064L, 0x6ab020f2L, 0xf3b97148L, 0x84be41deL, - 0x1adad47dL, 0x6ddde4ebL, 0xf4d4b551L, 0x83d385c7L, 0x136c9856L, - 0x646ba8c0L, 0xfd62f97aL, 0x8a65c9ecL, 0x14015c4fL, 0x63066cd9L, - 0xfa0f3d63L, 0x8d080df5L, 0x3b6e20c8L, 0x4c69105eL, 0xd56041e4L, - 0xa2677172L, 0x3c03e4d1L, 0x4b04d447L, 0xd20d85fdL, 0xa50ab56bL, - 0x35b5a8faL, 0x42b2986cL, 0xdbbbc9d6L, 0xacbcf940L, 0x32d86ce3L, - 0x45df5c75L, 0xdcd60dcfL, 0xabd13d59L, 0x26d930acL, 0x51de003aL, - 0xc8d75180L, 0xbfd06116L, 0x21b4f4b5L, 0x56b3c423L, 0xcfba9599L, - 0xb8bda50fL, 0x2802b89eL, 0x5f058808L, 0xc60cd9b2L, 0xb10be924L, - 0x2f6f7c87L, 0x58684c11L, 0xc1611dabL, 0xb6662d3dL, 0x76dc4190L, - 0x01db7106L, 0x98d220bcL, 0xefd5102aL, 0x71b18589L, 0x06b6b51fL, - 0x9fbfe4a5L, 0xe8b8d433L, 0x7807c9a2L, 0x0f00f934L, 0x9609a88eL, - 0xe10e9818L, 0x7f6a0dbbL, 0x086d3d2dL, 0x91646c97L, 0xe6635c01L, - 0x6b6b51f4L, 0x1c6c6162L, 0x856530d8L, 0xf262004eL, 0x6c0695edL, - 0x1b01a57bL, 0x8208f4c1L, 0xf50fc457L, 0x65b0d9c6L, 0x12b7e950L, - 0x8bbeb8eaL, 0xfcb9887cL, 0x62dd1ddfL, 0x15da2d49L, 0x8cd37cf3L, - 0xfbd44c65L, 0x4db26158L, 0x3ab551ceL, 0xa3bc0074L, 0xd4bb30e2L, - 0x4adfa541L, 0x3dd895d7L, 0xa4d1c46dL, 0xd3d6f4fbL, 0x4369e96aL, - 0x346ed9fcL, 0xad678846L, 0xda60b8d0L, 0x44042d73L, 0x33031de5L, - 0xaa0a4c5fL, 0xdd0d7cc9L, 0x5005713cL, 0x270241aaL, 0xbe0b1010L, - 0xc90c2086L, 0x5768b525L, 0x206f85b3L, 0xb966d409L, 0xce61e49fL, - 0x5edef90eL, 0x29d9c998L, 0xb0d09822L, 0xc7d7a8b4L, 0x59b33d17L, - 0x2eb40d81L, 0xb7bd5c3bL, 0xc0ba6cadL, 
0xedb88320L, 0x9abfb3b6L, - 0x03b6e20cL, 0x74b1d29aL, 0xead54739L, 0x9dd277afL, 0x04db2615L, - 0x73dc1683L, 0xe3630b12L, 0x94643b84L, 0x0d6d6a3eL, 0x7a6a5aa8L, - 0xe40ecf0bL, 0x9309ff9dL, 0x0a00ae27L, 0x7d079eb1L, 0xf00f9344L, - 0x8708a3d2L, 0x1e01f268L, 0x6906c2feL, 0xf762575dL, 0x806567cbL, - 0x196c3671L, 0x6e6b06e7L, 0xfed41b76L, 0x89d32be0L, 0x10da7a5aL, - 0x67dd4accL, 0xf9b9df6fL, 0x8ebeeff9L, 0x17b7be43L, 0x60b08ed5L, - 0xd6d6a3e8L, 0xa1d1937eL, 0x38d8c2c4L, 0x4fdff252L, 0xd1bb67f1L, - 0xa6bc5767L, 0x3fb506ddL, 0x48b2364bL, 0xd80d2bdaL, 0xaf0a1b4cL, - 0x36034af6L, 0x41047a60L, 0xdf60efc3L, 0xa867df55L, 0x316e8eefL, - 0x4669be79L, 0xcb61b38cL, 0xbc66831aL, 0x256fd2a0L, 0x5268e236L, - 0xcc0c7795L, 0xbb0b4703L, 0x220216b9L, 0x5505262fL, 0xc5ba3bbeL, - 0xb2bd0b28L, 0x2bb45a92L, 0x5cb36a04L, 0xc2d7ffa7L, 0xb5d0cf31L, - 0x2cd99e8bL, 0x5bdeae1dL, 0x9b64c2b0L, 0xec63f226L, 0x756aa39cL, - 0x026d930aL, 0x9c0906a9L, 0xeb0e363fL, 0x72076785L, 0x05005713L, - 0x95bf4a82L, 0xe2b87a14L, 0x7bb12baeL, 0x0cb61b38L, 0x92d28e9bL, - 0xe5d5be0dL, 0x7cdcefb7L, 0x0bdbdf21L, 0x86d3d2d4L, 0xf1d4e242L, - 0x68ddb3f8L, 0x1fda836eL, 0x81be16cdL, 0xf6b9265bL, 0x6fb077e1L, - 0x18b74777L, 0x88085ae6L, 0xff0f6a70L, 0x66063bcaL, 0x11010b5cL, - 0x8f659effL, 0xf862ae69L, 0x616bffd3L, 0x166ccf45L, 0xa00ae278L, - 0xd70dd2eeL, 0x4e048354L, 0x3903b3c2L, 0xa7672661L, 0xd06016f7L, - 0x4969474dL, 0x3e6e77dbL, 0xaed16a4aL, 0xd9d65adcL, 0x40df0b66L, - 0x37d83bf0L, 0xa9bcae53L, 0xdebb9ec5L, 0x47b2cf7fL, 0x30b5ffe9L, - 0xbdbdf21cL, 0xcabac28aL, 0x53b39330L, 0x24b4a3a6L, 0xbad03605L, - 0xcdd70693L, 0x54de5729L, 0x23d967bfL, 0xb3667a2eL, 0xc4614ab8L, - 0x5d681b02L, 0x2a6f2b94L, 0xb40bbe37L, 0xc30c8ea1L, 0x5a05df1bL, - 0x2d02ef8dL + 0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, + 0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4, + 0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, + 0x90bf1d91, 0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de, + 0x1adad47d, 0x6ddde4eb, 
0xf4d4b551, 0x83d385c7, 0x136c9856, + 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9, + 0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4, + 0xa2677172, 0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b, + 0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, + 0x45df5c75, 0xdcd60dcf, 0xabd13d59, 0x26d930ac, 0x51de003a, + 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423, 0xcfba9599, + 0xb8bda50f, 0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924, + 0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d, 0x76dc4190, + 0x01db7106, 0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f, + 0x9fbfe4a5, 0xe8b8d433, 0x7807c9a2, 0x0f00f934, 0x9609a88e, + 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01, + 0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed, + 0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950, + 0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, + 0xfbd44c65, 0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, + 0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a, + 0x346ed9fc, 0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5, + 0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa, 0xbe0b1010, + 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f, + 0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, + 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad, 0xedb88320, 0x9abfb3b6, + 0x03b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af, 0x04db2615, + 0x73dc1683, 0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8, + 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1, 0xf00f9344, + 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb, + 0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a, + 0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5, + 0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, + 0xa6bc5767, 0x3fb506dd, 0x48b2364b, 0xd80d2bda, 0xaf0a1b4c, + 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef, + 0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236, + 0xcc0c7795, 0xbb0b4703, 0x220216b9, 
0x5505262f, 0xc5ba3bbe, + 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, + 0x2cd99e8b, 0x5bdeae1d, 0x9b64c2b0, 0xec63f226, 0x756aa39c, + 0x026d930a, 0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713, + 0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38, 0x92d28e9b, + 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21, 0x86d3d2d4, 0xf1d4e242, + 0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, + 0x18b74777, 0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, + 0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45, 0xa00ae278, + 0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7, + 0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc, 0x40df0b66, + 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9, + 0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, + 0xcdd70693, 0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8, + 0x5d681b02, 0x2a6f2b94, 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, + 0x2d02ef8d ] def crc32(s, crc=0): result = 0 - crc = ~long(crc) & 0xffffffffL + crc = ~int(crc) & 0xffffffff for c in s: - crc = crc_32_tab[(crc ^ c) & 0xffL] ^ (crc >> 8) + crc = crc_32_tab[(crc ^ c) & 0xff] ^ (crc >> 8) #/* Note: (crc >> 8) MUST zero fill on left - result = crc ^ 0xffffffffL + result = crc ^ 0xffffffff if result > 2**31: result = ((result + 2**31) % 2**32) - 2**31 From noreply at buildbot.pypy.org Tue Jan 17 00:26:34 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 17 Jan 2012 00:26:34 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix a test in test_longobject.py Message-ID: <20120116232634.B9A30820D8@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51370:dcae009093d8 Date: 2012-01-16 20:54 +0100 http://bitbucket.org/pypy/pypy/changeset/dcae009093d8/ Log: Fix a test in test_longobject.py diff --git a/pypy/objspace/std/longtype.py b/pypy/objspace/std/longtype.py --- a/pypy/objspace/std/longtype.py +++ b/pypy/objspace/std/longtype.py @@ -40,6 +40,7 @@ w_obj = space.int(w_obj) else: w_obj = space.trunc(w_obj) + w_obj = 
space.int(w_obj) return w_obj else: base = space.int_w(w_base) From noreply at buildbot.pypy.org Tue Jan 17 00:26:35 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 17 Jan 2012 00:26:35 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fixes in termios module Message-ID: <20120116232635.E70AC820D8@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51371:4a6fb1817feb Date: 2012-01-16 23:53 +0100 http://bitbucket.org/pypy/pypy/changeset/4a6fb1817feb/ Log: Fixes in termios module diff --git a/pypy/module/termios/interp_termios.py b/pypy/module/termios/interp_termios.py --- a/pypy/module/termios/interp_termios.py +++ b/pypy/module/termios/interp_termios.py @@ -24,15 +24,11 @@ fd = space.c_filedescriptor_w(w_fd) w_iflag, w_oflag, w_cflag, w_lflag, w_ispeed, w_ospeed, w_cc = \ space.unpackiterable(w_attributes, expected_length=7) - w_builtin = space.getbuiltinmodule('__builtin__') cc = [] for w_c in space.unpackiterable(w_cc): if space.is_true(space.isinstance(w_c, space.w_int)): - ch = space.call_function(space.getattr(w_builtin, - space.wrap('chr')), w_c) - cc.append(space.str_w(ch)) - else: - cc.append(space.str_w(w_c)) + w_c = space.call(space.w_bytes, space.newlist([w_c])) + cc.append(space.bytes_w(w_c)) tup = (space.int_w(w_iflag), space.int_w(w_oflag), space.int_w(w_cflag), space.int_w(w_lflag), space.int_w(w_ispeed), space.int_w(w_ospeed), cc) diff --git a/pypy/module/termios/test/test_termios.py b/pypy/module/termios/test/test_termios.py --- a/pypy/module/termios/test/test_termios.py +++ b/pypy/module/termios/test/test_termios.py @@ -23,7 +23,7 @@ def _spawn(self, *args, **kwds): print 'SPAWN:', args, kwds - child = self.pexpect.spawn(*args, **kwds) + child = self.pexpect.spawn(timeout=600, *args, **kwds) child.logfile = sys.stdout return child @@ -53,7 +53,7 @@ termios.tcdrain(2) termios.tcflush(2, termios.TCIOFLUSH) termios.tcflow(2, termios.TCOON) - print 'ok!' 
+ print('ok!') """) f = udir.join("test_tcall.py") f.write(source) @@ -65,7 +65,7 @@ import sys import termios termios.tcsetattr(sys.stdin, 1, [16640, 4, 191, 2608, 15, 15, ['\x03', '\x1c', '\x7f', '\x15', '\x04', 0, 1, '\x00', '\x11', '\x13', '\x1a', '\x00', '\x12', '\x0f', '\x17', '\x16', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00']]) - print 'ok!' + print('ok!') """) f = udir.join("test_tcsetattr.py") f.write(source) @@ -76,9 +76,9 @@ source = py.code.Source(""" import termios import fcntl - lgt = len(fcntl.ioctl(2, termios.TIOCGWINSZ, '\000'*8)) + lgt = len(fcntl.ioctl(2, termios.TIOCGWINSZ, b'\000'*8)) assert lgt == 8 - print 'ok!' + print('ok!') """) f = udir.join("test_ioctl_termios.py") f.write(source) @@ -97,7 +97,7 @@ assert len([i for i in f[-1] if isinstance(i, int)]) == 2 assert isinstance(f[-1][termios.VMIN], int) assert isinstance(f[-1][termios.VTIME], int) - print 'ok!' + print('ok!') """) f = udir.join("test_ioctl_termios.py") f.write(source) From noreply at buildbot.pypy.org Tue Jan 17 00:26:37 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 17 Jan 2012 00:26:37 +0100 (CET) Subject: [pypy-commit] pypy py3k: bin/checkmodule.py: Add support for space.wrapbytes Message-ID: <20120116232637.19DEB820D8@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51372:3843c6835175 Date: 2012-01-16 23:54 +0100 http://bitbucket.org/pypy/pypy/changeset/3843c6835175/ Log: bin/checkmodule.py: Add support for space.wrapbytes diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -157,6 +157,10 @@ "NOT_RPYTHON" raise NotImplementedError + def wrapbytes(self, x): + assert isinstance(x, str) + return w_some_obj() + def _see_interp2app(self, interp2app): "NOT_RPYTHON" activation = interp2app._code.activation @@ -278,7 +282,7 @@ for name in 
(ObjSpace.ConstantTable + ObjSpace.ExceptionTable + ['int', 'str', 'float', 'long', 'tuple', 'list', - 'dict', 'unicode', 'complex', 'slice', 'bool', + 'dict', 'bytes', 'complex', 'slice', 'bool', 'type', 'text', 'object']): setattr(FakeObjSpace, 'w_' + name, w_some_obj()) # From noreply at buildbot.pypy.org Tue Jan 17 09:56:37 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 09:56:37 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: indexing by bool arrays of the same shape, step 1 Message-ID: <20120117085637.7C341820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51373:bec89443f75e Date: 2012-01-17 10:55 +0200 http://bitbucket.org/pypy/pypy/changeset/bec89443f75e/ Log: indexing by bool arrays of the same shape, step 1 diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -46,6 +46,10 @@ def getitem(self, storage, i): return self.itemtype.read(storage, self.itemtype.get_element_size(), i, 0) + def getitem_bool(self, storage, i): + isize = self.itemtype.get_element_size() + return self.itemtype.read_raw(storage, isize, i, 0, bool) + def setitem(self, storage, i, box): self.itemtype.store(storage, self.itemtype.get_element_size(), i, 0, box) diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -62,11 +62,18 @@ self.size = size def next(self, shapelen): + return self._next(1) + + def _next(self, ofs): arr = instantiate(ArrayIterator) arr.size = self.size - arr.offset = self.offset + 1 + arr.offset = self.offset + ofs return arr + def next_no_increase(self, shapelen): + # a hack to make JIT believe this is always virtual + return self._next(0) + def done(self): return self.offset >= self.size diff --git 
a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -483,7 +483,41 @@ return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] + def count_all_true(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(self) + shapelen = len(arr.shape) + s = 0 + while not frame.done(): + iter = frame.get_final_iter() + s += arr.dtype.getitem_bool(arr.storage, iter.offset) + frame.next(shapelen) + return s + + def getitem_filter(self, space, arr): + concr = arr.get_concrete() + size = self.count_all_true(concr) + res = W_NDimArray(size, [size], self.find_dtype()) + ri = ArrayIterator(size) + shapelen = len(self.shape) + argi = ArrayIterator(concr.size) + sig = self.find_sig() + frame = sig.create_frame(self) + while not frame.done(): + if concr.dtype.getitem_bool(concr.storage, argi.offset): + v = sig.eval(frame, self) + res.setitem(ri.offset, v) + ri = ri.next(1) + else: + ri = ri.next_no_increase(1) + argi = argi.next(shapelen) + frame.next(shapelen) + return res + def descr_getitem(self, space, w_idx): + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.getitem_filter(space, w_idx) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -716,8 +750,7 @@ frame=frame, ri=ri, self=self, result=result) - result.dtype.setitem(result.storage, ri.offset, - sig.eval(frame, self)) + result.setitem(ri.offset, sig.eval(frame, self)) frame.next(shapelen) ri = ri.next(shapelen) return result diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1312,6 +1312,13 @@ 
raises(IndexError,'arange(3)[array([-15])]') assert arange(3)[array(1)] == 1 + def test_array_indexing_bool(self): + from _numpypy import arange + a = arange(10) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + a = arange(10).reshape(5, 2) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -94,6 +94,11 @@ width, storage, i, offset )) + @specialize.arg(5) + def read_raw(self, storage, width, i, offset, tp): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + def store(self, storage, width, i, offset, box): value = self.unbox(box) libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), @@ -436,4 +441,4 @@ class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box - format_code = "d" \ No newline at end of file + format_code = "d" From noreply at buildbot.pypy.org Tue Jan 17 10:10:20 2012 From: noreply at buildbot.pypy.org (benol) Date: Tue, 17 Jan 2012 10:10:20 +0100 (CET) Subject: [pypy-commit] pypy jvm-improvements: Add files generated by PyCharm to .hgignore Message-ID: <20120117091020.03C69820D8@wyvern.cs.uni-duesseldorf.de> Author: Michal Bendowski Branch: jvm-improvements Changeset: r51374:1afe14470fa3 Date: 2012-01-14 20:18 +0100 http://bitbucket.org/pypy/pypy/changeset/1afe14470fa3/ Log: Add files generated by PyCharm to .hgignore diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ From noreply at buildbot.pypy.org Tue Jan 17 10:10:21 2012 From: noreply at buildbot.pypy.org (benol) Date: Tue, 17 Jan 2012 10:10:21 +0100 (CET) Subject: [pypy-commit] pypy jvm-improvements: Fix userspace builders in ootype Message-ID: 
<20120117091021.31C8A820D8@wyvern.cs.uni-duesseldorf.de> Author: Michal Bendowski Branch: jvm-improvements Changeset: r51375:90cb8d420991 Date: 2012-01-11 22:26 +0100 http://bitbucket.org/pypy/pypy/changeset/90cb8d420991/ Log: Fix userspace builders in ootype Implement the getlength() method of StringBuilders in ootype. diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -512,6 +512,7 @@ "ll_append_char": Meth([CHARTP], Void), "ll_append": Meth([STRINGTP], Void), "ll_build": Meth([], STRINGTP), + "ll_getlength": Meth([], Signed), }) self._setup_methods({}) @@ -1543,6 +1544,9 @@ else: return make_unicode(u''.join(self._buf)) + def ll_getlength(self): + return self.ll_build().ll_strlen() + class _null_string_builder(_null_mixin(_string_builder), _string_builder): def __init__(self, STRING_BUILDER): self.__dict__["_TYPE"] = STRING_BUILDER diff --git a/pypy/rpython/ootypesystem/rbuilder.py b/pypy/rpython/ootypesystem/rbuilder.py --- a/pypy/rpython/ootypesystem/rbuilder.py +++ b/pypy/rpython/ootypesystem/rbuilder.py @@ -21,6 +21,10 @@ builder.ll_append_char(char) @staticmethod + def ll_getlength(builder): + return builder.ll_getlength() + + @staticmethod def ll_append(builder, string): builder.ll_append(string) diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -124,9 +124,5 @@ pass class TestOOtype(BaseTestStringBuilder, OORtypeMixin): - def test_string_getlength(self): - py.test.skip("getlength(): not implemented on ootype") - def test_unicode_getlength(self): - py.test.skip("getlength(): not implemented on ootype") def test_append_charpsize(self): py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/translator/jvm/builtin.py b/pypy/translator/jvm/builtin.py --- a/pypy/translator/jvm/builtin.py +++ 
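[Editor's note] The compute_unique_id contract described in the log above — an integer tied to object identity rather than contents, which is why System.identityHashCode fits — has a plain-Python analogue in id(). A sketch of the semantics only, not the rlib implementation:

```python
# Plain-Python analogue of compute_unique_id: id() is identity-based,
# so equal-but-distinct built-in objects still get different ids.
s1 = "foo"
s2 = "".join(["f", "oo"])  # equal contents, but a distinct object built at runtime

assert s1 == s2            # the strings compare equal...
assert id(s1) != id(s2)    # ...yet their identity-based ids differ
```

This is the same trick the test added below uses to build two equal but distinct strings.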
b/pypy/translator/jvm/builtin.py @@ -84,6 +84,9 @@ (ootype.StringBuilder.__class__, "ll_build"): jvm.Method.v(jStringBuilder, "toString", (), jString), + (ootype.StringBuilder.__class__, "ll_getlength"): + jvm.Method.v(jStringBuilder, "length", (), jInt), + (ootype.String.__class__, "ll_hash"): jvm.Method.v(jString, "hashCode", (), jInt), diff --git a/pypy/translator/jvm/test/test_builder.py b/pypy/translator/jvm/test/test_builder.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_builder.py @@ -0,0 +1,7 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rpython.test.test_rbuilder import BaseTestStringBuilder +import py + +class TestJvmStringBuilder(JvmTest, BaseTestStringBuilder): + def test_append_charpsize(self): + py.test.skip("append_charpsize(): not implemented on ootype") From noreply at buildbot.pypy.org Tue Jan 17 10:10:22 2012 From: noreply at buildbot.pypy.org (benol) Date: Tue, 17 Jan 2012 10:10:22 +0100 (CET) Subject: [pypy-commit] pypy jvm-improvements: Fix compute_unique_id to support built-ins in ootype. Message-ID: <20120117091022.5D2F1820D8@wyvern.cs.uni-duesseldorf.de> Author: Michal Bendowski Branch: jvm-improvements Changeset: r51376:143a2edf9601 Date: 2012-01-11 22:29 +0100 http://bitbucket.org/pypy/pypy/changeset/143a2edf9601/ Log: Fix compute_unique_id to support built-ins in ootype. Otherwise the translation fails because it doesn't know how to apply compute_unique_id to a String. In the jvm backend this is implemented by System.identityHashCode() which can be applied to our representations of built-ins equally well as for instances. 
diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -420,7 +420,7 @@ vobj.concretetype.TO._gckind == 'gc') else: from pypy.rpython.ootypesystem import ootype - ok = isinstance(vobj.concretetype, ootype.Instance) + ok = isinstance(vobj.concretetype, (ootype.Instance, ootype.BuiltinType)) if not ok: from pypy.rpython.error import TyperError raise TyperError("compute_unique_id() cannot be applied to" diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -1377,6 +1377,9 @@ def _cast_to_object(self): return make_object(self) + def _identityhash(self): + return hash(self) + class _string(_builtin_type): def __init__(self, STRING, value = ''): diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -463,6 +463,20 @@ assert x1 == intmask(x0) assert x3 == intmask(x2) + def test_id_on_builtins(self): + from pypy.rlib.objectmodel import compute_unique_id + from pypy.rlib.rstring import StringBuilder, UnicodeBuilder + def fn(): + return (compute_unique_id("foo"), + compute_unique_id(u"bar"), + compute_unique_id([1]), + compute_unique_id({"foo": 3}), + compute_unique_id(StringBuilder()), + compute_unique_id(UnicodeBuilder())) + res = self.interpret(fn, []) + for id in self.ll_unpack_tuple(res, 6): + assert isinstance(id, (int, r_longlong)) + def test_cast_primitive(self): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): From noreply at buildbot.pypy.org Tue Jan 17 10:10:23 2012 From: noreply at buildbot.pypy.org (benol) Date: Tue, 17 Jan 2012 10:10:23 +0100 (CET) Subject: [pypy-commit] pypy jvm-improvements: Declare oo_primitives that should implement some rffi operations. 
Message-ID: <20120117091023.831F3820D8@wyvern.cs.uni-duesseldorf.de> Author: Michal Bendowski Branch: jvm-improvements Changeset: r51377:6b650a500d68 Date: 2012-01-14 19:04 +0100 http://bitbucket.org/pypy/pypy/changeset/6b650a500d68/ Log: Declare oo_primitives that should implement some rffi operations. For now the actual implementations are missing, but once we get the JVM backend to work in some way, this will have to be revisited. diff --git a/pypy/rlib/longlong2float.py b/pypy/rlib/longlong2float.py --- a/pypy/rlib/longlong2float.py +++ b/pypy/rlib/longlong2float.py @@ -79,19 +79,23 @@ longlong2float = rffi.llexternal( "pypy__longlong2float", [rffi.LONGLONG], rffi.DOUBLE, _callable=longlong2float_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__longlong2float") float2longlong = rffi.llexternal( "pypy__float2longlong", [rffi.DOUBLE], rffi.LONGLONG, _callable=float2longlong_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__float2longlong") uint2singlefloat = rffi.llexternal( "pypy__uint2singlefloat", [rffi.UINT], rffi.FLOAT, _callable=uint2singlefloat_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__uint2singlefloat") singlefloat2uint = rffi.llexternal( "pypy__singlefloat2uint", [rffi.FLOAT], rffi.UINT, _callable=singlefloat2uint_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__singlefloat2uint") diff --git a/pypy/rlib/rmd5.py b/pypy/rlib/rmd5.py --- a/pypy/rlib/rmd5.py +++ b/pypy/rlib/rmd5.py @@ -51,7 +51,7 @@ _rotateLeft = rffi.llexternal( "pypy__rotateLeft", 
[lltype.Unsigned, lltype.Signed], lltype.Unsigned, _callable=_rotateLeft_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True) + _nowrapper=True, elidable_function=True, oo_primitive='pypy__rotateLeft') # TODO implement the oo_primitive # we expect the function _rotateLeft to be actually inlined From noreply at buildbot.pypy.org Tue Jan 17 10:10:24 2012 From: noreply at buildbot.pypy.org (benol) Date: Tue, 17 Jan 2012 10:10:24 +0100 (CET) Subject: [pypy-commit] pypy jvm-improvements: Add a missing cast from Unsigned to UnsignedLongLong in the JVM Message-ID: <20120117091024.AA136820D8@wyvern.cs.uni-duesseldorf.de> Author: Michal Bendowski Branch: jvm-improvements Changeset: r51378:c59ec9806ac6 Date: 2012-01-14 20:15 +0100 http://bitbucket.org/pypy/pypy/changeset/c59ec9806ac6/ Log: Add a missing cast from Unsigned to UnsignedLongLong in the JVM backend. diff --git a/pypy/translator/jvm/metavm.py b/pypy/translator/jvm/metavm.py --- a/pypy/translator/jvm/metavm.py +++ b/pypy/translator/jvm/metavm.py @@ -92,6 +92,7 @@ CASTS = { # FROM TO (ootype.Signed, ootype.UnsignedLongLong): jvm.I2L, + (ootype.Unsigned, ootype.UnsignedLongLong): jvm.I2L, (ootype.SignedLongLong, ootype.Signed): jvm.L2I, (ootype.UnsignedLongLong, ootype.Unsigned): jvm.L2I, (ootype.UnsignedLongLong, ootype.Signed): jvm.L2I, diff --git a/pypy/translator/oosupport/test_template/cast.py b/pypy/translator/oosupport/test_template/cast.py --- a/pypy/translator/oosupport/test_template/cast.py +++ b/pypy/translator/oosupport/test_template/cast.py @@ -13,6 +13,9 @@ def to_longlong(x): return r_longlong(x) +def to_ulonglong(x): + return r_ulonglong(x) + def uint_to_int(x): return intmask(x) @@ -56,6 +59,9 @@ def test_unsignedlonglong_to_unsigned4(self): self.check(to_uint, [r_ulonglong(18446744073709551615l)]) # max 64 bit num + def test_unsigned_to_usignedlonglong(self): + self.check(to_ulonglong, [r_uint(42)]) + def test_uint_to_int(self): self.check(uint_to_int, [r_uint(sys.maxint+1)]) 
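[Editor's note] On the cast added above: the JVM's i2l instruction sign-extends a 32-bit pattern to 64 bits, which coincides with a plain zero-extension only while the top bit is clear — as it is for the r_uint(42) in the new test. A Python sketch of the two widenings (helper names illustrative, not backend code):

```python
def i2l(bits32):
    # JVM i2l: interpret the 32-bit pattern as signed, sign-extend it to
    # 64 bits, and return the resulting 64-bit pattern
    value = bits32 - 2**32 if bits32 & 0x80000000 else bits32
    return value & 0xFFFFFFFFFFFFFFFF

def zero_extend(bits32):
    # unsigned widening: the upper 32 bits simply stay zero
    return bits32

assert i2l(42) == zero_extend(42) == 42        # values with a clear top bit agree
assert i2l(0xFFFFFFFF) == 0xFFFFFFFFFFFFFFFF   # top bit set: sign-extended
assert zero_extend(0xFFFFFFFF) == 0xFFFFFFFF   # top bit set: zero-extended
```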
From noreply at buildbot.pypy.org Tue Jan 17 10:10:25 2012 From: noreply at buildbot.pypy.org (benol) Date: Tue, 17 Jan 2012 10:10:25 +0100 (CET) Subject: [pypy-commit] pypy jvm-improvements: Handle the 'jit_is_virtual' opcode by always returning False Message-ID: <20120117091025.D29F0820D8@wyvern.cs.uni-duesseldorf.de> Author: Michal Bendowski Branch: jvm-improvements Changeset: r51379:e31e85a8d333 Date: 2012-01-14 19:01 +0100 http://bitbucket.org/pypy/pypy/changeset/e31e85a8d333/ Log: Handle the 'jit_is_virtual' opcode by always returning False diff --git a/pypy/translator/jvm/opcodes.py b/pypy/translator/jvm/opcodes.py --- a/pypy/translator/jvm/opcodes.py +++ b/pypy/translator/jvm/opcodes.py @@ -101,6 +101,7 @@ 'jit_force_virtualizable': Ignore, 'jit_force_virtual': DoNothing, 'jit_force_quasi_immutable': Ignore, + 'jit_is_virtual': PushPrimitive(ootype.Bool, False), 'debug_assert': [], # TODO: implement? 'debug_start_traceback': Ignore, From noreply at buildbot.pypy.org Tue Jan 17 10:10:27 2012 From: noreply at buildbot.pypy.org (benol) Date: Tue, 17 Jan 2012 10:10:27 +0100 (CET) Subject: [pypy-commit] pypy jvm-improvements: Implemented float2longlong and longlong2float for the JVM. Message-ID: <20120117091027.0EEFE820D8@wyvern.cs.uni-duesseldorf.de> Author: Michal Bendowski Branch: jvm-improvements Changeset: r51380:5ba62496112e Date: 2012-01-16 00:56 +0100 http://bitbucket.org/pypy/pypy/changeset/5ba62496112e/ Log: Implemented float2longlong and longlong2float for the JVM. Also removed the oo_primitive for pypy__rotateLeft - it's not needed on 32 bit architecture (and JVM backend doesn't support 64 bit anyway). 
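[Editor's note] The two conversions described in the log above are bit-level reinterpretations, not numeric casts; in plain Python the same effect can be had with struct. A sketch of the semantics only, not the rlib or JVM code:

```python
import struct

def float2longlong(d):
    # reinterpret the 8 bytes of an IEEE-754 double as a signed 64-bit integer
    return struct.unpack("<q", struct.pack("<d", d))[0]

def longlong2float(ll):
    # the inverse reinterpretation: 64-bit integer pattern back to a double
    return struct.unpack("<d", struct.pack("<q", ll))[0]

assert longlong2float(float2longlong(2.5)) == 2.5   # round-trips exactly
assert float2longlong(2.5) == 0x4004000000000000    # IEEE-754 bit pattern of 2.5
```

Double.longBitsToDouble/doubleToRawLongBits (used in the later changeset) perform exactly this reinterpretation on the JVM.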
diff --git a/pypy/rlib/rmd5.py b/pypy/rlib/rmd5.py --- a/pypy/rlib/rmd5.py +++ b/pypy/rlib/rmd5.py @@ -51,7 +51,7 @@ _rotateLeft = rffi.llexternal( "pypy__rotateLeft", [lltype.Unsigned, lltype.Signed], lltype.Unsigned, _callable=_rotateLeft_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, oo_primitive='pypy__rotateLeft') # TODO implement the oo_primitive + _nowrapper=True, elidable_function=True) # we expect the function _rotateLeft to be actually inlined diff --git a/pypy/translator/jvm/database.py b/pypy/translator/jvm/database.py --- a/pypy/translator/jvm/database.py +++ b/pypy/translator/jvm/database.py @@ -358,7 +358,7 @@ ootype.Unsigned:jvm.PYPYSERIALIZEUINT, ootype.SignedLongLong:jvm.LONGTOSTRINGL, ootype.UnsignedLongLong: jvm.PYPYSERIALIZEULONG, - ootype.Float:jvm.DOUBLETOSTRINGD, + ootype.Float:jvm.PYPYSERIALIZEDOUBLE, ootype.Bool:jvm.PYPYSERIALIZEBOOLEAN, ootype.Void:jvm.PYPYSERIALIZEVOID, ootype.Char:jvm.PYPYESCAPEDCHAR, diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -8,6 +8,7 @@ import java.util.Map; import java.text.DecimalFormat; import java.lang.reflect.Array; +import java.nio.ByteBuffer; /** * Class with a number of utility routines. 
One instance of this is @@ -283,6 +284,20 @@ } } + public double pypy__longlong2float(long l) { + ByteBuffer buf = ByteBuffer.allocate(8); + buf.putLong(l); + buf.flip(); + return buf.getDouble(); + } + + public long pypy__float2longlong(double d) { + ByteBuffer buf = ByteBuffer.allocate(8); + buf.putDouble(d); + buf.flip(); + return buf.getLong(); + } + public double ooparse_float(String s) { try { return Double.parseDouble(s); @@ -353,6 +368,19 @@ return "False"; } + public static String serialize_double(double d) { + if (Double.isNaN(d)) { + return "float(\"nan\")"; + } else if (Double.isInfinite(d)) { + if (d > 0) + return "float(\"inf\")"; + else + return "float(\"-inf\")"; + } else { + return Double.toString(d); + } + } + private static String format_char(char c) { String res = "\\x"; if (c <= 0x0F) res = res + "0"; diff --git a/pypy/translator/jvm/test/runtest.py b/pypy/translator/jvm/test/runtest.py --- a/pypy/translator/jvm/test/runtest.py +++ b/pypy/translator/jvm/test/runtest.py @@ -56,6 +56,7 @@ # CLI could-be duplicate class JvmGeneratedSourceWrapper(object): + def __init__(self, gensrc): """ gensrc is an instance of JvmGeneratedSource """ self.gensrc = gensrc diff --git a/pypy/translator/jvm/test/test_longlong2float.py b/pypy/translator/jvm/test/test_longlong2float.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_longlong2float.py @@ -0,0 +1,20 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rlib.longlong2float import * +from pypy.rlib.test.test_longlong2float import enum_floats +from pypy.rlib.test.test_longlong2float import fn as float2longlong2float +import py + +class TestLongLong2Float(JvmTest): + + def test_float2longlong_and_longlong2float(self): + def func(f): + return float2longlong2float(f) + + for f in enum_floats(): + assert repr(f) == repr(self.interpret(func, [f])) + + def test_uint2singlefloat(self): + py.test.skip("uint2singlefloat is not implemented in ootype") + + def 
test_singlefloat2uint(self): + py.test.skip("singlefloat2uint is not implemented in ootype") diff --git a/pypy/translator/jvm/typesystem.py b/pypy/translator/jvm/typesystem.py --- a/pypy/translator/jvm/typesystem.py +++ b/pypy/translator/jvm/typesystem.py @@ -955,6 +955,7 @@ PYPYSERIALIZEUINT = Method.s(jPyPy, 'serialize_uint', (jInt,), jString) PYPYSERIALIZEULONG = Method.s(jPyPy, 'serialize_ulonglong', (jLong,),jString) PYPYSERIALIZEVOID = Method.s(jPyPy, 'serialize_void', (), jString) +PYPYSERIALIZEDOUBLE = Method.s(jPyPy, 'serialize_double', (jDouble,), jString) PYPYESCAPEDCHAR = Method.s(jPyPy, 'escaped_char', (jChar,), jString) PYPYESCAPEDUNICHAR = Method.s(jPyPy, 'escaped_unichar', (jChar,), jString) PYPYESCAPEDSTRING = Method.s(jPyPy, 'escaped_string', (jString,), jString) From noreply at buildbot.pypy.org Tue Jan 17 10:10:28 2012 From: noreply at buildbot.pypy.org (benol) Date: Tue, 17 Jan 2012 10:10:28 +0100 (CET) Subject: [pypy-commit] pypy jvm-improvements: Simpler implementations of float2longlong and longlong2float. Message-ID: <20120117091028.3F46C820D8@wyvern.cs.uni-duesseldorf.de> Author: Michal Bendowski Branch: jvm-improvements Changeset: r51381:79b211ffff3c Date: 2012-01-16 21:19 +0100 http://bitbucket.org/pypy/pypy/changeset/79b211ffff3c/ Log: Simpler implementations of float2longlong and longlong2float. diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -8,7 +8,6 @@ import java.util.Map; import java.text.DecimalFormat; import java.lang.reflect.Array; -import java.nio.ByteBuffer; /** * Class with a number of utility routines. 
One instance of this is @@ -285,17 +284,11 @@ } public double pypy__longlong2float(long l) { - ByteBuffer buf = ByteBuffer.allocate(8); - buf.putLong(l); - buf.flip(); - return buf.getDouble(); + return Double.longBitsToDouble(l); } public long pypy__float2longlong(double d) { - ByteBuffer buf = ByteBuffer.allocate(8); - buf.putDouble(d); - buf.flip(); - return buf.getLong(); + return Double.doubleToRawLongBits(d); } public double ooparse_float(String s) { From noreply at buildbot.pypy.org Tue Jan 17 10:10:29 2012 From: noreply at buildbot.pypy.org (benol) Date: Tue, 17 Jan 2012 10:10:29 +0100 (CET) Subject: [pypy-commit] pypy jvm-improvements: Fix the implementation of compute_unique_id for _builtin_type. Message-ID: <20120117091029.6BEFE820D8@wyvern.cs.uni-duesseldorf.de> Author: Michal Bendowski Branch: jvm-improvements Changeset: r51382:259f2ab80ae4 Date: 2012-01-16 22:26 +0100 http://bitbucket.org/pypy/pypy/changeset/259f2ab80ae4/ Log: Fix the implementation of compute_unique_id for _builtin_type. 
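The fix below can be illustrated with a small plain-Python sketch (hedged: the `Str` class is a hypothetical stand-in for ootype's value-hashed `_string`, not PyPy internals). A value-based hash maps equal-but-distinct objects to the same number, so it cannot serve as a unique id; `object.__hash__` falls back to identity, which is what `_identityhash` needs.

```python
class Str:
    """Hypothetical stand-in for a value-hashed builtin type."""
    def __init__(self, value):
        self.value = value

    def __hash__(self):
        # value-based hash, like the old `return hash(self)` behaviour
        return hash(self.value)

s1 = Str("foo")
s2 = Str("".join(["f", "oo"]))

# Equal contents give equal value hashes: useless as a unique id.
assert hash(s1) == hash(s2)

# object.__hash__ is identity-based, so two distinct live objects
# get distinct values, matching the new _identityhash behaviour.
assert object.__hash__(s1) != object.__hash__(s2)
```

This mirrors the `test_uniqueness_of_id_on_strings` test added in the changeset: two equal strings built separately must still get different unique ids.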
diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -1378,7 +1378,7 @@ return make_object(self) def _identityhash(self): - return hash(self) + return object.__hash__(self) class _string(_builtin_type): diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -477,6 +477,17 @@ for id in self.ll_unpack_tuple(res, 6): assert isinstance(id, (int, r_longlong)) + def test_uniqueness_of_id_on_strings(self): + from pypy.rlib.objectmodel import compute_unique_id + def fn(s1, s2): + return (compute_unique_id(s1), compute_unique_id(s2)) + + s1 = "foo" + s2 = ''.join(['f','oo']) + res = self.interpret(fn, [self.string_to_ll(s1), self.string_to_ll(s2)]) + i1, i2 = self.ll_unpack_tuple(res, 2) + assert i1 != i2 + def test_cast_primitive(self): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): From noreply at buildbot.pypy.org Tue Jan 17 10:29:48 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 10:29:48 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: add & and | Message-ID: <20120117092948.C88A5820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51383:35596eb33a6f Date: 2012-01-17 11:29 +0200 http://bitbucket.org/pypy/pypy/changeset/35596eb33a6f/ Log: add & and | diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -85,6 +85,8 @@ ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ('bitwise_and', 'bitwise_and'), + ('bitwise_or', 'bitwise_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/interp_numarray.py 
b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -270,6 +270,9 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + def _binop_right_impl(ufunc_name): def impl(self, space, w_other): w_other = scalar_w(space, @@ -1279,6 +1282,9 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -425,6 +425,8 @@ ("add", "add", 2, {"identity": 0}), ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), + ("bitwise_and", "bitwise_and", 2, {"identity": 1}), + ("bitwise_or", "bitwise_or", 2, {"identity": 0}), ("divide", "div", 2, {"promote_bools": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1318,6 +1318,7 @@ assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() a = arange(10).reshape(5, 2) assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + assert (a[a & 1] == [0, 2, 4, 6, 8]).all() class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -347,11 +347,19 @@ 
raises(ValueError, maximum.reduce, []) def test_reduceND(self): - from numpypy import add, arange + from _numpypy import add, arange a = arange(12).reshape(3, 4) assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_bitwise(self): + from _numpypy import bitwise_and, bitwise_or, arange + a = arange(6).reshape(2, 3) + assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() + assert (a & 1 == bitwise_and(a, 1)).all() + assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() + assert (a | 1 == bitwise_or(a, 1)).all() + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -174,6 +174,15 @@ def min(self, v1, v2): return min(v1, v2) + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + + class Bool(BaseType, Primitive): T = lltype.Bool BoxType = interp_boxes.W_BoolBox From noreply at buildbot.pypy.org Tue Jan 17 10:53:32 2012 From: noreply at buildbot.pypy.org (hager) Date: Tue, 17 Jan 2012 10:53:32 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: check var.type in make_sure_var_in_reg Message-ID: <20120117095332.BE5C3820D8@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51384:909126039943 Date: 2012-01-17 10:51 +0100 http://bitbucket.org/pypy/pypy/changeset/909126039943/ Log: check var.type in make_sure_var_in_reg diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -306,8 +306,11 @@ def make_sure_var_in_reg(self, var, forbidden_vars=[], selected_reg=None, need_lower_byte=False): - return self.rm.make_sure_var_in_reg(var, forbidden_vars, - 
selected_reg, need_lower_byte) + if var.type == FLOAT: + assert 0, "not implemented yet" + else: + return self.rm.make_sure_var_in_reg(var, forbidden_vars, + selected_reg, need_lower_byte) def _sync_var(self, v): if v.type == FLOAT: From noreply at buildbot.pypy.org Tue Jan 17 11:10:54 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 11:10:54 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: good, fix the test Message-ID: <20120117101054.E80EE820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51385:23d3ca4e7881 Date: 2012-01-17 12:10 +0200 http://bitbucket.org/pypy/pypy/changeset/23d3ca4e7881/ Log: good, fix the test diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1318,7 +1318,7 @@ assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() a = arange(10).reshape(5, 2) assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() - assert (a[a & 1] == [0, 2, 4, 6, 8]).all() + assert (a[a & 1 == 1] == [1, 3, 5, 7, 9]).all() class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): From cfbolz at gmx.de Tue Jan 17 11:11:05 2012 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Tue, 17 Jan 2012 11:11:05 +0100 Subject: [pypy-commit] pypy default: A bit experimental - try to preallocate the size of unicode join and remove In-Reply-To: <20120116201610.009C0820D8@wyvern.cs.uni-duesseldorf.de> References: <20120116201610.009C0820D8@wyvern.cs.uni-duesseldorf.de> Message-ID: <4F154939.4030304@gmx.de> Comments inline. 
On 01/16/2012 09:16 PM, fijal wrote: > Author: Maciej Fijalkowski > Branch: > Changeset: r51364:f6b8525d8a10 > Date: 2012-01-16 22:14 +0200 > http://bitbucket.org/pypy/pypy/changeset/f6b8525d8a10/ > > Log: A bit experimental - try to preallocate the size of unicode join and > remove a pointless performance hack (the general optimization should > work already) > > diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py > --- a/pypy/objspace/std/unicodeobject.py > +++ b/pypy/objspace/std/unicodeobject.py > @@ -201,7 +201,7 @@ > return space.newbool(container.find(item) != -1) > > def unicode_join__Unicode_ANY(space, w_self, w_list): > - list_w = space.unpackiterable(w_list) > + list_w = space.listview(w_list) > size = len(list_w) > > if size == 0: > @@ -216,22 +216,21 @@ > > def _unicode_join_many_items(space, w_self, list_w, size): > self = w_self._value > - sb = UnicodeBuilder() > + prealloc_size = 0 > + for i in range(size): > + prealloc_size += len(space.unicode_w(list_w[i])) > + sb = UnicodeBuilder(prealloc_size) Doesn't this produce the wrong error message if something in the list is not a unicode object? I guess no tests fails because nobody checks the error message. 
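The concern can be sketched at app level (hedged: `coerce` is a hypothetical stand-in for `space.unicode_w`, not the real interp-level code). The preallocation pass now touches every item before reaching the loop that rewraps the TypeError with the item index, so a non-unicode item surfaces the generic message instead of the informative one:

```python
def coerce(item):
    # stands in for space.unicode_w: rejects non-unicode items
    if not isinstance(item, str):
        raise TypeError("generic type error")
    return item

def join_with_prealloc(sep, items):
    # preallocation pass: errors escape here, un-rewrapped
    size = len(sep) * (len(items) - 1)
    for item in items:
        size += len(coerce(item))
    out = []
    for i, item in enumerate(items):
        try:
            out.append(coerce(item))
        except TypeError:
            # only this pass produces the informative message
            raise TypeError(
                "sequence item %d: expected string or Unicode" % i)
    return sep.join(out)

try:
    join_with_prealloc("-", ["a", 42, "b"])
except TypeError as e:
    msg = str(e)

# the informative "sequence item 1: ..." message is never produced
assert msg == "generic type error"
```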
Cheers, Carl Friedrich > for i in range(size): > if self and i != 0: > sb.append(self) > w_s = list_w[i] > - if isinstance(w_s, W_UnicodeObject): > - # shortcut for performance > - sb.append(w_s._value) > - else: > - try: > - sb.append(space.unicode_w(w_s)) > - except OperationError, e: > - if not e.match(space, space.w_TypeError): > - raise > - raise operationerrfmt(space.w_TypeError, > - "sequence item %d: expected string or Unicode", i) > + try: > + sb.append(space.unicode_w(w_s)) > + except OperationError, e: > + if not e.match(space, space.w_TypeError): > + raise > + raise operationerrfmt(space.w_TypeError, > + "sequence item %d: expected string or Unicode", i) > return space.wrap(sb.build()) > > def hash__Unicode(space, w_uni): > _______________________________________________ > pypy-commit mailing list > pypy-commit at python.org > http://mail.python.org/mailman/listinfo/pypy-commit From noreply at buildbot.pypy.org Tue Jan 17 11:11:51 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 17 Jan 2012 11:11:51 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: (all) planning for today Message-ID: <20120117101151.E2DED820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: extradoc Changeset: r4039:eb50533e45ea Date: 2012-01-17 11:11 +0100 http://bitbucket.org/pypy/extradoc/changeset/eb50533e45ea/ Log: (all) planning for today diff --git a/sprintinfo/leysin-winter-2012/planning.txt b/sprintinfo/leysin-winter-2012/planning.txt --- a/sprintinfo/leysin-winter-2012/planning.txt +++ b/sprintinfo/leysin-winter-2012/planning.txt @@ -9,17 +9,19 @@ Things we want to do -------------------- -* some skiing +* some skiing: wednesday (last day of sun :-/ ) -* review the JVM backend pull request (anto) +* review the JVM backend pull request (DONE) -* py3k (romain) +* py3k (romain, anto later) * ffistruct -* Cython backend (romain) +* Cython backend * STM - start work on the GC (anto, arigo) + - refactored the RPython API: mostly done, must adapt 
targetdemo*.py + (anto, arigo) + - start work on the GC (arigo later?) * concurrent-marksweep GC From noreply at buildbot.pypy.org Tue Jan 17 11:16:33 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 11:16:33 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: jit merge points Message-ID: <20120117101633.A8E49820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51386:fed67b9508a6 Date: 2012-01-17 12:15 +0200 http://bitbucket.org/pypy/pypy/changeset/fed67b9508a6/ Log: jit merge points diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -40,6 +40,18 @@ get_printable_location=signature.new_printable_location('slice'), name='numpy_slice', ) +count_driver = jit.JitDriver( + greens=['shapelen'], + virtualizables=['frame'], + reds=['frame', 's', 'iter', 'arr'], + name='numpy_count' +) +filter_driver = jit.JitDriver( + greens=['sig', 'shapelen'], + virtualizables=['frame'], + reds=['concr', 'argi', 'ri', 'frame', 'v', 'res'], + name='numpy_filter', +) def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] @@ -491,7 +503,10 @@ frame = sig.create_frame(self) shapelen = len(arr.shape) s = 0 + iter = None while not frame.done(): + count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + shapelen=shapelen) iter = frame.get_final_iter() s += arr.dtype.getitem_bool(arr.storage, iter.offset) frame.next(shapelen) @@ -506,7 +521,11 @@ argi = ArrayIterator(concr.size) sig = self.find_sig() frame = sig.create_frame(self) + v = None while not frame.done(): + filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, + frame=frame, v=v, res=res, sig=sig, + shapelen=shapelen) if concr.dtype.getitem_bool(concr.storage, argi.offset): v = sig.eval(frame, self) res.setitem(ri.offset, v) From noreply at 
buildbot.pypy.org Tue Jan 17 11:27:20 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 11:27:20 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: fix test_zjit. Skip the test that fails every time someone touches the world. Message-ID: <20120117102720.ACD76820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51387:92eaf5297003 Date: 2012-01-17 12:26 +0200 http://bitbucket.org/pypy/pypy/changeset/92eaf5297003/ Log: fix test_zjit. Skip the test that fails every time someone touches the world. diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -48,7 +48,7 @@ def getitem_bool(self, storage, i): isize = self.itemtype.get_element_size() - return self.itemtype.read_raw(storage, isize, i, 0, bool) + return self.itemtype.read_bool(storage, isize, i, 0) def setitem(self, storage, i, box): self.itemtype.store(storage, self.itemtype.get_element_size(), i, 0, box) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -43,13 +43,13 @@ count_driver = jit.JitDriver( greens=['shapelen'], virtualizables=['frame'], - reds=['frame', 's', 'iter', 'arr'], + reds=['s', 'frame', 'iter', 'arr'], name='numpy_count' ) filter_driver = jit.JitDriver( - greens=['sig', 'shapelen'], + greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['concr', 'argi', 'ri', 'frame', 'v', 'res'], + reds=['concr', 'argi', 'ri', 'frame', 'v', 'res', 'self'], name='numpy_filter', ) @@ -525,7 +525,7 @@ while not frame.done(): filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, frame=frame, v=v, res=res, sig=sig, - shapelen=shapelen) + shapelen=shapelen, self=self) if concr.dtype.getitem_bool(concr.storage, argi.offset): v = 
sig.eval(frame, self) res.setitem(ri.offset, v) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -217,6 +217,7 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. + py.test.skip("too fragile") self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, 'getfield_gc_pure': 8, diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -94,10 +94,8 @@ width, storage, i, offset )) - @specialize.arg(5) - def read_raw(self, storage, width, i, offset, tp): - return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), - width, storage, i, offset) + def read_bool(self, storage, width, i, offset): + raise NotImplementedError def store(self, storage, width, i, offset, box): value = self.unbox(box) @@ -199,6 +197,11 @@ else: return self.False + + def read_bool(self, storage, width, i, offset): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + def coerce_subtype(self, space, w_subtype, w_item): # Doesn't return subclasses so it can return the constants. 
return self._coerce(space, w_item) From noreply at buildbot.pypy.org Tue Jan 17 11:29:46 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 17 Jan 2012 11:29:46 +0100 (CET) Subject: [pypy-commit] pypy jvm-improvements: close about-to-be-merged branch Message-ID: <20120117102946.DC461820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: jvm-improvements Changeset: r51388:88e6de55b13e Date: 2012-01-17 11:28 +0100 http://bitbucket.org/pypy/pypy/changeset/88e6de55b13e/ Log: close about-to-be-merged branch From noreply at buildbot.pypy.org Tue Jan 17 11:29:48 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 17 Jan 2012 11:29:48 +0100 (CET) Subject: [pypy-commit] pypy default: (benol) merge the jvm-improvements branch, which fixes (again :-)) the translation for the JVM backend Message-ID: <20120117102948.1EF92820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r51389:c1b66ebfb441 Date: 2012-01-17 11:29 +0100 http://bitbucket.org/pypy/pypy/changeset/c1b66ebfb441/ Log: (benol) merge the jvm-improvements branch, which fixes (again :-)) the translation for the JVM backend diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/pypy/rlib/longlong2float.py b/pypy/rlib/longlong2float.py --- a/pypy/rlib/longlong2float.py +++ b/pypy/rlib/longlong2float.py @@ -79,19 +79,23 @@ longlong2float = rffi.llexternal( "pypy__longlong2float", [rffi.LONGLONG], rffi.DOUBLE, _callable=longlong2float_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__longlong2float") float2longlong = rffi.llexternal( "pypy__float2longlong", [rffi.DOUBLE], rffi.LONGLONG, _callable=float2longlong_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, 
elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__float2longlong") uint2singlefloat = rffi.llexternal( "pypy__uint2singlefloat", [rffi.UINT], rffi.FLOAT, _callable=uint2singlefloat_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__uint2singlefloat") singlefloat2uint = rffi.llexternal( "pypy__singlefloat2uint", [rffi.FLOAT], rffi.UINT, _callable=singlefloat2uint_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__singlefloat2uint") diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -420,7 +420,7 @@ vobj.concretetype.TO._gckind == 'gc') else: from pypy.rpython.ootypesystem import ootype - ok = isinstance(vobj.concretetype, ootype.Instance) + ok = isinstance(vobj.concretetype, (ootype.Instance, ootype.BuiltinType)) if not ok: from pypy.rpython.error import TyperError raise TyperError("compute_unique_id() cannot be applied to" diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -512,6 +512,7 @@ "ll_append_char": Meth([CHARTP], Void), "ll_append": Meth([STRINGTP], Void), "ll_build": Meth([], STRINGTP), + "ll_getlength": Meth([], Signed), }) self._setup_methods({}) @@ -1376,6 +1377,9 @@ def _cast_to_object(self): return make_object(self) + def _identityhash(self): + return object.__hash__(self) + class _string(_builtin_type): def __init__(self, STRING, value = ''): @@ -1543,6 +1547,9 @@ else: return make_unicode(u''.join(self._buf)) + def ll_getlength(self): + return self.ll_build().ll_strlen() + class _null_string_builder(_null_mixin(_string_builder), _string_builder): def __init__(self, STRING_BUILDER): 
self.__dict__["_TYPE"] = STRING_BUILDER diff --git a/pypy/rpython/ootypesystem/rbuilder.py b/pypy/rpython/ootypesystem/rbuilder.py --- a/pypy/rpython/ootypesystem/rbuilder.py +++ b/pypy/rpython/ootypesystem/rbuilder.py @@ -21,6 +21,10 @@ builder.ll_append_char(char) @staticmethod + def ll_getlength(builder): + return builder.ll_getlength() + + @staticmethod def ll_append(builder, string): builder.ll_append(string) diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -124,9 +124,5 @@ pass class TestOOtype(BaseTestStringBuilder, OORtypeMixin): - def test_string_getlength(self): - py.test.skip("getlength(): not implemented on ootype") - def test_unicode_getlength(self): - py.test.skip("getlength(): not implemented on ootype") def test_append_charpsize(self): py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -463,6 +463,31 @@ assert x1 == intmask(x0) assert x3 == intmask(x2) + def test_id_on_builtins(self): + from pypy.rlib.objectmodel import compute_unique_id + from pypy.rlib.rstring import StringBuilder, UnicodeBuilder + def fn(): + return (compute_unique_id("foo"), + compute_unique_id(u"bar"), + compute_unique_id([1]), + compute_unique_id({"foo": 3}), + compute_unique_id(StringBuilder()), + compute_unique_id(UnicodeBuilder())) + res = self.interpret(fn, []) + for id in self.ll_unpack_tuple(res, 6): + assert isinstance(id, (int, r_longlong)) + + def test_uniqueness_of_id_on_strings(self): + from pypy.rlib.objectmodel import compute_unique_id + def fn(s1, s2): + return (compute_unique_id(s1), compute_unique_id(s2)) + + s1 = "foo" + s2 = ''.join(['f','oo']) + res = self.interpret(fn, [self.string_to_ll(s1), self.string_to_ll(s2)]) + i1, i2 = self.ll_unpack_tuple(res, 2) + 
assert i1 != i2 + def test_cast_primitive(self): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): diff --git a/pypy/translator/jvm/builtin.py b/pypy/translator/jvm/builtin.py --- a/pypy/translator/jvm/builtin.py +++ b/pypy/translator/jvm/builtin.py @@ -84,6 +84,9 @@ (ootype.StringBuilder.__class__, "ll_build"): jvm.Method.v(jStringBuilder, "toString", (), jString), + (ootype.StringBuilder.__class__, "ll_getlength"): + jvm.Method.v(jStringBuilder, "length", (), jInt), + (ootype.String.__class__, "ll_hash"): jvm.Method.v(jString, "hashCode", (), jInt), diff --git a/pypy/translator/jvm/database.py b/pypy/translator/jvm/database.py --- a/pypy/translator/jvm/database.py +++ b/pypy/translator/jvm/database.py @@ -358,7 +358,7 @@ ootype.Unsigned:jvm.PYPYSERIALIZEUINT, ootype.SignedLongLong:jvm.LONGTOSTRINGL, ootype.UnsignedLongLong: jvm.PYPYSERIALIZEULONG, - ootype.Float:jvm.DOUBLETOSTRINGD, + ootype.Float:jvm.PYPYSERIALIZEDOUBLE, ootype.Bool:jvm.PYPYSERIALIZEBOOLEAN, ootype.Void:jvm.PYPYSERIALIZEVOID, ootype.Char:jvm.PYPYESCAPEDCHAR, diff --git a/pypy/translator/jvm/metavm.py b/pypy/translator/jvm/metavm.py --- a/pypy/translator/jvm/metavm.py +++ b/pypy/translator/jvm/metavm.py @@ -92,6 +92,7 @@ CASTS = { # FROM TO (ootype.Signed, ootype.UnsignedLongLong): jvm.I2L, + (ootype.Unsigned, ootype.UnsignedLongLong): jvm.I2L, (ootype.SignedLongLong, ootype.Signed): jvm.L2I, (ootype.UnsignedLongLong, ootype.Unsigned): jvm.L2I, (ootype.UnsignedLongLong, ootype.Signed): jvm.L2I, diff --git a/pypy/translator/jvm/opcodes.py b/pypy/translator/jvm/opcodes.py --- a/pypy/translator/jvm/opcodes.py +++ b/pypy/translator/jvm/opcodes.py @@ -101,6 +101,7 @@ 'jit_force_virtualizable': Ignore, 'jit_force_virtual': DoNothing, 'jit_force_quasi_immutable': Ignore, + 'jit_is_virtual': PushPrimitive(ootype.Bool, False), 'debug_assert': [], # TODO: implement? 
'debug_start_traceback': Ignore, diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -283,6 +283,14 @@ } } + public double pypy__longlong2float(long l) { + return Double.longBitsToDouble(l); + } + + public long pypy__float2longlong(double d) { + return Double.doubleToRawLongBits(d); + } + public double ooparse_float(String s) { try { return Double.parseDouble(s); @@ -353,6 +361,19 @@ return "False"; } + public static String serialize_double(double d) { + if (Double.isNaN(d)) { + return "float(\"nan\")"; + } else if (Double.isInfinite(d)) { + if (d > 0) + return "float(\"inf\")"; + else + return "float(\"-inf\")"; + } else { + return Double.toString(d); + } + } + private static String format_char(char c) { String res = "\\x"; if (c <= 0x0F) res = res + "0"; diff --git a/pypy/translator/jvm/test/runtest.py b/pypy/translator/jvm/test/runtest.py --- a/pypy/translator/jvm/test/runtest.py +++ b/pypy/translator/jvm/test/runtest.py @@ -56,6 +56,7 @@ # CLI could-be duplicate class JvmGeneratedSourceWrapper(object): + def __init__(self, gensrc): """ gensrc is an instance of JvmGeneratedSource """ self.gensrc = gensrc diff --git a/pypy/translator/jvm/test/test_builder.py b/pypy/translator/jvm/test/test_builder.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_builder.py @@ -0,0 +1,7 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rpython.test.test_rbuilder import BaseTestStringBuilder +import py + +class TestJvmStringBuilder(JvmTest, BaseTestStringBuilder): + def test_append_charpsize(self): + py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/translator/jvm/test/test_longlong2float.py b/pypy/translator/jvm/test/test_longlong2float.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_longlong2float.py @@ -0,0 +1,20 @@ +from 
pypy.translator.jvm.test.runtest import JvmTest +from pypy.rlib.longlong2float import * +from pypy.rlib.test.test_longlong2float import enum_floats +from pypy.rlib.test.test_longlong2float import fn as float2longlong2float +import py + +class TestLongLong2Float(JvmTest): + + def test_float2longlong_and_longlong2float(self): + def func(f): + return float2longlong2float(f) + + for f in enum_floats(): + assert repr(f) == repr(self.interpret(func, [f])) + + def test_uint2singlefloat(self): + py.test.skip("uint2singlefloat is not implemented in ootype") + + def test_singlefloat2uint(self): + py.test.skip("singlefloat2uint is not implemented in ootype") diff --git a/pypy/translator/jvm/typesystem.py b/pypy/translator/jvm/typesystem.py --- a/pypy/translator/jvm/typesystem.py +++ b/pypy/translator/jvm/typesystem.py @@ -955,6 +955,7 @@ PYPYSERIALIZEUINT = Method.s(jPyPy, 'serialize_uint', (jInt,), jString) PYPYSERIALIZEULONG = Method.s(jPyPy, 'serialize_ulonglong', (jLong,),jString) PYPYSERIALIZEVOID = Method.s(jPyPy, 'serialize_void', (), jString) +PYPYSERIALIZEDOUBLE = Method.s(jPyPy, 'serialize_double', (jDouble,), jString) PYPYESCAPEDCHAR = Method.s(jPyPy, 'escaped_char', (jChar,), jString) PYPYESCAPEDUNICHAR = Method.s(jPyPy, 'escaped_unichar', (jChar,), jString) PYPYESCAPEDSTRING = Method.s(jPyPy, 'escaped_string', (jString,), jString) diff --git a/pypy/translator/oosupport/test_template/cast.py b/pypy/translator/oosupport/test_template/cast.py --- a/pypy/translator/oosupport/test_template/cast.py +++ b/pypy/translator/oosupport/test_template/cast.py @@ -13,6 +13,9 @@ def to_longlong(x): return r_longlong(x) +def to_ulonglong(x): + return r_ulonglong(x) + def uint_to_int(x): return intmask(x) @@ -56,6 +59,9 @@ def test_unsignedlonglong_to_unsigned4(self): self.check(to_uint, [r_ulonglong(18446744073709551615l)]) # max 64 bit num + def test_unsigned_to_usignedlonglong(self): + self.check(to_ulonglong, [r_uint(42)]) + def test_uint_to_int(self): self.check(uint_to_int, 
[r_uint(sys.maxint+1)]) From noreply at buildbot.pypy.org Tue Jan 17 12:16:17 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 12:16:17 +0100 (CET) Subject: [pypy-commit] pypy default: a minor improvement Message-ID: <20120117111617.82677820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51390:89436d06f0ae Date: 2012-01-17 12:36 +0200 http://bitbucket.org/pypy/pypy/changeset/89436d06f0ae/ Log: a minor improvement diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -216,7 +216,7 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - prealloc_size = 0 + prealloc_size = len(self) * (size - 1) for i in range(size): prealloc_size += len(space.unicode_w(list_w[i])) sb = UnicodeBuilder(prealloc_size) From noreply at buildbot.pypy.org Tue Jan 17 12:16:18 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 12:16:18 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: progress on arr[arr_of_bools] = arr Message-ID: <20120117111618.AE2B2820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51391:fe9bca1da2b6 Date: 2012-01-17 13:15 +0200 http://bitbucket.org/pypy/pypy/changeset/fe9bca1da2b6/ Log: progress on arr[arr_of_bools] = arr diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -10,7 +10,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator, Chunk + SkipLastAxisIterator, Chunk, ViewIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -518,7 +518,7 @@ res = W_NDimArray(size, 
[size], self.find_dtype())
         ri = ArrayIterator(size)
         shapelen = len(self.shape)
-        argi = ArrayIterator(concr.size)
+        argi = concr.create_iter()
         sig = self.find_sig()
         frame = sig.create_frame(self)
         v = None
@@ -536,6 +536,18 @@
             frame.next(shapelen)
         return res
 
+    def setitem_filter(self, space, idx, val):
+        arr = SliceArray(self.shape, self.dtype, self, val)
+        shapelen = len(arr.shape)
+        sig = arr.find_sig()
+        frame = sig.create_frame(arr)
+        idxi = idx.create_iter()
+        while not frame.done():
+            if idx.dtype.getitem_bool(idx.storage, idxi.offset):
+                sig.eval(frame, arr)
+            idxi = idxi.next(shapelen)
+            frame.next(shapelen)
+
     def descr_getitem(self, space, w_idx):
         if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and
                 w_idx.find_dtype().is_bool_type()):
@@ -549,6 +561,11 @@
 
     def descr_setitem(self, space, w_idx, w_value):
         self.invalidated()
+        if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and
+                w_idx.find_dtype().is_bool_type()):
+            return self.get_concrete().setitem_filter(space,
+                                                      w_idx.get_concrete(),
+                                             convert_to_array(space, w_value))
         if self._single_item_result(space, w_idx):
             concrete = self.get_concrete()
             item = concrete._index_of_single_item(space, w_idx)
@@ -1135,6 +1152,10 @@
                               parent)
         self.start = start
 
+    def create_iter(self):
+        return ViewIterator(self.start, self.strides, self.backstrides,
+                            self.shape)
+
     def setshape(self, space, new_shape):
         if len(self.shape) < 1:
             return
@@ -1181,6 +1202,9 @@
         self.shape = new_shape
         self.calc_strides(new_shape)
 
+    def create_iter(self):
+        return ArrayIterator(self.size)
+
     def create_sig(self):
         return signature.ArraySignature(self.dtype)
 
@@ -1235,6 +1259,7 @@
     arr = W_NDimArray(size, shape[:], dtype=dtype, order=order)
     shapelen = len(shape)
     arr_iter = ArrayIterator(arr.size)
+    # XXX we might want to have a jitdriver here
     for i in range(len(elems_w)):
         w_elem = elems_w[i]
         dtype.setitem(arr.storage, arr_iter.offset,
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -1320,6 +1320,15 @@
         assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all()
         assert (a[a & 1 == 1] == [1, 3, 5, 7, 9]).all()
 
+    def test_array_indexing_bool_setitem(self):
+        from _numpypy import arange, array
+        a = arange(6)
+        a[a > 3] = 15
+        assert (a == [0, 1, 2, 3, 15, 15]).all()
+        a = arange(6).reshape(3, 2)
+        a[a & 1 == 1] = array([8, 9, 10])
+        assert (a == [[0, 8], [3, 9], [5, 10]]).all()
+
 class AppTestSupport(BaseNumpyAppTest):
     def setup_class(cls):
         import struct

From noreply at buildbot.pypy.org  Tue Jan 17 12:16:19 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 17 Jan 2012 12:16:19 +0100 (CET)
Subject: [pypy-commit] pypy default: merge
Message-ID: <20120117111619.D552C820D8@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: 
Changeset: r51392:81aaf2060a57
Date: 2012-01-17 13:15 +0200
http://bitbucket.org/pypy/pypy/changeset/81aaf2060a57/

Log: merge

diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py
--- a/pypy/objspace/std/unicodeobject.py
+++ b/pypy/objspace/std/unicodeobject.py
@@ -216,7 +216,7 @@
 def _unicode_join_many_items(space, w_self, list_w, size):
     self = w_self._value
-    prealloc_size = 0
+    prealloc_size = len(self) * (size - 1)
     for i in range(size):
         prealloc_size += len(space.unicode_w(list_w[i]))
     sb = UnicodeBuilder(prealloc_size)

From noreply at buildbot.pypy.org  Tue Jan 17 12:25:26 2012
From: noreply at buildbot.pypy.org (rguillebert)
Date: Tue, 17 Jan 2012 12:25:26 +0100 (CET)
Subject: [pypy-commit] pypy py3k: On linux, set ascii as the default locale if none can be found
Message-ID: <20120117112526.02E53820D8@wyvern.cs.uni-duesseldorf.de>

Author: Romain Guillebert
Branch: py3k
Changeset: r51393:16202ef48d39
Date: 2012-01-17 12:23 +0100
http://bitbucket.org/pypy/pypy/changeset/16202ef48d39/

Log: On linux, set ascii as the default locale if none can
be found diff --git a/pypy/module/sys/interp_encoding.py b/pypy/module/sys/interp_encoding.py --- a/pypy/module/sys/interp_encoding.py +++ b/pypy/module/sys/interp_encoding.py @@ -33,6 +33,8 @@ base_encoding = "mbcs" elif sys.platform == "darwin": base_encoding = "utf-8" +elif sys.platform == "linux2": + base_encoding = "ascii" else: base_encoding = None From noreply at buildbot.pypy.org Tue Jan 17 12:25:27 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Tue, 17 Jan 2012 12:25:27 +0100 (CET) Subject: [pypy-commit] pypy py3k: Merge heads Message-ID: <20120117112527.61741820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51394:7948fef3ed5a Date: 2012-01-17 12:25 +0100 http://bitbucket.org/pypy/pypy/changeset/7948fef3ed5a/ Log: Merge heads diff --git a/lib_pypy/binascii.py b/lib_pypy/binascii.py --- a/lib_pypy/binascii.py +++ b/lib_pypy/binascii.py @@ -598,68 +598,68 @@ return ''.join(result) crc_32_tab = [ - 0x00000000L, 0x77073096L, 0xee0e612cL, 0x990951baL, 0x076dc419L, - 0x706af48fL, 0xe963a535L, 0x9e6495a3L, 0x0edb8832L, 0x79dcb8a4L, - 0xe0d5e91eL, 0x97d2d988L, 0x09b64c2bL, 0x7eb17cbdL, 0xe7b82d07L, - 0x90bf1d91L, 0x1db71064L, 0x6ab020f2L, 0xf3b97148L, 0x84be41deL, - 0x1adad47dL, 0x6ddde4ebL, 0xf4d4b551L, 0x83d385c7L, 0x136c9856L, - 0x646ba8c0L, 0xfd62f97aL, 0x8a65c9ecL, 0x14015c4fL, 0x63066cd9L, - 0xfa0f3d63L, 0x8d080df5L, 0x3b6e20c8L, 0x4c69105eL, 0xd56041e4L, - 0xa2677172L, 0x3c03e4d1L, 0x4b04d447L, 0xd20d85fdL, 0xa50ab56bL, - 0x35b5a8faL, 0x42b2986cL, 0xdbbbc9d6L, 0xacbcf940L, 0x32d86ce3L, - 0x45df5c75L, 0xdcd60dcfL, 0xabd13d59L, 0x26d930acL, 0x51de003aL, - 0xc8d75180L, 0xbfd06116L, 0x21b4f4b5L, 0x56b3c423L, 0xcfba9599L, - 0xb8bda50fL, 0x2802b89eL, 0x5f058808L, 0xc60cd9b2L, 0xb10be924L, - 0x2f6f7c87L, 0x58684c11L, 0xc1611dabL, 0xb6662d3dL, 0x76dc4190L, - 0x01db7106L, 0x98d220bcL, 0xefd5102aL, 0x71b18589L, 0x06b6b51fL, - 0x9fbfe4a5L, 0xe8b8d433L, 0x7807c9a2L, 0x0f00f934L, 0x9609a88eL, - 0xe10e9818L, 0x7f6a0dbbL, 
0x086d3d2dL, 0x91646c97L, 0xe6635c01L, - 0x6b6b51f4L, 0x1c6c6162L, 0x856530d8L, 0xf262004eL, 0x6c0695edL, - 0x1b01a57bL, 0x8208f4c1L, 0xf50fc457L, 0x65b0d9c6L, 0x12b7e950L, - 0x8bbeb8eaL, 0xfcb9887cL, 0x62dd1ddfL, 0x15da2d49L, 0x8cd37cf3L, - 0xfbd44c65L, 0x4db26158L, 0x3ab551ceL, 0xa3bc0074L, 0xd4bb30e2L, - 0x4adfa541L, 0x3dd895d7L, 0xa4d1c46dL, 0xd3d6f4fbL, 0x4369e96aL, - 0x346ed9fcL, 0xad678846L, 0xda60b8d0L, 0x44042d73L, 0x33031de5L, - 0xaa0a4c5fL, 0xdd0d7cc9L, 0x5005713cL, 0x270241aaL, 0xbe0b1010L, - 0xc90c2086L, 0x5768b525L, 0x206f85b3L, 0xb966d409L, 0xce61e49fL, - 0x5edef90eL, 0x29d9c998L, 0xb0d09822L, 0xc7d7a8b4L, 0x59b33d17L, - 0x2eb40d81L, 0xb7bd5c3bL, 0xc0ba6cadL, 0xedb88320L, 0x9abfb3b6L, - 0x03b6e20cL, 0x74b1d29aL, 0xead54739L, 0x9dd277afL, 0x04db2615L, - 0x73dc1683L, 0xe3630b12L, 0x94643b84L, 0x0d6d6a3eL, 0x7a6a5aa8L, - 0xe40ecf0bL, 0x9309ff9dL, 0x0a00ae27L, 0x7d079eb1L, 0xf00f9344L, - 0x8708a3d2L, 0x1e01f268L, 0x6906c2feL, 0xf762575dL, 0x806567cbL, - 0x196c3671L, 0x6e6b06e7L, 0xfed41b76L, 0x89d32be0L, 0x10da7a5aL, - 0x67dd4accL, 0xf9b9df6fL, 0x8ebeeff9L, 0x17b7be43L, 0x60b08ed5L, - 0xd6d6a3e8L, 0xa1d1937eL, 0x38d8c2c4L, 0x4fdff252L, 0xd1bb67f1L, - 0xa6bc5767L, 0x3fb506ddL, 0x48b2364bL, 0xd80d2bdaL, 0xaf0a1b4cL, - 0x36034af6L, 0x41047a60L, 0xdf60efc3L, 0xa867df55L, 0x316e8eefL, - 0x4669be79L, 0xcb61b38cL, 0xbc66831aL, 0x256fd2a0L, 0x5268e236L, - 0xcc0c7795L, 0xbb0b4703L, 0x220216b9L, 0x5505262fL, 0xc5ba3bbeL, - 0xb2bd0b28L, 0x2bb45a92L, 0x5cb36a04L, 0xc2d7ffa7L, 0xb5d0cf31L, - 0x2cd99e8bL, 0x5bdeae1dL, 0x9b64c2b0L, 0xec63f226L, 0x756aa39cL, - 0x026d930aL, 0x9c0906a9L, 0xeb0e363fL, 0x72076785L, 0x05005713L, - 0x95bf4a82L, 0xe2b87a14L, 0x7bb12baeL, 0x0cb61b38L, 0x92d28e9bL, - 0xe5d5be0dL, 0x7cdcefb7L, 0x0bdbdf21L, 0x86d3d2d4L, 0xf1d4e242L, - 0x68ddb3f8L, 0x1fda836eL, 0x81be16cdL, 0xf6b9265bL, 0x6fb077e1L, - 0x18b74777L, 0x88085ae6L, 0xff0f6a70L, 0x66063bcaL, 0x11010b5cL, - 0x8f659effL, 0xf862ae69L, 0x616bffd3L, 0x166ccf45L, 0xa00ae278L, - 0xd70dd2eeL, 
0x4e048354L, 0x3903b3c2L, 0xa7672661L, 0xd06016f7L, - 0x4969474dL, 0x3e6e77dbL, 0xaed16a4aL, 0xd9d65adcL, 0x40df0b66L, - 0x37d83bf0L, 0xa9bcae53L, 0xdebb9ec5L, 0x47b2cf7fL, 0x30b5ffe9L, - 0xbdbdf21cL, 0xcabac28aL, 0x53b39330L, 0x24b4a3a6L, 0xbad03605L, - 0xcdd70693L, 0x54de5729L, 0x23d967bfL, 0xb3667a2eL, 0xc4614ab8L, - 0x5d681b02L, 0x2a6f2b94L, 0xb40bbe37L, 0xc30c8ea1L, 0x5a05df1bL, - 0x2d02ef8dL + 0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, + 0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4, + 0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, + 0x90bf1d91, 0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de, + 0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7, 0x136c9856, + 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9, + 0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4, + 0xa2677172, 0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b, + 0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, + 0x45df5c75, 0xdcd60dcf, 0xabd13d59, 0x26d930ac, 0x51de003a, + 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423, 0xcfba9599, + 0xb8bda50f, 0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924, + 0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d, 0x76dc4190, + 0x01db7106, 0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f, + 0x9fbfe4a5, 0xe8b8d433, 0x7807c9a2, 0x0f00f934, 0x9609a88e, + 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01, + 0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed, + 0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950, + 0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, + 0xfbd44c65, 0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, + 0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a, + 0x346ed9fc, 0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5, + 0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa, 0xbe0b1010, + 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f, + 0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, + 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad, 
0xedb88320, 0x9abfb3b6, + 0x03b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af, 0x04db2615, + 0x73dc1683, 0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8, + 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1, 0xf00f9344, + 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb, + 0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a, + 0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5, + 0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, + 0xa6bc5767, 0x3fb506dd, 0x48b2364b, 0xd80d2bda, 0xaf0a1b4c, + 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef, + 0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236, + 0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f, 0xc5ba3bbe, + 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, + 0x2cd99e8b, 0x5bdeae1d, 0x9b64c2b0, 0xec63f226, 0x756aa39c, + 0x026d930a, 0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713, + 0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38, 0x92d28e9b, + 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21, 0x86d3d2d4, 0xf1d4e242, + 0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, + 0x18b74777, 0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, + 0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45, 0xa00ae278, + 0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7, + 0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc, 0x40df0b66, + 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9, + 0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, + 0xcdd70693, 0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8, + 0x5d681b02, 0x2a6f2b94, 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, + 0x2d02ef8d ] def crc32(s, crc=0): result = 0 - crc = ~long(crc) & 0xffffffffL + crc = ~int(crc) & 0xffffffff for c in s: - crc = crc_32_tab[(crc ^ c) & 0xffL] ^ (crc >> 8) + crc = crc_32_tab[(crc ^ c) & 0xff] ^ (crc >> 8) #/* Note: (crc >> 8) MUST zero fill on left - result = crc ^ 0xffffffffL + result = crc ^ 0xffffffff if result > 2**31: result = ((result + 2**31) % 2**32) - 2**31 diff --git 
a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -34,6 +34,8 @@ DEFAULT_SOABI = 'pypy-%d%d' % PYPY_VERSION[:2] CHECK_FOR_PYW = sys.platform == 'win32' +PYC_TAG = 'pypy-%d%d' % PYPY_VERSION[:2] + @specialize.memo() def get_so_extension(space): if space.config.objspace.soabi is not None: @@ -91,10 +93,7 @@ else: # XXX that's slow def case_ok(filename): - index = filename.rfind(os.sep) - if os.altsep is not None: - index2 = filename.rfind(os.altsep) - index = max(index, index2) + index = rightmost_sep(filename) if index < 0: directory = os.curdir else: @@ -862,17 +861,63 @@ space.wrap(space.builtin)) code_w.exec_code(space, w_dict, w_dict) +def rightmost_sep(filename): + "Like filename.rfind('/'), but also search for \\." + index = filename.rfind(os.sep) + if os.altsep is not None: + index2 = filename.rfind(os.altsep) + index = max(index, index2) + return index + def make_compiled_pathname(pathname): "Given the path to a .py file, return the path to its .pyc file." - return pathname + 'c' + # foo.py -> __pycache__/foo..pyc + + lastpos = rightmost_sep(pathname) + 1 + assert lastpos >= 0 # zero when slash, takes the full name + fname = pathname[lastpos:] + if lastpos > 0: + # Windows: re-use the last separator character (/ or \\) when + # appending the __pycache__ path. + lastsep = pathname[lastpos-1] + else: + lastsep = os.sep + ext = fname + for i in range(len(fname)): + if fname[i] == '.': + ext = fname[:i + 1] + + result = (pathname[:lastpos] + "__pycache__" + lastsep + + ext + PYC_TAG + '.pyc') + return result def make_source_pathname(pathname): - pos_extension = len(pathname) - 4 # len('.pyc') - if pos_extension < 0: - raise ValueError("path is too short") - if pathname[pos_extension:] != '.pyc': - raise ValueError("not a .pyc path name") - return pathname[:pos_extension + 3] + "Given the path to a .pyc file, return the path to its .py file." 
+ # (...)/__pycache__/foo..pyc -> (...)/foo.py + + right = rightmost_sep(pathname) + if right < 0: + raise ValueError() + left = rightmost_sep(pathname[:right]) + 1 + assert left >= 0 + if pathname[left:right] != '__pycache__': + raise ValueError() + + # Now verify that the path component to the right of the last + # slash has two dots in it. + rightpart = pathname[right + 1:] + dot0 = rightpart.find('.') + 1 + if dot0 <= 0: + raise ValueError + dot1 = rightpart[dot0:].find('.') + 1 + if dot1 <= 0: + raise ValueError + # Too many dots? + if rightpart[dot0 + dot1:].find('.') >= 0: + raise ValueError + + result = pathname[:left] + rightpart[:dot0] + 'py' + return result @jit.dont_look_inside def load_source_module(space, w_modulename, w_mod, pathname, source, @@ -1027,6 +1072,17 @@ Errors are ignored, if a write error occurs an attempt is made to remove the file. """ + # Ensure that the __pycache__ directory exists + dirsep = rightmost_sep(cpathname) + if dirsep < 0: + return + dirname = cpathname[:dirsep] + mode = src_mode | 0333 # +wx + try: + os.mkdir(dirname, mode) + except OSError: + pass + w_marshal = space.getbuiltinmodule('marshal') try: w_bytes = space.call_method(w_marshal, 'dumps', space.wrap(co), diff --git a/pypy/module/imp/test/test_import.py b/pypy/module/imp/test/test_import.py --- a/pypy/module/imp/test/test_import.py +++ b/pypy/module/imp/test/test_import.py @@ -38,7 +38,7 @@ test_reload = "def test():\n raise ValueError\n", infinite_reload = "import infinite_reload; reload(infinite_reload)", del_sys_module = "import sys\ndel sys.modules['del_sys_module']\n", - itertools = "hello_world = 42\n", + queue = "hello_world = 42\n", gc = "should_never_be_seen = 42\n", ) root.ensure("notapackage", dir=1) # empty, no __init__.py @@ -75,7 +75,7 @@ ) setuppkg("pkg_substituting", __init__ = "import sys, pkg_substituted\n" - "print 'TOTO', __name__\n" + "print('TOTO', __name__)\n" "sys.modules[__name__] = pkg_substituted") setuppkg("pkg_substituted", mod='') 
setuppkg("evil_pkg", @@ -102,6 +102,7 @@ w = space.wrap w_modname = w("compiled.x") filename = str(p.join("x.py")) + pycname = importing.make_compiled_pathname("x.py") stream = streamio.open_file_as_stream(filename, "r") try: importing.load_source_module(space, @@ -113,7 +114,7 @@ stream.close() if space.config.objspace.usepycfiles: # also create a lone .pyc file - p.join('lone.pyc').write(p.join('x.pyc').read(mode='rb'), + p.join('lone.pyc').write(p.join(pycname).read(mode='rb'), mode='wb') # create a .pyw file @@ -150,7 +151,7 @@ class AppTestImport: def setup_class(cls): # interpreter-level - cls.space = gettestobjspace(usemodules=['itertools']) + #cls.space = gettestobjspace(usemodules=['itertools']) cls.w_runappdirect = cls.space.wrap(conftest.option.runappdirect) cls.saved_modules = _setup(cls.space) #XXX Compile class @@ -159,6 +160,9 @@ def teardown_class(cls): # interpreter-level _teardown(cls.space, cls.saved_modules) + def w_exec_(self, cmd, ns): + exec(cmd, ns) + def test_set_sys_modules_during_import(self): from evil_pkg import evil assert evil.a == 42 @@ -373,7 +377,7 @@ def test_invalid__name__(self): glob = {} - exec "__name__ = None; import sys" in glob + exec("__name__ = None; import sys", glob) import sys assert glob['sys'] is sys @@ -418,16 +422,18 @@ assert pkg.pkg1.__package__ == 'pkg.pkg1' def test_future_relative_import_error_when_in_non_package(self): - exec """def imp(): + ns = {} + exec("""def imp(): from .string import inpackage - """.rstrip() - raises(ValueError, imp) + """.rstrip(), ns) + raises(ValueError, ns['imp']) def test_future_relative_import_error_when_in_non_package2(self): - exec """def imp(): + ns = {} + exec("""def imp(): from .. import inpackage - """.rstrip() - raises(ValueError, imp) + """.rstrip(), ns) + raises(ValueError, ns['imp']) def test_relative_import_with___name__(self): import sys @@ -456,9 +462,9 @@ def test__package__(self): # Regression test for http://bugs.python.org/issue3221. 
def check_absolute(): - exec "from os import path" in ns + self.exec_("from os import path", ns) def check_relative(): - exec "from . import a" in ns + self.exec_("from . import a", ns) # Check both OK with __package__ and __name__ correct ns = dict(__package__='pkg', __name__='pkg.notarealmodule') @@ -578,8 +584,11 @@ def test_cache_from_source(self): import imp - assert imp.cache_from_source('a/b/c.py') == 'a/b/c.pyc' - assert imp.source_from_cache('a/b/c.pyc') == 'a/b/c.py' + pycfile = imp.cache_from_source('a/b/c.py') + assert pycfile.startswith('a/b/__pycache__/c.pypy-') + assert pycfile.endswith('.pyc') + assert imp.source_from_cache('a/b/__pycache__/c.pypy-17.pyc' + ) == 'a/b/c.py' raises(ValueError, imp.source_from_cache, 'a/b/c.py') def test_shadow_builtin(self): @@ -596,34 +605,32 @@ def test_shadow_extension_1(self): if self.runappdirect: skip("hard to test: module is already imported") - # 'import itertools' is supposed to find itertools.py if there is + # 'import queue' is supposed to find queue.py if there is # one in sys.path. import sys - assert 'itertools' not in sys.modules - import itertools - assert hasattr(itertools, 'hello_world') - assert not hasattr(itertools, 'count') - assert '(built-in)' not in repr(itertools) - del sys.modules['itertools'] + assert 'queue' not in sys.modules + import queue + assert hasattr(queue, 'hello_world') + assert not hasattr(queue, 'count') + assert '(built-in)' not in repr(queue) + del sys.modules['queue'] def test_shadow_extension_2(self): if self.runappdirect: skip("hard to test: module is already imported") - # 'import itertools' is supposed to find the built-in module even + # 'import queue' is supposed to find the built-in module even # if there is also one in sys.path as long as it is *after* the - # special entry '.../lib_pypy/__extensions__'. (Note that for now - # there is one in lib_pypy/itertools.py, which should not be seen - # either; hence the (built-in) test below.) 
+ # special entry '.../lib_pypy/__extensions__'. import sys - assert 'itertools' not in sys.modules + assert 'queue' not in sys.modules sys.path.append(sys.path.pop(0)) try: - import itertools - assert not hasattr(itertools, 'hello_world') - assert hasattr(itertools, 'izip') - assert '(built-in)' in repr(itertools) + import queue + assert not hasattr(queue, 'hello_world') + assert hasattr(queue, 'izip') + assert '(built-in)' in repr(queue) finally: sys.path.insert(0, sys.path.pop()) - del sys.modules['itertools'] + del sys.modules['queue'] class TestAbi: @@ -885,7 +892,7 @@ stream.close() # And the .pyc has been generated - cpathname = udir.join('test.pyc') + cpathname = udir.join(importing.make_compiled_pathname('test.py')) assert cpathname.check() def test_write_compiled_module(self): @@ -979,7 +986,7 @@ py.test.skip("unresolved issues with win32 shell quoting rules") from pypy.interpreter.test.test_zpy import pypypath extrapath = udir.ensure("pythonpath", dir=1) - extrapath.join("urllib.py").write("print 42\n") + extrapath.join("urllib.py").write("print(42)\n") old = os.environ.get('PYTHONPATH', None) oldlang = os.environ.pop('LANG', None) try: @@ -1024,7 +1031,7 @@ if fullname in self.namestoblock: return self def load_module(self, fullname): - raise ImportError, "blocked" + raise ImportError("blocked") import sys, imp modname = "errno" # an arbitrary harmless builtin module @@ -1094,7 +1101,7 @@ path = [self.path] try: file, filename, stuff = imp.find_module(subname, path) - except ImportError, e: + except ImportError: return None return ImpLoader(file, filename, stuff) @@ -1139,8 +1146,8 @@ def test_run_compiled_module(self): # XXX minimal test only - import imp, new - module = new.module('foobar') + import imp, types + module = types.ModuleType('foobar') raises(IOError, imp._run_compiled_module, 'foobar', 'this_file_does_not_exist', None, module) @@ -1161,14 +1168,14 @@ # a mostly-empty zip file path = os.path.join(self.udir, 'test_getimporter.zip') f = 
open(path, 'wb') - f.write('PK\x03\x04\n\x00\x00\x00\x00\x00P\x9eN>\x00\x00\x00\x00\x00' - '\x00\x00\x00\x00\x00\x00\x00\x05\x00\x15\x00emptyUT\t\x00' - '\x03wyYMwyYMUx\x04\x00\xf4\x01d\x00PK\x01\x02\x17\x03\n\x00' - '\x00\x00\x00\x00P\x9eN>\x00\x00\x00\x00\x00\x00\x00\x00\x00' - '\x00\x00\x00\x05\x00\r\x00\x00\x00\x00\x00\x00\x00\x00\x00' - '\xa4\x81\x00\x00\x00\x00emptyUT\x05\x00\x03wyYMUx\x00\x00PK' - '\x05\x06\x00\x00\x00\x00\x01\x00\x01\x00@\x00\x00\x008\x00' - '\x00\x00\x00\x00') + f.write(b'PK\x03\x04\n\x00\x00\x00\x00\x00P\x9eN>\x00\x00\x00\x00\x00' + b'\x00\x00\x00\x00\x00\x00\x00\x05\x00\x15\x00emptyUT\t\x00' + b'\x03wyYMwyYMUx\x04\x00\xf4\x01d\x00PK\x01\x02\x17\x03\n\x00' + b'\x00\x00\x00\x00P\x9eN>\x00\x00\x00\x00\x00\x00\x00\x00\x00' + b'\x00\x00\x00\x05\x00\r\x00\x00\x00\x00\x00\x00\x00\x00\x00' + b'\xa4\x81\x00\x00\x00\x00emptyUT\x05\x00\x03wyYMUx\x00\x00PK' + b'\x05\x06\x00\x00\x00\x00\x01\x00\x01\x00@\x00\x00\x008\x00' + b'\x00\x00\x00\x00') f.close() importer = imp._getimporter(path) import zipimport @@ -1193,16 +1200,16 @@ def test_import_possibly_from_pyc(self): from compiled import x if self.usepycfiles: - assert x.__file__.endswith('x.pyc') + assert x.__file__.endswith('.pyc') else: - assert x.__file__.endswith('x.py') + assert x.__file__.endswith('.py') try: from compiled import lone except ImportError: assert not self.lonepycfiles, "should have found 'lone.pyc'" else: assert self.lonepycfiles, "should not have found 'lone.pyc'" - assert lone.__file__.endswith('lone.pyc') + assert lone.__file__.endswith('.pyc') class AppTestNoLonePycFile(AppTestNoPycFile): spaceconfig = { diff --git a/pypy/module/termios/interp_termios.py b/pypy/module/termios/interp_termios.py --- a/pypy/module/termios/interp_termios.py +++ b/pypy/module/termios/interp_termios.py @@ -24,15 +24,11 @@ fd = space.c_filedescriptor_w(w_fd) w_iflag, w_oflag, w_cflag, w_lflag, w_ispeed, w_ospeed, w_cc = \ space.unpackiterable(w_attributes, expected_length=7) - w_builtin = 
space.getbuiltinmodule('__builtin__') cc = [] for w_c in space.unpackiterable(w_cc): if space.is_true(space.isinstance(w_c, space.w_int)): - ch = space.call_function(space.getattr(w_builtin, - space.wrap('chr')), w_c) - cc.append(space.str_w(ch)) - else: - cc.append(space.str_w(w_c)) + w_c = space.call(space.w_bytes, space.newlist([w_c])) + cc.append(space.bytes_w(w_c)) tup = (space.int_w(w_iflag), space.int_w(w_oflag), space.int_w(w_cflag), space.int_w(w_lflag), space.int_w(w_ispeed), space.int_w(w_ospeed), cc) diff --git a/pypy/module/termios/test/test_termios.py b/pypy/module/termios/test/test_termios.py --- a/pypy/module/termios/test/test_termios.py +++ b/pypy/module/termios/test/test_termios.py @@ -23,7 +23,7 @@ def _spawn(self, *args, **kwds): print 'SPAWN:', args, kwds - child = self.pexpect.spawn(*args, **kwds) + child = self.pexpect.spawn(timeout=600, *args, **kwds) child.logfile = sys.stdout return child @@ -53,7 +53,7 @@ termios.tcdrain(2) termios.tcflush(2, termios.TCIOFLUSH) termios.tcflow(2, termios.TCOON) - print 'ok!' + print('ok!') """) f = udir.join("test_tcall.py") f.write(source) @@ -65,7 +65,7 @@ import sys import termios termios.tcsetattr(sys.stdin, 1, [16640, 4, 191, 2608, 15, 15, ['\x03', '\x1c', '\x7f', '\x15', '\x04', 0, 1, '\x00', '\x11', '\x13', '\x1a', '\x00', '\x12', '\x0f', '\x17', '\x16', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00']]) - print 'ok!' + print('ok!') """) f = udir.join("test_tcsetattr.py") f.write(source) @@ -76,9 +76,9 @@ source = py.code.Source(""" import termios import fcntl - lgt = len(fcntl.ioctl(2, termios.TIOCGWINSZ, '\000'*8)) + lgt = len(fcntl.ioctl(2, termios.TIOCGWINSZ, b'\000'*8)) assert lgt == 8 - print 'ok!' 
+        print('ok!')
         """)
         f = udir.join("test_ioctl_termios.py")
         f.write(source)
@@ -97,7 +97,7 @@
         assert len([i for i in f[-1] if isinstance(i, int)]) == 2
         assert isinstance(f[-1][termios.VMIN], int)
         assert isinstance(f[-1][termios.VTIME], int)
-        print 'ok!'
+        print('ok!')
         """)
         f = udir.join("test_ioctl_termios.py")
         f.write(source)
diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py
--- a/pypy/objspace/fake/objspace.py
+++ b/pypy/objspace/fake/objspace.py
@@ -157,6 +157,10 @@
         "NOT_RPYTHON"
         raise NotImplementedError
 
+    def wrapbytes(self, x):
+        assert isinstance(x, str)
+        return w_some_obj()
+
     def _see_interp2app(self, interp2app):
         "NOT_RPYTHON"
         activation = interp2app._code.activation
@@ -278,7 +282,7 @@
 for name in (ObjSpace.ConstantTable +
              ObjSpace.ExceptionTable +
              ['int', 'str', 'float', 'long', 'tuple', 'list',
-              'dict', 'unicode', 'complex', 'slice', 'bool',
+              'dict', 'bytes', 'complex', 'slice', 'bool',
               'type', 'text', 'object']):
     setattr(FakeObjSpace, 'w_' + name, w_some_obj())
 #
diff --git a/pypy/objspace/std/longtype.py b/pypy/objspace/std/longtype.py
--- a/pypy/objspace/std/longtype.py
+++ b/pypy/objspace/std/longtype.py
@@ -40,6 +40,7 @@
                 w_obj = space.int(w_obj)
             else:
                 w_obj = space.trunc(w_obj)
+                w_obj = space.int(w_obj)
             return w_obj
         else:
             base = space.int_w(w_base)

From noreply at buildbot.pypy.org  Tue Jan 17 13:20:59 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 17 Jan 2012 13:20:59 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: add a section
Message-ID: <20120117122059.8DFC5820D8@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r4040:ebe7ed17d1d5
Date: 2012-01-17 14:20 +0200
http://bitbucket.org/pypy/extradoc/changeset/ebe7ed17d1d5/

Log: add a section

diff --git a/planning/micronumpy.txt b/planning/micronumpy.txt
--- a/planning/micronumpy.txt
+++ b/planning/micronumpy.txt
@@ -24,3 +24,10 @@
 * keep subclass type when slicing, __array_finalize__
 
 * ndarray.view
+
+OPTIMIZATIONS
+-------------
+
+- SSE
+
+- count number of True's for bool arrays, so we don't have to recompute

From noreply at buildbot.pypy.org  Tue Jan 17 13:28:44 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 17 Jan 2012 13:28:44 +0100 (CET)
Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: fix setitem with bool index
Message-ID: <20120117122844.5CDF9820D8@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: numpy-indexing-by-arrays-2
Changeset: r51395:6fe5770303bb
Date: 2012-01-17 14:28 +0200
http://bitbucket.org/pypy/pypy/changeset/6fe5770303bb/

Log: fix setitem with bool index

diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -537,16 +537,18 @@
         return res
 
     def setitem_filter(self, space, idx, val):
-        arr = SliceArray(self.shape, self.dtype, self, val)
-        shapelen = len(arr.shape)
+        size = self.count_all_true(idx)
+        arr = SliceArray([size], self.dtype, self, val)
         sig = arr.find_sig()
+        shapelen = len(self.shape)
         frame = sig.create_frame(arr)
         idxi = idx.create_iter()
         while not frame.done():
             if idx.dtype.getitem_bool(idx.storage, idxi.offset):
                 sig.eval(frame, arr)
+                frame.next_from_second(1)
+            frame.next_first(shapelen)
             idxi = idxi.next(shapelen)
-            frame.next(shapelen)
 
     def descr_getitem(self, space, w_idx):
         if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and
diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py
--- a/pypy/module/micronumpy/signature.py
+++ b/pypy/module/micronumpy/signature.py
@@ -82,6 +82,16 @@
         for i in range(len(self.iterators)):
             self.iterators[i] = self.iterators[i].next(shapelen)
 
+    @unroll_safe
+    def next_from_second(self, shapelen):
+        """ Don't increase the first iterator
+        """
+        for i in range(1, len(self.iterators)):
+            self.iterators[i] = self.iterators[i].next(shapelen)
+
+    def next_first(self, shapelen):
+        self.iterators[0] = self.iterators[0].next(shapelen)
+
     def get_final_iter(self):
         final_iter = promote(self.final_iter)
         if final_iter < 0:
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -1327,7 +1327,7 @@
         assert (a == [0, 1, 2, 3, 15, 15]).all()
         a = arange(6).reshape(3, 2)
         a[a & 1 == 1] = array([8, 9, 10])
-        assert (a == [[0, 8], [3, 9], [5, 10]]).all()
+        assert (a == [[0, 8], [2, 9], [4, 10]]).all()
 
 class AppTestSupport(BaseNumpyAppTest):
     def setup_class(cls):

From noreply at buildbot.pypy.org  Tue Jan 17 13:32:02 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 17 Jan 2012 13:32:02 +0100 (CET)
Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: introduce a jit driver, although it's a bit pointless right now
Message-ID: <20120117123202.DA524820D8@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: numpy-indexing-by-arrays-2
Changeset: r51396:b48aeb33bb41
Date: 2012-01-17 14:31 +0200
http://bitbucket.org/pypy/pypy/changeset/b48aeb33bb41/

Log: introduce a jit driver, although it's a bit pointless right now

diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -52,6 +52,12 @@
     reds=['concr', 'argi', 'ri', 'frame', 'v', 'res', 'self'],
     name='numpy_filter',
 )
+filter_set_driver = jit.JitDriver(
+    greens=['shapelen', 'sig'],
+    virtualizables=['frame'],
+    reds=['idx', 'idxi', 'frame', 'arr'],
+    name='numpy_filterset',
+)
 
 def _find_shape_and_elems(space, w_iterable):
     shape = [space.len_w(w_iterable)]
@@ -544,6 +550,9 @@
         frame = sig.create_frame(arr)
         idxi = idx.create_iter()
         while not frame.done():
+            filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig,
+                                              frame=frame, arr=arr,
+                                              shapelen=shapelen)
             if idx.dtype.getitem_bool(idx.storage, idxi.offset):
                 sig.eval(frame, arr)
                 frame.next_from_second(1)

From noreply at buildbot.pypy.org  Tue Jan 17 13:33:31 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 17 Jan 2012 13:33:31 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: add two more items
Message-ID: <20120117123331.16CC7820D8@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: extradoc
Changeset: r4041:daaef6427a03
Date: 2012-01-17 14:33 +0200
http://bitbucket.org/pypy/extradoc/changeset/daaef6427a03/

Log: add two more items

diff --git a/planning/micronumpy.txt b/planning/micronumpy.txt
--- a/planning/micronumpy.txt
+++ b/planning/micronumpy.txt
@@ -31,3 +31,8 @@
 - SSE
 
 - count number of True's for bool arrays, so we don't have to recompute
+
+- bridges are a tad too complicated - we need to store optimizer info
+  in order to improve though
+
+- iterators should possibly not require reinstantiating them.

From noreply at buildbot.pypy.org  Tue Jan 17 13:48:46 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 17 Jan 2012 13:48:46 +0100 (CET)
Subject: [pypy-commit] pypy default: (fijal, cfbolz) improve error reporting
Message-ID: <20120117124846.D6538820D8@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: 
Changeset: r51397:865005356c2f
Date: 2012-01-17 14:48 +0200
http://bitbucket.org/pypy/pypy/changeset/865005356c2f/

Log: (fijal, cfbolz) improve error reporting

diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py
--- a/pypy/objspace/std/test/test_unicodeobject.py
+++ b/pypy/objspace/std/test/test_unicodeobject.py
@@ -64,6 +64,12 @@
         check(', '.join([u'a']), u'a')
         check(', '.join(['a', u'b']), u'a, b')
         check(u', '.join(['a', 'b']), u'a, b')
+        try:
+            u''.join([u'a', 2, 3])
+        except TypeError, e:
+            assert 'sequence item 1' in str(e)
+        else:
+            raise Exception("DID NOT RAISE")
 
     if sys.version_info >= (2,3):
         def test_contains_ex(self):
diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py
--- a/pypy/objspace/std/unicodeobject.py
+++ b/pypy/objspace/std/unicodeobject.py
@@ -218,19 +218,19 @@
     self = w_self._value
     prealloc_size = len(self) * (size - 1)
     for i in range(size):
-        prealloc_size += len(space.unicode_w(list_w[i]))
+        try:
+            prealloc_size += len(space.unicode_w(list_w[i]))
+        except OperationError, e:
+            if not e.match(space, space.w_TypeError):
+                raise
+            raise operationerrfmt(space.w_TypeError,
+                "sequence item %d: expected string or Unicode", i)
     sb = UnicodeBuilder(prealloc_size)
     for i in range(size):
         if self and i != 0:
             sb.append(self)
         w_s = list_w[i]
-        try:
-            sb.append(space.unicode_w(w_s))
-        except OperationError, e:
-            if not e.match(space, space.w_TypeError):
-                raise
-            raise operationerrfmt(space.w_TypeError,
-                "sequence item %d: expected string or Unicode", i)
+        sb.append(space.unicode_w(w_s))
     return space.wrap(sb.build())
 
 def hash__Unicode(space, w_uni):

From noreply at buildbot.pypy.org  Tue Jan 17 14:52:50 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Tue, 17 Jan 2012 14:52:50 +0100 (CET)
Subject: [pypy-commit] pypy stm: (antocuni, arigo)
Message-ID: <20120117135250.CEFF4820D8@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm
Changeset: r51398:29e9345db0e6
Date: 2012-01-17 11:57 +0100
http://bitbucket.org/pypy/pypy/changeset/29e9345db0e6/

Log: (antocuni, arigo)

    Test and fix: handle the case of stm_{get,set}field() called after
    descriptor_init() but outside a transaction.

diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c
--- a/pypy/translator/stm/src_stm/et.c
+++ b/pypy/translator/stm/src_stm/et.c
@@ -80,15 +80,19 @@
 #endif
   unsigned int spinloop_counter;
   owner_version_t my_lock_word;
-  unsigned init_counter;
   struct RedoLog redolog;   /* last item, because it's the biggest one */
   int transaction_active;
 };
 
+static const struct tx_descriptor null_tx = {
+  .transaction_active = 0
+};
+#define NULL_TX  ((struct tx_descriptor *)(&null_tx))
+
 /* global_timestamp contains in its lowest bit a flag equal to 1
    if there is an inevitable transaction running */
 static volatile unsigned long global_timestamp = 2;
-static __thread struct tx_descriptor *thread_descriptor = NULL;
+static __thread struct tx_descriptor *thread_descriptor = NULL_TX;
 #ifdef COMMIT_OTHER_INEV
 static struct tx_descriptor *volatile thread_descriptor_inev;
 static volatile unsigned long d_inev_checking = 0;
@@ -509,11 +513,8 @@
 long stm_read_word(long* addr)
 {
   struct tx_descriptor *d = thread_descriptor;
-  if (!d)
+  if (!d->transaction_active)
     return *addr;
-#ifdef RPY_STM_ASSERT
-  assert(d->transaction_active);
-#endif
 
   // check writeset first
   wlog_t* found;
@@ -575,23 +576,18 @@
 void stm_write_word(long* addr, long val)
 {
   struct tx_descriptor *d = thread_descriptor;
-  if (!d)
-    {
-      *addr = val;
-      return;
-    }
-#ifdef RPY_STM_ASSERT
-  assert(d->transaction_active);
-#endif
+  if (!d->transaction_active) {
+    *addr = val;
+    return;
+  }
   redolog_insert(&d->redolog, addr, val);
 }
 
 void stm_descriptor_init(void)
 {
-  if (thread_descriptor != NULL)
-    thread_descriptor->init_counter++;
-  else
+  assert(thread_descriptor == NULL_TX);
+  if (1)  /* for hg diff */
     {
       struct tx_descriptor *d = malloc(sizeof(struct tx_descriptor));
       memset(d, 0, sizeof(struct tx_descriptor));
@@ -606,7 +602,6 @@
       d->my_lock_word = ~d->my_lock_word;
       assert(IS_LOCKED(d->my_lock_word));
       d->spinloop_counter = (unsigned int)(d->my_lock_word | 1);
-      d->init_counter = 1;
       thread_descriptor = d;
@@ -621,11 +616,9 @@ void stm_descriptor_done(void) { struct tx_descriptor *d = thread_descriptor; - d->init_counter--; - if (d->init_counter > 0) - return; + assert(d != NULL_TX); - thread_descriptor = NULL; + thread_descriptor = NULL_TX; #ifdef RPY_STM_DEBUG_PRINT PYPY_DEBUG_START("stm-done"); @@ -671,7 +664,7 @@ { void *result; /* you need to call descriptor_init() before calling stm_perform_transaction */ - assert(thread_descriptor != NULL); + assert(thread_descriptor != NULL_TX); STM_begin_transaction(); result = callback(arg); stm_commit_transaction(); @@ -767,7 +760,7 @@ by another thread. We set the lowest bit in global_timestamp to 1. */ struct tx_descriptor *d = thread_descriptor; - if (!d) + if (!d->transaction_active) return; #ifdef RPY_STM_ASSERT @@ -965,7 +958,7 @@ long stm_debug_get_state(void) { struct tx_descriptor *d = thread_descriptor; - if (!d) + if (d == NULL_TX) return -1; if (!d->transaction_active) return 0; diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -257,6 +257,16 @@ t, cbuilder = self.compile(do_stm_getfield) cbuilder.cmdexec('') + def test_getfield_all_sizes_outside_transaction(self): + def do_stm_getfield(argv): + stm_descriptor_init() + # we have a descriptor, but we don't call it in a transaction + _play_with_getfield(None) + stm_descriptor_done() + return 0 + t, cbuilder = self.compile(do_stm_getfield) + cbuilder.cmdexec('') + def test_setfield_all_sizes(self): def do_stm_setfield(argv): _play_with_setfields(None) @@ -277,6 +287,15 @@ t, cbuilder = self.compile(do_stm_setfield) cbuilder.cmdexec('') + def test_setfield_all_sizes_outside_transaction(self): + def do_stm_setfield(argv): + stm_descriptor_init() + _play_with_setfields(None) + stm_descriptor_done() + return 0 + t, cbuilder = self.compile(do_stm_setfield) + cbuilder.cmdexec('') + def test_getarrayitem_all_sizes(self): def 
do_stm_getarrayitem(argv): _play_with_getarrayitem(None) From noreply at buildbot.pypy.org Tue Jan 17 14:52:52 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 17 Jan 2012 14:52:52 +0100 (CET) Subject: [pypy-commit] pypy stm: (antocuni, arigo) Message-ID: <20120117135252.033EA82110@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51399:4ef670f3a925 Date: 2012-01-17 12:39 +0100 http://bitbucket.org/pypy/pypy/changeset/4ef670f3a925/ Log: (antocuni, arigo) Refactored targetdemo.py to use the new interface. Allow debug_start/debug_print/debug_stop to work (as long as you don't say PYPYLOG=..:file) by making the flag thread-local. diff --git a/pypy/translator/c/src/debug_print.c b/pypy/translator/c/src/debug_print.c --- a/pypy/translator/c/src/debug_print.c +++ b/pypy/translator/c/src/debug_print.c @@ -15,8 +15,8 @@ #include "src/profiling.h" #include "src/debug_print.h" -long pypy_have_debug_prints = -1; -FILE *pypy_debug_file = NULL; +__thread long pypy_have_debug_prints = -1; +FILE *pypy_debug_file = NULL; /* XXX make it thread-local too?
*/ static unsigned char debug_ready = 0; static unsigned char debug_profile = 0; static char *debug_start_colors_1 = ""; diff --git a/pypy/translator/c/src/debug_print.h b/pypy/translator/c/src/debug_print.h --- a/pypy/translator/c/src/debug_print.h +++ b/pypy/translator/c/src/debug_print.h @@ -38,7 +38,7 @@ void pypy_debug_stop(const char *category); long pypy_debug_offset(void); -extern long pypy_have_debug_prints; +extern __thread long pypy_have_debug_prints; extern FILE *pypy_debug_file; #define OP_LL_READ_TIMESTAMP(val) READ_TIMESTAMP(val) diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -1,6 +1,7 @@ import time from pypy.module.thread import ll_thread -#from pypy.translator.stm import rstm +from pypy.rlib import rstm +from pypy.rlib.debug import debug_print class Node: @@ -15,8 +16,13 @@ anchor = Node(-1) glob = Global() +class Arg: + _alloc_nonmovable_ = True -def add_at_end_of_chained_list(node, value): + +def add_at_end_of_chained_list(arg): + node = arg.anchor + value = arg.value x = Node(value) while node.next: node = node.next @@ -46,14 +52,23 @@ print "check ok!" -def run_me(): - print "thread starting..." - for i in range(glob.LENGTH): - add_at_end_of_chained_list(glob.anchor, i) - rstm.transaction_boundary() +def increment_done(arg): print "thread done."
glob.done += 1 +def run_me(): + debug_print("thread starting...") + arg = Arg() + rstm.descriptor_init() + try: + for i in range(glob.LENGTH): + arg.anchor = glob.anchor + arg.value = i + rstm.perform_transaction(add_at_end_of_chained_list, Arg, arg) + rstm.perform_transaction(increment_done, Arg, arg) + finally: + rstm.descriptor_done() + # __________ Entry point __________ From noreply at buildbot.pypy.org Tue Jan 17 14:52:53 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 17 Jan 2012 14:52:53 +0100 (CET) Subject: [pypy-commit] pypy default: Use default=False, and enable it only in -O2/O3/Ojit, like the Message-ID: <20120117135253.29A5382C03@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51400:3d6e00235d87 Date: 2011-11-05 19:36 +0100 http://bitbucket.org/pypy/pypy/changeset/3d6e00235d87/ Log: Use default=False, and enable it only in -O2/O3/Ojit, like the other optimizations. Fixes an issue if weakrefs are disabled. (transplanted from 5eb3fc96c89290e270737f0652388f29154d3b2c) diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -340,7 +340,7 @@ requires=[("objspace.std.builtinshortcut", True)]), BoolOption("withidentitydict", "track types that override __hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not", - default=True), + default=False), ]), ]) @@ -370,6 +370,7 @@ config.objspace.std.suggest(getattributeshortcut=True) config.objspace.std.suggest(newshortcut=True) config.objspace.std.suggest(withspecialisedtuple=True) + config.objspace.std.suggest(withidentitydict=True) #if not IS_64_BITS: # config.objspace.std.suggest(withsmalllong=True) From noreply at buildbot.pypy.org Tue Jan 17 14:52:54 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 17 Jan 2012 14:52:54 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120117135254.6308382CF4@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo
Branch: Changeset: r51401:f34f0c11299f Date: 2012-01-17 14:52 +0100 http://bitbucket.org/pypy/pypy/changeset/f34f0c11299f/ Log: merge heads diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -64,6 +64,12 @@ check(', '.join([u'a']), u'a') check(', '.join(['a', u'b']), u'a, b') check(u', '.join(['a', 'b']), u'a, b') + try: + u''.join([u'a', 2, 3]) + except TypeError, e: + assert 'sequence item 1' in str(e) + else: + raise Exception("DID NOT RAISE") if sys.version_info >= (2,3): def test_contains_ex(self): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -216,21 +216,21 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - prealloc_size = 0 + prealloc_size = len(self) * (size - 1) for i in range(size): - prealloc_size += len(space.unicode_w(list_w[i])) + try: + prealloc_size += len(space.unicode_w(list_w[i])) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + raise operationerrfmt(space.w_TypeError, + "sequence item %d: expected string or Unicode", i) sb = UnicodeBuilder(prealloc_size) for i in range(size): if self and i != 0: sb.append(self) w_s = list_w[i] - try: - sb.append(space.unicode_w(w_s)) - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - raise operationerrfmt(space.w_TypeError, - "sequence item %d: expected string or Unicode", i) + sb.append(space.unicode_w(w_s)) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): diff --git a/pypy/rlib/longlong2float.py b/pypy/rlib/longlong2float.py --- a/pypy/rlib/longlong2float.py +++
b/pypy/rlib/longlong2float.py @@ -79,19 +79,23 @@ longlong2float = rffi.llexternal( "pypy__longlong2float", [rffi.LONGLONG], rffi.DOUBLE, _callable=longlong2float_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__longlong2float") float2longlong = rffi.llexternal( "pypy__float2longlong", [rffi.DOUBLE], rffi.LONGLONG, _callable=float2longlong_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__float2longlong") uint2singlefloat = rffi.llexternal( "pypy__uint2singlefloat", [rffi.UINT], rffi.FLOAT, _callable=uint2singlefloat_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__uint2singlefloat") singlefloat2uint = rffi.llexternal( "pypy__singlefloat2uint", [rffi.FLOAT], rffi.UINT, _callable=singlefloat2uint_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__singlefloat2uint") diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -420,7 +420,7 @@ vobj.concretetype.TO._gckind == 'gc') else: from pypy.rpython.ootypesystem import ootype - ok = isinstance(vobj.concretetype, ootype.Instance) + ok = isinstance(vobj.concretetype, (ootype.Instance, ootype.BuiltinType)) if not ok: from pypy.rpython.error import TyperError raise TyperError("compute_unique_id() cannot be applied to" diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -512,6 +512,7 @@ "ll_append_char": Meth([CHARTP], Void), 
"ll_append": Meth([STRINGTP], Void), "ll_build": Meth([], STRINGTP), + "ll_getlength": Meth([], Signed), }) self._setup_methods({}) @@ -1376,6 +1377,9 @@ def _cast_to_object(self): return make_object(self) + def _identityhash(self): + return object.__hash__(self) + class _string(_builtin_type): def __init__(self, STRING, value = ''): @@ -1543,6 +1547,9 @@ else: return make_unicode(u''.join(self._buf)) + def ll_getlength(self): + return self.ll_build().ll_strlen() + class _null_string_builder(_null_mixin(_string_builder), _string_builder): def __init__(self, STRING_BUILDER): self.__dict__["_TYPE"] = STRING_BUILDER diff --git a/pypy/rpython/ootypesystem/rbuilder.py b/pypy/rpython/ootypesystem/rbuilder.py --- a/pypy/rpython/ootypesystem/rbuilder.py +++ b/pypy/rpython/ootypesystem/rbuilder.py @@ -21,6 +21,10 @@ builder.ll_append_char(char) @staticmethod + def ll_getlength(builder): + return builder.ll_getlength() + + @staticmethod def ll_append(builder, string): builder.ll_append(string) diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -124,9 +124,5 @@ pass class TestOOtype(BaseTestStringBuilder, OORtypeMixin): - def test_string_getlength(self): - py.test.skip("getlength(): not implemented on ootype") - def test_unicode_getlength(self): - py.test.skip("getlength(): not implemented on ootype") def test_append_charpsize(self): py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -463,6 +463,31 @@ assert x1 == intmask(x0) assert x3 == intmask(x2) + def test_id_on_builtins(self): + from pypy.rlib.objectmodel import compute_unique_id + from pypy.rlib.rstring import StringBuilder, UnicodeBuilder + def fn(): + return (compute_unique_id("foo"), + compute_unique_id(u"bar"), + 
compute_unique_id([1]), + compute_unique_id({"foo": 3}), + compute_unique_id(StringBuilder()), + compute_unique_id(UnicodeBuilder())) + res = self.interpret(fn, []) + for id in self.ll_unpack_tuple(res, 6): + assert isinstance(id, (int, r_longlong)) + + def test_uniqueness_of_id_on_strings(self): + from pypy.rlib.objectmodel import compute_unique_id + def fn(s1, s2): + return (compute_unique_id(s1), compute_unique_id(s2)) + + s1 = "foo" + s2 = ''.join(['f','oo']) + res = self.interpret(fn, [self.string_to_ll(s1), self.string_to_ll(s2)]) + i1, i2 = self.ll_unpack_tuple(res, 2) + assert i1 != i2 + def test_cast_primitive(self): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): diff --git a/pypy/translator/jvm/builtin.py b/pypy/translator/jvm/builtin.py --- a/pypy/translator/jvm/builtin.py +++ b/pypy/translator/jvm/builtin.py @@ -84,6 +84,9 @@ (ootype.StringBuilder.__class__, "ll_build"): jvm.Method.v(jStringBuilder, "toString", (), jString), + (ootype.StringBuilder.__class__, "ll_getlength"): + jvm.Method.v(jStringBuilder, "length", (), jInt), + (ootype.String.__class__, "ll_hash"): jvm.Method.v(jString, "hashCode", (), jInt), diff --git a/pypy/translator/jvm/database.py b/pypy/translator/jvm/database.py --- a/pypy/translator/jvm/database.py +++ b/pypy/translator/jvm/database.py @@ -358,7 +358,7 @@ ootype.Unsigned:jvm.PYPYSERIALIZEUINT, ootype.SignedLongLong:jvm.LONGTOSTRINGL, ootype.UnsignedLongLong: jvm.PYPYSERIALIZEULONG, - ootype.Float:jvm.DOUBLETOSTRINGD, + ootype.Float:jvm.PYPYSERIALIZEDOUBLE, ootype.Bool:jvm.PYPYSERIALIZEBOOLEAN, ootype.Void:jvm.PYPYSERIALIZEVOID, ootype.Char:jvm.PYPYESCAPEDCHAR, diff --git a/pypy/translator/jvm/metavm.py b/pypy/translator/jvm/metavm.py --- a/pypy/translator/jvm/metavm.py +++ b/pypy/translator/jvm/metavm.py @@ -92,6 +92,7 @@ CASTS = { # FROM TO (ootype.Signed, ootype.UnsignedLongLong): jvm.I2L, + (ootype.Unsigned, ootype.UnsignedLongLong): jvm.I2L, (ootype.SignedLongLong, ootype.Signed): jvm.L2I, 
(ootype.UnsignedLongLong, ootype.Unsigned): jvm.L2I, (ootype.UnsignedLongLong, ootype.Signed): jvm.L2I, diff --git a/pypy/translator/jvm/opcodes.py b/pypy/translator/jvm/opcodes.py --- a/pypy/translator/jvm/opcodes.py +++ b/pypy/translator/jvm/opcodes.py @@ -101,6 +101,7 @@ 'jit_force_virtualizable': Ignore, 'jit_force_virtual': DoNothing, 'jit_force_quasi_immutable': Ignore, + 'jit_is_virtual': PushPrimitive(ootype.Bool, False), 'debug_assert': [], # TODO: implement? 'debug_start_traceback': Ignore, diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -283,6 +283,14 @@ } } + public double pypy__longlong2float(long l) { + return Double.longBitsToDouble(l); + } + + public long pypy__float2longlong(double d) { + return Double.doubleToRawLongBits(d); + } + public double ooparse_float(String s) { try { return Double.parseDouble(s); @@ -353,6 +361,19 @@ return "False"; } + public static String serialize_double(double d) { + if (Double.isNaN(d)) { + return "float(\"nan\")"; + } else if (Double.isInfinite(d)) { + if (d > 0) + return "float(\"inf\")"; + else + return "float(\"-inf\")"; + } else { + return Double.toString(d); + } + } + private static String format_char(char c) { String res = "\\x"; if (c <= 0x0F) res = res + "0"; diff --git a/pypy/translator/jvm/test/runtest.py b/pypy/translator/jvm/test/runtest.py --- a/pypy/translator/jvm/test/runtest.py +++ b/pypy/translator/jvm/test/runtest.py @@ -56,6 +56,7 @@ # CLI could-be duplicate class JvmGeneratedSourceWrapper(object): + def __init__(self, gensrc): """ gensrc is an instance of JvmGeneratedSource """ self.gensrc = gensrc diff --git a/pypy/translator/jvm/test/test_builder.py b/pypy/translator/jvm/test/test_builder.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_builder.py @@ -0,0 +1,7 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from 
pypy.rpython.test.test_rbuilder import BaseTestStringBuilder +import py + +class TestJvmStringBuilder(JvmTest, BaseTestStringBuilder): + def test_append_charpsize(self): + py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/translator/jvm/test/test_longlong2float.py b/pypy/translator/jvm/test/test_longlong2float.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_longlong2float.py @@ -0,0 +1,20 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rlib.longlong2float import * +from pypy.rlib.test.test_longlong2float import enum_floats +from pypy.rlib.test.test_longlong2float import fn as float2longlong2float +import py + +class TestLongLong2Float(JvmTest): + + def test_float2longlong_and_longlong2float(self): + def func(f): + return float2longlong2float(f) + + for f in enum_floats(): + assert repr(f) == repr(self.interpret(func, [f])) + + def test_uint2singlefloat(self): + py.test.skip("uint2singlefloat is not implemented in ootype") + + def test_singlefloat2uint(self): + py.test.skip("singlefloat2uint is not implemented in ootype") diff --git a/pypy/translator/jvm/typesystem.py b/pypy/translator/jvm/typesystem.py --- a/pypy/translator/jvm/typesystem.py +++ b/pypy/translator/jvm/typesystem.py @@ -955,6 +955,7 @@ PYPYSERIALIZEUINT = Method.s(jPyPy, 'serialize_uint', (jInt,), jString) PYPYSERIALIZEULONG = Method.s(jPyPy, 'serialize_ulonglong', (jLong,),jString) PYPYSERIALIZEVOID = Method.s(jPyPy, 'serialize_void', (), jString) +PYPYSERIALIZEDOUBLE = Method.s(jPyPy, 'serialize_double', (jDouble,), jString) PYPYESCAPEDCHAR = Method.s(jPyPy, 'escaped_char', (jChar,), jString) PYPYESCAPEDUNICHAR = Method.s(jPyPy, 'escaped_unichar', (jChar,), jString) PYPYESCAPEDSTRING = Method.s(jPyPy, 'escaped_string', (jString,), jString) diff --git a/pypy/translator/oosupport/test_template/cast.py b/pypy/translator/oosupport/test_template/cast.py --- a/pypy/translator/oosupport/test_template/cast.py +++ 
b/pypy/translator/oosupport/test_template/cast.py @@ -13,6 +13,9 @@ def to_longlong(x): return r_longlong(x) +def to_ulonglong(x): + return r_ulonglong(x) + def uint_to_int(x): return intmask(x) @@ -56,6 +59,9 @@ def test_unsignedlonglong_to_unsigned4(self): self.check(to_uint, [r_ulonglong(18446744073709551615l)]) # max 64 bit num + def test_unsigned_to_usignedlonglong(self): + self.check(to_ulonglong, [r_uint(42)]) + def test_uint_to_int(self): self.check(uint_to_int, [r_uint(sys.maxint+1)]) From noreply at buildbot.pypy.org Tue Jan 17 16:03:11 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 16:03:11 +0100 (CET) Subject: [pypy-commit] pypy release-1.8.x: Create a release branch Message-ID: <20120117150311.9E208820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: release-1.8.x Changeset: r51402:2b32f9c1533d Date: 2012-01-17 17:02 +0200 http://bitbucket.org/pypy/pypy/changeset/2b32f9c1533d/ Log: Create a release branch diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 8, 0, "final", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) From noreply at buildbot.pypy.org Tue Jan 17 16:04:43 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 16:04:43 +0100 (CET) Subject: [pypy-commit] pypy default: mark patchlevel as well Message-ID: <20120117150443.D8473820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51403:a4261375b359 Date: 2012-01-17 17:03 +0200 http://bitbucket.org/pypy/pypy/changeset/a4261375b359/ Log: mark patchlevel as well diff --git a/pypy/module/cpyext/include/patchlevel.h 
b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.7.1" +#define PYPY_VERSION "1.8.0" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ From noreply at buildbot.pypy.org Tue Jan 17 16:04:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 16:04:45 +0100 (CET) Subject: [pypy-commit] pypy default: update it here as well Message-ID: <20120117150445.0BC30820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51404:96334e1f84f3 Date: 2012-01-17 17:04 +0200 http://bitbucket.org/pypy/pypy/changeset/96334e1f84f3/ Log: update it here as well diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.8.0" +#define PYPY_VERSION "1.8.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration.
*/ diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) From noreply at buildbot.pypy.org Tue Jan 17 16:22:17 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Tue, 17 Jan 2012 16:22:17 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni,romain) Adapt the test to py3k Message-ID: <20120117152217.68F60820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51405:f3ca10db7f5e Date: 2012-01-17 14:44 +0100 http://bitbucket.org/pypy/pypy/changeset/f3ca10db7f5e/ Log: (antocuni,romain) Adapt the test to py3k diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -38,7 +38,7 @@ """, info=info) assert tree.type == syms.file_input assert info.encoding == "iso-8859-1" - sentence = u"u'Die Männer ärgen sich!'" + sentence = u"'Die Männer ärgen sich!'" input = (u"# coding: utf-7\nstuff = %s" % (sentence,)).encode("utf-7") tree = self.parse(input, info=info) assert info.encoding == "utf-7" From noreply at buildbot.pypy.org Tue Jan 17 16:22:18 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Tue, 17 Jan 2012 16:22:18 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain) started to parse the new tuple unpacking (only the case with no parenthesis is implemented so far) Message-ID: <20120117152218.94201820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51406:5dd563b2360f Date: 
2012-01-17 16:21 +0100 http://bitbucket.org/pypy/pypy/changeset/5dd563b2360f/ Log: (antocuni, romain) started to parse the new tuple unpacking (only the case with no parenthesis is implemented so far) diff --git a/pypy/interpreter/astcompiler/astbuilder.py b/pypy/interpreter/astcompiler/astbuilder.py --- a/pypy/interpreter/astcompiler/astbuilder.py +++ b/pypy/interpreter/astcompiler/astbuilder.py @@ -680,7 +680,7 @@ self.set_context(target_expr, ast.Store) targets.append(target_expr) value_child = stmt.children[-1] - if value_child.type == syms.testlist: + if value_child.type == syms.testlist or value_child.type == syms.testlist_star_expr: value_expr = self.handle_testlist(value_child) else: value_expr = self.handle_expr(value_child) diff --git a/pypy/interpreter/pyparser/data/Grammar3.2 b/pypy/interpreter/pyparser/data/Grammar3.2 --- a/pypy/interpreter/pyparser/data/Grammar3.2 +++ b/pypy/interpreter/pyparser/data/Grammar3.2 @@ -37,8 +37,9 @@ simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | nonlocal_stmt | assert_stmt) -expr_stmt: testlist (augassign (yield_expr|testlist) | - ('=' (yield_expr|testlist))*) +expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) | + ('=' (yield_expr|testlist_star_expr))*) +testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [','] augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' | '<<=' | '>>=' | '**=' | '//=') # For normal assignments, additional restrictions enforced by the interpreter @@ -93,6 +94,7 @@ not_test: 'not' not_test | comparison comparison: expr (comp_op expr)* comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' +star_expr: '*' expr expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* and_expr: shift_expr ('&' shift_expr)* diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py ---
a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -157,4 +157,5 @@ py.test.raises(SyntaxError, self.parse, '0777') def test_py3k_extended_unpacking(self): + self.parse('a, *rest, b = 1, 2, 3, 4, 5') self.parse('(a, *rest, b) = 1, 2, 3, 4, 5') From noreply at buildbot.pypy.org Tue Jan 17 17:20:32 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 17 Jan 2012 17:20:32 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain): most of the time the parser does not need a space, thus we instantiate it only for the tests (only one) which require it. This is useful because if we change the grammar, we might break the applevel code which is run during the initialization phase, thus resulting in an annoying and unrelated failure Message-ID: <20120117162032.C8CDE820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51407:dc00842db627 Date: 2012-01-17 17:20 +0100 http://bitbucket.org/pypy/pypy/changeset/dc00842db627/ Log: (antocuni, romain): most of the time the parser does not need a space, thus we instantiate it only for the tests (only one) which require it.
This is useful because if we change the grammar, we might break the applevel code which is run during the initialization phase, thus resulting in an annoying and unrelated failure diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -6,10 +6,10 @@ from pypy.interpreter.astcompiler import consts -class TestPythonParser: +class TestPythonParserWithoutSpace: def setup_class(self): - self.parser = pyparse.PythonParser(self.space) + self.parser = pyparse.PythonParser(None) def parse(self, source, mode="exec", info=None): if info is None: @@ -31,39 +31,6 @@ tree = self.parse("name = 32") assert self.parser.root is None - def test_encoding(self): - info = pyparse.CompileInfo("", "exec") - tree = self.parse("""# coding: latin-1 -stuff = "nothing" -""", info=info) - assert tree.type == syms.file_input - assert info.encoding == "iso-8859-1" - sentence = u"'Die Männer ärgen sich!'" - input = (u"# coding: utf-7\nstuff = %s" % (sentence,)).encode("utf-7") - tree = self.parse(input, info=info) - assert info.encoding == "utf-7" - input = "# coding: iso-8859-15\nx" - self.parse(input, info=info) - assert info.encoding == "iso-8859-15" - input = "\xEF\xBB\xBF# coding: utf-8\nx" - self.parse(input, info=info) - assert info.encoding == "utf-8" - input = "# coding: utf-8\nx" - info.flags |= consts.PyCF_SOURCE_IS_UTF8 - exc = py.test.raises(SyntaxError, self.parse, input, info=info).value - info.flags &= ~consts.PyCF_SOURCE_IS_UTF8 - assert exc.msg == "coding declaration in unicode string" - input = "\xEF\xBB\xBF# coding: latin-1\nx" - exc = py.test.raises(SyntaxError, self.parse, input).value - assert exc.msg == "UTF-8 BOM with non-utf8 coding cookie" - input = "# coding: not-here" - exc = py.test.raises(SyntaxError, self.parse, input).value - assert exc.msg == "Unknown encoding: not-here" - input = u"# coding: 
ascii\n\xe2".encode('utf-8') - exc = py.test.raises(SyntaxError, self.parse, input).value - assert exc.msg == ("'ascii' codec can't decode byte 0xc3 " - "in position 16: ordinal not in range(128)") - def test_encoding_pep3120(self): info = pyparse.CompileInfo("", "exec") tree = self.parse("""foo = '日本'""", info=info) @@ -159,3 +126,43 @@ def test_py3k_extended_unpacking(self): self.parse('a, *rest, b = 1, 2, 3, 4, 5') self.parse('(a, *rest, b) = 1, 2, 3, 4, 5') + + +class TestPythonParserWithSpace(TestPythonParserWithoutSpace): + + def setup_class(self): + self.parser = pyparse.PythonParser(self.space) + + def test_encoding(self): + info = pyparse.CompileInfo("", "exec") + tree = self.parse("""# coding: latin-1 +stuff = "nothing" +""", info=info) + assert tree.type == syms.file_input + assert info.encoding == "iso-8859-1" + sentence = u"'Die Männer ärgen sich!'" + input = (u"# coding: utf-7\nstuff = %s" % (sentence,)).encode("utf-7") + tree = self.parse(input, info=info) + assert info.encoding == "utf-7" + input = "# coding: iso-8859-15\nx" + self.parse(input, info=info) + assert info.encoding == "iso-8859-15" + input = "\xEF\xBB\xBF# coding: utf-8\nx" + self.parse(input, info=info) + assert info.encoding == "utf-8" + input = "# coding: utf-8\nx" + info.flags |= consts.PyCF_SOURCE_IS_UTF8 + exc = py.test.raises(SyntaxError, self.parse, input, info=info).value + info.flags &= ~consts.PyCF_SOURCE_IS_UTF8 + assert exc.msg == "coding declaration in unicode string" + input = "\xEF\xBB\xBF# coding: latin-1\nx" + exc = py.test.raises(SyntaxError, self.parse, input).value + assert exc.msg == "UTF-8 BOM with non-utf8 coding cookie" + input = "# coding: not-here" + exc = py.test.raises(SyntaxError, self.parse, input).value + assert exc.msg == "Unknown encoding: not-here" + input = u"# coding: ascii\n\xe2".encode('utf-8') + exc = py.test.raises(SyntaxError, self.parse, input).value + assert exc.msg == ("'ascii' codec can't decode byte 0xc3 " + "in position 16: ordinal not in 
range(128)") + From noreply at buildbot.pypy.org Tue Jan 17 17:21:51 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 17:21:51 +0100 (CET) Subject: [pypy-commit] benchmarks default: add cpython documentation generation by sphinx Message-ID: <20120117162151.C3D65710260@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r157:70c28be43161 Date: 2012-01-17 18:07 +0200 http://bitbucket.org/pypy/benchmarks/changeset/70c28be43161/ Log: add cpython documentation generation by sphinx diff too long, truncating to 10000 out of 486679 lines diff --git a/benchmarks.py b/benchmarks.py --- a/benchmarks.py +++ b/benchmarks.py @@ -144,3 +144,31 @@ result.append((name, data)) return result BM_translate.benchmark_name = 'trans' + +def BM_cpython_doc(base_python, changed_python, options): + from unladen_swallow.perf import RawResult + import subprocess, shutil + t = [] + + for python in [base_python, changed_python]: + maindir = relative('lib/cpython-doc') + builddir = os.path.join(os.path.join(maindir, 'tools'), 'build') + shutil.rmtree(builddir) + build = relative('lib/cpython-doc/tools/sphinx-build.py') + os.mkdir(builddir) + docdir = os.path.join(builddir, 'doctrees') + os.mkdir(docdir) + htmldir = os.path.join(builddir, 'html') + os.mkdir(htmldir) + args = python + [build, '-b', 'html', '-d', docdir, maindir, htmldir] + proc = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + out, err = proc.communicate() + retcode = proc.poll() + if retcode != 0: + print out + print err + raise Exception("sphinx-build.py failed") + t.append(float(out.splitlines()[-1].strip())) + return RawResult([t[0]], [t[1]]) + +BM_cpython_doc.benchmark_name = 'sphinx' diff --git a/lib/cpython-doc/ACKS.txt b/lib/cpython-doc/ACKS.txt new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/ACKS.txt @@ -0,0 +1,237 @@ +Contributors to the Python Documentation +---------------------------------------- + +This section lists people who have contributed in some way to
the Python +documentation. It is probably not complete -- if you feel that you or +anyone else should be on this list, please let us know (send email to +docs at python.org), and we'll be glad to correct the problem. + +.. acks:: + + * Aahz + * Michael Abbott + * Steve Alexander + * Jim Ahlstrom + * Fred Allen + * A. Amoroso + * Pehr Anderson + * Oliver Andrich + * Heidi Annexstad + * Jesús Cea Avión + * Manuel Balsera + * Daniel Barclay + * Chris Barker + * Don Bashford + * Anthony Baxter + * Alexander Belopolsky + * Bennett Benson + * Jonathan Black + * Robin Boerdijk + * Michal Bozon + * Aaron Brancotti + * Georg Brandl + * Keith Briggs + * Ian Bruntlett + * Lee Busby + * Arnaud Calmettes + * Lorenzo M. Catucci + * Carl Cerecke + * Mauro Cicognini + * Gilles Civario + * Mike Clarkson + * Steve Clift + * Dave Cole + * Matthew Cowles + * Jeremy Craven + * Andrew Dalke + * Ben Darnell + * L. Peter Deutsch + * Robert Donohue + * Fred L. Drake, Jr. + * Jacques Ducasse + * Josip Dzolonga + * Jeff Epler + * Michael Ernst + * Blame Andy Eskilsson + * Carey Evans + * Martijn Faassen + * Carl Feynman + * Dan Finnie + * Hernán Martínez Foffani + * Michael Foord + * Stefan Franke + * Jim Fulton + * Peter Funk + * Lele Gaifax + * Matthew Gallagher + * Gabriel Genellina + * Ben Gertzfield + * Nadim Ghaznavi + * Jonathan Giddy + * Matt Giuca + * Shelley Gooch + * Nathaniel Gray + * Grant Griffin + * Thomas Guettler + * Anders Hammarquist + * Mark Hammond + * Harald Hanche-Olsen + * Manus Hand + * Gerhard Häring + * Travis B. Hartwell + * Tim Hatch + * Janko Hauser + * Ben Hayden + * Thomas Heller + * Bernhard Herzog + * Magnus L. Hetland + * Konrad Hinsen + * Stefan Hoffmeister + * Albert Hofkamp + * Gregor Hoffleit + * Steve Holden + * Thomas Holenstein + * Gerrit Holl + * Rob Hooft + * Brian Hooper + * Randall Hopper + * Michael Hudson + * Eric Huss + * Jeremy Hylton + * Roger Irwin + * Jack Jansen + * Philip H. 
Jensen + * Pedro Diaz Jimenez + * Kent Johnson + * Lucas de Jonge + * Andreas Jung + * Robert Kern + * Jim Kerr + * Jan Kim + * Kamil Kisiel + * Greg Kochanski + * Guido Kollerie + * Peter A. Koren + * Daniel Kozan + * Andrew M. Kuchling + * Dave Kuhlman + * Erno Kuusela + * Ross Lagerwall + * Thomas Lamb + * Detlef Lannert + * Piers Lauder + * Glyph Lefkowitz + * Robert Lehmann + * Marc-André Lemburg + * Ross Light + * Gediminas Liktaras + * Ulf A. Lindgren + * Everett Lipman + * Mirko Liss + * Martin von Löwis + * Fredrik Lundh + * Jeff MacDonald + * John Machin + * Andrew MacIntyre + * Vladimir Marangozov + * Vincent Marchetti + * Westley Martínez + * Laura Matson + * Daniel May + * Rebecca McCreary + * Doug Mennella + * Paolo Milani + * Skip Montanaro + * Paul Moore + * Ross Moore + * Sjoerd Mullender + * Dale Nagata + * Trent Nelson + * Michal Nowikowski + * Steffen Daode Nurpmeso + * Ng Pheng Siong + * Koray Oner + * Tomas Oppelstrup + * Denis S. Otkidach + * Zooko O'Whielacronx + * Shriphani Palakodety + * William Park + * Joonas Paalasmaa + * Harri Pasanen + * Bo Peng + * Tim Peters + * Benjamin Peterson + * Christopher Petrilli + * Justin D. Pettit + * Chris Phoenix + * François Pinard + * Paul Prescod + * Eric S. Raymond + * Edward K. Ream + * Terry J. Reedy + * Sean Reifschneider + * Bernhard Reiter + * Armin Rigo + * Wes Rishel + * Armin Ronacher + * Jim Roskind + * Guido van Rossum + * Donald Wallace Rouse II + * Mark Russell + * Nick Russo + * Chris Ryland + * Constantina S. + * Hugh Sasse + * Bob Savage + * Scott Schram + * Neil Schemenauer + * Barry Scott + * Joakim Sernbrant + * Justin Sheehy + * Charlie Shepherd + * Yue Shuaijie + * SilentGhost + * Michael Simcich + * Ionel Simionescu + * Michael Sloan + * Gregory P. 
Smith + * Roy Smith + * Clay Spence + * Nicholas Spies + * Tage Stabell-Kulo + * Frank Stajano + * Anthony Starks + * Greg Stein + * Peter Stoehr + * Mark Summerfield + * Reuben Sumner + * Kalle Svensson + * Jim Tittsler + * David Turner + * Sandro Tosi + * Ville Vainio + * Nadeem Vawda + * Martijn Vries + * Charles G. Waldman + * Greg Ward + * Barry Warsaw + * Corran Webster + * Glyn Webster + * Bob Weiner + * Eddy Welbourne + * Jeff Wheeler + * Mats Wichmann + * Gerry Wiener + * Timothy Wild + * Paul Winkler + * Collin Winter + * Blake Winton + * Dan Wolfe + * Adam Woodbeck + * Steven Work + * Thomas Wouters + * Ka-Ping Yee + * Rory Yorke + * Moshe Zadka + * Milan Zamazal + * Cheng Zhang diff --git a/lib/cpython-doc/Makefile b/lib/cpython-doc/Makefile new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/Makefile @@ -0,0 +1,193 @@ +# +# Makefile for Python documentation +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# + +# You can set these variables from the command line. +PYTHON = python +SVNROOT = http://svn.python.org/projects +SPHINXOPTS = +PAPER = +SOURCES = +DISTVERSION = $(shell $(PYTHON) tools/sphinxext/patchlevel.py) + +ALLSPHINXOPTS = -b $(BUILDER) -d build/doctrees -D latex_paper_size=$(PAPER) \ + $(SPHINXOPTS) . 
build/$(BUILDER) $(SOURCES) + +.PHONY: help checkout update build html htmlhelp latex text changes linkcheck \ + suspicious coverage doctest pydoc-topics htmlview clean dist check serve \ + autobuild-dev autobuild-stable + +help: + @echo "Please use \`make ' where is one of" + @echo " clean to remove build files" + @echo " update to update build tools" + @echo " html to make standalone HTML files" + @echo " htmlhelp to make HTML files and a HTML help project" + @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " text to make plain text files" + @echo " epub to make EPUB files" + @echo " changes to make an overview over all changed/added/deprecated items" + @echo " linkcheck to check all external links for integrity" + @echo " coverage to check documentation coverage for library and C API" + @echo " doctest to run doctests in the documentation" + @echo " pydoc-topics to regenerate the pydoc topics file" + @echo " dist to create a \"dist\" directory with archived docs for download" + @echo " suspicious to check for suspicious markup in output text" + @echo " check to run a check for frequent markup errors" + @echo " serve to serve the documentation on the localhost (8000)" + +# Note: if you update versions here, do the same in make.bat and README.txt +checkout: + @if [ ! -d tools/sphinx ]; then \ + echo "Checking out Sphinx..."; \ + svn checkout $(SVNROOT)/external/Sphinx-1.0.7/sphinx tools/sphinx; \ + fi + @if [ ! -d tools/docutils ]; then \ + echo "Checking out Docutils..."; \ + svn checkout $(SVNROOT)/external/docutils-0.6/docutils tools/docutils; \ + fi + @if [ ! -d tools/jinja2 ]; then \ + echo "Checking out Jinja..."; \ + svn checkout $(SVNROOT)/external/Jinja-2.3.1/jinja2 tools/jinja2; \ + fi + @if [ ! 
-d tools/pygments ]; then \ + echo "Checking out Pygments..."; \ + svn checkout $(SVNROOT)/external/Pygments-1.3.1/pygments tools/pygments; \ + fi + +update: clean checkout + +build: checkout + mkdir -p build/$(BUILDER) build/doctrees + echo $(ALLSPHINXOPTS) + $(PYTHON) tools/sphinx-build.py $(ALLSPHINXOPTS) + @echo + +html: BUILDER = html +html: build + @echo "Build finished. The HTML pages are in build/html." + +htmlhelp: BUILDER = htmlhelp +htmlhelp: build + @echo "Build finished; now you can run HTML Help Workshop with the" \ + "build/htmlhelp/pydoc.hhp project file." + +latex: BUILDER = latex +latex: build + @echo "Build finished; the LaTeX files are in build/latex." + @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ + "run these through (pdf)latex." + +text: BUILDER = text +text: build + @echo "Build finished; the text files are in build/text." + +epub: BUILDER = epub +epub: build + @echo "Build finished; the epub files are in build/epub." + +changes: BUILDER = changes +changes: build + @echo "The overview file is in build/changes." + +linkcheck: BUILDER = linkcheck +linkcheck: build + @echo "Link check complete; look for any errors in the above output" \ + "or in build/$(BUILDER)/output.txt" + +suspicious: BUILDER = suspicious +suspicious: build + @echo "Suspicious check complete; look for any errors in the above output" \ + "or in build/$(BUILDER)/suspicious.csv. If all issues are false" \ + "positives, append that file to tools/sphinxext/susp-ignored.csv." 
+ +coverage: BUILDER = coverage +coverage: build + @echo "Coverage finished; see c.txt and python.txt in build/coverage" + +doctest: BUILDER = doctest +doctest: build + @echo "Testing of doctests in the sources finished, look at the" \ + "results in build/doctest/output.txt" + +pydoc-topics: BUILDER = pydoc-topics +pydoc-topics: build + @echo "Building finished; now copy build/pydoc-topics/topics.py" \ + "to ../Lib/pydoc_data/topics.py" + +htmlview: html + $(PYTHON) -c "import webbrowser; webbrowser.open('build/html/index.html')" + +clean: + -rm -rf build/* + +dist: + rm -rf dist + mkdir -p dist + + # archive the HTML + make html + cp -pPR build/html dist/python-$(DISTVERSION)-docs-html + tar -C dist -cf dist/python-$(DISTVERSION)-docs-html.tar python-$(DISTVERSION)-docs-html + bzip2 -9 -k dist/python-$(DISTVERSION)-docs-html.tar + (cd dist; zip -q -r -9 python-$(DISTVERSION)-docs-html.zip python-$(DISTVERSION)-docs-html) + rm -r dist/python-$(DISTVERSION)-docs-html + rm dist/python-$(DISTVERSION)-docs-html.tar + + # archive the text build + make text + cp -pPR build/text dist/python-$(DISTVERSION)-docs-text + tar -C dist -cf dist/python-$(DISTVERSION)-docs-text.tar python-$(DISTVERSION)-docs-text + bzip2 -9 -k dist/python-$(DISTVERSION)-docs-text.tar + (cd dist; zip -q -r -9 python-$(DISTVERSION)-docs-text.zip python-$(DISTVERSION)-docs-text) + rm -r dist/python-$(DISTVERSION)-docs-text + rm dist/python-$(DISTVERSION)-docs-text.tar + + # archive the A4 latex + rm -rf build/latex + make latex PAPER=a4 + -sed -i 's/makeindex/makeindex -q/' build/latex/Makefile + (cd build/latex; make clean && make all-pdf && make FMT=pdf zip bz2) + cp build/latex/docs-pdf.zip dist/python-$(DISTVERSION)-docs-pdf-a4.zip + cp build/latex/docs-pdf.tar.bz2 dist/python-$(DISTVERSION)-docs-pdf-a4.tar.bz2 + + # archive the letter latex + rm -rf build/latex + make latex PAPER=letter + -sed -i 's/makeindex/makeindex -q/' build/latex/Makefile + (cd build/latex; make clean && make all-pdf && 
make FMT=pdf zip bz2) + cp build/latex/docs-pdf.zip dist/python-$(DISTVERSION)-docs-pdf-letter.zip + cp build/latex/docs-pdf.tar.bz2 dist/python-$(DISTVERSION)-docs-pdf-letter.tar.bz2 + + # archive the epub build + rm -rf build/epub + make epub + mkdir -p dist/python-$(DISTVERSION)-docs-epub + cp -pPR build/epub/*.epub dist/python-$(DISTVERSION)-docs-epub/ + tar -C dist -cf dist/python-$(DISTVERSION)-docs-epub.tar python-$(DISTVERSION)-docs-epub + bzip2 -9 -k dist/python-$(DISTVERSION)-docs-epub.tar + (cd dist; zip -q -r -9 python-$(DISTVERSION)-docs-epub.zip python-$(DISTVERSION)-docs-epub) + rm -r dist/python-$(DISTVERSION)-docs-epub + rm dist/python-$(DISTVERSION)-docs-epub.tar + +check: + $(PYTHON) tools/rstlint.py -i tools + +serve: + ../Tools/scripts/serve.py build/html + +# Targets for daily automated doc build + +# for development releases: always build +autobuild-dev: + make update + make dist SPHINXOPTS='-A daily=1' + +# for stable releases: only build if not in pre-release stage (alpha, beta, rc) +autobuild-stable: + @case $(DISTVERSION) in *[abc]*) \ + echo "Not building; $(DISTVERSION) is not a release version."; \ + exit 1;; \ + esac + @make autobuild-dev diff --git a/lib/cpython-doc/README.txt b/lib/cpython-doc/README.txt new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/README.txt @@ -0,0 +1,152 @@ +Python Documentation README +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This directory contains the reStructuredText (reST) sources to the Python +documentation. You don't need to build them yourself, prebuilt versions are +available at http://docs.python.org/download/. + +Documentation on the authoring Python documentation, including information about +both style and markup, is available in the "Documenting Python" chapter of the +documentation. There's also a chapter intended to point out differences to +those familiar with the previous docs written in LaTeX. 
+ + +Building the docs +================= + +You need to have Python 2.4 or higher installed; the toolset used to build the +docs is written in Python. It is called *Sphinx*, it is not included in this +tree, but maintained separately. Also needed are the docutils, supplying the +base markup that Sphinx uses, Jinja, a templating engine, and optionally +Pygments, a code highlighter. + + +Using make +---------- + +Luckily, a Makefile has been prepared so that on Unix, provided you have +installed Python and Subversion, you can just run :: + + make html + +to check out the necessary toolset in the `tools/` subdirectory and build the +HTML output files. To view the generated HTML, point your favorite browser at +the top-level index `build/html/index.html` after running "make". + +To use a Python interpreter that's not called ``python``, use the standard +way to set Makefile variables, using e.g. :: + + make html PYTHON=/usr/bin/python2.5 + +Available make targets are: + + * "html", which builds standalone HTML files for offline viewing. + + * "htmlhelp", which builds HTML files and a HTML Help project file usable to + convert them into a single Compiled HTML (.chm) file -- these are popular + under Microsoft Windows, but very handy on every platform. + + To create the CHM file, you need to run the Microsoft HTML Help Workshop over + the generated project (.hhp) file. + + * "latex", which builds LaTeX source files as input to "pdflatex" to produce + PDF documents. + + * "text", which builds a plain text file for each source file. + + * "epub", which builds an EPUB document, suitable to be viewed on e-book + readers. + + * "linkcheck", which checks all external references to see whether they are + broken, redirected or malformed, and outputs this information to stdout as + well as a plain-text (.txt) file. + + * "changes", which builds an overview over all versionadded/versionchanged/ + deprecated items in the current version. 
This is meant as a help for the + writer of the "What's New" document. + + * "coverage", which builds a coverage overview for standard library modules and + C API. + + * "pydoc-topics", which builds a Python module containing a dictionary with + plain text documentation for the labels defined in + `tools/sphinxext/pyspecific.py` -- pydoc needs these to show topic and + keyword help. + +A "make update" updates the Subversion checkouts in `tools/`. + + +Without make +------------ + +You'll need to install the Sphinx package, either by checking it out via :: + + svn co http://svn.python.org/projects/external/Sphinx-1.0.7/sphinx tools/sphinx + +or by installing it from PyPI. + +Then, you need to install Docutils, either by checking it out via :: + + svn co http://svn.python.org/projects/external/docutils-0.6/docutils tools/docutils + +or by installing it from http://docutils.sf.net/. + +You also need Jinja2, either by checking it out via :: + + svn co http://svn.python.org/projects/external/Jinja-2.3.1/jinja2 tools/jinja2 + +or by installing it from PyPI. + +You can optionally also install Pygments, either as a checkout via :: + + svn co http://svn.python.org/projects/external/Pygments-1.3.1/pygments tools/pygments + +or from PyPI at http://pypi.python.org/pypi/Pygments. + + +Then, make an output directory, e.g. under `build/`, and run :: + + python tools/sphinx-build.py -b . build/ + +where `` is one of html, text, latex, or htmlhelp (for explanations see +the make targets above). + + +Contributing +============ + +Bugs in the content should be reported to the Python bug tracker at +http://bugs.python.org. + +Bugs in the toolset should be reported in the Sphinx bug tracker at +http://www.bitbucket.org/birkenfeld/sphinx/issues/. + +You can also send a mail to the Python Documentation Team at docs at python.org, +and we will process your request as soon as possible. + +If you want to help the Documentation Team, you are always welcome. 
Just send +a mail to docs at python.org. + + +Copyright notice +================ + +The Python source is copyrighted, but you can freely use and copy it +as long as you don't change or remove the copyright notice: + +---------------------------------------------------------------------- +Copyright (c) 2000-2012 Python Software Foundation. +All rights reserved. + +Copyright (c) 2000 BeOpen.com. +All rights reserved. + +Copyright (c) 1995-2000 Corporation for National Research Initiatives. +All rights reserved. + +Copyright (c) 1991-1995 Stichting Mathematisch Centrum. +All rights reserved. + +See the file "license.rst" for information on usage and redistribution +of this file, and for a DISCLAIMER OF ALL WARRANTIES. +---------------------------------------------------------------------- diff --git a/lib/cpython-doc/about.rst b/lib/cpython-doc/about.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/about.rst @@ -0,0 +1,36 @@ +===================== +About these documents +===================== + + +These documents are generated from `reStructuredText`_ sources by `Sphinx`_, a +document processor specifically written for the Python documentation. + +.. _reStructuredText: http://docutils.sf.net/rst.html +.. _Sphinx: http://sphinx.pocoo.org/ + +.. In the online version of these documents, you can submit comments and suggest + changes directly on the documentation pages. + +Development of the documentation and its toolchain takes place on the +docs at python.org mailing list. We're always looking for volunteers wanting +to help with the docs, so feel free to send a mail there! + +Many thanks go to: + +* Fred L. Drake, Jr., the creator of the original Python documentation toolset + and writer of much of the content; +* the `Docutils `_ project for creating + reStructuredText and the Docutils suite; +* Fredrik Lundh for his `Alternative Python Reference + `_ project from which Sphinx got many good + ideas. 
+ +See :ref:`reporting-bugs` for information how to report bugs in this +documentation, or Python itself. + +.. including the ACKS file here so that it can be maintained separately +.. include:: ACKS.txt + +It is only with the input and contributions of the Python community +that Python has such wonderful documentation -- Thank You! diff --git a/lib/cpython-doc/bugs.rst b/lib/cpython-doc/bugs.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/bugs.rst @@ -0,0 +1,75 @@ +.. _reporting-bugs: + +************** +Reporting Bugs +************** + +Python is a mature programming language which has established a reputation for +stability. In order to maintain this reputation, the developers would like to +know of any deficiencies you find in Python. + + +Documentation bugs +================== + +If you find a bug in this documentation or would like to propose an improvement, +please send an e-mail to docs at python.org describing the bug and where you found +it. If you have a suggestion how to fix it, include that as well. + +docs at python.org is a mailing list run by volunteers; your request will be +noticed, even if it takes a while to be processed. + +Of course, if you want a more persistent record of your issue, you can use the +issue tracker for documentation bugs as well. + + +Using the Python issue tracker +============================== + +Bug reports for Python itself should be submitted via the Python Bug Tracker +(http://bugs.python.org/). The bug tracker offers a Web form which allows +pertinent information to be entered and submitted to the developers. + +The first step in filing a report is to determine whether the problem has +already been reported. The advantage in doing so, aside from saving the +developers time, is that you learn what has been done to fix it; it may be that +the problem has already been fixed for the next release, or additional +information is needed (in which case you are welcome to provide it if you can!). 
+To do this, search the bug database using the search box on the top of the page. + +If the problem you're reporting is not already in the bug tracker, go back to +the Python Bug Tracker and log in. If you don't already have a tracker account, +select the "Register" link or, if you use OpenID, one of the OpenID provider +logos in the sidebar. It is not possible to submit a bug report anonymously. + +Being now logged in, you can submit a bug. Select the "Create New" link in the +sidebar to open the bug reporting form. + +The submission form has a number of fields. For the "Title" field, enter a +*very* short description of the problem; less than ten words is good. In the +"Type" field, select the type of your problem; also select the "Component" and +"Versions" to which the bug relates. + +In the "Comment" field, describe the problem in detail, including what you +expected to happen and what did happen. Be sure to include whether any +extension modules were involved, and what hardware and software platform you +were using (including version information as appropriate). + +Each bug report will be assigned to a developer who will determine what needs to +be done to correct the problem. You will receive an update each time action is +taken on the bug. + + +.. seealso:: + + `Python Developer's Guide `_ + Detailed description of the issue workflow and developers tools. + + `How to Report Bugs Effectively `_ + Article which goes into some detail about how to create a useful bug report. + This describes what kind of information is useful and why it is useful. + + `Bug Writing Guidelines `_ + Information about writing a good bug report. Some of this is specific to the + Mozilla project, but describes general good practices. + diff --git a/lib/cpython-doc/c-api/abstract.rst b/lib/cpython-doc/c-api/abstract.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/abstract.rst @@ -0,0 +1,26 @@ +.. highlightlang:: c + +.. 
_abstract: + +********************** +Abstract Objects Layer +********************** + +The functions in this chapter interact with Python objects regardless of their +type, or with wide classes of object types (e.g. all numerical types, or all +sequence types). When used on object types for which they do not apply, they +will raise a Python exception. + +It is not possible to use these functions on objects that are not properly +initialized, such as a list object that has been created by :c:func:`PyList_New`, +but whose items have not been set to some non-\ ``NULL`` value yet. + +.. toctree:: + + object.rst + number.rst + sequence.rst + mapping.rst + iter.rst + buffer.rst + objbuffer.rst diff --git a/lib/cpython-doc/c-api/allocation.rst b/lib/cpython-doc/c-api/allocation.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/allocation.rst @@ -0,0 +1,71 @@ +.. highlightlang:: c + +.. _allocating-objects: + +Allocating Objects on the Heap +============================== + + +.. c:function:: PyObject* _PyObject_New(PyTypeObject *type) + + +.. c:function:: PyVarObject* _PyObject_NewVar(PyTypeObject *type, Py_ssize_t size) + + +.. c:function:: PyObject* PyObject_Init(PyObject *op, PyTypeObject *type) + + Initialize a newly-allocated object *op* with its type and initial + reference. Returns the initialized object. If *type* indicates that the + object participates in the cyclic garbage detector, it is added to the + detector's set of observed objects. Other fields of the object are not + affected. + + +.. c:function:: PyVarObject* PyObject_InitVar(PyVarObject *op, PyTypeObject *type, Py_ssize_t size) + + This does everything :c:func:`PyObject_Init` does, and also initializes the + length information for a variable-size object. + + +.. c:function:: TYPE* PyObject_New(TYPE, PyTypeObject *type) + + Allocate a new Python object using the C structure type *TYPE* and the + Python type object *type*. 
Fields not defined by the Python object header + are not initialized; the object's reference count will be one. The size of + the memory allocation is determined from the :attr:`tp_basicsize` field of + the type object. + + +.. c:function:: TYPE* PyObject_NewVar(TYPE, PyTypeObject *type, Py_ssize_t size) + + Allocate a new Python object using the C structure type *TYPE* and the + Python type object *type*. Fields not defined by the Python object header + are not initialized. The allocated memory allows for the *TYPE* structure + plus *size* fields of the size given by the :attr:`tp_itemsize` field of + *type*. This is useful for implementing objects like tuples, which are + able to determine their size at construction time. Embedding the array of + fields into the same allocation decreases the number of allocations, + improving the memory management efficiency. + + +.. c:function:: void PyObject_Del(PyObject *op) + + Releases memory allocated to an object using :c:func:`PyObject_New` or + :c:func:`PyObject_NewVar`. This is normally called from the + :attr:`tp_dealloc` handler specified in the object's type. The fields of + the object should not be accessed after this call as the memory is no + longer a valid Python object. + + +.. c:var:: PyObject _Py_NoneStruct + + Object which is visible in Python as ``None``. This should only be accessed + using the :c:macro:`Py_None` macro, which evaluates to a pointer to this + object. + + +.. seealso:: + + :c:func:`PyModule_Create` + To allocate and create extension modules. + diff --git a/lib/cpython-doc/c-api/arg.rst b/lib/cpython-doc/c-api/arg.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/arg.rst @@ -0,0 +1,619 @@ +.. highlightlang:: c + +.. _arg-parsing: + +Parsing arguments and building values +===================================== + +These functions are useful when creating your own extensions functions and +methods. Additional information and examples are available in +:ref:`extending-index`. 
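As a minimal sketch of how the format strings described here are used in practice (an editor's illustration, not part of the quoted commit; the function name ``greet`` is hypothetical), an extension function taking one required string and one optional int could look like:

```c
#include <Python.h>

/* Editor's sketch (hypothetical function, not from the quoted files):
 * parse one required string and one optional int from a Python call
 * such as greet("world") or greet("world", 3). */
static PyObject *
greet(PyObject *self, PyObject *args)
{
    const char *name;   /* filled by the "s" unit: NUL-terminated UTF-8 */
    int times = 1;      /* filled by the optional "i" unit after "|" */

    if (!PyArg_ParseTuple(args, "s|i", &name, &times))
        return NULL;    /* the parser has already set the exception */

    for (int i = 0; i < times; i++)
        printf("Hello, %s\n", name);
    Py_RETURN_NONE;
}
```

The ``|`` in ``"s|i"`` marks the remaining units as optional; on a type mismatch :c:func:`PyArg_ParseTuple` sets an exception and returns false, so the function simply returns *NULL*.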
+ +The first three of these functions described, :c:func:`PyArg_ParseTuple`, +:c:func:`PyArg_ParseTupleAndKeywords`, and :c:func:`PyArg_Parse`, all use *format +strings* which are used to tell the function about the expected arguments. The +format strings use the same syntax for each of these functions. + +----------------- +Parsing arguments +----------------- + +A format string consists of zero or more "format units." A format unit +describes one Python object; it is usually a single character or a parenthesized +sequence of format units. With a few exceptions, a format unit that is not a +parenthesized sequence normally corresponds to a single address argument to +these functions. In the following description, the quoted form is the format +unit; the entry in (round) parentheses is the Python object type that matches +the format unit; and the entry in [square] brackets is the type of the C +variable(s) whose address should be passed. + +Strings and buffers +------------------- + +These formats allow to access an object as a contiguous chunk of memory. +You don't have to provide raw storage for the returned unicode or bytes +area. Also, you won't have to release any memory yourself, except with the +``es``, ``es#``, ``et`` and ``et#`` formats. + +However, when a :c:type:`Py_buffer` structure gets filled, the underlying +buffer is locked so that the caller can subsequently use the buffer even +inside a :c:type:`Py_BEGIN_ALLOW_THREADS` block without the risk of mutable data +being resized or destroyed. As a result, **you have to call** +:c:func:`PyBuffer_Release` after you have finished processing the data (or +in any early abort case). + +Unless otherwise stated, buffers are not NUL-terminated. + +.. note:: + For all ``#`` variants of formats (``s#``, ``y#``, etc.), the type of + the length argument (int or :c:type:`Py_ssize_t`) is controlled by + defining the macro :c:macro:`PY_SSIZE_T_CLEAN` before including + :file:`Python.h`. 
If the macro was defined, length is a + :c:type:`Py_ssize_t` rather than an :c:type:`int`. This behavior will change + in a future Python version to only support :c:type:`Py_ssize_t` and + drop :c:type:`int` support. It is best to always define :c:macro:`PY_SSIZE_T_CLEAN`. + + +``s`` (:class:`str`) [const char \*] + Convert a Unicode object to a C pointer to a character string. + A pointer to an existing string is stored in the character pointer + variable whose address you pass. The C string is NUL-terminated. + The Python string must not contain embedded NUL bytes; if it does, + a :exc:`TypeError` exception is raised. Unicode objects are converted + to C strings using ``'utf-8'`` encoding. If this conversion fails, a + :exc:`UnicodeError` is raised. + + .. note:: + This format does not accept bytes-like objects. If you want to accept + filesystem paths and convert them to C character strings, it is + preferable to use the ``O&`` format with :c:func:`PyUnicode_FSConverter` + as *converter*. + +``s*`` (:class:`str`, :class:`bytes`, :class:`bytearray` or buffer compatible object) [Py_buffer] + This format accepts Unicode objects as well as objects supporting the + buffer protocol. + It fills a :c:type:`Py_buffer` structure provided by the caller. + In this case the resulting C string may contain embedded NUL bytes. + Unicode objects are converted to C strings using ``'utf-8'`` encoding. + +``s#`` (:class:`str`, :class:`bytes` or read-only buffer compatible object) [const char \*, int or :c:type:`Py_ssize_t`] + Like ``s*``, except that it doesn't accept mutable buffer-like objects + such as :class:`bytearray`. The result is stored into two C variables, + the first one a pointer to a C string, the second one its length. + The string may contain embedded null bytes. Unicode objects are converted + to C strings using ``'utf-8'`` encoding. 
+ +``z`` (:class:`str` or ``None``) [const char \*] + Like ``s``, but the Python object may also be ``None``, in which case the C + pointer is set to *NULL*. + +``z*`` (:class:`str`, :class:`bytes`, :class:`bytearray`, buffer compatible object or ``None``) [Py_buffer] + Like ``s*``, but the Python object may also be ``None``, in which case the + ``buf`` member of the :c:type:`Py_buffer` structure is set to *NULL*. + +``z#`` (:class:`str`, :class:`bytes`, read-only buffer compatible object or ``None``) [const char \*, int] + Like ``s#``, but the Python object may also be ``None``, in which case the C + pointer is set to *NULL*. + +``y`` (:class:`bytes`) [const char \*] + This format converts a bytes-like object to a C pointer to a character + string; it does not accept Unicode objects. The bytes buffer must not + contain embedded NUL bytes; if it does, a :exc:`TypeError` + exception is raised. + +``y*`` (:class:`bytes`, :class:`bytearray` or buffer compatible object) [Py_buffer] + This variant on ``s*`` doesn't accept Unicode objects, only objects + supporting the buffer protocol. **This is the recommended way to accept + binary data.** + +``y#`` (:class:`bytes`) [const char \*, int] + This variant on ``s#`` doesn't accept Unicode objects, only bytes-like + objects. + +``S`` (:class:`bytes`) [PyBytesObject \*] + Requires that the Python object is a :class:`bytes` object, without + attempting any conversion. Raises :exc:`TypeError` if the object is not + a bytes object. The C variable may also be declared as :c:type:`PyObject\*`. + +``Y`` (:class:`bytearray`) [PyByteArrayObject \*] + Requires that the Python object is a :class:`bytearray` object, without + attempting any conversion. Raises :exc:`TypeError` if the object is not + a :class:`bytearray` object. The C variable may also be declared as :c:type:`PyObject\*`. + +``u`` (:class:`str`) [Py_UNICODE \*] + Convert a Python Unicode object to a C pointer to a NUL-terminated buffer of + Unicode characters. 
You must pass the address of a :c:type:`Py_UNICODE` + pointer variable, which will be filled with the pointer to an existing + Unicode buffer. Please note that the width of a :c:type:`Py_UNICODE` + character depends on compilation options (it is either 16 or 32 bits). + The Python string must not contain embedded NUL characters; if it does, + a :exc:`TypeError` exception is raised. + + .. note:: + Since ``u`` doesn't give you back the length of the string, and it + may contain embedded NUL characters, it is recommended to use ``u#`` + or ``U`` instead. + +``u#`` (:class:`str`) [Py_UNICODE \*, int] + This variant on ``u`` stores into two C variables, the first one a pointer to a + Unicode data buffer, the second one its length. + +``Z`` (:class:`str` or ``None``) [Py_UNICODE \*] + Like ``u``, but the Python object may also be ``None``, in which case the + :c:type:`Py_UNICODE` pointer is set to *NULL*. + +``Z#`` (:class:`str` or ``None``) [Py_UNICODE \*, int] + Like ``u#``, but the Python object may also be ``None``, in which case the + :c:type:`Py_UNICODE` pointer is set to *NULL*. + +``U`` (:class:`str`) [PyObject \*] + Requires that the Python object is a Unicode object, without attempting + any conversion. Raises :exc:`TypeError` if the object is not a Unicode + object. The C variable may also be declared as :c:type:`PyObject\*`. + +``w*`` (:class:`bytearray` or read-write byte-oriented buffer) [Py_buffer] + This format accepts any object which implements the read-write buffer + interface. It fills a :c:type:`Py_buffer` structure provided by the caller. + The buffer may contain embedded null bytes. The caller has to call + :c:func:`PyBuffer_Release` when it is done with the buffer. + +``es`` (:class:`str`) [const char \*encoding, char \*\*buffer] + This variant on ``s`` is used for encoding Unicode into a character buffer. + It only works for encoded data without embedded NUL bytes. + + This format requires two arguments.
The first is only used as input, and + must be a :c:type:`const char\*` which points to the name of an encoding as a + NUL-terminated string, or *NULL*, in which case ``'utf-8'`` encoding is used. + An exception is raised if the named encoding is not known to Python. The + second argument must be a :c:type:`char\*\*`; the value of the pointer it + references will be set to a buffer with the contents of the argument text. + The text will be encoded in the encoding specified by the first argument. + + :c:func:`PyArg_ParseTuple` will allocate a buffer of the needed size, copy the + encoded data into this buffer and adjust *\*buffer* to reference the newly + allocated storage. The caller is responsible for calling :c:func:`PyMem_Free` to + free the allocated buffer after use. + +``et`` (:class:`str`, :class:`bytes` or :class:`bytearray`) [const char \*encoding, char \*\*buffer] + Same as ``es`` except that byte string objects are passed through without + recoding them. Instead, the implementation assumes that the byte string object uses + the encoding passed in as parameter. + +``es#`` (:class:`str`) [const char \*encoding, char \*\*buffer, int \*buffer_length] + This variant on ``s#`` is used for encoding Unicode into a character buffer. + Unlike the ``es`` format, this variant allows input data which contains NUL + characters. + + It requires three arguments. The first is only used as input, and must be a + :c:type:`const char\*` which points to the name of an encoding as a + NUL-terminated string, or *NULL*, in which case ``'utf-8'`` encoding is used. + An exception is raised if the named encoding is not known to Python. The + second argument must be a :c:type:`char\*\*`; the value of the pointer it + references will be set to a buffer with the contents of the argument text. + The text will be encoded in the encoding specified by the first argument. 
+ The third argument must be a pointer to an integer; the referenced integer + will be set to the number of bytes in the output buffer. + + There are two modes of operation: + + If *\*buffer* points to a *NULL* pointer, the function will allocate a buffer of + the needed size, copy the encoded data into this buffer and set *\*buffer* to + reference the newly allocated storage. The caller is responsible for calling + :c:func:`PyMem_Free` to free the allocated buffer after usage. + + If *\*buffer* points to a non-*NULL* pointer (an already allocated buffer), + :c:func:`PyArg_ParseTuple` will use this location as the buffer and interpret the + initial value of *\*buffer_length* as the buffer size. It will then copy the + encoded data into the buffer and NUL-terminate it. If the buffer is not large + enough, a :exc:`ValueError` will be set. + + In both cases, *\*buffer_length* is set to the length of the encoded data + without the trailing NUL byte. + +``et#`` (:class:`str`, :class:`bytes` or :class:`bytearray`) [const char \*encoding, char \*\*buffer, int \*buffer_length] + Same as ``es#`` except that byte string objects are passed through without recoding + them. Instead, the implementation assumes that the byte string object uses the + encoding passed in as parameter. + +Numbers +------- + +``b`` (:class:`int`) [unsigned char] + Convert a nonnegative Python integer to an unsigned tiny int, stored in a C + :c:type:`unsigned char`. + +``B`` (:class:`int`) [unsigned char] + Convert a Python integer to a tiny int without overflow checking, stored in a C + :c:type:`unsigned char`. + +``h`` (:class:`int`) [short int] + Convert a Python integer to a C :c:type:`short int`. + +``H`` (:class:`int`) [unsigned short int] + Convert a Python integer to a C :c:type:`unsigned short int`, without overflow + checking. + +``i`` (:class:`int`) [int] + Convert a Python integer to a plain C :c:type:`int`.
+ +``I`` (:class:`int`) [unsigned int] + Convert a Python integer to a C :c:type:`unsigned int`, without overflow + checking. + +``l`` (:class:`int`) [long int] + Convert a Python integer to a C :c:type:`long int`. + +``k`` (:class:`int`) [unsigned long] + Convert a Python integer to a C :c:type:`unsigned long` without + overflow checking. + +``L`` (:class:`int`) [PY_LONG_LONG] + Convert a Python integer to a C :c:type:`long long`. This format is only + available on platforms that support :c:type:`long long` (or :c:type:`_int64` on + Windows). + +``K`` (:class:`int`) [unsigned PY_LONG_LONG] + Convert a Python integer to a C :c:type:`unsigned long long` + without overflow checking. This format is only available on platforms that + support :c:type:`unsigned long long` (or :c:type:`unsigned _int64` on Windows). + +``n`` (:class:`int`) [Py_ssize_t] + Convert a Python integer to a C :c:type:`Py_ssize_t`. + +``c`` (:class:`bytes` or :class:`bytearray` of length 1) [char] + Convert a Python byte, represented as a :class:`bytes` or + :class:`bytearray` object of length 1, to a C :c:type:`char`. + + .. versionchanged:: 3.3 Allow :class:`bytearray` objects + +``C`` (:class:`str` of length 1) [int] + Convert a Python character, represented as a :class:`str` object of + length 1, to a C :c:type:`int`. + +``f`` (:class:`float`) [float] + Convert a Python floating point number to a C :c:type:`float`. + +``d`` (:class:`float`) [double] + Convert a Python floating point number to a C :c:type:`double`. + +``D`` (:class:`complex`) [Py_complex] + Convert a Python complex number to a C :c:type:`Py_complex` structure. + +Other objects +------------- + +``O`` (object) [PyObject \*] + Store a Python object (without any conversion) in a C object pointer. The C + program thus receives the actual object that was passed. The object's reference + count is not increased. The pointer stored is not *NULL*. 
+ +``O!`` (object) [*typeobject*, PyObject \*] + Store a Python object in a C object pointer. This is similar to ``O``, but + takes two C arguments: the first is the address of a Python type object, the + second is the address of the C variable (of type :c:type:`PyObject\*`) into which + the object pointer is stored. If the Python object does not have the required + type, :exc:`TypeError` is raised. + +``O&`` (object) [*converter*, *anything*] + Convert a Python object to a C variable through a *converter* function. This + takes two arguments: the first is a function, the second is the address of a C + variable (of arbitrary type), converted to :c:type:`void \*`. The *converter* + function in turn is called as follows:: + + status = converter(object, address); + + where *object* is the Python object to be converted and *address* is the + :c:type:`void\*` argument that was passed to the :c:func:`PyArg_Parse\*` function. + The returned *status* should be ``1`` for a successful conversion and ``0`` if + the conversion has failed. When the conversion fails, the *converter* function + should raise an exception and leave the content of *address* unmodified. + + If the *converter* returns ``Py_CLEANUP_SUPPORTED``, it may get called a + second time if the argument parsing eventually fails, giving the converter a + chance to release any memory that it had already allocated. In this second + call, the *object* parameter will be NULL; *address* will have the same value + as in the original call. + + .. versionchanged:: 3.1 + ``Py_CLEANUP_SUPPORTED`` was added. + +``(items)`` (:class:`tuple`) [*matching-items*] + The object must be a Python sequence whose length is the number of format units + in *items*. The C arguments must correspond to the individual format units in + *items*. Format units for sequences may be nested. 
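The parsing behaviour of these format units can be observed without writing a C extension. As a rough illustrative sketch (using ``ctypes.pythonapi`` is this note's own device, not something the text above prescribes), Python code can call :c:func:`PyArg_ParseTuple` directly:

```python
import ctypes

api = ctypes.pythonapi
api.PyArg_ParseTuple.restype = ctypes.c_int

# "i" stores into a C int; "s" stores a pointer to the UTF-8 encoded,
# NUL-terminated representation owned by the Python str object.
num = ctypes.c_int()
txt = ctypes.c_char_p()
args = ctypes.py_object((42, "spam"))
ok = api.PyArg_ParseTuple(args, b"is", ctypes.byref(num), ctypes.byref(txt))
assert ok == 1
assert num.value == 42
assert txt.value == b"spam"

# "|" marks the remaining units as optional: when the argument is absent,
# the corresponding C variable is left untouched.
first, second = ctypes.c_int(0), ctypes.c_int(-1)
ok = api.PyArg_ParseTuple(ctypes.py_object((7,)), b"i|i",
                          ctypes.byref(first), ctypes.byref(second))
assert ok == 1
assert (first.value, second.value) == (7, -1)
```

Note that ``args`` must stay alive while ``txt`` is used: the ``s`` unit hands out a pointer into the argument object, it does not copy.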
+ +It is possible to pass "long" integers (integers whose value exceeds the +platform's :const:`LONG_MAX`) however no proper range checking is done --- the +most significant bits are silently truncated when the receiving field is too +small to receive the value (actually, the semantics are inherited from downcasts +in C --- your mileage may vary). + +A few other characters have a meaning in a format string. These may not occur +inside nested parentheses. They are: + +``|`` + Indicates that the remaining arguments in the Python argument list are optional. + The C variables corresponding to optional arguments should be initialized to + their default value --- when an optional argument is not specified, + :c:func:`PyArg_ParseTuple` does not touch the contents of the corresponding C + variable(s). + +``:`` + The list of format units ends here; the string after the colon is used as the + function name in error messages (the "associated value" of the exception that + :c:func:`PyArg_ParseTuple` raises). + +``;`` + The list of format units ends here; the string after the semicolon is used as + the error message *instead* of the default error message. ``:`` and ``;`` + mutually exclude each other. + +Note that any Python object references which are provided to the caller are +*borrowed* references; do not decrement their reference count! + +Additional arguments passed to these functions must be addresses of variables +whose type is determined by the format string; these are used to store values +from the input tuple. There are a few cases, as described in the list of format +units above, where these parameters are used as input values; they should match +what is specified for the corresponding format unit in that case. + +For the conversion to succeed, the *arg* object must match the format +and the format must be exhausted. On success, the +:c:func:`PyArg_Parse\*` functions return true, otherwise they return +false and raise an appropriate exception. 
When the +:c:func:`PyArg_Parse\*` functions fail due to conversion failure in one +of the format units, the variables at the addresses corresponding to that +and the following format units are left untouched. + +API Functions +------------- + +.. c:function:: int PyArg_ParseTuple(PyObject *args, const char *format, ...) + + Parse the parameters of a function that takes only positional parameters into + local variables. Returns true on success; on failure, it returns false and + raises the appropriate exception. + + +.. c:function:: int PyArg_VaParse(PyObject *args, const char *format, va_list vargs) + + Identical to :c:func:`PyArg_ParseTuple`, except that it accepts a va_list rather + than a variable number of arguments. + + +.. c:function:: int PyArg_ParseTupleAndKeywords(PyObject *args, PyObject *kw, const char *format, char *keywords[], ...) + + Parse the parameters of a function that takes both positional and keyword + parameters into local variables. Returns true on success; on failure, it + returns false and raises the appropriate exception. + + +.. c:function:: int PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *kw, const char *format, char *keywords[], va_list vargs) + + Identical to :c:func:`PyArg_ParseTupleAndKeywords`, except that it accepts a + va_list rather than a variable number of arguments. + + +.. c:function:: int PyArg_ValidateKeywordArguments(PyObject *) + + Ensure that the keys in the keywords argument dictionary are strings. This + is only needed if :c:func:`PyArg_ParseTupleAndKeywords` is not used, since the + latter already does this check. + + .. versionadded:: 3.2 + + +.. XXX deprecated, will be removed +.. c:function:: int PyArg_Parse(PyObject *args, const char *format, ...) + + Function used to deconstruct the argument lists of "old-style" functions --- + these are functions which use the :const:`METH_OLDARGS` parameter parsing + method. 
This is not recommended for use in parameter parsing in new code, and + most code in the standard interpreter has been modified to no longer use this + for that purpose. It does remain a convenient way to decompose other tuples, + however, and may continue to be used for that purpose. + + +.. c:function:: int PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t min, Py_ssize_t max, ...) + + A simpler form of parameter retrieval which does not use a format string to + specify the types of the arguments. Functions which use this method to retrieve + their parameters should be declared as :const:`METH_VARARGS` in function or + method tables. The tuple containing the actual parameters should be passed as + *args*; it must actually be a tuple. The length of the tuple must be at least + *min* and no more than *max*; *min* and *max* may be equal. Additional + arguments must be passed to the function, each of which should be a pointer to a + :c:type:`PyObject\*` variable; these will be filled in with the values from + *args*; they will contain borrowed references. The variables which correspond + to optional parameters not given by *args* will not be filled in; these should + be initialized by the caller. This function returns true on success and false if + *args* is not a tuple or contains the wrong number of elements; an exception + will be set if there was a failure. 
+ + This is an example of the use of this function, taken from the sources for the + :mod:`_weakref` helper module for weak references:: + + static PyObject * + weakref_ref(PyObject *self, PyObject *args) + { + PyObject *object; + PyObject *callback = NULL; + PyObject *result = NULL; + + if (PyArg_UnpackTuple(args, "ref", 1, 2, &object, &callback)) { + result = PyWeakref_NewRef(object, callback); + } + return result; + } + + The call to :c:func:`PyArg_UnpackTuple` in this example is entirely equivalent to + this call to :c:func:`PyArg_ParseTuple`:: + + PyArg_ParseTuple(args, "O|O:ref", &object, &callback) + + +--------------- +Building values +--------------- + +.. c:function:: PyObject* Py_BuildValue(const char *format, ...) + + Create a new value based on a format string similar to those accepted by the + :c:func:`PyArg_Parse\*` family of functions and a sequence of values. Returns + the value or *NULL* in the case of an error; an exception will be raised if + *NULL* is returned. + + :c:func:`Py_BuildValue` does not always build a tuple. It builds a tuple only if + its format string contains two or more format units. If the format string is + empty, it returns ``None``; if it contains exactly one format unit, it returns + whatever object is described by that format unit. To force it to return a tuple + of size 0 or one, parenthesize the format string. + + When memory buffers are passed as parameters to supply data to build objects, as + for the ``s`` and ``s#`` formats, the required data is copied. Buffers provided + by the caller are never referenced by the objects created by + :c:func:`Py_BuildValue`. In other words, if your code invokes :c:func:`malloc` + and passes the allocated memory to :c:func:`Py_BuildValue`, your code is + responsible for calling :c:func:`free` for that memory once + :c:func:`Py_BuildValue` returns. 
+ + In the following description, the quoted form is the format unit; the entry in + (round) parentheses is the Python object type that the format unit will return; + and the entry in [square] brackets is the type of the C value(s) to be passed. + + The characters space, tab, colon and comma are ignored in format strings (but + not within format units such as ``s#``). This can be used to make long format + strings a tad more readable. + + ``s`` (:class:`str` or ``None``) [char \*] + Convert a null-terminated C string to a Python :class:`str` object using ``'utf-8'`` + encoding. If the C string pointer is *NULL*, ``None`` is used. + + ``s#`` (:class:`str` or ``None``) [char \*, int] + Convert a C string and its length to a Python :class:`str` object using ``'utf-8'`` + encoding. If the C string pointer is *NULL*, the length is ignored and + ``None`` is returned. + + ``y`` (:class:`bytes`) [char \*] + This converts a C string to a Python :func:`bytes` object. If the C + string pointer is *NULL*, ``None`` is returned. + + ``y#`` (:class:`bytes`) [char \*, int] + This converts a C string and its length to a Python object. If the C + string pointer is *NULL*, ``None`` is returned. + + ``z`` (:class:`str` or ``None``) [char \*] + Same as ``s``. + + ``z#`` (:class:`str` or ``None``) [char \*, int] + Same as ``s#``. + + ``u`` (:class:`str`) [Py_UNICODE \*] + Convert a null-terminated buffer of Unicode (UCS-2 or UCS-4) data to a Python + Unicode object. If the Unicode buffer pointer is *NULL*, ``None`` is returned. + + ``u#`` (:class:`str`) [Py_UNICODE \*, int] + Convert a Unicode (UCS-2 or UCS-4) data buffer and its length to a Python + Unicode object. If the Unicode buffer pointer is *NULL*, the length is ignored + and ``None`` is returned. + + ``U`` (:class:`str` or ``None``) [char \*] + Same as ``s``. + + ``U#`` (:class:`str` or ``None``) [char \*, int] + Same as ``s#``. + + ``i`` (:class:`int`) [int] + Convert a plain C :c:type:`int` to a Python integer object.
+ + ``b`` (:class:`int`) [char] + Convert a plain C :c:type:`char` to a Python integer object. + + ``h`` (:class:`int`) [short int] + Convert a plain C :c:type:`short int` to a Python integer object. + + ``l`` (:class:`int`) [long int] + Convert a C :c:type:`long int` to a Python integer object. + + ``B`` (:class:`int`) [unsigned char] + Convert a C :c:type:`unsigned char` to a Python integer object. + + ``H`` (:class:`int`) [unsigned short int] + Convert a C :c:type:`unsigned short int` to a Python integer object. + + ``I`` (:class:`int`) [unsigned int] + Convert a C :c:type:`unsigned int` to a Python integer object. + + ``k`` (:class:`int`) [unsigned long] + Convert a C :c:type:`unsigned long` to a Python integer object. + + ``L`` (:class:`int`) [PY_LONG_LONG] + Convert a C :c:type:`long long` to a Python integer object. Only available + on platforms that support :c:type:`long long` (or :c:type:`_int64` on + Windows). + + ``K`` (:class:`int`) [unsigned PY_LONG_LONG] + Convert a C :c:type:`unsigned long long` to a Python integer object. Only + available on platforms that support :c:type:`unsigned long long` (or + :c:type:`unsigned _int64` on Windows). + + ``n`` (:class:`int`) [Py_ssize_t] + Convert a C :c:type:`Py_ssize_t` to a Python integer. + + ``c`` (:class:`bytes` of length 1) [char] + Convert a C :c:type:`int` representing a byte to a Python :class:`bytes` object of + length 1. + + ``C`` (:class:`str` of length 1) [int] + Convert a C :c:type:`int` representing a character to a Python :class:`str` + object of length 1. + + ``d`` (:class:`float`) [double] + Convert a C :c:type:`double` to a Python floating point number. + + ``f`` (:class:`float`) [float] + Convert a C :c:type:`float` to a Python floating point number. + + ``D`` (:class:`complex`) [Py_complex \*] + Convert a C :c:type:`Py_complex` structure to a Python complex number. + + ``O`` (object) [PyObject \*] + Pass a Python object untouched (except for its reference count, which is + incremented by one).
If the object passed in is a *NULL* pointer, it is assumed + that this was caused because the call producing the argument found an error and + set an exception. Therefore, :c:func:`Py_BuildValue` will return *NULL* but won't + raise an exception. If no exception has been raised yet, :exc:`SystemError` is + set. + + ``S`` (object) [PyObject \*] + Same as ``O``. + + ``N`` (object) [PyObject \*] + Same as ``O``, except it doesn't increment the reference count on the object. + Useful when the object is created by a call to an object constructor in the + argument list. + + ``O&`` (object) [*converter*, *anything*] + Convert *anything* to a Python object through a *converter* function. The + function is called with *anything* (which should be compatible with :c:type:`void + \*`) as its argument and should return a "new" Python object, or *NULL* if an + error occurred. + + ``(items)`` (:class:`tuple`) [*matching-items*] + Convert a sequence of C values to a Python tuple with the same number of items. + + ``[items]`` (:class:`list`) [*matching-items*] + Convert a sequence of C values to a Python list with the same number of items. + + ``{items}`` (:class:`dict`) [*matching-items*] + Convert a sequence of C values to a Python dictionary. Each pair of consecutive + C values adds one item to the dictionary, serving as key and value, + respectively. + + If there is an error in the format string, the :exc:`SystemError` exception is + set and *NULL* returned. + +.. c:function:: PyObject* Py_VaBuildValue(const char *format, va_list vargs) + + Identical to :c:func:`Py_BuildValue`, except that it accepts a va_list + rather than a variable number of arguments. diff --git a/lib/cpython-doc/c-api/bool.rst b/lib/cpython-doc/c-api/bool.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/bool.rst @@ -0,0 +1,46 @@ +.. highlightlang:: c + +.. _boolobjects: + +Boolean Objects +--------------- + +Booleans in Python are implemented as a subclass of integers. 
There are only +two booleans, :const:`Py_False` and :const:`Py_True`. As such, the normal +creation and deletion functions don't apply to booleans. The following macros +are available, however. + + +.. c:function:: int PyBool_Check(PyObject *o) + + Return true if *o* is of type :c:data:`PyBool_Type`. + + +.. c:var:: PyObject* Py_False + + The Python ``False`` object. This object has no methods. It needs to be + treated just like any other object with respect to reference counts. + + +.. c:var:: PyObject* Py_True + + The Python ``True`` object. This object has no methods. It needs to be treated + just like any other object with respect to reference counts. + + +.. c:macro:: Py_RETURN_FALSE + + Return :const:`Py_False` from a function, properly incrementing its reference + count. + + +.. c:macro:: Py_RETURN_TRUE + + Return :const:`Py_True` from a function, properly incrementing its reference + count. + + +.. c:function:: PyObject* PyBool_FromLong(long v) + + Return a new reference to :const:`Py_True` or :const:`Py_False` depending on the + truth value of *v*. diff --git a/lib/cpython-doc/c-api/buffer.rst b/lib/cpython-doc/c-api/buffer.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/buffer.rst @@ -0,0 +1,326 @@ +.. highlightlang:: c + +.. _bufferobjects: + +Buffer Protocol +--------------- + +.. sectionauthor:: Greg Stein +.. sectionauthor:: Benjamin Peterson + + +.. index:: + single: buffer interface + +Certain objects available in Python wrap access to an underlying memory +array or *buffer*. Such objects include the built-in :class:`bytes` and +:class:`bytearray`, and some extension types like :class:`array.array`. +Third-party libraries may define their own types for special purposes, such +as image processing or numeric analysis. + +While each of these types has its own semantics, they share the common +characteristic of being backed by a possibly large memory buffer.
It is +then desirable, in some situations, to access that buffer directly and +without intermediate copying. + +Python provides such a facility at the C level in the form of the *buffer +protocol*. This protocol has two sides: + +.. index:: single: PyBufferProcs + +- on the producer side, a type can export a "buffer interface" which allows + objects of that type to expose information about their underlying buffer. + This interface is described in the section :ref:`buffer-structs`; + +- on the consumer side, several means are available to obtain a pointer to + the raw underlying data of an object (for example a method parameter). + +Simple objects such as :class:`bytes` and :class:`bytearray` expose their +underlying buffer in byte-oriented form. Other forms are possible; for example, +the elements exposed by an :class:`array.array` can be multi-byte values. + +An example consumer of the buffer interface is the :meth:`~io.BufferedIOBase.write` +method of file objects: any object that can export a series of bytes through +the buffer interface can be written to a file. While :meth:`write` only +needs read-only access to the internal contents of the object passed to it, +other methods such as :meth:`~io.BufferedIOBase.readinto` need write access +to the contents of their argument. The buffer interface allows objects to +selectively allow or reject exporting of read-write and read-only buffers. + +There are two ways for a consumer of the buffer interface to acquire a buffer +over a target object: + +* call :c:func:`PyObject_GetBuffer` with the right parameters; + +* call :c:func:`PyArg_ParseTuple` (or one of its siblings) with one of the + ``y*``, ``w*`` or ``s*`` :ref:`format codes `. + +In both cases, :c:func:`PyBuffer_Release` must be called when the buffer +isn't needed anymore. Failure to do so could lead to various issues such as +resource leaks.
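At the Python level, :class:`memoryview` is the generic consumer of this protocol, and its behaviour mirrors the acquire/release rules above (a sketch for illustration only; the C calls themselves are made by the interpreter on your behalf):

```python
# A bytearray exports a writable buffer, bytes a read-only one.
ba = bytearray(b"hello")
with memoryview(ba) as m:          # the with-block calls release() for us,
    assert not m.readonly          # the analogue of PyBuffer_Release()
    m[0] = ord("H")                # zero-copy write through the buffer
assert ba == bytearray(b"Hello")

ro = memoryview(b"hello")
assert ro.readonly
ro.release()

# Objects that do not export a buffer are rejected, just as
# PyObject_GetBuffer would fail for them.
try:
    memoryview(1.5)
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```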
+ + +The buffer structure +==================== + +Buffer structures (or simply "buffers") are useful as a way to expose the +binary data from another object to the Python programmer. They can also be +used as a zero-copy slicing mechanism. Using their ability to reference a +block of memory, it is possible to expose any data to the Python programmer +quite easily. The memory could be a large, constant array in a C extension, +it could be a raw block of memory for manipulation before passing to an +operating system library, or it could be used to pass around structured data +in its native, in-memory format. + +Contrary to most data types exposed by the Python interpreter, buffers +are not :c:type:`PyObject` pointers but rather simple C structures. This +allows them to be created and copied very simply. When a generic wrapper +around a buffer is needed, a :ref:`memoryview ` object +can be created. + + +.. c:type:: Py_buffer + + .. c:member:: void *buf + + A pointer to the start of the memory for the object. + + .. c:member:: Py_ssize_t len + :noindex: + + The total length of the memory in bytes. + + .. c:member:: int readonly + + An indicator of whether the buffer is read only. + + .. c:member:: const char *format + :noindex: + + A *NULL* terminated string in :mod:`struct` module style syntax giving + the contents of the elements available through the buffer. If this is + *NULL*, ``"B"`` (unsigned bytes) is assumed. + + .. c:member:: int ndim + + The number of dimensions the memory represents as a multi-dimensional + array. If it is 0, :c:data:`strides` and :c:data:`suboffsets` must be + *NULL*. + + .. c:member:: Py_ssize_t *shape + + An array of :c:type:`Py_ssize_t`\s the length of :c:data:`ndim` giving the + shape of the memory as a multi-dimensional array. Note that + ``((*shape)[0] * ... * (*shape)[ndims-1])*itemsize`` should be equal to + :c:data:`len`. + + .. 
c:member:: Py_ssize_t *strides + + An array of :c:type:`Py_ssize_t`\s the length of :c:data:`ndim` giving the + number of bytes to skip to get to a new element in each dimension. + + .. c:member:: Py_ssize_t *suboffsets + + An array of :c:type:`Py_ssize_t`\s the length of :c:data:`ndim`. If these + suboffset numbers are greater than or equal to 0, then the value stored + along the indicated dimension is a pointer and the suboffset value + dictates how many bytes to add to the pointer after de-referencing. A + suboffset value that is negative indicates that no de-referencing should + occur (striding in a contiguous memory block). + + Here is a function that returns a pointer to the element in an N-D array + pointed to by an N-dimensional index when there are both non-NULL strides + and suboffsets:: + + void *get_item_pointer(int ndim, void *buf, Py_ssize_t *strides, + Py_ssize_t *suboffsets, Py_ssize_t *indices) { + char *pointer = (char*)buf; + int i; + for (i = 0; i < ndim; i++) { + pointer += strides[i] * indices[i]; + if (suboffsets[i] >= 0) { + pointer = *((char**)pointer) + suboffsets[i]; + } + } + return (void*)pointer; + } + + + .. c:member:: Py_ssize_t itemsize + + This is a storage for the itemsize (in bytes) of each element of the + shared memory. It is technically unnecessary as it can be obtained + using :c:func:`PyBuffer_SizeFromFormat`, however an exporter may know + this information without parsing the format string and it is necessary + to know the itemsize for proper interpretation of striding. Therefore, + storing it is more convenient and faster. + + .. c:member:: void *internal + + This is for use internally by the exporting object. For example, this + might be re-cast as an integer by the exporter and used to store flags + about whether or not the shape, strides, and suboffsets arrays must be + freed when the buffer is released. The consumer should never alter this + value. + + +Buffer-related functions +======================== + + +..
c:function:: int PyObject_CheckBuffer(PyObject *obj) + + Return 1 if *obj* supports the buffer interface, otherwise 0. When 1 is + returned, it doesn't guarantee that :c:func:`PyObject_GetBuffer` will + succeed. + + +.. c:function:: int PyObject_GetBuffer(PyObject *obj, Py_buffer *view, int flags) + + Export a view over some internal data from the target object *obj*. + *obj* must not be NULL, and *view* must point to an existing + :c:type:`Py_buffer` structure allocated by the caller (most uses of + this function will simply declare a local variable of type + :c:type:`Py_buffer`). The *flags* argument is a bit field indicating + what kind of buffer is requested. The buffer interface allows + for complicated memory layout possibilities; however, some callers + won't want to handle all the complexity and instead request a simple + view of the target object (using :c:macro:`PyBUF_SIMPLE` for a read-only + view and :c:macro:`PyBUF_WRITABLE` for a read-write view). + + Some exporters may not be able to share memory in every possible way and + may need to raise errors to signal to some consumers that something is + just not possible. These errors should be a :exc:`BufferError` unless + there is another error that is actually causing the problem. The + exporter can use flags information to simplify how much of the + :c:data:`Py_buffer` structure is filled in with non-default values and/or + raise an error if the object can't support a simpler view of its memory. + + On success, 0 is returned and the *view* structure is filled with useful + values. On error, -1 is returned and an exception is raised; the *view* + is left in an undefined state. + + The following are the possible values for the *flags* argument. + + .. c:macro:: PyBUF_SIMPLE + + This is the default flag. The returned buffer exposes a read-only + memory area. The format of data is assumed to be raw unsigned bytes, + without any particular structure. This is a "stand-alone" flag + constant.
It never needs to be '|'d to the others. The exporter will + raise an error if it cannot provide such a contiguous buffer of bytes. + + .. c:macro:: PyBUF_WRITABLE + + Like :c:macro:`PyBUF_SIMPLE`, but the returned buffer is writable. If + the exporter doesn't support writable buffers, an error is raised. + + .. c:macro:: PyBUF_STRIDES + + This implies :c:macro:`PyBUF_ND`. The returned buffer must provide + strides information (i.e. the strides cannot be NULL). This would be + used when the consumer can handle strided, discontiguous arrays. + Handling strides automatically assumes you can handle shape. The + exporter can raise an error if a strided representation of the data is + not possible (i.e. without the suboffsets). + + .. c:macro:: PyBUF_ND + + The returned buffer must provide shape information. The memory will be + assumed C-style contiguous (last dimension varies the fastest). The + exporter may raise an error if it cannot provide this kind of + contiguous buffer. If this is not given then shape will be *NULL*. + + .. c:macro:: PyBUF_C_CONTIGUOUS + PyBUF_F_CONTIGUOUS + PyBUF_ANY_CONTIGUOUS + + These flags indicate that the returned buffer must be, + respectively, C-contiguous (last dimension varies the fastest), Fortran + contiguous (first dimension varies the fastest) or either one. All of + these flags imply :c:macro:`PyBUF_STRIDES` and guarantee that the + strides buffer info structure will be filled in correctly. + + .. c:macro:: PyBUF_INDIRECT + + This flag indicates that the returned buffer must have suboffsets + information (which can be NULL if no suboffsets are needed). This can + be used when the consumer can handle indirect array referencing implied + by these suboffsets. This implies :c:macro:`PyBUF_STRIDES`. + + .. c:macro:: PyBUF_FORMAT + + The returned buffer must have true format information if this flag is + provided. This would be used when the consumer is going to be checking + for what 'kind' of data is actually stored. 
An exporter should always + be able to provide this information if requested. If format is not + explicitly requested then the format must be returned as *NULL* (which + means ``'B'``, or unsigned bytes). + + .. c:macro:: PyBUF_STRIDED + + This is equivalent to ``(PyBUF_STRIDES | PyBUF_WRITABLE)``. + + .. c:macro:: PyBUF_STRIDED_RO + + This is equivalent to ``(PyBUF_STRIDES)``. + + .. c:macro:: PyBUF_RECORDS + + This is equivalent to ``(PyBUF_STRIDES | PyBUF_FORMAT | + PyBUF_WRITABLE)``. + + .. c:macro:: PyBUF_RECORDS_RO + + This is equivalent to ``(PyBUF_STRIDES | PyBUF_FORMAT)``. + + .. c:macro:: PyBUF_FULL + + This is equivalent to ``(PyBUF_INDIRECT | PyBUF_FORMAT | + PyBUF_WRITABLE)``. + + .. c:macro:: PyBUF_FULL_RO + + This is equivalent to ``(PyBUF_INDIRECT | PyBUF_FORMAT)``. + + .. c:macro:: PyBUF_CONTIG + + This is equivalent to ``(PyBUF_ND | PyBUF_WRITABLE)``. + + .. c:macro:: PyBUF_CONTIG_RO + + This is equivalent to ``(PyBUF_ND)``. + + +.. c:function:: void PyBuffer_Release(Py_buffer *view) + + Release the buffer *view*. This should be called when the buffer is no + longer being used as it may free memory from it. + + +.. c:function:: Py_ssize_t PyBuffer_SizeFromFormat(const char *) + + Return the implied :c:data:`~Py_buffer.itemsize` from the struct-style + :c:data:`~Py_buffer.format`. + + +.. c:function:: int PyBuffer_IsContiguous(Py_buffer *view, char fortran) + + Return 1 if the memory defined by the *view* is C-style (*fortran* is + ``'C'``) or Fortran-style (*fortran* is ``'F'``) contiguous or either one + (*fortran* is ``'A'``). Return 0 otherwise. + + +.. c:function:: void PyBuffer_FillContiguousStrides(int ndim, Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t itemsize, char fortran) + + Fill the *strides* array with byte-strides of a contiguous (C-style if + *fortran* is ``'C'`` or Fortran-style if *fortran* is ``'F'``) array of the + given shape with the given number of bytes per element. + + +.. 
c:function:: int PyBuffer_FillInfo(Py_buffer *view, PyObject *obj, void *buf, Py_ssize_t len, int readonly, int infoflags) + + Fill in a buffer-info structure, *view*, correctly for an exporter that can + only share a contiguous chunk of memory of "unsigned bytes" of the given + length. Return 0 on success and -1 (raising an exception) on error. + diff --git a/lib/cpython-doc/c-api/bytearray.rst b/lib/cpython-doc/c-api/bytearray.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/bytearray.rst @@ -0,0 +1,86 @@ +.. highlightlang:: c + +.. _bytearrayobjects: + +Byte Array Objects +------------------ + +.. index:: object: bytearray + + +.. c:type:: PyByteArrayObject + + This subtype of :c:type:`PyObject` represents a Python bytearray object. + + +.. c:var:: PyTypeObject PyByteArray_Type + + This instance of :c:type:`PyTypeObject` represents the Python bytearray type; + it is the same object as :class:`bytearray` in the Python layer. + + +Type check macros +^^^^^^^^^^^^^^^^^ + +.. c:function:: int PyByteArray_Check(PyObject *o) + + Return true if the object *o* is a bytearray object or an instance of a + subtype of the bytearray type. + + +.. c:function:: int PyByteArray_CheckExact(PyObject *o) + + Return true if the object *o* is a bytearray object, but not an instance of a + subtype of the bytearray type. + + +Direct API functions +^^^^^^^^^^^^^^^^^^^^ + +.. c:function:: PyObject* PyByteArray_FromObject(PyObject *o) + + Return a new bytearray object from any object, *o*, that implements the + buffer protocol. + + .. XXX expand about the buffer protocol, at least somewhere + + +.. c:function:: PyObject* PyByteArray_FromStringAndSize(const char *string, Py_ssize_t len) + + Create a new bytearray object from *string* and its length, *len*. On + failure, *NULL* is returned. + + +.. c:function:: PyObject* PyByteArray_Concat(PyObject *a, PyObject *b) + + Concatenate bytearrays *a* and *b* and return a new bytearray with the result. + + +.. 
c:function:: Py_ssize_t PyByteArray_Size(PyObject *bytearray) + + Return the size of *bytearray* after checking for a *NULL* pointer. + + +.. c:function:: char* PyByteArray_AsString(PyObject *bytearray) + + Return the contents of *bytearray* as a char array after checking for a + *NULL* pointer. + + +.. c:function:: int PyByteArray_Resize(PyObject *bytearray, Py_ssize_t len) + + Resize the internal buffer of *bytearray* to *len*. + +Macros +^^^^^^ + +These macros trade safety for speed and they don't check pointers. + +.. c:function:: char* PyByteArray_AS_STRING(PyObject *bytearray) + + Macro version of :c:func:`PyByteArray_AsString`. + + +.. c:function:: Py_ssize_t PyByteArray_GET_SIZE(PyObject *bytearray) + + Macro version of :c:func:`PyByteArray_Size`. diff --git a/lib/cpython-doc/c-api/bytes.rst b/lib/cpython-doc/c-api/bytes.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/bytes.rst @@ -0,0 +1,192 @@ +.. highlightlang:: c + +.. _bytesobjects: + +Bytes Objects +------------- + +These functions raise :exc:`TypeError` when expecting a bytes parameter and are +called with a non-bytes parameter. + +.. index:: object: bytes + + +.. c:type:: PyBytesObject + + This subtype of :c:type:`PyObject` represents a Python bytes object. + + +.. c:var:: PyTypeObject PyBytes_Type + + This instance of :c:type:`PyTypeObject` represents the Python bytes type; it + is the same object as :class:`bytes` in the Python layer. + + +.. c:function:: int PyBytes_Check(PyObject *o) + + Return true if the object *o* is a bytes object or an instance of a subtype + of the bytes type. + + +.. c:function:: int PyBytes_CheckExact(PyObject *o) + + Return true if the object *o* is a bytes object, but not an instance of a + subtype of the bytes type. + + +.. c:function:: PyObject* PyBytes_FromString(const char *v) + + Return a new bytes object with a copy of the string *v* as value on success, + and *NULL* on failure. The parameter *v* must not be *NULL*; it will not be + checked. 
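The bytearray functions above have direct Python-level counterparts; a sketch of the behavior they implement (the analogies in the comments are informal):

```python
# Python-level behavior behind the bytearray C API sketched above:
ba = bytearray(b"spam")       # cf. PyByteArray_FromStringAndSize("spam", 4)
print(len(ba))                # 4 -- cf. PyByteArray_Size
ba += b" eggs"                # grows the internal buffer, cf. PyByteArray_Resize
ba[0] = ord("S")              # the buffer is mutable, unlike bytes
print(bytes(ba))              # b'Spam eggs'

b = b"spam"                   # bytes objects, by contrast, are immutable
try:
    b[0] = ord("S")
except TypeError:
    print("bytes is immutable")
```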
+ + +.. c:function:: PyObject* PyBytes_FromStringAndSize(const char *v, Py_ssize_t len) + + Return a new bytes object with a copy of the string *v* as value and length + *len* on success, and *NULL* on failure. If *v* is *NULL*, the contents of + the bytes object are uninitialized. + + +.. c:function:: PyObject* PyBytes_FromFormat(const char *format, ...) + + Take a C :c:func:`printf`\ -style *format* string and a variable number of + arguments, calculate the size of the resulting Python bytes object and return + a bytes object with the values formatted into it. The variable arguments + must be C types and must correspond exactly to the format characters in the + *format* string. The following format characters are allowed: + + .. % XXX: This should be exactly the same as the table in PyErr_Format. + .. % One should just refer to the other. + .. % XXX: The descriptions for %zd and %zu are wrong, but the truth is complicated + .. % because not all compilers support the %z width modifier -- we fake it + .. % when necessary via interpolating PY_FORMAT_SIZE_T. + + +-------------------+---------------+--------------------------------+ + | Format Characters | Type | Comment | + +===================+===============+================================+ + | :attr:`%%` | *n/a* | The literal % character. | + +-------------------+---------------+--------------------------------+ + | :attr:`%c` | int | A single character, | + | | | represented as a C int. | + +-------------------+---------------+--------------------------------+ + | :attr:`%d` | int | Exactly equivalent to | + | | | ``printf("%d")``. | + +-------------------+---------------+--------------------------------+ + | :attr:`%u` | unsigned int | Exactly equivalent to | + | | | ``printf("%u")``. | + +-------------------+---------------+--------------------------------+ + | :attr:`%ld` | long | Exactly equivalent to | + | | | ``printf("%ld")``. 
| + +-------------------+---------------+--------------------------------+ + | :attr:`%lu` | unsigned long | Exactly equivalent to | + | | | ``printf("%lu")``. | + +-------------------+---------------+--------------------------------+ + | :attr:`%zd` | Py_ssize_t | Exactly equivalent to | + | | | ``printf("%zd")``. | + +-------------------+---------------+--------------------------------+ + | :attr:`%zu` | size_t | Exactly equivalent to | + | | | ``printf("%zu")``. | + +-------------------+---------------+--------------------------------+ + | :attr:`%i` | int | Exactly equivalent to | + | | | ``printf("%i")``. | + +-------------------+---------------+--------------------------------+ + | :attr:`%x` | int | Exactly equivalent to | + | | | ``printf("%x")``. | + +-------------------+---------------+--------------------------------+ + | :attr:`%s` | char\* | A null-terminated C character | + | | | array. | + +-------------------+---------------+--------------------------------+ + | :attr:`%p` | void\* | The hex representation of a C | + | | | pointer. Mostly equivalent to | + | | | ``printf("%p")`` except that | + | | | it is guaranteed to start with | + | | | the literal ``0x`` regardless | + | | | of what the platform's | + | | | ``printf`` yields. | + +-------------------+---------------+--------------------------------+ + + An unrecognized format character causes all the rest of the format string to be + copied as-is to the result string, and any extra arguments discarded. + + +.. c:function:: PyObject* PyBytes_FromFormatV(const char *format, va_list vargs) + + Identical to :c:func:`PyBytes_FromFormat` except that it takes exactly two + arguments. + + +.. c:function:: PyObject* PyBytes_FromObject(PyObject *o) + + Return the bytes representation of object *o* that implements the buffer + protocol. + + +.. c:function:: Py_ssize_t PyBytes_Size(PyObject *o) + + Return the length of the bytes in bytes object *o*. + + +.. 
c:function:: Py_ssize_t PyBytes_GET_SIZE(PyObject *o) + + Macro form of :c:func:`PyBytes_Size` but without error checking. + + +.. c:function:: char* PyBytes_AsString(PyObject *o) + + Return a NUL-terminated representation of the contents of *o*. The pointer + refers to the internal buffer of *o*, not a copy. The data must not be + modified in any way, unless the string was just created using + ``PyBytes_FromStringAndSize(NULL, size)``. It must not be deallocated. If + *o* is not a bytes object at all, :c:func:`PyBytes_AsString` returns *NULL* + and raises :exc:`TypeError`. + + +.. c:function:: char* PyBytes_AS_STRING(PyObject *string) + + Macro form of :c:func:`PyBytes_AsString` but without error checking. + + +.. c:function:: int PyBytes_AsStringAndSize(PyObject *obj, char **buffer, Py_ssize_t *length) + + Return a NUL-terminated representation of the contents of the object *obj* + through the output variables *buffer* and *length*. + + If *length* is *NULL*, the resulting buffer may not contain NUL characters; + if it does, the function returns ``-1`` and a :exc:`TypeError` is raised. + + The buffer refers to an internal string buffer of *obj*, not a copy. The data + must not be modified in any way, unless the string was just created using + ``PyBytes_FromStringAndSize(NULL, size)``. It must not be deallocated. If + *obj* is not a bytes object at all, :c:func:`PyBytes_AsStringAndSize` + returns ``-1`` and raises :exc:`TypeError`. + + +.. c:function:: void PyBytes_Concat(PyObject **bytes, PyObject *newpart) + + Create a new bytes object in *\*bytes* containing the contents of *newpart* + appended to *bytes*; the caller will own the new reference. The reference to + the old value of *bytes* will be stolen. If the new bytes object cannot be + created, the old reference to *bytes* will still be discarded and the value + of *\*bytes* will be set to *NULL*; the appropriate exception will be set. + + +.. 
c:function:: void PyBytes_ConcatAndDel(PyObject **bytes, PyObject *newpart) + + Create a new bytes object in *\*bytes* containing the contents of *newpart* + appended to *bytes*. This version decrements the reference count of + *newpart*. + + +.. c:function:: int _PyBytes_Resize(PyObject **bytes, Py_ssize_t newsize) + + A way to resize a bytes object even though it is "immutable". Only use this + to build up a brand new bytes object; don't use this if the bytes may already + be known in other parts of the code. It is an error to call this function if + the refcount on the input bytes object is not one. Pass the address of an + existing bytes object as an lvalue (it may be written into), and the new size + desired. On success, *\*bytes* holds the resized bytes object and ``0`` is + returned; the address in *\*bytes* may differ from its input value. If the + reallocation fails, the original bytes object at *\*bytes* is deallocated, + *\*bytes* is set to *NULL*, a memory exception is set, and ``-1`` is + returned. diff --git a/lib/cpython-doc/c-api/capsule.rst b/lib/cpython-doc/c-api/capsule.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/capsule.rst @@ -0,0 +1,150 @@ +.. highlightlang:: c + +.. _capsules: + +Capsules +-------- + +.. index:: object: Capsule + +Refer to :ref:`using-capsules` for more information on using these objects. + + +.. c:type:: PyCapsule + + This subtype of :c:type:`PyObject` represents an opaque value, useful for C + extension modules that need to pass an opaque value (as a :c:type:`void\*` + pointer) through Python code to other C code. It is often used to make a C + function pointer defined in one module available to other modules, so the + regular import mechanism can be used to access C APIs defined in dynamically + loaded modules. + +.. c:type:: PyCapsule_Destructor + + The type of a destructor callback for a capsule. 
Defined as:: + + typedef void (*PyCapsule_Destructor)(PyObject *); + + See :c:func:`PyCapsule_New` for the semantics of PyCapsule_Destructor + callbacks. + + +.. c:function:: int PyCapsule_CheckExact(PyObject *p) + + Return true if its argument is a :c:type:`PyCapsule`. + + +.. c:function:: PyObject* PyCapsule_New(void *pointer, const char *name, PyCapsule_Destructor destructor) + + Create a :c:type:`PyCapsule` encapsulating the *pointer*. The *pointer* + argument may not be *NULL*. + + On failure, set an exception and return *NULL*. + + The *name* string may either be *NULL* or a pointer to a valid C string. If + non-*NULL*, this string must outlive the capsule. (Though it is permitted to + free it inside the *destructor*.) + + If the *destructor* argument is not *NULL*, it will be called with the + capsule as its argument when it is destroyed. + + If this capsule will be stored as an attribute of a module, the *name* should + be specified as ``modulename.attributename``. This will enable other modules + to import the capsule using :c:func:`PyCapsule_Import`. + + +.. c:function:: void* PyCapsule_GetPointer(PyObject *capsule, const char *name) + + Retrieve the *pointer* stored in the capsule. On failure, set an exception + and return *NULL*. + + The *name* parameter must compare exactly to the name stored in the capsule. + If the name stored in the capsule is *NULL*, the *name* passed in must also + be *NULL*. Python uses the C function :c:func:`strcmp` to compare capsule + names. + + +.. c:function:: PyCapsule_Destructor PyCapsule_GetDestructor(PyObject *capsule) + + Return the current destructor stored in the capsule. On failure, set an + exception and return *NULL*. + + It is legal for a capsule to have a *NULL* destructor. This makes a *NULL* + return code somewhat ambiguous; use :c:func:`PyCapsule_IsValid` or + :c:func:`PyErr_Occurred` to disambiguate. + + +.. 
c:function:: void* PyCapsule_GetContext(PyObject *capsule) + + Return the current context stored in the capsule. On failure, set an + exception and return *NULL*. + + It is legal for a capsule to have a *NULL* context. This makes a *NULL* + return code somewhat ambiguous; use :c:func:`PyCapsule_IsValid` or + :c:func:`PyErr_Occurred` to disambiguate. + + +.. c:function:: const char* PyCapsule_GetName(PyObject *capsule) + + Return the current name stored in the capsule. On failure, set an exception + and return *NULL*. + + It is legal for a capsule to have a *NULL* name. This makes a *NULL* return + code somewhat ambiguous; use :c:func:`PyCapsule_IsValid` or + :c:func:`PyErr_Occurred` to disambiguate. + + +.. c:function:: void* PyCapsule_Import(const char *name, int no_block) + + Import a pointer to a C object from a capsule attribute in a module. The + *name* parameter should specify the full name to the attribute, as in + ``module.attribute``. The *name* stored in the capsule must match this + string exactly. If *no_block* is true, import the module without blocking + (using :c:func:`PyImport_ImportModuleNoBlock`). If *no_block* is false, + import the module conventionally (using :c:func:`PyImport_ImportModule`). + + Return the capsule's internal *pointer* on success. On failure, set an + exception and return *NULL*. However, if :c:func:`PyCapsule_Import` failed to + import the module, and *no_block* was true, no exception is set. + +.. c:function:: int PyCapsule_IsValid(PyObject *capsule, const char *name) + + Determines whether or not *capsule* is a valid capsule. A valid capsule is + non-*NULL*, passes :c:func:`PyCapsule_CheckExact`, has a non-*NULL* pointer + stored in it, and its internal name matches the *name* parameter. (See + :c:func:`PyCapsule_GetPointer` for information on how capsule names are + compared.) 
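CPython itself ships a capsule that can be inspected from Python: the :mod:`datetime` module publishes its C API as a capsule attribute, following the ``modulename.attributename`` naming convention described under :c:func:`PyCapsule_New`. A sketch:

```python
import datetime

# datetime exposes its C API as a capsule created with PyCapsule_New();
# C extensions fetch it with PyCapsule_Import("datetime.datetime_CAPI", 0).
capsule = datetime.datetime_CAPI
print(type(capsule).__name__)                      # PyCapsule
print("datetime.datetime_CAPI" in repr(capsule))   # True -- the stored name
```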
+ + In other words, if :c:func:`PyCapsule_IsValid` returns a true value, calls to + any of the accessors (any function starting with :c:func:`PyCapsule_Get`) are + guaranteed to succeed. + + Return a nonzero value if the object is valid and matches the name passed in. + Return 0 otherwise. This function will not fail. + +.. c:function:: int PyCapsule_SetContext(PyObject *capsule, void *context) + + Set the context pointer inside *capsule* to *context*. + + Return 0 on success. Return nonzero and set an exception on failure. + +.. c:function:: int PyCapsule_SetDestructor(PyObject *capsule, PyCapsule_Destructor destructor) + + Set the destructor inside *capsule* to *destructor*. + + Return 0 on success. Return nonzero and set an exception on failure. + +.. c:function:: int PyCapsule_SetName(PyObject *capsule, const char *name) + + Set the name inside *capsule* to *name*. If non-*NULL*, the name must + outlive the capsule. If the previous *name* stored in the capsule was not + *NULL*, no attempt is made to free it. + + Return 0 on success. Return nonzero and set an exception on failure. + +.. c:function:: int PyCapsule_SetPointer(PyObject *capsule, void *pointer) + + Set the void pointer inside *capsule* to *pointer*. The pointer may not be + *NULL*. + + Return 0 on success. Return nonzero and set an exception on failure. diff --git a/lib/cpython-doc/c-api/cell.rst b/lib/cpython-doc/c-api/cell.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/cell.rst @@ -0,0 +1,62 @@ +.. highlightlang:: c + +.. _cell-objects: + +Cell Objects +------------ + +"Cell" objects are used to implement variables referenced by multiple scopes. +For each such variable, a cell object is created to store the value; the local +variables of each stack frame that references the value contain a reference to +the cells from outer scopes which also use that variable. When the value is +accessed, the value contained in the cell is used instead of the cell object +itself. 
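Cells can be observed from Python through a closure's ``__closure__`` attribute, which illustrates the get/set behavior the cell functions implement:

```python
def make_counter():
    count = 0                      # stored in a cell shared with bump()
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

counter = make_counter()
cell = counter.__closure__[0]      # the cell object behind "count"
print(type(cell).__name__)         # cell
print(cell.cell_contents)          # 0 -- what PyCell_Get would return
counter()
print(cell.cell_contents)          # 1 -- the cell's value was updated in place
```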
This de-referencing of the cell object requires support from the +generated byte-code; these are not automatically de-referenced when accessed. +Cell objects are not likely to be useful elsewhere. + + +.. c:type:: PyCellObject + + The C structure used for cell objects. + + +.. c:var:: PyTypeObject PyCell_Type + + The type object corresponding to cell objects. + + +.. c:function:: int PyCell_Check(ob) + + Return true if *ob* is a cell object; *ob* must not be *NULL*. + + +.. c:function:: PyObject* PyCell_New(PyObject *ob) + + Create and return a new cell object containing the value *ob*. The parameter may + be *NULL*. + + +.. c:function:: PyObject* PyCell_Get(PyObject *cell) + + Return the contents of the cell *cell*. + + +.. c:function:: PyObject* PyCell_GET(PyObject *cell) + + Return the contents of the cell *cell*, but without checking that *cell* is + non-*NULL* and a cell object. + + +.. c:function:: int PyCell_Set(PyObject *cell, PyObject *value) + + Set the contents of the cell object *cell* to *value*. This releases the + reference to any current content of the cell. *value* may be *NULL*. *cell* + must be non-*NULL*; if it is not a cell object, ``-1`` will be returned. On + success, ``0`` will be returned. + + +.. c:function:: void PyCell_SET(PyObject *cell, PyObject *value) + + Sets the value of the cell object *cell* to *value*. No reference counts are + adjusted, and no checks are made for safety; *cell* must be non-*NULL* and must + be a cell object. diff --git a/lib/cpython-doc/c-api/code.rst b/lib/cpython-doc/c-api/code.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/code.rst @@ -0,0 +1,50 @@ +.. highlightlang:: c + +.. _codeobjects: + +Code Objects +------------ + +.. sectionauthor:: Jeffrey Yasskin + + +.. index:: + object: code + +Code objects are a low-level detail of the CPython implementation. +Each one represents a chunk of executable code that hasn't yet been +bound into a function. + +.. 
c:type:: PyCodeObject + + The C structure of the objects used to describe code objects. The + fields of this type are subject to change at any time. + + +.. c:var:: PyTypeObject PyCode_Type + + This is an instance of :c:type:`PyTypeObject` representing the Python + :class:`code` type. + + +.. c:function:: int PyCode_Check(PyObject *co) + + Return true if *co* is a :class:`code` object. + +.. c:function:: int PyCode_GetNumFree(PyObject *co) + + Return the number of free variables in *co*. + +.. c:function:: PyCodeObject *PyCode_New(int argcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, PyObject *filename, PyObject *name, int firstlineno, PyObject *lnotab) + + Return a new code object. If you need a dummy code object to + create a frame, use :c:func:`PyCode_NewEmpty` instead. Calling + :c:func:`PyCode_New` directly can bind you to a precise Python + version since the definition of the bytecode changes often. + + +.. c:function:: int PyCode_NewEmpty(const char *filename, const char *funcname, int firstlineno) + + Return a new empty code object with the specified filename, + function name, and first line number. It is illegal to + :func:`exec` or :func:`eval` the resulting code object. diff --git a/lib/cpython-doc/c-api/codec.rst b/lib/cpython-doc/c-api/codec.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/codec.rst @@ -0,0 +1,118 @@ +.. _codec-registry: + +Codec registry and support functions +==================================== + +.. c:function:: int PyCodec_Register(PyObject *search_function) + + Register a new codec search function. + + As a side effect, this tries to load the :mod:`encodings` package, if not yet + done, to make sure that it is always first in the list of search functions. + +.. 
c:function:: int PyCodec_KnownEncoding(const char *encoding) + + Return ``1`` or ``0`` depending on whether there is a registered codec for + the given *encoding*. + +.. c:function:: PyObject* PyCodec_Encode(PyObject *object, const char *encoding, const char *errors) + + Generic codec based encoding API. + + *object* is passed through the encoder function found for the given + *encoding* using the error handling method defined by *errors*. *errors* may + be *NULL* to use the default method defined for the codec. Raises a + :exc:`LookupError` if no encoder can be found. + +.. c:function:: PyObject* PyCodec_Decode(PyObject *object, const char *encoding, const char *errors) + + Generic codec based decoding API. + + *object* is passed through the decoder function found for the given + *encoding* using the error handling method defined by *errors*. *errors* may + be *NULL* to use the default method defined for the codec. Raises a + :exc:`LookupError` if no decoder can be found. + + +Codec lookup API +---------------- + +In the following functions, the *encoding* string is converted to all + lower-case characters before being looked up, which makes encodings looked up + through this mechanism effectively case-insensitive. If no codec is found, a + :exc:`KeyError` is set and *NULL* returned. + +.. c:function:: PyObject* PyCodec_Encoder(const char *encoding) + + Get an encoder function for the given *encoding*. + +.. c:function:: PyObject* PyCodec_Decoder(const char *encoding) + + Get a decoder function for the given *encoding*. + +.. c:function:: PyObject* PyCodec_IncrementalEncoder(const char *encoding, const char *errors) + + Get an :class:`IncrementalEncoder` object for the given *encoding*. + +.. c:function:: PyObject* PyCodec_IncrementalDecoder(const char *encoding, const char *errors) + + Get an :class:`IncrementalDecoder` object for the given *encoding*. + +.. 
c:function:: PyObject* PyCodec_StreamReader(const char *encoding, PyObject *stream, const char *errors) + + Get a :class:`StreamReader` factory function for the given *encoding*. + +.. c:function:: PyObject* PyCodec_StreamWriter(const char *encoding, PyObject *stream, const char *errors) + + Get a :class:`StreamWriter` factory function for the given *encoding*. + + +Registry API for Unicode encoding error handlers +------------------------------------------------ + +.. c:function:: int PyCodec_RegisterError(const char *name, PyObject *error) + + Register the error handling callback function *error* under the given *name*. + This callback function will be called by a codec when it encounters + unencodable characters/undecodable bytes and *name* is specified as the error + parameter in the call to the encode/decode function. + + The callback gets a single argument, an instance of + :exc:`UnicodeEncodeError`, :exc:`UnicodeDecodeError` or + :exc:`UnicodeTranslateError` that holds information about the problematic + sequence of characters or bytes and their offset in the original string (see + :ref:`unicodeexceptions` for functions to extract this information). The + callback must either raise the given exception, or return a two-item tuple + containing the replacement for the problematic sequence, and an integer + giving the offset in the original string at which encoding/decoding should be + resumed. + + Return ``0`` on success, ``-1`` on error. + +.. c:function:: PyObject* PyCodec_LookupError(const char *name) + + Look up the error handling callback function registered under *name*. As a + special case, *NULL* can be passed, in which case the error handling callback + for "strict" will be returned. + +.. c:function:: PyObject* PyCodec_StrictErrors(PyObject *exc) + + Raise *exc* as an exception. + +.. c:function:: PyObject* PyCodec_IgnoreErrors(PyObject *exc) + + Ignore the unicode error, skipping the faulty input. + +.. 
c:function:: PyObject* PyCodec_ReplaceErrors(PyObject *exc) + + Replace the unicode encode error with ``?`` or ``U+FFFD``. + +.. c:function:: PyObject* PyCodec_XMLCharRefReplaceErrors(PyObject *exc) + + Replace the unicode encode error with XML character references. + +.. c:function:: PyObject* PyCodec_BackslashReplaceErrors(PyObject *exc) + + Replace the unicode encode error with backslash escapes (``\x``, ``\u`` and + ``\U``). + diff --git a/lib/cpython-doc/c-api/complex.rst b/lib/cpython-doc/c-api/complex.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/complex.rst @@ -0,0 +1,132 @@ +.. highlightlang:: c + +.. _complexobjects: + +Complex Number Objects +---------------------- + +.. index:: object: complex number + +Python's complex number objects are implemented as two distinct types when +viewed from the C API: one is the Python object exposed to Python programs, and +the other is a C structure which represents the actual complex number value. +The API provides functions for working with both. + + +Complex Numbers as C Structures +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Note that the functions which accept these structures as parameters and return +them as results do so *by value* rather than dereferencing them through +pointers. This is consistent throughout the API. + + +.. c:type:: Py_complex + + The C structure which corresponds to the value portion of a Python complex + number object. Most of the functions for dealing with complex number objects + use structures of this type as input or output values, as appropriate. It is + defined as:: + + typedef struct { + double real; + double imag; + } Py_complex; + + +.. c:function:: Py_complex _Py_c_sum(Py_complex left, Py_complex right) + + Return the sum of two complex numbers, using the C :c:type:`Py_complex` + representation. + + +.. 
c:function:: Py_complex _Py_c_diff(Py_complex left, Py_complex right) + + Return the difference between two complex numbers, using the C + :c:type:`Py_complex` representation. + + +.. c:function:: Py_complex _Py_c_neg(Py_complex complex) + + Return the negation of the complex number *complex*, using the C + :c:type:`Py_complex` representation. + + +.. c:function:: Py_complex _Py_c_prod(Py_complex left, Py_complex right) + + Return the product of two complex numbers, using the C :c:type:`Py_complex` + representation. + + +.. c:function:: Py_complex _Py_c_quot(Py_complex dividend, Py_complex divisor) + + Return the quotient of two complex numbers, using the C :c:type:`Py_complex` + representation. + + If *divisor* is null, this method returns zero and sets + :c:data:`errno` to :c:data:`EDOM`. + + +.. c:function:: Py_complex _Py_c_pow(Py_complex num, Py_complex exp) + + Return the exponentiation of *num* by *exp*, using the C :c:type:`Py_complex` + representation. + + If *num* is null and *exp* is not a positive real number, + this method returns zero and sets :c:data:`errno` to :c:data:`EDOM`. + + +Complex Numbers as Python Objects +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + + +.. c:type:: PyComplexObject + + This subtype of :c:type:`PyObject` represents a Python complex number object. + + +.. c:var:: PyTypeObject PyComplex_Type + + This instance of :c:type:`PyTypeObject` represents the Python complex number + type. It is the same object as :class:`complex` in the Python layer. + + +.. c:function:: int PyComplex_Check(PyObject *p) + + Return true if its argument is a :c:type:`PyComplexObject` or a subtype of + :c:type:`PyComplexObject`. + + +.. c:function:: int PyComplex_CheckExact(PyObject *p) + + Return true if its argument is a :c:type:`PyComplexObject`, but not a subtype of + :c:type:`PyComplexObject`. + + +.. c:function:: PyObject* PyComplex_FromCComplex(Py_complex v) + + Create a new Python complex number object from a C :c:type:`Py_complex` value. + + +.. 
c:function:: PyObject* PyComplex_FromDoubles(double real, double imag) + + Return a new :c:type:`PyComplexObject` object from *real* and *imag*. + + +.. c:function:: double PyComplex_RealAsDouble(PyObject *op) + + Return the real part of *op* as a C :c:type:`double`. + + +.. c:function:: double PyComplex_ImagAsDouble(PyObject *op) + + Return the imaginary part of *op* as a C :c:type:`double`. + + +.. c:function:: Py_complex PyComplex_AsCComplex(PyObject *op) + + Return the :c:type:`Py_complex` value of the complex number *op*. + + If *op* is not a Python complex number object but has a :meth:`__complex__` + method, this method will first be called to convert *op* to a Python complex + number object. Upon failure, this method returns ``-1.0`` as a real value. diff --git a/lib/cpython-doc/c-api/concrete.rst b/lib/cpython-doc/c-api/concrete.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/concrete.rst @@ -0,0 +1,108 @@ +.. highlightlang:: c + + +.. _concrete: + +********************** +Concrete Objects Layer +********************** + +The functions in this chapter are specific to certain Python object types. +Passing them an object of the wrong type is not a good idea; if you receive an +object from a Python program and you are not sure that it has the right type, +you must perform a type check first; for example, to check that an object is a +dictionary, use :c:func:`PyDict_Check`. The chapter is structured like the +"family tree" of Python object types. + +.. warning:: + + While the functions described in this chapter carefully check the type of the + objects which are passed in, many of them do not check for *NULL* being passed + instead of a valid object. Allowing *NULL* to be passed in can cause memory + access violations and immediate termination of the interpreter. + + +.. _fundamental: + +Fundamental Objects +=================== + +This section describes Python type objects and the singleton object ``None``. + +.. 
toctree:: + + type.rst + none.rst + + +.. _numericobjects: + +Numeric Objects +=============== + +.. index:: object: numeric + +.. toctree:: + + long.rst + bool.rst + float.rst + complex.rst + + +.. _sequenceobjects: + +Sequence Objects +================ + +.. index:: object: sequence + +Generic operations on sequence objects were discussed in the previous chapter; +this section deals with the specific kinds of sequence objects that are +intrinsic to the Python language. + +.. XXX sort out unicode, str, bytes and bytearray + +.. toctree:: + + bytes.rst + bytearray.rst + unicode.rst + tuple.rst + list.rst + + +.. _mapobjects: + +Mapping Objects +=============== + +.. index:: object: mapping + +.. toctree:: + + dict.rst + + +.. _otherobjects: + +Other Objects +============= + +.. toctree:: + + set.rst + function.rst + method.rst + file.rst + module.rst + iterator.rst + descriptor.rst + slice.rst + memoryview.rst + weakref.rst + capsule.rst + cell.rst + gen.rst + datetime.rst + code.rst diff --git a/lib/cpython-doc/c-api/conversion.rst b/lib/cpython-doc/c-api/conversion.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/conversion.rst @@ -0,0 +1,131 @@ +.. highlightlang:: c + +.. _string-conversion: + +String conversion and formatting +================================ + +Functions for number conversion and formatted string output. + + +.. c:function:: int PyOS_snprintf(char *str, size_t size, const char *format, ...) + + Output not more than *size* bytes to *str* according to the format string + *format* and the extra arguments. See the Unix man page :manpage:`snprintf(2)`. + + +.. c:function:: int PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va) + + Output not more than *size* bytes to *str* according to the format string + *format* and the variable argument list *va*. Unix man page + :manpage:`vsnprintf(2)`. 
+ +:c:func:`PyOS_snprintf` and :c:func:`PyOS_vsnprintf` wrap the Standard C library +functions :c:func:`snprintf` and :c:func:`vsnprintf`. Their purpose is to +guarantee consistent behavior in corner cases, which the Standard C functions do +not. + +The wrappers ensure that *str*[*size*-1] is always ``'\0'`` upon return. They +never write more than *size* bytes (including the trailing ``'\0'``) into str. +Both functions require that ``str != NULL``, ``size > 0`` and ``format != +NULL``. + +If the platform doesn't have :c:func:`vsnprintf` and the buffer size needed to +avoid truncation exceeds *size* by more than 512 bytes, Python aborts with a +*Py_FatalError*. + +The return value (*rv*) for these functions should be interpreted as follows: + +* When ``0 <= rv < size``, the output conversion was successful and *rv* + characters were written to *str* (excluding the trailing ``'\0'`` byte at + *str*[*rv*]). + +* When ``rv >= size``, the output conversion was truncated and a buffer with + ``rv + 1`` bytes would have been needed to succeed. *str*[*size*-1] is ``'\0'`` + in this case. + +* When ``rv < 0``, "something bad happened." *str*[*size*-1] is ``'\0'`` in + this case too, but the rest of *str* is undefined. The exact cause of the error + depends on the underlying platform. + +The following functions provide locale-independent string to number conversions. + + +.. c:function:: double PyOS_string_to_double(const char *s, char **endptr, PyObject *overflow_exception) + + Convert a string ``s`` to a :c:type:`double`, raising a Python + exception on failure. The set of accepted strings corresponds to + the set of strings accepted by Python's :func:`float` constructor, + except that ``s`` must not have leading or trailing whitespace. + The conversion is independent of the current locale. + + If ``endptr`` is ``NULL``, convert the whole string. Raise + ValueError and return ``-1.0`` if the string is not a valid + representation of a floating-point number. 
+ + If endptr is not ``NULL``, convert as much of the string as + possible and set ``*endptr`` to point to the first unconverted + character. If no initial segment of the string is the valid + representation of a floating-point number, set ``*endptr`` to point + to the beginning of the string, raise ValueError, and return + ``-1.0``. + + If ``s`` represents a value that is too large to store in a float + (for example, ``"1e500"`` is such a string on many platforms) then + if ``overflow_exception`` is ``NULL`` return ``Py_HUGE_VAL`` (with + an appropriate sign) and don't set any exception. Otherwise, + ``overflow_exception`` must point to a Python exception object; + raise that exception and return ``-1.0``. In both cases, set + ``*endptr`` to point to the first character after the converted value. + + If any other error occurs during the conversion (for example an + out-of-memory error), set the appropriate Python exception and + return ``-1.0``. + + .. versionadded:: 3.1 + + +.. c:function:: char* PyOS_double_to_string(double val, char format_code, int precision, int flags, int *ptype) + + Convert a :c:type:`double` *val* to a string using supplied + *format_code*, *precision*, and *flags*. + + *format_code* must be one of ``'e'``, ``'E'``, ``'f'``, ``'F'``, + ``'g'``, ``'G'`` or ``'r'``. For ``'r'``, the supplied *precision* + must be 0 and is ignored. The ``'r'`` format code specifies the + standard :func:`repr` format. + + *flags* can be zero or more of the values *Py_DTSF_SIGN*, + *Py_DTSF_ADD_DOT_0*, or *Py_DTSF_ALT*, or-ed together: + + * *Py_DTSF_SIGN* means to always precede the returned string with a sign + character, even if *val* is non-negative. + + * *Py_DTSF_ADD_DOT_0* means to ensure that the returned string will not look + like an integer. + + * *Py_DTSF_ALT* means to apply "alternate" formatting rules. See the + documentation for the :c:func:`PyOS_snprintf` ``'#'`` specifier for + details. 
+ + If *ptype* is non-NULL, then the value it points to will be set to one of + *Py_DTST_FINITE*, *Py_DTST_INFINITE*, or *Py_DTST_NAN*, signifying that + *val* is a finite number, an infinite number, or not a number, respectively. + + The return value is a pointer to *buffer* with the converted string or + *NULL* if the conversion failed. The caller is responsible for freeing the + returned string by calling :c:func:`PyMem_Free`. + + .. versionadded:: 3.1 + + +.. c:function:: char* PyOS_stricmp(char *s1, char *s2) + + Case insensitive comparison of strings. The function works almost + identically to :c:func:`strcmp` except that it ignores the case. + + +.. c:function:: char* PyOS_strnicmp(char *s1, char *s2, Py_ssize_t size) + + Case insensitive comparison of strings. The function works almost + identically to :c:func:`strncmp` except that it ignores the case. diff --git a/lib/cpython-doc/c-api/datetime.rst b/lib/cpython-doc/c-api/datetime.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/datetime.rst @@ -0,0 +1,184 @@ +.. highlightlang:: c + +.. _datetimeobjects: + +DateTime Objects +---------------- + +Various date and time objects are supplied by the :mod:`datetime` module. +Before using any of these functions, the header file :file:`datetime.h` must be +included in your source (note that this is not included by :file:`Python.h`), +and the macro :c:macro:`PyDateTime_IMPORT` must be invoked, usually as part of +the module initialisation function. The macro puts a pointer to a C structure +into a static variable, :c:data:`PyDateTimeAPI`, that is used by the following +macros. + +Type-check macros: + +.. c:function:: int PyDate_Check(PyObject *ob) + + Return true if *ob* is of type :c:data:`PyDateTime_DateType` or a subtype of + :c:data:`PyDateTime_DateType`. *ob* must not be *NULL*. + + +.. c:function:: int PyDate_CheckExact(PyObject *ob) + + Return true if *ob* is of type :c:data:`PyDateTime_DateType`. *ob* must not be + *NULL*. + + +.. 
c:function:: int PyDateTime_Check(PyObject *ob) + + Return true if *ob* is of type :c:data:`PyDateTime_DateTimeType` or a subtype of + :c:data:`PyDateTime_DateTimeType`. *ob* must not be *NULL*. + + +.. c:function:: int PyDateTime_CheckExact(PyObject *ob) + + Return true if *ob* is of type :c:data:`PyDateTime_DateTimeType`. *ob* must not + be *NULL*. + + +.. c:function:: int PyTime_Check(PyObject *ob) + + Return true if *ob* is of type :c:data:`PyDateTime_TimeType` or a subtype of + :c:data:`PyDateTime_TimeType`. *ob* must not be *NULL*. + + +.. c:function:: int PyTime_CheckExact(PyObject *ob) + + Return true if *ob* is of type :c:data:`PyDateTime_TimeType`. *ob* must not be + *NULL*. + + +.. c:function:: int PyDelta_Check(PyObject *ob) + + Return true if *ob* is of type :c:data:`PyDateTime_DeltaType` or a subtype of + :c:data:`PyDateTime_DeltaType`. *ob* must not be *NULL*. + + +.. c:function:: int PyDelta_CheckExact(PyObject *ob) + + Return true if *ob* is of type :c:data:`PyDateTime_DeltaType`. *ob* must not be + *NULL*. + + +.. c:function:: int PyTZInfo_Check(PyObject *ob) + + Return true if *ob* is of type :c:data:`PyDateTime_TZInfoType` or a subtype of + :c:data:`PyDateTime_TZInfoType`. *ob* must not be *NULL*. + + +.. c:function:: int PyTZInfo_CheckExact(PyObject *ob) + + Return true if *ob* is of type :c:data:`PyDateTime_TZInfoType`. *ob* must not be + *NULL*. + + +Macros to create objects: + +.. c:function:: PyObject* PyDate_FromDate(int year, int month, int day) + + Return a ``datetime.date`` object with the specified year, month and day. + + +.. c:function:: PyObject* PyDateTime_FromDateAndTime(int year, int month, int day, int hour, int minute, int second, int usecond) + + Return a ``datetime.datetime`` object with the specified year, month, day, hour, + minute, second and microsecond. + + +.. 
c:function:: PyObject* PyTime_FromTime(int hour, int minute, int second, int usecond) + + Return a ``datetime.time`` object with the specified hour, minute, second and + microsecond. + + +.. c:function:: PyObject* PyDelta_FromDSU(int days, int seconds, int useconds) + + Return a ``datetime.timedelta`` object representing the given number of days, + seconds and microseconds. Normalization is performed so that the resulting + number of microseconds and seconds lie in the ranges documented for + ``datetime.timedelta`` objects. + + +Macros to extract fields from date objects. The argument must be an instance of +:c:data:`PyDateTime_Date`, including subclasses (such as +:c:data:`PyDateTime_DateTime`). The argument must not be *NULL*, and the type is +not checked: + +.. c:function:: int PyDateTime_GET_YEAR(PyDateTime_Date *o) + + Return the year, as a positive int. + + +.. c:function:: int PyDateTime_GET_MONTH(PyDateTime_Date *o) + + Return the month, as an int from 1 through 12. + + +.. c:function:: int PyDateTime_GET_DAY(PyDateTime_Date *o) + + Return the day, as an int from 1 through 31. + + +Macros to extract fields from datetime objects. The argument must be an +instance of :c:data:`PyDateTime_DateTime`, including subclasses. The argument +must not be *NULL*, and the type is not checked: + +.. c:function:: int PyDateTime_DATE_GET_HOUR(PyDateTime_DateTime *o) + + Return the hour, as an int from 0 through 23. + + +.. c:function:: int PyDateTime_DATE_GET_MINUTE(PyDateTime_DateTime *o) + + Return the minute, as an int from 0 through 59. + + +.. c:function:: int PyDateTime_DATE_GET_SECOND(PyDateTime_DateTime *o) + + Return the second, as an int from 0 through 59. + + +.. c:function:: int PyDateTime_DATE_GET_MICROSECOND(PyDateTime_DateTime *o) + + Return the microsecond, as an int from 0 through 999999. + + +Macros to extract fields from time objects. The argument must be an instance of +:c:data:`PyDateTime_Time`, including subclasses. 
The argument must not be *NULL*, +and the type is not checked: + +.. c:function:: int PyDateTime_TIME_GET_HOUR(PyDateTime_Time *o) + + Return the hour, as an int from 0 through 23. + + +.. c:function:: int PyDateTime_TIME_GET_MINUTE(PyDateTime_Time *o) + + Return the minute, as an int from 0 through 59. + + +.. c:function:: int PyDateTime_TIME_GET_SECOND(PyDateTime_Time *o) + + Return the second, as an int from 0 through 59. + + +.. c:function:: int PyDateTime_TIME_GET_MICROSECOND(PyDateTime_Time *o) + + Return the microsecond, as an int from 0 through 999999. + + +Macros for the convenience of modules implementing the DB API: + +.. c:function:: PyObject* PyDateTime_FromTimestamp(PyObject *args) + + Create and return a new ``datetime.datetime`` object given an argument tuple + suitable for passing to ``datetime.datetime.fromtimestamp()``. + + +.. c:function:: PyObject* PyDate_FromTimestamp(PyObject *args) + + Create and return a new ``datetime.date`` object given an argument tuple + suitable for passing to ``datetime.date.fromtimestamp()``. diff --git a/lib/cpython-doc/c-api/descriptor.rst b/lib/cpython-doc/c-api/descriptor.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/descriptor.rst @@ -0,0 +1,40 @@ +.. highlightlang:: c + +.. _descriptor-objects: + +Descriptor Objects +------------------ + +"Descriptors" are objects that describe some attribute of an object. They are +found in the dictionary of type objects. + +.. XXX document these! + +.. c:var:: PyTypeObject PyProperty_Type + + The type object for the built-in descriptor types. + + +.. c:function:: PyObject* PyDescr_NewGetSet(PyTypeObject *type, struct PyGetSetDef *getset) + + +.. c:function:: PyObject* PyDescr_NewMember(PyTypeObject *type, struct PyMemberDef *meth) + + +.. c:function:: PyObject* PyDescr_NewMethod(PyTypeObject *type, struct PyMethodDef *meth) + + +.. c:function:: PyObject* PyDescr_NewWrapper(PyTypeObject *type, struct wrapperbase *wrapper, void *wrapped) + + +.. 
c:function:: PyObject* PyDescr_NewClassMethod(PyTypeObject *type, PyMethodDef *method) + + +.. c:function:: int PyDescr_IsData(PyObject *descr) + + Return true if the descriptor object *descr* describes a data attribute, or + false if it describes a method. *descr* must be a descriptor object; there is + no error checking. + + +.. c:function:: PyObject* PyWrapper_New(PyObject *, PyObject *) diff --git a/lib/cpython-doc/c-api/dict.rst b/lib/cpython-doc/c-api/dict.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/dict.rst @@ -0,0 +1,218 @@ +.. highlightlang:: c + +.. _dictobjects: + +Dictionary Objects +------------------ + +.. index:: object: dictionary + + +.. c:type:: PyDictObject + + This subtype of :c:type:`PyObject` represents a Python dictionary object. + + +.. c:var:: PyTypeObject PyDict_Type + + This instance of :c:type:`PyTypeObject` represents the Python dictionary + type. This is the same object as :class:`dict` in the Python layer. + + +.. c:function:: int PyDict_Check(PyObject *p) + + Return true if *p* is a dict object or an instance of a subtype of the dict + type. + + +.. c:function:: int PyDict_CheckExact(PyObject *p) + + Return true if *p* is a dict object, but not an instance of a subtype of + the dict type. + + +.. c:function:: PyObject* PyDict_New() + + Return a new empty dictionary, or *NULL* on failure. + + +.. c:function:: PyObject* PyDictProxy_New(PyObject *dict) + + Return a proxy object for a mapping which enforces read-only behavior. + This is normally used to create a proxy to prevent modification of the + dictionary for non-dynamic class types. + + +.. c:function:: void PyDict_Clear(PyObject *p) + + Empty an existing dictionary of all key-value pairs. + + +.. c:function:: int PyDict_Contains(PyObject *p, PyObject *key) + + Determine if dictionary *p* contains *key*. If an item in *p* matches + *key*, return ``1``, otherwise return ``0``. On error, return ``-1``.
+ This is equivalent to the Python expression ``key in p``. + + +.. c:function:: PyObject* PyDict_Copy(PyObject *p) + + Return a new dictionary that contains the same key-value pairs as *p*. + + +.. c:function:: int PyDict_SetItem(PyObject *p, PyObject *key, PyObject *val) + + Insert *value* into the dictionary *p* with a key of *key*. *key* must be + :term:`hashable`; if it isn't, :exc:`TypeError` will be raised. Return + ``0`` on success or ``-1`` on failure. + + +.. c:function:: int PyDict_SetItemString(PyObject *p, const char *key, PyObject *val) + + .. index:: single: PyUnicode_FromString() + + Insert *value* into the dictionary *p* using *key* as a key. *key* should + be a :c:type:`char\*`. The key object is created using + ``PyUnicode_FromString(key)``. Return ``0`` on success or ``-1`` on + failure. + + +.. c:function:: int PyDict_DelItem(PyObject *p, PyObject *key) + + Remove the entry in dictionary *p* with key *key*. *key* must be hashable; + if it isn't, :exc:`TypeError` is raised. Return ``0`` on success or ``-1`` + on failure. + + +.. c:function:: int PyDict_DelItemString(PyObject *p, char *key) + + Remove the entry in dictionary *p* which has a key specified by the string + *key*. Return ``0`` on success or ``-1`` on failure. + + +.. c:function:: PyObject* PyDict_GetItem(PyObject *p, PyObject *key) + + Return the object from dictionary *p* which has a key *key*. Return *NULL* + if the key *key* is not present, but *without* setting an exception. + + +.. c:function:: PyObject* PyDict_GetItemWithError(PyObject *p, PyObject *key) + + Variant of :c:func:`PyDict_GetItem` that does not suppress + exceptions. Return *NULL* **with** an exception set if an exception + occurred. Return *NULL* **without** an exception set if the key + wasn't present. + + +.. 
c:function:: PyObject* PyDict_GetItemString(PyObject *p, const char *key) + + This is the same as :c:func:`PyDict_GetItem`, but *key* is specified as a + :c:type:`char\*`, rather than a :c:type:`PyObject\*`. + + +.. c:function:: PyObject* PyDict_Items(PyObject *p) + + Return a :c:type:`PyListObject` containing all the items from the dictionary. + + +.. c:function:: PyObject* PyDict_Keys(PyObject *p) + + Return a :c:type:`PyListObject` containing all the keys from the dictionary. + + +.. c:function:: PyObject* PyDict_Values(PyObject *p) + + Return a :c:type:`PyListObject` containing all the values from the dictionary + *p*. + + +.. c:function:: Py_ssize_t PyDict_Size(PyObject *p) + + .. index:: builtin: len + + Return the number of items in the dictionary. This is equivalent to + ``len(p)`` on a dictionary. + + +.. c:function:: int PyDict_Next(PyObject *p, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue) + + Iterate over all key-value pairs in the dictionary *p*. The + :c:type:`Py_ssize_t` referred to by *ppos* must be initialized to ``0`` + prior to the first call to this function to start the iteration; the + function returns true for each pair in the dictionary, and false once all + pairs have been reported. The parameters *pkey* and *pvalue* should either + point to :c:type:`PyObject\*` variables that will be filled in with each key + and value, respectively, or may be *NULL*. Any references returned through + them are borrowed. *ppos* should not be altered during iteration. Its + value represents offsets within the internal dictionary structure, and + since the structure is sparse, the offsets are not consecutive. + + For example:: + + PyObject *key, *value; + Py_ssize_t pos = 0; + + while (PyDict_Next(self->dict, &pos, &key, &value)) { + /* do something interesting with the values... */ + ... + } + + The dictionary *p* should not be mutated during iteration. 
It is safe to + modify the values of the keys as you iterate over the dictionary, but only + so long as the set of keys does not change. For example:: + + PyObject *key, *value; + Py_ssize_t pos = 0; + + while (PyDict_Next(self->dict, &pos, &key, &value)) { + long i = PyLong_AsLong(value); + if (i == -1 && PyErr_Occurred()) { + return -1; + } + PyObject *o = PyLong_FromLong(i + 1); + if (o == NULL) + return -1; + if (PyDict_SetItem(self->dict, key, o) < 0) { + Py_DECREF(o); + return -1; + } + Py_DECREF(o); + } + + +.. c:function:: int PyDict_Merge(PyObject *a, PyObject *b, int override) + + Iterate over mapping object *b* adding key-value pairs to dictionary *a*. + *b* may be a dictionary, or any object supporting :c:func:`PyMapping_Keys` + and :c:func:`PyObject_GetItem`. If *override* is true, existing pairs in *a* + will be replaced if a matching key is found in *b*, otherwise pairs will + only be added if there is not a matching key in *a*. Return ``0`` on + success or ``-1`` if an exception was raised. + + +.. c:function:: int PyDict_Update(PyObject *a, PyObject *b) + + This is the same as ``PyDict_Merge(a, b, 1)`` in C, or ``a.update(b)`` in + Python. Return ``0`` on success or ``-1`` if an exception was raised. + + +.. c:function:: int PyDict_MergeFromSeq2(PyObject *a, PyObject *seq2, int override) + + Update or merge into dictionary *a*, from the key-value pairs in *seq2*. + *seq2* must be an iterable object producing iterable objects of length 2, + viewed as key-value pairs. In case of duplicate keys, the last wins if + *override* is true, else the first wins. Return ``0`` on success or ``-1`` + if an exception was raised. Equivalent Python (except for the return + value):: + + def PyDict_MergeFromSeq2(a, seq2, override): + for key, value in seq2: + if override or key not in a: + a[key] = value + + +.. c:function:: int PyDict_ClearFreeList() + + Clear the free list. Return the total number of freed items. + + .. 
versionadded:: 3.3 diff --git a/lib/cpython-doc/c-api/exceptions.rst b/lib/cpython-doc/c-api/exceptions.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/exceptions.rst @@ -0,0 +1,750 @@ +.. highlightlang:: c + + +.. _exceptionhandling: + +****************** +Exception Handling +****************** + +The functions described in this chapter will let you handle and raise Python +exceptions. It is important to understand some of the basics of Python +exception handling. It works somewhat like the Unix :c:data:`errno` variable: +there is a global indicator (per thread) of the last error that occurred. Most +functions don't clear this on success, but will set it to indicate the cause of +the error on failure. Most functions also return an error indicator, usually +*NULL* if they are supposed to return a pointer, or ``-1`` if they return an +integer (exception: the :c:func:`PyArg_\*` functions return ``1`` for success and +``0`` for failure). + +When a function must fail because some function it called failed, it generally +doesn't set the error indicator; the function it called already set it. It is +responsible for either handling the error and clearing the exception or +returning after cleaning up any resources it holds (such as object references or +memory allocations); it should *not* continue normally if it is not prepared to +handle the error. If returning due to an error, it is important to indicate to +the caller that an error has been set. If the error is not handled or carefully +propagated, additional calls into the Python/C API may not behave as intended +and may fail in mysterious ways. + +The error indicator consists of three Python objects corresponding to the result +of ``sys.exc_info()``. API functions exist to interact with the error indicator +in various ways. There is a separate error indicator for each thread. + +.. XXX Order of these should be more thoughtful. + Either alphabetical or some kind of structure. + + +.. 
c:function:: void PyErr_PrintEx(int set_sys_last_vars) + + Print a standard traceback to ``sys.stderr`` and clear the error indicator. + Call this function only when the error indicator is set. (Otherwise it will + cause a fatal error!) + + If *set_sys_last_vars* is nonzero, the variables :data:`sys.last_type`, + :data:`sys.last_value` and :data:`sys.last_traceback` will be set to the + type, value and traceback of the printed exception, respectively. + + +.. c:function:: void PyErr_Print() + + Alias for ``PyErr_PrintEx(1)``. + + +.. c:function:: PyObject* PyErr_Occurred() + + Test whether the error indicator is set. If set, return the exception *type* + (the first argument to the last call to one of the :c:func:`PyErr_Set\*` + functions or to :c:func:`PyErr_Restore`). If not set, return *NULL*. You do not + own a reference to the return value, so you do not need to :c:func:`Py_DECREF` + it. + + .. note:: + + Do not compare the return value to a specific exception; use + :c:func:`PyErr_ExceptionMatches` instead, shown below. (The comparison could + easily fail since the exception may be an instance instead of a class, in the + case of a class exception, or it may be a subclass of the expected exception.) + + +.. c:function:: int PyErr_ExceptionMatches(PyObject *exc) + + Equivalent to ``PyErr_GivenExceptionMatches(PyErr_Occurred(), exc)``. This + should only be called when an exception is actually set; a memory access + violation will occur if no exception has been raised. + + +.. c:function:: int PyErr_GivenExceptionMatches(PyObject *given, PyObject *exc) + + Return true if the *given* exception matches the exception in *exc*. If + *exc* is a class object, this also returns true when *given* is an instance + of a subclass. If *exc* is a tuple, all exceptions in the tuple (and + recursively in subtuples) are searched for a match. + + +.. 
c:function:: void PyErr_NormalizeException(PyObject**exc, PyObject**val, PyObject**tb) + + Under certain circumstances, the values returned by :c:func:`PyErr_Fetch` below + can be "unnormalized", meaning that ``*exc`` is a class object but ``*val`` is + not an instance of the same class. This function can be used to instantiate + the class in that case. If the values are already normalized, nothing happens. + The delayed normalization is implemented to improve performance. + + +.. c:function:: void PyErr_Clear() + + Clear the error indicator. If the error indicator is not set, there is no + effect. + + +.. c:function:: void PyErr_Fetch(PyObject **ptype, PyObject **pvalue, PyObject **ptraceback) + + Retrieve the error indicator into three variables whose addresses are passed. + If the error indicator is not set, set all three variables to *NULL*. If it is + set, it will be cleared and you own a reference to each object retrieved. The + value and traceback object may be *NULL* even when the type object is not. + + .. note:: + + This function is normally only used by code that needs to handle exceptions or + by code that needs to save and restore the error indicator temporarily. + + +.. c:function:: void PyErr_Restore(PyObject *type, PyObject *value, PyObject *traceback) + + Set the error indicator from the three objects. If the error indicator is + already set, it is cleared first. If the objects are *NULL*, the error + indicator is cleared. Do not pass a *NULL* type and non-*NULL* value or + traceback. The exception type should be a class. Do not pass an invalid + exception type or value. (Violating these rules will cause subtle problems + later.) This call takes away a reference to each object: you must own a + reference to each object before the call and after the call you no longer own + these references. (If you don't understand this, don't use this function. I + warned you.) + + .. 
note:: + + This function is normally only used by code that needs to save and restore the + error indicator temporarily; use :c:func:`PyErr_Fetch` to save the current + exception state. + + +.. c:function:: void PyErr_SetString(PyObject *type, const char *message) + + This is the most common way to set the error indicator. The first argument + specifies the exception type; it is normally one of the standard exceptions, + e.g. :c:data:`PyExc_RuntimeError`. You need not increment its reference count. + The second argument is an error message; it is decoded from ``'utf-8'``. + + +.. c:function:: void PyErr_SetObject(PyObject *type, PyObject *value) + + This function is similar to :c:func:`PyErr_SetString` but lets you specify an + arbitrary Python object for the "value" of the exception. + + +.. c:function:: PyObject* PyErr_Format(PyObject *exception, const char *format, ...) + + This function sets the error indicator and returns *NULL*. *exception* + should be a Python exception class. The *format* and subsequent + parameters help format the error message; they have the same meaning and + values as in :c:func:`PyUnicode_FromFormat`. *format* is an ASCII-encoded + string. + + +.. c:function:: void PyErr_SetNone(PyObject *type) + + This is a shorthand for ``PyErr_SetObject(type, Py_None)``. + + +.. c:function:: int PyErr_BadArgument() + + This is a shorthand for ``PyErr_SetString(PyExc_TypeError, message)``, where + *message* indicates that a built-in operation was invoked with an illegal + argument. It is mostly for internal use. + + +.. c:function:: PyObject* PyErr_NoMemory() + + This is a shorthand for ``PyErr_SetNone(PyExc_MemoryError)``; it returns *NULL* + so an object allocation function can write ``return PyErr_NoMemory();`` when it + runs out of memory. + + +.. c:function:: PyObject* PyErr_SetFromErrno(PyObject *type) + + .. 
index:: single: strerror() + + This is a convenience function to raise an exception when a C library function + has returned an error and set the C variable :c:data:`errno`. It constructs a + tuple object whose first item is the integer :c:data:`errno` value and whose + second item is the corresponding error message (gotten from :c:func:`strerror`), + and then calls ``PyErr_SetObject(type, object)``. On Unix, when the + :c:data:`errno` value is :const:`EINTR`, indicating an interrupted system call, + this calls :c:func:`PyErr_CheckSignals`, and if that set the error indicator, + leaves it set to that. The function always returns *NULL*, so a wrapper + function around a system call can write ``return PyErr_SetFromErrno(type);`` + when the system call returns an error. + + +.. c:function:: PyObject* PyErr_SetFromErrnoWithFilename(PyObject *type, const char *filename) + + Similar to :c:func:`PyErr_SetFromErrno`, with the additional behavior that if + *filename* is not *NULL*, it is passed to the constructor of *type* as a third + parameter. In the case of exceptions such as :exc:`IOError` and :exc:`OSError`, + this is used to define the :attr:`filename` attribute of the exception instance. + *filename* is decoded from the filesystem encoding + (:func:`sys.getfilesystemencoding`). + + +.. c:function:: PyObject* PyErr_SetFromWindowsErr(int ierr) + + This is a convenience function to raise :exc:`WindowsError`. If called with + *ierr* of :c:data:`0`, the error code returned by a call to :c:func:`GetLastError` + is used instead. It calls the Win32 function :c:func:`FormatMessage` to retrieve + the Windows description of error code given by *ierr* or :c:func:`GetLastError`, + then it constructs a tuple object whose first item is the *ierr* value and whose + second item is the corresponding error message (gotten from + :c:func:`FormatMessage`), and then calls ``PyErr_SetObject(PyExc_WindowsError, + object)``. This function always returns *NULL*. Availability: Windows. 
+ + +.. c:function:: PyObject* PyErr_SetExcFromWindowsErr(PyObject *type, int ierr) + + Similar to :c:func:`PyErr_SetFromWindowsErr`, with an additional parameter + specifying the exception type to be raised. Availability: Windows. + + +.. c:function:: PyObject* PyErr_SetFromWindowsErrWithFilename(int ierr, const char *filename) + + Similar to :c:func:`PyErr_SetFromWindowsErr`, with the additional behavior that + if *filename* is not *NULL*, it is passed to the constructor of + :exc:`WindowsError` as a third parameter. *filename* is decoded from the + filesystem encoding (:func:`sys.getfilesystemencoding`). Availability: + Windows. + + +.. c:function:: PyObject* PyErr_SetExcFromWindowsErrWithFilename(PyObject *type, int ierr, char *filename) + + Similar to :c:func:`PyErr_SetFromWindowsErrWithFilename`, with an additional + parameter specifying the exception type to be raised. Availability: Windows. + + +.. c:function:: void PyErr_SyntaxLocationEx(char *filename, int lineno, int col_offset) + + Set file, line, and offset information for the current exception. If the + current exception is not a :exc:`SyntaxError`, then it sets additional + attributes, which make the exception printing subsystem think the exception + is a :exc:`SyntaxError`. *filename* is decoded from the filesystem encoding + (:func:`sys.getfilesystemencoding`). + + .. versionadded:: 3.2 + + +.. c:function:: void PyErr_SyntaxLocation(char *filename, int lineno) + + Like :c:func:`PyErr_SyntaxLocationEx`, but the *col_offset* parameter is + omitted. + + +.. c:function:: void PyErr_BadInternalCall() + + This is a shorthand for ``PyErr_SetString(PyExc_SystemError, message)``, + where *message* indicates that an internal operation (e.g. a Python/C API + function) was invoked with an illegal argument. It is mostly for internal + use. + + +.. c:function:: int PyErr_WarnEx(PyObject *category, char *message, int stack_level) + + Issue a warning message.
The *category* argument is a warning category (see + below) or *NULL*; the *message* argument is a UTF-8 encoded string. *stack_level* is a + positive number giving a number of stack frames; the warning will be issued from + the currently executing line of code in that stack frame. A *stack_level* of 1 + is the function calling :c:func:`PyErr_WarnEx`, 2 is the function above that, + and so forth. + + This function normally prints a warning message to *sys.stderr*; however, it is + also possible that the user has specified that warnings are to be turned into + errors, and in that case this will raise an exception. It is also possible that + the function raises an exception because of a problem with the warning machinery + (the implementation imports the :mod:`warnings` module to do the heavy lifting). + The return value is ``0`` if no exception is raised, or ``-1`` if an exception + is raised. (It is not possible to determine whether a warning message is + actually printed, nor what the reason is for the exception; this is + intentional.) If an exception is raised, the caller should do its normal + exception handling (for example, :c:func:`Py_DECREF` owned references and return + an error value). + + Warning categories must be subclasses of :c:data:`Warning`; the default warning + category is :c:data:`RuntimeWarning`. The standard Python warning categories are + available as global variables whose names are ``PyExc_`` followed by the Python + exception name. These have the type :c:type:`PyObject\*`; they are all class + objects. Their names are :c:data:`PyExc_Warning`, :c:data:`PyExc_UserWarning`, + :c:data:`PyExc_UnicodeWarning`, :c:data:`PyExc_DeprecationWarning`, + :c:data:`PyExc_SyntaxWarning`, :c:data:`PyExc_RuntimeWarning`, and + :c:data:`PyExc_FutureWarning`. :c:data:`PyExc_Warning` is a subclass of + :c:data:`PyExc_Exception`; the other warning categories are subclasses of + :c:data:`PyExc_Warning`.
+ + For information about warning control, see the documentation for the + :mod:`warnings` module and the :option:`-W` option in the command line + documentation. There is no C API for warning control. + + +.. c:function:: int PyErr_WarnExplicit(PyObject *category, const char *message, const char *filename, int lineno, const char *module, PyObject *registry) + + Issue a warning message with explicit control over all warning attributes. This + is a straightforward wrapper around the Python function + :func:`warnings.warn_explicit`, see there for more information. The *module* + and *registry* arguments may be set to *NULL* to get the default effect + described there. *message* and *module* are UTF-8 encoded strings, + *filename* is decoded from the filesystem encoding + (:func:`sys.getfilesystemencoding`). + + +.. c:function:: int PyErr_WarnFormat(PyObject *category, Py_ssize_t stack_level, const char *format, ...) + + Function similar to :c:func:`PyErr_WarnEx`, but use + :c:func:`PyUnicode_FromFormat` to format the warning message. *format* is + an ASCII-encoded string. + + .. versionadded:: 3.2 + +.. c:function:: int PyErr_CheckSignals() + + .. index:: + module: signal + single: SIGINT + single: KeyboardInterrupt (built-in exception) + + This function interacts with Python's signal handling. It checks whether a + signal has been sent to the process and if so, invokes the corresponding + signal handler. If the :mod:`signal` module is supported, this can invoke a + signal handler written in Python. In all cases, the default effect for + :const:`SIGINT` is to raise the :exc:`KeyboardInterrupt` exception. If an + exception is raised the error indicator is set and the function returns ``-1``; + otherwise the function returns ``0``. The error indicator may or may not be + cleared if it was previously set. + + +.. c:function:: void PyErr_SetInterrupt() + + ..
index:: + single: SIGINT + single: KeyboardInterrupt (built-in exception) + + This function simulates the effect of a :const:`SIGINT` signal arriving --- the + next time :c:func:`PyErr_CheckSignals` is called, :exc:`KeyboardInterrupt` will + be raised. It may be called without holding the interpreter lock. + + .. % XXX This was described as obsolete, but is used in + .. % _thread.interrupt_main() (used from IDLE), so it's still needed. + + +.. c:function:: int PySignal_SetWakeupFd(int fd) + + This utility function specifies a file descriptor to which a ``'\0'`` byte will + be written whenever a signal is received. It returns the previous such file + descriptor. The value ``-1`` disables the feature; this is the initial state. + This is equivalent to :func:`signal.set_wakeup_fd` in Python, but without any + error checking. *fd* should be a valid file descriptor. The function should + only be called from the main thread. + + +.. c:function:: PyObject* PyErr_NewException(char *name, PyObject *base, PyObject *dict) + + This utility function creates and returns a new exception class. The *name* + argument must be the name of the new exception, a C string of the form + ``module.classname``. The *base* and *dict* arguments are normally *NULL*. + This creates a class object derived from :exc:`Exception` (accessible in C as + :c:data:`PyExc_Exception`). + + The :attr:`__module__` attribute of the new class is set to the first part (up + to the last dot) of the *name* argument, and the class name is set to the last + part (after the last dot). The *base* argument can be used to specify alternate + base classes; it can either be only one class or a tuple of classes. The *dict* + argument can be used to specify a dictionary of class variables and methods. + + +.. 
c:function:: PyObject* PyErr_NewExceptionWithDoc(char *name, char *doc, PyObject *base, PyObject *dict) + + Same as :c:func:`PyErr_NewException`, except that the new exception class can + easily be given a docstring: If *doc* is non-*NULL*, it will be used as the + docstring for the exception class. + + .. versionadded:: 3.2 + + +.. c:function:: void PyErr_WriteUnraisable(PyObject *obj) + + This utility function prints a warning message to ``sys.stderr`` when an + exception has been set but it is impossible for the interpreter to actually + raise the exception. It is used, for example, when an exception occurs in an + :meth:`__del__` method. + + The function is called with a single argument *obj* that identifies the context + in which the unraisable exception occurred. The repr of *obj* will be printed in + the warning message. + + +Exception Objects +================= + +.. c:function:: PyObject* PyException_GetTraceback(PyObject *ex) + + Return the traceback associated with the exception as a new reference, as + accessible from Python through :attr:`__traceback__`. If there is no + traceback associated, this returns *NULL*. + + +.. c:function:: int PyException_SetTraceback(PyObject *ex, PyObject *tb) + + Set the traceback associated with the exception to *tb*. Use ``Py_None`` to + clear it. + + +.. c:function:: PyObject* PyException_GetContext(PyObject *ex) + + Return the context (another exception instance during whose handling *ex* was + raised) associated with the exception as a new reference, as accessible from + Python through :attr:`__context__`. If there is no context associated, this + returns *NULL*. + + +.. c:function:: void PyException_SetContext(PyObject *ex, PyObject *ctx) + + Set the context associated with the exception to *ctx*. Use *NULL* to clear + it. There is no type check to make sure that *ctx* is an exception instance. + This steals a reference to *ctx*. + + +.. 
c:function:: PyObject* PyException_GetCause(PyObject *ex) + + Return the cause (another exception instance set by ``raise ... from ...``) + associated with the exception as a new reference, as accessible from Python + through :attr:`__cause__`. If there is no cause associated, this returns + *NULL*. + + +.. c:function:: void PyException_SetCause(PyObject *ex, PyObject *ctx) + + Set the cause associated with the exception to *ctx*. Use *NULL* to clear + it. There is no type check to make sure that *ctx* is an exception instance. + This steals a reference to *ctx*. + + +.. _unicodeexceptions: + +Unicode Exception Objects +========================= + +The following functions are used to create and modify Unicode exceptions from C. + +.. c:function:: PyObject* PyUnicodeDecodeError_Create(const char *encoding, const char *object, Py_ssize_t length, Py_ssize_t start, Py_ssize_t end, const char *reason) + + Create a :class:`UnicodeDecodeError` object with the attributes *encoding*, + *object*, *length*, *start*, *end* and *reason*. *encoding* and *reason* are + UTF-8 encoded strings. + +.. c:function:: PyObject* PyUnicodeEncodeError_Create(const char *encoding, const Py_UNICODE *object, Py_ssize_t length, Py_ssize_t start, Py_ssize_t end, const char *reason) + + Create a :class:`UnicodeEncodeError` object with the attributes *encoding*, + *object*, *length*, *start*, *end* and *reason*. *encoding* and *reason* are + UTF-8 encoded strings. + +.. c:function:: PyObject* PyUnicodeTranslateError_Create(const Py_UNICODE *object, Py_ssize_t length, Py_ssize_t start, Py_ssize_t end, const char *reason) + + Create a :class:`UnicodeTranslateError` object with the attributes *object*, + *length*, *start*, *end* and *reason*. *reason* is a UTF-8 encoded string. + +.. c:function:: PyObject* PyUnicodeDecodeError_GetEncoding(PyObject *exc) + PyObject* PyUnicodeEncodeError_GetEncoding(PyObject *exc) + + Return the *encoding* attribute of the given exception object. + +..
c:function:: PyObject* PyUnicodeDecodeError_GetObject(PyObject *exc) + PyObject* PyUnicodeEncodeError_GetObject(PyObject *exc) + PyObject* PyUnicodeTranslateError_GetObject(PyObject *exc) + + Return the *object* attribute of the given exception object. + +.. c:function:: int PyUnicodeDecodeError_GetStart(PyObject *exc, Py_ssize_t *start) + int PyUnicodeEncodeError_GetStart(PyObject *exc, Py_ssize_t *start) + int PyUnicodeTranslateError_GetStart(PyObject *exc, Py_ssize_t *start) + + Get the *start* attribute of the given exception object and place it into + *\*start*. *start* must not be *NULL*. Return ``0`` on success, ``-1`` on + failure. + +.. c:function:: int PyUnicodeDecodeError_SetStart(PyObject *exc, Py_ssize_t start) + int PyUnicodeEncodeError_SetStart(PyObject *exc, Py_ssize_t start) + int PyUnicodeTranslateError_SetStart(PyObject *exc, Py_ssize_t start) + + Set the *start* attribute of the given exception object to *start*. Return + ``0`` on success, ``-1`` on failure. + +.. c:function:: int PyUnicodeDecodeError_GetEnd(PyObject *exc, Py_ssize_t *end) + int PyUnicodeEncodeError_GetEnd(PyObject *exc, Py_ssize_t *end) + int PyUnicodeTranslateError_GetEnd(PyObject *exc, Py_ssize_t *end) + + Get the *end* attribute of the given exception object and place it into + *\*end*. *end* must not be *NULL*. Return ``0`` on success, ``-1`` on + failure. + +.. c:function:: int PyUnicodeDecodeError_SetEnd(PyObject *exc, Py_ssize_t end) + int PyUnicodeEncodeError_SetEnd(PyObject *exc, Py_ssize_t end) + int PyUnicodeTranslateError_SetEnd(PyObject *exc, Py_ssize_t end) + + Set the *end* attribute of the given exception object to *end*. Return ``0`` + on success, ``-1`` on failure. + +.. c:function:: PyObject* PyUnicodeDecodeError_GetReason(PyObject *exc) + PyObject* PyUnicodeEncodeError_GetReason(PyObject *exc) + PyObject* PyUnicodeTranslateError_GetReason(PyObject *exc) + + Return the *reason* attribute of the given exception object. + +.. 
c:function:: int PyUnicodeDecodeError_SetReason(PyObject *exc, const char *reason) + int PyUnicodeEncodeError_SetReason(PyObject *exc, const char *reason) + int PyUnicodeTranslateError_SetReason(PyObject *exc, const char *reason) + + Set the *reason* attribute of the given exception object to *reason*. Return + ``0`` on success, ``-1`` on failure. + + +Recursion Control +================= + +These two functions provide a way to perform safe recursive calls at the C +level, both in the core and in extension modules. They are needed if the +recursive code does not necessarily invoke Python code (which tracks its +recursion depth automatically). + +.. c:function:: int Py_EnterRecursiveCall(char *where) + + Marks a point where a recursive C-level call is about to be performed. + + If :const:`USE_STACKCHECK` is defined, this function checks if the OS + stack overflowed using :c:func:`PyOS_CheckStack`. If this is the case, it + sets a :exc:`MemoryError` and returns a nonzero value. + + The function then checks if the recursion limit is reached. If this is the + case, a :exc:`RuntimeError` is set and a nonzero value is returned. + Otherwise, zero is returned. + + *where* should be a string such as ``" in instance check"`` to be + concatenated to the :exc:`RuntimeError` message caused by the recursion depth + limit. + +.. c:function:: void Py_LeaveRecursiveCall() + + Ends a :c:func:`Py_EnterRecursiveCall`. Must be called once for each + *successful* invocation of :c:func:`Py_EnterRecursiveCall`. + +Properly implementing :attr:`tp_repr` for container types requires +special recursion handling. In addition to protecting the stack, +:attr:`tp_repr` also needs to track objects to prevent cycles. The +following two functions facilitate this functionality. Effectively, +these are the C equivalent to :func:`reprlib.recursive_repr`. + +.. c:function:: int Py_ReprEnter(PyObject *object) + + Called at the beginning of the :attr:`tp_repr` implementation to + detect cycles.
+ + If the object has already been processed, the function returns a + positive integer. In that case the :attr:`tp_repr` implementation + should return a string object indicating a cycle. As examples, + :class:`dict` objects return ``{...}`` and :class:`list` objects + return ``[...]``. + + The function will return a negative integer if the recursion limit + is reached. In that case the :attr:`tp_repr` implementation should + typically return ``NULL``. + + Otherwise, the function returns zero and the :attr:`tp_repr` + implementation can continue normally. + +.. c:function:: void Py_ReprLeave(PyObject *object) + + Ends a :c:func:`Py_ReprEnter`. Must be called once for each + invocation of :c:func:`Py_ReprEnter` that returns zero. + + +.. _standardexceptions: + +Standard Exceptions +=================== + +All standard Python exceptions are available as global variables whose names are +``PyExc_`` followed by the Python exception name. These have the type +:c:type:`PyObject\*`; they are all class objects. 
For completeness, here are all +the variables: + ++-----------------------------------------+---------------------------------+----------+ +| C Name | Python Name | Notes | ++=========================================+=================================+==========+ +| :c:data:`PyExc_BaseException` | :exc:`BaseException` | \(1) | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_Exception` | :exc:`Exception` | \(1) | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_ArithmeticError` | :exc:`ArithmeticError` | \(1) | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_LookupError` | :exc:`LookupError` | \(1) | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_AssertionError` | :exc:`AssertionError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_AttributeError` | :exc:`AttributeError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_BlockingIOError` | :exc:`BlockingIOError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_BrokenPipeError` | :exc:`BrokenPipeError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_ChildProcessError` | :exc:`ChildProcessError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_ConnectionError` | :exc:`ConnectionError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_ConnectionAbortedError` | :exc:`ConnectionAbortedError` | | ++-----------------------------------------+---------------------------------+----------+ +| 
:c:data:`PyExc_ConnectionRefusedError` | :exc:`ConnectionRefusedError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_ConnectionResetError` | :exc:`ConnectionResetError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_FileExistsError` | :exc:`FileExistsError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_FileNotFoundError` | :exc:`FileNotFoundError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_EOFError` | :exc:`EOFError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_FloatingPointError` | :exc:`FloatingPointError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_ImportError` | :exc:`ImportError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_IndexError` | :exc:`IndexError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_InterruptedError` | :exc:`InterruptedError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_IsADirectoryError` | :exc:`IsADirectoryError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_KeyError` | :exc:`KeyError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_KeyboardInterrupt` | :exc:`KeyboardInterrupt` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_MemoryError` | :exc:`MemoryError` | | ++-----------------------------------------+---------------------------------+----------+ +| 
:c:data:`PyExc_NameError` | :exc:`NameError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_NotADirectoryError` | :exc:`NotADirectoryError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_NotImplementedError` | :exc:`NotImplementedError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_OSError` | :exc:`OSError` | \(1) | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_OverflowError` | :exc:`OverflowError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_PermissionError` | :exc:`PermissionError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_ProcessLookupError` | :exc:`ProcessLookupError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_ReferenceError` | :exc:`ReferenceError` | \(2) | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_RuntimeError` | :exc:`RuntimeError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_SyntaxError` | :exc:`SyntaxError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_SystemError` | :exc:`SystemError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_TimeoutError` | :exc:`TimeoutError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_SystemExit` | :exc:`SystemExit` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_TypeError` | 
:exc:`TypeError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_ValueError` | :exc:`ValueError` | | ++-----------------------------------------+---------------------------------+----------+ +| :c:data:`PyExc_ZeroDivisionError` | :exc:`ZeroDivisionError` | | ++-----------------------------------------+---------------------------------+----------+ + +.. versionadded:: 3.3 + :c:data:`PyExc_BlockingIOError`, :c:data:`PyExc_BrokenPipeError`, + :c:data:`PyExc_ChildProcessError`, :c:data:`PyExc_ConnectionError`, + :c:data:`PyExc_ConnectionAbortedError`, :c:data:`PyExc_ConnectionRefusedError`, + :c:data:`PyExc_ConnectionResetError`, :c:data:`PyExc_FileExistsError`, + :c:data:`PyExc_FileNotFoundError`, :c:data:`PyExc_InterruptedError`, + :c:data:`PyExc_IsADirectoryError`, :c:data:`PyExc_NotADirectoryError`, + :c:data:`PyExc_PermissionError`, :c:data:`PyExc_ProcessLookupError` + and :c:data:`PyExc_TimeoutError` were introduced following :pep:`3151`. + + +These are compatibility aliases to :c:data:`PyExc_OSError`: + ++-------------------------------------+----------+ +| C Name | Notes | ++=====================================+==========+ +| :c:data:`PyExc_EnvironmentError` | | ++-------------------------------------+----------+ +| :c:data:`PyExc_IOError` | | ++-------------------------------------+----------+ +| :c:data:`PyExc_WindowsError` | \(3) | ++-------------------------------------+----------+ + +.. versionchanged:: 3.3 + These aliases used to be separate exception types. + + +.. 
index:: + single: PyExc_BaseException + single: PyExc_Exception + single: PyExc_ArithmeticError + single: PyExc_LookupError + single: PyExc_AssertionError + single: PyExc_AttributeError + single: PyExc_BlockingIOError + single: PyExc_BrokenPipeError + single: PyExc_ConnectionError + single: PyExc_ConnectionAbortedError + single: PyExc_ConnectionRefusedError + single: PyExc_ConnectionResetError + single: PyExc_EOFError + single: PyExc_FileExistsError + single: PyExc_FileNotFoundError + single: PyExc_FloatingPointError + single: PyExc_ImportError + single: PyExc_IndexError + single: PyExc_InterruptedError + single: PyExc_IsADirectoryError + single: PyExc_KeyError + single: PyExc_KeyboardInterrupt + single: PyExc_MemoryError + single: PyExc_NameError + single: PyExc_NotADirectoryError + single: PyExc_NotImplementedError + single: PyExc_OSError + single: PyExc_OverflowError + single: PyExc_PermissionError + single: PyExc_ProcessLookupError + single: PyExc_ReferenceError + single: PyExc_RuntimeError + single: PyExc_SyntaxError + single: PyExc_SystemError + single: PyExc_SystemExit + single: PyExc_TimeoutError + single: PyExc_TypeError + single: PyExc_ValueError + single: PyExc_ZeroDivisionError + single: PyExc_EnvironmentError + single: PyExc_IOError + single: PyExc_WindowsError + +Notes: + +(1) + This is a base class for other standard exceptions. + +(2) + This is the same as :exc:`weakref.ReferenceError`. + +(3) + Only defined on Windows; protect code that uses this by testing that the + preprocessor macro ``MS_WINDOWS`` is defined. diff --git a/lib/cpython-doc/c-api/file.rst b/lib/cpython-doc/c-api/file.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/file.rst @@ -0,0 +1,75 @@ +.. highlightlang:: c + +.. _fileobjects: + +File Objects +------------ + +.. 
index:: object: file + +These APIs are a minimal emulation of the Python 2 C API for built-in file +objects, which used to rely on the buffered I/O (:c:type:`FILE\*`) support +from the C standard library. In Python 3, files and streams use the new +:mod:`io` module, which defines several layers over the low-level unbuffered +I/O of the operating system. The functions described below are +convenience C wrappers over these new APIs, and meant mostly for internal +error reporting in the interpreter; third-party code is advised to access +the :mod:`io` APIs instead. + + +.. c:function:: PyObject* PyFile_FromFd(int fd, char *name, char *mode, int buffering, char *encoding, char *errors, char *newline, int closefd) + + Create a Python file object from the file descriptor of an already + opened file *fd*. The arguments *name*, *encoding*, *errors* and *newline* + can be *NULL* to use the defaults; *buffering* can be *-1* to use the + default. *name* is ignored and kept for backward compatibility. Return + *NULL* on failure. For a more comprehensive description of the arguments, + please refer to the :func:`io.open` function documentation. + + .. warning:: + + Since Python streams have their own buffering layer, mixing them with + OS-level file descriptors can produce various issues (such as unexpected + ordering of data). + + .. versionchanged:: 3.2 + Ignore *name* attribute. + + +.. c:function:: int PyObject_AsFileDescriptor(PyObject *p) + + Return the file descriptor associated with *p* as an :c:type:`int`. If the + object is an integer, its value is returned. If not, the + object's :meth:`fileno` method is called if it exists; the method must return + an integer, which is returned as the file descriptor value. Sets an + exception and returns ``-1`` on failure. + + +.. c:function:: PyObject* PyFile_GetLine(PyObject *p, int n) + + .. index:: single: EOFError (built-in exception) + + Equivalent to ``p.readline([n])``, this function reads one line from the + object *p*.
*p* may be a file object or any object with a :meth:`readline` + method. If *n* is ``0``, exactly one line is read, regardless of the length of + the line. If *n* is greater than ``0``, no more than *n* bytes will be read + from the file; a partial line can be returned. In both cases, an empty string + is returned if the end of the file is reached immediately. If *n* is less than + ``0``, however, one line is read regardless of length, but :exc:`EOFError` is + raised if the end of the file is reached immediately. + + +.. c:function:: int PyFile_WriteObject(PyObject *obj, PyObject *p, int flags) + + .. index:: single: Py_PRINT_RAW + + Write object *obj* to file object *p*. The only supported flag for *flags* is + :const:`Py_PRINT_RAW`; if given, the :func:`str` of the object is written + instead of the :func:`repr`. Return ``0`` on success or ``-1`` on failure; the + appropriate exception will be set. + + +.. c:function:: int PyFile_WriteString(const char *s, PyObject *p) + + Write string *s* to file object *p*. Return ``0`` on success or ``-1`` on + failure; the appropriate exception will be set. diff --git a/lib/cpython-doc/c-api/float.rst b/lib/cpython-doc/c-api/float.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/float.rst @@ -0,0 +1,79 @@ +.. highlightlang:: c + +.. _floatobjects: + +Floating Point Objects +---------------------- + +.. index:: object: floating point + + +.. c:type:: PyFloatObject + + This subtype of :c:type:`PyObject` represents a Python floating point object. + + +.. c:var:: PyTypeObject PyFloat_Type + + This instance of :c:type:`PyTypeObject` represents the Python floating point + type. This is the same object as :class:`float` in the Python layer. + + +.. c:function:: int PyFloat_Check(PyObject *p) + + Return true if its argument is a :c:type:`PyFloatObject` or a subtype of + :c:type:`PyFloatObject`. + + +.. 
c:function:: int PyFloat_CheckExact(PyObject *p) + + Return true if its argument is a :c:type:`PyFloatObject`, but not a subtype of + :c:type:`PyFloatObject`. + + +.. c:function:: PyObject* PyFloat_FromString(PyObject *str) + + Create a :c:type:`PyFloatObject` object based on the string value in *str*, or + *NULL* on failure. + + +.. c:function:: PyObject* PyFloat_FromDouble(double v) + + Create a :c:type:`PyFloatObject` object from *v*, or *NULL* on failure. + + +.. c:function:: double PyFloat_AsDouble(PyObject *pyfloat) + + Return a C :c:type:`double` representation of the contents of *pyfloat*. If + *pyfloat* is not a Python floating point object but has a :meth:`__float__` + method, this method will first be called to convert *pyfloat* into a float. + This method returns ``-1.0`` upon failure, so one should call + :c:func:`PyErr_Occurred` to check for errors. + + +.. c:function:: double PyFloat_AS_DOUBLE(PyObject *pyfloat) + + Return a C :c:type:`double` representation of the contents of *pyfloat*, but + without error checking. + + +.. c:function:: PyObject* PyFloat_GetInfo(void) + + Return a structseq instance which contains information about the + precision, minimum and maximum values of a float. It's a thin wrapper + around the header file :file:`float.h`. + + +.. c:function:: double PyFloat_GetMax() + + Return the maximum representable finite float *DBL_MAX* as C :c:type:`double`. + + +.. c:function:: double PyFloat_GetMin() + + Return the minimum normalized positive float *DBL_MIN* as C :c:type:`double`. + +.. c:function:: int PyFloat_ClearFreeList() + + Clear the float free list. Return the number of items that could not + be freed. diff --git a/lib/cpython-doc/c-api/function.rst b/lib/cpython-doc/c-api/function.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/function.rst @@ -0,0 +1,107 @@ +.. highlightlang:: c + +.. _function-objects: + +Function Objects +---------------- + +.. 
index:: object: function + +There are a few functions specific to Python functions. + + +.. c:type:: PyFunctionObject + + The C structure used for functions. + + +.. c:var:: PyTypeObject PyFunction_Type + + .. index:: single: MethodType (in module types) + + This is an instance of :c:type:`PyTypeObject` and represents the Python function + type. It is exposed to Python programmers as ``types.FunctionType``. + + +.. c:function:: int PyFunction_Check(PyObject *o) + + Return true if *o* is a function object (has type :c:data:`PyFunction_Type`). + The parameter must not be *NULL*. + + +.. c:function:: PyObject* PyFunction_New(PyObject *code, PyObject *globals) + + Return a new function object associated with the code object *code*. *globals* + must be a dictionary with the global variables accessible to the function. + + The function's docstring, name and *__module__* are retrieved from the code + object, the argument defaults and closure are set to *NULL*. + + +.. c:function:: PyObject* PyFunction_NewWithQualName(PyObject *code, PyObject *globals, PyObject *qualname) + + As :c:func:`PyFunction_New`, but also allows setting the function object's + ``__qualname__`` attribute. *qualname* should be a unicode object or NULL; + if NULL, the ``__qualname__`` attribute is set to the same value as its + ``__name__`` attribute. + + .. versionadded:: 3.3 + + +.. c:function:: PyObject* PyFunction_GetCode(PyObject *op) + + Return the code object associated with the function object *op*. + + +.. c:function:: PyObject* PyFunction_GetGlobals(PyObject *op) + + Return the globals dictionary associated with the function object *op*. + + +.. c:function:: PyObject* PyFunction_GetModule(PyObject *op) + + Return the *__module__* attribute of the function object *op*. This is normally + a string containing the module name, but can be set to any other object by + Python code. + + +..
c:function:: PyObject* PyFunction_GetDefaults(PyObject *op) + + Return the argument default values of the function object *op*. This can be a + tuple of arguments or *NULL*. + + +.. c:function:: int PyFunction_SetDefaults(PyObject *op, PyObject *defaults) + + Set the argument default values for the function object *op*. *defaults* must be + *Py_None* or a tuple. + + Raises :exc:`SystemError` and returns ``-1`` on failure. + + +.. c:function:: PyObject* PyFunction_GetClosure(PyObject *op) + + Return the closure associated with the function object *op*. This can be *NULL* + or a tuple of cell objects. + + +.. c:function:: int PyFunction_SetClosure(PyObject *op, PyObject *closure) + + Set the closure associated with the function object *op*. *closure* must be + *Py_None* or a tuple of cell objects. + + Raises :exc:`SystemError` and returns ``-1`` on failure. + + +.. c:function:: PyObject *PyFunction_GetAnnotations(PyObject *op) + + Return the annotations of the function object *op*. This can be a + mutable dictionary or *NULL*. + + +.. c:function:: int PyFunction_SetAnnotations(PyObject *op, PyObject *annotations) + + Set the annotations for the function object *op*. *annotations* + must be a dictionary or *Py_None*. + + Raises :exc:`SystemError` and returns ``-1`` on failure. diff --git a/lib/cpython-doc/c-api/gcsupport.rst b/lib/cpython-doc/c-api/gcsupport.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/gcsupport.rst @@ -0,0 +1,152 @@ +.. highlightlang:: c + +.. _supporting-cycle-detection: + +Supporting Cyclic Garbage Collection +==================================== + +Python's support for detecting and collecting garbage which involves circular +references requires support from object types which are "containers" for other +objects which may also be containers. 
Types which do not store references to +other objects, or which only store references to atomic types (such as numbers +or strings), do not need to provide any explicit support for garbage +collection. + +To create a container type, the :attr:`tp_flags` field of the type object must +include the :const:`Py_TPFLAGS_HAVE_GC` and provide an implementation of the +:attr:`tp_traverse` handler. If instances of the type are mutable, a +:attr:`tp_clear` implementation must also be provided. + + +.. data:: Py_TPFLAGS_HAVE_GC + :noindex: + + Objects with a type with this flag set must conform with the rules + documented here. For convenience these objects will be referred to as + container objects. + +Constructors for container types must conform to two rules: + +#. The memory for the object must be allocated using :c:func:`PyObject_GC_New` + or :c:func:`PyObject_GC_NewVar`. + +#. Once all the fields which may contain references to other containers are + initialized, it must call :c:func:`PyObject_GC_Track`. + + +.. c:function:: TYPE* PyObject_GC_New(TYPE, PyTypeObject *type) + + Analogous to :c:func:`PyObject_New` but for container objects with the + :const:`Py_TPFLAGS_HAVE_GC` flag set. + + +.. c:function:: TYPE* PyObject_GC_NewVar(TYPE, PyTypeObject *type, Py_ssize_t size) + + Analogous to :c:func:`PyObject_NewVar` but for container objects with the + :const:`Py_TPFLAGS_HAVE_GC` flag set. + + +.. c:function:: TYPE* PyObject_GC_Resize(TYPE, PyVarObject *op, Py_ssize_t newsize) + + Resize an object allocated by :c:func:`PyObject_NewVar`. Returns the + resized object or *NULL* on failure. + + +.. c:function:: void PyObject_GC_Track(PyObject *op) + + Adds the object *op* to the set of container objects tracked by the + collector. The collector can run at unexpected times so objects must be + valid while being tracked. This should be called once all the fields + followed by the :attr:`tp_traverse` handler become valid, usually near the + end of the constructor. + + +.. 
c:function:: void _PyObject_GC_TRACK(PyObject *op) + + A macro version of :c:func:`PyObject_GC_Track`. It should not be used for + extension modules. + +Similarly, the deallocator for the object must conform to a similar pair of +rules: + +#. Before fields which refer to other containers are invalidated, + :c:func:`PyObject_GC_UnTrack` must be called. + +#. The object's memory must be deallocated using :c:func:`PyObject_GC_Del`. + + +.. c:function:: void PyObject_GC_Del(void *op) + + Releases memory allocated to an object using :c:func:`PyObject_GC_New` or + :c:func:`PyObject_GC_NewVar`. + + +.. c:function:: void PyObject_GC_UnTrack(void *op) + + Remove the object *op* from the set of container objects tracked by the + collector. Note that :c:func:`PyObject_GC_Track` can be called again on + this object to add it back to the set of tracked objects. The deallocator + (:attr:`tp_dealloc` handler) should call this for the object before any of + the fields used by the :attr:`tp_traverse` handler become invalid. + + +.. c:function:: void _PyObject_GC_UNTRACK(PyObject *op) + + A macro version of :c:func:`PyObject_GC_UnTrack`. It should not be used for + extension modules. + +The :attr:`tp_traverse` handler accepts a function parameter of this type: + + +.. c:type:: int (*visitproc)(PyObject *object, void *arg) + + Type of the visitor function passed to the :attr:`tp_traverse` handler. + The function should be called with an object to traverse as *object* and + the third parameter to the :attr:`tp_traverse` handler as *arg*. The + Python core uses several visitor functions to implement cyclic garbage + detection; it's not expected that users will need to write their own + visitor functions. + +The :attr:`tp_traverse` handler must have the following type: + + +.. c:type:: int (*traverseproc)(PyObject *self, visitproc visit, void *arg) + + Traversal function for a container object. 
Implementations must call the + *visit* function for each object directly contained by *self*, with the + parameters to *visit* being the contained object and the *arg* value passed + to the handler. The *visit* function must not be called with a *NULL* + object argument. If *visit* returns a non-zero value that value should be + returned immediately. + +To simplify writing :attr:`tp_traverse` handlers, a :c:func:`Py_VISIT` macro is +provided. In order to use this macro, the :attr:`tp_traverse` implementation +must name its arguments exactly *visit* and *arg*: + + +.. c:function:: void Py_VISIT(PyObject *o) + + Call the *visit* callback, with arguments *o* and *arg*. If *visit* returns + a non-zero value, then return it. Using this macro, :attr:`tp_traverse` + handlers look like:: + + static int + my_traverse(Noddy *self, visitproc visit, void *arg) + { + Py_VISIT(self->foo); + Py_VISIT(self->bar); + return 0; + } + +The :attr:`tp_clear` handler must be of the :c:type:`inquiry` type, or *NULL* +if the object is immutable. + + +.. c:type:: int (*inquiry)(PyObject *self) + + Drop references that may have created reference cycles. Immutable objects + do not have to define this method since they can never directly create + reference cycles. Note that the object must still be valid after calling + this method (don't just call :c:func:`Py_DECREF` on a reference). The + collector will call this method if it detects that this object is involved + in a reference cycle. diff --git a/lib/cpython-doc/c-api/gen.rst b/lib/cpython-doc/c-api/gen.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/gen.rst @@ -0,0 +1,38 @@ +.. highlightlang:: c + +.. _gen-objects: + +Generator Objects +----------------- + +Generator objects are what Python uses to implement generator iterators. They +are normally created by iterating over a function that yields values, rather +than explicitly calling :c:func:`PyGen_New`. + + +.. 
c:type:: PyGenObject
+
+   The C structure used for generator objects.
+
+
+.. c:var:: PyTypeObject PyGen_Type
+
+   The type object corresponding to generator objects.
+
+
+.. c:function:: int PyGen_Check(ob)
+
+   Return true if *ob* is a generator object; *ob* must not be *NULL*.
+
+
+.. c:function:: int PyGen_CheckExact(ob)
+
+   Return true if *ob*'s type is *PyGen_Type*; *ob* must not be *NULL*.
+
+
+.. c:function:: PyObject* PyGen_New(PyFrameObject *frame)
+
+   Create and return a new generator object based on the *frame* object. A
+   reference to *frame* is stolen by this function. The parameter must not be
+   *NULL*.
diff --git a/lib/cpython-doc/c-api/import.rst b/lib/cpython-doc/c-api/import.rst
new file mode 100644
--- /dev/null
+++ b/lib/cpython-doc/c-api/import.rst
@@ -0,0 +1,300 @@
+.. highlightlang:: c
+
+.. _importing:
+
+Importing Modules
+=================
+
+
+.. c:function:: PyObject* PyImport_ImportModule(const char *name)
+
+   .. index::
+      single: package variable; __all__
+      single: __all__ (package variable)
+      single: modules (in module sys)
+
+   This is a simplified interface to :c:func:`PyImport_ImportModuleEx` below,
+   leaving the *globals* and *locals* arguments set to *NULL* and *level* set
+   to 0. When the *name*
+   argument contains a dot (when it specifies a submodule of a package), the
+   *fromlist* argument is set to the list ``['*']`` so that the return value is the
+   named module rather than the top-level package containing it as would otherwise
+   be the case. (Unfortunately, this has an additional side effect when *name* in
+   fact specifies a subpackage instead of a submodule: the submodules specified in
+   the package's ``__all__`` variable are loaded.) Return a new reference to the
+   imported module, or *NULL* with an exception set on failure. A failing
+   import of a module doesn't leave the module in :data:`sys.modules`.
+
+   This function always uses absolute imports.
+
+
+.. 
c:function:: PyObject* PyImport_ImportModuleNoBlock(const char *name)
+
+   This version of :c:func:`PyImport_ImportModule` does not block. It's intended
+   to be used in C functions that import other modules to execute a function.
+   A normal import may block while another thread holds the import lock, but
+   :c:func:`PyImport_ImportModuleNoBlock` never blocks: it first tries to fetch
+   the module from sys.modules, then falls back to :c:func:`PyImport_ImportModule`
+   unless the lock is held, in which case the function will raise an
+   :exc:`ImportError`.
+
+
+.. c:function:: PyObject* PyImport_ImportModuleEx(char *name, PyObject *globals, PyObject *locals, PyObject *fromlist)
+
+   .. index:: builtin: __import__
+
+   Import a module. This is best described by referring to the built-in Python
+   function :func:`__import__`, as the standard :func:`__import__` function calls
+   this function directly.
+
+   The return value is a new reference to the imported module or top-level
+   package, or *NULL* with an exception set on failure. Like for
+   :func:`__import__`, the return value when a submodule of a package was
+   requested is normally the top-level package, unless a non-empty *fromlist*
+   was given.
+
+   Failing imports remove incomplete module objects, like with
+   :c:func:`PyImport_ImportModule`.
+
+
+.. c:function:: PyObject* PyImport_ImportModuleLevelObject(PyObject *name, PyObject *globals, PyObject *locals, PyObject *fromlist, int level)
+
+   Import a module. This is best described by referring to the built-in Python
+   function :func:`__import__`, as the standard :func:`__import__` function calls
+   this function directly.
+
+   The return value is a new reference to the imported module or top-level package,
+   or *NULL* with an exception set on failure. Like for :func:`__import__`,
+   the return value when a submodule of a package was requested is normally the
+   top-level package, unless a non-empty *fromlist* was given.
+
+   .. versionadded:: 3.3
+
+
+.. 
c:function:: PyObject* PyImport_ImportModuleLevel(char *name, PyObject *globals, PyObject *locals, PyObject *fromlist, int level)
+
+   Similar to :c:func:`PyImport_ImportModuleLevelObject`, but the name is a
+   UTF-8 encoded string instead of a Unicode object.
+
+.. c:function:: PyObject* PyImport_Import(PyObject *name)
+
+   This is a higher-level interface that calls the current "import hook
+   function" (with an explicit *level* of 0, meaning absolute import). It
+   invokes the :func:`__import__` function from the ``__builtins__`` of the
+   current globals. This means that the import is done using whatever import
+   hooks are installed in the current environment.
+
+   This function always uses absolute imports.
+
+
+.. c:function:: PyObject* PyImport_ReloadModule(PyObject *m)
+
+   Reload a module. Return a new reference to the reloaded module, or *NULL* with
+   an exception set on failure (the module still exists in this case).
+
+
+.. c:function:: PyObject* PyImport_AddModuleObject(PyObject *name)
+
+   Return the module object corresponding to a module name. The *name* argument
+   may be of the form ``package.module``. First check the modules dictionary if
+   there's one there, and if not, create a new one and insert it in the modules
+   dictionary. Return *NULL* with an exception set on failure.
+
+   .. note::
+
+      This function does not load or import the module; if the module wasn't already
+      loaded, you will get an empty module object. Use :c:func:`PyImport_ImportModule`
+      or one of its variants to import a module. Package structures implied by a
+      dotted name for *name* are not created if not already present.
+
+   .. versionadded:: 3.3
+
+
+.. c:function:: PyObject* PyImport_AddModule(const char *name)
+
+   Similar to :c:func:`PyImport_AddModuleObject`, but the name is a UTF-8
+   encoded string instead of a Unicode object.
+
+
+.. c:function:: PyObject* PyImport_ExecCodeModule(char *name, PyObject *co)
+
+   .. 
index:: builtin: compile
+
+   Given a module name (possibly of the form ``package.module``) and a code object
+   read from a Python bytecode file or obtained from the built-in function
+   :func:`compile`, load the module. Return a new reference to the module object,
+   or *NULL* with an exception set if an error occurred. *name*
+   is removed from :attr:`sys.modules` in error cases, even if *name* was already
+   in :attr:`sys.modules` on entry to :c:func:`PyImport_ExecCodeModule`. Leaving
+   incompletely initialized modules in :attr:`sys.modules` is dangerous, as imports of
+   such modules have no way to know that the module object is in an unknown (and
+   probably damaged with respect to the module author's intents) state.
+
+   The module's :attr:`__file__` attribute will be set to the code object's
+   :c:member:`co_filename`.
+
+   This function will reload the module if it was already imported. See
+   :c:func:`PyImport_ReloadModule` for the intended way to reload a module.
+
+   If *name* points to a dotted name of the form ``package.module``, any package
+   structures not already created will still not be created.
+
+   See also :c:func:`PyImport_ExecCodeModuleEx` and
+   :c:func:`PyImport_ExecCodeModuleWithPathnames`.
+
+
+.. c:function:: PyObject* PyImport_ExecCodeModuleEx(char *name, PyObject *co, char *pathname)
+
+   Like :c:func:`PyImport_ExecCodeModule`, but the :attr:`__file__` attribute of
+   the module object is set to *pathname* if it is non-``NULL``.
+
+   See also :c:func:`PyImport_ExecCodeModuleWithPathnames`.
+
+
+.. c:function:: PyObject* PyImport_ExecCodeModuleObject(PyObject *name, PyObject *co, PyObject *pathname, PyObject *cpathname)
+
+   Like :c:func:`PyImport_ExecCodeModuleEx`, but the :attr:`__cached__`
+   attribute of the module object is set to *cpathname* if it is
+   non-``NULL``. Of the three functions, this is the preferred one to use.
+
+   .. versionadded:: 3.3
+
+
+.. 
c:function:: PyObject* PyImport_ExecCodeModuleWithPathnames(char *name, PyObject *co, char *pathname, char *cpathname) + + Like :c:func:`PyImport_ExecCodeModuleObject`, but *name*, *pathname* and + *cpathname* are UTF-8 encoded strings. + + .. versionadded:: 3.2 + + +.. c:function:: long PyImport_GetMagicNumber() + + Return the magic number for Python bytecode files (a.k.a. :file:`.pyc` and + :file:`.pyo` files). The magic number should be present in the first four bytes + of the bytecode file, in little-endian byte order. + + +.. c:function:: const char * PyImport_GetMagicTag() + + Return the magic tag string for :pep:`3147` format Python bytecode file + names. + + .. versionadded:: 3.2 + +.. c:function:: PyObject* PyImport_GetModuleDict() + + Return the dictionary used for the module administration (a.k.a. + ``sys.modules``). Note that this is a per-interpreter variable. + + +.. c:function:: PyObject* PyImport_GetImporter(PyObject *path) + + Return an importer object for a :data:`sys.path`/:attr:`pkg.__path__` item + *path*, possibly by fetching it from the :data:`sys.path_importer_cache` + dict. If it wasn't yet cached, traverse :data:`sys.path_hooks` until a hook + is found that can handle the path item. Return ``None`` if no hook could; + this tells our caller it should fall back to the built-in import mechanism. + Cache the result in :data:`sys.path_importer_cache`. Return a new reference + to the importer object. + + +.. c:function:: void _PyImport_Init() + + Initialize the import mechanism. For internal use only. + + +.. c:function:: void PyImport_Cleanup() + + Empty the module table. For internal use only. + + +.. c:function:: void _PyImport_Fini() + + Finalize the import mechanism. For internal use only. + + +.. c:function:: PyObject* _PyImport_FindExtension(char *, char *) + + For internal use only. + + +.. c:function:: PyObject* _PyImport_FixupExtension(char *, char *) + + For internal use only. + + +.. 
c:function:: int PyImport_ImportFrozenModuleObject(PyObject *name) + + Load a frozen module named *name*. Return ``1`` for success, ``0`` if the + module is not found, and ``-1`` with an exception set if the initialization + failed. To access the imported module on a successful load, use + :c:func:`PyImport_ImportModule`. (Note the misnomer --- this function would + reload the module if it was already imported.) + + .. versionadded:: 3.3 + + +.. c:function:: int PyImport_ImportFrozenModule(char *name) + + Similar to :c:func:`PyImport_ImportFrozenModuleObject`, but the name is a + UTF-8 encoded string instead of a Unicode object. + + +.. c:type:: struct _frozen + + .. index:: single: freeze utility + + This is the structure type definition for frozen module descriptors, as + generated by the :program:`freeze` utility (see :file:`Tools/freeze/` in the + Python source distribution). Its definition, found in :file:`Include/import.h`, + is:: + + struct _frozen { + char *name; + unsigned char *code; + int size; + }; + + +.. c:var:: struct _frozen* PyImport_FrozenModules + + This pointer is initialized to point to an array of :c:type:`struct _frozen` + records, terminated by one whose members are all *NULL* or zero. When a frozen + module is imported, it is searched in this table. Third-party code could play + tricks with this to provide a dynamically created collection of frozen modules. + + +.. c:function:: int PyImport_AppendInittab(const char *name, PyObject* (*initfunc)(void)) + + Add a single module to the existing table of built-in modules. This is a + convenience wrapper around :c:func:`PyImport_ExtendInittab`, returning ``-1`` if + the table could not be extended. The new module can be imported by the name + *name*, and uses the function *initfunc* as the initialization function called + on the first attempted import. This should be called before + :c:func:`Py_Initialize`. + + +.. 
c:type:: struct _inittab + + Structure describing a single entry in the list of built-in modules. Each of + these structures gives the name and initialization function for a module built + into the interpreter. The name is an ASCII encoded string. Programs which + embed Python may use an array of these structures in conjunction with + :c:func:`PyImport_ExtendInittab` to provide additional built-in modules. + The structure is defined in :file:`Include/import.h` as:: + + struct _inittab { + char *name; /* ASCII encoded string */ + PyObject* (*initfunc)(void); + }; + + +.. c:function:: int PyImport_ExtendInittab(struct _inittab *newtab) + + Add a collection of modules to the table of built-in modules. The *newtab* + array must end with a sentinel entry which contains *NULL* for the :attr:`name` + field; failure to provide the sentinel value can result in a memory fault. + Returns ``0`` on success or ``-1`` if insufficient memory could be allocated to + extend the internal table. In the event of failure, no modules are added to the + internal table. This should be called before :c:func:`Py_Initialize`. diff --git a/lib/cpython-doc/c-api/index.rst b/lib/cpython-doc/c-api/index.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/index.rst @@ -0,0 +1,27 @@ +.. _c-api-index: + +################################## + Python/C API Reference Manual +################################## + +:Release: |version| +:Date: |today| + +This manual documents the API used by C and C++ programmers who want to write +extension modules or embed Python. It is a companion to :ref:`extending-index`, +which describes the general principles of extension writing but does not +document the API functions in detail. + +.. 
toctree:: + :maxdepth: 2 + + intro.rst + veryhigh.rst + refcounting.rst + exceptions.rst + utilities.rst + abstract.rst + concrete.rst + init.rst + memory.rst + objimpl.rst diff --git a/lib/cpython-doc/c-api/init.rst b/lib/cpython-doc/c-api/init.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/init.rst @@ -0,0 +1,1134 @@ +.. highlightlang:: c + + +.. _initialization: + +***************************************** +Initialization, Finalization, and Threads +***************************************** + + +Initializing and finalizing the interpreter +=========================================== + + +.. c:function:: void Py_Initialize() + + .. index:: + single: Py_SetProgramName() + single: PyEval_InitThreads() + single: modules (in module sys) + single: path (in module sys) + module: builtins + module: __main__ + module: sys + triple: module; search; path + single: PySys_SetArgv() + single: PySys_SetArgvEx() + single: Py_Finalize() + + Initialize the Python interpreter. In an application embedding Python, this + should be called before using any other Python/C API functions; with the + exception of :c:func:`Py_SetProgramName`, :c:func:`Py_SetPythonHome` and :c:func:`Py_SetPath`. This initializes + the table of loaded modules (``sys.modules``), and creates the fundamental + modules :mod:`builtins`, :mod:`__main__` and :mod:`sys`. It also initializes + the module search path (``sys.path``). It does not set ``sys.argv``; use + :c:func:`PySys_SetArgvEx` for that. This is a no-op when called for a second time + (without calling :c:func:`Py_Finalize` first). There is no return value; it is a + fatal error if the initialization fails. + + +.. c:function:: void Py_InitializeEx(int initsigs) + + This function works like :c:func:`Py_Initialize` if *initsigs* is 1. If + *initsigs* is 0, it skips initialization registration of signal handlers, which + might be useful when Python is embedded. + + +.. 
c:function:: int Py_IsInitialized() + + Return true (nonzero) when the Python interpreter has been initialized, false + (zero) if not. After :c:func:`Py_Finalize` is called, this returns false until + :c:func:`Py_Initialize` is called again. + + +.. c:function:: void Py_Finalize() + + Undo all initializations made by :c:func:`Py_Initialize` and subsequent use of + Python/C API functions, and destroy all sub-interpreters (see + :c:func:`Py_NewInterpreter` below) that were created and not yet destroyed since + the last call to :c:func:`Py_Initialize`. Ideally, this frees all memory + allocated by the Python interpreter. This is a no-op when called for a second + time (without calling :c:func:`Py_Initialize` again first). There is no return + value; errors during finalization are ignored. + + This function is provided for a number of reasons. An embedding application + might want to restart Python without having to restart the application itself. + An application that has loaded the Python interpreter from a dynamically + loadable library (or DLL) might want to free all memory allocated by Python + before unloading the DLL. During a hunt for memory leaks in an application a + developer might want to free all memory allocated by Python before exiting from + the application. + + **Bugs and caveats:** The destruction of modules and objects in modules is done + in random order; this may cause destructors (:meth:`__del__` methods) to fail + when they depend on other objects (even functions) or modules. Dynamically + loaded extension modules loaded by Python are not unloaded. Small amounts of + memory allocated by the Python interpreter may not be freed (if you find a leak, + please report it). Memory tied up in circular references between objects is not + freed. Some memory allocated by extension modules may not be freed. 
Some
+   extensions may not work properly if their initialization routine is called more
+   than once; this can happen if an application calls :c:func:`Py_Initialize` and
+   :c:func:`Py_Finalize` more than once.
+
+
+Process-wide parameters
+=======================
+
+
+.. c:function:: void Py_SetProgramName(wchar_t *name)
+
+   .. index::
+      single: Py_Initialize()
+      single: main()
+      single: Py_GetPath()
+
+   This function should be called before :c:func:`Py_Initialize` is called for
+   the first time, if it is called at all. It tells the interpreter the value
+   of the ``argv[0]`` argument to the :c:func:`main` function of the program
+   (converted to wide characters).
+   This is used by :c:func:`Py_GetPath` and some other functions below to find
+   the Python run-time libraries relative to the interpreter executable. The
+   default value is ``'python'``. The argument should point to a
+   zero-terminated wide character string in static storage whose contents will not
+   change for the duration of the program's execution. No code in the Python
+   interpreter will change the contents of this storage.
+
+
+.. c:function:: wchar_t* Py_GetProgramName()
+
+   .. index:: single: Py_SetProgramName()
+
+   Return the program name set with :c:func:`Py_SetProgramName`, or the default.
+   The returned string points into static storage; the caller should not modify its
+   value.
+
+
+.. c:function:: wchar_t* Py_GetPrefix()
+
+   Return the *prefix* for installed platform-independent files. This is derived
+   through a number of complicated rules from the program name set with
+   :c:func:`Py_SetProgramName` and some environment variables; for example, if the
+   program name is ``'/usr/local/bin/python'``, the prefix is ``'/usr/local'``. The
+   returned string points into static storage; the caller should not modify its
+   value. This corresponds to the :makevar:`prefix` variable in the top-level
+   :file:`Makefile` and the ``--prefix`` argument to the :program:`configure`
+   script at build time. 
The value is available to Python code as ``sys.prefix``.
+   It is only useful on Unix. See also the next function.
+
+
+.. c:function:: wchar_t* Py_GetExecPrefix()
+
+   Return the *exec-prefix* for installed platform-*dependent* files. This is
+   derived through a number of complicated rules from the program name set with
+   :c:func:`Py_SetProgramName` and some environment variables; for example, if the
+   program name is ``'/usr/local/bin/python'``, the exec-prefix is
+   ``'/usr/local'``. The returned string points into static storage; the caller
+   should not modify its value. This corresponds to the :makevar:`exec_prefix`
+   variable in the top-level :file:`Makefile` and the ``--exec-prefix``
+   argument to the :program:`configure` script at build time. The value is
+   available to Python code as ``sys.exec_prefix``. It is only useful on Unix.
+
+   Background: The exec-prefix differs from the prefix when platform dependent
+   files (such as executables and shared libraries) are installed in a different
+   directory tree. In a typical installation, platform dependent files may be
+   installed in the :file:`/usr/local/plat` subtree while platform independent
+   files may be installed in :file:`/usr/local`.
+
+   Generally speaking, a platform is a combination of hardware and software
+   families, e.g. Sparc machines running the Solaris 2.x operating system are
+   considered the same platform, but Intel machines running Solaris 2.x are another
+   platform, and Intel machines running Linux are yet another platform. Different
+   major revisions of the same operating system generally also form different
+   platforms. Non-Unix operating systems are a different story; the installation
+   strategies on those systems are so different that the prefix and exec-prefix are
+   meaningless, and set to the empty string. Note that compiled Python bytecode
+   files are platform independent (but not independent from the Python version by
+   which they were compiled!).
+ + System administrators will know how to configure the :program:`mount` or + :program:`automount` programs to share :file:`/usr/local` between platforms + while having :file:`/usr/local/plat` be a different filesystem for each + platform. + + +.. c:function:: wchar_t* Py_GetProgramFullPath() + + .. index:: + single: Py_SetProgramName() + single: executable (in module sys) + + Return the full program name of the Python executable; this is computed as a + side-effect of deriving the default module search path from the program name + (set by :c:func:`Py_SetProgramName` above). The returned string points into + static storage; the caller should not modify its value. The value is available + to Python code as ``sys.executable``. + + +.. c:function:: wchar_t* Py_GetPath() + + .. index:: + triple: module; search; path + single: path (in module sys) + single: Py_SetPath() + + Return the default module search path; this is computed from the program name + (set by :c:func:`Py_SetProgramName` above) and some environment variables. + The returned string consists of a series of directory names separated by a + platform dependent delimiter character. The delimiter character is ``':'`` + on Unix and Mac OS X, ``';'`` on Windows. The returned string points into + static storage; the caller should not modify its value. The list + :data:`sys.path` is initialized with this value on interpreter startup; it + can be (and usually is) modified later to change the search path for loading + modules. + + .. XXX should give the exact rules + + +.. c:function:: void Py_SetPath(const wchar_t *) + + .. index:: + triple: module; search; path + single: path (in module sys) + single: Py_GetPath() + + Set the default module search path. If this function is called before + :c:func:`Py_Initialize`, then :c:func:`Py_GetPath` won't attempt to compute a + default search path but uses the one provided instead. 
This is useful if + Python is embedded by an application that has full knowledge of the location + of all modules. The path components should be separated by semicolons. + + This also causes :data:`sys.executable` to be set only to the raw program + name (see :c:func:`Py_SetProgramName`) and for :data:`sys.prefix` and + :data:`sys.exec_prefix` to be empty. It is up to the caller to modify these + if required after calling :c:func:`Py_Initialize`. + + +.. c:function:: const char* Py_GetVersion() + + Return the version of this Python interpreter. This is a string that looks + something like :: + + "3.0a5+ (py3k:63103M, May 12 2008, 00:53:55) \n[GCC 4.2.3]" + + .. index:: single: version (in module sys) + + The first word (up to the first space character) is the current Python version; + the first three characters are the major and minor version separated by a + period. The returned string points into static storage; the caller should not + modify its value. The value is available to Python code as :data:`sys.version`. + + +.. c:function:: const char* Py_GetPlatform() + + .. index:: single: platform (in module sys) + + Return the platform identifier for the current platform. On Unix, this is + formed from the "official" name of the operating system, converted to lower + case, followed by the major revision number; e.g., for Solaris 2.x, which is + also known as SunOS 5.x, the value is ``'sunos5'``. On Mac OS X, it is + ``'darwin'``. On Windows, it is ``'win'``. The returned string points into + static storage; the caller should not modify its value. The value is available + to Python code as ``sys.platform``. + + +.. c:function:: const char* Py_GetCopyright() + + Return the official copyright string for the current Python version, for example + + ``'Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam'`` + + .. index:: single: copyright (in module sys) + + The returned string points into static storage; the caller should not modify its + value. 
The value is available to Python code as ``sys.copyright``. + + +.. c:function:: const char* Py_GetCompiler() + + Return an indication of the compiler used to build the current Python version, + in square brackets, for example:: + + "[GCC 2.7.2.2]" + + .. index:: single: version (in module sys) + + The returned string points into static storage; the caller should not modify its + value. The value is available to Python code as part of the variable + ``sys.version``. + + +.. c:function:: const char* Py_GetBuildInfo() + + Return information about the sequence number and build date and time of the + current Python interpreter instance, for example :: + + "#67, Aug 1 1997, 22:34:28" + + .. index:: single: version (in module sys) + + The returned string points into static storage; the caller should not modify its + value. The value is available to Python code as part of the variable + ``sys.version``. + + +.. c:function:: void PySys_SetArgvEx(int argc, wchar_t **argv, int updatepath) + + .. index:: + single: main() + single: Py_FatalError() + single: argv (in module sys) + + Set :data:`sys.argv` based on *argc* and *argv*. These parameters are + similar to those passed to the program's :c:func:`main` function with the + difference that the first entry should refer to the script file to be + executed rather than the executable hosting the Python interpreter. If there + isn't a script that will be run, the first entry in *argv* can be an empty + string. If this function fails to initialize :data:`sys.argv`, a fatal + condition is signalled using :c:func:`Py_FatalError`. + + If *updatepath* is zero, this is all the function does. If *updatepath* + is non-zero, the function also modifies :data:`sys.path` according to the + following algorithm: + + - If the name of an existing script is passed in ``argv[0]``, the absolute + path of the directory where the script is located is prepended to + :data:`sys.path`. 
+
+   - Otherwise (that is, if *argc* is 0 or ``argv[0]`` doesn't point
+     to an existing file name), an empty string is prepended to
+     :data:`sys.path`, which is the same as prepending the current working
+     directory (``"."``).
+
+   .. note::
+      It is recommended that applications embedding the Python interpreter
+      for purposes other than executing a single script pass 0 as *updatepath*,
+      and update :data:`sys.path` themselves if desired.
+      See `CVE-2008-5983 `_.
+
+      On versions before 3.1.3, you can achieve the same effect by manually
+      popping the first :data:`sys.path` element after having called
+      :c:func:`PySys_SetArgv`, for example using::
+
+         PyRun_SimpleString("import sys; sys.path.pop(0)\n");
+
+   .. versionadded:: 3.1.3
+
+   .. XXX impl. doesn't seem consistent in allowing 0/NULL for the params;
+      check w/ Guido.
+
+
+.. c:function:: void PySys_SetArgv(int argc, wchar_t **argv)
+
+   This function works like :c:func:`PySys_SetArgvEx` with *updatepath* set to 1.
+
+
+.. c:function:: void Py_SetPythonHome(wchar_t *home)
+
+   Set the default "home" directory, that is, the location of the standard
+   Python libraries.  See :envvar:`PYTHONHOME` for the meaning of the
+   argument string.
+
+   The argument should point to a zero-terminated character string in static
+   storage whose contents will not change for the duration of the program's
+   execution.  No code in the Python interpreter will change the contents of
+   this storage.
+
+
+.. c:function:: wchar_t* Py_GetPythonHome()
+
+   Return the default "home", that is, the value set by a previous call to
+   :c:func:`Py_SetPythonHome`, or the value of the :envvar:`PYTHONHOME`
+   environment variable if it is set.
+
+
+.. _threads:
+
+Thread State and the Global Interpreter Lock
+============================================
+
+.. index::
+   single: global interpreter lock
+   single: interpreter lock
+   single: lock, interpreter
+
+The Python interpreter is not fully thread-safe.
In order to support +multi-threaded Python programs, there's a global lock, called the :term:`global +interpreter lock` or :term:`GIL`, that must be held by the current thread before +it can safely access Python objects. Without the lock, even the simplest +operations could cause problems in a multi-threaded program: for example, when +two threads simultaneously increment the reference count of the same object, the +reference count could end up being incremented only once instead of twice. + +.. index:: single: setswitchinterval() (in module sys) + +Therefore, the rule exists that only the thread that has acquired the +:term:`GIL` may operate on Python objects or call Python/C API functions. +In order to emulate concurrency of execution, the interpreter regularly +tries to switch threads (see :func:`sys.setswitchinterval`). The lock is also +released around potentially blocking I/O operations like reading or writing +a file, so that other Python threads can run in the meantime. + +.. index:: + single: PyThreadState + single: PyThreadState + +The Python interpreter keeps some thread-specific bookkeeping information +inside a data structure called :c:type:`PyThreadState`. There's also one +global variable pointing to the current :c:type:`PyThreadState`: it can +be retrieved using :c:func:`PyThreadState_Get`. + +Releasing the GIL from extension code +------------------------------------- + +Most extension code manipulating the :term:`GIL` has the following simple +structure:: + + Save the thread state in a local variable. + Release the global interpreter lock. + ... Do some blocking I/O operation ... + Reacquire the global interpreter lock. + Restore the thread state from the local variable. + +This is so common that a pair of macros exists to simplify it:: + + Py_BEGIN_ALLOW_THREADS + ... Do some blocking I/O operation ... + Py_END_ALLOW_THREADS + +.. 
index::
+   single: Py_BEGIN_ALLOW_THREADS
+   single: Py_END_ALLOW_THREADS
+
+The :c:macro:`Py_BEGIN_ALLOW_THREADS` macro opens a new block and declares a
+hidden local variable; the :c:macro:`Py_END_ALLOW_THREADS` macro closes the
+block.  These two macros are still available when Python is compiled without
+thread support (they simply have an empty expansion).
+
+When thread support is enabled, the block above expands to the following code::
+
+   PyThreadState *_save;
+
+   _save = PyEval_SaveThread();
+   ...Do some blocking I/O operation...
+   PyEval_RestoreThread(_save);
+
+.. index::
+   single: PyEval_RestoreThread()
+   single: PyEval_SaveThread()
+
+Here is how these functions work: the global interpreter lock is used to
+protect the pointer to the current thread state.  When releasing the lock and
+saving the thread state, the current thread state pointer must be retrieved
+before the lock is released (since another thread could immediately acquire
+the lock and store its own thread state in the global variable).  Conversely,
+when acquiring the lock and restoring the thread state, the lock must be
+acquired before storing the thread state pointer.
+
+.. note::
+   Calling system I/O functions is the most common use case for releasing
+   the GIL, but it can also be useful before calling long-running computations
+   which don't need access to Python objects, such as compression or
+   cryptographic functions operating over memory buffers.  For example, the
+   standard :mod:`zlib` and :mod:`hashlib` modules release the GIL when
+   compressing or hashing data.
+
+Non-Python created threads
+--------------------------
+
+When threads are created using the dedicated Python APIs (such as the
+:mod:`threading` module), a thread state is automatically associated to them
+and the code shown above is therefore correct.
However, when threads are +created from C (for example by a third-party library with its own thread +management), they don't hold the GIL, nor is there a thread state structure +for them. + +If you need to call Python code from these threads (often this will be part +of a callback API provided by the aforementioned third-party library), +you must first register these threads with the interpreter by +creating a thread state data structure, then acquiring the GIL, and finally +storing their thread state pointer, before you can start using the Python/C +API. When you are done, you should reset the thread state pointer, release +the GIL, and finally free the thread state data structure. + +The :c:func:`PyGILState_Ensure` and :c:func:`PyGILState_Release` functions do +all of the above automatically. The typical idiom for calling into Python +from a C thread is:: + + PyGILState_STATE gstate; + gstate = PyGILState_Ensure(); + + /* Perform Python actions here. */ + result = CallSomeFunction(); + /* evaluate result or handle exception */ + + /* Release the thread. No Python API allowed beyond this point. */ + PyGILState_Release(gstate); + +Note that the :c:func:`PyGILState_\*` functions assume there is only one global +interpreter (created automatically by :c:func:`Py_Initialize`). Python +supports the creation of additional interpreters (using +:c:func:`Py_NewInterpreter`), but mixing multiple interpreters and the +:c:func:`PyGILState_\*` API is unsupported. + +Another important thing to note about threads is their behaviour in the face +of the C :c:func:`fork` call. On most systems with :c:func:`fork`, after a +process forks only the thread that issued the fork will exist. That also +means any locks held by other threads will never be released. Python solves +this for :func:`os.fork` by acquiring the locks it uses internally before +the fork, and releasing them afterwards. In addition, it resets any +:ref:`lock-objects` in the child. 
When extending or embedding Python, there +is no way to inform Python of additional (non-Python) locks that need to be +acquired before or reset after a fork. OS facilities such as +:c:func:`pthread_atfork` would need to be used to accomplish the same thing. +Additionally, when extending or embedding Python, calling :c:func:`fork` +directly rather than through :func:`os.fork` (and returning to or calling +into Python) may result in a deadlock by one of Python's internal locks +being held by a thread that is defunct after the fork. +:c:func:`PyOS_AfterFork` tries to reset the necessary locks, but is not +always able to. + + +High-level API +-------------- + +These are the most commonly used types and functions when writing C extension +code, or when embedding the Python interpreter: + +.. c:type:: PyInterpreterState + + This data structure represents the state shared by a number of cooperating + threads. Threads belonging to the same interpreter share their module + administration and a few other internal items. There are no public members in + this structure. + + Threads belonging to different interpreters initially share nothing, except + process state like available memory, open file descriptors and such. The global + interpreter lock is also shared by all threads, regardless of to which + interpreter they belong. + + +.. c:type:: PyThreadState + + This data structure represents the state of a single thread. The only public + data member is :c:type:`PyInterpreterState \*`:attr:`interp`, which points to + this thread's interpreter state. + + +.. c:function:: void PyEval_InitThreads() + + .. index:: + single: PyEval_AcquireThread() + single: PyEval_ReleaseThread() + single: PyEval_SaveThread() + single: PyEval_RestoreThread() + + Initialize and acquire the global interpreter lock. It should be called in the + main thread before creating a second thread or engaging in any other thread + operations such as ``PyEval_ReleaseThread(tstate)``. 
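In an embedding application, the call to :c:func:`PyEval_InitThreads` typically appears right after interpreter initialization (a minimal sketch):

```c
#include <Python.h>

int main(void)
{
    Py_Initialize();
    PyEval_InitThreads();   /* creates the GIL and acquires it */

    /* ... spawn worker threads, run Python code ... */

    Py_Finalize();
    return 0;
}
```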
It is not needed before + calling :c:func:`PyEval_SaveThread` or :c:func:`PyEval_RestoreThread`. + + This is a no-op when called for a second time. + + .. versionchanged:: 3.2 + This function cannot be called before :c:func:`Py_Initialize()` anymore. + + .. index:: module: _thread + + .. note:: + When only the main thread exists, no GIL operations are needed. This is a + common situation (most Python programs do not use threads), and the lock + operations slow the interpreter down a bit. Therefore, the lock is not + created initially. This situation is equivalent to having acquired the lock: + when there is only a single thread, all object accesses are safe. Therefore, + when this function initializes the global interpreter lock, it also acquires + it. Before the Python :mod:`_thread` module creates a new thread, knowing + that either it has the lock or the lock hasn't been created yet, it calls + :c:func:`PyEval_InitThreads`. When this call returns, it is guaranteed that + the lock has been created and that the calling thread has acquired it. + + It is **not** safe to call this function when it is unknown which thread (if + any) currently has the global interpreter lock. + + This function is not available when thread support is disabled at compile time. + + +.. c:function:: int PyEval_ThreadsInitialized() + + Returns a non-zero value if :c:func:`PyEval_InitThreads` has been called. This + function can be called without holding the GIL, and therefore can be used to + avoid calls to the locking API when running single-threaded. This function is + not available when thread support is disabled at compile time. + + +.. c:function:: PyThreadState* PyEval_SaveThread() + + Release the global interpreter lock (if it has been created and thread + support is enabled) and reset the thread state to *NULL*, returning the + previous thread state (which is not *NULL*). If the lock has been created, + the current thread must have acquired it. 
(This function is available even + when thread support is disabled at compile time.) + + +.. c:function:: void PyEval_RestoreThread(PyThreadState *tstate) + + Acquire the global interpreter lock (if it has been created and thread + support is enabled) and set the thread state to *tstate*, which must not be + *NULL*. If the lock has been created, the current thread must not have + acquired it, otherwise deadlock ensues. (This function is available even + when thread support is disabled at compile time.) + + +.. c:function:: PyThreadState* PyThreadState_Get() + + Return the current thread state. The global interpreter lock must be held. + When the current thread state is *NULL*, this issues a fatal error (so that + the caller needn't check for *NULL*). + + +.. c:function:: PyThreadState* PyThreadState_Swap(PyThreadState *tstate) + + Swap the current thread state with the thread state given by the argument + *tstate*, which may be *NULL*. The global interpreter lock must be held + and is not released. + + +.. c:function:: void PyEval_ReInitThreads() + + This function is called from :c:func:`PyOS_AfterFork` to ensure that newly + created child processes don't hold locks referring to threads which + are not running in the child process. + + +The following functions use thread-local storage, and are not compatible +with sub-interpreters: + +.. c:function:: PyGILState_STATE PyGILState_Ensure() + + Ensure that the current thread is ready to call the Python C API regardless + of the current state of Python, or of the global interpreter lock. This may + be called as many times as desired by a thread as long as each call is + matched with a call to :c:func:`PyGILState_Release`. In general, other + thread-related APIs may be used between :c:func:`PyGILState_Ensure` and + :c:func:`PyGILState_Release` calls as long as the thread state is restored to + its previous state before the Release(). 
For example, normal usage of the + :c:macro:`Py_BEGIN_ALLOW_THREADS` and :c:macro:`Py_END_ALLOW_THREADS` macros is + acceptable. + + The return value is an opaque "handle" to the thread state when + :c:func:`PyGILState_Ensure` was called, and must be passed to + :c:func:`PyGILState_Release` to ensure Python is left in the same state. Even + though recursive calls are allowed, these handles *cannot* be shared - each + unique call to :c:func:`PyGILState_Ensure` must save the handle for its call + to :c:func:`PyGILState_Release`. + + When the function returns, the current thread will hold the GIL and be able + to call arbitrary Python code. Failure is a fatal error. + + +.. c:function:: void PyGILState_Release(PyGILState_STATE) + + Release any resources previously acquired. After this call, Python's state will + be the same as it was prior to the corresponding :c:func:`PyGILState_Ensure` call + (but generally this state will be unknown to the caller, hence the use of the + GILState API). + + Every call to :c:func:`PyGILState_Ensure` must be matched by a call to + :c:func:`PyGILState_Release` on the same thread. + + +.. c:function:: PyThreadState PyGILState_GetThisThreadState() + + Get the current thread state for this thread. May return ``NULL`` if no + GILState API has been used on the current thread. Note that the main thread + always has such a thread-state, even if no auto-thread-state call has been + made on the main thread. This is mainly a helper/diagnostic function. + + +The following macros are normally used without a trailing semicolon; look for +example usage in the Python source distribution. + + +.. c:macro:: Py_BEGIN_ALLOW_THREADS + + This macro expands to ``{ PyThreadState *_save; _save = PyEval_SaveThread();``. + Note that it contains an opening brace; it must be matched with a following + :c:macro:`Py_END_ALLOW_THREADS` macro. See above for further discussion of this + macro. It is a no-op when thread support is disabled at compile time. + + +.. 
c:macro:: Py_END_ALLOW_THREADS + + This macro expands to ``PyEval_RestoreThread(_save); }``. Note that it contains + a closing brace; it must be matched with an earlier + :c:macro:`Py_BEGIN_ALLOW_THREADS` macro. See above for further discussion of + this macro. It is a no-op when thread support is disabled at compile time. + + +.. c:macro:: Py_BLOCK_THREADS + + This macro expands to ``PyEval_RestoreThread(_save);``: it is equivalent to + :c:macro:`Py_END_ALLOW_THREADS` without the closing brace. It is a no-op when + thread support is disabled at compile time. + + +.. c:macro:: Py_UNBLOCK_THREADS + + This macro expands to ``_save = PyEval_SaveThread();``: it is equivalent to + :c:macro:`Py_BEGIN_ALLOW_THREADS` without the opening brace and variable + declaration. It is a no-op when thread support is disabled at compile time. + + +Low-level API +------------- + +All of the following functions are only available when thread support is enabled +at compile time, and must be called only when the global interpreter lock has +been created. + + +.. c:function:: PyInterpreterState* PyInterpreterState_New() + + Create a new interpreter state object. The global interpreter lock need not + be held, but may be held if it is necessary to serialize calls to this + function. + + +.. c:function:: void PyInterpreterState_Clear(PyInterpreterState *interp) + + Reset all information in an interpreter state object. The global interpreter + lock must be held. + + +.. c:function:: void PyInterpreterState_Delete(PyInterpreterState *interp) + + Destroy an interpreter state object. The global interpreter lock need not be + held. The interpreter state must have been reset with a previous call to + :c:func:`PyInterpreterState_Clear`. + + +.. c:function:: PyThreadState* PyThreadState_New(PyInterpreterState *interp) + + Create a new thread state object belonging to the given interpreter object. 
+ The global interpreter lock need not be held, but may be held if it is + necessary to serialize calls to this function. + + +.. c:function:: void PyThreadState_Clear(PyThreadState *tstate) + + Reset all information in a thread state object. The global interpreter lock + must be held. + + +.. c:function:: void PyThreadState_Delete(PyThreadState *tstate) + + Destroy a thread state object. The global interpreter lock need not be held. + The thread state must have been reset with a previous call to + :c:func:`PyThreadState_Clear`. + + +.. c:function:: PyObject* PyThreadState_GetDict() + + Return a dictionary in which extensions can store thread-specific state + information. Each extension should use a unique key to use to store state in + the dictionary. It is okay to call this function when no current thread state + is available. If this function returns *NULL*, no exception has been raised and + the caller should assume no current thread state is available. + + +.. c:function:: int PyThreadState_SetAsyncExc(long id, PyObject *exc) + + Asynchronously raise an exception in a thread. The *id* argument is the thread + id of the target thread; *exc* is the exception object to be raised. This + function does not steal any references to *exc*. To prevent naive misuse, you + must write your own C extension to call this. Must be called with the GIL held. + Returns the number of thread states modified; this is normally one, but will be + zero if the thread id isn't found. If *exc* is :const:`NULL`, the pending + exception (if any) for the thread is cleared. This raises no exceptions. + + +.. c:function:: void PyEval_AcquireThread(PyThreadState *tstate) + + Acquire the global interpreter lock and set the current thread state to + *tstate*, which should not be *NULL*. The lock must have been created earlier. + If this thread already has the lock, deadlock ensues. 
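For example, a thread created outside Python can use :c:func:`PyEval_AcquireThread` together with the thread state functions described below (a sketch; *interp* is assumed to have been saved earlier by the main thread while it held the GIL):

```c
#include <Python.h>

/* Run some Python code from a non-Python thread using the
   low-level thread state API. */
void call_python(PyInterpreterState *interp)
{
    PyThreadState *tstate = PyThreadState_New(interp);

    PyEval_AcquireThread(tstate);       /* take the GIL, install tstate */
    PyRun_SimpleString("print('from a C thread')");
    PyThreadState_Clear(tstate);        /* requires the GIL */
    PyEval_ReleaseThread(tstate);       /* reset tstate, drop the GIL */
    PyThreadState_Delete(tstate);       /* the GIL is not needed here */
}
```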
+ + :c:func:`PyEval_RestoreThread` is a higher-level function which is always + available (even when thread support isn't enabled or when threads have + not been initialized). + + +.. c:function:: void PyEval_ReleaseThread(PyThreadState *tstate) + + Reset the current thread state to *NULL* and release the global interpreter + lock. The lock must have been created earlier and must be held by the current + thread. The *tstate* argument, which must not be *NULL*, is only used to check + that it represents the current thread state --- if it isn't, a fatal error is + reported. + + :c:func:`PyEval_SaveThread` is a higher-level function which is always + available (even when thread support isn't enabled or when threads have + not been initialized). + + +.. c:function:: void PyEval_AcquireLock() + + Acquire the global interpreter lock. The lock must have been created earlier. + If this thread already has the lock, a deadlock ensues. + + .. deprecated:: 3.2 + This function does not update the current thread state. Please use + :c:func:`PyEval_RestoreThread` or :c:func:`PyEval_AcquireThread` + instead. + + +.. c:function:: void PyEval_ReleaseLock() + + Release the global interpreter lock. The lock must have been created earlier. + + .. deprecated:: 3.2 + This function does not update the current thread state. Please use + :c:func:`PyEval_SaveThread` or :c:func:`PyEval_ReleaseThread` + instead. + + +Sub-interpreter support +======================= + +While in most uses, you will only embed a single Python interpreter, there +are cases where you need to create several independent interpreters in the +same process and perhaps even in the same thread. Sub-interpreters allow +you to do that. You can switch between sub-interpreters using the +:c:func:`PyThreadState_Swap` function. You can create and destroy them +using the following functions: + + +.. c:function:: PyThreadState* Py_NewInterpreter() + + .. 
index:: + module: builtins + module: __main__ + module: sys + single: stdout (in module sys) + single: stderr (in module sys) + single: stdin (in module sys) + + Create a new sub-interpreter. This is an (almost) totally separate environment + for the execution of Python code. In particular, the new interpreter has + separate, independent versions of all imported modules, including the + fundamental modules :mod:`builtins`, :mod:`__main__` and :mod:`sys`. The + table of loaded modules (``sys.modules``) and the module search path + (``sys.path``) are also separate. The new environment has no ``sys.argv`` + variable. It has new standard I/O stream file objects ``sys.stdin``, + ``sys.stdout`` and ``sys.stderr`` (however these refer to the same underlying + file descriptors). + + The return value points to the first thread state created in the new + sub-interpreter. This thread state is made in the current thread state. + Note that no actual thread is created; see the discussion of thread states + below. If creation of the new interpreter is unsuccessful, *NULL* is + returned; no exception is set since the exception state is stored in the + current thread state and there may not be a current thread state. (Like all + other Python/C API functions, the global interpreter lock must be held before + calling this function and is still held when it returns; however, unlike most + other Python/C API functions, there needn't be a current thread state on + entry.) + + .. index:: + single: Py_Finalize() + single: Py_Initialize() + + Extension modules are shared between (sub-)interpreters as follows: the first + time a particular extension is imported, it is initialized normally, and a + (shallow) copy of its module's dictionary is squirreled away. When the same + extension is imported by another (sub-)interpreter, a new module is initialized + and filled with the contents of this copy; the extension's ``init`` function is + not called. 
Note that this is different from what happens when an extension is + imported after the interpreter has been completely re-initialized by calling + :c:func:`Py_Finalize` and :c:func:`Py_Initialize`; in that case, the extension's + ``initmodule`` function *is* called again. + + .. index:: single: close() (in module os) + + +.. c:function:: void Py_EndInterpreter(PyThreadState *tstate) + + .. index:: single: Py_Finalize() + + Destroy the (sub-)interpreter represented by the given thread state. The given + thread state must be the current thread state. See the discussion of thread + states below. When the call returns, the current thread state is *NULL*. All + thread states associated with this interpreter are destroyed. (The global + interpreter lock must be held before calling this function and is still held + when it returns.) :c:func:`Py_Finalize` will destroy all sub-interpreters that + haven't been explicitly destroyed at that point. + + +Bugs and caveats +---------------- + +Because sub-interpreters (and the main interpreter) are part of the same +process, the insulation between them isn't perfect --- for example, using +low-level file operations like :func:`os.close` they can +(accidentally or maliciously) affect each other's open files. Because of the +way extensions are shared between (sub-)interpreters, some extensions may not +work properly; this is especially likely when the extension makes use of +(static) global variables, or when the extension manipulates its module's +dictionary after its initialization. It is possible to insert objects created +in one sub-interpreter into a namespace of another sub-interpreter; this should +be done with great care to avoid sharing user-defined functions, methods, +instances or classes between sub-interpreters, since import operations executed +by such objects may affect the wrong (sub-)interpreter's dictionary of loaded +modules. 
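With those caveats in mind, the basic sub-interpreter lifecycle can be sketched as follows (assumes the interpreter is initialized and the current thread holds the GIL; ``run_isolated`` is a hypothetical helper):

```c
#include <Python.h>

/* Run `code` in a freshly created sub-interpreter, then destroy the
   sub-interpreter and restore the previous thread state. */
void run_isolated(const char *code)
{
    PyThreadState *saved = PyThreadState_Get();
    PyThreadState *sub = Py_NewInterpreter();

    if (sub == NULL) {              /* creation failed; no exception set */
        PyThreadState_Swap(saved);
        return;
    }
    PyRun_SimpleString(code);       /* executes in the sub-interpreter */
    Py_EndInterpreter(sub);         /* current thread state is now NULL */
    PyThreadState_Swap(saved);      /* back to the previous interpreter */
}
```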
+
+Also note that combining this functionality with :c:func:`PyGILState_\*` APIs
+is delicate, because these APIs assume a bijection between Python thread states
+and OS-level threads, an assumption broken by the presence of sub-interpreters.
+It is highly recommended that you don't switch sub-interpreters between a pair
+of matching :c:func:`PyGILState_Ensure` and :c:func:`PyGILState_Release` calls.
+Furthermore, extensions (such as :mod:`ctypes`) using these APIs to allow calling
+of Python code from non-Python created threads will probably be broken when using
+sub-interpreters.
+
+
+Asynchronous Notifications
+==========================
+
+A mechanism is provided to make asynchronous notifications to the main
+interpreter thread.  These notifications take the form of a function
+pointer and a void argument.
+
+.. index:: single: setcheckinterval() (in module sys)
+
+Every check interval, when the global interpreter lock is released and
+reacquired, Python will also call any such provided functions.  This can be
+used for example by asynchronous IO handlers.  The notification can be
+scheduled from a worker thread and the actual call then made at the earliest
+convenience by the main thread, when it holds the global interpreter lock and
+can perform any Python API calls.
+
+.. c:function:: int Py_AddPendingCall(int (*func)(void *), void *arg)
+
+   .. index:: single: Py_AddPendingCall()
+
+   Post a notification to the Python main thread.  If successful, *func* will
+   be called with the argument *arg* at the earliest convenience.  *func* will
+   be called while the global interpreter lock is held and can thus use the
+   full Python API, taking any action such as setting object attributes to
+   signal IO completion.  It must return 0 on success, or -1 signalling an
+   exception.
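A worker thread might use :c:func:`Py_AddPendingCall` like this (a sketch; ``on_done`` and ``schedule_notification`` are hypothetical names):

```c
#include <Python.h>

/* Runs later in the main thread, with the GIL held. */
static int
on_done(void *arg)
{
    /* The full Python API may be used here, e.g. to set an
       attribute or an event object signalling completion. */
    (void)arg;
    return 0;           /* 0 on success, -1 with an exception set */
}

/* May be called from any thread, with or without the GIL. */
void schedule_notification(void *request)
{
    if (Py_AddPendingCall(on_done, request) < 0) {
        /* Notification buffer full; the caller may retry later. */
    }
}
```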
The notification function won't be interrupted to perform another + asynchronous notification recursively, but it can still be interrupted to + switch threads if the global interpreter lock is released, for example, if it + calls back into Python code. + + This function returns 0 on success in which case the notification has been + scheduled. Otherwise, for example if the notification buffer is full, it + returns -1 without setting any exception. + + This function can be called on any thread, be it a Python thread or some + other system thread. If it is a Python thread, it doesn't matter if it holds + the global interpreter lock or not. + + .. versionadded:: 3.1 + + +.. _profiling: + +Profiling and Tracing +===================== + +.. sectionauthor:: Fred L. Drake, Jr. + + +The Python interpreter provides some low-level support for attaching profiling +and execution tracing facilities. These are used for profiling, debugging, and +coverage analysis tools. + +This C interface allows the profiling or tracing code to avoid the overhead of +calling through Python-level callable objects, making a direct C function call +instead. The essential attributes of the facility have not changed; the +interface allows trace functions to be installed per-thread, and the basic +events reported to the trace function are the same as had been reported to the +Python-level trace functions in previous versions. + + +.. c:type:: int (*Py_tracefunc)(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg) + + The type of the trace function registered using :c:func:`PyEval_SetProfile` and + :c:func:`PyEval_SetTrace`. 
The first parameter is the object passed to the + registration function as *obj*, *frame* is the frame object to which the event + pertains, *what* is one of the constants :const:`PyTrace_CALL`, + :const:`PyTrace_EXCEPTION`, :const:`PyTrace_LINE`, :const:`PyTrace_RETURN`, + :const:`PyTrace_C_CALL`, :const:`PyTrace_C_EXCEPTION`, or + :const:`PyTrace_C_RETURN`, and *arg* depends on the value of *what*: + + +------------------------------+--------------------------------------+ + | Value of *what* | Meaning of *arg* | + +==============================+======================================+ + | :const:`PyTrace_CALL` | Always *NULL*. | + +------------------------------+--------------------------------------+ + | :const:`PyTrace_EXCEPTION` | Exception information as returned by | + | | :func:`sys.exc_info`. | + +------------------------------+--------------------------------------+ + | :const:`PyTrace_LINE` | Always *NULL*. | + +------------------------------+--------------------------------------+ + | :const:`PyTrace_RETURN` | Value being returned to the caller, | + | | or *NULL* if caused by an exception. | + +------------------------------+--------------------------------------+ + | :const:`PyTrace_C_CALL` | Function object being called. | + +------------------------------+--------------------------------------+ + | :const:`PyTrace_C_EXCEPTION` | Function object being called. | + +------------------------------+--------------------------------------+ + | :const:`PyTrace_C_RETURN` | Function object being called. | + +------------------------------+--------------------------------------+ + + +.. c:var:: int PyTrace_CALL + + The value of the *what* parameter to a :c:type:`Py_tracefunc` function when a new + call to a function or method is being reported, or a new entry into a generator. + Note that the creation of the iterator for a generator function is not reported + as there is no control transfer to the Python bytecode in the corresponding + frame. + + +.. 
c:var:: int PyTrace_EXCEPTION + + The value of the *what* parameter to a :c:type:`Py_tracefunc` function when an + exception has been raised. The callback function is called with this value for + *what* after any bytecode is processed that causes the exception to become + set within the frame being executed. The effect of this is that as exception + propagation causes the Python stack to unwind, the callback is called upon + return to each frame as the exception propagates. Only trace functions receive + these events; they are not needed by the profiler. + + +.. c:var:: int PyTrace_LINE + + The value passed as the *what* parameter to a trace function (but not a + profiling function) when a line-number event is being reported. + + +.. c:var:: int PyTrace_RETURN + + The value for the *what* parameter to :c:type:`Py_tracefunc` functions when a + call is returning without propagating an exception. + + +.. c:var:: int PyTrace_C_CALL + + The value for the *what* parameter to :c:type:`Py_tracefunc` functions when a C + function is about to be called. + + +.. c:var:: int PyTrace_C_EXCEPTION + + The value for the *what* parameter to :c:type:`Py_tracefunc` functions when a C + function has raised an exception. + + +.. c:var:: int PyTrace_C_RETURN + + The value for the *what* parameter to :c:type:`Py_tracefunc` functions when a C + function has returned. + + +.. c:function:: void PyEval_SetProfile(Py_tracefunc func, PyObject *obj) + + Set the profiler function to *func*. The *obj* parameter is passed to the + function as its first parameter, and may be any Python object, or *NULL*. If + the profile function needs to maintain state, using a different value for *obj* + for each thread provides a convenient and thread-safe place to store it. The + profile function is called for all monitored events except the line-number + events. + + +.. c:function:: void PyEval_SetTrace(Py_tracefunc func, PyObject *obj) + + Set the tracing function to *func*.
This is similar to + :c:func:`PyEval_SetProfile`, except the tracing function does receive line-number + events. + +.. c:function:: PyObject* PyEval_GetCallStats(PyObject *self) + + Return a tuple of function call counts. There are constants defined for the + positions within the tuple: + + +-------------------------------+-------+ + | Name | Value | + +===============================+=======+ + | :const:`PCALL_ALL` | 0 | + +-------------------------------+-------+ + | :const:`PCALL_FUNCTION` | 1 | + +-------------------------------+-------+ + | :const:`PCALL_FAST_FUNCTION` | 2 | + +-------------------------------+-------+ + | :const:`PCALL_FASTER_FUNCTION`| 3 | + +-------------------------------+-------+ + | :const:`PCALL_METHOD` | 4 | + +-------------------------------+-------+ + | :const:`PCALL_BOUND_METHOD` | 5 | + +-------------------------------+-------+ + | :const:`PCALL_CFUNCTION` | 6 | + +-------------------------------+-------+ + | :const:`PCALL_TYPE` | 7 | + +-------------------------------+-------+ + | :const:`PCALL_GENERATOR` | 8 | + +-------------------------------+-------+ + | :const:`PCALL_OTHER` | 9 | + +-------------------------------+-------+ + | :const:`PCALL_POP` | 10 | + +-------------------------------+-------+ + + :const:`PCALL_FAST_FUNCTION` means no argument tuple needs to be created. + :const:`PCALL_FASTER_FUNCTION` means that the fast-path frame setup code is used. + + If there is a method call where the call can be optimized by changing + the argument tuple and calling the function directly, it gets recorded + twice. + + This function is only present if Python is compiled with :const:`CALL_PROFILE` + defined. + +.. _advanced-debugging: + +Advanced Debugger Support +========================= + +.. sectionauthor:: Fred L. Drake, Jr. + + +These functions are only intended to be used by advanced debugging tools. + + +.. 
c:function:: PyInterpreterState* PyInterpreterState_Head() + + Return the interpreter state object at the head of the list of all such objects. + + +.. c:function:: PyInterpreterState* PyInterpreterState_Next(PyInterpreterState *interp) + + Return the next interpreter state object after *interp* from the list of all + such objects. + + +.. c:function:: PyThreadState * PyInterpreterState_ThreadHead(PyInterpreterState *interp) + + Return a pointer to the first :c:type:`PyThreadState` object in the list of + threads associated with the interpreter *interp*. + + +.. c:function:: PyThreadState* PyThreadState_Next(PyThreadState *tstate) + + Return the next thread state object after *tstate* from the list of all such + objects belonging to the same :c:type:`PyInterpreterState` object. + diff --git a/lib/cpython-doc/c-api/intro.rst b/lib/cpython-doc/c-api/intro.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/intro.rst @@ -0,0 +1,630 @@ +.. highlightlang:: c + + +.. _api-intro: + +************ +Introduction +************ + +The Application Programmer's Interface to Python gives C and C++ programmers +access to the Python interpreter at a variety of levels. The API is equally +usable from C++, but for brevity it is generally referred to as the Python/C +API. There are two fundamentally different reasons for using the Python/C API. +The first reason is to write *extension modules* for specific purposes; these +are C modules that extend the Python interpreter. This is probably the most +common use. The second reason is to use Python as a component in a larger +application; this technique is generally referred to as :dfn:`embedding` Python +in an application. + +Writing an extension module is a relatively well-understood process, where a +"cookbook" approach works well. There are several tools that automate the +process to some extent.
While people have embedded Python in other +applications since its early existence, the process of embedding Python is less +straightforward than writing an extension. + +Many API functions are useful independent of whether you're embedding or +extending Python; moreover, most applications that embed Python will need to +provide a custom extension as well, so it's probably a good idea to become +familiar with writing an extension before attempting to embed Python in a real +application. + + +.. _api-includes: + +Include Files +============= + +All function, type and macro definitions needed to use the Python/C API are +included in your code by the following line:: + + #include "Python.h" + +This implies inclusion of the following standard headers: ``<stdio.h>``, +``<string.h>``, ``<errno.h>``, ``<limits.h>``, ``<assert.h>`` and ``<stdlib.h>`` +(if available). + +.. note:: + + Since Python may define some pre-processor definitions which affect the standard + headers on some systems, you *must* include :file:`Python.h` before any standard + headers are included. + +All user visible names defined by Python.h (except those defined by the included +standard headers) have one of the prefixes ``Py`` or ``_Py``. Names beginning +with ``_Py`` are for internal use by the Python implementation and should not be +used by extension writers. Structure member names do not have a reserved prefix. + +**Important:** user code should never define names that begin with ``Py`` or +``_Py``. This confuses the reader, and jeopardizes the portability of the user +code to future Python versions, which may define additional names beginning with +one of these prefixes. + +The header files are typically installed with Python. On Unix, these are +located in the directories :file:`{prefix}/include/pythonversion/` and +:file:`{exec_prefix}/include/pythonversion/`, where :envvar:`prefix` and +:envvar:`exec_prefix` are defined by the corresponding parameters to Python's +:program:`configure` script and *version* is ``sys.version[:3]``.
On Windows, +the headers are installed in :file:`{prefix}/include`, where :envvar:`prefix` is +the installation directory specified to the installer. + +To include the headers, place both directories (if different) on your compiler's +search path for includes. Do *not* place the parent directories on the search +path and then use ``#include <pythonX.Y/xyz.h>``; this will break on +multi-platform builds since the platform independent headers under +:envvar:`prefix` include the platform specific headers from +:envvar:`exec_prefix`. + +C++ users should note that though the API is defined entirely using C, the +header files do properly declare the entry points to be ``extern "C"``, so there +is no need to do anything special to use the API from C++. + + +.. _api-objects: + +Objects, Types and Reference Counts +=================================== + +.. index:: object: type + +Most Python/C API functions have one or more arguments as well as a return value +of type :c:type:`PyObject\*`. This type is a pointer to an opaque data type +representing an arbitrary Python object. Since all Python object types are +treated the same way by the Python language in most situations (e.g., +assignments, scope rules, and argument passing), it is only fitting that they +should be represented by a single C type. Almost all Python objects live on the +heap: you never declare an automatic or static variable of type +:c:type:`PyObject`, only pointer variables of type :c:type:`PyObject\*` can be +declared. The sole exceptions are the type objects; since these must never be +deallocated, they are typically static :c:type:`PyTypeObject` objects. + +All Python objects (even Python integers) have a :dfn:`type` and a +:dfn:`reference count`. An object's type determines what kind of object it is +(e.g., an integer, a list, or a user-defined function; there are many more as +explained in :ref:`types`).
For each of the well-known types there is a macro +to check whether an object is of that type; for instance, ``PyList_Check(a)`` is +true if (and only if) the object pointed to by *a* is a Python list. + + +.. _api-refcounts: + +Reference Counts +---------------- + +The reference count is important because today's computers have a finite (and +often severely limited) memory size; it counts how many different places there +are that have a reference to an object. Such a place could be another object, +or a global (or static) C variable, or a local variable in some C function. +When an object's reference count becomes zero, the object is deallocated. If +it contains references to other objects, their reference count is decremented. +Those other objects may be deallocated in turn, if this decrement makes their +reference count become zero, and so on. (There's an obvious problem with +objects that reference each other here; for now, the solution is "don't do +that.") + +.. index:: + single: Py_INCREF() + single: Py_DECREF() + +Reference counts are always manipulated explicitly. The normal way is to use +the macro :c:func:`Py_INCREF` to increment an object's reference count by one, +and :c:func:`Py_DECREF` to decrement it by one. The :c:func:`Py_DECREF` macro +is considerably more complex than the incref one, since it must check whether +the reference count becomes zero and then cause the object's deallocator to be +called. The deallocator is a function pointer contained in the object's type +structure. The type-specific deallocator takes care of decrementing the +reference counts for other objects contained in the object if this is a compound +object type, such as a list, as well as performing any additional finalization +that's needed. There's no chance that the reference count can overflow; at +least as many bits are used to hold the reference count as there are distinct +memory locations in virtual memory (assuming ``sizeof(Py_ssize_t) >= sizeof(void*)``). 
+Thus, the reference count increment is a simple operation. + +It is not necessary to increment an object's reference count for every local +variable that contains a pointer to an object. In theory, the object's +reference count goes up by one when the variable is made to point to it and it +goes down by one when the variable goes out of scope. However, these two +cancel each other out, so at the end the reference count hasn't changed. The +only real reason to use the reference count is to prevent the object from being +deallocated as long as our variable is pointing to it. If we know that there +is at least one other reference to the object that lives at least as long as +our variable, there is no need to increment the reference count temporarily. +An important situation where this arises is in objects that are passed as +arguments to C functions in an extension module that are called from Python; +the call mechanism guarantees to hold a reference to every argument for the +duration of the call. + +However, a common pitfall is to extract an object from a list and hold on to it +for a while without incrementing its reference count. Some other operation might +conceivably remove the object from the list, decrementing its reference count +and possibly deallocating it. The real danger is that innocent-looking +operations may invoke arbitrary Python code which could do this; there is a code +path which allows control to flow back to the user from a :c:func:`Py_DECREF`, so +almost any operation is potentially dangerous. + +A safe approach is to always use the generic operations (functions whose name +begins with ``PyObject_``, ``PyNumber_``, ``PySequence_`` or ``PyMapping_``). +These operations always increment the reference count of the object they return. +This leaves the caller with the responsibility to call :c:func:`Py_DECREF` when +they are done with the result; this soon becomes second nature. + + +..
_api-refcountdetails: + +Reference Count Details +^^^^^^^^^^^^^^^^^^^^^^^ + +The reference count behavior of functions in the Python/C API is best explained +in terms of *ownership of references*. Ownership pertains to references, never +to objects (objects are not owned: they are always shared). "Owning a +reference" means being responsible for calling Py_DECREF on it when the +reference is no longer needed. Ownership can also be transferred, meaning that +the code that receives ownership of the reference then becomes responsible for +eventually decref'ing it by calling :c:func:`Py_DECREF` or :c:func:`Py_XDECREF` +when it's no longer needed---or passing on this responsibility (usually to its +caller). When a function passes ownership of a reference on to its caller, the +caller is said to receive a *new* reference. When no ownership is transferred, +the caller is said to *borrow* the reference. Nothing needs to be done for a +borrowed reference. + +Conversely, when a calling function passes in a reference to an object, there +are two possibilities: the function *steals* a reference to the object, or it +does not. *Stealing a reference* means that when you pass a reference to a +function, that function assumes that it now owns that reference, and you are not +responsible for it any longer. + +.. index:: + single: PyList_SetItem() + single: PyTuple_SetItem() + +Few functions steal references; the two notable exceptions are +:c:func:`PyList_SetItem` and :c:func:`PyTuple_SetItem`, which steal a reference +to the item (but not to the tuple or list into which the item is put!). 
These +functions were designed to steal a reference because of a common idiom for +populating a tuple or list with newly created objects; for example, the code to +create the tuple ``(1, 2, "three")`` could look like this (forgetting about +error handling for the moment; a better way to code this is shown below):: + + PyObject *t; + + t = PyTuple_New(3); + PyTuple_SetItem(t, 0, PyLong_FromLong(1L)); + PyTuple_SetItem(t, 1, PyLong_FromLong(2L)); + PyTuple_SetItem(t, 2, PyUnicode_FromString("three")); + +Here, :c:func:`PyLong_FromLong` returns a new reference which is immediately +stolen by :c:func:`PyTuple_SetItem`. When you want to keep using an object +although the reference to it will be stolen, use :c:func:`Py_INCREF` to grab +another reference before calling the reference-stealing function. + +Incidentally, :c:func:`PyTuple_SetItem` is the *only* way to set tuple items; +:c:func:`PySequence_SetItem` and :c:func:`PyObject_SetItem` refuse to do this +since tuples are an immutable data type. You should only use +:c:func:`PyTuple_SetItem` for tuples that you are creating yourself. + +Equivalent code for populating a list can be written using :c:func:`PyList_New` +and :c:func:`PyList_SetItem`. + +However, in practice, you will rarely use these ways of creating and populating +a tuple or list. There's a generic function, :c:func:`Py_BuildValue`, that can +create most common objects from C values, directed by a :dfn:`format string`. +For example, the above two blocks of code could be replaced by the following +(which also takes care of the error checking):: + + PyObject *tuple, *list; + + tuple = Py_BuildValue("(iis)", 1, 2, "three"); + list = Py_BuildValue("[iis]", 1, 2, "three"); + +It is much more common to use :c:func:`PyObject_SetItem` and friends with items +whose references you are only borrowing, like arguments that were passed in to +the function you are writing.
In that case, their behaviour regarding reference +counts is much saner, since you don't have to increment a reference count so you +can give a reference away ("have it be stolen"). For example, this function +sets all items of a list (actually, any mutable sequence) to a given item:: + + int + set_all(PyObject *target, PyObject *item) + { + int i, n; + + n = PyObject_Length(target); + if (n < 0) + return -1; + for (i = 0; i < n; i++) { + PyObject *index = PyLong_FromLong(i); + if (!index) + return -1; + if (PyObject_SetItem(target, index, item) < 0) { + Py_DECREF(index); + return -1; + } + Py_DECREF(index); + } + return 0; + } + +.. index:: single: set_all() + +The situation is slightly different for function return values. While passing +a reference to most functions does not change your ownership responsibilities +for that reference, many functions that return a reference to an object give +you ownership of the reference. The reason is simple: in many cases, the +returned object is created on the fly, and the reference you get is the only +reference to the object. Therefore, the generic functions that return object +references, like :c:func:`PyObject_GetItem` and :c:func:`PySequence_GetItem`, +always return a new reference (the caller becomes the owner of the reference). + +It is important to realize that whether you own a reference returned by a +function depends on which function you call only --- *the plumage* (the type of +the object passed as an argument to the function) *doesn't enter into it!* +Thus, if you extract an item from a list using :c:func:`PyList_GetItem`, you +don't own the reference --- but if you obtain the same item from the same list +using :c:func:`PySequence_GetItem` (which happens to take exactly the same +arguments), you do own a reference to the returned object. + +..
index:: + single: PyList_GetItem() + single: PySequence_GetItem() + +Here is an example of how you could write a function that computes the sum of +the items in a list of integers; once using :c:func:`PyList_GetItem`, and once +using :c:func:`PySequence_GetItem`. :: + + long + sum_list(PyObject *list) + { + int i, n; + long total = 0; + PyObject *item; + + n = PyList_Size(list); + if (n < 0) + return -1; /* Not a list */ + for (i = 0; i < n; i++) { + item = PyList_GetItem(list, i); /* Can't fail */ + if (!PyLong_Check(item)) continue; /* Skip non-integers */ + total += PyLong_AsLong(item); + } + return total; + } + +.. index:: single: sum_list() + +:: + + long + sum_sequence(PyObject *sequence) + { + int i, n; + long total = 0; + PyObject *item; + n = PySequence_Length(sequence); + if (n < 0) + return -1; /* Has no length */ + for (i = 0; i < n; i++) { + item = PySequence_GetItem(sequence, i); + if (item == NULL) + return -1; /* Not a sequence, or other failure */ + if (PyLong_Check(item)) + total += PyLong_AsLong(item); + Py_DECREF(item); /* Discard reference ownership */ + } + return total; + } + +.. index:: single: sum_sequence() + + +.. _api-types: + +Types +----- + +There are few other data types that play a significant role in the Python/C +API; most are simple C types such as :c:type:`int`, :c:type:`long`, +:c:type:`double` and :c:type:`char\*`. A few structure types are used to +describe static tables used to list the functions exported by a module or the +data attributes of a new object type, and another is used to describe the value +of a complex number. These will be discussed together with the functions that +use them. + + +.. 
_api-exceptions: + +Exceptions +========== + +The Python programmer only needs to deal with exceptions if specific error +handling is required; unhandled exceptions are automatically propagated to the +caller, then to the caller's caller, and so on, until they reach the top-level +interpreter, where they are reported to the user accompanied by a stack +traceback. + +.. index:: single: PyErr_Occurred() + +For C programmers, however, error checking always has to be explicit. All +functions in the Python/C API can raise exceptions, unless an explicit claim is +made otherwise in a function's documentation. In general, when a function +encounters an error, it sets an exception, discards any object references that +it owns, and returns an error indicator. If not documented otherwise, this +indicator is either *NULL* or ``-1``, depending on the function's return type. +A few functions return a Boolean true/false result, with false indicating an +error. Very few functions return no explicit error indicator or have an +ambiguous return value, and require explicit testing for errors with +:c:func:`PyErr_Occurred`. These exceptions are always explicitly documented. + +.. index:: + single: PyErr_SetString() + single: PyErr_Clear() + +Exception state is maintained in per-thread storage (this is equivalent to +using global storage in an unthreaded application). A thread can be in one of +two states: an exception has occurred, or not. The function +:c:func:`PyErr_Occurred` can be used to check for this: it returns a borrowed +reference to the exception type object when an exception has occurred, and +*NULL* otherwise. There are a number of functions to set the exception state: +:c:func:`PyErr_SetString` is the most common (though not the most general) +function to set the exception state, and :c:func:`PyErr_Clear` clears the +exception state. 
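This convention can be sketched with a small helper. The function ``parse_port`` and its error message are invented for illustration; compiling it requires the :file:`Python.h` headers and a link against ``libpython``:

```c
#include <Python.h>
#include <stdlib.h>

/* Hypothetical helper following the Python/C API error convention:
 * on failure it sets an exception and returns -1, on success it
 * returns the parsed value. */
static long
parse_port(const char *s)
{
    char *end;
    long port = strtol(s, &end, 10);
    if (end == s || *end != '\0' || port < 0 || port > 65535) {
        PyErr_SetString(PyExc_ValueError, "not a valid port number");
        return -1;
    }
    return port;
}
```

Because ``-1`` lies outside the valid range here, the indicator is unambiguous; for functions whose whole return range is meaningful (such as :c:func:`PyLong_AsLong`), the caller must additionally use :c:func:`PyErr_Occurred` to distinguish a genuine ``-1`` result from an error.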
+ +The full exception state consists of three objects (all of which can be +*NULL*): the exception type, the corresponding exception value, and the +traceback. These have the same meanings as the Python result of +``sys.exc_info()``; however, they are not the same: the Python objects represent +the last exception being handled by a Python :keyword:`try` ... +:keyword:`except` statement, while the C level exception state only exists while +an exception is being passed on between C functions until it reaches the Python +bytecode interpreter's main loop, which takes care of transferring it to +``sys.exc_info()`` and friends. + +.. index:: single: exc_info() (in module sys) + +Note that starting with Python 1.5, the preferred, thread-safe way to access the +exception state from Python code is to call the function :func:`sys.exc_info`, +which returns the per-thread exception state for Python code. Also, the +semantics of both ways to access the exception state have changed so that a +function which catches an exception will save and restore its thread's exception +state so as to preserve the exception state of its caller. This prevents common +bugs in exception handling code caused by an innocent-looking function +overwriting the exception being handled; it also reduces the often unwanted +lifetime extension for objects that are referenced by the stack frames in the +traceback. + +As a general principle, a function that calls another function to perform some +task should check whether the called function raised an exception, and if so, +pass the exception state on to its caller. It should discard any object +references that it owns, and return an error indicator, but it should *not* set +another exception --- that would overwrite the exception that was just raised, +and lose important information about the exact cause of the error. + +.. 
index:: single: sum_sequence() + +A simple example of detecting exceptions and passing them on is shown in the +:c:func:`sum_sequence` example above. It so happens that that example doesn't +need to clean up any owned references when it detects an error. The following +example function shows some error cleanup. First, to remind you why you like +Python, we show the equivalent Python code:: + + def incr_item(dict, key): + try: + item = dict[key] + except KeyError: + item = 0 + dict[key] = item + 1 + +.. index:: single: incr_item() + +Here is the corresponding C code, in all its glory:: + + int + incr_item(PyObject *dict, PyObject *key) + { + /* Objects all initialized to NULL for Py_XDECREF */ + PyObject *item = NULL, *const_one = NULL, *incremented_item = NULL; + int rv = -1; /* Return value initialized to -1 (failure) */ + + item = PyObject_GetItem(dict, key); + if (item == NULL) { + /* Handle KeyError only: */ + if (!PyErr_ExceptionMatches(PyExc_KeyError)) + goto error; + + /* Clear the error and use zero: */ + PyErr_Clear(); + item = PyLong_FromLong(0L); + if (item == NULL) + goto error; + } + const_one = PyLong_FromLong(1L); + if (const_one == NULL) + goto error; + + incremented_item = PyNumber_Add(item, const_one); + if (incremented_item == NULL) + goto error; + + if (PyObject_SetItem(dict, key, incremented_item) < 0) + goto error; + rv = 0; /* Success */ + /* Continue with cleanup code */ + + error: + /* Cleanup code, shared by success and failure path */ + + /* Use Py_XDECREF() to ignore NULL references */ + Py_XDECREF(item); + Py_XDECREF(const_one); + Py_XDECREF(incremented_item); + + return rv; /* -1 for error, 0 for success */ + } + +.. index:: single: incr_item() + +.. index:: + single: PyErr_ExceptionMatches() + single: PyErr_Clear() + single: Py_XDECREF() + +This example represents an endorsed use of the ``goto`` statement in C! 
+It illustrates the use of :c:func:`PyErr_ExceptionMatches` and +:c:func:`PyErr_Clear` to handle specific exceptions, and the use of +:c:func:`Py_XDECREF` to dispose of owned references that may be *NULL* (note the +``'X'`` in the name; :c:func:`Py_DECREF` would crash when confronted with a +*NULL* reference). It is important that the variables used to hold owned +references are initialized to *NULL* for this to work; likewise, the proposed +return value is initialized to ``-1`` (failure) and only set to success after +the final call made is successful. + + +.. _api-embedding: + +Embedding Python +================ + +The one important task that only embedders (as opposed to extension writers) of +the Python interpreter have to worry about is the initialization, and possibly +the finalization, of the Python interpreter. Most functionality of the +interpreter can only be used after the interpreter has been initialized. + +.. index:: + single: Py_Initialize() + module: builtins + module: __main__ + module: sys + triple: module; search; path + single: path (in module sys) + +The basic initialization function is :c:func:`Py_Initialize`. This initializes +the table of loaded modules, and creates the fundamental modules +:mod:`builtins`, :mod:`__main__`, and :mod:`sys`. It also +initializes the module search path (``sys.path``). + +.. index:: single: PySys_SetArgvEx() + +:c:func:`Py_Initialize` does not set the "script argument list" (``sys.argv``). +If this variable is needed by Python code that will be executed later, it must +be set explicitly with a call to ``PySys_SetArgvEx(argc, argv, updatepath)`` +after the call to :c:func:`Py_Initialize`. 
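A minimal sketch of this startup sequence (assuming the program is compiled against :file:`Python.h` and linked with ``libpython``; the executed Python code is illustrative):

```c
#include <Python.h>

int
main(void)
{
    Py_Initialize();    /* creates builtins, __main__, sys and sys.path */

    /* Execute some Python code in the __main__ module's namespace. */
    PyRun_SimpleString("import sys\n"
                       "sys.stdout.write('embedded interpreter up\\n')\n");

    Py_Finalize();      /* tear the interpreter down again */
    return 0;
}
```

Error handling and the call to ``PySys_SetArgvEx`` are omitted here for brevity.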
+ +On most systems (in particular, on Unix and Windows, although the details are +slightly different), :c:func:`Py_Initialize` calculates the module search path +based upon its best guess for the location of the standard Python interpreter +executable, assuming that the Python library is found in a fixed location +relative to the Python interpreter executable. In particular, it looks for a +directory named :file:`lib/python{X.Y}` relative to the parent directory +where the executable named :file:`python` is found on the shell command search +path (the environment variable :envvar:`PATH`). + +For instance, if the Python executable is found in +:file:`/usr/local/bin/python`, it will assume that the libraries are in +:file:`/usr/local/lib/python{X.Y}`. (In fact, this particular path is also +the "fallback" location, used when no executable file named :file:`python` is +found along :envvar:`PATH`.) The user can override this behavior by setting the +environment variable :envvar:`PYTHONHOME`, or insert additional directories in +front of the standard path by setting :envvar:`PYTHONPATH`. + +.. index:: + single: Py_SetProgramName() + single: Py_GetPath() + single: Py_GetPrefix() + single: Py_GetExecPrefix() + single: Py_GetProgramFullPath() + +The embedding application can steer the search by calling +``Py_SetProgramName(file)`` *before* calling :c:func:`Py_Initialize`. Note that +:envvar:`PYTHONHOME` still overrides this and :envvar:`PYTHONPATH` is still +inserted in front of the standard path. An application that requires total +control has to provide its own implementation of :c:func:`Py_GetPath`, +:c:func:`Py_GetPrefix`, :c:func:`Py_GetExecPrefix`, and +:c:func:`Py_GetProgramFullPath` (all defined in :file:`Modules/getpath.c`). + +.. index:: single: Py_IsInitialized() + +Sometimes, it is desirable to "uninitialize" Python. 
For instance, the +application may want to start over (make another call to +:c:func:`Py_Initialize`) or the application is simply done with its use of +Python and wants to free memory allocated by Python. This can be accomplished +by calling :c:func:`Py_Finalize`. The function :c:func:`Py_IsInitialized` returns +true if Python is currently in the initialized state. More information about +these functions is given in a later chapter. Notice that :c:func:`Py_Finalize` +does *not* free all memory allocated by the Python interpreter, e.g. memory +allocated by extension modules currently cannot be released. + + +.. _api-debugging: + +Debugging Builds +================ + +Python can be built with several macros to enable extra checks of the +interpreter and extension modules. These checks tend to add a large amount of +overhead to the runtime so they are not enabled by default. + +A full list of the various types of debugging builds is in the file +:file:`Misc/SpecialBuilds.txt` in the Python source distribution. Builds are +available that support tracing of reference counts, debugging the memory +allocator, or low-level profiling of the main interpreter loop. Only the most +frequently-used builds will be described in the remainder of this section. + +Compiling the interpreter with the :c:macro:`Py_DEBUG` macro defined produces +what is generally meant by "a debug build" of Python. :c:macro:`Py_DEBUG` is +enabled in the Unix build by adding ``--with-pydebug`` to the +:file:`./configure` command. It is also implied by the presence of the +not-Python-specific :c:macro:`_DEBUG` macro. When :c:macro:`Py_DEBUG` is enabled +in the Unix build, compiler optimization is disabled. + +In addition to the reference count debugging described below, the following +extra checks are performed: + +* Extra checks are added to the object allocator. + +* Extra checks are added to the parser and compiler. + +* Downcasts from wide types to narrow types are checked for loss of information. 
+ +* A number of assertions are added to the dictionary and set implementations. + In addition, the set object acquires a :meth:`test_c_api` method. + +* Sanity checks of the input arguments are added to frame creation. + +* The storage for ints is initialized with a known invalid pattern to catch + reference to uninitialized digits. + +* Low-level tracing and extra exception checking are added to the runtime + virtual machine. + +* Extra checks are added to the memory arena implementation. + +* Extra debugging is added to the thread module. + +There may be additional checks not mentioned here. + +Defining :c:macro:`Py_TRACE_REFS` enables reference tracing. When defined, a +circular doubly linked list of active objects is maintained by adding two extra +fields to every :c:type:`PyObject`. Total allocations are tracked as well. Upon +exit, all existing references are printed. (In interactive mode this happens +after every statement run by the interpreter.) Implied by :c:macro:`Py_DEBUG`. + +Please refer to :file:`Misc/SpecialBuilds.txt` in the Python source distribution +for more detailed information. + diff --git a/lib/cpython-doc/c-api/iter.rst b/lib/cpython-doc/c-api/iter.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/iter.rst @@ -0,0 +1,47 @@ +.. highlightlang:: c + +.. _iterator: + +Iterator Protocol +================= + +There are only a couple of functions specifically for working with iterators. + +.. c:function:: int PyIter_Check(PyObject *o) + + Return true if the object *o* supports the iterator protocol. + + +.. c:function:: PyObject* PyIter_Next(PyObject *o) + + Return the next value from the iteration *o*. If the object is an iterator, + this retrieves the next value from the iteration, and returns *NULL* with no + exception set if there are no remaining items. If the object is not an + iterator, :exc:`TypeError` is raised, or if there is an error in retrieving the + item, returns *NULL* and passes along the exception. 
+ +To write a loop which iterates over an iterator, the C code should look +something like this:: + + PyObject *iterator = PyObject_GetIter(obj); + PyObject *item; + + if (iterator == NULL) { + /* propagate error */ + } + + while (item = PyIter_Next(iterator)) { + /* do something with item */ + ... + /* release reference when done */ + Py_DECREF(item); + } + + Py_DECREF(iterator); + + if (PyErr_Occurred()) { + /* propagate error */ + } + else { + /* continue doing useful work */ + } diff --git a/lib/cpython-doc/c-api/iterator.rst b/lib/cpython-doc/c-api/iterator.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/iterator.rst @@ -0,0 +1,50 @@ +.. highlightlang:: c + +.. _iterator-objects: + +Iterator Objects +---------------- + +Python provides two general-purpose iterator objects. The first, a sequence +iterator, works with an arbitrary sequence supporting the :meth:`__getitem__` +method. The second works with a callable object and a sentinel value, calling +the callable for each item in the sequence, and ending the iteration when the +sentinel value is returned. + + +.. c:var:: PyTypeObject PySeqIter_Type + + Type object for iterator objects returned by :c:func:`PySeqIter_New` and the + one-argument form of the :func:`iter` built-in function for built-in sequence + types. + + +.. c:function:: int PySeqIter_Check(op) + + Return true if the type of *op* is :c:data:`PySeqIter_Type`. + + +.. c:function:: PyObject* PySeqIter_New(PyObject *seq) + + Return an iterator that works with a general sequence object, *seq*. The + iteration ends when the sequence raises :exc:`IndexError` for the subscripting + operation. + + +.. c:var:: PyTypeObject PyCallIter_Type + + Type object for iterator objects returned by :c:func:`PyCallIter_New` and the + two-argument form of the :func:`iter` built-in function. + + +.. c:function:: int PyCallIter_Check(op) + + Return true if the type of *op* is :c:data:`PyCallIter_Type`. + + +.. 
c:function:: PyObject* PyCallIter_New(PyObject *callable, PyObject *sentinel) + + Return a new iterator. The first parameter, *callable*, can be any Python + callable object that can be called with no parameters; each call to it should + return the next item in the iteration. When *callable* returns a value equal to + *sentinel*, the iteration will be terminated. diff --git a/lib/cpython-doc/c-api/list.rst b/lib/cpython-doc/c-api/list.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/list.rst @@ -0,0 +1,151 @@ +.. highlightlang:: c + +.. _listobjects: + +List Objects +------------ + +.. index:: object: list + + +.. c:type:: PyListObject + + This subtype of :c:type:`PyObject` represents a Python list object. + + +.. c:var:: PyTypeObject PyList_Type + + This instance of :c:type:`PyTypeObject` represents the Python list type. + This is the same object as :class:`list` in the Python layer. + + +.. c:function:: int PyList_Check(PyObject *p) + + Return true if *p* is a list object or an instance of a subtype of the list + type. + + +.. c:function:: int PyList_CheckExact(PyObject *p) + + Return true if *p* is a list object, but not an instance of a subtype of + the list type. + + +.. c:function:: PyObject* PyList_New(Py_ssize_t len) + + Return a new list of length *len* on success, or *NULL* on failure. + + .. note:: + + If *len* is greater than zero, the returned list object's items are + set to ``NULL``. Thus you cannot use abstract API functions such as + :c:func:`PySequence_SetItem` or expose the object to Python code before + setting all items to a real object with :c:func:`PyList_SetItem`. + + +.. c:function:: Py_ssize_t PyList_Size(PyObject *list) + + .. index:: builtin: len + + Return the length of the list object in *list*; this is equivalent to + ``len(list)`` on a list object. + + +.. c:function:: Py_ssize_t PyList_GET_SIZE(PyObject *list) + + Macro form of :c:func:`PyList_Size` without error checking. + + +.. 
c:function:: PyObject* PyList_GetItem(PyObject *list, Py_ssize_t index) + + Return the object at position *index* in the list pointed to by *list*. The + position must be positive, indexing from the end of the list is not + supported. If *index* is out of bounds, return *NULL* and set an + :exc:`IndexError` exception. + + +.. c:function:: PyObject* PyList_GET_ITEM(PyObject *list, Py_ssize_t i) + + Macro form of :c:func:`PyList_GetItem` without error checking. + + +.. c:function:: int PyList_SetItem(PyObject *list, Py_ssize_t index, PyObject *item) + + Set the item at index *index* in list to *item*. Return ``0`` on success + or ``-1`` on failure. + + .. note:: + + This function "steals" a reference to *item* and discards a reference to + an item already in the list at the affected position. + + +.. c:function:: void PyList_SET_ITEM(PyObject *list, Py_ssize_t i, PyObject *o) + + Macro form of :c:func:`PyList_SetItem` without error checking. This is + normally only used to fill in new lists where there is no previous content. + + .. note:: + + This macro "steals" a reference to *item*, and, unlike + :c:func:`PyList_SetItem`, does *not* discard a reference to any item that + is being replaced; any reference in *list* at position *i* will be + leaked. + + +.. c:function:: int PyList_Insert(PyObject *list, Py_ssize_t index, PyObject *item) + + Insert the item *item* into list *list* in front of index *index*. Return + ``0`` if successful; return ``-1`` and set an exception if unsuccessful. + Analogous to ``list.insert(index, item)``. + + +.. c:function:: int PyList_Append(PyObject *list, PyObject *item) + + Append the object *item* at the end of list *list*. Return ``0`` if + successful; return ``-1`` and set an exception if unsuccessful. Analogous + to ``list.append(item)``. + + +.. c:function:: PyObject* PyList_GetSlice(PyObject *list, Py_ssize_t low, Py_ssize_t high) + + Return a list of the objects in *list* containing the objects *between* *low* + and *high*. 
Return *NULL* and set an exception if unsuccessful. Analogous + to ``list[low:high]``. Negative indices, as when slicing from Python, are not + supported. + + +.. c:function:: int PyList_SetSlice(PyObject *list, Py_ssize_t low, Py_ssize_t high, PyObject *itemlist) + + Set the slice of *list* between *low* and *high* to the contents of + *itemlist*. Analogous to ``list[low:high] = itemlist``. The *itemlist* may + be *NULL*, indicating the assignment of an empty list (slice deletion). + Return ``0`` on success, ``-1`` on failure. Negative indices, as when + slicing from Python, are not supported. + + +.. c:function:: int PyList_Sort(PyObject *list) + + Sort the items of *list* in place. Return ``0`` on success, ``-1`` on + failure. This is equivalent to ``list.sort()``. + + +.. c:function:: int PyList_Reverse(PyObject *list) + + Reverse the items of *list* in place. Return ``0`` on success, ``-1`` on + failure. This is the equivalent of ``list.reverse()``. + + +.. c:function:: PyObject* PyList_AsTuple(PyObject *list) + + .. index:: builtin: tuple + + Return a new tuple object containing the contents of *list*; equivalent to + ``tuple(list)``. + + +.. c:function:: int PyList_ClearFreeList() + + Clear the free list. Return the total number of freed items. + + .. versionadded:: 3.3 diff --git a/lib/cpython-doc/c-api/long.rst b/lib/cpython-doc/c-api/long.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/long.rst @@ -0,0 +1,237 @@ +.. highlightlang:: c + +.. _longobjects: + +Integer Objects +--------------- + +.. index:: object: long integer + object: integer + +All integers are implemented as "long" integer objects of arbitrary size. + +.. c:type:: PyLongObject + + This subtype of :c:type:`PyObject` represents a Python integer object. + + +.. c:var:: PyTypeObject PyLong_Type + + This instance of :c:type:`PyTypeObject` represents the Python integer type. + This is the same object as :class:`int` in the Python layer. + + +.. 
c:function:: int PyLong_Check(PyObject *p) + + Return true if its argument is a :c:type:`PyLongObject` or a subtype of + :c:type:`PyLongObject`. + + +.. c:function:: int PyLong_CheckExact(PyObject *p) + + Return true if its argument is a :c:type:`PyLongObject`, but not a subtype of + :c:type:`PyLongObject`. + + +.. c:function:: PyObject* PyLong_FromLong(long v) + + Return a new :c:type:`PyLongObject` object from *v*, or *NULL* on failure. + + The current implementation keeps an array of integer objects for all integers + between ``-5`` and ``256``, when you create an int in that range you actually + just get back a reference to the existing object. So it should be possible to + change the value of ``1``. I suspect the behaviour of Python in this case is + undefined. :-) + + +.. c:function:: PyObject* PyLong_FromUnsignedLong(unsigned long v) + + Return a new :c:type:`PyLongObject` object from a C :c:type:`unsigned long`, or + *NULL* on failure. + + +.. c:function:: PyObject* PyLong_FromSsize_t(Py_ssize_t v) + + Return a new :c:type:`PyLongObject` object from a C :c:type:`Py_ssize_t`, or + *NULL* on failure. + + +.. c:function:: PyObject* PyLong_FromSize_t(size_t v) + + Return a new :c:type:`PyLongObject` object from a C :c:type:`size_t`, or + *NULL* on failure. + + +.. c:function:: PyObject* PyLong_FromLongLong(PY_LONG_LONG v) + + Return a new :c:type:`PyLongObject` object from a C :c:type:`long long`, or *NULL* + on failure. + + +.. c:function:: PyObject* PyLong_FromUnsignedLongLong(unsigned PY_LONG_LONG v) + + Return a new :c:type:`PyLongObject` object from a C :c:type:`unsigned long long`, + or *NULL* on failure. + + +.. c:function:: PyObject* PyLong_FromDouble(double v) + + Return a new :c:type:`PyLongObject` object from the integer part of *v*, or + *NULL* on failure. + + +.. 
c:function:: PyObject* PyLong_FromString(char *str, char **pend, int base) + + Return a new :c:type:`PyLongObject` based on the string value in *str*, which + is interpreted according to the radix in *base*. If *pend* is non-*NULL*, + *\*pend* will point to the first character in *str* which follows the + representation of the number. If *base* is ``0``, the radix will be + determined based on the leading characters of *str*: if *str* starts with + ``'0x'`` or ``'0X'``, radix 16 will be used; if *str* starts with ``'0o'`` or + ``'0O'``, radix 8 will be used; if *str* starts with ``'0b'`` or ``'0B'``, + radix 2 will be used; otherwise radix 10 will be used. If *base* is not + ``0``, it must be between ``2`` and ``36``, inclusive. Leading spaces are + ignored. If there are no digits, :exc:`ValueError` will be raised. + + +.. c:function:: PyObject* PyLong_FromUnicode(Py_UNICODE *u, Py_ssize_t length, int base) + + Convert a sequence of Unicode digits to a Python integer value. The Unicode + string is first encoded to a byte string using :c:func:`PyUnicode_EncodeDecimal` + and then converted using :c:func:`PyLong_FromString`. + + .. deprecated-removed:: 3.3 4.0 + Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using + :c:func:`PyLong_FromUnicodeObject`. + + +.. c:function:: PyObject* PyLong_FromUnicodeObject(PyObject *u, int base) + + Convert a sequence of Unicode digits in the string *u* to a Python integer + value. The Unicode string is first encoded to a byte string using + :c:func:`PyUnicode_EncodeDecimal` and then converted using + :c:func:`PyLong_FromString`. + + .. versionadded:: 3.3 + + +.. c:function:: PyObject* PyLong_FromVoidPtr(void *p) + + Create a Python integer from the pointer *p*. The pointer value can be + retrieved from the resulting value using :c:func:`PyLong_AsVoidPtr`. + + +.. XXX alias PyLong_AS_LONG (for now) +.. c:function:: long PyLong_AsLong(PyObject *pylong) + + .. 
index:: + single: LONG_MAX + single: OverflowError (built-in exception) + + Return a C :c:type:`long` representation of the contents of *pylong*. If + *pylong* is greater than :const:`LONG_MAX`, raise an :exc:`OverflowError`, + and return -1. Convert non-long objects automatically to long first, + and return -1 if that raises exceptions. + +.. c:function:: long PyLong_AsLongAndOverflow(PyObject *pylong, int *overflow) + + Return a C :c:type:`long` representation of the contents of + *pylong*. If *pylong* is greater than :const:`LONG_MAX` or less + than :const:`LONG_MIN`, set *\*overflow* to ``1`` or ``-1``, + respectively, and return ``-1``; otherwise, set *\*overflow* to + ``0``. If any other exception occurs (for example a TypeError or + MemoryError), then ``-1`` will be returned and *\*overflow* will + be ``0``. + + +.. c:function:: PY_LONG_LONG PyLong_AsLongLongAndOverflow(PyObject *pylong, int *overflow) + + Return a C :c:type:`long long` representation of the contents of + *pylong*. If *pylong* is greater than :const:`PY_LLONG_MAX` or less + than :const:`PY_LLONG_MIN`, set *\*overflow* to ``1`` or ``-1``, + respectively, and return ``-1``; otherwise, set *\*overflow* to + ``0``. If any other exception occurs (for example a TypeError or + MemoryError), then ``-1`` will be returned and *\*overflow* will + be ``0``. + + .. versionadded:: 3.2 + + +.. c:function:: Py_ssize_t PyLong_AsSsize_t(PyObject *pylong) + + .. index:: + single: PY_SSIZE_T_MAX + single: OverflowError (built-in exception) + + Return a C :c:type:`Py_ssize_t` representation of the contents of *pylong*. + If *pylong* is greater than :const:`PY_SSIZE_T_MAX`, an :exc:`OverflowError` + is raised and ``-1`` will be returned. + + +.. c:function:: unsigned long PyLong_AsUnsignedLong(PyObject *pylong) + + .. index:: + single: ULONG_MAX + single: OverflowError (built-in exception) + + Return a C :c:type:`unsigned long` representation of the contents of *pylong*. 
+ If *pylong* is greater than :const:`ULONG_MAX`, an :exc:`OverflowError` is + raised. + + +.. c:function:: size_t PyLong_AsSize_t(PyObject *pylong) + + Return a :c:type:`size_t` representation of the contents of *pylong*. If + *pylong* is greater than the maximum value for a :c:type:`size_t`, an + :exc:`OverflowError` is raised. + + +.. c:function:: PY_LONG_LONG PyLong_AsLongLong(PyObject *pylong) + + .. index:: + single: OverflowError (built-in exception) + + Return a C :c:type:`long long` from a Python integer. If *pylong* + cannot be represented as a :c:type:`long long`, an + :exc:`OverflowError` is raised and ``-1`` is returned. + + +.. c:function:: unsigned PY_LONG_LONG PyLong_AsUnsignedLongLong(PyObject *pylong) + + .. index:: + single: OverflowError (built-in exception) + + Return a C :c:type:`unsigned long long` from a Python integer. If + *pylong* cannot be represented as an :c:type:`unsigned long long`, + an :exc:`OverflowError` is raised and ``(unsigned long long)-1`` is + returned. + + .. versionchanged:: 3.1 + A negative *pylong* now raises :exc:`OverflowError`, not :exc:`TypeError`. + + +.. c:function:: unsigned long PyLong_AsUnsignedLongMask(PyObject *io) + + Return a C :c:type:`unsigned long` from a Python integer, without checking for + overflow. + + +.. c:function:: unsigned PY_LONG_LONG PyLong_AsUnsignedLongLongMask(PyObject *io) + + Return a C :c:type:`unsigned long long` from a Python integer, without + checking for overflow. + + +.. c:function:: double PyLong_AsDouble(PyObject *pylong) + + Return a C :c:type:`double` representation of the contents of *pylong*. If + *pylong* cannot be approximately represented as a :c:type:`double`, an + :exc:`OverflowError` exception is raised and ``-1.0`` will be returned. + + +.. c:function:: void* PyLong_AsVoidPtr(PyObject *pylong) + + Convert a Python integer *pylong* to a C :c:type:`void` pointer. + If *pylong* cannot be converted, an :exc:`OverflowError` will be raised. 
This + is only assured to produce a usable :c:type:`void` pointer for values created + with :c:func:`PyLong_FromVoidPtr`. diff --git a/lib/cpython-doc/c-api/mapping.rst b/lib/cpython-doc/c-api/mapping.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/mapping.rst @@ -0,0 +1,79 @@ +.. highlightlang:: c + +.. _mapping: + +Mapping Protocol +================ + + +.. c:function:: int PyMapping_Check(PyObject *o) + + Return ``1`` if the object provides mapping protocol, and ``0`` otherwise. This + function always succeeds. + + +.. c:function:: Py_ssize_t PyMapping_Size(PyObject *o) + Py_ssize_t PyMapping_Length(PyObject *o) + + .. index:: builtin: len + + Returns the number of keys in object *o* on success, and ``-1`` on failure. For + objects that do not provide mapping protocol, this is equivalent to the Python + expression ``len(o)``. + + +.. c:function:: int PyMapping_DelItemString(PyObject *o, char *key) + + Remove the mapping for object *key* from the object *o*. Return ``-1`` on + failure. This is equivalent to the Python statement ``del o[key]``. + + +.. c:function:: int PyMapping_DelItem(PyObject *o, PyObject *key) + + Remove the mapping for object *key* from the object *o*. Return ``-1`` on + failure. This is equivalent to the Python statement ``del o[key]``. + + +.. c:function:: int PyMapping_HasKeyString(PyObject *o, char *key) + + On success, return ``1`` if the mapping object has the key *key* and ``0`` + otherwise. This is equivalent to the Python expression ``key in o``. + This function always succeeds. + + +.. c:function:: int PyMapping_HasKey(PyObject *o, PyObject *key) + + Return ``1`` if the mapping object has the key *key* and ``0`` otherwise. This + is equivalent to the Python expression ``key in o``. This function always + succeeds. + + +.. c:function:: PyObject* PyMapping_Keys(PyObject *o) + + On success, return a list of the keys in object *o*. On failure, return *NULL*. 
+ This is equivalent to the Python expression ``list(o.keys())``. + + +.. c:function:: PyObject* PyMapping_Values(PyObject *o) + + On success, return a list of the values in object *o*. On failure, return + *NULL*. This is equivalent to the Python expression ``list(o.values())``. + + +.. c:function:: PyObject* PyMapping_Items(PyObject *o) + + On success, return a list of the items in object *o*, where each item is a tuple + containing a key-value pair. On failure, return *NULL*. This is equivalent to + the Python expression ``list(o.items())``. + + +.. c:function:: PyObject* PyMapping_GetItemString(PyObject *o, char *key) + + Return element of *o* corresponding to the object *key* or *NULL* on failure. + This is the equivalent of the Python expression ``o[key]``. + + +.. c:function:: int PyMapping_SetItemString(PyObject *o, char *key, PyObject *v) + + Map the object *key* to the value *v* in object *o*. Returns ``-1`` on failure. + This is the equivalent of the Python statement ``o[key] = v``. diff --git a/lib/cpython-doc/c-api/marshal.rst b/lib/cpython-doc/c-api/marshal.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/marshal.rst @@ -0,0 +1,89 @@ +.. highlightlang:: c + +.. _marshalling-utils: + +Data marshalling support +======================== + +These routines allow C code to work with serialized objects using the same +data format as the :mod:`marshal` module. There are functions to write data +into the serialization format, and additional functions that can be used to +read the data back. Files used to store marshalled data must be opened in +binary mode. + +Numeric values are stored with the least significant byte first. + +The module supports several versions of the data format: version 0 is the +historical version; version 1 shares interned strings in the file, and upon +unmarshalling; version 2 uses a binary format for floating point numbers. +*Py_MARSHAL_VERSION* indicates the current file format (currently 2). + + +.. 
c:function:: void PyMarshal_WriteLongToFile(long value, FILE *file, int version) + + Marshal a :c:type:`long` integer, *value*, to *file*. This will only write + the least-significant 32 bits of *value*, regardless of the size of the + native :c:type:`long` type. *version* indicates the file format. + + +.. c:function:: void PyMarshal_WriteObjectToFile(PyObject *value, FILE *file, int version) + + Marshal a Python object, *value*, to *file*. + *version* indicates the file format. + + +.. c:function:: PyObject* PyMarshal_WriteObjectToString(PyObject *value, int version) + + Return a string object containing the marshalled representation of *value*. + *version* indicates the file format. + + +The following functions allow marshalled values to be read back in. + +XXX What about error detection? It appears that reading past the end of the +file will always result in a negative numeric value (where that's relevant), +but it's not clear that negative values won't be handled properly when there's +no error. What's the right way to tell? Should only non-negative values be +written using these routines? + + +.. c:function:: long PyMarshal_ReadLongFromFile(FILE *file) + + Return a C :c:type:`long` from the data stream in a :c:type:`FILE\*` opened + for reading. Only a 32-bit value can be read in using this function, + regardless of the native size of :c:type:`long`. + + +.. c:function:: int PyMarshal_ReadShortFromFile(FILE *file) + + Return a C :c:type:`short` from the data stream in a :c:type:`FILE\*` opened + for reading. Only a 16-bit value can be read in using this function, + regardless of the native size of :c:type:`short`. + + +.. c:function:: PyObject* PyMarshal_ReadObjectFromFile(FILE *file) + + Return a Python object from the data stream in a :c:type:`FILE\*` opened for + reading. On error, sets the appropriate exception (:exc:`EOFError` or + :exc:`TypeError`) and returns *NULL*. + + +.. 
c:function:: PyObject* PyMarshal_ReadLastObjectFromFile(FILE *file) + + Return a Python object from the data stream in a :c:type:`FILE\*` opened for + reading. Unlike :c:func:`PyMarshal_ReadObjectFromFile`, this function + assumes that no further objects will be read from the file, allowing it to + aggressively load file data into memory so that the de-serialization can + operate from data in memory rather than reading a byte at a time from the + file. Only use this variant if you are certain that you won't be reading + anything else from the file. On error, sets the appropriate exception + (:exc:`EOFError` or :exc:`TypeError`) and returns *NULL*. + + +.. c:function:: PyObject* PyMarshal_ReadObjectFromString(char *string, Py_ssize_t len) + + Return a Python object from the data stream in a character buffer + containing *len* bytes pointed to by *string*. On error, sets the + appropriate exception (:exc:`EOFError` or :exc:`TypeError`) and returns + *NULL*. + diff --git a/lib/cpython-doc/c-api/memory.rst b/lib/cpython-doc/c-api/memory.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/memory.rst @@ -0,0 +1,209 @@ +.. highlightlang:: c + + +.. _memory: + +***************** +Memory Management +***************** + +.. sectionauthor:: Vladimir Marangozov + + + +.. _memoryoverview: + +Overview +======== + +Memory management in Python involves a private heap containing all Python +objects and data structures. The management of this private heap is ensured +internally by the *Python memory manager*. The Python memory manager has +different components which deal with various dynamic storage management aspects, +like sharing, segmentation, preallocation or caching. + +At the lowest level, a raw memory allocator ensures that there is enough room in +the private heap for storing all Python-related data by interacting with the +memory manager of the operating system.
On top of the raw memory allocator, +several object-specific allocators operate on the same heap and implement +distinct memory management policies adapted to the peculiarities of every object +type. For example, integer objects are managed differently within the heap than +strings, tuples or dictionaries because integers imply different storage +requirements and speed/space tradeoffs. The Python memory manager thus delegates +some of the work to the object-specific allocators, but ensures that the latter +operate within the bounds of the private heap. + +It is important to understand that the management of the Python heap is +performed by the interpreter itself and that the user has no control over it, +even if she regularly manipulates object pointers to memory blocks inside that +heap. The allocation of heap space for Python objects and other internal +buffers is performed on demand by the Python memory manager through the Python/C +API functions listed in this document. + +.. index:: + single: malloc() + single: calloc() + single: realloc() + single: free() + +To avoid memory corruption, extension writers should never try to operate on +Python objects with the functions exported by the C library: :c:func:`malloc`, +:c:func:`calloc`, :c:func:`realloc` and :c:func:`free`. This will result in mixed +calls between the C allocator and the Python memory manager with fatal +consequences, because they implement different algorithms and operate on +different heaps. However, one may safely allocate and release memory blocks +with the C library allocator for individual purposes, as shown in the following +example:: + + PyObject *res; + char *buf = (char *) malloc(BUFSIZ); /* for I/O */ + + if (buf == NULL) + return PyErr_NoMemory(); + ...Do some I/O operation involving buf... + res = PyString_FromString(buf); + free(buf); /* malloc'ed */ + return res; + +In this example, the memory request for the I/O buffer is handled by the C +library allocator. 
The Python memory manager is involved only in the allocation +of the string object returned as a result. + +In most situations, however, it is recommended to allocate memory from the +Python heap specifically because the latter is under control of the Python +memory manager. For example, this is required when the interpreter is extended +with new object types written in C. Another reason for using the Python heap is +the desire to *inform* the Python memory manager about the memory needs of the +extension module. Even when the requested memory is used exclusively for +internal, highly-specific purposes, delegating all memory requests to the Python +memory manager causes the interpreter to have a more accurate image of its +memory footprint as a whole. Consequently, under certain circumstances, the +Python memory manager may or may not trigger appropriate actions, like garbage +collection, memory compaction or other preventive procedures. Note that by using +the C library allocator as shown in the previous example, the allocated memory +for the I/O buffer escapes completely the Python memory manager. + + +.. _memoryinterface: + +Memory Interface +================ + +The following function sets, modeled after the ANSI C standard, but specifying +behavior when requesting zero bytes, are available for allocating and releasing +memory from the Python heap: + + +.. c:function:: void* PyMem_Malloc(size_t n) + + Allocates *n* bytes and returns a pointer of type :c:type:`void\*` to the + allocated memory, or *NULL* if the request fails. Requesting zero bytes returns + a distinct non-*NULL* pointer if possible, as if :c:func:`PyMem_Malloc(1)` had + been called instead. The memory will not have been initialized in any way. + + +.. c:function:: void* PyMem_Realloc(void *p, size_t n) + + Resizes the memory block pointed to by *p* to *n* bytes. The contents will be + unchanged to the minimum of the old and the new sizes. 
If *p* is *NULL*, the + call is equivalent to :c:func:`PyMem_Malloc(n)`; else if *n* is equal to zero, + the memory block is resized but is not freed, and the returned pointer is + non-*NULL*. Unless *p* is *NULL*, it must have been returned by a previous call + to :c:func:`PyMem_Malloc` or :c:func:`PyMem_Realloc`. If the request fails, + :c:func:`PyMem_Realloc` returns *NULL* and *p* remains a valid pointer to the + previous memory area. + + +.. c:function:: void PyMem_Free(void *p) + + Frees the memory block pointed to by *p*, which must have been returned by a + previous call to :c:func:`PyMem_Malloc` or :c:func:`PyMem_Realloc`. Otherwise, or + if :c:func:`PyMem_Free(p)` has been called before, undefined behavior occurs. If + *p* is *NULL*, no operation is performed. + +The following type-oriented macros are provided for convenience. Note that +*TYPE* refers to any C type. + + +.. c:function:: TYPE* PyMem_New(TYPE, size_t n) + + Same as :c:func:`PyMem_Malloc`, but allocates ``(n * sizeof(TYPE))`` bytes of + memory. Returns a pointer cast to :c:type:`TYPE\*`. The memory will not have + been initialized in any way. + + +.. c:function:: TYPE* PyMem_Resize(void *p, TYPE, size_t n) + + Same as :c:func:`PyMem_Realloc`, but the memory block is resized to ``(n * + sizeof(TYPE))`` bytes. Returns a pointer cast to :c:type:`TYPE\*`. On return, + *p* will be a pointer to the new memory area, or *NULL* in the event of + failure. This is a C preprocessor macro; p is always reassigned. Save + the original value of p to avoid losing memory when handling errors. + + +.. c:function:: void PyMem_Del(void *p) + + Same as :c:func:`PyMem_Free`. + +In addition, the following macro sets are provided for calling the Python memory +allocator directly, without involving the C API functions listed above. However, +note that their use does not preserve binary compatibility across Python +versions and is therefore deprecated in extension modules. 
+ +:c:func:`PyMem_MALLOC`, :c:func:`PyMem_REALLOC`, :c:func:`PyMem_FREE`. + +:c:func:`PyMem_NEW`, :c:func:`PyMem_RESIZE`, :c:func:`PyMem_DEL`. + + +.. _memoryexamples: + +Examples +======== + +Here is the example from section :ref:`memoryoverview`, rewritten so that the +I/O buffer is allocated from the Python heap by using the first function set:: + + PyObject *res; + char *buf = (char *) PyMem_Malloc(BUFSIZ); /* for I/O */ + + if (buf == NULL) + return PyErr_NoMemory(); + /* ...Do some I/O operation involving buf... */ + res = PyString_FromString(buf); + PyMem_Free(buf); /* allocated with PyMem_Malloc */ + return res; + +The same code using the type-oriented function set:: + + PyObject *res; + char *buf = PyMem_New(char, BUFSIZ); /* for I/O */ + + if (buf == NULL) + return PyErr_NoMemory(); + /* ...Do some I/O operation involving buf... */ + res = PyString_FromString(buf); + PyMem_Del(buf); /* allocated with PyMem_New */ + return res; + +Note that in the two examples above, the buffer is always manipulated via +functions belonging to the same set. Indeed, it is required to use the same +memory API family for a given memory block, so that the risk of mixing different +allocators is reduced to a minimum. The following code sequence contains two +errors, one of which is labeled as *fatal* because it mixes two different +allocators operating on different heaps. :: + + char *buf1 = PyMem_New(char, BUFSIZ); + char *buf2 = (char *) malloc(BUFSIZ); + char *buf3 = (char *) PyMem_Malloc(BUFSIZ); + ... + PyMem_Del(buf3); /* Wrong -- should be PyMem_Free() */ + free(buf2); /* Right -- allocated via malloc() */ + free(buf1); /* Fatal -- should be PyMem_Del() */ + +In addition to the functions aimed at handling raw memory blocks from the Python +heap, objects in Python are allocated and released with :c:func:`PyObject_New`, +:c:func:`PyObject_NewVar` and :c:func:`PyObject_Del`. + +These will be explained in the next chapter on defining and implementing new +object types in C. 
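On CPython, the same-family pairing rule from the examples above can even be observed from Python code through :mod:`ctypes`, since the ``PyMem_*`` functions are exported by the interpreter. This is a CPython-specific sketch; only the function names come from this document, while the ``ctypes`` wiring is an illustration:

```python
import ctypes

# Bind PyMem_Malloc / PyMem_Free from the running interpreter
# (CPython-specific; these symbols are exported by the interpreter binary).
pymem_malloc = ctypes.pythonapi.PyMem_Malloc
pymem_malloc.argtypes = [ctypes.c_size_t]
pymem_malloc.restype = ctypes.c_void_p

pymem_free = ctypes.pythonapi.PyMem_Free
pymem_free.argtypes = [ctypes.c_void_p]
pymem_free.restype = None

buf = pymem_malloc(16)
assert buf is not None           # a NULL return would mean the request failed
ctypes.memmove(buf, b"hello", 5)
data = ctypes.string_at(buf, 5)
pymem_free(buf)                  # same family: PyMem_Malloc pairs with PyMem_Free
```

Releasing ``buf`` with the C library's :c:func:`free` instead would be exactly the fatal allocator mix-up shown in the last code sequence above.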
+ diff --git a/lib/cpython-doc/c-api/memoryview.rst b/lib/cpython-doc/c-api/memoryview.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/memoryview.rst @@ -0,0 +1,52 @@ +.. highlightlang:: c + +.. _memoryview-objects: + +.. index:: + object: memoryview + +MemoryView objects +------------------ + +A :class:`memoryview` object exposes the C level :ref:`buffer interface +` as a Python object which can then be passed around like +any other object. + + +.. c:function:: PyObject *PyMemoryView_FromObject(PyObject *obj) + + Create a memoryview object from an object that provides the buffer interface. + If *obj* supports writable buffer exports, the memoryview object will be + readable and writable, otherwise it will be read-only. + + +.. c:function:: PyObject *PyMemoryView_FromBuffer(Py_buffer *view) + + Create a memoryview object wrapping the given buffer structure *view*. + The memoryview object then owns the buffer represented by *view*, which + means you shouldn't try to call :c:func:`PyBuffer_Release` yourself: it + will be done on deallocation of the memoryview object. + + +.. c:function:: PyObject *PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order) + + Create a memoryview object to a contiguous chunk of memory (in either + 'C' or 'F'ortran *order*) from an object that defines the buffer + interface. If memory is contiguous, the memoryview object points to the + original memory. Otherwise, a copy is made and the memoryview points to a + new bytes object. + + +.. c:function:: int PyMemoryView_Check(PyObject *obj) + + Return true if the object *obj* is a memoryview object. It is not + currently allowed to create subclasses of :class:`memoryview`. + + +.. c:function:: Py_buffer *PyMemoryView_GET_BUFFER(PyObject *obj) + + Return a pointer to the buffer structure wrapped by the given + memoryview object. The object **must** be a memoryview instance; + this macro doesn't check its type, you must do it yourself or you + will risk crashes. 
+ diff --git a/lib/cpython-doc/c-api/method.rst b/lib/cpython-doc/c-api/method.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/method.rst @@ -0,0 +1,100 @@ +.. highlightlang:: c + +.. _instancemethod-objects: + +Instance Method Objects +----------------------- + +.. index:: object: instancemethod + +An instance method is a wrapper for a :c:data:`PyCFunction` and the new way +to bind a :c:data:`PyCFunction` to a class object. It replaces the former call +``PyMethod_New(func, NULL, class)``. + + +.. c:var:: PyTypeObject PyInstanceMethod_Type + + This instance of :c:type:`PyTypeObject` represents the Python instance + method type. It is not exposed to Python programs. + + +.. c:function:: int PyInstanceMethod_Check(PyObject *o) + + Return true if *o* is an instance method object (has type + :c:data:`PyInstanceMethod_Type`). The parameter must not be *NULL*. + + +.. c:function:: PyObject* PyInstanceMethod_New(PyObject *func) + + Return a new instance method object, with *func* being any callable object. + *func* is the function that will be called when the instance method is + called. + + +.. c:function:: PyObject* PyInstanceMethod_Function(PyObject *im) + + Return the function object associated with the instance method *im*. + + +.. c:function:: PyObject* PyInstanceMethod_GET_FUNCTION(PyObject *im) + + Macro version of :c:func:`PyInstanceMethod_Function` which avoids error checking. + + +.. _method-objects: + +Method Objects +-------------- + +.. index:: object: method + +Methods are bound function objects. Methods are always bound to an instance of +a user-defined class. Unbound methods (methods bound to a class object) are +no longer available. + + +.. c:var:: PyTypeObject PyMethod_Type + + .. index:: single: MethodType (in module types) + + This instance of :c:type:`PyTypeObject` represents the Python method type. This + is exposed to Python programs as ``types.MethodType``. + + +.. 
c:function:: int PyMethod_Check(PyObject *o) + + Return true if *o* is a method object (has type :c:data:`PyMethod_Type`). The + parameter must not be *NULL*. + + +.. c:function:: PyObject* PyMethod_New(PyObject *func, PyObject *self) + + Return a new method object, with *func* being any callable object and *self* + the instance the method should be bound to. *func* is the function that will + be called when the method is called. *self* must not be *NULL*. + + +.. c:function:: PyObject* PyMethod_Function(PyObject *meth) + + Return the function object associated with the method *meth*. + + +.. c:function:: PyObject* PyMethod_GET_FUNCTION(PyObject *meth) + + Macro version of :c:func:`PyMethod_Function` which avoids error checking. + + +.. c:function:: PyObject* PyMethod_Self(PyObject *meth) + + Return the instance associated with the method *meth*. + + +.. c:function:: PyObject* PyMethod_GET_SELF(PyObject *meth) + + Macro version of :c:func:`PyMethod_Self` which avoids error checking. + + +.. c:function:: int PyMethod_ClearFreeList() + + Clear the free list. Return the total number of freed items. + diff --git a/lib/cpython-doc/c-api/module.rst b/lib/cpython-doc/c-api/module.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/module.rst @@ -0,0 +1,231 @@ +.. highlightlang:: c + +.. _moduleobjects: + +Module Objects +-------------- + +.. index:: object: module + +There are only a few functions special to module objects. + + +.. c:var:: PyTypeObject PyModule_Type + + .. index:: single: ModuleType (in module types) + + This instance of :c:type:`PyTypeObject` represents the Python module type. This + is exposed to Python programs as ``types.ModuleType``. + + +.. c:function:: int PyModule_Check(PyObject *p) + + Return true if *p* is a module object, or a subtype of a module object. + + +.. c:function:: int PyModule_CheckExact(PyObject *p) + + Return true if *p* is a module object, but not a subtype of + :c:data:`PyModule_Type`. + + +.. 
c:function:: PyObject* PyModule_NewObject(PyObject *name) + + .. index:: + single: __name__ (module attribute) + single: __doc__ (module attribute) + single: __file__ (module attribute) + + Return a new module object with the :attr:`__name__` attribute set to *name*. + Only the module's :attr:`__doc__` and :attr:`__name__` attributes are filled in; + the caller is responsible for providing a :attr:`__file__` attribute. + + .. versionadded:: 3.3 + + +.. c:function:: PyObject* PyModule_New(const char *name) + + Similar to :c:func:`PyModule_NewObject`, but the name is a UTF-8 encoded + string instead of a Unicode object. + + +.. c:function:: PyObject* PyModule_GetDict(PyObject *module) + + .. index:: single: __dict__ (module attribute) + + Return the dictionary object that implements *module*'s namespace; this object + is the same as the :attr:`__dict__` attribute of the module object. This + function never fails. It is recommended extensions use other + :c:func:`PyModule_\*` and :c:func:`PyObject_\*` functions rather than directly + manipulate a module's :attr:`__dict__`. + + +.. c:function:: PyObject* PyModule_GetNameObject(PyObject *module) + + .. index:: + single: __name__ (module attribute) + single: SystemError (built-in exception) + + Return *module*'s :attr:`__name__` value. If the module does not provide one, + or if it is not a string, :exc:`SystemError` is raised and *NULL* is returned. + + .. versionadded:: 3.3 + + +.. c:function:: char* PyModule_GetName(PyObject *module) + + Similar to :c:func:`PyModule_GetNameObject` but return the name encoded to + ``'utf-8'``. + + +.. c:function:: PyObject* PyModule_GetFilenameObject(PyObject *module) + + .. index:: + single: __file__ (module attribute) + single: SystemError (built-in exception) + + Return the name of the file from which *module* was loaded using *module*'s + :attr:`__file__` attribute. 
If this is not defined, or if it is not a + unicode string, raise :exc:`SystemError` and return *NULL*; otherwise return + a reference to a Unicode object. + + .. versionadded:: 3.2 + + +.. c:function:: char* PyModule_GetFilename(PyObject *module) + + Similar to :c:func:`PyModule_GetFilenameObject` but return the filename + encoded to 'utf-8'. + + .. deprecated:: 3.2 + :c:func:`PyModule_GetFilename` raises :c:type:`UnicodeEncodeError` on + unencodable filenames, use :c:func:`PyModule_GetFilenameObject` instead. + + +.. c:function:: void* PyModule_GetState(PyObject *module) + + Return the "state" of the module, that is, a pointer to the block of memory + allocated at module creation time, or *NULL*. See + :c:member:`PyModuleDef.m_size`. + + +.. c:function:: PyModuleDef* PyModule_GetDef(PyObject *module) + + Return a pointer to the :c:type:`PyModuleDef` struct from which the module was + created, or *NULL* if the module wasn't created with + :c:func:`PyModule_Create`. + + +Initializing C modules +^^^^^^^^^^^^^^^^^^^^^^ + +These functions are usually used in the module initialization function. + +.. c:function:: PyObject* PyModule_Create(PyModuleDef *module) + + Create a new module object, given the definition in *module*. This behaves + like :c:func:`PyModule_Create2` with *module_api_version* set to + :const:`PYTHON_API_VERSION`. + + +.. c:function:: PyObject* PyModule_Create2(PyModuleDef *module, int module_api_version) + + Create a new module object, given the definition in *module*, assuming the + API version *module_api_version*. If that version does not match the version + of the running interpreter, a :exc:`RuntimeWarning` is emitted. + + .. note:: + + Most uses of this function should be using :c:func:`PyModule_Create` + instead; only use this if you are sure you need it. + + +.. c:type:: PyModuleDef + + This struct holds all information that is needed to create a module object. 
+ There is usually only one static variable of that type for each module, which + is statically initialized and then passed to :c:func:`PyModule_Create` in the + module initialization function. + + .. c:member:: PyModuleDef_Base m_base + + Always initialize this member to :const:`PyModuleDef_HEAD_INIT`. + + .. c:member:: char* m_name + + Name for the new module. + + .. c:member:: char* m_doc + + Docstring for the module; usually a docstring variable created with + :c:func:`PyDoc_STRVAR` is used. + + .. c:member:: Py_ssize_t m_size + + If the module object needs additional memory, this should be set to the + number of bytes to allocate; a pointer to the block of memory can be + retrieved with :c:func:`PyModule_GetState`. If no memory is needed, set + this to ``-1``. + + This memory should be used, rather than static globals, to hold per-module + state, since it is then safe for use in multiple sub-interpreters. It is + freed when the module object is deallocated, after the :c:member:`m_free` + function has been called, if present. + + .. c:member:: PyMethodDef* m_methods + + A pointer to a table of module-level functions, described by + :c:type:`PyMethodDef` values. Can be *NULL* if no functions are present. + + .. c:member:: inquiry m_reload + + Currently unused, should be *NULL*. + + .. c:member:: traverseproc m_traverse + + A traversal function to call during GC traversal of the module object, or + *NULL* if not needed. + + .. c:member:: inquiry m_clear + + A clear function to call during GC clearing of the module object, or + *NULL* if not needed. + + .. c:member:: freefunc m_free + + A function to call during deallocation of the module object, or *NULL* if + not needed. + + +.. c:function:: int PyModule_AddObject(PyObject *module, const char *name, PyObject *value) + + Add an object to *module* as *name*. This is a convenience function which can + be used from the module's initialization function. This steals a reference to + *value*. 
Return ``-1`` on error, ``0`` on success. + + +.. c:function:: int PyModule_AddIntConstant(PyObject *module, const char *name, long value) + + Add an integer constant to *module* as *name*. This convenience function can be + used from the module's initialization function. Return ``-1`` on error, ``0`` on + success. + + +.. c:function:: int PyModule_AddStringConstant(PyObject *module, const char *name, const char *value) + + Add a string constant to *module* as *name*. This convenience function can be + used from the module's initialization function. The string *value* must be + null-terminated. Return ``-1`` on error, ``0`` on success. + + +.. c:function:: int PyModule_AddIntMacro(PyObject *module, macro) + + Add an int constant to *module*. The name and the value are taken from + *macro*. For example ``PyModule_AddIntMacro(module, AF_INET)`` adds the int + constant *AF_INET* with the value of *AF_INET* to *module*. + Return ``-1`` on error, ``0`` on success. + + +.. c:function:: int PyModule_AddStringMacro(PyObject *module, macro) + + Add a string constant to *module*. diff --git a/lib/cpython-doc/c-api/none.rst b/lib/cpython-doc/c-api/none.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/none.rst @@ -0,0 +1,26 @@ +.. highlightlang:: c + +.. _noneobject: + +The None Object +--------------- + +.. index:: object: None + +Note that the :c:type:`PyTypeObject` for ``None`` is not directly exposed in the +Python/C API. Since ``None`` is a singleton, testing for object identity (using +``==`` in C) is sufficient. There is no :c:func:`PyNone_Check` function for the +same reason. + + +.. c:var:: PyObject* Py_None + + The Python ``None`` object, denoting lack of value. This object has no methods. + It needs to be treated just like any other object with respect to reference + counts. + + +.. c:macro:: Py_RETURN_NONE + + Properly handle returning :c:data:`Py_None` from within a C function (that is, + increment the reference count of None and return it.) 
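Since ``None`` is a singleton, the identity test described above (``==`` on the C pointer against :c:data:`Py_None`) corresponds exactly to Python's ``is`` operator. A small illustrative sketch (not part of this patch):

```python
# None is a singleton: pointer comparison with Py_None in C is mirrored
# by the `is` operator at the Python level.
def returns_none():
    return None       # Python analogue of a C function using Py_RETURN_NONE

assert returns_none() is None
assert type(None)() is None   # NoneType() always hands back the same singleton
```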
diff --git a/lib/cpython-doc/c-api/number.rst b/lib/cpython-doc/c-api/number.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/number.rst @@ -0,0 +1,265 @@ +.. highlightlang:: c + +.. _number: + +Number Protocol +=============== + + +.. c:function:: int PyNumber_Check(PyObject *o) + + Returns ``1`` if the object *o* provides numeric protocols, and false otherwise. + This function always succeeds. + + +.. c:function:: PyObject* PyNumber_Add(PyObject *o1, PyObject *o2) + + Returns the result of adding *o1* and *o2*, or *NULL* on failure. This is the + equivalent of the Python expression ``o1 + o2``. + + +.. c:function:: PyObject* PyNumber_Subtract(PyObject *o1, PyObject *o2) + + Returns the result of subtracting *o2* from *o1*, or *NULL* on failure. This is + the equivalent of the Python expression ``o1 - o2``. + + +.. c:function:: PyObject* PyNumber_Multiply(PyObject *o1, PyObject *o2) + + Returns the result of multiplying *o1* and *o2*, or *NULL* on failure. This is + the equivalent of the Python expression ``o1 * o2``. + + +.. c:function:: PyObject* PyNumber_FloorDivide(PyObject *o1, PyObject *o2) + + Return the floor of *o1* divided by *o2*, or *NULL* on failure. This is + equivalent to the "classic" division of integers. + + +.. c:function:: PyObject* PyNumber_TrueDivide(PyObject *o1, PyObject *o2) + + Return a reasonable approximation for the mathematical value of *o1* divided by + *o2*, or *NULL* on failure. The return value is "approximate" because binary + floating point numbers are approximate; it is not possible to represent all real + numbers in base two. This function can return a floating point value when + passed two integers. + + +.. c:function:: PyObject* PyNumber_Remainder(PyObject *o1, PyObject *o2) + + Returns the remainder of dividing *o1* by *o2*, or *NULL* on failure. This is + the equivalent of the Python expression ``o1 % o2``. + + +.. c:function:: PyObject* PyNumber_Divmod(PyObject *o1, PyObject *o2) + + .. 
index:: builtin: divmod + + See the built-in function :func:`divmod`. Returns *NULL* on failure. This is + the equivalent of the Python expression ``divmod(o1, o2)``. + + +.. c:function:: PyObject* PyNumber_Power(PyObject *o1, PyObject *o2, PyObject *o3) + + .. index:: builtin: pow + + See the built-in function :func:`pow`. Returns *NULL* on failure. This is the + equivalent of the Python expression ``pow(o1, o2, o3)``, where *o3* is optional. + If *o3* is to be ignored, pass :c:data:`Py_None` in its place (passing *NULL* for + *o3* would cause an illegal memory access). + + +.. c:function:: PyObject* PyNumber_Negative(PyObject *o) + + Returns the negation of *o* on success, or *NULL* on failure. This is the + equivalent of the Python expression ``-o``. + + +.. c:function:: PyObject* PyNumber_Positive(PyObject *o) + + Returns *o* on success, or *NULL* on failure. This is the equivalent of the + Python expression ``+o``. + + +.. c:function:: PyObject* PyNumber_Absolute(PyObject *o) + + .. index:: builtin: abs + + Returns the absolute value of *o*, or *NULL* on failure. This is the equivalent + of the Python expression ``abs(o)``. + + +.. c:function:: PyObject* PyNumber_Invert(PyObject *o) + + Returns the bitwise negation of *o* on success, or *NULL* on failure. This is + the equivalent of the Python expression ``~o``. + + +.. c:function:: PyObject* PyNumber_Lshift(PyObject *o1, PyObject *o2) + + Returns the result of left shifting *o1* by *o2* on success, or *NULL* on + failure. This is the equivalent of the Python expression ``o1 << o2``. + + +.. c:function:: PyObject* PyNumber_Rshift(PyObject *o1, PyObject *o2) + + Returns the result of right shifting *o1* by *o2* on success, or *NULL* on + failure. This is the equivalent of the Python expression ``o1 >> o2``. + + +.. c:function:: PyObject* PyNumber_And(PyObject *o1, PyObject *o2) + + Returns the "bitwise and" of *o1* and *o2* on success and *NULL* on failure. 
+ This is the equivalent of the Python expression ``o1 & o2``. + + +.. c:function:: PyObject* PyNumber_Xor(PyObject *o1, PyObject *o2) + + Returns the "bitwise exclusive or" of *o1* by *o2* on success, or *NULL* on + failure. This is the equivalent of the Python expression ``o1 ^ o2``. + + +.. c:function:: PyObject* PyNumber_Or(PyObject *o1, PyObject *o2) + + Returns the "bitwise or" of *o1* and *o2* on success, or *NULL* on failure. + This is the equivalent of the Python expression ``o1 | o2``. + + +.. c:function:: PyObject* PyNumber_InPlaceAdd(PyObject *o1, PyObject *o2) + + Returns the result of adding *o1* and *o2*, or *NULL* on failure. The operation + is done *in-place* when *o1* supports it. This is the equivalent of the Python + statement ``o1 += o2``. + + +.. c:function:: PyObject* PyNumber_InPlaceSubtract(PyObject *o1, PyObject *o2) + + Returns the result of subtracting *o2* from *o1*, or *NULL* on failure. The + operation is done *in-place* when *o1* supports it. This is the equivalent of + the Python statement ``o1 -= o2``. + + +.. c:function:: PyObject* PyNumber_InPlaceMultiply(PyObject *o1, PyObject *o2) + + Returns the result of multiplying *o1* and *o2*, or *NULL* on failure. The + operation is done *in-place* when *o1* supports it. This is the equivalent of + the Python statement ``o1 *= o2``. + + +.. c:function:: PyObject* PyNumber_InPlaceFloorDivide(PyObject *o1, PyObject *o2) + + Returns the mathematical floor of dividing *o1* by *o2*, or *NULL* on failure. + The operation is done *in-place* when *o1* supports it. This is the equivalent + of the Python statement ``o1 //= o2``. + + +.. c:function:: PyObject* PyNumber_InPlaceTrueDivide(PyObject *o1, PyObject *o2) + + Return a reasonable approximation for the mathematical value of *o1* divided by + *o2*, or *NULL* on failure. The return value is "approximate" because binary + floating point numbers are approximate; it is not possible to represent all real + numbers in base two. 
This function can return a floating point value when + passed two integers. The operation is done *in-place* when *o1* supports it. + + +.. c:function:: PyObject* PyNumber_InPlaceRemainder(PyObject *o1, PyObject *o2) + + Returns the remainder of dividing *o1* by *o2*, or *NULL* on failure. The + operation is done *in-place* when *o1* supports it. This is the equivalent of + the Python statement ``o1 %= o2``. + + +.. c:function:: PyObject* PyNumber_InPlacePower(PyObject *o1, PyObject *o2, PyObject *o3) + + .. index:: builtin: pow + + See the built-in function :func:`pow`. Returns *NULL* on failure. The operation + is done *in-place* when *o1* supports it. This is the equivalent of the Python + statement ``o1 **= o2`` when o3 is :c:data:`Py_None`, or an in-place variant of + ``pow(o1, o2, o3)`` otherwise. If *o3* is to be ignored, pass :c:data:`Py_None` + in its place (passing *NULL* for *o3* would cause an illegal memory access). + + +.. c:function:: PyObject* PyNumber_InPlaceLshift(PyObject *o1, PyObject *o2) + + Returns the result of left shifting *o1* by *o2* on success, or *NULL* on + failure. The operation is done *in-place* when *o1* supports it. This is the + equivalent of the Python statement ``o1 <<= o2``. + + +.. c:function:: PyObject* PyNumber_InPlaceRshift(PyObject *o1, PyObject *o2) + + Returns the result of right shifting *o1* by *o2* on success, or *NULL* on + failure. The operation is done *in-place* when *o1* supports it. This is the + equivalent of the Python statement ``o1 >>= o2``. + + +.. c:function:: PyObject* PyNumber_InPlaceAnd(PyObject *o1, PyObject *o2) + + Returns the "bitwise and" of *o1* and *o2* on success and *NULL* on failure. The + operation is done *in-place* when *o1* supports it. This is the equivalent of + the Python statement ``o1 &= o2``. + + +.. c:function:: PyObject* PyNumber_InPlaceXor(PyObject *o1, PyObject *o2) + + Returns the "bitwise exclusive or" of *o1* by *o2* on success, or *NULL* on + failure. 
The operation is done *in-place* when *o1* supports it. This is the + equivalent of the Python statement ``o1 ^= o2``. + + +.. c:function:: PyObject* PyNumber_InPlaceOr(PyObject *o1, PyObject *o2) + + Returns the "bitwise or" of *o1* and *o2* on success, or *NULL* on failure. The + operation is done *in-place* when *o1* supports it. This is the equivalent of + the Python statement ``o1 |= o2``. + + +.. c:function:: PyObject* PyNumber_Long(PyObject *o) + + .. index:: builtin: int + + Returns the *o* converted to an integer object on success, or *NULL* on + failure. This is the equivalent of the Python expression ``int(o)``. + + +.. c:function:: PyObject* PyNumber_Float(PyObject *o) + + .. index:: builtin: float + + Returns the *o* converted to a float object on success, or *NULL* on failure. + This is the equivalent of the Python expression ``float(o)``. + + +.. c:function:: PyObject* PyNumber_Index(PyObject *o) + + Returns the *o* converted to a Python int on success or *NULL* with a + :exc:`TypeError` exception raised on failure. + + +.. c:function:: PyObject* PyNumber_ToBase(PyObject *n, int base) + + Returns the integer *n* converted to base *base* as a string. The *base* + argument must be one of 2, 8, 10, or 16. For base 2, 8, or 16, the + returned string is prefixed with a base marker of ``'0b'``, ``'0o'``, or + ``'0x'``, respectively. If *n* is not a Python int, it is converted with + :c:func:`PyNumber_Index` first. + + +.. c:function:: Py_ssize_t PyNumber_AsSsize_t(PyObject *o, PyObject *exc) + + Returns *o* converted to a Py_ssize_t value if *o* can be interpreted as an + integer. If the call fails, an exception is raised and -1 is returned. + + If *o* can be converted to a Python int but the attempt to + convert to a Py_ssize_t value would raise an :exc:`OverflowError`, then the + *exc* argument is the type of exception that will be raised (usually + :exc:`IndexError` or :exc:`OverflowError`). 
If *exc* is *NULL*, then the + exception is cleared and the value is clipped to *PY_SSIZE_T_MIN* for a negative + integer or *PY_SSIZE_T_MAX* for a positive integer. + + +.. c:function:: int PyIndex_Check(PyObject *o) + + Returns True if *o* is an index integer (has the nb_index slot of the + tp_as_number structure filled in). diff --git a/lib/cpython-doc/c-api/objbuffer.rst b/lib/cpython-doc/c-api/objbuffer.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/objbuffer.rst @@ -0,0 +1,51 @@ +.. highlightlang:: c + +Old Buffer Protocol +------------------- + +.. deprecated:: 3.0 + +These functions were part of the "old buffer protocol" API in Python 2. +In Python 3, this protocol doesn't exist anymore but the functions are still +exposed to ease porting 2.x code. They act as a compatibility wrapper +around the :ref:`new buffer protocol `, but they don't give +you control over the lifetime of the resources acquired when a buffer is +exported. + +Therefore, it is recommended that you call :c:func:`PyObject_GetBuffer` +(or the ``y*`` or ``w*`` :ref:`format codes ` with the +:c:func:`PyArg_ParseTuple` family of functions) to get a buffer view over +an object, and :c:func:`PyBuffer_Release` when the buffer view can be released. + + +.. c:function:: int PyObject_AsCharBuffer(PyObject *obj, const char **buffer, Py_ssize_t *buffer_len) + + Returns a pointer to a read-only memory location usable as character-based + input. The *obj* argument must support the single-segment character buffer + interface. On success, returns ``0``, sets *buffer* to the memory location + and *buffer_len* to the buffer length. Returns ``-1`` and sets a + :exc:`TypeError` on error. + + +.. c:function:: int PyObject_AsReadBuffer(PyObject *obj, const void **buffer, Py_ssize_t *buffer_len) + + Returns a pointer to a read-only memory location containing arbitrary data. + The *obj* argument must support the single-segment readable buffer + interface. 
On success, returns ``0``, sets *buffer* to the memory location + and *buffer_len* to the buffer length. Returns ``-1`` and sets a + :exc:`TypeError` on error. + + +.. c:function:: int PyObject_CheckReadBuffer(PyObject *o) + + Returns ``1`` if *o* supports the single-segment readable buffer interface. + Otherwise returns ``0``. + + +.. c:function:: int PyObject_AsWriteBuffer(PyObject *obj, void **buffer, Py_ssize_t *buffer_len) + + Returns a pointer to a writable memory location. The *obj* argument must + support the single-segment, character buffer interface. On success, + returns ``0``, sets *buffer* to the memory location and *buffer_len* to the + buffer length. Returns ``-1`` and sets a :exc:`TypeError` on error. + diff --git a/lib/cpython-doc/c-api/object.rst b/lib/cpython-doc/c-api/object.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/object.rst @@ -0,0 +1,361 @@ +.. highlightlang:: c + +.. _object: + +Object Protocol +=============== + + +.. c:var:: PyObject* Py_NotImplemented + + The ``NotImplemented`` singleton, used to signal that an operation is + not implemented for the given type combination. + + +.. c:macro:: Py_RETURN_NOTIMPLEMENTED + + Properly handle returning :c:data:`Py_NotImplemented` from within a C + function (that is, increment the reference count of NotImplemented and + return it). + + +.. c:function:: int PyObject_Print(PyObject *o, FILE *fp, int flags) + + Print an object *o*, on file *fp*. Returns ``-1`` on error. The flags argument + is used to enable certain printing options. The only option currently supported + is :const:`Py_PRINT_RAW`; if given, the :func:`str` of the object is written + instead of the :func:`repr`. + + +.. c:function:: int PyObject_HasAttr(PyObject *o, PyObject *attr_name) + + Returns ``1`` if *o* has the attribute *attr_name*, and ``0`` otherwise. This + is equivalent to the Python expression ``hasattr(o, attr_name)``. This function + always succeeds. + + +.. 
c:function:: int PyObject_HasAttrString(PyObject *o, const char *attr_name) + + Returns ``1`` if *o* has the attribute *attr_name*, and ``0`` otherwise. This + is equivalent to the Python expression ``hasattr(o, attr_name)``. This function + always succeeds. + + +.. c:function:: PyObject* PyObject_GetAttr(PyObject *o, PyObject *attr_name) + + Retrieve an attribute named *attr_name* from object *o*. Returns the attribute + value on success, or *NULL* on failure. This is the equivalent of the Python + expression ``o.attr_name``. + + +.. c:function:: PyObject* PyObject_GetAttrString(PyObject *o, const char *attr_name) + + Retrieve an attribute named *attr_name* from object *o*. Returns the attribute + value on success, or *NULL* on failure. This is the equivalent of the Python + expression ``o.attr_name``. + + +.. c:function:: PyObject* PyObject_GenericGetAttr(PyObject *o, PyObject *name) + + Generic attribute getter function that is meant to be put into a type + object's ``tp_getattro`` slot. It looks for a descriptor in the dictionary + of classes in the object's MRO as well as an attribute in the object's + :attr:`__dict__` (if present). As outlined in :ref:`descriptors`, data + descriptors take preference over instance attributes, while non-data + descriptors don't. Otherwise, an :exc:`AttributeError` is raised. + + +.. c:function:: int PyObject_SetAttr(PyObject *o, PyObject *attr_name, PyObject *v) + + Set the value of the attribute named *attr_name*, for object *o*, to the value + *v*. Returns ``-1`` on failure. This is the equivalent of the Python statement + ``o.attr_name = v``. + + +.. c:function:: int PyObject_SetAttrString(PyObject *o, const char *attr_name, PyObject *v) + + Set the value of the attribute named *attr_name*, for object *o*, to the value + *v*. Returns ``-1`` on failure. This is the equivalent of the Python statement + ``o.attr_name = v``. + + +.. 
c:function:: int PyObject_GenericSetAttr(PyObject *o, PyObject *name, PyObject *value) + + Generic attribute setter function that is meant to be put into a type + object's ``tp_setattro`` slot. It looks for a data descriptor in the + dictionary of classes in the object's MRO, and if found it takes preference + over setting the attribute in the instance dictionary. Otherwise, the + attribute is set in the object's :attr:`__dict__` (if present). Otherwise, + an :exc:`AttributeError` is raised and ``-1`` is returned. + + +.. c:function:: int PyObject_DelAttr(PyObject *o, PyObject *attr_name) + + Delete attribute named *attr_name*, for object *o*. Returns ``-1`` on failure. + This is the equivalent of the Python statement ``del o.attr_name``. + + +.. c:function:: int PyObject_DelAttrString(PyObject *o, const char *attr_name) + + Delete attribute named *attr_name*, for object *o*. Returns ``-1`` on failure. + This is the equivalent of the Python statement ``del o.attr_name``. + + +.. c:function:: PyObject* PyObject_RichCompare(PyObject *o1, PyObject *o2, int opid) + + Compare the values of *o1* and *o2* using the operation specified by *opid*, + which must be one of :const:`Py_LT`, :const:`Py_LE`, :const:`Py_EQ`, + :const:`Py_NE`, :const:`Py_GT`, or :const:`Py_GE`, corresponding to ``<``, + ``<=``, ``==``, ``!=``, ``>``, or ``>=`` respectively. This is the equivalent of + the Python expression ``o1 op o2``, where ``op`` is the operator corresponding + to *opid*. Returns the value of the comparison on success, or *NULL* on failure. + + +.. c:function:: int PyObject_RichCompareBool(PyObject *o1, PyObject *o2, int opid) + + Compare the values of *o1* and *o2* using the operation specified by *opid*, + which must be one of :const:`Py_LT`, :const:`Py_LE`, :const:`Py_EQ`, + :const:`Py_NE`, :const:`Py_GT`, or :const:`Py_GE`, corresponding to ``<``, + ``<=``, ``==``, ``!=``, ``>``, or ``>=`` respectively. Returns ``-1`` on error, + ``0`` if the result is false, ``1`` otherwise. 
This is the equivalent of the + Python expression ``o1 op o2``, where ``op`` is the operator corresponding to + *opid*. + +.. note:: + If *o1* and *o2* are the same object, :c:func:`PyObject_RichCompareBool` + will always return ``1`` for :const:`Py_EQ` and ``0`` for :const:`Py_NE`. + +.. c:function:: PyObject* PyObject_Repr(PyObject *o) + + .. index:: builtin: repr + + Compute a string representation of object *o*. Returns the string + representation on success, *NULL* on failure. This is the equivalent of the + Python expression ``repr(o)``. Called by the :func:`repr` built-in function. + + +.. c:function:: PyObject* PyObject_ASCII(PyObject *o) + + .. index:: builtin: ascii + + As :c:func:`PyObject_Repr`, compute a string representation of object *o*, but + escape the non-ASCII characters in the string returned by + :c:func:`PyObject_Repr` with ``\x``, ``\u`` or ``\U`` escapes. This generates + a string similar to that returned by :c:func:`PyObject_Repr` in Python 2. + Called by the :func:`ascii` built-in function. + + +.. c:function:: PyObject* PyObject_Str(PyObject *o) + + .. index:: builtin: str + + Compute a string representation of object *o*. Returns the string + representation on success, *NULL* on failure. This is the equivalent of the + Python expression ``str(o)``. Called by the :func:`str` built-in function + and, therefore, by the :func:`print` function. + +.. c:function:: PyObject* PyObject_Bytes(PyObject *o) + + .. index:: builtin: bytes + + Compute a bytes representation of object *o*. *NULL* is returned on + failure and a bytes object on success. This is equivalent to the Python + expression ``bytes(o)``, when *o* is not an integer. Unlike ``bytes(o)``, + a TypeError is raised when *o* is an integer instead of a zero-initialized + bytes object. + +.. c:function:: int PyObject_IsInstance(PyObject *inst, PyObject *cls) + + Returns ``1`` if *inst* is an instance of the class *cls* or a subclass of + *cls*, or ``0`` if not. 
On error, returns ``-1`` and sets an exception. If + *cls* is a type object rather than a class object, :c:func:`PyObject_IsInstance` + returns ``1`` if *inst* is of type *cls*. If *cls* is a tuple, the check will + be done against every entry in *cls*. The result will be ``1`` when at least one + of the checks returns ``1``, otherwise it will be ``0``. If *inst* is not a + class instance and *cls* is neither a type object, nor a class object, nor a + tuple, *inst* must have a :attr:`__class__` attribute --- the class relationship + of the value of that attribute with *cls* will be used to determine the result + of this function. + + +Subclass determination is done in a fairly straightforward way, but includes a +wrinkle that implementors of extensions to the class system may want to be aware +of. If :class:`A` and :class:`B` are class objects, :class:`B` is a subclass of +:class:`A` if it inherits from :class:`A` either directly or indirectly. If +either is not a class object, a more general mechanism is used to determine the +class relationship of the two objects. When testing if *B* is a subclass of +*A*, if *A* is *B*, :c:func:`PyObject_IsSubclass` returns true. If *A* and *B* +are different objects, *B*'s :attr:`__bases__` attribute is searched in a +depth-first fashion for *A* --- the presence of the :attr:`__bases__` attribute +is considered sufficient for this determination. + + +.. c:function:: int PyObject_IsSubclass(PyObject *derived, PyObject *cls) + + Returns ``1`` if the class *derived* is identical to or derived from the class + *cls*, otherwise returns ``0``. In case of an error, returns ``-1``. If *cls* + is a tuple, the check will be done against every entry in *cls*. The result will + be ``1`` when at least one of the checks returns ``1``, otherwise it will be + ``0``. If either *derived* or *cls* is not an actual class object (or tuple), + this function uses the generic algorithm described above. + + +.. 
c:function:: int PyCallable_Check(PyObject *o) + + Determine if the object *o* is callable. Return ``1`` if the object is callable + and ``0`` otherwise. This function always succeeds. + + +.. c:function:: PyObject* PyObject_Call(PyObject *callable_object, PyObject *args, PyObject *kw) + + Call a callable Python object *callable_object*, with arguments given by the + tuple *args*, and named arguments given by the dictionary *kw*. If no named + arguments are needed, *kw* may be *NULL*. *args* must not be *NULL*, use an + empty tuple if no arguments are needed. Returns the result of the call on + success, or *NULL* on failure. This is the equivalent of the Python expression + ``callable_object(*args, **kw)``. + + +.. c:function:: PyObject* PyObject_CallObject(PyObject *callable_object, PyObject *args) + + Call a callable Python object *callable_object*, with arguments given by the + tuple *args*. If no arguments are needed, then *args* may be *NULL*. Returns + the result of the call on success, or *NULL* on failure. This is the equivalent + of the Python expression ``callable_object(*args)``. + + +.. c:function:: PyObject* PyObject_CallFunction(PyObject *callable, char *format, ...) + + Call a callable Python object *callable*, with a variable number of C arguments. + The C arguments are described using a :c:func:`Py_BuildValue` style format + string. The format may be *NULL*, indicating that no arguments are provided. + Returns the result of the call on success, or *NULL* on failure. This is the + equivalent of the Python expression ``callable(*args)``. Note that if you only + pass :c:type:`PyObject \*` args, :c:func:`PyObject_CallFunctionObjArgs` is a + faster alternative. + + +.. c:function:: PyObject* PyObject_CallMethod(PyObject *o, char *method, char *format, ...) + + Call the method named *method* of object *o* with a variable number of C + arguments. The C arguments are described by a :c:func:`Py_BuildValue` format + string that should produce a tuple. 
The format may be *NULL*, indicating that + no arguments are provided. Returns the result of the call on success, or *NULL* + on failure. This is the equivalent of the Python expression ``o.method(args)``. + Note that if you only pass :c:type:`PyObject \*` args, + :c:func:`PyObject_CallMethodObjArgs` is a faster alternative. + + +.. c:function:: PyObject* PyObject_CallFunctionObjArgs(PyObject *callable, ..., NULL) + + Call a callable Python object *callable*, with a variable number of + :c:type:`PyObject\*` arguments. The arguments are provided as a variable number + of parameters followed by *NULL*. Returns the result of the call on success, or + *NULL* on failure. + + +.. c:function:: PyObject* PyObject_CallMethodObjArgs(PyObject *o, PyObject *name, ..., NULL) + + Calls a method of the object *o*, where the name of the method is given as a + Python string object in *name*. It is called with a variable number of + :c:type:`PyObject\*` arguments. The arguments are provided as a variable number + of parameters followed by *NULL*. Returns the result of the call on success, or + *NULL* on failure. + + +.. c:function:: Py_hash_t PyObject_Hash(PyObject *o) + + .. index:: builtin: hash + + Compute and return the hash value of an object *o*. On failure, return ``-1``. + This is the equivalent of the Python expression ``hash(o)``. + + .. versionchanged:: 3.2 + The return type is now Py_hash_t. This is a signed integer the same size + as Py_ssize_t. + + +.. c:function:: Py_hash_t PyObject_HashNotImplemented(PyObject *o) + + Set a :exc:`TypeError` indicating that ``type(o)`` is not hashable and return ``-1``. + This function receives special treatment when stored in a ``tp_hash`` slot, + allowing a type to explicitly indicate to the interpreter that it is not + hashable. + + +.. c:function:: int PyObject_IsTrue(PyObject *o) + + Returns ``1`` if the object *o* is considered to be true, and ``0`` otherwise. + This is equivalent to the Python expression ``not not o``. 
On failure, return + ``-1``. + + +.. c:function:: int PyObject_Not(PyObject *o) + + Returns ``0`` if the object *o* is considered to be true, and ``1`` otherwise. + This is equivalent to the Python expression ``not o``. On failure, return + ``-1``. + + +.. c:function:: PyObject* PyObject_Type(PyObject *o) + + .. index:: builtin: type + + When *o* is non-*NULL*, returns a type object corresponding to the object type + of object *o*. On failure, raises :exc:`SystemError` and returns *NULL*. This + is equivalent to the Python expression ``type(o)``. This function increments the + reference count of the return value. There's really no reason to use this + function instead of the common expression ``o->ob_type``, which returns a + pointer of type :c:type:`PyTypeObject\*`, except when the incremented reference + count is needed. + + +.. c:function:: int PyObject_TypeCheck(PyObject *o, PyTypeObject *type) + + Return true if the object *o* is of type *type* or a subtype of *type*. Both + parameters must be non-*NULL*. + + +.. c:function:: Py_ssize_t PyObject_Length(PyObject *o) + Py_ssize_t PyObject_Size(PyObject *o) + + .. index:: builtin: len + + Return the length of object *o*. If the object *o* provides both the sequence + and mapping protocols, the sequence length is returned. On error, ``-1`` is + returned. This is the equivalent of the Python expression ``len(o)``. + + +.. c:function:: PyObject* PyObject_GetItem(PyObject *o, PyObject *key) + + Return element of *o* corresponding to the object *key* or *NULL* on failure. + This is the equivalent of the Python expression ``o[key]``. + + +.. c:function:: int PyObject_SetItem(PyObject *o, PyObject *key, PyObject *v) + + Map the object *key* to the value *v*. Returns ``-1`` on failure. This is the + equivalent of the Python statement ``o[key] = v``. + + +.. c:function:: int PyObject_DelItem(PyObject *o, PyObject *key) + + Delete the mapping for *key* from *o*. Returns ``-1`` on failure. 
This is the + equivalent of the Python statement ``del o[key]``. + + +.. c:function:: PyObject* PyObject_Dir(PyObject *o) + + This is equivalent to the Python expression ``dir(o)``, returning a (possibly + empty) list of strings appropriate for the object argument, or *NULL* if there + was an error. If the argument is *NULL*, this is like the Python ``dir()``, + returning the names of the current locals; in this case, if no execution frame + is active then *NULL* is returned but :c:func:`PyErr_Occurred` will return false. + + +.. c:function:: PyObject* PyObject_GetIter(PyObject *o) + + This is equivalent to the Python expression ``iter(o)``. It returns a new + iterator for the object argument, or the object itself if the object is already + an iterator. Raises :exc:`TypeError` and returns *NULL* if the object cannot be + iterated. diff --git a/lib/cpython-doc/c-api/objimpl.rst b/lib/cpython-doc/c-api/objimpl.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/objimpl.rst @@ -0,0 +1,17 @@ +.. highlightlang:: c + +.. _newtypes: + +***************************** +Object Implementation Support +***************************** + +This chapter describes the functions, types, and macros used when defining new +object types. + +.. toctree:: + + allocation.rst + structures.rst + typeobj.rst + gcsupport.rst diff --git a/lib/cpython-doc/c-api/refcounting.rst b/lib/cpython-doc/c-api/refcounting.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/refcounting.rst @@ -0,0 +1,73 @@ +.. highlightlang:: c + + +.. _countingrefs: + +****************** +Reference Counting +****************** + +The macros in this section are used for managing reference counts of Python +objects. + + +.. c:function:: void Py_INCREF(PyObject *o) + + Increment the reference count for object *o*. The object must not be *NULL*; if + you aren't sure that it isn't *NULL*, use :c:func:`Py_XINCREF`. + + +.. 
c:function:: void Py_XINCREF(PyObject *o) + + Increment the reference count for object *o*. The object may be *NULL*, in + which case the macro has no effect. + + +.. c:function:: void Py_DECREF(PyObject *o) + + Decrement the reference count for object *o*. The object must not be *NULL*; if + you aren't sure that it isn't *NULL*, use :c:func:`Py_XDECREF`. If the reference + count reaches zero, the object's type's deallocation function (which must not be + *NULL*) is invoked. + + .. warning:: + + The deallocation function can cause arbitrary Python code to be invoked (e.g. + when a class instance with a :meth:`__del__` method is deallocated). While + exceptions in such code are not propagated, the executed code has free access to + all Python global variables. This means that any object that is reachable from + a global variable should be in a consistent state before :c:func:`Py_DECREF` is + invoked. For example, code to delete an object from a list should copy a + reference to the deleted object in a temporary variable, update the list data + structure, and then call :c:func:`Py_DECREF` for the temporary variable. + + +.. c:function:: void Py_XDECREF(PyObject *o) + + Decrement the reference count for object *o*. The object may be *NULL*, in + which case the macro has no effect; otherwise the effect is the same as for + :c:func:`Py_DECREF`, and the same warning applies. + + +.. c:function:: void Py_CLEAR(PyObject *o) + + Decrement the reference count for object *o*. The object may be *NULL*, in + which case the macro has no effect; otherwise the effect is the same as for + :c:func:`Py_DECREF`, except that the argument is also set to *NULL*. The warning + for :c:func:`Py_DECREF` does not apply with respect to the object passed because + the macro carefully uses a temporary variable and sets the argument to *NULL* + before decrementing its reference count. 
+ + It is a good idea to use this macro whenever decrementing the value of a + variable that might be traversed during garbage collection. + + +The following functions are for runtime dynamic embedding of Python: +``Py_IncRef(PyObject *o)``, ``Py_DecRef(PyObject *o)``. They are +simply exported function versions of :c:func:`Py_XINCREF` and +:c:func:`Py_XDECREF`, respectively. + +The following functions or macros are only for use within the interpreter core: +:c:func:`_Py_Dealloc`, :c:func:`_Py_ForgetReference`, :c:func:`_Py_NewReference`, +as well as the global variable :c:data:`_Py_RefTotal`. + diff --git a/lib/cpython-doc/c-api/reflection.rst b/lib/cpython-doc/c-api/reflection.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/reflection.rst @@ -0,0 +1,49 @@ +.. highlightlang:: c + +.. _reflection: + +Reflection +========== + +.. c:function:: PyObject* PyEval_GetBuiltins() + + Return a dictionary of the builtins in the current execution frame, + or the interpreter of the thread state if no frame is currently executing. + + +.. c:function:: PyObject* PyEval_GetLocals() + + Return a dictionary of the local variables in the current execution frame, + or *NULL* if no frame is currently executing. + + +.. c:function:: PyObject* PyEval_GetGlobals() + + Return a dictionary of the global variables in the current execution frame, + or *NULL* if no frame is currently executing. + + +.. c:function:: PyFrameObject* PyEval_GetFrame() + + Return the current thread state's frame, which is *NULL* if no frame is + currently executing. + + +.. c:function:: int PyFrame_GetLineNumber(PyFrameObject *frame) + + Return the line number that *frame* is currently executing. + + +.. c:function:: const char* PyEval_GetFuncName(PyObject *func) + + Return the name of *func* if it is a function, class or instance object, else the + name of *func*\s type. + + +.. 
c:function:: const char* PyEval_GetFuncDesc(PyObject *func) + + Return a description string, depending on the type of *func*. + Return values include "()" for functions and methods, " constructor", + " instance", and " object". Concatenated with the result of + :c:func:`PyEval_GetFuncName`, the result will be a description of + *func*. diff --git a/lib/cpython-doc/c-api/sequence.rst b/lib/cpython-doc/c-api/sequence.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/sequence.rst @@ -0,0 +1,162 @@ +.. highlightlang:: c + +.. _sequence: + +Sequence Protocol +================= + + +.. c:function:: int PySequence_Check(PyObject *o) + + Return ``1`` if the object provides sequence protocol, and ``0`` otherwise. + This function always succeeds. + + +.. c:function:: Py_ssize_t PySequence_Size(PyObject *o) + Py_ssize_t PySequence_Length(PyObject *o) + + .. index:: builtin: len + + Returns the number of objects in sequence *o* on success, and ``-1`` on failure. + For objects that do not provide sequence protocol, this is equivalent to the + Python expression ``len(o)``. + + +.. c:function:: PyObject* PySequence_Concat(PyObject *o1, PyObject *o2) + + Return the concatenation of *o1* and *o2* on success, and *NULL* on failure. + This is the equivalent of the Python expression ``o1 + o2``. + + +.. c:function:: PyObject* PySequence_Repeat(PyObject *o, Py_ssize_t count) + + Return the result of repeating sequence object *o* *count* times, or *NULL* on + failure. This is the equivalent of the Python expression ``o * count``. + + +.. c:function:: PyObject* PySequence_InPlaceConcat(PyObject *o1, PyObject *o2) + + Return the concatenation of *o1* and *o2* on success, and *NULL* on failure. + The operation is done *in-place* when *o1* supports it. This is the equivalent + of the Python expression ``o1 += o2``. + + +.. 
c:function:: PyObject* PySequence_InPlaceRepeat(PyObject *o, Py_ssize_t count) + + Return the result of repeating sequence object *o* *count* times, or *NULL* on + failure. The operation is done *in-place* when *o* supports it. This is the + equivalent of the Python expression ``o *= count``. + + +.. c:function:: PyObject* PySequence_GetItem(PyObject *o, Py_ssize_t i) + + Return the *i*\ th element of *o*, or *NULL* on failure. This is the equivalent of + the Python expression ``o[i]``. + + +.. c:function:: PyObject* PySequence_GetSlice(PyObject *o, Py_ssize_t i1, Py_ssize_t i2) + + Return the slice of sequence object *o* between *i1* and *i2*, or *NULL* on + failure. This is the equivalent of the Python expression ``o[i1:i2]``. + + +.. c:function:: int PySequence_SetItem(PyObject *o, Py_ssize_t i, PyObject *v) + + Assign object *v* to the *i*\ th element of *o*. Returns ``-1`` on failure. This + is the equivalent of the Python statement ``o[i] = v``. This function *does + not* steal a reference to *v*. + + +.. c:function:: int PySequence_DelItem(PyObject *o, Py_ssize_t i) + + Delete the *i*\ th element of object *o*. Returns ``-1`` on failure. This is the + equivalent of the Python statement ``del o[i]``. + + +.. c:function:: int PySequence_SetSlice(PyObject *o, Py_ssize_t i1, Py_ssize_t i2, PyObject *v) + + Assign the sequence object *v* to the slice in sequence object *o* from *i1* to + *i2*. This is the equivalent of the Python statement ``o[i1:i2] = v``. + + +.. c:function:: int PySequence_DelSlice(PyObject *o, Py_ssize_t i1, Py_ssize_t i2) + + Delete the slice in sequence object *o* from *i1* to *i2*. Returns ``-1`` on + failure. This is the equivalent of the Python statement ``del o[i1:i2]``. + + +.. c:function:: Py_ssize_t PySequence_Count(PyObject *o, PyObject *value) + + Return the number of occurrences of *value* in *o*, that is, return the number + of keys for which ``o[key] == value``. On failure, return ``-1``. 
This is + equivalent to the Python expression ``o.count(value)``. + + +.. c:function:: int PySequence_Contains(PyObject *o, PyObject *value) + + Determine if *o* contains *value*. If an item in *o* is equal to *value*, + return ``1``, otherwise return ``0``. On error, return ``-1``. This is + equivalent to the Python expression ``value in o``. + + +.. c:function:: Py_ssize_t PySequence_Index(PyObject *o, PyObject *value) + + Return the first index *i* for which ``o[i] == value``. On error, return + ``-1``. This is equivalent to the Python expression ``o.index(value)``. + + +.. c:function:: PyObject* PySequence_List(PyObject *o) + + Return a list object with the same contents as the arbitrary sequence *o*. The + returned list is guaranteed to be new. + + +.. c:function:: PyObject* PySequence_Tuple(PyObject *o) + + .. index:: builtin: tuple + + Return a tuple object with the same contents as the arbitrary sequence *o* or + *NULL* on failure. If *o* is a tuple, a new reference will be returned, + otherwise a tuple will be constructed with the appropriate contents. This is + equivalent to the Python expression ``tuple(o)``. + + +.. c:function:: PyObject* PySequence_Fast(PyObject *o, const char *m) + + Returns the sequence *o* as a tuple, unless it is already a tuple or list, in + which case *o* is returned. Use :c:func:`PySequence_Fast_GET_ITEM` to access the + members of the result. Returns *NULL* on failure. If the object is not a + sequence, raises :exc:`TypeError` with *m* as the message text. + + +.. c:function:: PyObject* PySequence_Fast_GET_ITEM(PyObject *o, Py_ssize_t i) + + Return the *i*\ th element of *o*, assuming that *o* was returned by + :c:func:`PySequence_Fast`, *o* is not *NULL*, and that *i* is within bounds. + + +.. c:function:: PyObject** PySequence_Fast_ITEMS(PyObject *o) + + Return the underlying array of PyObject pointers. Assumes that *o* was returned + by :c:func:`PySequence_Fast` and *o* is not *NULL*. 
+ + Note, if a list gets resized, the reallocation may relocate the items array. + So, only use the underlying array pointer in contexts where the sequence + cannot change. + + +.. c:function:: PyObject* PySequence_ITEM(PyObject *o, Py_ssize_t i) + + Return the *i*\ th element of *o* or *NULL* on failure. Macro form of + :c:func:`PySequence_GetItem` but without checking that + :c:func:`PySequence_Check` on *o* is true and without adjustment for negative + indices. + + +.. c:function:: Py_ssize_t PySequence_Fast_GET_SIZE(PyObject *o) + + Returns the length of *o*, assuming that *o* was returned by + :c:func:`PySequence_Fast` and that *o* is not *NULL*. The size can also be + gotten by calling :c:func:`PySequence_Size` on *o*, but + :c:func:`PySequence_Fast_GET_SIZE` is faster because it can assume *o* is a list + or tuple. diff --git a/lib/cpython-doc/c-api/set.rst b/lib/cpython-doc/c-api/set.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/set.rst @@ -0,0 +1,166 @@ +.. highlightlang:: c + +.. _setobjects: + +Set Objects +----------- + +.. sectionauthor:: Raymond D. Hettinger + + +.. index:: + object: set + object: frozenset + +This section details the public API for :class:`set` and :class:`frozenset` +objects. Any functionality not listed below is best accessed using either +the abstract object protocol (including :c:func:`PyObject_CallMethod`, +:c:func:`PyObject_RichCompareBool`, :c:func:`PyObject_Hash`, +:c:func:`PyObject_Repr`, :c:func:`PyObject_IsTrue`, :c:func:`PyObject_Print`, and +:c:func:`PyObject_GetIter`) or the abstract number protocol (including +:c:func:`PyNumber_And`, :c:func:`PyNumber_Subtract`, :c:func:`PyNumber_Or`, +:c:func:`PyNumber_Xor`, :c:func:`PyNumber_InPlaceAnd`, +:c:func:`PyNumber_InPlaceSubtract`, :c:func:`PyNumber_InPlaceOr`, and +:c:func:`PyNumber_InPlaceXor`). + + +.. 
c:type:: PySetObject + + This subtype of :c:type:`PyObject` is used to hold the internal data for both + :class:`set` and :class:`frozenset` objects. It is like a :c:type:`PyDictObject` + in that it is a fixed size for small sets (much like tuple storage) and will + point to a separate, variable sized block of memory for medium and large sized + sets (much like list storage). None of the fields of this structure should be + considered public and are subject to change. All access should be done through + the documented API rather than by manipulating the values in the structure. + + +.. c:var:: PyTypeObject PySet_Type + + This is an instance of :c:type:`PyTypeObject` representing the Python + :class:`set` type. + + +.. c:var:: PyTypeObject PyFrozenSet_Type + + This is an instance of :c:type:`PyTypeObject` representing the Python + :class:`frozenset` type. + +The following type check macros work on pointers to any Python object. Likewise, +the constructor functions work with any iterable Python object. + + +.. c:function:: int PySet_Check(PyObject *p) + + Return true if *p* is a :class:`set` object or an instance of a subtype. + +.. c:function:: int PyFrozenSet_Check(PyObject *p) + + Return true if *p* is a :class:`frozenset` object or an instance of a + subtype. + +.. c:function:: int PyAnySet_Check(PyObject *p) + + Return true if *p* is a :class:`set` object, a :class:`frozenset` object, or an + instance of a subtype. + + +.. c:function:: int PyAnySet_CheckExact(PyObject *p) + + Return true if *p* is a :class:`set` object or a :class:`frozenset` object but + not an instance of a subtype. + + +.. c:function:: int PyFrozenSet_CheckExact(PyObject *p) + + Return true if *p* is a :class:`frozenset` object but not an instance of a + subtype. + + +.. c:function:: PyObject* PySet_New(PyObject *iterable) + + Return a new :class:`set` containing objects returned by the *iterable*. The + *iterable* may be *NULL* to create a new empty set. 
Return the new set on + success or *NULL* on failure. Raise :exc:`TypeError` if *iterable* is not + actually iterable. The constructor is also useful for copying a set + (``c=set(s)``). + + +.. c:function:: PyObject* PyFrozenSet_New(PyObject *iterable) + + Return a new :class:`frozenset` containing objects returned by the *iterable*. + The *iterable* may be *NULL* to create a new empty frozenset. Return the new + set on success or *NULL* on failure. Raise :exc:`TypeError` if *iterable* is + not actually iterable. + + +The following functions and macros are available for instances of :class:`set` +or :class:`frozenset` or instances of their subtypes. + + +.. c:function:: Py_ssize_t PySet_Size(PyObject *anyset) + + .. index:: builtin: len + + Return the length of a :class:`set` or :class:`frozenset` object. Equivalent to + ``len(anyset)``. Raises a :exc:`PyExc_SystemError` if *anyset* is not a + :class:`set`, :class:`frozenset`, or an instance of a subtype. + + +.. c:function:: Py_ssize_t PySet_GET_SIZE(PyObject *anyset) + + Macro form of :c:func:`PySet_Size` without error checking. + + +.. c:function:: int PySet_Contains(PyObject *anyset, PyObject *key) + + Return 1 if found, 0 if not found, and -1 if an error is encountered. Unlike + the Python :meth:`__contains__` method, this function does not automatically + convert unhashable sets into temporary frozensets. Raise a :exc:`TypeError` if + the *key* is unhashable. Raise :exc:`PyExc_SystemError` if *anyset* is not a + :class:`set`, :class:`frozenset`, or an instance of a subtype. + + +.. c:function:: int PySet_Add(PyObject *set, PyObject *key) + + Add *key* to a :class:`set` instance. Also works with :class:`frozenset` + instances (like :c:func:`PyTuple_SetItem` it can be used to fill-in the values + of brand new frozensets before they are exposed to other code). Return 0 on + success or -1 on failure. Raise a :exc:`TypeError` if the *key* is + unhashable. Raise a :exc:`MemoryError` if there is no room to grow. 
Raise a + :exc:`SystemError` if *set* is not an instance of :class:`set` or its + subtype. + + +The following functions are available for instances of :class:`set` or its +subtypes but not for instances of :class:`frozenset` or its subtypes. + + +.. c:function:: int PySet_Discard(PyObject *set, PyObject *key) + + Return 1 if found and removed, 0 if not found (no action taken), and -1 if an + error is encountered. Does not raise :exc:`KeyError` for missing keys. Raise a + :exc:`TypeError` if the *key* is unhashable. Unlike the Python :meth:`discard` + method, this function does not automatically convert unhashable sets into + temporary frozensets. Raise :exc:`PyExc_SystemError` if *set* is not an + instance of :class:`set` or its subtype. + + +.. c:function:: PyObject* PySet_Pop(PyObject *set) + + Return a new reference to an arbitrary object in the *set*, and remove the + object from the *set*. Return *NULL* on failure. Raise :exc:`KeyError` if the + set is empty. Raise a :exc:`SystemError` if *set* is not an instance of + :class:`set` or its subtype. + + +.. c:function:: int PySet_Clear(PyObject *set) + + Empty an existing set of all elements. + + +.. c:function:: int PySet_ClearFreeList() + + Clear the free list. Return the total number of freed items. + + .. versionadded:: 3.3 diff --git a/lib/cpython-doc/c-api/slice.rst b/lib/cpython-doc/c-api/slice.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/slice.rst @@ -0,0 +1,58 @@ +.. highlightlang:: c + +.. _slice-objects: + +Slice Objects +------------- + + +.. c:var:: PyTypeObject PySlice_Type + + The type object for slice objects. This is the same as :class:`slice` in the + Python layer. + + +.. c:function:: int PySlice_Check(PyObject *ob) + + Return true if *ob* is a slice object; *ob* must not be *NULL*. + + +.. c:function:: PyObject* PySlice_New(PyObject *start, PyObject *stop, PyObject *step) + + Return a new slice object with the given values. 
The *start*, *stop*, and + *step* parameters are used as the values of the slice object attributes of + the same names. Any of the values may be *NULL*, in which case + ``None`` will be used for the corresponding attribute. Return *NULL* if + the new object could not be allocated. + + +.. c:function:: int PySlice_GetIndices(PyObject *slice, Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step) + + Retrieve the start, stop and step indices from the slice object *slice*, + assuming a sequence of length *length*. Treats indices greater than + *length* as errors. + + Returns 0 on success and -1 on error with no exception set (unless one of + the indices was not :const:`None` and failed to be converted to an integer, + in which case -1 is returned with an exception set). + + You probably do not want to use this function. + + .. versionchanged:: 3.2 + The parameter type for the *slice* parameter was ``PySliceObject*`` + before. + + +.. c:function:: int PySlice_GetIndicesEx(PyObject *slice, Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step, Py_ssize_t *slicelength) + + Usable replacement for :c:func:`PySlice_GetIndices`. Retrieve the start, + stop, and step indices from the slice object *slice* assuming a sequence of + length *length*, and store the length of the slice in *slicelength*. Out + of bounds indices are clipped in a manner consistent with the handling of + normal slices. + + Returns 0 on success and -1 on error with exception set. + + .. versionchanged:: 3.2 + The parameter type for the *slice* parameter was ``PySliceObject*`` + before. diff --git a/lib/cpython-doc/c-api/structures.rst b/lib/cpython-doc/c-api/structures.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/structures.rst @@ -0,0 +1,284 @@ +.. highlightlang:: c + +.. 
_common-structs: + +Common Object Structures +======================== + +There are a large number of structures which are used in the definition of +object types for Python. This section describes these structures and how they +are used. + +All Python objects ultimately share a small number of fields at the beginning +of the object's representation in memory. These are represented by the +:c:type:`PyObject` and :c:type:`PyVarObject` types, which are defined, in turn, +by the expansions of some macros also used, whether directly or indirectly, in +the definition of all other Python objects. + + +.. c:type:: PyObject + + All object types are extensions of this type. This is a type which + contains the information Python needs to treat a pointer to an object as an + object. In a normal "release" build, it contains only the object's + reference count and a pointer to the corresponding type object. It + corresponds to the fields defined by the expansion of the ``PyObject_HEAD`` + macro. + + +.. c:type:: PyVarObject + + This is an extension of :c:type:`PyObject` that adds the :attr:`ob_size` + field. This is only used for objects that have some notion of *length*. + This type does not often appear in the Python/C API. It corresponds to the + fields defined by the expansion of the ``PyObject_VAR_HEAD`` macro. + +These macros are used in the definition of :c:type:`PyObject` and +:c:type:`PyVarObject`: + +.. XXX need to document PEP 3123 changes here + +.. c:macro:: PyObject_HEAD + + This is a macro which expands to the declarations of the fields of the + :c:type:`PyObject` type; it is used when declaring new types which represent + objects without a varying length. The specific fields it expands to depend + on the definition of :c:macro:`Py_TRACE_REFS`. 
By default, that macro is + not defined, and :c:macro:`PyObject_HEAD` expands to:: + + Py_ssize_t ob_refcnt; + PyTypeObject *ob_type; + + When :c:macro:`Py_TRACE_REFS` is defined, it expands to:: + + PyObject *_ob_next, *_ob_prev; + Py_ssize_t ob_refcnt; + PyTypeObject *ob_type; + + +.. c:macro:: PyObject_VAR_HEAD + + This is a macro which expands to the declarations of the fields of the + :c:type:`PyVarObject` type; it is used when declaring new types which + represent objects with a length that varies from instance to instance. + This macro always expands to:: + + PyObject_HEAD + Py_ssize_t ob_size; + + Note that :c:macro:`PyObject_HEAD` is part of the expansion, and that its own + expansion varies depending on the definition of :c:macro:`Py_TRACE_REFS`. + + +.. c:macro:: PyObject_HEAD_INIT(type) + + This is a macro which expands to initialization values for a new + :c:type:`PyObject` type. This macro expands to:: + + _PyObject_EXTRA_INIT + 1, type, + + +.. c:macro:: PyVarObject_HEAD_INIT(type, size) + + This is a macro which expands to initialization values for a new + :c:type:`PyVarObject` type, including the :attr:`ob_size` field. + This macro expands to:: + + _PyObject_EXTRA_INIT + 1, type, size, + + +.. c:type:: PyCFunction + + Type of the functions used to implement most Python callables in C. + Functions of this type take two :c:type:`PyObject\*` parameters and return + one such value. If the return value is *NULL*, an exception shall have + been set. If not *NULL*, the return value is interpreted as the return + value of the function as exposed in Python. The function must return a new + reference. + + +.. c:type:: PyCFunctionWithKeywords + + Type of the functions used to implement Python callables in C that take + keyword arguments: they take three :c:type:`PyObject\*` parameters and return + one such value. See :c:type:`PyCFunction` above for the meaning of the return + value. + + +.. 
c:type:: PyMethodDef + + Structure used to describe a method of an extension type. This structure has + four fields: + + +------------------+-------------+-------------------------------+ + | Field | C Type | Meaning | + +==================+=============+===============================+ + | :attr:`ml_name` | char \* | name of the method | + +------------------+-------------+-------------------------------+ + | :attr:`ml_meth` | PyCFunction | pointer to the C | + | | | implementation | + +------------------+-------------+-------------------------------+ + | :attr:`ml_flags` | int | flag bits indicating how the | + | | | call should be constructed | + +------------------+-------------+-------------------------------+ + | :attr:`ml_doc` | char \* | points to the contents of the | + | | | docstring | + +------------------+-------------+-------------------------------+ + +The :attr:`ml_meth` is a C function pointer. The functions may be of different +types, but they always return :c:type:`PyObject\*`. If the function is not of +the :c:type:`PyCFunction`, the compiler will require a cast in the method table. +Even though :c:type:`PyCFunction` defines the first parameter as +:c:type:`PyObject\*`, it is common that the method implementation uses the +specific C type of the *self* object. + +The :attr:`ml_flags` field is a bitfield which can include the following flags. +The individual flags indicate either a calling convention or a binding +convention. Of the calling convention flags, only :const:`METH_VARARGS` and +:const:`METH_KEYWORDS` can be combined (but note that :const:`METH_KEYWORDS` +alone is equivalent to ``METH_VARARGS | METH_KEYWORDS``). Any of the calling +convention flags can be combined with a binding flag. + + +.. data:: METH_VARARGS + + This is the typical calling convention, where the methods have the type + :c:type:`PyCFunction`. The function expects two :c:type:`PyObject\*` values. 
+ The first one is the *self* object for methods; for module functions, it is + the module object. The second parameter (often called *args*) is a tuple + object representing all arguments. This parameter is typically processed + using :c:func:`PyArg_ParseTuple` or :c:func:`PyArg_UnpackTuple`. + + +.. data:: METH_KEYWORDS + + Methods with these flags must be of type :c:type:`PyCFunctionWithKeywords`. + The function expects three parameters: *self*, *args*, and a dictionary of + all the keyword arguments. The flag is typically combined with + :const:`METH_VARARGS`, and the parameters are typically processed using + :c:func:`PyArg_ParseTupleAndKeywords`. + + +.. data:: METH_NOARGS + + Methods without parameters don't need to check whether arguments are given if + they are listed with the :const:`METH_NOARGS` flag. They need to be of type + :c:type:`PyCFunction`. The first parameter is typically named *self* and will + hold a reference to the module or object instance. In all cases the second + parameter will be *NULL*. + + +.. data:: METH_O + + Methods with a single object argument can be listed with the :const:`METH_O` + flag, instead of invoking :c:func:`PyArg_ParseTuple` with a ``"O"`` argument. + They have the type :c:type:`PyCFunction`, with the *self* parameter, and a + :c:type:`PyObject\*` parameter representing the single argument. + + +These two constants are not used to indicate the calling convention but the +binding when used with methods of classes. These may not be used for functions +defined for modules. At most one of these flags may be set for any given +method. + + +.. data:: METH_CLASS + + .. index:: builtin: classmethod + + The method will be passed the type object as the first parameter rather + than an instance of the type. This is used to create *class methods*, + similar to what is created when using the :func:`classmethod` built-in + function. + + +.. data:: METH_STATIC + + .. 
index:: builtin: staticmethod + + The method will be passed *NULL* as the first parameter rather than an + instance of the type. This is used to create *static methods*, similar to + what is created when using the :func:`staticmethod` built-in function. + +One other constant controls whether a method is loaded in place of another +definition with the same method name. + + +.. data:: METH_COEXIST + + The method will be loaded in place of existing definitions. Without + *METH_COEXIST*, the default is to skip repeated definitions. Since slot + wrappers are loaded before the method table, the existence of a + *sq_contains* slot, for example, would generate a wrapped method named + :meth:`__contains__` and preclude the loading of a corresponding + PyCFunction with the same name. With the flag defined, the PyCFunction + will be loaded in place of the wrapper object and will co-exist with the + slot. This is helpful because calls to PyCFunctions are optimized more + than wrapper object calls. + + +.. c:type:: PyMemberDef + + Structure which describes an attribute of a type which corresponds to a C + struct member. 
Its fields are: + + +------------------+-------------+-------------------------------+ + | Field | C Type | Meaning | + +==================+=============+===============================+ + | :attr:`name` | char \* | name of the member | + +------------------+-------------+-------------------------------+ + | :attr:`type` | int | the type of the member in the | + | | | C struct | + +------------------+-------------+-------------------------------+ + | :attr:`offset` | Py_ssize_t | the offset in bytes that the | + | | | member is located on the | + | | | type's object struct | + +------------------+-------------+-------------------------------+ + | :attr:`flags` | int | flag bits indicating if the | + | | | field should be read-only or | + | | | writable | + +------------------+-------------+-------------------------------+ + | :attr:`doc` | char \* | points to the contents of the | + | | | docstring | + +------------------+-------------+-------------------------------+ + + :attr:`type` can be one of many ``T_`` macros corresponding to various C + types. When the member is accessed in Python, it will be converted to the + equivalent Python type. + + =============== ================== + Macro name C type + =============== ================== + T_SHORT short + T_INT int + T_LONG long + T_FLOAT float + T_DOUBLE double + T_STRING char \* + T_OBJECT PyObject \* + T_OBJECT_EX PyObject \* + T_CHAR char + T_BYTE char + T_UBYTE unsigned char + T_UINT unsigned int + T_USHORT unsigned short + T_ULONG unsigned long + T_BOOL char + T_LONGLONG long long + T_ULONGLONG unsigned long long + T_PYSSIZET Py_ssize_t + =============== ================== + + :c:macro:`T_OBJECT` and :c:macro:`T_OBJECT_EX` differ in that + :c:macro:`T_OBJECT` returns ``None`` if the member is *NULL* and + :c:macro:`T_OBJECT_EX` raises an :exc:`AttributeError`. 
Try to use + :c:macro:`T_OBJECT_EX` over :c:macro:`T_OBJECT` because :c:macro:`T_OBJECT_EX` + handles use of the :keyword:`del` statement on that attribute more correctly + than :c:macro:`T_OBJECT`. + + :attr:`flags` can be 0 for write and read access or :c:macro:`READONLY` for + read-only access. Using :c:macro:`T_STRING` for :attr:`type` implies + :c:macro:`READONLY`. Only :c:macro:`T_OBJECT` and :c:macro:`T_OBJECT_EX` + members can be deleted. (They are set to *NULL*). diff --git a/lib/cpython-doc/c-api/sys.rst b/lib/cpython-doc/c-api/sys.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/sys.rst @@ -0,0 +1,186 @@ +.. highlightlang:: c + +.. _os: + +Operating System Utilities +========================== + + +.. c:function:: int Py_FdIsInteractive(FILE *fp, const char *filename) + + Return true (nonzero) if the standard I/O file *fp* with name *filename* is + deemed interactive. This is the case for files for which ``isatty(fileno(fp))`` + is true. If the global flag :c:data:`Py_InteractiveFlag` is true, this function + also returns true if the *filename* pointer is *NULL* or if the name is equal to + one of the strings ``''`` or ``'???'``. + + +.. c:function:: void PyOS_AfterFork() + + Function to update some internal state after a process fork; this should be + called in the new process if the Python interpreter will continue to be used. + If a new executable is loaded into the new process, this function does not need + to be called. + + +.. c:function:: int PyOS_CheckStack() + + Return true when the interpreter runs out of stack space. This is a reliable + check, but is only available when :const:`USE_STACKCHECK` is defined (currently + on Windows using the Microsoft Visual C++ compiler). :const:`USE_STACKCHECK` + will be defined automatically; you should never change the definition in your + own code. + + +.. c:function:: PyOS_sighandler_t PyOS_getsig(int i) + + Return the current signal handler for signal *i*. 
This is a thin wrapper around + either :c:func:`sigaction` or :c:func:`signal`. Do not call those functions + directly! :c:type:`PyOS_sighandler_t` is a typedef alias for :c:type:`void + (\*)(int)`. + + +.. c:function:: PyOS_sighandler_t PyOS_setsig(int i, PyOS_sighandler_t h) + + Set the signal handler for signal *i* to be *h*; return the old signal handler. + This is a thin wrapper around either :c:func:`sigaction` or :c:func:`signal`. Do + not call those functions directly! :c:type:`PyOS_sighandler_t` is a typedef + alias for :c:type:`void (\*)(int)`. + +.. _systemfunctions: + +System Functions +================ + +These are utility functions that make functionality from the :mod:`sys` module +accessible to C code. They all work with the current interpreter thread's +:mod:`sys` module's dict, which is contained in the internal thread state structure. + +.. c:function:: PyObject *PySys_GetObject(char *name) + + Return the object *name* from the :mod:`sys` module or *NULL* if it does + not exist, without setting an exception. + +.. c:function:: FILE *PySys_GetFile(char *name, FILE *def) + + Return the :c:type:`FILE*` associated with the object *name* in the + :mod:`sys` module, or *def* if *name* is not in the module or is not associated + with a :c:type:`FILE*`. + +.. c:function:: int PySys_SetObject(char *name, PyObject *v) + + Set *name* in the :mod:`sys` module to *v* unless *v* is *NULL*, in which + case *name* is deleted from the sys module. Returns ``0`` on success, ``-1`` + on error. + +.. c:function:: void PySys_ResetWarnOptions() + + Reset :data:`sys.warnoptions` to an empty list. + +.. c:function:: void PySys_AddWarnOption(wchar_t *s) + + Append *s* to :data:`sys.warnoptions`. + +.. c:function:: void PySys_AddWarnOptionUnicode(PyObject *unicode) + + Append *unicode* to :data:`sys.warnoptions`. + +.. 
c:function:: void PySys_SetPath(wchar_t *path) + + Set :data:`sys.path` to a list object of paths found in *path* which should + be a list of paths separated with the platform's search path delimiter + (``:`` on Unix, ``;`` on Windows). + +.. c:function:: void PySys_WriteStdout(const char *format, ...) + + Write the output string described by *format* to :data:`sys.stdout`. No + exceptions are raised, even if truncation occurs (see below). + + *format* should limit the total size of the formatted output string to + 1000 bytes or less -- after 1000 bytes, the output string is truncated. + In particular, this means that no unrestricted "%s" formats should occur; + these should be limited using "%.<N>s" where <N> is a decimal number + calculated so that <N> plus the maximum size of other formatted text does not + exceed 1000 bytes. Also watch out for "%f", which can print hundreds of + digits for very large numbers. + + If a problem occurs, or :data:`sys.stdout` is unset, the formatted message + is written to the real (C level) *stdout*. + +.. c:function:: void PySys_WriteStderr(const char *format, ...) + + As :c:func:`PySys_WriteStdout`, but write to :data:`sys.stderr` or *stderr* + instead. + +.. c:function:: void PySys_FormatStdout(const char *format, ...) + + Function similar to PySys_WriteStdout() but format the message using + :c:func:`PyUnicode_FromFormatV` and don't truncate the message to an + arbitrary length. + + .. versionadded:: 3.2 + +.. c:function:: void PySys_FormatStderr(const char *format, ...) + + As :c:func:`PySys_FormatStdout`, but write to :data:`sys.stderr` or *stderr* + instead. + + .. versionadded:: 3.2 + +.. c:function:: void PySys_AddXOption(const wchar_t *s) + + Parse *s* as a set of :option:`-X` options and add them to the current + options mapping as returned by :c:func:`PySys_GetXOptions`. + + .. versionadded:: 3.2 + +.. 
c:function:: PyObject *PySys_GetXOptions() + + Return the current dictionary of :option:`-X` options, similarly to + :data:`sys._xoptions`. On error, *NULL* is returned and an exception is + set. + + .. versionadded:: 3.2 + + +.. _processcontrol: + +Process Control +=============== + + +.. c:function:: void Py_FatalError(const char *message) + + .. index:: single: abort() + + Print a fatal error message and kill the process. No cleanup is performed. + This function should only be invoked when a condition is detected that would + make it dangerous to continue using the Python interpreter; e.g., when the + object administration appears to be corrupted. On Unix, the standard C library + function :c:func:`abort` is called which will attempt to produce a :file:`core` + file. + + +.. c:function:: void Py_Exit(int status) + + .. index:: + single: Py_Finalize() + single: exit() + + Exit the current process. This calls :c:func:`Py_Finalize` and then calls the + standard C library function ``exit(status)``. + + +.. c:function:: int Py_AtExit(void (*func) ()) + + .. index:: + single: Py_Finalize() + single: cleanup functions + + Register a cleanup function to be called by :c:func:`Py_Finalize`. The cleanup + function will be called with no arguments and should return no value. At most + 32 cleanup functions can be registered. When the registration is successful, + :c:func:`Py_AtExit` returns ``0``; on failure, it returns ``-1``. The cleanup + function registered last is called first. Each cleanup function will be called + at most once. Since Python's internal finalization will have completed before + the cleanup function, no Python APIs should be called by *func*. diff --git a/lib/cpython-doc/c-api/tuple.rst b/lib/cpython-doc/c-api/tuple.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/tuple.rst @@ -0,0 +1,110 @@ +.. highlightlang:: c + +.. _tupleobjects: + +Tuple Objects +------------- + +.. index:: object: tuple + + +.. 
c:type:: PyTupleObject + + This subtype of :c:type:`PyObject` represents a Python tuple object. + + +.. c:var:: PyTypeObject PyTuple_Type + + This instance of :c:type:`PyTypeObject` represents the Python tuple type; it + is the same object as :class:`tuple` in the Python layer. + + +.. c:function:: int PyTuple_Check(PyObject *p) + + Return true if *p* is a tuple object or an instance of a subtype of the tuple + type. + + +.. c:function:: int PyTuple_CheckExact(PyObject *p) + + Return true if *p* is a tuple object, but not an instance of a subtype of the + tuple type. + + +.. c:function:: PyObject* PyTuple_New(Py_ssize_t len) + + Return a new tuple object of size *len*, or *NULL* on failure. + + +.. c:function:: PyObject* PyTuple_Pack(Py_ssize_t n, ...) + + Return a new tuple object of size *n*, or *NULL* on failure. The tuple values + are initialized to the subsequent *n* C arguments pointing to Python objects. + ``PyTuple_Pack(2, a, b)`` is equivalent to ``Py_BuildValue("(OO)", a, b)``. + + +.. c:function:: Py_ssize_t PyTuple_Size(PyObject *p) + + Take a pointer to a tuple object, and return the size of that tuple. + + +.. c:function:: Py_ssize_t PyTuple_GET_SIZE(PyObject *p) + + Return the size of the tuple *p*, which must be non-*NULL* and point to a tuple; + no error checking is performed. + + +.. c:function:: PyObject* PyTuple_GetItem(PyObject *p, Py_ssize_t pos) + + Return the object at position *pos* in the tuple pointed to by *p*. If *pos* is + out of bounds, return *NULL* and set an :exc:`IndexError` exception. + + +.. c:function:: PyObject* PyTuple_GET_ITEM(PyObject *p, Py_ssize_t pos) + + Like :c:func:`PyTuple_GetItem`, but does no checking of its arguments. + + +.. c:function:: PyObject* PyTuple_GetSlice(PyObject *p, Py_ssize_t low, Py_ssize_t high) + + Take a slice of the tuple pointed to by *p* from *low* to *high* and return it + as a new tuple. + + +.. 
c:function:: int PyTuple_SetItem(PyObject *p, Py_ssize_t pos, PyObject *o) + + Insert a reference to object *o* at position *pos* of the tuple pointed to by + *p*. Return ``0`` on success. + + .. note:: + + This function "steals" a reference to *o*. + + +.. c:function:: void PyTuple_SET_ITEM(PyObject *p, Py_ssize_t pos, PyObject *o) + + Like :c:func:`PyTuple_SetItem`, but does no error checking, and should *only* be + used to fill in brand new tuples. + + .. note:: + + This function "steals" a reference to *o*. + + +.. c:function:: int _PyTuple_Resize(PyObject **p, Py_ssize_t newsize) + + Can be used to resize a tuple. *newsize* will be the new length of the tuple. + Because tuples are *supposed* to be immutable, this should only be used if there + is only one reference to the object. Do *not* use this if the tuple may already + be known to some other part of the code. The tuple will always grow or shrink + at the end. Think of this as destroying the old tuple and creating a new one, + only more efficiently. Returns ``0`` on success. Client code should never + assume that the resulting value of ``*p`` will be the same as before calling + this function. If the object referenced by ``*p`` is replaced, the original + ``*p`` is destroyed. On failure, returns ``-1`` and sets ``*p`` to *NULL*, and + raises :exc:`MemoryError` or :exc:`SystemError`. + + +.. c:function:: int PyTuple_ClearFreeList() + + Clear the free list. Return the total number of freed items. diff --git a/lib/cpython-doc/c-api/type.rst b/lib/cpython-doc/c-api/type.rst new file mode 100644 --- /dev/null +++ b/lib/cpython-doc/c-api/type.rst @@ -0,0 +1,86 @@ +.. highlightlang:: c + +.. _typeobjects: + +Type Objects +------------ + +.. index:: object: type + + +.. c:type:: PyTypeObject + + The C structure of the objects used to describe built-in types. + + +.. c:var:: PyObject* PyType_Type + + This is the type object for type objects; it is the same object as + :class:`type` in the Python layer. + + +.. 
c:function:: int PyType_Check(PyObject *o) + + Return true if the object *o* is a type object, including instances of types + derived from the standard type object. Return false in all other cases. + + +.. c:function:: int PyType_CheckExact(PyObject *o) + + Return true if the object *o* is a type object, but not a subtype of the + standard type object. Return false in all other cases. + + +.. c:function:: unsigned int PyType_ClearCache() + + Clear the internal lookup cache. Return the current version tag. + +.. c:function:: long PyType_GetFlags(PyTypeObject* type) + + Return the :attr:`tp_flags` member of *type*. This function is primarily + meant for use with `Py_LIMITED_API`; the individual flag bits are + guaranteed to be stable across Python releases, but access to + :attr:`tp_flags` itself is not part of the limited API. + + .. versionadded:: 3.2 + +.. c:function:: void PyType_Modified(PyTypeObject *type) + + Invalidate the internal lookup cache for the type and all of its + subtypes. This function must be called after any manual + modification of the attributes or base classes of the type. + + +.. c:function:: int PyType_HasFeature(PyObject *o, int feature) + + Return true if the type object *o* sets the feature *feature*. Type features + are denoted by single bit flags. + + +.. c:function:: int PyType_IS_GC(PyObject *o) + + Return true if the type object includes support for the cycle detector; this + tests the type flag :const:`Py_TPFLAGS_HAVE_GC`. + + +.. c:function:: int PyType_IsSubtype(PyTypeObject *a, PyTypeObject *b) + + Return true if *a* is a subtype of *b*. + + +.. c:function:: PyObject* PyType_GenericAlloc(PyTypeObject *type, Py_ssize_t nitems) + + XXX: Document. + + +.. c:function:: PyObject* PyType_GenericNew(PyTypeObject *type, PyObject *args, PyObject *kwds) + + XXX: Document. + + +.. c:function:: int PyType_Ready(PyTypeObject *type) + + Finalize a type object. This should be called on all type objects to finish + their initialization. 
This function is responsible for adding inherited slots + from a type's base class. Return ``0`` on success, or return ``-1`` and set an + exception on error. diff --git a/lib/cpython-doc/c-api/typeobj.rst b/lib/cpython-doc/c-api/typeobj.rst new file mode 100644 From noreply at buildbot.pypy.org Tue Jan 17 17:21:53 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 17:21:53 +0100 (CET) Subject: [pypy-commit] benchmarks default: merge Message-ID: <20120117162153.39E61710260@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r158:77aa6fd8e4ec Date: 2012-01-17 18:08 +0200 http://bitbucket.org/pypy/benchmarks/changeset/77aa6fd8e4ec/ Log: merge diff --git a/own/bm_gzip.py b/own/bm_gzip.py new file mode 100644 --- /dev/null +++ b/own/bm_gzip.py @@ -0,0 +1,45 @@ +import os +import time +import shutil +import tempfile +import tarfile + +DJANGO_DIR = os.path.join(os.path.dirname(__file__), os.pardir, + 'unladen_swallow', 'lib', 'django') + +def _bootstrap(): + fd, archive = tempfile.mkstemp() + os.close(fd) + with tarfile.open(archive, 'w:gz') as targz: + targz.add(DJANGO_DIR) + return archive + +def bench(archive): + dest = tempfile.mkdtemp() + try: + with tarfile.open(archive) as targz: + targz.extractall(dest) + finally: + shutil.rmtree(dest) + +def main(n): + archive = _bootstrap() + try: + times = [] + for k in range(n): + t0 = time.time() + bench(archive) + times.append(time.time() - t0) + return times + finally: + os.remove(archive) + +if __name__ == '__main__': + import util, optparse + parser = optparse.OptionParser( + usage="%prog [options]", + description="Test the performance of the GZip decompression benchmark") + util.add_standard_options_to(parser) + options, args = parser.parse_args() + + util.run_benchmark(options, options.num_runs, main) diff --git a/runner.py b/runner.py --- a/runner.py +++ b/runner.py @@ -80,7 +80,7 @@ default=','.join(BENCHMARK_SET), help=("Comma-separated list of benchmarks to run" " Valid 
benchmarks are: " + - ", ".join(BENCHMARK_SET))) + ", ".join(sorted(BENCHMARK_SET)))) parser.add_option('-p', '--pypy-c', default=sys.executable, help='pypy-c or other modified python to run against') parser.add_option('-r', '--revision', default=0, action="store", From noreply at buildbot.pypy.org Tue Jan 17 17:28:24 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 17 Jan 2012 17:28:24 +0100 (CET) Subject: [pypy-commit] pypy py3k: bah, if we inherit we run all the test twice Message-ID: <20120117162824.BF011710260@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51408:46e3b0574cc8 Date: 2012-01-17 17:28 +0100 http://bitbucket.org/pypy/pypy/changeset/46e3b0574cc8/ Log: bah, if we inherit we run all the test twice diff --git a/pypy/interpreter/pyparser/test/test_pyparse.py b/pypy/interpreter/pyparser/test/test_pyparse.py --- a/pypy/interpreter/pyparser/test/test_pyparse.py +++ b/pypy/interpreter/pyparser/test/test_pyparse.py @@ -128,11 +128,16 @@ self.parse('(a, *rest, b) = 1, 2, 3, 4, 5') -class TestPythonParserWithSpace(TestPythonParserWithoutSpace): +class TestPythonParserWithSpace: def setup_class(self): self.parser = pyparse.PythonParser(self.space) + def parse(self, source, mode="exec", info=None): + if info is None: + info = pyparse.CompileInfo("", mode) + return self.parser.parse_source(source, info) + def test_encoding(self): info = pyparse.CompileInfo("", "exec") tree = self.parse("""# coding: latin-1 From noreply at buildbot.pypy.org Tue Jan 17 17:30:36 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Tue, 17 Jan 2012 17:30:36 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain) port the py3k grammar for the new tuple unpacking (with parenthesis) Message-ID: <20120117163036.B5F58710260@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51409:e110ee00c88b Date: 2012-01-17 17:30 +0100 http://bitbucket.org/pypy/pypy/changeset/e110ee00c88b/ Log: (antocuni, romain) 
port the py3k grammar for the new tuple unpacking (with parenthesis) diff --git a/pypy/interpreter/pyparser/data/Grammar3.2 b/pypy/interpreter/pyparser/data/Grammar3.2 --- a/pypy/interpreter/pyparser/data/Grammar3.2 +++ b/pypy/interpreter/pyparser/data/Grammar3.2 @@ -104,11 +104,10 @@ factor: ('+'|'-'|'~') factor | power power: atom trailer* ['**' factor] atom: ('(' [yield_expr|testlist_comp] ')' | - '[' [listmaker] ']' | + '[' [testlist_comp] ']' | '{' [dictorsetmaker] '}' | NAME | NUMBER | STRING+) -listmaker: test ( list_for | (',' test)* [','] ) -testlist_comp: test ( comp_for | (',' test)* [','] ) +testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] ) lambdef: 'lambda' [varargslist] ':' test trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME subscriptlist: subscript (',' subscript)* [','] From noreply at buildbot.pypy.org Tue Jan 17 17:37:52 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 17:37:52 +0100 (CET) Subject: [pypy-commit] benchmarks default: fix the benchmark Message-ID: <20120117163752.DC12D820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r159:c01ab59c2a28 Date: 2012-01-17 17:37 +0100 http://bitbucket.org/pypy/benchmarks/changeset/c01ab59c2a28/ Log: fix the benchmark diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -1,2 +1,3 @@ .*\.py[co] -.*~ \ No newline at end of file +.*~ +lib/cpython-doc/tools/build diff --git a/benchmarks.py b/benchmarks.py --- a/benchmarks.py +++ b/benchmarks.py @@ -161,14 +161,14 @@ htmldir = os.path.join(builddir, 'html') os.mkdir(htmldir) args = base_python + [build, '-b', 'html', '-d', docdir, maindir, htmldir] - proc = subprocess.Popen(args, stderr=subprocess.PIPE) + proc = subprocess.Popen(args, stderr=subprocess.PIPE, stdout=subprocess.PIPE) out, err = proc.communicate() retcode = proc.poll() if retcode != 0: print out print err raise Exception("sphinx-build.py failed") - 
t.append(float(out.splitlines(-1).strip())) + t.append(float(out.splitlines()[-1])) return RawResult([t[0]], [t[1]]) BM_cpython_doc.benchmark_name = 'sphinx' diff --git a/lib/cpython-doc/tools/build/doctrees/about.doctree b/lib/cpython-doc/tools/build/doctrees/about.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/about.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/bugs.doctree b/lib/cpython-doc/tools/build/doctrees/bugs.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/bugs.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/abstract.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/abstract.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/abstract.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/allocation.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/allocation.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/allocation.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/arg.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/arg.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/arg.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/bool.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/bool.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/bool.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/buffer.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/buffer.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/buffer.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/bytearray.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/bytearray.doctree deleted file mode 100644 Binary file 
lib/cpython-doc/tools/build/doctrees/c-api/bytearray.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/bytes.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/bytes.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/bytes.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/capsule.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/capsule.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/capsule.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/cell.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/cell.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/cell.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/code.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/code.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/code.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/codec.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/codec.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/codec.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/complex.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/complex.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/complex.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/concrete.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/concrete.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/concrete.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/conversion.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/conversion.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/c-api/conversion.doctree has changed diff --git 
a/lib/cpython-doc/tools/build/doctrees/c-api/datetime.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/datetime.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/datetime.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/descriptor.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/descriptor.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/descriptor.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/dict.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/dict.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/dict.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/exceptions.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/exceptions.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/exceptions.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/file.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/file.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/file.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/float.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/float.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/float.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/function.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/function.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/function.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/gcsupport.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/gcsupport.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/gcsupport.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/gen.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/gen.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/gen.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/import.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/import.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/import.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/index.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/index.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/index.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/init.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/init.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/init.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/intro.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/intro.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/intro.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/iter.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/iter.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/iter.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/iterator.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/iterator.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/iterator.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/list.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/list.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/list.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/long.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/long.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/long.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/mapping.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/mapping.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/mapping.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/marshal.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/marshal.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/marshal.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/memory.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/memory.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/memory.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/memoryview.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/memoryview.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/memoryview.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/method.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/method.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/method.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/module.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/module.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/module.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/none.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/none.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/none.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/number.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/number.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/number.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/objbuffer.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/objbuffer.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/objbuffer.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/object.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/object.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/object.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/objimpl.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/objimpl.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/objimpl.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/refcounting.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/refcounting.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/refcounting.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/reflection.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/reflection.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/reflection.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/sequence.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/sequence.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/sequence.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/set.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/set.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/set.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/slice.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/slice.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/slice.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/structures.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/structures.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/structures.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/sys.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/sys.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/sys.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/tuple.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/tuple.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/tuple.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/type.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/type.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/type.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/typeobj.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/typeobj.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/typeobj.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/unicode.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/unicode.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/unicode.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/utilities.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/utilities.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/utilities.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/veryhigh.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/veryhigh.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/veryhigh.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/c-api/weakref.doctree b/lib/cpython-doc/tools/build/doctrees/c-api/weakref.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/c-api/weakref.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/contents.doctree b/lib/cpython-doc/tools/build/doctrees/contents.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/contents.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/copyright.doctree b/lib/cpython-doc/tools/build/doctrees/copyright.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/copyright.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/apiref.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/apiref.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/apiref.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/builtdist.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/builtdist.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/builtdist.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/commandref.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/commandref.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/commandref.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/configfile.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/configfile.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/configfile.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/examples.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/examples.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/examples.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/extending.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/extending.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/extending.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/index.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/index.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/index.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/install.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/install.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/install.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/introduction.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/introduction.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/introduction.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/packageindex.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/packageindex.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/packageindex.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/setupscript.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/setupscript.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/setupscript.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/sourcedist.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/sourcedist.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/sourcedist.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/distutils/uploading.doctree b/lib/cpython-doc/tools/build/doctrees/distutils/uploading.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/distutils/uploading.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/extending/building.doctree b/lib/cpython-doc/tools/build/doctrees/extending/building.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/extending/building.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/extending/embedding.doctree b/lib/cpython-doc/tools/build/doctrees/extending/embedding.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/extending/embedding.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/extending/extending.doctree b/lib/cpython-doc/tools/build/doctrees/extending/extending.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/extending/extending.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/extending/index.doctree b/lib/cpython-doc/tools/build/doctrees/extending/index.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/extending/index.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/extending/newtypes.doctree b/lib/cpython-doc/tools/build/doctrees/extending/newtypes.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/extending/newtypes.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/extending/windows.doctree b/lib/cpython-doc/tools/build/doctrees/extending/windows.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/extending/windows.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/faq/design.doctree b/lib/cpython-doc/tools/build/doctrees/faq/design.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/faq/design.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/faq/extending.doctree b/lib/cpython-doc/tools/build/doctrees/faq/extending.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/faq/extending.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/faq/general.doctree b/lib/cpython-doc/tools/build/doctrees/faq/general.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/faq/general.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/faq/gui.doctree b/lib/cpython-doc/tools/build/doctrees/faq/gui.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/faq/gui.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/faq/index.doctree b/lib/cpython-doc/tools/build/doctrees/faq/index.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/faq/index.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/faq/installed.doctree b/lib/cpython-doc/tools/build/doctrees/faq/installed.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/faq/installed.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/faq/library.doctree b/lib/cpython-doc/tools/build/doctrees/faq/library.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/faq/library.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/faq/programming.doctree b/lib/cpython-doc/tools/build/doctrees/faq/programming.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/faq/programming.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/faq/windows.doctree b/lib/cpython-doc/tools/build/doctrees/faq/windows.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/faq/windows.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/glossary.doctree b/lib/cpython-doc/tools/build/doctrees/glossary.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/glossary.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/advocacy.doctree b/lib/cpython-doc/tools/build/doctrees/howto/advocacy.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/advocacy.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/cporting.doctree b/lib/cpython-doc/tools/build/doctrees/howto/cporting.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/cporting.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/curses.doctree b/lib/cpython-doc/tools/build/doctrees/howto/curses.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/curses.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/descriptor.doctree b/lib/cpython-doc/tools/build/doctrees/howto/descriptor.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/descriptor.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/functional.doctree b/lib/cpython-doc/tools/build/doctrees/howto/functional.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/functional.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/index.doctree b/lib/cpython-doc/tools/build/doctrees/howto/index.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/index.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/logging-cookbook.doctree b/lib/cpython-doc/tools/build/doctrees/howto/logging-cookbook.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/logging-cookbook.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/logging.doctree b/lib/cpython-doc/tools/build/doctrees/howto/logging.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/logging.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/pyporting.doctree b/lib/cpython-doc/tools/build/doctrees/howto/pyporting.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/pyporting.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/regex.doctree b/lib/cpython-doc/tools/build/doctrees/howto/regex.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/regex.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/sockets.doctree b/lib/cpython-doc/tools/build/doctrees/howto/sockets.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/sockets.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/sorting.doctree b/lib/cpython-doc/tools/build/doctrees/howto/sorting.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/sorting.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/unicode.doctree b/lib/cpython-doc/tools/build/doctrees/howto/unicode.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/unicode.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/urllib2.doctree b/lib/cpython-doc/tools/build/doctrees/howto/urllib2.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/urllib2.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/howto/webservers.doctree b/lib/cpython-doc/tools/build/doctrees/howto/webservers.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/howto/webservers.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/install/index.doctree b/lib/cpython-doc/tools/build/doctrees/install/index.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/install/index.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/install/install.doctree b/lib/cpython-doc/tools/build/doctrees/install/install.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/install/install.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/install/pysetup-config.doctree b/lib/cpython-doc/tools/build/doctrees/install/pysetup-config.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/install/pysetup-config.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/install/pysetup-servers.doctree b/lib/cpython-doc/tools/build/doctrees/install/pysetup-servers.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/install/pysetup-servers.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/install/pysetup.doctree b/lib/cpython-doc/tools/build/doctrees/install/pysetup.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/install/pysetup.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/2to3.doctree b/lib/cpython-doc/tools/build/doctrees/library/2to3.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/2to3.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/__future__.doctree b/lib/cpython-doc/tools/build/doctrees/library/__future__.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/__future__.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/__main__.doctree b/lib/cpython-doc/tools/build/doctrees/library/__main__.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/__main__.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/_dummy_thread.doctree b/lib/cpython-doc/tools/build/doctrees/library/_dummy_thread.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/_dummy_thread.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/_thread.doctree b/lib/cpython-doc/tools/build/doctrees/library/_thread.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/_thread.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/abc.doctree b/lib/cpython-doc/tools/build/doctrees/library/abc.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/abc.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/aifc.doctree b/lib/cpython-doc/tools/build/doctrees/library/aifc.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/aifc.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/allos.doctree b/lib/cpython-doc/tools/build/doctrees/library/allos.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/allos.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/archiving.doctree b/lib/cpython-doc/tools/build/doctrees/library/archiving.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/archiving.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/argparse.doctree b/lib/cpython-doc/tools/build/doctrees/library/argparse.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/argparse.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/array.doctree b/lib/cpython-doc/tools/build/doctrees/library/array.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/array.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/ast.doctree b/lib/cpython-doc/tools/build/doctrees/library/ast.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/ast.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/asynchat.doctree b/lib/cpython-doc/tools/build/doctrees/library/asynchat.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/asynchat.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/asyncore.doctree b/lib/cpython-doc/tools/build/doctrees/library/asyncore.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/asyncore.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/atexit.doctree b/lib/cpython-doc/tools/build/doctrees/library/atexit.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/atexit.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/audioop.doctree b/lib/cpython-doc/tools/build/doctrees/library/audioop.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/audioop.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/base64.doctree b/lib/cpython-doc/tools/build/doctrees/library/base64.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/base64.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/bdb.doctree b/lib/cpython-doc/tools/build/doctrees/library/bdb.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/bdb.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/binascii.doctree b/lib/cpython-doc/tools/build/doctrees/library/binascii.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/binascii.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/binhex.doctree b/lib/cpython-doc/tools/build/doctrees/library/binhex.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/binhex.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/bisect.doctree b/lib/cpython-doc/tools/build/doctrees/library/bisect.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/bisect.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/builtins.doctree b/lib/cpython-doc/tools/build/doctrees/library/builtins.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/builtins.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/bz2.doctree b/lib/cpython-doc/tools/build/doctrees/library/bz2.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/bz2.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/calendar.doctree b/lib/cpython-doc/tools/build/doctrees/library/calendar.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/calendar.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/cgi.doctree b/lib/cpython-doc/tools/build/doctrees/library/cgi.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/cgi.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/cgitb.doctree b/lib/cpython-doc/tools/build/doctrees/library/cgitb.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/cgitb.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/chunk.doctree b/lib/cpython-doc/tools/build/doctrees/library/chunk.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/chunk.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/cmath.doctree b/lib/cpython-doc/tools/build/doctrees/library/cmath.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/cmath.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/cmd.doctree b/lib/cpython-doc/tools/build/doctrees/library/cmd.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/cmd.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/code.doctree b/lib/cpython-doc/tools/build/doctrees/library/code.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/code.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/codecs.doctree b/lib/cpython-doc/tools/build/doctrees/library/codecs.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/codecs.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/codeop.doctree b/lib/cpython-doc/tools/build/doctrees/library/codeop.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/codeop.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/collections.abc.doctree b/lib/cpython-doc/tools/build/doctrees/library/collections.abc.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/collections.abc.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/collections.doctree b/lib/cpython-doc/tools/build/doctrees/library/collections.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/collections.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/colorsys.doctree b/lib/cpython-doc/tools/build/doctrees/library/colorsys.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/colorsys.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/compileall.doctree b/lib/cpython-doc/tools/build/doctrees/library/compileall.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/compileall.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/concurrent.futures.doctree b/lib/cpython-doc/tools/build/doctrees/library/concurrent.futures.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/concurrent.futures.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/configparser.doctree b/lib/cpython-doc/tools/build/doctrees/library/configparser.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/configparser.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/constants.doctree b/lib/cpython-doc/tools/build/doctrees/library/constants.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/constants.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/contextlib.doctree b/lib/cpython-doc/tools/build/doctrees/library/contextlib.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/contextlib.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/copy.doctree b/lib/cpython-doc/tools/build/doctrees/library/copy.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/copy.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/copyreg.doctree b/lib/cpython-doc/tools/build/doctrees/library/copyreg.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/copyreg.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/crypt.doctree b/lib/cpython-doc/tools/build/doctrees/library/crypt.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/crypt.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/crypto.doctree b/lib/cpython-doc/tools/build/doctrees/library/crypto.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/crypto.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/csv.doctree b/lib/cpython-doc/tools/build/doctrees/library/csv.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/csv.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/ctypes.doctree b/lib/cpython-doc/tools/build/doctrees/library/ctypes.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/ctypes.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/curses.ascii.doctree b/lib/cpython-doc/tools/build/doctrees/library/curses.ascii.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/curses.ascii.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/curses.doctree b/lib/cpython-doc/tools/build/doctrees/library/curses.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/curses.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/curses.panel.doctree b/lib/cpython-doc/tools/build/doctrees/library/curses.panel.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/curses.panel.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/custominterp.doctree b/lib/cpython-doc/tools/build/doctrees/library/custominterp.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/custominterp.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/datatypes.doctree b/lib/cpython-doc/tools/build/doctrees/library/datatypes.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/datatypes.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/datetime.doctree b/lib/cpython-doc/tools/build/doctrees/library/datetime.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/datetime.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/dbm.doctree b/lib/cpython-doc/tools/build/doctrees/library/dbm.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/dbm.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/debug.doctree b/lib/cpython-doc/tools/build/doctrees/library/debug.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/debug.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/decimal.doctree b/lib/cpython-doc/tools/build/doctrees/library/decimal.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/decimal.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/development.doctree b/lib/cpython-doc/tools/build/doctrees/library/development.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/development.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/difflib.doctree b/lib/cpython-doc/tools/build/doctrees/library/difflib.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/difflib.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/dis.doctree b/lib/cpython-doc/tools/build/doctrees/library/dis.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/dis.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/distutils.doctree b/lib/cpython-doc/tools/build/doctrees/library/distutils.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/distutils.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/doctest.doctree b/lib/cpython-doc/tools/build/doctrees/library/doctest.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/doctest.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/dummy_threading.doctree b/lib/cpython-doc/tools/build/doctrees/library/dummy_threading.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/dummy_threading.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/email-examples.doctree b/lib/cpython-doc/tools/build/doctrees/library/email-examples.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/email-examples.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.charset.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.charset.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/email.charset.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/email.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.encoders.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.encoders.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/email.encoders.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.errors.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.errors.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/email.errors.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.generator.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.generator.doctree
deleted file mode 100644
Binary file lib/cpython-doc/tools/build/doctrees/library/email.generator.doctree has changed
diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.header.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.header.doctree
deleted file mode
100644 Binary file lib/cpython-doc/tools/build/doctrees/library/email.header.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.iterators.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.iterators.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/email.iterators.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.message.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.message.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/email.message.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.mime.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.mime.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/email.mime.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.parser.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.parser.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/email.parser.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.policy.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.policy.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/email.policy.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/email.util.doctree b/lib/cpython-doc/tools/build/doctrees/library/email.util.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/email.util.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/errno.doctree b/lib/cpython-doc/tools/build/doctrees/library/errno.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/errno.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/exceptions.doctree 
b/lib/cpython-doc/tools/build/doctrees/library/exceptions.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/exceptions.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/faulthandler.doctree b/lib/cpython-doc/tools/build/doctrees/library/faulthandler.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/faulthandler.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/fcntl.doctree b/lib/cpython-doc/tools/build/doctrees/library/fcntl.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/fcntl.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/filecmp.doctree b/lib/cpython-doc/tools/build/doctrees/library/filecmp.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/filecmp.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/fileformats.doctree b/lib/cpython-doc/tools/build/doctrees/library/fileformats.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/fileformats.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/fileinput.doctree b/lib/cpython-doc/tools/build/doctrees/library/fileinput.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/fileinput.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/filesys.doctree b/lib/cpython-doc/tools/build/doctrees/library/filesys.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/filesys.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/fnmatch.doctree b/lib/cpython-doc/tools/build/doctrees/library/fnmatch.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/fnmatch.doctree has changed diff --git 
a/lib/cpython-doc/tools/build/doctrees/library/formatter.doctree b/lib/cpython-doc/tools/build/doctrees/library/formatter.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/formatter.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/fpectl.doctree b/lib/cpython-doc/tools/build/doctrees/library/fpectl.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/fpectl.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/fractions.doctree b/lib/cpython-doc/tools/build/doctrees/library/fractions.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/fractions.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/frameworks.doctree b/lib/cpython-doc/tools/build/doctrees/library/frameworks.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/frameworks.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/ftplib.doctree b/lib/cpython-doc/tools/build/doctrees/library/ftplib.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/ftplib.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/functional.doctree b/lib/cpython-doc/tools/build/doctrees/library/functional.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/functional.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/functions.doctree b/lib/cpython-doc/tools/build/doctrees/library/functions.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/functions.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/functools.doctree b/lib/cpython-doc/tools/build/doctrees/library/functools.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/functools.doctree has 
changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/gc.doctree b/lib/cpython-doc/tools/build/doctrees/library/gc.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/gc.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/getopt.doctree b/lib/cpython-doc/tools/build/doctrees/library/getopt.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/getopt.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/getpass.doctree b/lib/cpython-doc/tools/build/doctrees/library/getpass.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/getpass.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/gettext.doctree b/lib/cpython-doc/tools/build/doctrees/library/gettext.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/gettext.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/glob.doctree b/lib/cpython-doc/tools/build/doctrees/library/glob.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/glob.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/grp.doctree b/lib/cpython-doc/tools/build/doctrees/library/grp.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/grp.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/gzip.doctree b/lib/cpython-doc/tools/build/doctrees/library/gzip.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/gzip.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/hashlib.doctree b/lib/cpython-doc/tools/build/doctrees/library/hashlib.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/hashlib.doctree has changed diff --git 
a/lib/cpython-doc/tools/build/doctrees/library/heapq.doctree b/lib/cpython-doc/tools/build/doctrees/library/heapq.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/heapq.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/hmac.doctree b/lib/cpython-doc/tools/build/doctrees/library/hmac.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/hmac.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/html.doctree b/lib/cpython-doc/tools/build/doctrees/library/html.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/html.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/html.entities.doctree b/lib/cpython-doc/tools/build/doctrees/library/html.entities.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/html.entities.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/html.parser.doctree b/lib/cpython-doc/tools/build/doctrees/library/html.parser.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/html.parser.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/http.client.doctree b/lib/cpython-doc/tools/build/doctrees/library/http.client.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/http.client.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/http.cookiejar.doctree b/lib/cpython-doc/tools/build/doctrees/library/http.cookiejar.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/http.cookiejar.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/http.cookies.doctree b/lib/cpython-doc/tools/build/doctrees/library/http.cookies.doctree deleted file mode 100644 Binary file 
lib/cpython-doc/tools/build/doctrees/library/http.cookies.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/http.server.doctree b/lib/cpython-doc/tools/build/doctrees/library/http.server.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/http.server.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/i18n.doctree b/lib/cpython-doc/tools/build/doctrees/library/i18n.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/i18n.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/idle.doctree b/lib/cpython-doc/tools/build/doctrees/library/idle.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/idle.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/imaplib.doctree b/lib/cpython-doc/tools/build/doctrees/library/imaplib.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/imaplib.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/imghdr.doctree b/lib/cpython-doc/tools/build/doctrees/library/imghdr.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/imghdr.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/imp.doctree b/lib/cpython-doc/tools/build/doctrees/library/imp.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/imp.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/importlib.doctree b/lib/cpython-doc/tools/build/doctrees/library/importlib.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/importlib.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/index.doctree b/lib/cpython-doc/tools/build/doctrees/library/index.doctree deleted file mode 100644 Binary file 
lib/cpython-doc/tools/build/doctrees/library/index.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/inspect.doctree b/lib/cpython-doc/tools/build/doctrees/library/inspect.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/inspect.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/internet.doctree b/lib/cpython-doc/tools/build/doctrees/library/internet.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/internet.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/intro.doctree b/lib/cpython-doc/tools/build/doctrees/library/intro.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/intro.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/io.doctree b/lib/cpython-doc/tools/build/doctrees/library/io.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/io.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/ipc.doctree b/lib/cpython-doc/tools/build/doctrees/library/ipc.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/ipc.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/itertools.doctree b/lib/cpython-doc/tools/build/doctrees/library/itertools.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/itertools.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/json.doctree b/lib/cpython-doc/tools/build/doctrees/library/json.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/json.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/keyword.doctree b/lib/cpython-doc/tools/build/doctrees/library/keyword.doctree deleted file mode 100644 Binary file 
lib/cpython-doc/tools/build/doctrees/library/keyword.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/language.doctree b/lib/cpython-doc/tools/build/doctrees/library/language.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/language.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/linecache.doctree b/lib/cpython-doc/tools/build/doctrees/library/linecache.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/linecache.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/locale.doctree b/lib/cpython-doc/tools/build/doctrees/library/locale.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/locale.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/logging.config.doctree b/lib/cpython-doc/tools/build/doctrees/library/logging.config.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/logging.config.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/logging.doctree b/lib/cpython-doc/tools/build/doctrees/library/logging.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/logging.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/logging.handlers.doctree b/lib/cpython-doc/tools/build/doctrees/library/logging.handlers.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/logging.handlers.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/lzma.doctree b/lib/cpython-doc/tools/build/doctrees/library/lzma.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/lzma.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/macpath.doctree b/lib/cpython-doc/tools/build/doctrees/library/macpath.doctree deleted file 
mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/macpath.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/mailbox.doctree b/lib/cpython-doc/tools/build/doctrees/library/mailbox.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/mailbox.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/mailcap.doctree b/lib/cpython-doc/tools/build/doctrees/library/mailcap.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/mailcap.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/markup.doctree b/lib/cpython-doc/tools/build/doctrees/library/markup.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/markup.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/marshal.doctree b/lib/cpython-doc/tools/build/doctrees/library/marshal.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/marshal.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/math.doctree b/lib/cpython-doc/tools/build/doctrees/library/math.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/math.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/mimetypes.doctree b/lib/cpython-doc/tools/build/doctrees/library/mimetypes.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/mimetypes.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/misc.doctree b/lib/cpython-doc/tools/build/doctrees/library/misc.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/misc.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/mm.doctree b/lib/cpython-doc/tools/build/doctrees/library/mm.doctree deleted file mode 100644 Binary file 
lib/cpython-doc/tools/build/doctrees/library/mm.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/mmap.doctree b/lib/cpython-doc/tools/build/doctrees/library/mmap.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/mmap.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/modulefinder.doctree b/lib/cpython-doc/tools/build/doctrees/library/modulefinder.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/modulefinder.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/modules.doctree b/lib/cpython-doc/tools/build/doctrees/library/modules.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/modules.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/msilib.doctree b/lib/cpython-doc/tools/build/doctrees/library/msilib.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/msilib.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/msvcrt.doctree b/lib/cpython-doc/tools/build/doctrees/library/msvcrt.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/msvcrt.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/multiprocessing.doctree b/lib/cpython-doc/tools/build/doctrees/library/multiprocessing.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/multiprocessing.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/netdata.doctree b/lib/cpython-doc/tools/build/doctrees/library/netdata.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/netdata.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/netrc.doctree b/lib/cpython-doc/tools/build/doctrees/library/netrc.doctree deleted file mode 100644 Binary file 
lib/cpython-doc/tools/build/doctrees/library/netrc.doctree has changed diff --git a/lib/cpython-doc/tools/build/doctrees/library/nis.doctree b/lib/cpython-doc/tools/build/doctrees/library/nis.doctree deleted file mode 100644 Binary file lib/cpython-doc/tools/build/doctrees/library/nis.doctree has changed From noreply at buildbot.pypy.org Tue Jan 17 18:51:41 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Tue, 17 Jan 2012 18:51:41 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain) Add support for the new unpacking at the ast level Message-ID: <20120117175141.BBC7D820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51410:fbf25d09b601 Date: 2012-01-17 18:51 +0100 http://bitbucket.org/pypy/pypy/changeset/fbf25d09b601/ Log: (antocuni,romain) Add support for the new unpacking at the ast level fixed list comprehension diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py --- a/pypy/interpreter/astcompiler/ast.py +++ b/pypy/interpreter/astcompiler/ast.py @@ -1507,6 +1507,11 @@ for node in self.comparators: node.sync_app_attrs(space) +class Starred(expr): + def __init__(self, value, ctx, lineno, col_offset): + self.value = value + self.ctx = ctx + expr.__init__(self, lineno, col_offset) class Call(expr): diff --git a/pypy/interpreter/astcompiler/astbuilder.py b/pypy/interpreter/astcompiler/astbuilder.py --- a/pypy/interpreter/astcompiler/astbuilder.py +++ b/pypy/interpreter/astcompiler/astbuilder.py @@ -680,7 +680,7 @@ self.set_context(target_expr, ast.Store) targets.append(target_expr) value_child = stmt.children[-1] - if value_child.type == syms.testlist or value_child.type == syms.testlist_star_expr: + if value_child.type == syms.testlist_star_expr: value_expr = self.handle_testlist(value_child) else: value_expr = self.handle_expr(value_child) @@ -740,6 +740,8 @@ operands.append(self.handle_expr(expr_node.children[i + 1])) return ast.Compare(expr, operators, operands, 
expr_node.lineno, expr_node.column) + elif expr_node_type == syms.star_expr: + return self.handle_star_expr(expr_node) elif expr_node_type == syms.expr or \ expr_node_type == syms.xor_expr or \ expr_node_type == syms.and_expr or \ @@ -766,6 +768,10 @@ else: raise AssertionError("unknown expr") + def handle_star_expr(self, star_expr_node): + expr = self.handle_expr(star_expr_node.children[1]) + return ast.Starred(expr, ast.Load, star_expr_node.lineno, star_expr_node.column) + def handle_lambdef(self, lambdef_node): expr = self.handle_expr(lambdef_node.children[-1]) if len(lambdef_node.children) == 3: @@ -1229,8 +1235,8 @@ elt = self.handle_expr(listcomp_node.children[0]) comps = self.comprehension_helper(listcomp_node.children[1], "handle_testlist", - syms.list_for, syms.list_if, - syms.list_iter, + syms.comp_for, syms.comp_if, + syms.comp_iter, comp_fix_unamed_tuple_location=True) return ast.ListComp(elt, comps, listcomp_node.lineno, listcomp_node.column) diff --git a/pypy/interpreter/astcompiler/asthelpers.py b/pypy/interpreter/astcompiler/asthelpers.py --- a/pypy/interpreter/astcompiler/asthelpers.py +++ b/pypy/interpreter/astcompiler/asthelpers.py @@ -133,6 +133,13 @@ _description = "comparison" +class __extend__(ast.Starred): + + _description = "starred expression" + + def set_context(self, ctx): + self.ctx = ctx + self.value.set_context(ctx) class __extend__(ast.IfExp): diff --git a/pypy/interpreter/astcompiler/test/test_astbuilder.py b/pypy/interpreter/astcompiler/test/test_astbuilder.py --- a/pypy/interpreter/astcompiler/test/test_astbuilder.py +++ b/pypy/interpreter/astcompiler/test/test_astbuilder.py @@ -600,6 +600,17 @@ assert isinstance(tup.elts[0], ast.Name) assert tup.elts[0].ctx == ast.Store + def test_assign_starred(self): + assign = self.get_first_stmt("*a, b = x") + assert isinstance(assign, ast.Assign) + assert len(assign.targets) == 1 + names = assign.targets[0] + assert len(names.elts) == 2 + assert isinstance(names.elts[0], ast.Starred) + 
assert isinstance(names.elts[1], ast.Name) + assert isinstance(names.elts[0].value, ast.Name) + assert names.elts[0].value.id == "a" + def test_name(self): name = self.get_first_expr("hi") assert isinstance(name, ast.Name) diff --git a/pypy/interpreter/pyparser/data/Grammar3.2 b/pypy/interpreter/pyparser/data/Grammar3.2 --- a/pypy/interpreter/pyparser/data/Grammar3.2 +++ b/pypy/interpreter/pyparser/data/Grammar3.2 @@ -127,11 +127,6 @@ # The reason that keywords are test nodes instead of NAME is that using NAME # results in an ambiguity. ast.c makes sure it's a NAME. argument: test [comp_for] | test '=' test - -list_iter: list_for | list_if -list_for: 'for' exprlist 'in' testlist_safe [list_iter] -list_if: 'if' old_test [list_iter] - comp_iter: comp_for | comp_if comp_for: 'for' exprlist 'in' or_test [comp_iter] comp_if: 'if' old_test [comp_iter] From noreply at buildbot.pypy.org Tue Jan 17 19:23:24 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 17 Jan 2012 19:23:24 +0100 (CET) Subject: [pypy-commit] pypy default: move some stuff that's in fromnumeric out of the appnumpy file. Message-ID: <20120117182324.E4D7D820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51411:8dea12b8117a Date: 2012-01-17 12:23 -0600 http://bitbucket.org/pypy/pypy/changeset/8dea12b8117a/ Log: move some stuff that's in fromnumeric out of the appnumpy file. 
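
The move described in this log leans on a simple delegation pattern, visible in the diff to `average()`: wrap anything that is not already an array, then defer to the array's own method. Below is a rough, self-contained sketch of that shape — `FakeArray` is a hypothetical stand-in for the real `_numpypy` array type, not PyPy's actual implementation:

```python
# Sketch of the wrap-then-delegate pattern used by the numpypy helpers:
# a module-level function wraps plain sequences in an array type, then
# defers to the corresponding array method.

class FakeArray(object):
    """Hypothetical stand-in for the _numpypy array type."""
    def __init__(self, data):
        self.data = list(data)

    def mean(self):
        # the method on the array type does the actual work
        return sum(self.data) / float(len(self.data))

def mean(a):
    # wrap anything that lacks a .mean() method before delegating,
    # mirroring what this commit's average() helper does
    if not hasattr(a, "mean"):
        a = FakeArray(a)
    return a.mean()

print(mean(range(5)))   # 2.0, matching the test_mean expectation
```

The real helpers additionally take an `axis` argument and carry docstrings; the sketch only shows the delegation shape that lets `fromnumeric`-style functions live at app level while the array methods do the work.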
diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py --- a/lib_pypy/numpypy/test/test_fromnumeric.py +++ b/lib_pypy/numpypy/test/test_fromnumeric.py @@ -1,7 +1,7 @@ - from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -class AppTestFromNumeric(BaseNumpyAppTest): + +class AppTestFromNumeric(BaseNumpyAppTest): def test_argmax(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, argmax @@ -23,7 +23,7 @@ b = arange(6) b[1] = 0 assert argmin(b) == 0 - + def test_shape(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, identity, shape @@ -44,7 +44,7 @@ # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() # If the accumulator is too small, overflow occurs: # assert ones(128, dtype=int8).sum(dtype=int8) == -128 - + def test_amin(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, amin @@ -86,14 +86,14 @@ assert ndim([[1,2,3],[4,5,6]]) == 2 assert ndim(array([[1,2,3],[4,5,6]])) == 2 assert ndim(1) == 0 - + def test_rank(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, rank assert rank([[1,2,3],[4,5,6]]) == 2 assert rank(array([[1,2,3],[4,5,6]])) == 2 assert rank(1) == 0 - + def test_var(self): from numpypy import array, var a = array([[1,2],[3,4]]) @@ -107,3 +107,21 @@ assert std(a) == 1.1180339887498949 # assert (std(a, axis=0) == array([ 1., 1.])).all() # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() + + def test_mean(self): + from numpypy import array, mean + assert mean(array(range(5))) == 2.0 + assert mean(range(5)) == 2.0 + + def test_reshape(self): + from numpypy import arange, array, dtype, reshape + a = arange(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = range(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert reshape(a, 1, -1).shape == (1, 105) + assert 
reshape(a, 1, 1, -1).shape == (1, 1, 105) + assert reshape(a, -1, 1, 1).shape == (105, 1, 1) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -90,7 +90,6 @@ appleveldefs = { 'average': 'app_numpy.average', - 'mean': 'app_numpy.mean', 'sum': 'app_numpy.sum', 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', @@ -99,5 +98,4 @@ 'e': 'app_numpy.e', 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', - 'reshape': 'app_numpy.reshape', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -11,23 +11,20 @@ def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! - return mean(a) + if not hasattr(a, "mean"): + a = _numpypy.array(a) + return a.mean() def identity(n, dtype=None): - a = _numpypy.zeros((n,n), dtype=dtype) + a = _numpypy.zeros((n, n), dtype=dtype) for i in range(n): a[i][i] = 1 return a -def mean(a, axis=None): - if not hasattr(a, "mean"): - a = _numpypy.array(a) - return a.mean(axis) - def sum(a,axis=None): '''sum(a, axis=None) Sum of array elements over a given axis. - + Parameters ---------- a : array_like @@ -35,7 +32,7 @@ axis : integer, optional Axis over which the sum is taken. By default `axis` is None, and all elements are summed. - + Returns ------- sum_along_axis : ndarray @@ -43,7 +40,7 @@ axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar is returned. If an output array is specified, a reference to `out` is returned. - + See Also -------- ndarray.sum : Equivalent method. @@ -79,40 +76,3 @@ arr[j] = i i += step return arr - - -def reshape(a, shape): - '''reshape(a, newshape) - Gives a new shape to an array without changing its data. - - Parameters - ---------- - a : array_like - Array to be reshaped. 
-    newshape : int or tuple of ints
-        The new shape should be compatible with the original shape. If
-        an integer, then the result will be a 1-D array of that length.
-        One shape dimension can be -1. In this case, the value is inferred
-        from the length of the array and remaining dimensions.
-
-    Returns
-    -------
-    reshaped_array : ndarray
-        This will be a new view object if possible; otherwise, it will
-        be a copy.
-
-
-    See Also
-    --------
-    ndarray.reshape : Equivalent method.
-
-    Notes
-    -----
-
-    It is not always possible to change the shape of an array without
-    copying the data. If you want an error to be raise if the data is copied,
-    you should assign the new shape to the shape attribute of the array
-'''
-    if not hasattr(a, 'reshape'):
-        a = _numpypy.array(a)
-    return a.reshape(shape)
diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py
--- a/pypy/module/micronumpy/test/test_module.py
+++ b/pypy/module/micronumpy/test/test_module.py
@@ -2,16 +2,11 @@
 
 class AppTestNumPyModule(BaseNumpyAppTest):
-    def test_mean(self):
-        from _numpypy import array, mean
-        assert mean(array(range(5))) == 2.0
-        assert mean(range(5)) == 2.0
-
     def test_average(self):
         from _numpypy import array, average
         assert average(range(10)) == 4.5
         assert average(array(range(10))) == 4.5
-    
+
     def test_sum(self):
         from _numpypy import array, sum
         assert sum(range(10)) == 45
@@ -21,7 +16,7 @@
         from _numpypy import array, min
         assert min(range(10)) == 0
         assert min(array(range(10))) == 0
-    
+
     def test_max(self):
         from _numpypy import array, max
         assert max(range(10)) == 9
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -726,16 +726,16 @@
         assert d[1] == 12
 
     def test_mean(self):
-        from _numpypy import array, mean
+        from _numpypy import array
         a = array(range(5))
         assert a.mean() == 2.0
         assert a[:4].mean() == 1.5
         a = array(range(105)).reshape(3, 5, 7)
-        b = mean(a, axis=0)
-        b[0,0]==35.
+        b = a.mean(axis=0)
+        b[0, 0]==35.
         assert a.mean(axis=0)[0, 0] == 35
         assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all()
-        assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all()
+        assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all()
 
     def test_sum(self):
         from _numpypy import array
@@ -1550,18 +1550,3 @@
         a = arange(0, 0.8, 0.1)
         assert len(a) == 8
         assert arange(False, True, True).dtype is dtype(int)
-
-
-class AppTestRanges(BaseNumpyAppTest):
-    def test_app_reshape(self):
-        from _numpypy import arange, array, dtype, reshape
-        a = arange(12)
-        b = reshape(a, (3, 4))
-        assert b.shape == (3, 4)
-        a = range(12)
-        b = reshape(a, (3, 4))
-        assert b.shape == (3, 4)
-        a = array(range(105)).reshape(3, 5, 7)
-        assert a.reshape(1, -1).shape == (1, 105)
-        assert a.reshape(1, 1, -1).shape == (1, 1, 105)
-        assert a.reshape(-1, 1, 1).shape == (105, 1, 1)

From noreply at buildbot.pypy.org Tue Jan 17 19:25:39 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Tue, 17 Jan 2012 19:25:39 +0100 (CET)
Subject: [pypy-commit] pypy default: oops, fix tests
Message-ID: <20120117182539.6D275820D8@wyvern.cs.uni-duesseldorf.de>

Author: Alex Gaynor
Branch: 
Changeset: r51412:593bcb6b2e6f
Date: 2012-01-17 12:25 -0600
http://bitbucket.org/pypy/pypy/changeset/593bcb6b2e6f/

Log: oops, fix tests

diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py
--- a/lib_pypy/numpypy/test/test_fromnumeric.py
+++ b/lib_pypy/numpypy/test/test_fromnumeric.py
@@ -122,6 +122,6 @@
         b = reshape(a, (3, 4))
         assert b.shape == (3, 4)
         a = array(range(105)).reshape(3, 5, 7)
-        assert reshape(a, 1, -1).shape == (1, 105)
-        assert reshape(a, 1, 1, -1).shape == (1, 1, 105)
-        assert reshape(a, -1, 1, 1).shape == (105, 1, 1)
+        assert reshape(a, (1, -1)).shape == (1, 105)
+        assert reshape(a, (1, 1, -1)).shape == (1, 1, 105)
+        assert reshape(a, (-1, 1, 1)).shape == (105, 1, 1)

From noreply at buildbot.pypy.org Tue Jan 17 19:28:04 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Tue, 17 Jan 2012 19:28:04 +0100 (CET)
Subject: [pypy-commit] pypy default: uncomment a test
Message-ID: <20120117182804.4371C820D8@wyvern.cs.uni-duesseldorf.de>

Author: Alex Gaynor
Branch: 
Changeset: r51413:f602adef8b38
Date: 2012-01-17 12:27 -0600
http://bitbucket.org/pypy/pypy/changeset/f602adef8b38/

Log: uncomment a test

diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py
--- a/lib_pypy/numpypy/test/test_fromnumeric.py
+++ b/lib_pypy/numpypy/test/test_fromnumeric.py
@@ -18,8 +18,8 @@
         from numpypy import array, arange, argmin
         a = arange(6).reshape((2,3))
         assert argmin(a) == 0
-        # assert (argmax(a, axis=0) == array([0, 0, 0])).all()
-        # assert (argmax(a, axis=1) == array([0, 0])).all()
+        assert (argmin(a, axis=0) == array([0, 0, 0])).all()
+        assert (argmin(a, axis=1) == array([0, 0])).all()
         b = arange(6)
         b[1] = 0
         assert argmin(b) == 0

From noreply at buildbot.pypy.org Tue Jan 17 19:51:58 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Tue, 17 Jan 2012 19:51:58 +0100 (CET)
Subject: [pypy-commit] pypy default: split _codecs.lookup_codec into parts
 with and without a loop, so the JIT can inline most of lookup_codec,
 eventually it might even make sense to make this codec_search_cache thing
 be a purefunction, but I'm not sure what it's sematnics are ATM
Message-ID: <20120117185158.35F89820D8@wyvern.cs.uni-duesseldorf.de>

Author: Alex Gaynor
Branch: 
Changeset: r51414:33bd443587ba
Date: 2012-01-17 12:51 -0600
http://bitbucket.org/pypy/pypy/changeset/33bd443587ba/

Log: split _codecs.lookup_codec into parts with and without a loop, so
    the JIT can inline most of lookup_codec, eventually it might even
    make sense to make this codec_search_cache thing be a purefunction,
    but I'm not sure what it's sematnics are ATM

diff --git a/pypy/module/_codecs/interp_codecs.py b/pypy/module/_codecs/interp_codecs.py
--- a/pypy/module/_codecs/interp_codecs.py
+++ b/pypy/module/_codecs/interp_codecs.py
@@ -108,6 +108,10 @@
     w_result = state.codec_search_cache.get(normalized_encoding, None)
     if w_result is not None:
         return w_result
+    return _lookup_codec_loop(space, encoding, normalized_encoding)
+
+def _lookup_codec_loop(space, encoding, normalized_encoding):
+    state = space.fromcache(CodecState)
     if state.codec_need_encodings:
         w_import = space.getattr(space.builtin, space.wrap("__import__"))
         # registers new codecs

From noreply at buildbot.pypy.org Tue Jan 17 20:27:29 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Tue, 17 Jan 2012 20:27:29 +0100 (CET)
Subject: [pypy-commit] pypy py3k: Remove more L suffixes and long() calls
Message-ID: <20120117192729.D461E820D8@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51415:d682184291a6
Date: 2012-01-17 20:25 +0100
http://bitbucket.org/pypy/pypy/changeset/d682184291a6/

Log: Remove more L suffixes and long() calls

diff --git a/lib_pypy/_md5.py b/lib_pypy/_md5.py
--- a/lib_pypy/_md5.py
+++ b/lib_pypy/_md5.py
@@ -48,7 +48,7 @@
     "Transform a list of characters into a list of longs."
 
     imax = len(list) // 4
-    hl = [0L] * imax
+    hl = [0] * imax
 
     j = 0
     i = 0
@@ -100,16 +100,16 @@
     (now summed-up in one function).
     """
 
-    res = 0L
+    res = 0
     res = res + a + func(b, c, d)
     res = res + x
     res = res + ac
-    res = res & 0xffffffffL
+    res = res & 0xffffffff
     res = _rotateLeft(res, s)
-    res = res & 0xffffffffL
+    res = res & 0xffffffff
     res = res + b
 
-    return res & 0xffffffffL
+    return res & 0xffffffff
 
 
 class md5:
@@ -122,7 +122,7 @@
         "Initialisation."
 
         # Initial message length in bits(!).
-        self.length = 0L
+        self.length = 0
         self.count = [0, 0]
 
         # Initial empty message as a sequence of bytes (8 bit characters).
@@ -139,15 +139,15 @@
     def init(self):
         "Initialize the message-digest and set all fields to zero."
-        self.length = 0L
+        self.length = 0
         self.count = [0, 0]
         self.input = []
 
         # Load magic initialization constants.
-        self.A = 0x67452301L
-        self.B = 0xefcdab89L
-        self.C = 0x98badcfeL
-        self.D = 0x10325476L
+        self.A = 0x67452301
+        self.B = 0xefcdab89
+        self.C = 0x98badcfe
+        self.D = 0x10325476
 
 
     def _transform(self, inp):
@@ -164,90 +164,90 @@
 
         S11, S12, S13, S14 = 7, 12, 17, 22
 
-        a = XX(F, a, b, c, d, inp[ 0], S11, 0xD76AA478L) # 1
-        d = XX(F, d, a, b, c, inp[ 1], S12, 0xE8C7B756L) # 2
-        c = XX(F, c, d, a, b, inp[ 2], S13, 0x242070DBL) # 3
-        b = XX(F, b, c, d, a, inp[ 3], S14, 0xC1BDCEEEL) # 4
-        a = XX(F, a, b, c, d, inp[ 4], S11, 0xF57C0FAFL) # 5
-        d = XX(F, d, a, b, c, inp[ 5], S12, 0x4787C62AL) # 6
-        c = XX(F, c, d, a, b, inp[ 6], S13, 0xA8304613L) # 7
-        b = XX(F, b, c, d, a, inp[ 7], S14, 0xFD469501L) # 8
-        a = XX(F, a, b, c, d, inp[ 8], S11, 0x698098D8L) # 9
-        d = XX(F, d, a, b, c, inp[ 9], S12, 0x8B44F7AFL) # 10
-        c = XX(F, c, d, a, b, inp[10], S13, 0xFFFF5BB1L) # 11
-        b = XX(F, b, c, d, a, inp[11], S14, 0x895CD7BEL) # 12
-        a = XX(F, a, b, c, d, inp[12], S11, 0x6B901122L) # 13
-        d = XX(F, d, a, b, c, inp[13], S12, 0xFD987193L) # 14
-        c = XX(F, c, d, a, b, inp[14], S13, 0xA679438EL) # 15
-        b = XX(F, b, c, d, a, inp[15], S14, 0x49B40821L) # 16
+        a = XX(F, a, b, c, d, inp[ 0], S11, 0xD76AA478) # 1
+        d = XX(F, d, a, b, c, inp[ 1], S12, 0xE8C7B756) # 2
+        c = XX(F, c, d, a, b, inp[ 2], S13, 0x242070DB) # 3
+        b = XX(F, b, c, d, a, inp[ 3], S14, 0xC1BDCEEE) # 4
+        a = XX(F, a, b, c, d, inp[ 4], S11, 0xF57C0FAF) # 5
+        d = XX(F, d, a, b, c, inp[ 5], S12, 0x4787C62A) # 6
+        c = XX(F, c, d, a, b, inp[ 6], S13, 0xA8304613) # 7
+        b = XX(F, b, c, d, a, inp[ 7], S14, 0xFD469501) # 8
+        a = XX(F, a, b, c, d, inp[ 8], S11, 0x698098D8) # 9
+        d = XX(F, d, a, b, c, inp[ 9], S12, 0x8B44F7AF) # 10
+        c = XX(F, c, d, a, b, inp[10], S13, 0xFFFF5BB1) # 11
+        b = XX(F, b, c, d, a, inp[11], S14, 0x895CD7BE) # 12
+        a = XX(F, a, b, c, d, inp[12], S11, 0x6B901122) # 13
+        d = XX(F, d, a, b, c, inp[13], S12, 0xFD987193) # 14
+        c = XX(F, c, d, a, b, inp[14], S13, 0xA679438E) # 15
+        b = XX(F, b, c, d, a, inp[15], S14, 0x49B40821) # 16
 
         # Round 2.
 
         S21, S22, S23, S24 = 5, 9, 14, 20
 
-        a = XX(G, a, b, c, d, inp[ 1], S21, 0xF61E2562L) # 17
-        d = XX(G, d, a, b, c, inp[ 6], S22, 0xC040B340L) # 18
-        c = XX(G, c, d, a, b, inp[11], S23, 0x265E5A51L) # 19
-        b = XX(G, b, c, d, a, inp[ 0], S24, 0xE9B6C7AAL) # 20
-        a = XX(G, a, b, c, d, inp[ 5], S21, 0xD62F105DL) # 21
-        d = XX(G, d, a, b, c, inp[10], S22, 0x02441453L) # 22
-        c = XX(G, c, d, a, b, inp[15], S23, 0xD8A1E681L) # 23
-        b = XX(G, b, c, d, a, inp[ 4], S24, 0xE7D3FBC8L) # 24
-        a = XX(G, a, b, c, d, inp[ 9], S21, 0x21E1CDE6L) # 25
-        d = XX(G, d, a, b, c, inp[14], S22, 0xC33707D6L) # 26
-        c = XX(G, c, d, a, b, inp[ 3], S23, 0xF4D50D87L) # 27
-        b = XX(G, b, c, d, a, inp[ 8], S24, 0x455A14EDL) # 28
-        a = XX(G, a, b, c, d, inp[13], S21, 0xA9E3E905L) # 29
-        d = XX(G, d, a, b, c, inp[ 2], S22, 0xFCEFA3F8L) # 30
-        c = XX(G, c, d, a, b, inp[ 7], S23, 0x676F02D9L) # 31
-        b = XX(G, b, c, d, a, inp[12], S24, 0x8D2A4C8AL) # 32
+        a = XX(G, a, b, c, d, inp[ 1], S21, 0xF61E2562) # 17
+        d = XX(G, d, a, b, c, inp[ 6], S22, 0xC040B340) # 18
+        c = XX(G, c, d, a, b, inp[11], S23, 0x265E5A51) # 19
+        b = XX(G, b, c, d, a, inp[ 0], S24, 0xE9B6C7AA) # 20
+        a = XX(G, a, b, c, d, inp[ 5], S21, 0xD62F105D) # 21
+        d = XX(G, d, a, b, c, inp[10], S22, 0x02441453) # 22
+        c = XX(G, c, d, a, b, inp[15], S23, 0xD8A1E681) # 23
+        b = XX(G, b, c, d, a, inp[ 4], S24, 0xE7D3FBC8) # 24
+        a = XX(G, a, b, c, d, inp[ 9], S21, 0x21E1CDE6) # 25
+        d = XX(G, d, a, b, c, inp[14], S22, 0xC33707D6) # 26
+        c = XX(G, c, d, a, b, inp[ 3], S23, 0xF4D50D87) # 27
+        b = XX(G, b, c, d, a, inp[ 8], S24, 0x455A14ED) # 28
+        a = XX(G, a, b, c, d, inp[13], S21, 0xA9E3E905) # 29
+        d = XX(G, d, a, b, c, inp[ 2], S22, 0xFCEFA3F8) # 30
+        c = XX(G, c, d, a, b, inp[ 7], S23, 0x676F02D9) # 31
+        b = XX(G, b, c, d, a, inp[12], S24, 0x8D2A4C8A) # 32
 
         # Round 3.
         S31, S32, S33, S34 = 4, 11, 16, 23
 
-        a = XX(H, a, b, c, d, inp[ 5], S31, 0xFFFA3942L) # 33
-        d = XX(H, d, a, b, c, inp[ 8], S32, 0x8771F681L) # 34
-        c = XX(H, c, d, a, b, inp[11], S33, 0x6D9D6122L) # 35
-        b = XX(H, b, c, d, a, inp[14], S34, 0xFDE5380CL) # 36
-        a = XX(H, a, b, c, d, inp[ 1], S31, 0xA4BEEA44L) # 37
-        d = XX(H, d, a, b, c, inp[ 4], S32, 0x4BDECFA9L) # 38
-        c = XX(H, c, d, a, b, inp[ 7], S33, 0xF6BB4B60L) # 39
-        b = XX(H, b, c, d, a, inp[10], S34, 0xBEBFBC70L) # 40
-        a = XX(H, a, b, c, d, inp[13], S31, 0x289B7EC6L) # 41
-        d = XX(H, d, a, b, c, inp[ 0], S32, 0xEAA127FAL) # 42
-        c = XX(H, c, d, a, b, inp[ 3], S33, 0xD4EF3085L) # 43
-        b = XX(H, b, c, d, a, inp[ 6], S34, 0x04881D05L) # 44
-        a = XX(H, a, b, c, d, inp[ 9], S31, 0xD9D4D039L) # 45
-        d = XX(H, d, a, b, c, inp[12], S32, 0xE6DB99E5L) # 46
-        c = XX(H, c, d, a, b, inp[15], S33, 0x1FA27CF8L) # 47
-        b = XX(H, b, c, d, a, inp[ 2], S34, 0xC4AC5665L) # 48
+        a = XX(H, a, b, c, d, inp[ 5], S31, 0xFFFA3942) # 33
+        d = XX(H, d, a, b, c, inp[ 8], S32, 0x8771F681) # 34
+        c = XX(H, c, d, a, b, inp[11], S33, 0x6D9D6122) # 35
+        b = XX(H, b, c, d, a, inp[14], S34, 0xFDE5380C) # 36
+        a = XX(H, a, b, c, d, inp[ 1], S31, 0xA4BEEA44) # 37
+        d = XX(H, d, a, b, c, inp[ 4], S32, 0x4BDECFA9) # 38
+        c = XX(H, c, d, a, b, inp[ 7], S33, 0xF6BB4B60) # 39
+        b = XX(H, b, c, d, a, inp[10], S34, 0xBEBFBC70) # 40
+        a = XX(H, a, b, c, d, inp[13], S31, 0x289B7EC6) # 41
+        d = XX(H, d, a, b, c, inp[ 0], S32, 0xEAA127FA) # 42
+        c = XX(H, c, d, a, b, inp[ 3], S33, 0xD4EF3085) # 43
+        b = XX(H, b, c, d, a, inp[ 6], S34, 0x04881D05) # 44
+        a = XX(H, a, b, c, d, inp[ 9], S31, 0xD9D4D039) # 45
+        d = XX(H, d, a, b, c, inp[12], S32, 0xE6DB99E5) # 46
+        c = XX(H, c, d, a, b, inp[15], S33, 0x1FA27CF8) # 47
+        b = XX(H, b, c, d, a, inp[ 2], S34, 0xC4AC5665) # 48
 
         # Round 4.
         S41, S42, S43, S44 = 6, 10, 15, 21
 
-        a = XX(I, a, b, c, d, inp[ 0], S41, 0xF4292244L) # 49
-        d = XX(I, d, a, b, c, inp[ 7], S42, 0x432AFF97L) # 50
-        c = XX(I, c, d, a, b, inp[14], S43, 0xAB9423A7L) # 51
-        b = XX(I, b, c, d, a, inp[ 5], S44, 0xFC93A039L) # 52
-        a = XX(I, a, b, c, d, inp[12], S41, 0x655B59C3L) # 53
-        d = XX(I, d, a, b, c, inp[ 3], S42, 0x8F0CCC92L) # 54
-        c = XX(I, c, d, a, b, inp[10], S43, 0xFFEFF47DL) # 55
-        b = XX(I, b, c, d, a, inp[ 1], S44, 0x85845DD1L) # 56
-        a = XX(I, a, b, c, d, inp[ 8], S41, 0x6FA87E4FL) # 57
-        d = XX(I, d, a, b, c, inp[15], S42, 0xFE2CE6E0L) # 58
-        c = XX(I, c, d, a, b, inp[ 6], S43, 0xA3014314L) # 59
-        b = XX(I, b, c, d, a, inp[13], S44, 0x4E0811A1L) # 60
-        a = XX(I, a, b, c, d, inp[ 4], S41, 0xF7537E82L) # 61
-        d = XX(I, d, a, b, c, inp[11], S42, 0xBD3AF235L) # 62
-        c = XX(I, c, d, a, b, inp[ 2], S43, 0x2AD7D2BBL) # 63
-        b = XX(I, b, c, d, a, inp[ 9], S44, 0xEB86D391L) # 64
+        a = XX(I, a, b, c, d, inp[ 0], S41, 0xF4292244) # 49
+        d = XX(I, d, a, b, c, inp[ 7], S42, 0x432AFF97) # 50
+        c = XX(I, c, d, a, b, inp[14], S43, 0xAB9423A7) # 51
+        b = XX(I, b, c, d, a, inp[ 5], S44, 0xFC93A039) # 52
+        a = XX(I, a, b, c, d, inp[12], S41, 0x655B59C3) # 53
+        d = XX(I, d, a, b, c, inp[ 3], S42, 0x8F0CCC92) # 54
+        c = XX(I, c, d, a, b, inp[10], S43, 0xFFEFF47D) # 55
+        b = XX(I, b, c, d, a, inp[ 1], S44, 0x85845DD1) # 56
+        a = XX(I, a, b, c, d, inp[ 8], S41, 0x6FA87E4F) # 57
+        d = XX(I, d, a, b, c, inp[15], S42, 0xFE2CE6E0) # 58
+        c = XX(I, c, d, a, b, inp[ 6], S43, 0xA3014314) # 59
+        b = XX(I, b, c, d, a, inp[13], S44, 0x4E0811A1) # 60
+        a = XX(I, a, b, c, d, inp[ 4], S41, 0xF7537E82) # 61
+        d = XX(I, d, a, b, c, inp[11], S42, 0xBD3AF235) # 62
+        c = XX(I, c, d, a, b, inp[ 2], S43, 0x2AD7D2BB) # 63
+        b = XX(I, b, c, d, a, inp[ 9], S44, 0xEB86D391) # 64
 
-        A = (A + a) & 0xffffffffL
-        B = (B + b) & 0xffffffffL
-        C = (C + c) & 0xffffffffL
-        D = (D + d) & 0xffffffffL
+        A = (A + a) & 0xffffffff
+        B = (B + b) & 0xffffffff
+        C = (C + c) & 0xffffffff
+        D = (D + d) & 0xffffffff
 
         self.A, self.B, self.C, self.D = A, B, C, D
@@ -273,7 +273,7 @@
         leninBuf = long(len(inBuf))
 
         # Compute number of bytes mod 64.
-        index = (self.count[0] >> 3) & 0x3FL
+        index = (self.count[0] >> 3) & 0x3F
 
         # Update number of bits.
         self.count[0] = self.count[0] + (leninBuf << 3)
@@ -312,7 +312,7 @@
         input = [] + self.input
         count = [] + self.count
 
-        index = (self.count[0] >> 3) & 0x3fL
+        index = (self.count[0] >> 3) & 0x3f
 
         if index < 56:
             padLen = 56 - index
diff --git a/lib_pypy/_sha1.py b/lib_pypy/_sha1.py
--- a/lib_pypy/_sha1.py
+++ b/lib_pypy/_sha1.py
@@ -38,7 +38,7 @@
     s = b''
     pack = struct.pack
     while n > 0:
-        s = pack('>I', n & 0xffffffffL) + s
+        s = pack('>I', n & 0xffffffff) + s
         n = n >> 32
 
     # Strip off leading zeros.
@@ -64,7 +64,7 @@
     "Transform a list of characters into a list of longs."
 
     imax = len(list) // 4
-    hl = [0L] * imax
+    hl = [0] * imax
 
     j = 0
     i = 0
@@ -108,10 +108,10 @@
 
 # Constants to be used
 K = [
-    0x5A827999L, # ( 0 <= t <= 19)
-    0x6ED9EBA1L, # (20 <= t <= 39)
-    0x8F1BBCDCL, # (40 <= t <= 59)
-    0xCA62C1D6L  # (60 <= t <= 79)
+    0x5A827999, # ( 0 <= t <= 19)
+    0x6ED9EBA1, # (20 <= t <= 39)
+    0x8F1BBCDC, # (40 <= t <= 59)
+    0xCA62C1D6  # (60 <= t <= 79)
     ]
 
 class sha:
@@ -124,7 +124,7 @@
         "Initialisation."
 
         # Initial message length in bits(!).
-        self.length = 0L
+        self.length = 0
         self.count = [0, 0]
 
         # Initial empty message as a sequence of bytes (8 bit characters).
@@ -138,21 +138,21 @@
     def init(self):
         "Initialize the message-digest and set all fields to zero."
 
-        self.length = 0L
+        self.length = 0
         self.input = []
 
         # Initial 160 bit message digest (5 times 32 bit).
-        self.H0 = 0x67452301L
-        self.H1 = 0xEFCDAB89L
-        self.H2 = 0x98BADCFEL
-        self.H3 = 0x10325476L
-        self.H4 = 0xC3D2E1F0L
+        self.H0 = 0x67452301
+        self.H1 = 0xEFCDAB89
+        self.H2 = 0x98BADCFE
+        self.H3 = 0x10325476
+        self.H4 = 0xC3D2E1F0
 
 
     def _transform(self, W):
 
         for t in range(16, 80):
             W.append(_rotateLeft(
-                W[t-3] ^ W[t-8] ^ W[t-14] ^ W[t-16], 1) & 0xffffffffL)
+                W[t-3] ^ W[t-8] ^ W[t-14] ^ W[t-16], 1) & 0xffffffff)
 
         A = self.H0
         B = self.H1
@@ -166,49 +166,49 @@
             TEMP = _rotateLeft(A, 5) + f[t/20] + E + W[t] + K[t/20]
             E = D
             D = C
-            C = _rotateLeft(B, 30) & 0xffffffffL
+            C = _rotateLeft(B, 30) & 0xffffffff
             B = A
-            A = TEMP & 0xffffffffL
+            A = TEMP & 0xffffffff
        """
 
        for t in range(0, 20):
            TEMP = _rotateLeft(A, 5) + ((B & C) | ((~ B) & D)) + E + W[t] + K[0]
            E = D
            D = C
-            C = _rotateLeft(B, 30) & 0xffffffffL
+            C = _rotateLeft(B, 30) & 0xffffffff
            B = A
-            A = TEMP & 0xffffffffL
+            A = TEMP & 0xffffffff
 
        for t in range(20, 40):
            TEMP = _rotateLeft(A, 5) + (B ^ C ^ D) + E + W[t] + K[1]
            E = D
            D = C
-            C = _rotateLeft(B, 30) & 0xffffffffL
+            C = _rotateLeft(B, 30) & 0xffffffff
            B = A
-            A = TEMP & 0xffffffffL
+            A = TEMP & 0xffffffff
 
        for t in range(40, 60):
            TEMP = _rotateLeft(A, 5) + ((B & C) | (B & D) | (C & D)) + E + W[t] + K[2]
            E = D
            D = C
-            C = _rotateLeft(B, 30) & 0xffffffffL
+            C = _rotateLeft(B, 30) & 0xffffffff
            B = A
-            A = TEMP & 0xffffffffL
+            A = TEMP & 0xffffffff
 
        for t in range(60, 80):
            TEMP = _rotateLeft(A, 5) + (B ^ C ^ D) + E + W[t] + K[3]
            E = D
            D = C
-            C = _rotateLeft(B, 30) & 0xffffffffL
+            C = _rotateLeft(B, 30) & 0xffffffff
            B = A
-            A = TEMP & 0xffffffffL
+            A = TEMP & 0xffffffff
 
-        self.H0 = (self.H0 + A) & 0xffffffffL
-        self.H1 = (self.H1 + B) & 0xffffffffL
-        self.H2 = (self.H2 + C) & 0xffffffffL
-        self.H3 = (self.H3 + D) & 0xffffffffL
-        self.H4 = (self.H4 + E) & 0xffffffffL
+        self.H0 = (self.H0 + A) & 0xffffffff
+        self.H1 = (self.H1 + B) & 0xffffffff
+        self.H2 = (self.H2 + C) & 0xffffffff
+        self.H3 = (self.H3 + D) & 0xffffffff
+        self.H4 = (self.H4 + E) & 0xffffffff
 
 
     # Down from here all methods follow the Python Standard Library
@@ -230,10 +230,10 @@
         to the hashed string.
         """
 
-        leninBuf = long(len(inBuf))
+        leninBuf = len(inBuf)
 
         # Compute number of bytes mod 64.
-        index = (self.count[1] >> 3) & 0x3FL
+        index = (self.count[1] >> 3) & 0x3F
 
         # Update number of bits.
         self.count[1] = self.count[1] + (leninBuf << 3)
@@ -273,7 +273,7 @@
         input = [] + self.input
         count = [] + self.count
 
-        index = (self.count[1] >> 3) & 0x3fL
+        index = (self.count[1] >> 3) & 0x3f
 
         if index < 56:
             padLen = 56 - index
diff --git a/lib_pypy/_sha512.py b/lib_pypy/_sha512.py
--- a/lib_pypy/_sha512.py
+++ b/lib_pypy/_sha512.py
@@ -10,7 +10,7 @@
 
 def new_shaobject():
     return {
-        'digest': [0L]*8,
+        'digest': [0]*8,
         'count_lo': 0,
         'count_hi': 0,
         'data': [0]* SHA_BLOCKSIZE,

From noreply at buildbot.pypy.org Tue Jan 17 20:27:31 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Tue, 17 Jan 2012 20:27:31 +0100 (CET)
Subject: [pypy-commit] pypy py3k: Another fix in termios
Message-ID: <20120117192731.1757E820D8@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51416:03901e509c25
Date: 2012-01-17 20:25 +0100
http://bitbucket.org/pypy/pypy/changeset/03901e509c25/

Log: Another fix in termios

diff --git a/pypy/module/termios/interp_termios.py b/pypy/module/termios/interp_termios.py
--- a/pypy/module/termios/interp_termios.py
+++ b/pypy/module/termios/interp_termios.py
@@ -46,7 +46,7 @@
     iflag, oflag, cflag, lflag, ispeed, ospeed, cc = tup
     l_w = [space.wrap(i) for i in [iflag, oflag, cflag, lflag, ispeed, ospeed]]
     # last one need to be chosen carefully
-    cc_w = [space.wrap(i) for i in cc]
+    cc_w = [space.wrapbytes(i) for i in cc]
     if lflag & termios.ICANON:
         cc_w[termios.VMIN] = space.wrap(ord(cc[termios.VMIN][0]))
         cc_w[termios.VTIME] = space.wrap(ord(cc[termios.VTIME][0]))

From noreply at buildbot.pypy.org Tue Jan 17 20:27:32 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Tue, 17 Jan 2012 20:27:32 +0100 (CET)
Subject: [pypy-commit]
 pypy py3k: Add a reminder about the broken pyrepl.
Message-ID: <20120117192732.4CE39820D8@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51417:2c6715a45957
Date: 2012-01-17 20:26 +0100
http://bitbucket.org/pypy/pypy/changeset/2c6715a45957/

Log: Add a reminder about the broken pyrepl.

diff --git a/lib_pypy/_pypy_interact.py b/lib_pypy/_pypy_interact.py
--- a/lib_pypy/_pypy_interact.py
+++ b/lib_pypy/_pypy_interact.py
@@ -36,6 +36,9 @@
         from pyrepl.simple_interact import run_multiline_interactive_console
     except ImportError:
         run_simple_interactive_console(mainmodule)
+    except SyntaxError:
+        print("Warning: 'import pyrepl' failed with SyntaxError")
+        run_simple_interactive_console(mainmodule)
     else:
         run_multiline_interactive_console(mainmodule)

From noreply at buildbot.pypy.org Tue Jan 17 20:27:33 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Tue, 17 Jan 2012 20:27:33 +0100 (CET)
Subject: [pypy-commit] pypy py3k: ast.py is generated. Add the Starred in
 Python.asdl, and regenerate
Message-ID: <20120117192733.8DDA7820D8@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51418:5b195e604794
Date: 2012-01-17 20:26 +0100
http://bitbucket.org/pypy/pypy/changeset/5b195e604794/

Log: ast.py is generated.
    Add the Starred in Python.asdl, and regenerate
    with "python interpreter/astcompiler/tools/asdl_py.py"

diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py
--- a/pypy/interpreter/astcompiler/ast.py
+++ b/pypy/interpreter/astcompiler/ast.py
@@ -1507,11 +1507,6 @@
         for node in self.comparators:
             node.sync_app_attrs(space)
 
-class Starred(expr):
-    def __init__(self, value, ctx, lineno, col_offset):
-        self.value = value
-        self.ctx = ctx
-        expr.__init__(self, lineno, col_offset)
 
 class Call(expr):
 
@@ -1666,6 +1661,29 @@
             self.slice.sync_app_attrs(space)
 
 
+class Starred(expr):
+
+    def __init__(self, value, ctx, lineno, col_offset):
+        self.value = value
+        self.ctx = ctx
+        expr.__init__(self, lineno, col_offset)
+        self.initialization_state = 15
+
+    def walkabout(self, visitor):
+        visitor.visit_Starred(self)
+
+    def mutate_over(self, visitor):
+        self.value = self.value.mutate_over(visitor)
+        return visitor.visit_Starred(self)
+
+    def sync_app_attrs(self, space):
+        if (self.initialization_state & ~0) ^ 15:
+            self.missing_field(space, ['lineno', 'col_offset', 'value', 'ctx'], 'Starred')
+        else:
+            pass
+        self.value.sync_app_attrs(space)
+
+
 class Name(expr):
 
     def __init__(self, id, ctx, lineno, col_offset):
@@ -2453,6 +2471,8 @@
         return self.default_visitor(node)
     def visit_Subscript(self, node):
         return self.default_visitor(node)
+    def visit_Starred(self, node):
+        return self.default_visitor(node)
     def visit_Name(self, node):
         return self.default_visitor(node)
     def visit_List(self, node):
@@ -2663,6 +2683,9 @@
         node.value.walkabout(self)
         node.slice.walkabout(self)
 
+    def visit_Starred(self, node):
+        node.value.walkabout(self)
+
     def visit_Name(self, node):
         pass
 
@@ -5814,6 +5837,77 @@
     __init__=interp2app(Subscript_init),
 )
 
+def Starred_get_value(space, w_self):
+    if w_self.w_dict is not None:
+        w_obj = w_self.getdictvalue(space, 'value')
+        if w_obj is not None:
+            return w_obj
+    if not w_self.initialization_state & 4:
+        typename = space.type(w_self).getname(space)
+        raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'value')
+    return space.wrap(w_self.value)
+
+def Starred_set_value(space, w_self, w_new_value):
+    try:
+        w_self.value = space.interp_w(expr, w_new_value, False)
+        if type(w_self.value) is expr:
+            raise OperationError(space.w_TypeError, space.w_None)
+    except OperationError, e:
+        if not e.match(space, space.w_TypeError):
+            raise
+        w_self.setdictvalue(space, 'value', w_new_value)
+        return
+    w_self.deldictvalue(space, 'value')
+    w_self.initialization_state |= 4
+
+def Starred_get_ctx(space, w_self):
+    if w_self.w_dict is not None:
+        w_obj = w_self.getdictvalue(space, 'ctx')
+        if w_obj is not None:
+            return w_obj
+    if not w_self.initialization_state & 8:
+        typename = space.type(w_self).getname(space)
+        raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'ctx')
+    return expr_context_to_class[w_self.ctx - 1]()
+
+def Starred_set_ctx(space, w_self, w_new_value):
+    try:
+        obj = space.interp_w(expr_context, w_new_value)
+        w_self.ctx = obj.to_simple_int(space)
+    except OperationError, e:
+        if not e.match(space, space.w_TypeError):
+            raise
+        w_self.setdictvalue(space, 'ctx', w_new_value)
+        return
+    # need to save the original object too
+    w_self.setdictvalue(space, 'ctx', w_new_value)
+    w_self.initialization_state |= 8
+
+_Starred_field_unroller = unrolling_iterable(['value', 'ctx'])
+def Starred_init(space, w_self, __args__):
+    w_self = space.descr_self_interp_w(Starred, w_self)
+    args_w, kwargs_w = __args__.unpack()
+    if args_w:
+        if len(args_w) != 2:
+            w_err = space.wrap("Starred constructor takes either 0 or 2 positional arguments")
+            raise OperationError(space.w_TypeError, w_err)
+        i = 0
+        for field in _Starred_field_unroller:
+            space.setattr(w_self, space.wrap(field), args_w[i])
+            i += 1
+    for field, w_value in kwargs_w.iteritems():
+        space.setattr(w_self, space.wrap(field), w_value)
+
+Starred.typedef = typedef.TypeDef("Starred",
+    expr.typedef,
+    __module__='_ast',
+    _fields=_FieldsWrapper(['value', 'ctx']),
+    value=typedef.GetSetProperty(Starred_get_value, Starred_set_value, cls=Starred),
+    ctx=typedef.GetSetProperty(Starred_get_ctx, Starred_set_ctx, cls=Starred),
+    __new__=interp2app(get_AST_new(Starred)),
+    __init__=interp2app(Starred_init),
+)
+
 def Name_get_id(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'id')
diff --git a/pypy/interpreter/astcompiler/tools/Python.asdl b/pypy/interpreter/astcompiler/tools/Python.asdl
--- a/pypy/interpreter/astcompiler/tools/Python.asdl
+++ b/pypy/interpreter/astcompiler/tools/Python.asdl
@@ -73,6 +73,7 @@
          -- the following expression can appear in assignment context
          | Attribute(expr value, identifier attr, expr_context ctx)
          | Subscript(expr value, slice slice, expr_context ctx)
+         | Starred(expr value, expr_context ctx)
          | Name(identifier id, expr_context ctx)
          | List(expr* elts, expr_context ctx) 
          | Tuple(expr* elts, expr_context ctx)

From noreply at buildbot.pypy.org Tue Jan 17 20:46:31 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 17 Jan 2012 20:46:31 +0100 (CET)
Subject: [pypy-commit] pypy default: a test and a fix
Message-ID: <20120117194631.DDDB4820D8@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: 
Changeset: r51419:80570f9631d7
Date: 2012-01-17 21:29 +0200
http://bitbucket.org/pypy/pypy/changeset/80570f9631d7/

Log: a test and a fix

diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py
--- a/pypy/tool/jitlogparser/parser.py
+++ b/pypy/tool/jitlogparser/parser.py
@@ -140,13 +140,15 @@
     bytecode_name = None
     is_bytecode = True
     inline_level = None
+    has_dmp = False
 
     def parse_code_data(self, arg):
        m = re.search('\w]+)[\.,] file \'(.+?)\'[\.,] line (\d+)> #(\d+) (\w+)', arg)
        if m is None:
            # a non-code loop, like StrLiteralSearch or something
-            self.bytecode_name = arg
+            if arg:
+                self.bytecode_name = arg
        else:
            self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups()
            self.startlineno = int(lineno)
@@ -218,7 +220,7 @@
         self.inputargs = inputargs
         self.chunks = chunks
         for chunk in self.chunks:
-            if chunk.filename is not None:
+            if chunk.bytecode_name is not None:
                 self.startlineno = chunk.startlineno
                 self.filename = chunk.filename
                 self.name = chunk.name
diff --git a/pypy/tool/jitlogparser/test/test_parser.py b/pypy/tool/jitlogparser/test/test_parser.py
--- a/pypy/tool/jitlogparser/test/test_parser.py
+++ b/pypy/tool/jitlogparser/test/test_parser.py
@@ -283,3 +283,13 @@
     assert loops[-1].count == 1234
     assert loops[1].count == 123
     assert loops[2].count == 12
+
+def test_parse_nonpython():
+    loop = parse("""
+    []
+    debug_merge_point(0, 'random')
+    debug_merge_point(0, ' #15 COMPARE_OP')
+    """)
+    f = Function.from_operations(loop.operations, LoopStorage())
+    assert f.chunks[-1].filename == 'x.py'
+    assert f.filename is None

From noreply at buildbot.pypy.org Tue Jan 17 20:46:33 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Tue, 17 Jan 2012 20:46:33 +0100 (CET)
Subject: [pypy-commit] pypy default: merge
Message-ID: <20120117194633.23413820D8@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: 
Changeset: r51420:31b1dbaa7024
Date: 2012-01-17 21:45 +0200
http://bitbucket.org/pypy/pypy/changeset/31b1dbaa7024/

Log: merge

diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/lib_pypy/numpypy/test/test_fromnumeric.py
--- a/lib_pypy/numpypy/test/test_fromnumeric.py
+++ b/lib_pypy/numpypy/test/test_fromnumeric.py
@@ -1,7 +1,7 @@
-
 from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest
-class AppTestFromNumeric(BaseNumpyAppTest):
+
+class AppTestFromNumeric(BaseNumpyAppTest):
     def test_argmax(self):
         # tests taken from numpy/core/fromnumeric.py docstring
         from numpypy import array, arange, argmax
@@ -18,12 +18,12 @@
         from numpypy import array, arange, argmin
         a = arange(6).reshape((2,3))
         assert argmin(a) == 0
-        # assert (argmax(a, axis=0) == array([0, 0, 0])).all()
-        # assert (argmax(a, axis=1) == array([0, 0])).all()
+        assert (argmin(a, axis=0) == array([0, 0, 0])).all()
+        assert (argmin(a, axis=1) == array([0, 0])).all()
         b = arange(6)
         b[1] = 0
         assert argmin(b) == 0
-    
+
     def test_shape(self):
         # tests taken from numpy/core/fromnumeric.py docstring
         from numpypy import array, identity, shape
@@ -44,7 +44,7 @@
         # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all()
         # If the accumulator is too small, overflow occurs:
         # assert ones(128, dtype=int8).sum(dtype=int8) == -128
-    
+
     def test_amin(self):
         # tests taken from numpy/core/fromnumeric.py docstring
         from numpypy import array, arange, amin
@@ -86,14 +86,14 @@
         assert ndim([[1,2,3],[4,5,6]]) == 2
         assert ndim(array([[1,2,3],[4,5,6]])) == 2
         assert ndim(1) == 0
-    
+
     def test_rank(self):
         # tests taken from numpy/core/fromnumeric.py docstring
         from numpypy import array, rank
         assert rank([[1,2,3],[4,5,6]]) == 2
         assert rank(array([[1,2,3],[4,5,6]])) == 2
         assert rank(1) == 0
-    
+
     def test_var(self):
         from numpypy import array, var
         a = array([[1,2],[3,4]])
@@ -107,3 +107,21 @@
         assert std(a) == 1.1180339887498949
         # assert (std(a, axis=0) == array([ 1., 1.])).all()
         # assert (std(a, axis=1) == array([ 0.5, 0.5]).all()
+
+    def test_mean(self):
+        from numpypy import array, mean
+        assert mean(array(range(5))) == 2.0
+        assert mean(range(5)) == 2.0
+
+    def test_reshape(self):
+        from numpypy import arange, array, dtype, reshape
+        a = arange(12)
+        b = reshape(a, (3, 4))
+        assert b.shape == (3, 4)
+        a = range(12)
+        b = reshape(a, (3, 4))
+        assert b.shape == (3, 4)
+        a = array(range(105)).reshape(3, 5, 7)
+        assert reshape(a, (1, -1)).shape == (1, 105)
+        assert reshape(a, (1, 1, -1)).shape == (1, 1, 105)
+        assert reshape(a, (-1, 1, 1)).shape == (105, 1, 1)
diff --git a/pypy/module/_codecs/interp_codecs.py b/pypy/module/_codecs/interp_codecs.py
--- a/pypy/module/_codecs/interp_codecs.py
+++ b/pypy/module/_codecs/interp_codecs.py
@@ -108,6 +108,10 @@
     w_result = state.codec_search_cache.get(normalized_encoding, None)
     if w_result is not None:
         return w_result
+    return _lookup_codec_loop(space, encoding, normalized_encoding)
+
+def _lookup_codec_loop(space, encoding, normalized_encoding):
+    state = space.fromcache(CodecState)
     if state.codec_need_encodings:
         w_import = space.getattr(space.builtin, space.wrap("__import__"))
         # registers new codecs
diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py
--- a/pypy/module/micronumpy/__init__.py
+++ b/pypy/module/micronumpy/__init__.py
@@ -90,7 +90,6 @@
 
     appleveldefs = {
         'average': 'app_numpy.average',
-        'mean': 'app_numpy.mean',
         'sum': 'app_numpy.sum',
         'min': 'app_numpy.min',
         'identity': 'app_numpy.identity',
@@ -99,5 +98,4 @@
         'e': 'app_numpy.e',
         'pi': 'app_numpy.pi',
         'arange': 'app_numpy.arange',
-        'reshape': 'app_numpy.reshape',
     }
diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py
--- a/pypy/module/micronumpy/app_numpy.py
+++ b/pypy/module/micronumpy/app_numpy.py
@@ -11,23 +11,20 @@
 def average(a):
     # This implements a weighted average, for now we don't implement the
     # weighting, just the average part!
-    return mean(a)
+    if not hasattr(a, "mean"):
+        a = _numpypy.array(a)
+    return a.mean()
 
 def identity(n, dtype=None):
-    a = _numpypy.zeros((n,n), dtype=dtype)
+    a = _numpypy.zeros((n, n), dtype=dtype)
     for i in range(n):
         a[i][i] = 1
     return a
 
-def mean(a, axis=None):
-    if not hasattr(a, "mean"):
-        a = _numpypy.array(a)
-    return a.mean(axis)
-
 def sum(a,axis=None):
     '''sum(a, axis=None)
     Sum of array elements over a given axis.
-    
+
     Parameters
     ----------
     a : array_like
@@ -35,7 +32,7 @@
     axis : integer, optional
         Axis over which the sum is taken. By default `axis` is None, and
         all elements are summed.
-    
+
     Returns
     -------
     sum_along_axis : ndarray
@@ -43,7 +40,7 @@
         axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar
        is returned. If an output array is specified, a reference to
        `out` is returned.
- + See Also -------- ndarray.sum : Equivalent method. @@ -79,40 +76,3 @@ arr[j] = i i += step return arr - - -def reshape(a, shape): - '''reshape(a, newshape) - Gives a new shape to an array without changing its data. - - Parameters - ---------- - a : array_like - Array to be reshaped. - newshape : int or tuple of ints - The new shape should be compatible with the original shape. If - an integer, then the result will be a 1-D array of that length. - One shape dimension can be -1. In this case, the value is inferred - from the length of the array and remaining dimensions. - - Returns - ------- - reshaped_array : ndarray - This will be a new view object if possible; otherwise, it will - be a copy. - - - See Also - -------- - ndarray.reshape : Equivalent method. - - Notes - ----- - - It is not always possible to change the shape of an array without - copying the data. If you want an error to be raise if the data is copied, - you should assign the new shape to the shape attribute of the array -''' - if not hasattr(a, 'reshape'): - a = _numpypy.array(a) - return a.reshape(shape) diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -2,16 +2,11 @@ class AppTestNumPyModule(BaseNumpyAppTest): - def test_mean(self): - from _numpypy import array, mean - assert mean(array(range(5))) == 2.0 - assert mean(range(5)) == 2.0 - def test_average(self): from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 - + def test_sum(self): from _numpypy import array, sum assert sum(range(10)) == 45 @@ -21,7 +16,7 @@ from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 - + def test_max(self): from _numpypy import array, max assert max(range(10)) == 9 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py 
--- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -726,16 +726,16 @@ assert d[1] == 12 def test_mean(self): - from _numpypy import array, mean + from _numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 a = array(range(105)).reshape(3, 5, 7) - b = mean(a, axis=0) - b[0,0]==35. + b = a.mean(axis=0) + b[0, 0]==35. assert a.mean(axis=0)[0, 0] == 35 assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() - assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() + assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() def test_sum(self): from _numpypy import array @@ -1550,18 +1550,3 @@ a = arange(0, 0.8, 0.1) assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) - - -class AppTestRanges(BaseNumpyAppTest): - def test_app_reshape(self): - from _numpypy import arange, array, dtype, reshape - a = arange(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = range(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = array(range(105)).reshape(3, 5, 7) - assert a.reshape(1, -1).shape == (1, 105) - assert a.reshape(1, 1, -1).shape == (1, 1, 105) - assert a.reshape(-1, 1, 1).shape == (105, 1, 1) From noreply at buildbot.pypy.org Tue Jan 17 21:46:23 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 21:46:23 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks-2: dmp creation Message-ID: <20120117204623.16E8A820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks-2 Changeset: r51421:0513eb237097 Date: 2012-01-17 22:25 +0200 http://bitbucket.org/pypy/pypy/changeset/0513eb237097/ Log: dmp creation diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -11,6 +11,7 @@ 'set_optimize_hook': 'interp_resop.set_optimize_hook', 
'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 'interp_resop.WrappedOp', + 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', } diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,5 +1,6 @@ -from pypy.interpreter.typedef import TypeDef, GetSetProperty +from pypy.interpreter.typedef import TypeDef, GetSetProperty,\ + interp_attrproperty from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode @@ -150,6 +151,14 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) + at unwrap_spec(repr=str, pycode=PyCode, bytecode_no=int) +def descr_new_dmp(space, w_tp, w_args, repr, pycode, bytecode_no): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + num = rop.DEBUG_MERGE_POINT + return DebugMergePoint(jit_hooks.resop_new(num, args, jit_hooks.emptyval()), + repr, pycode, bytecode_no) + class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely """ @@ -182,6 +191,17 @@ box = space.interp_w(WrappedBox, w_box) jit_hooks.resop_setresult(self.op, box.llbox) +class DebugMergePoint(WrappedOp): + def __init__(self, op, repr_of_resop, pycode, bytecode_no): + WrappedOp.__init__(self, op, -1, repr_of_resop) + self.pycode = pycode + self.bytecode_no = bytecode_no + + def get_bytecode_no(self, space): + if self.pycode is None: + return space.w_None + return space.wrap(self.bytecode_no) + WrappedOp.typedef = TypeDef( 'ResOperation', __doc__ = WrappedOp.__doc__, @@ -195,3 +215,13 @@ WrappedOp.descr_setresult) ) WrappedOp.acceptable_as_base_class = False + +DebugMergePoint.typedef = TypeDef( + 'DebugMergePoint', WrappedOp.typedef, + __new__ = interp2app(descr_new_dmp), + pycode = 
interp_attrproperty('pycode', cls=DebugMergePoint), + bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), +) +DebugMergePoint.acceptable_as_base_class = False + + diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -92,6 +92,7 @@ cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_dmp_num = space.wrap(rop.DEBUG_MERGE_POINT) cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) cls.orig_oplist = oplist @@ -211,3 +212,14 @@ assert op.getarg(0).getint() == 4 op.result = box assert op.result.getint() == 1 + + def test_creation_dmp(self): + from pypyjit import DebugMergePoint, Box + + def f(): + pass + + op = DebugMergePoint([Box(0)], '', f.func_code, 0) + assert op.bytecode_no == 0 + assert op.pycode is f.func_code + assert op.num == self.dmp_num diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -52,7 +52,10 @@ from pypy.jit.metainterp.history import ResOperation args = [_cast_to_box(llargs[i]) for i in range(len(llargs))] - res = _cast_to_box(llres) + if llres: + res = _cast_to_box(llres) + else: + res = None return _cast_to_gcref(ResOperation(no, args, res)) @register_helper(annmodel.SomePtr(llmemory.GCREF)) From noreply at buildbot.pypy.org Tue Jan 17 21:46:24 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 17 Jan 2012 21:46:24 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks-2: implement jithooks Message-ID: <20120117204624.4BD1A820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: better-jit-hooks-2 Changeset: r51422:1ef43f689d9b Date: 2012-01-17 22:45 +0200 http://bitbucket.org/pypy/pypy/changeset/1ef43f689d9b/ Log: implement jithooks diff --git 
a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -11,6 +11,7 @@ from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant from pypy.rlib import jit_hooks +from pypy.module.pypyjit.interp_jit import pypyjitdriver class Cache(object): in_recursion = False @@ -112,13 +113,24 @@ def wrap_oplist(space, logops, operations, ops_offset=None): l_w = [] + jitdrivers_sd = logops.metainterp_sd.jitdrivers_sd for op in operations: if ops_offset is None: ofs = -1 else: ofs = ops_offset.get(op, 0) - l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, - logops.repr_of_resop(op))) + if op.opnum == rop.DEBUG_MERGE_POINT: + jd_sd = jitdrivers_sd[op.getarg(0).getint()] + greenkey = op.getarglist()[2:] + repr = jd_sd.warmstate.get_location_str(greenkey) + w_descr = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) + l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), + logops.repr_of_resop(op), + jd_sd.jitdriver.name, + w_descr)) + else: + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) return l_w class WrappedBox(Wrappable): @@ -151,13 +163,14 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) - at unwrap_spec(repr=str, pycode=PyCode, bytecode_no=int) -def descr_new_dmp(space, w_tp, w_args, repr, pycode, bytecode_no): + at unwrap_spec(repr=str, jd_name=str) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_descr): args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in space.listview(w_args)] num = rop.DEBUG_MERGE_POINT - return DebugMergePoint(jit_hooks.resop_new(num, args, jit_hooks.emptyval()), - repr, pycode, bytecode_no) + return DebugMergePoint(space, + jit_hooks.resop_new(num, args, jit_hooks.emptyval()), + repr, jd_name, w_descr) class WrappedOp(Wrappable): """ A class representing a single ResOperation, 
wrapped nicely @@ -192,15 +205,26 @@ jit_hooks.resop_setresult(self.op, box.llbox) class DebugMergePoint(WrappedOp): - def __init__(self, op, repr_of_resop, pycode, bytecode_no): + def __init__(self, space, op, repr_of_resop, jd_name, w_descr): WrappedOp.__init__(self, op, -1, repr_of_resop) - self.pycode = pycode - self.bytecode_no = bytecode_no + self.w_descr = w_descr + self.jd_name = jd_name + + def get_descr(self, space): + return self.w_descr + + def get_pycode(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_descr, space.wrap(0)) + raise OperationError(space.w_AttributeError, space.wrap('DebugMergePoint is not the main jitdriver DMP')) def get_bytecode_no(self, space): - if self.pycode is None: - return space.w_None - return space.wrap(self.bytecode_no) + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_descr, space.wrap(1)) + raise OperationError(space.w_AttributeError, space.wrap('DebugMergePoint is not the main jitdriver DMP')) + + def get_jitdriver(self, space): + return space.wrap(self.jd_name) WrappedOp.typedef = TypeDef( 'ResOperation', @@ -219,8 +243,10 @@ DebugMergePoint.typedef = TypeDef( 'DebugMergePoint', WrappedOp.typedef, __new__ = interp2app(descr_new_dmp), - pycode = interp_attrproperty('pycode', cls=DebugMergePoint), + descr = GetSetProperty(DebugMergePoint.get_descr), + pycode = GetSetProperty(DebugMergePoint.get_pycode), bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver), ) DebugMergePoint.acceptable_as_base_class = False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -118,6 +118,9 @@ assert elem[2][2] == False assert len(elem[3]) == 4 int_add = elem[3][0] + dmp = elem[3][1] + assert isinstance(dmp, pypyjit.DebugMergePoint) + assert dmp.pycode is 
self.f.func_code #assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() @@ -219,7 +222,11 @@ def f(): pass - op = DebugMergePoint([Box(0)], '', f.func_code, 0) + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) assert op.bytecode_no == 0 assert op.pycode is f.func_code + assert repr(op) == 'repr' + assert op.jitdriver_name == 'pypyjit' assert op.num == self.dmp_num + op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + raises(AttributeError, 'op.pycode') From noreply at buildbot.pypy.org Tue Jan 17 22:17:22 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 17 Jan 2012 22:17:22 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks-2: Some naming changes. Message-ID: <20120117211722.7FBBB820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: better-jit-hooks-2 Changeset: r51423:b906e1141138 Date: 2012-01-17 15:17 -0600 http://bitbucket.org/pypy/pypy/changeset/b906e1141138/ Log: Some naming changes. 
diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -216,14 +216,14 @@ def get_pycode(self, space): if self.jd_name == pypyjitdriver.name: return space.getitem(self.w_descr, space.wrap(0)) - raise OperationError(space.w_AttributeError, space.wrap('DebugMergePoint is not the main jitdriver DMP')) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) def get_bytecode_no(self, space): if self.jd_name == pypyjitdriver.name: return space.getitem(self.w_descr, space.wrap(1)) - raise OperationError(space.w_AttributeError, space.wrap('DebugMergePoint is not the main jitdriver DMP')) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) - def get_jitdriver(self, space): + def get_jitdriver_name(self, space): return space.wrap(self.jd_name) WrappedOp.typedef = TypeDef( @@ -246,7 +246,7 @@ descr = GetSetProperty(DebugMergePoint.get_descr), pycode = GetSetProperty(DebugMergePoint.get_pycode), bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), - jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver), + jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), ) DebugMergePoint.acceptable_as_base_class = False From noreply at buildbot.pypy.org Tue Jan 17 22:27:44 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 17 Jan 2012 22:27:44 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks-2: change DMP.descr to DMP.greenkey Message-ID: <20120117212744.5C04A820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: better-jit-hooks-2 Changeset: r51424:6354270bb576 Date: 2012-01-17 15:27 -0600 http://bitbucket.org/pypy/pypy/changeset/6354270bb576/ Log: change DMP.descr to DMP.greenkey diff --git a/pypy/module/pypyjit/interp_resop.py 
b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,6 +1,6 @@ -from pypy.interpreter.typedef import TypeDef, GetSetProperty,\ - interp_attrproperty +from pypy.interpreter.typedef import (TypeDef, GetSetProperty, + interp_attrproperty, interp_attrproperty_w) from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode @@ -123,11 +123,11 @@ jd_sd = jitdrivers_sd[op.getarg(0).getint()] greenkey = op.getarglist()[2:] repr = jd_sd.warmstate.get_location_str(greenkey) - w_descr = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) + w_greenkey = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), logops.repr_of_resop(op), jd_sd.jitdriver.name, - w_descr)) + w_greenkey)) else: l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, logops.repr_of_resop(op))) @@ -164,13 +164,13 @@ return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) @unwrap_spec(repr=str, jd_name=str) -def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_descr): +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in space.listview(w_args)] num = rop.DEBUG_MERGE_POINT return DebugMergePoint(space, jit_hooks.resop_new(num, args, jit_hooks.emptyval()), - repr, jd_name, w_descr) + repr, jd_name, w_greenkey) class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely @@ -205,22 +205,19 @@ jit_hooks.resop_setresult(self.op, box.llbox) class DebugMergePoint(WrappedOp): - def __init__(self, space, op, repr_of_resop, jd_name, w_descr): + def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): WrappedOp.__init__(self, op, -1, repr_of_resop) - self.w_descr = w_descr + self.w_greenkey = w_greenkey self.jd_name = jd_name - def 
get_descr(self, space): - return self.w_descr - def get_pycode(self, space): if self.jd_name == pypyjitdriver.name: - return space.getitem(self.w_descr, space.wrap(0)) + return space.getitem(self.w_greenkey, space.wrap(0)) raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) def get_bytecode_no(self, space): if self.jd_name == pypyjitdriver.name: - return space.getitem(self.w_descr, space.wrap(1)) + return space.getitem(self.w_greenkey, space.wrap(1)) raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) def get_jitdriver_name(self, space): @@ -243,11 +240,11 @@ DebugMergePoint.typedef = TypeDef( 'DebugMergePoint', WrappedOp.typedef, __new__ = interp2app(descr_new_dmp), - descr = GetSetProperty(DebugMergePoint.get_descr), + greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), pycode = GetSetProperty(DebugMergePoint.get_pycode), bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), ) DebugMergePoint.acceptable_as_base_class = False - + diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -121,6 +121,7 @@ dmp = elem[3][1] assert isinstance(dmp, pypyjit.DebugMergePoint) assert dmp.pycode is self.f.func_code + assert dmp.greenkey == (self.f.func_code, 0, False) #assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() From noreply at buildbot.pypy.org Tue Jan 17 22:30:52 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 17 Jan 2012 22:30:52 +0100 (CET) Subject: [pypy-commit] pypy better-jit-hooks-2: Close branch for merge Message-ID: <20120117213052.8BC1C820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: better-jit-hooks-2 
Changeset: r51425:886f6fbce8ef Date: 2012-01-17 15:29 -0600 http://bitbucket.org/pypy/pypy/changeset/886f6fbce8ef/ Log: Close branch for merge From noreply at buildbot.pypy.org Tue Jan 17 22:30:53 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 17 Jan 2012 22:30:53 +0100 (CET) Subject: [pypy-commit] pypy default: (fijal mostly) Merged better-jit-hooks-2. Message-ID: <20120117213053.C276B820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51426:c6ea8ef23064 Date: 2012-01-17 15:30 -0600 http://bitbucket.org/pypy/pypy/changeset/c6ea8ef23064/ Log: (fijal mostly) Merged better-jit-hooks-2. This exposes metadata about DebugMergePoints in jit hooks, specifically the current bytecode and the pycode object. diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -11,6 +11,7 @@ 'set_optimize_hook': 'interp_resop.set_optimize_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 'interp_resop.WrappedOp', + 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', } diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,5 +1,6 @@ -from pypy.interpreter.typedef import TypeDef, GetSetProperty +from pypy.interpreter.typedef import (TypeDef, GetSetProperty, + interp_attrproperty, interp_attrproperty_w) from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode @@ -10,6 +11,7 @@ from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant from pypy.rlib import jit_hooks +from pypy.module.pypyjit.interp_jit import pypyjitdriver class Cache(object): in_recursion = False @@ -111,13 +113,24 @@ def wrap_oplist(space, logops, 
operations, ops_offset=None): l_w = [] + jitdrivers_sd = logops.metainterp_sd.jitdrivers_sd for op in operations: if ops_offset is None: ofs = -1 else: ofs = ops_offset.get(op, 0) - l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, - logops.repr_of_resop(op))) + if op.opnum == rop.DEBUG_MERGE_POINT: + jd_sd = jitdrivers_sd[op.getarg(0).getint()] + greenkey = op.getarglist()[2:] + repr = jd_sd.warmstate.get_location_str(greenkey) + w_greenkey = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) + l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), + logops.repr_of_resop(op), + jd_sd.jitdriver.name, + w_greenkey)) + else: + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) return l_w class WrappedBox(Wrappable): @@ -150,6 +163,15 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) + at unwrap_spec(repr=str, jd_name=str) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + num = rop.DEBUG_MERGE_POINT + return DebugMergePoint(space, + jit_hooks.resop_new(num, args, jit_hooks.emptyval()), + repr, jd_name, w_greenkey) + class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely """ @@ -182,6 +204,25 @@ box = space.interp_w(WrappedBox, w_box) jit_hooks.resop_setresult(self.op, box.llbox) +class DebugMergePoint(WrappedOp): + def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + WrappedOp.__init__(self, op, -1, repr_of_resop) + self.w_greenkey = w_greenkey + self.jd_name = jd_name + + def get_pycode(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(0)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_bytecode_no(self, space): + if self.jd_name == pypyjitdriver.name: + return 
space.getitem(self.w_greenkey, space.wrap(1)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_jitdriver_name(self, space): + return space.wrap(self.jd_name) + WrappedOp.typedef = TypeDef( 'ResOperation', __doc__ = WrappedOp.__doc__, @@ -195,3 +236,15 @@ WrappedOp.descr_setresult) ) WrappedOp.acceptable_as_base_class = False + +DebugMergePoint.typedef = TypeDef( + 'DebugMergePoint', WrappedOp.typedef, + __new__ = interp2app(descr_new_dmp), + greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), + pycode = GetSetProperty(DebugMergePoint.get_pycode), + bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), +) +DebugMergePoint.acceptable_as_base_class = False + + diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -92,6 +92,7 @@ cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_dmp_num = space.wrap(rop.DEBUG_MERGE_POINT) cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) cls.orig_oplist = oplist @@ -117,6 +118,10 @@ assert elem[2][2] == False assert len(elem[3]) == 4 int_add = elem[3][0] + dmp = elem[3][1] + assert isinstance(dmp, pypyjit.DebugMergePoint) + assert dmp.pycode is self.f.func_code + assert dmp.greenkey == (self.f.func_code, 0, False) #assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() @@ -211,3 +216,18 @@ assert op.getarg(0).getint() == 4 op.result = box assert op.result.getint() == 1 + + def test_creation_dmp(self): + from pypyjit import DebugMergePoint, Box + + def f(): + pass + + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', 
(f.func_code, 0, 0)) + assert op.bytecode_no == 0 + assert op.pycode is f.func_code + assert repr(op) == 'repr' + assert op.jitdriver_name == 'pypyjit' + assert op.num == self.dmp_num + op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + raises(AttributeError, 'op.pycode') diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -52,7 +52,10 @@ from pypy.jit.metainterp.history import ResOperation args = [_cast_to_box(llargs[i]) for i in range(len(llargs))] - res = _cast_to_box(llres) + if llres: + res = _cast_to_box(llres) + else: + res = None return _cast_to_gcref(ResOperation(no, args, res)) @register_helper(annmodel.SomePtr(llmemory.GCREF)) From noreply at buildbot.pypy.org Wed Jan 18 00:11:10 2012 From: noreply at buildbot.pypy.org (mikefc) Date: Wed, 18 Jan 2012 00:11:10 +0100 (CET) Subject: [pypy-commit] pypy default: Move fromnumeric into core/ subdirectory to follow numpy's layout Message-ID: <20120117231110.20F63710260@wyvern.cs.uni-duesseldorf.de> Author: mikefc Branch: Changeset: r51428:e4314d54ea7f Date: 2012-01-18 09:10 +1000 http://bitbucket.org/pypy/pypy/changeset/e4314d54ea7f/ Log: Move fromnumeric into core/ subdirectory to follow numpy's layout diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,2 @@ from _numpypy import * -from .fromnumeric import * +from .core import * diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/__init__.py @@ -0,0 +1,1 @@ +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py rename from lib_pypy/numpypy/fromnumeric.py rename to lib_pypy/numpypy/core/fromnumeric.py From noreply at buildbot.pypy.org Wed Jan 18 00:29:52 2012 From: noreply at buildbot.pypy.org (mikefc) Date: Wed, 18 Jan 2012 00:29:52 
+0100 (CET) Subject: [pypy-commit] pypy default: Add numpypy.transpose() function to call the array method Message-ID: <20120117232952.5E205710260@wyvern.cs.uni-duesseldorf.de> Author: mikefc Branch: Changeset: r51429:bc30fb8eed3e Date: 2012-01-18 09:29 +1000 http://bitbucket.org/pypy/pypy/changeset/bc30fb8eed3e/ Log: Add numpypy.transpose() function to call the array method diff --git a/lib_pypy/numpypy/core/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/core/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -451,8 +451,11 @@ (2, 1, 3) """ - raise NotImplemented('Waiting on interp level method') - + if axes is not None: + raise NotImplementedError('No "axes" arg yet.') + if not hasattr(a, 'T'): + a = numpypy.array(a) + return a.T def sort(a, axis=-1, kind='quicksort', order=None): """ diff --git a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -125,3 +125,13 @@ assert reshape(a, (1, -1)).shape == (1, 105) assert reshape(a, (1, 1, -1)).shape == (1, 1, 105) assert reshape(a, (-1, 1, 1)).shape == (105, 1, 1) + + def test_transpose(self): + from numpypy import arange, array, transpose, ones + x = arange(4).reshape((2,2)) + assert (transpose(x) == array([[0, 2],[1, 3]])).all() + # Once axes argument is implemented, add more tests + raises(NotImplementedError, "transpose(x, axes=(1, 0, 2))") + # x = ones((1, 2, 3)) + # assert transpose(x, (1, 0, 2)).shape == (2, 1, 3) + From noreply at buildbot.pypy.org Wed Jan 18 00:32:39 2012 From: noreply at buildbot.pypy.org (mikefc) Date: Wed, 18 Jan 2012 00:32:39 +0100 (CET) Subject: [pypy-commit] pypy default: Use the proper exception for NotImplementedError Message-ID: <20120117233239.C8C0B710260@wyvern.cs.uni-duesseldorf.de> Author: mikefc Branch: Changeset: r51430:07bd76d923fd Date: 
2012-01-18 09:32 +1000 http://bitbucket.org/pypy/pypy/changeset/07bd76d923fd/ Log: Use the proper exception for NotImplementedError diff --git a/lib_pypy/numpypy/core/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/core/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -85,7 +85,7 @@ array([4, 3, 6]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') # not deprecated --- copy if necessary, view otherwise @@ -273,7 +273,7 @@ [-1, -2, -3, -4, -5]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def repeat(a, repeats, axis=None): @@ -315,7 +315,7 @@ [3, 4]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def put(a, ind, v, mode='raise'): @@ -366,7 +366,7 @@ array([ 0, 1, 2, 3, -5]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def swapaxes(a, axis1, axis2): @@ -410,7 +410,7 @@ [3, 7]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def transpose(a, axes=None): @@ -556,7 +556,7 @@ dtype=[('name', '|S10'), ('height', ' Author: Maciej Fijalkowski Branch: extradoc Changeset: r4043:5ddb005ca8e2 Date: 2012-01-18 01:40 +0200 http://bitbucket.org/pypy/extradoc/changeset/5ddb005ca8e2/ Log: some made up claims diff --git a/blog/draft/pypy-2011.rst b/blog/draft/pypy-2011.rst --- a/blog/draft/pypy-2011.rst +++ b/blog/draft/pypy-2011.rst @@ -24,7 +24,10 @@ * We made PyPy over 2x faster (on a select set of benchmarks), `1.4 and nightly`_ comparison +* We made PyPy 17x more compatible (of course this is unmeasurable, so why + not just claim it) - +* We probably have [XXX insert unicode for infinity] infinitely many times more users now, since noone used + 1.4 in 
production .. _`1.4 and nightly` http://speed.pypy.org/comparison/?exe=1%2B172%2C1%2BL%2Bdefault&ben=1%2C34%2C27%2C2%2C25%2C3%2C4%2C5%2C22%2C6%2C39%2C7%2C8%2C23%2C24%2C9%2C10%2C11%2C12%2C13%2C14%2C15%2C35%2C36%2C37%2C38%2C16%2C28%2C30%2C32%2C29%2C33%2C17%2C18%2C19%2C20&env=1%2C2&hor=true&bas=1%2B172&chart=normal+bars From noreply at buildbot.pypy.org Wed Jan 18 00:53:54 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 18 Jan 2012 00:53:54 +0100 (CET) Subject: [pypy-commit] benchmarks default: unbreak benchmarks Message-ID: <20120117235354.3C98E71063B@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r160:b6376749f6e5 Date: 2012-01-18 01:53 +0200 http://bitbucket.org/pypy/benchmarks/changeset/b6376749f6e5/ Log: unbreak benchmarks diff --git a/benchmarks.py b/benchmarks.py --- a/benchmarks.py +++ b/benchmarks.py @@ -153,7 +153,10 @@ for python in [base_python, changed_python]: maindir = relative('lib/cpython-doc') builddir = os.path.join(os.path.join(maindir, 'tools'), 'build') - shutil.rmtree(builddir) + try: + shutil.rmtree(builddir) + except OSError: + pass build = relative('lib/cpython-doc/tools/sphinx-build.py') os.mkdir(builddir) docdir = os.path.join(builddir, 'doctrees') From noreply at buildbot.pypy.org Wed Jan 18 00:38:35 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 18 Jan 2012 00:38:35 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: start drafting blog post Message-ID: <20120117233835.398AD710260@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: extradoc Changeset: r4042:7288adc71948 Date: 2012-01-18 01:38 +0200 http://bitbucket.org/pypy/extradoc/changeset/7288adc71948/ Log: start drafting blog post diff --git a/blog/draft/pypy-2011.rst b/blog/draft/pypy-2011.rst new file mode 100644 --- /dev/null +++ b/blog/draft/pypy-2011.rst @@ -0,0 +1,30 @@ +PyPy development in 2011 +======================== + +Hello everyone. + +PyPy development is fast, sometimes very fast. 
We try to do 3-4 releases a year +and yet a lot of time when someone complains about the last release being slow +we usually say "oh, but it's the release, that was AGES ago". That makes +something as big as summarizing a year next to impossible, but using my +external IQ amplifiers like internet and hg log, I will try to provide you +with some semi-serious statistics: + +* We made 3 releases last year, boringly, 1.5, 1.6 and 1.7 + +* Number of visitors on pypy.org grew from about 200 to 800 per day (averaged + weekly) + +* We spoke at 15 conferences/meetups + +* We made 10660 commits, or 29 per day (mostly typos ;-) + +* We published 43 blog posts, keeping you entertained almost weekly + +* We made PyPy over 2x faster (on a select set of benchmarks), + `1.4 and nightly`_ comparison + + + + +.. _`1.4 and nightly` http://speed.pypy.org/comparison/?exe=1%2B172%2C1%2BL%2Bdefault&ben=1%2C34%2C27%2C2%2C25%2C3%2C4%2C5%2C22%2C6%2C39%2C7%2C8%2C23%2C24%2C9%2C10%2C11%2C12%2C13%2C14%2C15%2C35%2C36%2C37%2C38%2C16%2C28%2C30%2C32%2C29%2C33%2C17%2C18%2C19%2C20&env=1%2C2&hor=true&bas=1%2B172&chart=normal+bars From noreply at buildbot.pypy.org Wed Jan 18 01:22:47 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 18 Jan 2012 01:22:47 +0100 (CET) Subject: [pypy-commit] pypy matrixmath-dot: merge from default Message-ID: <20120118002247.8B9A5710260@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: matrixmath-dot Changeset: r51431:d512cc3a2cf4 Date: 2012-01-17 21:06 +0200 http://bitbucket.org/pypy/pypy/changeset/d512cc3a2cf4/ Log: merge from default diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -340,7 +340,7 @@ requires=[("objspace.std.builtinshortcut", True)]), BoolOption("withidentitydict", "track types that override 
__hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not", - default=True), + default=False), ]), ]) @@ -370,6 +370,7 @@ config.objspace.std.suggest(getattributeshortcut=True) config.objspace.std.suggest(newshortcut=True) config.objspace.std.suggest(withspecialisedtuple=True) + config.objspace.std.suggest(withidentitydict=True) #if not IS_64_BITS: # config.objspace.std.suggest(withsmalllong=True) diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h --- a/pypy/module/cpyext/include/patchlevel.h +++ b/pypy/module/cpyext/include/patchlevel.h @@ -29,7 +29,7 @@ #define PY_VERSION "2.7.1" /* PyPy version as a string */ -#define PYPY_VERSION "1.7.1" +#define PYPY_VERSION "1.8.1" /* Subversion Revision number of this file (not of the repository). * Empty since Mercurial migration. */ diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -157,6 +157,8 @@ offset += self.strides[i] break else: + if i == self.dim: + first_line = True indices[i] = 0 offset -= self.backstrides[i] else: diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -287,11 +287,11 @@ descr_rmod = _binop_right_impl("mod") def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): - def impl(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def impl(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, - self, True, promote_to_largest, w_dim) + self, True, promote_to_largest, w_axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") @@ -635,14 +635,14 @@ ) return w_result - def 
descr_mean(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def descr_mean(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) w_denom = space.wrap(self.size) else: - dim = space.int_w(w_dim) + dim = space.int_w(w_axis) w_denom = space.wrap(self.shape[dim]) - return space.div(self.descr_sum_promote(space, w_dim), w_denom) + return space.div(self.descr_sum_promote(space, w_axis), w_denom) def descr_var(self, space): # var = mean((values - mean(values)) ** 2) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -734,6 +734,7 @@ a = array(range(105)).reshape(3, 5, 7) b = mean(a, axis=0) b[0,0]==35. + assert a.mean(axis=0)[0, 0] == 35 assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() @@ -756,6 +757,7 @@ assert array([]).sum() == 0.0 raises(ValueError, 'array([]).max()') assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() @@ -770,6 +772,8 @@ assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() def test_identity(self): from _numpypy import identity, array diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -127,7 +127,7 @@ 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 
'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', '_codecs']: if modname == 'pypyjit' and 'interp_resop' in rest: return False return True diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -64,6 +64,12 @@ check(', '.join([u'a']), u'a') check(', '.join(['a', u'b']), u'a, b') check(u', '.join(['a', 'b']), u'a, b') + try: + u''.join([u'a', 2, 3]) + except TypeError, e: + assert 'sequence item 1' in str(e) + else: + raise Exception("DID NOT RAISE") if sys.version_info >= (2,3): def test_contains_ex(self): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -201,7 +201,7 @@ return space.newbool(container.find(item) != -1) def unicode_join__Unicode_ANY(space, w_self, w_list): - list_w = space.unpackiterable(w_list) + list_w = space.listview(w_list) size = len(list_w) if size == 0: @@ -216,22 +216,21 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - sb = UnicodeBuilder() + prealloc_size = len(self) * (size - 1) + for i in range(size): + try: + prealloc_size += len(space.unicode_w(list_w[i])) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + raise operationerrfmt(space.w_TypeError, + "sequence item %d: expected string or Unicode", i) + sb = 
UnicodeBuilder(prealloc_size) for i in range(size): if self and i != 0: sb.append(self) w_s = list_w[i] - if isinstance(w_s, W_UnicodeObject): - # shortcut for performance - sb.append(w_s._value) - else: - try: - sb.append(space.unicode_w(w_s)) - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - raise operationerrfmt(space.w_TypeError, - "sequence item %d: expected string or Unicode", i) + sb.append(space.unicode_w(w_s)) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): diff --git a/pypy/rlib/longlong2float.py b/pypy/rlib/longlong2float.py --- a/pypy/rlib/longlong2float.py +++ b/pypy/rlib/longlong2float.py @@ -79,19 +79,23 @@ longlong2float = rffi.llexternal( "pypy__longlong2float", [rffi.LONGLONG], rffi.DOUBLE, _callable=longlong2float_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__longlong2float") float2longlong = rffi.llexternal( "pypy__float2longlong", [rffi.DOUBLE], rffi.LONGLONG, _callable=float2longlong_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__float2longlong") uint2singlefloat = rffi.llexternal( "pypy__uint2singlefloat", [rffi.UINT], rffi.FLOAT, _callable=uint2singlefloat_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__uint2singlefloat") singlefloat2uint = rffi.llexternal( "pypy__singlefloat2uint", [rffi.FLOAT], rffi.UINT, _callable=singlefloat2uint_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__singlefloat2uint") diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- 
a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -420,7 +420,7 @@ vobj.concretetype.TO._gckind == 'gc') else: from pypy.rpython.ootypesystem import ootype - ok = isinstance(vobj.concretetype, ootype.Instance) + ok = isinstance(vobj.concretetype, (ootype.Instance, ootype.BuiltinType)) if not ok: from pypy.rpython.error import TyperError raise TyperError("compute_unique_id() cannot be applied to" diff --git a/pypy/rpython/lltypesystem/rclass.py b/pypy/rpython/lltypesystem/rclass.py --- a/pypy/rpython/lltypesystem/rclass.py +++ b/pypy/rpython/lltypesystem/rclass.py @@ -510,7 +510,13 @@ ctype = inputconst(Void, self.object_type) cflags = inputconst(Void, flags) vlist = [ctype, cflags] - vptr = llops.genop('malloc', vlist, + cnonmovable = self.classdef.classdesc.read_attribute( + '_alloc_nonmovable_', Constant(False)) + if cnonmovable.value: + opname = 'malloc_nonmovable' + else: + opname = 'malloc' + vptr = llops.genop(opname, vlist, resulttype = Ptr(self.object_type)) ctypeptr = inputconst(CLASSTYPE, self.rclass.getvtable()) self.setfield(vptr, '__class__', ctypeptr, llops) diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -512,6 +512,7 @@ "ll_append_char": Meth([CHARTP], Void), "ll_append": Meth([STRINGTP], Void), "ll_build": Meth([], STRINGTP), + "ll_getlength": Meth([], Signed), }) self._setup_methods({}) @@ -1376,6 +1377,9 @@ def _cast_to_object(self): return make_object(self) + def _identityhash(self): + return object.__hash__(self) + class _string(_builtin_type): def __init__(self, STRING, value = ''): @@ -1543,6 +1547,9 @@ else: return make_unicode(u''.join(self._buf)) + def ll_getlength(self): + return self.ll_build().ll_strlen() + class _null_string_builder(_null_mixin(_string_builder), _string_builder): def __init__(self, STRING_BUILDER): self.__dict__["_TYPE"] = STRING_BUILDER diff --git 
a/pypy/rpython/ootypesystem/rbuilder.py b/pypy/rpython/ootypesystem/rbuilder.py --- a/pypy/rpython/ootypesystem/rbuilder.py +++ b/pypy/rpython/ootypesystem/rbuilder.py @@ -21,6 +21,10 @@ builder.ll_append_char(char) @staticmethod + def ll_getlength(builder): + return builder.ll_getlength() + + @staticmethod def ll_append(builder, string): builder.ll_append(string) diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -124,9 +124,5 @@ pass class TestOOtype(BaseTestStringBuilder, OORtypeMixin): - def test_string_getlength(self): - py.test.skip("getlength(): not implemented on ootype") - def test_unicode_getlength(self): - py.test.skip("getlength(): not implemented on ootype") def test_append_charpsize(self): py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -463,6 +463,31 @@ assert x1 == intmask(x0) assert x3 == intmask(x2) + def test_id_on_builtins(self): + from pypy.rlib.objectmodel import compute_unique_id + from pypy.rlib.rstring import StringBuilder, UnicodeBuilder + def fn(): + return (compute_unique_id("foo"), + compute_unique_id(u"bar"), + compute_unique_id([1]), + compute_unique_id({"foo": 3}), + compute_unique_id(StringBuilder()), + compute_unique_id(UnicodeBuilder())) + res = self.interpret(fn, []) + for id in self.ll_unpack_tuple(res, 6): + assert isinstance(id, (int, r_longlong)) + + def test_uniqueness_of_id_on_strings(self): + from pypy.rlib.objectmodel import compute_unique_id + def fn(s1, s2): + return (compute_unique_id(s1), compute_unique_id(s2)) + + s1 = "foo" + s2 = ''.join(['f','oo']) + res = self.interpret(fn, [self.string_to_ll(s1), self.string_to_ll(s2)]) + i1, i2 = self.ll_unpack_tuple(res, 2) + assert i1 != i2 + def test_cast_primitive(self): from 
pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): diff --git a/pypy/rpython/test/test_rclass.py b/pypy/rpython/test/test_rclass.py --- a/pypy/rpython/test/test_rclass.py +++ b/pypy/rpython/test/test_rclass.py @@ -1130,6 +1130,18 @@ assert sorted([u]) == [6] # 32-bit types assert sorted([i, r, d, l]) == [2, 3, 4, 5] # 64-bit types + def test_nonmovable(self): + for (nonmovable, opname) in [(True, 'malloc_nonmovable'), + (False, 'malloc')]: + class A(object): + _alloc_nonmovable_ = nonmovable + def f(): + return A() + t, typer, graph = self.gengraph(f, []) + assert summary(graph) == {opname: 1, + 'cast_pointer': 1, + 'setfield': 1} + class TestOOtype(BaseTestRclass, OORtypeMixin): diff --git a/pypy/translator/jvm/builtin.py b/pypy/translator/jvm/builtin.py --- a/pypy/translator/jvm/builtin.py +++ b/pypy/translator/jvm/builtin.py @@ -84,6 +84,9 @@ (ootype.StringBuilder.__class__, "ll_build"): jvm.Method.v(jStringBuilder, "toString", (), jString), + (ootype.StringBuilder.__class__, "ll_getlength"): + jvm.Method.v(jStringBuilder, "length", (), jInt), + (ootype.String.__class__, "ll_hash"): jvm.Method.v(jString, "hashCode", (), jInt), diff --git a/pypy/translator/jvm/database.py b/pypy/translator/jvm/database.py --- a/pypy/translator/jvm/database.py +++ b/pypy/translator/jvm/database.py @@ -358,7 +358,7 @@ ootype.Unsigned:jvm.PYPYSERIALIZEUINT, ootype.SignedLongLong:jvm.LONGTOSTRINGL, ootype.UnsignedLongLong: jvm.PYPYSERIALIZEULONG, - ootype.Float:jvm.DOUBLETOSTRINGD, + ootype.Float:jvm.PYPYSERIALIZEDOUBLE, ootype.Bool:jvm.PYPYSERIALIZEBOOLEAN, ootype.Void:jvm.PYPYSERIALIZEVOID, ootype.Char:jvm.PYPYESCAPEDCHAR, diff --git a/pypy/translator/jvm/metavm.py b/pypy/translator/jvm/metavm.py --- a/pypy/translator/jvm/metavm.py +++ b/pypy/translator/jvm/metavm.py @@ -92,6 +92,7 @@ CASTS = { # FROM TO (ootype.Signed, ootype.UnsignedLongLong): jvm.I2L, + (ootype.Unsigned, ootype.UnsignedLongLong): jvm.I2L, (ootype.SignedLongLong, ootype.Signed): jvm.L2I, 
(ootype.UnsignedLongLong, ootype.Unsigned): jvm.L2I, (ootype.UnsignedLongLong, ootype.Signed): jvm.L2I, diff --git a/pypy/translator/jvm/opcodes.py b/pypy/translator/jvm/opcodes.py --- a/pypy/translator/jvm/opcodes.py +++ b/pypy/translator/jvm/opcodes.py @@ -101,6 +101,7 @@ 'jit_force_virtualizable': Ignore, 'jit_force_virtual': DoNothing, 'jit_force_quasi_immutable': Ignore, + 'jit_is_virtual': PushPrimitive(ootype.Bool, False), 'debug_assert': [], # TODO: implement? 'debug_start_traceback': Ignore, diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -283,6 +283,14 @@ } } + public double pypy__longlong2float(long l) { + return Double.longBitsToDouble(l); + } + + public long pypy__float2longlong(double d) { + return Double.doubleToRawLongBits(d); + } + public double ooparse_float(String s) { try { return Double.parseDouble(s); @@ -353,6 +361,19 @@ return "False"; } + public static String serialize_double(double d) { + if (Double.isNaN(d)) { + return "float(\"nan\")"; + } else if (Double.isInfinite(d)) { + if (d > 0) + return "float(\"inf\")"; + else + return "float(\"-inf\")"; + } else { + return Double.toString(d); + } + } + private static String format_char(char c) { String res = "\\x"; if (c <= 0x0F) res = res + "0"; diff --git a/pypy/translator/jvm/test/runtest.py b/pypy/translator/jvm/test/runtest.py --- a/pypy/translator/jvm/test/runtest.py +++ b/pypy/translator/jvm/test/runtest.py @@ -56,6 +56,7 @@ # CLI could-be duplicate class JvmGeneratedSourceWrapper(object): + def __init__(self, gensrc): """ gensrc is an instance of JvmGeneratedSource """ self.gensrc = gensrc diff --git a/pypy/translator/jvm/test/test_builder.py b/pypy/translator/jvm/test/test_builder.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_builder.py @@ -0,0 +1,7 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from 
pypy.rpython.test.test_rbuilder import BaseTestStringBuilder +import py + +class TestJvmStringBuilder(JvmTest, BaseTestStringBuilder): + def test_append_charpsize(self): + py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/translator/jvm/test/test_longlong2float.py b/pypy/translator/jvm/test/test_longlong2float.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_longlong2float.py @@ -0,0 +1,20 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rlib.longlong2float import * +from pypy.rlib.test.test_longlong2float import enum_floats +from pypy.rlib.test.test_longlong2float import fn as float2longlong2float +import py + +class TestLongLong2Float(JvmTest): + + def test_float2longlong_and_longlong2float(self): + def func(f): + return float2longlong2float(f) + + for f in enum_floats(): + assert repr(f) == repr(self.interpret(func, [f])) + + def test_uint2singlefloat(self): + py.test.skip("uint2singlefloat is not implemented in ootype") + + def test_singlefloat2uint(self): + py.test.skip("singlefloat2uint is not implemented in ootype") diff --git a/pypy/translator/jvm/typesystem.py b/pypy/translator/jvm/typesystem.py --- a/pypy/translator/jvm/typesystem.py +++ b/pypy/translator/jvm/typesystem.py @@ -955,6 +955,7 @@ PYPYSERIALIZEUINT = Method.s(jPyPy, 'serialize_uint', (jInt,), jString) PYPYSERIALIZEULONG = Method.s(jPyPy, 'serialize_ulonglong', (jLong,),jString) PYPYSERIALIZEVOID = Method.s(jPyPy, 'serialize_void', (), jString) +PYPYSERIALIZEDOUBLE = Method.s(jPyPy, 'serialize_double', (jDouble,), jString) PYPYESCAPEDCHAR = Method.s(jPyPy, 'escaped_char', (jChar,), jString) PYPYESCAPEDUNICHAR = Method.s(jPyPy, 'escaped_unichar', (jChar,), jString) PYPYESCAPEDSTRING = Method.s(jPyPy, 'escaped_string', (jString,), jString) diff --git a/pypy/translator/oosupport/test_template/cast.py b/pypy/translator/oosupport/test_template/cast.py --- a/pypy/translator/oosupport/test_template/cast.py +++ 
b/pypy/translator/oosupport/test_template/cast.py @@ -13,6 +13,9 @@ def to_longlong(x): return r_longlong(x) +def to_ulonglong(x): + return r_ulonglong(x) + def uint_to_int(x): return intmask(x) @@ -56,6 +59,9 @@ def test_unsignedlonglong_to_unsigned4(self): self.check(to_uint, [r_ulonglong(18446744073709551615l)]) # max 64 bit num + def test_unsigned_to_usignedlonglong(self): + self.check(to_ulonglong, [r_uint(42)]) + def test_uint_to_int(self): self.check(uint_to_int, [r_uint(sys.maxint+1)]) From noreply at buildbot.pypy.org Wed Jan 18 01:22:48 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 18 Jan 2012 01:22:48 +0100 (CET) Subject: [pypy-commit] pypy matrixmath-dot: passes a test, needs cleanup Message-ID: <20120118002248.C74A6710260@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: matrixmath-dot Changeset: r51432:f62709780578 Date: 2012-01-18 02:22 +0200 http://bitbucket.org/pypy/pypy/changeset/f62709780578/ Log: passes a test, needs cleanup diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -2,7 +2,7 @@ from pypy.rlib import jit from pypy.rlib.objectmodel import instantiate from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ - calculate_slice_strides + calculate_slice_strides, calculate_dot_strides class BaseTransform(object): pass @@ -16,6 +16,11 @@ def __init__(self, res_shape): self.res_shape = res_shape +class DotTransform(BaseTransform): + def __init__(self, res_shape, skip_dims): + self.res_shape = res_shape + self.skip_dims = skip_dims + class BaseIterator(object): def next(self, shapelen): raise NotImplementedError @@ -85,6 +90,10 @@ self.strides, self.backstrides, t.chunks) return ViewIterator(r[1], r[2], r[3], r[0]) + elif isinstance(t, DotTransform): + r = calculate_dot_strides(self.strides, self.backstrides, + t.res_shape, t.skip_dims) + return ViewIterator(self.offset, r[0], r[1], 
t.res_shape) @jit.unroll_safe def next(self, shapelen): @@ -130,6 +139,7 @@ def transform(self, arr, t): pass + class AxisIterator(BaseIterator): def __init__(self, start, dim, shape, strides, backstrides): self.res_shape = shape[:] diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -3,13 +3,14 @@ from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature -from pypy.module.micronumpy.strides import calculate_slice_strides +from pypy.module.micronumpy.strides import calculate_slice_strides,\ + calculate_dot_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator + SkipLastAxisIterator, ViewIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -211,6 +212,28 @@ n_old_elems_to_use *= old_shape[oldI] return new_strides +def match_dot_shapes(space, self, other): + my_critical_dim_size = self.shape[-1] + other_critical_dim_size = other.shape[0] + other_critical_dim = 0 + other_critical_dim_stride = other.strides[0] + out_shape = [] + if len(other.shape) > 1: + other_critical_dim = len(other.shape) - 2 + other_critical_dim_size = other.shape[other_critical_dim] + other_critical_dim_stride = other.strides[other_critical_dim] + assert other_critical_dim >= 0 + out_shape += self.shape[:-1] + \ + other.shape[0:other_critical_dim] + \ + other.shape[other_critical_dim + 1:] + elif len(other.shape) > 0: + #dot does not reduce for scalars + out_shape += self.shape[:-1] + if my_critical_dim_size != other_critical_dim_size: + raise 
OperationError(space.w_ValueError, space.wrap( + "objects are not aligned")) + return out_shape, other_critical_dim + class BaseArray(Wrappable): _attrs_ = ["invalidates", "shape", 'size'] @@ -384,70 +407,62 @@ the second-to-last of `b`:: dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])''' - w_other = convert_to_array(space, w_other) - if isinstance(w_other, Scalar): - return self.descr_mul(space, w_other) - elif len(self.shape) < 2 and len(w_other.shape) < 2: - w_res = self.descr_mul(space, w_other) + other = convert_to_array(space, w_other) + if isinstance(other, Scalar): + return self.descr_mul(space, other) + elif len(self.shape) < 2 and len(other.shape) < 2: + w_res = self.descr_mul(space, other) assert isinstance(w_res, BaseArray) return w_res.descr_sum(space, space.wrap(-1)) dtype = interp_ufuncs.find_binop_result_dtype(space, - self.find_dtype(), w_other.find_dtype()) - if self.size < 1 and w_other.size < 1: + self.find_dtype(), other.find_dtype()) + if self.size < 1 and other.size < 1: #numpy compatability return scalar_w(space, dtype, space.wrap(0)) #Do the dims match? 
- my_critical_dim_size = self.shape[-1] - other_critical_dim_size = w_other.shape[0] - other_critical_dim = 0 - other_critical_dim_stride = w_other.strides[0] - out_shape = [] - if len(w_other.shape) > 1: - other_critical_dim = len(w_other.shape) - 2 - other_critical_dim_size = w_other.shape[other_critical_dim] - other_critical_dim_stride = w_other.strides[other_critical_dim] - assert other_critical_dim >= 0 - out_shape += self.shape[:-1] + \ - w_other.shape[0:other_critical_dim] + \ - w_other.shape[other_critical_dim + 1:] - elif len(w_other.shape) > 0: - #dot does not reduce for scalars - out_shape += self.shape[:-1] - if my_critical_dim_size != other_critical_dim_size: - raise OperationError(space.w_ValueError, space.wrap( - "objects are not aligned")) + out_shape, other_critical_dim = match_dot_shapes(space, self, other) out_size = 1 - for os in out_shape: - out_size *= os - out_ndims = len(out_shape) - # TODO: what should the order be? C or F? - arr = W_NDimArray(out_size, out_shape, dtype=dtype) - # TODO: this is all a bogus mess of previous work, - # rework within the context of transformations - ''' - out_iter = ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape) - # TODO: invalidate self, w_other with arr ? 
- while not out_iter.done(): - my_index = self.start - other_index = w_other.start - i = 0 - while i < len(self.shape) - 1: - my_index += out_iter.indices[i] * self.strides[i] - i += 1 - for j in range(len(w_other.shape) - 2): - other_index += out_iter.indices[i] * w_other.strides[j] - other_index += out_iter.indices[-1] * w_other.strides[-1] - w_ssd = space.newlist([space.wrap(my_index), - space.wrap(len(self.shape) - 1)]) - w_osd = space.newlist([space.wrap(other_index), - space.wrap(other_critical_dim)]) - w_res = self.descr_mul(space, w_other) - assert isinstance(w_res, BaseArray) - value = w_res.descr_sum(space) - arr.setitem(out_iter.get_offset(), value) - out_iter = out_iter.next(out_ndims) - ''' - return arr + for o in out_shape: + out_size *= o + result = W_NDimArray(out_size, out_shape, dtype) + # given a.shape == [3, 5, 7], + # b.shape == [2, 7, 4] + # result.shape == [3, 5, 2, 4] + # all iterators shapes should be [3, 5, 2, 7, 4] + # result should skip dims 3 which is results.ndims - 1 + # a should skip 2, 4 which is a.ndims-1 + range(b.ndims) + # except where it==(b.ndims-2) + # b should skip 0, 1 + mul = interp_ufuncs.get(space).multiply.func + add = interp_ufuncs.get(space).add.func + broadcast_shape = self.shape[:-1] + other.shape + #Aww, cmon, this is the product of a warped mind. + left_skip = [len(self.shape) - 1 + i for i in range(len(other.shape)) if i != other_critical_dim] + right_skip = range(len(self.shape) - 1) + arr = DotArray(mul, 'DotName', out_shape, dtype, self, other, + left_skip, right_skip) + arr.broadcast_shape = broadcast_shape + arr.result_skip = [len(out_shape) - 1] + #Make this lazy someday... 
+ sig = signature.find_sig(signature.DotSignature(mul, 'dot', dtype, + self.create_sig(), other.create_sig()), arr) + assert isinstance(sig, signature.DotSignature) + self.do_dot_loop(sig, result, arr, add) + return result + + def do_dot_loop(self, sig, result, arr, add): + frame = sig.create_frame(arr) + shapelen = len(arr.broadcast_shape) + _r = calculate_dot_strides(result.strides, result.backstrides, + arr.broadcast_shape, arr.result_skip) + ri = ViewIterator(0, _r[0], _r[1], arr.broadcast_shape) + while not frame.done(): + v = sig.eval(frame, arr).convert_to(sig.calc_dtype) + z = result.getitem(ri.offset) + value = add(sig.calc_dtype, v, result.getitem(ri.offset)) + result.setitem(ri.offset, value) + frame.next(shapelen) + ri = ri.next(shapelen) def get_concrete(self): raise NotImplementedError @@ -919,6 +934,23 @@ left, right) self.dim = dim +class DotArray(Call2): + """ NOTE: this is only used as a container, you should never + encounter such things in the wild. Remove this comment + when we'll make Dot lazy + """ + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, shape, dtype, left, right, left_skip, right_skip): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.left_skip = left_skip + self.right_skip = right_skip + def create_sig(self): + #if self.forced_result is not None: + # return self.forced_result.create_sig() + assert NotImplementedError + class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not """ diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -192,17 +192,17 @@ sig=sig, identity=identity, shapelen=shapelen, arr=arr) - iter = frame.get_final_iter() + iterator = frame.get_final_iter() v = sig.eval(frame, arr).convert_to(sig.calc_dtype) - if iter.first_line: + if iterator.first_line: if identity is not None: value = 
self.func(sig.calc_dtype, identity, v) else: value = v else: - cur = arr.left.getitem(iter.offset) + cur = arr.left.getitem(iterator.offset) value = self.func(sig.calc_dtype, cur, v) - arr.left.setitem(iter.offset, value) + arr.left.setitem(iterator.offset, value) frame.next(shapelen) def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -2,7 +2,7 @@ from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ ConstantIterator, AxisIterator, ViewTransform,\ - BroadcastTransform + BroadcastTransform, DotTransform from pypy.rlib.jit import hint, unroll_safe, promote """ Signature specifies both the numpy expression that has been constructed @@ -331,7 +331,6 @@ assert isinstance(arr, Call2) lhs = self.left.eval(frame, arr.left).convert_to(self.calc_dtype) rhs = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) - return self.binfunc(self.calc_dtype, lhs, rhs) def debug_repr(self): @@ -450,3 +449,21 @@ def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + +class DotSignature(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import DotArray + + assert isinstance(arr, DotArray) + rtransforms = transforms + [DotTransform(arr.broadcast_shape, arr.right_skip)] + ltransforms = transforms + [DotTransform(arr.broadcast_shape, arr.left_skip)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + + def debug_repr(self): + return 'DotSig(%s, %s %s)' % (self.name, 
self.right.debug_repr(), + self.left.debug_repr()) diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -37,3 +37,17 @@ rstrides = [0] * (len(res_shape) - len(orig_shape)) + rstrides rbackstrides = [0] * (len(res_shape) - len(orig_shape)) + rbackstrides return rstrides, rbackstrides + +def calculate_dot_strides(strides, backstrides, res_shape, skip_dims): + rstrides = [] + rbackstrides = [] + j=0 + for i in range(len(res_shape)): + if i in skip_dims: + rstrides.append(0) + rbackstrides.append(0) + else: + rstrides.append(strides[j]) + rbackstrides.append(backstrides[j]) + j += 1 + return rstrides, rbackstrides diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -867,7 +867,7 @@ assert c.any() == False def test_dot(self): - from _numpypy import array, dot + from _numpypy import array, dot, arange a = array(range(5)) assert a.dot(a) == 30.0 @@ -876,13 +876,12 @@ assert dot(range(5), range(5)) == 30 assert (dot(5, [1, 2, 3]) == [5, 10, 15]).all() - a = array([range(4), range(4, 8), range(8, 12)]) - b = array([range(3), range(3, 6), range(6, 9), range(9, 12)]) + a = arange(12).reshape(3, 4) + b = arange(12).reshape(4, 3) c = a.dot(b) assert (c == [[ 42, 48, 54], [114, 136, 158], [186, 224, 262]]).all() - a = array([[range(4), range(4, 8), range(8, 12)], - [range(12, 16), range(16, 20), range(20, 24)]]) + a = arange(24).reshape(2, 3, 4) raises(ValueError, "a.dot(a)") b = a[0, :, :].T #Superfluous shape test makes the intention of the test clearer From noreply at buildbot.pypy.org Wed Jan 18 09:50:45 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 18 Jan 2012 09:50:45 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: add a new point Message-ID: 
<20120118085045.D4ECE820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: extradoc Changeset: r4044:f07d98b31047 Date: 2012-01-18 09:44 +0100 http://bitbucket.org/pypy/extradoc/changeset/f07d98b31047/ Log: add a new point diff --git a/blog/draft/pypy-2011.rst b/blog/draft/pypy-2011.rst --- a/blog/draft/pypy-2011.rst +++ b/blog/draft/pypy-2011.rst @@ -24,6 +24,10 @@ * We made PyPy over 2x faster (on a select set of benchmarks), `1.4 and nightly`_ comparison +* We got 71 new people who made at least one commit in the PyPy repository + (XXX: this is what I got by doing wc on contributors.txt at 2011-01-01 and + the current contributors.rst, but we still need to update the latter) + * We made PyPy 17x more compatible (of course this is unmeasurable, so why not just claim it) From noreply at buildbot.pypy.org Wed Jan 18 10:46:27 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Wed, 18 Jan 2012 10:46:27 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: Planning for today Message-ID: <20120118094627.E56D0820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: extradoc Changeset: r4045:3fb931e40174 Date: 2012-01-18 10:46 +0100 http://bitbucket.org/pypy/extradoc/changeset/3fb931e40174/ Log: Planning for today diff --git a/sprintinfo/leysin-winter-2012/planning.txt b/sprintinfo/leysin-winter-2012/planning.txt --- a/sprintinfo/leysin-winter-2012/planning.txt +++ b/sprintinfo/leysin-winter-2012/planning.txt @@ -5,23 +5,25 @@ * Antonio Cuni * Armin Rigo * Romain Guillebert +* David Schneider Things we want to do -------------------- -* some skiing: wednesday (last day of sun :-/ ) +* some skiing (anto, armin) * review the JVM backend pull request (DONE) -* py3k (romain, anto later) +* py3k (romain) * ffistruct * Cython backend +* Debug the ARM backend (bivab) + * STM - refactored the RPython API: mostly done, must adapt targetdemo*.py - (anto, arigo) - - start work on the GC (arigo later?)
+ - start work on the GC * concurrent-marksweep GC From noreply at buildbot.pypy.org Wed Jan 18 11:04:56 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 18 Jan 2012 11:04:56 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: refactored allocation of boxes Message-ID: <20120118100456.A5F88820D8@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51433:4c659707392a Date: 2012-01-18 11:04 +0100 http://bitbucket.org/pypy/pypy/changeset/4c659707392a/ Log: refactored allocation of boxes diff --git a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py b/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py @@ -1,5 +1,12 @@ from pypy.jit.metainterp.history import ConstInt from pypy.rlib.objectmodel import we_are_translated +from pypy.jit.metainterp.history import Box + + +def check_imm_box(arg, size=0xFF, allow_zero=True): + if isinstance(arg, ConstInt): + return _check_imm_arg(arg.getint(), size, allow_zero) + return False def _check_imm_arg(arg, size=0xFF, allow_zero=True): #assert not isinstance(arg, ConstInt) @@ -19,34 +26,36 @@ arg0, arg1 = boxes imm_a0 = _check_imm_arg(arg0) imm_a1 = _check_imm_arg(arg1) - l0, box = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes) - boxes.append(box) + l0 = self._ensure_value_is_boxed(arg0, forbidden_vars=boxes) + if imm_a1 and not imm_a0: l1 = self.make_sure_var_in_reg(arg1, boxes) else: - l1, box = self._ensure_value_is_boxed(arg1, forbidden_vars=boxes) - boxes.append(box) - self.possibly_free_vars(boxes) + l1 = self._ensure_value_is_boxed(arg1, forbidden_vars=boxes) + + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) + #self.possibly_free_var(op.result) return [l0, l1, res] return f def prepare_unary_cmp(): def f(self, op): a0 = op.getarg(0) - reg, box = self._ensure_value_is_boxed(a0) - res = 
self.force_allocate_reg(op.result, [box]) - self.possibly_free_vars([a0, box, op.result]) + assert isinstance(a0, Box) + reg = self.make_sure_var_in_reg(a0) + self.possibly_free_vars_for_op(op) + res = self.force_allocate_reg(op.result, [a0]) return [reg, res] return f def prepare_unary_int_op(): def f(self, op): - l0, box = self._ensure_value_is_boxed(op.getarg(0)) - self.possibly_free_var(box) + l0 = self._ensure_value_is_boxed(op.getarg(0)) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result) - self.possibly_free_var(op.result) return [l0, res] return f @@ -57,20 +66,17 @@ imm_b0 = _check_imm_arg(b0) imm_b1 = _check_imm_arg(b1) if not imm_b0 and imm_b1: - l0, box = self._ensure_value_is_boxed(b0) - l1 = self.make_sure_var_in_reg(b1, [b0]) - boxes.append(box) + l0 = self._ensure_value_is_boxed(b0) + l1 = self.make_sure_var_in_reg(b1, boxes) elif imm_b0 and not imm_b1: l0 = self.make_sure_var_in_reg(b0) - l1, box = self._ensure_value_is_boxed(b1, [b0]) - boxes.append(box) + l1 = self._ensure_value_is_boxed(b1, boxes) else: - l0, box = self._ensure_value_is_boxed(b0) - boxes.append(box) - l1, box = self._ensure_value_is_boxed(b1, [box]) - boxes.append(box) + l0 = self._ensure_value_is_boxed(b0) + l1 = self._ensure_value_is_boxed(b1, boxes) locs = [l0, l1] - self.possibly_free_vars(boxes) + self.possibly_free_vars_for_op(op) + self.free_temp_vars() res = self.force_allocate_reg(op.result) return locs + [res] return f @@ -80,12 +86,9 @@ boxes = list(op.getarglist()) b0, b1 = boxes - reg1, box = self._ensure_value_is_boxed(b0, forbidden_vars=boxes) - boxes.append(box) - reg2, box = self._ensure_value_is_boxed(b1, forbidden_vars=boxes) - boxes.append(box) + reg1 = self._ensure_value_is_boxed(b0, forbidden_vars=boxes) + reg2 = self._ensure_value_is_boxed(b1, forbidden_vars=boxes) - self.possibly_free_vars(boxes) self.possibly_free_vars_for_op(op) res = self.force_allocate_reg(op.result) self.possibly_free_var(op.result) 
diff --git a/pypy/jit/backend/ppc/ppcgen/opassembler.py b/pypy/jit/backend/ppc/ppcgen/opassembler.py --- a/pypy/jit/backend/ppc/ppcgen/opassembler.py +++ b/pypy/jit/backend/ppc/ppcgen/opassembler.py @@ -6,7 +6,7 @@ GPR_SAVE_AREA, BACKCHAIN_SIZE, MAX_REG_PARAMS) -from pypy.jit.metainterp.history import (JitCellToken, TargetToken, +from pypy.jit.metainterp.history import (JitCellToken, TargetToken, Box, AbstractFailDescr, FLOAT, INT, REF) from pypy.rlib.objectmodel import we_are_translated from pypy.jit.backend.ppc.ppcgen.helper.assembler import (count_reg_args, @@ -704,11 +704,9 @@ def _emit_copystrcontent(self, op, regalloc, is_unicode): # compute the source address - args = list(op.getarglist()) - base_loc, box = regalloc._ensure_value_is_boxed(args[0], args) - args.append(box) - ofs_loc, box = regalloc._ensure_value_is_boxed(args[2], args) - args.append(box) + args = op.getarglist() + base_loc = regalloc._ensure_value_is_boxed(args[0], args) + ofs_loc = regalloc._ensure_value_is_boxed(args[2], args) assert args[0] is not args[1] # forbidden case of aliasing regalloc.possibly_free_var(args[0]) if args[3] is not args[2] is not args[4]: # MESS MESS MESS: don't free @@ -724,41 +722,44 @@ dstaddr_box = TempPtr() dstaddr_loc = regalloc.force_allocate_reg(dstaddr_box) forbidden_vars.append(dstaddr_box) - base_loc, box = regalloc._ensure_value_is_boxed(args[1], forbidden_vars) - args.append(box) - forbidden_vars.append(box) - ofs_loc, box = regalloc._ensure_value_is_boxed(args[3], forbidden_vars) - args.append(box) + base_loc = regalloc._ensure_value_is_boxed(args[1], forbidden_vars) + ofs_loc = regalloc._ensure_value_is_boxed(args[3], forbidden_vars) assert base_loc.is_reg() assert ofs_loc.is_reg() regalloc.possibly_free_var(args[1]) if args[3] is not args[4]: # more of the MESS described above regalloc.possibly_free_var(args[3]) + regalloc.free_temp_vars() self._gen_address_inside_string(base_loc, ofs_loc, dstaddr_loc, is_unicode=is_unicode) # compute the length in 
bytes forbidden_vars = [srcaddr_box, dstaddr_box] - length_loc, length_box = regalloc._ensure_value_is_boxed(args[4], forbidden_vars) - args.append(length_box) + if isinstance(args[4], Box): + length_box = args[4] + length_loc = regalloc.make_sure_var_in_reg(args[4], forbidden_vars) + else: + length_box = TempInt() + length_loc = regalloc.force_allocate_reg(length_box, forbidden_vars) + imm = regalloc.convert_to_imm(args[4]) + self.load(length_loc, imm) if is_unicode: - forbidden_vars = [srcaddr_box, dstaddr_box] bytes_box = TempPtr() bytes_loc = regalloc.force_allocate_reg(bytes_box, forbidden_vars) scale = self._get_unicode_item_scale() assert length_loc.is_reg() - self.mc.li(r.r0.value, 1< Author: Romain Guillebert Branch: py3k Changeset: r51434:68762432f02d Date: 2012-01-18 12:07 +0100 http://bitbucket.org/pypy/pypy/changeset/68762432f02d/ Log: Add a codegen test for py3k's new tuple unpacking diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -798,6 +798,13 @@ yield self.st, test, "f()", 42 # This line is needed for py.code to find the source. 
+ def test_tuple_unpacking(self): + func = """def f(): + (a, *b, c) = 1, 2, 3, 4, 5 + return a, b, c + """ + yield self.st, func, "f()", (1, [2, 3, 4], 5) + class AppTestCompiler: From noreply at buildbot.pypy.org Wed Jan 18 12:35:38 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 12:35:38 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge default Message-ID: <20120118113538.02036820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51435:280cca48c156 Date: 2012-01-18 09:57 +0100 http://bitbucket.org/pypy/pypy/changeset/280cca48c156/ Log: merge default diff too long, truncating to 10000 out of 10623 lines diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -3,6 +3,9 @@ *.sw[po] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. -PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at @@ -37,43 +37,47 @@ Armin Rigo Maciej Fijalkowski Carl Friedrich Bolz + Amaury Forgeot d'Arc Antonio Cuni - Amaury Forgeot d'Arc Samuele Pedroni Michael Hudson Holger Krekel - Benjamin Peterson + Alex Gaynor Christian Tismer Hakan Ardo - Alex Gaynor + Benjamin Peterson + David Schneider Eric van Riet Paap Anders Chrigstrom - David Schneider Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer + Lukas Diekmann Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis + Wim Lavrijsen + Matti Picus Jason Creighton Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij - Wim Lavrijsen Stephan Diehl Michael Foord 
Stefan Schwarzer @@ -84,34 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton - Justin Peel + David Edelsohn Jean-Paul Calderone John Witulski - Lukas Diekmann + Timo Paulssen holger krekel - Wim Lavrijsen Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp - Ronny Pfannschmidt Boris Feigin Oscar Nierstrasz David Malcolm Eugene Oden Henry Mason - Sven Hager + Jeff Terrace Lukas Renggli - Ilya Osadchiy Guenter Jantzen + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -142,7 +148,6 @@ Anders Qvist Beatrice During Alexander Sedov - Timo Paulssen Corbin Simpson Vincent Legoll Romain Guillebert @@ -165,9 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki - Philip Jenvey + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -179,17 +185,17 @@ Kristjan Valur Jonsson Bobby Impollonia Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson Floris Bruynooghe Jacek Generowicz Dan Colish Zooko Wilcox-O Hearn - Dan Villiom Podlaski Christiansen - Anders Hammarquist + Dan Loewenherz Chris Lambacher Dinu Gherman - Dan Colish Brett Cannon Daniel Neuhäuser Michael Chermside diff --git a/lib_pypy/_ctypes/structure.py b/lib_pypy/_ctypes/structure.py --- a/lib_pypy/_ctypes/structure.py +++ b/lib_pypy/_ctypes/structure.py @@ -73,8 +73,12 @@ class Field(object): def __init__(self, name, offset, size, ctype, num, is_bitfield): - for k in ('name', 'offset', 'size', 'ctype', 'num', 'is_bitfield'): - self.__dict__[k] = locals()[k] + self.__dict__['name'] = name + self.__dict__['offset'] = offset + self.__dict__['size'] = size + self.__dict__['ctype'] = ctype + self.__dict__['num'] = num + self.__dict__['is_bitfield'] = is_bitfield def __setattr__(self, name, value): raise 
AttributeError(name) diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/__init__.py @@ -0,0 +1,2 @@ +from _numpypy import * +from .core import * diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/__init__.py @@ -0,0 +1,1 @@ +from .fromnumeric import * diff --git a/lib_pypy/numpypy/core/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -0,0 +1,2403 @@ +###################################################################### +# This is a copy of numpy/core/fromnumeric.py modified for numpypy +###################################################################### +# Each name in __all__ was a function in 'numeric' that is now +# a method in 'numpy'. +# When the corresponding method is added to numpypy BaseArray +# each function should be added as a module function +# at the applevel +# This can be as simple as doing the following +# +# def func(a, ...): +# if not hasattr(a, 'func') +# a = numpypy.array(a) +# return a.func(...) +# +###################################################################### + +import numpypy + +# Module containing non-deprecated functions borrowed from Numeric. +__docformat__ = "restructuredtext en" + +# functions that are now methods +__all__ = ['take', 'reshape', 'choose', 'repeat', 'put', + 'swapaxes', 'transpose', 'sort', 'argsort', 'argmax', 'argmin', + 'searchsorted', 'alen', + 'resize', 'diagonal', 'trace', 'ravel', 'nonzero', 'shape', + 'compress', 'clip', 'sum', 'product', 'prod', 'sometrue', 'alltrue', + 'any', 'all', 'cumsum', 'cumproduct', 'cumprod', 'ptp', 'ndim', + 'rank', 'size', 'around', 'round_', 'mean', 'std', 'var', 'squeeze', + 'amax', 'amin', + ] + +def take(a, indices, axis=None, out=None, mode='raise'): + """ + Take elements from an array along an axis. 
+ + This function does the same thing as "fancy" indexing (indexing arrays + using arrays); however, it can be easier to use if you need elements + along a given axis. + + Parameters + ---------- + a : array_like + The source array. + indices : array_like + The indices of the values to extract. + axis : int, optional + The axis over which to select values. By default, the flattened + input array is used. + out : ndarray, optional + If provided, the result will be placed in this array. It should + be of the appropriate shape and dtype. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + Returns + ------- + subarray : ndarray + The returned array has the same type as `a`. + + See Also + -------- + ndarray.take : equivalent method + + Examples + -------- + >>> a = [4, 3, 5, 7, 6, 8] + >>> indices = [0, 1, 4] + >>> np.take(a, indices) + array([4, 3, 6]) + + In this example if `a` is an ndarray, "fancy" indexing can be used. + + >>> a = np.array(a) + >>> a[indices] + array([4, 3, 6]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +# not deprecated --- copy if necessary, view otherwise +def reshape(a, newshape, order='C'): + """ + Gives a new shape to an array without changing its data. + + Parameters + ---------- + a : array_like + Array to be reshaped. + newshape : int or tuple of ints + The new shape should be compatible with the original shape. If + an integer, then the result will be a 1-D array of that length. + One shape dimension can be -1. In this case, the value is inferred + from the length of the array and remaining dimensions. 
+ order : {'C', 'F', 'A'}, optional + Determines whether the array data should be viewed as in C + (row-major) order, FORTRAN (column-major) order, or the C/FORTRAN + order should be preserved. + + Returns + ------- + reshaped_array : ndarray + This will be a new view object if possible; otherwise, it will + be a copy. + + + See Also + -------- + ndarray.reshape : Equivalent method. + + Notes + ----- + + It is not always possible to change the shape of an array without + copying the data. If you want an error to be raised if the data is copied, + you should assign the new shape to the shape attribute of the array:: + + >>> a = np.zeros((10, 2)) + # A transpose makes the array non-contiguous + >>> b = a.T + # Taking a view makes it possible to modify the shape without modifying the + # initial object. + >>> c = b.view() + >>> c.shape = (20) + AttributeError: incompatible shape for a non-contiguous array + + + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> np.reshape(a, 6) + array([1, 2, 3, 4, 5, 6]) + >>> np.reshape(a, 6, order='F') + array([1, 4, 2, 5, 3, 6]) + + >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 + array([[1, 2], + [3, 4], + [5, 6]]) + + """ + if not hasattr(a, 'reshape'): + a = numpypy.array(a) + return a.reshape(newshape) + + +def choose(a, choices, out=None, mode='raise'): + """ + Construct an array from an index array and a set of arrays to choose from. + + First of all, if confused or uncertain, definitely look at the Examples - + in its full generality, this function is less simple than it might + seem from the following code description (below ndi = + `numpy.lib.index_tricks`): + + ``np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])``. + + But this omits some subtleties.
Here is a fully general summary: + + Given an "index" array (`a`) of integers and a sequence of `n` arrays + (`choices`), `a` and each choice array are first broadcast, as necessary, + to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = + 0,...,n-1* we have that, necessarily, ``Ba.shape == Bchoices[i].shape`` + for each `i`. Then, a new array with shape ``Ba.shape`` is created as + follows: + + * if ``mode=raise`` (the default), then, first of all, each element of + `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that + `i` (in that range) is the value at the `(j0, j1, ..., jm)` position + in `Ba` - then the value at the same position in the new array is the + value in `Bchoices[i]` at that same position; + + * if ``mode=wrap``, values in `a` (and thus `Ba`) may be any (signed) + integer; modular arithmetic is used to map integers outside the range + `[0, n-1]` back into that range; and then the new array is constructed + as above; + + * if ``mode=clip``, values in `a` (and thus `Ba`) may be any (signed) + integer; negative integers are mapped to 0; values greater than `n-1` + are mapped to `n-1`; and then the new array is constructed as above. + + Parameters + ---------- + a : int array + This array must contain integers in `[0, n-1]`, where `n` is the number + of choices, unless ``mode=wrap`` or ``mode=clip``, in which cases any + integers are permissible. + choices : sequence of arrays + Choice arrays. `a` and all of the choices must be broadcastable to the + same shape. If `choices` is itself an array (not recommended), then + its outermost dimension (i.e., the one corresponding to + ``choices.shape[0]``) is taken as defining the "sequence". + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. 
+ mode : {'raise' (default), 'wrap', 'clip'}, optional + Specifies how indices outside `[0, n-1]` will be treated: + + * 'raise' : an exception is raised + * 'wrap' : value becomes value mod `n` + * 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1 + + Returns + ------- + merged_array : array + The merged result. + + Raises + ------ + ValueError: shape mismatch + If `a` and each choice array are not all broadcastable to the same + shape. + + See Also + -------- + ndarray.choose : equivalent method + + Notes + ----- + To reduce the chance of misinterpretation, even though the following + "abuse" is nominally supported, `choices` should neither be, nor be + thought of as, a single array, i.e., the outermost sequence-like container + should be either a list or a tuple. + + Examples + -------- + + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + ... [20, 21, 22, 23], [30, 31, 32, 33]] + >>> np.choose([2, 3, 1, 0], choices + ... # the first element of the result will be the first element of the + ... # third (2+1) "array" in choices, namely, 20; the second element + ... # will be the second element of the fourth (3+1) choice array, i.e., + ... # 31, etc. + ... 
) + array([20, 31, 12, 3]) + >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) + array([20, 31, 12, 3]) + >>> # because there are 4 choice arrays + >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) + array([20, 1, 12, 3]) + >>> # i.e., 0 + + A couple examples illustrating how choose broadcasts: + + >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] + >>> choices = [-10, 10] + >>> np.choose(a, choices) + array([[ 10, -10, 10], + [-10, 10, -10], + [ 10, -10, 10]]) + + >>> # With thanks to Anne Archibald + >>> a = np.array([0, 1]).reshape((2,1,1)) + >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) + >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) + >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 + array([[[ 1, 1, 1, 1, 1], + [ 2, 2, 2, 2, 2], + [ 3, 3, 3, 3, 3]], + [[-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5], + [-1, -2, -3, -4, -5]]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def repeat(a, repeats, axis=None): + """ + Repeat elements of an array. + + Parameters + ---------- + a : array_like + Input array. + repeats : {int, array of ints} + The number of repetitions for each element. `repeats` is broadcasted + to fit the shape of the given axis. + axis : int, optional + The axis along which to repeat values. By default, use the + flattened input array, and return a flat output array. + + Returns + ------- + repeated_array : ndarray + Output array which has the same shape as `a`, except along + the given axis. + + See Also + -------- + tile : Tile an array. 
+ + Examples + -------- + >>> x = np.array([[1,2],[3,4]]) + >>> np.repeat(x, 2) + array([1, 1, 2, 2, 3, 3, 4, 4]) + >>> np.repeat(x, 3, axis=1) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + >>> np.repeat(x, [1, 2], axis=0) + array([[1, 2], + [3, 4], + [3, 4]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def put(a, ind, v, mode='raise'): + """ + Replaces specified elements of an array with given values. + + The indexing works on the flattened target array. `put` is roughly + equivalent to: + + :: + + a.flat[ind] = v + + Parameters + ---------- + a : ndarray + Target array. + ind : array_like + Target indices, interpreted as integers. + v : array_like + Values to place in `a` at target indices. If `v` is shorter than + `ind` it will be repeated as necessary. + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + + * 'raise' -- raise an error (default) + * 'wrap' -- wrap around + * 'clip' -- clip to the range + + 'clip' mode means that all indices that are too large are replaced + by the index that addresses the last element along that axis. Note + that this disables indexing with negative numbers. + + See Also + -------- + putmask, place + + Examples + -------- + >>> a = np.arange(5) + >>> np.put(a, [0, 2], [-44, -55]) + >>> a + array([-44, 1, -55, 3, 4]) + + >>> a = np.arange(5) + >>> np.put(a, 22, -5, mode='clip') + >>> a + array([ 0, 1, 2, 3, -5]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def swapaxes(a, axis1, axis2): + """ + Interchange two axes of an array. + + Parameters + ---------- + a : array_like + Input array. + axis1 : int + First axis. + axis2 : int + Second axis. + + Returns + ------- + a_swapped : ndarray + If `a` is an ndarray, then a view of `a` is returned; otherwise + a new array is created. 
+ + Examples + -------- + >>> x = np.array([[1,2,3]]) + >>> np.swapaxes(x,0,1) + array([[1], + [2], + [3]]) + + >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + + >>> np.swapaxes(x,0,2) + array([[[0, 4], + [2, 6]], + [[1, 5], + [3, 7]]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def transpose(a, axes=None): + """ + Permute the dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. + axes : list of ints, optional + By default, reverse the dimensions, otherwise permute the axes + according to the values given. + + Returns + ------- + p : ndarray + `a` with its axes permuted. A view is returned whenever + possible. + + See Also + -------- + rollaxis + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.transpose(x) + array([[0, 2], + [1, 3]]) + + >>> x = np.ones((1, 2, 3)) + >>> np.transpose(x, (1, 0, 2)).shape + (2, 1, 3) + + """ + if axes is not None: + raise NotImplementedError('No "axes" arg yet.') + if not hasattr(a, 'T'): + a = numpypy.array(a) + return a.T + +def sort(a, axis=-1, kind='quicksort', order=None): + """ + Return a sorted copy of an array. + + Parameters + ---------- + a : array_like + Array to be sorted. + axis : int or None, optional + Axis along which to sort. If None, the array is flattened before + sorting. The default is -1, which sorts along the last axis. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. Default is 'quicksort'. + order : list, optional + When `a` is a structured array, this argument specifies which fields + to compare first, second, and so on. This list does not need to + include all of the fields. + + Returns + ------- + sorted_array : ndarray + Array of the same type and shape as `a`. + + See Also + -------- + ndarray.sort : Method to sort an array in-place. + argsort : Indirect sort. 
+ lexsort : Indirect stable sort on multiple keys. + searchsorted : Find elements in a sorted array. + + Notes + ----- + The various sorting algorithms are characterized by their average speed, + worst case performance, work space size, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + =========== ======= ============= ============ ======= + kind speed worst case work space stable + =========== ======= ============= ============ ======= + 'quicksort' 1 O(n^2) 0 no + 'mergesort' 2 O(n*log(n)) ~n/2 yes + 'heapsort' 3 O(n*log(n)) 0 no + =========== ======= ============= ============ ======= + + All the sort algorithms make temporary copies of the data when + sorting along any but the last axis. Consequently, sorting along + the last axis is faster and uses less space than sorting along + any other axis. + + The sort order for complex numbers is lexicographic. If both the real + and imaginary parts are non-nan then the order is determined by the + real parts except when they are equal, in which case the order is + determined by the imaginary parts. + + Previous to numpy 1.4.0 sorting real and complex arrays containing nan + values led to undefined behaviour. In numpy versions >= 1.4.0 nan + values are sorted to the end. The extended sort order is: + + * Real: [R, nan] + * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] + + where R is a non-nan real value. Complex values with the same nan + placements are sorted according to the non-nan part if it exists. + Non-nan values are sorted as before. 
+ + Examples + -------- + >>> a = np.array([[1,4],[3,1]]) + >>> np.sort(a) # sort along the last axis + array([[1, 4], + [1, 3]]) + >>> np.sort(a, axis=None) # sort the flattened array + array([1, 1, 3, 4]) + >>> np.sort(a, axis=0) # sort along the first axis + array([[1, 1], + [3, 4]]) + + Use the `order` keyword to specify a field to use when sorting a + structured array: + + >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] + >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), + ... ('Galahad', 1.7, 38)] + >>> a = np.array(values, dtype=dtype) # create a structured array + >>> np.sort(a, order='height') # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), + ('Lancelot', 1.8999999999999999, 38)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + >>> np.sort(a, order=['age', 'height']) # doctest: +SKIP + array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), + ('Arthur', 1.8, 41)], + dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def argsort(a, axis=-1, kind='quicksort', order=None): + """ + Returns the indices that would sort an array. + + Perform an indirect sort along the given axis using the algorithm + specified by the `kind` keyword. It returns an array of indices of the + same shape as `a` that index data along the given axis in sorted order. + + Parameters + ---------- + a : array_like + Array to sort. + axis : int or None, optional + Axis along which to sort. The default is -1 (the last axis). If None, + the flattened array is used. + kind : {'quicksort', 'mergesort', 'heapsort'}, optional + Sorting algorithm. + order : list, optional + When `a` is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. + + Returns + ------- + index_array : ndarray, int + Array of indices that sort `a` along the specified axis. + In other words, ``a[index_array]`` yields a sorted `a`. + + See Also + -------- + sort : Describes sorting algorithms used. + lexsort : Indirect stable sort with multiple keys. + ndarray.sort : Inplace sort. + + Notes + ----- + See `sort` for notes on the different sorting algorithms. + + Examples + -------- + One dimensional array: + + >>> x = np.array([3, 1, 2]) + >>> np.argsort(x) + array([1, 2, 0]) + + Two-dimensional array: + + >>> x = np.array([[0, 3], [2, 2]]) + >>> x + array([[0, 3], + [2, 2]]) + + >>> np.argsort(x, axis=0) + array([[0, 1], + [1, 0]]) + + >>> np.argsort(x, axis=1) + array([[0, 1], + [0, 1]]) + + Sorting with keys: + + >>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')]) + >>> x + array([(1, 0), (0, 1)], + dtype=[('x', '<i4'), ('y', '<i4')]) + + >>> np.argsort(x, order=('x','y')) + array([1, 0]) + + >>> np.argsort(x, order=('y','x')) + array([0, 1]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def argmax(a, axis=None): + """ + Indices of the maximum values along an axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + By default, the index is into the flattened array, otherwise + along the specified axis. + + Returns + ------- + index_array : ndarray of ints + Array of indices into the array. It has the same shape as `a.shape` + with the dimension along `axis` removed.
+ + See Also + -------- + ndarray.argmax, argmin + amax : The maximum value along a given axis. + unravel_index : Convert a flat index into an index tuple. + + Notes + ----- + In case of multiple occurrences of the maximum values, the indices + corresponding to the first occurrence are returned. + + Examples + -------- + >>> a = np.arange(6).reshape(2,3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> np.argmax(a) + 5 + >>> np.argmax(a, axis=0) + array([1, 1, 1]) + >>> np.argmax(a, axis=1) + array([2, 2]) + + >>> b = np.arange(6) + >>> b[1] = 5 + >>> b + array([0, 5, 2, 3, 4, 5]) + >>> np.argmax(b) # Only the first occurrence is returned. + 1 + + """ + if not hasattr(a, 'argmax'): + a = numpypy.array(a) + return a.argmax() + + +def argmin(a, axis=None): + """ + Return the indices of the minimum values along an axis. + + See Also + -------- + argmax : Similar function. Please refer to `numpy.argmax` for detailed + documentation. + + """ + if not hasattr(a, 'argmin'): + a = numpypy.array(a) + return a.argmin() + + +def searchsorted(a, v, side='left'): + """ + Find indices where elements should be inserted to maintain order. + + Find the indices into a sorted array `a` such that, if the corresponding + elements in `v` were inserted before the indices, the order of `a` would + be preserved. + + Parameters + ---------- + a : 1-D array_like + Input array, sorted in ascending order. + v : array_like + Values to insert into `a`. + side : {'left', 'right'}, optional + If 'left', the index of the first suitable location found is given. If + 'right', return the last such index. If there is no suitable + index, return either 0 or N (where N is the length of `a`). + + Returns + ------- + indices : array of ints + Array of insertion points with the same shape as `v`. + + See Also + -------- + sort : Return a sorted copy of an array. + histogram : Produce histogram from 1-D data. + + Notes + ----- + Binary search is used to find the required insertion points. 
+
+    As of Numpy 1.4.0 `searchsorted` works with real/complex arrays containing
+    `nan` values. The enhanced sort order is documented in `sort`.
+
+    Examples
+    --------
+    >>> np.searchsorted([1,2,3,4,5], 3)
+    2
+    >>> np.searchsorted([1,2,3,4,5], 3, side='right')
+    3
+    >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3])
+    array([0, 5, 1, 2])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def resize(a, new_shape):
+    """
+    Return a new array with the specified shape.
+
+    If the new array is larger than the original array, then the new
+    array is filled with repeated copies of `a`. Note that this behavior
+    is different from a.resize(new_shape) which fills with zeros instead
+    of repeated copies of `a`.
+
+    Parameters
+    ----------
+    a : array_like
+        Array to be resized.
+
+    new_shape : int or tuple of int
+        Shape of resized array.
+
+    Returns
+    -------
+    reshaped_array : ndarray
+        The new array is formed from the data in the old array, repeated
+        if necessary to fill out the required number of elements. The
+        data are repeated in the order that they are stored in memory.
+
+    See Also
+    --------
+    ndarray.resize : resize an array in-place.
+
+    Examples
+    --------
+    >>> a=np.array([[0,1],[2,3]])
+    >>> np.resize(a,(1,4))
+    array([[0, 1, 2, 3]])
+    >>> np.resize(a,(2,4))
+    array([[0, 1, 2, 3],
+           [0, 1, 2, 3]])
+
+    """
+    raise NotImplementedError('Waiting on interp level method')
+
+
+def squeeze(a):
+    """
+    Remove single-dimensional entries from the shape of an array.
+
+    Parameters
+    ----------
+    a : array_like
+        Input data.
+
+    Returns
+    -------
+    squeezed : ndarray
+        The input array, but with all dimensions of length 1
+        removed. Whenever possible, a view on `a` is returned.
+ + Examples + -------- + >>> x = np.array([[[0], [1], [2]]]) + >>> x.shape + (1, 3, 1) + >>> np.squeeze(x).shape + (3,) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def diagonal(a, offset=0, axis1=0, axis2=1): + """ + Return specified diagonals. + + If `a` is 2-D, returns the diagonal of `a` with the given offset, + i.e., the collection of elements of the form ``a[i, i+offset]``. If + `a` has more than two dimensions, then the axes specified by `axis1` + and `axis2` are used to determine the 2-D sub-array whose diagonal is + returned. The shape of the resulting array can be determined by + removing `axis1` and `axis2` and appending an index to the right equal + to the size of the resulting diagonals. + + Parameters + ---------- + a : array_like + Array from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be positive or + negative. Defaults to main diagonal (0). + axis1 : int, optional + Axis to be used as the first axis of the 2-D sub-arrays from which + the diagonals should be taken. Defaults to first axis (0). + axis2 : int, optional + Axis to be used as the second axis of the 2-D sub-arrays from + which the diagonals should be taken. Defaults to second axis (1). + + Returns + ------- + array_of_diagonals : ndarray + If `a` is 2-D, a 1-D array containing the diagonal is returned. + If the dimension of `a` is larger, then an array of diagonals is + returned, "packed" from left-most dimension to right-most (e.g., + if `a` is 3-D, then the diagonals are "packed" along rows). + + Raises + ------ + ValueError + If the dimension of `a` is less than 2. + + See Also + -------- + diag : MATLAB work-a-like for 1-D and 2-D arrays. + diagflat : Create diagonal arrays. + trace : Sum along diagonals. 
+ + Examples + -------- + >>> a = np.arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + A 3-D example: + + >>> a = np.arange(8).reshape(2,2,2); a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0, # Main diagonals of two arrays created by skipping + ... 0, # across the outer(left)-most axis last and + ... 1) # the "middle" (row) axis first. + array([[0, 6], + [1, 7]]) + + The sub-arrays whose main diagonals we just obtained; note that each + corresponds to fixing the right-most (column) axis, and that the + diagonals are "packed" in rows. + + >>> a[:,:,0] # main diagonal is [0 6] + array([[0, 2], + [4, 6]]) + >>> a[:,:,1] # main diagonal is [1 7] + array([[1, 3], + [5, 7]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): + """ + Return the sum along diagonals of the array. + + If `a` is 2-D, the sum along its diagonal with the given offset + is returned, i.e., the sum of elements ``a[i,i+offset]`` for all i. + + If `a` has more than two dimensions, then the axes specified by axis1 and + axis2 are used to determine the 2-D sub-arrays whose traces are returned. + The shape of the resulting array is the same as that of `a` with `axis1` + and `axis2` removed. + + Parameters + ---------- + a : array_like + Input array, from which the diagonals are taken. + offset : int, optional + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to 0. + axis1, axis2 : int, optional + Axes to be used as the first and second axis of the 2-D sub-arrays + from which the diagonals should be taken. Defaults are the first two + axes of `a`. + dtype : dtype, optional + Determines the data-type of the returned array and of the accumulator + where the elements are summed. 
If dtype has the value None and `a` is + of integer type of precision less than the default integer + precision, then the default integer precision is used. Otherwise, + the precision is the same as that of `a`. + out : ndarray, optional + Array into which the output is placed. Its type is preserved and + it must be of the right shape to hold the output. + + Returns + ------- + sum_along_diagonals : ndarray + If `a` is 2-D, the sum along the diagonal is returned. If `a` has + larger dimensions, then an array of sums along diagonals is returned. + + See Also + -------- + diag, diagonal, diagflat + + Examples + -------- + >>> np.trace(np.eye(3)) + 3.0 + >>> a = np.arange(8).reshape((2,2,2)) + >>> np.trace(a) + array([6, 8]) + + >>> a = np.arange(24).reshape((2,2,2,3)) + >>> np.trace(a).shape + (2, 3) + + """ + raise NotImplementedError('Waiting on interp level method') + +def ravel(a, order='C'): + """ + Return a flattened array. + + A 1-D array, containing the elements of the input, is returned. A copy is + made only if needed. + + Parameters + ---------- + a : array_like + Input array. The elements in ``a`` are read in the order specified by + `order`, and packed as a 1-D array. + order : {'C','F', 'A', 'K'}, optional + The elements of ``a`` are read in this order. 'C' means to view + the elements in C (row-major) order. 'F' means to view the elements + in Fortran (column-major) order. 'A' means to view the elements + in 'F' order if a is Fortran contiguous, 'C' order otherwise. + 'K' means to view the elements in the order they occur in memory, + except for reversing the data when strides are negative. + By default, 'C' order is used. + + Returns + ------- + 1d_array : ndarray + Output of the same dtype as `a`, and of shape ``(a.size(),)``. + + See Also + -------- + ndarray.flat : 1-D iterator over an array. + ndarray.flatten : 1-D array copy of the elements of an array + in row-major order. 
+ + Notes + ----- + In row-major order, the row index varies the slowest, and the column + index the quickest. This can be generalized to multiple dimensions, + where row-major order implies that the index along the first axis + varies slowest, and the index along the last quickest. The opposite holds + for Fortran-, or column-major, mode. + + Examples + -------- + It is equivalent to ``reshape(-1, order=order)``. + + >>> x = np.array([[1, 2, 3], [4, 5, 6]]) + >>> print np.ravel(x) + [1 2 3 4 5 6] + + >>> print x.reshape(-1) + [1 2 3 4 5 6] + + >>> print np.ravel(x, order='F') + [1 4 2 5 3 6] + + When ``order`` is 'A', it will preserve the array's 'C' or 'F' ordering: + + >>> print np.ravel(x.T) + [1 4 2 5 3 6] + >>> print np.ravel(x.T, order='A') + [1 2 3 4 5 6] + + When ``order`` is 'K', it will preserve orderings that are neither 'C' + nor 'F', but won't reverse axes: + + >>> a = np.arange(3)[::-1]; a + array([2, 1, 0]) + >>> a.ravel(order='C') + array([2, 1, 0]) + >>> a.ravel(order='K') + array([2, 1, 0]) + + >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a + array([[[ 0, 2, 4], + [ 1, 3, 5]], + [[ 6, 8, 10], + [ 7, 9, 11]]]) + >>> a.ravel(order='C') + array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11]) + >>> a.ravel(order='K') + array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def nonzero(a): + """ + Return the indices of the elements that are non-zero. + + Returns a tuple of arrays, one for each dimension of `a`, containing + the indices of the non-zero elements in that dimension. The + corresponding non-zero values can be obtained with:: + + a[nonzero(a)] + + To group the indices by element, rather than dimension, use:: + + transpose(nonzero(a)) + + The result of this is always a 2-D array, with a row for + each non-zero element. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + tuple_of_arrays : tuple + Indices of elements that are non-zero. 
+ + See Also + -------- + flatnonzero : + Return indices that are non-zero in the flattened version of the input + array. + ndarray.nonzero : + Equivalent ndarray method. + count_nonzero : + Counts the number of non-zero elements in the input array. + + Examples + -------- + >>> x = np.eye(3) + >>> x + array([[ 1., 0., 0.], + [ 0., 1., 0.], + [ 0., 0., 1.]]) + >>> np.nonzero(x) + (array([0, 1, 2]), array([0, 1, 2])) + + >>> x[np.nonzero(x)] + array([ 1., 1., 1.]) + >>> np.transpose(np.nonzero(x)) + array([[0, 0], + [1, 1], + [2, 2]]) + + A common use for ``nonzero`` is to find the indices of an array, where + a condition is True. Given an array `a`, the condition `a` > 3 is a + boolean array and since False is interpreted as 0, np.nonzero(a > 3) + yields the indices of the `a` where the condition is true. + + >>> a = np.array([[1,2,3],[4,5,6],[7,8,9]]) + >>> a > 3 + array([[False, False, False], + [ True, True, True], + [ True, True, True]], dtype=bool) + >>> np.nonzero(a > 3) + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + The ``nonzero`` method of the boolean array can also be called. + + >>> (a > 3).nonzero() + (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def shape(a): + """ + Return the shape of an array. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + shape : tuple of ints + The elements of the shape tuple give the lengths of the + corresponding array dimensions. + + See Also + -------- + alen + ndarray.shape : Equivalent array method. 
+ + Examples + -------- + >>> np.shape(np.eye(3)) + (3, 3) + >>> np.shape([[1, 2]]) + (1, 2) + >>> np.shape([0]) + (1,) + >>> np.shape(0) + () + + >>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + >>> np.shape(a) + (2,) + >>> a.shape + (2,) + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape + + +def compress(condition, a, axis=None, out=None): + """ + Return selected slices of an array along given axis. + + When working along a given axis, a slice along that axis is returned in + `output` for each index where `condition` evaluates to True. When + working on a 1-D array, `compress` is equivalent to `extract`. + + Parameters + ---------- + condition : 1-D array of bools + Array that selects which entries to return. If len(condition) + is less than the size of `a` along the given axis, then output is + truncated to the length of the condition array. + a : array_like + Array from which to extract a part. + axis : int, optional + Axis along which to take slices. If None (default), work on the + flattened array. + out : ndarray, optional + Output array. Its type is preserved and it must be of the right + shape to hold the output. + + Returns + ------- + compressed_array : ndarray + A copy of `a` without the slices along axis for which `condition` + is false. + + See Also + -------- + take, choose, diag, diagonal, select + ndarray.compress : Equivalent method. + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4], [5, 6]]) + >>> a + array([[1, 2], + [3, 4], + [5, 6]]) + >>> np.compress([0, 1], a, axis=0) + array([[3, 4]]) + >>> np.compress([False, True, True], a, axis=0) + array([[3, 4], + [5, 6]]) + >>> np.compress([False, True], a, axis=1) + array([[2], + [4], + [6]]) + + Working on the flattened array does not return slices along an axis but + selects elements. 
+ + >>> np.compress([False, True], a) + array([2]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def clip(a, a_min, a_max, out=None): + """ + Clip (limit) the values in an array. + + Given an interval, values outside the interval are clipped to + the interval edges. For example, if an interval of ``[0, 1]`` + is specified, values smaller than 0 become 0, and values larger + than 1 become 1. + + Parameters + ---------- + a : array_like + Array containing elements to clip. + a_min : scalar or array_like + Minimum value. + a_max : scalar or array_like + Maximum value. If `a_min` or `a_max` are array_like, then they will + be broadcasted to the shape of `a`. + out : ndarray, optional + The results will be placed in this array. It may be the input + array for in-place clipping. `out` must be of the right shape + to hold the output. Its type is preserved. + + Returns + ------- + clipped_array : ndarray + An array with the elements of `a`, but where values + < `a_min` are replaced with `a_min`, and those > `a_max` + with `a_max`. + + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Examples + -------- + >>> a = np.arange(10) + >>> np.clip(a, 1, 8) + array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, 3, 6, out=a) + array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) + >>> a = np.arange(10) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8) + array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def sum(a, axis=None, dtype=None, out=None): + """ + Sum of array elements over a given axis. + + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + dtype : dtype, optional + The type of the returned array and of the accumulator in which + the elements are summed. 
By default, the dtype of `a` is used. + An exception is when `a` has an integer type with less precision + than the default platform integer. In that case, the default + platform integer is used instead. + out : ndarray, optional + Array into which the output is placed. By default, a new array is + created. If `out` is given, it must be of the appropriate shape + (the shape of `a` with `axis` removed, i.e., + ``numpy.delete(a.shape, axis)``). Its type is preserved. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar + is returned. If an output array is specified, a reference to + `out` is returned. + + See Also + -------- + ndarray.sum : Equivalent method. + + cumsum : Cumulative sum of array elements. + + trapz : Integration of array values using the composite trapezoidal rule. + + mean, average + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> np.sum([0.5, 1.5]) + 2.0 + >>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32) + 1 + >>> np.sum([[0, 1], [0, 5]]) + 6 + >>> np.sum([[0, 1], [0, 5]], axis=0) + array([0, 6]) + >>> np.sum([[0, 1], [0, 5]], axis=1) + array([1, 5]) + + If the accumulator is too small, overflow occurs: + + >>> np.ones(128, dtype=np.int8).sum(dtype=np.int8) + -128 + + """ + if not hasattr(a, "sum"): + a = numpypy.array(a) + return a.sum() + + +def product (a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + See Also + -------- + prod : equivalent function; see for details. + + """ + raise NotImplementedError('Waiting on interp level method') + + +def sometrue(a, axis=None, out=None): + """ + Check whether some values are true. + + Refer to `any` for full documentation. 
+ + See Also + -------- + any : equivalent function + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def alltrue (a, axis=None, out=None): + """ + Check if all elements of input array are true. + + See Also + -------- + numpy.all : Equivalent function; see for details. + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + +def any(a,axis=None, out=None): + """ + Test whether any array element along a given axis evaluates to True. + + Returns single boolean unless `axis` is not ``None`` + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical OR is performed. The default + (`axis` = `None`) is to perform a logical OR over a flattened + input array. `axis` may be negative, in which case it counts + from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. It must have + the same shape as the expected output and its type is preserved + (e.g., if it is of type float, then it will remain so, returning + 1.0 for True and 0.0 for False, regardless of the type of `a`). + See `doc.ufuncs` (Section "Output arguments") for details. + + Returns + ------- + any : bool or ndarray + A new boolean or `ndarray` is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.any : equivalent method + + all : Test whether all elements along a given axis evaluate to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity evaluate + to `True` because these are not equal to zero. 
+ + Examples + -------- + >>> np.any([[True, False], [True, True]]) + True + + >>> np.any([[True, False], [False, False]], axis=0) + array([ True, False], dtype=bool) + + >>> np.any([-1, 0, 5]) + True + + >>> np.any(np.nan) + True + + >>> o=np.array([False]) + >>> z=np.any([-1, 4, 5], out=o) + >>> z, o + (array([ True], dtype=bool), array([ True], dtype=bool)) + >>> # Check now that z is a reference to o + >>> z is o + True + >>> id(z), id(o) # identity of z and o # doctest: +SKIP + (191614240, 191614240) + + """ + if not hasattr(a, 'any'): + a = numpypy.array(a) + return a.any() + + +def all(a,axis=None, out=None): + """ + Test whether all array elements along a given axis evaluate to True. + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int, optional + Axis along which a logical AND is performed. + The default (`axis` = `None`) is to perform a logical AND + over a flattened input array. `axis` may be negative, in which + case it counts from the last to the first axis. + out : ndarray, optional + Alternate output array in which to place the result. + It must have the same shape as the expected output and its + type is preserved (e.g., if ``dtype(out)`` is float, the result + will consist of 0.0's and 1.0's). See `doc.ufuncs` (Section + "Output arguments") for more details. + + Returns + ------- + all : ndarray, bool + A new boolean or array is returned unless `out` is specified, + in which case a reference to `out` is returned. + + See Also + -------- + ndarray.all : equivalent method + + any : Test whether any element along a given axis evaluates to True. + + Notes + ----- + Not a Number (NaN), positive infinity and negative infinity + evaluate to `True` because these are not equal to zero. 
+ + Examples + -------- + >>> np.all([[True,False],[True,True]]) + False + + >>> np.all([[True,False],[True,True]], axis=0) + array([ True, False], dtype=bool) + + >>> np.all([-1, 4, 5]) + True + + >>> np.all([1.0, np.nan]) + True + + >>> o=np.array([False]) + >>> z=np.all([-1, 4, 5], out=o) + >>> id(z), id(o), z # doctest: +SKIP + (28293632, 28293632, array([ True], dtype=bool)) + + """ + if not hasattr(a, 'all'): + a = numpypy.array(a) + return a.all() + + +def cumsum (a, axis=None, dtype=None, out=None): + """ + Return the cumulative sum of the elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative sum is computed. The default + (None) is to compute the cumsum over the flattened array. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults + to the dtype of `a`, unless `a` has an integer dtype with a + precision less than that of the default platform integer. In + that case, the default platform integer is used. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. See `doc.ufuncs` + (Section "Output arguments") for more details. + + Returns + ------- + cumsum_along_axis : ndarray. + A new array holding the result is returned unless `out` is + specified, in which case a reference to `out` is returned. The + result has the same size as `a`, and the same shape as `a` if + `axis` is not None or `a` is a 1-d array. + + + See Also + -------- + sum : Sum array elements. + + trapz : Integration of array values using the composite trapezoidal rule. + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. 
+ + Examples + -------- + >>> a = np.array([[1,2,3], [4,5,6]]) + >>> a + array([[1, 2, 3], + [4, 5, 6]]) + >>> np.cumsum(a) + array([ 1, 3, 6, 10, 15, 21]) + >>> np.cumsum(a, dtype=float) # specifies type of output value(s) + array([ 1., 3., 6., 10., 15., 21.]) + + >>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns + array([[1, 2, 3], + [5, 7, 9]]) + >>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows + array([[ 1, 3, 6], + [ 4, 9, 15]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def cumproduct(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product over the given axis. + + + See Also + -------- + cumprod : equivalent function; see for details. + + """ + raise NotImplementedError('Waiting on interp level method') + + +def ptp(a, axis=None, out=None): + """ + Range of values (maximum - minimum) along an axis. + + The name of the function comes from the acronym for 'peak to peak'. + + Parameters + ---------- + a : array_like + Input values. + axis : int, optional + Axis along which to find the peaks. By default, flatten the + array. + out : array_like + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output, + but the type of the output values will be cast if necessary. + + Returns + ------- + ptp : ndarray + A new array holding the result, unless `out` was + specified, in which case a reference to `out` is returned. + + Examples + -------- + >>> x = np.arange(4).reshape((2,2)) + >>> x + array([[0, 1], + [2, 3]]) + + >>> np.ptp(x, axis=0) + array([2, 2]) + + >>> np.ptp(x, axis=1) + array([1, 1]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def amax(a, axis=None, out=None): + """ + Return the maximum of an array or maximum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default flattened input is used. 
+ out : ndarray, optional + Alternate output array in which to place the result. Must be of + the same shape and buffer length as the expected output. See + `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amax : ndarray or scalar + Maximum of `a`. If `axis` is None, the result is a scalar value. + If `axis` is given, the result is an array of dimension + ``a.ndim - 1``. + + See Also + -------- + nanmax : NaN values are ignored instead of being propagated. + fmax : same behavior as the C99 fmax function. + argmax : indices of the maximum values. + + Notes + ----- + NaN values are propagated, that is if at least one item is NaN, the + corresponding max value will be NaN as well. To ignore NaN values + (MATLAB behavior), please use nanmax. + + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amax(a) + 3 + >>> np.amax(a, axis=0) + array([2, 3]) + >>> np.amax(a, axis=1) + array([1, 3]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amax(b) + nan + >>> np.nanmax(b) + 4.0 + + """ + if not hasattr(a, "max"): + a = numpypy.array(a) + return a.max() + + +def amin(a, axis=None, out=None): + """ + Return the minimum of an array or minimum along an axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which to operate. By default a flattened input is used. + out : ndarray, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + See `doc.ufuncs` (Section "Output arguments") for more details. + + Returns + ------- + amin : ndarray + A new array or a scalar array with the result. + + See Also + -------- + nanmin: nan values are ignored instead of being propagated + fmin: same behavior as the C99 fmin function + argmin: Return the indices of the minimum values. 
+ + amax, nanmax, fmax + + Notes + ----- + NaN values are propagated, that is if at least one item is nan, the + corresponding min value will be nan as well. To ignore NaN values (matlab + behavior), please use nanmin. + + Examples + -------- + >>> a = np.arange(4).reshape((2,2)) + >>> a + array([[0, 1], + [2, 3]]) + >>> np.amin(a) # Minimum of the flattened array + 0 + >>> np.amin(a, axis=0) # Minima along the first axis + array([0, 1]) + >>> np.amin(a, axis=1) # Minima along the second axis + array([0, 2]) + + >>> b = np.arange(5, dtype=np.float) + >>> b[2] = np.NaN + >>> np.amin(b) + nan + >>> np.nanmin(b) + 0.0 + + """ + # amin() is equivalent to min() + if not hasattr(a, 'min'): + a = numpypy.array(a) + return a.min() + +def alen(a): + """ + Return the length of the first dimension of the input array. + + Parameters + ---------- + a : array_like + Input array. + + Returns + ------- + l : int + Length of the first dimension of `a`. + + See Also + -------- + shape, size + + Examples + -------- + >>> a = np.zeros((7,4,5)) + >>> a.shape[0] + 7 + >>> np.alen(a) + 7 + + """ + if not hasattr(a, 'shape'): + a = numpypy.array(a) + return a.shape[0] + + +def prod(a, axis=None, dtype=None, out=None): + """ + Return the product of array elements over a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis over which the product is taken. By default, the product + of all elements is calculated. + dtype : data-type, optional + The data-type of the returned array, as well as of the accumulator + in which the elements are multiplied. By default, if `a` is of + integer type, `dtype` is the default platform integer. (Note: if + the type of `a` is unsigned, then so is `dtype`.) Otherwise, + the dtype is the same as that of `a`. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the + output values will be cast if necessary. 
+ + Returns + ------- + product_along_axis : ndarray, see `dtype` parameter above. + An array shaped as `a` but with the specified axis removed. + Returns a reference to `out` if specified. + + See Also + -------- + ndarray.prod : equivalent method + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. That means that, on a 32-bit platform: + + >>> x = np.array([536870910, 536870910, 536870910, 536870910]) + >>> np.prod(x) #random + 16 + + Examples + -------- + By default, calculate the product of all elements: + + >>> np.prod([1.,2.]) + 2.0 + + Even when the input array is two-dimensional: + + >>> np.prod([[1.,2.],[3.,4.]]) + 24.0 + + But we can also specify the axis over which to multiply: + + >>> np.prod([[1.,2.],[3.,4.]], axis=1) + array([ 2., 12.]) + + If the type of `x` is unsigned, then the output type is + the unsigned platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.uint8) + >>> np.prod(x).dtype == np.uint + True + + If `x` is of a signed integer type, then the output type + is the default platform integer: + + >>> x = np.array([1, 2, 3], dtype=np.int8) + >>> np.prod(x).dtype == np.int + True + + """ + raise NotImplementedError('Waiting on interp level method') + + +def cumprod(a, axis=None, dtype=None, out=None): + """ + Return the cumulative product of elements along a given axis. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which the cumulative product is computed. By default + the input is flattened. + dtype : dtype, optional + Type of the returned array, as well as of the accumulator in which + the elements are multiplied. If *dtype* is not specified, it + defaults to the dtype of `a`, unless `a` has an integer dtype with + a precision less than that of the default platform integer. In + that case, the default platform integer is used instead. 
+ out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type of the resulting values will be cast if necessary. + + Returns + ------- + cumprod : ndarray + A new array holding the result is returned unless `out` is + specified, in which case a reference to out is returned. + + See Also + -------- + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + Examples + -------- + >>> a = np.array([1,2,3]) + >>> np.cumprod(a) # intermediate results 1, 1*2 + ... # total product 1*2*3 = 6 + array([1, 2, 6]) + >>> a = np.array([[1, 2, 3], [4, 5, 6]]) + >>> np.cumprod(a, dtype=float) # specify type of output + array([ 1., 2., 6., 24., 120., 720.]) + + The cumulative product for each column (i.e., over the rows) of `a`: + + >>> np.cumprod(a, axis=0) + array([[ 1, 2, 3], + [ 4, 10, 18]]) + + The cumulative product for each row (i.e. over the columns) of `a`: + + >>> np.cumprod(a,axis=1) + array([[ 1, 2, 6], + [ 4, 20, 120]]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def ndim(a): + """ + Return the number of dimensions of an array. + + Parameters + ---------- + a : array_like + Input array. If it is not already an ndarray, a conversion is + attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in `a`. Scalars are zero-dimensional. + + See Also + -------- + ndarray.ndim : equivalent method + shape : dimensions of array + ndarray.shape : dimensions of array + + Examples + -------- + >>> np.ndim([[1,2,3],[4,5,6]]) + 2 + >>> np.ndim(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.ndim(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def rank(a): + """ + Return the number of dimensions of an array. + + If `a` is not already an array, a conversion is attempted. 
+ Scalars are zero dimensional. + + Parameters + ---------- + a : array_like + Array whose number of dimensions is desired. If `a` is not an array, + a conversion is attempted. + + Returns + ------- + number_of_dimensions : int + The number of dimensions in the array. + + See Also + -------- + ndim : equivalent function + ndarray.ndim : equivalent property + shape : dimensions of array + ndarray.shape : dimensions of array + + Notes + ----- + In the old Numeric package, `rank` was the term used for the number of + dimensions, but in Numpy `ndim` is used instead. + + Examples + -------- + >>> np.rank([1,2,3]) + 1 + >>> np.rank(np.array([[1,2,3],[4,5,6]])) + 2 + >>> np.rank(1) + 0 + + """ + if not hasattr(a, 'ndim'): + a = numpypy.array(a) + return a.ndim + + +def size(a, axis=None): + """ + Return the number of elements along a given axis. + + Parameters + ---------- + a : array_like + Input data. + axis : int, optional + Axis along which the elements are counted. By default, give + the total number of elements. + + Returns + ------- + element_count : int + Number of elements along the specified axis. + + See Also + -------- + shape : dimensions of array + ndarray.shape : dimensions of array + ndarray.size : number of elements in array + + Examples + -------- + >>> a = np.array([[1,2,3],[4,5,6]]) + >>> np.size(a) + 6 + >>> np.size(a,1) + 3 + >>> np.size(a,0) + 2 + + """ + raise NotImplementedError('Waiting on interp level method') + + +def around(a, decimals=0, out=None): + """ + Evenly round to the given number of decimals. + + Parameters + ---------- + a : array_like + Input data. + decimals : int, optional + Number of decimal places to round to (default: 0). If + decimals is negative, it specifies the number of positions to + the left of the decimal point. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output, but the type of the output + values will be cast if necessary. 
See `doc.ufuncs` (Section + "Output arguments") for details. + + Returns + ------- + rounded_array : ndarray + An array of the same type as `a`, containing the rounded values. + Unless `out` was specified, a new array is created. A reference to + the result is returned. + + The real and imaginary parts of complex numbers are rounded + separately. The result of rounding a float is a float. + + See Also + -------- + ndarray.round : equivalent method + + ceil, fix, floor, rint, trunc + + + Notes + ----- + For values exactly halfway between rounded decimal values, Numpy + rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, + -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due + to the inexact representation of decimal fractions in the IEEE + floating point standard [1]_ and errors introduced when scaling + by powers of ten. + + References + ---------- + .. [1] "Lecture Notes on the Status of IEEE 754", William Kahan, + http://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF + .. [2] "How Futile are Mindless Assessments of + Roundoff in Floating-Point Computation?", William Kahan, + http://www.cs.berkeley.edu/~wkahan/Mindless.pdf + + Examples + -------- + >>> np.around([0.37, 1.64]) + array([ 0., 2.]) + >>> np.around([0.37, 1.64], decimals=1) + array([ 0.4, 1.6]) + >>> np.around([.5, 1.5, 2.5, 3.5, 4.5]) # rounds to nearest even value + array([ 0., 2., 2., 4., 4.]) + >>> np.around([1,2,3,11], decimals=1) # ndarray of ints is returned + array([ 1, 2, 3, 11]) + >>> np.around([1,2,3,11], decimals=-1) + array([ 0, 0, 0, 10]) + + """ + raise NotImplementedError('Waiting on interp level method') + + +def round_(a, decimals=0, out=None): + """ + Round an array to the given number of decimals. + + Refer to `around` for full documentation. 
+ + See Also + -------- + around : equivalent function + + """ + raise NotImplementedError('Waiting on interp level method') + + +def mean(a, axis=None, dtype=None, out=None): + """ + Compute the arithmetic mean along the specified axis. + + Returns the average of the array elements. The average is taken over + the flattened array by default, otherwise over the specified axis. + `float64` intermediate and return values are used for integer inputs. + + Parameters + ---------- + a : array_like + Array containing numbers whose mean is desired. If `a` is not an + array, a conversion is attempted. + axis : int, optional + Axis along which the means are computed. The default is to compute + the mean of the flattened array. + dtype : data-type, optional + Type to use in computing the mean. For integer inputs, the default + is `float64`; for floating point inputs, it is the same as the + input dtype. + out : ndarray, optional + Alternate output array in which to place the result. The default + is ``None``; if provided, it must have the same shape as the + expected output, but the type will be cast if necessary. + See `doc.ufuncs` for details. + + Returns + ------- + m : ndarray, see dtype parameter above + If `out=None`, returns a new array containing the mean values, + otherwise a reference to the output array is returned. + + See Also + -------- + average : Weighted average + + Notes + ----- + The arithmetic mean is the sum of the elements along the axis divided + by the number of elements. + + Note that for floating-point input, the mean is computed using the + same precision the input has. Depending on the input data, this can + cause the results to be inaccurate, especially for `float32` (see + example below). Specifying a higher-precision accumulator using the + `dtype` keyword can alleviate this issue. 
+ + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.mean(a) + 2.5 + >>> np.mean(a, axis=0) + array([ 2., 3.]) + >>> np.mean(a, axis=1) + array([ 1.5, 3.5]) + + In single precision, `mean` can be inaccurate: + + >>> a = np.zeros((2, 512*512), dtype=np.float32) + >>> a[0, :] = 1.0 + >>> a[1, :] = 0.1 + >>> np.mean(a) + 0.546875 + + Computing the mean in float64 is more accurate: + + >>> np.mean(a, dtype=np.float64) + 0.55000000074505806 + + """ + if not hasattr(a, "mean"): + a = numpypy.array(a) + return a.mean() + + +def std(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the standard deviation along the specified axis. + + Returns the standard deviation, a measure of the spread of a distribution, + of the array elements. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + Parameters + ---------- + a : array_like + Calculate the standard deviation of these values. + axis : int, optional + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : dtype, optional + Type to use in computing the standard deviation. For arrays of + integer type the default is float64, for arrays of float types it is + the same as the array type. + out : ndarray, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type (of the calculated + values) will be cast if necessary. + ddof : int, optional + Means Delta Degrees of Freedom. The divisor used in calculations + is ``N - ddof``, where ``N`` represents the number of elements. + By default `ddof` is zero. + + Returns + ------- + standard_deviation : ndarray, see dtype parameter above. + If `out` is None, return a new array containing the standard deviation, + otherwise return a reference to the output array. 
+ + See Also + -------- + var, mean + numpy.doc.ufuncs : Section "Output arguments" + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e., ``std = sqrt(mean(abs(x - x.mean())**2))``. + + The average squared deviation is normally calculated as ``x.sum() / N``, where + ``N = len(x)``. If, however, `ddof` is specified, the divisor ``N - ddof`` + is used instead. In standard statistical practice, ``ddof=1`` provides an + unbiased estimator of the variance of the infinite population. ``ddof=0`` + provides a maximum likelihood estimate of the variance for normally + distributed variables. The standard deviation computed in this function + is the square root of the estimated variance, so even with ``ddof=1``, it + will not be an unbiased estimate of the standard deviation per se. + + Note that, for complex numbers, `std` takes the absolute + value before squaring, so that the result is always real and nonnegative. + + For floating-point input, the *std* is computed using the same + precision the input has. Depending on the input data, this can cause + the results to be inaccurate, especially for float32 (see example below). + Specifying a higher-accuracy accumulator using the `dtype` keyword can + alleviate this issue. + + Examples + -------- + >>> a = np.array([[1, 2], [3, 4]]) + >>> np.std(a) + 1.1180339887498949 + >>> np.std(a, axis=0) + array([ 1., 1.]) + >>> np.std(a, axis=1) + array([ 0.5, 0.5]) + + In single precision, std() can be inaccurate: + + >>> a = np.zeros((2,512*512), dtype=np.float32) + >>> a[0,:] = 1.0 + >>> a[1,:] = 0.1 + >>> np.std(a) + 0.45172946707416706 + + Computing the standard deviation in float64 is more accurate: + + >>> np.std(a, dtype=np.float64) + 0.44999999925552653 + + """ + if not hasattr(a, "std"): + a = numpypy.array(a) + return a.std() + + +def var(a, axis=None, dtype=None, out=None, ddof=0): + """ + Compute the variance along the specified axis. 
+
+    Returns the variance of the array elements, a measure of the spread of a
+    distribution. The variance is computed for the flattened array by
+    default, otherwise over the specified axis.
+
+    Parameters
+    ----------
+    a : array_like
+        Array containing numbers whose variance is desired. If `a` is not an
+        array, a conversion is attempted.
+    axis : int, optional
+        Axis along which the variance is computed. The default is to compute
+        the variance of the flattened array.
+    dtype : data-type, optional
+        Type to use in computing the variance. For arrays of integer type
+        the default is `float64`; for arrays of float types it is the same as
+        the array type.
+    out : ndarray, optional
+        Alternate output array in which to place the result. It must have
+        the same shape as the expected output, but the type is cast if
+        necessary.
+    ddof : int, optional
+        "Delta Degrees of Freedom": the divisor used in the calculation is
+        ``N - ddof``, where ``N`` represents the number of elements. By
+        default `ddof` is zero.
+
+    Returns
+    -------
+    variance : ndarray, see dtype parameter above
+        If ``out=None``, returns a new array containing the variance;
+        otherwise, a reference to the output array is returned.
+
+    See Also
+    --------
+    std : Standard deviation
+    mean : Average
+    numpy.doc.ufuncs : Section "Output arguments"
+
+    Notes
+    -----
+    The variance is the average of the squared deviations from the mean,
+    i.e., ``var = mean(abs(x - x.mean())**2)``.
+
+    The mean is normally calculated as ``x.sum() / N``, where ``N = len(x)``.
+    If, however, `ddof` is specified, the divisor ``N - ddof`` is used
+    instead. In standard statistical practice, ``ddof=1`` provides an
+    unbiased estimator of the variance of a hypothetical infinite population.
+    ``ddof=0`` provides a maximum likelihood estimate of the variance for
+    normally distributed variables.
+
+    Note that for complex numbers, the absolute value is taken before
+    squaring, so that the result is always real and nonnegative.
+
+    For floating-point input, the variance is computed using the same
+    precision the input has. Depending on the input data, this can cause
+    the results to be inaccurate, especially for `float32` (see example
+    below). Specifying a higher-accuracy accumulator using the ``dtype``
+    keyword can alleviate this issue.
+
+    Examples
+    --------
+    >>> a = np.array([[1,2],[3,4]])
+    >>> np.var(a)
+    1.25
+    >>> np.var(a,0)
+    array([ 1., 1.])
+    >>> np.var(a,1)
+    array([ 0.25, 0.25])
+
+    In single precision, var() can be inaccurate:
+
+    >>> a = np.zeros((2,512*512), dtype=np.float32)
+    >>> a[0,:] = 1.0
+    >>> a[1,:] = 0.1
+    >>> np.var(a)
+    0.20405951142311096
+
+    Computing the variance in float64 is more accurate:
+
+    >>> np.var(a, dtype=np.float64)
+    0.20249999932997387
+    >>> ((1-0.55)**2 + (0.1-0.55)**2)/2
+    0.20250000000000001
+
+    """
+    if not hasattr(a, "var"):
+        a = numpypy.array(a)
+    return a.var()
diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py
--- a/pypy/annotation/description.py
+++ b/pypy/annotation/description.py
@@ -257,7 +257,8 @@
         try:
             inputcells = args.match_signature(signature, defs_s)
         except ArgErr, e:
-            raise TypeError, "signature mismatch: %s" % e.getmsg(self.name)
+            raise TypeError("signature mismatch: %s() %s" %
+                            (self.name, e.getmsg()))
         return inputcells
 
     def specialize(self, inputcells, op=None):
diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py
--- a/pypy/config/pypyoption.py
+++ b/pypy/config/pypyoption.py
@@ -340,7 +340,7 @@
                    requires=[("objspace.std.builtinshortcut", True)]),
         BoolOption("withidentitydict",
                    "track types that override __hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not",
-                   default=True),
+                   default=False),
     ]),
 ])
 
@@ -370,6 +370,7 @@
     config.objspace.std.suggest(getattributeshortcut=True)
     config.objspace.std.suggest(newshortcut=True)
     config.objspace.std.suggest(withspecialisedtuple=True)
+    config.objspace.std.suggest(withidentitydict=True)
     #if not
IS_64_BITS: # config.objspace.std.suggest(withsmalllong=True) diff --git a/pypy/doc/Makefile b/pypy/doc/Makefile --- a/pypy/doc/Makefile +++ b/pypy/doc/Makefile @@ -12,7 +12,7 @@ PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . -.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest +.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex man changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @@ -23,6 +23,7 @@ @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" + @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @@ -79,6 +80,11 @@ @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." +man: + $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man + @echo + @echo "Build finished. The manual pages are in $(BUILDDIR)/man" + changes: python config/generate.py $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes diff --git a/pypy/doc/coding-guide.rst b/pypy/doc/coding-guide.rst --- a/pypy/doc/coding-guide.rst +++ b/pypy/doc/coding-guide.rst @@ -175,15 +175,15 @@ RPython ================= -RPython Definition, not ------------------------ +RPython Definition +------------------ -The list and exact details of the "RPython" restrictions are a somewhat -evolving topic. In particular, we have no formal language definition -as we find it more practical to discuss and evolve the set of -restrictions while working on the whole program analysis. 
If you
-have any questions about the restrictions below then please feel
-free to mail us at pypy-dev at codespeak net.
+RPython is a restricted subset of Python that is amenable to static analysis.
+Although there are additions to the language and some things might surprisingly
+work, this is a rough list of restrictions that should be considered. Note
+that there are tons of special-cased restrictions that you'll encounter
+as you go. The exact definition is "RPython is everything that our translation
+toolchain can accept" :)
 
 .. _`wrapped object`: coding-guide.html#wrapping-rules
 
@@ -198,7 +198,7 @@
   contain both a string and a int must be avoided. It is allowed to
   mix None (basically with the role of a null pointer) with many other
   types: `wrapped objects`, class instances, lists, dicts, strings, etc.
-  but *not* with int and floats.
+  but *not* with int, floats or tuples.
 
 **constants**
 
@@ -209,9 +209,12 @@
   have this restriction, so if you need mutable global state, store it
   in the attributes of some prebuilt singleton instance.
 
+
+
 **control structures**
 
-  all allowed but yield, ``for`` loops restricted to builtin types
+  all allowed, ``for`` loops restricted to builtin types, generators
+  very restricted.
 
 **range**
 
@@ -226,7 +229,8 @@
 
 **generators**
 
-  generators are not supported.
+  generators are supported, but their exact scope is very limited. You can't
+  merge two different generators in one control point.
 
 **exceptions**
 
@@ -245,22 +249,27 @@
 
 **strings**
 
-  a lot of, but not all string methods are supported. Indexes can be
+  a lot of, but not all string methods are supported and those that are
+  supported do not necessarily accept all arguments. Indexes can be
   negative. In case they are not, then you get slightly more efficient
   code if the translator can prove that they are non-negative. When
   slicing a string it is necessary to prove that the slice start and
-  stop indexes are non-negative.
+  stop indexes are non-negative.
There is no implicit str-to-unicode cast
+  anywhere.
 
 **tuples**
 
   no variable-length tuples; use them to store or return pairs or n-tuples of
-  values. Each combination of types for elements and length constitute a separate
-  and not mixable type.
+  values. Each combination of types for elements and length constitutes
+  a separate and not mixable type.
 
 **lists**
 
   lists are used as an allocated array. Lists are over-allocated, so list.append()
-  is reasonably fast. Negative or out-of-bound indexes are only allowed for the
+  is reasonably fast. However, if you use a fixed-size list, the code
+  is more efficient. The annotator can figure out most of the time that your
+  list is fixed-size, even when you use a list comprehension.
+  Negative or out-of-bound indexes are only allowed for the
   most common operations, as follows:
 
   - *indexing*:
@@ -287,16 +296,14 @@
 
 **dicts**
 
-  dicts with a unique key type only, provided it is hashable.
-  String keys have been the only allowed key types for a while, but this was generalized.
-  After some re-optimization,
-  the implementation could safely decide that all string dict keys should be interned.
+  dicts with a unique key type only, provided it is hashable. Custom
+  hash functions and custom equality will not be honored.
+  Use ``pypy.rlib.objectmodel.r_dict`` for custom hash functions.
 
 **list comprehensions**
 
-  may be used to create allocated, initialized arrays.
-  After list over-allocation was introduced, there is no longer any restriction.
+  May be used to create allocated, initialized arrays.
 
 **functions**
 
@@ -334,9 +341,8 @@
 
 **objects**
 
-  in PyPy, wrapped objects are borrowed from the object space. Just like
-  in CPython, code that needs e.g. a dictionary can use a wrapped dict
-  and the object space operations on it.
+  Normal rules apply. Special methods are not honoured, except ``__init__`` and
+  ``__del__``.
 
 This layout makes the number of types to take care about quite limited.
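The dicts restriction in the coding-guide hunk above says that custom hash and equality functions are not honoured in RPython, and points to ``pypy.rlib.objectmodel.r_dict``, which takes the two functions explicitly. As a rough illustration of that calling convention only, here is a plain-Python sketch; ``RDictSketch`` is a hypothetical stand-in written for this note, not the real RPython ``r_dict`` implementation:

```python
class RDictSketch(object):
    """Minimal dict-like object keyed through user-supplied
    equality and hash functions, mimicking how r_dict is called."""

    def __init__(self, key_eq, key_hash):
        self._key_eq = key_eq
        self._key_hash = key_hash
        self._buckets = {}  # hash value -> list of (key, value) pairs

    def __setitem__(self, key, value):
        bucket = self._buckets.setdefault(self._key_hash(key), [])
        for i, (k, _) in enumerate(bucket):
            if self._key_eq(k, key):
                bucket[i] = (key, value)  # replace existing entry
                return
        bucket.append((key, value))

    def __getitem__(self, key):
        for k, v in self._buckets.get(self._key_hash(key), []):
            if self._key_eq(k, key):
                return v
        raise KeyError(key)


# Case-insensitive string keys, passed the way r_dict(eq_fn, hash_fn)
# would be used in RPython code:
d = RDictSketch(lambda a, b: a.lower() == b.lower(),
                lambda k: hash(k.lower()))
d["Foo"] = 1
d["FOO"] = 2
print(d["foo"])  # -> 2: both spellings hit the same entry
```

The point of the explicit pair of functions is that the translated dict never consults ``__hash__``/``__eq__`` on the key objects themselves, which is exactly the behaviour the guide describes.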
diff --git a/pypy/doc/conf.py b/pypy/doc/conf.py --- a/pypy/doc/conf.py +++ b/pypy/doc/conf.py @@ -197,3 +197,10 @@ # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = {'http://docs.python.org/': None} +# -- Options for manpage output------------------------------------------------- + +man_pages = [ + ('man/pypy.1', 'pypy', + u'fast, compliant alternative implementation of the Python language', + u'The PyPy Project', 1) +] diff --git a/pypy/doc/extradoc.rst b/pypy/doc/extradoc.rst --- a/pypy/doc/extradoc.rst +++ b/pypy/doc/extradoc.rst @@ -8,6 +8,9 @@ *Articles about PyPy published so far, most recent first:* (bibtex_ file) +* `Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`_, + C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo + * `Allocation Removal by Partial Evaluation in a Tracing JIT`_, C.F. Bolz, A. Cuni, M. Fijalkowski, M. Leuschel, S. Pedroni, A. Rigo @@ -50,6 +53,9 @@ *Other research using PyPy (as far as we know it):* +* `Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`_, + N. Riley and C. Zilles + * `PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`_, C. Bruni and T. Verwaest @@ -65,6 +71,7 @@ .. _bibtex: https://bitbucket.org/pypy/extradoc/raw/tip/talk/bibtex.bib +.. _`Runtime Feedback in a Meta-Tracing JIT for Efficient Dynamic Languages`: https://bitbucket.org/pypy/extradoc/raw/extradoc/talk/icooolps2011/jit-hints.pdf .. _`Allocation Removal by Partial Evaluation in a Tracing JIT`: http://codespeak.net/svn/pypy/extradoc/talk/pepm2011/bolz-allocation-removal.pdf .. _`Towards a Jitting VM for Prolog Execution`: http://www.stups.uni-duesseldorf.de/publications/bolz-prolog-jit.pdf .. _`High performance implementation of Python for CLI/.NET with JIT compiler generation for dynamic languages`: http://buildbot.pypy.org/misc/antocuni-thesis.pdf @@ -74,6 +81,7 @@ .. 
_`Automatic JIT Compiler Generation with Runtime Partial Evaluation`: http://www.stups.uni-duesseldorf.de/thesis/final-master.pdf .. _`RPython: A Step towards Reconciling Dynamically and Statically Typed OO Languages`: http://www.disi.unige.it/person/AnconaD/papers/Recent_abstracts.html#AACM-DLS07 .. _`EU Reports`: index-report.html +.. _`Hardware Transactional Memory Support for Lightweight Dynamic Language Evolution`: http://sabi.net/nriley/pubs/dls6-riley.pdf .. _`PyGirl: Generating Whole-System VMs from High-Level Prototypes using PyPy`: http://scg.unibe.ch/archive/papers/Brun09cPyGirl.pdf .. _`Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python`: http://psyco.sourceforge.net/psyco-pepm-a.ps.gz .. _`Back to the Future in One Week -- Implementing a Smalltalk VM in PyPy`: http://dx.doi.org/10.1007/978-3-540-89275-5_7 diff --git a/pypy/doc/man/pypy.1.rst b/pypy/doc/man/pypy.1.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/man/pypy.1.rst @@ -0,0 +1,90 @@ +====== + pypy +====== + +SYNOPSIS +======== + +``pypy`` [*options*] +[``-c`` *cmd*\ \|\ ``-m`` *mod*\ \|\ *file.py*\ \|\ ``-``\ ] +[*arg*\ ...] + +OPTIONS +======= + +-i + Inspect interactively after running script. + +-O + Dummy optimization flag for compatibility with C Python. + +-c *cmd* + Program passed in as CMD (terminates option list). + +-S + Do not ``import site`` on initialization. + +-u + Unbuffered binary ``stdout`` and ``stderr``. + +-h, --help + Show a help message and exit. + +-m *mod* + Library module to be run as a script (terminates option list). + +-W *arg* + Warning control (*arg* is *action*:*message*:*category*:*module*:*lineno*). + +-E + Ignore environment variables (such as ``PYTHONPATH``). + +--version + Print the PyPy version. + +--info + Print translation information about this PyPy executable. + +--jit *arg* + Low level JIT parameters. Format is + *arg*\ ``=``\ *value*\ [``,``\ *arg*\ ``=``\ *value*\ ...] + + ``off`` + Disable the JIT. 
+
+      ``threshold=``\ *value*
+        Number of times a loop has to run for it to become hot.
+
+      ``function_threshold=``\ *value*
+        Number of times a function must run for it to become traced from
+        start.
+
+      ``inlining=``\ *value*
+        Inline Python functions or not (``1``/``0``).
+
+      ``loop_longevity=``\ *value*
+        A parameter controlling how long loops will be kept before being
+        freed, an estimate.
+
+      ``max_retrace_guards=``\ *value*
+        Number of extra guards a retrace can cause.
+
+      ``retrace_limit=``\ *value*
+        How many times we can try retracing before giving up.
+
+      ``trace_eagerness=``\ *value*
+        Number of times a guard has to fail before we start compiling a
+        bridge.
+
+      ``trace_limit=``\ *value*
+        Number of recorded operations before we abort tracing with
+        ``ABORT_TRACE_TOO_LONG``.
+
+      ``enable_opts=``\ *value*
+        Optimizations to enable, or ``all``.
+        Warning: this option is dangerous and should be avoided.
+SEE ALSO
+========
+
+**python**\ (1)
diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py
deleted file mode 100644
--- a/pypy/doc/tool/makecontributor.py
+++ /dev/null
@@ -1,47 +0,0 @@
-"""
-
-generates a contributor list
-
-"""
-import py
-
-# this file is useless, use the following commandline instead:
-# hg churn -c -t "{author}" | sed -e 's/ <.*//'
-
-try:
-    path = py.std.sys.argv[1]
-except IndexError:
-    print "usage: %s ROOTPATH" %(py.std.sys.argv[0])
-    raise SystemExit, 1
-
-d = {}
-
-for logentry in py.path.svnwc(path).log():
-    a = logentry.author
-    if a in d:
-        d[a] += 1
-    else:
-        d[a] = 1
-
-items = d.items()
-items.sort(lambda x,y: -cmp(x[1], y[1]))
-
-import uconf # http://codespeak.net/svn/uconf/dist/uconf
-
-# Authors that don't want to be listed
-excluded = set("anna gintas ignas".split())
-cutoff = 5 # cutoff for authors in the LICENSE file
-mark = False
-for author, count in items:
-    if author in excluded:
-        continue
-    user = uconf.system.User(author)
-    try:
-        realname = user.realname.strip()
-    except
KeyError: - realname = author - if not mark and count < cutoff: - mark = True - print '-'*60 - print " ", realname - #print count, " ", author diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -428,8 +428,8 @@ return self._match_signature(w_firstarg, scope_w, signature, defaults_w, 0) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) def _parse(self, w_firstarg, signature, defaults_w, blindargs=0): """Parse args and kwargs according to the signature of a code object, @@ -450,8 +450,8 @@ try: return self._parse(w_firstarg, signature, defaults_w, blindargs) except ArgErr, e: - raise OperationError(self.space.w_TypeError, - self.space.wrap(e.getmsg(fnname))) + raise operationerrfmt(self.space.w_TypeError, + "%s() %s", fnname, e.getmsg()) @staticmethod def frompacked(space, w_args=None, w_kwds=None): @@ -626,7 +626,7 @@ class ArgErr(Exception): - def getmsg(self, fnname): + def getmsg(self): raise NotImplementedError class ArgErrCount(ArgErr): @@ -642,11 +642,10 @@ self.num_args = got_nargs self.num_kwds = nkwds - def getmsg(self, fnname): + def getmsg(self): n = self.expected_nargs if n == 0: - msg = "%s() takes no arguments (%d given)" % ( - fnname, + msg = "takes no arguments (%d given)" % ( self.num_args + self.num_kwds) else: defcount = self.num_defaults @@ -672,8 +671,7 @@ msg2 = " non-keyword" else: msg2 = "" - msg = "%s() takes %s %d%s argument%s (%d given)" % ( - fnname, + msg = "takes %s %d%s argument%s (%d given)" % ( msg1, n, msg2, @@ -686,9 +684,8 @@ def __init__(self, argname): self.argname = argname - def getmsg(self, fnname): - msg = "%s() got multiple values for keyword argument '%s'" % ( - fnname, + def getmsg(self): + msg = "got multiple values for keyword argument '%s'" % ( self.argname) return msg @@ -722,13 +719,11 @@ 
break self.kwd_name = name - def getmsg(self, fnname): + def getmsg(self): if self.num_kwds == 1: - msg = "%s() got an unexpected keyword argument '%s'" % ( - fnname, + msg = "got an unexpected keyword argument '%s'" % ( self.kwd_name) else: - msg = "%s() got %d unexpected keyword arguments" % ( - fnname, + msg = "got %d unexpected keyword arguments" % ( self.num_kwds) return msg diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1591,12 +1591,15 @@ 'ArithmeticError', 'AssertionError', 'AttributeError', + 'BaseException', + 'DeprecationWarning', 'EOFError', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', + 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', @@ -1617,7 +1620,10 @@ 'TabError', 'TypeError', 'UnboundLocalError', + 'UnicodeDecodeError', 'UnicodeError', + 'UnicodeEncodeError', + 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', 'UnicodeEncodeError', diff --git a/pypy/interpreter/eval.py b/pypy/interpreter/eval.py --- a/pypy/interpreter/eval.py +++ b/pypy/interpreter/eval.py @@ -2,7 +2,6 @@ This module defines the abstract base classes that support execution: Code and Frame. """ -from pypy.rlib import jit from pypy.interpreter.error import OperationError from pypy.interpreter.baseobjspace import Wrappable diff --git a/pypy/interpreter/generator.py b/pypy/interpreter/generator.py --- a/pypy/interpreter/generator.py +++ b/pypy/interpreter/generator.py @@ -162,7 +162,8 @@ # generate 2 versions of the function and 2 jit drivers. 
def _create_unpack_into(): jitdriver = jit.JitDriver(greens=['pycode'], - reds=['self', 'frame', 'results']) + reds=['self', 'frame', 'results'], + name='unpack_into') def unpack_into(self, results): """This is a hack for performance: runs the generator and collects all produced items in a list.""" @@ -196,4 +197,4 @@ self.frame = None return unpack_into unpack_into = _create_unpack_into() - unpack_into_w = _create_unpack_into() \ No newline at end of file + unpack_into_w = _create_unpack_into() diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -393,8 +393,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -404,7 +404,7 @@ excinfo = py.test.raises(OperationError, args.parse_obj, "obj", "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_args_parsing_into_scope(self): @@ -448,8 +448,8 @@ class FakeArgErr(ArgErr): - def getmsg(self, fname): - return "msg "+fname + def getmsg(self): + return "msg" def _match_signature(*args): raise FakeArgErr() @@ -460,7 +460,7 @@ "obj", [None, None], "foo", Signature(["a", "b"], None, None)) assert excinfo.value.w_type is TypeError - assert excinfo.value._w_value == "msg foo" + assert excinfo.value.get_w_value(space) == "foo() msg" def test_topacked_frompacked(self): space = DummySpace() @@ -493,35 +493,35 @@ # got_nargs, nkwds, expected_nargs, has_vararg, has_kwarg, # defaults_w, missing_args err = ArgErrCount(1, 0, 0, False, False, None, 0) - s = err.getmsg('foo') - assert s == "foo() takes no arguments (1 given)" + s = err.getmsg() + assert s == "takes no arguments (1 given)" err = ArgErrCount(0, 0, 1, False, False, [], 1) - s = 
err.getmsg('foo') - assert s == "foo() takes exactly 1 argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 argument (0 given)" err = ArgErrCount(3, 0, 2, False, False, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes exactly 2 arguments (3 given)" err = ArgErrCount(3, 0, 2, False, False, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 2 arguments (3 given)" + s = err.getmsg() + assert s == "takes at most 2 arguments (3 given)" err = ArgErrCount(1, 0, 2, True, False, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 2 arguments (1 given)" + s = err.getmsg() + assert s == "takes at least 2 arguments (1 given)" err = ArgErrCount(0, 1, 2, True, False, ['a'], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, [], 0) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (2 given)" err = ArgErrCount(0, 1, 1, False, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes exactly 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes exactly 1 non-keyword argument (0 given)" err = ArgErrCount(0, 1, 1, True, True, [], 1) - s = err.getmsg('foo') - assert s == "foo() takes at least 1 non-keyword argument (0 given)" + s = err.getmsg() + assert s == "takes at least 1 non-keyword argument (0 given)" err = ArgErrCount(2, 1, 1, False, True, ['a'], 0) - s = err.getmsg('foo') - assert s == "foo() takes at most 1 non-keyword argument (2 given)" + s = err.getmsg() + assert s == "takes at most 1 non-keyword argument (2 given)" def test_bad_type_for_star(self): space = self.space @@ -543,12 +543,12 @@ def test_unknown_keywords(self): space = DummySpace() 
err = ArgErrUnknownKwds(space, 1, ['a', 'b'], [True, False], None) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument 'b'" + s = err.getmsg() + assert s == "got an unexpected keyword argument 'b'" err = ArgErrUnknownKwds(space, 2, ['a', 'b', 'c'], [True, False, False], None) - s = err.getmsg('foo') - assert s == "foo() got 2 unexpected keyword arguments" + s = err.getmsg() + assert s == "got 2 unexpected keyword arguments" def test_unknown_unicode_keyword(self): class DummySpaceUnicode(DummySpace): @@ -558,13 +558,13 @@ err = ArgErrUnknownKwds(space, 1, ['a', None, 'b', 'c'], [True, False, True, True], [unichr(0x1234), u'b', u'c']) - s = err.getmsg('foo') - assert s == "foo() got an unexpected keyword argument '\xe1\x88\xb4'" + s = err.getmsg() + assert s == "got an unexpected keyword argument '\xe1\x88\xb4'" def test_multiple_values(self): err = ArgErrMultipleValues('bla') - s = err.getmsg('foo') - assert s == "foo() got multiple values for keyword argument 'bla'" + s = err.getmsg() + assert s == "got multiple values for keyword argument 'bla'" class AppTestArgument: def test_error_message(self): diff --git a/pypy/jit/backend/llsupport/test/test_runner.py b/pypy/jit/backend/llsupport/test/test_runner.py --- a/pypy/jit/backend/llsupport/test/test_runner.py +++ b/pypy/jit/backend/llsupport/test/test_runner.py @@ -8,6 +8,12 @@ class MyLLCPU(AbstractLLCPU): supports_floats = True + + class assembler(object): + @staticmethod + def set_debug(flag): + pass + def compile_loop(self, inputargs, operations, looptoken): py.test.skip("llsupport test: cannot compile operations") diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -17,6 +17,7 @@ from pypy.rpython.llinterp import LLException from pypy.jit.codewriter import heaptracker, longlong from pypy.rlib.rarithmetic import intmask +from pypy.jit.backend.detect_cpu import 
autodetect_main_model_and_size def boxfloat(x): return BoxFloat(longlong.getfloatstorage(x)) @@ -27,6 +28,9 @@ class Runner(object): + add_loop_instruction = ['overload for a specific cpu'] + bridge_loop_instruction = ['overload for a specific cpu'] + def execute_operation(self, opname, valueboxes, result_type, descr=None): inputargs, operations = self._get_single_operation_list(opname, result_type, @@ -3184,6 +3188,56 @@ res = self.cpu.get_latest_value_int(0) assert res == -10 + def test_compile_asmlen(self): + from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU + if not isinstance(self.cpu, AbstractLLCPU): + py.test.skip("pointless test on non-asm") + from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + import ctypes + ops = """ + [i2] + i0 = same_as(i2) # but forced to be in a register + label(i0, descr=1) + i1 = int_add(i0, i0) + guard_true(i1, descr=faildesr) [i1] + jump(i1, descr=1) + """ + faildescr = BasicFailDescr(2) + loop = parse(ops, self.cpu, namespace=locals()) + faildescr = loop.operations[-2].getdescr() + jumpdescr = loop.operations[-1].getdescr() + bridge_ops = """ + [i0] + jump(i0, descr=jumpdescr) + """ + bridge = parse(bridge_ops, self.cpu, namespace=locals()) + looptoken = JitCellToken() + self.cpu.assembler.set_debug(False) + info = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs, + bridge.operations, + looptoken) + self.cpu.assembler.set_debug(True) # always on untranslated + assert info.asmlen != 0 + cpuname = autodetect_main_model_and_size() + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + + def checkops(mc, ops): + assert len(mc) == len(ops) + for i in range(len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i]) + + data = ctypes.string_at(info.asmaddr, info.asmlen) + mc = list(machine_code_dump(data, info.asmaddr, cpuname)) + lines = [line for line in mc if 
line.count('\t') == 2] + checkops(lines, self.add_loop_instructions) + data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) + mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.bridge_loop_instructions) + + def test_compile_bridge_with_target(self): # This test creates a loopy piece of code in a bridge, and builds another # unrelated loop that ends in a jump directly to this loopy bit of code. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -7,6 +7,7 @@ from pypy.rpython.lltypesystem import lltype, rffi, rstr, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.annlowlevel import llhelper +from pypy.rlib.jit import AsmInfo from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.backend.x86.regalloc import (RegAlloc, get_ebp_ofs, _get_scale, gpr_reg_mgr_cls, _valid_addressing_size) @@ -411,6 +412,7 @@ '''adds the following attributes to looptoken: _x86_function_addr (address of the generated func, as an int) _x86_loop_code (debug: addr of the start of the ResOps) + _x86_fullsize (debug: full size including failure) _x86_debug_checksum ''' # XXX this function is too longish and contains some code @@ -476,7 +478,8 @@ name = "Loop # %s: %s" % (looptoken.number, loopname) self.cpu.profile_agent.native_code_written(name, rawstart, full_size) - return ops_offset + return AsmInfo(ops_offset, rawstart + looppos, + size_excluding_failure_stuff - looppos) def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): @@ -485,12 +488,7 @@ assert len(set(inputargs)) == len(inputargs) descr_number = self.cpu.get_fail_descr_number(faildescr) - try: - failure_recovery = self._find_failure_recovery_bytecode(faildescr) - except ValueError: - debug_print("Bridge out of guard", descr_number, - "was 
already compiled!") - return + failure_recovery = self._find_failure_recovery_bytecode(faildescr) self.setup(original_loop_token) if log: @@ -503,6 +501,7 @@ [loc.assembler() for loc in faildescr._x86_debug_faillocs]) regalloc = RegAlloc(self, self.cpu.translate_support_code) fail_depths = faildescr._x86_current_depths + startpos = self.mc.get_relative_pos() operations = regalloc.prepare_bridge(fail_depths, inputargs, arglocs, operations, self.current_clt.allgcrefs) @@ -537,7 +536,7 @@ name = "Bridge # %s" % (descr_number,) self.cpu.profile_agent.native_code_written(name, rawstart, fullsize) - return ops_offset + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def write_pending_failure_recoveries(self): # for each pending guard, generate the code of the recovery stub @@ -621,7 +620,10 @@ def _find_failure_recovery_bytecode(self, faildescr): adr_jump_offset = faildescr._x86_adr_jump_offset if adr_jump_offset == 0: - raise ValueError + # This case should be prevented by the logic in compile.py: + # look for CNT_BUSY_FLAG, which disables tracing from a guard + # when another tracing from the same guard is already in progress. 
+ raise BridgeAlreadyCompiled # follow the JMP/Jcond p = rffi.cast(rffi.INTP, adr_jump_offset) adr_target = adr_jump_offset + 4 + rffi.cast(lltype.Signed, p[0]) @@ -810,7 +812,10 @@ target = newlooptoken._x86_function_addr mc = codebuf.MachineCodeBlockWrapper() mc.JMP(imm(target)) - assert mc.get_relative_pos() <= 13 # keep in sync with prepare_loop() + if WORD == 4: # keep in sync with prepare_loop() + assert mc.get_relative_pos() == 5 + else: + assert mc.get_relative_pos() <= 13 mc.copy_to_raw_memory(oldadr) def dump(self, text): @@ -2550,3 +2555,6 @@ def not_implemented(msg): os.write(2, '[x86/asm] %s\n' % msg) raise NotImplementedError(msg) + +class BridgeAlreadyCompiled(Exception): + pass diff --git a/pypy/jit/backend/x86/regalloc.py b/pypy/jit/backend/x86/regalloc.py --- a/pypy/jit/backend/x86/regalloc.py +++ b/pypy/jit/backend/x86/regalloc.py @@ -194,7 +194,10 @@ # note: we need to make a copy of inputargs because possibly_free_vars # is also used on op args, which is a non-resizable list self.possibly_free_vars(list(inputargs)) - self.min_bytes_before_label = 13 + if WORD == 4: # see redirect_call_assembler() + self.min_bytes_before_label = 5 + else: + self.min_bytes_before_label = 13 return operations def prepare_bridge(self, prev_depths, inputargs, arglocs, operations, @@ -683,7 +686,7 @@ self.xrm.possibly_free_var(op.getarg(0)) def consider_cast_int_to_float(self, op): - loc0 = self.rm.loc(op.getarg(0)) + loc0 = self.rm.make_sure_var_in_reg(op.getarg(0)) loc1 = self.xrm.force_allocate_reg(op.result) self.Perform(op, [loc0], loc1) self.rm.possibly_free_var(op.getarg(0)) diff --git a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -33,6 +33,13 @@ # for the individual tests see # ====> ../../test/runner_test.py + add_loop_instructions = ['mov', 'add', 'test', 'je', 'jmp'] + if WORD == 4: + bridge_loop_instructions = ['lea', 
'jmp'] + else: + # the 'mov' is part of the 'jmp' so far + bridge_loop_instructions = ['lea', 'mov', 'jmp'] + def setup_method(self, meth): self.cpu = CPU(rtyper=None, stats=FakeStats()) self.cpu.setup_once() @@ -416,12 +423,13 @@ ] inputargs = [i0] debug._log = dlog = debug.DebugLog() - ops_offset = self.cpu.compile_loop(inputargs, operations, looptoken) + info = self.cpu.compile_loop(inputargs, operations, looptoken) + ops_offset = info.ops_offset debug._log = None # assert ops_offset is looptoken._x86_ops_offset - # getfield_raw/int_add/setfield_raw + ops + None - assert len(ops_offset) == 3 + len(operations) + 1 + # 2*(getfield_raw/int_add/setfield_raw) + ops + None + assert len(ops_offset) == 2*3 + len(operations) + 1 assert (ops_offset[operations[0]] <= ops_offset[operations[1]] <= ops_offset[operations[2]] <= diff --git a/pypy/jit/backend/x86/tool/viewcode.py b/pypy/jit/backend/x86/tool/viewcode.py --- a/pypy/jit/backend/x86/tool/viewcode.py +++ b/pypy/jit/backend/x86/tool/viewcode.py @@ -39,6 +39,7 @@ def machine_code_dump(data, originaddr, backend_name, label_list=None): objdump_backend_option = { 'x86': 'i386', + 'x86_32': 'i386', 'x86_64': 'x86-64', 'i386': 'i386', } diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py --- a/pypy/jit/codewriter/policy.py +++ b/pypy/jit/codewriter/policy.py @@ -8,11 +8,15 @@ class JitPolicy(object): - def __init__(self): + def __init__(self, jithookiface=None): self.unsafe_loopy_graphs = set() self.supports_floats = False self.supports_longlong = False self.supports_singlefloats = False + if jithookiface is None: + from pypy.rlib.jit import JitHookInterface + jithookiface = JitHookInterface() + self.jithookiface = jithookiface def set_supports_floats(self, flag): self.supports_floats = flag diff --git a/pypy/jit/metainterp/compile.py b/pypy/jit/metainterp/compile.py --- a/pypy/jit/metainterp/compile.py +++ b/pypy/jit/metainterp/compile.py @@ -5,6 +5,7 @@ from pypy.rlib.objectmodel import 
we_are_translated from pypy.rlib.debug import debug_start, debug_stop, debug_print from pypy.rlib import rstack +from pypy.rlib.jit import JitDebugInfo from pypy.conftest import option from pypy.tool.sourcetools import func_with_new_name @@ -75,7 +76,7 @@ if descr is not original_jitcell_token: original_jitcell_token.record_jump_to(descr) descr.exported_state = None - op._descr = None # clear reference, mostly for tests + op.cleardescr() # clear reference, mostly for tests elif isinstance(descr, TargetToken): # for a JUMP: record it as a potential jump. # (the following test is not enough to prevent more complicated @@ -90,8 +91,8 @@ assert descr.exported_state is None if not we_are_translated(): op._descr_wref = weakref.ref(op._descr) - op._descr = None # clear reference to prevent the history.Stats - # from keeping the loop alive during tests + op.cleardescr() # clear reference to prevent the history.Stats + # from keeping the loop alive during tests # record this looptoken on the QuasiImmut used in the code if loop.quasi_immutable_deps is not None: for qmut in loop.quasi_immutable_deps: @@ -296,8 +297,6 @@ patch_new_loop_to_load_virtualizable_fields(loop, jitdriver_sd) original_jitcell_token = loop.original_jitcell_token - jitdriver_sd.on_compile(metainterp_sd.logger_ops, original_jitcell_token, - loop.operations, type, greenkey) loopname = jitdriver_sd.warmstate.get_location_str(greenkey) globaldata = metainterp_sd.globaldata original_jitcell_token.number = n = globaldata.loopnumbering @@ -307,21 +306,38 @@ show_procedures(metainterp_sd, loop) loop.check_consistency() + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_jitcell_token, loop.operations, + type, greenkey) + hooks.before_compile(debug_info) + else: + debug_info = None + hooks = None operations = get_deep_immutable_oplist(loop.operations) metainterp_sd.profiler.start_backend() 
debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, - original_jitcell_token, name=loopname) + asminfo = metainterp_sd.cpu.compile_loop(loop.inputargs, operations, + original_jitcell_token, + name=loopname) finally: debug_stop("jit-backend") metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile(debug_info) metainterp_sd.stats.add_new_loop(loop) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new " + type) # loopname = jitdriver_sd.warmstate.get_location_str(greenkey) + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_loop(loop.inputargs, loop.operations, n, type, ops_offset, name=loopname) @@ -332,25 +348,40 @@ def send_bridge_to_backend(jitdriver_sd, metainterp_sd, faildescr, inputargs, operations, original_loop_token): n = metainterp_sd.cpu.get_fail_descr_number(faildescr) - jitdriver_sd.on_compile_bridge(metainterp_sd.logger_ops, - original_loop_token, operations, n) if not we_are_translated(): show_procedures(metainterp_sd) seen = dict.fromkeys(inputargs) TreeLoop.check_consistency_of_branch(operations, seen) + if metainterp_sd.warmrunnerdesc is not None: + hooks = metainterp_sd.warmrunnerdesc.hooks + debug_info = JitDebugInfo(jitdriver_sd, metainterp_sd.logger_ops, + original_loop_token, operations, 'bridge', + fail_descr_no=n) + hooks.before_compile_bridge(debug_info) + else: + hooks = None + debug_info = None + operations = get_deep_immutable_oplist(operations) metainterp_sd.profiler.start_backend() - operations = get_deep_immutable_oplist(operations) debug_start("jit-backend") try: - ops_offset = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, operations, - original_loop_token) + asminfo = metainterp_sd.cpu.compile_bridge(faildescr, inputargs, + operations, + original_loop_token) finally: debug_stop("jit-backend") 
metainterp_sd.profiler.end_backend() + if hooks is not None: + debug_info.asminfo = asminfo + hooks.after_compile_bridge(debug_info) if not we_are_translated(): metainterp_sd.stats.compiled() metainterp_sd.log("compiled new bridge") # + if asminfo is not None: + ops_offset = asminfo.ops_offset + else: + ops_offset = None metainterp_sd.logger_ops.log_bridge(inputargs, operations, n, ops_offset) # #if metainterp_sd.warmrunnerdesc is not None: # for tests diff --git a/pypy/jit/metainterp/history.py b/pypy/jit/metainterp/history.py --- a/pypy/jit/metainterp/history.py +++ b/pypy/jit/metainterp/history.py @@ -1003,16 +1003,16 @@ return insns def check_simple_loop(self, expected=None, **check): - # Usefull in the simplest case when we have only one trace ending with - # a jump back to itself and possibly a few bridges ending with finnish. - # Only the operations within the loop formed by that single jump will - # be counted. + """ Usefull in the simplest case when we have only one trace ending with + a jump back to itself and possibly a few bridges. + Only the operations within the loop formed by that single jump will + be counted. + """ loops = self.get_all_loops() assert len(loops) == 1 loop = loops[0] jumpop = loop.operations[-1] assert jumpop.getopnum() == rop.JUMP - assert self.check_resops(jump=1) labels = [op for op in loop.operations if op.getopnum() == rop.LABEL] targets = [op._descr_wref() for op in labels] assert None not in targets # TargetToken was freed, give up diff --git a/pypy/jit/metainterp/jitdriver.py b/pypy/jit/metainterp/jitdriver.py --- a/pypy/jit/metainterp/jitdriver.py +++ b/pypy/jit/metainterp/jitdriver.py @@ -21,7 +21,6 @@ # self.portal_finishtoken... pypy.jit.metainterp.pyjitpl # self.index ... pypy.jit.codewriter.call # self.mainjitcode ... pypy.jit.codewriter.call - # self.on_compile ... 
pypy.jit.metainterp.warmstate # These attributes are read by the backend in CALL_ASSEMBLER: # self.assembler_helper_adr diff --git a/pypy/jit/metainterp/jitprof.py b/pypy/jit/metainterp/jitprof.py --- a/pypy/jit/metainterp/jitprof.py +++ b/pypy/jit/metainterp/jitprof.py @@ -18,8 +18,8 @@ OPT_FORCINGS ABORT_TOO_LONG ABORT_BRIDGE +ABORT_BAD_LOOP ABORT_ESCAPE -ABORT_BAD_LOOP ABORT_FORCE_QUASIIMMUT NVIRTUALS NVHOLES @@ -30,10 +30,13 @@ TOTAL_FREED_BRIDGES """ +counter_names = [] + def _setup(): names = counters.split() for i, name in enumerate(names): globals()[name] = i + counter_names.append(name) global ncounters ncounters = len(names) _setup() diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -234,11 +234,11 @@ # longlongs are treated as floats, see # e.g. llsupport/descr.py:getDescrClass is_float = True - elif kind == 'u': + elif kind == 'u' or kind == 's': # they're all False pass else: - assert False, "unsupported ffitype or kind" + raise NotImplementedError("unsupported ffitype or kind: %s" % kind) # fieldsize = rffi.getintfield(ffitype, 'c_size') return self.optimizer.cpu.interiorfielddescrof_dynamic( diff --git a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py --- a/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_multilabel.py @@ -442,6 +442,22 @@ """ self.optimize_loop(ops, expected) + def test_optimizer_renaming_boxes_not_imported(self): + ops = """ + [p1] + i1 = strlen(p1) + label(p1) + jump(p1) + """ + expected = """ + [p1] + i1 = strlen(p1) + label(p1, i1) + i11 = same_as(i1) + jump(p1, i11) + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeoptTestMultiLabel, LLtypeMixin): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py 
b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -117,7 +117,7 @@ def optimize_loop(self, ops, optops, call_pure_results=None): loop = self.parse(ops) - token = JitCellToken() + token = JitCellToken() loop.operations = [ResOperation(rop.LABEL, loop.inputargs, None, descr=TargetToken(token))] + \ loop.operations if loop.operations[-1].getopnum() == rop.JUMP: diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -271,6 +271,10 @@ if newresult is not op.result and not newvalue.is_constant(): op = ResOperation(rop.SAME_AS, [op.result], newresult) self.optimizer._newoperations.append(op) + if self.optimizer.loop.logops: + debug_print(' Falling back to add extra: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + self.optimizer.flush() self.optimizer.emitting_dissabled = False @@ -435,7 +439,13 @@ return for a in op.getarglist(): if not isinstance(a, Const) and a not in seen: - self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, seen) + self.ensure_short_op_emitted(self.short_boxes.producer(a), optimizer, + seen) + + if self.optimizer.loop.logops: + debug_print(' Emitting short op: ' + + self.optimizer.loop.logops.repr_of_resop(op)) + optimizer.send_extra_operation(op) seen[op.result] = True if op.is_ovf(): diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1553,6 +1553,7 @@ class MetaInterp(object): in_recursion = 0 + cancel_count = 0 def __init__(self, staticdata, jitdriver_sd): self.staticdata = staticdata @@ -1793,6 +1794,15 @@ def aborted_tracing(self, reason): self.staticdata.profiler.count(reason) debug_print('~~~ ABORTING TRACING') + jd_sd = 
self.jitdriver_sd + if not self.current_merge_points: + greenkey = None # we're in the bridge + else: + greenkey = self.current_merge_points[0][0][:jd_sd.num_green_args] + self.staticdata.warmrunnerdesc.hooks.on_abort(reason, + jd_sd.jitdriver, + greenkey, + jd_sd.warmstate.get_location_str(greenkey)) self.staticdata.stats.aborted() def blackhole_if_trace_too_long(self): @@ -1966,9 +1976,14 @@ raise SwitchToBlackhole(ABORT_BAD_LOOP) # For now self.compile_loop(original_boxes, live_arg_boxes, start, resumedescr) # creation of the loop was cancelled! + self.cancel_count += 1 + if self.staticdata.warmrunnerdesc: + memmgr = self.staticdata.warmrunnerdesc.memory_manager + if memmgr: + if self.cancel_count > memmgr.max_unroll_loops: + self.staticdata.log('cancelled too many times!') + raise SwitchToBlackhole(ABORT_BAD_LOOP) self.staticdata.log('cancelled, tracing more...') - #self.staticdata.log('cancelled, stopping tracing') - #raise SwitchToBlackhole(ABORT_BAD_LOOP) # Otherwise, no loop found so far, so continue tracing. 
start = len(self.history.operations) diff --git a/pypy/jit/metainterp/resoperation.py b/pypy/jit/metainterp/resoperation.py --- a/pypy/jit/metainterp/resoperation.py +++ b/pypy/jit/metainterp/resoperation.py @@ -16,15 +16,15 @@ # debug name = "" pc = 0 + opnum = 0 + + _attrs_ = ('result',) def __init__(self, result): self.result = result - # methods implemented by each concrete class - # ------------------------------------------ - def getopnum(self): - raise NotImplementedError + return self.opnum # methods implemented by the arity mixins # --------------------------------------- @@ -64,6 +64,9 @@ def setdescr(self, descr): raise NotImplementedError + def cleardescr(self): + pass + # common methods # -------------- @@ -196,6 +199,9 @@ self._check_descr(descr) self._descr = descr + def cleardescr(self): + self._descr = None + def _check_descr(self, descr): if not we_are_translated() and getattr(descr, 'I_am_a_descr', False): return # needed for the mock case in oparser_model @@ -590,12 +596,9 @@ baseclass = PlainResOp mixin = arity2mixin.get(arity, N_aryOp) - def getopnum(self): - return opnum - cls_name = '%s_OP' % name bases = (get_base_class(mixin, baseclass),) - dic = {'getopnum': getopnum} + dic = {'opnum': opnum} return type(cls_name, bases, dic) setup(__name__ == '__main__') # print out the table when run directly diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py --- a/pypy/jit/metainterp/test/support.py +++ b/pypy/jit/metainterp/test/support.py @@ -56,8 +56,6 @@ greenfield_info = None result_type = result_kind portal_runner_ptr = "???" 
- on_compile = lambda *args: None - on_compile_bridge = lambda *args: None stats = history.Stats() cpu = CPUClass(rtyper, stats, None, False) diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py --- a/pypy/jit/metainterp/test/test_ajit.py +++ b/pypy/jit/metainterp/test/test_ajit.py @@ -2629,6 +2629,38 @@ self.check_jitcell_token_count(1) self.check_target_token_count(5) + def test_max_unroll_loops(self): + from pypy.jit.metainterp.optimize import InvalidLoop + from pypy.jit.metainterp import optimizeopt + myjitdriver = JitDriver(greens = [], reds = ['n', 'i']) + # + def f(n, limit): + set_param(myjitdriver, 'threshold', 5) + set_param(myjitdriver, 'max_unroll_loops', limit) + i = 0 + while i < n: + myjitdriver.jit_merge_point(n=n, i=i) + print i + i += 1 + return i + # + def my_optimize_trace(*args, **kwds): + raise InvalidLoop + old_optimize_trace = optimizeopt.optimize_trace + optimizeopt.optimize_trace = my_optimize_trace + try: + res = self.meta_interp(f, [23, 4]) + assert res == 23 + self.check_trace_count(0) + self.check_aborted_count(3) + # + res = self.meta_interp(f, [23, 20]) + assert res == 23 + self.check_trace_count(0) + self.check_aborted_count(2) + finally: + optimizeopt.optimize_trace = old_optimize_trace + def test_retrace_limit_with_extra_guards(self): myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'sa', 'a', 'node']) diff --git a/pypy/jit/metainterp/test/test_compile.py b/pypy/jit/metainterp/test/test_compile.py --- a/pypy/jit/metainterp/test/test_compile.py +++ b/pypy/jit/metainterp/test/test_compile.py @@ -53,8 +53,6 @@ call_pure_results = {} class jitdriver_sd: warmstate = FakeState() - on_compile = staticmethod(lambda *args: None) - on_compile_bridge = staticmethod(lambda *args: None) virtualizable_info = None def test_compile_loop(): diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ 
b/pypy/jit/metainterp/test/test_fficall.py @@ -148,28 +148,38 @@ self.check_resops({'jump': 1, 'int_lt': 2, 'setinteriorfield_raw': 4, 'getinteriorfield_raw': 8, 'int_add': 6, 'guard_true': 2}) - def test_array_getitem_uint8(self): + def _test_getitem_type(self, TYPE, ffitype, COMPUTE_TYPE): + reds = ["n", "i", "s", "data"] + if COMPUTE_TYPE is lltype.Float: + # Move the float var to the back. + reds.remove("s") + reds.append("s") myjitdriver = JitDriver( greens = [], - reds = ["n", "i", "s", "data"], + reds = reds, ) def f(data, n): - i = s = 0 + i = 0 + s = rffi.cast(COMPUTE_TYPE, 0) while i < n: myjitdriver.jit_merge_point(n=n, i=i, s=s, data=data) - s += rffi.cast(lltype.Signed, array_getitem(types.uchar, 1, data, 0, 0)) + s += rffi.cast(COMPUTE_TYPE, array_getitem(ffitype, rffi.sizeof(TYPE), data, 0, 0)) i += 1 return s + def main(n): + with lltype.scoped_alloc(rffi.CArray(TYPE), 1) as data: + data[0] = rffi.cast(TYPE, 200) + return f(data, n) + assert self.meta_interp(main, [10]) == 2000 - def main(n): - with lltype.scoped_alloc(rffi.CArray(rffi.UCHAR), 1) as data: - data[0] = rffi.cast(rffi.UCHAR, 200) - return f(data, n) - - assert self.meta_interp(main, [10]) == 2000 + def test_array_getitem_uint8(self): + self._test_getitem_type(rffi.UCHAR, types.uchar, lltype.Signed) self.check_resops({'jump': 1, 'int_lt': 2, 'getinteriorfield_raw': 2, 'guard_true': 2, 'int_add': 4}) + def test_array_getitem_float(self): + self._test_getitem_type(rffi.FLOAT, types.float, lltype.Float) + class TestFfiCall(FfiCallTests, LLJitMixin): supports_all = False diff --git a/pypy/jit/metainterp/test/test_jitdriver.py b/pypy/jit/metainterp/test/test_jitdriver.py --- a/pypy/jit/metainterp/test/test_jitdriver.py +++ b/pypy/jit/metainterp/test/test_jitdriver.py @@ -10,57 +10,6 @@ def getloc2(g): return "in jitdriver2, with g=%d" % g -class JitDriverTests(object): - def test_on_compile(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, 
operations, type, n, m): - called[(m, n, type)] = looptoken - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - i += 1 - - self.meta_interp(loop, [1, 4]) - assert sorted(called.keys()) == [(4, 1, "loop")] - self.meta_interp(loop, [2, 4]) - assert sorted(called.keys()) == [(4, 1, "loop"), - (4, 2, "loop")] - - def test_on_compile_bridge(self): - called = {} - - class MyJitDriver(JitDriver): - def on_compile(self, logger, looptoken, operations, type, n, m): - called[(m, n, type)] = loop - def on_compile_bridge(self, logger, orig_token, operations, n): - assert 'bridge' not in called - called['bridge'] = orig_token - - driver = MyJitDriver(greens = ['n', 'm'], reds = ['i']) - - def loop(n, m): - i = 0 - while i < n + m: - driver.can_enter_jit(n=n, m=m, i=i) - driver.jit_merge_point(n=n, m=m, i=i) - if i >= 4: - i += 2 - i += 1 - - self.meta_interp(loop, [1, 10]) - assert sorted(called.keys()) == ['bridge', (10, 1, "loop")] - - -class TestLLtypeSingle(JitDriverTests, LLJitMixin): - pass - class MultipleJitDriversTests(object): def test_simple(self): diff --git a/pypy/jit/metainterp/test/test_jitiface.py b/pypy/jit/metainterp/test/test_jitiface.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/test/test_jitiface.py @@ -0,0 +1,148 @@ + +from pypy.rlib.jit import JitDriver, JitHookInterface +from pypy.rlib import jit_hooks +from pypy.jit.metainterp.test.support import LLJitMixin +from pypy.jit.codewriter.policy import JitPolicy +from pypy.jit.metainterp.jitprof import ABORT_FORCE_QUASIIMMUT +from pypy.jit.metainterp.resoperation import rop +from pypy.rpython.annlowlevel import hlstr + +class TestJitHookInterface(LLJitMixin): + def test_abort_quasi_immut(self): + reasons = [] + + class MyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + assert jitdriver is myjitdriver + assert 
len(greenkey) == 1 + reasons.append(reason) + assert greenkey_repr == 'blah' + + iface = MyJitIface() + + myjitdriver = JitDriver(greens=['foo'], reds=['x', 'total'], + get_printable_location=lambda *args: 'blah') + + class Foo: + _immutable_fields_ = ['a?'] + def __init__(self, a): + self.a = a + def f(a, x): + foo = Foo(a) + total = 0 + while x > 0: + myjitdriver.jit_merge_point(foo=foo, x=x, total=total) + # read a quasi-immutable field out of a Constant + total += foo.a + foo.a += 1 + x -= 1 + return total + # + assert f(100, 7) == 721 + res = self.meta_interp(f, [100, 7], policy=JitPolicy(iface)) + assert res == 721 + assert reasons == [ABORT_FORCE_QUASIIMMUT] * 2 + + def test_on_compile(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append(("compile", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + def before_compile(self, di): + called.append(("optimize", di.greenkey[1].getint(), + di.greenkey[0].getint(), di.type)) + + #def before_optimize(self, jitdriver, logger, looptoken, oeprations, + # type, greenkey): + # called.append(("trace", greenkey[1].getint(), + # greenkey[0].getint(), type)) + + iface = MyJitIface() + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + i += 1 + + self.meta_interp(loop, [1, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop")] + self.meta_interp(loop, [2, 4], policy=JitPolicy(iface)) + assert called == [#("trace", 4, 1, "loop"), + ("optimize", 4, 1, "loop"), + ("compile", 4, 1, "loop"), + #("trace", 4, 2, "loop"), + ("optimize", 4, 2, "loop"), + ("compile", 4, 2, "loop")] + + def test_on_compile_bridge(self): + called = [] + + class MyJitIface(JitHookInterface): + def after_compile(self, di): + called.append("compile") + + def 
after_compile_bridge(self, di): + called.append("compile_bridge") + + def before_compile_bridge(self, di): + called.append("before_compile_bridge") + + driver = JitDriver(greens = ['n', 'm'], reds = ['i']) + + def loop(n, m): + i = 0 + while i < n + m: + driver.can_enter_jit(n=n, m=m, i=i) + driver.jit_merge_point(n=n, m=m, i=i) + if i >= 4: + i += 2 + i += 1 + + self.meta_interp(loop, [1, 10], policy=JitPolicy(MyJitIface())) + assert called == ["compile", "before_compile_bridge", "compile_bridge"] + + def test_resop_interface(self): + driver = JitDriver(greens = [], reds = ['i']) + + def loop(i): + while i > 0: + driver.jit_merge_point(i=i) + i -= 1 + + def main(): + loop(1) + op = jit_hooks.resop_new(rop.INT_ADD, + [jit_hooks.boxint_new(3), + jit_hooks.boxint_new(4)], + jit_hooks.boxint_new(1)) + assert hlstr(jit_hooks.resop_getopname(op)) == 'int_add' + assert jit_hooks.resop_getopnum(op) == rop.INT_ADD + box = jit_hooks.resop_getarg(op, 0) + assert jit_hooks.box_getint(box) == 3 + box2 = jit_hooks.box_clone(box) + assert box2 != box + assert jit_hooks.box_getint(box2) == 3 + assert not jit_hooks.box_isconst(box2) + box3 = jit_hooks.box_constbox(box) + assert jit_hooks.box_getint(box) == 3 + assert jit_hooks.box_isconst(box3) + box4 = jit_hooks.box_nonconstbox(box) + assert not jit_hooks.box_isconst(box4) + box5 = jit_hooks.boxint_new(18) + jit_hooks.resop_setarg(op, 0, box5) + assert jit_hooks.resop_getarg(op, 0) == box5 + box6 = jit_hooks.resop_getresult(op) + assert jit_hooks.box_getint(box6) == 1 + jit_hooks.resop_setresult(op, box5) + assert jit_hooks.resop_getresult(op) == box5 + + self.meta_interp(main, []) diff --git a/pypy/jit/metainterp/test/test_resoperation.py b/pypy/jit/metainterp/test/test_resoperation.py --- a/pypy/jit/metainterp/test/test_resoperation.py +++ b/pypy/jit/metainterp/test/test_resoperation.py @@ -30,17 +30,17 @@ cls = rop.opclasses[rop.rop.INT_ADD] assert issubclass(cls, rop.PlainResOp) assert issubclass(cls, rop.BinaryOp) - assert 
cls.getopnum.im_func(None) == rop.rop.INT_ADD
+    assert cls.getopnum.im_func(cls) == rop.rop.INT_ADD
     cls = rop.opclasses[rop.rop.CALL]
     assert issubclass(cls, rop.ResOpWithDescr)
     assert issubclass(cls, rop.N_aryOp)
-    assert cls.getopnum.im_func(None) == rop.rop.CALL
+    assert cls.getopnum.im_func(cls) == rop.rop.CALL
     cls = rop.opclasses[rop.rop.GUARD_TRUE]
     assert issubclass(cls, rop.GuardResOp)
     assert issubclass(cls, rop.UnaryOp)
-    assert cls.getopnum.im_func(None) == rop.rop.GUARD_TRUE
+    assert cls.getopnum.im_func(cls) == rop.rop.GUARD_TRUE
 
 def test_mixins_in_common_base():
     INT_ADD = rop.opclasses[rop.rop.INT_ADD]
diff --git a/pypy/jit/metainterp/test/test_virtualstate.py b/pypy/jit/metainterp/test/test_virtualstate.py
--- a/pypy/jit/metainterp/test/test_virtualstate.py
+++ b/pypy/jit/metainterp/test/test_virtualstate.py
@@ -5,7 +5,7 @@
     VArrayStateInfo, NotVirtualStateInfo, VirtualState, ShortBoxes
 from pypy.jit.metainterp.optimizeopt.optimizer import OptValue
 from pypy.jit.metainterp.history import BoxInt, BoxFloat, BoxPtr, ConstInt, ConstPtr
-from pypy.rpython.lltypesystem import lltype
+from pypy.rpython.lltypesystem import lltype, llmemory
 from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin, BaseTest, \
     equaloplists, FakeDescrWithSnapshot
 from pypy.jit.metainterp.optimizeopt.intutils import IntBound
@@ -82,6 +82,13 @@
         assert isgeneral(value1, value2)
         assert not isgeneral(value2, value1)
 
+        assert isgeneral(OptValue(ConstInt(7)), OptValue(ConstInt(7)))
+        S = lltype.GcStruct('S')
+        foo = lltype.malloc(S)
+        fooref = lltype.cast_opaque_ptr(llmemory.GCREF, foo)
+        assert isgeneral(OptValue(ConstPtr(fooref)),
+                         OptValue(ConstPtr(fooref)))
+
     def test_field_matching_generalization(self):
         const1 = NotVirtualStateInfo(OptValue(ConstInt(1)))
         const2 = NotVirtualStateInfo(OptValue(ConstInt(2)))
diff --git a/pypy/jit/metainterp/test/test_ztranslation.py b/pypy/jit/metainterp/test/test_ztranslation.py
--- a/pypy/jit/metainterp/test/test_ztranslation.py
+++
b/pypy/jit/metainterp/test/test_ztranslation.py @@ -3,7 +3,9 @@ from pypy.jit.backend.llgraph import runner from pypy.rlib.jit import JitDriver, unroll_parameters, set_param from pypy.rlib.jit import PARAMETERS, dont_look_inside, hint +from pypy.rlib.jit_hooks import boxint_new, resop_new, resop_getopnum from pypy.jit.metainterp.jitprof import Profiler +from pypy.jit.metainterp.resoperation import rop from pypy.rpython.lltypesystem import lltype, llmemory class TranslationTest: @@ -22,6 +24,7 @@ # - jitdriver hooks # - two JITs # - string concatenation, slicing and comparison + # - jit hooks interface class Frame(object): _virtualizable2_ = ['l[*]'] @@ -91,7 +94,9 @@ return f.i # def main(i, j): - return f(i) - f2(i+j, i, j) + op = resop_new(rop.INT_ADD, [boxint_new(3), boxint_new(5)], + boxint_new(8)) + return f(i) - f2(i+j, i, j) + resop_getopnum(op) res = ll_meta_interp(main, [40, 5], CPUClass=self.CPUClass, type_system=self.type_system, listops=True) diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py --- a/pypy/jit/metainterp/warmspot.py +++ b/pypy/jit/metainterp/warmspot.py @@ -1,4 +1,5 @@ import sys, py +from pypy.tool.sourcetools import func_with_new_name from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.annlowlevel import llhelper, MixLevelHelperAnnotator,\ cast_base_ptr_to_instance, hlstr @@ -112,7 +113,7 @@ return ll_meta_interp(function, args, backendopt=backendopt, translate_support_code=True, **kwds) -def _find_jit_marker(graphs, marker_name): +def _find_jit_marker(graphs, marker_name, check_driver=True): results = [] for graph in graphs: for block in graph.iterblocks(): @@ -120,8 +121,8 @@ op = block.operations[i] if (op.opname == 'jit_marker' and op.args[0].value == marker_name and - (op.args[1].value is None or - op.args[1].value.active)): # the jitdriver + (not check_driver or op.args[1].value is None or + op.args[1].value.active)): # the jitdriver results.append((graph, block, i)) return results 
@@ -140,6 +141,9 @@ "found several jit_merge_points in the same graph") return results +def find_access_helpers(graphs): + return _find_jit_marker(graphs, 'access_helper', False) + def locate_jit_merge_point(graph): [(graph, block, pos)] = find_jit_merge_points([graph]) return block, pos, block.operations[pos] @@ -206,6 +210,7 @@ vrefinfo = VirtualRefInfo(self) self.codewriter.setup_vrefinfo(vrefinfo) # + self.hooks = policy.jithookiface self.make_virtualizable_infos() self.make_exception_classes() self.make_driverhook_graphs() @@ -213,6 +218,7 @@ self.rewrite_jit_merge_points(policy) verbose = False # not self.cpu.translate_support_code + self.rewrite_access_helpers() self.codewriter.make_jitcodes(verbose=verbose) self.rewrite_can_enter_jits() self.rewrite_set_param() @@ -619,6 +625,24 @@ graph = self.annhelper.getgraph(func, args_s, s_result) return self.annhelper.graph2delayed(graph, FUNC) + def rewrite_access_helpers(self): + ah = find_access_helpers(self.translator.graphs) + for graph, block, index in ah: + op = block.operations[index] + self.rewrite_access_helper(op) + + def rewrite_access_helper(self, op): + ARGS = [arg.concretetype for arg in op.args[2:]] + RESULT = op.result.concretetype + FUNCPTR = lltype.Ptr(lltype.FuncType(ARGS, RESULT)) + # make sure we make a copy of function so it no longer belongs + # to extregistry + func = op.args[1].value + func = func_with_new_name(func, func.func_name + '_compiled') + ptr = self.helper_func(FUNCPTR, func) + op.opname = 'direct_call' + op.args = [Constant(ptr, FUNCPTR)] + op.args[2:] + def rewrite_jit_merge_points(self, policy): for jd in self.jitdrivers_sd: self.rewrite_jit_merge_point(jd, policy) diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py --- a/pypy/jit/metainterp/warmstate.py +++ b/pypy/jit/metainterp/warmstate.py @@ -244,6 +244,11 @@ if self.warmrunnerdesc.memory_manager: self.warmrunnerdesc.memory_manager.max_retrace_guards = value + def set_param_max_unroll_loops(self, 
value): + if self.warmrunnerdesc: + if self.warmrunnerdesc.memory_manager: + self.warmrunnerdesc.memory_manager.max_unroll_loops = value + def disable_noninlinable_function(self, greenkey): cell = self.jit_cell_at_key(greenkey) cell.dont_trace_here = True @@ -596,20 +601,6 @@ return fn(*greenargs) self.should_unroll_one_iteration = should_unroll_one_iteration - if hasattr(jd.jitdriver, 'on_compile'): - def on_compile(logger, token, operations, type, greenkey): - greenargs = unwrap_greenkey(greenkey) - return jd.jitdriver.on_compile(logger, token, operations, type, - *greenargs) - def on_compile_bridge(logger, orig_token, operations, n): - return jd.jitdriver.on_compile_bridge(logger, orig_token, - operations, n) - jd.on_compile = on_compile - jd.on_compile_bridge = on_compile_bridge - else: - jd.on_compile = lambda *args: None - jd.on_compile_bridge = lambda *args: None - redargtypes = ''.join([kind[0] for kind in jd.red_args_types]) def get_assembler_token(greenkey): diff --git a/pypy/jit/tool/oparser.py b/pypy/jit/tool/oparser.py --- a/pypy/jit/tool/oparser.py +++ b/pypy/jit/tool/oparser.py @@ -89,11 +89,18 @@ assert typ == 'class' return self.model.ConstObj(ootype.cast_to_object(obj)) - def get_descr(self, poss_descr): + def get_descr(self, poss_descr, allow_invent): if poss_descr.startswith('<'): return None - else: + try: return self._consts[poss_descr] + except KeyError: + if allow_invent: + int(poss_descr) + token = self.model.JitCellToken() + tt = self.model.TargetToken(token) + self._consts[poss_descr] = tt + return tt def box_for_var(self, elem): try: @@ -186,7 +193,8 @@ poss_descr = allargs[-1].strip() if poss_descr.startswith('descr='): - descr = self.get_descr(poss_descr[len('descr='):]) + descr = self.get_descr(poss_descr[len('descr='):], + opname == 'label') allargs = allargs[:-1] for arg in allargs: arg = arg.strip() diff --git a/pypy/jit/tool/oparser_model.py b/pypy/jit/tool/oparser_model.py --- a/pypy/jit/tool/oparser_model.py +++ 
b/pypy/jit/tool/oparser_model.py @@ -6,7 +6,7 @@ from pypy.jit.metainterp.history import TreeLoop, JitCellToken from pypy.jit.metainterp.history import Box, BoxInt, BoxFloat from pypy.jit.metainterp.history import ConstInt, ConstObj, ConstPtr, ConstFloat - from pypy.jit.metainterp.history import BasicFailDescr + from pypy.jit.metainterp.history import BasicFailDescr, TargetToken from pypy.jit.metainterp.typesystem import llhelper from pypy.jit.metainterp.history import get_const_ptr_for_string @@ -42,6 +42,10 @@ class JitCellToken(object): I_am_a_descr = True + class TargetToken(object): + def __init__(self, jct): + pass + class BasicFailDescr(object): I_am_a_descr = True diff --git a/pypy/jit/tool/pypytrace.vim b/pypy/jit/tool/pypytrace.vim --- a/pypy/jit/tool/pypytrace.vim +++ b/pypy/jit/tool/pypytrace.vim @@ -19,6 +19,7 @@ syn match pypyLoopArgs '^[[].*' syn match pypyLoopStart '^#.*' syn match pypyDebugMergePoint '^debug_merge_point(.\+)' +syn match pypyLogBoundary '[[][0-9a-f]\+[]] \([{].\+\|.\+[}]\)$' hi def link pypyLoopStart Structure "hi def link pypyLoopArgs PreProc @@ -29,3 +30,4 @@ hi def link pypyNumber Number hi def link pypyDescr PreProc hi def link pypyDescrField Label +hi def link pypyLogBoundary Statement diff --git a/pypy/jit/tool/test/test_oparser.py b/pypy/jit/tool/test/test_oparser.py --- a/pypy/jit/tool/test/test_oparser.py +++ b/pypy/jit/tool/test/test_oparser.py @@ -4,7 +4,8 @@ from pypy.jit.tool.oparser import parse, OpParser from pypy.jit.metainterp.resoperation import rop -from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken +from pypy.jit.metainterp.history import AbstractDescr, BoxInt, JitCellToken,\ + TargetToken class BaseTestOparser(object): @@ -243,6 +244,16 @@ b = loop.getboxes() assert isinstance(b.sum0, BoxInt) + def test_label(self): + x = """ + [i0] + label(i0, descr=1) + jump(i0, descr=1) + """ + loop = self.parse(x) + assert loop.operations[0].getdescr() is loop.operations[1].getdescr() + assert 
isinstance(loop.operations[0].getdescr(), TargetToken) + class ForbiddenModule(object): def __init__(self, name, old_mod): diff --git a/pypy/module/_codecs/interp_codecs.py b/pypy/module/_codecs/interp_codecs.py --- a/pypy/module/_codecs/interp_codecs.py +++ b/pypy/module/_codecs/interp_codecs.py @@ -108,6 +108,10 @@ w_result = state.codec_search_cache.get(normalized_encoding, None) if w_result is not None: return w_result + return _lookup_codec_loop(space, encoding, normalized_encoding) + +def _lookup_codec_loop(space, encoding, normalized_encoding): + state = space.fromcache(CodecState) if state.codec_need_encodings: w_import = space.getattr(space.builtin, space.wrap("__import__")) # registers new codecs diff --git a/pypy/module/_codecs/test/test_codecs.py b/pypy/module/_codecs/test/test_codecs.py --- a/pypy/module/_codecs/test/test_codecs.py +++ b/pypy/module/_codecs/test/test_codecs.py @@ -588,10 +588,18 @@ raises(UnicodeDecodeError, '+3ADYAA-'.decode, 'utf-7') def test_utf_16_encode_decode(self): - import codecs + import codecs, sys x = u'123abc' - assert codecs.getencoder('utf-16')(x) == ('\xff\xfe1\x002\x003\x00a\x00b\x00c\x00', 6) - assert codecs.getdecoder('utf-16')('\xff\xfe1\x002\x003\x00a\x00b\x00c\x00') == (x, 14) + if sys.byteorder == 'big': + assert codecs.getencoder('utf-16')(x) == ( + '\xfe\xff\x001\x002\x003\x00a\x00b\x00c', 6) + assert codecs.getdecoder('utf-16')( + '\xfe\xff\x001\x002\x003\x00a\x00b\x00c') == (x, 14) + else: + assert codecs.getencoder('utf-16')(x) == ( + '\xff\xfe1\x002\x003\x00a\x00b\x00c\x00', 6) + assert codecs.getdecoder('utf-16')( + '\xff\xfe1\x002\x003\x00a\x00b\x00c\x00') == (x, 14) def test_unicode_escape(self): assert u'\\'.encode('unicode-escape') == '\\\\' diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py --- a/pypy/module/_lsprof/interp_lsprof.py +++ b/pypy/module/_lsprof/interp_lsprof.py @@ -19,8 +19,9 @@ # cpu affinity settings srcdir = 
py.path.local(pypydir).join('translator', 'c', 'src')
-eci = ExternalCompilationInfo(separate_module_files=
-    [srcdir.join('profiling.c')])
+eci = ExternalCompilationInfo(
+    separate_module_files=[srcdir.join('profiling.c')],
+    export_symbols=['pypy_setup_profiling', 'pypy_teardown_profiling'])
 
 c_setup_profiling = rffi.llexternal('pypy_setup_profiling',
                                     [], lltype.Void,
diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py
--- a/pypy/module/cpyext/api.py
+++ b/pypy/module/cpyext/api.py
@@ -23,6 +23,7 @@
 from pypy.interpreter.function import StaticMethod
 from pypy.objspace.std.sliceobject import W_SliceObject
 from pypy.module.__builtin__.descriptor import W_Property
+from pypy.module.__builtin__.interp_memoryview import W_MemoryView
 from pypy.rlib.entrypoint import entrypoint
 from pypy.rlib.unroll import unrolling_iterable
 from pypy.rlib.objectmodel import specialize
@@ -387,6 +388,8 @@
     "Float": "space.w_float",
     "Long": "space.w_long",
     "Complex": "space.w_complex",
+    "ByteArray": "space.w_bytearray",
+    "MemoryView": "space.gettypeobject(W_MemoryView.typedef)",
     "BaseObject": "space.w_object",
     'None': 'space.type(space.w_None)',
     'NotImplemented': 'space.type(space.w_NotImplemented)',
diff --git a/pypy/module/cpyext/buffer.py b/pypy/module/cpyext/buffer.py
--- a/pypy/module/cpyext/buffer.py
+++ b/pypy/module/cpyext/buffer.py
@@ -1,6 +1,36 @@
+from pypy.interpreter.error import OperationError
 from pypy.rpython.lltypesystem import rffi, lltype
 from pypy.module.cpyext.api import (
     cpython_api, CANNOT_FAIL, Py_buffer)
+from pypy.module.cpyext.pyobject import PyObject
+
+@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
+def PyObject_CheckBuffer(space, w_obj):
+    """Return 1 if obj supports the buffer interface otherwise 0."""
+    return 0 # the bf_getbuffer field is never filled by cpyext
+
+@cpython_api([PyObject, lltype.Ptr(Py_buffer), rffi.INT_real],
+             rffi.INT_real, error=-1)
+def PyObject_GetBuffer(space, w_obj, view, flags):
+    """Export obj into a Py_buffer, view. These arguments must
+    never be NULL. The flags argument is a bit field indicating what
+    kind of buffer the caller is prepared to deal with and therefore what
+    kind of buffer the exporter is allowed to return. The buffer interface
+    allows for complicated memory sharing possibilities, but some caller may
+    not be able to handle all the complexity but may want to see if the
+    exporter will let them take a simpler view to its memory.
+
+    Some exporters may not be able to share memory in every possible way and
+    may need to raise errors to signal to some consumers that something is
+    just not possible. These errors should be a BufferError unless
+    there is another error that is actually causing the problem. The
+    exporter can use flags information to simplify how much of the
+    Py_buffer structure is filled in with non-default values and/or
+    raise an error if the object can't support a simpler view of its memory.
+
+    0 is returned on success and -1 on error."""
+    raise OperationError(space.w_TypeError, space.wrap(
+        'PyPy does not yet implement the new buffer interface'))
 
 @cpython_api([lltype.Ptr(Py_buffer), lltype.Char], rffi.INT_real,
              error=CANNOT_FAIL)
 def PyBuffer_IsContiguous(space, view, fortran):
diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h
--- a/pypy/module/cpyext/include/object.h
+++ b/pypy/module/cpyext/include/object.h
@@ -123,10 +123,6 @@
 typedef Py_ssize_t (*segcountproc)(PyObject *, Py_ssize_t *);
 typedef Py_ssize_t (*charbufferproc)(PyObject *, Py_ssize_t, char **);
 
-typedef int (*objobjproc)(PyObject *, PyObject *);
-typedef int (*visitproc)(PyObject *, void *);
-typedef int (*traverseproc)(PyObject *, visitproc, void *);
-
 /* Py3k buffer interface */
 typedef struct bufferinfo {
     void *buf;
@@ -153,6 +149,41 @@
 typedef int (*getbufferproc)(PyObject *, Py_buffer *, int);
 typedef void (*releasebufferproc)(PyObject *, Py_buffer *);
 
+/* Flags for getting buffers */
+#define PyBUF_SIMPLE 0
+#define PyBUF_WRITABLE 0x0001
+/* we used to include an E, backwards compatible alias */
+#define PyBUF_WRITEABLE PyBUF_WRITABLE
+#define PyBUF_FORMAT 0x0004
+#define PyBUF_ND 0x0008
+#define PyBUF_STRIDES (0x0010 | PyBUF_ND)
+#define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES)
+#define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES)
+#define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES)
+#define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES)
+
+#define PyBUF_CONTIG (PyBUF_ND | PyBUF_WRITABLE)
+#define PyBUF_CONTIG_RO (PyBUF_ND)
+
+#define PyBUF_STRIDED (PyBUF_STRIDES | PyBUF_WRITABLE)
+#define PyBUF_STRIDED_RO (PyBUF_STRIDES)
+
+#define PyBUF_RECORDS (PyBUF_STRIDES | PyBUF_WRITABLE | PyBUF_FORMAT)
+#define PyBUF_RECORDS_RO (PyBUF_STRIDES | PyBUF_FORMAT)
+
+#define PyBUF_FULL (PyBUF_INDIRECT | PyBUF_WRITABLE | PyBUF_FORMAT)
+#define PyBUF_FULL_RO (PyBUF_INDIRECT | PyBUF_FORMAT)
+
+
+#define PyBUF_READ 0x100
+#define PyBUF_WRITE 0x200
+#define PyBUF_SHADOW 0x400
+/* end Py3k buffer interface */
+
+typedef int (*objobjproc)(PyObject *, PyObject *);
+typedef int (*visitproc)(PyObject *, void *);
+typedef int (*traverseproc)(PyObject *, visitproc, void *);
+
 typedef struct {
     /* For numbers without flag bit Py_TPFLAGS_CHECKTYPES set, all
        arguments are guaranteed to be of the object's type (modulo
diff --git a/pypy/module/cpyext/include/patchlevel.h b/pypy/module/cpyext/include/patchlevel.h
--- a/pypy/module/cpyext/include/patchlevel.h
+++ b/pypy/module/cpyext/include/patchlevel.h
@@ -29,7 +29,7 @@
 #define PY_VERSION "2.7.1"
 
 /* PyPy version as a string */
-#define PYPY_VERSION "1.7.1"
+#define PYPY_VERSION "1.8.1"
 
 /* Subversion Revision number of this file (not of the repository).
  * Empty since Mercurial migration.
  */
diff --git a/pypy/module/cpyext/include/pystate.h b/pypy/module/cpyext/include/pystate.h
--- a/pypy/module/cpyext/include/pystate.h
+++ b/pypy/module/cpyext/include/pystate.h
@@ -5,7 +5,7 @@
 struct _is; /* Forward */
 
 typedef struct _is {
-    int _foo;
+    struct _is *next;
 } PyInterpreterState;
 
 typedef struct _ts {
diff --git a/pypy/module/cpyext/pystate.py b/pypy/module/cpyext/pystate.py
--- a/pypy/module/cpyext/pystate.py
+++ b/pypy/module/cpyext/pystate.py
@@ -2,7 +2,10 @@
     cpython_api, generic_cpy_call, CANNOT_FAIL, CConfig, cpython_struct)
 from pypy.rpython.lltypesystem import rffi, lltype
 
-PyInterpreterState = lltype.Ptr(cpython_struct("PyInterpreterState", ()))
+PyInterpreterStateStruct = lltype.ForwardReference()
+PyInterpreterState = lltype.Ptr(PyInterpreterStateStruct)
+cpython_struct(
+    "PyInterpreterState", [('next', PyInterpreterState)], PyInterpreterStateStruct)
 PyThreadState = lltype.Ptr(cpython_struct("PyThreadState", [('interp', PyInterpreterState)]))
 
 @cpython_api([], PyThreadState, error=CANNOT_FAIL)
@@ -54,7 +57,8 @@
 
 class InterpreterState(object):
     def __init__(self, space):
-        self.interpreter_state = lltype.malloc(PyInterpreterState.TO, flavor='raw', immortal=True)
+        self.interpreter_state = lltype.malloc(
+            PyInterpreterState.TO, flavor='raw', zero=True, immortal=True)
 
     def new_thread_state(self):
         capsule = ThreadStateCapsule()
diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py
--- a/pypy/module/cpyext/stubs.py
+++ b/pypy/module/cpyext/stubs.py
@@ -34,141 +34,6 @@
 
 @cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
 def PyObject_CheckBuffer(space, obj):
-    """Return 1 if obj supports the buffer interface otherwise 0."""
-    raise NotImplementedError
-
-@cpython_api([PyObject, Py_buffer, rffi.INT_real], rffi.INT_real, error=-1)
-def PyObject_GetBuffer(space, obj, view, flags):
-    """Export obj into a Py_buffer, view. These arguments must
-    never be NULL.
The flags argument is a bit field indicating what - kind of buffer the caller is prepared to deal with and therefore what - kind of buffer the exporter is allowed to return. The buffer interface - allows for complicated memory sharing possibilities, but some caller may - not be able to handle all the complexity but may want to see if the - exporter will let them take a simpler view to its memory. - - Some exporters may not be able to share memory in every possible way and - may need to raise errors to signal to some consumers that something is - just not possible. These errors should be a BufferError unless - there is another error that is actually causing the problem. The - exporter can use flags information to simplify how much of the - Py_buffer structure is filled in with non-default values and/or - raise an error if the object can't support a simpler view of its memory. - - 0 is returned on success and -1 on error. - - The following table gives possible values to the flags arguments. - - Flag - - Description - - PyBUF_SIMPLE - - This is the default flag state. The returned - buffer may or may not have writable memory. The - format of the data will be assumed to be unsigned - bytes. This is a "stand-alone" flag constant. It - never needs to be '|'d to the others. The exporter - will raise an error if it cannot provide such a - contiguous buffer of bytes. - - PyBUF_WRITABLE - - The returned buffer must be writable. If it is - not writable, then raise an error. - - PyBUF_STRIDES - - This implies PyBUF_ND. The returned - buffer must provide strides information (i.e. the - strides cannot be NULL). This would be used when - the consumer can handle strided, discontiguous - arrays. Handling strides automatically assumes - you can handle shape. The exporter can raise an - error if a strided representation of the data is - not possible (i.e. without the suboffsets). - - PyBUF_ND - - The returned buffer must provide shape - information. 
The memory will be assumed C-style - contiguous (last dimension varies the - fastest). The exporter may raise an error if it - cannot provide this kind of contiguous buffer. If - this is not given then shape will be NULL. - - PyBUF_C_CONTIGUOUS - PyBUF_F_CONTIGUOUS - PyBUF_ANY_CONTIGUOUS - - These flags indicate that the contiguity returned - buffer must be respectively, C-contiguous (last - dimension varies the fastest), Fortran contiguous - (first dimension varies the fastest) or either - one. All of these flags imply - PyBUF_STRIDES and guarantee that the - strides buffer info structure will be filled in - correctly. - - PyBUF_INDIRECT - - This flag indicates the returned buffer must have - suboffsets information (which can be NULL if no - suboffsets are needed). This can be used when - the consumer can handle indirect array - referencing implied by these suboffsets. This - implies PyBUF_STRIDES. - - PyBUF_FORMAT - - The returned buffer must have true format - information if this flag is provided. This would - be used when the consumer is going to be checking - for what 'kind' of data is actually stored. An - exporter should always be able to provide this - information if requested. If format is not - explicitly requested then the format must be - returned as NULL (which means 'B', or - unsigned bytes) - - PyBUF_STRIDED - - This is equivalent to (PyBUF_STRIDES | - PyBUF_WRITABLE). - - PyBUF_STRIDED_RO - - This is equivalent to (PyBUF_STRIDES). - - PyBUF_RECORDS - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_RECORDS_RO - - This is equivalent to (PyBUF_STRIDES | - PyBUF_FORMAT). - - PyBUF_FULL - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT | PyBUF_WRITABLE). - - PyBUF_FULL_RO - - This is equivalent to (PyBUF_INDIRECT | - PyBUF_FORMAT). - - PyBUF_CONTIG - - This is equivalent to (PyBUF_ND | - PyBUF_WRITABLE). 
- - PyBUF_CONTIG_RO - - This is equivalent to (PyBUF_ND).""" raise NotImplementedError @cpython_api([rffi.CCHARP], Py_ssize_t, error=CANNOT_FAIL) diff --git a/pypy/module/cpyext/test/test_pystate.py b/pypy/module/cpyext/test/test_pystate.py --- a/pypy/module/cpyext/test/test_pystate.py +++ b/pypy/module/cpyext/test/test_pystate.py @@ -37,6 +37,7 @@ def test_thread_state_interp(self, space, api): ts = api.PyThreadState_Get() assert ts.c_interp == api.PyInterpreterState_Head() + assert ts.c_interp.c_next == nullptr(PyInterpreterState.TO) def test_basic_threadstate_dance(self, space, api): # Let extension modules call these functions, diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -9,7 +9,7 @@ appleveldefs = {} class Module(MixedModule): - applevel_name = 'numpypy' + applevel_name = '_numpypy' submodules = { 'pypy': PyPyModule @@ -48,6 +48,7 @@ 'int_': 'interp_boxes.W_LongBox', 'inexact': 'interp_boxes.W_InexactBox', 'floating': 'interp_boxes.W_FloatingBox', + 'float_': 'interp_boxes.W_Float64Box', 'float32': 'interp_boxes.W_Float32Box', 'float64': 'interp_boxes.W_Float64Box', } @@ -89,7 +90,6 @@ appleveldefs = { 'average': 'app_numpy.average', - 'mean': 'app_numpy.mean', 'sum': 'app_numpy.sum', 'min': 'app_numpy.min', 'identity': 'app_numpy.identity', @@ -98,5 +98,4 @@ 'e': 'app_numpy.e', 'pi': 'app_numpy.pi', 'arange': 'app_numpy.arange', - 'reshape': 'app_numpy.reshape', } diff --git a/pypy/module/micronumpy/app_numpy.py b/pypy/module/micronumpy/app_numpy.py --- a/pypy/module/micronumpy/app_numpy.py +++ b/pypy/module/micronumpy/app_numpy.py @@ -1,6 +1,6 @@ import math -import numpypy +import _numpypy inf = float("inf") @@ -11,33 +11,54 @@ def average(a): # This implements a weighted average, for now we don't implement the # weighting, just the average part! 
- return mean(a) + if not hasattr(a, "mean"): + a = _numpypy.array(a) + return a.mean() def identity(n, dtype=None): - a = numpypy.zeros((n,n), dtype=dtype) + a = _numpypy.zeros((n, n), dtype=dtype) for i in range(n): a[i][i] = 1 return a -def mean(a): - if not hasattr(a, "mean"): - a = numpypy.array(a) - return a.mean() +def sum(a,axis=None): + '''sum(a, axis=None) + Sum of array elements over a given axis. -def sum(a): + Parameters + ---------- + a : array_like + Elements to sum. + axis : integer, optional + Axis over which the sum is taken. By default `axis` is None, + and all elements are summed. + + Returns + ------- + sum_along_axis : ndarray + An array with the same shape as `a`, with the specified + axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar + is returned. If an output array is specified, a reference to + `out` is returned. + + See Also + -------- + ndarray.sum : Equivalent method. + ''' + # TODO: add to doc (once it's implemented): cumsum : Cumulative sum of array elements. 
if not hasattr(a, "sum"): - a = numpypy.array(a) - return a.sum() + a = _numpypy.array(a) + return a.sum(axis) -def min(a): +def min(a, axis=None): if not hasattr(a, "min"): - a = numpypy.array(a) - return a.min() + a = _numpypy.array(a) + return a.min(axis) -def max(a): +def max(a, axis=None): if not hasattr(a, "max"): - a = numpypy.array(a) - return a.max() + a = _numpypy.array(a) + return a.max(axis) def arange(start, stop=None, step=1, dtype=None): '''arange([start], stop[, step], dtype=None) @@ -47,48 +68,11 @@ stop = start start = 0 if dtype is None: - test = numpypy.array([start, stop, step, 0]) + test = _numpypy.array([start, stop, step, 0]) dtype = test.dtype - arr = numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) + arr = _numpypy.zeros(int(math.ceil((stop - start) / step)), dtype=dtype) i = start for j in range(arr.size): arr[j] = i i += step return arr - - -def reshape(a, shape): - '''reshape(a, newshape) - Gives a new shape to an array without changing its data. - - Parameters - ---------- - a : array_like - Array to be reshaped. - newshape : int or tuple of ints - The new shape should be compatible with the original shape. If - an integer, then the result will be a 1-D array of that length. - One shape dimension can be -1. In this case, the value is inferred - from the length of the array and remaining dimensions. - - Returns - ------- - reshaped_array : ndarray - This will be a new view object if possible; otherwise, it will - be a copy. - - - See Also - -------- - ndarray.reshape : Equivalent method. - - Notes - ----- - - It is not always possible to change the shape of an array without - copying the data. 
If you want an error to be raise if the data is copied, - you should assign the new shape to the shape attribute of the array -''' - if not hasattr(a, 'reshape'): - a = numpypy.array(a) - return a.reshape(shape) diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -372,13 +372,17 @@ def execute(self, interp): if self.name in SINGLE_ARG_FUNCTIONS: - if len(self.args) != 1: + if len(self.args) != 1 and self.name != 'sum': raise ArgumentMismatch arr = self.args[0].execute(interp) if not isinstance(arr, BaseArray): raise ArgumentNotAnArray if self.name == "sum": - w_res = arr.descr_sum(interp.space) + if len(self.args)>1: + w_res = arr.descr_sum(interp.space, + self.args[1].execute(interp)) + else: + w_res = arr.descr_sum(interp.space) elif self.name == "prod": w_res = arr.descr_prod(interp.space) elif self.name == "max": @@ -416,7 +420,7 @@ ('\]', 'array_right'), ('(->)|[\+\-\*\/]', 'operator'), ('=', 'assign'), - (',', 'coma'), + (',', 'comma'), ('\|', 'pipe'), ('\(', 'paren_left'), ('\)', 'paren_right'), @@ -504,7 +508,7 @@ return SliceConstant(start, stop, step) - def parse_expression(self, tokens): + def parse_expression(self, tokens, accept_comma=False): stack = [] while tokens.remaining(): token = tokens.pop() @@ -524,9 +528,13 @@ stack.append(RangeConstant(tokens.pop().v)) end = tokens.pop() assert end.name == 'pipe' + elif accept_comma and token.name == 'comma': + continue else: tokens.push() break + if accept_comma: + return stack stack.reverse() lhs = stack.pop() while stack: @@ -540,7 +548,7 @@ args = [] tokens.pop() # lparen while tokens.get(0).name != 'paren_right': - args.append(self.parse_expression(tokens)) + args += self.parse_expression(tokens, accept_comma=True) return FunctionCall(name, args) def parse_array_const(self, tokens): @@ -556,7 +564,7 @@ token = tokens.pop() if token.name == 'array_right': return elems - assert token.name == 
'coma' + assert token.name == 'comma' def parse_statement(self, tokens): if (tokens.get(0).name == 'identifier' and diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -78,6 +78,7 @@ descr_sub = _binop_impl("subtract") descr_mul = _binop_impl("multiply") descr_div = _binop_impl("divide") + descr_pow = _binop_impl("power") descr_eq = _binop_impl("equal") descr_ne = _binop_impl("not_equal") descr_lt = _binop_impl("less") @@ -170,6 +171,7 @@ __sub__ = interp2app(W_GenericBox.descr_sub), __mul__ = interp2app(W_GenericBox.descr_mul), __div__ = interp2app(W_GenericBox.descr_div), + __pow__ = interp2app(W_GenericBox.descr_pow), __radd__ = interp2app(W_GenericBox.descr_radd), __rsub__ = interp2app(W_GenericBox.descr_rsub), @@ -245,6 +247,7 @@ long_name = "int64" W_LongBox.typedef = TypeDef(long_name, (W_SignedIntegerBox.typedef, int_typedef,), __module__ = "numpypy", + __new__ = interp2app(W_LongBox.descr__new__.im_func), ) W_ULongBox.typedef = TypeDef("u" + long_name, W_UnsignedIntegerBox.typedef, diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -1,19 +1,20 @@ from pypy.rlib import jit from pypy.rlib.objectmodel import instantiate -from pypy.module.micronumpy.strides import calculate_broadcast_strides +from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ + calculate_slice_strides -# Iterators for arrays -# -------------------- -# all those iterators with the exception of BroadcastIterator iterate over the -# entire array in C order (the last index changes the fastest). This will -# yield all elements. Views iterate over indices and look towards strides and -# backstrides to find the correct position. Notably the offset between -# x[..., i + 1] and x[..., i] will be strides[-1]. 
Offset between -# x[..., k + 1, 0] and x[..., k, i_max] will be backstrides[-2] etc. +class BaseTransform(object): + pass -# BroadcastIterator works like that, but for indexes that don't change source -# in the original array, strides[i] == backstrides[i] == 0 +class ViewTransform(BaseTransform): + def __init__(self, chunks): + # 4-tuple specifying slicing + self.chunks = chunks + +class BroadcastTransform(BaseTransform): + def __init__(self, res_shape): + self.res_shape = res_shape class BaseIterator(object): def next(self, shapelen): @@ -22,6 +23,15 @@ def done(self): raise NotImplementedError + def apply_transformations(self, arr, transformations): + v = self + for transform in transformations: + v = v.transform(arr, transform) + return v + + def transform(self, arr, t): + raise NotImplementedError + class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 @@ -36,6 +46,10 @@ def done(self): return self.offset >= self.size + def transform(self, arr, t): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).transform(arr, t) + class OneDimIterator(BaseIterator): def __init__(self, start, step, stop): self.offset = start @@ -52,26 +66,29 @@ def done(self): return self.offset == self.size -def view_iter_from_arr(arr): - return ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape) - class ViewIterator(BaseIterator): - def __init__(self, start, strides, backstrides, shape, res_shape=None): + def __init__(self, start, strides, backstrides, shape): self.offset = start self._done = False - if res_shape is not None and res_shape != shape: - r = calculate_broadcast_strides(strides, backstrides, - shape, res_shape) - self.strides, self.backstrides = r - self.res_shape = res_shape - else: - self.strides = strides - self.backstrides = backstrides - self.res_shape = shape + self.strides = strides + self.backstrides = backstrides + self.res_shape = shape self.indices = [0] * len(self.res_shape) + def transform(self, arr, t): 
+ if isinstance(t, BroadcastTransform): + r = calculate_broadcast_strides(self.strides, self.backstrides, + self.res_shape, t.res_shape) + return ViewIterator(self.offset, r[0], r[1], t.res_shape) + elif isinstance(t, ViewTransform): + r = calculate_slice_strides(self.res_shape, self.offset, + self.strides, + self.backstrides, t.chunks) + return ViewIterator(r[1], r[2], r[3], r[0]) + @jit.unroll_safe def next(self, shapelen): + shapelen = jit.promote(len(self.res_shape)) offset = self.offset indices = [0] * shapelen for i in range(shapelen): @@ -96,6 +113,13 @@ res._done = done return res + def apply_transformations(self, arr, transformations): + v = BaseIterator.apply_transformations(self, arr, transformations) + if len(arr.shape) == 1: + return OneDimIterator(self.offset, self.strides[0], + self.res_shape[0]) + return v + def done(self): return self._done @@ -103,11 +127,59 @@ def next(self, shapelen): return self + def transform(self, arr, t): + pass + +class AxisIterator(BaseIterator): + def __init__(self, start, dim, shape, strides, backstrides): + self.res_shape = shape[:] + self.strides = strides[:dim] + [0] + strides[dim:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] + self.first_line = True + self.indices = [0] * len(shape) + self._done = False + self.offset = start + self.dim = dim + + @jit.unroll_safe + def next(self, shapelen): + offset = self.offset + first_line = self.first_line + indices = [0] * shapelen + for i in range(shapelen): + indices[i] = self.indices[i] + done = False + for i in range(shapelen - 1, -1, -1): + if indices[i] < self.res_shape[i] - 1: + if i == self.dim: + first_line = False + indices[i] += 1 + offset += self.strides[i] + break + else: + if i == self.dim: + first_line = True + indices[i] = 0 + offset -= self.backstrides[i] + else: + done = True + res = instantiate(AxisIterator) + res.offset = offset + res.indices = indices + res.strides = self.strides + res.backstrides = self.backstrides + res.res_shape = 
self.res_shape + res._done = done + res.first_line = first_line + res.dim = self.dim + return res + + def done(self): + return self._done + # ------ other iterators that are not part of the computation frame ---------- - -class AxisIterator(object): - """ This object will return offsets of each start of the last stride - """ + +class SkipLastAxisIterator(object): def __init__(self, arr): self.arr = arr self.indices = [0] * (len(arr.shape) - 1) @@ -125,4 +197,3 @@ self.offset -= self.arr.backstrides[i] else: self.done = True - diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -8,34 +8,39 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import ArrayIterator,\ - view_iter_from_arr, OneDimIterator, AxisIterator +from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ + SkipLastAxisIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['result_size', 'frame', 'ri', 'self', 'result'], get_printable_location=signature.new_printable_location('numpy'), + name='numpy', ) all_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('all'), + name='numpy_all', ) any_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], reds=['frame', 'self', 'dtype'], get_printable_location=signature.new_printable_location('any'), + name='numpy_any', ) slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self', 'frame', 'source', 'res_iter'], + reds=['self', 'frame', 'arr'], get_printable_location=signature.new_printable_location('slice'), + name='numpy_slice', ) + def 
_find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] batch = space.listview(w_iterable) @@ -152,9 +157,6 @@ # (meaning that the realignment of elements crosses from one step into another) # return None so that the caller can raise an exception. def calc_new_strides(new_shape, old_shape, old_strides): - # Return the proper strides for new_shape, or None if the mapping crosses - # stepping boundaries - # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and # len(new_shape) > 0 steps = [] @@ -162,6 +164,7 @@ oldI = 0 new_strides = [] if old_strides[0] < old_strides[-1]: + #Start at old_shape[0], old_stides[0] for i in range(len(old_shape)): steps.append(old_strides[i] / last_step) last_step *= old_shape[i] @@ -179,10 +182,11 @@ if n_new_elems_used == n_old_elems_to_use: oldI += 1 if oldI >= len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] else: + #Start at old_shape[-1], old_strides[-1] for i in range(len(old_shape) - 1, -1, -1): steps.insert(0, old_strides[i] / last_step) last_step *= old_shape[i] @@ -202,7 +206,7 @@ if n_new_elems_used == n_old_elems_to_use: oldI -= 1 if oldI < -len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] return new_strides @@ -282,13 +286,17 @@ descr_rpow = _binop_right_impl("power") descr_rmod = _binop_right_impl("mod") - def _reduce_ufunc_impl(ufunc_name): - def impl(self, space): - return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, self, multidim=True) + def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): + def impl(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) + return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, + self, True, promote_to_largest, w_axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") - descr_prod = _reduce_ufunc_impl("multiply") + descr_sum_promote = 
_reduce_ufunc_impl("add", True) + descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") @@ -297,6 +305,7 @@ greens=['shapelen', 'sig'], reds=['result', 'idx', 'frame', 'self', 'cur_best', 'dtype'], get_printable_location=signature.new_printable_location(op_name), + name='numpy_' + op_name, ) def loop(self): sig = self.find_sig() @@ -372,7 +381,7 @@ else: w_res = self.descr_mul(space, w_other) assert isinstance(w_res, BaseArray) - return w_res.descr_sum(space) + return w_res.descr_sum(space, space.wrap(-1)) def get_concrete(self): raise NotImplementedError @@ -380,6 +389,9 @@ def descr_get_dtype(self, space): return space.wrap(self.find_dtype()) + def descr_get_ndim(self, space): + return space.wrap(len(self.shape)) + @jit.unroll_safe def descr_get_shape(self, space): return space.newtuple([space.wrap(i) for i in self.shape]) @@ -409,7 +421,7 @@ def descr_repr(self, space): res = StringBuilder() res.append("array(") - concrete = self.get_concrete() + concrete = self.get_concrete_or_scalar() dtype = concrete.find_dtype() if not concrete.size: res.append('[]') @@ -422,8 +434,9 @@ else: concrete.to_str(space, 1, res, indent=' ') if (dtype is not interp_dtype.get_dtype_cache(space).w_float64dtype and - dtype is not interp_dtype.get_dtype_cache(space).w_int64dtype) or \ - not self.size: + not (dtype.kind == interp_dtype.SIGNEDLTR and + dtype.itemtype.get_element_size() == rffi.sizeof(lltype.Signed)) or + not self.size): res.append(", dtype=" + dtype.name) res.append(")") return space.wrap(res.build()) @@ -556,8 +569,26 @@ ) return w_result - def descr_mean(self, space): - return space.div(self.descr_sum(space), space.wrap(self.size)) + def descr_mean(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) + w_denom = space.wrap(self.size) + else: + dim = space.int_w(w_axis) + w_denom = space.wrap(self.shape[dim]) + return 
space.div(self.descr_sum_promote(space, w_axis), w_denom) + + def descr_var(self, space): + # var = mean((values - mean(values)) ** 2) + w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) + assert isinstance(w_res, BaseArray) + w_res = w_res.descr_pow(space, space.wrap(2)) + assert isinstance(w_res, BaseArray) + return w_res.descr_mean(space, space.w_None) + + def descr_std(self, space): + # std(v) = sqrt(var(v)) + return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) def descr_nonzero(self, space): if self.size > 1: @@ -592,11 +623,12 @@ def getitem(self, item): raise NotImplementedError - def find_sig(self, res_shape=None): + def find_sig(self, res_shape=None, arr=None): """ find a correct signature for the array """ res_shape = res_shape or self.shape - return signature.find_sig(self.create_sig(res_shape), self) + arr = arr or self + return signature.find_sig(self.create_sig(), arr) def descr_array_iface(self, space): if not self.shape: @@ -650,7 +682,7 @@ def copy(self, space): return Scalar(self.dtype, self.value) - def create_sig(self, res_shape): + def create_sig(self): return signature.ScalarSignature(self.dtype) def get_concrete_or_scalar(self): @@ -668,7 +700,8 @@ self.name = name def _del_sources(self): - # Function for deleting references to source arrays, to allow garbage-collecting them + # Function for deleting references to source arrays, + # to allow garbage-collecting them raise NotImplementedError def compute(self): @@ -720,11 +753,11 @@ self.size = size VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() return signature.VirtualSliceSignature( - self.child.create_sig(res_shape)) + self.child.create_sig()) def force_if_needed(self): if self.forced_result is None: @@ -734,6 +767,7 @@ def _del_sources(self): self.child 
= None + class Call1(VirtualArray): def __init__(self, ufunc, name, shape, res_dtype, values): VirtualArray.__init__(self, name, shape, res_dtype) @@ -744,16 +778,17 @@ def _del_sources(self): self.values = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) - return signature.Call1(self.ufunc, self.name, - self.values.create_sig(res_shape)) + return self.forced_result.create_sig() + return signature.Call1(self.ufunc, self.name, self.values.create_sig()) class Call2(VirtualArray): """ Intermediate class for performing binary operations. """ + _immutable_fields_ = ['left', 'right'] + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -768,12 +803,55 @@ self.left = None self.right = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() + if self.shape != self.left.shape and self.shape != self.right.shape: + return signature.BroadcastBoth(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.left.shape: + return signature.BroadcastLeft(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.right.shape: + return signature.BroadcastRight(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) return signature.Call2(self.ufunc, self.name, self.calc_dtype, - self.left.create_sig(res_shape), - self.right.create_sig(res_shape)) + self.left.create_sig(), self.right.create_sig()) + +class SliceArray(Call2): + def __init__(self, shape, dtype, left, right): + Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, + right) + + def create_sig(self): + lsig = 
self.left.create_sig() + rsig = self.right.create_sig() + if self.shape != self.right.shape: + return signature.SliceloopBroadcastSignature(self.ufunc, + self.name, + self.calc_dtype, + lsig, rsig) + return signature.SliceloopSignature(self.ufunc, self.name, + self.calc_dtype, + lsig, rsig) + +class AxisReduce(Call2): + """ NOTE: this is only used as a container, you should never + encounter such things in the wild. Remove this comment + when we'll make AxisReduce lazy + """ + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not @@ -828,11 +906,6 @@ self.strides = strides self.backstrides = backstrides - def array_sig(self, res_shape): - if res_shape is not None and self.shape != res_shape: - return signature.ViewSignature(self.dtype) - return signature.ArraySignature(self.dtype) - def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): '''Modifies builder with a representation of the array/slice The items will be seperated by a comma if comma is 1 @@ -840,80 +913,80 @@ each line will begin with indent. 
''' size = self.size + ccomma = ',' * comma + ncomma = ',' * (1 - comma) + dtype = self.find_dtype() if size < 1: builder.append('[]') return + elif size == 1: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True - dtype = self.find_dtype() ndims = len(self.shape) i = 0 - start = True builder.append('[') if ndims > 1: if use_ellipsis: - for i in range(3): - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + for i in range(min(3, self.shape[0])): + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) - # create_slice requires len(chunks) > 1 in order to reduce - # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) - builder.append('\n' + indent + '..., ') - i = self.shape[0] - 3 + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) + if i < self.shape[0] - 1: + builder.append(ccomma + '\n' + indent + '...' 
+ ncomma) + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: - builder.append(',' * comma + '\n') - if ndims == 3: + if i > 0: + builder.append(ccomma + '\n') + if ndims >= 3: builder.append('\n' + indent) else: builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1), (0, self.shape[1], 1, self.shape[1])]).get_concrete() - view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) + view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view.to_str(space, comma, builder, indent=indent + ' ', + use_ellipsis=use_ellipsis) i += 1 elif ndims == 1: - spacer = ',' * comma + ' ' + spacer = ccomma + ' ' item = self.start # An iterator would be a nicer way to walk along the 1d array, but # how do I reset it if printing ellipsis? iterators have no # "set_offset()" i = 0 if use_ellipsis: - for i in range(3): - if start: - start = False - else: + for i in range(min(3, self.shape[0])): + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] - # Add a comma only if comma is False - this prevents adding two - # commas - builder.append(spacer + '...' + ',' * (1 - comma)) - # Ugly, but can this be done with an iterator? - item = self.start + self.backstrides[0] - 2 * self.strides[0] - i = self.shape[0] - 3 + if i < self.shape[0] - 1: + # Add a comma only if comma is False - this prevents adding + # two commas + builder.append(spacer + '...' + ncomma) + # Ugly, but can this be done with an iterator? 
+ item = self.start + self.backstrides[0] - 2 * self.strides[0] + i = self.shape[0] - 3 + else: + i += 1 while i < self.shape[0]: - if start: - start = False - else: + if i > 0: builder.append(spacer) builder.append(dtype.itemtype.str_format(self.getitem(item))) item += self.strides[0] i += 1 - else: - builder.append('[') builder.append(']') @jit.unroll_safe @@ -947,20 +1020,22 @@ self.dtype is w_value.find_dtype()): self._fast_setslice(space, w_value) else: - self._sliceloop(w_value, res_shape) + arr = SliceArray(self.shape, self.dtype, self, w_value) + self._sliceloop(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) itemsize = self.dtype.itemtype.get_element_size() - if len(self.shape) == 1: + shapelen = len(self.shape) + if shapelen == 1: rffi.c_memcpy( rffi.ptradd(self.storage, self.start * itemsize), rffi.ptradd(w_value.storage, w_value.start * itemsize), self.size * itemsize ) else: - dest = AxisIterator(self) - source = AxisIterator(w_value) + dest = SkipLastAxisIterator(self) + source = SkipLastAxisIterator(w_value) while not dest.done: rffi.c_memcpy( rffi.ptradd(self.storage, dest.offset * itemsize), @@ -970,21 +1045,16 @@ source.next() dest.next() - def _sliceloop(self, source, res_shape): - sig = source.find_sig(res_shape) - frame = sig.create_frame(source, res_shape) - res_iter = view_iter_from_arr(self) - shapelen = len(res_shape) - while not res_iter.done(): - slice_driver.jit_merge_point(sig=sig, - frame=frame, - shapelen=shapelen, - self=self, source=source, - res_iter=res_iter) - self.setitem(res_iter.offset, sig.eval(frame, source).convert_to( - self.find_dtype())) + def _sliceloop(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(arr) + shapelen = len(self.shape) + while not frame.done(): + slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, + arr=arr, + shapelen=shapelen) + sig.eval(frame, arr) frame.next(shapelen) - res_iter = res_iter.next(shapelen) def copy(self, space): array = 
W_NDimArray(self.size, self.shape[:], self.dtype, self.order) @@ -993,7 +1063,7 @@ class ViewArray(ConcreteArray): - def create_sig(self, res_shape): + def create_sig(self): return signature.ViewSignature(self.dtype) @@ -1057,8 +1127,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_sig(self, res_shape): - return self.array_sig(res_shape) + def create_sig(self): + return signature.ArraySignature(self.dtype) def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) @@ -1185,6 +1255,7 @@ shape = GetSetProperty(BaseArray.descr_get_shape, BaseArray.descr_set_shape), size = GetSetProperty(BaseArray.descr_get_size), + ndim = GetSetProperty(BaseArray.descr_get_ndim), T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), @@ -1199,6 +1270,8 @@ all = interp2app(BaseArray.descr_all), any = interp2app(BaseArray.descr_any), dot = interp2app(BaseArray.descr_dot), + var = interp2app(BaseArray.descr_var), + std = interp2app(BaseArray.descr_std), copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -3,19 +3,29 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty from pypy.module.micronumpy import interp_boxes, interp_dtype -from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature,\ - find_sig, new_printable_location +from pypy.module.micronumpy.signature import ReduceSignature,\ + find_sig, new_printable_location, AxisReduceSignature, ScalarSignature from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name reduce_driver = jit.JitDriver( - greens = ['shapelen', "sig"], - virtualizables = ["frame"], - reds 
= ["frame", "self", "dtype", "value", "obj"], + greens=['shapelen', "sig"], + virtualizables=["frame"], + reds=["frame", "self", "dtype", "value", "obj"], get_printable_location=new_printable_location('reduce'), + name='numpy_reduce', ) +axisreduce_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['self','arr', 'identity', 'frame'], + name='numpy_axisreduce', + get_printable_location=new_printable_location('axisreduce'), +) + + class W_Ufunc(Wrappable): _attrs_ = ["name", "promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -48,18 +58,72 @@ ) return self.call(space, __args__.arguments_w) - def descr_reduce(self, space, w_obj): - return self.reduce(space, w_obj, multidim=False) + def descr_reduce(self, space, w_obj, w_dim=0): + """reduce(...) + reduce(a, axis=0) - def reduce(self, space, w_obj, multidim): - from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar - + Reduces `a`'s dimension by one, by applying ufunc along one axis. + + Let :math:`a.shape = (N_0, ..., N_i, ..., N_{M-1})`. Then + :math:`ufunc.reduce(a, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]` = + the result of iterating `j` over :math:`range(N_i)`, cumulatively applying + ufunc to each :math:`a[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]`. + For a one-dimensional array, reduce produces results equivalent to: + :: + + r = op.identity # op = ufunc + for i in xrange(len(A)): + r = op(r, A[i]) + return r + + For example, add.reduce() is equivalent to sum(). + + Parameters + ---------- + a : array_like + The array to act on. + axis : int, optional + The axis along which to apply the reduction. 
+ + Examples + -------- + >>> np.multiply.reduce([2,3,5]) + 30 + + A multi-dimensional array example: + + >>> X = np.arange(8).reshape((2,2,2)) + >>> X + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> np.add.reduce(X, 0) + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X) # confirm: default axis value is 0 + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X, 1) + array([[ 2, 4], + [10, 12]]) + >>> np.add.reduce(X, 2) + array([[ 1, 5], + [ 9, 13]]) + """ + return self.reduce(space, w_obj, False, False, w_dim) + + def reduce(self, space, w_obj, multidim, promote_to_largest, w_dim): + from pypy.module.micronumpy.interp_numarray import convert_to_array, \ + Scalar if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) - + dim = space.int_w(w_dim) assert isinstance(self, W_Ufunc2) obj = convert_to_array(space, w_obj) + if dim >= len(obj.shape): + raise OperationError(space.w_ValueError, space.wrap("axis(=%d) out of bounds" % dim)) if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) @@ -67,26 +131,80 @@ size = obj.size dtype = find_unaryop_result_dtype( space, obj.find_dtype(), - promote_to_largest=True + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True ) shapelen = len(obj.shape) + if self.identity is None and size == 0: + raise operationerrfmt(space.w_ValueError, "zero-size array to " + "%s.reduce without identity", self.name) + if shapelen > 1 and dim >= 0: + res = self.do_axis_reduce(obj, dtype, dim) + return space.wrap(res) + scalarsig = ScalarSignature(dtype) sig = find_sig(ReduceSignature(self.func, self.name, dtype, - ScalarSignature(dtype), - obj.create_sig(obj.shape)), obj) + scalarsig, + obj.create_sig()), obj) frame = sig.create_frame(obj) - if shapelen > 1 and not multidim: - raise OperationError(space.w_NotImplementedError, - space.wrap("not implemented yet")) if 
self.identity is None: - if size == 0: - raise operationerrfmt(space.w_ValueError, "zero-size array to " - "%s.reduce without identity", self.name) value = sig.eval(frame, obj).convert_to(dtype) frame.next(shapelen) else: value = self.identity.convert_to(dtype) return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + def do_axis_reduce(self, obj, dtype, dim): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + W_NDimArray + + shape = obj.shape[0:dim] + obj.shape[dim + 1:len(obj.shape)] + size = 1 + for s in shape: + size *= s + result = W_NDimArray(size, shape, dtype) + rightsig = obj.create_sig() + # note - this is just a wrapper so signature can fetch + # both left and right, nothing more, especially + # this is not a true virtual array, because shapes + # don't quite match + arr = AxisReduce(self.func, self.name, obj.shape, dtype, + result, obj, dim) + scalarsig = ScalarSignature(dtype) + sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, + scalarsig, rightsig), arr) + assert isinstance(sig, AxisReduceSignature) + frame = sig.create_frame(arr) + shapelen = len(obj.shape) + if self.identity is not None: + identity = self.identity.convert_to(dtype) + else: + identity = None + self.reduce_axis_loop(frame, sig, shapelen, arr, identity) + return result + + def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): + # note - we can be advanterous here, depending on the exact field + # layout. 
For now let's say we iterate the original way and + # simply follow the original iteration order + while not frame.done(): + axisreduce_driver.jit_merge_point(frame=frame, self=self, + sig=sig, + identity=identity, + shapelen=shapelen, arr=arr) + iter = frame.get_final_iter() + v = sig.eval(frame, arr).convert_to(sig.calc_dtype) + if iter.first_line: + if identity is not None: + value = self.func(sig.calc_dtype, identity, v) + else: + value = v + else: + cur = arr.left.getitem(iter.offset) + value = self.func(sig.calc_dtype, cur, v) + arr.left.setitem(iter.offset, value) + frame.next(shapelen) + def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): while not frame.done(): reduce_driver.jit_merge_point(sig=sig, @@ -94,10 +212,12 @@ value=value, obj=obj, frame=frame, dtype=dtype) assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, sig.eval(frame, obj).convert_to(dtype)) + value = sig.binfunc(dtype, value, + sig.eval(frame, obj).convert_to(dtype)) frame.next(shapelen) return value + class W_Ufunc1(W_Ufunc): argcount = 1 @@ -182,6 +302,7 @@ reduce = interp2app(W_Ufunc.descr_reduce), ) + def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, promote_bools=False): # dt1.num should be <= dt2.num @@ -230,6 +351,7 @@ dtypenum += 3 return interp_dtype.get_dtype_cache(space).builtin_dtypes[dtypenum] + def find_unaryop_result_dtype(space, dt, promote_to_float=False, promote_bools=False, promote_to_largest=False): if promote_bools and (dt.kind == interp_dtype.BOOLLTR): @@ -254,6 +376,7 @@ assert False return dt + def find_dtype_for_scalar(space, w_obj, current_guess=None): bool_dtype = interp_dtype.get_dtype_cache(space).w_booldtype long_dtype = interp_dtype.get_dtype_cache(space).w_longdtype @@ -347,7 +470,8 @@ identity = extra_kwargs.get("identity") if identity is not None: - identity = interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) + identity = \ + interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) 
extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,10 +1,32 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - OneDimIterator, ConstantIterator -from pypy.module.micronumpy.strides import calculate_slice_strides + ConstantIterator, AxisIterator, ViewTransform,\ + BroadcastTransform from pypy.rlib.jit import hint, unroll_safe, promote +""" Signature specifies both the numpy expression that has been constructed +and the assembler to be compiled. This is a very important observation - +Two expressions will be using the same assembler if and only if they are +compiled to the same signature. + +This is also a very convinient tool for specializations. For example +a + a and a + b (where a != b) will compile to different assembler because +we specialize on the same array access. + +When evaluating, signatures will create iterators per signature node, +potentially sharing some of them. Iterators depend also on the actual +expression, they're not only dependant on the array itself. For example +a + b where a is dim 2 and b is dim 1 would create a broadcasted iterator for +the array b. + +Such iterator changes are called Transformations. An actual iterator would +be a combination of array and various transformation, like view, broadcast, +dimension swapping etc. 
+
+See interp_iter for transformations
+"""
+
 def new_printable_location(driver_name):
     def get_printable_location(shapelen, sig):
         return 'numpy ' + sig.debug_repr() + ' [%d dims,%s]' % (shapelen, driver_name)
@@ -33,7 +55,8 @@
     return sig
 
 class NumpyEvalFrame(object):
-    _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]']
+    _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]',
+                        'value', 'identity']
 
     @unroll_safe
     def __init__(self, iterators, arrays):
@@ -51,7 +74,7 @@
     def done(self):
         final_iter = promote(self.final_iter)
         if final_iter < 0:
-            return False
+            assert False
         return self.iterators[final_iter].done()
 
     @unroll_safe
@@ -59,6 +82,12 @@
         for i in range(len(self.iterators)):
             self.iterators[i] = self.iterators[i].next(shapelen)
 
+    def get_final_iter(self):
+        final_iter = promote(self.final_iter)
+        if final_iter < 0:
+            assert False
+        return self.iterators[final_iter]
+
 def _add_ptr_to_cache(ptr, cache):
     i = 0
     for p in cache:
@@ -70,6 +99,9 @@
         cache.append(ptr)
     return res
 
+def new_cache():
+    return r_dict(sigeq_no_numbering, sighash)
+
 class Signature(object):
     _attrs_ = ['iter_no', 'array_no']
     _immutable_fields_ = ['iter_no', 'array_no']
@@ -78,7 +110,7 @@
     iter_no = 0
 
     def invent_numbering(self):
-        cache = r_dict(sigeq_no_numbering, sighash)
+        cache = new_cache()
        allnumbers = []
        self._invent_numbering(cache, allnumbers)
@@ -95,13 +127,13 @@
            allnumbers.append(no)
        self.iter_no = no
 
-    def create_frame(self, arr, res_shape=None):
-        res_shape = res_shape or arr.shape
+    def create_frame(self, arr):
         iterlist = []
         arraylist = []
-        self._create_iter(iterlist, arraylist, arr, res_shape, [])
+        self._create_iter(iterlist, arraylist, arr, [])
         return NumpyEvalFrame(iterlist, arraylist)
 
+
 class ConcreteSignature(Signature):
     _immutable_fields_ = ['dtype']
@@ -120,16 +152,6 @@
     def hash(self):
         return compute_identity_hash(self.dtype)
 
-    def allocate_view_iter(self, arr, res_shape, chunklist):
-        r = arr.shape, arr.start, arr.strides, arr.backstrides
-        if chunklist:
-            for chunkelem in chunklist:
-                r = calculate_slice_strides(r[0], r[1], r[2], r[3], chunkelem)
-        shape, start, strides, backstrides = r
-        if len(res_shape) == 1:
-            return OneDimIterator(start, strides[0], res_shape[0])
-        return ViewIterator(start, strides, backstrides, shape, res_shape)
-
 class ArraySignature(ConcreteSignature):
     def debug_repr(self):
         return 'Array'
@@ -137,23 +159,25 @@
     def _invent_array_numbering(self, arr, cache):
         from pypy.module.micronumpy.interp_numarray import ConcreteArray
         concr = arr.get_concrete()
+        # this get_concrete never forces assembler.  If we're here and the
+        # array is not of a concrete class, it means that we have a
+        # _forced_result; otherwise the signature would not match.
         assert isinstance(concr, ConcreteArray)
+        assert concr.dtype is self.dtype
         self.array_no = _add_ptr_to_cache(concr.storage, cache)
 
-    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist):
+    def _create_iter(self, iterlist, arraylist, arr, transforms):
         from pypy.module.micronumpy.interp_numarray import ConcreteArray
         concr = arr.get_concrete()
         assert isinstance(concr, ConcreteArray)
         storage = concr.storage
         if self.iter_no >= len(iterlist):
-            iterlist.append(self.allocate_iter(concr, res_shape, chunklist))
+            iterlist.append(self.allocate_iter(concr, transforms))
         if self.array_no >= len(arraylist):
             arraylist.append(storage)
 
-    def allocate_iter(self, arr, res_shape, chunklist):
-        if chunklist:
-            return self.allocate_view_iter(arr, res_shape, chunklist)
-        return ArrayIterator(arr.size)
+    def allocate_iter(self, arr, transforms):
+        return ArrayIterator(arr.size).apply_transformations(arr, transforms)
 
     def eval(self, frame, arr):
         iter = frame.iterators[self.iter_no]
@@ -166,7 +190,7 @@
     def _invent_array_numbering(self, arr, cache):
         pass
 
-    def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist):
+    def _create_iter(self, iterlist, arraylist, arr, transforms):
         if self.iter_no >= len(iterlist):
             iter = ConstantIterator()
iterlist.append(iter) @@ -186,8 +210,9 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, res_shape, chunklist): - return self.allocate_view_iter(arr, res_shape, chunklist) + def allocate_iter(self, arr, transforms): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).apply_transformations(arr, transforms) class VirtualSliceSignature(Signature): def __init__(self, child): @@ -198,6 +223,9 @@ assert isinstance(arr, VirtualSlice) self.child._invent_array_numbering(arr.child, cache) + def _invent_numbering(self, cache, allnumbers): + self.child._invent_numbering(new_cache(), allnumbers) + def hash(self): return intmask(self.child.hash() ^ 1234) @@ -207,12 +235,11 @@ assert isinstance(other, VirtualSliceSignature) return self.child.eq(other.child, compare_array_no) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import VirtualSlice assert isinstance(arr, VirtualSlice) - chunklist.append(arr.chunks) - self.child._create_iter(iterlist, arraylist, arr.child, res_shape, - chunklist) + transforms = transforms + [ViewTransform(arr.chunks)] + self.child._create_iter(iterlist, arraylist, arr.child, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import VirtualSlice @@ -248,11 +275,10 @@ assert isinstance(arr, Call1) self.child._invent_array_numbering(arr.values, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call1 assert isinstance(arr, Call1) - self.child._create_iter(iterlist, arraylist, arr.values, res_shape, - chunklist) + self.child._create_iter(iterlist, arraylist, arr.values, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call1 @@ -293,29 +319,68 @@ 
self.left._invent_numbering(cache, allnumbers) self.right._invent_numbering(cache, allnumbers) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) - self.left._create_iter(iterlist, arraylist, arr.left, res_shape, - chunklist) - self.right._create_iter(iterlist, arraylist, arr.right, res_shape, - chunklist) + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) lhs = self.left.eval(frame, arr.left).convert_to(self.calc_dtype) rhs = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + return self.binfunc(self.calc_dtype, lhs, rhs) def debug_repr(self): return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class BroadcastLeft(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + +class BroadcastRight(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(cache, allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + 
self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class BroadcastBoth(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): - self.right._create_iter(iterlist, arraylist, arr, res_shape, chunklist) + def _create_iter(self, iterlist, arraylist, arr, transforms): + self.right._create_iter(iterlist, arraylist, arr, transforms) def _invent_numbering(self, cache, allnumbers): self.right._invent_numbering(cache, allnumbers) @@ -325,3 +390,63 @@ def eval(self, frame, arr): return self.right.eval(frame, arr) + + def debug_repr(self): + return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + +class SliceloopSignature(Call2): + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ofs = frame.iterators[0].offset + arr.left.setitem(ofs, self.right.eval(frame, arr.right).convert_to( + self.calc_dtype)) + + def debug_repr(self): + return 'SliceLoop(%s, %s, %s)' % (self.name, self.left.debug_repr(), + self.right.debug_repr()) + +class SliceloopBroadcastSignature(SliceloopSignature): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, 
arr, transforms): + from pypy.module.micronumpy.interp_numarray import SliceArray + + assert isinstance(arr, SliceArray) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class AxisReduceSignature(Call2): + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + ConcreteArray + + assert isinstance(arr, AxisReduce) + left = arr.left + assert isinstance(left, ConcreteArray) + iterlist.append(AxisIterator(left.start, arr.dim, arr.shape, + left.strides, left.backstrides)) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + + def _invent_numbering(self, cache, allnumbers): + allnumbers.append(0) + self.right._invent_numbering(cache, allnumbers) + + def _invent_array_numbering(self, arr, cache): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + self.right._invent_array_numbering(arr.right, cache) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + + def debug_repr(self): + return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -3,7 +3,7 @@ class AppTestDtypes(BaseNumpyAppTest): def test_dtype(self): - from numpypy import dtype + from _numpypy import dtype d = dtype('?') assert d.num == 0 @@ -14,7 +14,7 @@ raises(TypeError, dtype, 1042) def test_dtype_with_types(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(bool).num == 0 assert dtype(int).num == 7 @@ -22,13 +22,13 @@ assert 
dtype(float).num == 12 def test_array_dtype_attr(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), long) assert a.dtype is dtype(long) def test_repr_str(self): - from numpypy import dtype + from _numpypy import dtype assert repr(dtype) == "" d = dtype('?') @@ -36,7 +36,7 @@ assert str(d) == "bool" def test_bool_array(self): - from numpypy import array, False_, True_ + from _numpypy import array, False_, True_ a = array([0, 1, 2, 2.5], dtype='?') assert a[0] is False_ @@ -44,7 +44,7 @@ assert a[i] is True_ def test_copy_array_with_dtype(self): - from numpypy import array, False_, True_, int64 + from _numpypy import array, False_, True_, int64 a = array([0, 1, 2, 3], dtype=long) # int on 64-bit, long in 32-bit @@ -58,35 +58,35 @@ assert b[0] is False_ def test_zeros_bool(self): - from numpypy import zeros, False_ + from _numpypy import zeros, False_ a = zeros(10, dtype=bool) for i in range(10): assert a[i] is False_ def test_ones_bool(self): - from numpypy import ones, True_ + from _numpypy import ones, True_ a = ones(10, dtype=bool) for i in range(10): assert a[i] is True_ def test_zeros_long(self): - from numpypy import zeros, int64 + from _numpypy import zeros, int64 a = zeros(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 0 def test_ones_long(self): - from numpypy import ones, int64 + from _numpypy import ones, int64 a = ones(10, dtype=long) for i in range(10): assert isinstance(a[i], int64) assert a[1] == 1 def test_overflow(self): - from numpypy import array, dtype + from _numpypy import array, dtype assert array([128], 'b')[0] == -128 assert array([256], 'B')[0] == 0 assert array([32768], 'h')[0] == -32768 @@ -98,7 +98,7 @@ raises(OverflowError, "array([2**64], 'Q')") def test_bool_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype types = [ '?', 'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'q', 'Q', 'f', 'd' ] @@ -107,7 +107,7 @@ assert (a + 
array([0], t)).dtype is dtype(t) def test_binop_types(self): - from numpypy import array, dtype + from _numpypy import array, dtype tests = [('b','B','h'), ('b','h','h'), ('b','H','i'), ('b','i','i'), ('b','l','l'), ('b','q','q'), ('b','Q','d'), ('B','h','h'), ('B','H','H'), ('B','i','i'), ('B','I','I'), ('B','l','l'), @@ -129,7 +129,7 @@ assert (array([1], d1) + array([1], d2)).dtype is dtype(dout) def test_add_int8(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int8") b = a + a @@ -138,7 +138,7 @@ assert b[i] == i * 2 def test_add_int16(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="int16") b = a + a @@ -147,7 +147,7 @@ assert b[i] == i * 2 def test_add_uint32(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5), dtype="I") b = a + a @@ -156,19 +156,28 @@ assert b[i] == i * 2 def test_shape(self): - from numpypy import dtype + from _numpypy import dtype assert dtype(long).shape == () def test_cant_subclass(self): - from numpypy import dtype + from _numpypy import dtype # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): - import numpypy as numpy + import _numpypy as numpy raises(TypeError, numpy.generic, 0) raises(TypeError, numpy.number, 0) raises(TypeError, numpy.integer, 0) @@ -181,7 +190,7 @@ raises(TypeError, numpy.inexact, 0) def test_bool(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.bool_.mro() == [numpy.bool_, numpy.generic, object] assert numpy.bool_(3) is numpy.True_ @@ -196,7 +205,7 @@ assert numpy.bool_("False") is numpy.True_ def test_int8(self): - import numpypy as numpy + import _numpypy as numpy 
assert numpy.int8.mro() == [numpy.int8, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -218,7 +227,7 @@ assert numpy.int8('128') == -128 def test_uint8(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint8.mro() == [numpy.uint8, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -241,7 +250,7 @@ assert numpy.uint8('256') == 0 def test_int16(self): - import numpypy as numpy + import _numpypy as numpy x = numpy.int16(3) assert x == 3 @@ -251,7 +260,7 @@ assert numpy.int16('32768') == -32768 def test_uint16(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.uint16(65535) == 65535 assert numpy.uint16(65536) == 0 @@ -260,7 +269,7 @@ def test_int32(self): import sys - import numpypy as numpy + import _numpypy as numpy x = numpy.int32(23) assert x == 23 @@ -275,7 +284,7 @@ def test_uint32(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint32(10) == 10 @@ -286,14 +295,14 @@ assert numpy.uint32('4294967296') == 0 def test_int_(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.int_ is numpy.dtype(int).type assert numpy.int_.mro() == [numpy.int_, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] def test_int64(self): import sys - import numpypy as numpy + import _numpypy as numpy if sys.maxint == 2 ** 63 -1: assert numpy.int64.mro() == [numpy.int64, numpy.signedinteger, numpy.integer, numpy.number, numpy.generic, int, object] @@ -315,7 +324,7 @@ def test_uint64(self): import sys - import numpypy as numpy + import _numpypy as numpy assert numpy.uint64.mro() == [numpy.uint64, numpy.unsignedinteger, numpy.integer, numpy.number, numpy.generic, object] @@ -330,7 +339,7 @@ raises(OverflowError, numpy.uint64(18446744073709551616)) def test_float32(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float32.mro() == [numpy.float32, numpy.floating, numpy.inexact, numpy.number, 
numpy.generic, object] @@ -339,7 +348,7 @@ raises(ValueError, numpy.float32, '23.2df') def test_float64(self): - import numpypy as numpy + import _numpypy as numpy assert numpy.float64.mro() == [numpy.float64, numpy.floating, numpy.inexact, numpy.number, numpy.generic, float, object] @@ -352,7 +361,7 @@ raises(ValueError, numpy.float64, '23.2df') def test_subclass_type(self): - import numpypy as numpy + import _numpypy as numpy class X(numpy.float64): def m(self): diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -2,34 +2,29 @@ class AppTestNumPyModule(BaseNumpyAppTest): - def test_mean(self): - from numpypy import array, mean - assert mean(array(range(5))) == 2.0 - assert mean(range(5)) == 2.0 - def test_average(self): - from numpypy import array, average + from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 - + def test_sum(self): - from numpypy import array, sum + from _numpypy import array, sum assert sum(range(10)) == 45 assert sum(array(range(10))) == 45 def test_min(self): - from numpypy import array, min + from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 - + def test_max(self): - from numpypy import array, max + from _numpypy import array, max assert max(range(10)) == 9 assert max(array(range(10))) == 9 def test_constants(self): import math - from numpypy import inf, e, pi + from _numpypy import inf, e, pi assert type(inf) is float assert inf == float("inf") assert e == math.e diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -157,10 +157,13 @@ assert calc_new_strides([2, 3, 4], [8, 3], [1, 16]) is None assert calc_new_strides([24], [2, 4, 3], [48, 6, 
1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + assert calc_new_strides([105, 1], [3, 5, 7], [35, 7, 1]) == [1, 1] + assert calc_new_strides([1, 105], [3, 5, 7], [35, 7, 1]) == [105, 1] + class AppTestNumArray(BaseNumpyAppTest): def test_ndarray(self): - from numpypy import ndarray, array, dtype + from _numpypy import ndarray, array, dtype assert type(ndarray) is type assert type(array) is not type @@ -175,12 +178,26 @@ assert a.dtype is dtype(int) def test_type(self): - from numpypy import array + from _numpypy import array ar = array(range(5)) assert type(ar) is type(ar + ar) + def test_ndim(self): + from _numpypy import array + x = array(0.2) + assert x.ndim == 0 + x = array([1, 2]) + assert x.ndim == 1 + x = array([[1, 2], [3, 4]]) + assert x.ndim == 2 + x = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert x.ndim == 3 + # numpy actually raises an AttributeError, but _numpypy raises an + # TypeError + raises(TypeError, 'x.ndim = 3') + def test_init(self): - from numpypy import zeros + from _numpypy import zeros a = zeros(15) # Check that storage was actually zero'd. assert a[10] == 0.0 @@ -189,7 +206,7 @@ assert a[13] == 5.3 def test_size(self): - from numpypy import array + from _numpypy import array assert array(3).size == 1 a = array([1, 2, 3]) assert a.size == 3 @@ -200,13 +217,13 @@ Test that empty() works. 
""" - from numpypy import empty + from _numpypy import empty a = empty(2) a[1] = 1.0 assert a[1] == 1.0 def test_ones(self): - from numpypy import ones + from _numpypy import ones a = ones(3) assert len(a) == 3 assert a[0] == 1 @@ -215,7 +232,7 @@ assert a[2] == 4 def test_copy(self): - from numpypy import arange, array + from _numpypy import arange, array a = arange(5) b = a.copy() for i in xrange(5): @@ -231,13 +248,17 @@ c = b.copy() assert (c == b).all() + a = arange(15).reshape(5,3) + b = a.copy() + assert (b == a).all() + def test_iterator_init(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a[3] == 3 def test_getitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[5]") a = a + a @@ -246,7 +267,7 @@ raises(IndexError, "a[-6]") def test_getitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)]") for i in xrange(5): @@ -256,7 +277,7 @@ assert a[i] == b[i] def test_setitem(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[-1] = 5.0 assert a[4] == 5.0 @@ -264,7 +285,7 @@ raises(IndexError, "a[-6] = 3.0") def test_setitem_tuple(self): - from numpypy import array + from _numpypy import array a = array(range(5)) raises(IndexError, "a[(1,2)] = [0,1]") for i in xrange(5): @@ -275,7 +296,7 @@ assert a[i] == i def test_setslice_array(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array(range(2)) a[1:4:2] = b @@ -286,7 +307,7 @@ assert b[1] == 0. def test_setslice_of_slice_array(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = zeros(5) a[::2] = array([9., 10., 11.]) assert a[0] == 9. @@ -305,7 +326,7 @@ assert a[0] == 3. def test_setslice_list(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = [0., 1.] 
a[1:4:2] = b @@ -313,14 +334,14 @@ assert a[3] == 1. def test_setslice_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) a[1:4:2] = 0. assert a[1] == 0. assert a[3] == 0. def test_scalar(self): - from numpypy import array, dtype + from _numpypy import array, dtype a = array(3) raises(IndexError, "a[0]") raises(IndexError, "a[0] = 5") @@ -329,13 +350,13 @@ assert a.dtype is dtype(int) def test_len(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert len(a) == 5 assert len(a + a) == 5 def test_shape(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.shape == (5,) b = a + a @@ -344,7 +365,7 @@ assert c.shape == (3,) def test_set_shape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array([]) a.shape = [] a = array(range(12)) @@ -364,7 +385,7 @@ a.shape = (1,) def test_reshape(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(12)) exc = raises(ValueError, "b = a.reshape((3, 10))") assert str(exc.value) == "total size of new array must be unchanged" @@ -377,7 +398,7 @@ a.shape = (12, 2) def test_slice_reshape(self): - from numpypy import zeros, arange + from _numpypy import zeros, arange a = zeros((4, 2, 3)) b = a[::2, :, :] b.shape = (2, 6) @@ -413,13 +434,13 @@ raises(ValueError, arange(10).reshape, (5, -1, -1)) def test_reshape_varargs(self): - from numpypy import arange + from _numpypy import arange z = arange(96).reshape(12, -1) y = z.reshape(4, 3, 8) assert y.shape == (4, 3, 8) def test_add(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + a for i in range(5): @@ -432,7 +453,7 @@ assert c[i] == bool(a[i] + b[i]) def test_add_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([i for i in reversed(range(5))]) c = a + b @@ -440,20 +461,20 @@ assert c[i] == 4 def 
test_add_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a + 5 for i in range(5): assert b[i] == i + 5 def test_radd(self): - from numpypy import array + from _numpypy import array r = 3 + array(range(3)) for i in range(3): assert r[i] == i + 3 def test_add_list(self): - from numpypy import array, ndarray + from _numpypy import array, ndarray a = array(range(5)) b = list(reversed(range(5))) c = a + b @@ -462,14 +483,14 @@ assert c[i] == 4 def test_subtract(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - a for i in range(5): assert b[i] == 0 def test_subtract_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([1, 1, 1, 1, 1]) c = a - b @@ -477,34 +498,34 @@ assert c[i] == i - 1 def test_subtract_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a - 5 for i in range(5): assert b[i] == i - 5 def test_scalar_subtract(self): - from numpypy import int32 + from _numpypy import int32 assert int32(2) - 1 == 1 assert 1 - int32(2) == -1 def test_mul(self): - import numpypy + import _numpypy - a = numpypy.array(range(5)) + a = _numpypy.array(range(5)) b = a * a for i in range(5): assert b[i] == i * i - a = numpypy.array(range(5), dtype=bool) + a = _numpypy.array(range(5), dtype=bool) b = a * a - assert b.dtype is numpypy.dtype(bool) - assert b[0] is numpypy.False_ + assert b.dtype is _numpypy.dtype(bool) + assert b[0] is _numpypy.False_ for i in range(1, 5): - assert b[i] is numpypy.True_ + assert b[i] is _numpypy.True_ def test_mul_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a * 5 for i in range(5): @@ -512,7 +533,7 @@ def test_div(self): from math import isnan - from numpypy import array, dtype, inf + from _numpypy import array, dtype, inf a = array(range(1, 6)) b = a / a @@ -544,7 +565,7 @@ assert c[2] == -inf def test_div_other(self): 
- from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2], float) c = a / b @@ -552,14 +573,14 @@ assert c[i] == i / 2.0 def test_div_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a / 5.0 for i in range(5): assert b[i] == i / 5.0 def test_pow(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** a for i in range(5): @@ -569,7 +590,7 @@ assert (a ** 2 == a * a).all() def test_pow_other(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = array([2, 2, 2, 2, 2]) c = a ** b @@ -577,14 +598,14 @@ assert c[i] == i ** 2 def test_pow_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5), float) b = a ** 2 for i in range(5): assert b[i] == i ** 2 def test_mod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) b = a % a for i in range(5): @@ -597,7 +618,7 @@ assert b[i] == 1 def test_mod_other(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = array([2, 2, 2, 2, 2]) c = a % b @@ -605,14 +626,14 @@ assert c[i] == i % 2 def test_mod_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a % 2 for i in range(5): assert b[i] == i % 2 def test_pos(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = +a for i in range(5): @@ -623,7 +644,7 @@ assert a[i] == i def test_neg(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = -a for i in range(5): @@ -634,7 +655,7 @@ assert a[i] == -i def test_abs(self): - from numpypy import array + from _numpypy import array a = array([1., -2., 3., -4., -5.]) b = abs(a) for i in range(5): @@ -645,7 +666,7 @@ assert a[i + 5] == abs(i) def test_auto_force(self): - from numpypy import array + from _numpypy import array a = 
array(range(5)) b = a - 1 a[2] = 3 @@ -659,7 +680,7 @@ assert c[1] == 4 def test_getslice(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[1:5] assert len(s) == 4 @@ -673,7 +694,7 @@ assert s[0] == 5 def test_getslice_step(self): - from numpypy import array + from _numpypy import array a = array(range(10)) s = a[1:9:2] assert len(s) == 4 @@ -681,7 +702,7 @@ assert s[i] == a[2 * i + 1] def test_slice_update(self): - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:3] s[1] = 10 @@ -691,7 +712,7 @@ def test_slice_invaidate(self): # check that slice shares invalidation list with - from numpypy import array + from _numpypy import array a = array(range(5)) s = a[0:2] b = array([10, 11]) @@ -705,13 +726,19 @@ assert d[1] == 12 def test_mean(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 + a = array(range(105)).reshape(3, 5, 7) + b = a.mean(axis=0) + b[0, 0]==35. 
+ assert a.mean(axis=0)[0, 0] == 35 + assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() + assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() def test_sum(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.sum() == 10.0 assert a[:4].sum() == 6.0 @@ -719,53 +746,81 @@ a = array([True] * 5, bool) assert a.sum() == 5 + raises(TypeError, 'a.sum(2, 3)') + + def test_reduce_nd(self): + from numpypy import arange, array, multiply + a = arange(15).reshape(5, 3) + assert a.sum() == 105 + assert a.max() == 14 + assert array([]).sum() == 0.0 + raises(ValueError, 'array([]).max()') + assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() + assert (a.sum(1) == [3, 12, 21, 30, 39]).all() + assert (a.max(0) == [12, 13, 14]).all() + assert (a.max(1) == [2, 5, 8, 11, 14]).all() + assert ((a + a).max() == 28) + assert ((a + a).max(0) == [24, 26, 28]).all() + assert ((a + a).sum(1) == [6, 24, 42, 60, 78]).all() + assert (multiply.reduce(a) == array([0, 3640, 12320])).all() + a = array(range(105)).reshape(3, 5, 7) + assert (a[:, 1, :].sum(0) == [126, 129, 132, 135, 138, 141, 144]).all() + assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() + raises (ValueError, 'a[:, 1, :].sum(2)') + assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() + assert (a.reshape(1,-1).sum(0) == range(105)).all() + assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() + def test_identity(self): - from numpypy import identity, array - from numpypy import int32, float64, dtype + from _numpypy import identity, array + from _numpypy import int32, float64, dtype a = identity(0) assert len(a) == 0 assert a.dtype == dtype('float64') - assert a.shape == (0,0) + assert a.shape == (0, 0) b = identity(1, dtype=int32) assert len(b) == 1 assert b[0][0] == 1 - assert b.shape == (1,1) + assert b.shape == (1, 
1) assert b.dtype == dtype('int32') c = identity(2) - assert c.shape == (2,2) - assert (c == [[1,0],[0,1]]).all() + assert c.shape == (2, 2) + assert (c == [[1, 0], [0, 1]]).all() d = identity(3, dtype='int32') - assert d.shape == (3,3) + assert d.shape == (3, 3) assert d.dtype == dtype('int32') - assert (d == [[1,0,0],[0,1,0],[0,0,1]]).all() + assert (d == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]).all() def test_prod(self): - from numpypy import array + from _numpypy import array a = array(range(1, 6)) assert a.prod() == 120.0 assert a[:4].prod() == 24.0 def test_max(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.max() == 5.7 b = array([]) raises(ValueError, "b.max()") def test_max_add(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert (a + a).max() == 11.4 def test_min(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.min() == -3.0 b = array([]) raises(ValueError, "b.min()") def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) r = a.argmax() assert r == 2 @@ -786,14 +841,14 @@ assert a.argmax() == 2 def test_argmin(self): - from numpypy import array + from _numpypy import array a = array([-1.2, 3.4, 5.7, -3.0, 2.7]) assert a.argmin() == 3 b = array([]) raises(ValueError, "b.argmin()") def test_all(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.all() == False a[0] = 3.0 @@ -802,7 +857,7 @@ assert b.all() == True def test_any(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5)) assert a.any() == True b = zeros(5) @@ -811,7 +866,7 @@ assert c.any() == False def test_dot(self): - from numpypy import array, dot + from _numpypy import array, dot a = array(range(5)) assert a.dot(a) == 30.0 @@ -821,14 +876,14 @@ assert (dot(5, [1, 2, 3]) == 
[5, 10, 15]).all() def test_dot_constant(self): - from numpypy import array + from _numpypy import array a = array(range(5)) b = a.dot(2.5) for i in xrange(5): assert b[i] == 2.5 * a[i] def test_dtype_guessing(self): - from numpypy import array, dtype, float64, int8, bool_ + from _numpypy import array, dtype, float64, int8, bool_ assert array([True]).dtype is dtype(bool) assert array([True, False]).dtype is dtype(bool) @@ -845,7 +900,7 @@ def test_comparison(self): import operator - from numpypy import array, dtype + from _numpypy import array, dtype a = array(range(5)) b = array(range(5), float) @@ -864,7 +919,7 @@ assert c[i] == func(b[i], 3) def test_nonzero(self): - from numpypy import array + from _numpypy import array a = array([1, 2]) raises(ValueError, bool, a) raises(ValueError, bool, a == a) @@ -874,7 +929,7 @@ assert not bool(array([0])) def test_slice_assignment(self): - from numpypy import array + from _numpypy import array a = array(range(5)) a[::-1] = a assert (a == [0, 1, 2, 1, 0]).all() @@ -884,8 +939,8 @@ assert (a == [8, 6, 4, 2, 0]).all() def test_debug_repr(self): - from numpypy import zeros, sin - from numpypy.pypy import debug_repr + from _numpypy import zeros, sin + from _numpypy.pypy import debug_repr a = zeros(1) assert debug_repr(a) == 'Array' assert debug_repr(a + a) == 'Call2(add, Array, Array)' @@ -899,8 +954,8 @@ assert debug_repr(b) == 'Array' def test_remove_invalidates(self): - from numpypy import array - from numpypy.pypy import remove_invalidates + from _numpypy import array + from _numpypy.pypy import remove_invalidates a = array([1, 2, 3]) b = a + a remove_invalidates(a) @@ -908,7 +963,7 @@ assert b[0] == 28 def test_virtual_views(self): - from numpypy import arange + from _numpypy import arange a = arange(15) c = (a + a) d = c[::2] @@ -926,7 +981,7 @@ assert b[1] == 2 def test_tolist_scalar(self): - from numpypy import int32, bool_ + from _numpypy import int32, bool_ x = int32(23) assert x.tolist() == 23 assert 
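test_slice_assignment relies on `a[::-1] = a` copying element by element over the same underlying buffer, so later reads observe earlier writes. A pure-Python sketch of that aliasing behaviour (hypothetical helper, for illustration only):

```python
def assign_reversed_inplace(buf):
    # Store buf[i] into buf[n-1-i] one element at a time, without taking a
    # temporary copy -- mimicking a strided in-place store over one buffer.
    n = len(buf)
    for i in range(n):
        buf[n - 1 - i] = buf[i]
    return buf

# The second half of the traversal re-reads already-overwritten slots,
# which is exactly why the test expects [0, 1, 2, 1, 0] and not [4, 3, 2, 1, 0].
assert assign_reversed_inplace([0, 1, 2, 3, 4]) == [0, 1, 2, 1, 0]
```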
type(x.tolist()) is int @@ -934,13 +989,13 @@ assert y.tolist() is True def test_tolist_zerodim(self): - from numpypy import array + from _numpypy import array x = array(3) assert x.tolist() == 3 assert type(x.tolist()) is int def test_tolist_singledim(self): - from numpypy import array + from _numpypy import array a = array(range(5)) assert a.tolist() == [0, 1, 2, 3, 4] assert type(a.tolist()[0]) is int @@ -948,41 +1003,55 @@ assert b.tolist() == [0.2, 0.4, 0.6] def test_tolist_multidim(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4]]) assert a.tolist() == [[1, 2], [3, 4]] def test_tolist_view(self): - from numpypy import array - a = array([[1,2],[3,4]]) + from _numpypy import array + a = array([[1, 2], [3, 4]]) assert (a + a).tolist() == [[2, 4], [6, 8]] def test_tolist_slice(self): - from numpypy import array + from _numpypy import array a = array([[17.1, 27.2], [40.3, 50.3]]) - assert a[:,0].tolist() == [17.1, 40.3] + assert a[:, 0].tolist() == [17.1, 40.3] assert a[0].tolist() == [17.1, 27.2] + def test_var(self): + from _numpypy import array + a = array(range(10)) + assert a.var() == 8.25 + a = array([5.0]) + assert a.var() == 0.0 + + def test_std(self): + from _numpypy import array + a = array(range(10)) + assert a.std() == 2.8722813232690143 + a = array([5.0]) + assert a.std() == 0.0 + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): - import numpypy - a = numpypy.zeros((2, 2)) + import _numpypy + a = _numpypy.zeros((2, 2)) assert len(a) == 2 def test_shape(self): - import numpypy - assert numpypy.zeros(1).shape == (1,) - assert numpypy.zeros((2, 2)).shape == (2, 2) - assert numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) - assert numpypy.array([[1], [2], [3]]).shape == (3, 1) - assert len(numpypy.zeros((3, 1, 2))) == 3 - raises(TypeError, len, numpypy.zeros(())) - raises(ValueError, numpypy.array, [[1, 2], 3]) + import _numpypy + assert _numpypy.zeros(1).shape == (1,) + assert _numpypy.zeros((2, 2)).shape 
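The new test_var/test_std expect the population statistics (dividing by N, not N-1). A quick pure-Python check of the constants used there:

```python
import math

def var(xs):
    # Population variance: mean of squared deviations from the mean.
    xs = list(xs)
    m = sum(xs) / float(len(xs))
    return sum((x - m) ** 2 for x in xs) / float(len(xs))

def std(xs):
    return math.sqrt(var(xs))

assert var(range(10)) == 8.25
assert std(range(10)) == 2.8722813232690143
assert var([5.0]) == 0.0
```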
== (2, 2) + assert _numpypy.zeros((3, 1, 2)).shape == (3, 1, 2) + assert _numpypy.array([[1], [2], [3]]).shape == (3, 1) + assert len(_numpypy.zeros((3, 1, 2))) == 3 + raises(TypeError, len, _numpypy.zeros(())) + raises(ValueError, _numpypy.array, [[1, 2], 3]) def test_getsetitem(self): - import numpypy - a = numpypy.zeros((2, 3, 1)) + import _numpypy + a = _numpypy.zeros((2, 3, 1)) raises(IndexError, a.__getitem__, (2, 0, 0)) raises(IndexError, a.__getitem__, (0, 3, 0)) raises(IndexError, a.__getitem__, (0, 0, 1)) @@ -993,8 +1062,8 @@ assert a[1, -1, 0] == 3 def test_slices(self): - import numpypy - a = numpypy.zeros((4, 3, 2)) + import _numpypy + a = _numpypy.zeros((4, 3, 2)) raises(IndexError, a.__getitem__, (4,)) raises(IndexError, a.__getitem__, (3, 3)) raises(IndexError, a.__getitem__, (slice(None), 3)) @@ -1027,51 +1096,51 @@ assert a[1][2][1] == 15 def test_init_2(self): - import numpypy - raises(ValueError, numpypy.array, [[1], 2]) - raises(ValueError, numpypy.array, [[1, 2], [3]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], 5]]) - raises(ValueError, numpypy.array, [[[1, 2], [3, 4], [5]]]) - a = numpypy.array([[1, 2], [4, 5]]) + import _numpypy + raises(ValueError, _numpypy.array, [[1], 2]) + raises(ValueError, _numpypy.array, [[1, 2], [3]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], 5]]) + raises(ValueError, _numpypy.array, [[[1, 2], [3, 4], [5]]]) + a = _numpypy.array([[1, 2], [4, 5]]) assert a[0, 1] == 2 assert a[0][1] == 2 - a = numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) + a = _numpypy.array(([[[1, 2], [3, 4], [5, 6]]])) assert (a[0, 1] == [3, 4]).all() def test_setitem_slice(self): - import numpypy - a = numpypy.zeros((3, 4)) + import _numpypy + a = _numpypy.zeros((3, 4)) a[1] = [1, 2, 3, 4] assert a[1, 2] == 3 raises(TypeError, a[1].__setitem__, [1, 2, 3]) - a = numpypy.array([[1, 2], [3, 4]]) + a = _numpypy.array([[1, 2], [3, 4]]) assert (a == [[1, 2], [3, 4]]).all() - a[1] = numpypy.array([5, 6]) + a[1] = _numpypy.array([5, 
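The multidimensional getitem tests above boil down to row-major strides: an index tuple maps to a flat offset, and negative indices wrap around within each dimension. An illustrative sketch (not the module's actual index machinery):

```python
def row_major_strides(shape):
    # Strides in elements for C (row-major) order: the last axis is contiguous.
    strides = []
    s = 1
    for dim in reversed(shape):
        strides.insert(0, s)
        s *= dim
    return strides

def flat_offset(index, shape):
    strides = row_major_strides(shape)
    for i, dim in zip(index, shape):
        if not -dim <= i < dim:
            raise IndexError("index out of bounds")
    # i % dim normalizes negative indices, e.g. a[1, -1, 0].
    return sum((i % dim) * s for i, dim, s in zip(index, shape, strides))

assert row_major_strides((2, 3, 1)) == [3, 1, 1]
assert flat_offset((1, 2, 0), (2, 3, 1)) == 5
assert flat_offset((1, -1, 0), (2, 3, 1)) == 5
```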
6]) assert (a == [[1, 2], [5, 6]]).all() - a[:, 1] = numpypy.array([8, 10]) + a[:, 1] = _numpypy.array([8, 10]) assert (a == [[1, 8], [5, 10]]).all() - a[0, :: -1] = numpypy.array([11, 12]) + a[0, :: -1] = _numpypy.array([11, 12]) assert (a == [[12, 11], [5, 10]]).all() def test_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert ((a + a) == \ array([[1 + 1, 2 + 2], [3 + 3, 4 + 4], [5 + 5, 6 + 6]])).all() def test_getitem_add(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) assert (a + a)[1, 1] == 8 def test_ufunc_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([[1, 2], [3, 4]]) b = negative(a + a) assert (b == [[-2, -4], [-6, -8]]).all() def test_getitem_3(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14]]) b = a[::2] @@ -1082,37 +1151,37 @@ assert c[1][1] == 12 def test_multidim_ones(self): - from numpypy import ones + from _numpypy import ones a = ones((1, 2, 3)) assert a[0, 1, 2] == 1.0 def test_multidim_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((3, 3)) b = ones((3, 3)) - a[:,1:3] = b[:,1:3] + a[:, 1:3] = b[:, 1:3] assert (a == [[0, 1, 1], [0, 1, 1], [0, 1, 1]]).all() a = zeros((3, 3)) b = ones((3, 3)) - a[:,::2] = b[:,::2] + a[:, ::2] = b[:, ::2] assert (a == [[1, 0, 1], [1, 0, 1], [1, 0, 1]]).all() def test_broadcast_ufunc(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) b = array([5, 6]) c = ((a + b) == [[1 + 5, 2 + 6], [3 + 5, 4 + 6], [5 + 5, 6 + 6]]) assert c.all() def test_broadcast_setslice(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((10, 10)) b = ones(10) a[:, :] = b assert a[3, 5] == 1 def test_broadcast_shape_agreement(self): - from numpypy 
import zeros, array + from _numpypy import zeros, array a = zeros((3, 1, 3)) b = array(((10, 11, 12), (20, 21, 22), (30, 31, 32))) c = ((a + b) == [b, b, b]) @@ -1126,7 +1195,7 @@ assert c.all() def test_broadcast_scalar(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((4, 5), 'd') a[:, 1] = 3 assert a[2, 1] == 3 @@ -1137,14 +1206,14 @@ assert a[3, 2] == 0 def test_broadcast_call2(self): - from numpypy import zeros, ones + from _numpypy import zeros, ones a = zeros((4, 1, 5)) b = ones((4, 3, 5)) b[:] = (a + a) assert (b == zeros((4, 3, 5))).all() def test_broadcast_virtualview(self): - from numpypy import arange, zeros + from _numpypy import arange, zeros a = arange(8).reshape([2, 2, 2]) b = (a + a)[1, 1] c = zeros((2, 2, 2)) @@ -1152,13 +1221,13 @@ assert (c == [[[12, 14], [12, 14]], [[12, 14], [12, 14]]]).all() def test_argmax(self): - from numpypy import array + from _numpypy import array a = array([[1, 2], [3, 4], [5, 6]]) assert a.argmax() == 5 assert a[:2, ].argmax() == 3 def test_broadcast_wrong_shapes(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((4, 3, 2)) b = zeros((4, 2)) exc = raises(ValueError, lambda: a + b) @@ -1166,7 +1235,7 @@ " together with shapes (4,3,2) (4,2)" def test_reduce(self): - from numpypy import array + from _numpypy import array a = array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) assert a.sum() == (13 * 12) / 2 b = a[1:, 1::2] @@ -1174,7 +1243,7 @@ assert c.sum() == (6 + 8 + 10 + 12) * 2 def test_transpose(self): - from numpypy import array + from _numpypy import array a = array(((range(3), range(3, 6)), (range(6, 9), range(9, 12)), (range(12, 15), range(15, 18)), @@ -1193,7 +1262,7 @@ assert(b[:, 0] == a[0, :]).all() def test_flatiter(self): - from numpypy import array, flatiter + from _numpypy import array, flatiter a = array([[10, 30], [40, 60]]) f_iter = a.flat assert f_iter.next() == 10 @@ -1208,23 +1277,23 @@ assert s == 140 def test_flatiter_array_conv(self): - from 
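test_broadcast_shape_agreement and test_broadcast_wrong_shapes follow the usual right-aligned broadcasting rule: shapes are compared from the trailing dimension, and a size-1 axis stretches to match. A pure-Python sketch of the shape check:

```python
def broadcast_shapes(a, b):
    # Left-pad the shorter shape with 1s, then compare axis by axis.
    result = []
    for x, y in zip((1,) * (len(b) - len(a)) + a,
                    (1,) * (len(a) - len(b)) + b):
        if x != y and x != 1 and y != 1:
            raise ValueError("operands could not be broadcast together "
                             "with shapes %r %r" % (a, b))
        result.append(max(x, y))
    return tuple(result)

assert broadcast_shapes((3, 1, 3), (3, 3)) == (3, 3, 3)
assert broadcast_shapes((10, 10), (10,)) == (10, 10)
try:
    broadcast_shapes((4, 3, 2), (4, 2))   # the failing case from the test
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```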
numpypy import array, dot + from _numpypy import array, dot a = array([1, 2, 3]) assert dot(a.flat, a.flat) == 14 def test_flatiter_varray(self): - from numpypy import ones + from _numpypy import ones a = ones((2, 2)) assert list(((a + a).flat)) == [2, 2, 2, 2] def test_slice_copy(self): - from numpypy import zeros + from _numpypy import zeros a = zeros((10, 10)) b = a[0].copy() assert (b == zeros(10)).all() def test_array_interface(self): - from numpypy import array + from _numpypy import array a = array([1, 2, 3]) i = a.__array_interface__ assert isinstance(i['data'][0], int) @@ -1233,6 +1302,7 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct @@ -1245,7 +1315,7 @@ def test_fromstring(self): import sys - from numpypy import fromstring, array, uint8, float32, int32 + from _numpypy import fromstring, array, uint8, float32, int32 a = fromstring(self.data) for i in range(4): @@ -1275,17 +1345,17 @@ assert g[1] == 2 assert g[2] == 3 h = fromstring("1, , 2, 3", dtype=uint8, sep=",") - assert (h == [1,0,2,3]).all() + assert (h == [1, 0, 2, 3]).all() i = fromstring("1 2 3", dtype=uint8, sep=" ") - assert (i == [1,2,3]).all() + assert (i == [1, 2, 3]).all() j = fromstring("1\t\t\t\t2\t3", dtype=uint8, sep="\t") - assert (j == [1,2,3]).all() + assert (j == [1, 2, 3]).all() k = fromstring("1,x,2,3", dtype=uint8, sep=",") - assert (k == [1,0]).all() + assert (k == [1, 0]).all() l = fromstring("1,x,2,3", dtype='float32', sep=",") - assert (l == [1.0,-1.0]).all() + assert (l == [1.0, -1.0]).all() m = fromstring("1,,2,3", sep=",") - assert (m == [1.0,-1.0,2.0,3.0]).all() + assert (m == [1.0, -1.0, 2.0, 3.0]).all() n = fromstring("3.4 2.0 3.8 2.2", dtype=int32, sep=" ") assert (n == [3]).all() o = fromstring("1.0 2f.0f 3.8 2.2", dtype=float32, sep=" ") @@ -1309,7 +1379,7 @@ assert (u == [1, 0]).all() def test_fromstring_types(self): - from numpypy 
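test_fromstring decodes raw bytes according to the dtype's item size, which the stdlib struct module can illustrate (the `<4i` format here is an assumption for the sketch; the real test packs with the native byte order in setup_class):

```python
import struct

# Four little-endian 32-bit ints round-tripped through raw bytes.
data = struct.pack('<4i', 1, 2, 3, 4)
values = struct.unpack('<4i', data)
assert values == (1, 2, 3, 4)

# A byte count that is not a multiple of the item size cannot be decoded,
# mirroring raises(ValueError, fromstring, "\x01\x02\x03") for int16.
assert len(b"\x01\x02\x03") % struct.calcsize('<h') != 0
```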
import (fromstring, int8, int16, int32, int64, uint8, + from _numpypy import (fromstring, int8, int16, int32, int64, uint8, uint16, uint32, float32, float64) a = fromstring('\xFF', dtype=int8) @@ -1333,9 +1403,8 @@ j = fromstring(self.ulongval, dtype='L') assert j[0] == 12 - def test_fromstring_invalid(self): - from numpypy import fromstring, uint16, uint8, int32 + from _numpypy import fromstring, uint16, uint8, int32 #default dtype is 64-bit float, so 3 bytes should fail raises(ValueError, fromstring, "\x01\x02\x03") #3 bytes is not modulo 2 bytes (int16) @@ -1346,7 +1415,8 @@ class AppTestRepr(BaseNumpyAppTest): def test_repr(self): - from numpypy import array, zeros + from _numpypy import array, zeros + int_size = array(5).dtype.itemsize a = array(range(5), float) assert repr(a) == "array([0.0, 1.0, 2.0, 3.0, 4.0])" a = array([], float) @@ -1354,14 +1424,26 @@ a = zeros(1001) assert repr(a) == "array([0.0, 0.0, 0.0, ..., 0.0, 0.0, 0.0])" a = array(range(5), long) - assert repr(a) == "array([0, 1, 2, 3, 4])" + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int64)" + a = array(range(5), 'int32') + if a.dtype.itemsize == int_size: + assert repr(a) == "array([0, 1, 2, 3, 4])" + else: + assert repr(a) == "array([0, 1, 2, 3, 4], dtype=int32)" a = array([], long) assert repr(a) == "array([], dtype=int64)" a = array([True, False, True, False], "?") assert repr(a) == "array([True, False, True, False], dtype=bool)" + a = zeros([]) + assert repr(a) == "array(0.0)" + a = array(0.2) + assert repr(a) == "array(0.2)" def test_repr_multi(self): - from numpypy import array, zeros + from _numpypy import arange, zeros a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1374,9 +1456,19 @@ [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]])''' + a = arange(1002).reshape((2, 501)) + assert repr(a) == '''array([[0, 1, 2, ..., 498, 499, 
500], + [501, 502, 503, ..., 999, 1000, 1001]])''' + assert repr(a.T) == '''array([[0, 501], + [1, 502], + [2, 503], + ..., + [498, 999], + [499, 1000], + [500, 1001]])''' def test_repr_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert repr(b) == "array([1.0, 3.0])" @@ -1391,7 +1483,7 @@ assert repr(b) == "array([], shape=(0, 5), dtype=int16)" def test_str(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) assert str(a) == "[0.0 1.0 2.0 3.0 4.0]" assert str((2 * a)[:]) == "[0.0 2.0 4.0 6.0 8.0]" @@ -1417,14 +1509,14 @@ a = zeros((400, 400), dtype=int) assert str(a) == "[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n" \ - " [0 0 0 ..., 0 0 0]\n ..., \n [0 0 0 ..., 0 0 0]\n" \ + " [0 0 0 ..., 0 0 0]\n ...,\n [0 0 0 ..., 0 0 0]\n" \ " [0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]]" a = zeros((2, 2, 2)) r = str(a) assert r == '[[[0.0 0.0]\n [0.0 0.0]]\n\n [[0.0 0.0]\n [0.0 0.0]]]' def test_str_slice(self): - from numpypy import array, zeros + from _numpypy import array, zeros a = array(range(5), float) b = a[1::2] assert str(b) == "[1.0 3.0]" @@ -1440,7 +1532,7 @@ class AppTestRanges(BaseNumpyAppTest): def test_arange(self): - from numpypy import arange, array, dtype + from _numpypy import arange, array, dtype a = arange(3) assert (a == [0, 1, 2]).all() assert a.dtype is dtype(int) @@ -1458,14 +1550,3 @@ a = arange(0, 0.8, 0.1) assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) - - -class AppTestRanges(BaseNumpyAppTest): - def test_app_reshape(self): - from numpypy import arange, array, dtype, reshape - a = arange(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = range(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ 
b/pypy/module/micronumpy/test/test_ufuncs.py @@ -4,14 +4,14 @@ class AppTestUfuncs(BaseNumpyAppTest): def test_ufunc_instance(self): - from numpypy import add, ufunc + from _numpypy import add, ufunc assert isinstance(add, ufunc) assert repr(add) == "" assert repr(ufunc) == "" def test_ufunc_attrs(self): - from numpypy import add, multiply, sin + from _numpypy import add, multiply, sin assert add.identity == 0 assert multiply.identity == 1 @@ -22,7 +22,7 @@ assert sin.nin == 1 def test_wrong_arguments(self): - from numpypy import add, sin + from _numpypy import add, sin raises(ValueError, add, 1) raises(TypeError, add, 1, 2, 3) @@ -30,14 +30,14 @@ raises(ValueError, sin) def test_single_item(self): - from numpypy import negative, sign, minimum + from _numpypy import negative, sign, minimum assert negative(5.0) == -5.0 assert sign(-0.0) == 0.0 assert minimum(2.0, 3.0) == 2.0 def test_sequence(self): - from numpypy import array, ndarray, negative, minimum + from _numpypy import array, ndarray, negative, minimum a = array(range(3)) b = [2.0, 1.0, 0.0] c = 1.0 @@ -71,7 +71,7 @@ assert min_c_b[i] == min(b[i], c) def test_negative(self): - from numpypy import array, negative + from _numpypy import array, negative a = array([-5.0, 0.0, 1.0]) b = negative(a) @@ -86,7 +86,7 @@ assert negative(a + a)[3] == -6 def test_abs(self): - from numpypy import array, absolute + from _numpypy import array, absolute a = array([-5.0, -0.0, 1.0]) b = absolute(a) @@ -94,7 +94,7 @@ assert b[i] == abs(a[i]) def test_add(self): - from numpypy import array, add + from _numpypy import array, add a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -103,7 +103,7 @@ assert c[i] == a[i] + b[i] def test_divide(self): - from numpypy import array, divide + from _numpypy import array, divide a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -114,7 +114,7 @@ assert (divide(array([-10]), array([2])) == array([-5])).all() def test_fabs(self): - from numpypy import array, fabs + from 
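The ufunc tests that follow exercise elementwise application with scalars broadcast against sequences. A minimal pure-Python sketch of a 2-in, 1-out ufunc such as `minimum` (illustrative only, lists standing in for arrays):

```python
def minimum(x, y):
    # Scalars pass through; a scalar paired with a sequence is repeated.
    if isinstance(x, list) or isinstance(y, list):
        xs = x if isinstance(x, list) else [x] * len(y)
        ys = y if isinstance(y, list) else [y] * len(x)
        return [min(a, b) for a, b in zip(xs, ys)]
    return min(x, y)

assert minimum(2.0, 3.0) == 2.0
assert minimum([2.0, 1.0, 0.0], 1.0) == [1.0, 1.0, 0.0]
assert minimum([0.0, 5.0], [3.0, 4.0]) == [0.0, 4.0]
```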
_numpypy import array, fabs from math import fabs as math_fabs a = array([-5.0, -0.0, 1.0]) @@ -123,7 +123,7 @@ assert b[i] == math_fabs(a[i]) def test_minimum(self): - from numpypy import array, minimum + from _numpypy import array, minimum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -132,7 +132,7 @@ assert c[i] == min(a[i], b[i]) def test_maximum(self): - from numpypy import array, maximum + from _numpypy import array, maximum a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -145,7 +145,7 @@ assert isinstance(x, (int, long)) def test_multiply(self): - from numpypy import array, multiply + from _numpypy import array, multiply a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -154,7 +154,7 @@ assert c[i] == a[i] * b[i] def test_sign(self): - from numpypy import array, sign, dtype + from _numpypy import array, sign, dtype reference = [-1.0, 0.0, 0.0, 1.0] a = array([-5.0, -0.0, 0.0, 6.0]) @@ -173,7 +173,7 @@ assert a[1] == 0 def test_reciporocal(self): - from numpypy import array, reciprocal + from _numpypy import array, reciprocal reference = [-0.2, float("inf"), float("-inf"), 2.0] a = array([-5.0, 0.0, -0.0, 0.5]) @@ -182,7 +182,7 @@ assert b[i] == reference[i] def test_subtract(self): - from numpypy import array, subtract + from _numpypy import array, subtract a = array([-5.0, -0.0, 1.0]) b = array([ 3.0, -2.0,-3.0]) @@ -191,7 +191,7 @@ assert c[i] == a[i] - b[i] def test_floor(self): - from numpypy import array, floor + from _numpypy import array, floor reference = [-2.0, -1.0, 0.0, 1.0, 1.0] a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) @@ -200,7 +200,7 @@ assert b[i] == reference[i] def test_copysign(self): - from numpypy import array, copysign + from _numpypy import array, copysign reference = [5.0, -0.0, 0.0, -6.0] a = array([-5.0, 0.0, 0.0, 6.0]) @@ -216,7 +216,7 @@ def test_exp(self): import math - from numpypy import array, exp + from _numpypy import array, exp a = array([-5.0, -0.0, 0.0, 12345678.0, float("inf"), 
-float('inf'), -12343424.0]) @@ -230,7 +230,7 @@ def test_sin(self): import math - from numpypy import array, sin + from _numpypy import array, sin a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = sin(a) @@ -243,7 +243,7 @@ def test_cos(self): import math - from numpypy import array, cos + from _numpypy import array, cos a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = cos(a) @@ -252,7 +252,7 @@ def test_tan(self): import math - from numpypy import array, tan + from _numpypy import array, tan a = array([0, 1, 2, 3, math.pi, math.pi*1.5, math.pi*2]) b = tan(a) @@ -262,7 +262,7 @@ def test_arcsin(self): import math - from numpypy import array, arcsin + from _numpypy import array, arcsin a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arcsin(a) @@ -276,7 +276,7 @@ def test_arccos(self): import math - from numpypy import array, arccos + from _numpypy import array, arccos a = array([-1, -0.5, -0.33, 0, 0.33, 0.5, 1]) b = arccos(a) @@ -291,20 +291,20 @@ def test_arctan(self): import math - from numpypy import array, arctan + from _numpypy import array, arctan a = array([-3, -2, -1, 0, 1, 2, 3, float('inf'), float('-inf')]) b = arctan(a) for i in range(len(a)): assert b[i] == math.atan(a[i]) - a = array([float('nan')]) + a = array([float('nan')]) b = arctan(a) assert math.isnan(b[0]) def test_arcsinh(self): import math - from numpypy import arcsinh, inf + from _numpypy import arcsinh, inf for v in [inf, -inf, 1.0, math.e]: assert math.asinh(v) == arcsinh(v) @@ -312,7 +312,7 @@ def test_arctanh(self): import math - from numpypy import arctanh + from _numpypy import arctanh for v in [.99, .5, 0, -.5, -.99]: assert math.atanh(v) == arctanh(v) @@ -323,7 +323,7 @@ def test_sqrt(self): import math - from numpypy import sqrt + from _numpypy import sqrt nan, inf = float("nan"), float("inf") data = [1, 2, 3, inf] @@ -333,22 +333,28 @@ assert math.isnan(sqrt(nan)) def test_reduce_errors(self): - from numpypy import sin, add + from _numpypy import sin, add 
raises(ValueError, sin.reduce, [1, 2, 3]) - raises(TypeError, add.reduce, 1) + raises(ValueError, add.reduce, 1) - def test_reduce(self): - from numpypy import add, maximum + def test_reduce_1d(self): + from _numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 assert maximum.reduce([1]) == 1 assert maximum.reduce([1, 2, 3]) == 3 raises(ValueError, maximum.reduce, []) + def test_reduceND(self): + from numpypy import add, arange + a = arange(12).reshape(3, 4) + assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() + assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_comparisons(self): import operator - from numpypy import equal, not_equal, less, less_equal, greater, greater_equal + from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal for ufunc, func in [ (equal, operator.eq), diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -47,6 +47,8 @@ def f(i): interp = InterpreterState(codes[i]) interp.run(space) + if not len(interp.results): + raise Exception("need results") w_res = interp.results[-1] if isinstance(w_res, BaseArray): concr = w_res.get_concrete_or_scalar() @@ -115,6 +117,28 @@ "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) + def define_axissum(): + return """ + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] + b = sum(a,0) + b -> 1 + """ + + def test_axissum(self): + result = self.run("axissum") + assert result == 30 + # XXX note - the bridge here is fairly crucial and yet it's pretty + # bogus. We need to improve the situation somehow. 
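The new test_reduceND reduces a 3x4 array along either axis; the expected column and row sums can be checked with a pure-Python sketch:

```python
def reduce_axis(rows, axis):
    # Sum a 2-D list of lists along axis 0 (down columns) or axis 1 (across rows).
    if axis == 0:
        return [sum(col) for col in zip(*rows)]
    return [sum(row) for row in rows]

# arange(12).reshape(3, 4) as nested lists.
a = [list(range(r * 4, r * 4 + 4)) for r in range(3)]
assert reduce_axis(a, 0) == [12, 15, 18, 21]
assert reduce_axis(a, 1) == [6, 22, 38]
```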
+ self.check_simple_loop({'getinteriorfield_raw': 2, + 'setinteriorfield_raw': 1, + 'arraylen_gc': 1, + 'guard_true': 1, + 'int_lt': 1, + 'jump': 1, + 'float_add': 1, + 'int_add': 3, + }) + def define_prod(): return """ a = |30| @@ -193,9 +217,9 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. - self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 26, + self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, - 'getfield_gc_pure': 4, + 'getfield_gc_pure': 8, 'guard_class': 8, 'int_add': 8, 'float_mul': 2, 'jump': 2, 'int_ge': 4, 'getinteriorfield_raw': 4, 'float_add': 2, @@ -212,7 +236,8 @@ def test_ufunc(self): result = self.run("ufunc") assert result == -6 - self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, + self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, + "float_neg": 1, "setinteriorfield_raw": 1, "int_add": 2, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -322,10 +347,9 @@ result = self.run("setslice") assert result == 11.0 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, - 'setinteriorfield_raw': 1, 'int_add': 3, - 'int_lt': 1, 'guard_true': 1, 'jump': 1, - 'arraylen_gc': 3}) + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, + 'setinteriorfield_raw': 1, 'int_add': 2, + 'int_eq': 1, 'guard_false': 1, 'jump': 1}) def define_virtual_slice(): return """ @@ -339,11 +363,12 @@ result = self.run("virtual_slice") assert result == 4 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) + class TestNumpyOld(LLJitMixin): def 
setup_class(cls): py.test.skip("old") @@ -377,4 +402,3 @@ result = self.meta_interp(f, [5], listops=True, backendopt=True) assert result == f(5) - diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -7,16 +7,22 @@ interpleveldefs = { 'set_param': 'interp_jit.set_param', 'residual_call': 'interp_jit.residual_call', - 'set_compile_hook': 'interp_jit.set_compile_hook', - 'DebugMergePoint': 'interp_resop.W_DebugMergePoint', + 'set_compile_hook': 'interp_resop.set_compile_hook', + 'set_optimize_hook': 'interp_resop.set_optimize_hook', + 'set_abort_hook': 'interp_resop.set_abort_hook', + 'ResOperation': 'interp_resop.WrappedOp', + 'DebugMergePoint': 'interp_resop.DebugMergePoint', + 'Box': 'interp_resop.WrappedBox', } def setup_after_space_initialization(self): # force the __extend__ hacks to occur early from pypy.module.pypyjit.interp_jit import pypyjitdriver + from pypy.module.pypyjit.policy import pypy_hooks # add the 'defaults' attribute from pypy.rlib.jit import PARAMETERS space = self.space pypyjitdriver.space = space w_obj = space.wrap(PARAMETERS) space.setattr(space.wrap(self), space.wrap('defaults'), w_obj) + pypy_hooks.space = space diff --git a/pypy/module/pypyjit/interp_jit.py b/pypy/module/pypyjit/interp_jit.py --- a/pypy/module/pypyjit/interp_jit.py +++ b/pypy/module/pypyjit/interp_jit.py @@ -13,11 +13,7 @@ from pypy.interpreter.pycode import PyCode, CO_GENERATOR from pypy.interpreter.pyframe import PyFrame from pypy.interpreter.pyopcode import ExitFrame -from pypy.interpreter.gateway import unwrap_spec from opcode import opmap -from pypy.rlib.nonconst import NonConstant -from pypy.jit.metainterp.resoperation import rop -from pypy.module.pypyjit.interp_resop import debug_merge_point_from_boxes PyFrame._virtualizable2_ = ['last_instr', 'pycode', 'valuestackdepth', 'locals_stack_w[*]', @@ -51,72 +47,19 @@ def should_unroll_one_iteration(next_instr, 
is_being_profiled, bytecode): return (bytecode.co_flags & CO_GENERATOR) != 0 -def wrap_oplist(space, logops, operations): - list_w = [] - for op in operations: - if op.getopnum() == rop.DEBUG_MERGE_POINT: - list_w.append(space.wrap(debug_merge_point_from_boxes( - op.getarglist()))) - else: - list_w.append(space.wrap(logops.repr_of_resop(op))) - return list_w - class PyPyJitDriver(JitDriver): reds = ['frame', 'ec'] greens = ['next_instr', 'is_being_profiled', 'pycode'] virtualizables = ['frame'] - def on_compile(self, logger, looptoken, operations, type, next_instr, - is_being_profiled, ll_pycode): - from pypy.rpython.annlowlevel import cast_base_ptr_to_instance - - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - pycode = cast_base_ptr_to_instance(PyCode, ll_pycode) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap(type), - space.newtuple([pycode, - space.wrap(next_instr), - space.wrap(is_being_profiled)]), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - space = self.space - cache = space.fromcache(Cache) - if cache.in_recursion: - return - if space.is_true(cache.w_compile_hook): - logops = logger._make_log_operations() - list_w = wrap_oplist(space, logops, operations) - cache.in_recursion = True - try: - space.call_function(cache.w_compile_hook, - space.wrap('main'), - space.wrap('bridge'), - space.wrap(n), - space.newlist(list_w)) - except OperationError, e: - e.write_unraisable(space, "jit hook ", cache.w_compile_hook) - cache.in_recursion = False - pypyjitdriver = PyPyJitDriver(get_printable_location = get_printable_location, get_jitcell_at = get_jitcell_at, 
set_jitcell_at = set_jitcell_at, confirm_enter_jit = confirm_enter_jit, can_never_inline = can_never_inline, should_unroll_one_iteration = - should_unroll_one_iteration) + should_unroll_one_iteration, + name='pypyjit') class __extend__(PyFrame): @@ -223,34 +166,3 @@ '''For testing. Invokes callable(...), but without letting the JIT follow the call.''' return space.call_args(w_callable, __args__) - -class Cache(object): - in_recursion = False - - def __init__(self, space): - self.w_compile_hook = space.w_None - -def set_compile_hook(space, w_hook): - """ set_compile_hook(hook) - - Set a compiling hook that will be called each time a loop is compiled. - The hook will be called with the following signature: - hook(merge_point_type, loop_type, greenkey or guard_number, operations) - - for now merge point type is always `main` - - loop_type can be either `loop` `entry_bridge` or `bridge` - in case loop is not `bridge`, greenkey will be a set of constants - for jit merge point. in case it's `main` it'll be a tuple - (code, offset, is_being_profiled) - - Note that jit hook is not reentrant. It means that if the code - inside the jit hook is itself jitted, it will get compiled, but the - jit hook won't be called for that. 
- - XXX write down what else - """ - cache = space.fromcache(Cache) - cache.w_compile_hook = w_hook - cache.in_recursion = NonConstant(False) - return space.w_None diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,41 +1,250 @@ -from pypy.interpreter.typedef import TypeDef, interp_attrproperty +from pypy.interpreter.typedef import (TypeDef, GetSetProperty, + interp_attrproperty, interp_attrproperty_w) from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.gateway import unwrap_spec, interp2app +from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode -from pypy.rpython.lltypesystem import lltype -from pypy.rpython.annlowlevel import cast_base_ptr_to_instance +from pypy.interpreter.error import OperationError +from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.annlowlevel import cast_base_ptr_to_instance, hlstr from pypy.rpython.lltypesystem.rclass import OBJECT +from pypy.jit.metainterp.resoperation import rop, AbstractResOp +from pypy.rlib.nonconst import NonConstant +from pypy.rlib import jit_hooks +from pypy.module.pypyjit.interp_jit import pypyjitdriver -class W_DebugMergePoint(Wrappable): - """ A class representing debug_merge_point JIT operation +class Cache(object): + in_recursion = False + + def __init__(self, space): + self.w_compile_hook = space.w_None + self.w_abort_hook = space.w_None + self.w_optimize_hook = space.w_None + +def wrap_greenkey(space, jitdriver, greenkey, greenkey_repr): + if greenkey is None: + return space.w_None + jitdriver_name = jitdriver.name + if jitdriver_name == 'pypyjit': + next_instr = greenkey[0].getint() + is_being_profiled = greenkey[1].getint() + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + greenkey[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, ll_code) + return 
space.newtuple([space.wrap(pycode), space.wrap(next_instr), + space.newbool(bool(is_being_profiled))]) + else: + return space.wrap(greenkey_repr) + +def set_compile_hook(space, w_hook): + """ set_compile_hook(hook) + + Set a compiling hook that will be called each time a loop is compiled. + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations, + assembler_addr, assembler_length) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it. + + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + assembler_addr is an integer describing where the assembler starts and + can be accessed via ctypes; assembler_length is the length of the compiled + asm + + Note that the jit hook is not reentrant. This means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. """ + cache = space.fromcache(Cache) + cache.w_compile_hook = w_hook + cache.in_recursion = NonConstant(False) - def __init__(self, mp_no, offset, pycode): - self.mp_no = mp_no +def set_optimize_hook(space, w_hook): + """ set_optimize_hook(hook) + + Set a compiling hook that will be called each time a loop is optimized, + but before assembler compilation. This allows adding additional + optimizations at the Python level. + + The hook will be called with the following signature: + hook(jitdriver_name, loop_type, greenkey or guard_number, operations) + + jitdriver_name is the name of this particular jitdriver, 'pypyjit' is + the main interpreter loop + + loop_type can be either `loop` `entry_bridge` or `bridge` + in case loop is not `bridge`, greenkey will be a tuple of constants + or a string describing it.
+ + for the interpreter loop it'll be a tuple + (code, offset, is_being_profiled) + + Note that the jit hook is not reentrant. This means that if the code + inside the jit hook is itself jitted, it will get compiled, but the + jit hook won't be called for that. + + The result value will be the resulting list of operations, or None + """ + cache = space.fromcache(Cache) + cache.w_optimize_hook = w_hook + cache.in_recursion = NonConstant(False) + +def set_abort_hook(space, w_hook): + """ set_abort_hook(hook) + + Set a hook (callable) that will be called each time tracing is + aborted for some reason. + + The hook will be called as in: hook(jitdriver_name, greenkey, reason) + + Here reason is the reason for the abort; see the documentation for set_compile_hook + for descriptions of the other arguments. + """ + cache = space.fromcache(Cache) + cache.w_abort_hook = w_hook + cache.in_recursion = NonConstant(False) + +def wrap_oplist(space, logops, operations, ops_offset=None): + l_w = [] + jitdrivers_sd = logops.metainterp_sd.jitdrivers_sd + for op in operations: + if ops_offset is None: + ofs = -1 + else: + ofs = ops_offset.get(op, 0) + if op.opnum == rop.DEBUG_MERGE_POINT: + jd_sd = jitdrivers_sd[op.getarg(0).getint()] + greenkey = op.getarglist()[2:] + repr = jd_sd.warmstate.get_location_str(greenkey) + w_greenkey = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) + l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), + logops.repr_of_resop(op), + jd_sd.jitdriver.name, + w_greenkey)) + else: + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) + return l_w + +class WrappedBox(Wrappable): + """ A class representing a single box + """ + def __init__(self, llbox): + self.llbox = llbox + + def descr_getint(self, space): + return space.wrap(jit_hooks.box_getint(self.llbox)) + +@unwrap_spec(no=int) +def descr_new_box(space, w_tp, no): + return WrappedBox(jit_hooks.boxint_new(no)) + +WrappedBox.typedef = TypeDef( + 'Box', + __new__ =
interp2app(descr_new_box), + getint = interp2app(WrappedBox.descr_getint), +) + + at unwrap_spec(num=int, offset=int, repr=str, res=WrappedBox) +def descr_new_resop(space, w_tp, num, w_args, res, offset=-1, + repr=''): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + if res is None: + llres = jit_hooks.emptyval() + else: + llres = res.llbox + return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) + + at unwrap_spec(repr=str, jd_name=str) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + num = rop.DEBUG_MERGE_POINT + return DebugMergePoint(space, + jit_hooks.resop_new(num, args, jit_hooks.emptyval()), + repr, jd_name, w_greenkey) + +class WrappedOp(Wrappable): + """ A class representing a single ResOperation, wrapped nicely + """ + def __init__(self, op, offset, repr_of_resop): + self.op = op self.offset = offset - self.pycode = pycode + self.repr_of_resop = repr_of_resop def descr_repr(self, space): - return space.wrap('DebugMergePoint()') + return space.wrap(self.repr_of_resop) - at unwrap_spec(mp_no=int, offset=int, pycode=PyCode) -def new_debug_merge_point(space, w_tp, mp_no, offset, pycode): - return W_DebugMergePoint(mp_no, offset, pycode) + def descr_num(self, space): + return space.wrap(jit_hooks.resop_getopnum(self.op)) -def debug_merge_point_from_boxes(boxes): - mp_no = boxes[0].getint() - offset = boxes[2].getint() - llcode = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), - boxes[4].getref_base()) - pycode = cast_base_ptr_to_instance(PyCode, llcode) - assert pycode is not None - return W_DebugMergePoint(mp_no, offset, pycode) + def descr_name(self, space): + return space.wrap(hlstr(jit_hooks.resop_getopname(self.op))) -W_DebugMergePoint.typedef = TypeDef( - 'DebugMergePoint', - __new__ = interp2app(new_debug_merge_point), - __doc__ = W_DebugMergePoint.__doc__, - __repr__ = 
interp2app(W_DebugMergePoint.descr_repr), - code = interp_attrproperty('pycode', W_DebugMergePoint), + @unwrap_spec(no=int) + def descr_getarg(self, space, no): + return WrappedBox(jit_hooks.resop_getarg(self.op, no)) + + @unwrap_spec(no=int, box=WrappedBox) + def descr_setarg(self, space, no, box): + jit_hooks.resop_setarg(self.op, no, box.llbox) + + def descr_getresult(self, space): + return WrappedBox(jit_hooks.resop_getresult(self.op)) + + def descr_setresult(self, space, w_box): + box = space.interp_w(WrappedBox, w_box) + jit_hooks.resop_setresult(self.op, box.llbox) + +class DebugMergePoint(WrappedOp): + def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + WrappedOp.__init__(self, op, -1, repr_of_resop) + self.w_greenkey = w_greenkey + self.jd_name = jd_name + + def get_pycode(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(0)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_bytecode_no(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(1)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_jitdriver_name(self, space): + return space.wrap(self.jd_name) + +WrappedOp.typedef = TypeDef( + 'ResOperation', + __doc__ = WrappedOp.__doc__, + __new__ = interp2app(descr_new_resop), + __repr__ = interp2app(WrappedOp.descr_repr), + num = GetSetProperty(WrappedOp.descr_num), + name = GetSetProperty(WrappedOp.descr_name), + getarg = interp2app(WrappedOp.descr_getarg), + setarg = interp2app(WrappedOp.descr_setarg), + result = GetSetProperty(WrappedOp.descr_getresult, + WrappedOp.descr_setresult) ) +WrappedOp.acceptable_as_base_class = False + +DebugMergePoint.typedef = TypeDef( + 'DebugMergePoint', WrappedOp.typedef, + __new__ = interp2app(descr_new_dmp), + greenkey = 
interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), + pycode = GetSetProperty(DebugMergePoint.get_pycode), + bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), +) +DebugMergePoint.acceptable_as_base_class = False + + diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -1,4 +1,112 @@ from pypy.jit.codewriter.policy import JitPolicy +from pypy.rlib.jit import JitHookInterface +from pypy.rlib import jit_hooks +from pypy.interpreter.error import OperationError +from pypy.jit.metainterp.jitprof import counter_names +from pypy.module.pypyjit.interp_resop import wrap_oplist, Cache, wrap_greenkey,\ + WrappedOp + +class PyPyJitIface(JitHookInterface): + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_abort_hook): + cache.in_recursion = True + try: + try: + space.call_function(cache.w_abort_hook, + space.wrap(jitdriver.name), + wrap_greenkey(space, jitdriver, + greenkey, greenkey_repr), + space.wrap(counter_names[reason])) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_abort_hook) + finally: + cache.in_recursion = False + + def after_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._compile_hook(debug_info, w_greenkey) + + def after_compile_bridge(self, debug_info): + self._compile_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) + + def before_compile(self, debug_info): + w_greenkey = wrap_greenkey(self.space, debug_info.get_jitdriver(), + debug_info.greenkey, + debug_info.get_greenkey_repr()) + self._optimize_hook(debug_info, w_greenkey) + + def before_compile_bridge(self, debug_info): + 
self._optimize_hook(debug_info, + self.space.wrap(debug_info.fail_descr_no)) + + def _compile_hook(self, debug_info, w_arg): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_compile_hook): + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations, + debug_info.asminfo.ops_offset) + cache.in_recursion = True + try: + try: + jd_name = debug_info.get_jitdriver().name + asminfo = debug_info.asminfo + space.call_function(cache.w_compile_hook, + space.wrap(jd_name), + space.wrap(debug_info.type), + w_arg, + space.newlist(list_w), + space.wrap(asminfo.asmaddr), + space.wrap(asminfo.asmlen)) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + cache.in_recursion = False + + def _optimize_hook(self, debug_info, w_arg): + space = self.space + cache = space.fromcache(Cache) + if cache.in_recursion: + return + if space.is_true(cache.w_optimize_hook): + logops = debug_info.logger._make_log_operations() + list_w = wrap_oplist(space, logops, debug_info.operations) + cache.in_recursion = True + try: + try: + jd_name = debug_info.get_jitdriver().name + w_res = space.call_function(cache.w_optimize_hook, + space.wrap(jd_name), + space.wrap(debug_info.type), + w_arg, + space.newlist(list_w)) + if space.is_w(w_res, space.w_None): + return + l = [] + for w_item in space.listview(w_res): + item = space.interp_w(WrappedOp, w_item) + l.append(jit_hooks._cast_to_resop(item.op)) + del debug_info.operations[:] # modifying operations above is + # probably not a great idea since types may not work + # and we'll end up with half-working list and + # a segfault/fatal RPython error + for elem in l: + debug_info.operations.append(elem) + except OperationError, e: + e.write_unraisable(space, "jit hook ", cache.w_compile_hook) + finally: + cache.in_recursion = False + +pypy_hooks = PyPyJitIface() class PyPyJitPolicy(JitPolicy): @@ 
-12,12 +120,16 @@ modname == 'thread.os_thread'): return True if '.' in modname: - modname, _ = modname.split('.', 1) + modname, rest = modname.split('.', 1) + else: + rest = '' if modname in ['pypyjit', 'signal', 'micronumpy', 'math', 'exceptions', 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', '_codecs']: + if modname == 'pypyjit' and 'interp_resop' in rest: + return False return True return False diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -1,22 +1,40 @@ import py from pypy.conftest import gettestobjspace, option +from pypy.interpreter.gateway import interp2app from pypy.interpreter.pycode import PyCode -from pypy.interpreter.gateway import interp2app -from pypy.jit.metainterp.history import JitCellToken -from pypy.jit.metainterp.resoperation import ResOperation, rop +from pypy.jit.metainterp.history import JitCellToken, ConstInt, ConstPtr +from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.logger import Logger from pypy.rpython.annlowlevel import (cast_instance_to_base_ptr, cast_base_ptr_to_instance) from pypy.rpython.lltypesystem import lltype, llmemory +from pypy.rpython.lltypesystem.rclass import OBJECT from pypy.module.pypyjit.interp_jit import pypyjitdriver +from pypy.module.pypyjit.policy import pypy_hooks from pypy.jit.tool.oparser import parse from pypy.jit.metainterp.typesystem import llhelper +from pypy.jit.metainterp.jitprof import ABORT_TOO_LONG +from pypy.rlib.jit import JitDebugInfo, AsmInfo + +class MockJitDriverSD(object): + class warmstate(object): + @staticmethod + def get_location_str(boxes): + ll_code = lltype.cast_opaque_ptr(lltype.Ptr(OBJECT), + boxes[2].getref_base()) + pycode = cast_base_ptr_to_instance(PyCode, 
ll_code) + return pycode.co_name + + jitdriver = pypyjitdriver + class MockSD(object): class cpu(object): ts = llhelper + jitdrivers_sd = [MockJitDriverSD] + class AppTestJitHook(object): def setup_class(cls): if option.runappdirect: @@ -24,9 +42,9 @@ space = gettestobjspace(usemodules=('pypyjit',)) cls.space = space w_f = space.appexec([], """(): - def f(): + def function(): pass - return f + return function """) cls.w_f = w_f ll_code = cast_instance_to_base_ptr(w_f.code) @@ -34,41 +52,78 @@ logger = Logger(MockSD()) oplist = parse(""" - [i1, i2] + [i1, i2, p2] i3 = int_add(i1, i2) debug_merge_point(0, 0, 0, 0, ConstPtr(ptr0)) + guard_nonnull(p2) [] guard_true(i3) [] """, namespace={'ptr0': code_gcref}).operations + greenkey = [ConstInt(0), ConstInt(0), ConstPtr(code_gcref)] + offset = {} + for i, op in enumerate(oplist): + if i != 1: + offset[op] = i + + di_loop = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop_optimize = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'loop', greenkey) + di_loop.asminfo = AsmInfo(offset, 0, 0) + di_bridge = JitDebugInfo(MockJitDriverSD, logger, JitCellToken(), + oplist, 'bridge', fail_descr_no=0) + di_bridge.asminfo = AsmInfo(offset, 0, 0) def interp_on_compile(): - pypyjitdriver.on_compile(logger, JitCellToken(), oplist, 'loop', - 0, False, ll_code) + di_loop.oplist = cls.oplist + pypy_hooks.after_compile(di_loop) def interp_on_compile_bridge(): - pypyjitdriver.on_compile_bridge(logger, JitCellToken(), oplist, 0) + pypy_hooks.after_compile_bridge(di_bridge) + + def interp_on_optimize(): + di_loop_optimize.oplist = cls.oplist + pypy_hooks.before_compile(di_loop_optimize) + + def interp_on_abort(): + pypy_hooks.on_abort(ABORT_TOO_LONG, pypyjitdriver, greenkey, + 'blah') cls.w_on_compile = space.wrap(interp2app(interp_on_compile)) cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) + cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) + 
cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_dmp_num = space.wrap(rop.DEBUG_MERGE_POINT) + cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) + cls.orig_oplist = oplist + + def setup_method(self, meth): + self.__class__.oplist = self.orig_oplist[:] def test_on_compile(self): import pypyjit all = [] - def hook(*args): - assert args[0] == 'main' - assert args[1] in ['loop', 'bridge'] - all.append(args[2:]) + def hook(name, looptype, tuple_or_guard_no, ops, asmstart, asmlen): + all.append((name, looptype, tuple_or_guard_no, ops)) self.on_compile() pypyjit.set_compile_hook(hook) assert not all self.on_compile() assert len(all) == 1 - assert all[0][0][0].co_name == 'f' - assert all[0][0][1] == 0 - assert all[0][0][2] == False - assert len(all[0][1]) == 3 - assert 'int_add' in all[0][1][0] + elem = all[0] + assert elem[0] == 'pypyjit' + assert elem[2][0].co_name == 'function' + assert elem[2][1] == 0 + assert elem[2][2] == False + assert len(elem[3]) == 4 + int_add = elem[3][0] + dmp = elem[3][1] + assert isinstance(dmp, pypyjit.DebugMergePoint) + assert dmp.pycode is self.f.func_code + assert dmp.greenkey == (self.f.func_code, 0, False) + #assert int_add.name == 'int_add' + assert int_add.num == self.int_add_num self.on_compile_bridge() assert len(all) == 2 pypyjit.set_compile_hook(None) @@ -116,11 +171,63 @@ pypyjit.set_compile_hook(hook) self.on_compile() - dmp = l[0][3][1] - assert isinstance(dmp, pypyjit.DebugMergePoint) - assert dmp.code is self.f.func_code + op = l[0][3][1] + assert isinstance(op, pypyjit.ResOperation) + assert 'function' in repr(op) + + def test_on_abort(self): + import pypyjit + l = [] + + def hook(jitdriver_name, greenkey, reason): + l.append((jitdriver_name, reason)) + + pypyjit.set_abort_hook(hook) + self.on_abort() + assert l == [('pypyjit', 'ABORT_TOO_LONG')] + + def test_on_optimize(self): + import pypyjit + l = [] + + def hook(name, looptype, tuple_or_guard_no, ops, *args): + l.append(ops) + + def optimize_hook(name, 
looptype, tuple_or_guard_no, ops): + return [] + + pypyjit.set_compile_hook(hook) + pypyjit.set_optimize_hook(optimize_hook) + self.on_optimize() + self.on_compile() + assert l == [[]] def test_creation(self): - import pypyjit - dmp = pypyjit.DebugMergePoint(0, 0, self.f.func_code) - assert dmp.code is self.f.func_code + from pypyjit import Box, ResOperation + + op = ResOperation(self.int_add_num, [Box(1), Box(3)], Box(4)) + assert op.num == self.int_add_num + assert op.name == 'int_add' + box = op.getarg(0) + assert box.getint() == 1 + box2 = op.result + assert box2.getint() == 4 + op.setarg(0, box2) + assert op.getarg(0).getint() == 4 + op.result = box + assert op.result.getint() == 1 + + def test_creation_dmp(self): + from pypyjit import DebugMergePoint, Box + + def f(): + pass + + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + assert op.bytecode_no == 0 + assert op.pycode is f.func_code + assert repr(op) == 'repr' + assert op.jitdriver_name == 'pypyjit' + assert op.num == self.dmp_num + op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + raises(AttributeError, 'op.pycode') diff --git a/pypy/module/pypyjit/test/test_policy.py b/pypy/module/pypyjit/test/test_policy.py --- a/pypy/module/pypyjit/test/test_policy.py +++ b/pypy/module/pypyjit/test/test_policy.py @@ -52,6 +52,7 @@ for modname in 'pypyjit', 'signal', 'micronumpy', 'math', 'imp': assert pypypolicy.look_inside_pypy_module(modname) assert pypypolicy.look_inside_pypy_module(modname + '.foo') + assert not pypypolicy.look_inside_pypy_module('pypyjit.interp_resop') def test_see_jit_module(): assert pypypolicy.look_inside_pypy_module('pypyjit.interp_jit') diff --git a/pypy/module/pypyjit/test/test_ztranslation.py b/pypy/module/pypyjit/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/pypyjit/test/test_ztranslation.py @@ -0,0 +1,5 @@ + +from pypy.objspace.fake.checkmodule import checkmodule + +def test_pypyjit_translates(): + 
checkmodule('pypyjit') diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -8,10 +8,12 @@ from pypy.tool import logparser from pypy.jit.tool.jitoutput import parse_prof from pypy.module.pypyjit.test_pypy_c.model import (Log, find_ids_range, - find_ids, TraceWithIds, + find_ids, OpMatcher, InvalidMatch) class BaseTestPyPyC(object): + log_string = 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary' + def setup_class(cls): if '__pypy__' not in sys.builtin_module_names: py.test.skip("must run this test with pypy") @@ -52,8 +54,7 @@ cmdline += ['--jit', ','.join(jitcmdline)] cmdline.append(str(self.filepath)) # - print cmdline, logfile - env={'PYPYLOG': 'jit-log-opt,jit-log-noopt,jit-log-virtualstate,jit-summary:' + str(logfile)} + env={'PYPYLOG': self.log_string + ':' + str(logfile)} pipe = subprocess.Popen(cmdline, env=env, stdout=subprocess.PIPE, diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py --- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py +++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py @@ -98,7 +98,8 @@ end = time.time() return end - start # - log = self.run(main, [get_libc_name(), 200], threshold=150) + log = self.run(main, [get_libc_name(), 200], threshold=150, + import_site=True) assert 1 <= log.result <= 1.5 # at most 0.5 seconds of overhead loops = log.loops_by_id('sleep') assert len(loops) == 1 # make sure that we actually JITted the loop @@ -121,7 +122,7 @@ return fabs._ptr.getaddr(), x libm_name = get_libm_name(sys.platform) - log = self.run(main, [libm_name]) + log = self.run(main, [libm_name], import_site=True) fabs_addr, res = log.result assert res == -4.0 loop, = log.loops_by_filename(self.filepath) diff --git a/pypy/module/pypyjit/test_pypy_c/test_string.py b/pypy/module/pypyjit/test_pypy_c/test_string.py --- 
a/pypy/module/pypyjit/test_pypy_c/test_string.py +++ b/pypy/module/pypyjit/test_pypy_c/test_string.py @@ -15,7 +15,7 @@ i += letters[i % len(letters)] == uletters[i % len(letters)] return i - log = self.run(main, [300]) + log = self.run(main, [300], import_site=True) assert log.result == 300 loop, = log.loops_by_filename(self.filepath) assert loop.match(""" @@ -55,7 +55,7 @@ i += int(long(string.digits[i % len(string.digits)], 16)) return i - log = self.run(main, [1100]) + log = self.run(main, [1100], import_site=True) assert log.result == main(1100) loop, = log.loops_by_filename(self.filepath) assert loop.match(""" diff --git a/pypy/module/sys/__init__.py b/pypy/module/sys/__init__.py --- a/pypy/module/sys/__init__.py +++ b/pypy/module/sys/__init__.py @@ -42,7 +42,7 @@ 'argv' : 'state.get(space).w_argv', 'py3kwarning' : 'space.w_False', 'warnoptions' : 'state.get(space).w_warnoptions', - 'builtin_module_names' : 'state.w_None', + 'builtin_module_names' : 'space.w_None', 'pypy_getudir' : 'state.pypy_getudir', # not translated 'pypy_initial_path' : 'state.pypy_initial_path', diff --git a/pypy/module/sys/app.py b/pypy/module/sys/app.py --- a/pypy/module/sys/app.py +++ b/pypy/module/sys/app.py @@ -66,11 +66,11 @@ return None copyright_str = """ -Copyright 2003-2011 PyPy development team. +Copyright 2003-2012 PyPy development team. All Rights Reserved. For further information, see -Portions Copyright (c) 2001-2008 Python Software Foundation. +Portions Copyright (c) 2001-2012 Python Software Foundation. All Rights Reserved. Portions Copyright (c) 2000 BeOpen.com. 
diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -0,0 +1,137 @@ +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + + +class AppTestFromNumeric(BaseNumpyAppTest): + def test_argmax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, argmax + a = arange(6).reshape((2,3)) + assert argmax(a) == 5 + # assert (argmax(a, axis=0) == array([1, 1, 1])).all() + # assert (argmax(a, axis=1) == array([2, 2])).all() + b = arange(6) + b[1] = 5 + assert argmax(b) == 1 + + def test_argmin(self): + # tests adapted from test_argmax + from numpypy import array, arange, argmin + a = arange(6).reshape((2,3)) + assert argmin(a) == 0 + assert (argmin(a, axis=0) == array([0, 0, 0])).all() + assert (argmin(a, axis=1) == array([0, 0])).all() + b = arange(6) + b[1] = 0 + assert argmin(b) == 0 + + def test_shape(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, identity, shape + assert shape(identity(3)) == (3, 3) + assert shape([[1, 2]]) == (1, 2) + assert shape([0]) == (1,) + assert shape(0) == () + # a = array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')]) + # assert shape(a) == (2,) + + def test_sum(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, sum, ones + 
assert sum([0.5, 1.5])== 2.0 + assert sum([[0, 1], [0, 5]]) == 6 + # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 + # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + # If the accumulator is too small, overflow occurs: + # assert ones(128, dtype=int8).sum(dtype=int8) == -128 + + def test_amin(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amin + a = arange(4).reshape((2,2)) + assert amin(a) == 0 + # # Minima along the first axis + # assert (amin(a, axis=0) == array([0, 1])).all() + # # Minima along the second axis + # assert (amin(a, axis=1) == array([0, 2])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amin(b) == nan + # assert nanmin(b) == 0.0 + + def test_amax(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, arange, amax + a = arange(4).reshape((2,2)) + assert amax(a) == 3 + # assert (amax(a, axis=0) == array([2, 3])).all() + # assert (amax(a, axis=1) == array([1, 3])).all() + # # NaN behaviour + # b = arange(5, dtype=float) + # b[2] = NaN + # assert amax(b) == nan + # assert nanmax(b) == 4.0 + + def test_alen(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, zeros, alen + a = zeros((7,4,5)) + assert a.shape[0] == 7 + assert alen(a) == 7 + + def test_ndim(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, ndim + assert ndim([[1,2,3],[4,5,6]]) == 2 + assert ndim(array([[1,2,3],[4,5,6]])) == 2 + assert ndim(1) == 0 + + def test_rank(self): + # tests taken from numpy/core/fromnumeric.py docstring + from numpypy import array, rank + assert rank([[1,2,3],[4,5,6]]) == 2 + assert rank(array([[1,2,3],[4,5,6]])) == 2 + assert rank(1) == 0 + + def test_var(self): + from numpypy import array, var + a = array([[1,2],[3,4]]) + assert var(a) == 1.25 + # assert (np.var(a,0) 
== array([ 1., 1.])).all() + # assert (np.var(a,1) == array([ 0.25, 0.25])).all() + + def test_std(self): + from numpypy import array, std + a = array([[1, 2], [3, 4]]) + assert std(a) == 1.1180339887498949 + # assert (std(a, axis=0) == array([ 1., 1.])).all() + # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() + + def test_mean(self): + from numpypy import array, mean + assert mean(array(range(5))) == 2.0 + assert mean(range(5)) == 2.0 + + def test_reshape(self): + from numpypy import arange, array, dtype, reshape + a = arange(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = range(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert reshape(a, (1, -1)).shape == (1, 105) + assert reshape(a, (1, 1, -1)).shape == (1, 1, 105) + assert reshape(a, (-1, 1, 1)).shape == (105, 1, 1) + + def test_transpose(self): + from numpypy import arange, array, transpose, ones + x = arange(4).reshape((2,2)) + assert (transpose(x) == array([[0, 2],[1, 3]])).all() + # Once axes argument is implemented, add more tests + raises(NotImplementedError, "transpose(x, axes=(1, 0, 2))") + # x = ones((1, 2, 3)) + # assert transpose(x, (1, 0, 2)).shape == (2, 1, 3) + diff --git a/pypy/objspace/fake/checkmodule.py b/pypy/objspace/fake/checkmodule.py --- a/pypy/objspace/fake/checkmodule.py +++ b/pypy/objspace/fake/checkmodule.py @@ -1,8 +1,10 @@ from pypy.objspace.fake.objspace import FakeObjSpace, W_Root +from pypy.config.pypyoption import get_pypy_config def checkmodule(modname): - space = FakeObjSpace() + config = get_pypy_config(translating=True) + space = FakeObjSpace(config) mod = __import__('pypy.module.%s' % modname, None, None, ['__doc__']) # force computation and record what we wrap module = mod.Module(space, W_Root()) diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -93,9 +93,9 @@ class FakeObjSpace(ObjSpace): - def 
__init__(self): + def __init__(self, config=None): self._seen_extras = [] - ObjSpace.__init__(self) + ObjSpace.__init__(self, config=config) def float_w(self, w_obj): is_root(w_obj) @@ -135,6 +135,9 @@ def newfloat(self, x): return w_some_obj() + def newcomplex(self, x, y): + return w_some_obj() + def marshal_w(self, w_obj): "NOT_RPYTHON" raise NotImplementedError @@ -215,6 +218,10 @@ expected_length = 3 return [w_some_obj()] * expected_length + def unpackcomplex(self, w_complex): + is_root(w_complex) + return 1.1, 2.2 + def allocate_instance(self, cls, w_subtype): is_root(w_subtype) return instantiate(cls) @@ -232,6 +239,11 @@ def exec_(self, *args, **kwds): pass + def createexecutioncontext(self): + ec = ObjSpace.createexecutioncontext(self) + ec._py_repr = None + return ec + # ---------- def translates(self, func=None, argtypes=None, **kwds): @@ -267,18 +279,21 @@ ObjSpace.ExceptionTable + ['int', 'str', 'float', 'long', 'tuple', 'list', 'dict', 'unicode', 'complex', 'slice', 'bool', - 'type', 'basestring']): + 'type', 'basestring', 'object']): setattr(FakeObjSpace, 'w_' + name, w_some_obj()) # for (name, _, arity, _) in ObjSpace.MethodTable: args = ['w_%d' % i for i in range(arity)] + params = args[:] d = {'is_root': is_root, 'w_some_obj': w_some_obj} + if name in ('get',): + params[-1] += '=None' exec compile2("""\ def meth(self, %s): %s return w_some_obj() - """ % (', '.join(args), + """ % (', '.join(params), '; '.join(['is_root(%s)' % arg for arg in args]))) in d meth = func_with_new_name(d['meth'], name) setattr(FakeObjSpace, name, meth) @@ -301,9 +316,12 @@ pass FakeObjSpace.default_compiler = FakeCompiler() -class FakeModule(object): +class FakeModule(Wrappable): + def __init__(self): + self.w_dict = w_some_obj() def get(self, name): name + "xx" # check that it's a string return w_some_obj() FakeObjSpace.sys = FakeModule() FakeObjSpace.sys.filesystemencoding = 'foobar' +FakeObjSpace.builtin = FakeModule() diff --git 
a/pypy/objspace/fake/test/test_objspace.py b/pypy/objspace/fake/test/test_objspace.py --- a/pypy/objspace/fake/test/test_objspace.py +++ b/pypy/objspace/fake/test/test_objspace.py @@ -40,7 +40,7 @@ def test_constants(self): space = self.space space.translates(lambda: (space.w_None, space.w_True, space.w_False, - space.w_int, space.w_str, + space.w_int, space.w_str, space.w_object, space.w_TypeError)) def test_wrap(self): @@ -72,3 +72,9 @@ def test_newlist(self): self.space.newlist([W_Root(), W_Root()]) + + def test_default_values(self): + # the __get__ method takes either 2 or 3 arguments + space = self.space + space.translates(lambda: (space.get(W_Root(), W_Root()), + space.get(W_Root(), W_Root(), W_Root()))) diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -64,6 +64,12 @@ check(', '.join([u'a']), u'a') check(', '.join(['a', u'b']), u'a, b') check(u', '.join(['a', 'b']), u'a, b') + try: + u''.join([u'a', 2, 3]) + except TypeError, e: + assert 'sequence item 1' in str(e) + else: + raise Exception("DID NOT RAISE") if sys.version_info >= (2,3): def test_contains_ex(self): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -201,7 +201,7 @@ return space.newbool(container.find(item) != -1) def unicode_join__Unicode_ANY(space, w_self, w_list): - list_w = space.unpackiterable(w_list) + list_w = space.listview(w_list) size = len(list_w) if size == 0: @@ -216,22 +216,21 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - sb = UnicodeBuilder() + prealloc_size = len(self) * (size - 1) + for i in range(size): + try: + prealloc_size += len(space.unicode_w(list_w[i])) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + raise 
operationerrfmt(space.w_TypeError, + "sequence item %d: expected string or Unicode", i) + sb = UnicodeBuilder(prealloc_size) for i in range(size): if self and i != 0: sb.append(self) w_s = list_w[i] - if isinstance(w_s, W_UnicodeObject): - # shortcut for performance - sb.append(w_s._value) - else: - try: - sb.append(space.unicode_w(w_s)) - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - raise operationerrfmt(space.w_TypeError, - "sequence item %d: expected string or Unicode", i) + sb.append(space.unicode_w(w_s)) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -30,6 +30,9 @@ _MAC_OS = platform.name == "darwin" _FREEBSD_7 = platform.name == "freebsd7" +_LITTLE_ENDIAN = sys.byteorder == 'little' +_BIG_ENDIAN = sys.byteorder == 'big' + if _WIN32: from pypy.rlib import rwin32 @@ -360,12 +363,36 @@ cast_type_to_ffitype._annspecialcase_ = 'specialize:memo' def push_arg_as_ffiptr(ffitp, arg, ll_buf): - # this is for primitive types. For structures and arrays - # would be something different (more dynamic) + # This is for primitive types. Note that the exact type of 'arg' may be + # different from the expected 'c_size'. To cope with that, we fall back + # to a byte-by-byte copy. TP = lltype.typeOf(arg) TP_P = lltype.Ptr(rffi.CArray(TP)) - buf = rffi.cast(TP_P, ll_buf) - buf[0] = arg + TP_size = rffi.sizeof(TP) + c_size = intmask(ffitp.c_size) + # if both types have the same size, we can directly write the + # value to the buffer + if c_size == TP_size: + buf = rffi.cast(TP_P, ll_buf) + buf[0] = arg + else: + # needs byte-by-byte copying. Make sure 'arg' is an integer type. + # Note that this won't work for rffi.FLOAT/rffi.DOUBLE. 
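The byte-by-byte fallback described in the comments above is easy to check in isolation. Here is a pure-Python sketch of the same idea (the name `push_int_bytes` is invented for the example; the real code writes `chr(arg & 0xFF)` into a raw `ll_buf`):

```python
import sys

def push_int_bytes(value, c_size, byteorder=sys.byteorder):
    # Serialize an integer into exactly c_size bytes: low byte first on
    # little-endian, high byte first on big-endian, mirroring the
    # byte-by-byte fallback added to push_arg_as_ffiptr.
    buf = bytearray(c_size)
    if byteorder == 'little':
        for i in range(c_size):
            buf[i] = value & 0xFF
            value >>= 8
    elif byteorder == 'big':
        for i in range(c_size - 1, -1, -1):
            buf[i] = value & 0xFF
            value >>= 8
    else:
        raise AssertionError("unknown byte order")
    return bytes(buf)
```

A convenient cross-check is that the little-endian path agrees with `int.to_bytes(c_size, 'little')` for non-negative values.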
+ assert TP is not rffi.FLOAT and TP is not rffi.DOUBLE + if TP_size <= rffi.sizeof(lltype.Signed): + arg = rffi.cast(lltype.Unsigned, arg) + else: + arg = rffi.cast(lltype.UnsignedLongLong, arg) + if _LITTLE_ENDIAN: + for i in range(c_size): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + elif _BIG_ENDIAN: + for i in range(c_size-1, -1, -1): + ll_buf[i] = chr(arg & 0xFF) + arg >>= 8 + else: + raise AssertionError push_arg_as_ffiptr._annspecialcase_ = 'specialize:argtype(1)' diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -6,18 +6,24 @@ from pypy.rlib.objectmodel import CDefinedIntSymbolic, keepalive_until_here, specialize from pypy.rlib.unroll import unrolling_iterable from pypy.rpython.extregistry import ExtRegistryEntry -from pypy.tool.sourcetools import func_with_new_name DEBUG_ELIDABLE_FUNCTIONS = False def elidable(func): - """ Decorate a function as "trace-elidable". This means precisely that: + """ Decorate a function as "trace-elidable". Usually this means simply that + the function is constant-foldable, i.e. is pure and has no side-effects. + + In some situations it is ok to use this decorator if the function *has* + side effects, as long as these side-effects are idempotent. A typical + example for this would be a cache. + + To be totally precise: (1) the result of the call should not change if the arguments are the same (same numbers or same pointers) (2) it's fine to remove the call completely if we can guess the result - according to rule 1 + according to rule 1 (3) the function call can be moved around by optimizer, but only so it'll be called earlier and not later. 
@@ -386,6 +392,19 @@ class JitHintError(Exception): """Inconsistency in the JIT hints.""" +PARAMETER_DOCS = { + 'threshold': 'number of times a loop has to run for it to become hot', + 'function_threshold': 'number of times a function must run for it to become traced from start', + 'trace_eagerness': 'number of times a guard has to fail before we start compiling a bridge', + 'trace_limit': 'number of recorded operations before we abort tracing with ABORT_TOO_LONG', + 'inlining': 'inline python functions or not (1/0)', + 'loop_longevity': 'a parameter controlling how long loops will be kept before being freed, an estimate', + 'retrace_limit': 'how many times we can try retracing before giving up', + 'max_retrace_guards': 'number of extra guards a retrace can cause', + 'max_unroll_loops': 'number of extra unrollings a loop can cause', + 'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY' + } + PARAMETERS = {'threshold': 1039, # just above 1024, prime 'function_threshold': 1619, # slightly more than one above, also prime 'trace_eagerness': 200, @@ -394,6 +413,7 @@ 'loop_longevity': 1000, 'retrace_limit': 5, 'max_retrace_guards': 15, + 'max_unroll_loops': 4, 'enable_opts': 'all', } unroll_parameters = unrolling_iterable(PARAMETERS.items()) @@ -410,13 +430,16 @@ active = True # if set to False, this JitDriver is ignored virtualizables = [] + name = 'jitdriver' def __init__(self, greens=None, reds=None, virtualizables=None, get_jitcell_at=None, set_jitcell_at=None, get_printable_location=None, confirm_enter_jit=None, - can_never_inline=None, should_unroll_one_iteration=None): + can_never_inline=None, should_unroll_one_iteration=None, + name='jitdriver'): if greens is not None: self.greens = greens + self.name = name if reds is not None: self.reds = reds if not hasattr(self, 'greens') or not hasattr(self, 'reds'): @@ -450,23 +473,6 @@ # special-cased by ExtRegistryEntry pass - def on_compile(self, logger, looptoken, operations, type, *greenargs): - """ A 
hook called when loop is compiled. Overwrite - for your own jitdriver if you want to do something special, like - call applevel code - """ - - def on_compile_bridge(self, logger, orig_looptoken, operations, n): - """ A hook called when a bridge is compiled. Overwrite - for your own jitdriver if you want to do something special - """ - - # note: if you overwrite this functions with the above signature it'll - # work, but the *greenargs is different for each jitdriver, so we - # can't share the same methods - del on_compile - del on_compile_bridge - def _make_extregistryentries(self): # workaround: we cannot declare ExtRegistryEntries for functions # used as methods of a frozen object, but we can attach the @@ -628,7 +634,6 @@ def specialize_call(self, hop, **kwds_i): # XXX to be complete, this could also check that the concretetype # of the variables are the same for each of the calls. - from pypy.rpython.error import TyperError from pypy.rpython.lltypesystem import lltype driver = self.instance.im_self greens_v = [] @@ -741,6 +746,105 @@ return hop.genop('jit_marker', vlist, resulttype=lltype.Void) +class AsmInfo(object): + """ An addition to JitDebugInfo concerning assembler. Attributes: + + ops_offset - dict of offsets of operations or None + asmaddr - (int) raw address of assembler block + asmlen - assembler block length + """ + def __init__(self, ops_offset, asmaddr, asmlen): + self.ops_offset = ops_offset + self.asmaddr = asmaddr + self.asmlen = asmlen + +class JitDebugInfo(object): + """ An object representing debug info. 
Attributes meanings: + + greenkey - a list of green boxes or None for bridge + logger - an instance of jit.metainterp.logger.LogOperations + type - either 'loop', 'entry bridge' or 'bridge' + looptoken - description of a loop + fail_descr_no - number of failing descr for bridges, -1 otherwise + asminfo - extra assembler information + """ + + asminfo = None + def __init__(self, jitdriver_sd, logger, looptoken, operations, type, + greenkey=None, fail_descr_no=-1): + self.jitdriver_sd = jitdriver_sd + self.logger = logger + self.looptoken = looptoken + self.operations = operations + self.type = type + if type == 'bridge': + assert fail_descr_no != -1 + else: + assert greenkey is not None + self.greenkey = greenkey + self.fail_descr_no = fail_descr_no + + def get_jitdriver(self): + """ Return where the jitdriver on which the jitting started + """ + return self.jitdriver_sd.jitdriver + + def get_greenkey_repr(self): + """ Return the string repr of a greenkey + """ + return self.jitdriver_sd.warmstate.get_location_str(self.greenkey) + +class JitHookInterface(object): + """ This is the main connector between the JIT and the interpreter. + Several methods on this class will be invoked at various stages + of JIT running like JIT loops compiled, aborts etc. + An instance of this class will be available as policy.jithookiface. + """ + def on_abort(self, reason, jitdriver, greenkey, greenkey_repr): + """ A hook called each time a loop is aborted with jitdriver and + greenkey where it started, reason is a string why it got aborted + """ + + #def before_optimize(self, debug_info): + # """ A hook called before optimizer is run, called with instance of + # JitDebugInfo. Overwrite for custom behavior + # """ + # DISABLED + + def before_compile(self, debug_info): + """ A hook called after a loop is optimized, before compiling assembler, + called with JitDebugInfo instance. 
Overwrite for custom behavior + """ + + def after_compile(self, debug_info): + """ A hook called after a loop has compiled assembler, + called with JitDebugInfo instance. Overwrite for custom behavior + """ + + #def before_optimize_bridge(self, debug_info): + # operations, fail_descr_no): + # """ A hook called before a bridge is optimized. + # Called with JitDebugInfo instance, overwrite for + # custom behavior + # """ + # DISABLED + + def before_compile_bridge(self, debug_info): + """ A hook called before a bridge is compiled, but after optimizations + are performed. Called with instance of debug_info, overwrite for + custom behavior + """ + + def after_compile_bridge(self, debug_info): + """ A hook called after a bridge is compiled, called with JitDebugInfo + instance, overwrite for custom behavior + """ + + def get_stats(self): + """ Returns various statistics + """ + raise NotImplementedError + def record_known_class(value, cls): """ Assure the JIT that value is an instance of cls. This is not a precise @@ -748,7 +852,6 @@ """ assert isinstance(value, cls) - class Entry(ExtRegistryEntry): _about_ = record_known_class @@ -759,7 +862,8 @@ assert isinstance(s_inst, annmodel.SomeInstance) def specialize_call(self, hop): - from pypy.rpython.lltypesystem import lltype, rclass + from pypy.rpython.lltypesystem import rclass, lltype + classrepr = rclass.get_type_repr(hop.rtyper) hop.exception_cannot_occur() diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py From noreply at buildbot.pypy.org Wed Jan 18 12:35:39 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 12:35:39 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: move this test to the x86 backend Message-ID: <20120118113539.40E12820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51436:b2fff2c474ff Date: 2012-01-18 12:11 +0100 http://bitbucket.org/pypy/pypy/changeset/b2fff2c474ff/ Log: move this test to the x86 backend diff --git 
a/pypy/jit/backend/x86/test/test_runner.py b/pypy/jit/backend/x86/test/test_runner.py --- a/pypy/jit/backend/x86/test/test_runner.py +++ b/pypy/jit/backend/x86/test/test_runner.py @@ -582,3 +582,52 @@ self.cpu.execute_token(looptoken, 0) assert looptoken._x86_debug_checksum == sum([op.getopnum() for op in ops.operations]) + + def test_compile_asmlen(self): + from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU + if not isinstance(self.cpu, AbstractLLCPU): + py.test.skip("pointless test on non-asm") + from pypy.jit.backend.x86.tool.viewcode import machine_code_dump + import ctypes + ops = """ + [i2] + i0 = same_as(i2) # but forced to be in a register + label(i0, descr=1) + i1 = int_add(i0, i0) + guard_true(i1, descr=faildesr) [i1] + jump(i1, descr=1) + """ + faildescr = BasicFailDescr(2) + loop = parse(ops, self.cpu, namespace=locals()) + faildescr = loop.operations[-2].getdescr() + jumpdescr = loop.operations[-1].getdescr() + bridge_ops = """ + [i0] + jump(i0, descr=jumpdescr) + """ + bridge = parse(bridge_ops, self.cpu, namespace=locals()) + looptoken = JitCellToken() + self.cpu.assembler.set_debug(False) + info = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) + bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs, + bridge.operations, + looptoken) + self.cpu.assembler.set_debug(True) # always on untranslated + assert info.asmlen != 0 + cpuname = autodetect_main_model_and_size() + # XXX we have to check the precise assembler, otherwise + # we don't quite know if borders are correct + + def checkops(mc, ops): + assert len(mc) == len(ops) + for i in range(len(mc)): + assert mc[i].split("\t")[-1].startswith(ops[i]) + + data = ctypes.string_at(info.asmaddr, info.asmlen) + mc = list(machine_code_dump(data, info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.add_loop_instructions) + data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) + mc = 
list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) + lines = [line for line in mc if line.count('\t') == 2] + checkops(lines, self.bridge_loop_instructions) From noreply at buildbot.pypy.org Wed Jan 18 12:35:40 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 12:35:40 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Create a CPU instance for each test Message-ID: <20120118113540.7BA1D820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51437:1c132ebb86eb Date: 2012-01-18 12:13 +0100 http://bitbucket.org/pypy/pypy/changeset/1c132ebb86eb/ Log: Create a CPU instance for each test diff --git a/pypy/jit/backend/arm/test/test_runner.py b/pypy/jit/backend/arm/test/test_runner.py --- a/pypy/jit/backend/arm/test/test_runner.py +++ b/pypy/jit/backend/arm/test/test_runner.py @@ -23,12 +23,9 @@ class TestARM(LLtypeBackendTest): - def setup_class(cls): - cls.cpu = ArmCPU(rtyper=None, stats=FakeStats()) - cls.cpu.setup_once() - - def teardown_method(self, method): - self.cpu.assembler.teardown() + def setup_method(self, meth): + self.cpu = ArmCPU(rtyper=None, stats=FakeStats()) + self.cpu.setup_once() # for the individual tests see # ====> ../../test/runner_test.py From noreply at buildbot.pypy.org Wed Jan 18 12:35:41 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 12:35:41 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: kill unused code Message-ID: <20120118113541.B7916820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51438:d0ed9d2beb0e Date: 2012-01-18 12:14 +0100 http://bitbucket.org/pypy/pypy/changeset/d0ed9d2beb0e/ Log: kill unused code diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -1,12 +1,10 @@ from __future__ import with_statement import os -from pypy.jit.backend.arm.helper.assembler import 
saved_registers, \ - decode32, encode32, \ - decode64 +from pypy.jit.backend.arm.helper.assembler import saved_registers from pypy.jit.backend.arm import conditions as c from pypy.jit.backend.arm import registers as r from pypy.jit.backend.arm.arch import WORD, DOUBLE_WORD, FUNC_ALIGN, \ - PC_OFFSET, N_REGISTERS_SAVED_BY_MALLOC + N_REGISTERS_SAVED_BY_MALLOC from pypy.jit.backend.arm.codebuilder import ARMv7Builder, OverwritingBuilder from pypy.jit.backend.arm.regalloc import (Regalloc, ARMFrameManager, ARMv7RegisterManager, check_imm_arg, diff --git a/pypy/jit/backend/arm/helper/assembler.py b/pypy/jit/backend/arm/helper/assembler.py --- a/pypy/jit/backend/arm/helper/assembler.py +++ b/pypy/jit/backend/arm/helper/assembler.py @@ -195,25 +195,3 @@ reg_args = x break return reg_args - -def decode32(mem, index): - return intmask(ord(mem[index]) - | ord(mem[index+1]) << 8 - | ord(mem[index+2]) << 16 - | ord(mem[index+3]) << 24) - -def decode64(mem, index): - low = decode32(mem, index) - index += 4 - high = decode32(mem, index) - return (r_longlong(high) << 32) | r_longlong(r_uint(low)) - -def encode32(mem, i, n): - mem[i] = chr(n & 0xFF) - mem[i+1] = chr((n >> 8) & 0xFF) - mem[i+2] = chr((n >> 16) & 0xFF) - mem[i+3] = chr((n >> 24) & 0xFF) - -def encode64(mem, i, n): - for x in range(8): - mem[i+x] = chr((n >> (x*8)) & 0xFF) diff --git a/pypy/jit/backend/arm/test/test_helper.py b/pypy/jit/backend/arm/test/test_helper.py --- a/pypy/jit/backend/arm/test/test_helper.py +++ b/pypy/jit/backend/arm/test/test_helper.py @@ -6,6 +6,7 @@ from pypy.jit.backend.arm.test.support import skip_unless_arm skip_unless_arm() + def test_count_reg_args(): assert count_reg_args([BoxPtr()]) == 1 assert count_reg_args([BoxPtr()] * 2) == 2 @@ -21,26 +22,3 @@ assert count_reg_args([BoxInt(), BoxFloat(), BoxInt()]) == 2 assert count_reg_args([BoxInt(), BoxInt(), BoxInt(), BoxFloat()]) == 3 - -def test_encode32(): - mem = [None]*4 - encode32(mem, 0, 1234567) - assert ''.join(mem) == 
'\x87\xd6\x12\x00' - mem = [None]*4 - encode32(mem, 0, 983040) - assert ''.join(mem) == '\x00\x00\x0F\x00' - -def test_decode32(): - mem = list('\x87\xd6\x12\x00') - assert decode32(mem, 0) == 1234567 - mem = list('\x00\x00\x0F\x00') - assert decode32(mem, 0) == 983040 - -def test_decode64(): - mem = list('\x87\xd6\x12\x00\x00\x00\x0F\x00') - assert decode64(mem, 0) == 4222124651894407L - -def test_encode64(): - mem = [None] * 8 - encode64(mem, 0, 4222124651894407L) - assert ''.join(mem) == '\x87\xd6\x12\x00\x00\x00\x0F\x00' From noreply at buildbot.pypy.org Wed Jan 18 12:35:42 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 12:35:42 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Generate more debugging information (taken from the x86 backend) Message-ID: <20120118113542.F23D8820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51439:fceb632073b8 Date: 2012-01-18 12:16 +0100 http://bitbucket.org/pypy/pypy/changeset/fceb632073b8/ Log: Generate more debugging information (taken from the x86 backend) diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -14,19 +14,28 @@ from pypy.jit.backend.llsupport.asmmemmgr import MachineDataBlockWrapper from pypy.jit.backend.model import CompiledLoopToken from pypy.jit.codewriter import longlong -from pypy.jit.metainterp.history import (AbstractFailDescr, INT, REF, FLOAT) -from pypy.jit.metainterp.resoperation import rop +from pypy.jit.metainterp.history import AbstractFailDescr, INT, REF, FLOAT +from pypy.jit.metainterp.history import BoxInt, ConstInt +from pypy.jit.metainterp.resoperation import rop, ResOperation from pypy.rlib import rgc -from pypy.rlib.objectmodel import we_are_translated +from pypy.rlib.objectmodel import we_are_translated, specialize from pypy.rpython.annlowlevel import llhelper from pypy.rpython.lltypesystem import lltype, 
rffi, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.jit.backend.arm.opassembler import ResOpAssembler -from pypy.rlib.debug import debug_print, debug_start, debug_stop +from pypy.rlib.debug import (debug_print, debug_start, debug_stop, + have_debug_prints) +from pypy.rlib.jit import AsmInfo +from pypy.rlib.objectmodel import compute_unique_id # XXX Move to llsupport from pypy.jit.backend.x86.support import values_array, memcpy_fn +DEBUG_COUNTER = lltype.Struct('DEBUG_COUNTER', ('i', lltype.Signed), + ('type', lltype.Char), # 'b'ridge, 'l'abel or + # 'e'ntry point + ('number', lltype.Signed)) + class AssemblerARM(ResOpAssembler): @@ -53,6 +62,12 @@ self.datablockwrapper = None self.propagate_exception_path = 0 self._compute_stack_size() + self._debug = False + self.loop_run_counters = [] + self.debug_counter_descr = cpu.fielddescrof(DEBUG_COUNTER, 'i') + + def set_debug(self, v): + self._debug = v def _compute_stack_size(self): self.STACK_FIXED_AREA = len(r.callee_saved_registers) * WORD @@ -100,6 +115,72 @@ self._leave_jitted_hook_save_exc = \ self._gen_leave_jitted_hook_code(True) self._leave_jitted_hook = self._gen_leave_jitted_hook_code(False) + debug_start('jit-backend-counts') + self.set_debug(have_debug_prints()) + debug_stop('jit-backend-counts') + + def finish_once(self): + if self._debug: + debug_start('jit-backend-counts') + for i in range(len(self.loop_run_counters)): + struct = self.loop_run_counters[i] + if struct.type == 'l': + prefix = 'TargetToken(%d)' % struct.number + elif struct.type == 'b': + prefix = 'bridge ' + str(struct.number) + else: + prefix = 'entry ' + str(struct.number) + debug_print(prefix + ':' + str(struct.i)) + debug_stop('jit-backend-counts') + + # XXX: merge with x86 + def _register_counter(self, tp, number, token): + # YYY very minor leak -- we need the counters to stay alive + # forever, just because we want to report them at the end + # of the process + struct = lltype.malloc(DEBUG_COUNTER, 
flavor='raw', + track_allocation=False) + struct.i = 0 + struct.type = tp + if tp == 'b' or tp == 'e': + struct.number = number + else: + assert token + struct.number = compute_unique_id(token) + self.loop_run_counters.append(struct) + return struct + + def _append_debugging_code(self, operations, tp, number, token): + counter = self._register_counter(tp, number, token) + c_adr = ConstInt(rffi.cast(lltype.Signed, counter)) + box = BoxInt() + box2 = BoxInt() + ops = [ResOperation(rop.GETFIELD_RAW, [c_adr], + box, descr=self.debug_counter_descr), + ResOperation(rop.INT_ADD, [box, ConstInt(1)], box2), + ResOperation(rop.SETFIELD_RAW, [c_adr, box2], + None, descr=self.debug_counter_descr)] + operations.extend(ops) + + @specialize.argtype(1) + def _inject_debugging_code(self, looptoken, operations, tp, number): + if self._debug: + # before doing anything, let's increase a counter + s = 0 + for op in operations: + s += op.getopnum() + looptoken._arm_debug_checksum = s + + newoperations = [] + self._append_debugging_code(newoperations, tp, number, + None) + for op in operations: + newoperations.append(op) + if op.getopnum() == rop.LABEL: + self._append_debugging_code(newoperations, 'l', number, + op.getdescr()) + operations = newoperations + return operations @staticmethod def _release_gil_shadowstack(): @@ -491,7 +572,10 @@ assert len(set(inputargs)) == len(inputargs) operations = self.setup(looptoken, operations) - self._dump(operations) + if log: + operations = self._inject_debugging_code(looptoken, operations, + 'e', looptoken.number) + self._dump(operations) self._call_header() sp_patch_location = self._prepare_sp_patch_position() @@ -536,7 +620,11 @@ def assemble_bridge(self, faildescr, inputargs, operations, original_loop_token, log): operations = self.setup(original_loop_token, operations) - self._dump(operations, 'bridge') + descr_number = self.cpu.get_fail_descr_number(faildescr) + if log: + operations = self._inject_debugging_code(faildescr, operations, + 'b', 
descr_number) + self._dump(operations, 'bridge') assert isinstance(faildescr, AbstractFailDescr) code = self._find_failure_recovery_bytecode(faildescr) frame_depth = faildescr._arm_current_frame_depth @@ -736,7 +824,7 @@ return True def _insert_checks(self, mc=None): - if not we_are_translated(): + if self._debug: if mc is None: mc = self.mc mc.CMP_rr(r.fp.value, r.sp.value) diff --git a/pypy/jit/backend/arm/test/test_runner.py b/pypy/jit/backend/arm/test/test_runner.py --- a/pypy/jit/backend/arm/test/test_runner.py +++ b/pypy/jit/backend/arm/test/test_runner.py @@ -202,3 +202,44 @@ args = [i+1 for i in range(numargs)] res = self.cpu.execute_token(looptoken, *args) assert self.cpu.get_latest_value_int(0) == sum(args) + + def test_debugger_on(self): + from pypy.rlib import debug + + targettoken, preambletoken = TargetToken(), TargetToken() + loop = """ + [i0] + label(i0, descr=preambletoken) + debug_merge_point('xyz', 0) + i1 = int_add(i0, 1) + i2 = int_ge(i1, 10) + guard_false(i2) [] + label(i1, descr=targettoken) + debug_merge_point('xyz', 0) + i11 = int_add(i1, 1) + i12 = int_ge(i11, 10) + guard_false(i12) [] + jump(i11, descr=targettoken) + """ + ops = parse(loop, namespace={'targettoken': targettoken, + 'preambletoken': preambletoken}) + debug._log = dlog = debug.DebugLog() + try: + self.cpu.assembler.set_debug(True) + looptoken = JitCellToken() + self.cpu.compile_loop(ops.inputargs, ops.operations, looptoken) + self.cpu.execute_token(looptoken, 0) + # check debugging info + struct = self.cpu.assembler.loop_run_counters[0] + assert struct.i == 1 + struct = self.cpu.assembler.loop_run_counters[1] + assert struct.i == 1 + struct = self.cpu.assembler.loop_run_counters[2] + assert struct.i == 9 + self.cpu.finish_once() + finally: + debug._log = None + l0 = ('debug_print', 'entry -1:1') + l1 = ('debug_print', preambletoken.repr_of_descr() + ':1') + l2 = ('debug_print', targettoken.repr_of_descr() + ':9') + assert ('jit-backend-counts', [l0, l1, l2]) in dlog From 
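The per-loop debug counters wired into the ARM backend above can be modeled with ordinary Python objects. This sketch (`LoopCounters` is a made-up stand-in for the raw `DEBUG_COUNTER` structs; the real code uses `compute_unique_id(token)` as the number for labels) reproduces the report format that `finish_once` prints under `jit-backend-counts`:

```python
class LoopCounters:
    """Toy model of the DEBUG_COUNTER bookkeeping: one mutable counter per
    compiled entry point ('e'), label ('l') or bridge ('b')."""

    def __init__(self):
        self.counters = []

    def register(self, tp, number):
        # The real _register_counter mallocs a raw struct and keeps it alive
        # forever; a dict is enough to model the fields i/type/number here.
        assert tp in ('e', 'l', 'b')
        counter = {'type': tp, 'number': number, 'i': 0}
        self.counters.append(counter)
        return counter

    def report(self):
        # Mirrors finish_once(): one line per counter, same prefixes.
        lines = []
        for c in self.counters:
            if c['type'] == 'l':
                prefix = 'TargetToken(%d)' % c['number']
            elif c['type'] == 'b':
                prefix = 'bridge %d' % c['number']
            else:
                prefix = 'entry %d' % c['number']
            lines.append(prefix + ':' + str(c['i']))
        return lines
```

The `test_debugger_on` test above checks for exactly lines of this shape, e.g. `entry -1:1` for the entry counter and a `TargetToken(...)` line per label.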
noreply at buildbot.pypy.org Wed Jan 18 12:35:44 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 12:35:44 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Port changes since last merge Message-ID: <20120118113544.3FB31820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51440:be46980ab0b2 Date: 2012-01-18 12:16 +0100 http://bitbucket.org/pypy/pypy/changeset/be46980ab0b2/ Log: Port changes since last merge diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -596,6 +596,7 @@ rawstart = self.materialize_loop(looptoken) looptoken._arm_func_addr = rawstart + size_excluding_failure_stuff = self.mc.get_relative_pos() self.process_pending_guards(rawstart) self.fixup_target_tokens(rawstart) @@ -604,8 +605,13 @@ self.mc._dump_trace(rawstart, 'loop_%s.asm' % self.cpu.total_compiled_loops) print 'Done assembling loop with token %r' % looptoken + + ops_offset = self.mc.ops_offset self.teardown() + return AsmInfo(ops_offset, rawstart + loop_head, + size_excluding_failure_stuff - loop_head) + def _assemble(self, operations, regalloc): regalloc.compute_hint_frame_locations(operations) #self.mc.BKPT() @@ -632,6 +638,7 @@ if not we_are_translated(): assert len(inputargs) == len(arglocs) + startpos = self.mc.get_relative_pos() regalloc = Regalloc(assembler=self, frame_manager=ARMFrameManager()) regalloc.prepare_bridge(inputargs, arglocs, operations) @@ -639,15 +646,19 @@ frame_depth = self._assemble(operations, regalloc) + codeendpos = self.mc.get_relative_pos() + self._patch_sp_offset(sp_patch_location, frame_depth) self.write_pending_failure_recoveries() + rawstart = self.materialize_loop(original_loop_token) + self.process_pending_guards(rawstart) + self.fixup_target_tokens(rawstart) self.patch_trace(faildescr, original_loop_token, rawstart, regalloc) - self.fixup_target_tokens(rawstart) if not 
we_are_translated(): # for the benefit of tests @@ -658,12 +669,19 @@ self.cpu.total_compiled_bridges) self.current_clt.frame_depth = max(self.current_clt.frame_depth, frame_depth) + ops_offset = self.mc.ops_offset self.teardown() + return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def _find_failure_recovery_bytecode(self, faildescr): - guard_addr = faildescr._arm_block_start + faildescr._arm_guard_pos + guard_stub_addr = faildescr._arm_recovery_stub_offset + if guard_stub_addr == 0: + # This case should be prevented by the logic in compile.py: + # look for CNT_BUSY_FLAG, which disables tracing from a guard + # when another tracing from the same guard is already in progress. + raise BridgeAlreadyCompiled # a guard requires 3 words to encode the jump to the exit code. - return guard_addr + 3 * WORD + return guard_stub_addr + 3 * WORD def fixup_target_tokens(self, rawstart): for targettoken in self.target_tokens_currently_compiling: @@ -698,21 +716,27 @@ for tok in self.pending_guards: descr = tok.descr assert isinstance(descr, AbstractFailDescr) + jump_target = tok.pos_recovery_stub + relative_target = jump_target - tok.offset - #XXX _arm_block_start should go in the looptoken - descr._arm_block_start = block_start + addr = block_start + tok.offset + stub_addr = block_start + jump_target + + descr._arm_recovery_stub_offset = stub_addr if not tok.is_invalidate: #patch the guard jumpt to the stub # overwrite the generate NOP with a B_offs to the pos of the # stub mc = ARMv7Builder() - mc.B_offs(descr._arm_guard_pos - tok.offset, - c.get_opposite_of(tok.fcond)) - mc.copy_to_raw_memory(block_start + tok.offset) + mc.B_offs(relative_target, c.get_opposite_of(tok.fcond)) + mc.copy_to_raw_memory(addr) else: - clt.invalidate_positions.append( - (block_start + tok.offset, descr._arm_guard_pos - tok.offset)) + # GUARD_NOT_INVALIDATED, record an entry in + # clt.invalidate_positions of the form: + # (addr-in-the-code-of-the-not-yet-written-jump-target, + # 
relative-target-to-use) + clt.invalidate_positions.append((addr, relative_target)) def get_asmmemmgr_blocks(self, looptoken): clt = looptoken.compiled_loop_token @@ -770,6 +794,7 @@ regalloc.next_instruction() i = regalloc.position() op = operations[i] + self.mc.mark_op(op) opnum = op.getopnum() if op.has_no_side_effect() and op.result not in regalloc.longevity: regalloc.possibly_free_vars_for_op(op) @@ -797,6 +822,7 @@ regalloc.possibly_free_vars_for_op(op) regalloc.free_temp_vars() regalloc._check_invariants() + self.mc.mark_op(None) # end of the loop # from ../x86/regalloc.py def can_merge_with_next_guard(self, op, i, operations): @@ -852,9 +878,11 @@ # The first instruction (word) is not overwritten, because it is the # one that actually checks the condition b = ARMv7Builder() - patch_addr = faildescr._arm_block_start + faildescr._arm_guard_pos + adr_jump_offset = faildescr._arm_recovery_stub_offset + assert adr_jump_offset != 0 b.B(bridge_addr) - b.copy_to_raw_memory(patch_addr) + b.copy_to_raw_memory(adr_jump_offset) + faildescr._arm_recovery_stub_offset = 0 # regalloc support def load(self, loc, value): @@ -1186,3 +1214,7 @@ if hasattr(AssemblerARM, methname): func = getattr(AssemblerARM, methname).im_func asm_operations_with_guard[value] = func + + +class BridgeAlreadyCompiled(Exception): + pass diff --git a/pypy/jit/backend/arm/codebuilder.py b/pypy/jit/backend/arm/codebuilder.py --- a/pypy/jit/backend/arm/codebuilder.py +++ b/pypy/jit/backend/arm/codebuilder.py @@ -265,6 +265,15 @@ def __init__(self): AbstractARMv7Builder.__init__(self) self.init_block_builder() + # + # ResOperation --> offset in the assembly. 
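The `mark_op`/`ops_offset` bookkeeping added to `ARMv7Builder` in this changeset maps each operation to the offset at which its machine code starts, with `ops_offset[None]` marking the end of the code after the last op. A minimal model of that behavior (`TraceBuilder` and `emit` are invented for illustration):

```python
class TraceBuilder:
    # Sketch of the ops_offset bookkeeping: record the current code position
    # before emitting each operation's machine code; marking None at the end
    # gives the offset of the tail of the loop.
    def __init__(self):
        self.code = bytearray()
        self.ops_offset = {}

    def mark_op(self, op):
        self.ops_offset[op] = len(self.code)

    def emit(self, op, machine_code):
        self.mark_op(op)
        self.code += machine_code
```

After emitting two ops of 2 and 1 bytes and marking `None`, `ops_offset` holds offsets 0, 2 and 3, which is the mapping `AsmInfo` hands back to the JIT hooks.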
+ # ops_offset[None] represents the beginning of the code after the last op + # (i.e., the tail of the loop) + self.ops_offset = {} + + def mark_op(self, op): + pos = self.get_relative_pos() + self.ops_offset[op] = pos def _dump_trace(self, addr, name, formatter=-1): if not we_are_translated(): diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py --- a/pypy/jit/backend/arm/runner.py +++ b/pypy/jit/backend/arm/runner.py @@ -32,19 +32,19 @@ self.assembler.setup_once() def finish_once(self): - pass + self.assembler.finish_once() def compile_loop(self, inputargs, operations, looptoken, log=True, name=''): - self.assembler.assemble_loop(inputargs, operations, + return self.assembler.assemble_loop(inputargs, operations, looptoken, log=log) def compile_bridge(self, faildescr, inputargs, operations, original_loop_token, log=True): clt = original_loop_token.compiled_loop_token clt.compiling_a_bridge() - self.assembler.assemble_bridge(faildescr, inputargs, operations, - original_loop_token, log=log) + return self.assembler.assemble_bridge(faildescr, inputargs, operations, + original_loop_token, log=log) def get_latest_value_float(self, index): return self.assembler.fail_boxes_float.getitem(index) From noreply at buildbot.pypy.org Wed Jan 18 12:35:45 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 12:35:45 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: also remove test_compile_asmlen from runner_test after it was moved to the x86 backend Message-ID: <20120118113545.7B141820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51441:839659291f03 Date: 2012-01-18 12:33 +0100 http://bitbucket.org/pypy/pypy/changeset/839659291f03/ Log: also remove test_compile_asmlen from runner_test after it was moved to the x86 backend diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ 
b/pypy/jit/backend/test/runner_test.py @@ -3188,55 +3188,6 @@ res = self.cpu.get_latest_value_int(0) assert res == -10 - def test_compile_asmlen(self): - from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU - if not isinstance(self.cpu, AbstractLLCPU): - py.test.skip("pointless test on non-asm") - from pypy.jit.backend.x86.tool.viewcode import machine_code_dump - import ctypes - ops = """ - [i2] - i0 = same_as(i2) # but forced to be in a register - label(i0, descr=1) - i1 = int_add(i0, i0) - guard_true(i1, descr=faildesr) [i1] - jump(i1, descr=1) - """ - faildescr = BasicFailDescr(2) - loop = parse(ops, self.cpu, namespace=locals()) - faildescr = loop.operations[-2].getdescr() - jumpdescr = loop.operations[-1].getdescr() - bridge_ops = """ - [i0] - jump(i0, descr=jumpdescr) - """ - bridge = parse(bridge_ops, self.cpu, namespace=locals()) - looptoken = JitCellToken() - self.cpu.assembler.set_debug(False) - info = self.cpu.compile_loop(loop.inputargs, loop.operations, looptoken) - bridge_info = self.cpu.compile_bridge(faildescr, bridge.inputargs, - bridge.operations, - looptoken) - self.cpu.assembler.set_debug(True) # always on untranslated - assert info.asmlen != 0 - cpuname = autodetect_main_model_and_size() - # XXX we have to check the precise assembler, otherwise - # we don't quite know if borders are correct - - def checkops(mc, ops): - assert len(mc) == len(ops) - for i in range(len(mc)): - assert mc[i].split("\t")[-1].startswith(ops[i]) - - data = ctypes.string_at(info.asmaddr, info.asmlen) - mc = list(machine_code_dump(data, info.asmaddr, cpuname)) - lines = [line for line in mc if line.count('\t') == 2] - checkops(lines, self.add_loop_instructions) - data = ctypes.string_at(bridge_info.asmaddr, bridge_info.asmlen) - mc = list(machine_code_dump(data, bridge_info.asmaddr, cpuname)) - lines = [line for line in mc if line.count('\t') == 2] - checkops(lines, self.bridge_loop_instructions) - def test_compile_bridge_with_target(self): # This test creates a 
loopy piece of code in a bridge, and builds another From noreply at buildbot.pypy.org Wed Jan 18 13:44:43 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 18 Jan 2012 13:44:43 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: adjust size parameter in _check_im_arg and check_im_box Message-ID: <20120118124443.07F14820D8@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51442:d0d21bbde762 Date: 2012-01-18 13:43 +0100 http://bitbucket.org/pypy/pypy/changeset/d0d21bbde762/ Log: adjust size parameter in _check_im_arg and check_im_box diff --git a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py b/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py @@ -2,13 +2,14 @@ from pypy.rlib.objectmodel import we_are_translated from pypy.jit.metainterp.history import Box +IMM_SIZE = 2 ** 15 - 1 -def check_imm_box(arg, size=0xFF, allow_zero=True): +def check_imm_box(arg, size=IMM_SIZE, allow_zero=True): if isinstance(arg, ConstInt): return _check_imm_arg(arg.getint(), size, allow_zero) return False -def _check_imm_arg(arg, size=0xFF, allow_zero=True): +def _check_imm_arg(arg, size=IMM_SIZE, allow_zero=True): #assert not isinstance(arg, ConstInt) #if not we_are_translated(): # if not isinstance(arg, int): From noreply at buildbot.pypy.org Wed Jan 18 13:44:44 2012 From: noreply at buildbot.pypy.org (hager) Date: Wed, 18 Jan 2012 13:44:44 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: remove comment Message-ID: <20120118124444.3F367820D8@wyvern.cs.uni-duesseldorf.de> Author: hager Branch: ppc-jit-backend Changeset: r51443:35aaf3680c92 Date: 2012-01-18 13:44 +0100 http://bitbucket.org/pypy/pypy/changeset/35aaf3680c92/ Log: remove comment diff --git a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py b/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py +++
b/pypy/jit/backend/ppc/ppcgen/helper/regalloc.py @@ -37,7 +37,6 @@ self.possibly_free_vars_for_op(op) self.free_temp_vars() res = self.force_allocate_reg(op.result) - #self.possibly_free_var(op.result) return [l0, l1, res] return f From noreply at buildbot.pypy.org Wed Jan 18 15:00:00 2012 From: noreply at buildbot.pypy.org (justinpeel) Date: Wed, 18 Jan 2012 15:00:00 +0100 (CET) Subject: [pypy-commit] pypy default: fix a bug in to_str for ndim=1, size=1 arrays. also affected any multi-dim array where the last dimension is 1. tests included. Message-ID: <20120118140000.E44F2820D8@wyvern.cs.uni-duesseldorf.de> Author: Justin Peel Branch: Changeset: r51444:db7b9b0aa08d Date: 2012-01-17 22:13 -0700 http://bitbucket.org/pypy/pypy/changeset/db7b9b0aa08d/ Log: fix a bug in to_str for ndim=1, size=1 arrays. also affected any multi-dim array where the last dimension is 1. tests included. diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -919,14 +919,14 @@ if size < 1: builder.append('[]') return - elif size == 1: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True ndims = len(self.shape) + if ndims == 0: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return i = 0 builder.append('[') if ndims > 1: diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1441,9 +1441,11 @@ assert repr(a) == "array(0.0)" a = array(0.2) assert repr(a) == "array(0.2)" + a = array([2]) + assert repr(a) == "array([2])" def test_repr_multi(self): - from _numpypy import arange, zeros + from _numpypy import arange, zeros, array a = zeros((3, 4)) assert repr(a) ==
'''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1466,6 +1468,9 @@ [498, 999], [499, 1000], [500, 1001]])''' + a = arange(2).reshape((2,1)) + assert repr(a) == '''array([[0], + [1]])''' def test_repr_slice(self): from _numpypy import array, zeros From noreply at buildbot.pypy.org Wed Jan 18 15:53:42 2012 From: noreply at buildbot.pypy.org (edelsohn) Date: Wed, 18 Jan 2012 15:53:42 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: Remove boxes from getinteriorfield_gc, setinteriorfield_gc. Message-ID: <20120118145342.E2078820D8@wyvern.cs.uni-duesseldorf.de> Author: edelsohn Branch: ppc-jit-backend Changeset: r51445:db20886769cb Date: 2012-01-18 09:53 -0500 http://bitbucket.org/pypy/pypy/changeset/db20886769cb/ Log: Remove boxes from getinteriorfield_gc, setinteriorfield_gc. diff --git a/pypy/jit/backend/ppc/ppcgen/regalloc.py b/pypy/jit/backend/ppc/ppcgen/regalloc.py --- a/pypy/jit/backend/ppc/ppcgen/regalloc.py +++ b/pypy/jit/backend/ppc/ppcgen/regalloc.py @@ -597,17 +597,14 @@ t = unpack_interiorfielddescr(op.getdescr()) ofs, itemsize, fieldsize, sign = t args = op.getarglist() - base_loc, base_box = self._ensure_value_is_boxed(op.getarg(0), args) - index_loc, index_box = self._ensure_value_is_boxed(op.getarg(1), args) + base_loc = self._ensure_value_is_boxed(op.getarg(0), args) + index_loc = self._ensure_value_is_boxed(op.getarg(1), args) c_ofs = ConstInt(ofs) if _check_imm_arg(c_ofs): ofs_loc = imm(ofs) else: - ofs_loc, ofs_box = self._ensure_value_is_boxed(c_ofs, [base_box]) - self.possibly_free_var(ofs_box) + ofs_loc = self._ensure_value_is_boxed(c_ofs, args) self.possibly_free_vars_for_op(op) - self.possibly_free_var(base_box) - self.possibly_free_var(index_box) self.free_temp_vars() result_loc = self.force_allocate_reg(op.result) self.possibly_free_var(op.result) @@ -618,18 +615,14 @@ t = unpack_interiorfielddescr(op.getdescr()) ofs, itemsize, fieldsize, sign = t args = op.getarglist() - base_loc, base_box = 
self._ensure_value_is_boxed(op.getarg(0), args) - index_loc, index_box = self._ensure_value_is_boxed(op.getarg(1), args) - value_loc, value_box = self._ensure_value_is_boxed(op.getarg(2), args) + base_loc = self._ensure_value_is_boxed(op.getarg(0), args) + index_loc = self._ensure_value_is_boxed(op.getarg(1), args) + value_loc = self._ensure_value_is_boxed(op.getarg(2), args) c_ofs = ConstInt(ofs) if _check_imm_arg(c_ofs): ofs_loc = imm(ofs) else: - ofs_loc, ofs_box = self._ensure_value_is_boxed(c_ofs, [base_box]) - self.possibly_free_var(ofs_box) - self.possibly_free_var(base_box) - self.possibly_free_var(index_box) - self.possibly_free_var(value_box) + ofs_loc = self._ensure_value_is_boxed(c_ofs, args) return [base_loc, index_loc, value_loc, ofs_loc, imm(ofs), imm(itemsize), imm(fieldsize)] From noreply at buildbot.pypy.org Wed Jan 18 17:09:18 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Wed, 18 Jan 2012 17:09:18 +0100 (CET) Subject: [pypy-commit] pypy py3k: Implement py3k's tuple unpacking in the code generator Message-ID: <20120118160918.58F07820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51446:f4cb22c63136 Date: 2012-01-18 13:55 +0100 http://bitbucket.org/pypy/pypy/changeset/f4cb22c63136/ Log: Implement py3k's tuple unpacking in the code generator diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -833,11 +833,22 @@ self.update_position(tup.lineno) elt_count = len(tup.elts) if tup.elts is not None else 0 if tup.ctx == ast.Store: - self.emit_op_arg(ops.UNPACK_SEQUENCE, elt_count) + star_pos = -1 + if elt_count > 0: + for i, elt in enumerate(tup.elts): + if isinstance(elt, ast.Starred): + star_pos = i + if star_pos > -1: + self.emit_op_arg(ops.UNPACK_EX, star_pos | (elt_count-star_pos-1)<<8) + else: + self.emit_op_arg(ops.UNPACK_SEQUENCE, elt_count) 
self.visit_sequence(tup.elts) if tup.ctx == ast.Load: self.emit_op_arg(ops.BUILD_TUPLE, elt_count) + def visit_Starred(self, star): + star.value.walkabout(self) + def visit_List(self, l): self.update_position(l.lineno) elt_count = len(l.elts) if l.elts is not None else 0 From noreply at buildbot.pypy.org Wed Jan 18 17:09:19 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Wed, 18 Jan 2012 17:09:19 +0100 (CET) Subject: [pypy-commit] pypy py3k: Implement the UNPACK_EX bytecode which is used by py3k's new tuple unpacking Message-ID: <20120118160919.A0AAA820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51447:6face22d8492 Date: 2012-01-18 14:11 +0100 http://bitbucket.org/pypy/pypy/changeset/6face22d8492/ Log: Implement the UNPACK_EX bytecode which is used by py3k's new tuple unpacking diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -582,7 +582,25 @@ self.pushrevvalues(itemcount, items) def UNPACK_EX(self, oparg, next_instr): - raise NotImplementedError + left = oparg & 0xFF + right = (oparg & 0xFF00) >> 8 + w_iterable = self.popvalue() + + items = self.space.fixedview(w_iterable) + itemcount = len(items) + + i = itemcount - 1 + while i >= itemcount-right: + self.pushvalue(items[i]) + i -= 1 + + self.pushvalue(self.space.newlist(items[left:itemcount-right])) + + i = left - 1 + while i >= 0: + self.pushvalue(items[i]) + i -= 1 + def STORE_ATTR(self, nameindex, next_instr): "obj.attributename = newvalue" From noreply at buildbot.pypy.org Wed Jan 18 17:09:20 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Wed, 18 Jan 2012 17:09:20 +0100 (CET) Subject: [pypy-commit] pypy py3k: Check if the destination doesn't contain multiple stars Message-ID: <20120118160920.DD12F820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51448:d0b94548df3b Date: 2012-01-18 14:22 +0100 
http://bitbucket.org/pypy/pypy/changeset/d0b94548df3b/ Log: Check if the destination doesn't contain multiple stars diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -837,6 +837,8 @@ if elt_count > 0: for i, elt in enumerate(tup.elts): if isinstance(elt, ast.Starred): + if star_pos != -1: + self.error("two starred expressions in assignment", tup) star_pos = i if star_pos > -1: self.emit_op_arg(ops.UNPACK_EX, star_pos | (elt_count-star_pos-1)<<8) From noreply at buildbot.pypy.org Wed Jan 18 17:53:46 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 17:53:46 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) fix for guard_nonnull_class which was emitting two guards. Message-ID: <20120118165346.E6965820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51449:48cc79e4f981 Date: 2012-01-18 17:32 +0100 http://bitbucket.org/pypy/pypy/changeset/48cc79e4f981/ Log: (arigo, bivab) fix for guard_nonnull_class which was emitting two guards. Additionally make sure that the offset is a small enough imm value. 
diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -267,38 +267,28 @@ # from ../x86/assembler.py:1265 def emit_op_guard_class(self, op, arglocs, regalloc, fcond): self._cmp_guard_class(op, arglocs, regalloc, fcond) + self._emit_guard(op, arglocs[3:], c.EQ, save_exc=False) return fcond def emit_op_guard_nonnull_class(self, op, arglocs, regalloc, fcond): - offset = self.cpu.vtable_offset + self.mc.CMP_ri(arglocs[0].value, 1) + self._cmp_guard_class(op, arglocs, regalloc, c.HS) + self._emit_guard(op, arglocs[3:], c.EQ, save_exc=False) + return fcond - self.mc.CMP_ri(arglocs[0].value, 0) + def _cmp_guard_class(self, op, locs, regalloc, fcond): + offset = locs[2] if offset is not None: - self._emit_guard(op, arglocs[3:], c.NE, save_exc=False) + self.mc.LDR_ri(r.ip.value, locs[0].value, offset.value, cond=fcond) + self.mc.CMP_rr(r.ip.value, locs[1].value, cond=fcond) else: raise NotImplementedError - self._cmp_guard_class(op, arglocs, regalloc, fcond) - return fcond + # XXX port from x86 backend once gc support is in place def emit_op_guard_not_invalidated(self, op, locs, regalloc, fcond): return self._emit_guard(op, locs, fcond, save_exc=False, is_guard_not_invalidated=True) - def _cmp_guard_class(self, op, locs, regalloc, fcond): - offset = locs[2] - if offset is not None: - if offset.is_imm(): - self.mc.LDR_ri(r.ip.value, locs[0].value, offset.value) - else: - assert offset.is_reg() - self.mc.LDR_rr(r.ip.value, locs[0].value, offset.value) - self.mc.CMP_rr(r.ip.value, locs[1].value) - else: - raise NotImplementedError - # XXX port from x86 backend once gc support is in place - - return self._emit_guard(op, locs[3:], c.EQ, save_exc=False) - class OpAssembler(object): diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -646,13 +646,14 @@ 
boxes = op.getarglist() x = self._ensure_value_is_boxed(boxes[0], boxes) - y = self.get_scratch_reg(REF, forbidden_vars=boxes) + y = self.get_scratch_reg(INT, forbidden_vars=boxes) y_val = rffi.cast(lltype.Signed, op.getarg(1).getint()) self.assembler.load(y, imm(y_val)) offset = self.cpu.vtable_offset assert offset is not None - offset_loc = self._ensure_value_is_boxed(ConstInt(offset), boxes) + assert check_imm_arg(offset) + offset_loc = imm(offset) arglocs = self._prepare_guard(op, [x, y, offset_loc]) return arglocs From noreply at buildbot.pypy.org Wed Jan 18 17:53:48 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 17:53:48 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) refactor guard and guard_token handling a bit and get rid of _arm_guard_pos Message-ID: <20120118165348.29CF0820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51450:1caec07bb02e Date: 2012-01-18 17:33 +0100 http://bitbucket.org/pypy/pypy/changeset/1caec07bb02e/ Log: (arigo, bivab) refactor guard and guard_token handling a bit and get rid of _arm_guard_pos diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -674,7 +674,7 @@ return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def _find_failure_recovery_bytecode(self, faildescr): - guard_stub_addr = faildescr._arm_recovery_stub_offset + guard_stub_addr = faildescr._arm_failure_recovery_block if guard_stub_addr == 0: # This case should be prevented by the logic in compile.py: # look for CNT_BUSY_FLAG, which disables tracing from a guard @@ -709,34 +709,25 @@ tok.faillocs, save_exc=tok.save_exc) # store info on the descr descr._arm_current_frame_depth = tok.faillocs[0].getint() - descr._arm_guard_pos = pos def process_pending_guards(self, block_start): clt = self.current_clt for tok in self.pending_guards: descr = tok.descr assert 
isinstance(descr, AbstractFailDescr) - jump_target = tok.pos_recovery_stub - relative_target = jump_target - tok.offset - - addr = block_start + tok.offset - stub_addr = block_start + jump_target - - descr._arm_recovery_stub_offset = stub_addr - + failure_recovery_pos = block_start + tok.pos_recovery_stub + descr._arm_failure_recovery_block = failure_recovery_pos + relative_offset = tok.pos_recovery_stub - tok.offset + guard_pos = block_start + tok.offset if not tok.is_invalidate: - #patch the guard jumpt to the stub + # patch the guard jumpt to the stub # overwrite the generate NOP with a B_offs to the pos of the # stub mc = ARMv7Builder() - mc.B_offs(relative_target, c.get_opposite_of(tok.fcond)) - mc.copy_to_raw_memory(addr) + mc.B_offs(relative_offset, c.get_opposite_of(tok.fcond)) + mc.copy_to_raw_memory(guard_pos) else: - # GUARD_NOT_INVALIDATED, record an entry in - # clt.invalidate_positions of the form: - # (addr-in-the-code-of-the-not-yet-written-jump-target, - # relative-target-to-use) - clt.invalidate_positions.append((addr, relative_target)) + clt.invalidate_positions.append((guard_pos, relative_offset)) def get_asmmemmgr_blocks(self, looptoken): clt = looptoken.compiled_loop_token @@ -875,14 +866,12 @@ self.mc.ASR_ri(resloc.value, resloc.value, 16) def patch_trace(self, faildescr, looptoken, bridge_addr, regalloc): - # The first instruction (word) is not overwritten, because it is the - # one that actually checks the condition b = ARMv7Builder() - adr_jump_offset = faildescr._arm_recovery_stub_offset - assert adr_jump_offset != 0 + patch_addr = faildescr._arm_failure_recovery_block + assert patch_addr != 0 b.B(bridge_addr) - b.copy_to_raw_memory(adr_jump_offset) - faildescr._arm_recovery_stub_offset = 0 + b.copy_to_raw_memory(patch_addr) + faildescr._arm_failure_recovery_block = 0 # regalloc support def load(self, loc, value): From noreply at buildbot.pypy.org Wed Jan 18 17:53:49 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 
17:53:49 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (bivab, arigo) Add test for an operation that does not correctly emit the code for the guard, i.e. emitting two guards for the same operation Message-ID: <20120118165349.61A5C820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51451:8dd9c2555f29 Date: 2012-01-18 17:46 +0100 http://bitbucket.org/pypy/pypy/changeset/8dd9c2555f29/ Log: (bivab, arigo) Add test for an operation that does not correctly emit the code for the guard, i.e. emitting two guards for the same operation diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3273,6 +3273,24 @@ fail = self.cpu.execute_token(looptoken2, -9) assert fail.identifier == 42 + def test_some_issue(self): + t_box, T_box = self.alloc_instance(self.T) + null_box = self.null_instance() + faildescr = BasicFailDescr(42) + operations = [ + ResOperation(rop.GUARD_NONNULL_CLASS, [t_box, T_box], None, + descr=faildescr), + ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(1))] + operations[0].setfailargs([]) + looptoken = JitCellToken() + inputargs = [t_box] + self.cpu.compile_loop(inputargs, operations, looptoken) + operations = [ + ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(99)) + ] + self.cpu.compile_bridge(faildescr, [], operations, looptoken) + fail = self.cpu.execute_token(looptoken, null_box.getref_base()) + assert fail.identifier == 99 class OOtypeBackendTest(BaseBackendTest): From noreply at buildbot.pypy.org Wed Jan 18 17:53:50 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 17:53:50 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge heads Message-ID: <20120118165350.9EFFB820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51452:aee28928eeae Date: 2012-01-18 17:46 +0100 
http://bitbucket.org/pypy/pypy/changeset/aee28928eeae/ Log: merge heads diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -674,7 +674,7 @@ return AsmInfo(ops_offset, startpos + rawstart, codeendpos - startpos) def _find_failure_recovery_bytecode(self, faildescr): - guard_stub_addr = faildescr._arm_recovery_stub_offset + guard_stub_addr = faildescr._arm_failure_recovery_block if guard_stub_addr == 0: # This case should be prevented by the logic in compile.py: # look for CNT_BUSY_FLAG, which disables tracing from a guard @@ -709,34 +709,25 @@ tok.faillocs, save_exc=tok.save_exc) # store info on the descr descr._arm_current_frame_depth = tok.faillocs[0].getint() - descr._arm_guard_pos = pos def process_pending_guards(self, block_start): clt = self.current_clt for tok in self.pending_guards: descr = tok.descr assert isinstance(descr, AbstractFailDescr) - jump_target = tok.pos_recovery_stub - relative_target = jump_target - tok.offset - - addr = block_start + tok.offset - stub_addr = block_start + jump_target - - descr._arm_recovery_stub_offset = stub_addr - + failure_recovery_pos = block_start + tok.pos_recovery_stub + descr._arm_failure_recovery_block = failure_recovery_pos + relative_offset = tok.pos_recovery_stub - tok.offset + guard_pos = block_start + tok.offset if not tok.is_invalidate: - #patch the guard jumpt to the stub + # patch the guard jumpt to the stub # overwrite the generate NOP with a B_offs to the pos of the # stub mc = ARMv7Builder() - mc.B_offs(relative_target, c.get_opposite_of(tok.fcond)) - mc.copy_to_raw_memory(addr) + mc.B_offs(relative_offset, c.get_opposite_of(tok.fcond)) + mc.copy_to_raw_memory(guard_pos) else: - # GUARD_NOT_INVALIDATED, record an entry in - # clt.invalidate_positions of the form: - # (addr-in-the-code-of-the-not-yet-written-jump-target, - # relative-target-to-use) - clt.invalidate_positions.append((addr, 
relative_target)) + clt.invalidate_positions.append((guard_pos, relative_offset)) def get_asmmemmgr_blocks(self, looptoken): clt = looptoken.compiled_loop_token @@ -875,14 +866,12 @@ self.mc.ASR_ri(resloc.value, resloc.value, 16) def patch_trace(self, faildescr, looptoken, bridge_addr, regalloc): - # The first instruction (word) is not overwritten, because it is the - # one that actually checks the condition b = ARMv7Builder() - adr_jump_offset = faildescr._arm_recovery_stub_offset - assert adr_jump_offset != 0 + patch_addr = faildescr._arm_failure_recovery_block + assert patch_addr != 0 b.B(bridge_addr) - b.copy_to_raw_memory(adr_jump_offset) - faildescr._arm_recovery_stub_offset = 0 + b.copy_to_raw_memory(patch_addr) + faildescr._arm_failure_recovery_block = 0 # regalloc support def load(self, loc, value): diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -267,38 +267,28 @@ # from ../x86/assembler.py:1265 def emit_op_guard_class(self, op, arglocs, regalloc, fcond): self._cmp_guard_class(op, arglocs, regalloc, fcond) + self._emit_guard(op, arglocs[3:], c.EQ, save_exc=False) return fcond def emit_op_guard_nonnull_class(self, op, arglocs, regalloc, fcond): - offset = self.cpu.vtable_offset + self.mc.CMP_ri(arglocs[0].value, 1) + self._cmp_guard_class(op, arglocs, regalloc, c.HS) + self._emit_guard(op, arglocs[3:], c.EQ, save_exc=False) + return fcond - self.mc.CMP_ri(arglocs[0].value, 0) + def _cmp_guard_class(self, op, locs, regalloc, fcond): + offset = locs[2] if offset is not None: - self._emit_guard(op, arglocs[3:], c.NE, save_exc=False) + self.mc.LDR_ri(r.ip.value, locs[0].value, offset.value, cond=fcond) + self.mc.CMP_rr(r.ip.value, locs[1].value, cond=fcond) else: raise NotImplementedError - self._cmp_guard_class(op, arglocs, regalloc, fcond) - return fcond + # XXX port from x86 backend once gc support is in place def 
emit_op_guard_not_invalidated(self, op, locs, regalloc, fcond): return self._emit_guard(op, locs, fcond, save_exc=False, is_guard_not_invalidated=True) - def _cmp_guard_class(self, op, locs, regalloc, fcond): - offset = locs[2] - if offset is not None: - if offset.is_imm(): - self.mc.LDR_ri(r.ip.value, locs[0].value, offset.value) - else: - assert offset.is_reg() - self.mc.LDR_rr(r.ip.value, locs[0].value, offset.value) - self.mc.CMP_rr(r.ip.value, locs[1].value) - else: - raise NotImplementedError - # XXX port from x86 backend once gc support is in place - - return self._emit_guard(op, locs[3:], c.EQ, save_exc=False) - class OpAssembler(object): diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -646,13 +646,14 @@ boxes = op.getarglist() x = self._ensure_value_is_boxed(boxes[0], boxes) - y = self.get_scratch_reg(REF, forbidden_vars=boxes) + y = self.get_scratch_reg(INT, forbidden_vars=boxes) y_val = rffi.cast(lltype.Signed, op.getarg(1).getint()) self.assembler.load(y, imm(y_val)) offset = self.cpu.vtable_offset assert offset is not None - offset_loc = self._ensure_value_is_boxed(ConstInt(offset), boxes) + assert check_imm_arg(offset) + offset_loc = imm(offset) arglocs = self._prepare_guard(op, [x, y, offset_loc]) return arglocs From noreply at buildbot.pypy.org Wed Jan 18 17:53:51 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 17:53:51 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: give test a proper name Message-ID: <20120118165351.D76AD820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51453:0e325c9172fb Date: 2012-01-18 17:49 +0100 http://bitbucket.org/pypy/pypy/changeset/0e325c9172fb/ Log: give test a proper name diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ 
b/pypy/jit/backend/test/runner_test.py @@ -3273,7 +3273,7 @@ fail = self.cpu.execute_token(looptoken2, -9) assert fail.identifier == 42 - def test_some_issue(self): + def test_wrong_guard_nonnull_class(self): t_box, T_box = self.alloc_instance(self.T) null_box = self.null_instance() faildescr = BasicFailDescr(42) From noreply at buildbot.pypy.org Wed Jan 18 18:15:27 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 18 Jan 2012 18:15:27 +0100 (CET) Subject: [pypy-commit] pypy default: issue #1005 -- accept 'float' as a dtype Message-ID: <20120118171527.E4F70820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51454:923df65f2286 Date: 2012-01-18 11:15 -0600 http://bitbucket.org/pypy/pypy/changeset/923df65f2286/ Log: issue #1005 -- accept 'float' as a dtype diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -20,7 +20,7 @@ class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] - def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[]): + def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[]): self.itemtype = itemtype self.num = num self.kind = kind @@ -28,6 +28,7 @@ self.char = char self.w_box_type = w_box_type self.alternate_constructors = alternate_constructors + self.aliases = aliases def malloc(self, length): # XXX find out why test_zjit explodes with tracking of allocations @@ -62,7 +63,7 @@ elif space.isinstance_w(w_dtype, space.w_str): name = space.str_w(w_dtype) for dtype in cache.builtin_dtypes: - if dtype.name == name or dtype.char == name: + if dtype.name == name or dtype.char == name or name in dtype.aliases: return dtype else: for dtype in cache.builtin_dtypes: @@ -107,7 +108,7 @@ kind=BOOLLTR, name="bool", char="?", - w_box_type = space.gettypefor(interp_boxes.W_BoolBox), + 
w_box_type=space.gettypefor(interp_boxes.W_BoolBox), alternate_constructors=[space.w_bool], ) self.w_int8dtype = W_Dtype( @@ -116,7 +117,7 @@ kind=SIGNEDLTR, name="int8", char="b", - w_box_type = space.gettypefor(interp_boxes.W_Int8Box) + w_box_type=space.gettypefor(interp_boxes.W_Int8Box) ) self.w_uint8dtype = W_Dtype( types.UInt8(), @@ -124,7 +125,7 @@ kind=UNSIGNEDLTR, name="uint8", char="B", - w_box_type = space.gettypefor(interp_boxes.W_UInt8Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt8Box), ) self.w_int16dtype = W_Dtype( types.Int16(), @@ -132,7 +133,7 @@ kind=SIGNEDLTR, name="int16", char="h", - w_box_type = space.gettypefor(interp_boxes.W_Int16Box), + w_box_type=space.gettypefor(interp_boxes.W_Int16Box), ) self.w_uint16dtype = W_Dtype( types.UInt16(), @@ -140,7 +141,7 @@ kind=UNSIGNEDLTR, name="uint16", char="H", - w_box_type = space.gettypefor(interp_boxes.W_UInt16Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt16Box), ) self.w_int32dtype = W_Dtype( types.Int32(), @@ -148,7 +149,7 @@ kind=SIGNEDLTR, name="int32", char="i", - w_box_type = space.gettypefor(interp_boxes.W_Int32Box), + w_box_type=space.gettypefor(interp_boxes.W_Int32Box), ) self.w_uint32dtype = W_Dtype( types.UInt32(), @@ -156,7 +157,7 @@ kind=UNSIGNEDLTR, name="uint32", char="I", - w_box_type = space.gettypefor(interp_boxes.W_UInt32Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt32Box), ) if LONG_BIT == 32: name = "int32" @@ -168,7 +169,7 @@ kind=SIGNEDLTR, name=name, char="l", - w_box_type = space.gettypefor(interp_boxes.W_LongBox), + w_box_type=space.gettypefor(interp_boxes.W_LongBox), alternate_constructors=[space.w_int], ) self.w_ulongdtype = W_Dtype( @@ -177,7 +178,7 @@ kind=UNSIGNEDLTR, name="u" + name, char="L", - w_box_type = space.gettypefor(interp_boxes.W_ULongBox), + w_box_type=space.gettypefor(interp_boxes.W_ULongBox), ) self.w_int64dtype = W_Dtype( types.Int64(), @@ -185,7 +186,7 @@ kind=SIGNEDLTR, name="int64", char="q", - w_box_type = 
space.gettypefor(interp_boxes.W_Int64Box), + w_box_type=space.gettypefor(interp_boxes.W_Int64Box), alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( @@ -194,7 +195,7 @@ kind=UNSIGNEDLTR, name="uint64", char="Q", - w_box_type = space.gettypefor(interp_boxes.W_UInt64Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt64Box), ) self.w_float32dtype = W_Dtype( types.Float32(), @@ -202,7 +203,7 @@ kind=FLOATINGLTR, name="float32", char="f", - w_box_type = space.gettypefor(interp_boxes.W_Float32Box), + w_box_type=space.gettypefor(interp_boxes.W_Float32Box), ) self.w_float64dtype = W_Dtype( types.Float64(), @@ -212,6 +213,7 @@ char="d", w_box_type = space.gettypefor(interp_boxes.W_Float64Box), alternate_constructors=[space.w_float], + aliases=["float"], ) self.builtin_dtypes = [ diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,14 +166,11 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) - def test_new(self): - import _numpypy as np - assert np.int_(4) == 4 - assert np.float_(3.4) == 3.4 + def test_aliases(self): + from _numpypy import dtype - def test_pow(self): - from _numpypy import int_ - assert int_(4) ** 2 == 16 + assert dtype("float") is dtype(float) + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): @@ -189,6 +186,15 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): import _numpypy as numpy @@ -318,7 +324,7 @@ else: raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, 
numpy.int64, '9223372036854775808') From noreply at buildbot.pypy.org Wed Jan 18 18:51:22 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 18:51:22 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (arigo, bivab) use get_spp_offset to correctly compute the offset which is currently different for positive and negative stack locations, arg! Message-ID: <20120118175122.9020C820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r51455:d9303a92441b Date: 2012-01-18 09:42 -0800 http://bitbucket.org/pypy/pypy/changeset/d9303a92441b/ Log: (arigo, bivab) use get_spp_offset to correctly compute the offset which is currently different for positive and negative stack locations, arg! diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -40,7 +40,7 @@ from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop -from pypy.jit.backend.ppc.ppcgen.locations import StackLocation +from pypy.jit.backend.ppc.ppcgen.locations import StackLocation, get_spp_offset memcpy_fn = rffi.llexternal('memcpy', [llmemory.Address, llmemory.Address, rffi.SIZE_T], lltype.Void, @@ -212,7 +212,7 @@ if group == self.FLOAT_TYPE: assert 0, "not implemented yet" else: - start = spp_loc - (stack_location + 1) * WORD + start = spp_loc + get_spp_offset(stack_location) value = rffi.cast(rffi.LONGP, start)[0] else: # REG_LOC reg = ord(enc[i]) From noreply at buildbot.pypy.org Wed Jan 18 18:51:23 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 18:51:23 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: (arigo, bivab) Add tests for finish and guard that check that fail/finish args passed to the loop on the stack are decoded as they should Message-ID: 
<20120118175123.C957B820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r51456:7a455997891c Date: 2012-01-18 09:48 -0800 http://bitbucket.org/pypy/pypy/changeset/7a455997891c/ Log: (arigo, bivab) Add tests for finish and guard that check that fail/finish args passed to the loop on the stack are decoded as they should diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3359,6 +3359,27 @@ fail = self.cpu.execute_token(looptoken2, -9) assert fail.identifier == 42 + def test_finish_with_long_arglist(self): + boxes = [BoxInt(i) for i in range(30)] + ops = [ResOperation(rop.FINISH, boxes, None, descr=BasicFailDescr(1))] + looptoken = JitCellToken() + self.cpu.compile_loop(boxes, ops, looptoken) + fail = self.cpu.execute_token(looptoken, *range(30)) + assert fail.identifier == 1 + for i in range(30): + assert self.cpu.get_latest_value_int(i) == i + + def test_guard_with_long_arglist(self): + boxes = [BoxInt(i) for i in range(30)] + ops = [ResOperation(rop.GUARD_FALSE, [boxes[1]], None, descr=BasicFailDescr(1)), + ResOperation(rop.FINISH, [], None, descr=BasicFailDescr(2))] + ops[0].setfailargs(boxes) + looptoken = JitCellToken() + self.cpu.compile_loop(boxes, ops, looptoken) + fail = self.cpu.execute_token(looptoken, *range(30)) + assert fail.identifier == 1 + for i in range(30): + assert self.cpu.get_latest_value_int(i) == i class OOtypeBackendTest(BaseBackendTest): From noreply at buildbot.pypy.org Wed Jan 18 18:51:25 2012 From: noreply at buildbot.pypy.org (bivab) Date: Wed, 18 Jan 2012 18:51:25 +0100 (CET) Subject: [pypy-commit] pypy ppc-jit-backend: merge heads Message-ID: <20120118175125.10319820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: ppc-jit-backend Changeset: r51457:326dc978de77 Date: 2012-01-18 09:49 -0800 http://bitbucket.org/pypy/pypy/changeset/326dc978de77/ Log: merge 
heads diff --git a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py --- a/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py +++ b/pypy/jit/backend/ppc/ppcgen/ppc_assembler.py @@ -40,7 +40,7 @@ from pypy.rpython.annlowlevel import llhelper from pypy.rlib.objectmodel import we_are_translated from pypy.rpython.lltypesystem.lloperation import llop -from pypy.jit.backend.ppc.ppcgen.locations import StackLocation +from pypy.jit.backend.ppc.ppcgen.locations import StackLocation, get_spp_offset memcpy_fn = rffi.llexternal('memcpy', [llmemory.Address, llmemory.Address, rffi.SIZE_T], lltype.Void, @@ -212,7 +212,7 @@ if group == self.FLOAT_TYPE: assert 0, "not implemented yet" else: - start = spp_loc - (stack_location + 1) * WORD + start = spp_loc + get_spp_offset(stack_location) value = rffi.cast(rffi.LONGP, start)[0] else: # REG_LOC reg = ord(enc[i]) From noreply at buildbot.pypy.org Wed Jan 18 18:52:17 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 18 Jan 2012 18:52:17 +0100 (CET) Subject: [pypy-commit] pypy default: Issue #1007 -- added fill() method to numpypy arrays Message-ID: <20120118175217.459AF820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51458:36de78269626 Date: 2012-01-18 11:52 -0600 http://bitbucket.org/pypy/pypy/changeset/36de78269626/ Log: Issue #1007 -- added fill() method to numpypy arrays diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -581,7 +581,7 @@ def descr_var(self, space): # var = mean((values - mean(values)) ** 2) w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) - assert isinstance(w_res, BaseArray) + assert isinstance(w_res, BaseArray) w_res = w_res.descr_pow(space, space.wrap(2)) assert isinstance(w_res, BaseArray) return w_res.descr_mean(space, space.w_None) @@ -590,6 +590,10 @@ # std(v) = 
sqrt(var(v)) return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_fill(self, space, w_value): + concr = self.get_concrete_or_scalar() + concr.fill(space, w_value) + def descr_nonzero(self, space): if self.size > 1: raise OperationError(space.w_ValueError, space.wrap( @@ -682,6 +686,9 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + def create_sig(self): return signature.ScalarSignature(self.dtype) @@ -788,7 +795,7 @@ Intermediate class for performing binary operations. """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -828,7 +835,7 @@ def __init__(self, shape, dtype, left, right): Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) - + def create_sig(self): lsig = self.left.create_sig() rsig = self.right.create_sig() @@ -847,7 +854,7 @@ when we'll make AxisReduce lazy """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) @@ -1061,6 +1068,9 @@ array.setslice(space, self) return array + def fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): def create_sig(self): @@ -1273,6 +1283,8 @@ var = interp2app(BaseArray.descr_var), std = interp2app(BaseArray.descr_std), + fill = interp2app(BaseArray.descr_fill), + copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1302,6 +1302,28 @@ assert 
isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_fill(self): + from _numpypy import array + + a = array([1, 2, 3]) + a.fill(10) + assert (a == [10, 10, 10]).all() + a.fill(False) + assert (a == [0, 0, 0]).all() + + b = a[:1] + b.fill(4) + assert (b == [4]).all() + assert (a == [4, 0, 0]).all() + + c = b + b + c.fill(27) + assert (c == [27]).all() + + d = array(10) + d.fill(100) + assert d == 100 + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): From noreply at buildbot.pypy.org Wed Jan 18 18:55:37 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 18 Jan 2012 18:55:37 +0100 (CET) Subject: [pypy-commit] pypy default: add an arraylen_gc to this so the test passes, it's killed by the backend (I wish we did DCE earlier) Message-ID: <20120118175537.EE994820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51459:8f0e9fcd8de3 Date: 2012-01-18 11:55 -0600 http://bitbucket.org/pypy/pypy/changeset/8f0e9fcd8de3/ Log: add an arraylen_gc to this so the test passes, it's killed by the backend (I wish we did DCE earlier) diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -349,7 +349,8 @@ self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, - 'int_eq': 1, 'guard_false': 1, 'jump': 1}) + 'int_eq': 1, 'guard_false': 1, 'jump': 1, + 'arraylen_gc': 1}) def define_virtual_slice(): return """ From noreply at buildbot.pypy.org Wed Jan 18 20:05:00 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 18 Jan 2012 20:05:00 +0100 (CET) Subject: [pypy-commit] pypy py3k: (arigo, romain, antocuni) aaargh. enumerate did not convert the repr of the items, resulting in a lot of confusion if you enumerate a list of instances, leading to incorrect C code. 
Fix it, and add a sanity check inside rtuple.newtuple Message-ID: <20120118190500.F33C7820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51460:8df1c2074547 Date: 2012-01-18 20:03 +0100 http://bitbucket.org/pypy/pypy/changeset/8df1c2074547/ Log: (arigo, romain, antocuni) aaargh. enumerate did not convert the repr of the items, resulting in a lot of confusion if you enumerate a list of instances, leading to incorrect C code. Fix it, and add a sanity check inside rtuple.newtuple diff --git a/pypy/rpython/lltypesystem/rtuple.py b/pypy/rpython/lltypesystem/rtuple.py --- a/pypy/rpython/lltypesystem/rtuple.py +++ b/pypy/rpython/lltypesystem/rtuple.py @@ -27,6 +27,10 @@ def newtuple(cls, llops, r_tuple, items_v): # items_v should have the lowleveltype of the internal reprs + assert len(r_tuple.items_r) == len(items_v) + for r_item, v_item in zip(r_tuple.items_r, items_v): + assert r_item.lowleveltype == v_item.concretetype + # if len(r_tuple.items_r) == 0: return inputconst(Void, ()) # a Void empty tuple c1 = inputconst(Void, r_tuple.lowleveltype.TO) diff --git a/pypy/rpython/rrange.py b/pypy/rpython/rrange.py --- a/pypy/rpython/rrange.py +++ b/pypy/rpython/rrange.py @@ -204,7 +204,10 @@ v_index = hop.gendirectcall(self.ll_getnextindex, v_enumerate) hop2 = hop.copy() hop2.args_r = [self.r_baseiter] + r_item_src = self.r_baseiter.r_list.external_item_repr + r_item_dst = hop.r_result.items_r[1] v_item = self.r_baseiter.rtype_next(hop2) + v_item = hop.llops.convertvar(v_item, r_item_src, r_item_dst) return hop.r_result.newtuple(hop.llops, hop.r_result, [v_index, v_item]) diff --git a/pypy/rpython/test/test_rrange.py b/pypy/rpython/test/test_rrange.py --- a/pypy/rpython/test/test_rrange.py +++ b/pypy/rpython/test/test_rrange.py @@ -169,6 +169,22 @@ res = self.interpret(fn, [2]) assert res == 789 + def test_enumerate_instances(self): + class A: + pass + def fn(n): + a = A() + b = A() + a.k = 10 + b.k = 20 + for i, x in enumerate([a, b]): + if i 
== n: + return x.k + return 5 + res = self.interpret(fn, [1]) + assert res == 20 + + class TestLLtype(BaseTestRrange, LLRtypeMixin): from pypy.rpython.lltypesystem import rrange From noreply at buildbot.pypy.org Wed Jan 18 20:05:02 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 18 Jan 2012 20:05:02 +0100 (CET) Subject: [pypy-commit] pypy default: (arigo, romain, antocuni) aaargh. enumerate did not convert the repr of the items, resulting in a lot of confusion if you enumerate a list of instances, leading to incorrect C code. Fix it, and add a sanity check inside rtuple.newtuple Message-ID: <20120118190502.38F42820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r51461:a5b5795a3edd Date: 2012-01-18 20:03 +0100 http://bitbucket.org/pypy/pypy/changeset/a5b5795a3edd/ Log: (arigo, romain, antocuni) aaargh. enumerate did not convert the repr of the items, resulting in a lot of confusion if you enumerate a list of instances, leading to incorrect C code. Fix it, and add a sanity check inside rtuple.newtuple diff --git a/pypy/rpython/lltypesystem/rtuple.py b/pypy/rpython/lltypesystem/rtuple.py --- a/pypy/rpython/lltypesystem/rtuple.py +++ b/pypy/rpython/lltypesystem/rtuple.py @@ -27,6 +27,10 @@ def newtuple(cls, llops, r_tuple, items_v): # items_v should have the lowleveltype of the internal reprs + assert len(r_tuple.items_r) == len(items_v) + for r_item, v_item in zip(r_tuple.items_r, items_v): + assert r_item.lowleveltype == v_item.concretetype + # if len(r_tuple.items_r) == 0: return inputconst(Void, ()) # a Void empty tuple c1 = inputconst(Void, r_tuple.lowleveltype.TO) diff --git a/pypy/rpython/rrange.py b/pypy/rpython/rrange.py --- a/pypy/rpython/rrange.py +++ b/pypy/rpython/rrange.py @@ -204,7 +204,10 @@ v_index = hop.gendirectcall(self.ll_getnextindex, v_enumerate) hop2 = hop.copy() hop2.args_r = [self.r_baseiter] + r_item_src = self.r_baseiter.r_list.external_item_repr + r_item_dst = hop.r_result.items_r[1] v_item = 
self.r_baseiter.rtype_next(hop2) + v_item = hop.llops.convertvar(v_item, r_item_src, r_item_dst) return hop.r_result.newtuple(hop.llops, hop.r_result, [v_index, v_item]) diff --git a/pypy/rpython/test/test_rrange.py b/pypy/rpython/test/test_rrange.py --- a/pypy/rpython/test/test_rrange.py +++ b/pypy/rpython/test/test_rrange.py @@ -169,6 +169,22 @@ res = self.interpret(fn, [2]) assert res == 789 + def test_enumerate_instances(self): + class A: + pass + def fn(n): + a = A() + b = A() + a.k = 10 + b.k = 20 + for i, x in enumerate([a, b]): + if i == n: + return x.k + return 5 + res = self.interpret(fn, [1]) + assert res == 20 + + class TestLLtype(BaseTestRrange, LLRtypeMixin): from pypy.rpython.lltypesystem import rrange From noreply at buildbot.pypy.org Wed Jan 18 20:05:03 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 18 Jan 2012 20:05:03 +0100 (CET) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120118190503.6FCE8820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r51462:c85a96246d2f Date: 2012-01-18 20:04 +0100 http://bitbucket.org/pypy/pypy/changeset/c85a96246d2f/ Log: merge heads diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -20,7 +20,7 @@ class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] - def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[]): + def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[]): self.itemtype = itemtype self.num = num self.kind = kind @@ -28,6 +28,7 @@ self.char = char self.w_box_type = w_box_type self.alternate_constructors = alternate_constructors + self.aliases = aliases def malloc(self, length): # XXX find out why test_zjit explodes with tracking of allocations @@ -62,7 +63,7 @@ elif space.isinstance_w(w_dtype, space.w_str): name = 
space.str_w(w_dtype) for dtype in cache.builtin_dtypes: - if dtype.name == name or dtype.char == name: + if dtype.name == name or dtype.char == name or name in dtype.aliases: return dtype else: for dtype in cache.builtin_dtypes: @@ -107,7 +108,7 @@ kind=BOOLLTR, name="bool", char="?", - w_box_type = space.gettypefor(interp_boxes.W_BoolBox), + w_box_type=space.gettypefor(interp_boxes.W_BoolBox), alternate_constructors=[space.w_bool], ) self.w_int8dtype = W_Dtype( @@ -116,7 +117,7 @@ kind=SIGNEDLTR, name="int8", char="b", - w_box_type = space.gettypefor(interp_boxes.W_Int8Box) + w_box_type=space.gettypefor(interp_boxes.W_Int8Box) ) self.w_uint8dtype = W_Dtype( types.UInt8(), @@ -124,7 +125,7 @@ kind=UNSIGNEDLTR, name="uint8", char="B", - w_box_type = space.gettypefor(interp_boxes.W_UInt8Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt8Box), ) self.w_int16dtype = W_Dtype( types.Int16(), @@ -132,7 +133,7 @@ kind=SIGNEDLTR, name="int16", char="h", - w_box_type = space.gettypefor(interp_boxes.W_Int16Box), + w_box_type=space.gettypefor(interp_boxes.W_Int16Box), ) self.w_uint16dtype = W_Dtype( types.UInt16(), @@ -140,7 +141,7 @@ kind=UNSIGNEDLTR, name="uint16", char="H", - w_box_type = space.gettypefor(interp_boxes.W_UInt16Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt16Box), ) self.w_int32dtype = W_Dtype( types.Int32(), @@ -148,7 +149,7 @@ kind=SIGNEDLTR, name="int32", char="i", - w_box_type = space.gettypefor(interp_boxes.W_Int32Box), + w_box_type=space.gettypefor(interp_boxes.W_Int32Box), ) self.w_uint32dtype = W_Dtype( types.UInt32(), @@ -156,7 +157,7 @@ kind=UNSIGNEDLTR, name="uint32", char="I", - w_box_type = space.gettypefor(interp_boxes.W_UInt32Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt32Box), ) if LONG_BIT == 32: name = "int32" @@ -168,7 +169,7 @@ kind=SIGNEDLTR, name=name, char="l", - w_box_type = space.gettypefor(interp_boxes.W_LongBox), + w_box_type=space.gettypefor(interp_boxes.W_LongBox), alternate_constructors=[space.w_int], ) 
self.w_ulongdtype = W_Dtype( @@ -177,7 +178,7 @@ kind=UNSIGNEDLTR, name="u" + name, char="L", - w_box_type = space.gettypefor(interp_boxes.W_ULongBox), + w_box_type=space.gettypefor(interp_boxes.W_ULongBox), ) self.w_int64dtype = W_Dtype( types.Int64(), @@ -185,7 +186,7 @@ kind=SIGNEDLTR, name="int64", char="q", - w_box_type = space.gettypefor(interp_boxes.W_Int64Box), + w_box_type=space.gettypefor(interp_boxes.W_Int64Box), alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( @@ -194,7 +195,7 @@ kind=UNSIGNEDLTR, name="uint64", char="Q", - w_box_type = space.gettypefor(interp_boxes.W_UInt64Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt64Box), ) self.w_float32dtype = W_Dtype( types.Float32(), @@ -202,7 +203,7 @@ kind=FLOATINGLTR, name="float32", char="f", - w_box_type = space.gettypefor(interp_boxes.W_Float32Box), + w_box_type=space.gettypefor(interp_boxes.W_Float32Box), ) self.w_float64dtype = W_Dtype( types.Float64(), @@ -212,6 +213,7 @@ char="d", w_box_type = space.gettypefor(interp_boxes.W_Float64Box), alternate_constructors=[space.w_float], + aliases=["float"], ) self.builtin_dtypes = [ diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -581,7 +581,7 @@ def descr_var(self, space): # var = mean((values - mean(values)) ** 2) w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) - assert isinstance(w_res, BaseArray) + assert isinstance(w_res, BaseArray) w_res = w_res.descr_pow(space, space.wrap(2)) assert isinstance(w_res, BaseArray) return w_res.descr_mean(space, space.w_None) @@ -590,6 +590,10 @@ # std(v) = sqrt(var(v)) return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_fill(self, space, w_value): + concr = self.get_concrete_or_scalar() + concr.fill(space, w_value) + def descr_nonzero(self, space): if self.size > 1: raise 
OperationError(space.w_ValueError, space.wrap( @@ -682,6 +686,9 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + def create_sig(self): return signature.ScalarSignature(self.dtype) @@ -788,7 +795,7 @@ Intermediate class for performing binary operations. """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -828,7 +835,7 @@ def __init__(self, shape, dtype, left, right): Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) - + def create_sig(self): lsig = self.left.create_sig() rsig = self.right.create_sig() @@ -847,7 +854,7 @@ when we'll make AxisReduce lazy """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) @@ -1061,6 +1068,9 @@ array.setslice(space, self) return array + def fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): def create_sig(self): @@ -1273,6 +1283,8 @@ var = interp2app(BaseArray.descr_var), std = interp2app(BaseArray.descr_std), + fill = interp2app(BaseArray.descr_fill), + copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,14 +166,11 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) - def test_new(self): - import _numpypy as np - assert np.int_(4) == 4 - assert np.float_(3.4) == 3.4 + def test_aliases(self): + from _numpypy import dtype - def test_pow(self): - from _numpypy import int_ 
- assert int_(4) ** 2 == 16 + assert dtype("float") is dtype(float) + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): @@ -189,6 +186,15 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): import _numpypy as numpy @@ -318,7 +324,7 @@ else: raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, numpy.int64, '9223372036854775808') diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1302,6 +1302,28 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_fill(self): + from _numpypy import array + + a = array([1, 2, 3]) + a.fill(10) + assert (a == [10, 10, 10]).all() + a.fill(False) + assert (a == [0, 0, 0]).all() + + b = a[:1] + b.fill(4) + assert (b == [4]).all() + assert (a == [4, 0, 0]).all() + + c = b + b + c.fill(27) + assert (c == [27]).all() + + d = array(10) + d.fill(100) + assert d == 100 + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -349,7 +349,8 @@ self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, - 'int_eq': 1, 'guard_false': 1, 'jump': 1}) + 'int_eq': 1, 'guard_false': 1, 'jump': 1, + 'arraylen_gc': 1}) def define_virtual_slice(): return """ From pullrequests-noreply at 
bitbucket.org Wed Jan 18 21:40:07 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Wed, 18 Jan 2012 20:40:07 -0000 Subject: [pypy-commit] [REJECTED] Pull request #20 for pypy/pypy: Improvements to the JVM backend, this time with tests In-Reply-To: References: Message-ID: <20120118204007.24052.8366@bitbucket01.managed.contegix.com> Pull request #20 has been rejected by Antonio Cuni. benol/pypy had changes to be pulled into pypy/pypy. https://bitbucket.org/pypy/pypy/pull-request/20/improvements-to-the-jvm-backend-this-time manually merged Rejected changes: 8efdb1515b2c by Michał Bendowski: "Handle the 'jit_is_virtual' opcode by always returning False" f1d52bd62e15 by Michał Bendowski: "Add a missing cast from Unsigned to UnsignedLongLong in the JVM backend." 8ff039cadd19 by Michał Bendowski: "Declare oo_primitives that should implement some rffi operations. For now the a…" 25d4d323cb5f by Michał Bendowski: "Fix compute_unique_id to support built-ins in ootype. Otherwise the translation…" 7f09531dd6a9 by Michał Bendowski: "Fix userspace builders in ootype Implement the getlength() method of StringBuil…" 578e69f273f9 by Michał Bendowski: "Add files generated by PyCharm to .hgignore" The pull request has been closed. -- This is an issue notification from bitbucket.org. You are receiving this either because you are participating in a pull request, or you are following it. From pullrequests-noreply at bitbucket.org Wed Jan 18 21:42:39 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Wed, 18 Jan 2012 20:42:39 -0000 Subject: [pypy-commit] [REJECTED] Pull request #19 for pypy/pypy: Improvements to the JVM backend In-Reply-To: References: Message-ID: <20120118204239.8312.39133@bitbucket01.managed.contegix.com> Pull request #19 has been rejected by Antonio Cuni. benol/pypy had changes to be pulled into pypy/pypy. 
https://bitbucket.org/pypy/pypy/pull-request/19/improvements-to-the-jvm-backend manually merged Rejected changes: 2e1b33862b2c by Michał Bendowski: "Add files generated by PyCharm to .hgignore" 372628723c37 by Michał Bendowski: "Declare oo_primitives that should implement some rffi operations. For now the a…" 30e6d59d4333 by Michał Bendowski: "Add a missing cast from Unsigned to UnsignedLongLong." 0ab09a670d4a by Michał Bendowski: "Handle the 'jit_is_virtual' opcode by always returning False" 9031e9c4ad78 by Michał Bendowski: "Fix compute_unique_id to support built-ins. Otherwise the translation fails bec…" ee14653b5fa0 by Michał Bendowski: "Fix userspace builders in ootype Implement the ll_getlength() method of StringB…" The pull request has been closed. -- This is an issue notification from bitbucket.org. You are receiving this either because you are participating in a pull request, or you are following it. From noreply at buildbot.pypy.org Wed Jan 18 21:57:43 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 18 Jan 2012 21:57:43 +0100 (CET) Subject: [pypy-commit] pypy numpypy-reshape: add (still not enough) failing tests Message-ID: <20120118205743.179DE820D8@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-reshape Changeset: r51463:95362a60e6b8 Date: 2012-01-18 22:57 +0200 http://bitbucket.org/pypy/pypy/changeset/95362a60e6b8/ Log: add (still not enough) failing tests diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -396,6 +396,11 @@ assert (a == [1000, 1, 2, 3, 1000, 5, 6, 7, 1000, 9, 10, 11]).all() a = zeros((4, 2, 3)) a.shape = (12, 2) + assert array(1).reshape((1, 1, 1, 1)).shape == (1, 1, 1, 1) + assert array([2, 2, 2]).reshape((1, 1, 1, -1)).shape == (1, 1, 1, 3) + a=numpypy.array(1) + a.shape = (1, 1, 1) + assert a.shape == (1, 1, 1) def test_slice_reshape(self): from 
_numpypy import zeros, arange From noreply at buildbot.pypy.org Wed Jan 18 23:43:34 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 18 Jan 2012 23:43:34 +0100 (CET) Subject: [pypy-commit] pypy default: Added ztranslation tests for _codecs Message-ID: <20120118224334.7BEE5820D8@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51464:0b188f082267 Date: 2012-01-18 15:42 -0600 http://bitbucket.org/pypy/pypy/changeset/0b188f082267/ Log: Added ztranslation tests for _codecs diff --git a/pypy/module/_codecs/test/test_ztranslation.py b/pypy/module/_codecs/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/_codecs/test/test_ztranslation.py @@ -0,0 +1,5 @@ +from pypy.objspace.fake.checkmodule import checkmodule + + +def test__codecs_translates(): + checkmodule('_codecs') From noreply at buildbot.pypy.org Wed Jan 18 23:43:35 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Wed, 18 Jan 2012 23:43:35 +0100 (CET) Subject: [pypy-commit] pypy numpypy-remove-scalar: Remove scalar. It needs some new support in signatures though. Message-ID: <20120118224335.B919982110@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpypy-remove-scalar Changeset: r51465:6cb068520d24 Date: 2012-01-18 16:40 -0600 http://bitbucket.org/pypy/pypy/changeset/6cb068520d24/ Log: Remove scalar. It needs some new support in signatures though. 
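A minimal pure-Python sketch of the convert_to_array() dispatch this commit moves to: bare scalars no longer get a dedicated Scalar class but become zero-dimensional arrays. The class and helper below are simplified stand-ins for the RPython originals, not the real interp-level code.

```python
# Sketch only: NDimArray stands in for W_NDimArray, and the dispatch
# mirrors convert_to_array() from the diff (array -> unchanged,
# sequence -> 1-d array, scalar -> 0-d array with one element).

class NDimArray:
    def __init__(self, shape, items):
        self.shape = shape      # [] means a zero-dimensional (scalar) array
        self.items = items

def convert_to_array(obj):
    if isinstance(obj, NDimArray):
        return obj                                   # already an array
    if isinstance(obj, (list, tuple)):
        return NDimArray([len(obj)], list(obj))      # sequence -> 1-d array
    # a bare scalar becomes a 0-d array holding a single element
    return NDimArray([], [obj])

a = convert_to_array(3.5)
assert a.shape == []            # scalar case: zero-dimensional
assert a.items[0] == 3.5
b = convert_to_array([1, 2, 3])
assert b.shape == [3]
assert convert_to_array(b) is b
```

Folding the scalar case into the array hierarchy is what lets the commit delete Scalar's duplicate fill/copy/tolist paths, at the cost of the new signature support the log mentions.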
diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py
--- a/pypy/module/micronumpy/compile.py
+++ b/pypy/module/micronumpy/compile.py
@@ -6,11 +6,10 @@
 import re
 
 from pypy.interpreter.baseobjspace import InternalSpaceCache, W_Root
-from pypy.module.micronumpy import interp_boxes
+from pypy.module.micronumpy import interp_boxes, interp_ufuncs
 from pypy.module.micronumpy.interp_dtype import get_dtype_cache
-from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray,
-    scalar_w, W_NDimArray, array)
-from pypy.module.micronumpy import interp_ufuncs
+from pypy.module.micronumpy.interp_numarray import (BaseArray, W_NDimArray,
+    convert_to_array, array)
 from pypy.rlib.objectmodel import specialize, instantiate
@@ -54,10 +53,10 @@
         self.fromcache = InternalSpaceCache(self).getorbuild
 
     def issequence_w(self, w_obj):
-        return isinstance(w_obj, ListObject) or isinstance(w_obj, W_NDimArray)
+        return isinstance(w_obj, ListObject) or isinstance(w_obj, BaseArray)
 
     def isinstance_w(self, w_obj, w_tp):
-        return w_obj.tp == w_tp
+        return not isinstance(w_obj, BaseArray) and w_obj.tp == w_tp
 
     def decode_index4(self, w_idx, size):
         if isinstance(w_idx, IntObject):
@@ -260,8 +259,7 @@
         w_rhs = self.rhs.execute(interp)
         if not isinstance(w_lhs, BaseArray):
             # scalar
-            dtype = get_dtype_cache(interp.space).w_float64dtype
-            w_lhs = scalar_w(interp.space, dtype, w_lhs)
+            w_lhs = convert_to_array(interp.space, w_lhs)
         assert isinstance(w_lhs, BaseArray)
         if self.name == '+':
             w_res = w_lhs.descr_add(interp.space, w_rhs)
@@ -270,7 +268,6 @@
         elif self.name == '-':
             w_res = w_lhs.descr_sub(interp.space, w_rhs)
         elif self.name == '->':
-            assert not isinstance(w_rhs, Scalar)
             if isinstance(w_rhs, FloatObject):
                 w_rhs = IntObject(int(w_rhs.floatval))
             assert isinstance(w_lhs, BaseArray)
@@ -280,7 +277,7 @@
         if (not isinstance(w_res, BaseArray) and
             not isinstance(w_res, interp_boxes.W_GenericBox)):
             dtype = get_dtype_cache(interp.space).w_float64dtype
-            w_res = scalar_w(interp.space, dtype, w_res)
+            w_res = convert_to_array(interp.space, w_res, dtype=dtype)
         return w_res
 
     def __repr__(self):
@@ -398,17 +395,7 @@
                 w_res = neg.call(interp.space, [arr])
             else:
                 assert False # unreachable code
-            if isinstance(w_res, BaseArray):
-                return w_res
-            if isinstance(w_res, FloatObject):
-                dtype = get_dtype_cache(interp.space).w_float64dtype
-            elif isinstance(w_res, BoolObject):
-                dtype = get_dtype_cache(interp.space).w_booldtype
-            elif isinstance(w_res, interp_boxes.W_GenericBox):
-                dtype = w_res.get_dtype(interp.space)
-            else:
-                dtype = None
-            return scalar_w(interp.space, dtype, w_res)
+            return w_res
         else:
             raise WrongFunctionName
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -1,15 +1,16 @@
 from pypy.interpreter.baseobjspace import Wrappable
 from pypy.interpreter.error import OperationError, operationerrfmt
-from pypy.interpreter.gateway import interp2app, NoneNotWrapped
+from pypy.interpreter.gateway import interp2app, NoneNotWrapped, unwrap_spec
 from pypy.interpreter.typedef import TypeDef, GetSetProperty
 from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature
+from pypy.module.micronumpy.interp_iter import (ArrayIterator, OneDimIterator,
+    SkipLastAxisIterator)
 from pypy.module.micronumpy.strides import calculate_slice_strides
 from pypy.rlib import jit
+from pypy.rlib.rstring import StringBuilder
 from pypy.rpython.lltypesystem import lltype, rffi
 from pypy.tool.sourcetools import func_with_new_name
-from pypy.rlib.rstring import StringBuilder
-from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\
-    SkipLastAxisIterator
+
 
 numpy_driver = jit.JitDriver(
     greens=['shapelen', 'sig'],
@@ -272,10 +273,7 @@
 
     def _binop_right_impl(ufunc_name):
         def impl(self, space, w_other):
-            w_other = scalar_w(space,
-                interp_ufuncs.find_dtype_for_scalar(space, w_other, self.find_dtype()),
-                w_other
-            )
+            w_other = convert_to_array(space, w_other, dtype=self.find_dtype())
             return getattr(interp_ufuncs.get(space), ufunc_name).call(space, [w_other, self])
         return func_with_new_name(impl, "binop_right_%s_impl" % ufunc_name)
@@ -375,8 +373,7 @@
     descr_argmin = _reduce_argmax_argmin_impl("min")
 
     def descr_dot(self, space, w_other):
-        w_other = convert_to_array(space, w_other)
-        if isinstance(w_other, Scalar):
+        if is_scalar(space, w_other):
             return self.descr_mul(space, w_other)
         else:
             w_res = self.descr_mul(space, w_other)
@@ -399,8 +396,6 @@
     def descr_set_shape(self, space, w_iterable):
         new_shape = get_shape_from_iterable(space, self.size, w_iterable)
-        if isinstance(self, Scalar):
-            return
         self.get_concrete().setshape(space, new_shape)
 
     def descr_get_size(self, space):
@@ -560,8 +555,7 @@
     def descr_tolist(self, space):
         if len(self.shape) == 0:
-            assert isinstance(self, Scalar)
-            return self.value.descr_tolist(space)
+            return self.getitem(0).descr_tolist(space)
         w_result = space.newlist([])
         for i in range(self.shape[0]):
             space.call_method(w_result, "append",
@@ -650,50 +644,23 @@
     def supports_fast_slicing(self):
         return False
 
-def convert_to_array(space, w_obj):
+def convert_to_array(space, w_obj, dtype=None):
     if isinstance(w_obj, BaseArray):
+        assert dtype is None
         return w_obj
     elif space.issequence_w(w_obj):
         # Convert to array.
-        return array(space, w_obj, w_order=None)
+        return array(space, w_obj, w_order=None, w_dtype=dtype)
     else:
         # If it's a scalar
-        dtype = interp_ufuncs.find_dtype_for_scalar(space, w_obj)
-        return scalar_w(space, dtype, w_obj)
+        if dtype is None:
+            dtype = interp_ufuncs.find_dtype_for_scalar(space, w_obj)
+        arr = W_NDimArray(1, [], dtype)
+        arr.setitem(0, dtype.coerce(space, w_obj))
+        return arr
 
-def scalar_w(space, dtype, w_obj):
-    return Scalar(dtype, dtype.coerce(space, w_obj))
-
-class Scalar(BaseArray):
-    """
-    Intermediate class representing a literal.
-    """
-    size = 1
-    _attrs_ = ["dtype", "value", "shape"]
-
-    def __init__(self, dtype, value):
-        self.shape = []
-        BaseArray.__init__(self, [])
-        self.dtype = dtype
-        self.value = value
-
-    def find_dtype(self):
-        return self.dtype
-
-    def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False):
-        builder.append(self.dtype.itemtype.str_format(self.value))
-
-    def copy(self, space):
-        return Scalar(self.dtype, self.value)
-
-    def fill(self, space, w_value):
-        self.value = self.dtype.coerce(space, w_value)
-
-    def create_sig(self):
-        return signature.ScalarSignature(self.dtype)
-
-    def get_concrete_or_scalar(self):
-        return self
+def is_scalar(space, w_obj):
+    return not isinstance(w_obj, BaseArray) and not space.issequence_w(w_obj)
 
 
 class VirtualArray(BaseArray):
@@ -884,7 +851,7 @@
         return self
 
     def supports_fast_slicing(self):
-        return self.order == 'C' and self.strides[-1] == 1
+        return self.order == 'C' and bool(self.strides) and self.strides[-1] == 1
 
     def find_dtype(self):
         return self.dtype
@@ -1069,7 +1036,7 @@
         return array
 
     def fill(self, space, w_value):
-        self.setslice(space, scalar_w(space, self.dtype, w_value))
+        self.setslice(space, convert_to_array(space, w_value))
 
 
 class ViewArray(ConcreteArray):
@@ -1129,9 +1096,9 @@
     """ A class representing contiguous array. We know that each iteration
     by say ufunc will increase the data index by one
     """
-    def setitem(self, item, value):
+    def setitem(self, idx, value):
         self.invalidated()
-        self.dtype.setitem(self.storage, item, value)
+        self.dtype.setitem(self.storage, idx, value)
 
     def setshape(self, space, new_shape):
         self.shape = new_shape
@@ -1165,7 +1132,7 @@
         dtype = space.interp_w(interp_dtype.W_Dtype,
             space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype)
         )
-        return scalar_w(space, dtype, w_item_or_iterable)
+        return convert_to_array(space, w_item_or_iterable, dtype=dtype)
     if w_order is None:
         order = 'C'
     else:
@@ -1217,9 +1184,9 @@
     return space.wrap(arr)
 
 def dot(space, w_obj, w_obj2):
+    if is_scalar(space, w_obj):
+        return convert_to_array(space, w_obj2).descr_dot(space, w_obj)
     w_arr = convert_to_array(space, w_obj)
-    if isinstance(w_arr, Scalar):
-        return convert_to_array(space, w_obj2).descr_dot(space, w_arr)
     return w_arr.descr_dot(space, w_obj2)
 
 BaseArray.typedef = TypeDef(
@@ -1307,6 +1274,9 @@
         self.iter = OneDimIterator(arr.start, self.strides[0],
                                    self.shape[0])
 
+    def descr_iter(self):
+        return self
+
     def descr_next(self, space):
         if self.iter.done():
             raise OperationError(space.w_StopIteration, space.w_None)
@@ -1314,12 +1284,10 @@
         self.iter = self.iter.next(self.shapelen)
         return result
 
-    def descr_iter(self):
-        return self
 
 W_FlatIterator.typedef = TypeDef(
     'flatiter',
+    __iter__ = interp2app(W_FlatIterator.descr_iter),
    next = interp2app(W_FlatIterator.descr_next),
-    __iter__ = interp2app(W_FlatIterator.descr_iter),
 )
 W_FlatIterator.acceptable_as_base_class = False
diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py
--- a/pypy/module/micronumpy/interp_ufuncs.py
+++ b/pypy/module/micronumpy/interp_ufuncs.py
@@ -3,12 +3,13 @@
 from pypy.interpreter.gateway import interp2app
 from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty
 from pypy.module.micronumpy import interp_boxes, interp_dtype
-from pypy.module.micronumpy.signature import ReduceSignature,\
-    find_sig, new_printable_location, AxisReduceSignature, ScalarSignature
+from pypy.module.micronumpy.signature import (ReduceSignature,
+    AxisReduceSignature, ScalarSignature, new_printable_location, find_sig)
 from pypy.rlib import jit
 from pypy.rlib.rarithmetic import LONG_BIT
 from pypy.tool.sourcetools import func_with_new_name
 
+
 reduce_driver = jit.JitDriver(
     greens=['shapelen', "sig"],
     virtualizables=["frame"],
@@ -114,8 +115,8 @@
         return self.reduce(space, w_obj, False, False, w_dim)
 
     def reduce(self, space, w_obj, multidim, promote_to_largest, w_dim):
-        from pypy.module.micronumpy.interp_numarray import convert_to_array, \
-            Scalar
+        from pypy.module.micronumpy.interp_numarray import convert_to_array
+
         if self.argcount != 2:
             raise OperationError(space.w_ValueError, space.wrap("reduce only "
                 "supported for binary functions"))
@@ -124,7 +125,7 @@
         obj = convert_to_array(space, w_obj)
         if dim >= len(obj.shape):
             raise OperationError(space.w_ValueError, space.wrap("axis(=%d) out of bounds" % dim))
-        if isinstance(obj, Scalar):
+        if not obj.shape:
             raise OperationError(space.w_TypeError, space.wrap("cannot reduce "
                 "on a scalar"))
@@ -155,9 +156,9 @@
         return self.reduce_loop(shapelen, sig, frame, value, obj, dtype)
 
     def do_axis_reduce(self, obj, dtype, dim):
-        from pypy.module.micronumpy.interp_numarray import AxisReduce,\
-             W_NDimArray
-
+        from pypy.module.micronumpy.interp_numarray import (AxisReduce,
+            W_NDimArray)
+
         shape = obj.shape[0:dim] + obj.shape[dim + 1:len(obj.shape)]
         size = 1
         for s in shape:
@@ -231,17 +232,22 @@
     def call(self, space, args_w):
         from pypy.module.micronumpy.interp_numarray import (Call1,
-            convert_to_array, Scalar)
+            convert_to_array, is_scalar)
         [w_obj] = args_w
-        w_obj = convert_to_array(space, w_obj)
+        scalar = is_scalar(space, w_obj)
+        if scalar:
+            w_obj_dtype = find_dtype_for_scalar(space, w_obj)
+        else:
+            w_obj = convert_to_array(space, w_obj)
+            w_obj_dtype = w_obj.find_dtype()
         res_dtype = find_unaryop_result_dtype(space,
-            w_obj.find_dtype(),
+            w_obj_dtype,
             promote_to_float=self.promote_to_float,
             promote_bools=self.promote_bools,
         )
-        if isinstance(w_obj, Scalar):
-            return self.func(res_dtype, w_obj.value.convert_to(res_dtype))
+        if scalar:
+            return self.func(res_dtype, res_dtype.coerce(space, w_obj))
 
         w_res = Call1(self.func, self.name, w_obj.shape, res_dtype, w_obj)
         w_obj.add_invalidates(w_res)
@@ -260,14 +266,22 @@
         self.comparison_func = comparison_func
 
     def call(self, space, args_w):
-        from pypy.module.micronumpy.interp_numarray import (Call2,
-            convert_to_array, Scalar, shape_agreement)
+        from pypy.module.micronumpy.interp_numarray import (BaseArray, Call2,
+            convert_to_array, is_scalar, shape_agreement)
         [w_lhs, w_rhs] = args_w
-        w_lhs = convert_to_array(space, w_lhs)
-        w_rhs = convert_to_array(space, w_rhs)
+        scalar = is_scalar(space, w_lhs) and is_scalar(space, w_rhs)
+        if scalar:
+            w_lhs_dtype = find_dtype_for_scalar(space, w_lhs)
+            w_rhs_dtype = find_dtype_for_scalar(space, w_rhs)
+        else:
+            w_lhs = convert_to_array(space, w_lhs)
+            w_rhs = convert_to_array(space, w_rhs)
+            w_lhs_dtype = w_lhs.find_dtype()
+            w_rhs_dtype = w_rhs.find_dtype()
+
         calc_dtype = find_binop_result_dtype(space,
-            w_lhs.find_dtype(), w_rhs.find_dtype(),
+            w_lhs_dtype, w_rhs_dtype,
             promote_to_float=self.promote_to_float,
             promote_bools=self.promote_bools,
         )
@@ -275,12 +289,14 @@
             res_dtype = interp_dtype.get_dtype_cache(space).w_booldtype
         else:
             res_dtype = calc_dtype
-        if isinstance(w_lhs, Scalar) and isinstance(w_rhs, Scalar):
+        if scalar:
             return self.func(calc_dtype,
-                w_lhs.value.convert_to(calc_dtype),
-                w_rhs.value.convert_to(calc_dtype)
+                calc_dtype.coerce(space, w_lhs),
+                calc_dtype.coerce(space, w_rhs),
             )
+        assert isinstance(w_lhs, BaseArray)
+        assert isinstance(w_rhs, BaseArray)
         new_shape = shape_agreement(space, w_lhs.shape, w_rhs.shape)
         w_res = Call2(self.func, self.name, new_shape, calc_dtype,
diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py
--- a/pypy/module/micronumpy/signature.py
+++ b/pypy/module/micronumpy/signature.py
@@ -195,11 +195,6 @@
             iter = ConstantIterator()
         iterlist.append(iter)
 
-    def eval(self, frame, arr):
-        from pypy.module.micronumpy.interp_numarray import Scalar
-        assert isinstance(arr, Scalar)
-        return arr.value
-
 class ViewSignature(ArraySignature):
     def debug_repr(self):
         return 'Slice'
@@ -331,7 +326,7 @@
         assert isinstance(arr, Call2)
         lhs = self.left.eval(frame, arr.left).convert_to(self.calc_dtype)
         rhs = self.right.eval(frame, arr.right).convert_to(self.calc_dtype)
-        
+
         return self.binfunc(self.calc_dtype, lhs, rhs)
 
     def debug_repr(self):
@@ -342,7 +337,7 @@
     def _invent_numbering(self, cache, allnumbers):
         self.left._invent_numbering(new_cache(), allnumbers)
         self.right._invent_numbering(cache, allnumbers)
-        
+
     def _create_iter(self, iterlist, arraylist, arr, transforms):
         from pypy.module.micronumpy.interp_numarray import Call2
@@ -397,12 +392,12 @@
 class SliceloopSignature(Call2):
     def eval(self, frame, arr):
         from pypy.module.micronumpy.interp_numarray import Call2
-        
+
         assert isinstance(arr, Call2)
         ofs = frame.iterators[0].offset
         arr.left.setitem(ofs, self.right.eval(frame, arr.right).convert_to(
             self.calc_dtype))
-    
+
     def debug_repr(self):
         return 'SliceLoop(%s, %s, %s)' % (self.name, self.left.debug_repr(),
                                           self.right.debug_repr())
@@ -447,6 +442,6 @@
         assert isinstance(arr, AxisReduce)
         return self.right.eval(frame, arr.right).convert_to(self.calc_dtype)
-    
+
     def debug_repr(self):
         return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr())
diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py
--- a/pypy/module/micronumpy/test/test_base.py
+++ b/pypy/module/micronumpy/test/test_base.py
@@ -1,6 +1,6 @@
 from pypy.conftest import gettestobjspace
 from pypy.module.micronumpy.interp_dtype import get_dtype_cache
-from pypy.module.micronumpy.interp_numarray import W_NDimArray, Scalar
+from pypy.module.micronumpy.interp_numarray import W_NDimArray
 from pypy.module.micronumpy.interp_ufuncs import (find_binop_result_dtype,
         find_unaryop_result_dtype)
@@ -16,7 +16,7 @@
         ar = W_NDimArray(10, [10], dtype=float64_dtype)
         ar2 = W_NDimArray(10, [10], dtype=float64_dtype)
         v1 = ar.descr_add(space, ar)
-        v2 = ar.descr_add(space, Scalar(float64_dtype, 2.0))
+        v2 = ar.descr_add(space, space.wrap(2.0))
         sig1 = v1.find_sig()
         sig2 = v2.find_sig()
         assert v1 is not v2
@@ -26,7 +26,7 @@
         sig1b = ar2.descr_add(space, ar).find_sig()
         assert sig1b.left.array_no != sig1b.right.array_no
         assert sig1b is not sig1
-        v3 = ar.descr_add(space, Scalar(float64_dtype, 1.0))
+        v3 = ar.descr_add(space, space.wrap(1.0))
         sig3 = v3.find_sig()
         assert sig2 is sig3
         v4 = ar.descr_add(space, ar)
diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py
--- a/pypy/module/micronumpy/test/test_compile.py
+++ b/pypy/module/micronumpy/test/test_compile.py
@@ -135,7 +135,7 @@
         r
        """
        interp = self.run(code)
-        assert interp.results[0].value.value == 15
+        assert interp.results[0].value == 15
 
     def test_sum2(self):
         code = """
@@ -144,8 +144,7 @@
         sum(b)
         """
         interp = self.run(code)
-        assert interp.results[0].value.value == 30 * (30 - 1)
-
+        assert interp.results[0].value == 30 * (30 - 1)
 
     def test_array_write(self):
         code = """
@@ -163,7 +162,7 @@
         b = a + a
         min(b)
         """)
-        assert interp.results[0].value.value == -24
+        assert interp.results[0].value == -24
 
     def test_max(self):
         interp = self.run("""
@@ -172,7 +171,7 @@
         b = a + a
         max(b)
         """)
-        assert interp.results[0].value.value == 256
+        assert interp.results[0].value == 256
 
     def test_slice(self):
         interp = self.run("""
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -945,7 +945,7 @@
         assert debug_repr(a) == 'Array'
         assert debug_repr(a + a) == 'Call2(add, Array, Array)'
         assert debug_repr(a[::2]) == 'Slice'
-        assert debug_repr(a + 2) == 'Call2(add, Array, Scalar)'
+        assert debug_repr(a + 2) == 'Call2(add, Array, Array)'
         assert debug_repr(a + a.flat) == 'Call2(add, Array, Slice)'
         assert debug_repr(sin(a)) == 'Call1(sin, Array)'

From noreply at buildbot.pypy.org  Wed Jan 18 23:43:36 2012
From: noreply at buildbot.pypy.org (alex_gaynor)
Date: Wed, 18 Jan 2012 23:43:36 +0100 (CET)
Subject: [pypy-commit] pypy default: merged default
Message-ID: <20120118224336.E8E67820D8@wyvern.cs.uni-duesseldorf.de>

Author: Alex Gaynor
Branch: 
Changeset: r51466:f254dc780358
Date: 2012-01-18 16:43 -0600
http://bitbucket.org/pypy/pypy/changeset/f254dc780358/

Log:	merged default

diff --git a/pypy/module/_codecs/test/test_ztranslation.py b/pypy/module/_codecs/test/test_ztranslation.py
new file mode 100644
--- /dev/null
+++ b/pypy/module/_codecs/test/test_ztranslation.py
@@ -0,0 +1,5 @@
+from pypy.objspace.fake.checkmodule import checkmodule
+
+
+def test__codecs_translates():
+    checkmodule('_codecs')

From noreply at buildbot.pypy.org  Thu Jan 19 00:11:38 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Thu, 19 Jan 2012 00:11:38 +0100 (CET)
Subject: [pypy-commit] pypy py3k: Test and fix RLock context manager.
Message-ID: <20120118231138.C1A15820D8@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51467:fc8fcd7a4f56
Date: 2012-01-17 21:21 +0100
http://bitbucket.org/pypy/pypy/changeset/fc8fcd7a4f56/

Log:	Test and fix RLock context manager.

diff --git a/pypy/module/thread/os_lock.py b/pypy/module/thread/os_lock.py
--- a/pypy/module/thread/os_lock.py
+++ b/pypy/module/thread/os_lock.py
@@ -234,7 +234,7 @@
         self.acquire_w(space)
         return self
 
-    def descr__exit__(self, space, *args):
+    def descr__exit__(self, space, __args__):
         self.release_w(space)
 
 W_RLock.typedef = TypeDef(
diff --git a/pypy/module/thread/test/test_lock.py b/pypy/module/thread/test/test_lock.py
--- a/pypy/module/thread/test/test_lock.py
+++ b/pypy/module/thread/test/test_lock.py
@@ -122,3 +122,9 @@
             lock.release()
         assert lock._is_owned() is False
 
+    def test_context_manager(self):
+        import _thread
+        lock = _thread.RLock()
+        with lock:
+            assert lock._is_owned() is True
+

From noreply at buildbot.pypy.org  Thu Jan 19 00:11:40 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Thu, 19 Jan 2012 00:11:40 +0100 (CET)
Subject: [pypy-commit] pypy py3k: Fix octal notation in binascii.py
Message-ID: <20120118231140.0511D820D8@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51468:a218f270838c
Date: 2012-01-17 22:18 +0100
http://bitbucket.org/pypy/pypy/changeset/a218f270838c/

Log:	Fix octal notation in binascii.py

diff --git a/lib_pypy/binascii.py b/lib_pypy/binascii.py
--- a/lib_pypy/binascii.py
+++ b/lib_pypy/binascii.py
@@ -68,7 +68,7 @@
                chr(0x20 + (((B << 2) | ((C >> 6) & 0x3)) & 0x3F)),
                chr(0x20 + (( C       ) & 0x3F))])
               for A, B, C in triples_gen(s)]
 
-    return chr(ord(' ') + (length & 077)) + ''.join(result) + '\n'
+    return chr(ord(' ') + (length & 0o77)) + ''.join(result) + '\n'
 
 table_a2b_base64 = {

From noreply at buildbot.pypy.org  Thu Jan 19 00:11:41 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Thu, 19 Jan 2012 00:11:41 +0100 (CET)
Subject: [pypy-commit] pypy py3k: Fixes for the struct module: the builtin part is now named _struct.
Message-ID: <20120118231141.4322B820D8@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51469:8b47ef4c3162
Date: 2012-01-18 22:09 +0100
http://bitbucket.org/pypy/pypy/changeset/8b47ef4c3162/

Log:	Fixes for the struct module: the builtin part is now named _struct.

diff --git a/lib_pypy/struct.py b/lib_pypy/_struct.py
rename from lib_pypy/struct.py
rename to lib_pypy/_struct.py
--- a/lib_pypy/struct.py
+++ b/lib_pypy/_struct.py
@@ -51,7 +51,7 @@
     bytes = [b for b in data[index:index+size]]
     if le == 'little':
         bytes.reverse()
-    number = 0L
+    number = 0
     for b in bytes:
         number = number << 8 | b
     return int(number)
@@ -415,3 +415,7 @@
         raise error("unpack_from requires a buffer of at least %d bytes"
                     % (size,))
     return unpack(fmt, data)
+
+def _clearcache():
+    "Clear the internal cache."
+    # No cache in this implementation
diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py
--- a/pypy/interpreter/baseobjspace.py
+++ b/pypy/interpreter/baseobjspace.py
@@ -1280,11 +1280,14 @@
         return self.str_w(w_obj)
 
     def str_w(self, w_obj):
-        try:
-            return self.unicode_w(w_obj).encode('ascii')
-        except UnicodeEncodeError:
-            w_bytes = self.call_method(w_obj, 'encode', self.wrap('utf-8'))
-            return self.bytes_w(w_bytes)
+        if self.isinstance_w(w_obj, self.w_unicode):
+            try:
+                return self.unicode_w(w_obj).encode('ascii')
+            except UnicodeEncodeError:
+                w_bytes = self.call_method(w_obj, 'encode', self.wrap('utf-8'))
+                return self.bytes_w(w_bytes)
+        else:
+            return w_obj.bytes_w(self)
 
     def bytes_w(self, w_obj):
         return w_obj.bytes_w(self)
diff --git a/pypy/module/struct/__init__.py b/pypy/module/struct/__init__.py
--- a/pypy/module/struct/__init__.py
+++ b/pypy/module/struct/__init__.py
@@ -45,10 +45,13 @@
 
 The variable struct.error is an exception raised on errors."""
 
+    applevel_name = '_struct'
+
     interpleveldefs = {
         'calcsize': 'interp_struct.calcsize',
         'pack': 'interp_struct.pack',
         'unpack': 'interp_struct.unpack',
+        '_clearcache': 'interp_struct.clearcache',
     }
 
     appleveldefs = {
diff --git a/pypy/module/struct/app_struct.py b/pypy/module/struct/app_struct.py
--- a/pypy/module/struct/app_struct.py
+++ b/pypy/module/struct/app_struct.py
@@ -2,7 +2,7 @@
 """
 Application-level definitions for the struct module.
 """
-import struct
+import _struct as struct
 
 class error(Exception):
     """Exception raised on various occasions; argument is a string
@@ -15,7 +15,7 @@
 # XXX inefficient
 def unpack_from(fmt, buf, offset=0):
-    size = struct.calcsize(fmt)
+    size = _struct.calcsize(fmt)
     data = buffer(buf)[offset:offset+size]
     if len(data) != size:
         raise error("unpack_from requires a buffer of at least %d bytes"
diff --git a/pypy/module/struct/interp_struct.py b/pypy/module/struct/interp_struct.py
--- a/pypy/module/struct/interp_struct.py
+++ b/pypy/module/struct/interp_struct.py
@@ -33,3 +33,7 @@
     except StructError, e:
         raise e.at_applevel(space)
     return space.newtuple(fmtiter.result_w[:])
+
+def clearcache(space):
+    "Clear the internal cache."
+    # No cache in this implementation

From noreply at buildbot.pypy.org  Thu Jan 19 00:11:42 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Thu, 19 Jan 2012 00:11:42 +0100 (CET)
Subject: [pypy-commit] pypy py3k: Fix translation
Message-ID: <20120118231142.7C7BB820D8@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51470:af6e253e47d6
Date: 2012-01-18 22:09 +0100
http://bitbucket.org/pypy/pypy/changeset/af6e253e47d6/

Log:	Fix translation

diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py
--- a/pypy/interpreter/pyopcode.py
+++ b/pypy/interpreter/pyopcode.py
@@ -582,26 +582,34 @@
         self.pushrevvalues(itemcount, items)
 
     def UNPACK_EX(self, oparg, next_instr):
+        "a, *b, c = range(10)"
         left = oparg & 0xFF
         right = (oparg & 0xFF00) >> 8
         w_iterable = self.popvalue()
-
         items = self.space.fixedview(w_iterable)
         itemcount = len(items)
-
+        if right < itemcount:
+            count = left + right
+            if count == 1:
+                plural = ''
+            else:
+                plural = 's'
+            raise operationerrfmt(self.space.w_ValueError,
+                "need more than %d value%s to unpack",
+                left + right, plural)
+        right = itemcount - right
+        assert right >= 0
+        # push values in reverse order
         i = itemcount - 1
-        while i >= itemcount-right:
+        while i >= right:
             self.pushvalue(items[i])
             i -= 1
-
-        self.pushvalue(self.space.newlist(items[left:itemcount-right]))
-
+        self.pushvalue(self.space.newlist(items[left:right]))
         i = left - 1
         while i >= 0:
             self.pushvalue(items[i])
             i -= 1
-
     def STORE_ATTR(self, nameindex, next_instr):
         "obj.attributename = newvalue"
         w_attributename = self.getname_w(nameindex)

From noreply at buildbot.pypy.org  Thu Jan 19 01:05:05 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Thu, 19 Jan 2012 01:05:05 +0100 (CET)
Subject: [pypy-commit] pypy numppy-flatitter: add setitem, getitem to flatitter
Message-ID: <20120119000505.55A85820D8@wyvern.cs.uni-duesseldorf.de>

Author: mattip
Branch: numppy-flatitter
Changeset: r51471:60b724406de5
Date: 2012-01-19 01:46 +0200
http://bitbucket.org/pypy/pypy/changeset/60b724406de5/

Log:	add setitem, getitem to flatitter

diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -1299,13 +1299,18 @@
         size = 1
         for sh in arr.shape:
             size *= sh
-        self.strides = [arr.strides[-1]]
-        self.backstrides = [arr.backstrides[-1]]
-        ViewArray.__init__(self, size, [size], arr.dtype, arr.order,
-                           arr)
+        if arr.strides[-1] < arr.strides[0]:
+            self.strides = [arr.strides[-1]]
+            self.backstrides = [arr.backstrides[-1]]
+        else:
+            self.strides = [arr.strides[0]]
+            self.backstrides = [arr.backstrides[0]]
+        ViewArray.__init__(self, size, [size], arr.dtype, order=arr.order,
+                           parent=arr)
         self.shapelen = len(arr.shape)
         self.iter = OneDimIterator(arr.start, self.strides[0],
                                    self.shape[0])
+        self.base = arr
 
     def descr_next(self, space):
         if self.iter.done():
@@ -1317,9 +1322,42 @@
     def descr_iter(self):
         return self
 
+    def descr_getitem(self, space, w_idx):
+        if not space.isinstance_w(w_idx, space.w_int):
+            raise OperationError(space.w_ValueError, space.wrap(
+                "non-integer indexing not supported yet"))
+        _i = space.int_w(w_idx)
+        if _i<0:
+            i = self.size + _i
+        else:
+            i = _i
+        if i >= self.size or i < 0:
+            raise operationerrfmt(space.w_IndexError,
+                "index (%d) out of range (%d<=index<%d",
+                _i, -self.size, self.size)
+        result = self.getitem(self.base.start + i * self.strides[0])
+        return result
+
+    def descr_setitem(self, space, w_idx, w_value):
+        if not space.isinstance_w(w_idx, space.w_int):
+            raise OperationError(space.w_ValueError, space.wrap(
+                "non-integer indexing not supported yet"))
+        _i = space.int_w(w_idx)
+        if _i<0:
+            i = self.size + _i
+        else:
+            i = _i
+        if i >= self.size or i < 0:
+            raise operationerrfmt(space.w_IndexError,
+                "index (%d) out of range (%d<=index<%d",
+                _i, -self.size, self.size)
+        self.setitem(self.base.start + i * self.strides[0], w_value)
+
 W_FlatIterator.typedef = TypeDef(
     'flatiter',
     next = interp2app(W_FlatIterator.descr_next),
     __iter__ = interp2app(W_FlatIterator.descr_iter),
+    __getitem__ = interp2app(W_FlatIterator.descr_getitem),
+    __setitem__ = interp2app(W_FlatIterator.descr_setitem),
 )
 W_FlatIterator.acceptable_as_base_class = False
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -276,6 +276,12 @@
         for i in xrange(5):
             assert a[i] == b[i]
 
+    def test_getitem_nd(self):
+        from _numpypy import arange
+        a = arange(15).reshape(3, 5)
+        assert a[1, 3] == 8
+        assert a.T[1, 2] == 11
+
     def test_setitem(self):
         from _numpypy import array
         a = array(range(5))
@@ -1286,6 +1292,29 @@
         a = ones((2, 2))
         assert list(((a + a).flat)) == [2, 2, 2, 2]
 
+    def test_flatiter_getitem(self):
+        from _numpypy import arange
+        a = arange(10)
+        assert a.flat[3] == 3
+        assert a[2:].flat[3] == 5
+        assert (a + a).flat[3] == 6
+        assert a[::2].flat[3] == 6
+        assert a.reshape(2,5).flat[3] == 3
+        b = a.flat
+        b.next()
+        b.next()
+        b.next()
+        assert b[3] == 3
+        assert b[-2] == 8
+        raises(IndexError, "b[11]")
+        raises(IndexError, "b[-11]")
+
+    def test_flatiter_transpose(self):
+        from _numpypy import arange
+        a = arange(10)
+        skip('out-of-order transformations do not work yet')
+        assert a.reshape(2,5).T.flat[3] == 6
+
     def test_slice_copy(self):
         from _numpypy import zeros
         a = zeros((10, 10))

From notifications-noreply at bitbucket.org  Thu Jan 19 01:54:02 2012
From: notifications-noreply at bitbucket.org (Bitbucket)
Date: Thu, 19 Jan 2012 00:54:02 -0000
Subject: [pypy-commit] Notification: pypy
Message-ID: <20120119005402.27163.74454@bitbucket05.managed.contegix.com>

You have received a notification from David Ripton.

Hi, I forked pypy. My fork is at https://bitbucket.org/dripton/pypy.

--
Disable notifications at https://bitbucket.org/account/notifications/

From noreply at buildbot.pypy.org  Thu Jan 19 10:57:46 2012
From: noreply at buildbot.pypy.org (rguillebert)
Date: Thu, 19 Jan 2012 10:57:46 +0100 (CET)
Subject: [pypy-commit] pypy py3k: Import the Grammar for the new yield from construct
Message-ID: <20120119095746.84751820D8@wyvern.cs.uni-duesseldorf.de>

Author: Romain Guillebert
Branch: py3k
Changeset: r51472:1233713137f2
Date: 2012-01-19 10:52 +0100
http://bitbucket.org/pypy/pypy/changeset/1233713137f2/

Log:	Import the Grammar for the new yield from construct

diff --git a/pypy/interpreter/pyparser/data/Grammar3.2 b/pypy/interpreter/pyparser/data/Grammar3.2
--- a/pypy/interpreter/pyparser/data/Grammar3.2
+++ b/pypy/interpreter/pyparser/data/Grammar3.2
@@ -136,4 +136,5 @@
 
 # not used in grammar, but may appear in "node" passed from Parser to Compiler
 encoding_decl: NAME
 
-yield_expr: 'yield' [testlist]
+yield_expr: 'yield' [yield_arg]
+yield_arg: 'from' test | testlist

From noreply at buildbot.pypy.org  Thu Jan 19 10:57:47 2012
From: noreply at buildbot.pypy.org (rguillebert)
Date: Thu, 19 Jan 2012 10:57:47 +0100 (CET)
Subject: [pypy-commit] pypy py3k: Change the Grammar's indentation, it's easier to diff it with cpython's Grammar now
Message-ID: <20120119095747.C8990820D8@wyvern.cs.uni-duesseldorf.de>

Author: Romain Guillebert
Branch: py3k
Changeset: r51473:2e923cfc9e5e
Date: 2012-01-19 10:57 +0100
http://bitbucket.org/pypy/pypy/changeset/2e923cfc9e5e/

Log:	Change the Grammar's indentation, it's easier to diff it with
	cpython's Grammar now

diff --git a/pypy/interpreter/pyparser/data/Grammar3.2 b/pypy/interpreter/pyparser/data/Grammar3.2
--- a/pypy/interpreter/pyparser/data/Grammar3.2
+++ b/pypy/interpreter/pyparser/data/Grammar3.2
@@ -11,9 +11,9 @@
 # "How to Change Python's Grammar"
 
 # Start symbols for the grammar:
-# single_input is a single interactive statement;
-# file_input is a module or sequence of commands read from an input file;
-# eval_input is the input for the eval() and input() functions.
+#       single_input is a single interactive statement;
+#       file_input is a module or sequence of commands read from an input file;
+#       eval_input is the input for the eval() and input() functions.
 # NB: compound_stmt in single_input is followed by extra NEWLINE!
 single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
 file_input: (NEWLINE | stmt)* ENDMARKER
@@ -70,9 +70,9 @@
 for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
 try_stmt: ('try' ':' suite
            ((except_clause ':' suite)+
-           ['else' ':' suite]
-           ['finally' ':' suite] |
-           'finally' ':' suite))
+            ['else' ':' suite]
+            ['finally' ':' suite] |
+           'finally' ':' suite))
 with_stmt: 'with' with_item (',' with_item)* ':' suite
 with_item: test ['as' expr]
 # NB compile.c makes sure that the default except clause is last

From noreply at buildbot.pypy.org  Thu Jan 19 11:07:07 2012
From: noreply at buildbot.pypy.org (rguillebert)
Date: Thu, 19 Jan 2012 11:07:07 +0100 (CET)
Subject: [pypy-commit] pypy py3k: yield from is a python 3.3 construct
Message-ID: <20120119100707.C31B8820D8@wyvern.cs.uni-duesseldorf.de>

Author: Romain Guillebert
Branch: py3k
Changeset: r51474:8339f8cc6ae1
Date: 2012-01-19 11:06 +0100
http://bitbucket.org/pypy/pypy/changeset/8339f8cc6ae1/

Log:	yield from is a python 3.3 construct

diff --git a/pypy/interpreter/pyparser/data/Grammar3.2 b/pypy/interpreter/pyparser/data/Grammar3.2
--- a/pypy/interpreter/pyparser/data/Grammar3.2
+++ b/pypy/interpreter/pyparser/data/Grammar3.2
@@ -136,5 +136,4 @@
 
 # not used in grammar, but may appear in "node" passed from Parser to Compiler
 encoding_decl: NAME
 
-yield_expr: 'yield' [yield_arg]
-yield_arg: 'from' test | testlist
+yield_expr: 'yield' [testlist]

From noreply at buildbot.pypy.org  Thu Jan 19 11:24:54 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Thu, 19 Jan 2012 11:24:54 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: Planning for today
Message-ID: <20120119102454.EA4EC820D8@wyvern.cs.uni-duesseldorf.de>

Author: David Schneider
Branch: extradoc
Changeset: r4046:079d771c58af
Date: 2012-01-19 11:24 +0100
http://bitbucket.org/pypy/extradoc/changeset/079d771c58af/

Log:	Planning for today

diff --git a/sprintinfo/leysin-winter-2012/planning.txt b/sprintinfo/leysin-winter-2012/planning.txt
--- a/sprintinfo/leysin-winter-2012/planning.txt
+++ b/sprintinfo/leysin-winter-2012/planning.txt
@@ -10,20 +10,21 @@
 Things we want to do
 --------------------
 
-* some skiing (anto, armin)
+* some skiing (DONE)
 
 * review the JVM backend pull request (DONE)
 
-* py3k (romain)
+* py3k (romain, anto)
 
 * ffistruct
 
 * Cython backend
 
-* Debug the ARM backend (bivab)
+* Debug the ARM backend (bivab, armin around)
 
 * STM
-  - refactored the RPython API: mostly done, must adapt targetdemo*.py
+  - refactored the RPython API (DONE)
+  - app-level transaction module (armin, bivab around)
   - start work on the GC
 
 * concurrent-marksweep GC

From noreply at buildbot.pypy.org  Thu Jan 19 12:05:19 2012
From: noreply at buildbot.pypy.org (rguillebert)
Date: Thu, 19 Jan 2012 12:05:19 +0100 (CET)
Subject: [pypy-commit] pypy py3k: (antocuni, romain) implement keyword only args in the astbuilder
Message-ID: <20120119110519.3186E820D8@wyvern.cs.uni-duesseldorf.de>

Author: Romain Guillebert
Branch: py3k
Changeset: r51475:27f5a4ae0d7d
Date: 2012-01-19 12:04 +0100
http://bitbucket.org/pypy/pypy/changeset/27f5a4ae0d7d/

Log:	(antocuni, romain) implement keyword only args in the astbuilder

diff --git a/pypy/interpreter/astcompiler/ast.py b/pypy/interpreter/astcompiler/ast.py
--- a/pypy/interpreter/astcompiler/ast.py
+++ b/pypy/interpreter/astcompiler/ast.py
@@ -2280,18 +2280,22 @@
 
 class arguments(AST):
 
-    def __init__(self, args, vararg, kwarg, defaults):
+    def __init__(self, args, vararg, kwonlyargs, kwarg, defaults):
         self.args = args
         self.w_args = None
         self.vararg = vararg
+        self.kwonlyargs = kwonlyargs
+        self.w_kwonlyargs = None
         self.kwarg = kwarg
         self.defaults = defaults
         self.w_defaults = None
-        self.initialization_state = 15
+        self.initialization_state = 31
 
     def mutate_over(self, visitor):
         if self.args:
             visitor._mutate_sequence(self.args)
+        if self.kwonlyargs:
+            visitor._mutate_sequence(self.kwonlyargs)
         if self.defaults:
             visitor._mutate_sequence(self.defaults)
         return visitor.visit_arguments(self)
@@ -2300,12 +2304,12 @@
         visitor.visit_arguments(self)
 
     def sync_app_attrs(self, space):
-        if (self.initialization_state & ~6) ^ 9:
-            self.missing_field(space, ['args', None, None, 'defaults'], 'arguments')
+        if (self.initialization_state & ~10) ^ 21:
+            self.missing_field(space, ['args', None, 'kwonlyargs', None, 'defaults'], 'arguments')
         else:
             if not self.initialization_state & 2:
                 self.vararg = None
-            if not self.initialization_state & 4:
+            if not self.initialization_state & 8:
                 self.kwarg = None
         w_list = self.w_args
         if w_list is not None:
@@ -2317,6 +2321,16 @@
         if self.args is not None:
             for node in self.args:
                 node.sync_app_attrs(space)
+        w_list = self.w_kwonlyargs
+        if w_list is not None:
+            list_w = space.listview(w_list)
+            if list_w:
+                self.kwonlyargs = [space.interp_w(expr, w_obj) for w_obj in list_w]
+            else:
+                self.kwonlyargs = None
+        if self.kwonlyargs is not None:
+            for node in self.kwonlyargs:
+                node.sync_app_attrs(space)
         w_list = self.w_defaults
         if w_list is not None:
             list_w = space.listview(w_list)
@@ -2727,6 +2741,7 @@
 
     def visit_arguments(self, node):
         self.visit_sequence(node.args)
+        self.visit_sequence(node.kwonlyargs)
         self.visit_sequence(node.defaults)
 
     def visit_keyword(self, node):
@@ -6908,12 +6923,29 @@
         w_self.deldictvalue(space, 'vararg')
     w_self.initialization_state |= 2
 
+def arguments_get_kwonlyargs(space, w_self):
+    if not w_self.initialization_state & 4:
+        typename = space.type(w_self).getname(space)
+        raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwonlyargs')
+    if w_self.w_kwonlyargs is None:
+        if w_self.kwonlyargs is None:
+            list_w = []
+        else:
+            list_w = [space.wrap(node) for node in w_self.kwonlyargs]
+        w_list = space.newlist(list_w)
+        w_self.w_kwonlyargs = w_list
+    return w_self.w_kwonlyargs
+
+def arguments_set_kwonlyargs(space, w_self, w_new_value):
+    w_self.w_kwonlyargs = w_new_value
+    w_self.initialization_state |= 4
+
 def arguments_get_kwarg(space, w_self):
     if w_self.w_dict is not None:
         w_obj = w_self.getdictvalue(space, 'kwarg')
         if w_obj is not None:
             return w_obj
-    if not w_self.initialization_state & 4:
+    if not w_self.initialization_state & 8:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'kwarg')
     return space.wrap(w_self.kwarg)
@@ -6930,10 +6962,10 @@
             w_self.setdictvalue(space, 'kwarg', w_new_value)
             return
     w_self.deldictvalue(space, 'kwarg')
-    w_self.initialization_state |= 4
+    w_self.initialization_state |= 8
 
 def arguments_get_defaults(space, w_self):
-    if not w_self.initialization_state & 8:
+    if not w_self.initialization_state & 16:
         typename = space.type(w_self).getname(space)
         raise operationerrfmt(space.w_AttributeError, "'%s' object has no attribute '%s'", typename, 'defaults')
     if w_self.w_defaults is None:
@@ -6947,17 +6979,18 @@
 
 def arguments_set_defaults(space, w_self, w_new_value):
     w_self.w_defaults = w_new_value
-    w_self.initialization_state |= 8
-
-_arguments_field_unroller = unrolling_iterable(['args', 'vararg', 'kwarg', 'defaults'])
+    w_self.initialization_state |= 16
+
+_arguments_field_unroller = unrolling_iterable(['args', 'vararg', 'kwonlyargs', 'kwarg', 'defaults'])
 
 def arguments_init(space, w_self, __args__):
     w_self = space.descr_self_interp_w(arguments, w_self)
     w_self.w_args = None
+    w_self.w_kwonlyargs = None
     w_self.w_defaults = None
     args_w, kwargs_w = __args__.unpack()
     if args_w:
-        if len(args_w) != 4:
-            w_err = space.wrap("arguments constructor takes either 0 or 4 positional arguments")
+        if len(args_w) != 5:
+            w_err = space.wrap("arguments constructor takes either 0 or 5 positional arguments")
             raise OperationError(space.w_TypeError, w_err)
         i = 0
         for field in _arguments_field_unroller:
@@ -6969,9 +7002,10 @@
 
 arguments.typedef = typedef.TypeDef("arguments",
     AST.typedef,
     __module__='_ast',
-    _fields=_FieldsWrapper(['args', 'vararg', 'kwarg', 'defaults']),
+    _fields=_FieldsWrapper(['args', 'vararg', 'kwonlyargs', 'kwarg', 'defaults']),
     args=typedef.GetSetProperty(arguments_get_args, arguments_set_args, cls=arguments),
    vararg=typedef.GetSetProperty(arguments_get_vararg, arguments_set_vararg, cls=arguments),
+    kwonlyargs=typedef.GetSetProperty(arguments_get_kwonlyargs, arguments_set_kwonlyargs, cls=arguments),
     kwarg=typedef.GetSetProperty(arguments_get_kwarg, arguments_set_kwarg, cls=arguments),
     defaults=typedef.GetSetProperty(arguments_get_defaults, arguments_set_defaults, cls=arguments),
    __new__=interp2app(get_AST_new(arguments)),
diff --git a/pypy/interpreter/astcompiler/astbuilder.py b/pypy/interpreter/astcompiler/astbuilder.py
--- a/pypy/interpreter/astcompiler/astbuilder.py
+++ b/pypy/interpreter/astcompiler/astbuilder.py
@@ -508,13 +508,14 @@
         # and varargslist (lambda definition.
if arguments_node.type == syms.parameters: if len(arguments_node.children) == 2: - return ast.arguments(None, None, None, None) + return ast.arguments(None, None, None, None, None) arguments_node = arguments_node.children[1] i = 0 child_count = len(arguments_node.children) defaults = [] args = [] variable_arg = None + keywordonly_args = None keywords_arg = None have_default = False while i < child_count: @@ -552,13 +553,16 @@ self.check_forbidden_name(arg_name, name_node) name = ast.Name(arg_name, ast.Param, name_node.lineno, name_node.column) - args.append(name) + if keywordonly_args is None: + args.append(name) + else: + keywordonly_args.append(name) i += 2 break elif arg_type == tokens.STAR: name_node = arguments_node.children[i + 1] + keywordonly_args = [] if name_node.type == tokens.COMMA: - # XXX for now i += 2 else: variable_arg = name_node.children[0].value @@ -575,7 +579,7 @@ defaults = None if not args: args = None - return ast.arguments(args, variable_arg, keywords_arg, defaults) + return ast.arguments(args, variable_arg, keywordonly_args, keywords_arg, defaults) def handle_arg_unpacking(self, fplist_node): args = [] @@ -775,7 +779,7 @@ def handle_lambdef(self, lambdef_node): expr = self.handle_expr(lambdef_node.children[-1]) if len(lambdef_node.children) == 3: - args = ast.arguments(None, None, None, None) + args = ast.arguments(None, None, None, None, None) else: args = self.handle_arguments(lambdef_node.children[1]) return ast.Lambda(args, expr, lambdef_node.lineno, lambdef_node.column) diff --git a/pypy/interpreter/astcompiler/test/test_astbuilder.py b/pypy/interpreter/astcompiler/test/test_astbuilder.py --- a/pypy/interpreter/astcompiler/test/test_astbuilder.py +++ b/pypy/interpreter/astcompiler/test/test_astbuilder.py @@ -1185,3 +1185,17 @@ if1, if2 = comps[0].ifs assert isinstance(if1, ast.Name) assert isinstance(if2, ast.Name) + + def test_kwonly_arguments(self): + fn = self.get_first_stmt("def f(a, b, c, *, kwarg): pass") + assert isinstance(fn, 
ast.FunctionDef) + assert len(fn.args.kwonlyargs) == 1 + assert isinstance(fn.args.kwonlyargs[0], ast.expr) + assert fn.args.kwonlyargs[0].id == "kwarg" + + def test_kwonly_arguments_2(self): + fn = self.get_first_stmt("def f(a, b, c, *args, kwarg): pass") + assert isinstance(fn, ast.FunctionDef) + assert len(fn.args.kwonlyargs) == 1 + assert isinstance(fn.args.kwonlyargs[0], ast.expr) + assert fn.args.kwonlyargs[0].id == "kwarg" diff --git a/pypy/interpreter/astcompiler/tools/Python.asdl b/pypy/interpreter/astcompiler/tools/Python.asdl --- a/pypy/interpreter/astcompiler/tools/Python.asdl +++ b/pypy/interpreter/astcompiler/tools/Python.asdl @@ -105,7 +105,7 @@ excepthandler = ExceptHandler(expr? type, identifier? name, stmt* body) attributes(int lineno, int col_offset) - arguments = (expr* args, identifier? vararg, + arguments = (expr* args, identifier? vararg, expr* kwonlyargs, identifier? kwarg, expr* defaults) -- keyword arguments supplied to call From noreply at buildbot.pypy.org Thu Jan 19 12:19:37 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 19 Jan 2012 12:19:37 +0100 (CET) Subject: [pypy-commit] pypy default: (bivab, arigo) Message-ID: <20120119111937.3D45C820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51476:798d7b72c373 Date: 2012-01-19 12:19 +0100 http://bitbucket.org/pypy/pypy/changeset/798d7b72c373/ Log: (bivab, arigo) Test cpu.get_latest_force_token(). It was not tested. Argh. 
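The keyword-only-argument commit above brings PyPy's astbuilder in line with CPython 3, whose stdlib `ast` module exposes the same `kwonlyargs` field on `arguments` nodes. As a rough sketch of the behaviour being implemented — using CPython's own `ast`, not PyPy's interpreter-level one — the new field can be inspected like this:

```python
import ast

# Parse a def that uses the Python 3 "bare star": every parameter
# after the lone * may only be passed by keyword.
tree = ast.parse("def f(a, b, *, kwarg): pass")
fn = tree.body[0]

# Positional and keyword-only parameters live in separate lists on
# the arguments node, matching the asdl change in the diff above.
positional = [a.arg for a in fn.args.args]
kwonly = [a.arg for a in fn.args.kwonlyargs]
print(positional, kwonly)  # ['a', 'b'] ['kwarg']
```

Parameters after the bare `*` can only be passed by keyword, which is why the astbuilder diverts names into `keywordonly_args` once it has seen the STAR token.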
diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -1872,6 +1872,7 @@ values.append(descr) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(1)) + values.append(token) FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Void) func_ptr = llhelper(lltype.Ptr(FUNC), maybe_force) @@ -1902,7 +1903,8 @@ assert fail.identifier == 1 assert self.cpu.get_latest_value_int(0) == 1 assert self.cpu.get_latest_value_int(1) == 10 - assert values == [faildescr, 1, 10] + token = self.cpu.get_latest_force_token() + assert values == [faildescr, 1, 10, token] def test_force_operations_returning_int(self): values = [] @@ -1911,6 +1913,7 @@ self.cpu.force(token) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(2)) + values.append(token) return 42 FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Signed) @@ -1944,7 +1947,8 @@ assert self.cpu.get_latest_value_int(0) == 1 assert self.cpu.get_latest_value_int(1) == 42 assert self.cpu.get_latest_value_int(2) == 10 - assert values == [1, 10] + token = self.cpu.get_latest_force_token() + assert values == [1, 10, token] def test_force_operations_returning_float(self): values = [] @@ -1953,6 +1957,7 @@ self.cpu.force(token) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(2)) + values.append(token) return 42.5 FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Float) @@ -1988,7 +1993,8 @@ x = self.cpu.get_latest_value_float(1) assert longlong.getrealfloat(x) == 42.5 assert self.cpu.get_latest_value_int(2) == 10 - assert values == [1, 10] + token = self.cpu.get_latest_force_token() + assert values == [1, 10, token] def test_call_to_c_function(self): from pypy.rlib.libffi import CDLL, types, ArgChain, FUNCFLAG_CDECL From noreply at buildbot.pypy.org Thu Jan 19 12:28:54 
2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 19 Jan 2012 12:28:54 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) update the fail_force_index when failing a guard Message-ID: <20120119112854.B92DB820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51477:4c73e2cd39cf Date: 2012-01-19 12:21 +0100 http://bitbucket.org/pypy/pypy/changeset/4c73e2cd39cf/ Log: (arigo, bivab) update the fail_force_index when failing a guard diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -257,6 +257,7 @@ to the failboxes. Values for spilled vars and registers are stored on stack at frame_loc """ assert frame_pointer & 1 == 0 + self.fail_force_index = frame_pointer bytecode = rffi.cast(rffi.UCHARP, mem_loc) num = 0 value = 0 From noreply at buildbot.pypy.org Thu Jan 19 12:28:56 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 19 Jan 2012 12:28:56 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge default Message-ID: <20120119112856.E6A52820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51478:faf485aa5523 Date: 2012-01-19 12:26 +0100 http://bitbucket.org/pypy/pypy/changeset/faf485aa5523/ Log: merge default diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -2032,6 +2032,7 @@ values.append(descr) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(1)) + values.append(token) FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Void) func_ptr = llhelper(lltype.Ptr(FUNC), maybe_force) @@ -2062,7 +2063,8 @@ assert fail.identifier == 1 assert self.cpu.get_latest_value_int(0) == 1 assert self.cpu.get_latest_value_int(1) == 10 - assert values == [faildescr, 1, 10] + token = 
self.cpu.get_latest_force_token() + assert values == [faildescr, 1, 10, token] def test_force_operations_returning_int(self): values = [] @@ -2071,6 +2073,7 @@ self.cpu.force(token) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(2)) + values.append(token) return 42 FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Signed) @@ -2104,7 +2107,8 @@ assert self.cpu.get_latest_value_int(0) == 1 assert self.cpu.get_latest_value_int(1) == 42 assert self.cpu.get_latest_value_int(2) == 10 - assert values == [1, 10] + token = self.cpu.get_latest_force_token() + assert values == [1, 10, token] def test_force_operations_returning_float(self): if not self.cpu.supports_floats: @@ -2115,6 +2119,7 @@ self.cpu.force(token) values.append(self.cpu.get_latest_value_int(0)) values.append(self.cpu.get_latest_value_int(2)) + values.append(token) return 42.5 FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Float) @@ -2150,7 +2155,8 @@ x = self.cpu.get_latest_value_float(1) assert longlong.getrealfloat(x) == 42.5 assert self.cpu.get_latest_value_int(2) == 10 - assert values == [1, 10] + token = self.cpu.get_latest_force_token() + assert values == [1, 10, token] def test_call_to_c_function(self): from pypy.rlib.libffi import CDLL, types, ArgChain, FUNCFLAG_CDECL diff --git a/pypy/module/_codecs/test/test_ztranslation.py b/pypy/module/_codecs/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/_codecs/test/test_ztranslation.py @@ -0,0 +1,5 @@ +from pypy.objspace.fake.checkmodule import checkmodule + + +def test__codecs_translates(): + checkmodule('_codecs') diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -20,7 +20,7 @@ class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] - def __init__(self, itemtype, num, kind, name, char, w_box_type, 
alternate_constructors=[]): + def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[]): self.itemtype = itemtype self.num = num self.kind = kind @@ -28,6 +28,7 @@ self.char = char self.w_box_type = w_box_type self.alternate_constructors = alternate_constructors + self.aliases = aliases def malloc(self, length): # XXX find out why test_zjit explodes with tracking of allocations @@ -62,7 +63,7 @@ elif space.isinstance_w(w_dtype, space.w_str): name = space.str_w(w_dtype) for dtype in cache.builtin_dtypes: - if dtype.name == name or dtype.char == name: + if dtype.name == name or dtype.char == name or name in dtype.aliases: return dtype else: for dtype in cache.builtin_dtypes: @@ -107,7 +108,7 @@ kind=BOOLLTR, name="bool", char="?", - w_box_type = space.gettypefor(interp_boxes.W_BoolBox), + w_box_type=space.gettypefor(interp_boxes.W_BoolBox), alternate_constructors=[space.w_bool], ) self.w_int8dtype = W_Dtype( @@ -116,7 +117,7 @@ kind=SIGNEDLTR, name="int8", char="b", - w_box_type = space.gettypefor(interp_boxes.W_Int8Box) + w_box_type=space.gettypefor(interp_boxes.W_Int8Box) ) self.w_uint8dtype = W_Dtype( types.UInt8(), @@ -124,7 +125,7 @@ kind=UNSIGNEDLTR, name="uint8", char="B", - w_box_type = space.gettypefor(interp_boxes.W_UInt8Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt8Box), ) self.w_int16dtype = W_Dtype( types.Int16(), @@ -132,7 +133,7 @@ kind=SIGNEDLTR, name="int16", char="h", - w_box_type = space.gettypefor(interp_boxes.W_Int16Box), + w_box_type=space.gettypefor(interp_boxes.W_Int16Box), ) self.w_uint16dtype = W_Dtype( types.UInt16(), @@ -140,7 +141,7 @@ kind=UNSIGNEDLTR, name="uint16", char="H", - w_box_type = space.gettypefor(interp_boxes.W_UInt16Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt16Box), ) self.w_int32dtype = W_Dtype( types.Int32(), @@ -148,7 +149,7 @@ kind=SIGNEDLTR, name="int32", char="i", - w_box_type = space.gettypefor(interp_boxes.W_Int32Box), + 
w_box_type=space.gettypefor(interp_boxes.W_Int32Box), ) self.w_uint32dtype = W_Dtype( types.UInt32(), @@ -156,7 +157,7 @@ kind=UNSIGNEDLTR, name="uint32", char="I", - w_box_type = space.gettypefor(interp_boxes.W_UInt32Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt32Box), ) if LONG_BIT == 32: name = "int32" @@ -168,7 +169,7 @@ kind=SIGNEDLTR, name=name, char="l", - w_box_type = space.gettypefor(interp_boxes.W_LongBox), + w_box_type=space.gettypefor(interp_boxes.W_LongBox), alternate_constructors=[space.w_int], ) self.w_ulongdtype = W_Dtype( @@ -177,7 +178,7 @@ kind=UNSIGNEDLTR, name="u" + name, char="L", - w_box_type = space.gettypefor(interp_boxes.W_ULongBox), + w_box_type=space.gettypefor(interp_boxes.W_ULongBox), ) self.w_int64dtype = W_Dtype( types.Int64(), @@ -185,7 +186,7 @@ kind=SIGNEDLTR, name="int64", char="q", - w_box_type = space.gettypefor(interp_boxes.W_Int64Box), + w_box_type=space.gettypefor(interp_boxes.W_Int64Box), alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( @@ -194,7 +195,7 @@ kind=UNSIGNEDLTR, name="uint64", char="Q", - w_box_type = space.gettypefor(interp_boxes.W_UInt64Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt64Box), ) self.w_float32dtype = W_Dtype( types.Float32(), @@ -202,7 +203,7 @@ kind=FLOATINGLTR, name="float32", char="f", - w_box_type = space.gettypefor(interp_boxes.W_Float32Box), + w_box_type=space.gettypefor(interp_boxes.W_Float32Box), ) self.w_float64dtype = W_Dtype( types.Float64(), @@ -212,6 +213,7 @@ char="d", w_box_type = space.gettypefor(interp_boxes.W_Float64Box), alternate_constructors=[space.w_float], + aliases=["float"], ) self.builtin_dtypes = [ diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -581,7 +581,7 @@ def descr_var(self, space): # var = mean((values - mean(values)) ** 2) w_res = self.descr_sub(space, self.descr_mean(space, 
space.w_None)) - assert isinstance(w_res, BaseArray) + assert isinstance(w_res, BaseArray) w_res = w_res.descr_pow(space, space.wrap(2)) assert isinstance(w_res, BaseArray) return w_res.descr_mean(space, space.w_None) @@ -590,6 +590,10 @@ # std(v) = sqrt(var(v)) return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_fill(self, space, w_value): + concr = self.get_concrete_or_scalar() + concr.fill(space, w_value) + def descr_nonzero(self, space): if self.size > 1: raise OperationError(space.w_ValueError, space.wrap( @@ -682,6 +686,9 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + def create_sig(self): return signature.ScalarSignature(self.dtype) @@ -788,7 +795,7 @@ Intermediate class for performing binary operations. """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -828,7 +835,7 @@ def __init__(self, shape, dtype, left, right): Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) - + def create_sig(self): lsig = self.left.create_sig() rsig = self.right.create_sig() @@ -847,7 +854,7 @@ when we'll make AxisReduce lazy """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) @@ -919,14 +926,14 @@ if size < 1: builder.append('[]') return - elif size == 1: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True ndims = len(self.shape) + if ndims == 0: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return i = 0 builder.append('[') if ndims > 1: @@ -1061,6 +1068,9 @@ array.setslice(space, self) return array + def 
fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): def create_sig(self): @@ -1273,6 +1283,8 @@ var = interp2app(BaseArray.descr_var), std = interp2app(BaseArray.descr_std), + fill = interp2app(BaseArray.descr_fill), + copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,14 +166,11 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) - def test_new(self): - import _numpypy as np - assert np.int_(4) == 4 - assert np.float_(3.4) == 3.4 + def test_aliases(self): + from _numpypy import dtype - def test_pow(self): - from _numpypy import int_ - assert int_(4) ** 2 == 16 + assert dtype("float") is dtype(float) + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): @@ -189,6 +186,15 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): import _numpypy as numpy @@ -318,7 +324,7 @@ else: raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, numpy.int64, '9223372036854775808') diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1302,6 +1302,28 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_fill(self): + 
from _numpypy import array + + a = array([1, 2, 3]) + a.fill(10) + assert (a == [10, 10, 10]).all() + a.fill(False) + assert (a == [0, 0, 0]).all() + + b = a[:1] + b.fill(4) + assert (b == [4]).all() + assert (a == [4, 0, 0]).all() + + c = b + b + c.fill(27) + assert (c == [27]).all() + + d = array(10) + d.fill(100) + assert d == 100 + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): @@ -1441,9 +1463,11 @@ assert repr(a) == "array(0.0)" a = array(0.2) assert repr(a) == "array(0.2)" + a = array([2]) + assert repr(a) == "array([2])" def test_repr_multi(self): - from _numpypy import arange, zeros + from _numpypy import arange, zeros, array a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1466,6 +1490,9 @@ [498, 999], [499, 1000], [500, 1001]])''' + a = arange(2).reshape((2,1)) + assert repr(a) == '''array([[0], + [1]])''' def test_repr_slice(self): from _numpypy import array, zeros diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -349,7 +349,8 @@ self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, - 'int_eq': 1, 'guard_false': 1, 'jump': 1}) + 'int_eq': 1, 'guard_false': 1, 'jump': 1, + 'arraylen_gc': 1}) def define_virtual_slice(): return """ diff --git a/pypy/rpython/lltypesystem/rtuple.py b/pypy/rpython/lltypesystem/rtuple.py --- a/pypy/rpython/lltypesystem/rtuple.py +++ b/pypy/rpython/lltypesystem/rtuple.py @@ -27,6 +27,10 @@ def newtuple(cls, llops, r_tuple, items_v): # items_v should have the lowleveltype of the internal reprs + assert len(r_tuple.items_r) == len(items_v) + for r_item, v_item in zip(r_tuple.items_r, items_v): + assert r_item.lowleveltype == v_item.concretetype + # if len(r_tuple.items_r) == 0: return inputconst(Void, ()) # a Void empty tuple c1 = inputconst(Void, 
r_tuple.lowleveltype.TO) diff --git a/pypy/rpython/rrange.py b/pypy/rpython/rrange.py --- a/pypy/rpython/rrange.py +++ b/pypy/rpython/rrange.py @@ -204,7 +204,10 @@ v_index = hop.gendirectcall(self.ll_getnextindex, v_enumerate) hop2 = hop.copy() hop2.args_r = [self.r_baseiter] + r_item_src = self.r_baseiter.r_list.external_item_repr + r_item_dst = hop.r_result.items_r[1] v_item = self.r_baseiter.rtype_next(hop2) + v_item = hop.llops.convertvar(v_item, r_item_src, r_item_dst) return hop.r_result.newtuple(hop.llops, hop.r_result, [v_index, v_item]) diff --git a/pypy/rpython/test/test_rrange.py b/pypy/rpython/test/test_rrange.py --- a/pypy/rpython/test/test_rrange.py +++ b/pypy/rpython/test/test_rrange.py @@ -169,6 +169,22 @@ res = self.interpret(fn, [2]) assert res == 789 + def test_enumerate_instances(self): + class A: + pass + def fn(n): + a = A() + b = A() + a.k = 10 + b.k = 20 + for i, x in enumerate([a, b]): + if i == n: + return x.k + return 5 + res = self.interpret(fn, [1]) + assert res == 20 + + class TestLLtype(BaseTestRrange, LLRtypeMixin): from pypy.rpython.lltypesystem import rrange From noreply at buildbot.pypy.org Thu Jan 19 12:28:58 2012 From: noreply at buildbot.pypy.org (bivab) Date: Thu, 19 Jan 2012 12:28:58 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge heads Message-ID: <20120119112858.87BD8820D8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51479:75e7ec3f604f Date: 2012-01-19 12:28 +0100 http://bitbucket.org/pypy/pypy/changeset/75e7ec3f604f/ Log: merge heads diff --git a/pypy/jit/backend/arm/assembler.py b/pypy/jit/backend/arm/assembler.py --- a/pypy/jit/backend/arm/assembler.py +++ b/pypy/jit/backend/arm/assembler.py @@ -257,6 +257,7 @@ to the failboxes. 
Values for spilled vars and registers are stored on stack at frame_loc """ assert frame_pointer & 1 == 0 + self.fail_force_index = frame_pointer bytecode = rffi.cast(rffi.UCHARP, mem_loc) num = 0 value = 0 From noreply at buildbot.pypy.org Thu Jan 19 12:55:31 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 19 Jan 2012 12:55:31 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain): fix test_try to match the new python3 syntax and the relative ast structure Message-ID: <20120119115531.29553820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51480:cafc48ed3976 Date: 2012-01-19 12:18 +0100 http://bitbucket.org/pypy/pypy/changeset/cafc48ed3976/ Log: (antocuni, romain): fix test_try to match the new python3 syntax and the relative ast structure diff --git a/pypy/interpreter/astcompiler/test/test_astbuilder.py b/pypy/interpreter/astcompiler/test/test_astbuilder.py --- a/pypy/interpreter/astcompiler/test/test_astbuilder.py +++ b/pypy/interpreter/astcompiler/test/test_astbuilder.py @@ -311,13 +311,15 @@ assert isinstance(fr.orelse[0].value, ast.Num) def test_try(self): - tr = self.get_first_stmt("try: x\nfinally: pass") + tr = self.get_first_stmt("try: x" + "\n" + + "finally: pass") assert isinstance(tr, ast.TryFinally) assert len(tr.body) == 1 assert isinstance(tr.body[0].value, ast.Name) assert len(tr.finalbody) == 1 assert isinstance(tr.finalbody[0], ast.Pass) - tr = self.get_first_stmt("try: x\nexcept: pass") + tr = self.get_first_stmt("try: x" + "\n" + + "except: pass") assert isinstance(tr, ast.TryExcept) assert len(tr.body) == 1 assert isinstance(tr.body[0].value, ast.Name) @@ -329,7 +331,8 @@ assert len(handler.body) == 1 assert isinstance(handler.body[0], ast.Pass) assert tr.orelse is None - tr = self.get_first_stmt("try: x\nexcept Exception: pass") + tr = self.get_first_stmt("try: x" + "\n" + + "except Exception: pass") assert len(tr.handlers) == 1 handler = tr.handlers[0] assert isinstance(handler.type, 
ast.Name) @@ -337,40 +340,48 @@ assert handler.name is None assert len(handler.body) == 1 assert tr.orelse is None - tr = self.get_first_stmt("try: x\nexcept Exception, e: pass") + tr = self.get_first_stmt("try: x" + "\n" + + "except Exception as e: pass") assert len(tr.handlers) == 1 handler = tr.handlers[0] assert isinstance(handler.type, ast.Name) - assert isinstance(handler.name, ast.Name) - assert handler.name.ctx == ast.Store - assert handler.name.id == "e" + assert handler.type.id == "Exception" + assert handler.name == "e" assert len(handler.body) == 1 - tr = self.get_first_stmt("try: x\nexcept: pass\nelse: 4") + tr = self.get_first_stmt("try: x" + "\n" + + "except: pass" + "\n" + + "else: 4") assert len(tr.body) == 1 assert isinstance(tr.body[0].value, ast.Name) assert len(tr.handlers) == 1 assert isinstance(tr.handlers[0].body[0], ast.Pass) assert len(tr.orelse) == 1 assert isinstance(tr.orelse[0].value, ast.Num) - tr = self.get_first_stmt("try: x\nexcept Exc, a: 5\nexcept F: pass") + tr = self.get_first_stmt("try: x" + "\n" + + "except Exc as a: 5" + "\n" + + "except F: pass") assert len(tr.handlers) == 2 h1, h2 = tr.handlers assert isinstance(h1.type, ast.Name) - assert isinstance(h1.name, ast.Name) + assert h1.name == "a" assert isinstance(h1.body[0].value, ast.Num) assert isinstance(h2.type, ast.Name) assert h2.name is None assert isinstance(h2.body[0], ast.Pass) - tr = self.get_first_stmt("try: x\nexcept Exc as a: 5\nexcept F: pass") + tr = self.get_first_stmt("try: x" + "\n" + + "except Exc as a: 5" + "\n" + + "except F: pass") assert len(tr.handlers) == 2 h1, h2 = tr.handlers assert isinstance(h1.type, ast.Name) - assert isinstance(h1.name, ast.Name) + assert h1.name == "a" assert isinstance(h1.body[0].value, ast.Num) assert isinstance(h2.type, ast.Name) assert h2.name is None assert isinstance(h2.body[0], ast.Pass) - tr = self.get_first_stmt("try: x\nexcept: 4\nfinally: pass") + tr = self.get_first_stmt("try: x" + "\n" + + "except: 4" + "\n" + + 
"finally: pass") assert isinstance(tr, ast.TryFinally) assert len(tr.finalbody) == 1 assert isinstance(tr.finalbody[0], ast.Pass) @@ -382,7 +393,10 @@ assert isinstance(exc.handlers[0].body[0].value, ast.Num) assert len(exc.body) == 1 assert isinstance(exc.body[0].value, ast.Name) - tr = self.get_first_stmt("try: x\nexcept: 4\nelse: 'hi'\nfinally: pass") + tr = self.get_first_stmt("try: x" + "\n" + + "except: 4" + "\n" + + "else: 'hi'" + "\n" + + "finally: pass") assert isinstance(tr, ast.TryFinally) assert len(tr.finalbody) == 1 assert isinstance(tr.finalbody[0], ast.Pass) From noreply at buildbot.pypy.org Thu Jan 19 12:55:32 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 19 Jan 2012 12:55:32 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain): fix test_set_context by removing the case u'str' which is no longer syntactically correct Message-ID: <20120119115532.6568D820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51481:5efd72235ac9 Date: 2012-01-19 12:21 +0100 http://bitbucket.org/pypy/pypy/changeset/5efd72235ac9/ Log: (antocuni, romain): fix test_set_context by removing the case u'str' which is no longer syntactically correct diff --git a/pypy/interpreter/astcompiler/test/test_astbuilder.py b/pypy/interpreter/astcompiler/test/test_astbuilder.py --- a/pypy/interpreter/astcompiler/test/test_astbuilder.py +++ b/pypy/interpreter/astcompiler/test/test_astbuilder.py @@ -705,7 +705,6 @@ ("{x for x in z}", "set comprehension"), ("{x : x for x in z}", "dict comprehension"), ("'str'", "literal"), - ("u'str'", "literal"), ("b'bytes'", "literal"), ("()", "()"), ("23", "literal"), From noreply at buildbot.pypy.org Thu Jan 19 12:55:33 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 19 Jan 2012 12:55:33 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain): fix test_string by using a plain string literal instead of the no-longer valid u'' Message-ID: 
<20120119115533.9DDE1820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51482:d39b62c9983e Date: 2012-01-19 12:22 +0100 http://bitbucket.org/pypy/pypy/changeset/d39b62c9983e/ Log: (antocuni, romain): fix test_string by using a plain string literal instead of the no-longer valid u'' diff --git a/pypy/interpreter/astcompiler/test/test_astbuilder.py b/pypy/interpreter/astcompiler/test/test_astbuilder.py --- a/pypy/interpreter/astcompiler/test/test_astbuilder.py +++ b/pypy/interpreter/astcompiler/test/test_astbuilder.py @@ -1055,7 +1055,7 @@ assert space.eq_w(s.s, space.wrapbytes("hi implicitly extra")) raises(SyntaxError, self.get_first_expr, "b'hello' 'world'") sentence = u"Die Männer ärgen sich!" - source = u"# coding: utf-7\nstuff = u'%s'" % (sentence,) + source = u"# coding: utf-7\nstuff = '%s'" % (sentence,) info = pyparse.CompileInfo("", "exec") tree = self.parser.parse_source(source.encode("utf-7"), info) assert info.encoding == "utf-7" From noreply at buildbot.pypy.org Thu Jan 19 12:55:34 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 19 Jan 2012 12:55:34 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain): 'mostly' fix test_number, by adapting it to the new py3 syntax for number literals. The only failure left is '0000' which still raises a syntax error, fixing it soon Message-ID: <20120119115534.D64ED820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51483:d2ccbdd80be0 Date: 2012-01-19 12:30 +0100 http://bitbucket.org/pypy/pypy/changeset/d2ccbdd80be0/ Log: (antocuni, romain): 'mostly' fix test_number, by adapting it to the new py3 syntax for number literals. 
The only failure left is '0000' which still raises a syntax error, fixing it soon diff --git a/pypy/interpreter/astcompiler/test/test_astbuilder.py b/pypy/interpreter/astcompiler/test/test_astbuilder.py --- a/pypy/interpreter/astcompiler/test/test_astbuilder.py +++ b/pypy/interpreter/astcompiler/test/test_astbuilder.py @@ -1084,14 +1084,11 @@ space = self.space assert space.eq_w(get_num("32"), space.wrap(32)) assert space.eq_w(get_num("32.5"), space.wrap(32.5)) - assert space.eq_w(get_num("32L"), space.newlong(32)) - assert space.eq_w(get_num("32l"), space.newlong(32)) - assert space.eq_w(get_num("0L"), space.newlong(0)) assert space.eq_w(get_num("2"), space.wrap(2)) assert space.eq_w(get_num("13j"), space.wrap(13j)) assert space.eq_w(get_num("13J"), space.wrap(13J)) - assert space.eq_w(get_num("053"), space.wrap(053)) - assert space.eq_w(get_num("00053"), space.wrap(053)) + assert space.eq_w(get_num("0o53"), space.wrap(053)) + assert space.eq_w(get_num("0o0053"), space.wrap(053)) for num in ("0x53", "0X53", "0x0000053", "0X00053"): assert space.eq_w(get_num(num), space.wrap(0x53)) assert space.eq_w(get_num("0Xb0d2"), space.wrap(0xb0d2)) @@ -1100,7 +1097,7 @@ assert space.eq_w(get_num("00000"), space.wrap(0)) assert space.eq_w(get_num("-3"), space.wrap(-3)) assert space.eq_w(get_num("-0"), space.wrap(0)) - assert space.eq_w(get_num("-0xAAAAAAL"), space.wrap(-0xAAAAAAL)) + assert space.eq_w(get_num("-0xAAAAAA"), space.wrap(-0xAAAAAAL)) n = get_num(str(-sys.maxint - 1)) assert space.is_true(space.isinstance(n, space.w_int)) for num in ("0o53", "0O53", "0o0000053", "0O00053"): @@ -1111,6 +1108,12 @@ py.test.raises(SyntaxError, self.get_ast, "0x") py.test.raises(SyntaxError, self.get_ast, "0b") py.test.raises(SyntaxError, self.get_ast, "0o") + py.test.raises(SyntaxError, self.get_ast, "32L") + py.test.raises(SyntaxError, self.get_ast, "32l") + py.test.raises(SyntaxError, self.get_ast, "0L") + py.test.raises(SyntaxError, self.get_ast, "-0xAAAAAAL") + 
py.test.raises(SyntaxError, self.get_ast, "053") + py.test.raises(SyntaxError, self.get_ast, "00053") def check_comprehension(self, brackets, ast_type): def brack(s): From noreply at buildbot.pypy.org Thu Jan 19 12:55:36 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 19 Jan 2012 12:55:36 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain): update the tokenizer to accept 0000* as a literal Message-ID: <20120119115536.1C0FA820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51484:ac227cc49124 Date: 2012-01-19 12:44 +0100 http://bitbucket.org/pypy/pypy/changeset/ac227cc49124/ Log: (antocuni, romain): update the tokenizer to accept 0000* as a literal diff --git a/pypy/interpreter/pyparser/genpytokenize.py b/pypy/interpreter/pyparser/genpytokenize.py --- a/pypy/interpreter/pyparser/genpytokenize.py +++ b/pypy/interpreter/pyparser/genpytokenize.py @@ -73,7 +73,9 @@ decNumber = chain(states, groupStr(states, "123456789"), any(states, makeDigits())) - zero = newArcPair(states, "0") + zero = chain(states, + newArcPair(states, "0"), + any(states, newArcPair(states, "0"))) intNumber = group(states, hexNumber, octNumber, binNumber, decNumber, zero) # ____________________________________________________________ # Exponents diff --git a/pypy/interpreter/pyparser/pytokenize.py b/pypy/interpreter/pyparser/pytokenize.py --- a/pypy/interpreter/pyparser/pytokenize.py +++ b/pypy/interpreter/pyparser/pytokenize.py @@ -25,10 +25,10 @@ accepts = [True, True, True, True, True, True, True, True, True, True, False, True, True, True, True, False, - False, False, True, False, False, False, False, - True, False, True, False, True, False, False, - True, False, False, True, True, True, False, - False, True, False, False, False, True] + False, False, True, False, False, False, True, + False, True, False, True, False, True, False, + False, True, False, False, True, True, True, + False, False, True, False, False, False, True] states = [ # 0 
{'\t': 0, '\n': 13, '\x0c': 0, @@ -110,21 +110,21 @@ 'v': 1, 'w': 1, 'x': 1, 'y': 1, 'z': 1}, # 4 - {'.': 23, '0': 22, '1': 22, '2': 22, - '3': 22, '4': 22, '5': 22, '6': 22, - '7': 22, '8': 22, '9': 22, 'B': 21, - 'E': 24, 'J': 13, 'O': 20, 'X': 19, - 'b': 21, 'e': 24, 'j': 13, 'o': 20, + {'.': 24, '0': 22, '1': 23, '2': 23, + '3': 23, '4': 23, '5': 23, '6': 23, + '7': 23, '8': 23, '9': 23, 'B': 21, + 'E': 25, 'J': 13, 'O': 20, 'X': 19, + 'b': 21, 'e': 25, 'j': 13, 'o': 20, 'x': 19}, # 5 - {'.': 23, '0': 5, '1': 5, '2': 5, + {'.': 24, '0': 5, '1': 5, '2': 5, '3': 5, '4': 5, '5': 5, '6': 5, - '7': 5, '8': 5, '9': 5, 'E': 24, - 'J': 13, 'e': 24, 'j': 13}, + '7': 5, '8': 5, '9': 5, 'E': 25, + 'J': 13, 'e': 25, 'j': 13}, # 6 - {'0': 25, '1': 25, '2': 25, '3': 25, - '4': 25, '5': 25, '6': 25, '7': 25, - '8': 25, '9': 25}, + {'0': 26, '1': 26, '2': 26, '3': 26, + '4': 26, '5': 26, '6': 26, '7': 26, + '8': 26, '9': 26}, # 7 {'*': 12, '=': 13}, # 8 @@ -142,100 +142,105 @@ # 14 {'\n': 13}, # 15 - {automata.DEFAULT: 29, '\n': 26, - '\r': 26, "'": 27, '\\': 28}, + {automata.DEFAULT: 30, '\n': 27, + '\r': 27, "'": 28, '\\': 29}, # 16 - {automata.DEFAULT: 32, '\n': 26, - '\r': 26, '"': 30, '\\': 31}, + {automata.DEFAULT: 33, '\n': 27, + '\r': 27, '"': 31, '\\': 32}, # 17 {'\n': 13, '\r': 14}, # 18 - {automata.DEFAULT: 18, '\n': 26, '\r': 26}, + {automata.DEFAULT: 18, '\n': 27, '\r': 27}, # 19 - {'0': 33, '1': 33, '2': 33, '3': 33, - '4': 33, '5': 33, '6': 33, '7': 33, - '8': 33, '9': 33, 'A': 33, 'B': 33, - 'C': 33, 'D': 33, 'E': 33, 'F': 33, - 'a': 33, 'b': 33, 'c': 33, 'd': 33, - 'e': 33, 'f': 33}, + {'0': 34, '1': 34, '2': 34, '3': 34, + '4': 34, '5': 34, '6': 34, '7': 34, + '8': 34, '9': 34, 'A': 34, 'B': 34, + 'C': 34, 'D': 34, 'E': 34, 'F': 34, + 'a': 34, 'b': 34, 'c': 34, 'd': 34, + 'e': 34, 'f': 34}, # 20 - {'0': 34, '1': 34, '2': 34, '3': 34, - '4': 34, '5': 34, '6': 34, '7': 34}, + {'0': 35, '1': 35, '2': 35, '3': 35, + '4': 35, '5': 35, '6': 35, '7': 35}, # 21 - 
{'0': 35, '1': 35}, + {'0': 36, '1': 36}, # 22 - {'.': 23, '0': 22, '1': 22, '2': 22, - '3': 22, '4': 22, '5': 22, '6': 22, - '7': 22, '8': 22, '9': 22, 'E': 24, - 'J': 13, 'e': 24, 'j': 13}, + {'.': 24, '0': 22, '1': 23, '2': 23, + '3': 23, '4': 23, '5': 23, '6': 23, + '7': 23, '8': 23, '9': 23, 'E': 25, + 'J': 13, 'e': 25, 'j': 13}, # 23 - {'0': 23, '1': 23, '2': 23, '3': 23, - '4': 23, '5': 23, '6': 23, '7': 23, - '8': 23, '9': 23, 'E': 36, 'J': 13, - 'e': 36, 'j': 13}, + {'.': 24, '0': 23, '1': 23, '2': 23, + '3': 23, '4': 23, '5': 23, '6': 23, + '7': 23, '8': 23, '9': 23, 'E': 25, + 'J': 13, 'e': 25, 'j': 13}, # 24 - {'+': 37, '-': 37, '0': 38, '1': 38, - '2': 38, '3': 38, '4': 38, '5': 38, - '6': 38, '7': 38, '8': 38, '9': 38}, + {'0': 24, '1': 24, '2': 24, '3': 24, + '4': 24, '5': 24, '6': 24, '7': 24, + '8': 24, '9': 24, 'E': 37, 'J': 13, + 'e': 37, 'j': 13}, # 25 - {'0': 25, '1': 25, '2': 25, '3': 25, - '4': 25, '5': 25, '6': 25, '7': 25, - '8': 25, '9': 25, 'E': 36, 'J': 13, - 'e': 36, 'j': 13}, + {'+': 38, '-': 38, '0': 39, '1': 39, + '2': 39, '3': 39, '4': 39, '5': 39, + '6': 39, '7': 39, '8': 39, '9': 39}, # 26 + {'0': 26, '1': 26, '2': 26, '3': 26, + '4': 26, '5': 26, '6': 26, '7': 26, + '8': 26, '9': 26, 'E': 37, 'J': 13, + 'e': 37, 'j': 13}, + # 27 {}, - # 27 + # 28 {"'": 13}, - # 28 - {automata.DEFAULT: 39, '\n': 13, '\r': 14}, # 29 - {automata.DEFAULT: 29, '\n': 26, - '\r': 26, "'": 13, '\\': 28}, + {automata.DEFAULT: 40, '\n': 13, '\r': 14}, # 30 + {automata.DEFAULT: 30, '\n': 27, + '\r': 27, "'": 13, '\\': 29}, + # 31 {'"': 13}, - # 31 - {automata.DEFAULT: 40, '\n': 13, '\r': 14}, # 32 - {automata.DEFAULT: 32, '\n': 26, - '\r': 26, '"': 13, '\\': 31}, + {automata.DEFAULT: 41, '\n': 13, '\r': 14}, # 33 - {'0': 33, '1': 33, '2': 33, '3': 33, - '4': 33, '5': 33, '6': 33, '7': 33, - '8': 33, '9': 33, 'A': 33, 'B': 33, - 'C': 33, 'D': 33, 'E': 33, 'F': 33, - 'a': 33, 'b': 33, 'c': 33, 'd': 33, - 'e': 33, 'f': 33}, + {automata.DEFAULT: 33, '\n': 27, + 
'\r': 27, '"': 13, '\\': 32}, # 34 {'0': 34, '1': 34, '2': 34, '3': 34, - '4': 34, '5': 34, '6': 34, '7': 34}, + '4': 34, '5': 34, '6': 34, '7': 34, + '8': 34, '9': 34, 'A': 34, 'B': 34, + 'C': 34, 'D': 34, 'E': 34, 'F': 34, + 'a': 34, 'b': 34, 'c': 34, 'd': 34, + 'e': 34, 'f': 34}, # 35 - {'0': 35, '1': 35}, + {'0': 35, '1': 35, '2': 35, '3': 35, + '4': 35, '5': 35, '6': 35, '7': 35}, # 36 - {'+': 41, '-': 41, '0': 42, '1': 42, - '2': 42, '3': 42, '4': 42, '5': 42, - '6': 42, '7': 42, '8': 42, '9': 42}, + {'0': 36, '1': 36}, # 37 - {'0': 38, '1': 38, '2': 38, '3': 38, - '4': 38, '5': 38, '6': 38, '7': 38, - '8': 38, '9': 38}, + {'+': 42, '-': 42, '0': 43, '1': 43, + '2': 43, '3': 43, '4': 43, '5': 43, + '6': 43, '7': 43, '8': 43, '9': 43}, # 38 - {'0': 38, '1': 38, '2': 38, '3': 38, - '4': 38, '5': 38, '6': 38, '7': 38, - '8': 38, '9': 38, 'J': 13, 'j': 13}, + {'0': 39, '1': 39, '2': 39, '3': 39, + '4': 39, '5': 39, '6': 39, '7': 39, + '8': 39, '9': 39}, # 39 - {automata.DEFAULT: 39, '\n': 26, - '\r': 26, "'": 13, '\\': 28}, + {'0': 39, '1': 39, '2': 39, '3': 39, + '4': 39, '5': 39, '6': 39, '7': 39, + '8': 39, '9': 39, 'J': 13, 'j': 13}, # 40 - {automata.DEFAULT: 40, '\n': 26, - '\r': 26, '"': 13, '\\': 31}, + {automata.DEFAULT: 40, '\n': 27, + '\r': 27, "'": 13, '\\': 29}, # 41 - {'0': 42, '1': 42, '2': 42, '3': 42, - '4': 42, '5': 42, '6': 42, '7': 42, - '8': 42, '9': 42}, + {automata.DEFAULT: 41, '\n': 27, + '\r': 27, '"': 13, '\\': 32}, # 42 - {'0': 42, '1': 42, '2': 42, '3': 42, - '4': 42, '5': 42, '6': 42, '7': 42, - '8': 42, '9': 42, 'J': 13, 'j': 13}, + {'0': 43, '1': 43, '2': 43, '3': 43, + '4': 43, '5': 43, '6': 43, '7': 43, + '8': 43, '9': 43}, + # 43 + {'0': 43, '1': 43, '2': 43, '3': 43, + '4': 43, '5': 43, '6': 43, '7': 43, + '8': 43, '9': 43, 'J': 13, 'j': 13}, ] pseudoDFA = automata.DFA(states, accepts) From noreply at buildbot.pypy.org Thu Jan 19 12:55:37 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 19 Jan 2012 12:55:37 +0100 
(CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain): use a better way to regenerate the DFA machinery: earlier you had to manually copy&paste lines into pytokenize.py, now you can just redirect the output to dfa_generated.py Message-ID: <20120119115537.582EC820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51485:1e634f696054 Date: 2012-01-19 12:54 +0100 http://bitbucket.org/pypy/pypy/changeset/1e634f696054/ Log: (antocuni, romain): use a better way to regenerate the DFA machinery: earlier you had to manually copy&paste lines into pytokenize.py, now you can just redirect the output to dfa_generated.py diff --git a/pypy/interpreter/pyparser/dfa_generated.py b/pypy/interpreter/pyparser/dfa_generated.py new file mode 100644 --- /dev/null +++ b/pypy/interpreter/pyparser/dfa_generated.py @@ -0,0 +1,287 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY gendfa.py +# DO NOT EDIT +# TO REGENERATE THE FILE, RUN: +# python gendfa.py > dfa_generated.py + +from pypy.interpreter.pyparser import automata +accepts = [True, True, True, True, True, True, True, True, + True, True, False, True, True, True, True, False, + False, False, True, False, False, False, True, + False, True, False, True, False, True, False, + False, True, False, False, True, True, True, + False, False, True, False, False, False, True] +states = [ + # 0 + {'\t': 0, '\n': 13, '\x0c': 0, + '\r': 14, ' ': 0, '!': 10, '"': 16, + '#': 18, '%': 12, '&': 12, "'": 15, + '(': 13, ')': 13, '*': 7, '+': 12, + ',': 13, '-': 12, '.': 6, '/': 11, + '0': 4, '1': 5, '2': 5, '3': 5, + '4': 5, '5': 5, '6': 5, '7': 5, + '8': 5, '9': 5, ':': 13, ';': 13, + '<': 9, '=': 12, '>': 8, '@': 13, + 'A': 1, 'B': 2, 'C': 1, 'D': 1, + 'E': 1, 'F': 1, 'G': 1, 'H': 1, + 'I': 1, 'J': 1, 'K': 1, 'L': 1, + 'M': 1, 'N': 1, 'O': 1, 'P': 1, + 'Q': 1, 'R': 3, 'S': 1, 'T': 1, + 'U': 1, 'V': 1, 'W': 1, 'X': 1, + 'Y': 1, 'Z': 1, '[': 13, '\\': 17, + ']': 13, '^': 12, '_': 1, '`': 13, + 'a': 1, 'b': 2, 'c': 1, 'd': 1, + 
'e': 1, 'f': 1, 'g': 1, 'h': 1, + 'i': 1, 'j': 1, 'k': 1, 'l': 1, + 'm': 1, 'n': 1, 'o': 1, 'p': 1, + 'q': 1, 'r': 3, 's': 1, 't': 1, + 'u': 1, 'v': 1, 'w': 1, 'x': 1, + 'y': 1, 'z': 1, '{': 13, '|': 12, + '}': 13, '~': 13}, + # 1 + {'0': 1, '1': 1, '2': 1, '3': 1, + '4': 1, '5': 1, '6': 1, '7': 1, + '8': 1, '9': 1, 'A': 1, 'B': 1, + 'C': 1, 'D': 1, 'E': 1, 'F': 1, + 'G': 1, 'H': 1, 'I': 1, 'J': 1, + 'K': 1, 'L': 1, 'M': 1, 'N': 1, + 'O': 1, 'P': 1, 'Q': 1, 'R': 1, + 'S': 1, 'T': 1, 'U': 1, 'V': 1, + 'W': 1, 'X': 1, 'Y': 1, 'Z': 1, + '_': 1, 'a': 1, 'b': 1, 'c': 1, + 'd': 1, 'e': 1, 'f': 1, 'g': 1, + 'h': 1, 'i': 1, 'j': 1, 'k': 1, + 'l': 1, 'm': 1, 'n': 1, 'o': 1, + 'p': 1, 'q': 1, 'r': 1, 's': 1, + 't': 1, 'u': 1, 'v': 1, 'w': 1, + 'x': 1, 'y': 1, 'z': 1}, + # 2 + {'"': 16, "'": 15, '0': 1, '1': 1, + '2': 1, '3': 1, '4': 1, '5': 1, + '6': 1, '7': 1, '8': 1, '9': 1, + 'A': 1, 'B': 1, 'C': 1, 'D': 1, + 'E': 1, 'F': 1, 'G': 1, 'H': 1, + 'I': 1, 'J': 1, 'K': 1, 'L': 1, + 'M': 1, 'N': 1, 'O': 1, 'P': 1, + 'Q': 1, 'R': 3, 'S': 1, 'T': 1, + 'U': 1, 'V': 1, 'W': 1, 'X': 1, + 'Y': 1, 'Z': 1, '_': 1, 'a': 1, + 'b': 1, 'c': 1, 'd': 1, 'e': 1, + 'f': 1, 'g': 1, 'h': 1, 'i': 1, + 'j': 1, 'k': 1, 'l': 1, 'm': 1, + 'n': 1, 'o': 1, 'p': 1, 'q': 1, + 'r': 3, 's': 1, 't': 1, 'u': 1, + 'v': 1, 'w': 1, 'x': 1, 'y': 1, + 'z': 1}, + # 3 + {'"': 16, "'": 15, '0': 1, '1': 1, + '2': 1, '3': 1, '4': 1, '5': 1, + '6': 1, '7': 1, '8': 1, '9': 1, + 'A': 1, 'B': 1, 'C': 1, 'D': 1, + 'E': 1, 'F': 1, 'G': 1, 'H': 1, + 'I': 1, 'J': 1, 'K': 1, 'L': 1, + 'M': 1, 'N': 1, 'O': 1, 'P': 1, + 'Q': 1, 'R': 1, 'S': 1, 'T': 1, + 'U': 1, 'V': 1, 'W': 1, 'X': 1, + 'Y': 1, 'Z': 1, '_': 1, 'a': 1, + 'b': 1, 'c': 1, 'd': 1, 'e': 1, + 'f': 1, 'g': 1, 'h': 1, 'i': 1, + 'j': 1, 'k': 1, 'l': 1, 'm': 1, + 'n': 1, 'o': 1, 'p': 1, 'q': 1, + 'r': 1, 's': 1, 't': 1, 'u': 1, + 'v': 1, 'w': 1, 'x': 1, 'y': 1, + 'z': 1}, + # 4 + {'.': 24, '0': 22, '1': 23, '2': 23, + '3': 23, '4': 23, '5': 23, '6': 23, + '7': 23, '8': 23, 
'9': 23, 'B': 21, + 'E': 25, 'J': 13, 'O': 20, 'X': 19, + 'b': 21, 'e': 25, 'j': 13, 'o': 20, + 'x': 19}, + # 5 + {'.': 24, '0': 5, '1': 5, '2': 5, + '3': 5, '4': 5, '5': 5, '6': 5, + '7': 5, '8': 5, '9': 5, 'E': 25, + 'J': 13, 'e': 25, 'j': 13}, + # 6 + {'0': 26, '1': 26, '2': 26, '3': 26, + '4': 26, '5': 26, '6': 26, '7': 26, + '8': 26, '9': 26}, + # 7 + {'*': 12, '=': 13}, + # 8 + {'=': 13, '>': 12}, + # 9 + {'<': 12, '=': 13, '>': 13}, + # 10 + {'=': 13}, + # 11 + {'/': 12, '=': 13}, + # 12 + {'=': 13}, + # 13 + {}, + # 14 + {'\n': 13}, + # 15 + {automata.DEFAULT: 30, '\n': 27, + '\r': 27, "'": 28, '\\': 29}, + # 16 + {automata.DEFAULT: 33, '\n': 27, + '\r': 27, '"': 31, '\\': 32}, + # 17 + {'\n': 13, '\r': 14}, + # 18 + {automata.DEFAULT: 18, '\n': 27, '\r': 27}, + # 19 + {'0': 34, '1': 34, '2': 34, '3': 34, + '4': 34, '5': 34, '6': 34, '7': 34, + '8': 34, '9': 34, 'A': 34, 'B': 34, + 'C': 34, 'D': 34, 'E': 34, 'F': 34, + 'a': 34, 'b': 34, 'c': 34, 'd': 34, + 'e': 34, 'f': 34}, + # 20 + {'0': 35, '1': 35, '2': 35, '3': 35, + '4': 35, '5': 35, '6': 35, '7': 35}, + # 21 + {'0': 36, '1': 36}, + # 22 + {'.': 24, '0': 22, '1': 23, '2': 23, + '3': 23, '4': 23, '5': 23, '6': 23, + '7': 23, '8': 23, '9': 23, 'E': 25, + 'J': 13, 'e': 25, 'j': 13}, + # 23 + {'.': 24, '0': 23, '1': 23, '2': 23, + '3': 23, '4': 23, '5': 23, '6': 23, + '7': 23, '8': 23, '9': 23, 'E': 25, + 'J': 13, 'e': 25, 'j': 13}, + # 24 + {'0': 24, '1': 24, '2': 24, '3': 24, + '4': 24, '5': 24, '6': 24, '7': 24, + '8': 24, '9': 24, 'E': 37, 'J': 13, + 'e': 37, 'j': 13}, + # 25 + {'+': 38, '-': 38, '0': 39, '1': 39, + '2': 39, '3': 39, '4': 39, '5': 39, + '6': 39, '7': 39, '8': 39, '9': 39}, + # 26 + {'0': 26, '1': 26, '2': 26, '3': 26, + '4': 26, '5': 26, '6': 26, '7': 26, + '8': 26, '9': 26, 'E': 37, 'J': 13, + 'e': 37, 'j': 13}, + # 27 + {}, + # 28 + {"'": 13}, + # 29 + {automata.DEFAULT: 40, '\n': 13, '\r': 14}, + # 30 + {automata.DEFAULT: 30, '\n': 27, + '\r': 27, "'": 13, '\\': 29}, + # 31 + {'"': 
13}, + # 32 + {automata.DEFAULT: 41, '\n': 13, '\r': 14}, + # 33 + {automata.DEFAULT: 33, '\n': 27, + '\r': 27, '"': 13, '\\': 32}, + # 34 + {'0': 34, '1': 34, '2': 34, '3': 34, + '4': 34, '5': 34, '6': 34, '7': 34, + '8': 34, '9': 34, 'A': 34, 'B': 34, + 'C': 34, 'D': 34, 'E': 34, 'F': 34, + 'a': 34, 'b': 34, 'c': 34, 'd': 34, + 'e': 34, 'f': 34}, + # 35 + {'0': 35, '1': 35, '2': 35, '3': 35, + '4': 35, '5': 35, '6': 35, '7': 35}, + # 36 + {'0': 36, '1': 36}, + # 37 + {'+': 42, '-': 42, '0': 43, '1': 43, + '2': 43, '3': 43, '4': 43, '5': 43, + '6': 43, '7': 43, '8': 43, '9': 43}, + # 38 + {'0': 39, '1': 39, '2': 39, '3': 39, + '4': 39, '5': 39, '6': 39, '7': 39, + '8': 39, '9': 39}, + # 39 + {'0': 39, '1': 39, '2': 39, '3': 39, + '4': 39, '5': 39, '6': 39, '7': 39, + '8': 39, '9': 39, 'J': 13, 'j': 13}, + # 40 + {automata.DEFAULT: 40, '\n': 27, + '\r': 27, "'": 13, '\\': 29}, + # 41 + {automata.DEFAULT: 41, '\n': 27, + '\r': 27, '"': 13, '\\': 32}, + # 42 + {'0': 43, '1': 43, '2': 43, '3': 43, + '4': 43, '5': 43, '6': 43, '7': 43, + '8': 43, '9': 43}, + # 43 + {'0': 43, '1': 43, '2': 43, '3': 43, + '4': 43, '5': 43, '6': 43, '7': 43, + '8': 43, '9': 43, 'J': 13, 'j': 13}, + ] +pseudoDFA = automata.DFA(states, accepts) + +accepts = [False, False, False, False, False, True] +states = [ + # 0 + {automata.DEFAULT: 0, '"': 1, '\\': 2}, + # 1 + {automata.DEFAULT: 4, '"': 3, '\\': 2}, + # 2 + {automata.DEFAULT: 4}, + # 3 + {automata.DEFAULT: 4, '"': 5, '\\': 2}, + # 4 + {automata.DEFAULT: 4, '"': 1, '\\': 2}, + # 5 + {automata.DEFAULT: 4, '"': 5, '\\': 2}, + ] +double3DFA = automata.NonGreedyDFA(states, accepts) + +accepts = [False, False, False, False, False, True] +states = [ + # 0 + {automata.DEFAULT: 0, "'": 1, '\\': 2}, + # 1 + {automata.DEFAULT: 4, "'": 3, '\\': 2}, + # 2 + {automata.DEFAULT: 4}, + # 3 + {automata.DEFAULT: 4, "'": 5, '\\': 2}, + # 4 + {automata.DEFAULT: 4, "'": 1, '\\': 2}, + # 5 + {automata.DEFAULT: 4, "'": 5, '\\': 2}, + ] +single3DFA = 
automata.NonGreedyDFA(states, accepts) + +accepts = [False, True, False, False] +states = [ + # 0 + {automata.DEFAULT: 0, "'": 1, '\\': 2}, + # 1 + {}, + # 2 + {automata.DEFAULT: 3}, + # 3 + {automata.DEFAULT: 3, "'": 1, '\\': 2}, + ] +singleDFA = automata.DFA(states, accepts) + +accepts = [False, True, False, False] +states = [ + # 0 + {automata.DEFAULT: 0, '"': 1, '\\': 2}, + # 1 + {}, + # 2 + {automata.DEFAULT: 3}, + # 3 + {automata.DEFAULT: 3, '"': 1, '\\': 2}, + ] +doubleDFA = automata.DFA(states, accepts) + diff --git a/pypy/interpreter/pyparser/genpytokenize.py b/pypy/interpreter/pyparser/gendfa.py rename from pypy/interpreter/pyparser/genpytokenize.py rename to pypy/interpreter/pyparser/gendfa.py --- a/pypy/interpreter/pyparser/genpytokenize.py +++ b/pypy/interpreter/pyparser/gendfa.py @@ -1,5 +1,5 @@ #! /usr/bin/env python -"""Module genPytokenize +"""Module gendfa Generates finite state automata for recognizing Python tokens. These are hand coded versions of the regular expressions originally appearing in Ping's @@ -7,6 +7,10 @@ When run from the command line, this should pretty print the DFA machinery. 
+To regenerate the dfa, run:: + + $ python gendfa.py > dfa_generated.py + $Id: genPytokenize.py,v 1.1 2003/10/02 17:37:17 jriehl Exp $ """ @@ -305,6 +309,12 @@ print def main (): + print "# THIS FILE IS AUTOMATICALLY GENERATED BY gendfa.py" + print "# DO NOT EDIT" + print "# TO REGENERATE THE FILE, RUN:" + print "# python gendfa.py > dfa_generated.py" + print + print "from pypy.interpreter.pyparser import automata" pseudoDFA = makePyPseudoDFA() output("pseudoDFA", "DFA", pseudoDFA) endDFAMap = makePyEndDFAMap() diff --git a/pypy/interpreter/pyparser/pytokenize.py b/pypy/interpreter/pyparser/pytokenize.py --- a/pypy/interpreter/pyparser/pytokenize.py +++ b/pypy/interpreter/pyparser/pytokenize.py @@ -17,298 +17,10 @@ # ______________________________________________________________________ from pypy.interpreter.pyparser import automata +from pypy.interpreter.pyparser.dfa_generated import * __all__ = [ "tokenize" ] -# ______________________________________________________________________ -# Automatically generated DFA's - -accepts = [True, True, True, True, True, True, True, True, - True, True, False, True, True, True, True, False, - False, False, True, False, False, False, True, - False, True, False, True, False, True, False, - False, True, False, False, True, True, True, - False, False, True, False, False, False, True] -states = [ - # 0 - {'\t': 0, '\n': 13, '\x0c': 0, - '\r': 14, ' ': 0, '!': 10, '"': 16, - '#': 18, '%': 12, '&': 12, "'": 15, - '(': 13, ')': 13, '*': 7, '+': 12, - ',': 13, '-': 12, '.': 6, '/': 11, - '0': 4, '1': 5, '2': 5, '3': 5, - '4': 5, '5': 5, '6': 5, '7': 5, - '8': 5, '9': 5, ':': 13, ';': 13, - '<': 9, '=': 12, '>': 8, '@': 13, - 'A': 1, 'B': 2, 'C': 1, 'D': 1, - 'E': 1, 'F': 1, 'G': 1, 'H': 1, - 'I': 1, 'J': 1, 'K': 1, 'L': 1, - 'M': 1, 'N': 1, 'O': 1, 'P': 1, - 'Q': 1, 'R': 3, 'S': 1, 'T': 1, - 'U': 1, 'V': 1, 'W': 1, 'X': 1, - 'Y': 1, 'Z': 1, '[': 13, '\\': 17, - ']': 13, '^': 12, '_': 1, '`': 13, - 'a': 1, 'b': 2, 'c': 1, 'd': 1, - 'e': 
1, 'f': 1, 'g': 1, 'h': 1, - 'i': 1, 'j': 1, 'k': 1, 'l': 1, - 'm': 1, 'n': 1, 'o': 1, 'p': 1, - 'q': 1, 'r': 3, 's': 1, 't': 1, - 'u': 1, 'v': 1, 'w': 1, 'x': 1, - 'y': 1, 'z': 1, '{': 13, '|': 12, - '}': 13, '~': 13}, - # 1 - {'0': 1, '1': 1, '2': 1, '3': 1, - '4': 1, '5': 1, '6': 1, '7': 1, - '8': 1, '9': 1, 'A': 1, 'B': 1, - 'C': 1, 'D': 1, 'E': 1, 'F': 1, - 'G': 1, 'H': 1, 'I': 1, 'J': 1, - 'K': 1, 'L': 1, 'M': 1, 'N': 1, - 'O': 1, 'P': 1, 'Q': 1, 'R': 1, - 'S': 1, 'T': 1, 'U': 1, 'V': 1, - 'W': 1, 'X': 1, 'Y': 1, 'Z': 1, - '_': 1, 'a': 1, 'b': 1, 'c': 1, - 'd': 1, 'e': 1, 'f': 1, 'g': 1, - 'h': 1, 'i': 1, 'j': 1, 'k': 1, - 'l': 1, 'm': 1, 'n': 1, 'o': 1, - 'p': 1, 'q': 1, 'r': 1, 's': 1, - 't': 1, 'u': 1, 'v': 1, 'w': 1, - 'x': 1, 'y': 1, 'z': 1}, - # 2 - {'"': 16, "'": 15, '0': 1, '1': 1, - '2': 1, '3': 1, '4': 1, '5': 1, - '6': 1, '7': 1, '8': 1, '9': 1, - 'A': 1, 'B': 1, 'C': 1, 'D': 1, - 'E': 1, 'F': 1, 'G': 1, 'H': 1, - 'I': 1, 'J': 1, 'K': 1, 'L': 1, - 'M': 1, 'N': 1, 'O': 1, 'P': 1, - 'Q': 1, 'R': 3, 'S': 1, 'T': 1, - 'U': 1, 'V': 1, 'W': 1, 'X': 1, - 'Y': 1, 'Z': 1, '_': 1, 'a': 1, - 'b': 1, 'c': 1, 'd': 1, 'e': 1, - 'f': 1, 'g': 1, 'h': 1, 'i': 1, - 'j': 1, 'k': 1, 'l': 1, 'm': 1, - 'n': 1, 'o': 1, 'p': 1, 'q': 1, - 'r': 3, 's': 1, 't': 1, 'u': 1, - 'v': 1, 'w': 1, 'x': 1, 'y': 1, - 'z': 1}, - # 3 - {'"': 16, "'": 15, '0': 1, '1': 1, - '2': 1, '3': 1, '4': 1, '5': 1, - '6': 1, '7': 1, '8': 1, '9': 1, - 'A': 1, 'B': 1, 'C': 1, 'D': 1, - 'E': 1, 'F': 1, 'G': 1, 'H': 1, - 'I': 1, 'J': 1, 'K': 1, 'L': 1, - 'M': 1, 'N': 1, 'O': 1, 'P': 1, - 'Q': 1, 'R': 1, 'S': 1, 'T': 1, - 'U': 1, 'V': 1, 'W': 1, 'X': 1, - 'Y': 1, 'Z': 1, '_': 1, 'a': 1, - 'b': 1, 'c': 1, 'd': 1, 'e': 1, - 'f': 1, 'g': 1, 'h': 1, 'i': 1, - 'j': 1, 'k': 1, 'l': 1, 'm': 1, - 'n': 1, 'o': 1, 'p': 1, 'q': 1, - 'r': 1, 's': 1, 't': 1, 'u': 1, - 'v': 1, 'w': 1, 'x': 1, 'y': 1, - 'z': 1}, - # 4 - {'.': 24, '0': 22, '1': 23, '2': 23, - '3': 23, '4': 23, '5': 23, '6': 23, - '7': 23, '8': 23, '9': 
23, 'B': 21, - 'E': 25, 'J': 13, 'O': 20, 'X': 19, - 'b': 21, 'e': 25, 'j': 13, 'o': 20, - 'x': 19}, - # 5 - {'.': 24, '0': 5, '1': 5, '2': 5, - '3': 5, '4': 5, '5': 5, '6': 5, - '7': 5, '8': 5, '9': 5, 'E': 25, - 'J': 13, 'e': 25, 'j': 13}, - # 6 - {'0': 26, '1': 26, '2': 26, '3': 26, - '4': 26, '5': 26, '6': 26, '7': 26, - '8': 26, '9': 26}, - # 7 - {'*': 12, '=': 13}, - # 8 - {'=': 13, '>': 12}, - # 9 - {'<': 12, '=': 13, '>': 13}, - # 10 - {'=': 13}, - # 11 - {'/': 12, '=': 13}, - # 12 - {'=': 13}, - # 13 - {}, - # 14 - {'\n': 13}, - # 15 - {automata.DEFAULT: 30, '\n': 27, - '\r': 27, "'": 28, '\\': 29}, - # 16 - {automata.DEFAULT: 33, '\n': 27, - '\r': 27, '"': 31, '\\': 32}, - # 17 - {'\n': 13, '\r': 14}, - # 18 - {automata.DEFAULT: 18, '\n': 27, '\r': 27}, - # 19 - {'0': 34, '1': 34, '2': 34, '3': 34, - '4': 34, '5': 34, '6': 34, '7': 34, - '8': 34, '9': 34, 'A': 34, 'B': 34, - 'C': 34, 'D': 34, 'E': 34, 'F': 34, - 'a': 34, 'b': 34, 'c': 34, 'd': 34, - 'e': 34, 'f': 34}, - # 20 - {'0': 35, '1': 35, '2': 35, '3': 35, - '4': 35, '5': 35, '6': 35, '7': 35}, - # 21 - {'0': 36, '1': 36}, - # 22 - {'.': 24, '0': 22, '1': 23, '2': 23, - '3': 23, '4': 23, '5': 23, '6': 23, - '7': 23, '8': 23, '9': 23, 'E': 25, - 'J': 13, 'e': 25, 'j': 13}, - # 23 - {'.': 24, '0': 23, '1': 23, '2': 23, - '3': 23, '4': 23, '5': 23, '6': 23, - '7': 23, '8': 23, '9': 23, 'E': 25, - 'J': 13, 'e': 25, 'j': 13}, - # 24 - {'0': 24, '1': 24, '2': 24, '3': 24, - '4': 24, '5': 24, '6': 24, '7': 24, - '8': 24, '9': 24, 'E': 37, 'J': 13, - 'e': 37, 'j': 13}, - # 25 - {'+': 38, '-': 38, '0': 39, '1': 39, - '2': 39, '3': 39, '4': 39, '5': 39, - '6': 39, '7': 39, '8': 39, '9': 39}, - # 26 - {'0': 26, '1': 26, '2': 26, '3': 26, - '4': 26, '5': 26, '6': 26, '7': 26, - '8': 26, '9': 26, 'E': 37, 'J': 13, - 'e': 37, 'j': 13}, - # 27 - {}, - # 28 - {"'": 13}, - # 29 - {automata.DEFAULT: 40, '\n': 13, '\r': 14}, - # 30 - {automata.DEFAULT: 30, '\n': 27, - '\r': 27, "'": 13, '\\': 29}, - # 31 - {'"': 13}, 
- # 32 - {automata.DEFAULT: 41, '\n': 13, '\r': 14}, - # 33 - {automata.DEFAULT: 33, '\n': 27, - '\r': 27, '"': 13, '\\': 32}, - # 34 - {'0': 34, '1': 34, '2': 34, '3': 34, - '4': 34, '5': 34, '6': 34, '7': 34, - '8': 34, '9': 34, 'A': 34, 'B': 34, - 'C': 34, 'D': 34, 'E': 34, 'F': 34, - 'a': 34, 'b': 34, 'c': 34, 'd': 34, - 'e': 34, 'f': 34}, - # 35 - {'0': 35, '1': 35, '2': 35, '3': 35, - '4': 35, '5': 35, '6': 35, '7': 35}, - # 36 - {'0': 36, '1': 36}, - # 37 - {'+': 42, '-': 42, '0': 43, '1': 43, - '2': 43, '3': 43, '4': 43, '5': 43, - '6': 43, '7': 43, '8': 43, '9': 43}, - # 38 - {'0': 39, '1': 39, '2': 39, '3': 39, - '4': 39, '5': 39, '6': 39, '7': 39, - '8': 39, '9': 39}, - # 39 - {'0': 39, '1': 39, '2': 39, '3': 39, - '4': 39, '5': 39, '6': 39, '7': 39, - '8': 39, '9': 39, 'J': 13, 'j': 13}, - # 40 - {automata.DEFAULT: 40, '\n': 27, - '\r': 27, "'": 13, '\\': 29}, - # 41 - {automata.DEFAULT: 41, '\n': 27, - '\r': 27, '"': 13, '\\': 32}, - # 42 - {'0': 43, '1': 43, '2': 43, '3': 43, - '4': 43, '5': 43, '6': 43, '7': 43, - '8': 43, '9': 43}, - # 43 - {'0': 43, '1': 43, '2': 43, '3': 43, - '4': 43, '5': 43, '6': 43, '7': 43, - '8': 43, '9': 43, 'J': 13, 'j': 13}, - ] -pseudoDFA = automata.DFA(states, accepts) - -accepts = [False, False, False, False, False, True] -states = [ - # 0 - {automata.DEFAULT: 0, '"': 1, '\\': 2}, - # 1 - {automata.DEFAULT: 4, '"': 3, '\\': 2}, - # 2 - {automata.DEFAULT: 4}, - # 3 - {automata.DEFAULT: 4, '"': 5, '\\': 2}, - # 4 - {automata.DEFAULT: 4, '"': 1, '\\': 2}, - # 5 - {automata.DEFAULT: 4, '"': 5, '\\': 2}, - ] -double3DFA = automata.NonGreedyDFA(states, accepts) - -accepts = [False, False, False, False, False, True] -states = [ - # 0 - {automata.DEFAULT: 0, "'": 1, '\\': 2}, - # 1 - {automata.DEFAULT: 4, "'": 3, '\\': 2}, - # 2 - {automata.DEFAULT: 4}, - # 3 - {automata.DEFAULT: 4, "'": 5, '\\': 2}, - # 4 - {automata.DEFAULT: 4, "'": 1, '\\': 2}, - # 5 - {automata.DEFAULT: 4, "'": 5, '\\': 2}, - ] -single3DFA = 
automata.NonGreedyDFA(states, accepts) - -accepts = [False, True, False, False] -states = [ - # 0 - {automata.DEFAULT: 0, "'": 1, '\\': 2}, - # 1 - {}, - # 2 - {automata.DEFAULT: 3}, - # 3 - {automata.DEFAULT: 3, "'": 1, '\\': 2}, - ] -singleDFA = automata.DFA(states, accepts) - -accepts = [False, True, False, False] -states = [ - # 0 - {automata.DEFAULT: 0, '"': 1, '\\': 2}, - # 1 - {}, - # 2 - {automata.DEFAULT: 3}, - # 3 - {automata.DEFAULT: 3, '"': 1, '\\': 2}, - ] -doubleDFA = automata.DFA(states, accepts) - - - -#_______________________________________________________________________ -# End of automatically generated DFA's - endDFAs = {"'" : singleDFA, '"' : doubleDFA, 'r' : None, From noreply at buildbot.pypy.org Thu Jan 19 15:50:37 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 19 Jan 2012 15:50:37 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain) make this test at least runnable by using a py3k compatible syntax. Tons of failures, of course Message-ID: <20120119145037.934CD820D8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51486:155402d74dfc Date: 2012-01-19 15:50 +0100 http://bitbucket.org/pypy/pypy/changeset/155402d74dfc/ Log: (antocuni, romain) make this test at least runnable by using a py3k compatible syntax. 
Tons of failures, of course diff --git a/pypy/module/marshal/test/test_marshal.py b/pypy/module/marshal/test/test_marshal.py --- a/pypy/module/marshal/test/test_marshal.py +++ b/pypy/module/marshal/test/test_marshal.py @@ -7,12 +7,13 @@ cls.w_tmpfile = cls.space.wrap(str(tmpfile)) def w_marshal_check(self, case): - import marshal, StringIO + import marshal + from io import StringIO s = marshal.dumps(case) - print repr(s) + print(repr(s)) x = marshal.loads(s) assert x == case and type(x) is type(case) - f = StringIO.StringIO() + f = StringIO() marshal.dump(case, f) f.seek(0) x = marshal.load(f) From noreply at buildbot.pypy.org Thu Jan 19 16:15:05 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 19 Jan 2012 16:15:05 +0100 (CET) Subject: [pypy-commit] pypy stm: (bivab, arigo) Message-ID: <20120119151505.B4F61820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51487:b52f0e138410 Date: 2012-01-19 16:14 +0100 http://bitbucket.org/pypy/pypy/changeset/b52f0e138410/ Log: (bivab, arigo) Wrote the app-level interface for 'transaction', with careful(?) messy locking. diff --git a/pypy/module/transaction/__init__.py b/pypy/module/transaction/__init__.py new file mode 100644 --- /dev/null +++ b/pypy/module/transaction/__init__.py @@ -0,0 +1,20 @@ + +from pypy.interpreter.mixedmodule import MixedModule + +class Module(MixedModule): + """Transaction module. 
XXX document me + """ + + interpleveldefs = { + 'set_num_threads': 'interp_transaction.set_num_threads', + 'add': 'interp_transaction.add', + 'run': 'interp_transaction.run', + 'TransactionError': 'interp_transaction.state.w_error', + } + + appleveldefs = { + } + + def startup(self, space): + from pypy.module.transaction import interp_transaction + interp_transaction.state.startup(space) diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py new file mode 100644 --- /dev/null +++ b/pypy/module/transaction/interp_transaction.py @@ -0,0 +1,138 @@ +from pypy.interpreter.error import OperationError +from pypy.interpreter.gateway import unwrap_spec +from pypy.module.transaction import threadintf + + +NUM_THREADS_DEFAULT = 4 # by default + + +class State(object): + + def _freeze_(self): + self.__dict__.clear() + self.running = False + self.num_threads = NUM_THREADS_DEFAULT + + def startup(self, space): + self.space = space + self.pending = [] + self.ll_lock = threadintf.allocate_lock() + self.ll_no_tasks_pending_lock = threadintf.allocate_lock() + self.ll_unfinished_lock = threadintf.allocate_lock() + self.w_error = space.new_exception_class( + "transaction.TransactionError") + self.lock_no_tasks_pending() + self.lock_unfinished() + + def set_num_threads(self, num): + if self.running: + space = self.space + raise OperationError(space.w_ValueError, + space.wrap("cannot change the number of " + "threads when transaction.run() " + "is active")) + self.num_threads = num_threads + + def lock(self): + # XXX think about the interaction between locks and the GC + threadintf.acquire(self.ll_lock, True) + + def unlock(self): + threadintf.release(self.ll_lock) + + def lock_no_tasks_pending(self): + threadintf.acquire(self.ll_no_tasks_pending_lock, True) + + def unlock_no_tasks_pending(self): + threadintf.release(self.ll_no_tasks_pending_lock) + + def assert_locked_no_tasks_pending(self): + just_locked = 
threadintf.acquire(self.ll_no_tasks_pending_lock, False) + assert not just_locked + + def lock_unfinished(self): + threadintf.acquire(self.ll_unfinished_lock, True) + + def unlock_unfinished(self): + threadintf.release(self.ll_unfinished_lock) + + +state = State() +state._freeze_() + + + at unwrap_spec(num=int) +def set_num_threads(space, num): + if num < 1: + num = 1 + state.set_num_threads(num) + + +class Pending: + def __init__(self, w_callback, args): + self.w_callback = w_callback + self.args = args + + def run(self): + space = state.space + space.call_args(self.w_callback, self.args) + # xxx exceptions? + + +def add(space, w_callback, __args__): + state.lock() + was_empty = len(state.pending) == 0 + state.pending.append(Pending(w_callback, __args__)) + if was_empty: + state.unlock_no_tasks_pending() + state.unlock() + + +def _run_thread(): + state.lock() + # + while True: + if len(state.pending) == 0: + state.assert_locked_no_tasks_pending() + state.num_waiting_threads += 1 + if state.num_waiting_threads == state.num_threads: + state.finished = True + state.unlock_unfinished() + state.unlock_no_tasks_pending() + state.unlock() + # + state.lock_no_tasks_pending() + state.unlock_no_tasks_pending() + # + state.lock() + if state.finished: + break + state.num_waiting_threads -= 1 + else: + pending = state.pending.pop(0) + if len(state.pending) == 0: + state.lock_no_tasks_pending() + state.unlock() + pending.run() + state.lock() + # + state.unlock() + + +def run(space): + if state.running: + raise OperationError( + state.w_error, + space.wrap("recursive invocation of transaction.run()")) + state.num_waiting_threads = 0 + state.finished = False + state.running = True + # + for i in range(state.num_threads): + threadintf.start_new_thread(_run_thread, ()) + # + state.lock_unfinished() + assert state.num_waiting_threads == state.num_threads + assert len(state.pending) == 0 + state.lock_no_tasks_pending() + state.running = False diff --git 
a/pypy/module/transaction/test/__init__.py b/pypy/module/transaction/test/__init__.py new file mode 100644 diff --git a/pypy/module/transaction/test/test_interp_transaction.py b/pypy/module/transaction/test/test_interp_transaction.py new file mode 100644 --- /dev/null +++ b/pypy/module/transaction/test/test_interp_transaction.py @@ -0,0 +1,24 @@ +from pypy.module.transaction import interp_transaction + + +class FakeSpace: + def new_exception_class(self, name): + return "some error class" + def call_args(self, w_callback, args): + w_callback(*args) + + +def test_linear_list(): + space = FakeSpace() + interp_transaction.state.startup(space) + seen = [] + # + def do(n): + seen.append(n) + if n < 200: + interp_transaction.add(space, do, (n+1,)) + # + interp_transaction.add(space, do, (0,)) + assert seen == [] + interp_transaction.run(space) + assert seen == range(201) diff --git a/pypy/module/transaction/test/test_transaction.py b/pypy/module/transaction/test/test_transaction.py new file mode 100644 --- /dev/null +++ b/pypy/module/transaction/test/test_transaction.py @@ -0,0 +1,16 @@ +import py +from pypy.conftest import gettestobjspace + + +class AppTestTransaction: + def setup_class(cls): + cls.space = gettestobjspace(usemodules=['transaction']) + + def test_simple(self): + import transaction + lst = [] + transaction.add(lst.append, 5) + transaction.add(lst.append, 6) + transaction.add(lst.append, 7) + transaction.run() + assert sorted(lst) == [5, 6, 7] diff --git a/pypy/module/transaction/threadintf.py b/pypy/module/transaction/threadintf.py new file mode 100644 --- /dev/null +++ b/pypy/module/transaction/threadintf.py @@ -0,0 +1,19 @@ +import thread + + +def allocate_lock(): + "NOT_RPYTHON" + return thread.allocate_lock() + +def acquire(lock, wait): + "NOT_RPYTHON" + lock.acquire(wait) + +def release(lock): + "NOT_RPYTHON" + lock.release() + +def start_new_thread(callback, args): + "NOT_RPYTHON" + thread.start_new_thread(callback, args) + From noreply at 
buildbot.pypy.org Thu Jan 19 17:17:22 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Thu, 19 Jan 2012 17:17:22 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, rguillebert) Add a marshalling test of pycode objects Message-ID: <20120119161722.7E228820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51488:176a25c48305 Date: 2012-01-19 17:17 +0100 http://bitbucket.org/pypy/pypy/changeset/176a25c48305/ Log: (antocuni, rguillebert) Add a marshalling test of pycode objects diff --git a/pypy/module/marshal/test/test_marshalimpl.py b/pypy/module/marshal/test/test_marshalimpl.py --- a/pypy/module/marshal/test/test_marshalimpl.py +++ b/pypy/module/marshal/test/test_marshalimpl.py @@ -55,6 +55,17 @@ z = marshal.loads('I\x00\x1c\xf4\xab\xfd\xff\xff\xff') assert z == -10000000000 + def test_marshal_code_object(self): + def foo(a, b): + pass + + import marshal + s = marshal.dumps(foo.__code__) + code2 = marshal.loads(s) + for attr_name in dir(code2): + if attr_name.startswith("co_"): + assert getattr(code2, attr_name) == getattr(foo.__code__, attr_name) + class AppTestMarshalSmallLong(AppTestMarshalMore): def setup_class(cls): From noreply at buildbot.pypy.org Thu Jan 19 17:45:00 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 19 Jan 2012 17:45:00 +0100 (CET) Subject: [pypy-commit] pypy numppy-flatitter: just reuse what we have (evil laugh) Message-ID: <20120119164500.6A835820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numppy-flatitter Changeset: r51489:e7ccfba90d71 Date: 2012-01-19 18:44 +0200 http://bitbucket.org/pypy/pypy/changeset/e7ccfba90d71/ Log: just reuse what we have (evil laugh) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1310,6 +1310,7 @@ self.shapelen = len(arr.shape) self.iter = OneDimIterator(arr.start, self.strides[0], 
self.shape[0]) + self.start = arr.start self.base = arr def descr_next(self, space): @@ -1322,42 +1323,11 @@ def descr_iter(self): return self - def descr_getitem(self, space, w_idx): - if not space.isinstance_w(w_idx, space.w_int): - raise OperationError(space.w_ValueError, space.wrap( - "non-integer indexing not supported yet")) - _i = space.int_w(w_idx) - if _i<0: - i = self.size + _i - else: - i = _i - if i >= self.size or i < 0: - raise operationerrfmt(space.w_IndexError, - "index (%d) out of range (%d<=index<%d", - _i, -self.size, self.size) - result = self.getitem(self.base.start + i * self.strides[0]) - return result - - def descr_setitem(self, space, w_idx, w_value): - if not space.isinstance_w(w_idx, space.w_int): - raise OperationError(space.w_ValueError, space.wrap( - "non-integer indexing not supported yet")) - _i = space.int_w(w_idx) - if _i<0: - i = self.size + _i - else: - i = _i - if i >= self.size or i < 0: - raise operationerrfmt(space.w_IndexError, - "index (%d) out of range (%d<=index<%d", - _i, -self.size, self.size) - self.setitem(self.base.start + i * self.strides[0], w_value) - W_FlatIterator.typedef = TypeDef( 'flatiter', next = interp2app(W_FlatIterator.descr_next), __iter__ = interp2app(W_FlatIterator.descr_iter), - __getitem__ = interp2app(W_FlatIterator.descr_getitem), - __setitem__ = interp2app(W_FlatIterator.descr_setitem), + __getitem__ = interp2app(BaseArray.descr_getitem), + __setitem__ = interp2app(BaseArray.descr_setitem), ) W_FlatIterator.acceptable_as_base_class = False From noreply at buildbot.pypy.org Thu Jan 19 18:00:14 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 19 Jan 2012 18:00:14 +0100 (CET) Subject: [pypy-commit] pypy stm: (bivab, arigo) Message-ID: <20120119170014.E6C34820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51490:601a7695b552 Date: 2012-01-19 16:41 +0100 http://bitbucket.org/pypy/pypy/changeset/601a7695b552/ Log: (bivab, arigo) A failing test. 
diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -97,7 +97,6 @@ state.num_waiting_threads += 1 if state.num_waiting_threads == state.num_threads: state.finished = True - state.unlock_unfinished() state.unlock_no_tasks_pending() state.unlock() # @@ -105,9 +104,9 @@ state.unlock_no_tasks_pending() # state.lock() + state.num_waiting_threads -= 1 if state.finished: break - state.num_waiting_threads -= 1 else: pending = state.pending.pop(0) if len(state.pending) == 0: @@ -116,6 +115,8 @@ pending.run() state.lock() # + if state.num_waiting_threads == 0: # only the last thread to leave + state.unlock_unfinished() state.unlock() @@ -132,7 +133,7 @@ threadintf.start_new_thread(_run_thread, ()) # state.lock_unfinished() - assert state.num_waiting_threads == state.num_threads + assert state.num_waiting_threads == 0 assert len(state.pending) == 0 state.lock_no_tasks_pending() state.running = False diff --git a/pypy/module/transaction/test/test_interp_transaction.py b/pypy/module/transaction/test/test_interp_transaction.py --- a/pypy/module/transaction/test/test_interp_transaction.py +++ b/pypy/module/transaction/test/test_interp_transaction.py @@ -1,3 +1,4 @@ +import time from pypy.module.transaction import interp_transaction @@ -22,3 +23,49 @@ assert seen == [] interp_transaction.run(space) assert seen == range(201) + + +def test_tree_of_transactions(): + space = FakeSpace() + interp_transaction.state.startup(space) + seen = [] + # + def do(level): + seen.append(level) + if level < 11: + interp_transaction.add(space, do, (level+1,)) + interp_transaction.add(space, do, (level+1,)) + # + interp_transaction.add(space, do, (0,)) + assert seen == [] + interp_transaction.run(space) + for i in range(12): + assert seen.count(i) == 2 ** i + assert len(seen) == 2 ** 12 - 1 + + +def test_transactional_simple(): + space = 
FakeSpace() + interp_transaction.state.startup(space) + lst = [] + def f(n): + lst.append(n+0) + lst.append(n+1) + time.sleep(0.05) + lst.append(n+2) + lst.append(n+3) + lst.append(n+4) + time.sleep(0.25) + lst.append(n+5) + lst.append(n+6) + interp_transaction.add(space, f, (10,)) + interp_transaction.add(space, f, (20,)) + interp_transaction.add(space, f, (30,)) + interp_transaction.run(space) + assert len(lst) == 7 * 3 + seen = set() + for start in range(0, 21, 7): + seen.add(lst[start]) + for index in range(7): + assert lst[start + index] == lst[start] + index + assert seen == set([10, 20, 30]) diff --git a/pypy/module/transaction/test/test_transaction.py b/pypy/module/transaction/test/test_transaction.py --- a/pypy/module/transaction/test/test_transaction.py +++ b/pypy/module/transaction/test/test_transaction.py @@ -4,6 +4,7 @@ class AppTestTransaction: def setup_class(cls): + py.test.skip("XXX not transactional!") cls.space = gettestobjspace(usemodules=['transaction']) def test_simple(self): @@ -14,3 +15,26 @@ transaction.add(lst.append, 7) transaction.run() assert sorted(lst) == [5, 6, 7] + + def test_almost_as_simple(self): + import transaction + lst = [] + def f(n): + lst.append(n+0) + lst.append(n+1) + lst.append(n+2) + lst.append(n+3) + lst.append(n+4) + lst.append(n+5) + lst.append(n+6) + transaction.add(f, 10) + transaction.add(f, 20) + transaction.add(f, 30) + transaction.run() + assert len(lst) == 7 * 3 + seen = set() + for start in range(0, 21, 7): + seen.append(lst[start]) + for index in range(7): + assert lst[start + index] == lst[start] + index + assert seen == set([10, 20, 30]) From noreply at buildbot.pypy.org Thu Jan 19 18:00:16 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 19 Jan 2012 18:00:16 +0100 (CET) Subject: [pypy-commit] pypy stm: (bivab, arigo) Message-ID: <20120119170016.27211820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51491:fd78c427e13b Date: 2012-01-19 17:51 +0100 
http://bitbucket.org/pypy/pypy/changeset/fd78c427e13b/ Log: (bivab, arigo) Fixed rlib.rstm to run correctly on top of CPython with multiple threads. diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -1,3 +1,4 @@ +import thread from pypy.rlib.objectmodel import specialize, we_are_translated, keepalive_until_here from pypy.rpython.lltypesystem import rffi, lltype, rclass from pypy.rpython.annlowlevel import (cast_base_ptr_to_instance, @@ -5,6 +6,8 @@ llhelper) from pypy.translator.stm import _rffi_stm +_global_lock = thread.allocate_lock() + @specialize.memo() def _get_stm_callback(func, argcls): def _stm_callback(llarg): @@ -26,14 +29,29 @@ llarg = cast_instance_to_base_ptr(arg) llarg = rffi.cast(rffi.VOIDP, llarg) else: - # only for tests + # only for tests: we want (1) to test the calls to the C library, + # but also (2) to work with multiple Python threads, so we acquire + # and release some custom GIL here --- even though it doesn't make + # sense from an STM point of view :-/ + _global_lock.acquire() lltype.TLS.stm_callback_arg = arg llarg = lltype.nullptr(rffi.VOIDP.TO) callback = _get_stm_callback(func, argcls) llcallback = llhelper(_rffi_stm.CALLBACK, callback) _rffi_stm.stm_perform_transaction(llcallback, llarg) keepalive_until_here(arg) + if not we_are_translated(): + _global_lock.release() -descriptor_init = _rffi_stm.stm_descriptor_init -descriptor_done = _rffi_stm.stm_descriptor_done -debug_get_state = _rffi_stm.stm_debug_get_state +def descriptor_init(): + if not we_are_translated(): _global_lock.acquire() + _rffi_stm.stm_descriptor_init() + if not we_are_translated(): _global_lock.release() + +def descriptor_done(): + if not we_are_translated(): _global_lock.acquire() + _rffi_stm.stm_descriptor_done() + if not we_are_translated(): _global_lock.release() + +def debug_get_state(): + return _rffi_stm.stm_debug_get_state() diff --git a/pypy/rlib/test/test_rstm.py b/pypy/rlib/test/test_rstm.py --- 
a/pypy/rlib/test/test_rstm.py +++ b/pypy/rlib/test/test_rstm.py @@ -1,4 +1,4 @@ -import os +import os, thread, time from pypy.rlib.debug import debug_print from pypy.rlib import rstm from pypy.translator.stm.test.support import CompiledSTMTests @@ -29,6 +29,20 @@ assert rstm.debug_get_state() == -1 assert arg.x == 42 +def test_stm_multiple_threads(): + ok = [] + def f(i): + test_stm_perform_transaction() + ok.append(i) + for i in range(10): + thread.start_new_thread(f, (i,)) + timeout = 10 + while len(ok) < 10: + time.sleep(0.1) + timeout -= 0.1 + assert timeout >= 0.0, "timeout!" + assert sorted(ok) == range(10) + class TestTransformSingleThread(CompiledSTMTests): From noreply at buildbot.pypy.org Thu Jan 19 18:00:17 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 19 Jan 2012 18:00:17 +0100 (CET) Subject: [pypy-commit] pypy stm: (bivab, arigo) Message-ID: <20120119170017.5B2B0820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51492:bc50696a4294 Date: 2012-01-19 17:59 +0100 http://bitbucket.org/pypy/pypy/changeset/bc50696a4294/ Log: (bivab, arigo) - really create a low-level transaction for running each transaction - changed the addition of new transactions to go into a per-thread list until the transaction really commits - enable some more tests diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -1,6 +1,7 @@ from pypy.interpreter.error import OperationError from pypy.interpreter.gateway import unwrap_spec from pypy.module.transaction import threadintf +from pypy.rlib import rstm NUM_THREADS_DEFAULT = 4 # by default @@ -21,8 +22,9 @@ self.ll_unfinished_lock = threadintf.allocate_lock() self.w_error = space.new_exception_class( "transaction.TransactionError") - self.lock_no_tasks_pending() self.lock_unfinished() + self.main_thread_id = threadintf.thread_id() + 
self.pending_lists = {self.main_thread_id: self.pending} def set_num_threads(self, num): if self.running: @@ -46,9 +48,11 @@ def unlock_no_tasks_pending(self): threadintf.release(self.ll_no_tasks_pending_lock) - def assert_locked_no_tasks_pending(self): + def is_locked_no_tasks_pending(self): just_locked = threadintf.acquire(self.ll_no_tasks_pending_lock, False) - assert not just_locked + if just_locked: + threadintf.release(self.ll_no_tasks_pending_lock) + return not just_locked def lock_unfinished(self): threadintf.acquire(self.ll_unfinished_lock, True) @@ -69,31 +73,47 @@ class Pending: + _alloc_nonmovable_ = True + def __init__(self, w_callback, args): self.w_callback = w_callback self.args = args def run(self): + rstm.perform_transaction(Pending._run_in_transaction, Pending, self) + + @staticmethod + def _run_in_transaction(pending): space = state.space - space.call_args(self.w_callback, self.args) + space.call_args(pending.w_callback, pending.args) # xxx exceptions? def add(space, w_callback, __args__): - state.lock() + id = threadintf.thread_id() + state.pending_lists[id].append(Pending(w_callback, __args__)) + + +def add_list(new_pending_list): + if len(new_pending_list) == 0: + return was_empty = len(state.pending) == 0 - state.pending.append(Pending(w_callback, __args__)) + state.pending += new_pending_list + del new_pending_list[:] if was_empty: state.unlock_no_tasks_pending() - state.unlock() def _run_thread(): state.lock() + my_pending_list = [] + my_thread_id = threadintf.thread_id() + state.pending_lists[my_thread_id] = my_pending_list + rstm.descriptor_init() # while True: if len(state.pending) == 0: - state.assert_locked_no_tasks_pending() + assert state.is_locked_no_tasks_pending() state.num_waiting_threads += 1 if state.num_waiting_threads == state.num_threads: state.finished = True @@ -114,7 +134,10 @@ state.unlock() pending.run() state.lock() + add_list(my_pending_list) # + rstm.descriptor_done() + del state.pending_lists[my_thread_id] if 
state.num_waiting_threads == 0: # only the last thread to leave state.unlock_unfinished() state.unlock() @@ -125,6 +148,9 @@ raise OperationError( state.w_error, space.wrap("recursive invocation of transaction.run()")) + assert not state.is_locked_no_tasks_pending() + if len(state.pending) == 0: + return state.num_waiting_threads = 0 state.finished = False state.running = True @@ -132,8 +158,10 @@ for i in range(state.num_threads): threadintf.start_new_thread(_run_thread, ()) # - state.lock_unfinished() + state.lock_unfinished() # wait for all threads to finish + # assert state.num_waiting_threads == 0 assert len(state.pending) == 0 - state.lock_no_tasks_pending() + assert state.pending_lists.keys() == [state.main_thread_id] + assert not state.is_locked_no_tasks_pending() state.running = False diff --git a/pypy/module/transaction/test/test_transaction.py b/pypy/module/transaction/test/test_transaction.py --- a/pypy/module/transaction/test/test_transaction.py +++ b/pypy/module/transaction/test/test_transaction.py @@ -4,7 +4,6 @@ class AppTestTransaction: def setup_class(cls): - py.test.skip("XXX not transactional!") cls.space = gettestobjspace(usemodules=['transaction']) def test_simple(self): @@ -34,7 +33,7 @@ assert len(lst) == 7 * 3 seen = set() for start in range(0, 21, 7): - seen.append(lst[start]) + seen.add(lst[start]) for index in range(7): assert lst[start + index] == lst[start] + index assert seen == set([10, 20, 30]) diff --git a/pypy/module/transaction/threadintf.py b/pypy/module/transaction/threadintf.py --- a/pypy/module/transaction/threadintf.py +++ b/pypy/module/transaction/threadintf.py @@ -7,7 +7,7 @@ def acquire(lock, wait): "NOT_RPYTHON" - lock.acquire(wait) + return lock.acquire(wait) def release(lock): "NOT_RPYTHON" @@ -17,3 +17,6 @@ "NOT_RPYTHON" thread.start_new_thread(callback, args) +def thread_id(): + "NOT_RPYTHON" + return thread.get_ident() From noreply at buildbot.pypy.org Thu Jan 19 18:08:54 2012 From: noreply at buildbot.pypy.org 
(fijal) Date: Thu, 19 Jan 2012 18:08:54 +0100 (CET) Subject: [pypy-commit] pypy numppy-flatitter: an example of failing test Message-ID: <20120119170854.BEEEE820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numppy-flatitter Changeset: r51493:d0f36182d3eb Date: 2012-01-19 19:08 +0200 http://bitbucket.org/pypy/pypy/changeset/d0f36182d3eb/ Log: an example of failing test diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1309,6 +1309,11 @@ raises(IndexError, "b[11]") raises(IndexError, "b[-11]") + def test_flatiter_view(self): + from _numpypy import arange + a = arange(10).reshape(5, 2) + assert (a[::2].flat == [0, 1, 4, 5, 8, 9]) + def test_flatiter_transpose(self): from _numpypy import arange a = arange(10) From noreply at buildbot.pypy.org Thu Jan 19 18:11:24 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 19 Jan 2012 18:11:24 +0100 (CET) Subject: [pypy-commit] pypy numppy-flatitter: add REVIEW Message-ID: <20120119171124.160B5820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numppy-flatitter Changeset: r51494:3a1f1a964ab4 Date: 2012-01-19 19:10 +0200 http://bitbucket.org/pypy/pypy/changeset/3a1f1a964ab4/ Log: add REVIEW diff --git a/pypy/module/micronumpy/REVIEW b/pypy/module/micronumpy/REVIEW new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/REVIEW @@ -0,0 +1,10 @@ +* I think we should wait for indexing-by-arrays-2, since this would clean up + the iterator interface + +* I commited a failing test, it seems the entire approach is pretty doomed. + Instead we should keep the parent iterator (and not create a OneDim one) + +* For getitem, we need to reuse parent getitem, but with some hook + that recomputes indexes. 
That hook would be used for any sort of access, + be it slices or be it integers, but in general I would like to avoid code + duplication, since the indexing is getting slowly fairly complex. \ No newline at end of file From noreply at buildbot.pypy.org Thu Jan 19 18:21:21 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 19 Jan 2012 18:21:21 +0100 (CET) Subject: [pypy-commit] pypy stm: (bivab, arigo) Message-ID: <20120119172121.91244820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51495:5d6f1589889c Date: 2012-01-19 18:19 +0100 http://bitbucket.org/pypy/pypy/changeset/5d6f1589889c/ Log: (bivab, arigo) Translation fixes. diff --git a/pypy/module/transaction/__init__.py b/pypy/module/transaction/__init__.py --- a/pypy/module/transaction/__init__.py +++ b/pypy/module/transaction/__init__.py @@ -9,10 +9,10 @@ 'set_num_threads': 'interp_transaction.set_num_threads', 'add': 'interp_transaction.add', 'run': 'interp_transaction.run', - 'TransactionError': 'interp_transaction.state.w_error', } appleveldefs = { + 'TransactionError': 'app_transaction.TransactionError', } def startup(self, space): diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -13,15 +13,20 @@ self.__dict__.clear() self.running = False self.num_threads = NUM_THREADS_DEFAULT + self.pending = [] + self.pending_lists = {0: self.pending} + self.ll_lock = threadintf.null_ll_lock + self.ll_no_tasks_pending_lock = threadintf.null_ll_lock + self.ll_unfinished_lock = threadintf.null_ll_lock def startup(self, space): self.space = space - self.pending = [] + w_module = space.getbuiltinmodule('transaction') + self.w_error = space.getattr(w_module, space.wrap('TransactionError')) + # self.ll_lock = threadintf.allocate_lock() self.ll_no_tasks_pending_lock = threadintf.allocate_lock() self.ll_unfinished_lock = 
threadintf.allocate_lock() - self.w_error = space.new_exception_class( - "transaction.TransactionError") self.lock_unfinished() self.main_thread_id = threadintf.thread_id() self.pending_lists = {self.main_thread_id: self.pending} @@ -33,7 +38,7 @@ space.wrap("cannot change the number of " "threads when transaction.run() " "is active")) - self.num_threads = num_threads + self.num_threads = num def lock(self): # XXX think about the interaction between locks and the GC diff --git a/pypy/module/transaction/test/test_transaction.py b/pypy/module/transaction/test/test_transaction.py --- a/pypy/module/transaction/test/test_transaction.py +++ b/pypy/module/transaction/test/test_transaction.py @@ -6,6 +6,10 @@ def setup_class(cls): cls.space = gettestobjspace(usemodules=['transaction']) + def test_set_num_threads(self): + import transaction + transaction.set_num_threads(4) + def test_simple(self): import transaction lst = [] diff --git a/pypy/module/transaction/threadintf.py b/pypy/module/transaction/threadintf.py --- a/pypy/module/transaction/threadintf.py +++ b/pypy/module/transaction/threadintf.py @@ -1,22 +1,36 @@ import thread +from pypy.module.thread import ll_thread +from pypy.rlib.objectmodel import we_are_translated +null_ll_lock = ll_thread.null_ll_lock + def allocate_lock(): - "NOT_RPYTHON" - return thread.allocate_lock() + if we_are_translated(): + return ll_thread.allocate_ll_lock() + else: + return thread.allocate_lock() def acquire(lock, wait): - "NOT_RPYTHON" - return lock.acquire(wait) + if we_are_translated(): + return ll_thread.acquire_NOAUTO(lock, wait) + else: + return lock.acquire(wait) def release(lock): - "NOT_RPYTHON" - lock.release() + if we_are_translated(): + ll_thread.release_NOAUTO(lock) + else: + lock.release() def start_new_thread(callback, args): - "NOT_RPYTHON" - thread.start_new_thread(callback, args) + if we_are_translated(): + ll_thread.start_new_thread(callback, args) + else: + thread.start_new_thread(callback, args) def thread_id(): - 
"NOT_RPYTHON" - return thread.get_ident() + if we_are_translated(): + return ll_thread.get_ident() + else: + return thread.get_ident() From noreply at buildbot.pypy.org Thu Jan 19 18:21:22 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 19 Jan 2012 18:21:22 +0100 (CET) Subject: [pypy-commit] pypy stm: Forgot to add two files. Message-ID: <20120119172122.C7EFA820D8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51496:4692940c82e4 Date: 2012-01-19 18:21 +0100 http://bitbucket.org/pypy/pypy/changeset/4692940c82e4/ Log: Forgot to add two files. diff --git a/pypy/module/transaction/app_transaction.py b/pypy/module/transaction/app_transaction.py new file mode 100644 --- /dev/null +++ b/pypy/module/transaction/app_transaction.py @@ -0,0 +1,3 @@ + +class TransactionError(Exception): + pass diff --git a/pypy/module/transaction/test/test_ztranslation.py b/pypy/module/transaction/test/test_ztranslation.py new file mode 100644 --- /dev/null +++ b/pypy/module/transaction/test/test_ztranslation.py @@ -0,0 +1,4 @@ +from pypy.objspace.fake.checkmodule import checkmodule + +def test_checkmodule(): + checkmodule('transaction') From noreply at buildbot.pypy.org Thu Jan 19 18:22:56 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 19 Jan 2012 18:22:56 +0100 (CET) Subject: [pypy-commit] pypy numpypy-ufuncs: improve the test Message-ID: <20120119172256.301BF820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-ufuncs Changeset: r51497:77ac35b7fd12 Date: 2012-01-19 19:22 +0200 http://bitbucket.org/pypy/pypy/changeset/77ac35b7fd12/ Log: improve the test diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -193,11 +193,15 @@ def test_floorceil(self): from _numpypy import array, floor, ceil import math - reference = [-2.0, -1.0, 0.0, 1.0, 1.0] - a = array([-1.4, -1.0, 0.0, 
1.0, 1.4]) + reference = [-2.0, -2.0, -1.0, 0.0, 1.0, 1.0, 0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) b = floor(a) for i in range(5): assert b[i] == reference[i] + reference = [-1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) + b = ceil(a) + assert (reference == b).all() inf = float("inf") data = [1.5, 2.9999, -1.999, inf] results = [math.floor(x) for x in data] From noreply at buildbot.pypy.org Thu Jan 19 18:24:24 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 19 Jan 2012 18:24:24 +0100 (CET) Subject: [pypy-commit] pypy default: (mattip) merge numpypy-ufuncs, adding ceil Message-ID: <20120119172424.B4602820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51498:7745b3fcec92 Date: 2012-01-19 19:23 +0200 http://bitbucket.org/pypy/pypy/changeset/7745b3fcec92/ Log: (mattip) merge numpypy-ufuncs, adding ceil diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -70,6 +70,7 @@ ("exp", "exp"), ("fabs", "fabs"), ("floor", "floor"), + ("ceil", "ceil"), ("greater", "greater"), ("greater_equal", "greater_equal"), ("less", "less"), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -449,6 +449,7 @@ ("fabs", "fabs", 1, {"promote_to_float": True}), ("floor", "floor", 1, {"promote_to_float": True}), + ("ceil", "ceil", 1, {"promote_to_float": True}), ("exp", "exp", 1, {"promote_to_float": True}), ('sqrt', 'sqrt', 1, {'promote_to_float': True}), diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -190,14 +190,24 @@ for i in range(3): assert c[i] == a[i] - b[i] - def test_floor(self): - from 
_numpypy import array, floor - - reference = [-2.0, -1.0, 0.0, 1.0, 1.0] - a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) + def test_floorceil(self): + from _numpypy import array, floor, ceil + import math + reference = [-2.0, -2.0, -1.0, 0.0, 1.0, 1.0, 0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) b = floor(a) for i in range(5): assert b[i] == reference[i] + reference = [-1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) + b = ceil(a) + assert (reference == b).all() + inf = float("inf") + data = [1.5, 2.9999, -1.999, inf] + results = [math.floor(x) for x in data] + assert (floor(data) == results).all() + results = [math.ceil(x) for x in data] + assert (ceil(data) == results).all() def test_copysign(self): from _numpypy import array, copysign @@ -238,7 +248,7 @@ assert b[i] == math.sin(a[i]) a = sin(array([True, False], dtype=bool)) - assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise + assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise assert a[1] == 0.0 def test_cos(self): @@ -259,7 +269,6 @@ for i in range(len(a)): assert b[i] == math.tan(a[i]) - def test_arcsin(self): import math from _numpypy import array, arcsin @@ -283,7 +292,6 @@ for i in range(len(a)): assert b[i] == math.acos(a[i]) - a = array([-10, -1.5, -1.01, 1.01, 1.5, 10, float('nan'), float('inf'), float('-inf')]) b = arccos(a) for f in b: diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -374,6 +374,10 @@ return math.floor(v) @simple_unary_op + def ceil(self, v): + return math.ceil(v) + + @simple_unary_op def exp(self, v): try: return math.exp(v) @@ -436,4 +440,4 @@ class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box - format_code = "d" \ No newline at end of file + format_code = "d" From noreply at buildbot.pypy.org Thu Jan 19 18:24:25 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 19 Jan 
2012 18:24:25 +0100 (CET) Subject: [pypy-commit] pypy numpypy-ufuncs: close merged branch Message-ID: <20120119172425.E5981820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpypy-ufuncs Changeset: r51499:8e57f6c0502b Date: 2012-01-19 19:23 +0200 http://bitbucket.org/pypy/pypy/changeset/8e57f6c0502b/ Log: close merged branch From noreply at buildbot.pypy.org Thu Jan 19 18:28:47 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Thu, 19 Jan 2012 18:28:47 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain) killed the code that checked for version numbers of Python Message-ID: <20120119172847.838DE820D8@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51500:b3f6b7f219a0 Date: 2012-01-19 18:28 +0100 http://bitbucket.org/pypy/pypy/changeset/b3f6b7f219a0/ Log: (antocuni, romain) killed the code that checked for version numbers of Python 2, changed the way we compute the magic number for pyc files diff --git a/pypy/interpreter/nestedscope.py b/pypy/interpreter/nestedscope.py --- a/pypy/interpreter/nestedscope.py +++ b/pypy/interpreter/nestedscope.py @@ -220,18 +220,9 @@ def MAKE_CLOSURE(self, numdefaults, next_instr): w_codeobj = self.popvalue() codeobj = self.space.interp_w(pycode.PyCode, w_codeobj) - if codeobj.magic >= 0xa0df281: # CPython 2.5 AST branch merge - w_freevarstuple = self.popvalue() - freevars = [self.space.interp_w(Cell, cell) - for cell in self.space.fixedview(w_freevarstuple)] - else: - n = len(codeobj.co_freevars) - freevars = [None] * n - while True: - n -= 1 - if n < 0: - break - freevars[n] = self.space.interp_w(Cell, self.popvalue()) + w_freevarstuple = self.popvalue() + freevars = [self.space.interp_w(Cell, cell) + for cell in self.space.fixedview(w_freevarstuple)] defaultarguments = self.popvalues(numdefaults) fn = function.Function(self.space, codeobj, self.w_globals, defaultarguments, freevars) diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- 
a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -24,12 +24,19 @@ def unpack_str_tuple(space,w_str_tuple): return [space.str_w(w_el) for w_el in space.unpackiterable(w_str_tuple)] - # Magic numbers for the bytecode version in code objects. # See comments in pypy/module/imp/importing. cpython_magic, = struct.unpack("= 0xa0df2ef - # Implementation since 2.7a0: 62191 (introduce SETUP_WITH) - or self.pycode.magic >= 0xa0df2d1): - # implementation since 2.6a1: 62161 (WITH_CLEANUP optimization) - self.popvalue() - self.popvalue() - w_unroller = self.popvalue() - w_exitfunc = self.popvalue() - self.pushvalue(w_unroller) - self.pushvalue(self.space.w_None) - self.pushvalue(self.space.w_None) - elif self.pycode.magic >= 0xa0df28c: - # Implementation since 2.5a0: 62092 (changed WITH_CLEANUP opcode) - w_exitfunc = self.popvalue() - w_unroller = self.peekvalue(2) - else: - raise NotImplementedError("WITH_CLEANUP for CPython <= 2.4") + self.popvalue() + self.popvalue() + w_unroller = self.popvalue() + w_exitfunc = self.popvalue() + self.pushvalue(w_unroller) + self.pushvalue(self.space.w_None) + self.pushvalue(self.space.w_None) unroller = self.space.interpclass_w(w_unroller) is_app_exc = (unroller is not None and diff --git a/pypy/module/imp/importing.py b/pypy/module/imp/importing.py --- a/pypy/module/imp/importing.py +++ b/pypy/module/imp/importing.py @@ -810,16 +810,14 @@ a .pyc file in text mode the magic number will be wrong; also, the Apple MPW compiler swaps their values, botching string constants. - CPython uses values between 20121 - 62xxx + CPython 2 uses values between 20121 - 62xxx + CPython 3 uses values greater than 3000 + PyPy uses values under 3000 """ -# picking a magic number is a mess. So far it works because we -# have only one extra opcode, which bumps the magic number by +2, and CPython -# leaves a gap of 10 when it increases -# its own magic number. To avoid assigning exactly the same numbers -# as CPython we always add a +2. 
We'll have to think again when we -# get three more new opcodes +# Depending on which opcodes are enabled, eg. CALL_METHOD we bump the version +# number by some constant # # * CALL_METHOD +2 # From noreply at buildbot.pypy.org Thu Jan 19 20:34:15 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 19 Jan 2012 20:34:15 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: progress; Message-ID: <20120119193415.B5737820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51501:c5926e6a02bc Date: 2012-01-19 21:31 +0200 http://bitbucket.org/pypy/pypy/changeset/c5926e6a02bc/ Log: progress; diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -24,6 +24,9 @@ def __init__(self, arr): self.arr = arr.get_concrete() + def extend_shape(self, shape): + shape.extend(self.arr.shape) + class BoolArrayChunk(BaseChunk): def __init__(self, arr): self.arr = arr.get_concrete() diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -10,7 +10,7 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator, Chunk, ViewIterator + SkipLastAxisIterator, Chunk, ViewIterator, BoolArrayChunk, IntArrayChunk numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -229,6 +229,17 @@ n_old_elems_to_use *= old_shape[oldI] return new_strides +def wrap_chunk(space, w_idx, size): + if (space.isinstance_w(w_idx, space.w_int) or + space.isinstance_w(w_idx, space.w_slice)): + return Chunk(*space.decode_index4(w_idx, size)) + arr = convert_to_array(space, w_idx) + if arr.find_dtype().is_bool_type(): + return 
BoolArrayChunk(arr) + elif arr.find_dtype().is_int_type(): + return IntArrayChunk(arr) + raise OperationError(space.w_IndexError, space.wrap("arrays used as indices must be of integer (or boolean) type")) + class BaseArray(Wrappable): _attrs_ = ["invalidates", "shape", 'size'] @@ -485,6 +496,8 @@ elif (space.isinstance_w(w_idx, space.w_slice) or space.isinstance_w(w_idx, space.w_int)): return False + if isinstance(w_idx, BaseArray): + return False lgt = space.len_w(w_idx) if lgt > shape_len: raise OperationError(space.w_IndexError, @@ -494,14 +507,15 @@ for w_item in space.fixedview(w_idx): if space.isinstance_w(w_item, space.w_slice): return False + if isinstance(w_item, BaseArray): + return False return True @jit.unroll_safe def _prepare_slice_args(self, space, w_idx): - if (space.isinstance_w(w_idx, space.w_int) or - space.isinstance_w(w_idx, space.w_slice)): - return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] - return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in + if not space.isinstance_w(w_idx, space.w_tuple): + return [wrap_chunk(space, w_idx, self.shape[0])] + return [wrap_chunk(space, w_item, self.shape[i]) for i, w_item in enumerate(space.fixedview(w_idx))] def count_all_true(self, arr): @@ -563,6 +577,7 @@ if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and w_idx.find_dtype().is_bool_type()): return self.getitem_filter(space, w_idx) + # XXX deal with a scalar if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -589,6 +604,13 @@ view = self.create_slice(chunks).get_concrete() view.setslice(space, w_value) + def force_slice(self, shape, chunks): + size = 1 + for elem in shape: + size *= elem + res = W_NDimArray(size, shape, self.find_dtype()) + xxx + @jit.unroll_safe def create_slice(self, chunks): shape = [] @@ -598,6 +620,9 @@ s = i + 1 assert s >= 0 shape += self.shape[s:] + for chunk in chunks: + if not isinstance(chunk, Chunk): 
+ return self.force_slice(shape, chunks) if not isinstance(self, ConcreteArray): return VirtualSlice(self, chunks, shape) r = calculate_slice_strides(self.shape, self.start, self.strides, diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1301,16 +1301,13 @@ raises(TypeError, getattr, array(3), '__array_interface__') def test_array_indexing_one_elem(self): - skip("not yet") from _numpypy import array, arange raises(IndexError, 'arange(3)[array([3.5])]') a = arange(3)[array([1])] - assert a == 1 - assert a[0] == 1 + assert a == [1] raises(IndexError,'arange(3)[array([15])]') assert arange(3)[array([-3])] == 0 raises(IndexError,'arange(3)[array([-15])]') - assert arange(3)[array(1)] == 1 def test_array_indexing_bool(self): from _numpypy import arange From noreply at buildbot.pypy.org Thu Jan 19 20:34:16 2012 From: noreply at buildbot.pypy.org (fijal) Date: Thu, 19 Jan 2012 20:34:16 +0100 (CET) Subject: [pypy-commit] pypy matrixmath-dot: mistake? Message-ID: <20120119193416.EFC7D820D8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: matrixmath-dot Changeset: r51502:275370ca0170 Date: 2012-01-19 21:33 +0200 http://bitbucket.org/pypy/pypy/changeset/275370ca0170/ Log: mistake? 
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -458,7 +458,6 @@ ri = ViewIterator(0, _r[0], _r[1], arr.broadcast_shape) while not frame.done(): v = sig.eval(frame, arr).convert_to(sig.calc_dtype) - z = result.getitem(ri.offset) value = add(sig.calc_dtype, v, result.getitem(ri.offset)) result.setitem(ri.offset, value) frame.next(shapelen) From noreply at buildbot.pypy.org Thu Jan 19 23:20:22 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 19 Jan 2012 23:20:22 +0100 (CET) Subject: [pypy-commit] pypy py3k: (antocuni, romain): implement keyword-only arguments in the astcompiler and in the interpreter Message-ID: <20120119222022.9273E82CF8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r51503:243e484588fb Date: 2012-01-19 23:20 +0100 http://bitbucket.org/pypy/pypy/changeset/243e484588fb/ Log: (antocuni, romain): implement keyword-only arguments in the astcompiler and in the interpreter diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -10,19 +10,27 @@ class Signature(object): _immutable_ = True _immutable_fields_ = ["argnames[*]"] - __slots__ = ("argnames", "varargname", "kwargname") + __slots__ = ("argnames", "kwonlyargnames", "varargname", "kwargname") - def __init__(self, argnames, varargname=None, kwargname=None): + def __init__(self, argnames, varargname=None, kwargname=None, kwonlyargnames=None): self.argnames = argnames self.varargname = varargname self.kwargname = kwargname + if kwonlyargnames is None: + kwonlyargnames = [] + self.kwonlyargnames = kwonlyargnames @jit.elidable def find_argname(self, name): try: return self.argnames.index(name) except ValueError: - return -1 + pass + try: + return len(self.argnames) + self.kwonlyargnames.index(name) + except ValueError: + pass + 
return -1 def num_argnames(self): return len(self.argnames) @@ -43,20 +51,22 @@ argnames = self.argnames if self.varargname is not None: argnames = argnames + [self.varargname] + argnames += self.kwonlyargnames if self.kwargname is not None: argnames = argnames + [self.kwargname] return argnames def __repr__(self): - return "Signature(%r, %r, %r)" % ( - self.argnames, self.varargname, self.kwargname) + return "Signature(%r, %r, %r, %r)" % ( + self.argnames, self.varargname, self.kwargname, self.kwonlyargnames) def __eq__(self, other): if not isinstance(other, Signature): return NotImplemented return (self.argnames == other.argnames and self.varargname == other.varargname and - self.kwargname == other.kwargname) + self.kwargname == other.kwargname and + self.kwonlyargnames == other.kwonlyargnames) def __ne__(self, other): if not isinstance(other, Signature): diff --git a/pypy/interpreter/astcompiler/assemble.py b/pypy/interpreter/astcompiler/assemble.py --- a/pypy/interpreter/astcompiler/assemble.py +++ b/pypy/interpreter/astcompiler/assemble.py @@ -147,6 +147,7 @@ self.free_vars = _list_to_dict(scope.free_vars, len(self.cell_vars)) self.w_consts = space.newdict() self.argcount = 0 + self.kwonlyargcount = 0 self.lineno_set = False self.lineno = 0 self.add_none_to_final_return = True @@ -459,6 +460,7 @@ bytecode = ''.join([block.get_code() for block in blocks]) return pycode.PyCode(self.space, self.argcount, + self.kwonlyargcount, len(self.var_names), stack_depth, flags, diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -1173,6 +1173,9 @@ if args.args: self._handle_nested_args(args.args) self.argcount = len(args.args) + if args.kwonlyargs: + self._handle_nested_args(args.kwonlyargs) + self.kwonlyargcount = len(args.kwonlyargs) if func.body: for i in range(start, len(func.body)): func.body[i].walkabout(self) diff --git 
a/pypy/interpreter/astcompiler/symtable.py b/pypy/interpreter/astcompiler/symtable.py --- a/pypy/interpreter/astcompiler/symtable.py +++ b/pypy/interpreter/astcompiler/symtable.py @@ -495,6 +495,8 @@ assert isinstance(scope, FunctionScope) # Annotator hint. if arguments.args: self._handle_params(arguments.args, True) + if arguments.kwonlyargs: + self._handle_params(arguments.kwonlyargs, True) if arguments.vararg: self.note_symbol(arguments.vararg, SYM_PARAM) scope.note_variable_arg(arguments.vararg) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -5,6 +5,7 @@ from pypy.interpreter.pyparser.test import expressions from pypy.interpreter.pycode import PyCode from pypy.interpreter.pyparser.error import SyntaxError, IndentationError +from pypy.interpreter.error import OperationError from pypy.tool import stdlib_opcode as ops def compile_with_astcompiler(expr, mode, space): @@ -41,9 +42,11 @@ source = str(py.code.Source(source)) space = self.space code = compile_with_astcompiler(source, 'exec', space) - # 2.7 bytecode is too different, the standard `dis` module crashes + # 3.2 bytecode is too different, the standard `dis` module crashes # on older cpython versions - if sys.version_info >= (2, 7): + if sys.version_info >= (3, 2): + # this will only (maybe) work in the far future, when we run pypy + # on top of Python 3. 
For now, it's just disabled print code.dump() w_dict = space.newdict() @@ -242,7 +245,9 @@ return a, b """) decl = str(decl) + '\n' - yield self.st, decl + "x=f(1, b=2)", "x", (1, 2) + self.st(decl + "x=f(1, b=2)", "x", (1, 2)) + operr = py.test.raises(OperationError, 'self.st(decl + "x=f(1, 2)", "x", (1, 2))') + assert operr.value.w_type is self.space.w_TypeError def test_listmakers(self): yield (self.st, diff --git a/pypy/interpreter/astcompiler/test/test_symtable.py b/pypy/interpreter/astcompiler/test/test_symtable.py --- a/pypy/interpreter/astcompiler/test/test_symtable.py +++ b/pypy/interpreter/astcompiler/test/test_symtable.py @@ -131,6 +131,11 @@ for name in ("a", "b", "c"): assert scp.lookup(name) == symtable.SCOPE_LOCAL + def test_arguments_kwonly(self): + scp = self.func_scope("def f(a, *, b): pass") + for name in ("a", "b"): + assert scp.lookup(name) == symtable.SCOPE_LOCAL + def test_function(self): scp = self.func_scope("def f(): x = 4") assert scp.lookup("x") == symtable.SCOPE_LOCAL diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -16,7 +16,7 @@ from pypy.rlib.rarithmetic import intmask from pypy.rlib.debug import make_sure_not_resized from pypy.rlib import jit -from pypy.rlib.objectmodel import compute_hash +from pypy.rlib.objectmodel import compute_hash, we_are_translated from pypy.tool.stdlib_opcode import opcodedesc, HAVE_ARGUMENT # helper @@ -42,8 +42,16 @@ def cpython_code_signature(code): "([list-of-arg-names], vararg-name-or-None, kwarg-name-or-None)." 
argcount = code.co_argcount + if we_are_translated(): + kwonlyargcount = code.co_kwonlyargcount + else: + # for compatibility with CPython 2.7 code objects + kwonlyargcount = getattr(code, 'co_kwonlyargcount', 0) assert argcount >= 0 # annotator hint + assert kwonlyargcount >= 0 argnames = list(code.co_varnames[:argcount]) + n = argcount + kwonlyargcount + kwonlyargs = list(code.co_varnames[argcount:n]) if code.co_flags & CO_VARARGS: varargname = code.co_varnames[argcount] argcount += 1 @@ -54,7 +62,7 @@ argcount += 1 else: kwargname = None - return Signature(argnames, varargname, kwargname) + return Signature(argnames, varargname, kwargname, kwonlyargs) class PyCode(eval.Code): "CPython-style code objects." @@ -62,7 +70,7 @@ _immutable_fields_ = ["co_consts_w[*]", "co_names_w[*]", "co_varnames[*]", "co_freevars[*]", "co_cellvars[*]"] - def __init__(self, space, argcount, nlocals, stacksize, flags, + def __init__(self, space, argcount, kwonlyargcount, nlocals, stacksize, flags, code, consts, names, varnames, filename, name, firstlineno, lnotab, freevars, cellvars, hidden_applevel=False, magic = default_magic): @@ -72,6 +80,7 @@ eval.Code.__init__(self, name) assert nlocals >= 0 self.co_argcount = argcount + self.co_kwonlyargcount = kwonlyargcount self.co_nlocals = nlocals self.co_stacksize = stacksize self.co_flags = flags @@ -175,6 +184,7 @@ # stick the underlying CPython magic value, if the code object # comes from there return cls(space, code.co_argcount, + getattr(code, 'co_kwonlyargcount', 0), code.co_nlocals, code.co_stacksize, code.co_flags, @@ -250,6 +260,7 @@ consts[num] = self.space.unwrap(w) num += 1 return new.code( self.co_argcount, + self.co_kwonlyargcount, self.co_nlocals, self.co_stacksize, self.co_flags, @@ -297,6 +308,7 @@ return space.w_False areEqual = (self.co_name == other.co_name and self.co_argcount == other.co_argcount and + self.co_kwonlyargcount == other.co_kwonlyargcount and self.co_nlocals == other.co_nlocals and self.co_flags == 
other.co_flags and self.co_firstlineno == other.co_firstlineno and @@ -323,6 +335,7 @@ space = self.space result = compute_hash(self.co_name) result ^= self.co_argcount + result ^= self.co_kwonlyargcount result ^= self.co_nlocals result ^= self.co_flags result ^= self.co_firstlineno @@ -337,12 +350,12 @@ w_result = space.xor(w_result, space.hash(w_const)) return w_result - @unwrap_spec(argcount=int, nlocals=int, stacksize=int, flags=int, + @unwrap_spec(argcount=int, kwonlyargcount=int, nlocals=int, stacksize=int, flags=int, codestring=str, filename=str, name=str, firstlineno=int, lnotab=str, magic=int) def descr_code__new__(space, w_subtype, - argcount, nlocals, stacksize, flags, + argcount, kwonlyargcount, nlocals, stacksize, flags, codestring, w_constants, w_names, w_varnames, filename, name, firstlineno, lnotab, w_freevars=NoneNotWrapped, @@ -351,6 +364,9 @@ if argcount < 0: raise OperationError(space.w_ValueError, space.wrap("code: argcount must not be negative")) + if kwonlyargcount < 0: + raise OperationError(space.w_ValueError, + space.wrap("code: kwonlyargcount must not be negative")) if nlocals < 0: raise OperationError(space.w_ValueError, space.wrap("code: nlocals must not be negative")) @@ -369,7 +385,7 @@ else: cellvars = [] code = space.allocate_instance(PyCode, w_subtype) - PyCode.__init__(code, space, argcount, nlocals, stacksize, flags, codestring, consts_w[:], names, + PyCode.__init__(code, space, argcount, kwonlyargcount, nlocals, stacksize, flags, codestring, consts_w[:], names, varnames, filename, name, firstlineno, lnotab, freevars, cellvars, magic=magic) return space.wrap(code) @@ -381,6 +397,7 @@ w = space.wrap tup = [ w(self.co_argcount), + w(self.co_kwonlyargcount), w(self.co_nlocals), w(self.co_stacksize), w(self.co_flags), diff --git a/pypy/interpreter/test/test_argument.py b/pypy/interpreter/test/test_argument.py --- a/pypy/interpreter/test/test_argument.py +++ b/pypy/interpreter/test/test_argument.py @@ -26,12 +26,12 @@ assert 
sig.has_kwarg() assert sig.scope_length() == 4 assert sig.getallvarnames() == ["a", "b", "c", "c"] - sig = Signature(["a", "b", "c"], "d", "c") + sig = Signature(["a", "b", "c"], "d", "c", ["kwonly"]) assert sig.num_argnames() == 3 assert sig.has_vararg() assert sig.has_kwarg() assert sig.scope_length() == 5 - assert sig.getallvarnames() == ["a", "b", "c", "d", "c"] + assert sig.getallvarnames() == ["a", "b", "c", "d", "kwonly", "c"] def test_eq(self): sig1 = Signature(["a", "b", "c"], "d", "c") @@ -40,11 +40,12 @@ def test_find_argname(self): - sig = Signature(["a", "b", "c"], None, None) + sig = Signature(["a", "b", "c"], None, None, ["kwonly"]) assert sig.find_argname("a") == 0 assert sig.find_argname("b") == 1 assert sig.find_argname("c") == 2 assert sig.find_argname("d") == -1 + assert sig.find_argname("kwonly") == 3 def test_tuply(self): sig = Signature(["a", "b", "c"], "d", "e") diff --git a/pypy/objspace/std/marshal_impl.py b/pypy/objspace/std/marshal_impl.py --- a/pypy/objspace/std/marshal_impl.py +++ b/pypy/objspace/std/marshal_impl.py @@ -313,6 +313,7 @@ # see pypy.interpreter.pycode for the layout x = space.interp_w(PyCode, w_pycode) m.put_int(x.co_argcount) + m.put_int(x.co_kwonlyargcount) m.put_int(x.co_nlocals) m.put_int(x.co_stacksize) m.put_int(x.co_flags) @@ -355,6 +356,7 @@ def unmarshal_pycode(space, u, tc): argcount = u.get_int() + kwonlyargcount = u.get_int() nlocals = u.get_int() stacksize = u.get_int() flags = u.get_int() @@ -370,7 +372,7 @@ name = unmarshal_str(u) firstlineno = u.get_int() lnotab = unmarshal_str(u) - code = PyCode(space, argcount, nlocals, stacksize, flags, + code = PyCode(space, argcount, kwonlyargcount, nlocals, stacksize, flags, code, consts_w[:], names, varnames, filename, name, firstlineno, lnotab, freevars, cellvars) return space.wrap(code) From noreply at buildbot.pypy.org Fri Jan 20 08:42:10 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 20 Jan 2012 08:42:10 +0100 (CET) Subject: [pypy-commit] pypy 
numppy-flatitter: use the parent iterator Message-ID: <20120120074210.1987382CF8@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numppy-flatitter Changeset: r51504:47faffcb9ec3 Date: 2012-01-19 21:30 +0200 http://bitbucket.org/pypy/pypy/changeset/47faffcb9ec3/ Log: use the parent iterator diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1308,8 +1308,10 @@ ViewArray.__init__(self, size, [size], arr.dtype, order=arr.order, parent=arr) self.shapelen = len(arr.shape) - self.iter = OneDimIterator(arr.start, self.strides[0], - self.shape[0]) + sig = arr.find_sig() + #self.iter = OneDimIterator(arr.start, self.strides[0], + # self.shape[0]) + self.iter = sig.create_frame(arr).get_final_iter() self.start = arr.start self.base = arr From noreply at buildbot.pypy.org Fri Jan 20 08:42:11 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 20 Jan 2012 08:42:11 +0100 (CET) Subject: [pypy-commit] pypy numppy-flatitter: correct test for missing == operator, un-skip failing test Message-ID: <20120120074211.557B382CF8@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numppy-flatitter Changeset: r51505:23731c70eb07 Date: 2012-01-19 22:44 +0200 http://bitbucket.org/pypy/pypy/changeset/23731c70eb07/ Log: correct test for missing == operator, un-skip failing test diff --git a/pypy/module/micronumpy/REVIEW b/pypy/module/micronumpy/REVIEW --- a/pypy/module/micronumpy/REVIEW +++ b/pypy/module/micronumpy/REVIEW @@ -1,10 +1,9 @@ * I think we should wait for indexing-by-arrays-2, since this would clean up the iterator interface -* I commited a failing test, it seems the entire approach is pretty doomed. - Instead we should keep the parent iterator (and not create a OneDim one) - * For getitem, we need to reuse parent getitem, but with some hook that recomputes indexes. 
That hook would be used for any sort of access, be it slices or be it integers, but in general I would like to avoid code - duplication, since the indexing is getting slowly fairly complex. \ No newline at end of file + duplication, since the indexing is getting slowly fairly complex. + +* iterating over a transposed array still fails. diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1325,11 +1325,15 @@ def descr_iter(self): return self + def descr_index(self, space): + return space.wrap(self.iter.offset) + W_FlatIterator.typedef = TypeDef( 'flatiter', next = interp2app(W_FlatIterator.descr_next), __iter__ = interp2app(W_FlatIterator.descr_iter), __getitem__ = interp2app(BaseArray.descr_getitem), __setitem__ = interp2app(BaseArray.descr_setitem), + index = GetSetProperty(W_FlatIterator.descr_index), ) W_FlatIterator.acceptable_as_base_class = False diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1308,16 +1308,22 @@ assert b[-2] == 8 raises(IndexError, "b[11]") raises(IndexError, "b[-11]") + assert b.index == 3 def test_flatiter_view(self): from _numpypy import arange a = arange(10).reshape(5, 2) - assert (a[::2].flat == [0, 1, 4, 5, 8, 9]) + #no == yet. 
+ # a[::2].flat == [0, 1, 4, 5, 8, 9] + isequal = True + for y,z in zip(a[::2].flat, [0, 1, 4, 5, 8, 9]): + if y != z: + isequal = False + assert isequal == True def test_flatiter_transpose(self): from _numpypy import arange a = arange(10) - skip('out-of-order transformations do not work yet') assert a.reshape(2,5).T.flat[3] == 6 def test_slice_copy(self): From noreply at buildbot.pypy.org Fri Jan 20 08:42:12 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 20 Jan 2012 08:42:12 +0100 (CET) Subject: [pypy-commit] pypy matrixmath-dot: refactor and rework, still need more tests Message-ID: <20120120074212.93C4982CF8@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: matrixmath-dot Changeset: r51506:2bcfa95fe92a Date: 2012-01-20 09:41 +0200 http://bitbucket.org/pypy/pypy/changeset/2bcfa95fe92a/ Log: refactor and rework, still need more tests diff --git a/pypy/module/micronumpy/dot.py b/pypy/module/micronumpy/dot.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/dot.py @@ -0,0 +1,68 @@ +from pypy.module.micronumpy import interp_ufuncs +from pypy.module.micronumpy.strides import calculate_dot_strides +from pypy.interpreter.error import OperationError, operationerrfmt +from pypy.module.micronumpy.interp_iter import ViewIterator + + +def match_dot_shapes(space, left, right): + my_critical_dim_size = left.shape[-1] + right_critical_dim_size = right.shape[0] + right_critical_dim = 0 + right_critical_dim_stride = right.strides[0] + out_shape = [] + if len(right.shape) > 1: + right_critical_dim = len(right.shape) - 2 + right_critical_dim_size = right.shape[right_critical_dim] + right_critical_dim_stride = right.strides[right_critical_dim] + assert right_critical_dim >= 0 + out_shape += left.shape[:-1] + \ + right.shape[0:right_critical_dim] + \ + right.shape[right_critical_dim + 1:] + elif len(right.shape) > 0: + #dot does not reduce for scalars + out_shape += left.shape[:-1] + if my_critical_dim_size != right_critical_dim_size: + raise 
OperationError(space.w_ValueError, space.wrap( + "objects are not aligned")) + return out_shape, right_critical_dim + + +def multidim_dot(space, left, right, result, dtype, right_critical_dim): + ''' assumes left, right are concrete arrays + given left.shape == [3, 5, 7], + right.shape == [2, 7, 4] + result.shape == [3, 5, 2, 4] + broadcast shape should be [3, 5, 2, 7, 4] + result should skip dims 3 which is results.ndims - 1 + left should skip 2, 4 which is a.ndims-1 + range(right.ndims) + except where it==(right.ndims-2) + right should skip 0, 1 + ''' + mul = interp_ufuncs.get(space).multiply.func + add = interp_ufuncs.get(space).add.func + broadcast_shape = left.shape[:-1] + right.shape + left_skip = [len(left.shape) - 1 + i for i in range(len(right.shape)) + if i != right_critical_dim] + right_skip = range(len(left.shape) - 1) + result_skip = [len(result.shape) - 1] + shapelen = len(broadcast_shape) + _r = calculate_dot_strides(result.strides, result.backstrides, + broadcast_shape, result_skip) + outi = ViewIterator(0, _r[0], _r[1], broadcast_shape) + _r = calculate_dot_strides(left.strides, left.backstrides, + broadcast_shape, left_skip) + lefti = ViewIterator(0, _r[0], _r[1], broadcast_shape) + _r = calculate_dot_strides(right.strides, right.backstrides, + broadcast_shape, right_skip) + righti = ViewIterator(0, _r[0], _r[1], broadcast_shape) + while not outi.done(): + v = mul(dtype, left.getitem(lefti.offset), + right.getitem(righti.offset)) + value = add(dtype, v, result.getitem(outi.offset)) + result.setitem(outi.offset, value) + outi = outi.next(shapelen) + righti = righti.next(shapelen) + lefti = lefti.next(shapelen) + assert lefti.done() + assert righti.done() + return result diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -16,10 +16,6 @@ def __init__(self, res_shape): self.res_shape = res_shape -class 
DotTransform(BaseTransform): - def __init__(self, res_shape, skip_dims): - self.res_shape = res_shape - self.skip_dims = skip_dims class BaseIterator(object): def next(self, shapelen): @@ -90,10 +86,6 @@ self.strides, self.backstrides, t.chunks) return ViewIterator(r[1], r[2], r[3], r[0]) - elif isinstance(t, DotTransform): - r = calculate_dot_strides(self.strides, self.backstrides, - t.res_shape, t.skip_dims) - return ViewIterator(self.offset, r[0], r[1], t.res_shape) @jit.unroll_safe def next(self, shapelen): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -3,14 +3,15 @@ from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature -from pypy.module.micronumpy.strides import calculate_slice_strides,\ - calculate_dot_strides +from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator, ViewIterator + SkipLastAxisIterator +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes, dot_docstring + numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -212,28 +213,6 @@ n_old_elems_to_use *= old_shape[oldI] return new_strides -def match_dot_shapes(space, self, other): - my_critical_dim_size = self.shape[-1] - other_critical_dim_size = other.shape[0] - other_critical_dim = 0 - other_critical_dim_stride = other.strides[0] - out_shape = [] - if len(other.shape) > 1: - other_critical_dim = len(other.shape) - 2 - other_critical_dim_size = other.shape[other_critical_dim] - 
other_critical_dim_stride = other.strides[other_critical_dim] - assert other_critical_dim >= 0 - out_shape += self.shape[:-1] + \ - other.shape[0:other_critical_dim] + \ - other.shape[other_critical_dim + 1:] - elif len(other.shape) > 0: - #dot does not reduce for scalars - out_shape += self.shape[:-1] - if my_critical_dim_size != other_critical_dim_size: - raise OperationError(space.w_ValueError, space.wrap( - "objects are not aligned")) - return out_shape, other_critical_dim - class BaseArray(Wrappable): _attrs_ = ["invalidates", "shape", 'size'] @@ -399,14 +378,6 @@ descr_argmin = _reduce_argmax_argmin_impl("min") def descr_dot(self, space, w_other): - '''Dot product of two arrays. - - For 2-D arrays it is equivalent to matrix multiplication, and for 1-D - arrays to inner product of vectors (without complex conjugation). For - N dimensions it is a sum product over the last axis of `a` and - the second-to-last of `b`:: - - dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])''' other = convert_to_array(space, w_other) if isinstance(other, Scalar): return self.descr_mul(space, other) @@ -425,43 +396,10 @@ for o in out_shape: out_size *= o result = W_NDimArray(out_size, out_shape, dtype) - # given a.shape == [3, 5, 7], - # b.shape == [2, 7, 4] - # result.shape == [3, 5, 2, 4] - # all iterators shapes should be [3, 5, 2, 7, 4] - # result should skip dims 3 which is results.ndims - 1 - # a should skip 2, 4 which is a.ndims-1 + range(b.ndims) - # except where it==(b.ndims-2) - # b should skip 0, 1 - mul = interp_ufuncs.get(space).multiply.func - add = interp_ufuncs.get(space).add.func - broadcast_shape = self.shape[:-1] + other.shape - #Aww, cmon, this is the product of a warped mind. 
- left_skip = [len(self.shape) - 1 + i for i in range(len(other.shape)) if i != other_critical_dim] - right_skip = range(len(self.shape) - 1) - arr = DotArray(mul, 'DotName', out_shape, dtype, self, other, - left_skip, right_skip) - arr.broadcast_shape = broadcast_shape - arr.result_skip = [len(out_shape) - 1] - #Make this lazy someday... - sig = signature.find_sig(signature.DotSignature(mul, 'dot', dtype, - self.create_sig(), other.create_sig()), arr) - assert isinstance(sig, signature.DotSignature) - self.do_dot_loop(sig, result, arr, add) - return result - - def do_dot_loop(self, sig, result, arr, add): - frame = sig.create_frame(arr) - shapelen = len(arr.broadcast_shape) - _r = calculate_dot_strides(result.strides, result.backstrides, - arr.broadcast_shape, arr.result_skip) - ri = ViewIterator(0, _r[0], _r[1], arr.broadcast_shape) - while not frame.done(): - v = sig.eval(frame, arr).convert_to(sig.calc_dtype) - value = add(sig.calc_dtype, v, result.getitem(ri.offset)) - result.setitem(ri.offset, value) - frame.next(shapelen) - ri = ri.next(shapelen) + # This is the place to add fpypy and blas + return multidim_dot(space, self.get_concrete(), + other.get_concrete(), result, dtype, + other_critical_dim) def get_concrete(self): raise NotImplementedError @@ -933,23 +871,6 @@ left, right) self.dim = dim -class DotArray(Call2): - """ NOTE: this is only used as a container, you should never - encounter such things in the wild. 
Remove this comment - when we'll make Dot lazy - """ - _immutable_fields_ = ['left', 'right'] - - def __init__(self, ufunc, name, shape, dtype, left, right, left_skip, right_skip): - Call2.__init__(self, ufunc, name, shape, dtype, dtype, - left, right) - self.left_skip = left_skip - self.right_skip = right_skip - def create_sig(self): - #if self.forced_result is not None: - # return self.forced_result.create_sig() - assert NotImplementedError - class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not """ @@ -1304,6 +1225,8 @@ return space.wrap(arr) def dot(space, w_obj, w_obj2): + '''see numpypy.dot. Does not exist as an ndarray method in numpy. + ''' w_arr = convert_to_array(space, w_obj) if isinstance(w_arr, Scalar): return convert_to_array(space, w_obj2).descr_dot(space, w_arr) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -2,7 +2,7 @@ from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ ConstantIterator, AxisIterator, ViewTransform,\ - BroadcastTransform, DotTransform + BroadcastTransform from pypy.rlib.jit import hint, unroll_safe, promote """ Signature specifies both the numpy expression that has been constructed @@ -449,21 +449,3 @@ def debug_repr(self): return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) - -class DotSignature(Call2): - def _invent_numbering(self, cache, allnumbers): - self.left._invent_numbering(new_cache(), allnumbers) - self.right._invent_numbering(new_cache(), allnumbers) - - def _create_iter(self, iterlist, arraylist, arr, transforms): - from pypy.module.micronumpy.interp_numarray import DotArray - - assert isinstance(arr, DotArray) - rtransforms = transforms + [DotTransform(arr.broadcast_shape, arr.right_skip)] - ltransforms = transforms + [DotTransform(arr.broadcast_shape, arr.left_skip)] - 
self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) - self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) - - def debug_repr(self): - return 'DotSig(%s, %s %s)' % (self.name, self.right.debug_repr(), - self.left.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -869,7 +869,7 @@ def test_dot(self): from _numpypy import array, dot, arange a = array(range(5)) - assert a.dot(a) == 30.0 + assert dot(a, a) == 30.0 a = array(range(5)) assert a.dot(range(5)) == 30 @@ -887,9 +887,11 @@ #Superfluous shape test makes the intention of the test clearer assert a.shape == (2, 3, 4) assert b.shape == (4, 3) - c = a.dot(b) + c = dot(a, b) assert (c == [[[14, 38, 62], [38, 126, 214], [62, 214, 366]], [[86, 302, 518], [110, 390, 670], [134, 478, 822]]]).all() + c = dot(a, b[:, :, 2]) + assert (c == [[38, 126, 214], [302, 390, 478]]).all() def test_dot_constant(self): from _numpypy import array From noreply at buildbot.pypy.org Fri Jan 20 09:23:07 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 20 Jan 2012 09:23:07 +0100 (CET) Subject: [pypy-commit] pypy matrixmath-dot: whoops Message-ID: <20120120082307.E327782CF8@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: matrixmath-dot Changeset: r51507:600fcfb76aab Date: 2012-01-20 10:22 +0200 http://bitbucket.org/pypy/pypy/changeset/600fcfb76aab/ Log: whoops diff --git a/pypy/module/micronumpy/dot.py b/pypy/module/micronumpy/dot.py --- a/pypy/module/micronumpy/dot.py +++ b/pypy/module/micronumpy/dot.py @@ -2,8 +2,17 @@ from pypy.module.micronumpy.strides import calculate_dot_strides from pypy.interpreter.error import OperationError, operationerrfmt from pypy.module.micronumpy.interp_iter import ViewIterator +from pypy.module.micronumpy.signature import new_printable_location +from pypy.rlib import jit +dot_driver = 
jit.JitDriver( + greens=['shapelen', 'left', 'right'], + reds=['lefti', 'righti', 'outi', 'result'], + get_printable_location=new_printable_location('dot'), + name='dot', +) + def match_dot_shapes(space, left, right): my_critical_dim_size = left.shape[-1] right_critical_dim_size = right.shape[0] @@ -27,6 +36,7 @@ return out_shape, right_critical_dim + at jit.unroll_safe def multidim_dot(space, left, right, result, dtype, right_critical_dim): ''' assumes left, right are concrete arrays given left.shape == [3, 5, 7], @@ -56,6 +66,14 @@ broadcast_shape, right_skip) righti = ViewIterator(0, _r[0], _r[1], broadcast_shape) while not outi.done(): + dot_driver.jit_merge_point(left=left, + right=right, + shape_len=shape_len, + lefti=lefti, + righti=righti, + outi=outi, + result=result, + ) v = mul(dtype, left.getitem(lefti.offset), right.getitem(righti.offset)) value = add(dtype, v, result.getitem(outi.offset)) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -10,7 +10,7 @@ from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ SkipLastAxisIterator -from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes, dot_docstring +from pypy.module.micronumpy.dot import multidim_dot, match_dot_shapes numpy_driver = jit.JitDriver( diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -368,6 +368,20 @@ 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) + def ddefine_dot(): + return """ + a = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]] + b=[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]] + c = a.dot(b) + c -> 1 -> 2 + """ + + def test_dot(self): + py.test.skip("not yet") + result = self.run("dot") + assert result 
== 184
+        self.check_simple_loop({})
+

 class TestNumpyOld(LLJitMixin):
     def setup_class(cls):

From noreply at buildbot.pypy.org  Fri Jan 20 10:03:05 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Fri, 20 Jan 2012 10:03:05 +0100 (CET)
Subject: [pypy-commit] pypy stm: void fields.
Message-ID: <20120120090305.127E082CF8@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm
Changeset: r51508:5c885d90cfd1
Date: 2012-01-19 22:20 +0100
http://bitbucket.org/pypy/pypy/changeset/5c885d90cfd1/

Log: void fields.

diff --git a/pypy/translator/stm/test/test_llstm.py b/pypy/translator/stm/test/test_llstm.py
--- a/pypy/translator/stm/test/test_llstm.py
+++ b/pypy/translator/stm/test/test_llstm.py
@@ -9,7 +9,7 @@
                   ('c1', lltype.Char), ('c2', lltype.Char),
                   ('c3', lltype.Char), ('l', lltype.SignedLongLong),
                   ('f', lltype.Float), ('sa', lltype.SingleFloat),
-                  ('sb', lltype.SingleFloat))
+                  ('sb', lltype.SingleFloat), ('v', lltype.Void))
 rll1 = r_longlong(-10000000000003)
 rll2 = r_longlong(-300400500600700)
 rf1 = -12.38976129
@@ -29,6 +29,7 @@
     assert a.f == rf1
     assert float(a.sa) == float(rs1a)
     assert float(a.sb) == float(rs1b)
+    assert a.v == None
     assert stm_getfield(a, 'x') == -611
     assert stm_getfield(a, 'c2') == '\\'
     assert stm_getfield(a, 'c1') == '/'
@@ -58,6 +59,7 @@
     a.f = rf1
     a.sa = rs1a
     a.sb = rs1b
+    a.v = None
     stm_descriptor_init()
     stm_perform_transaction(llhelper(CALLBACK, callback1),
                             rffi.cast(rffi.VOIDP, a))
@@ -70,6 +72,7 @@
     assert a.f == rf1
     assert float(a.sa) == float(rs1a)
     assert float(a.sb) == float(rs1b)
+    assert a.v == None
     assert a.y == 10
     lltype.free(a, flavor='raw')
@@ -83,6 +86,7 @@
     assert a.f == rf1
     assert float(a.sa) == float(rs1a)
     assert float(a.sb) == float(rs1b)
+    assert a.v == None
     assert stm_getfield(a, 'x') == -611
     assert stm_getfield(a, 'c1') == '&'
     assert stm_getfield(a, 'c2') == '*'
@@ -115,6 +119,7 @@
     assert a.f == rf1
     assert float(a.sa) == float(rs1a)
     assert float(a.sb) == float(rs1b)
+    assert a.v == None
     if a.y < 10:
         a.y += 1 # non-transactionally
         stm_abort_and_retry()
@@ -131,6 +136,7 @@
     a.f = rf1
     a.sa = rs1a
     a.sb = rs1b
+    a.v = None
     stm_descriptor_init()
     stm_perform_transaction(llhelper(CALLBACK, callback2),
                             rffi.cast(rffi.VOIDP, a))
@@ -143,5 +149,6 @@
     assert a.f == rf2
     assert float(a.sa) == float(rs2a)
     assert float(a.sb) == float(rs2b)
+    assert a.v == None
     assert a.y == 10
     lltype.free(a, flavor='raw')
diff --git a/pypy/translator/stm/test/test_transform.py b/pypy/translator/stm/test/test_transform.py
--- a/pypy/translator/stm/test/test_transform.py
+++ b/pypy/translator/stm/test/test_transform.py
@@ -53,6 +53,16 @@
     res = eval_stm_graph(interp, graph, [p], stm_mode="regular_transaction")
     assert res == 42

+def test_void_field():
+    S = lltype.GcStruct('S', ('v', lltype.Void))
+    p = lltype.malloc(S, immortal=True)
+    def func(p):
+        p.v = None
+        return p.v
+    interp, graph = get_interpreter(func, [p])
+    transform_graph(graph)
+    assert summary(graph) == {'getfield': 1, 'setfield': 1}
+
 def test_getarraysize():
     A = lltype.GcArray(lltype.Signed)
     p = lltype.malloc(A, 100, immortal=True)
diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py
--- a/pypy/translator/stm/transform.py
+++ b/pypy/translator/stm/transform.py
@@ -109,7 +109,9 @@
     def stt_getfield(self, newoperations, op):
         STRUCT = op.args[0].concretetype.TO
-        if STRUCT._immutable_field(op.args[1].value):
+        if op.result.concretetype is lltype.Void:
+            op1 = op
+        elif STRUCT._immutable_field(op.args[1].value):
             op1 = op
         elif STRUCT._gckind == 'raw':
             turn_inevitable(newoperations, "getfield-raw")
@@ -120,7 +122,9 @@
     def stt_setfield(self, newoperations, op):
         STRUCT = op.args[0].concretetype.TO
-        if STRUCT._immutable_field(op.args[1].value):
+        if op.args[2].concretetype is lltype.Void:
+            op1 = op
+        elif STRUCT._immutable_field(op.args[1].value):
             op1 = op
         elif STRUCT._gckind == 'raw':
             turn_inevitable(newoperations, "setfield-raw")
@@ -131,7 +135,9 @@
     def stt_getarrayitem(self, newoperations, op):
         ARRAY = op.args[0].concretetype.TO
-        if ARRAY._immutable_field():
+        if op.result.concretetype is lltype.Void:
+            op1 = op
+        elif ARRAY._immutable_field():
             op1 = op
         elif ARRAY._gckind == 'raw':
             turn_inevitable(newoperations, "getarrayitem-raw")
@@ -142,7 +148,9 @@
     def stt_setarrayitem(self, newoperations, op):
         ARRAY = op.args[0].concretetype.TO
-        if ARRAY._immutable_field():
+        if op.args[2].concretetype is lltype.Void:
+            op1 = op
+        elif ARRAY._immutable_field():
             op1 = op
         elif ARRAY._gckind == 'raw':
             turn_inevitable(newoperations, "setarrayitem-raw")
@@ -153,7 +161,9 @@
     def stt_getinteriorfield(self, newoperations, op):
         OUTER = op.args[0].concretetype.TO
-        if OUTER._hints.get('immutable'):
+        if op.result.concretetype is lltype.Void:
+            op1 = op
+        elif OUTER._hints.get('immutable'):
             op1 = op
         elif OUTER._gckind == 'raw':
             turn_inevitable(newoperations, "getinteriorfield-raw")
@@ -164,7 +174,9 @@
     def stt_setinteriorfield(self, newoperations, op):
         OUTER = op.args[0].concretetype.TO
-        if OUTER._hints.get('immutable'):
+        if op.args[-1].concretetype is lltype.Void:
+            op1 = op
+        elif OUTER._hints.get('immutable'):
             op1 = op
         elif OUTER._gckind == 'raw':
             turn_inevitable(newoperations, "setinteriorfield-raw")

From noreply at buildbot.pypy.org  Fri Jan 20 10:03:06 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Fri, 20 Jan 2012 10:03:06 +0100 (CET)
Subject: [pypy-commit] pypy stm: Re-enable cast_ptr_to_adr for now.
Message-ID: <20120120090306.4C28582CF8@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm
Changeset: r51509:f1643c0913ad
Date: 2012-01-19 23:02 +0100
http://bitbucket.org/pypy/pypy/changeset/f1643c0913ad/

Log: Re-enable cast_ptr_to_adr for now.
diff --git a/pypy/translator/c/funcgen.py b/pypy/translator/c/funcgen.py
--- a/pypy/translator/c/funcgen.py
+++ b/pypy/translator/c/funcgen.py
@@ -686,9 +686,9 @@
     def OP_CAST_PTR_TO_ADR(self, op):
         if self.lltypemap(op.args[0]).TO._gckind == 'gc' and self._is_stm():
-            raise AssertionError("cast_ptr_to_adr(gcref) is a bad idea "
-                                 "with STM. Consider checking config.stm "
-                                 "in %r" % (self.graph,))
+            from pypy.translator.c.support import log
+            log.WARNING("cast_ptr_to_adr(gcref) might be a bad idea with STM:")
+            log.WARNING(" %r" % (self.graph,))
         return self.OP_CAST_POINTER(op)

     def OP_CAST_INT_TO_PTR(self, op):
diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py
--- a/pypy/translator/stm/test/test_funcgen.py
+++ b/pypy/translator/stm/test/test_funcgen.py
@@ -1,5 +1,6 @@
 from pypy.rpython.lltypesystem import lltype, rffi
 from pypy.rlib.rarithmetic import r_longlong, r_singlefloat
+from pypy.rlib.objectmodel import compute_identity_hash
 from pypy.translator.stm.test.support import CompiledSTMTests
 from pypy.translator.stm._rffi_stm import (CALLBACK, stm_perform_transaction,
                                            stm_descriptor_init, stm_descriptor_done)
@@ -50,6 +51,16 @@
         _, data = cbuilder.cmdexec('', err=True)
         assert data.endswith('ok!\n')

+    def test_compile_identity_hash(self):
+        class A:
+            pass
+        def entry_point(argv):
+            a = A()
+            debug_print(compute_identity_hash(a))
+            return 0
+        t, cbuilder = self.compile(entry_point)
+        _, data = cbuilder.cmdexec('', err=True)
+
 # ____________________________________________________________

 A = lltype.GcStruct('A', ('x', lltype.Signed), ('y', lltype.Signed),

From noreply at buildbot.pypy.org  Fri Jan 20 10:03:07 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Fri, 20 Jan 2012 10:03:07 +0100 (CET)
Subject: [pypy-commit] pypy stm: Fix for --gc=none.
Message-ID: <20120120090307.7CCDB82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm
Changeset: r51510:a0831447f4d7
Date: 2012-01-19 23:12 +0100
http://bitbucket.org/pypy/pypy/changeset/a0831447f4d7/

Log: Fix for --gc=none.

diff --git a/pypy/translator/c/src/mem.h b/pypy/translator/c/src/mem.h
--- a/pypy/translator/c/src/mem.h
+++ b/pypy/translator/c/src/mem.h
@@ -230,6 +230,9 @@
 #define OP_BOEHM_DISAPPEARING_LINK(link, obj, r) /* nothing */
 #define OP_GC__DISABLE_FINALIZERS(r) /* nothing */
 #define OP_GC__ENABLE_FINALIZERS(r) /* nothing */
+#define GC_REGISTER_FINALIZER(a, b, c, d, e) /* nothing */
+#define GC_gcollect() /* nothing */
+#define GC_set_max_heap_size(a) /* nothing */
 #endif

 /************************************************************/

From noreply at buildbot.pypy.org  Fri Jan 20 10:03:08 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Fri, 20 Jan 2012 10:03:08 +0100 (CET)
Subject: [pypy-commit] pypy stm: Shut off spurious warnings.
Message-ID: <20120120090308.AEAB982CF8@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm
Changeset: r51511:08154d958352
Date: 2012-01-19 23:17 +0100
http://bitbucket.org/pypy/pypy/changeset/08154d958352/

Log: Shut off spurious warnings.

diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c
--- a/pypy/translator/stm/src_stm/et.c
+++ b/pypy/translator/stm/src_stm/et.c
@@ -861,19 +861,19 @@
 }

 // XXX little-endian only!
-unsigned long stm_read_partial_word(int fieldsize, char *addr)
+unsigned long stm_read_partial_word(int fieldsize, void *addr)
 {
   int misalignment = ((long)addr) & (sizeof(void*)-1);
-  long *p = (long*)(addr - misalignment);
+  long *p = (long*)((char *)addr - misalignment);
   unsigned long word = stm_read_word(p);
   return word >> (misalignment * 8);
 }

 // XXX little-endian only!
-void stm_write_partial_word(int fieldsize, char *addr, unsigned long nval)
+void stm_write_partial_word(int fieldsize, void *addr, unsigned long nval)
 {
   int misalignment = ((long)addr) & (sizeof(void*)-1);
-  long *p = (long*)(addr - misalignment);
+  long *p = (long*)((char *)addr - misalignment);
   long val = nval << (misalignment * 8);
   long word = stm_read_word(p);
   long mask = ((1L << (fieldsize * 8)) - 1) << (misalignment * 8);
@@ -951,7 +951,7 @@
 #if PYPY_LONG_BIT == 32
   stm_write_word(addr, ii); /* 32 bits */
 #else
-  stm_write_partial_word(4, (char *)addr, ii); /* 64 bits */
+  stm_write_partial_word(4, addr, ii); /* 64 bits */
 #endif
 }
diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h
--- a/pypy/translator/stm/src_stm/et.h
+++ b/pypy/translator/stm/src_stm/et.h
@@ -58,8 +58,8 @@
      (long*)(((char*)(base)) + ((offset) & ~(sizeof(void*)-1))))      \
       >> (8 * ((offset) & (sizeof(void*)-1))))

-unsigned long stm_read_partial_word(int fieldsize, char *addr);
-void stm_write_partial_word(int fieldsize, char *addr, unsigned long nval);
+unsigned long stm_read_partial_word(int fieldsize, void *addr);
+void stm_write_partial_word(int fieldsize, void *addr, unsigned long nval);

 double stm_read_double(long *addr);
 void stm_write_double(long *addr, double val);

From noreply at buildbot.pypy.org  Fri Jan 20 10:03:09 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Fri, 20 Jan 2012 10:03:09 +0100 (CET)
Subject: [pypy-commit] pypy stm: Bah. Temporary workaround: can't use bool_t
 because casting to
Message-ID: <20120120090309.E1FFC82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm
Changeset: r51512:3bb490226684
Date: 2012-01-20 00:16 +0100
http://bitbucket.org/pypy/pypy/changeset/3bb490226684/

Log: Bah. Temporary workaround: can't use bool_t because casting to a
 bool_t has unexpected results for stm_*_partial_word()

diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h
--- a/pypy/translator/c/src/g_prerequisite.h
+++ b/pypy/translator/c/src/g_prerequisite.h
@@ -19,7 +19,7 @@
 #include

-#ifdef __GNUC__ /* other platforms too, probably */
+#if 0 //def __GNUC__ /* other platforms too, probably */ XXX FIX ME
 typedef _Bool bool_t;
 #else
 typedef unsigned char bool_t;
diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c
--- a/pypy/translator/stm/src_stm/et.c
+++ b/pypy/translator/stm/src_stm/et.c
@@ -513,6 +513,9 @@
 long stm_read_word(long* addr)
 {
   struct tx_descriptor *d = thread_descriptor;
+#ifdef RPY_STM_ASSERT
+  assert((((long)addr) & (sizeof(void*)-1)) == 0);
+#endif
   if (!d->transaction_active)
     return *addr;
@@ -576,6 +579,7 @@
 void stm_write_word(long* addr, long val)
 {
   struct tx_descriptor *d = thread_descriptor;
+  assert((((long)addr) & (sizeof(void*)-1)) == 0);
   if (!d->transaction_active)
     {
       *addr = val;
       return;
@@ -864,7 +868,7 @@
 unsigned long stm_read_partial_word(int fieldsize, void *addr)
 {
   int misalignment = ((long)addr) & (sizeof(void*)-1);
-  long *p = (long*)((char *)addr - misalignment);
+  long *p = (long*)(((char *)addr) - misalignment);
   unsigned long word = stm_read_word(p);
   return word >> (misalignment * 8);
 }
@@ -873,7 +877,7 @@
 void stm_write_partial_word(int fieldsize, void *addr, unsigned long nval)
 {
   int misalignment = ((long)addr) & (sizeof(void*)-1);
-  long *p = (long*)((char *)addr - misalignment);
+  long *p = (long*)(((char *)addr) - misalignment);
   long val = nval << (misalignment * 8);
   long word = stm_read_word(p);
   long mask = ((1L << (fieldsize * 8)) - 1) << (misalignment * 8);
diff --git a/pypy/translator/stm/test/test_llstm.py b/pypy/translator/stm/test/test_llstm.py
--- a/pypy/translator/stm/test/test_llstm.py
+++ b/pypy/translator/stm/test/test_llstm.py
@@ -97,7 +97,13 @@
         assert float(stm_getfield(a, 'sb')) == float(rs1b)
         stm_setfield(a, 'x', 42 * a.y)
         stm_setfield(a, 'c1', '(')
+        assert stm_getfield(a, 'c1') == '('
+        assert stm_getfield(a, 'c2') == '*'
+        assert stm_getfield(a, 'c3') == '#'
         stm_setfield(a, 'c2', '?')
+        assert stm_getfield(a, 'c1') == '('
+        assert stm_getfield(a, 'c2') == '?'
+        assert stm_getfield(a, 'c3') == '#'
         stm_setfield(a, 'c3', ')')
         stm_setfield(a, 'l', rll2)
         stm_setfield(a, 'f', rf2)

From noreply at buildbot.pypy.org  Fri Jan 20 10:12:28 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Fri, 20 Jan 2012 10:12:28 +0100 (CET)
Subject: [pypy-commit] pypy stm: Skip this import if it fails because of
 _weakref
Message-ID: <20120120091228.A8D4F82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm
Changeset: r51513:ffedd17ff570
Date: 2012-01-20 10:12 +0100
http://bitbucket.org/pypy/pypy/changeset/ffedd17ff570/

Log: Skip this import if it fails because of _weakref

diff --git a/lib-python/modified-2.7/UserDict.py b/lib-python/modified-2.7/UserDict.py
--- a/lib-python/modified-2.7/UserDict.py
+++ b/lib-python/modified-2.7/UserDict.py
@@ -85,8 +85,12 @@
     def __iter__(self):
         return iter(self.data)

-import _abcoll
-_abcoll.MutableMapping.register(IterableUserDict)
+try:
+    import _abcoll
+except ImportError:
+    pass # e.g. no '_weakref' module on this pypy
+else:
+    _abcoll.MutableMapping.register(IterableUserDict)

 class DictMixin:

From noreply at buildbot.pypy.org  Fri Jan 20 10:15:32 2012
From: noreply at buildbot.pypy.org (bivab)
Date: Fri, 20 Jan 2012 10:15:32 +0100 (CET)
Subject: [pypy-commit] extradoc extradoc: Planning for today
Message-ID: <20120120091532.6359482CF8@wyvern.cs.uni-duesseldorf.de>

Author: David Schneider
Branch: extradoc
Changeset: r4047:586ccee3a515
Date: 2012-01-20 10:14 +0100
http://bitbucket.org/pypy/extradoc/changeset/586ccee3a515/

Log: Planning for today

diff --git a/sprintinfo/leysin-winter-2012/planning.txt b/sprintinfo/leysin-winter-2012/planning.txt
--- a/sprintinfo/leysin-winter-2012/planning.txt
+++ b/sprintinfo/leysin-winter-2012/planning.txt
@@ -14,17 +14,19 @@
 * review the JVM backend pull request (DONE)

-* py3k (romain, anto)
+* py3k (romain, anto) (some progress)

 * ffistruct

 * Cython backend

-* Debug the ARM backend (bivab, armin around)
+* Debug the ARM backend (bivab, armin around) (some progress)

 * STM
   - refactored the RPython API (DONE)
-  - app-level transaction module (armin, bivab around)
+  - app-level transaction module (DONE)
   - start work on the GC

 * concurrent-marksweep GC
+
+* lightweight tracing experiment (everyone)

From noreply at buildbot.pypy.org  Fri Jan 20 10:54:44 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Fri, 20 Jan 2012 10:54:44 +0100 (CET)
Subject: [pypy-commit] pypy stm: add 5 operations that cause a pypy-stm to go
 into inevitable mode
Message-ID: <20120120095444.19D1D82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm
Changeset: r51514:ceeb6b131e29
Date: 2012-01-20 10:54 +0100
http://bitbucket.org/pypy/pypy/changeset/ceeb6b131e29/

Log: add 5 operations that cause a pypy-stm to go into inevitable mode

diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py
--- a/pypy/translator/stm/transform.py
+++ b/pypy/translator/stm/transform.py
@@ -8,7 +8,8 @@
 ALWAYS_ALLOW_OPERATIONS =
set([ 'direct_call', 'force_cast', 'keepalive', 'cast_ptr_to_adr', - 'debug_print', 'debug_assert', + 'debug_print', 'debug_assert', 'cast_opaque_ptr', 'hint', + 'indirect_call', 'stack_current', ]) ALWAYS_ALLOW_OPERATIONS |= set(lloperation.enum_tryfold_ops()) @@ -210,6 +211,8 @@ flags = op.args[1].value return flags['flavor'] == 'gc' + stt_malloc_nonmovable = stt_malloc + def stt_gc_stack_bottom(self, newoperations, op): ## self.seen_gc_stack_bottom = True newoperations.append(op) From noreply at buildbot.pypy.org Fri Jan 20 11:21:12 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 20 Jan 2012 11:21:12 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: merge default Message-ID: <20120120102112.5902F82CF8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51515:54c7d0197d66 Date: 2012-01-20 11:15 +0200 http://bitbucket.org/pypy/pypy/changeset/54c7d0197d66/ Log: merge default diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,2 @@ from _numpypy import * -from .fromnumeric import * +from .core import * diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/__init__.py @@ -0,0 +1,1 @@ +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py rename from lib_pypy/numpypy/fromnumeric.py rename to lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -85,7 +85,7 @@ array([4, 3, 6]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') # not deprecated --- 
copy if necessary, view otherwise @@ -273,7 +273,7 @@ [-1, -2, -3, -4, -5]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def repeat(a, repeats, axis=None): @@ -315,7 +315,7 @@ [3, 4]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def put(a, ind, v, mode='raise'): @@ -366,7 +366,7 @@ array([ 0, 1, 2, 3, -5]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def swapaxes(a, axis1, axis2): @@ -410,7 +410,7 @@ [3, 7]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def transpose(a, axes=None): @@ -451,8 +451,11 @@ (2, 1, 3) """ - raise NotImplemented('Waiting on interp level method') - + if axes is not None: + raise NotImplementedError('No "axes" arg yet.') + if not hasattr(a, 'T'): + a = numpypy.array(a) + return a.T def sort(a, axis=-1, kind='quicksort', order=None): """ @@ -553,7 +556,7 @@ dtype=[('name', '|S10'), ('height', ' 1: raise OperationError(space.w_ValueError, space.wrap( @@ -790,6 +794,9 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + def create_sig(self): return signature.ScalarSignature(self.dtype) @@ -895,7 +902,7 @@ Intermediate class for performing binary operations. 
""" _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -935,7 +942,7 @@ def __init__(self, shape, dtype, left, right): Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) - + def create_sig(self): lsig = self.left.create_sig() rsig = self.right.create_sig() @@ -954,7 +961,7 @@ when we'll make AxisReduce lazy """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) @@ -1026,14 +1033,14 @@ if size < 1: builder.append('[]') return - elif size == 1: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True ndims = len(self.shape) + if ndims == 0: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return i = 0 builder.append('[') if ndims > 1: @@ -1168,6 +1175,9 @@ array.setslice(space, self) return array + def fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): def create_sig(self): @@ -1391,6 +1401,8 @@ var = interp2app(BaseArray.descr_var), std = interp2app(BaseArray.descr_std), + fill = interp2app(BaseArray.descr_fill), + copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -451,6 +451,7 @@ ("fabs", "fabs", 1, {"promote_to_float": True}), ("floor", "floor", 1, {"promote_to_float": True}), + ("ceil", "ceil", 1, {"promote_to_float": True}), ("exp", "exp", 1, {"promote_to_float": True}), ('sqrt', 'sqrt', 1, 
{'promote_to_float': True}), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,14 +166,11 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) - def test_new(self): - import _numpypy as np - assert np.int_(4) == 4 - assert np.float_(3.4) == 3.4 + def test_aliases(self): + from _numpypy import dtype - def test_pow(self): - from _numpypy import int_ - assert int_(4) ** 2 == 16 + assert dtype("float") is dtype(float) + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): @@ -189,6 +186,15 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): import _numpypy as numpy @@ -318,7 +324,7 @@ else: raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, numpy.int64, '9223372036854775808') diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -2,16 +2,11 @@ class AppTestNumPyModule(BaseNumpyAppTest): - def test_mean(self): - from _numpypy import array, mean - assert mean(array(range(5))) == 2.0 - assert mean(range(5)) == 2.0 - def test_average(self): from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 - + def test_sum(self): from _numpypy import array, sum assert sum(range(10)) == 45 @@ -21,7 +16,7 @@ from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 - + def test_max(self): from 
_numpypy import array, max assert max(range(10)) == 9 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -728,15 +728,16 @@ assert d[1] == 12 def test_mean(self): - from _numpypy import array, mean + from _numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 a = array(range(105)).reshape(3, 5, 7) - b = mean(a, axis=0) - b[0,0]==35. + b = a.mean(axis=0) + b[0, 0]==35. + assert a.mean(axis=0)[0, 0] == 35 assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() - assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() + assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() def test_sum(self): from _numpypy import array @@ -757,6 +758,7 @@ assert array([]).sum() == 0.0 raises(ValueError, 'array([]).max()') assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() @@ -771,6 +773,8 @@ assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() def test_identity(self): from _numpypy import identity, array @@ -1309,6 +1313,26 @@ assert arange(3)[array([-3])] == 0 raises(IndexError,'arange(3)[array([-15])]') + def test_fill(self): + from _numpypy import array + a = array([1, 2, 3]) + a.fill(10) + assert (a == [10, 10, 10]).all() + a.fill(False) + assert (a == [0, 0, 0]).all() + b = a[:1] + b.fill(4) + assert (b == [4]).all() + assert (a == [4, 0, 0]).all() + + c = b + b + c.fill(27) + assert (c == [27]).all() + + d = array(10) + d.fill(100) + assert d == 100 + 
def test_array_indexing_bool(self): from _numpypy import arange a = arange(10) @@ -1464,9 +1488,11 @@ assert repr(a) == "array(0.0)" a = array(0.2) assert repr(a) == "array(0.2)" + a = array([2]) + assert repr(a) == "array([2])" def test_repr_multi(self): - from _numpypy import arange, zeros + from _numpypy import arange, zeros, array a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1489,6 +1515,9 @@ [498, 999], [499, 1000], [500, 1001]])''' + a = arange(2).reshape((2,1)) + assert repr(a) == '''array([[0], + [1]])''' def test_repr_slice(self): from _numpypy import array, zeros @@ -1573,18 +1602,3 @@ a = arange(0, 0.8, 0.1) assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) - - -class AppTestRanges(BaseNumpyAppTest): - def test_app_reshape(self): - from _numpypy import arange, array, dtype, reshape - a = arange(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = range(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = array(range(105)).reshape(3, 5, 7) - assert a.reshape(1, -1).shape == (1, 105) - assert a.reshape(1, 1, -1).shape == (1, 1, 105) - assert a.reshape(-1, 1, 1).shape == (105, 1, 1) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -190,14 +190,24 @@ for i in range(3): assert c[i] == a[i] - b[i] - def test_floor(self): - from _numpypy import array, floor - - reference = [-2.0, -1.0, 0.0, 1.0, 1.0] - a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) + def test_floorceil(self): + from _numpypy import array, floor, ceil + import math + reference = [-2.0, -2.0, -1.0, 0.0, 1.0, 1.0, 0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) b = floor(a) for i in range(5): assert b[i] == reference[i] + reference = [-1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) + b = ceil(a) + assert (reference == 
b).all() + inf = float("inf") + data = [1.5, 2.9999, -1.999, inf] + results = [math.floor(x) for x in data] + assert (floor(data) == results).all() + results = [math.ceil(x) for x in data] + assert (ceil(data) == results).all() def test_copysign(self): from _numpypy import array, copysign @@ -238,7 +248,7 @@ assert b[i] == math.sin(a[i]) a = sin(array([True, False], dtype=bool)) - assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise + assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise assert a[1] == 0.0 def test_cos(self): @@ -259,7 +269,6 @@ for i in range(len(a)): assert b[i] == math.tan(a[i]) - def test_arcsin(self): import math from _numpypy import array, arcsin @@ -283,7 +292,6 @@ for i in range(len(a)): assert b[i] == math.acos(a[i]) - a = array([-10, -1.5, -1.01, 1.01, 1.5, 10, float('nan'), float('inf'), float('-inf')]) b = arccos(a) for f in b: diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -350,7 +350,8 @@ self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, - 'int_eq': 1, 'guard_false': 1, 'jump': 1}) + 'int_eq': 1, 'guard_false': 1, 'jump': 1, + 'arraylen_gc': 1}) def define_virtual_slice(): return """ diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -391,6 +391,10 @@ return math.floor(v) @simple_unary_op + def ceil(self, v): + return math.ceil(v) + + @simple_unary_op def exp(self, v): try: return math.exp(v) diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -11,6 +11,7 @@ 'set_optimize_hook': 'interp_resop.set_optimize_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 
'interp_resop.WrappedOp', + 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', } diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,5 +1,6 @@ -from pypy.interpreter.typedef import TypeDef, GetSetProperty +from pypy.interpreter.typedef import (TypeDef, GetSetProperty, + interp_attrproperty, interp_attrproperty_w) from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode @@ -10,6 +11,7 @@ from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant from pypy.rlib import jit_hooks +from pypy.module.pypyjit.interp_jit import pypyjitdriver class Cache(object): in_recursion = False @@ -111,13 +113,24 @@ def wrap_oplist(space, logops, operations, ops_offset=None): l_w = [] + jitdrivers_sd = logops.metainterp_sd.jitdrivers_sd for op in operations: if ops_offset is None: ofs = -1 else: ofs = ops_offset.get(op, 0) - l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, - logops.repr_of_resop(op))) + if op.opnum == rop.DEBUG_MERGE_POINT: + jd_sd = jitdrivers_sd[op.getarg(0).getint()] + greenkey = op.getarglist()[2:] + repr = jd_sd.warmstate.get_location_str(greenkey) + w_greenkey = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) + l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), + logops.repr_of_resop(op), + jd_sd.jitdriver.name, + w_greenkey)) + else: + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) return l_w class WrappedBox(Wrappable): @@ -150,6 +163,15 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) + at unwrap_spec(repr=str, jd_name=str) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + args = [space.interp_w(WrappedBox, w_arg).llbox 
for w_arg in + space.listview(w_args)] + num = rop.DEBUG_MERGE_POINT + return DebugMergePoint(space, + jit_hooks.resop_new(num, args, jit_hooks.emptyval()), + repr, jd_name, w_greenkey) + class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely """ @@ -182,6 +204,25 @@ box = space.interp_w(WrappedBox, w_box) jit_hooks.resop_setresult(self.op, box.llbox) +class DebugMergePoint(WrappedOp): + def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + WrappedOp.__init__(self, op, -1, repr_of_resop) + self.w_greenkey = w_greenkey + self.jd_name = jd_name + + def get_pycode(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(0)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_bytecode_no(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(1)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_jitdriver_name(self, space): + return space.wrap(self.jd_name) + WrappedOp.typedef = TypeDef( 'ResOperation', __doc__ = WrappedOp.__doc__, @@ -195,3 +236,15 @@ WrappedOp.descr_setresult) ) WrappedOp.acceptable_as_base_class = False + +DebugMergePoint.typedef = TypeDef( + 'DebugMergePoint', WrappedOp.typedef, + __new__ = interp2app(descr_new_dmp), + greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), + pycode = GetSetProperty(DebugMergePoint.get_pycode), + bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), +) +DebugMergePoint.acceptable_as_base_class = False + + diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -127,7 +127,7 @@ 'imp', 'sys', 'array', '_ffi', 
'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', '_codecs']: if modname == 'pypyjit' and 'interp_resop' in rest: return False return True diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -92,6 +92,7 @@ cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_dmp_num = space.wrap(rop.DEBUG_MERGE_POINT) cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) cls.orig_oplist = oplist @@ -117,6 +118,10 @@ assert elem[2][2] == False assert len(elem[3]) == 4 int_add = elem[3][0] + dmp = elem[3][1] + assert isinstance(dmp, pypyjit.DebugMergePoint) + assert dmp.pycode is self.f.func_code + assert dmp.greenkey == (self.f.func_code, 0, False) #assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() @@ -211,3 +216,18 @@ assert op.getarg(0).getint() == 4 op.result = box assert op.result.getint() == 1 + + def test_creation_dmp(self): + from pypyjit import DebugMergePoint, Box + + def f(): + pass + + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + assert op.bytecode_no == 0 + assert op.pycode is f.func_code + assert repr(op) == 'repr' + assert op.jitdriver_name == 'pypyjit' + assert op.num == self.dmp_num + op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + raises(AttributeError, 'op.pycode') diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 7, 1, "dev", 0) 
#XXX # sync patchlevel.h +PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py rename from lib_pypy/numpypy/test/test_fromnumeric.py rename to pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/lib_pypy/numpypy/test/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -1,7 +1,7 @@ - from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -class AppTestFromNumeric(BaseNumpyAppTest): + +class AppTestFromNumeric(BaseNumpyAppTest): def test_argmax(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, argmax @@ -18,12 +18,12 @@ from numpypy import array, arange, argmin a = arange(6).reshape((2,3)) assert argmin(a) == 0 - # assert (argmax(a, axis=0) == array([0, 0, 0])).all() - # assert (argmax(a, axis=1) == array([0, 0])).all() + assert (argmin(a, axis=0) == array([0, 0, 0])).all() + assert (argmin(a, axis=1) == array([0, 0])).all() b = arange(6) b[1] = 0 assert argmin(b) == 0 - + def test_shape(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, identity, shape @@ -44,7 +44,7 @@ # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() # If the accumulator is too small, overflow occurs: # assert ones(128, dtype=int8).sum(dtype=int8) == -128 - + def test_amin(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, amin @@ -86,14 +86,14 @@ assert ndim([[1,2,3],[4,5,6]]) == 2 assert ndim(array([[1,2,3],[4,5,6]])) == 2 assert ndim(1) == 0 - + def test_rank(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, rank assert rank([[1,2,3],[4,5,6]]) == 2 assert rank(array([[1,2,3],[4,5,6]])) == 2 assert rank(1) == 0 - + def test_var(self): from 
numpypy import array, var a = array([[1,2],[3,4]]) @@ -107,3 +107,31 @@ assert std(a) == 1.1180339887498949 # assert (std(a, axis=0) == array([ 1., 1.])).all() # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() + + def test_mean(self): + from numpypy import array, mean + assert mean(array(range(5))) == 2.0 + assert mean(range(5)) == 2.0 + + def test_reshape(self): + from numpypy import arange, array, dtype, reshape + a = arange(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = range(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert reshape(a, (1, -1)).shape == (1, 105) + assert reshape(a, (1, 1, -1)).shape == (1, 1, 105) + assert reshape(a, (-1, 1, 1)).shape == (105, 1, 1) + + def test_transpose(self): + from numpypy import arange, array, transpose, ones + x = arange(4).reshape((2,2)) + assert (transpose(x) == array([[0, 2],[1, 3]])).all() + # Once axes argument is implemented, add more tests + raises(NotImplementedError, "transpose(x, axes=(1, 0, 2))") + # x = ones((1, 2, 3)) + # assert transpose(x, (1, 0, 2)).shape == (2, 1, 3) + diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -64,6 +64,12 @@ check(', '.join([u'a']), u'a') check(', '.join(['a', u'b']), u'a, b') check(u', '.join(['a', 'b']), u'a, b') + try: + u''.join([u'a', 2, 3]) + except TypeError, e: + assert 'sequence item 1' in str(e) + else: + raise Exception("DID NOT RAISE") if sys.version_info >= (2,3): def test_contains_ex(self): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -201,7 +201,7 @@ return space.newbool(container.find(item) != -1) def unicode_join__Unicode_ANY(space, w_self, w_list): - list_w = space.unpackiterable(w_list) + list_w = 
space.listview(w_list) size = len(list_w) if size == 0: @@ -216,22 +216,21 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - sb = UnicodeBuilder() + prealloc_size = len(self) * (size - 1) + for i in range(size): + try: + prealloc_size += len(space.unicode_w(list_w[i])) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + raise operationerrfmt(space.w_TypeError, + "sequence item %d: expected string or Unicode", i) + sb = UnicodeBuilder(prealloc_size) for i in range(size): if self and i != 0: sb.append(self) w_s = list_w[i] - if isinstance(w_s, W_UnicodeObject): - # shortcut for performance - sb.append(w_s._value) - else: - try: - sb.append(space.unicode_w(w_s)) - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - raise operationerrfmt(space.w_TypeError, - "sequence item %d: expected string or Unicode", i) + sb.append(space.unicode_w(w_s)) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -52,7 +52,10 @@ from pypy.jit.metainterp.history import ResOperation args = [_cast_to_box(llargs[i]) for i in range(len(llargs))] - res = _cast_to_box(llres) + if llres: + res = _cast_to_box(llres) + else: + res = None return _cast_to_gcref(ResOperation(no, args, res)) @register_helper(annmodel.SomePtr(llmemory.GCREF)) diff --git a/pypy/rlib/longlong2float.py b/pypy/rlib/longlong2float.py --- a/pypy/rlib/longlong2float.py +++ b/pypy/rlib/longlong2float.py @@ -79,19 +79,23 @@ longlong2float = rffi.llexternal( "pypy__longlong2float", [rffi.LONGLONG], rffi.DOUBLE, _callable=longlong2float_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__longlong2float") float2longlong = rffi.llexternal( "pypy__float2longlong", [rffi.DOUBLE], 
rffi.LONGLONG, _callable=float2longlong_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__float2longlong") uint2singlefloat = rffi.llexternal( "pypy__uint2singlefloat", [rffi.UINT], rffi.FLOAT, _callable=uint2singlefloat_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__uint2singlefloat") singlefloat2uint = rffi.llexternal( "pypy__singlefloat2uint", [rffi.FLOAT], rffi.UINT, _callable=singlefloat2uint_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__singlefloat2uint") diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -420,7 +420,7 @@ vobj.concretetype.TO._gckind == 'gc') else: from pypy.rpython.ootypesystem import ootype - ok = isinstance(vobj.concretetype, ootype.Instance) + ok = isinstance(vobj.concretetype, (ootype.Instance, ootype.BuiltinType)) if not ok: from pypy.rpython.error import TyperError raise TyperError("compute_unique_id() cannot be applied to" diff --git a/pypy/rpython/lltypesystem/rclass.py b/pypy/rpython/lltypesystem/rclass.py --- a/pypy/rpython/lltypesystem/rclass.py +++ b/pypy/rpython/lltypesystem/rclass.py @@ -510,7 +510,13 @@ ctype = inputconst(Void, self.object_type) cflags = inputconst(Void, flags) vlist = [ctype, cflags] - vptr = llops.genop('malloc', vlist, + cnonmovable = self.classdef.classdesc.read_attribute( + '_alloc_nonmovable_', Constant(False)) + if cnonmovable.value: + opname = 'malloc_nonmovable' + else: + opname = 'malloc' + vptr = llops.genop(opname, vlist, resulttype = Ptr(self.object_type)) ctypeptr = inputconst(CLASSTYPE, self.rclass.getvtable()) self.setfield(vptr, 
'__class__', ctypeptr, llops) diff --git a/pypy/rpython/lltypesystem/rtuple.py b/pypy/rpython/lltypesystem/rtuple.py --- a/pypy/rpython/lltypesystem/rtuple.py +++ b/pypy/rpython/lltypesystem/rtuple.py @@ -27,6 +27,10 @@ def newtuple(cls, llops, r_tuple, items_v): # items_v should have the lowleveltype of the internal reprs + assert len(r_tuple.items_r) == len(items_v) + for r_item, v_item in zip(r_tuple.items_r, items_v): + assert r_item.lowleveltype == v_item.concretetype + # if len(r_tuple.items_r) == 0: return inputconst(Void, ()) # a Void empty tuple c1 = inputconst(Void, r_tuple.lowleveltype.TO) diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -512,6 +512,7 @@ "ll_append_char": Meth([CHARTP], Void), "ll_append": Meth([STRINGTP], Void), "ll_build": Meth([], STRINGTP), + "ll_getlength": Meth([], Signed), }) self._setup_methods({}) @@ -1376,6 +1377,9 @@ def _cast_to_object(self): return make_object(self) + def _identityhash(self): + return object.__hash__(self) + class _string(_builtin_type): def __init__(self, STRING, value = ''): @@ -1543,6 +1547,9 @@ else: return make_unicode(u''.join(self._buf)) + def ll_getlength(self): + return self.ll_build().ll_strlen() + class _null_string_builder(_null_mixin(_string_builder), _string_builder): def __init__(self, STRING_BUILDER): self.__dict__["_TYPE"] = STRING_BUILDER diff --git a/pypy/rpython/ootypesystem/rbuilder.py b/pypy/rpython/ootypesystem/rbuilder.py --- a/pypy/rpython/ootypesystem/rbuilder.py +++ b/pypy/rpython/ootypesystem/rbuilder.py @@ -21,6 +21,10 @@ builder.ll_append_char(char) @staticmethod + def ll_getlength(builder): + return builder.ll_getlength() + + @staticmethod def ll_append(builder, string): builder.ll_append(string) diff --git a/pypy/rpython/rrange.py b/pypy/rpython/rrange.py --- a/pypy/rpython/rrange.py +++ b/pypy/rpython/rrange.py @@ -204,7 +204,10 @@ v_index = 
hop.gendirectcall(self.ll_getnextindex, v_enumerate) hop2 = hop.copy() hop2.args_r = [self.r_baseiter] + r_item_src = self.r_baseiter.r_list.external_item_repr + r_item_dst = hop.r_result.items_r[1] v_item = self.r_baseiter.rtype_next(hop2) + v_item = hop.llops.convertvar(v_item, r_item_src, r_item_dst) return hop.r_result.newtuple(hop.llops, hop.r_result, [v_index, v_item]) diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -124,9 +124,5 @@ pass class TestOOtype(BaseTestStringBuilder, OORtypeMixin): - def test_string_getlength(self): - py.test.skip("getlength(): not implemented on ootype") - def test_unicode_getlength(self): - py.test.skip("getlength(): not implemented on ootype") def test_append_charpsize(self): py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -463,6 +463,31 @@ assert x1 == intmask(x0) assert x3 == intmask(x2) + def test_id_on_builtins(self): + from pypy.rlib.objectmodel import compute_unique_id + from pypy.rlib.rstring import StringBuilder, UnicodeBuilder + def fn(): + return (compute_unique_id("foo"), + compute_unique_id(u"bar"), + compute_unique_id([1]), + compute_unique_id({"foo": 3}), + compute_unique_id(StringBuilder()), + compute_unique_id(UnicodeBuilder())) + res = self.interpret(fn, []) + for id in self.ll_unpack_tuple(res, 6): + assert isinstance(id, (int, r_longlong)) + + def test_uniqueness_of_id_on_strings(self): + from pypy.rlib.objectmodel import compute_unique_id + def fn(s1, s2): + return (compute_unique_id(s1), compute_unique_id(s2)) + + s1 = "foo" + s2 = ''.join(['f','oo']) + res = self.interpret(fn, [self.string_to_ll(s1), self.string_to_ll(s2)]) + i1, i2 = self.ll_unpack_tuple(res, 2) + assert i1 != i2 + def 
test_cast_primitive(self): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): diff --git a/pypy/rpython/test/test_rclass.py b/pypy/rpython/test/test_rclass.py --- a/pypy/rpython/test/test_rclass.py +++ b/pypy/rpython/test/test_rclass.py @@ -1130,6 +1130,18 @@ assert sorted([u]) == [6] # 32-bit types assert sorted([i, r, d, l]) == [2, 3, 4, 5] # 64-bit types + def test_nonmovable(self): + for (nonmovable, opname) in [(True, 'malloc_nonmovable'), + (False, 'malloc')]: + class A(object): + _alloc_nonmovable_ = nonmovable + def f(): + return A() + t, typer, graph = self.gengraph(f, []) + assert summary(graph) == {opname: 1, + 'cast_pointer': 1, + 'setfield': 1} + class TestOOtype(BaseTestRclass, OORtypeMixin): diff --git a/pypy/rpython/test/test_rrange.py b/pypy/rpython/test/test_rrange.py --- a/pypy/rpython/test/test_rrange.py +++ b/pypy/rpython/test/test_rrange.py @@ -169,6 +169,22 @@ res = self.interpret(fn, [2]) assert res == 789 + def test_enumerate_instances(self): + class A: + pass + def fn(n): + a = A() + b = A() + a.k = 10 + b.k = 20 + for i, x in enumerate([a, b]): + if i == n: + return x.k + return 5 + res = self.interpret(fn, [1]) + assert res == 20 + + class TestLLtype(BaseTestRrange, LLRtypeMixin): from pypy.rpython.lltypesystem import rrange diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -140,13 +140,15 @@ bytecode_name = None is_bytecode = True inline_level = None + has_dmp = False def parse_code_data(self, arg): m = re.search('<code object ([<>\w]+)[\.,] file \'(.+?)\'[\.,] line (\d+)> #(\d+) (\w+)', arg) if m is None: # a non-code loop, like StrLiteralSearch or something - self.bytecode_name = arg + if arg: + self.bytecode_name = arg else: self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups() self.startlineno = int(lineno) @@ -218,7 +220,7 @@ self.inputargs = inputargs self.chunks = chunks for chunk in
self.chunks: - if chunk.filename is not None: + if chunk.bytecode_name is not None: self.startlineno = chunk.startlineno self.filename = chunk.filename self.name = chunk.name diff --git a/pypy/tool/jitlogparser/test/test_parser.py b/pypy/tool/jitlogparser/test/test_parser.py --- a/pypy/tool/jitlogparser/test/test_parser.py +++ b/pypy/tool/jitlogparser/test/test_parser.py @@ -283,3 +283,13 @@ assert loops[-1].count == 1234 assert loops[1].count == 123 assert loops[2].count == 12 + +def test_parse_nonpython(): + loop = parse(""" + [] + debug_merge_point(0, 'random') + debug_merge_point(0, '<code object f. file 'x.py'. line 2> #15 COMPARE_OP') + """) + f = Function.from_operations(loop.operations, LoopStorage()) + assert f.chunks[-1].filename == 'x.py' + assert f.filename is None diff --git a/pypy/translator/jvm/builtin.py b/pypy/translator/jvm/builtin.py --- a/pypy/translator/jvm/builtin.py +++ b/pypy/translator/jvm/builtin.py @@ -84,6 +84,9 @@ (ootype.StringBuilder.__class__, "ll_build"): jvm.Method.v(jStringBuilder, "toString", (), jString), + (ootype.StringBuilder.__class__, "ll_getlength"): + jvm.Method.v(jStringBuilder, "length", (), jInt), + (ootype.String.__class__, "ll_hash"): jvm.Method.v(jString, "hashCode", (), jInt), diff --git a/pypy/translator/jvm/database.py b/pypy/translator/jvm/database.py --- a/pypy/translator/jvm/database.py +++ b/pypy/translator/jvm/database.py @@ -358,7 +358,7 @@ ootype.Unsigned:jvm.PYPYSERIALIZEUINT, ootype.SignedLongLong:jvm.LONGTOSTRINGL, ootype.UnsignedLongLong: jvm.PYPYSERIALIZEULONG, - ootype.Float:jvm.DOUBLETOSTRINGD, + ootype.Float:jvm.PYPYSERIALIZEDOUBLE, ootype.Bool:jvm.PYPYSERIALIZEBOOLEAN, ootype.Void:jvm.PYPYSERIALIZEVOID, ootype.Char:jvm.PYPYESCAPEDCHAR, diff --git a/pypy/translator/jvm/metavm.py b/pypy/translator/jvm/metavm.py --- a/pypy/translator/jvm/metavm.py +++ b/pypy/translator/jvm/metavm.py @@ -92,6 +92,7 @@ CASTS = { # FROM TO (ootype.Signed, ootype.UnsignedLongLong): jvm.I2L, + (ootype.Unsigned, ootype.UnsignedLongLong): jvm.I2L,
(ootype.SignedLongLong, ootype.Signed): jvm.L2I, (ootype.UnsignedLongLong, ootype.Unsigned): jvm.L2I, (ootype.UnsignedLongLong, ootype.Signed): jvm.L2I, diff --git a/pypy/translator/jvm/opcodes.py b/pypy/translator/jvm/opcodes.py --- a/pypy/translator/jvm/opcodes.py +++ b/pypy/translator/jvm/opcodes.py @@ -101,6 +101,7 @@ 'jit_force_virtualizable': Ignore, 'jit_force_virtual': DoNothing, 'jit_force_quasi_immutable': Ignore, + 'jit_is_virtual': PushPrimitive(ootype.Bool, False), 'debug_assert': [], # TODO: implement? 'debug_start_traceback': Ignore, diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -283,6 +283,14 @@ } } + public double pypy__longlong2float(long l) { + return Double.longBitsToDouble(l); + } + + public long pypy__float2longlong(double d) { + return Double.doubleToRawLongBits(d); + } + public double ooparse_float(String s) { try { return Double.parseDouble(s); @@ -353,6 +361,19 @@ return "False"; } + public static String serialize_double(double d) { + if (Double.isNaN(d)) { + return "float(\"nan\")"; + } else if (Double.isInfinite(d)) { + if (d > 0) + return "float(\"inf\")"; + else + return "float(\"-inf\")"; + } else { + return Double.toString(d); + } + } + private static String format_char(char c) { String res = "\\x"; if (c <= 0x0F) res = res + "0"; diff --git a/pypy/translator/jvm/test/runtest.py b/pypy/translator/jvm/test/runtest.py --- a/pypy/translator/jvm/test/runtest.py +++ b/pypy/translator/jvm/test/runtest.py @@ -56,6 +56,7 @@ # CLI could-be duplicate class JvmGeneratedSourceWrapper(object): + def __init__(self, gensrc): """ gensrc is an instance of JvmGeneratedSource """ self.gensrc = gensrc diff --git a/pypy/translator/jvm/test/test_builder.py b/pypy/translator/jvm/test/test_builder.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_builder.py @@ -0,0 +1,7 @@ +from 
pypy.translator.jvm.test.runtest import JvmTest +from pypy.rpython.test.test_rbuilder import BaseTestStringBuilder +import py + +class TestJvmStringBuilder(JvmTest, BaseTestStringBuilder): + def test_append_charpsize(self): + py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/translator/jvm/test/test_longlong2float.py b/pypy/translator/jvm/test/test_longlong2float.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_longlong2float.py @@ -0,0 +1,20 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rlib.longlong2float import * +from pypy.rlib.test.test_longlong2float import enum_floats +from pypy.rlib.test.test_longlong2float import fn as float2longlong2float +import py + +class TestLongLong2Float(JvmTest): + + def test_float2longlong_and_longlong2float(self): + def func(f): + return float2longlong2float(f) + + for f in enum_floats(): + assert repr(f) == repr(self.interpret(func, [f])) + + def test_uint2singlefloat(self): + py.test.skip("uint2singlefloat is not implemented in ootype") + + def test_singlefloat2uint(self): + py.test.skip("singlefloat2uint is not implemented in ootype") diff --git a/pypy/translator/jvm/typesystem.py b/pypy/translator/jvm/typesystem.py --- a/pypy/translator/jvm/typesystem.py +++ b/pypy/translator/jvm/typesystem.py @@ -955,6 +955,7 @@ PYPYSERIALIZEUINT = Method.s(jPyPy, 'serialize_uint', (jInt,), jString) PYPYSERIALIZEULONG = Method.s(jPyPy, 'serialize_ulonglong', (jLong,),jString) PYPYSERIALIZEVOID = Method.s(jPyPy, 'serialize_void', (), jString) +PYPYSERIALIZEDOUBLE = Method.s(jPyPy, 'serialize_double', (jDouble,), jString) PYPYESCAPEDCHAR = Method.s(jPyPy, 'escaped_char', (jChar,), jString) PYPYESCAPEDUNICHAR = Method.s(jPyPy, 'escaped_unichar', (jChar,), jString) PYPYESCAPEDSTRING = Method.s(jPyPy, 'escaped_string', (jString,), jString) diff --git a/pypy/translator/oosupport/test_template/cast.py b/pypy/translator/oosupport/test_template/cast.py --- 
a/pypy/translator/oosupport/test_template/cast.py +++ b/pypy/translator/oosupport/test_template/cast.py @@ -13,6 +13,9 @@ def to_longlong(x): return r_longlong(x) +def to_ulonglong(x): + return r_ulonglong(x) + def uint_to_int(x): return intmask(x) @@ -56,6 +59,9 @@ def test_unsignedlonglong_to_unsigned4(self): self.check(to_uint, [r_ulonglong(18446744073709551615l)]) # max 64 bit num + def test_unsigned_to_usignedlonglong(self): + self.check(to_ulonglong, [r_uint(42)]) + def test_uint_to_int(self): self.check(uint_to_int, [r_uint(sys.maxint+1)]) From noreply at buildbot.pypy.org Fri Jan 20 11:21:13 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 20 Jan 2012 11:21:13 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: first approximation of array[array-of-int] Message-ID: <20120120102113.91D5582CF8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51516:d1cbcc34848d Date: 2012-01-20 12:20 +0200 http://bitbucket.org/pypy/pypy/changeset/d1cbcc34848d/ Log: first approximation of array[array-of-int] diff --git a/pypy/module/micronumpy/interp_boxes.py b/pypy/module/micronumpy/interp_boxes.py --- a/pypy/module/micronumpy/interp_boxes.py +++ b/pypy/module/micronumpy/interp_boxes.py @@ -104,7 +104,10 @@ _attrs_ = () class W_IntegerBox(W_NumberBox): - pass + def convert_to_int(self): + from pypy.rpython.lltypesystem import rffi, lltype + + return rffi.cast(lltype.Signed, self.value) class W_SignedIntegerBox(W_IntegerBox): pass diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -20,6 +20,9 @@ if self.step != 0: shape.append(self.lgt) + def get_iter(self): + xxx + class IntArrayChunk(BaseChunk): def __init__(self, arr): self.arr = arr.get_concrete() @@ -27,10 +30,22 @@ def extend_shape(self, shape): shape.extend(self.arr.shape) + def get_iter(self): + return 
self.arr.create_iter() + + def get_index(self, iter): + return self.arr.getitem(iter.offset).convert_to_int() + class BoolArrayChunk(BaseChunk): def __init__(self, arr): self.arr = arr.get_concrete() + def extend_shape(self, shape): + xxx + + def get_iter(self): + xxx + class BaseTransform(object): pass @@ -212,6 +227,30 @@ return self._done # ------ other iterators that are not part of the computation frame ---------- + +class ChunkIterator(object): + def __init__(self, shape, chunks): + self.chunks = chunks + self.indices = [0] * len(shape) + self.shape = shape + self.chunk_iters = [chunk.get_iter() for chunk in self.chunks] + + def next(self, shapelen): + for i in range(shapelen - 1, -1, -1): + if self.indices[i] < self.shape[i] - 1: + self.indices[i] += 1 + self.chunk_iters[i] = self.chunk_iters[i].next() + break + else: + self.indices[i] = 0 + # XXX reset one dim iter probably + return self + + def get_index(self, shapelen): + l = [] + for i in range(shapelen): + l.append(self.chunks[i].get_index(self.chunk_iters[i])) + return l class SkipLastAxisIterator(object): def __init__(self, arr): diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -10,7 +10,8 @@ from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator, Chunk, ViewIterator, BoolArrayChunk, IntArrayChunk + SkipLastAxisIterator, Chunk, ViewIterator, BoolArrayChunk, IntArrayChunk,\ + ChunkIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -583,6 +584,9 @@ item = concrete._index_of_single_item(space, w_idx) return concrete.getitem(item) chunks = self._prepare_slice_args(space, w_idx) + for chunk in chunks: + if not isinstance(chunk, Chunk): + return self.get_concrete().force_slice_getitem(space, chunks) 
return space.wrap(self.create_slice(chunks)) def descr_setitem(self, space, w_idx, w_value): @@ -604,13 +608,6 @@ view = self.create_slice(chunks).get_concrete() view.setslice(space, w_value) - def force_slice(self, shape, chunks): - size = 1 - for elem in shape: - size *= elem - res = W_NDimArray(size, shape, self.find_dtype()) - xxx - @jit.unroll_safe def create_slice(self, chunks): shape = [] @@ -620,9 +617,6 @@ s = i + 1 assert s >= 0 shape += self.shape[s:] - for chunk in chunks: - if not isinstance(chunk, Chunk): - return self.force_slice(shape, chunks) if not isinstance(self, ConcreteArray): return VirtualSlice(self, chunks, shape) r = calculate_slice_strides(self.shape, self.start, self.strides, @@ -1104,6 +1098,20 @@ builder.append(']') @jit.unroll_safe + def _index_of_single_item_int(self, space, index): + item = self.start + for i in range(len(index)): + v = index[i] + if v < 0: + v += self.shape[i] + if v < 0 or v >= self.shape[i]: + raise operationerrfmt(space.w_IndexError, + "index (%d) out of range (0<=index<%d", i, self.shape[i], + ) + item += v * self.strides[i] + return item + + @jit.unroll_safe def _index_of_single_item(self, space, w_idx): if space.isinstance_w(w_idx, space.w_int): idx = space.int_w(w_idx) @@ -1115,17 +1123,7 @@ return self.start + idx * self.strides[0] index = [space.int_w(w_item) for w_item in space.fixedview(w_idx)] - item = self.start - for i in range(len(index)): - v = index[i] - if v < 0: - v += self.shape[i] - if v < 0 or v >= self.shape[i]: - raise operationerrfmt(space.w_IndexError, - "index (%d) out of range (0<=index<%d", i, self.shape[i], - ) - item += v * self.strides[i] - return item + return self._index_of_single_item_int(space, index) def setslice(self, space, w_value): res_shape = shape_agreement(space, self.shape, w_value.shape) @@ -1178,6 +1176,28 @@ def fill(self, space, w_value): self.setslice(space, scalar_w(space, self.dtype, w_value)) + def force_slice_getitem(self, space, chunks): + shape = [] + i = -1 + 
for i, chunk in enumerate(chunks): + chunk.extend_shape(shape) + s = i + 1 + assert s >= 0 + shape += self.shape[s:] + size = 1 + for elem in shape: + size *= elem + res = W_NDimArray(size, shape, self.find_dtype()) + ri = res.create_iter() + ci = ChunkIterator(shape, chunks) + shapelen = len(shape) + while not ri.done(): + index = ci.get_index(shapelen) + v = self.getitem(self._index_of_single_item_int(space, index)) + res.setitem(ri.offset, v) + ri = ri.next(shapelen) + ci = ci.next(shapelen) + return res class ViewArray(ConcreteArray): def create_sig(self): From noreply at buildbot.pypy.org Fri Jan 20 11:24:28 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 20 Jan 2012 11:24:28 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-2: more tests and a fix Message-ID: <20120120102428.2B10F82CF8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-2 Changeset: r51517:b5668c7a1c53 Date: 2012-01-20 12:24 +0200 http://bitbucket.org/pypy/pypy/changeset/b5668c7a1c53/ Log: more tests and a fix diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -239,7 +239,7 @@ for i in range(shapelen - 1, -1, -1): if self.indices[i] < self.shape[i] - 1: self.indices[i] += 1 - self.chunk_iters[i] = self.chunk_iters[i].next() + self.chunk_iters[i] = self.chunk_iters[i].next(shapelen) break else: self.indices[i] = 0 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1312,6 +1312,10 @@ raises(IndexError,'arange(3)[array([15])]') assert arange(3)[array([-3])] == 0 raises(IndexError,'arange(3)[array([-15])]') + a = arange(10) + assert (a[array([3, 2, 1])] == [3, 2, 1]).all() + raises(IndexError, 'a[array([3, 2, 1, 15])]') + assert (a[array([3, 2, 5, 
3])] == [3, 2, 5, 3]).all() def test_fill(self): from _numpypy import array From noreply at buildbot.pypy.org Fri Jan 20 12:56:36 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 20 Jan 2012 12:56:36 +0100 (CET) Subject: [pypy-commit] pypy stm: Kill parts of the code that are outdated. Message-ID: <20120120115636.55FBB82CF8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51518:83ad741c9a27 Date: 2012-01-20 12:56 +0100 http://bitbucket.org/pypy/pypy/changeset/83ad741c9a27/ Log: Kill parts of the code that are outdated. diff --git a/pypy/translator/stm/_rffi_stm.py b/pypy/translator/stm/_rffi_stm.py --- a/pypy/translator/stm/_rffi_stm.py +++ b/pypy/translator/stm/_rffi_stm.py @@ -26,17 +26,8 @@ stm_descriptor_init = llexternal('stm_descriptor_init', [], lltype.Void) stm_descriptor_done = llexternal('stm_descriptor_done', [], lltype.Void) -##begin_transaction = llexternal('STM_begin_transaction', [], lltype.Void) -##begin_inevitable_transaction = llexternal('stm_begin_inevitable_transaction', -## [], lltype.Void) -##commit_transaction = llexternal('stm_commit_transaction', [], lltype.Signed) stm_try_inevitable = llexternal('stm_try_inevitable', [], lltype.Void) -##descriptor_init_and_being_inevitable_transaction = llexternal( -## 'stm_descriptor_init_and_being_inevitable_transaction', [], lltype.Void) -##commit_transaction_and_descriptor_done = llexternal( -## 'stm_commit_transaction_and_descriptor_done', [], lltype.Void) - stm_read_word = llexternal('stm_read_word', [SignedP], lltype.Signed) stm_write_word = llexternal('stm_write_word', [SignedP, lltype.Signed], lltype.Void) diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -59,9 +59,6 @@ /************************************************************/ -/* Uncomment the line to try this extra code. 
Doesn't work reliably so far */ -/*#define COMMIT_OTHER_INEV*/ - #define ABORT_REASONS 8 #define SPINLOOP_REASONS 10 #define OTHERINEV_REASONS 5 @@ -75,9 +72,6 @@ unsigned num_commits; unsigned num_aborts[ABORT_REASONS]; unsigned num_spinloops[SPINLOOP_REASONS]; -#ifdef COMMIT_OTHER_INEV - unsigned num_otherinev[OTHERINEV_REASONS]; -#endif unsigned int spinloop_counter; owner_version_t my_lock_word; struct RedoLog redolog; /* last item, because it's the biggest one */ @@ -93,10 +87,6 @@ if there is an inevitable transaction running */ static volatile unsigned long global_timestamp = 2; static __thread struct tx_descriptor *thread_descriptor = NULL_TX; -#ifdef COMMIT_OTHER_INEV -static struct tx_descriptor *volatile thread_descriptor_inev; -static volatile unsigned long d_inev_checking = 0; -#endif /************************************************************/ @@ -352,7 +342,7 @@ static pthread_mutex_t mutex_inevitable = PTHREAD_MUTEX_INITIALIZER; # ifdef RPY_STM_ASSERT unsigned long locked_by = 0; -void mutex_lock(void) +static void mutex_lock(void) { unsigned long pself = (unsigned long)pthread_self(); if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, @@ -363,7 +353,7 @@ if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "%lx: mutex inev locked\n", pself); } -void mutex_unlock(void) +static void mutex_unlock(void) { unsigned long pself = (unsigned long)pthread_self(); locked_by = 0; @@ -380,85 +370,7 @@ # define mutex_unlock() /* nothing */ #endif -#ifdef COMMIT_OTHER_INEV -unsigned long can_commit_with_other_inevitable(struct tx_descriptor *d, - unsigned long expected) -{ - int i; - owner_version_t ovt; - unsigned long result = 0; - struct tx_descriptor *d_inev; - - // 'd_inev_checking' is 1 or 2 when an inevitable transaction is running - // and didn't start committing yet; otherwise it is 0. It is normally 1 - // except in this function. 
- if (!bool_cas(&d_inev_checking, 1, 2)) - { - d->num_otherinev[4]++; - return 0; - } - - // optimization only: did the inevitable thread 'd_inev' read any data - // that we are about to commit? If we are sure that the answer is - // negative, then commit anyway, because it cannot make the inevitable - // thread fail. We can safely check an approximation of this, because - // we hold a lock on all orecs that we would like to write. So if all - // orecs read by d_inev are not locked now, then no conflict. This - // function is allowed to "fail" and give up rather than spinloop - // waiting for a condition to be true, which is potentially dangerous - // here, because we acquired all the locks. - - // Note that if the inevitable thread itself adds in parallel an extra - // orec to d_inev->reads, *and* if this new orec is locked, then we - // will miss it here; but the d_inev thread will spinloop waiting for - // us to be done. So even if we commit, the d_inev thread will just - // wait and load the new committed value. - - // while we are in this function, the d_inev thread is prevented from - // going too far with the commitTransaction() code because d_inev_checking - // is greater than 1; it will just tx_spinloop(9). (And of course it - // cannot abort.) 
- - d_inev = thread_descriptor_inev; - if (!bool_cas(&d_inev->reads.locked, 0, 1)) - { - d->num_otherinev[1]++; - goto give_up_1; - } - - for (i=d_inev->reads.size; i--; ) - { - ovt = d_inev->reads.items[i]->v; // read this orec - if (ovt == d->my_lock_word) - { - d->num_otherinev[2]++; - goto give_up_2; - } - } - assert(expected & 1); - if (!change_global_timestamp(d, expected, expected + 2)) - { - d->num_otherinev[3]++; - goto give_up_2; - } - - /* success: scale d_inet forward */ - d->num_otherinev[0]++; - result = expected + 1; - assert(d_inev->start_time == result - 2); - d_inev->start_time = result; - CFENCE; - - give_up_2: - d_inev->reads.locked = 0; - - give_up_1: - d_inev_checking = 1; - return result; -} -#endif - -void wait_end_inevitability(struct tx_descriptor *d) +static void wait_end_inevitability(struct tx_descriptor *d) { unsigned long curts; releaseLocksForRetry(d); @@ -485,16 +397,11 @@ acquireLocks(d); } -void commitInevitableTransaction(struct tx_descriptor *d) +static void commitInevitableTransaction(struct tx_descriptor *d) { unsigned long ts; _Bool ok; -#ifdef COMMIT_OTHER_INEV - // reset d_inev_checking back from 1 to 0 - while (!bool_cas(&d_inev_checking, 1, 0)) - tx_spinloop(9); -#endif // no-one else can modify global_timestamp if I'm inevitable // and d_inev_checking is 0 ts = get_global_timestamp(d); @@ -529,12 +436,6 @@ volatile orec_t* o = get_orec((void*)addr); owner_version_t ovt; -#ifdef COMMIT_OTHER_INEV - // log orec BEFORE we spinloop waiting for the orec lock to be released, - // for can_commit_with_other_inevitable() - oreclist_insert(&d->reads, (orec_t*)o); -#endif - retry: // read the orec BEFORE we read anything else ovt = o->v; @@ -551,13 +452,7 @@ } // else this location is too new, scale forward owner_version_t newts = get_global_timestamp(d) & ~1; -#ifdef COMMIT_OTHER_INEV - d->reads.size--; // ignore the newly logged orec -#endif validate_fast(d, 1); -#ifdef COMMIT_OTHER_INEV - d->reads.size++; -#endif d->start_time = 
newts; } @@ -569,9 +464,7 @@ if (o->v != ovt) goto retry; /* oups, try again */ -#ifndef COMMIT_OTHER_INEV oreclist_insert(&d->reads, (orec_t*)o); -#endif return tmp; } @@ -649,12 +542,6 @@ p += sprintf(p, "%c%d", i == 1 ? '|' : ',', d->num_spinloops[i]); -#ifdef COMMIT_OTHER_INEV - for (i=0; inum_otherinev[i]); -#endif - p += sprintf(p, "]\n"); fwrite(line, 1, p - line, PYPY_DEBUG_FILE); } @@ -664,18 +551,7 @@ free(d); } -void* stm_perform_transaction(void*(*callback)(void*), void *arg) -{ - void *result; - /* you need to call descriptor_init() before calling stm_perform_transaction */ - assert(thread_descriptor != NULL_TX); - STM_begin_transaction(); - result = callback(arg); - stm_commit_transaction(); - return result; -} - -void stm_begin_transaction(jmp_buf* buf) +static void begin_transaction(jmp_buf* buf) { struct tx_descriptor *d = thread_descriptor; assert(!d->transaction_active); @@ -684,7 +560,7 @@ d->start_time = d->last_known_global_timestamp & ~1; } -long stm_commit_transaction(void) +static long commit_transaction(void) { struct tx_descriptor *d = thread_descriptor; @@ -720,15 +596,6 @@ unsigned long expected = get_global_timestamp(d); if (expected & 1) { -#ifdef COMMIT_OTHER_INEV - // there is another inevitable transaction running. - expected = can_commit_with_other_inevitable(d, expected); - if (expected != 0) - { - d->end_time = expected; - break; - } -#endif // wait until it is done. hopefully we can then proceed // without conflicts. 
wait_end_inevitability(d); @@ -757,6 +624,20 @@ return d->end_time; } +void* stm_perform_transaction(void*(*callback)(void*), void *arg) +{ + void *result; + jmp_buf _jmpbuf; + /* you need to call descriptor_init() before calling + stm_perform_transaction() */ + assert(thread_descriptor != NULL_TX); + setjmp(_jmpbuf); + begin_transaction(&_jmpbuf); + result = callback(arg); + commit_transaction(); + return result; +} + void stm_try_inevitable(STM_CCHARP1(why)) { /* when a transaction is inevitable, its start_time is equal to @@ -809,56 +690,11 @@ mutex_unlock(); } d->setjmp_buf = NULL; /* inevitable from now on */ -#ifdef COMMIT_OTHER_INEV - thread_descriptor_inev = d; - CFENCE; - d_inev_checking = 1; -#endif #ifdef RPY_STM_ASSERT PYPY_DEBUG_STOP("stm-inevitable"); #endif } -void stm_try_inevitable_if(jmp_buf *buf STM_CCHARP(why)) -{ - struct tx_descriptor *d = thread_descriptor; - if (d->setjmp_buf == buf) - stm_try_inevitable(STM_EXPLAIN1(why)); -} - -void stm_begin_inevitable_transaction(void) -{ - struct tx_descriptor *d = thread_descriptor; - unsigned long curtime; - - assert(!d->transaction_active); - - retry: - mutex_lock(); /* possibly waiting here */ - - while (1) - { - curtime = global_timestamp; - if (curtime & 1) - { - mutex_unlock(); - tx_spinloop(5); - goto retry; - } - if (bool_cas(&global_timestamp, curtime, curtime + 1)) - break; - } - assert(!d->transaction_active); - d->transaction_active = 1; - d->setjmp_buf = NULL; - d->start_time = curtime; -#ifdef COMMIT_OTHER_INEV - thread_descriptor_inev = d; - CFENCE; - d_inev_checking = 1; -#endif -} - void stm_abort_and_retry(void) { tx_abort(7); /* manual abort */ diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -12,14 +12,10 @@ #include "src/commondefs.h" #ifdef RPY_STM_ASSERT -# define STM_CCHARP(arg) , char* arg # define STM_CCHARP1(arg) char* arg -# define STM_EXPLAIN(info) , info # 
define STM_EXPLAIN1(info) info #else -# define STM_CCHARP(arg) /* nothing */ # define STM_CCHARP1(arg) void -# define STM_EXPLAIN(info) /* nothing */ # define STM_EXPLAIN1(info) /* nothing */ #endif @@ -27,31 +23,15 @@ void stm_descriptor_init(void); void stm_descriptor_done(void); void* stm_perform_transaction(void*(*)(void*), void*); -void stm_begin_transaction(jmp_buf* buf); -long stm_commit_transaction(void); long stm_read_word(long* addr); void stm_write_word(long* addr, long val); void stm_try_inevitable(STM_CCHARP1(why)); -void stm_try_inevitable_if(jmp_buf* buf STM_CCHARP(why)); -void stm_begin_inevitable_transaction(void); void stm_abort_and_retry(void); -void stm_descriptor_init_and_being_inevitable_transaction(void); -void stm_commit_transaction_and_descriptor_done(void); long stm_debug_get_state(void); /* -1: descriptor_init() was not called 0: not in a transaction 1: in a regular transaction 2: in an inevitable transaction */ -/* for testing only: */ -#define STM_begin_transaction() ; \ - jmp_buf _jmpbuf; \ - setjmp(_jmpbuf); \ - stm_begin_transaction(&_jmpbuf) - -#define STM_DECLARE_VARIABLE() ; jmp_buf jmpbuf -#define STM_MAKE_INEVITABLE() stm_try_inevitable_if(&jmpbuf \ - STM_EXPLAIN("return")) - // XXX little-endian only! #define STM_read_partial_word(T, base, offset) \ (T)(stm_read_word( \ From noreply at buildbot.pypy.org Fri Jan 20 14:25:05 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 20 Jan 2012 14:25:05 +0100 (CET) Subject: [pypy-commit] pypy stm: Clean up the implementation of the reads and writes of less than one word. Message-ID: <20120120132505.D0C9A82CF8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51519:fe658ba14686 Date: 2012-01-20 13:24 +0100 http://bitbucket.org/pypy/pypy/changeset/fe658ba14686/ Log: Clean up the implementation of the reads and writes of less than one word. 
diff --git a/pypy/translator/c/src/g_prerequisite.h b/pypy/translator/c/src/g_prerequisite.h --- a/pypy/translator/c/src/g_prerequisite.h +++ b/pypy/translator/c/src/g_prerequisite.h @@ -19,7 +19,7 @@ #include -#if 0 //def __GNUC__ /* other platforms too, probably */ XXX FIX ME +#ifdef __GNUC__ /* other platforms too, probably */ typedef _Bool bool_t; #else typedef unsigned char bool_t; diff --git a/pypy/translator/stm/funcgen.py b/pypy/translator/stm/funcgen.py --- a/pypy/translator/stm/funcgen.py +++ b/pypy/translator/stm/funcgen.py @@ -27,6 +27,7 @@ return '%s = (%s)%s((long*)&%s);' % ( newvalue, cresulttypename, funcname, expr) else: + assert fieldsize in (1, 2, 4) if simple_struct: # assume that the object is aligned, and any possible misalignment # comes from the field offset, so that it can be resolved at @@ -35,14 +36,20 @@ structdef = funcgen.db.gettypedefnode(STRUCT) basename = funcgen.expr(op.args[0]) fieldname = op.args[1].value - return '%s = STM_read_partial_word(%s, %s, offsetof(%s, %s));' % ( + trailing = '' + if T == lltype.Bool: + trailing = ' & 1' # needed in this case, otherwise casting + # a several-bytes value to bool_t would + # take into account all the several bytes + return '%s = (%s)(stm_fx_read_partial(%s, offsetof(%s, %s))%s);'% ( newvalue, cresulttypename, basename, cdecl(funcgen.db.gettype(STRUCT), ''), - structdef.c_struct_field_name(fieldname)) + structdef.c_struct_field_name(fieldname), + trailing) # else: - return '%s = stm_read_partial_word(sizeof(%s), &%s);' % ( - newvalue, cresulttypename, expr) + return '%s = (%s)stm_read_partial_%d(&%s);' % ( + newvalue, cresulttypename, fieldsize, expr) def _stm_generic_set(funcgen, op, targetexpr, T): basename = funcgen.expr(op.args[0]) @@ -69,10 +76,9 @@ return '%s((long*)&%s, (%s)%s);' % ( funcname, targetexpr, newtype, newvalue) else: - itemtypename = funcgen.db.gettype(T) - citemtypename = cdecl(itemtypename, '') - return ('stm_write_partial_word(sizeof(%s), &%s, %s);' % ( - 
citemtypename, targetexpr, newvalue)) + assert fieldsize in (1, 2, 4) + return ('stm_write_partial_%d(&%s, (unsigned long)%s);' % ( + fieldsize, targetexpr, newvalue)) def field_expr(funcgen, args): diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -701,25 +701,47 @@ } // XXX little-endian only! -unsigned long stm_read_partial_word(int fieldsize, void *addr) -{ - int misalignment = ((long)addr) & (sizeof(void*)-1); - long *p = (long*)(((char *)addr) - misalignment); - unsigned long word = stm_read_word(p); - return word >> (misalignment * 8); +#define READ_PARTIAL_WORD(T, fieldsize, addr) \ + int misalignment = ((long)addr) & (sizeof(void*)-1); \ + long *p = (long*)(((char *)addr) - misalignment); \ + unsigned long word = stm_read_word(p); \ + assert(sizeof(T) == fieldsize); \ + return (T)(word >> (misalignment * 8)); + +unsigned char stm_read_partial_1(void *addr) { + READ_PARTIAL_WORD(unsigned char, 1, addr) } +unsigned short stm_read_partial_2(void *addr) { + READ_PARTIAL_WORD(unsigned short, 2, addr) +} +#if PYPY_LONG_BIT == 64 +unsigned int stm_read_partial_4(void *addr) { + READ_PARTIAL_WORD(unsigned int, 4, addr) +} +#endif // XXX little-endian only! 
-void stm_write_partial_word(int fieldsize, void *addr, unsigned long nval) -{ - int misalignment = ((long)addr) & (sizeof(void*)-1); - long *p = (long*)(((char *)addr) - misalignment); - long val = nval << (misalignment * 8); - long word = stm_read_word(p); - long mask = ((1L << (fieldsize * 8)) - 1) << (misalignment * 8); - val = (val & mask) | (word & ~mask); +#define WRITE_PARTIAL_WORD(fieldsize, addr, nval) \ + int misalignment = ((long)addr) & (sizeof(void*)-1); \ + long *p = (long*)(((char *)addr) - misalignment); \ + long val = ((long)nval) << (misalignment * 8); \ + long word = stm_read_word(p); \ + long mask = ((1L << (fieldsize * 8)) - 1) << (misalignment * 8); \ + val = (val & mask) | (word & ~mask); \ stm_write_word(p, val); + +void stm_write_partial_1(void *addr, unsigned char nval) { + WRITE_PARTIAL_WORD(1, addr, nval) } +void stm_write_partial_2(void *addr, unsigned short nval) { + WRITE_PARTIAL_WORD(2, addr, nval) +} +#if PYPY_LONG_BIT == 64 +void stm_write_partial_4(void *addr, unsigned int nval) { + WRITE_PARTIAL_WORD(4, addr, nval) +} +#endif + #if PYPY_LONG_BIT == 32 long long stm_read_doubleword(long *addr) @@ -791,7 +813,7 @@ #if PYPY_LONG_BIT == 32 stm_write_word(addr, ii); /* 32 bits */ #else - stm_write_partial_word(4, addr, ii); /* 64 bits */ + stm_write_partial_4(addr, ii); /* 64 bits */ #endif } diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -33,13 +33,21 @@ 2: in an inevitable transaction */ // XXX little-endian only! 
-#define STM_read_partial_word(T, base, offset) \ - (T)(stm_read_word( \ +/* this macro is used if 'base' is a word-aligned pointer and 'offset' + is a compile-time constant */ +#define stm_fx_read_partial(base, offset) \ + (stm_read_word( \ (long*)(((char*)(base)) + ((offset) & ~(sizeof(void*)-1)))) \ >> (8 * ((offset) & (sizeof(void*)-1)))) -unsigned long stm_read_partial_word(int fieldsize, void *addr); -void stm_write_partial_word(int fieldsize, void *addr, unsigned long nval); +unsigned char stm_read_partial_1(void *addr); +unsigned short stm_read_partial_2(void *addr); +void stm_write_partial_1(void *addr, unsigned char nval); +void stm_write_partial_2(void *addr, unsigned short nval); +#if PYPY_LONG_BIT == 64 +unsigned int stm_read_partial_4(void *addr); +void stm_write_partial_4(void *addr, unsigned int nval); +#endif double stm_read_double(long *addr); void stm_write_double(long *addr, double val); From noreply at buildbot.pypy.org Fri Jan 20 14:25:07 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 20 Jan 2012 14:25:07 +0100 (CET) Subject: [pypy-commit] pypy stm: (bivab, romain, arigo) Message-ID: <20120120132507.11EC282CF8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51520:3b0b849eed24 Date: 2012-01-20 14:24 +0100 http://bitbucket.org/pypy/pypy/changeset/3b0b849eed24/ Log: (bivab, romain, arigo) Found out that a "volatile struct { int x; }" is not the same thing as a "struct { volatile int x; }". In fact the "volatile" in the first example seems to have no effect. Bah. Fixed by removing the struct completely, as nowadays it contains only one field. 
diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -34,9 +34,7 @@ (((unsigned long)(num)) > ((unsigned long)(max_age))) typedef long owner_version_t; -typedef struct { - owner_version_t v; // the current version number -} orec_t; +typedef volatile owner_version_t orec_t; /*** Specify the number of orecs in the global array. */ #define NUM_STRIPES 1048576 @@ -45,14 +43,14 @@ static char orecs[NUM_STRIPES * sizeof(orec_t)]; /*** map addresses to orec table entries */ -inline static volatile orec_t* get_orec(void* addr) +inline static orec_t *get_orec(void* addr) { unsigned long index = (unsigned long)addr; #ifdef RPY_STM_ASSERT assert(!(index & (sizeof(orec_t)-1))); #endif char *p = orecs + (index & ((NUM_STRIPES-1) * sizeof(orec_t))); - return (volatile orec_t *)p; + return (orec_t *)p; } #include "src_stm/lists.c" @@ -158,9 +156,9 @@ appears in the redolog list. If it's not, then p == -1. */ if (item->p != -1) { - volatile orec_t* o = get_orec(item->addr); + orec_t* o = get_orec(item->addr); CFENCE; - o->v = newver; + *o = newver; } } REDOLOG_LOOP_END; } @@ -173,8 +171,8 @@ { if (item->p != -1) { - volatile orec_t* o = get_orec(item->addr); - o->v = item->p; + orec_t* o = get_orec(item->addr); + *o = item->p; } } REDOLOG_LOOP_END; } @@ -187,8 +185,8 @@ { if (item->p != -1) { - volatile orec_t* o = get_orec(item->addr); - o->v = item->p; + orec_t* o = get_orec(item->addr); + *o = item->p; item->p = -1; } } REDOLOG_LOOP_END; @@ -202,11 +200,11 @@ REDOLOG_LOOP_BACKWARD(d->redolog, item) { // get orec, read its version# - volatile orec_t* o = get_orec(item->addr); + orec_t* o = get_orec(item->addr); owner_version_t ovt; retry: - ovt = o->v; + ovt = *o; // if orec not locked, lock it // @@ -214,7 +212,7 @@ // reads. Since most writes are also reads, we'll just abort under this // condition. 
This can introduce false conflicts if (!IS_LOCKED_OR_NEWER(ovt, d->start_time)) { - if (!bool_cas(&o->v, ovt, d->my_lock_word)) + if (!bool_cas(o, ovt, d->my_lock_word)) goto retry; // save old version to item->p. Now we hold the lock. // in case of duplicate orecs, only the last one has p != -1. @@ -291,7 +289,7 @@ for (i=0; ireads.size; i++) { retry: - ovt = d->reads.items[i]->v; + ovt = *(d->reads.items[i]); if (IS_LOCKED_OR_NEWER(ovt, d->start_time)) { // If locked, we wait until it becomes unlocked. The chances are @@ -321,7 +319,7 @@ assert(!is_inevitable(d)); for (i=0; ireads.size; i++) { - ovt = d->reads.items[i]->v; // read this orec + ovt = *(d->reads.items[i]); // read this orec if (IS_LOCKED_OR_NEWER(ovt, d->start_time)) { if (!IS_LOCKED(ovt)) @@ -433,12 +431,12 @@ not_found:; // get the orec addr - volatile orec_t* o = get_orec((void*)addr); + orec_t* o = get_orec((void*)addr); owner_version_t ovt; retry: // read the orec BEFORE we read anything else - ovt = o->v; + ovt = *o; CFENCE; // this tx doesn't hold any locks, so if the lock for this addr is held, @@ -461,10 +459,10 @@ // postvalidate AFTER reading addr: CFENCE; - if (o->v != ovt) + if (*o != ovt) goto retry; /* oups, try again */ - oreclist_insert(&d->reads, (orec_t*)o); + oreclist_insert(&d->reads, o); return tmp; } From noreply at buildbot.pypy.org Fri Jan 20 15:00:20 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 20 Jan 2012 15:00:20 +0100 (CET) Subject: [pypy-commit] pypy stm: Add two __builtin_expect() to optimize the order of the assembler, maybe. Message-ID: <20120120140020.D974582CF8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51521:4f9f97138a8c Date: 2012-01-20 15:00 +0100 http://bitbucket.org/pypy/pypy/changeset/4f9f97138a8c/ Log: Add two __builtin_expect() to optimize the order of the assembler, maybe. 
diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -31,7 +31,7 @@ #define IS_LOCKED(num) ((num) < 0) #define IS_LOCKED_OR_NEWER(num, max_age) \ - (((unsigned long)(num)) > ((unsigned long)(max_age))) + __builtin_expect(((unsigned long)(num)) > ((unsigned long)(max_age)), 0) typedef long owner_version_t; typedef volatile owner_version_t orec_t; diff --git a/pypy/translator/stm/src_stm/lists.c b/pypy/translator/stm/src_stm/lists.c --- a/pypy/translator/stm/src_stm/lists.c +++ b/pypy/translator/stm/src_stm/lists.c @@ -102,7 +102,7 @@ unsigned long _key = (unsigned long)(addr1); \ char *_p = (char *)((redolog).toplevel.items); \ char *_entry = *(char **)(_p + (_key & TREE_MASK)); \ - if (_entry == NULL) \ + if (__builtin_expect(_entry == NULL, 1)) \ goto_not_found; /* common case, hopefully */ \ result = _redolog_find(_entry, addr1); \ if (result == NULL || result->addr != (addr1)) \ From noreply at buildbot.pypy.org Fri Jan 20 15:17:23 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 20 Jan 2012 15:17:23 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-bool: A branch to merge what's there in indexing by arrays, the rest left for the Message-ID: <20120120141723.926E382CF8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-bool Changeset: r51522:8cc13fd6cd81 Date: 2012-01-20 16:16 +0200 http://bitbucket.org/pypy/pypy/changeset/8cc13fd6cd81/ Log: A branch to merge what's there in indexing by arrays, the rest left for the future when I figure out. 
Fix translation diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -6,10 +6,7 @@ # structures to describe slicing -class BaseChunk(object): - pass - -class Chunk(BaseChunk): +class Chunk(object): def __init__(self, start, stop, step, lgt): self.start = start self.stop = stop @@ -20,14 +17,6 @@ if self.step != 0: shape.append(self.lgt) -class IntArrayChunk(BaseChunk): - def __init__(self, arr): - self.arr = arr.get_concrete() - -class BoolArrayChunk(BaseChunk): - def __init__(self, arr): - self.arr = arr.get_concrete() - class BaseTransform(object): pass diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -249,15 +249,16 @@ class W_Ufunc2(W_Ufunc): - _immutable_fields_ = ["comparison_func", "func", "name"] + _immutable_fields_ = ["comparison_func", "func", "name", "int_only"] argcount = 2 def __init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None, comparison_func=False): + identity=None, comparison_func=False, int_only=False): W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) self.func = func self.comparison_func = comparison_func + self.int_only = int_only def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, @@ -268,6 +269,7 @@ w_rhs = convert_to_array(space, w_rhs) calc_dtype = find_binop_result_dtype(space, w_lhs.find_dtype(), w_rhs.find_dtype(), + int_only=self.int_only, promote_to_float=self.promote_to_float, promote_bools=self.promote_bools, ) @@ -304,10 +306,12 @@ def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, - promote_bools=False): + promote_bools=False, int_only=False): # dt1.num should be <= dt2.num if dt1.num > dt2.num: dt1, dt2 = dt2, dt1 + if int_only and (not 
dt1.is_int_type() or not dt2.is_int_type()): + raise OperationError(space.w_TypeError, space.wrap("Unsupported types")) # Some operations promote op(bool, bool) to return int8, rather than bool if promote_bools and (dt1.kind == dt2.kind == interp_dtype.BOOLLTR): return interp_dtype.get_dtype_cache(space).w_int8dtype @@ -425,8 +429,10 @@ ("add", "add", 2, {"identity": 0}), ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), - ("bitwise_and", "bitwise_and", 2, {"identity": 1}), - ("bitwise_or", "bitwise_or", 2, {"identity": 0}), + ("bitwise_and", "bitwise_and", 2, {"identity": 1, + 'int_only': True}), + ("bitwise_or", "bitwise_or", 2, {"identity": 0, + 'int_only': True}), ("divide", "div", 2, {"promote_bools": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), @@ -477,7 +483,7 @@ extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, - comparison_func=extra_kwargs.get("comparison_func", False) + comparison_func=extra_kwargs.get("comparison_func", False), ) if argcount == 1: ufunc = W_Ufunc1(func, ufunc_name, **extra_kwargs) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -353,12 +353,13 @@ assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() def test_bitwise(self): - from _numpypy import bitwise_and, bitwise_or, arange + from _numpypy import bitwise_and, bitwise_or, arange, array a = arange(6).reshape(2, 3) assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() assert (a & 1 == bitwise_and(a, 1)).all() assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() assert (a | 1 == bitwise_or(a, 1)).all() + raises(TypeError, 'array([1.0]) & 1') def test_comparisons(self): import operator diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ 
b/pypy/module/micronumpy/types.py @@ -171,14 +171,6 @@ @simple_binary_op def min(self, v1, v2): return min(v1, v2) - - @simple_binary_op - def bitwise_and(self, v1, v2): - return v1 & v2 - - @simple_binary_op - def bitwise_or(self, v1, v2): - return v1 | v2 class Bool(BaseType, Primitive): @@ -270,6 +262,14 @@ assert v == 0 return 0 + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box From noreply at buildbot.pypy.org Fri Jan 20 15:19:33 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 20 Jan 2012 15:19:33 +0100 (CET) Subject: [pypy-commit] pypy numpy-indexing-by-arrays-bool: merge default Message-ID: <20120120141933.CB48082CF8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-indexing-by-arrays-bool Changeset: r51523:3f86800ad9ba Date: 2012-01-20 16:18 +0200 http://bitbucket.org/pypy/pypy/changeset/3f86800ad9ba/ Log: merge default diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,2 @@ from _numpypy import * -from .fromnumeric import * +from .core import * diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/__init__.py @@ -0,0 +1,1 @@ +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py rename from lib_pypy/numpypy/fromnumeric.py rename to lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -85,7 +85,7 @@ array([4, 3, 6]) """ - raise NotImplemented('Waiting on interp level method') + 
raise NotImplementedError('Waiting on interp level method') # not deprecated --- copy if necessary, view otherwise @@ -273,7 +273,7 @@ [-1, -2, -3, -4, -5]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def repeat(a, repeats, axis=None): @@ -315,7 +315,7 @@ [3, 4]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def put(a, ind, v, mode='raise'): @@ -366,7 +366,7 @@ array([ 0, 1, 2, 3, -5]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def swapaxes(a, axis1, axis2): @@ -410,7 +410,7 @@ [3, 7]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def transpose(a, axes=None): @@ -451,8 +451,11 @@ (2, 1, 3) """ - raise NotImplemented('Waiting on interp level method') - + if axes is not None: + raise NotImplementedError('No "axes" arg yet.') + if not hasattr(a, 'T'): + a = numpypy.array(a) + return a.T def sort(a, axis=-1, kind='quicksort', order=None): """ @@ -553,7 +556,7 @@ dtype=[('name', '|S10'), ('height', ' 1: raise OperationError(space.w_ValueError, space.wrap( @@ -765,6 +769,9 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + def create_sig(self): return signature.ScalarSignature(self.dtype) @@ -870,7 +877,7 @@ Intermediate class for performing binary operations. 
""" _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -910,7 +917,7 @@ def __init__(self, shape, dtype, left, right): Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) - + def create_sig(self): lsig = self.left.create_sig() rsig = self.right.create_sig() @@ -929,7 +936,7 @@ when we'll make AxisReduce lazy """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) @@ -1001,14 +1008,14 @@ if size < 1: builder.append('[]') return - elif size == 1: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True ndims = len(self.shape) + if ndims == 0: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return i = 0 builder.append('[') if ndims > 1: @@ -1143,6 +1150,9 @@ array.setslice(space, self) return array + def fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): def create_sig(self): @@ -1366,6 +1376,8 @@ var = interp2app(BaseArray.descr_var), std = interp2app(BaseArray.descr_std), + fill = interp2app(BaseArray.descr_fill), + copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -457,6 +457,7 @@ ("fabs", "fabs", 1, {"promote_to_float": True}), ("floor", "floor", 1, {"promote_to_float": True}), + ("ceil", "ceil", 1, {"promote_to_float": True}), ("exp", "exp", 1, {"promote_to_float": True}), ('sqrt', 'sqrt', 1, 
{'promote_to_float': True}), diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,14 +166,11 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) - def test_new(self): - import _numpypy as np - assert np.int_(4) == 4 - assert np.float_(3.4) == 3.4 + def test_aliases(self): + from _numpypy import dtype - def test_pow(self): - from _numpypy import int_ - assert int_(4) ** 2 == 16 + assert dtype("float") is dtype(float) + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): @@ -189,6 +186,15 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): import _numpypy as numpy @@ -318,7 +324,7 @@ else: raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, numpy.int64, '9223372036854775808') diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -2,16 +2,11 @@ class AppTestNumPyModule(BaseNumpyAppTest): - def test_mean(self): - from _numpypy import array, mean - assert mean(array(range(5))) == 2.0 - assert mean(range(5)) == 2.0 - def test_average(self): from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 - + def test_sum(self): from _numpypy import array, sum assert sum(range(10)) == 45 @@ -21,7 +16,7 @@ from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 - + def test_max(self): from 
_numpypy import array, max assert max(range(10)) == 9 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -728,15 +728,16 @@ assert d[1] == 12 def test_mean(self): - from _numpypy import array, mean + from _numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 a = array(range(105)).reshape(3, 5, 7) - b = mean(a, axis=0) - b[0,0]==35. + b = a.mean(axis=0) + b[0, 0]==35. + assert a.mean(axis=0)[0, 0] == 35 assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() - assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() + assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() def test_sum(self): from _numpypy import array @@ -757,6 +758,7 @@ assert array([]).sum() == 0.0 raises(ValueError, 'array([]).max()') assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() @@ -771,6 +773,8 @@ assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() def test_identity(self): from _numpypy import identity, array @@ -1312,6 +1316,26 @@ raises(IndexError,'arange(3)[array([-15])]') assert arange(3)[array(1)] == 1 + def test_fill(self): + from _numpypy import array + a = array([1, 2, 3]) + a.fill(10) + assert (a == [10, 10, 10]).all() + a.fill(False) + assert (a == [0, 0, 0]).all() + b = a[:1] + b.fill(4) + assert (b == [4]).all() + assert (a == [4, 0, 0]).all() + + c = b + b + c.fill(27) + assert (c == [27]).all() + + d = array(10) + d.fill(100) + assert d == 100 + def 
test_array_indexing_bool(self): from _numpypy import arange a = arange(10) @@ -1329,6 +1353,8 @@ a[a & 1 == 1] = array([8, 9, 10]) assert (a == [[0, 8], [2, 9], [4, 10]]).all() + + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct @@ -1467,9 +1493,11 @@ assert repr(a) == "array(0.0)" a = array(0.2) assert repr(a) == "array(0.2)" + a = array([2]) + assert repr(a) == "array([2])" def test_repr_multi(self): - from _numpypy import arange, zeros + from _numpypy import arange, zeros, array a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1492,6 +1520,9 @@ [498, 999], [499, 1000], [500, 1001]])''' + a = arange(2).reshape((2,1)) + assert repr(a) == '''array([[0], + [1]])''' def test_repr_slice(self): from _numpypy import array, zeros @@ -1576,18 +1607,3 @@ a = arange(0, 0.8, 0.1) assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) - - -class AppTestRanges(BaseNumpyAppTest): - def test_app_reshape(self): - from _numpypy import arange, array, dtype, reshape - a = arange(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = range(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = array(range(105)).reshape(3, 5, 7) - assert a.reshape(1, -1).shape == (1, 105) - assert a.reshape(1, 1, -1).shape == (1, 1, 105) - assert a.reshape(-1, 1, 1).shape == (105, 1, 1) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -190,14 +190,24 @@ for i in range(3): assert c[i] == a[i] - b[i] - def test_floor(self): - from _numpypy import array, floor - - reference = [-2.0, -1.0, 0.0, 1.0, 1.0] - a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) + def test_floorceil(self): + from _numpypy import array, floor, ceil + import math + reference = [-2.0, -2.0, -1.0, 0.0, 1.0, 1.0, 0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) b = floor(a) for i in 
range(5): assert b[i] == reference[i] + reference = [-1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) + b = ceil(a) + assert (reference == b).all() + inf = float("inf") + data = [1.5, 2.9999, -1.999, inf] + results = [math.floor(x) for x in data] + assert (floor(data) == results).all() + results = [math.ceil(x) for x in data] + assert (ceil(data) == results).all() def test_copysign(self): from _numpypy import array, copysign @@ -238,7 +248,7 @@ assert b[i] == math.sin(a[i]) a = sin(array([True, False], dtype=bool)) - assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise + assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise assert a[1] == 0.0 def test_cos(self): @@ -259,7 +269,6 @@ for i in range(len(a)): assert b[i] == math.tan(a[i]) - def test_arcsin(self): import math from _numpypy import array, arcsin @@ -283,7 +292,6 @@ for i in range(len(a)): assert b[i] == math.acos(a[i]) - a = array([-10, -1.5, -1.01, 1.01, 1.5, 10, float('nan'), float('inf'), float('-inf')]) b = arccos(a) for f in b: diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -350,7 +350,8 @@ self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, - 'int_eq': 1, 'guard_false': 1, 'jump': 1}) + 'int_eq': 1, 'guard_false': 1, 'jump': 1, + 'arraylen_gc': 1}) def define_virtual_slice(): return """ diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -391,6 +391,10 @@ return math.floor(v) @simple_unary_op + def ceil(self, v): + return math.ceil(v) + + @simple_unary_op def exp(self, v): try: return math.exp(v) diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ 
b/pypy/module/pypyjit/__init__.py @@ -11,6 +11,7 @@ 'set_optimize_hook': 'interp_resop.set_optimize_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 'interp_resop.WrappedOp', + 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', } diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,5 +1,6 @@ -from pypy.interpreter.typedef import TypeDef, GetSetProperty +from pypy.interpreter.typedef import (TypeDef, GetSetProperty, + interp_attrproperty, interp_attrproperty_w) from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode @@ -10,6 +11,7 @@ from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant from pypy.rlib import jit_hooks +from pypy.module.pypyjit.interp_jit import pypyjitdriver class Cache(object): in_recursion = False @@ -111,13 +113,24 @@ def wrap_oplist(space, logops, operations, ops_offset=None): l_w = [] + jitdrivers_sd = logops.metainterp_sd.jitdrivers_sd for op in operations: if ops_offset is None: ofs = -1 else: ofs = ops_offset.get(op, 0) - l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, - logops.repr_of_resop(op))) + if op.opnum == rop.DEBUG_MERGE_POINT: + jd_sd = jitdrivers_sd[op.getarg(0).getint()] + greenkey = op.getarglist()[2:] + repr = jd_sd.warmstate.get_location_str(greenkey) + w_greenkey = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) + l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), + logops.repr_of_resop(op), + jd_sd.jitdriver.name, + w_greenkey)) + else: + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) return l_w class WrappedBox(Wrappable): @@ -150,6 +163,15 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), 
offset, repr) + at unwrap_spec(repr=str, jd_name=str) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + num = rop.DEBUG_MERGE_POINT + return DebugMergePoint(space, + jit_hooks.resop_new(num, args, jit_hooks.emptyval()), + repr, jd_name, w_greenkey) + class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely """ @@ -182,6 +204,25 @@ box = space.interp_w(WrappedBox, w_box) jit_hooks.resop_setresult(self.op, box.llbox) +class DebugMergePoint(WrappedOp): + def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + WrappedOp.__init__(self, op, -1, repr_of_resop) + self.w_greenkey = w_greenkey + self.jd_name = jd_name + + def get_pycode(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(0)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_bytecode_no(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(1)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_jitdriver_name(self, space): + return space.wrap(self.jd_name) + WrappedOp.typedef = TypeDef( 'ResOperation', __doc__ = WrappedOp.__doc__, @@ -195,3 +236,15 @@ WrappedOp.descr_setresult) ) WrappedOp.acceptable_as_base_class = False + +DebugMergePoint.typedef = TypeDef( + 'DebugMergePoint', WrappedOp.typedef, + __new__ = interp2app(descr_new_dmp), + greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), + pycode = GetSetProperty(DebugMergePoint.get_pycode), + bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), +) +DebugMergePoint.acceptable_as_base_class = False + + diff --git 
a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -127,7 +127,7 @@ 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', '_codecs']: if modname == 'pypyjit' and 'interp_resop' in rest: return False return True diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -92,6 +92,7 @@ cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_dmp_num = space.wrap(rop.DEBUG_MERGE_POINT) cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) cls.orig_oplist = oplist @@ -117,6 +118,10 @@ assert elem[2][2] == False assert len(elem[3]) == 4 int_add = elem[3][0] + dmp = elem[3][1] + assert isinstance(dmp, pypyjit.DebugMergePoint) + assert dmp.pycode is self.f.func_code + assert dmp.greenkey == (self.f.func_code, 0, False) #assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() @@ -211,3 +216,18 @@ assert op.getarg(0).getint() == 4 op.result = box assert op.result.getint() == 1 + + def test_creation_dmp(self): + from pypyjit import DebugMergePoint, Box + + def f(): + pass + + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + assert op.bytecode_no == 0 + assert op.pycode is f.func_code + assert repr(op) == 'repr' + assert op.jitdriver_name == 'pypyjit' + assert op.num == self.dmp_num + op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + raises(AttributeError, 'op.pycode') diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ 
b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py rename from lib_pypy/numpypy/test/test_fromnumeric.py rename to pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/lib_pypy/numpypy/test/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -1,7 +1,7 @@ - from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -class AppTestFromNumeric(BaseNumpyAppTest): + +class AppTestFromNumeric(BaseNumpyAppTest): def test_argmax(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, argmax @@ -18,12 +18,12 @@ from numpypy import array, arange, argmin a = arange(6).reshape((2,3)) assert argmin(a) == 0 - # assert (argmax(a, axis=0) == array([0, 0, 0])).all() - # assert (argmax(a, axis=1) == array([0, 0])).all() + assert (argmin(a, axis=0) == array([0, 0, 0])).all() + assert (argmin(a, axis=1) == array([0, 0])).all() b = arange(6) b[1] = 0 assert argmin(b) == 0 - + def test_shape(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, identity, shape @@ -44,7 +44,7 @@ # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() # If the accumulator is too small, overflow occurs: # assert ones(128, dtype=int8).sum(dtype=int8) == -128 - + def test_amin(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, amin @@ -86,14 +86,14 @@ assert ndim([[1,2,3],[4,5,6]]) == 2 assert ndim(array([[1,2,3],[4,5,6]])) == 2 assert ndim(1) == 0 - + def test_rank(self): # 
tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, rank assert rank([[1,2,3],[4,5,6]]) == 2 assert rank(array([[1,2,3],[4,5,6]])) == 2 assert rank(1) == 0 - + def test_var(self): from numpypy import array, var a = array([[1,2],[3,4]]) @@ -107,3 +107,31 @@ assert std(a) == 1.1180339887498949 # assert (std(a, axis=0) == array([ 1., 1.])).all() # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() + + def test_mean(self): + from numpypy import array, mean + assert mean(array(range(5))) == 2.0 + assert mean(range(5)) == 2.0 + + def test_reshape(self): + from numpypy import arange, array, dtype, reshape + a = arange(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = range(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert reshape(a, (1, -1)).shape == (1, 105) + assert reshape(a, (1, 1, -1)).shape == (1, 1, 105) + assert reshape(a, (-1, 1, 1)).shape == (105, 1, 1) + + def test_transpose(self): + from numpypy import arange, array, transpose, ones + x = arange(4).reshape((2,2)) + assert (transpose(x) == array([[0, 2],[1, 3]])).all() + # Once axes argument is implemented, add more tests + raises(NotImplementedError, "transpose(x, axes=(1, 0, 2))") + # x = ones((1, 2, 3)) + # assert transpose(x, (1, 0, 2)).shape == (2, 1, 3) + diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -64,6 +64,12 @@ check(', '.join([u'a']), u'a') check(', '.join(['a', u'b']), u'a, b') check(u', '.join(['a', 'b']), u'a, b') + try: + u''.join([u'a', 2, 3]) + except TypeError, e: + assert 'sequence item 1' in str(e) + else: + raise Exception("DID NOT RAISE") if sys.version_info >= (2,3): def test_contains_ex(self): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ 
b/pypy/objspace/std/unicodeobject.py @@ -201,7 +201,7 @@ return space.newbool(container.find(item) != -1) def unicode_join__Unicode_ANY(space, w_self, w_list): - list_w = space.unpackiterable(w_list) + list_w = space.listview(w_list) size = len(list_w) if size == 0: @@ -216,22 +216,21 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - sb = UnicodeBuilder() + prealloc_size = len(self) * (size - 1) + for i in range(size): + try: + prealloc_size += len(space.unicode_w(list_w[i])) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + raise operationerrfmt(space.w_TypeError, + "sequence item %d: expected string or Unicode", i) + sb = UnicodeBuilder(prealloc_size) for i in range(size): if self and i != 0: sb.append(self) w_s = list_w[i] - if isinstance(w_s, W_UnicodeObject): - # shortcut for performance - sb.append(w_s._value) - else: - try: - sb.append(space.unicode_w(w_s)) - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - raise operationerrfmt(space.w_TypeError, - "sequence item %d: expected string or Unicode", i) + sb.append(space.unicode_w(w_s)) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -52,7 +52,10 @@ from pypy.jit.metainterp.history import ResOperation args = [_cast_to_box(llargs[i]) for i in range(len(llargs))] - res = _cast_to_box(llres) + if llres: + res = _cast_to_box(llres) + else: + res = None return _cast_to_gcref(ResOperation(no, args, res)) @register_helper(annmodel.SomePtr(llmemory.GCREF)) diff --git a/pypy/rlib/longlong2float.py b/pypy/rlib/longlong2float.py --- a/pypy/rlib/longlong2float.py +++ b/pypy/rlib/longlong2float.py @@ -79,19 +79,23 @@ longlong2float = rffi.llexternal( "pypy__longlong2float", [rffi.LONGLONG], rffi.DOUBLE, _callable=longlong2float_emulator, compilation_info=eci, - _nowrapper=True, 
elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__longlong2float") float2longlong = rffi.llexternal( "pypy__float2longlong", [rffi.DOUBLE], rffi.LONGLONG, _callable=float2longlong_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__float2longlong") uint2singlefloat = rffi.llexternal( "pypy__uint2singlefloat", [rffi.UINT], rffi.FLOAT, _callable=uint2singlefloat_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__uint2singlefloat") singlefloat2uint = rffi.llexternal( "pypy__singlefloat2uint", [rffi.FLOAT], rffi.UINT, _callable=singlefloat2uint_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__singlefloat2uint") diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -420,7 +420,7 @@ vobj.concretetype.TO._gckind == 'gc') else: from pypy.rpython.ootypesystem import ootype - ok = isinstance(vobj.concretetype, ootype.Instance) + ok = isinstance(vobj.concretetype, (ootype.Instance, ootype.BuiltinType)) if not ok: from pypy.rpython.error import TyperError raise TyperError("compute_unique_id() cannot be applied to" diff --git a/pypy/rpython/lltypesystem/rclass.py b/pypy/rpython/lltypesystem/rclass.py --- a/pypy/rpython/lltypesystem/rclass.py +++ b/pypy/rpython/lltypesystem/rclass.py @@ -510,7 +510,13 @@ ctype = inputconst(Void, self.object_type) cflags = inputconst(Void, flags) vlist = [ctype, cflags] - vptr = llops.genop('malloc', vlist, + cnonmovable = self.classdef.classdesc.read_attribute( + '_alloc_nonmovable_', Constant(False)) + if cnonmovable.value: + 
opname = 'malloc_nonmovable' + else: + opname = 'malloc' + vptr = llops.genop(opname, vlist, resulttype = Ptr(self.object_type)) ctypeptr = inputconst(CLASSTYPE, self.rclass.getvtable()) self.setfield(vptr, '__class__', ctypeptr, llops) diff --git a/pypy/rpython/lltypesystem/rtuple.py b/pypy/rpython/lltypesystem/rtuple.py --- a/pypy/rpython/lltypesystem/rtuple.py +++ b/pypy/rpython/lltypesystem/rtuple.py @@ -27,6 +27,10 @@ def newtuple(cls, llops, r_tuple, items_v): # items_v should have the lowleveltype of the internal reprs + assert len(r_tuple.items_r) == len(items_v) + for r_item, v_item in zip(r_tuple.items_r, items_v): + assert r_item.lowleveltype == v_item.concretetype + # if len(r_tuple.items_r) == 0: return inputconst(Void, ()) # a Void empty tuple c1 = inputconst(Void, r_tuple.lowleveltype.TO) diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -512,6 +512,7 @@ "ll_append_char": Meth([CHARTP], Void), "ll_append": Meth([STRINGTP], Void), "ll_build": Meth([], STRINGTP), + "ll_getlength": Meth([], Signed), }) self._setup_methods({}) @@ -1376,6 +1377,9 @@ def _cast_to_object(self): return make_object(self) + def _identityhash(self): + return object.__hash__(self) + class _string(_builtin_type): def __init__(self, STRING, value = ''): @@ -1543,6 +1547,9 @@ else: return make_unicode(u''.join(self._buf)) + def ll_getlength(self): + return self.ll_build().ll_strlen() + class _null_string_builder(_null_mixin(_string_builder), _string_builder): def __init__(self, STRING_BUILDER): self.__dict__["_TYPE"] = STRING_BUILDER diff --git a/pypy/rpython/ootypesystem/rbuilder.py b/pypy/rpython/ootypesystem/rbuilder.py --- a/pypy/rpython/ootypesystem/rbuilder.py +++ b/pypy/rpython/ootypesystem/rbuilder.py @@ -21,6 +21,10 @@ builder.ll_append_char(char) @staticmethod + def ll_getlength(builder): + return builder.ll_getlength() + + @staticmethod def 
ll_append(builder, string): builder.ll_append(string) diff --git a/pypy/rpython/rrange.py b/pypy/rpython/rrange.py --- a/pypy/rpython/rrange.py +++ b/pypy/rpython/rrange.py @@ -204,7 +204,10 @@ v_index = hop.gendirectcall(self.ll_getnextindex, v_enumerate) hop2 = hop.copy() hop2.args_r = [self.r_baseiter] + r_item_src = self.r_baseiter.r_list.external_item_repr + r_item_dst = hop.r_result.items_r[1] v_item = self.r_baseiter.rtype_next(hop2) + v_item = hop.llops.convertvar(v_item, r_item_src, r_item_dst) return hop.r_result.newtuple(hop.llops, hop.r_result, [v_index, v_item]) diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -124,9 +124,5 @@ pass class TestOOtype(BaseTestStringBuilder, OORtypeMixin): - def test_string_getlength(self): - py.test.skip("getlength(): not implemented on ootype") - def test_unicode_getlength(self): - py.test.skip("getlength(): not implemented on ootype") def test_append_charpsize(self): py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -463,6 +463,31 @@ assert x1 == intmask(x0) assert x3 == intmask(x2) + def test_id_on_builtins(self): + from pypy.rlib.objectmodel import compute_unique_id + from pypy.rlib.rstring import StringBuilder, UnicodeBuilder + def fn(): + return (compute_unique_id("foo"), + compute_unique_id(u"bar"), + compute_unique_id([1]), + compute_unique_id({"foo": 3}), + compute_unique_id(StringBuilder()), + compute_unique_id(UnicodeBuilder())) + res = self.interpret(fn, []) + for id in self.ll_unpack_tuple(res, 6): + assert isinstance(id, (int, r_longlong)) + + def test_uniqueness_of_id_on_strings(self): + from pypy.rlib.objectmodel import compute_unique_id + def fn(s1, s2): + return (compute_unique_id(s1), 
compute_unique_id(s2)) + + s1 = "foo" + s2 = ''.join(['f','oo']) + res = self.interpret(fn, [self.string_to_ll(s1), self.string_to_ll(s2)]) + i1, i2 = self.ll_unpack_tuple(res, 2) + assert i1 != i2 + def test_cast_primitive(self): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): diff --git a/pypy/rpython/test/test_rclass.py b/pypy/rpython/test/test_rclass.py --- a/pypy/rpython/test/test_rclass.py +++ b/pypy/rpython/test/test_rclass.py @@ -1130,6 +1130,18 @@ assert sorted([u]) == [6] # 32-bit types assert sorted([i, r, d, l]) == [2, 3, 4, 5] # 64-bit types + def test_nonmovable(self): + for (nonmovable, opname) in [(True, 'malloc_nonmovable'), + (False, 'malloc')]: + class A(object): + _alloc_nonmovable_ = nonmovable + def f(): + return A() + t, typer, graph = self.gengraph(f, []) + assert summary(graph) == {opname: 1, + 'cast_pointer': 1, + 'setfield': 1} + class TestOOtype(BaseTestRclass, OORtypeMixin): diff --git a/pypy/rpython/test/test_rrange.py b/pypy/rpython/test/test_rrange.py --- a/pypy/rpython/test/test_rrange.py +++ b/pypy/rpython/test/test_rrange.py @@ -169,6 +169,22 @@ res = self.interpret(fn, [2]) assert res == 789 + def test_enumerate_instances(self): + class A: + pass + def fn(n): + a = A() + b = A() + a.k = 10 + b.k = 20 + for i, x in enumerate([a, b]): + if i == n: + return x.k + return 5 + res = self.interpret(fn, [1]) + assert res == 20 + + class TestLLtype(BaseTestRrange, LLRtypeMixin): from pypy.rpython.lltypesystem import rrange diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -140,13 +140,15 @@ bytecode_name = None is_bytecode = True inline_level = None + has_dmp = False def parse_code_data(self, arg): m = re.search('\w]+)[\.,] file \'(.+?)\'[\.,] line (\d+)> #(\d+) (\w+)', arg) if m is None: # a non-code loop, like StrLiteralSearch or something - self.bytecode_name = arg + if arg: + self.bytecode_name = arg 
else: self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups() self.startlineno = int(lineno) @@ -218,7 +220,7 @@ self.inputargs = inputargs self.chunks = chunks for chunk in self.chunks: - if chunk.filename is not None: + if chunk.bytecode_name is not None: self.startlineno = chunk.startlineno self.filename = chunk.filename self.name = chunk.name diff --git a/pypy/tool/jitlogparser/test/test_parser.py b/pypy/tool/jitlogparser/test/test_parser.py --- a/pypy/tool/jitlogparser/test/test_parser.py +++ b/pypy/tool/jitlogparser/test/test_parser.py @@ -283,3 +283,13 @@ assert loops[-1].count == 1234 assert loops[1].count == 123 assert loops[2].count == 12 + +def test_parse_nonpython(): + loop = parse(""" + [] + debug_merge_point(0, 'random') + debug_merge_point(0, ' #15 COMPARE_OP') + """) + f = Function.from_operations(loop.operations, LoopStorage()) + assert f.chunks[-1].filename == 'x.py' + assert f.filename is None diff --git a/pypy/translator/jvm/builtin.py b/pypy/translator/jvm/builtin.py --- a/pypy/translator/jvm/builtin.py +++ b/pypy/translator/jvm/builtin.py @@ -84,6 +84,9 @@ (ootype.StringBuilder.__class__, "ll_build"): jvm.Method.v(jStringBuilder, "toString", (), jString), + (ootype.StringBuilder.__class__, "ll_getlength"): + jvm.Method.v(jStringBuilder, "length", (), jInt), + (ootype.String.__class__, "ll_hash"): jvm.Method.v(jString, "hashCode", (), jInt), diff --git a/pypy/translator/jvm/database.py b/pypy/translator/jvm/database.py --- a/pypy/translator/jvm/database.py +++ b/pypy/translator/jvm/database.py @@ -358,7 +358,7 @@ ootype.Unsigned:jvm.PYPYSERIALIZEUINT, ootype.SignedLongLong:jvm.LONGTOSTRINGL, ootype.UnsignedLongLong: jvm.PYPYSERIALIZEULONG, - ootype.Float:jvm.DOUBLETOSTRINGD, + ootype.Float:jvm.PYPYSERIALIZEDOUBLE, ootype.Bool:jvm.PYPYSERIALIZEBOOLEAN, ootype.Void:jvm.PYPYSERIALIZEVOID, ootype.Char:jvm.PYPYESCAPEDCHAR, diff --git a/pypy/translator/jvm/metavm.py b/pypy/translator/jvm/metavm.py --- 
a/pypy/translator/jvm/metavm.py +++ b/pypy/translator/jvm/metavm.py @@ -92,6 +92,7 @@ CASTS = { # FROM TO (ootype.Signed, ootype.UnsignedLongLong): jvm.I2L, + (ootype.Unsigned, ootype.UnsignedLongLong): jvm.I2L, (ootype.SignedLongLong, ootype.Signed): jvm.L2I, (ootype.UnsignedLongLong, ootype.Unsigned): jvm.L2I, (ootype.UnsignedLongLong, ootype.Signed): jvm.L2I, diff --git a/pypy/translator/jvm/opcodes.py b/pypy/translator/jvm/opcodes.py --- a/pypy/translator/jvm/opcodes.py +++ b/pypy/translator/jvm/opcodes.py @@ -101,6 +101,7 @@ 'jit_force_virtualizable': Ignore, 'jit_force_virtual': DoNothing, 'jit_force_quasi_immutable': Ignore, + 'jit_is_virtual': PushPrimitive(ootype.Bool, False), 'debug_assert': [], # TODO: implement? 'debug_start_traceback': Ignore, diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -283,6 +283,14 @@ } } + public double pypy__longlong2float(long l) { + return Double.longBitsToDouble(l); + } + + public long pypy__float2longlong(double d) { + return Double.doubleToRawLongBits(d); + } + public double ooparse_float(String s) { try { return Double.parseDouble(s); @@ -353,6 +361,19 @@ return "False"; } + public static String serialize_double(double d) { + if (Double.isNaN(d)) { + return "float(\"nan\")"; + } else if (Double.isInfinite(d)) { + if (d > 0) + return "float(\"inf\")"; + else + return "float(\"-inf\")"; + } else { + return Double.toString(d); + } + } + private static String format_char(char c) { String res = "\\x"; if (c <= 0x0F) res = res + "0"; diff --git a/pypy/translator/jvm/test/runtest.py b/pypy/translator/jvm/test/runtest.py --- a/pypy/translator/jvm/test/runtest.py +++ b/pypy/translator/jvm/test/runtest.py @@ -56,6 +56,7 @@ # CLI could-be duplicate class JvmGeneratedSourceWrapper(object): + def __init__(self, gensrc): """ gensrc is an instance of JvmGeneratedSource """ self.gensrc = gensrc diff 
--git a/pypy/translator/jvm/test/test_builder.py b/pypy/translator/jvm/test/test_builder.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_builder.py @@ -0,0 +1,7 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rpython.test.test_rbuilder import BaseTestStringBuilder +import py + +class TestJvmStringBuilder(JvmTest, BaseTestStringBuilder): + def test_append_charpsize(self): + py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/translator/jvm/test/test_longlong2float.py b/pypy/translator/jvm/test/test_longlong2float.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_longlong2float.py @@ -0,0 +1,20 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rlib.longlong2float import * +from pypy.rlib.test.test_longlong2float import enum_floats +from pypy.rlib.test.test_longlong2float import fn as float2longlong2float +import py + +class TestLongLong2Float(JvmTest): + + def test_float2longlong_and_longlong2float(self): + def func(f): + return float2longlong2float(f) + + for f in enum_floats(): + assert repr(f) == repr(self.interpret(func, [f])) + + def test_uint2singlefloat(self): + py.test.skip("uint2singlefloat is not implemented in ootype") + + def test_singlefloat2uint(self): + py.test.skip("singlefloat2uint is not implemented in ootype") diff --git a/pypy/translator/jvm/typesystem.py b/pypy/translator/jvm/typesystem.py --- a/pypy/translator/jvm/typesystem.py +++ b/pypy/translator/jvm/typesystem.py @@ -955,6 +955,7 @@ PYPYSERIALIZEUINT = Method.s(jPyPy, 'serialize_uint', (jInt,), jString) PYPYSERIALIZEULONG = Method.s(jPyPy, 'serialize_ulonglong', (jLong,),jString) PYPYSERIALIZEVOID = Method.s(jPyPy, 'serialize_void', (), jString) +PYPYSERIALIZEDOUBLE = Method.s(jPyPy, 'serialize_double', (jDouble,), jString) PYPYESCAPEDCHAR = Method.s(jPyPy, 'escaped_char', (jChar,), jString) PYPYESCAPEDUNICHAR = Method.s(jPyPy, 'escaped_unichar', (jChar,), jString) 
PYPYESCAPEDSTRING = Method.s(jPyPy, 'escaped_string', (jString,), jString) diff --git a/pypy/translator/oosupport/test_template/cast.py b/pypy/translator/oosupport/test_template/cast.py --- a/pypy/translator/oosupport/test_template/cast.py +++ b/pypy/translator/oosupport/test_template/cast.py @@ -13,6 +13,9 @@ def to_longlong(x): return r_longlong(x) +def to_ulonglong(x): + return r_ulonglong(x) + def uint_to_int(x): return intmask(x) @@ -56,6 +59,9 @@ def test_unsignedlonglong_to_unsigned4(self): self.check(to_uint, [r_ulonglong(18446744073709551615l)]) # max 64 bit num + def test_unsigned_to_usignedlonglong(self): + self.check(to_ulonglong, [r_uint(42)]) + def test_uint_to_int(self): self.check(uint_to_int, [r_uint(sys.maxint+1)]) From noreply at buildbot.pypy.org Fri Jan 20 15:21:49 2012 From: noreply at buildbot.pypy.org (fijal) Date: Fri, 20 Jan 2012 15:21:49 +0100 (CET) Subject: [pypy-commit] pypy default: merge numpy-indexing-by-arrays-bool, this adds some basic indexing by bool Message-ID: <20120120142149.4095B82CF8@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51524:ab36137540d5 Date: 2012-01-20 16:21 +0200 http://bitbucket.org/pypy/pypy/changeset/ab36137540d5/ Log: merge numpy-indexing-by-arrays-bool, this adds some basic indexing by bool arrays (only when shapes match) as well as bitwise_and/or and a few helper functions. 
More work on indexing by arrays to be done in the future diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -86,6 +86,8 @@ ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ('bitwise_and', 'bitwise_and'), + ('bitwise_or', 'bitwise_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -47,6 +47,10 @@ def getitem(self, storage, i): return self.itemtype.read(storage, self.itemtype.get_element_size(), i, 0) + def getitem_bool(self, storage, i): + isize = self.itemtype.get_element_size() + return self.itemtype.read_bool(storage, isize, i, 0) + def setitem(self, storage, i, box): self.itemtype.store(storage, self.itemtype.get_element_size(), i, 0, box) @@ -85,6 +89,12 @@ def descr_get_shape(self, space): return space.newtuple([]) + def is_int_type(self): + return self.kind == SIGNEDLTR or self.kind == UNSIGNEDLTR + + def is_bool_type(self): + return self.kind == BOOLLTR + W_Dtype.typedef = TypeDef("dtype", __module__ = "numpypy", __new__ = interp2app(W_Dtype.descr__new__.im_func), diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -4,6 +4,19 @@ from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ calculate_slice_strides +# structures to describe slicing + +class Chunk(object): + def __init__(self, start, stop, step, lgt): + self.start = start + self.stop = stop + self.step = step + self.lgt = lgt + + def extend_shape(self, shape): + if self.step != 0: + shape.append(self.lgt) + class BaseTransform(object): pass @@ -38,11 +51,18 @@ self.size = size def next(self, shapelen): + return 
self._next(1) + + def _next(self, ofs): arr = instantiate(ArrayIterator) arr.size = self.size - arr.offset = self.offset + 1 + arr.offset = self.offset + ofs return arr + def next_no_increase(self, shapelen): + # a hack to make JIT believe this is always virtual + return self._next(0) + def done(self): return self.offset >= self.size diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -2,14 +2,15 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature +from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature,\ + interp_boxes from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator + SkipLastAxisIterator, Chunk, ViewIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -39,7 +40,24 @@ get_printable_location=signature.new_printable_location('slice'), name='numpy_slice', ) - +count_driver = jit.JitDriver( + greens=['shapelen'], + virtualizables=['frame'], + reds=['s', 'frame', 'iter', 'arr'], + name='numpy_count' +) +filter_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['concr', 'argi', 'ri', 'frame', 'v', 'res', 'self'], + name='numpy_filter', +) +filter_set_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['idx', 'idxi', 'frame', 'arr'], + name='numpy_filterset', +) def _find_shape_and_elems(space, w_iterable): shape = 
[space.len_w(w_iterable)] @@ -270,6 +288,9 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + def _binop_right_impl(ufunc_name): def impl(self, space, w_other): w_other = scalar_w(space, @@ -479,11 +500,69 @@ def _prepare_slice_args(self, space, w_idx): if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [space.decode_index4(w_idx, self.shape[0])] - return [space.decode_index4(w_item, self.shape[i]) for i, w_item in + return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] + return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] + def count_all_true(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(self) + shapelen = len(arr.shape) + s = 0 + iter = None + while not frame.done(): + count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + shapelen=shapelen) + iter = frame.get_final_iter() + s += arr.dtype.getitem_bool(arr.storage, iter.offset) + frame.next(shapelen) + return s + + def getitem_filter(self, space, arr): + concr = arr.get_concrete() + size = self.count_all_true(concr) + res = W_NDimArray(size, [size], self.find_dtype()) + ri = ArrayIterator(size) + shapelen = len(self.shape) + argi = concr.create_iter() + sig = self.find_sig() + frame = sig.create_frame(self) + v = None + while not frame.done(): + filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, + frame=frame, v=v, res=res, sig=sig, + shapelen=shapelen, self=self) + if concr.dtype.getitem_bool(concr.storage, argi.offset): + v = sig.eval(frame, self) + res.setitem(ri.offset, v) + ri = ri.next(1) + else: + ri = ri.next_no_increase(1) + argi = argi.next(shapelen) + frame.next(shapelen) + return res + + def setitem_filter(self, space, idx, val): + size = self.count_all_true(idx) + arr = SliceArray([size], self.dtype, self, val) + sig = arr.find_sig() + shapelen = 
len(self.shape) + frame = sig.create_frame(arr) + idxi = idx.create_iter() + while not frame.done(): + filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, + frame=frame, arr=arr, + shapelen=shapelen) + if idx.dtype.getitem_bool(idx.storage, idxi.offset): + sig.eval(frame, arr) + frame.next_from_second(1) + frame.next_first(shapelen) + idxi = idxi.next(shapelen) + def descr_getitem(self, space, w_idx): + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.getitem_filter(space, w_idx) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -493,6 +572,11 @@ def descr_setitem(self, space, w_idx, w_value): self.invalidated() + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.get_concrete().setitem_filter(space, + w_idx.get_concrete(), + convert_to_array(space, w_value)) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -509,9 +593,8 @@ def create_slice(self, chunks): shape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - shape.append(lgt) + for i, chunk in enumerate(chunks): + chunk.extend_shape(shape) s = i + 1 assert s >= 0 shape += self.shape[s:] @@ -724,8 +807,7 @@ frame=frame, ri=ri, self=self, result=result) - result.dtype.setitem(result.storage, ri.offset, - sig.eval(frame, self)) + result.setitem(ri.offset, sig.eval(frame, self)) frame.next(shapelen) ri = ri.next(shapelen) return result @@ -945,7 +1027,7 @@ builder.append('\n' + indent) else: builder.append(indent) - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: @@ -962,7 +1044,7 @@ 
builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) i += 1 @@ -1091,6 +1173,10 @@ parent) self.start = start + def create_iter(self): + return ViewIterator(self.start, self.strides, self.backstrides, + self.shape) + def setshape(self, space, new_shape): if len(self.shape) < 1: return @@ -1137,6 +1223,9 @@ self.shape = new_shape self.calc_strides(new_shape) + def create_iter(self): + return ArrayIterator(self.size) + def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1191,6 +1280,7 @@ arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) shapelen = len(shape) arr_iter = ArrayIterator(arr.size) + # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] dtype.setitem(arr.storage, arr_iter.offset, @@ -1257,6 +1347,9 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -249,15 +249,16 @@ class W_Ufunc2(W_Ufunc): - _immutable_fields_ = ["comparison_func", "func", "name"] + _immutable_fields_ = ["comparison_func", "func", "name", "int_only"] argcount = 2 def __init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None, comparison_func=False): + identity=None, comparison_func=False, int_only=False): W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) self.func = func 
self.comparison_func = comparison_func + self.int_only = int_only def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, @@ -268,6 +269,7 @@ w_rhs = convert_to_array(space, w_rhs) calc_dtype = find_binop_result_dtype(space, w_lhs.find_dtype(), w_rhs.find_dtype(), + int_only=self.int_only, promote_to_float=self.promote_to_float, promote_bools=self.promote_bools, ) @@ -304,10 +306,12 @@ def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, - promote_bools=False): + promote_bools=False, int_only=False): # dt1.num should be <= dt2.num if dt1.num > dt2.num: dt1, dt2 = dt2, dt1 + if int_only and (not dt1.is_int_type() or not dt2.is_int_type()): + raise OperationError(space.w_TypeError, space.wrap("Unsupported types")) # Some operations promote op(bool, bool) to return int8, rather than bool if promote_bools and (dt1.kind == dt2.kind == interp_dtype.BOOLLTR): return interp_dtype.get_dtype_cache(space).w_int8dtype @@ -425,6 +429,10 @@ ("add", "add", 2, {"identity": 0}), ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), + ("bitwise_and", "bitwise_and", 2, {"identity": 1, + 'int_only': True}), + ("bitwise_or", "bitwise_or", 2, {"identity": 0, + 'int_only': True}), ("divide", "div", 2, {"promote_bools": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), @@ -476,7 +484,7 @@ extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, - comparison_func=extra_kwargs.get("comparison_func", False) + comparison_func=extra_kwargs.get("comparison_func", False), ) if argcount == 1: ufunc = W_Ufunc1(func, ufunc_name, **extra_kwargs) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -82,6 +82,16 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + @unroll_safe + def 
next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + def get_final_iter(self): final_iter = promote(self.final_iter) if final_iter < 0: diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -10,12 +10,12 @@ rstart = start rshape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - rstrides.append(strides[i] * step) - rbackstrides.append(strides[i] * (lgt - 1) * step) - rshape.append(lgt) - rstart += strides[i] * start_ + for i, chunk in enumerate(chunks): + if chunk.step != 0: + rstrides.append(strides[i] * chunk.step) + rbackstrides.append(strides[i] * (chunk.lgt - 1) * chunk.step) + rshape.append(chunk.lgt) + rstart += strides[i] * chunk.start # add a reminder s = i + 1 assert s >= 0 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -2,6 +2,7 @@ import py from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement +from pypy.module.micronumpy.interp_iter import Chunk from pypy.module.micronumpy import signature from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace @@ -37,53 +38,54 @@ def test_create_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 3 assert s.strides == [10, 50] assert s.backstrides == [40, 100] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 1 
assert s.strides == [2, 10, 50] assert s.backstrides == [6, 40, 100] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.strides == [3, 10] assert s.backstrides == [3, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] assert s.backstrides == [12, 2] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 15 assert s.strides == [30, 3, 1] assert s.backstrides == [90, 12, 2] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), + Chunk(1, 0, 0, 1)]) assert s.start == 19 assert s.shape == [2, 1] assert s.strides == [45, 3] assert s.backstrides == [45, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [50] assert s2.parent is a assert s2.backstrides == [100] assert s2.start == 35 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [3, 50] assert s2.backstrides == [3, 100] @@ -91,16 +93,16 @@ def 
test_slice_of_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [1] assert s2.parent is a assert s2.backstrides == [2] assert s2.start == 5 * 15 + 3 * 3 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [45, 1] assert s2.backstrides == [45, 2] @@ -108,14 +110,14 @@ def test_negative_step_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] @@ -124,7 +126,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -134,7 +136,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 
10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -1302,15 +1304,25 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_array_indexing_one_elem(self): + skip("not yet") + from _numpypy import array, arange + raises(IndexError, 'arange(3)[array([3.5])]') + a = arange(3)[array([1])] + assert a == 1 + assert a[0] == 1 + raises(IndexError,'arange(3)[array([15])]') + assert arange(3)[array([-3])] == 0 + raises(IndexError,'arange(3)[array([-15])]') + assert arange(3)[array(1)] == 1 + def test_fill(self): from _numpypy import array - a = array([1, 2, 3]) a.fill(10) assert (a == [10, 10, 10]).all() a.fill(False) assert (a == [0, 0, 0]).all() - b = a[:1] b.fill(4) assert (b == [4]).all() @@ -1324,6 +1336,24 @@ d.fill(100) assert d == 100 + def test_array_indexing_bool(self): + from _numpypy import arange + a = arange(10) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + a = arange(10).reshape(5, 2) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + assert (a[a & 1 == 1] == [1, 3, 5, 7, 9]).all() + + def test_array_indexing_bool_setitem(self): + from _numpypy import arange, array + a = arange(6) + a[a > 3] = 15 + assert (a == [0, 1, 2, 3, 15, 15]).all() + a = arange(6).reshape(3, 2) + a[a & 1 == 1] = array([8, 9, 10]) + assert (a == [[0, 8], [2, 9], [4, 10]]).all() + + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -355,11 +355,20 @@ raises(ValueError, maximum.reduce, []) def test_reduceND(self): - from numpypy import add, arange + from _numpypy import add, arange a = arange(12).reshape(3, 4) assert (add.reduce(a, 0) == [12, 15, 18, 
21]).all() assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_bitwise(self): + from _numpypy import bitwise_and, bitwise_or, arange, array + a = arange(6).reshape(2, 3) + assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() + assert (a & 1 == bitwise_and(a, 1)).all() + assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() + assert (a | 1 == bitwise_or(a, 1)).all() + raises(TypeError, 'array([1.0]) & 1') + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -217,6 +217,7 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. + py.test.skip("too fragile") self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, 'getfield_gc_pure': 8, diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -94,6 +94,9 @@ width, storage, i, offset )) + def read_bool(self, storage, width, i, offset): + raise NotImplementedError + def store(self, storage, width, i, offset, box): value = self.unbox(box) libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), @@ -168,6 +171,7 @@ @simple_binary_op def min(self, v1, v2): return min(v1, v2) + class Bool(BaseType, Primitive): T = lltype.Bool @@ -185,6 +189,11 @@ else: return self.False + + def read_bool(self, storage, width, i, offset): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + def coerce_subtype(self, space, w_subtype, w_item): # Doesn't return subclasses so it can return the constants. 
return self._coerce(space, w_item) @@ -253,6 +262,14 @@ assert v == 0 return 0 + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box From noreply at buildbot.pypy.org Fri Jan 20 15:32:23 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Fri, 20 Jan 2012 15:32:23 +0100 (CET) Subject: [pypy-commit] pypy default: reshuffle pypy.tool.version and add support for .hg_archival.txt, fixes issue952 Message-ID: <20120120143223.5C1FB82CF8@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r51525:79770e0c2f93 Date: 2012-01-20 15:31 +0100 http://bitbucket.org/pypy/pypy/changeset/79770e0c2f93/ Log: reshuffle pypy.tool.version and add support for .hg_archival.txt, fixes issue952 diff --git a/pypy/tool/test/test_version.py b/pypy/tool/test/test_version.py --- a/pypy/tool/test/test_version.py +++ b/pypy/tool/test/test_version.py @@ -1,6 +1,22 @@ import os, sys import py -from pypy.tool.version import get_repo_version_info +from pypy.tool.version import get_repo_version_info, _get_hg_archive_version + +def test_hg_archival_version(tmpdir): + def version_for(name, **kw): + path = tmpdir.join(name) + path.write('\n'.join('%s: %s' % x for x in kw.items())) + return _get_hg_archive_version(str(path)) + + assert version_for('release', + tag='release-123', + node='000', + ) == ('PyPy', 'release-123', '000') + assert version_for('somebranch', + node='000', + branch='something', + ) == ('PyPy', 'something', '000') + def test_get_repo_version_info(): assert get_repo_version_info(None) diff --git a/pypy/tool/version.py b/pypy/tool/version.py --- a/pypy/tool/version.py +++ b/pypy/tool/version.py @@ -3,111 +3,139 @@ from subprocess import Popen, PIPE import pypy pypydir = os.path.dirname(os.path.abspath(pypy.__file__)) +pypyroot = os.path.dirname(pypydir) +default_retval = 'PyPy', '?',
'?' + +def maywarn(err, repo_type='Mercurial'): + if not err: + return + + from pypy.tool.ansi_print import ansi_log + log = py.log.Producer("version") + py.log.setconsumer("version", ansi_log) + log.WARNING('Errors getting %s information: %s' % (repo_type, err)) def get_repo_version_info(hgexe=None): '''Obtain version information by invoking the 'hg' or 'git' commands.''' - # TODO: support extracting from .hg_archival.txt - - default_retval = 'PyPy', '?', '?' - pypyroot = os.path.abspath(os.path.join(pypydir, '..')) - - def maywarn(err, repo_type='Mercurial'): - if not err: - return - - from pypy.tool.ansi_print import ansi_log - log = py.log.Producer("version") - py.log.setconsumer("version", ansi_log) - log.WARNING('Errors getting %s information: %s' % (repo_type, err)) # Try to see if we can get info from Git if hgexe is not specified. if not hgexe: if os.path.isdir(os.path.join(pypyroot, '.git')): - gitexe = py.path.local.sysfind('git') - if gitexe: - try: - p = Popen( - [str(gitexe), 'rev-parse', 'HEAD'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - except OSError, e: - maywarn(e, 'Git') - return default_retval - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return default_retval - revision_id = p.stdout.read().strip()[:12] - p = Popen( - [str(gitexe), 'describe', '--tags', '--exact-match'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - if p.wait() != 0: - p = Popen( - [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, - cwd=pypyroot - ) - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return 'PyPy', '?', revision_id - branch = '?' - for line in p.stdout.read().strip().split('\n'): - if line.startswith('* '): - branch = line[1:].strip() - if branch == '(no branch)': - branch = '?' - break - return 'PyPy', branch, revision_id - return 'PyPy', p.stdout.read().strip(), revision_id + return _get_git_version() # Fallback to trying Mercurial. 
if hgexe is None: hgexe = py.path.local.sysfind('hg') - if not os.path.isdir(os.path.join(pypyroot, '.hg')): + if os.path.isfile(os.path.join(pypyroot, '.hg_archival.txt')): + return _get_hg_archive_version(os.path.join(pypyroot, '.hg_archival.txt')) + elif not os.path.isdir(os.path.join(pypyroot, '.hg')): maywarn('Not running from a Mercurial repository!') return default_retval elif not hgexe: maywarn('Cannot find Mercurial command!') return default_retval else: - env = dict(os.environ) - # get Mercurial into scripting mode - env['HGPLAIN'] = '1' - # disable user configuration, extensions, etc. - env['HGRCPATH'] = os.devnull + return _get_hg_version(hgexe) - try: - p = Popen([str(hgexe), 'version', '-q'], - stdout=PIPE, stderr=PIPE, env=env) - except OSError, e: - maywarn(e) - return default_retval - if not p.stdout.read().startswith('Mercurial Distributed SCM'): - maywarn('command does not identify itself as Mercurial') - return default_retval +def _get_hg_version(hgexe): + env = dict(os.environ) + # get Mercurial into scripting mode + env['HGPLAIN'] = '1' + # disable user configuration, extensions, etc. + env['HGRCPATH'] = os.devnull - p = Popen([str(hgexe), 'id', '-i', pypyroot], + try: + p = Popen([str(hgexe), 'version', '-q'], stdout=PIPE, stderr=PIPE, env=env) - hgid = p.stdout.read().strip() + except OSError, e: + maywarn(e) + return default_retval + + if not p.stdout.read().startswith('Mercurial Distributed SCM'): + maywarn('command does not identify itself as Mercurial') + return default_retval + + p = Popen([str(hgexe), 'id', '-i', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgid = p.stdout.read().strip() + maywarn(p.stderr.read()) + if p.wait() != 0: + hgid = '?' 
+ + p = Popen([str(hgexe), 'id', '-t', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] + maywarn(p.stderr.read()) + if p.wait() != 0: + hgtags = ['?'] + + if hgtags: + return 'PyPy', hgtags[0], hgid + else: + # use the branch instead + p = Popen([str(hgexe), 'id', '-b', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgbranch = p.stdout.read().strip() maywarn(p.stderr.read()) + + return 'PyPy', hgbranch, hgid + + +def _get_hg_archive_version(path): + fp = open(path) + try: + data = dict(x.split(': ', 1) for x in fp.read().splitlines()) + finally: + fp.close() + if 'tag' in data: + return 'PyPy', data['tag'], data['node'] + else: + return 'PyPy', data['branch'], data['node'] + + +def _get_git_version(): + #XXX: this function is a untested hack, + # so the git mirror tav made will work + gitexe = py.path.local.sysfind('git') + if not gitexe: + return default_retval + + try: + p = Popen( + [str(gitexe), 'rev-parse', 'HEAD'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + except OSError, e: + maywarn(e, 'Git') + return default_retval + if p.wait() != 0: + maywarn(p.stderr.read(), 'Git') + return default_retval + revision_id = p.stdout.read().strip()[:12] + p = Popen( + [str(gitexe), 'describe', '--tags', '--exact-match'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + if p.wait() != 0: + p = Popen( + [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, + cwd=pypyroot + ) if p.wait() != 0: - hgid = '?' + maywarn(p.stderr.read(), 'Git') + return 'PyPy', '?', revision_id + branch = '?' + for line in p.stdout.read().strip().split('\n'): + if line.startswith('* '): + branch = line[1:].strip() + if branch == '(no branch)': + branch = '?' 
+ break + return 'PyPy', branch, revision_id + return 'PyPy', p.stdout.read().strip(), revision_id - p = Popen([str(hgexe), 'id', '-t', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] - maywarn(p.stderr.read()) - if p.wait() != 0: - hgtags = ['?'] - if hgtags: - return 'PyPy', hgtags[0], hgid - else: - # use the branch instead - p = Popen([str(hgexe), 'id', '-b', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgbranch = p.stdout.read().strip() - maywarn(p.stderr.read()) - - return 'PyPy', hgbranch, hgid +if __name__ == '__main__': + print get_repo_version_info() From noreply at buildbot.pypy.org Fri Jan 20 15:35:10 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 20 Jan 2012 15:35:10 +0100 (CET) Subject: [pypy-commit] pypy core-only-tracing: (antocuni, arigo, bivab, romain): a branch where to experiment with 'lightweight tracing', in which we inline only the opcode implementations and nothing else Message-ID: <20120120143510.7848E82CF8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: core-only-tracing Changeset: r51526:08855ad04675 Date: 2012-01-20 11:10 +0100 http://bitbucket.org/pypy/pypy/changeset/08855ad04675/ Log: (antocuni, arigo, bivab, romain): a branch where to experiment with 'lightweight tracing', in which we inline only the opcode implementations and nothing else From noreply at buildbot.pypy.org Fri Jan 20 15:35:11 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 20 Jan 2012 15:35:11 +0100 (CET) Subject: [pypy-commit] pypy core-only-tracing: (antocuni, arigo, romain): introduce the is_core flag on JitCodes, which will be used to select which graphs to inline when tracing in core-only mode Message-ID: <20120120143511.B43E282CF8@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: core-only-tracing Changeset: r51527:655088dbaa64 Date: 2012-01-20 12:01 +0100 http://bitbucket.org/pypy/pypy/changeset/655088dbaa64/ Log: (antocuni, arigo, 
    romain): introduce the is_core flag on JitCodes, which will be used
    to select which graphs to inline when tracing in core-only mode

diff --git a/pypy/jit/codewriter/call.py b/pypy/jit/codewriter/call.py
--- a/pypy/jit/codewriter/call.py
+++ b/pypy/jit/codewriter/call.py
@@ -53,6 +53,7 @@
                                            ll_args, ll_res)
                 todo.append(c_func.value._obj.graph)
         candidate_graphs = set(todo)
+        core_candidate_graphs = set(todo)

         def callers():
             graph = top_graph
@@ -77,8 +78,11 @@
                 assert is_candidate(graph)
                 todo.append(graph)
                 candidate_graphs.add(graph)
+                if policy.is_core_graph(graph):
+                    core_candidate_graphs.add(graph)
                 coming_from[graph] = top_graph
         self.candidate_graphs = candidate_graphs
+        self.core_candidate_graphs = core_candidate_graphs
         return candidate_graphs

     def graphs_from(self, op, is_candidate=None):
@@ -148,6 +152,9 @@
         # used only after find_all_graphs()
         return graph in self.candidate_graphs

+    def is_core(self, graph):
+        return graph in self.core_candidate_graphs
+
     def grab_initial_jitcodes(self):
         for jd in self.jitdrivers_sd:
             jd.mainjitcode = self.get_jitcode(jd.portal_graph)
@@ -164,8 +171,9 @@
             return self.jitcodes[graph]
         except KeyError:
             fnaddr, calldescr = self.get_jitcode_calldescr(graph)
+            is_core = self.is_core(graph)
             jitcode = JitCode(graph.name, fnaddr, calldescr,
-                              called_from=called_from)
+                              called_from=called_from, is_core=is_core)
             self.jitcodes[graph] = jitcode
             self.unfinished_graphs.append(graph)
             return jitcode

diff --git a/pypy/jit/codewriter/jitcode.py b/pypy/jit/codewriter/jitcode.py
--- a/pypy/jit/codewriter/jitcode.py
+++ b/pypy/jit/codewriter/jitcode.py
@@ -8,11 +8,12 @@
     _empty_r = []
     _empty_f = []

-    def __init__(self, name, fnaddr=None, calldescr=None, called_from=None):
+    def __init__(self, name, fnaddr=None, calldescr=None, called_from=None, is_core=False):
         self.name = name
         self.fnaddr = fnaddr
         self.calldescr = calldescr
         self.is_portal = False
+        self.is_core = is_core
         self._called_from = called_from   # debugging
         self._ssarepr = None              # debugging

diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py
--- a/pypy/jit/codewriter/policy.py
+++ b/pypy/jit/codewriter/policy.py
@@ -38,6 +38,9 @@
     def look_inside_function(self, func):
         return True # look into everything by default

+    def is_core_graph(self, graph):
+        return True
+
     def _reject_function(self, func):
         if hasattr(func, '_jit_look_inside_'):
             return not func._jit_look_inside_

diff --git a/pypy/jit/codewriter/test/test_call.py b/pypy/jit/codewriter/test/test_call.py
--- a/pypy/jit/codewriter/test/test_call.py
+++ b/pypy/jit/codewriter/test/test_call.py
@@ -10,6 +10,9 @@
     def look_inside_graph(self, graph):
         return True

+    def is_core_graph(self, graph):
+        return True
+

 def test_graphs_from_direct_call():
     cc = CallControl()
@@ -159,6 +162,7 @@
         return lltype.functionptr(F, 'bar')
     #
     cc = CallControl(FakeCPU(FakeRTyper()))
+    cc.core_candidate_graphs = set()
     class somegraph:
         name = "foo"
     jitcode = cc.get_jitcode(somegraph)
@@ -210,3 +214,33 @@
     op = block.operations[-1]
     call_descr = cc.getcalldescr(op)
     assert call_descr.extrainfo.has_random_effects()
+
+
+def test_mark_jitcode_as_core():
+    from pypy.jit.codewriter.test.test_flatten import FakeCPU
+
+    class MyPolicy:
+        def look_inside_graph(self, graph):
+            return graph.name in ('f', 'g')
+
+        def is_core_graph(self, graph):
+            if graph.name == 'f':
+                return True
+            return False
+
+    def g(x):
+        return x + 2
+    def f(x):
+        return g(x) + 1
+    rtyper = support.annotate(f, [7])
+    jitdriver_sd = FakeJitDriverSD(rtyper.annotator.translator.graphs[0])
+    cc = CallControl(jitdrivers_sd=[jitdriver_sd])
+    res = cc.find_all_graphs(MyPolicy())
+    # hack hack hack
+    cc.cpu = FakeCPU(rtyper)
+    cc.rtyper = rtyper
+    graphs = dict([(graph.name, graph) for graph in res])
+    jitcode_f = cc.get_jitcode(graphs['f'])
+    jitcode_g = cc.get_jitcode(graphs['g'])
+    assert jitcode_f.is_core
+    assert not jitcode_g.is_core

diff --git a/pypy/jit/codewriter/test/test_codewriter.py b/pypy/jit/codewriter/test/test_codewriter.py
--- a/pypy/jit/codewriter/test/test_codewriter.py
+++ b/pypy/jit/codewriter/test/test_codewriter.py
@@ -40,6 +40,9 @@
     def look_inside_graph(self, graph):
         return graph.name != 'dont_look'

+    def is_core_graph(self, graph):
+        return True
+
 class FakeJitDriverSD:
     def __init__(self, portal_graph):
         self.portal_graph = portal_graph
@@ -162,6 +165,9 @@
             name = graph.name
             return not (name.startswith('instantiate_') and name.endswith('A2'))
+
+        def is_core_graph(self, graph):
+            return True
     class A1:
         pass
     class A2(A1):

From noreply at buildbot.pypy.org Fri Jan 20 15:35:12 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:12 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: (antocuni, arigo, romain)
    complain if a graph is marked as access_directly but not core,
    because in this case we would get wrong behaviour when tracing in
    core-only mode
Message-ID: <20120120143512.E8F2582CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51528:da475d18764e
Date: 2012-01-20 12:11 +0100
http://bitbucket.org/pypy/pypy/changeset/da475d18764e/

Log: (antocuni, arigo, romain) complain if a graph is marked as
    access_directly but not core, because in this case we would get
    wrong behaviour when tracing in core-only mode

diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py
--- a/pypy/jit/codewriter/policy.py
+++ b/pypy/jit/codewriter/policy.py
@@ -76,19 +76,22 @@
         if res and contains_loop:
             self.unsafe_loopy_graphs.add(graph)
         res = res and not contains_loop
-        if (see_function and not res and
-            getattr(graph, "access_directly", False)):
-            # This happens when we have a function which has an argument with
-            # the access_directly flag, and the annotator has determined we will
-            # see the function. (See
-            # pypy/annotation/specialize.py:default_specialize) However,
-            # look_inside_graph just decided that we will not see it. (It has a
-            # loop or unsupported variables.) If we return False, the call will
-            # be turned into a residual call, but the graph is access_directly!
-            # If such a function is called and accesses a virtualizable, the JIT
-            # will not notice, and the virtualizable will fall out of sync. So,
-            # we fail loudly now.
-            raise ValueError("access_directly on a function which we don't see %s" % graph)
+        if getattr(graph, "access_directly", False):
+            if (see_function and not res):
+                # This happens when we have a function which has an argument with
+                # the access_directly flag, and the annotator has determined we will
+                # see the function. (See
+                # pypy/annotation/specialize.py:default_specialize) However,
+                # look_inside_graph just decided that we will not see it. (It has a
+                # loop or unsupported variables.) If we return False, the call will
+                # be turned into a residual call, but the graph is access_directly!
+                # If such a function is called and accesses a virtualizable, the JIT
+                # will not notice, and the virtualizable will fall out of sync. So,
+                # we fail loudly now.
+                raise ValueError("access_directly on a function which we don't see %s" % graph)
+            if not self.is_core_graph(graph):
+                # the same comment as above applies when we run in core-only tracing mode
+                raise ValueError("access_directly on a function which is not core: %s" % graph)
         return res

 def contains_unsupported_variable_type(graph, supports_floats,

diff --git a/pypy/jit/codewriter/test/test_policy.py b/pypy/jit/codewriter/test/test_policy.py
--- a/pypy/jit/codewriter/test/test_policy.py
+++ b/pypy/jit/codewriter/test/test_policy.py
@@ -130,3 +130,23 @@
     h_graph = rtyper.annotator.translator.graphs[1]
     assert h_graph.func is h
     py.test.raises(ValueError, JitPolicy().look_inside_graph, h_graph)
+
+
+def test_access_directly_but_not_core():
+    class MyPolicy(JitPolicy):
+        def is_core_graph(self, graph):
+            assert graph.name.startswith('h__AccessDirect')
+            return False
+
+    class X:
+        _virtualizable2_ = ["a"]
+    def h(x, y):
+        return x.a + y
+    def f(y):
+        x = jit.hint(X(), access_directly=True)
+        x.a = 4
+        h(x, y)
+    rtyper = support.annotate(f, [3])
+    h_graph = rtyper.annotator.translator.graphs[1]
+    assert h_graph.func is h
+    py.test.raises(ValueError, MyPolicy().look_inside_graph, h_graph)

From noreply at buildbot.pypy.org Fri Jan 20 15:35:14 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:14 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: implement the 'fast' jit
    mode, in which we trace only inside the 'core' graphs, and do
    residual calls to everything else; still in-progress, at least one
    case is missing, see next checkin
Message-ID: <20120120143514.466E982CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51529:927230d64a20
Date: 2012-01-20 14:12 +0100
http://bitbucket.org/pypy/pypy/changeset/927230d64a20/

Log: implement the 'fast' jit mode, in which we trace only inside the
    'core' graphs, and do residual calls to everything else; still
    in-progress, at least one case is missing, see next
    checkin

diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -778,15 +778,30 @@
             result = vinfo.get_array_length(virtualizable, arrayindex)
         return ConstInt(result)

+    def perform_call_maybe(self, jitcode, argboxes):
+        core_only_mode = (self.metainterp.jitdriver_sd.warmstate.jitmode == 'fast')
+        if core_only_mode and self.jitcode.is_core:
+            # never inline in this mode
+            funcbox = ConstInt(jitcode.get_fnaddr_as_int())
+            # jitcode always has a calldescr, but it might not have the
+            # correct effectinfo. The result is that we might generate a
+            # call_may_force when a call might suffice; however, we don't care
+            # too much because for the PyPy interpreter most calls are to
+            # space.*, so they would be call_may_force anyway.
+            calldescr = jitcode.calldescr
+            return self.do_residual_call(funcbox, calldescr, argboxes)
+        # normal mode
+        return self.metainterp.perform_call(jitcode, argboxes)
+
     @arguments("jitcode", "boxes")
     def _opimpl_inline_call1(self, jitcode, argboxes):
-        return self.metainterp.perform_call(jitcode, argboxes)
+        return self.perform_call_maybe(jitcode, argboxes)

     @arguments("jitcode", "boxes2")
     def _opimpl_inline_call2(self, jitcode, argboxes):
-        return self.metainterp.perform_call(jitcode, argboxes)
+        return self.perform_call_maybe(jitcode, argboxes)

     @arguments("jitcode", "boxes3")
     def _opimpl_inline_call3(self, jitcode, argboxes):
-        return self.metainterp.perform_call(jitcode, argboxes)
+        return self.perform_call_maybe(jitcode, argboxes)

     opimpl_inline_call_r_i = _opimpl_inline_call1
     opimpl_inline_call_r_r = _opimpl_inline_call1

diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py
--- a/pypy/jit/metainterp/test/test_ajit.py
+++ b/pypy/jit/metainterp/test/test_ajit.py
@@ -2942,6 +2942,39 @@
         res = self.meta_interp(f, [32])
         assert res == f(32)

+    def test_residual_call_from_core_graph(self):
+        class MyPolicy(JitPolicy):
+            def is_core_graph(self, graph):
+                return graph.name == 'f'
+        myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'res'])
+
+        def a(x):
+            return x
+        def b(x, y):
+            return x+y
+        def c(x, y, z):
+            return x+y+z
+        def f(x, y):
+            res = 0
+            while y > 0:
+                myjitdriver.can_enter_jit(x=x, y=y, res=res)
+                myjitdriver.jit_merge_point(x=x, y=y, res=res)
+                res = a(x) + b(x, res) + c(x, -x, -x) # at the end, it's like doing x+res :-)
+                y -= 1
+            return res
+        res = self.meta_interp(f, [6, 7], policy=MyPolicy(), jitmode='fast') # fast == trace only core graphs
+        assert res == 42
+        self.check_trace_count(1)
+        # this is suboptimal because we get a call_may_force instead of a
+        # call. Look at the comment inside pyjitpl...perform_call_maybe for
+        # details
+        self.check_resops({'jump': 1, 'int_gt': 2, 'guard_true': 2, 'int_sub': 2,
+                           'int_neg': 1, 'int_add': 4,
+                           'call_may_force': 6,
+                           'guard_no_exception': 6,
+                           'guard_not_forced': 6})
+
+

 class TestOOtype(BasicTests, OOJitMixin):

diff --git a/pypy/jit/metainterp/warmspot.py b/pypy/jit/metainterp/warmspot.py
--- a/pypy/jit/metainterp/warmspot.py
+++ b/pypy/jit/metainterp/warmspot.py
@@ -67,7 +67,8 @@
               backendopt=False, trace_limit=sys.maxint,
               inline=False, loop_longevity=0, retrace_limit=5,
               function_threshold=4,
-              enable_opts=ALL_OPTS_NAMES, max_retrace_guards=15, **kwds):
+              enable_opts=ALL_OPTS_NAMES, max_retrace_guards=15,
+              jitmode='full', **kwds):
     from pypy.config.config import ConfigError
     translator = interp.typer.annotator.translator
     try:
@@ -92,6 +93,7 @@
         jd.warmstate.set_param_loop_longevity(loop_longevity)
         jd.warmstate.set_param_retrace_limit(retrace_limit)
         jd.warmstate.set_param_max_retrace_guards(max_retrace_guards)
+        jd.warmstate.set_param_jitmode(jitmode)
         jd.warmstate.set_param_enable_opts(enable_opts)
     warmrunnerdesc.finish()
     if graph_and_interp_only:

diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py
--- a/pypy/jit/metainterp/warmstate.py
+++ b/pypy/jit/metainterp/warmstate.py
@@ -213,6 +213,9 @@
     def set_param_inlining(self, value):
         self.inlining = value

+    def set_param_jitmode(self, value):
+        self.jitmode = value
+
     def set_param_enable_opts(self, value):
         from pypy.jit.metainterp.optimizeopt import ALL_OPTS_DICT, ALL_OPTS_NAMES

diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -402,6 +402,7 @@
               'retrace_limit': 'how many times we can try retracing before giving up',
               'max_retrace_guards': 'number of extra guards a retrace can cause',
               'max_unroll_loops': 'number of extra unrollings a loop can cause',
+              'jitmode': '"full" (default) or "fast"',
               'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY'
               }
@@ -414,6 +415,7 @@
              'retrace_limit': 5,
              'max_retrace_guards': 15,
              'max_unroll_loops': 4,
+             'jitmode': 'full',
              'enable_opts': 'all',
              }
 unroll_parameters = unrolling_iterable(PARAMETERS.items())

From noreply at buildbot.pypy.org Fri Jan 20 15:35:15 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:15 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: fix tests
Message-ID: <20120120143515.7E32A82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51530:04c7e6ed45c9
Date: 2012-01-20 14:18 +0100
http://bitbucket.org/pypy/pypy/changeset/04c7e6ed45c9/

Log: fix tests

diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py
--- a/pypy/jit/metainterp/test/support.py
+++ b/pypy/jit/metainterp/test/support.py
@@ -40,6 +40,7 @@
     _cell = FakeJitCell()

     trace_limit = sys.maxint
+    jitmode = 'full'
     enable_opts = ALL_OPTS_DICT

     func._jit_unroll_safe_ = True

From noreply at buildbot.pypy.org Fri Jan 20 15:35:16 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:16 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: make sure not to inline
    indirect calls when in core-only mode
Message-ID: <20120120143516.BE01882CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51531:03e571309f26
Date: 2012-01-20 14:24 +0100
http://bitbucket.org/pypy/pypy/changeset/03e571309f26/

Log: make sure not to inline indirect calls when in core-only mode

diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -1388,7 +1388,7 @@
         jitcode = sd.bytecode_for_address(key)
         if jitcode is not None:
             # we should follow calls to this graph
-            return self.metainterp.perform_call(jitcode, argboxes)
+            return self.perform_call_maybe(jitcode, argboxes)
         else:
             # but we should not follow calls to that graph
             return self.do_residual_call(funcbox, calldescr, argboxes)

diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py
--- a/pypy/jit/metainterp/test/test_ajit.py
+++ b/pypy/jit/metainterp/test/test_ajit.py
@@ -2974,6 +2974,37 @@
                            'guard_no_exception': 6,
                            'guard_not_forced': 6})

+    def test_dont_inline_residual_call_from_core_graph(self):
+        class MyPolicy(JitPolicy):
+            def is_core_graph(self, graph):
+                return graph.name == 'f'
+        myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'res'])
+
+        def a(x):
+            return x+2
+        def b(x):
+            return x+1
+        def f(x, y):
+            res = 0
+            while y > 0:
+                myjitdriver.can_enter_jit(x=x, y=y, res=res)
+                myjitdriver.jit_merge_point(x=x, y=y, res=res)
+                if y == 5:
+                    f = a
+                else:
+                    f = b
+                y -= 1
+                res += f(x)
+            return res
+        res = self.meta_interp(f, [5, 7], policy=MyPolicy(), jitmode='fast') # fast == trace only core graphs
+        assert res == 43
+        self.check_trace_count(1)
+        self.check_resops({'jump': 1, 'int_gt': 2, 'guard_true': 2, 'int_sub': 2,
+                           'int_eq': 2, 'int_add': 2, 'guard_false': 2,
+                           'call_may_force': 2,
+                           'guard_no_exception': 2,
+                           'guard_not_forced': 2})
+

 class TestOOtype(BasicTests, OOJitMixin):

From noreply at buildbot.pypy.org Fri Jan 20 15:35:18 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:18 +0100 (CET)
Subject: [pypy-commit] pypy
    core-only-tracing: make sure to inline core-to-core calls
Message-ID: <20120120143518.087D282CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51532:4a040e6b95e4
Date: 2012-01-20 15:01 +0100
http://bitbucket.org/pypy/pypy/changeset/4a040e6b95e4/

Log: make sure to inline core-to-core calls

diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -780,8 +780,8 @@
     def perform_call_maybe(self, jitcode, argboxes):
         core_only_mode = (self.metainterp.jitdriver_sd.warmstate.jitmode == 'fast')
-        if core_only_mode and self.jitcode.is_core:
-            # never inline in this mode
+        # in core_only_mode, don't inline calls from core to non-core graphs
+        if core_only_mode and self.jitcode.is_core and not jitcode.is_core:
             funcbox = ConstInt(jitcode.get_fnaddr_as_int())
             # jitcode always has a calldescr, but it might not have the
             # correct effectinfo. The result is that we might generate a

diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py
--- a/pypy/jit/metainterp/test/test_ajit.py
+++ b/pypy/jit/metainterp/test/test_ajit.py
@@ -3005,6 +3005,27 @@
                            'guard_no_exception': 2,
                            'guard_not_forced': 2})

+    def test_inline_core_to_core_calls(self):
+        class MyPolicy(JitPolicy):
+            def is_core_graph(self, graph):
+                return graph.name in ('f', 'a')
+        myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'res'])
+
+        def a(x, y):
+            return x+y
+        def f(x, y):
+            res = 0
+            while y > 0:
+                myjitdriver.can_enter_jit(x=x, y=y, res=res)
+                myjitdriver.jit_merge_point(x=x, y=y, res=res)
+                res = a(res, x)
+                y -= 1
+            return res
+        res = self.meta_interp(f, [6, 7], policy=MyPolicy(), jitmode='fast') # fast == trace only core graphs
+        assert res == 42
+        self.check_trace_count(1)
+        self.check_resops({'jump': 1, 'int_gt': 2, 'guard_true': 2, 'int_sub': 2,
+                           'int_add': 2})

 class TestOOtype(BasicTests, OOJitMixin):

From noreply at buildbot.pypy.org Fri Jan 20 15:35:19 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:19 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: rename 'fast' mode to
    'core-only'
Message-ID: <20120120143519.4934A82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51533:94130878552c
Date: 2012-01-20 15:05 +0100
http://bitbucket.org/pypy/pypy/changeset/94130878552c/

Log: rename 'fast' mode to 'core-only'

diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -779,7 +779,7 @@
         return ConstInt(result)

     def perform_call_maybe(self, jitcode, argboxes):
-        core_only_mode = (self.metainterp.jitdriver_sd.warmstate.jitmode == 'fast')
+        core_only_mode = (self.metainterp.jitdriver_sd.warmstate.jitmode == 'core-only')
         # in core_only_mode, don't inline calls from core to non-core graphs
         if core_only_mode and self.jitcode.is_core and not jitcode.is_core:
             funcbox = ConstInt(jitcode.get_fnaddr_as_int())

diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py
--- a/pypy/jit/metainterp/test/test_ajit.py
+++ b/pypy/jit/metainterp/test/test_ajit.py
@@ -2962,7 +2962,7 @@
                 res = a(x) + b(x, res) + c(x, -x, -x) # at the end, it's like doing x+res :-)
                 y -= 1
             return res
-        res = self.meta_interp(f, [6, 7], policy=MyPolicy(), jitmode='fast') # fast == trace only core graphs
+        res = self.meta_interp(f, [6, 7], policy=MyPolicy(), jitmode='core-only')
         assert res == 42
         self.check_trace_count(1)
         # this is suboptimal because we get a call_may_force instead of a
@@ -2996,7 +2996,7 @@
                 y -= 1
                 res += f(x)
             return res
-        res = self.meta_interp(f, [5, 7], policy=MyPolicy(), jitmode='fast') # fast == trace only core graphs
+        res = self.meta_interp(f, [5, 7], policy=MyPolicy(), jitmode='core-only')
         assert res == 43
         self.check_trace_count(1)
         self.check_resops({'jump': 1, 'int_gt': 2, 'guard_true': 2, 'int_sub': 2,
@@ -3021,7 +3021,7 @@
                 res = a(res, x)
                 y -= 1
             return res
-        res = self.meta_interp(f, [6, 7], policy=MyPolicy(), jitmode='fast') # fast == trace only core graphs
+        res = self.meta_interp(f, [6, 7], policy=MyPolicy(), jitmode='core-only')
         assert res == 42
         self.check_trace_count(1)
         self.check_resops({'jump': 1, 'int_gt': 2, 'guard_true': 2, 'int_sub': 2,

diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -402,7 +402,7 @@
               'retrace_limit': 'how many times we can try retracing before giving up',
               'max_retrace_guards': 'number of extra guards a retrace can cause',
               'max_unroll_loops': 'number of extra unrollings a loop can cause',
-              'jitmode': '"full" (default) or "fast"',
+              'jitmode': '"full" (default) or "core-only"',
              'enable_opts': 'optimizations to enable or all, INTERNAL USE ONLY'
              }

From noreply at buildbot.pypy.org Fri Jan 20 15:35:20 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:20 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: always disable inlining when
    setting the jitmode to core-only
Message-ID: <20120120143520.7EB4B82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51534:730dac0eff41
Date: 2012-01-20 15:08 +0100
http://bitbucket.org/pypy/pypy/changeset/730dac0eff41/

Log: always disable inlining when setting the jitmode to core-only

diff --git a/pypy/jit/metainterp/test/test_warmstate.py b/pypy/jit/metainterp/test/test_warmstate.py
--- a/pypy/jit/metainterp/test/test_warmstate.py
+++ b/pypy/jit/metainterp/test/test_warmstate.py
@@ -324,3 +324,12 @@
         cell = get_jitcell(True, i)
         cell.counter = -2
     assert len(warmstate._jitcell_dict) == i + 1
+
+def test_set_params():
+    warmstate = WarmEnterState(None, None)
+    assert warmstate.inlining == 1
+    assert warmstate.jitmode == 'full'
+    warmstate.set_param_jitmode('core-only')
+    assert warmstate.inlining == 0
+    assert warmstate.jitmode == 'core-only'
+

diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py
--- a/pypy/jit/metainterp/warmstate.py
+++ b/pypy/jit/metainterp/warmstate.py
@@ -215,6 +215,8 @@
     def set_param_jitmode(self, value):
         self.jitmode = value
+        if value == 'core-only':
+            self.set_param_inlining(0)

     def set_param_enable_opts(self, value):
         from pypy.jit.metainterp.optimizeopt import ALL_OPTS_DICT, ALL_OPTS_NAMES

From noreply at buildbot.pypy.org Fri Jan 20 15:35:21 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:21 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: (antocuni, arigo): if we are
    in core mode, we know for sure that the jitcode we are tracing
    is_core
Message-ID: <20120120143521.B7E2A82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51535:d82aeadcfb55
Date: 2012-01-20 15:11 +0100
http://bitbucket.org/pypy/pypy/changeset/d82aeadcfb55/

Log: (antocuni, arigo): if we are in core mode, we know for sure that
    the jitcode we are tracing is_core

diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -781,7 +781,8 @@
     def perform_call_maybe(self, jitcode, argboxes):
         core_only_mode = (self.metainterp.jitdriver_sd.warmstate.jitmode == 'core-only')
         # in core_only_mode, don't inline calls from core to non-core graphs
-        if core_only_mode and self.jitcode.is_core and not jitcode.is_core:
+        if core_only_mode and not jitcode.is_core:
+            assert self.jitcode.is_core
             funcbox = ConstInt(jitcode.get_fnaddr_as_int())
             # jitcode always has a calldescr, but it might not have the
             # correct effectinfo.
The result is that we might generate a

From noreply at buildbot.pypy.org Fri Jan 20 15:35:23 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:23 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: (antocuni, arigo): use a
    boolean instead of a string to store the value of core-only-mode,
    and don't disable inlining automatically (for now at least)
Message-ID: <20120120143523.0041A82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51536:54d6ea8b9249
Date: 2012-01-20 15:16 +0100
http://bitbucket.org/pypy/pypy/changeset/54d6ea8b9249/

Log: (antocuni, arigo): use a boolean instead of a string to store the
    value of core-only-mode, and don't disable inlining automatically
    (for now at least)

diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -779,7 +779,7 @@
         return ConstInt(result)

     def perform_call_maybe(self, jitcode, argboxes):
-        core_only_mode = (self.metainterp.jitdriver_sd.warmstate.jitmode == 'core-only')
+        core_only_mode = (self.metainterp.jitdriver_sd.warmstate.jitmode_core_only)
         # in core_only_mode, don't inline calls from core to non-core graphs
         if core_only_mode and not jitcode.is_core:
             assert self.jitcode.is_core

diff --git a/pypy/jit/metainterp/test/support.py b/pypy/jit/metainterp/test/support.py
--- a/pypy/jit/metainterp/test/support.py
+++ b/pypy/jit/metainterp/test/support.py
@@ -40,7 +40,7 @@
     _cell = FakeJitCell()

     trace_limit = sys.maxint
-    jitmode = 'full'
+    jitmode_core_only = False
     enable_opts = ALL_OPTS_DICT

     func._jit_unroll_safe_ = True

diff --git a/pypy/jit/metainterp/test/test_warmstate.py b/pypy/jit/metainterp/test/test_warmstate.py
--- a/pypy/jit/metainterp/test/test_warmstate.py
+++ b/pypy/jit/metainterp/test/test_warmstate.py
@@ -324,12 +324,3 @@
         cell = get_jitcell(True, i)
         cell.counter = -2
     assert len(warmstate._jitcell_dict) == i + 1
-
-def test_set_params():
-    warmstate = WarmEnterState(None, None)
-    assert warmstate.inlining == 1
-    assert warmstate.jitmode == 'full'
-    warmstate.set_param_jitmode('core-only')
-    assert warmstate.inlining == 0
-    assert warmstate.jitmode == 'core-only'
-

diff --git a/pypy/jit/metainterp/warmstate.py b/pypy/jit/metainterp/warmstate.py
--- a/pypy/jit/metainterp/warmstate.py
+++ b/pypy/jit/metainterp/warmstate.py
@@ -214,9 +214,7 @@
         self.inlining = value

     def set_param_jitmode(self, value):
-        self.jitmode = value
-        if value == 'core-only':
-            self.set_param_inlining(0)
+        self.jitmode_core_only = (value == 'core-only')

     def set_param_enable_opts(self, value):
         from pypy.jit.metainterp.optimizeopt import ALL_OPTS_DICT, ALL_OPTS_NAMES

From noreply at buildbot.pypy.org Fri Jan 20 15:35:24 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:24 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: (antocuni, arigo): improve
    this test
Message-ID: <20120120143524.39E5D82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51537:4cb941b199d3
Date: 2012-01-20 15:28 +0100
http://bitbucket.org/pypy/pypy/changeset/4cb941b199d3/

Log: (antocuni, arigo): improve this test

diff --git a/pypy/jit/metainterp/test/test_ajit.py b/pypy/jit/metainterp/test/test_ajit.py
--- a/pypy/jit/metainterp/test/test_ajit.py
+++ b/pypy/jit/metainterp/test/test_ajit.py
@@ -2974,36 +2974,45 @@
                            'guard_no_exception': 6,
                            'guard_not_forced': 6})

-    def test_dont_inline_residual_call_from_core_graph(self):
+    def test_dont_inline_indirect_call_from_core_graph_to_non_core_graph(self):
         class MyPolicy(JitPolicy):
             def is_core_graph(self, graph):
-                return graph.name == 'f'
-        myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'res'])
+                return graph.name in ('f', 'a')
+        myjitdriver = JitDriver(greens = [], reds = ['x', 'y', 'z', 'res'])

         def a(x):
             return x+2
         def b(x):
             return x+1
-        def f(x, y):
+        def f(x, y, z):
             res = 0
             while y > 0:
-                myjitdriver.can_enter_jit(x=x, y=y, res=res)
-                myjitdriver.jit_merge_point(x=x, y=y, res=res)
-                if y == 5:
+                myjitdriver.can_enter_jit(x=x, y=y, z=z, res=res)
+                myjitdriver.jit_merge_point(x=x, y=y, z=z, res=res)
+                if z:
                     f = a
                 else:
                     f = b
                 y -= 1
                 res += f(x)
             return res
-        res = self.meta_interp(f, [5, 7], policy=MyPolicy(), jitmode='core-only')
-        assert res == 43
+        # indirect call to b (non-core)
+        res = self.meta_interp(f, [5, 7, 0], policy=MyPolicy(), jitmode='core-only')
+        assert res == 42
         self.check_trace_count(1)
         self.check_resops({'jump': 1, 'int_gt': 2, 'guard_true': 2, 'int_sub': 2,
-                           'int_eq': 2, 'int_add': 2, 'guard_false': 2,
+                           'int_is_true': 1, 'int_add': 2, 'guard_false': 1,
                            'call_may_force': 2,
                            'guard_no_exception': 2,
                            'guard_not_forced': 2})
+
+        #
+        # indirect call to a (core)
+        res = self.meta_interp(f, [5, 7, 1], policy=MyPolicy(), jitmode='core-only')
+        assert res == 49
+        self.check_trace_count(1)
+        self.check_resops({'jump': 1, 'int_gt': 2, 'guard_true': 3, 'int_sub': 2,
+                           'int_is_true': 1, 'int_add': 3})
+

     def test_inline_core_to_core_calls(self):
         class MyPolicy(JitPolicy):

From noreply at buildbot.pypy.org Fri Jan 20 15:35:25 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:35:25 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: (antocuni, arigo, romain
    around): try to implement a reasonable is_core_function policy for
    the pypy interpreter
Message-ID: <20120120143525.6D95E82CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51538:c19f30c468d8
Date: 2012-01-20 15:34 +0100
http://bitbucket.org/pypy/pypy/changeset/c19f30c468d8/

Log: (antocuni, arigo, romain around): try to implement a reasonable
    is_core_function policy for the pypy interpreter

diff --git a/pypy/jit/codewriter/policy.py b/pypy/jit/codewriter/policy.py
--- a/pypy/jit/codewriter/policy.py
+++ b/pypy/jit/codewriter/policy.py
@@ -39,6 +39,14 @@
         return True # look into everything by default

     def is_core_graph(self, graph):
+        try:
+            func = graph.func
+        except AttributeError:
+            return True
+        else:
+            return self.is_core_function(func)
+
+    def is_core_function(self, func):
         return True

     def _reject_function(self, func):

diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py
--- a/pypy/module/pypyjit/policy.py
+++ b/pypy/module/pypyjit/policy.py
@@ -150,3 +150,8 @@
                 return False

         return True
+
+    def is_core_function(self, func):
+        mod = func.__module__ or '?'
+        is_interpreter = mod.startswith('pypy.interpreter.')
+        return is_interpreter or mod.startswith('pypy.module.pypyjit.')

From noreply at buildbot.pypy.org Fri Jan 20 15:41:51 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Fri, 20 Jan 2012 15:41:51 +0100 (CET)
Subject: [pypy-commit] pypy matrixmath-dot: progress
Message-ID: <20120120144151.1A22C82CF8@wyvern.cs.uni-duesseldorf.de>

Author: mattip
Branch: matrixmath-dot
Changeset: r51539:a2e2a35726cc
Date: 2012-01-20 11:12 +0200
http://bitbucket.org/pypy/pypy/changeset/a2e2a35726cc/

Log: progress

diff --git a/pypy/module/micronumpy/dot.py b/pypy/module/micronumpy/dot.py
--- a/pypy/module/micronumpy/dot.py
+++ b/pypy/module/micronumpy/dot.py
@@ -7,8 +7,8 @@

 dot_driver = jit.JitDriver(
-    greens=['shapelen', 'left', 'right'],
-    reds=['lefti', 'righti', 'outi', 'result'],
+    greens=['shape_len', 'left'],
+    reds=['lefti', 'righti', 'outi', 'result', 'right'],
     get_printable_location=new_printable_location('dot'),
     name='dot',
 )
@@ -55,7 +55,7 @@
                  if i != right_critical_dim]
     right_skip = range(len(left.shape) - 1)
     result_skip = [len(result.shape) - 1]
-    shapelen = len(broadcast_shape)
+    shape_len = len(broadcast_shape)
     _r = calculate_dot_strides(result.strides, result.backstrides,
                                broadcast_shape, result_skip)
     outi = ViewIterator(0, _r[0], _r[1], broadcast_shape)
@@ -78,9 +78,9 @@
                 right.getitem(righti.offset))
         value = add(dtype, v, result.getitem(outi.offset))
         result.setitem(outi.offset, value)
-        outi =
outi.next(shape_len)
+        righti = righti.next(shape_len)
+        lefti = lefti.next(shape_len)
     assert lefti.done()
     assert righti.done()
     return result

diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py
--- a/pypy/module/micronumpy/test/test_zjit.py
+++ b/pypy/module/micronumpy/test/test_zjit.py
@@ -368,16 +368,15 @@
                           'int_ge': 1, 'guard_false': 1, 'jump': 1,
                           'arraylen_gc': 1})

-    def ddefine_dot():
+    def define_dot():
         return """
         a = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
         b=[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]
-        c = a.dot(b)
+        c = dot(a, b)
         c -> 1 -> 2
         """
     def test_dot(self):
-        py.test.skip("not yet")
         result = self.run("dot")
         assert result == 184
         self.check_simple_loop({})

From noreply at buildbot.pypy.org Fri Jan 20 15:41:52 2012
From: noreply at buildbot.pypy.org (mattip)
Date: Fri, 20 Jan 2012 15:41:52 +0100 (CET)
Subject: [pypy-commit] pypy matrixmath-dot: test for slice of transposed
    array fails
Message-ID: <20120120144152.4F68E82CF8@wyvern.cs.uni-duesseldorf.de>

Author: mattip
Branch: matrixmath-dot
Changeset: r51540:bb1838ae2306
Date: 2012-01-20 16:40 +0200
http://bitbucket.org/pypy/pypy/changeset/bb1838ae2306/

Log: test for slice of transposed array fails

diff --git a/pypy/module/micronumpy/dot.py b/pypy/module/micronumpy/dot.py
--- a/pypy/module/micronumpy/dot.py
+++ b/pypy/module/micronumpy/dot.py
@@ -6,10 +6,13 @@
 from pypy.rlib import jit

+def dot_printable_location(shapelen, sig):
+    return 'numpy dot [%d dims]' % (shapelen)
+
 dot_driver = jit.JitDriver(
     greens=['shape_len', 'left'],
-    reds=['lefti', 'righti', 'outi', 'result', 'right'],
-    get_printable_location=new_printable_location('dot'),
+    reds=['lefti', 'righti', 'outi', 'result', 'right','sig','dtype'],
+    get_printable_location=dot_printable_location,
     name='dot',
 )
@@ -41,31 +44,35 @@
     '''
     assumes left, right are concrete arrays
     given left.shape == [3, 5, 7], right.shape == [2, 7, 4]
+
then result.shape == [3, 5, 2, 4]
-    broadcast shape should be [3, 5, 2, 7, 4]
-    result should skip dims 3 which is results.ndims - 1
-    left should skip 2, 4 which is a.ndims-1 + range(right.ndims)
+    broadcast shape should be [3, 5, 2, 7, 4]
+    result should skip dims 3 which is len(result_shape) - 1
+    (note that if right is 1d, result should
+    skip len(result_shape))
+    left should skip 2, 4 which is a.ndims-1 + range(right.ndims)
     except where it==(right.ndims-2)
-    right should skip 0, 1
+    right should skip 0, 1
     '''
-    mul = interp_ufuncs.get(space).multiply.func
-    add = interp_ufuncs.get(space).add.func
     broadcast_shape = left.shape[:-1] + right.shape
+    shape_len = len(broadcast_shape)
     left_skip = [len(left.shape) - 1 + i for i in range(len(right.shape))
                  if i != right_critical_dim]
     right_skip = range(len(left.shape) - 1)
-    result_skip = [len(result.shape) - 1]
+    result_skip = [len(result.shape) - (len(right.shape) > 1)]
    _r = calculate_dot_strides(result.strides, result.backstrides,
                               broadcast_shape, result_skip)
-    outi = ViewIterator(0, _r[0], _r[1], broadcast_shape)
+    outi = ViewIterator(result.start, _r[0], _r[1], broadcast_shape)
    _r = calculate_dot_strides(left.strides, left.backstrides,
                               broadcast_shape, left_skip)
-    lefti = ViewIterator(0, _r[0], _r[1], broadcast_shape)
+    lefti = ViewIterator(left.start, _r[0], _r[1], broadcast_shape)
    _r = calculate_dot_strides(right.strides, right.backstrides,
                               broadcast_shape, right_skip)
-    righti = ViewIterator(0, _r[0], _r[1], broadcast_shape)
+    righti = ViewIterator(right.start, _r[0], _r[1], broadcast_shape)
+    if right.size==4:
+        xxx
     while not outi.done():
+        '''
         dot_driver.jit_merge_point(left=left,
                                    right=right,
                                    shape_len=shape_len,
@@ -73,10 +80,17 @@
                                    righti=righti,
                                    outi=outi,
                                    result=result,
+                                   dtype=dtype,
+                                   sig=None, #For get_printable_location
                                    )
-        v = mul(dtype, left.getitem(lefti.offset),
-                right.getitem(righti.offset))
-        value = add(dtype, v, result.getitem(outi.offset))
+        '''
+        lval = left.getitem(lefti.offset).convert_to(dtype)
+        rval = right.getitem(righti.offset).convert_to(dtype)
+        outval = result.getitem(outi.offset).convert_to(dtype)
+        v = dtype.itemtype.mul(lval, rval)
+        value = dtype.itemtype.add(v, outval)
+        #Do I need to convert it to result.dtype or does settiem do that?
+
         assert outi.offset < result.size
         result.setitem(outi.offset, value)
         outi = outi.next(shape_len)
         righti = righti.next(shape_len)

diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -890,7 +890,7 @@
         c = dot(a, b)
         assert (c == [[[14, 38, 62], [38, 126, 214], [62, 214, 366]],
                       [[86, 302, 518], [110, 390, 670], [134, 478, 822]]]).all()
-        c = dot(a, b[:, :, 2])
+        c = dot(a, b[:, 2])
         assert (c == [[38, 126, 214], [302, 390, 478]]).all()

     def test_dot_constant(self):

From noreply at buildbot.pypy.org Fri Jan 20 15:52:34 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Fri, 20 Jan 2012 15:52:34 +0100 (CET)
Subject: [pypy-commit] pypy core-only-tracing: tentative rpython fix
Message-ID: <20120120145234.399C282CF8@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: core-only-tracing
Changeset: r51541:8789a7c425dc
Date: 2012-01-20 15:52 +0100
http://bitbucket.org/pypy/pypy/changeset/8789a7c425dc/

Log: tentative rpython fix

diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py
--- a/pypy/rlib/jit.py
+++ b/pypy/rlib/jit.py
@@ -527,8 +527,8 @@
                 raise ValueError
             name = parts[0]
             value = parts[1]
-            if name == 'enable_opts':
-                set_param(driver, 'enable_opts', value)
+            if name == 'enable_opts' or name == 'jitmode':
+                set_param(driver, name, value)
             else:
                 for name1, _ in unroll_parameters:
                     if name1 == name and name1 != 'enable_opts':

From pullrequests-noreply at bitbucket.org Fri Jan 20 17:07:55 2012
From: pullrequests-noreply at bitbucket.org (Bitbucket)
Date: Fri, 20 Jan 2012 16:07:55 -0000
Subject: [pypy-commit] [OPEN] Pull request #22 for pypy/pypy: fix documentation bugs
Message-ID: A new pull request has been opened by tomo cocoa. cocoatomo/pypydoc has changes to be pulled into pypy/pypy. https://bitbucket.org/pypy/pypy/pull-request/22/fix-documentation-bugs Title: fix documentation bugs some miscellaneous typos were fixed Changes to be pulled: -- This is an issue notification from bitbucket.org. You are receiving this either because you are participating in a pull request, or you are following it. From noreply at buildbot.pypy.org Fri Jan 20 17:14:39 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 20 Jan 2012 17:14:39 +0100 (CET) Subject: [pypy-commit] pypy default: For clarity. It used to work before too with a too-short array on 64-bits, Message-ID: <20120120161439.924D482CF8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51542:437f207e58f0 Date: 2012-01-20 17:13 +0100 http://bitbucket.org/pypy/pypy/changeset/437f207e58f0/ Log: For clarity. It used to work before too with a too-short array on 64-bits, but that's only because it would only need to read 'rax' or 'xmm0', which were both in the array. 
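The size bump in the diff below can be sanity-checked with a little arithmetic. This is a rough sketch, not PyPy code; the register counts are assumptions mirroring the comments in the changeset (16 word-sized slots reserved for general-purpose registers on both variants, plus 8 vs 16 XMM slots), not values read from the PyPy sources:

```python
# Hypothetical sketch (not PyPy code): the scratch area handed to the
# failure-recovery code needs one word-sized slot per register that the
# recovery bytecode might read, so its length depends on the target.
def null_register_slots(is_x86_32):
    gp_slots = 16                       # assumed: slots for GP registers
    xmm_slots = 8 if is_x86_32 else 16  # assumed: 8 XMM regs on 32-bit, 16 on 64-bit
    return gp_slots + xmm_slots

print(null_register_slots(True))   # 24 -- the old hard-coded size
print(null_register_slots(False))  # 32 -- what 64-bit actually needs
```

Under these assumed counts, the old constant 24 only covered the 64-bit case by accident, which is exactly what the log message above describes.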
diff --git a/pypy/jit/backend/x86/runner.py b/pypy/jit/backend/x86/runner.py --- a/pypy/jit/backend/x86/runner.py +++ b/pypy/jit/backend/x86/runner.py @@ -6,7 +6,7 @@ from pypy.jit.codewriter import longlong from pypy.jit.metainterp import history, compile from pypy.jit.backend.x86.assembler import Assembler386 -from pypy.jit.backend.x86.arch import FORCE_INDEX_OFS +from pypy.jit.backend.x86.arch import FORCE_INDEX_OFS, IS_X86_32 from pypy.jit.backend.x86.profagent import ProfileAgent from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU from pypy.jit.backend.x86 import regloc @@ -142,7 +142,9 @@ cast_ptr_to_int._annspecialcase_ = 'specialize:arglltype(0)' cast_ptr_to_int = staticmethod(cast_ptr_to_int) - all_null_registers = lltype.malloc(rffi.LONGP.TO, 24, + all_null_registers = lltype.malloc(rffi.LONGP.TO, + IS_X86_32 and (16+8) # 16 + 8 regs + or (16+16), # 16 + 16 regs flavor='raw', zero=True, immortal=True) From noreply at buildbot.pypy.org Fri Jan 20 17:14:40 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 20 Jan 2012 17:14:40 +0100 (CET) Subject: [pypy-commit] pypy default: I think that this "if" doesn't make sense. Kill tentatively. Message-ID: <20120120161440.C8FC582CF8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51543:a80226d0b3cd Date: 2012-01-20 17:14 +0100 http://bitbucket.org/pypy/pypy/changeset/a80226d0b3cd/ Log: I think that this "if" doesn't make sense. Kill tentatively. 
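The reasoning behind killing that `if` can be shown with a toy model; the class names here are invented stand-ins for the illustration, not the real backend types. Once the register allocator has already lowered a constant argument into an immediate location inside `arglocs`, re-inspecting the original operation's argument boxes is redundant:

```python
# Toy model (invented types, not PyPy's): the allocator decides per
# operand whether it lives in a register or as an immediate, and the
# assembler only consumes the resulting locations.
class ImmLoc:
    def __init__(self, value):
        self.value = value

class RegLoc:
    def __init__(self, name):
        self.name = name

def call_target(arglocs):
    # After the patch below: the function address is always taken from
    # arglocs[2], whether the allocator put an immediate or a register
    # there -- no need for a special case on the Const operand.
    return arglocs[2]

print(type(call_target([None, None, ImmLoc(0x1234)])).__name__)  # ImmLoc
print(type(call_target([None, None, RegLoc('eax')])).__name__)   # RegLoc
```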
diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -2042,10 +2042,7 @@ size = sizeloc.value signloc = arglocs[1] - if isinstance(op.getarg(0), Const): - x = imm(op.getarg(0).getint()) - else: - x = arglocs[2] + x = arglocs[2] # the function address if x is eax: tmp = ecx else: From noreply at buildbot.pypy.org Fri Jan 20 17:17:16 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 20 Jan 2012 17:17:16 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) make sure to pass a zeroed piece of memory to failure_recovery_func when forcing for the area where registers would be stored. Message-ID: <20120120161716.EF1F882CF8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51544:58888d2298ed Date: 2012-01-20 13:21 +0100 http://bitbucket.org/pypy/pypy/changeset/58888d2298ed/ Log: (arigo, bivab) make sure to pass a zeroed piece of memory to failure_recovery_func when forcing for the area where registers would be stored. 
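The size of the zeroed area allocated in the diff below can be estimated the same way. The counts here are assumptions (16 ARM core registers, 16 double-word-wide VFP registers), not values read out of `pypy/jit/backend/arm/registers.py`:

```python
# Back-of-the-envelope sketch, not PyPy code: one word per core register
# plus two words per double-wide VFP register, all zero-initialized, so
# that a forced guard reads 0 for any register slot that was not
# actually live at the guard.
N_CORE_REGS = 16  # assumed: r0..r15
N_VFP_REGS = 16   # assumed: d0..d15, two words each

slots = N_VFP_REGS * 2 + N_CORE_REGS
print(slots)  # 48 word-sized, zeroed slots
```

Zeroing matters because the recovery bytecode is handed this area unconditionally; the new `test_forcing_op_with_fail_arg_in_reg` below checks precisely that a register the call did not set reads back as 0.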
diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py --- a/pypy/jit/backend/arm/runner.py +++ b/pypy/jit/backend/arm/runner.py @@ -1,5 +1,4 @@ from pypy.jit.backend.arm.assembler import AssemblerARM -from pypy.jit.backend.arm.arch import WORD, DOUBLE_WORD from pypy.jit.backend.arm.registers import all_regs, all_vfp_regs from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU from pypy.rpython.llinterp import LLInterpreter @@ -100,6 +99,10 @@ cast_ptr_to_int._annspecialcase_ = 'specialize:arglltype(0)' cast_ptr_to_int = staticmethod(cast_ptr_to_int) + all_null_registers = lltype.malloc(rffi.LONGP.TO, + len(all_vfp_regs) * 2 + len(all_regs), + flavor='raw', zero=True, immortal=True) + def force(self, addr_of_force_index): TP = rffi.CArrayPtr(lltype.Signed) fail_index = rffi.cast(TP, addr_of_force_index)[0] @@ -107,16 +110,12 @@ faildescr = self.get_fail_descr_from_number(fail_index) rffi.cast(TP, addr_of_force_index)[0] = ~fail_index bytecode = self.assembler._find_failure_recovery_bytecode(faildescr) + addr_all_null_regsiters = rffi.cast(rffi.LONG, self.all_null_registers) # start of "no gc operation!" block - frame_depth = faildescr._arm_current_frame_depth * WORD - addr_end_of_frame = (addr_of_force_index - - (frame_depth + - len(all_regs) * WORD + - len(all_vfp_regs) * DOUBLE_WORD)) fail_index_2 = self.assembler.failure_recovery_func( bytecode, addr_of_force_index, - addr_end_of_frame) + addr_all_null_regsiters) self.assembler.leave_jitted_hook() # end of "no gc operation!" 
block assert fail_index == fail_index_2 From noreply at buildbot.pypy.org Fri Jan 20 17:17:18 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 20 Jan 2012 17:17:18 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Add a test to make sure that cpu.force uses a zeroed piece of memory as the area where the registers should be read when restoring the values to the failboxes Message-ID: <20120120161718.354A382CF8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51545:f523474c7b3e Date: 2012-01-20 13:54 +0100 http://bitbucket.org/pypy/pypy/changeset/f523474c7b3e/ Log: Add a test to make sure that cpu.force uses a zeroed piece of memory as the area where the registers should be read when restoring the values to the failboxes diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -3298,6 +3298,43 @@ fail = self.cpu.execute_token(looptoken, null_box.getref_base()) assert fail.identifier == 99 + def test_forcing_op_with_fail_arg_in_reg(self): + values = [] + def maybe_force(token, flag): + self.cpu.force(token) + values.append(self.cpu.get_latest_value_int(0)) + values.append(token) + return 42 + + FUNC = self.FuncType([lltype.Signed, lltype.Signed], lltype.Signed) + func_ptr = llhelper(lltype.Ptr(FUNC), maybe_force) + funcbox = self.get_funcbox(self.cpu, func_ptr).constbox() + calldescr = self.cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, + EffectInfo.MOST_GENERAL) + i0 = BoxInt() + i1 = BoxInt() + i2 = BoxInt() + tok = BoxInt() + faildescr = BasicFailDescr(23) + ops = [ + ResOperation(rop.FORCE_TOKEN, [], tok), + ResOperation(rop.CALL_MAY_FORCE, [funcbox, tok, i1], i2, + descr=calldescr), + ResOperation(rop.GUARD_NOT_FORCED, [], None, descr=faildescr), + ResOperation(rop.FINISH, [i2], None, descr=BasicFailDescr(0)) + ] + ops[2].setfailargs([i2]) + looptoken = JitCellToken() + 
self.cpu.compile_loop([i0, i1], ops, looptoken) + fail = self.cpu.execute_token(looptoken, 20, 0) + assert fail.identifier == 23 + assert self.cpu.get_latest_value_int(0) == 42 + # make sure that force reads the registers from a zeroed piece of + # memory + assert values[0] == 0 + token = self.cpu.get_latest_force_token() + assert values[1] == token + class OOtypeBackendTest(BaseBackendTest): type_system = 'ootype' From noreply at buildbot.pypy.org Fri Jan 20 17:17:19 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 20 Jan 2012 17:17:19 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge heads Message-ID: <20120120161719.6917682CF8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51546:b5daa363d5b9 Date: 2012-01-20 13:56 +0100 http://bitbucket.org/pypy/pypy/changeset/b5daa363d5b9/ Log: merge heads diff --git a/pypy/jit/backend/arm/runner.py b/pypy/jit/backend/arm/runner.py --- a/pypy/jit/backend/arm/runner.py +++ b/pypy/jit/backend/arm/runner.py @@ -1,5 +1,4 @@ from pypy.jit.backend.arm.assembler import AssemblerARM -from pypy.jit.backend.arm.arch import WORD, DOUBLE_WORD from pypy.jit.backend.arm.registers import all_regs, all_vfp_regs from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU from pypy.rpython.llinterp import LLInterpreter @@ -100,6 +99,10 @@ cast_ptr_to_int._annspecialcase_ = 'specialize:arglltype(0)' cast_ptr_to_int = staticmethod(cast_ptr_to_int) + all_null_registers = lltype.malloc(rffi.LONGP.TO, + len(all_vfp_regs) * 2 + len(all_regs), + flavor='raw', zero=True, immortal=True) + def force(self, addr_of_force_index): TP = rffi.CArrayPtr(lltype.Signed) fail_index = rffi.cast(TP, addr_of_force_index)[0] @@ -107,16 +110,12 @@ faildescr = self.get_fail_descr_from_number(fail_index) rffi.cast(TP, addr_of_force_index)[0] = ~fail_index bytecode = self.assembler._find_failure_recovery_bytecode(faildescr) + addr_all_null_regsiters = rffi.cast(rffi.LONG, self.all_null_registers) # start of "no 
gc operation!" block - frame_depth = faildescr._arm_current_frame_depth * WORD - addr_end_of_frame = (addr_of_force_index - - (frame_depth + - len(all_regs) * WORD + - len(all_vfp_regs) * DOUBLE_WORD)) fail_index_2 = self.assembler.failure_recovery_func( bytecode, addr_of_force_index, - addr_end_of_frame) + addr_all_null_regsiters) self.assembler.leave_jitted_hook() # end of "no gc operation!" block assert fail_index == fail_index_2 From noreply at buildbot.pypy.org Fri Jan 20 17:17:20 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 20 Jan 2012 17:17:20 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) implement the BLX ARM instruction to branch to an address stored in a register Message-ID: <20120120161720.9D23382CF8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51547:95557903da83 Date: 2012-01-20 17:11 +0100 http://bitbucket.org/pypy/pypy/changeset/95557903da83/ Log: (arigo, bivab) implement the BLX ARM instruction to branch to an address stored in a register diff --git a/pypy/jit/backend/arm/codebuilder.py b/pypy/jit/backend/arm/codebuilder.py --- a/pypy/jit/backend/arm/codebuilder.py +++ b/pypy/jit/backend/arm/codebuilder.py @@ -178,14 +178,11 @@ def BL(self, addr, c=cond.AL): target = rffi.cast(rffi.INT, addr) - if c == cond.AL: - self.ADD_ri(reg.lr.value, reg.pc.value, arch.PC_OFFSET / 2) - self.LDR_ri(reg.pc.value, reg.pc.value, imm=-arch.PC_OFFSET / 2) - self.write32(target) - else: - self.gen_load_int(reg.ip.value, target, cond=c) - self.MOV_rr(reg.lr.value, reg.pc.value, cond=c) - self.MOV_rr(reg.pc.value, reg.ip.value, cond=c) + self.gen_load_int(reg.ip.value, target, cond=c) + self.BLX(reg.ip.value) + + def BLX(self, reg, c=cond.AL): + self.write32(c << 28 | 0x12FFF3 << 4 | (reg & 0xF)) def MOVT_ri(self, rd, imm16, c=cond.AL): """Move Top writes an immediate value to the top halfword of the From noreply at buildbot.pypy.org Fri Jan 20 17:17:21 2012 From: noreply at 
buildbot.pypy.org (bivab) Date: Fri, 20 Jan 2012 17:17:21 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) refactor the code used to make calls to handle more work in the register allocator and perform calls only using locations. Message-ID: <20120120161721.D843A82CF8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51548:d06bbcb1c9fb Date: 2012-01-20 17:13 +0100 http://bitbucket.org/pypy/pypy/changeset/d06bbcb1c9fb/ Log: (arigo, bivab) refactor the code used to make calls to handle more work in the register allocator and perform calls only using locations. diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -361,14 +361,13 @@ self.gen_func_epilog() return fcond - def emit_op_call(self, op, args, regalloc, fcond, - force_index=NO_FORCE_INDEX): - adr = args[0].value - arglist = op.getarglist()[1:] + def emit_op_call(self, op, arglocs, regalloc, fcond, force_index=NO_FORCE_INDEX): if force_index == NO_FORCE_INDEX: force_index = self.write_new_force_index() - cond = self._emit_call(force_index, adr, arglist, - regalloc, fcond, op.result) + resloc = arglocs[0] + adr = arglocs[1] + arglist = arglocs[2:] + cond = self._emit_call(force_index, adr, arglist, fcond, resloc) descr = op.getdescr() #XXX Hack, Hack, Hack if (op.result and not we_are_translated()): @@ -379,15 +378,10 @@ self._ensure_result_bit_extension(loc, size, signed) return cond - # XXX improve this interface - # emit_op_call_may_force - # XXX improve freeing of stuff here - # XXX add an interface that takes locations instead of boxes - def _emit_call(self, force_index, adr, args, regalloc, fcond=c.AL, - result=None): - n_args = len(args) - reg_args = count_reg_args(args) - + def _emit_call(self, force_index, adr, arglocs, fcond=c.AL, resloc=None): + assert self._regalloc.before_call_called + n_args = len(arglocs) + reg_args = 
count_reg_args(arglocs) # all arguments past the 4th go on the stack n = 0 # used to count the number of words pushed on the stack, so we #can later modify the SP back to its original value @@ -396,7 +390,7 @@ stack_args = [] count = 0 for i in range(reg_args, n_args): - arg = args[i] + arg = arglocs[i] if arg.type != FLOAT: count += 1 n += WORD @@ -417,8 +411,7 @@ if arg is None: self.mc.PUSH([r.ip.value]) else: - self.regalloc_push(regalloc.loc(arg)) - + self.regalloc_push(arg) # collect variables that need to go in registers and the registers they # will be stored in num = 0 @@ -427,16 +420,16 @@ non_float_regs = [] float_locs = [] for i in range(reg_args): - arg = args[i] + arg = arglocs[i] if arg.type == FLOAT and count % 2 != 0: num += 1 count = 0 reg = r.caller_resp[num] if arg.type == FLOAT: - float_locs.append((regalloc.loc(arg), reg)) + float_locs.append((arg, reg)) else: - non_float_locs.append(regalloc.loc(arg)) + non_float_locs.append(arg) non_float_regs.append(reg) if arg.type == FLOAT: @@ -457,14 +450,12 @@ #the actual call self.mc.BL(adr) self.mark_gc_roots(force_index) - regalloc.possibly_free_vars(args) # readjust the sp in case we passed some args on the stack if n > 0: self._adjust_sp(-n, fcond=fcond) # restore the argumets stored on the stack - if result is not None: - resloc = regalloc.after_call(result) + if resloc is not None: if resloc.is_vfp_reg(): # move result to the allocated register self.mov_to_vfp_loc(r.r0, r.r1, resloc) @@ -889,8 +880,8 @@ length_box = TempInt() length_loc = regalloc.force_allocate_reg(length_box, forbidden_vars, selected_reg=r.r2) - imm = regalloc.convert_to_imm(args[4]) - self.load(length_loc, imm) + immloc = regalloc.convert_to_imm(args[4]) + self.load(length_loc, immloc) if is_unicode: bytes_box = TempPtr() bytes_loc = regalloc.force_allocate_reg(bytes_box, @@ -902,8 +893,9 @@ length_box = bytes_box length_loc = bytes_loc # call memcpy() - self._emit_call(NO_FORCE_INDEX, self.memcpy_addr, - [dstaddr_box, 
srcaddr_box, length_box], regalloc) + regalloc.before_call() + self._emit_call(NO_FORCE_INDEX, imm(self.memcpy_addr), + [dstaddr_loc, srcaddr_loc, length_loc]) regalloc.possibly_free_var(length_box) regalloc.possibly_free_var(dstaddr_box) @@ -993,17 +985,19 @@ # XXX Split into some helper methods def emit_guard_call_assembler(self, op, guard_op, arglocs, regalloc, fcond): + tmploc = arglocs[1] + resloc = arglocs[2] + callargs = arglocs[3:] + faildescr = guard_op.getdescr() fail_index = self.cpu.get_fail_descr_number(faildescr) self._write_fail_index(fail_index) - descr = op.getdescr() assert isinstance(descr, JitCellToken) - # XXX check this - # assert len(arglocs) - 2 == descr.compiled_loop_token._debug_nbargs - resbox = TempInt() - self._emit_call(fail_index, descr._arm_func_addr, - op.getarglist(), regalloc, fcond, result=resbox) + # check value + assert tmploc is r.r0 + self._emit_call(fail_index, imm(descr._arm_func_addr), + callargs, fcond, resloc=tmploc) if op.result is None: value = self.cpu.done_with_this_frame_void_v else: @@ -1016,12 +1010,8 @@ value = self.cpu.done_with_this_frame_float_v else: raise AssertionError(kind) - # check value - resloc = regalloc.try_allocate_reg(resbox) - assert resloc is r.r0 self.mc.gen_load_int(r.ip.value, value) - self.mc.CMP_rr(resloc.value, r.ip.value) - regalloc.possibly_free_var(resbox) + self.mc.CMP_rr(tmploc.value, r.ip.value) fast_jmp_pos = self.mc.currpos() self.mc.BKPT() @@ -1035,14 +1025,12 @@ asm_helper_adr = self.cpu.cast_adr_to_int(jd.assembler_helper_adr) with saved_registers(self.mc, r.caller_resp[1:] + [r.ip], r.caller_vfp_resp): - # resbox is allready in r0 - self.mov_loc_loc(arglocs[1], r.r1) + # result of previous call is in r0 + self.mov_loc_loc(arglocs[0], r.r1) self.mc.BL(asm_helper_adr) - if op.result: - resloc = regalloc.after_call(op.result) - if resloc.is_vfp_reg(): - # move result to the allocated register - self.mov_to_vfp_loc(r.r0, r.r1, resloc) + if op.result and resloc.is_vfp_reg(): + # move 
result to the allocated register + self.mov_to_vfp_loc(r.r0, r.r1, resloc) # jump to merge point jmp_pos = self.mc.currpos() @@ -1063,11 +1051,10 @@ fielddescr = jd.vable_token_descr assert isinstance(fielddescr, FieldDescr) ofs = fielddescr.offset - resloc = regalloc.force_allocate_reg(resbox) - self.mov_loc_loc(arglocs[1], r.ip) - self.mc.MOV_ri(resloc.value, 0) - self.mc.STR_ri(resloc.value, r.ip.value, ofs) - regalloc.possibly_free_var(resbox) + tmploc = regalloc.get_scratch_reg(INT) + self.mov_loc_loc(arglocs[0], r.ip) + self.mc.MOV_ri(tmploc.value, 0) + self.mc.STR_ri(tmploc.value, r.ip.value, ofs) if op.result is not None: # load the return value from fail_boxes_xxx[0] @@ -1080,8 +1067,6 @@ adr = self.fail_boxes_float.get_addr_for_num(0) else: raise AssertionError(kind) - resloc = regalloc.force_allocate_reg(op.result) - regalloc.possibly_free_var(resbox) self.mc.gen_load_int(r.ip.value, adr) if op.result.type == FLOAT: self.mc.VLDR(resloc.value, r.ip.value) @@ -1118,14 +1103,48 @@ def emit_guard_call_may_force(self, op, guard_op, arglocs, regalloc, fcond): + faildescr = guard_op.getdescr() + fail_index = self.cpu.get_fail_descr_number(faildescr) + self._write_fail_index(fail_index) + numargs = op.numargs() + callargs = arglocs[2:numargs] + adr = arglocs[1] + resloc = arglocs[0] + self._emit_call(fail_index, adr, callargs, fcond, resloc) + + self.mc.LDR_ri(r.ip.value, r.fp.value) + self.mc.CMP_ri(r.ip.value, 0) + self._emit_guard(guard_op, arglocs[1 + numargs:], c.GE, save_exc=True) + return fcond + + def emit_guard_call_release_gil(self, op, guard_op, arglocs, regalloc, + fcond): + + # first, close the stack in the sense of the asmgcc GC root tracker + gcrootmap = self.cpu.gc_ll_descr.gcrootmap + numargs = op.numargs() + resloc = arglocs[0] + adr = arglocs[1] + callargs = arglocs[2:numargs] + + if gcrootmap: + self.call_release_gil(gcrootmap, arglocs, fcond) + # do the call + faildescr = guard_op.getdescr() + fail_index = 
self.cpu.get_fail_descr_number(faildescr) + self._write_fail_index(fail_index) + + self._emit_call(fail_index, adr, callargs, fcond, resloc) + # then reopen the stack + if gcrootmap: + self.call_reacquire_gil(gcrootmap, resloc, fcond) + self.mc.LDR_ri(r.ip.value, r.fp.value) self.mc.CMP_ri(r.ip.value, 0) - self._emit_guard(guard_op, arglocs, c.GE, save_exc=True) + self._emit_guard(guard_op, arglocs[1 + numargs:], c.GE, save_exc=True) return fcond - emit_guard_call_release_gil = emit_guard_call_may_force - def call_release_gil(self, gcrootmap, save_registers, fcond): # First, we need to save away the registers listed in # 'save_registers' that are not callee-save. XXX We assume that @@ -1136,8 +1155,7 @@ regs_to_save.append(reg) assert gcrootmap.is_shadow_stack with saved_registers(self.mc, regs_to_save): - self._emit_call(NO_FORCE_INDEX, self.releasegil_addr, [], - self._regalloc, fcond) + self._emit_call(NO_FORCE_INDEX, imm(self.releasegil_addr), [], fcond) def call_reacquire_gil(self, gcrootmap, save_loc, fcond): # save the previous result into the stack temporarily. 
@@ -1154,8 +1172,7 @@ regs_to_save.append(r.ip) # for alingment assert gcrootmap.is_shadow_stack with saved_registers(self.mc, regs_to_save, vfp_regs_to_save): - self._emit_call(NO_FORCE_INDEX, self.reacqgil_addr, [], - self._regalloc, fcond) + self._emit_call(NO_FORCE_INDEX, imm(self.reacqgil_addr), [], fcond) def write_new_force_index(self): # for shadowstack only: get a new, unused force_index number and diff --git a/pypy/jit/backend/arm/regalloc.py b/pypy/jit/backend/arm/regalloc.py --- a/pypy/jit/backend/arm/regalloc.py +++ b/pypy/jit/backend/arm/regalloc.py @@ -553,12 +553,28 @@ args = self.prepare_op_math_sqrt(op, fcond) self.assembler.emit_op_math_sqrt(op, args, self, fcond) return - args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] + return self._prepare_call(op) + + def _prepare_call(self, op, force_store=[], save_all_regs=False): + args = [] + args.append(None) + for i in range(op.numargs()): + args.append(self.loc(op.getarg(i))) + # spill variables that need to be saved around calls + self.vfprm.before_call(save_all_regs=save_all_regs) + if not save_all_regs: + gcrootmap = self.assembler.cpu.gc_ll_descr.gcrootmap + if gcrootmap and gcrootmap.is_shadow_stack: + save_all_regs = 2 + self.rm.before_call(save_all_regs=save_all_regs) + if op.result: + resloc = self.after_call(op.result) + args[0] = resloc + self.before_call_called = True return args def prepare_op_call_malloc_gc(self, op, fcond): - args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] - return args + return self._prepare_call(op) def _prepare_guard(self, op, args=None): if args is None: @@ -1033,58 +1049,25 @@ self._compute_hint_frame_locations_from_descr(descr) def prepare_guard_call_may_force(self, op, guard_op, fcond): - faildescr = guard_op.getdescr() - fail_index = self.cpu.get_fail_descr_number(faildescr) - self.assembler._write_fail_index(fail_index) - args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] - for v in guard_op.getfailargs(): - if v in 
self.rm.reg_bindings or v in self.vfprm.reg_bindings: - self.force_spill_var(v) - self.assembler.emit_op_call(op, args, self, fcond, fail_index) - locs = self._prepare_guard(guard_op) - self.possibly_free_vars(guard_op.getfailargs()) - return locs - - def prepare_guard_call_release_gil(self, op, guard_op, fcond): - # first, close the stack in the sense of the asmgcc GC root tracker - gcrootmap = self.cpu.gc_ll_descr.gcrootmap - if gcrootmap: - arglocs = [] - args = op.getarglist() - for i in range(op.numargs()): - loc = self._ensure_value_is_boxed(op.getarg(i), args) - arglocs.append(loc) - self.assembler.call_release_gil(gcrootmap, arglocs, fcond) - # do the call - faildescr = guard_op.getdescr() - fail_index = self.cpu.get_fail_descr_number(faildescr) - self.assembler._write_fail_index(fail_index) - args = [imm(rffi.cast(lltype.Signed, op.getarg(0).getint()))] - self.assembler.emit_op_call(op, args, self, fcond, fail_index) - # then reopen the stack - if gcrootmap: - if op.result: - result_loc = self.call_result_location(op.result) - else: - result_loc = None - self.assembler.call_reacquire_gil(gcrootmap, result_loc, fcond) - locs = self._prepare_guard(guard_op) - return locs + args = self._prepare_call(op, save_all_regs=True) + return self._prepare_guard(guard_op, args) + prepare_guard_call_release_gil = prepare_guard_call_may_force def prepare_guard_call_assembler(self, op, guard_op, fcond): descr = op.getdescr() assert isinstance(descr, JitCellToken) jd = descr.outermost_jitdriver_sd assert jd is not None - size = jd.portal_calldescr.get_result_size() vable_index = jd.index_of_virtualizable if vable_index >= 0: self._sync_var(op.getarg(vable_index)) vable = self.frame_manager.loc(op.getarg(vable_index)) else: vable = imm(0) + # make sure the call result location is free + tmploc = self.get_scratch_reg(INT, selected_reg=r.r0) self.possibly_free_vars(guard_op.getfailargs()) - return [imm(size), vable] + return [vable, tmploc] + self._prepare_call(op, 
save_all_regs=True) def _prepare_args_for_new_op(self, new_args): gc_ll_descr = self.cpu.gc_ll_descr From noreply at buildbot.pypy.org Fri Jan 20 17:17:23 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 20 Jan 2012 17:17:23 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) Add a test that checks the behaviour of calling functions stored in boxes Message-ID: <20120120161723.1BD0D82CF8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51549:9bd04064df9c Date: 2012-01-20 17:14 +0100 http://bitbucket.org/pypy/pypy/changeset/9bd04064df9c/ Log: (arigo, bivab) Add a test that checks the behaviour of calling functions stored in boxes diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -573,6 +573,28 @@ res = self.execute_operation(rop.CALL, [funcbox] + map(BoxInt, args), 'int', descr=calldescr) assert res.value == func(*args) + def test_call_box_func(self): + def a(a1, a2): + return a1 + a2 + def b(b1, b2): + return b1 * b2 + + arg1 = 40 + arg2 = 2 + for f in [a, b]: + TP = lltype.Signed + FPTR = self.Ptr(self.FuncType([TP, TP], TP)) + func_ptr = llhelper(FPTR, f) + FUNC = deref(FPTR) + funcconst = self.get_funcbox(self.cpu, func_ptr) + funcbox = funcconst.clonebox() + calldescr = self.cpu.calldescrof(FUNC, FUNC.ARGS, FUNC.RESULT, + EffectInfo.MOST_GENERAL) + res = self.execute_operation(rop.CALL, + [funcbox, BoxInt(arg1), BoxInt(arg2)], + 'int', descr=calldescr) + assert res.getint() == f(arg1, arg2) + def test_call_stack_alignment(self): # test stack alignment issues, notably for Mac OS/X. # also test the ordering of the arguments. 
From noreply at buildbot.pypy.org Fri Jan 20 17:17:24 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 20 Jan 2012 17:17:24 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: (arigo, bivab) add support for calling functions using indirect calls Message-ID: <20120120161724.538ED82CF8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51550:ec71ed6721d4 Date: 2012-01-20 17:15 +0100 http://bitbucket.org/pypy/pypy/changeset/ec71ed6721d4/ Log: (arigo, bivab) add support for calling functions using indirect calls diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -437,10 +437,14 @@ else: num += 1 count += 1 - - # spill variables that need to be saved around calls - regalloc.before_call(save_all_regs=2) - + # Check that the address of the function we want to call is not + # currently stored in one of the registers used to pass the arguments. 
+ # If this happens to be the case we remap the register to r4 and use r4 + # to call the function + if adr in non_float_regs: + non_float_locs.append(adr) + non_float_regs.append(r.r4) + adr = r.r4 # remap values stored in core registers remap_frame_layout(self, non_float_locs, non_float_regs, r.ip) @@ -448,7 +452,15 @@ self.mov_from_vfp_loc(loc, reg, r.all_regs[reg.value + 1]) #the actual call - self.mc.BL(adr) + if adr.is_imm(): + self.mc.BL(adr.value) + elif adr.is_stack(): + self.mov_loc_loc(adr, r.ip) + adr = r.ip + else: + assert adr.is_reg() + if adr.is_reg(): + self.mc.BLX(adr.value) self.mark_gc_roots(force_index) # readjust the sp in case we passed some args on the stack if n > 0: From noreply at buildbot.pypy.org Fri Jan 20 17:17:25 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 20 Jan 2012 17:17:25 +0100 (CET) Subject: [pypy-commit] pypy default: (arigo, bivab) Add a test that checks the behaviour of calling functions stored in boxes Message-ID: <20120120161725.8912482CF8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: Changeset: r51551:509859199b7d Date: 2012-01-20 17:14 +0100 http://bitbucket.org/pypy/pypy/changeset/509859199b7d/ Log: (arigo, bivab) Add a test that checks the behaviour of calling functions stored in boxes diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -551,6 +551,28 @@ res = self.execute_operation(rop.CALL, [funcbox] + map(BoxInt, args), 'int', descr=calldescr) assert res.value == func(*args) + def test_call_box_func(self): + def a(a1, a2): + return a1 + a2 + def b(b1, b2): + return b1 * b2 + + arg1 = 40 + arg2 = 2 + for f in [a, b]: + TP = lltype.Signed + FPTR = self.Ptr(self.FuncType([TP, TP], TP)) + func_ptr = llhelper(FPTR, f) + FUNC = deref(FPTR) + funcconst = self.get_funcbox(self.cpu, func_ptr) + funcbox = funcconst.clonebox() + calldescr = self.cpu.calldescrof(FUNC, 
FUNC.ARGS, FUNC.RESULT, + EffectInfo.MOST_GENERAL) + res = self.execute_operation(rop.CALL, + [funcbox, BoxInt(arg1), BoxInt(arg2)], + 'int', descr=calldescr) + assert res.getint() == f(arg1, arg2) + def test_call_stack_alignment(self): # test stack alignment issues, notably for Mac OS/X. # also test the ordering of the arguments. From noreply at buildbot.pypy.org Fri Jan 20 17:23:11 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 20 Jan 2012 17:23:11 +0100 (CET) Subject: [pypy-commit] pypy default: Fix the new test_call_box_func() on 64-bit. Message-ID: <20120120162311.19CFC82CF8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51552:245cb20ecdc0 Date: 2012-01-20 17:22 +0100 http://bitbucket.org/pypy/pypy/changeset/245cb20ecdc0/ Log: Fix the new test_call_box_func() on 64-bit. diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1118,6 +1118,12 @@ for src, dst in singlefloats: self.mc.MOVD(dst, src) # Finally remap the arguments in the main regs + # If x is a register and is in dst_locs, then oops, it needs to + # be moved away: + if x in dst_locs: + src_locs.append(x) + dst_locs.append(r10) + x = r10 remap_frame_layout(self, src_locs, dst_locs, X86_64_SCRATCH_REG) self._regalloc.reserve_param(len(pass_on_stack)) From noreply at buildbot.pypy.org Fri Jan 20 17:24:18 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 20 Jan 2012 17:24:18 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: forgot to kill this line Message-ID: <20120120162418.581D682CF8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51553:5172f0c3f717 Date: 2012-01-20 17:24 +0100 http://bitbucket.org/pypy/pypy/changeset/5172f0c3f717/ Log: forgot to kill this line diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ 
b/pypy/jit/backend/arm/opassembler.py @@ -379,7 +379,6 @@ return cond def _emit_call(self, force_index, adr, arglocs, fcond=c.AL, resloc=None): - assert self._regalloc.before_call_called n_args = len(arglocs) reg_args = count_reg_args(arglocs) # all arguments past the 4th go on the stack From noreply at buildbot.pypy.org Fri Jan 20 17:26:32 2012 From: noreply at buildbot.pypy.org (cocoatomo) Date: Fri, 20 Jan 2012 17:26:32 +0100 (CET) Subject: [pypy-commit] pypy default: fix typo Message-ID: <20120120162632.E75C382CF8@wyvern.cs.uni-duesseldorf.de> Author: cocoatomo Branch: Changeset: r51554:ec4009b69fc3 Date: 2012-01-20 11:14 +0900 http://bitbucket.org/pypy/pypy/changeset/ec4009b69fc3/ Log: fix typo diff --git a/pypy/doc/translation.rst b/pypy/doc/translation.rst --- a/pypy/doc/translation.rst +++ b/pypy/doc/translation.rst @@ -155,7 +155,7 @@ function. The two input variables are the exception class and the exception value, respectively. (No other block will actually link to the exceptblock if the function does not - explicitely raise exceptions.) + explicitly raise exceptions.) ``Block`` @@ -325,7 +325,7 @@ Mutable objects need special treatment during annotation, because the annotation of contained values needs to be possibly updated to account for mutation operations, and consequently the annotation information -reflown through the relevant parts of the flow the graphs. +reflown through the relevant parts of the flow graphs. * ``SomeList`` stands for a list of homogeneous type (i.e. all the elements of the list are represented by a single common ``SomeXxx`` @@ -503,8 +503,8 @@ Since RPython is a garbage collected language there is a lot of heap memory allocation going on all the time, which would either not occur at all in a more -traditional explicitely managed language or results in an object which dies at -a time known in advance and can thus be explicitely deallocated. 
For example a +traditional explicitly managed language or results in an object which dies at +a time known in advance and can thus be explicitly deallocated. For example a loop of the following form:: for i in range(n): @@ -696,7 +696,7 @@ So far it is the second most mature high level backend after GenCLI: it still can't translate the full Standard Interpreter, but after the -Leysin sprint we were able to compile and run the rpytstone and +Leysin sprint we were able to compile and run the rpystone and richards benchmarks. GenJVM is almost entirely the work of Niko Matsakis, who worked on it From noreply at buildbot.pypy.org Fri Jan 20 17:26:34 2012 From: noreply at buildbot.pypy.org (cocoatomo) Date: Fri, 20 Jan 2012 17:26:34 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120120162634.96EC582CF8@wyvern.cs.uni-duesseldorf.de> Author: cocoatomo Branch: Changeset: r51555:3f7ae53430c4 Date: 2012-01-20 14:06 +0900 http://bitbucket.org/pypy/pypy/changeset/3f7ae53430c4/ Log: merge diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,2 @@ from _numpypy import * -from .fromnumeric import * +from .core import * diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/__init__.py @@ -0,0 +1,1 @@ +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py rename from lib_pypy/numpypy/fromnumeric.py rename to lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -85,7 +85,7 @@ array([4, 3, 6]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on 
interp level method') # not deprecated --- copy if necessary, view otherwise @@ -273,7 +273,7 @@ [-1, -2, -3, -4, -5]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def repeat(a, repeats, axis=None): @@ -315,7 +315,7 @@ [3, 4]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def put(a, ind, v, mode='raise'): @@ -366,7 +366,7 @@ array([ 0, 1, 2, 3, -5]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def swapaxes(a, axis1, axis2): @@ -410,7 +410,7 @@ [3, 7]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def transpose(a, axes=None): @@ -451,8 +451,11 @@ (2, 1, 3) """ - raise NotImplemented('Waiting on interp level method') - + if axes is not None: + raise NotImplementedError('No "axes" arg yet.') + if not hasattr(a, 'T'): + a = numpypy.array(a) + return a.T def sort(a, axis=-1, kind='quicksort', order=None): """ @@ -553,7 +556,7 @@ dtype=[('name', '|S10'), ('height', '1: + w_res = arr.descr_sum(interp.space, + self.args[1].execute(interp)) + else: + w_res = arr.descr_sum(interp.space) elif self.name == "prod": w_res = arr.descr_prod(interp.space) elif self.name == "max": @@ -416,7 +420,7 @@ ('\]', 'array_right'), ('(->)|[\+\-\*\/]', 'operator'), ('=', 'assign'), - (',', 'coma'), + (',', 'comma'), ('\|', 'pipe'), ('\(', 'paren_left'), ('\)', 'paren_right'), @@ -504,7 +508,7 @@ return SliceConstant(start, stop, step) - def parse_expression(self, tokens): + def parse_expression(self, tokens, accept_comma=False): stack = [] while tokens.remaining(): token = tokens.pop() @@ -524,9 +528,13 @@ stack.append(RangeConstant(tokens.pop().v)) end = tokens.pop() assert end.name == 'pipe' + elif accept_comma and token.name == 'comma': + continue else: tokens.push() break + if 
accept_comma: + return stack stack.reverse() lhs = stack.pop() while stack: @@ -540,7 +548,7 @@ args = [] tokens.pop() # lparen while tokens.get(0).name != 'paren_right': - args.append(self.parse_expression(tokens)) + args += self.parse_expression(tokens, accept_comma=True) return FunctionCall(name, args) def parse_array_const(self, tokens): @@ -556,7 +564,7 @@ token = tokens.pop() if token.name == 'array_right': return elems - assert token.name == 'coma' + assert token.name == 'comma' def parse_statement(self, tokens): if (tokens.get(0).name == 'identifier' and diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -20,7 +20,7 @@ class W_Dtype(Wrappable): _immutable_fields_ = ["itemtype", "num", "kind"] - def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[]): + def __init__(self, itemtype, num, kind, name, char, w_box_type, alternate_constructors=[], aliases=[]): self.itemtype = itemtype self.num = num self.kind = kind @@ -28,6 +28,7 @@ self.char = char self.w_box_type = w_box_type self.alternate_constructors = alternate_constructors + self.aliases = aliases def malloc(self, length): # XXX find out why test_zjit explodes with tracking of allocations @@ -62,7 +63,7 @@ elif space.isinstance_w(w_dtype, space.w_str): name = space.str_w(w_dtype) for dtype in cache.builtin_dtypes: - if dtype.name == name or dtype.char == name: + if dtype.name == name or dtype.char == name or name in dtype.aliases: return dtype else: for dtype in cache.builtin_dtypes: @@ -107,7 +108,7 @@ kind=BOOLLTR, name="bool", char="?", - w_box_type = space.gettypefor(interp_boxes.W_BoolBox), + w_box_type=space.gettypefor(interp_boxes.W_BoolBox), alternate_constructors=[space.w_bool], ) self.w_int8dtype = W_Dtype( @@ -116,7 +117,7 @@ kind=SIGNEDLTR, name="int8", char="b", - w_box_type = space.gettypefor(interp_boxes.W_Int8Box) + 
w_box_type=space.gettypefor(interp_boxes.W_Int8Box) ) self.w_uint8dtype = W_Dtype( types.UInt8(), @@ -124,7 +125,7 @@ kind=UNSIGNEDLTR, name="uint8", char="B", - w_box_type = space.gettypefor(interp_boxes.W_UInt8Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt8Box), ) self.w_int16dtype = W_Dtype( types.Int16(), @@ -132,7 +133,7 @@ kind=SIGNEDLTR, name="int16", char="h", - w_box_type = space.gettypefor(interp_boxes.W_Int16Box), + w_box_type=space.gettypefor(interp_boxes.W_Int16Box), ) self.w_uint16dtype = W_Dtype( types.UInt16(), @@ -140,7 +141,7 @@ kind=UNSIGNEDLTR, name="uint16", char="H", - w_box_type = space.gettypefor(interp_boxes.W_UInt16Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt16Box), ) self.w_int32dtype = W_Dtype( types.Int32(), @@ -148,7 +149,7 @@ kind=SIGNEDLTR, name="int32", char="i", - w_box_type = space.gettypefor(interp_boxes.W_Int32Box), + w_box_type=space.gettypefor(interp_boxes.W_Int32Box), ) self.w_uint32dtype = W_Dtype( types.UInt32(), @@ -156,7 +157,7 @@ kind=UNSIGNEDLTR, name="uint32", char="I", - w_box_type = space.gettypefor(interp_boxes.W_UInt32Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt32Box), ) if LONG_BIT == 32: name = "int32" @@ -168,7 +169,7 @@ kind=SIGNEDLTR, name=name, char="l", - w_box_type = space.gettypefor(interp_boxes.W_LongBox), + w_box_type=space.gettypefor(interp_boxes.W_LongBox), alternate_constructors=[space.w_int], ) self.w_ulongdtype = W_Dtype( @@ -177,7 +178,7 @@ kind=UNSIGNEDLTR, name="u" + name, char="L", - w_box_type = space.gettypefor(interp_boxes.W_ULongBox), + w_box_type=space.gettypefor(interp_boxes.W_ULongBox), ) self.w_int64dtype = W_Dtype( types.Int64(), @@ -185,7 +186,7 @@ kind=SIGNEDLTR, name="int64", char="q", - w_box_type = space.gettypefor(interp_boxes.W_Int64Box), + w_box_type=space.gettypefor(interp_boxes.W_Int64Box), alternate_constructors=[space.w_long], ) self.w_uint64dtype = W_Dtype( @@ -194,7 +195,7 @@ kind=UNSIGNEDLTR, name="uint64", char="Q", - w_box_type = 
space.gettypefor(interp_boxes.W_UInt64Box), + w_box_type=space.gettypefor(interp_boxes.W_UInt64Box), ) self.w_float32dtype = W_Dtype( types.Float32(), @@ -202,7 +203,7 @@ kind=FLOATINGLTR, name="float32", char="f", - w_box_type = space.gettypefor(interp_boxes.W_Float32Box), + w_box_type=space.gettypefor(interp_boxes.W_Float32Box), ) self.w_float64dtype = W_Dtype( types.Float64(), @@ -212,6 +213,7 @@ char="d", w_box_type = space.gettypefor(interp_boxes.W_Float64Box), alternate_constructors=[space.w_float], + aliases=["float"], ) self.builtin_dtypes = [ diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -1,19 +1,20 @@ from pypy.rlib import jit from pypy.rlib.objectmodel import instantiate -from pypy.module.micronumpy.strides import calculate_broadcast_strides +from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ + calculate_slice_strides -# Iterators for arrays -# -------------------- -# all those iterators with the exception of BroadcastIterator iterate over the -# entire array in C order (the last index changes the fastest). This will -# yield all elements. Views iterate over indices and look towards strides and -# backstrides to find the correct position. Notably the offset between -# x[..., i + 1] and x[..., i] will be strides[-1]. Offset between -# x[..., k + 1, 0] and x[..., k, i_max] will be backstrides[-2] etc. 
+class BaseTransform(object): + pass -# BroadcastIterator works like that, but for indexes that don't change source -# in the original array, strides[i] == backstrides[i] == 0 +class ViewTransform(BaseTransform): + def __init__(self, chunks): + # 4-tuple specifying slicing + self.chunks = chunks + +class BroadcastTransform(BaseTransform): + def __init__(self, res_shape): + self.res_shape = res_shape class BaseIterator(object): def next(self, shapelen): @@ -22,6 +23,15 @@ def done(self): raise NotImplementedError + def apply_transformations(self, arr, transformations): + v = self + for transform in transformations: + v = v.transform(arr, transform) + return v + + def transform(self, arr, t): + raise NotImplementedError + class ArrayIterator(BaseIterator): def __init__(self, size): self.offset = 0 @@ -36,6 +46,10 @@ def done(self): return self.offset >= self.size + def transform(self, arr, t): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).transform(arr, t) + class OneDimIterator(BaseIterator): def __init__(self, start, step, stop): self.offset = start @@ -52,26 +66,29 @@ def done(self): return self.offset == self.size -def view_iter_from_arr(arr): - return ViewIterator(arr.start, arr.strides, arr.backstrides, arr.shape) - class ViewIterator(BaseIterator): - def __init__(self, start, strides, backstrides, shape, res_shape=None): + def __init__(self, start, strides, backstrides, shape): self.offset = start self._done = False - if res_shape is not None and res_shape != shape: - r = calculate_broadcast_strides(strides, backstrides, - shape, res_shape) - self.strides, self.backstrides = r - self.res_shape = res_shape - else: - self.strides = strides - self.backstrides = backstrides - self.res_shape = shape + self.strides = strides + self.backstrides = backstrides + self.res_shape = shape self.indices = [0] * len(self.res_shape) + def transform(self, arr, t): + if isinstance(t, BroadcastTransform): + r = 
calculate_broadcast_strides(self.strides, self.backstrides, + self.res_shape, t.res_shape) + return ViewIterator(self.offset, r[0], r[1], t.res_shape) + elif isinstance(t, ViewTransform): + r = calculate_slice_strides(self.res_shape, self.offset, + self.strides, + self.backstrides, t.chunks) + return ViewIterator(r[1], r[2], r[3], r[0]) + @jit.unroll_safe def next(self, shapelen): + shapelen = jit.promote(len(self.res_shape)) offset = self.offset indices = [0] * shapelen for i in range(shapelen): @@ -96,6 +113,13 @@ res._done = done return res + def apply_transformations(self, arr, transformations): + v = BaseIterator.apply_transformations(self, arr, transformations) + if len(arr.shape) == 1: + return OneDimIterator(self.offset, self.strides[0], + self.res_shape[0]) + return v + def done(self): return self._done @@ -103,11 +127,59 @@ def next(self, shapelen): return self + def transform(self, arr, t): + pass + +class AxisIterator(BaseIterator): + def __init__(self, start, dim, shape, strides, backstrides): + self.res_shape = shape[:] + self.strides = strides[:dim] + [0] + strides[dim:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] + self.first_line = True + self.indices = [0] * len(shape) + self._done = False + self.offset = start + self.dim = dim + + @jit.unroll_safe + def next(self, shapelen): + offset = self.offset + first_line = self.first_line + indices = [0] * shapelen + for i in range(shapelen): + indices[i] = self.indices[i] + done = False + for i in range(shapelen - 1, -1, -1): + if indices[i] < self.res_shape[i] - 1: + if i == self.dim: + first_line = False + indices[i] += 1 + offset += self.strides[i] + break + else: + if i == self.dim: + first_line = True + indices[i] = 0 + offset -= self.backstrides[i] + else: + done = True + res = instantiate(AxisIterator) + res.offset = offset + res.indices = indices + res.strides = self.strides + res.backstrides = self.backstrides + res.res_shape = self.res_shape + res._done = done + 
res.first_line = first_line + res.dim = self.dim + return res + + def done(self): + return self._done + # ------ other iterators that are not part of the computation frame ---------- - -class AxisIterator(object): - """ This object will return offsets of each start of the last stride - """ + +class SkipLastAxisIterator(object): def __init__(self, arr): self.arr = arr self.indices = [0] * (len(arr.shape) - 1) @@ -125,4 +197,3 @@ self.offset -= self.arr.backstrides[i] else: self.done = True - diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -8,8 +8,8 @@ from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder -from pypy.module.micronumpy.interp_iter import ArrayIterator,\ - view_iter_from_arr, OneDimIterator, AxisIterator +from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ + SkipLastAxisIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -35,11 +35,12 @@ slice_driver = jit.JitDriver( greens=['shapelen', 'sig'], virtualizables=['frame'], - reds=['self', 'frame', 'source', 'res_iter'], + reds=['self', 'frame', 'arr'], get_printable_location=signature.new_printable_location('slice'), name='numpy_slice', ) + def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] batch = space.listview(w_iterable) @@ -156,9 +157,6 @@ # (meaning that the realignment of elements crosses from one step into another) # return None so that the caller can raise an exception. 
def calc_new_strides(new_shape, old_shape, old_strides): - # Return the proper strides for new_shape, or None if the mapping crosses - # stepping boundaries - # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and # len(new_shape) > 0 steps = [] @@ -166,6 +164,7 @@ oldI = 0 new_strides = [] if old_strides[0] < old_strides[-1]: + #Start at old_shape[0], old_stides[0] for i in range(len(old_shape)): steps.append(old_strides[i] / last_step) last_step *= old_shape[i] @@ -183,10 +182,11 @@ if n_new_elems_used == n_old_elems_to_use: oldI += 1 if oldI >= len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] else: + #Start at old_shape[-1], old_strides[-1] for i in range(len(old_shape) - 1, -1, -1): steps.insert(0, old_strides[i] / last_step) last_step *= old_shape[i] @@ -206,7 +206,7 @@ if n_new_elems_used == n_old_elems_to_use: oldI -= 1 if oldI < -len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] return new_strides @@ -286,13 +286,17 @@ descr_rpow = _binop_right_impl("power") descr_rmod = _binop_right_impl("mod") - def _reduce_ufunc_impl(ufunc_name): - def impl(self, space): - return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, self, multidim=True) + def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): + def impl(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) + return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, + self, True, promote_to_largest, w_axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") - descr_prod = _reduce_ufunc_impl("multiply") + descr_sum_promote = _reduce_ufunc_impl("add", True) + descr_prod = _reduce_ufunc_impl("multiply", True) descr_max = _reduce_ufunc_impl("maximum") descr_min = _reduce_ufunc_impl("minimum") @@ -377,7 +381,7 @@ else: w_res = self.descr_mul(space, w_other) assert isinstance(w_res, 
BaseArray) - return w_res.descr_sum(space) + return w_res.descr_sum(space, space.wrap(-1)) def get_concrete(self): raise NotImplementedError @@ -565,21 +569,31 @@ ) return w_result - def descr_mean(self, space): - return space.div(self.descr_sum(space), space.wrap(self.size)) + def descr_mean(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) + w_denom = space.wrap(self.size) + else: + dim = space.int_w(w_axis) + w_denom = space.wrap(self.shape[dim]) + return space.div(self.descr_sum_promote(space, w_axis), w_denom) def descr_var(self, space): # var = mean((values - mean(values)) ** 2) - w_res = self.descr_sub(space, self.descr_mean(space)) - assert isinstance(w_res, BaseArray) + w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) + assert isinstance(w_res, BaseArray) w_res = w_res.descr_pow(space, space.wrap(2)) assert isinstance(w_res, BaseArray) - return w_res.descr_mean(space) + return w_res.descr_mean(space, space.w_None) def descr_std(self, space): # std(v) = sqrt(var(v)) return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_fill(self, space, w_value): + concr = self.get_concrete_or_scalar() + concr.fill(space, w_value) + def descr_nonzero(self, space): if self.size > 1: raise OperationError(space.w_ValueError, space.wrap( @@ -613,11 +627,12 @@ def getitem(self, item): raise NotImplementedError - def find_sig(self, res_shape=None): + def find_sig(self, res_shape=None, arr=None): """ find a correct signature for the array """ res_shape = res_shape or self.shape - return signature.find_sig(self.create_sig(res_shape), self) + arr = arr or self + return signature.find_sig(self.create_sig(), arr) def descr_array_iface(self, space): if not self.shape: @@ -671,7 +686,10 @@ def copy(self, space): return Scalar(self.dtype, self.value) - def create_sig(self, res_shape): + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + + def create_sig(self): 
return signature.ScalarSignature(self.dtype) def get_concrete_or_scalar(self): @@ -689,7 +707,8 @@ self.name = name def _del_sources(self): - # Function for deleting references to source arrays, to allow garbage-collecting them + # Function for deleting references to source arrays, + # to allow garbage-collecting them raise NotImplementedError def compute(self): @@ -741,11 +760,11 @@ self.size = size VirtualArray.__init__(self, 'slice', shape, child.find_dtype()) - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() return signature.VirtualSliceSignature( - self.child.create_sig(res_shape)) + self.child.create_sig()) def force_if_needed(self): if self.forced_result is None: @@ -755,6 +774,7 @@ def _del_sources(self): self.child = None + class Call1(VirtualArray): def __init__(self, ufunc, name, shape, res_dtype, values): VirtualArray.__init__(self, name, shape, res_dtype) @@ -765,16 +785,17 @@ def _del_sources(self): self.values = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) - return signature.Call1(self.ufunc, self.name, - self.values.create_sig(res_shape)) + return self.forced_result.create_sig() + return signature.Call1(self.ufunc, self.name, self.values.create_sig()) class Call2(VirtualArray): """ Intermediate class for performing binary operations. 
""" + _immutable_fields_ = ['left', 'right'] + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -789,12 +810,55 @@ self.left = None self.right = None - def create_sig(self, res_shape): + def create_sig(self): if self.forced_result is not None: - return self.forced_result.create_sig(res_shape) + return self.forced_result.create_sig() + if self.shape != self.left.shape and self.shape != self.right.shape: + return signature.BroadcastBoth(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.left.shape: + return signature.BroadcastLeft(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) + elif self.shape != self.right.shape: + return signature.BroadcastRight(self.ufunc, self.name, + self.calc_dtype, + self.left.create_sig(), + self.right.create_sig()) return signature.Call2(self.ufunc, self.name, self.calc_dtype, - self.left.create_sig(res_shape), - self.right.create_sig(res_shape)) + self.left.create_sig(), self.right.create_sig()) + +class SliceArray(Call2): + def __init__(self, shape, dtype, left, right): + Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, + right) + + def create_sig(self): + lsig = self.left.create_sig() + rsig = self.right.create_sig() + if self.shape != self.right.shape: + return signature.SliceloopBroadcastSignature(self.ufunc, + self.name, + self.calc_dtype, + lsig, rsig) + return signature.SliceloopSignature(self.ufunc, self.name, + self.calc_dtype, + lsig, rsig) + +class AxisReduce(Call2): + """ NOTE: this is only used as a container, you should never + encounter such things in the wild. 
Remove this comment + when we'll make AxisReduce lazy + """ + _immutable_fields_ = ['left', 'right'] + + def __init__(self, ufunc, name, shape, dtype, left, right, dim): + Call2.__init__(self, ufunc, name, shape, dtype, dtype, + left, right) + self.dim = dim class ConcreteArray(BaseArray): """ An array that have actual storage, whether owned or not @@ -849,11 +913,6 @@ self.strides = strides self.backstrides = backstrides - def array_sig(self, res_shape): - if res_shape is not None and self.shape != res_shape: - return signature.ViewSignature(self.dtype) - return signature.ArraySignature(self.dtype) - def to_str(self, space, comma, builder, indent=' ', use_ellipsis=False): '''Modifies builder with a representation of the array/slice The items will be seperated by a comma if comma is 1 @@ -867,14 +926,14 @@ if size < 1: builder.append('[]') return - elif size == 1: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True ndims = len(self.shape) + if ndims == 0: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return i = 0 builder.append('[') if ndims > 1: @@ -890,7 +949,7 @@ view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: - builder.append(ccomma +'\n' + indent + '...' + ncomma) + builder.append(ccomma + '\n' + indent + '...' 
+ ncomma) i = self.shape[0] - 3 else: i += 1 @@ -968,20 +1027,22 @@ self.dtype is w_value.find_dtype()): self._fast_setslice(space, w_value) else: - self._sliceloop(w_value, res_shape) + arr = SliceArray(self.shape, self.dtype, self, w_value) + self._sliceloop(arr) def _fast_setslice(self, space, w_value): assert isinstance(w_value, ConcreteArray) itemsize = self.dtype.itemtype.get_element_size() - if len(self.shape) == 1: + shapelen = len(self.shape) + if shapelen == 1: rffi.c_memcpy( rffi.ptradd(self.storage, self.start * itemsize), rffi.ptradd(w_value.storage, w_value.start * itemsize), self.size * itemsize ) else: - dest = AxisIterator(self) - source = AxisIterator(w_value) + dest = SkipLastAxisIterator(self) + source = SkipLastAxisIterator(w_value) while not dest.done: rffi.c_memcpy( rffi.ptradd(self.storage, dest.offset * itemsize), @@ -991,30 +1052,28 @@ source.next() dest.next() - def _sliceloop(self, source, res_shape): - sig = source.find_sig(res_shape) - frame = sig.create_frame(source, res_shape) - res_iter = view_iter_from_arr(self) - shapelen = len(res_shape) - while not res_iter.done(): - slice_driver.jit_merge_point(sig=sig, - frame=frame, - shapelen=shapelen, - self=self, source=source, - res_iter=res_iter) - self.setitem(res_iter.offset, sig.eval(frame, source).convert_to( - self.find_dtype())) + def _sliceloop(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(arr) + shapelen = len(self.shape) + while not frame.done(): + slice_driver.jit_merge_point(sig=sig, frame=frame, self=self, + arr=arr, + shapelen=shapelen) + sig.eval(frame, arr) frame.next(shapelen) - res_iter = res_iter.next(shapelen) def copy(self, space): array = W_NDimArray(self.size, self.shape[:], self.dtype, self.order) array.setslice(space, self) return array + def fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): - def create_sig(self, res_shape): + def create_sig(self): return 
signature.ViewSignature(self.dtype) @@ -1078,8 +1137,8 @@ self.shape = new_shape self.calc_strides(new_shape) - def create_sig(self, res_shape): - return self.array_sig(res_shape) + def create_sig(self): + return signature.ArraySignature(self.dtype) def __del__(self): lltype.free(self.storage, flavor='raw', track_allocation=False) @@ -1224,6 +1283,8 @@ var = interp2app(BaseArray.descr_var), std = interp2app(BaseArray.descr_std), + fill = interp2app(BaseArray.descr_fill), + copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -3,20 +3,29 @@ from pypy.interpreter.gateway import interp2app from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty from pypy.module.micronumpy import interp_boxes, interp_dtype -from pypy.module.micronumpy.signature import ReduceSignature, ScalarSignature,\ - find_sig, new_printable_location +from pypy.module.micronumpy.signature import ReduceSignature,\ + find_sig, new_printable_location, AxisReduceSignature, ScalarSignature from pypy.rlib import jit from pypy.rlib.rarithmetic import LONG_BIT from pypy.tool.sourcetools import func_with_new_name reduce_driver = jit.JitDriver( - greens = ['shapelen', "sig"], - virtualizables = ["frame"], - reds = ["frame", "self", "dtype", "value", "obj"], + greens=['shapelen', "sig"], + virtualizables=["frame"], + reds=["frame", "self", "dtype", "value", "obj"], get_printable_location=new_printable_location('reduce'), name='numpy_reduce', ) +axisreduce_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['self','arr', 'identity', 'frame'], + name='numpy_axisreduce', + get_printable_location=new_printable_location('axisreduce'), +) + + class W_Ufunc(Wrappable): _attrs_ = ["name", 
"promote_to_float", "promote_bools", "identity"] _immutable_fields_ = ["promote_to_float", "promote_bools", "name"] @@ -49,18 +58,72 @@ ) return self.call(space, __args__.arguments_w) - def descr_reduce(self, space, w_obj): - return self.reduce(space, w_obj, multidim=False) + def descr_reduce(self, space, w_obj, w_dim=0): + """reduce(...) + reduce(a, axis=0) - def reduce(self, space, w_obj, multidim): - from pypy.module.micronumpy.interp_numarray import convert_to_array, Scalar - + Reduces `a`'s dimension by one, by applying ufunc along one axis. + + Let :math:`a.shape = (N_0, ..., N_i, ..., N_{M-1})`. Then + :math:`ufunc.reduce(a, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]` = + the result of iterating `j` over :math:`range(N_i)`, cumulatively applying + ufunc to each :math:`a[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]`. + For a one-dimensional array, reduce produces results equivalent to: + :: + + r = op.identity # op = ufunc + for i in xrange(len(A)): + r = op(r, A[i]) + return r + + For example, add.reduce() is equivalent to sum(). + + Parameters + ---------- + a : array_like + The array to act on. + axis : int, optional + The axis along which to apply the reduction. 
+ + Examples + -------- + >>> np.multiply.reduce([2,3,5]) + 30 + + A multi-dimensional array example: + + >>> X = np.arange(8).reshape((2,2,2)) + >>> X + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> np.add.reduce(X, 0) + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X) # confirm: default axis value is 0 + array([[ 4, 6], + [ 8, 10]]) + >>> np.add.reduce(X, 1) + array([[ 2, 4], + [10, 12]]) + >>> np.add.reduce(X, 2) + array([[ 1, 5], + [ 9, 13]]) + """ + return self.reduce(space, w_obj, False, False, w_dim) + + def reduce(self, space, w_obj, multidim, promote_to_largest, w_dim): + from pypy.module.micronumpy.interp_numarray import convert_to_array, \ + Scalar if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) - + dim = space.int_w(w_dim) assert isinstance(self, W_Ufunc2) obj = convert_to_array(space, w_obj) + if dim >= len(obj.shape): + raise OperationError(space.w_ValueError, space.wrap("axis(=%d) out of bounds" % dim)) if isinstance(obj, Scalar): raise OperationError(space.w_TypeError, space.wrap("cannot reduce " "on a scalar")) @@ -68,26 +131,80 @@ size = obj.size dtype = find_unaryop_result_dtype( space, obj.find_dtype(), - promote_to_largest=True + promote_to_float=self.promote_to_float, + promote_to_largest=promote_to_largest, + promote_bools=True ) shapelen = len(obj.shape) + if self.identity is None and size == 0: + raise operationerrfmt(space.w_ValueError, "zero-size array to " + "%s.reduce without identity", self.name) + if shapelen > 1 and dim >= 0: + res = self.do_axis_reduce(obj, dtype, dim) + return space.wrap(res) + scalarsig = ScalarSignature(dtype) sig = find_sig(ReduceSignature(self.func, self.name, dtype, - ScalarSignature(dtype), - obj.create_sig(obj.shape)), obj) + scalarsig, + obj.create_sig()), obj) frame = sig.create_frame(obj) - if shapelen > 1 and not multidim: - raise OperationError(space.w_NotImplementedError, - space.wrap("not implemented yet")) if 
self.identity is None: - if size == 0: - raise operationerrfmt(space.w_ValueError, "zero-size array to " - "%s.reduce without identity", self.name) value = sig.eval(frame, obj).convert_to(dtype) frame.next(shapelen) else: value = self.identity.convert_to(dtype) return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) + def do_axis_reduce(self, obj, dtype, dim): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + W_NDimArray + + shape = obj.shape[0:dim] + obj.shape[dim + 1:len(obj.shape)] + size = 1 + for s in shape: + size *= s + result = W_NDimArray(size, shape, dtype) + rightsig = obj.create_sig() + # note - this is just a wrapper so signature can fetch + # both left and right, nothing more, especially + # this is not a true virtual array, because shapes + # don't quite match + arr = AxisReduce(self.func, self.name, obj.shape, dtype, + result, obj, dim) + scalarsig = ScalarSignature(dtype) + sig = find_sig(AxisReduceSignature(self.func, self.name, dtype, + scalarsig, rightsig), arr) + assert isinstance(sig, AxisReduceSignature) + frame = sig.create_frame(arr) + shapelen = len(obj.shape) + if self.identity is not None: + identity = self.identity.convert_to(dtype) + else: + identity = None + self.reduce_axis_loop(frame, sig, shapelen, arr, identity) + return result + + def reduce_axis_loop(self, frame, sig, shapelen, arr, identity): + # note - we can be adventurous here, depending on the exact field + layout. 
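The reduce_axis_loop added above folds each source value into the output slot addressed by the axis iterator, seeding the slot when `iter.first_line` is true. A simplified 2-D sketch of that accumulation pattern — `axis_reduce_2d` is a hypothetical helper, not the real frame/iterator machinery:

```python
def axis_reduce_2d(op, rows, identity=None):
    # Reduce a list of equal-length rows along axis 0.  Each source
    # element either seeds its output slot (first line of the reduced
    # axis) or is folded into the value already stored there,
    # mirroring the first_line branch of reduce_axis_loop.
    out = [None] * len(rows[0])
    for i, row in enumerate(rows):
        first_line = (i == 0)
        for j, v in enumerate(row):
            if first_line:
                out[j] = v if identity is None else op(identity, v)
            else:
                out[j] = op(out[j], v)
    return out

print(axis_reduce_2d(lambda x, y: x + y, [[1, 2], [3, 4], [5, 6]]))  # [9, 12]
```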
For now let's say we iterate the original way and + # simply follow the original iteration order + while not frame.done(): + axisreduce_driver.jit_merge_point(frame=frame, self=self, + sig=sig, + identity=identity, + shapelen=shapelen, arr=arr) + iter = frame.get_final_iter() + v = sig.eval(frame, arr).convert_to(sig.calc_dtype) + if iter.first_line: + if identity is not None: + value = self.func(sig.calc_dtype, identity, v) + else: + value = v + else: + cur = arr.left.getitem(iter.offset) + value = self.func(sig.calc_dtype, cur, v) + arr.left.setitem(iter.offset, value) + frame.next(shapelen) + def reduce_loop(self, shapelen, sig, frame, value, obj, dtype): while not frame.done(): reduce_driver.jit_merge_point(sig=sig, @@ -95,10 +212,12 @@ value=value, obj=obj, frame=frame, dtype=dtype) assert isinstance(sig, ReduceSignature) - value = sig.binfunc(dtype, value, sig.eval(frame, obj).convert_to(dtype)) + value = sig.binfunc(dtype, value, + sig.eval(frame, obj).convert_to(dtype)) frame.next(shapelen) return value + class W_Ufunc1(W_Ufunc): argcount = 1 @@ -183,6 +302,7 @@ reduce = interp2app(W_Ufunc.descr_reduce), ) + def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, promote_bools=False): # dt1.num should be <= dt2.num @@ -231,6 +351,7 @@ dtypenum += 3 return interp_dtype.get_dtype_cache(space).builtin_dtypes[dtypenum] + def find_unaryop_result_dtype(space, dt, promote_to_float=False, promote_bools=False, promote_to_largest=False): if promote_bools and (dt.kind == interp_dtype.BOOLLTR): @@ -255,6 +376,7 @@ assert False return dt + def find_dtype_for_scalar(space, w_obj, current_guess=None): bool_dtype = interp_dtype.get_dtype_cache(space).w_booldtype long_dtype = interp_dtype.get_dtype_cache(space).w_longdtype @@ -327,6 +449,7 @@ ("fabs", "fabs", 1, {"promote_to_float": True}), ("floor", "floor", 1, {"promote_to_float": True}), + ("ceil", "ceil", 1, {"promote_to_float": True}), ("exp", "exp", 1, {"promote_to_float": True}), ('sqrt', 'sqrt', 1, 
{'promote_to_float': True}), @@ -348,7 +471,8 @@ identity = extra_kwargs.get("identity") if identity is not None: - identity = interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) + identity = \ + interp_dtype.get_dtype_cache(space).w_longdtype.box(identity) extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -1,10 +1,32 @@ from pypy.rlib.objectmodel import r_dict, compute_identity_hash, compute_hash from pypy.rlib.rarithmetic import intmask from pypy.module.micronumpy.interp_iter import ViewIterator, ArrayIterator, \ - OneDimIterator, ConstantIterator -from pypy.module.micronumpy.strides import calculate_slice_strides + ConstantIterator, AxisIterator, ViewTransform,\ + BroadcastTransform from pypy.rlib.jit import hint, unroll_safe, promote +""" Signature specifies both the numpy expression that has been constructed +and the assembler to be compiled. This is a very important observation - +Two expressions will be using the same assembler if and only if they are +compiled to the same signature. + +This is also a very convenient tool for specializations. For example +a + a and a + b (where a != b) will compile to different assembler because +we specialize on the same array access. + +When evaluating, signatures will create iterators per signature node, +potentially sharing some of them. Iterators depend also on the actual +expression, they're not only dependent on the array itself. For example +a + b where a is dim 2 and b is dim 1 would create a broadcasted iterator for +the array b. + +Such iterator changes are called Transformations. An actual iterator would +be a combination of array and various transformations, like view, broadcast, +dimension swapping etc. 
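A broadcast transform of this kind usually comes down to stride manipulation: the smaller operand is walked over the full result shape by giving the new (or size-1) axes a stride of zero, so the same elements are revisited. A minimal sketch under that assumption; `broadcast_strides` is an illustrative name, not the actual ViewIterator/BroadcastTransform API from interp_iter:

```python
def broadcast_strides(shape, strides, res_shape):
    # Right-align `shape` against `res_shape` (the NumPy broadcasting
    # rule) and give every new or size-1 axis a zero stride, so
    # iteration revisits the same underlying data along that axis.
    pad = len(res_shape) - len(shape)
    return [0] * pad + [0 if dim == 1 else st
                        for dim, st in zip(shape, strides)]

# A 1-D array of 3 doubles (stride 8 bytes) broadcast to a (2, 3) result:
print(broadcast_strides([3], [8], [2, 3]))  # [0, 8]
```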
+ +See interp_iter for transformations +""" + def new_printable_location(driver_name): def get_printable_location(shapelen, sig): return 'numpy ' + sig.debug_repr() + ' [%d dims,%s]' % (shapelen, driver_name) @@ -33,7 +55,8 @@ return sig class NumpyEvalFrame(object): - _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]'] + _virtualizable2_ = ['iterators[*]', 'final_iter', 'arraylist[*]', + 'value', 'identity'] @unroll_safe def __init__(self, iterators, arrays): @@ -51,7 +74,7 @@ def done(self): final_iter = promote(self.final_iter) if final_iter < 0: - return False + assert False return self.iterators[final_iter].done() @unroll_safe @@ -59,6 +82,12 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + def get_final_iter(self): + final_iter = promote(self.final_iter) + if final_iter < 0: + assert False + return self.iterators[final_iter] + def _add_ptr_to_cache(ptr, cache): i = 0 for p in cache: @@ -70,6 +99,9 @@ cache.append(ptr) return res +def new_cache(): + return r_dict(sigeq_no_numbering, sighash) + class Signature(object): _attrs_ = ['iter_no', 'array_no'] _immutable_fields_ = ['iter_no', 'array_no'] @@ -78,7 +110,7 @@ iter_no = 0 def invent_numbering(self): - cache = r_dict(sigeq_no_numbering, sighash) + cache = new_cache() allnumbers = [] self._invent_numbering(cache, allnumbers) @@ -95,13 +127,13 @@ allnumbers.append(no) self.iter_no = no - def create_frame(self, arr, res_shape=None): - res_shape = res_shape or arr.shape + def create_frame(self, arr): iterlist = [] arraylist = [] - self._create_iter(iterlist, arraylist, arr, res_shape, []) + self._create_iter(iterlist, arraylist, arr, []) return NumpyEvalFrame(iterlist, arraylist) + class ConcreteSignature(Signature): _immutable_fields_ = ['dtype'] @@ -120,16 +152,6 @@ def hash(self): return compute_identity_hash(self.dtype) - def allocate_view_iter(self, arr, res_shape, chunklist): - r = arr.shape, arr.start, arr.strides, arr.backstrides - if 
chunklist: - for chunkelem in chunklist: - r = calculate_slice_strides(r[0], r[1], r[2], r[3], chunkelem) - shape, start, strides, backstrides = r - if len(res_shape) == 1: - return OneDimIterator(start, strides[0], res_shape[0]) - return ViewIterator(start, strides, backstrides, shape, res_shape) - class ArraySignature(ConcreteSignature): def debug_repr(self): return 'Array' @@ -141,22 +163,21 @@ # is not of a concrete class it means that we have a _forced_result, # otherwise the signature would not match assert isinstance(concr, ConcreteArray) + assert concr.dtype is self.dtype self.array_no = _add_ptr_to_cache(concr.storage, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import ConcreteArray concr = arr.get_concrete() assert isinstance(concr, ConcreteArray) storage = concr.storage if self.iter_no >= len(iterlist): - iterlist.append(self.allocate_iter(concr, res_shape, chunklist)) + iterlist.append(self.allocate_iter(concr, transforms)) if self.array_no >= len(arraylist): arraylist.append(storage) - def allocate_iter(self, arr, res_shape, chunklist): - if chunklist: - return self.allocate_view_iter(arr, res_shape, chunklist) - return ArrayIterator(arr.size) + def allocate_iter(self, arr, transforms): + return ArrayIterator(arr.size).apply_transformations(arr, transforms) def eval(self, frame, arr): iter = frame.iterators[self.iter_no] @@ -169,7 +190,7 @@ def _invent_array_numbering(self, arr, cache): pass - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): if self.iter_no >= len(iterlist): iter = ConstantIterator() iterlist.append(iter) @@ -189,8 +210,9 @@ allnumbers.append(no) self.iter_no = no - def allocate_iter(self, arr, res_shape, chunklist): - return self.allocate_view_iter(arr, res_shape, chunklist) + def allocate_iter(self, arr, 
transforms): + return ViewIterator(arr.start, arr.strides, arr.backstrides, + arr.shape).apply_transformations(arr, transforms) class VirtualSliceSignature(Signature): def __init__(self, child): @@ -201,6 +223,9 @@ assert isinstance(arr, VirtualSlice) self.child._invent_array_numbering(arr.child, cache) + def _invent_numbering(self, cache, allnumbers): + self.child._invent_numbering(new_cache(), allnumbers) + def hash(self): return intmask(self.child.hash() ^ 1234) @@ -210,12 +235,11 @@ assert isinstance(other, VirtualSliceSignature) return self.child.eq(other.child, compare_array_no) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import VirtualSlice assert isinstance(arr, VirtualSlice) - chunklist.append(arr.chunks) - self.child._create_iter(iterlist, arraylist, arr.child, res_shape, - chunklist) + transforms = transforms + [ViewTransform(arr.chunks)] + self.child._create_iter(iterlist, arraylist, arr.child, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import VirtualSlice @@ -251,11 +275,10 @@ assert isinstance(arr, Call1) self.child._invent_array_numbering(arr.values, cache) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from pypy.module.micronumpy.interp_numarray import Call1 assert isinstance(arr, Call1) - self.child._create_iter(iterlist, arraylist, arr.values, res_shape, - chunklist) + self.child._create_iter(iterlist, arraylist, arr.values, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call1 @@ -296,29 +319,68 @@ self.left._invent_numbering(cache, allnumbers) self.right._invent_numbering(cache, allnumbers) - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): + def _create_iter(self, iterlist, arraylist, arr, transforms): from 
pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) - self.left._create_iter(iterlist, arraylist, arr.left, res_shape, - chunklist) - self.right._create_iter(iterlist, arraylist, arr.right, res_shape, - chunklist) + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) def eval(self, frame, arr): from pypy.module.micronumpy.interp_numarray import Call2 assert isinstance(arr, Call2) lhs = self.left.eval(frame, arr.left).convert_to(self.calc_dtype) rhs = self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + return self.binfunc(self.calc_dtype, lhs, rhs) def debug_repr(self): return 'Call2(%s, %s, %s)' % (self.name, self.left.debug_repr(), self.right.debug_repr()) +class BroadcastLeft(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + +class BroadcastRight(Call2): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(cache, allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class BroadcastBoth(Call2): + def _invent_numbering(self, cache, allnumbers): + 
self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(new_cache(), allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + ltransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, arraylist, arr.left, ltransforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + class ReduceSignature(Call2): - def _create_iter(self, iterlist, arraylist, arr, res_shape, chunklist): - self.right._create_iter(iterlist, arraylist, arr, res_shape, chunklist) + def _create_iter(self, iterlist, arraylist, arr, transforms): + self.right._create_iter(iterlist, arraylist, arr, transforms) def _invent_numbering(self, cache, allnumbers): self.right._invent_numbering(cache, allnumbers) @@ -328,3 +390,63 @@ def eval(self, frame, arr): return self.right.eval(frame, arr) + + def debug_repr(self): + return 'ReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) + +class SliceloopSignature(Call2): + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import Call2 + + assert isinstance(arr, Call2) + ofs = frame.iterators[0].offset + arr.left.setitem(ofs, self.right.eval(frame, arr.right).convert_to( + self.calc_dtype)) + + def debug_repr(self): + return 'SliceLoop(%s, %s, %s)' % (self.name, self.left.debug_repr(), + self.right.debug_repr()) + +class SliceloopBroadcastSignature(SliceloopSignature): + def _invent_numbering(self, cache, allnumbers): + self.left._invent_numbering(new_cache(), allnumbers) + self.right._invent_numbering(cache, allnumbers) + + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import SliceArray + + assert isinstance(arr, SliceArray) + rtransforms = transforms + [BroadcastTransform(arr.shape)] + self.left._create_iter(iterlist, 
arraylist, arr.left, transforms) + self.right._create_iter(iterlist, arraylist, arr.right, rtransforms) + +class AxisReduceSignature(Call2): + def _create_iter(self, iterlist, arraylist, arr, transforms): + from pypy.module.micronumpy.interp_numarray import AxisReduce,\ + ConcreteArray + + assert isinstance(arr, AxisReduce) + left = arr.left + assert isinstance(left, ConcreteArray) + iterlist.append(AxisIterator(left.start, arr.dim, arr.shape, + left.strides, left.backstrides)) + self.right._create_iter(iterlist, arraylist, arr.right, transforms) + + def _invent_numbering(self, cache, allnumbers): + allnumbers.append(0) + self.right._invent_numbering(cache, allnumbers) + + def _invent_array_numbering(self, arr, cache): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + self.right._invent_array_numbering(arr.right, cache) + + def eval(self, frame, arr): + from pypy.module.micronumpy.interp_numarray import AxisReduce + + assert isinstance(arr, AxisReduce) + return self.right.eval(frame, arr.right).convert_to(self.calc_dtype) + + def debug_repr(self): + return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr()) diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,14 +166,11 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) - def test_new(self): - import _numpypy as np - assert np.int_(4) == 4 - assert np.float_(3.4) == 3.4 + def test_aliases(self): + from _numpypy import dtype - def test_pow(self): - from _numpypy import int_ - assert int_(4) ** 2 == 16 + assert dtype("float") is dtype(float) + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): @@ -189,6 +186,15 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + 
assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): import _numpypy as numpy @@ -318,7 +324,7 @@ else: raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, numpy.int64, '9223372036854775808') diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -2,16 +2,11 @@ class AppTestNumPyModule(BaseNumpyAppTest): - def test_mean(self): - from _numpypy import array, mean - assert mean(array(range(5))) == 2.0 - assert mean(range(5)) == 2.0 - def test_average(self): from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 - + def test_sum(self): from _numpypy import array, sum assert sum(range(10)) == 45 @@ -21,7 +16,7 @@ from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 - + def test_max(self): from _numpypy import array, max assert max(range(10)) == 9 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -157,6 +157,8 @@ assert calc_new_strides([2, 3, 4], [8, 3], [1, 16]) is None assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + assert calc_new_strides([105, 1], [3, 5, 7], [35, 7, 1]) == [1, 1] + assert calc_new_strides([1, 105], [3, 5, 7], [35, 7, 1]) == [105, 1] class AppTestNumArray(BaseNumpyAppTest): @@ -246,6 +248,10 @@ c = b.copy() assert (c == b).all() + a = arange(15).reshape(5,3) + b = a.copy() + assert (b == a).all() + def test_iterator_init(self): from _numpypy import array a = 
array(range(5)) @@ -724,6 +730,12 @@ a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 + a = array(range(105)).reshape(3, 5, 7) + b = a.mean(axis=0) + assert b[0, 0] == 35. + assert a.mean(axis=0)[0, 0] == 35 + assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() + assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() def test_sum(self): from _numpypy import array @@ -734,6 +746,34 @@ a = array([True] * 5, bool) assert a.sum() == 5 + raises(TypeError, 'a.sum(2, 3)') + + def test_reduce_nd(self): + from numpypy import arange, array, multiply + a = arange(15).reshape(5, 3) + assert a.sum() == 105 + assert a.max() == 14 + assert array([]).sum() == 0.0 + raises(ValueError, 'array([]).max()') + assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() + assert (a.sum(1) == [3, 12, 21, 30, 39]).all() + assert (a.max(0) == [12, 13, 14]).all() + assert (a.max(1) == [2, 5, 8, 11, 14]).all() + assert ((a + a).max() == 28) + assert ((a + a).max(0) == [24, 26, 28]).all() + assert ((a + a).sum(1) == [6, 24, 42, 60, 78]).all() + assert (multiply.reduce(a) == array([0, 3640, 12320])).all() + a = array(range(105)).reshape(3, 5, 7) + assert (a[:, 1, :].sum(0) == [126, 129, 132, 135, 138, 141, 144]).all() + assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() + raises(ValueError, 'a[:, 1, :].sum(2)') + assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() + assert (a.reshape(1,-1).sum(0) == range(105)).all() + assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() + def test_identity(self): from _numpypy import identity, array from _numpypy import int32, float64, dtype @@ -1262,6 +1302,28 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_fill(self): + from _numpypy import array + + a = array([1, 2, 3]) + a.fill(10) + assert (a == [10, 10, 
10]).all() + a.fill(False) + assert (a == [0, 0, 0]).all() + + b = a[:1] + b.fill(4) + assert (b == [4]).all() + assert (a == [4, 0, 0]).all() + + c = b + b + c.fill(27) + assert (c == [27]).all() + + d = array(10) + d.fill(100) + assert d == 100 + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): @@ -1401,9 +1463,11 @@ assert repr(a) == "array(0.0)" a = array(0.2) assert repr(a) == "array(0.2)" + a = array([2]) + assert repr(a) == "array([2])" def test_repr_multi(self): - from _numpypy import arange, zeros + from _numpypy import arange, zeros, array a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1426,6 +1490,9 @@ [498, 999], [499, 1000], [500, 1001]])''' + a = arange(2).reshape((2,1)) + assert repr(a) == '''array([[0], + [1]])''' def test_repr_slice(self): from _numpypy import array, zeros @@ -1510,14 +1577,3 @@ a = arange(0, 0.8, 0.1) assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) - - -class AppTestRanges(BaseNumpyAppTest): - def test_app_reshape(self): - from _numpypy import arange, array, dtype, reshape - a = arange(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = range(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -190,14 +190,24 @@ for i in range(3): assert c[i] == a[i] - b[i] - def test_floor(self): - from _numpypy import array, floor - - reference = [-2.0, -1.0, 0.0, 1.0, 1.0] - a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) + def test_floorceil(self): + from _numpypy import array, floor, ceil + import math + reference = [-2.0, -2.0, -1.0, 0.0, 1.0, 1.0, 0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) b = floor(a) for i in range(5): assert b[i] == reference[i] + reference = [-1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 
1.4, 0.5]) + b = ceil(a) + assert (reference == b).all() + inf = float("inf") + data = [1.5, 2.9999, -1.999, inf] + results = [math.floor(x) for x in data] + assert (floor(data) == results).all() + results = [math.ceil(x) for x in data] + assert (ceil(data) == results).all() def test_copysign(self): from _numpypy import array, copysign @@ -238,7 +248,7 @@ assert b[i] == math.sin(a[i]) a = sin(array([True, False], dtype=bool)) - assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise + assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise assert a[1] == 0.0 def test_cos(self): @@ -259,7 +269,6 @@ for i in range(len(a)): assert b[i] == math.tan(a[i]) - def test_arcsin(self): import math from _numpypy import array, arcsin @@ -283,7 +292,6 @@ for i in range(len(a)): assert b[i] == math.acos(a[i]) - a = array([-10, -1.5, -1.01, 1.01, 1.5, 10, float('nan'), float('inf'), float('-inf')]) b = arccos(a) for f in b: @@ -298,7 +306,7 @@ for i in range(len(a)): assert b[i] == math.atan(a[i]) - a = array([float('nan')]) + a = array([float('nan')]) b = arctan(a) assert math.isnan(b[0]) @@ -336,9 +344,9 @@ from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) - raises(TypeError, add.reduce, 1) + raises(ValueError, add.reduce, 1) - def test_reduce(self): + def test_reduce_1d(self): from _numpypy import add, maximum assert add.reduce([1, 2, 3]) == 6 @@ -346,6 +354,12 @@ assert maximum.reduce([1, 2, 3]) == 3 raises(ValueError, maximum.reduce, []) + def test_reduceND(self): + from numpypy import add, arange + a = arange(12).reshape(3, 4) + assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() + assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ 
b/pypy/module/micronumpy/test/test_zjit.py @@ -47,6 +47,8 @@ def f(i): interp = InterpreterState(codes[i]) interp.run(space) + if not len(interp.results): + raise Exception("need results") w_res = interp.results[-1] if isinstance(w_res, BaseArray): concr = w_res.get_concrete_or_scalar() @@ -115,6 +117,28 @@ "int_add": 1, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) + def define_axissum(): + return """ + a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]] + b = sum(a,0) + b -> 1 + """ + + def test_axissum(self): + result = self.run("axissum") + assert result == 30 + # XXX note - the bridge here is fairly crucial and yet it's pretty + # bogus. We need to improve the situation somehow. + self.check_simple_loop({'getinteriorfield_raw': 2, + 'setinteriorfield_raw': 1, + 'arraylen_gc': 1, + 'guard_true': 1, + 'int_lt': 1, + 'jump': 1, + 'float_add': 1, + 'int_add': 3, + }) + def define_prod(): return """ a = |30| @@ -193,9 +217,9 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. 
- self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 26, + self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, - 'getfield_gc_pure': 4, + 'getfield_gc_pure': 8, 'guard_class': 8, 'int_add': 8, 'float_mul': 2, 'jump': 2, 'int_ge': 4, 'getinteriorfield_raw': 4, 'float_add': 2, @@ -212,7 +236,8 @@ def test_ufunc(self): result = self.run("ufunc") assert result == -6 - self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, "float_neg": 1, + self.check_simple_loop({"getinteriorfield_raw": 2, "float_add": 1, + "float_neg": 1, "setinteriorfield_raw": 1, "int_add": 2, "int_ge": 1, "guard_false": 1, "jump": 1, 'arraylen_gc': 1}) @@ -322,10 +347,10 @@ result = self.run("setslice") assert result == 11.0 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, - 'setinteriorfield_raw': 1, 'int_add': 3, - 'int_lt': 1, 'guard_true': 1, 'jump': 1, - 'arraylen_gc': 3}) + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, + 'setinteriorfield_raw': 1, 'int_add': 2, + 'int_eq': 1, 'guard_false': 1, 'jump': 1, + 'arraylen_gc': 1}) def define_virtual_slice(): return """ @@ -339,11 +364,12 @@ result = self.run("virtual_slice") assert result == 4 self.check_trace_count(1) - self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add' : 1, + self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, 'int_ge': 1, 'guard_false': 1, 'jump': 1, 'arraylen_gc': 1}) + class TestNumpyOld(LLJitMixin): def setup_class(cls): py.test.skip("old") @@ -377,4 +403,3 @@ result = self.meta_interp(f, [5], listops=True, backendopt=True) assert result == f(5) - diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -374,6 +374,10 @@ return math.floor(v) @simple_unary_op + def ceil(self, v): + return math.ceil(v) + + 
@simple_unary_op def exp(self, v): try: return math.exp(v) @@ -436,4 +440,4 @@ class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box - format_code = "d" \ No newline at end of file + format_code = "d" diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -11,6 +11,7 @@ 'set_optimize_hook': 'interp_resop.set_optimize_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 'interp_resop.WrappedOp', + 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', } diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,5 +1,6 @@ -from pypy.interpreter.typedef import TypeDef, GetSetProperty +from pypy.interpreter.typedef import (TypeDef, GetSetProperty, + interp_attrproperty, interp_attrproperty_w) from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode @@ -10,6 +11,7 @@ from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant from pypy.rlib import jit_hooks +from pypy.module.pypyjit.interp_jit import pypyjitdriver class Cache(object): in_recursion = False @@ -111,13 +113,24 @@ def wrap_oplist(space, logops, operations, ops_offset=None): l_w = [] + jitdrivers_sd = logops.metainterp_sd.jitdrivers_sd for op in operations: if ops_offset is None: ofs = -1 else: ofs = ops_offset.get(op, 0) - l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, - logops.repr_of_resop(op))) + if op.opnum == rop.DEBUG_MERGE_POINT: + jd_sd = jitdrivers_sd[op.getarg(0).getint()] + greenkey = op.getarglist()[2:] + repr = jd_sd.warmstate.get_location_str(greenkey) + w_greenkey = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) + 
l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), + logops.repr_of_resop(op), + jd_sd.jitdriver.name, + w_greenkey)) + else: + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) return l_w class WrappedBox(Wrappable): @@ -150,6 +163,15 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) +@unwrap_spec(repr=str, jd_name=str) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + num = rop.DEBUG_MERGE_POINT + return DebugMergePoint(space, + jit_hooks.resop_new(num, args, jit_hooks.emptyval()), + repr, jd_name, w_greenkey) + class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely """ @@ -182,6 +204,25 @@ box = space.interp_w(WrappedBox, w_box) jit_hooks.resop_setresult(self.op, box.llbox) +class DebugMergePoint(WrappedOp): + def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + WrappedOp.__init__(self, op, -1, repr_of_resop) + self.w_greenkey = w_greenkey + self.jd_name = jd_name + + def get_pycode(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(0)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_bytecode_no(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(1)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_jitdriver_name(self, space): + return space.wrap(self.jd_name) + WrappedOp.typedef = TypeDef( 'ResOperation', __doc__ = WrappedOp.__doc__, @@ -195,3 +236,15 @@ WrappedOp.descr_setresult) ) WrappedOp.acceptable_as_base_class = False + +DebugMergePoint.typedef = TypeDef( + 'DebugMergePoint', WrappedOp.typedef, + __new__ =
interp2app(descr_new_dmp), + greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), + pycode = GetSetProperty(DebugMergePoint.get_pycode), + bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), +) +DebugMergePoint.acceptable_as_base_class = False + + diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -127,7 +127,7 @@ 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', '_codecs']: if modname == 'pypyjit' and 'interp_resop' in rest: return False return True diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -92,6 +92,7 @@ cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_dmp_num = space.wrap(rop.DEBUG_MERGE_POINT) cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) cls.orig_oplist = oplist @@ -117,6 +118,10 @@ assert elem[2][2] == False assert len(elem[3]) == 4 int_add = elem[3][0] + dmp = elem[3][1] + assert isinstance(dmp, pypyjit.DebugMergePoint) + assert dmp.pycode is self.f.func_code + assert dmp.greenkey == (self.f.func_code, 0, False) #assert int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() @@ -211,3 +216,18 @@ assert op.getarg(0).getint() == 4 op.result = box assert op.result.getint() == 1 + + def test_creation_dmp(self): + from pypyjit import DebugMergePoint, Box + + def f(): + pass + + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + assert op.bytecode_no == 0 + assert 
op.pycode is f.func_code + assert repr(op) == 'repr' + assert op.jitdriver_name == 'pypyjit' + assert op.num == self.dmp_num + op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + raises(AttributeError, 'op.pycode') diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py rename from lib_pypy/numpypy/test/test_fromnumeric.py rename to pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/lib_pypy/numpypy/test/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -1,7 +1,7 @@ - from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -class AppTestFromNumeric(BaseNumpyAppTest): + +class AppTestFromNumeric(BaseNumpyAppTest): def test_argmax(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, argmax @@ -18,12 +18,12 @@ from numpypy import array, arange, argmin a = arange(6).reshape((2,3)) assert argmin(a) == 0 - # assert (argmax(a, axis=0) == array([0, 0, 0])).all() - # assert (argmax(a, axis=1) == array([0, 0])).all() + assert (argmin(a, axis=0) == array([0, 0, 0])).all() + assert (argmin(a, axis=1) == array([0, 0])).all() b = arange(6) b[1] = 0 assert argmin(b) == 0 - + def test_shape(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, identity, shape @@ -44,7 +44,7 @@ # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() # If the accumulator is too small, overflow 
occurs: # assert ones(128, dtype=int8).sum(dtype=int8) == -128 - + def test_amin(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, amin @@ -86,14 +86,14 @@ assert ndim([[1,2,3],[4,5,6]]) == 2 assert ndim(array([[1,2,3],[4,5,6]])) == 2 assert ndim(1) == 0 - + def test_rank(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, rank assert rank([[1,2,3],[4,5,6]]) == 2 assert rank(array([[1,2,3],[4,5,6]])) == 2 assert rank(1) == 0 - + def test_var(self): from numpypy import array, var a = array([[1,2],[3,4]]) @@ -107,3 +107,31 @@ assert std(a) == 1.1180339887498949 # assert (std(a, axis=0) == array([ 1., 1.])).all() # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() + + def test_mean(self): + from numpypy import array, mean + assert mean(array(range(5))) == 2.0 + assert mean(range(5)) == 2.0 + + def test_reshape(self): + from numpypy import arange, array, dtype, reshape + a = arange(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = range(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert reshape(a, (1, -1)).shape == (1, 105) + assert reshape(a, (1, 1, -1)).shape == (1, 1, 105) + assert reshape(a, (-1, 1, 1)).shape == (105, 1, 1) + + def test_transpose(self): + from numpypy import arange, array, transpose, ones + x = arange(4).reshape((2,2)) + assert (transpose(x) == array([[0, 2],[1, 3]])).all() + # Once axes argument is implemented, add more tests + raises(NotImplementedError, "transpose(x, axes=(1, 0, 2))") + # x = ones((1, 2, 3)) + # assert transpose(x, (1, 0, 2)).shape == (2, 1, 3) + diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -64,6 +64,12 @@ check(', '.join([u'a']), u'a') check(', '.join(['a', u'b']), u'a, b') check(u', '.join(['a', 'b']), u'a, b') + 
try: + u''.join([u'a', 2, 3]) + except TypeError, e: + assert 'sequence item 1' in str(e) + else: + raise Exception("DID NOT RAISE") if sys.version_info >= (2,3): def test_contains_ex(self): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -201,7 +201,7 @@ return space.newbool(container.find(item) != -1) def unicode_join__Unicode_ANY(space, w_self, w_list): - list_w = space.unpackiterable(w_list) + list_w = space.listview(w_list) size = len(list_w) if size == 0: @@ -216,22 +216,21 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - sb = UnicodeBuilder() + prealloc_size = len(self) * (size - 1) + for i in range(size): + try: + prealloc_size += len(space.unicode_w(list_w[i])) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + raise operationerrfmt(space.w_TypeError, + "sequence item %d: expected string or Unicode", i) + sb = UnicodeBuilder(prealloc_size) for i in range(size): if self and i != 0: sb.append(self) w_s = list_w[i] - if isinstance(w_s, W_UnicodeObject): - # shortcut for performance - sb.append(w_s._value) - else: - try: - sb.append(space.unicode_w(w_s)) - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - raise operationerrfmt(space.w_TypeError, - "sequence item %d: expected string or Unicode", i) + sb.append(space.unicode_w(w_s)) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -52,7 +52,10 @@ from pypy.jit.metainterp.history import ResOperation args = [_cast_to_box(llargs[i]) for i in range(len(llargs))] - res = _cast_to_box(llres) + if llres: + res = _cast_to_box(llres) + else: + res = None return _cast_to_gcref(ResOperation(no, args, res)) @register_helper(annmodel.SomePtr(llmemory.GCREF)) diff --git 
a/pypy/rlib/longlong2float.py b/pypy/rlib/longlong2float.py --- a/pypy/rlib/longlong2float.py +++ b/pypy/rlib/longlong2float.py @@ -79,19 +79,23 @@ longlong2float = rffi.llexternal( "pypy__longlong2float", [rffi.LONGLONG], rffi.DOUBLE, _callable=longlong2float_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__longlong2float") float2longlong = rffi.llexternal( "pypy__float2longlong", [rffi.DOUBLE], rffi.LONGLONG, _callable=float2longlong_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__float2longlong") uint2singlefloat = rffi.llexternal( "pypy__uint2singlefloat", [rffi.UINT], rffi.FLOAT, _callable=uint2singlefloat_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__uint2singlefloat") singlefloat2uint = rffi.llexternal( "pypy__singlefloat2uint", [rffi.FLOAT], rffi.UINT, _callable=singlefloat2uint_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__singlefloat2uint") diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -420,7 +420,7 @@ vobj.concretetype.TO._gckind == 'gc') else: from pypy.rpython.ootypesystem import ootype - ok = isinstance(vobj.concretetype, ootype.Instance) + ok = isinstance(vobj.concretetype, (ootype.Instance, ootype.BuiltinType)) if not ok: from pypy.rpython.error import TyperError raise TyperError("compute_unique_id() cannot be applied to" diff --git a/pypy/rpython/lltypesystem/rclass.py b/pypy/rpython/lltypesystem/rclass.py --- a/pypy/rpython/lltypesystem/rclass.py +++ 
b/pypy/rpython/lltypesystem/rclass.py @@ -510,7 +510,13 @@ ctype = inputconst(Void, self.object_type) cflags = inputconst(Void, flags) vlist = [ctype, cflags] - vptr = llops.genop('malloc', vlist, + cnonmovable = self.classdef.classdesc.read_attribute( + '_alloc_nonmovable_', Constant(False)) + if cnonmovable.value: + opname = 'malloc_nonmovable' + else: + opname = 'malloc' + vptr = llops.genop(opname, vlist, resulttype = Ptr(self.object_type)) ctypeptr = inputconst(CLASSTYPE, self.rclass.getvtable()) self.setfield(vptr, '__class__', ctypeptr, llops) diff --git a/pypy/rpython/lltypesystem/rtuple.py b/pypy/rpython/lltypesystem/rtuple.py --- a/pypy/rpython/lltypesystem/rtuple.py +++ b/pypy/rpython/lltypesystem/rtuple.py @@ -27,6 +27,10 @@ def newtuple(cls, llops, r_tuple, items_v): # items_v should have the lowleveltype of the internal reprs + assert len(r_tuple.items_r) == len(items_v) + for r_item, v_item in zip(r_tuple.items_r, items_v): + assert r_item.lowleveltype == v_item.concretetype + # if len(r_tuple.items_r) == 0: return inputconst(Void, ()) # a Void empty tuple c1 = inputconst(Void, r_tuple.lowleveltype.TO) diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -512,6 +512,7 @@ "ll_append_char": Meth([CHARTP], Void), "ll_append": Meth([STRINGTP], Void), "ll_build": Meth([], STRINGTP), + "ll_getlength": Meth([], Signed), }) self._setup_methods({}) @@ -1376,6 +1377,9 @@ def _cast_to_object(self): return make_object(self) + def _identityhash(self): + return object.__hash__(self) + class _string(_builtin_type): def __init__(self, STRING, value = ''): @@ -1543,6 +1547,9 @@ else: return make_unicode(u''.join(self._buf)) + def ll_getlength(self): + return self.ll_build().ll_strlen() + class _null_string_builder(_null_mixin(_string_builder), _string_builder): def __init__(self, STRING_BUILDER): self.__dict__["_TYPE"] = STRING_BUILDER diff --git 
a/pypy/rpython/ootypesystem/rbuilder.py b/pypy/rpython/ootypesystem/rbuilder.py --- a/pypy/rpython/ootypesystem/rbuilder.py +++ b/pypy/rpython/ootypesystem/rbuilder.py @@ -21,6 +21,10 @@ builder.ll_append_char(char) @staticmethod + def ll_getlength(builder): + return builder.ll_getlength() + + @staticmethod def ll_append(builder, string): builder.ll_append(string) diff --git a/pypy/rpython/rrange.py b/pypy/rpython/rrange.py --- a/pypy/rpython/rrange.py +++ b/pypy/rpython/rrange.py @@ -204,7 +204,10 @@ v_index = hop.gendirectcall(self.ll_getnextindex, v_enumerate) hop2 = hop.copy() hop2.args_r = [self.r_baseiter] + r_item_src = self.r_baseiter.r_list.external_item_repr + r_item_dst = hop.r_result.items_r[1] v_item = self.r_baseiter.rtype_next(hop2) + v_item = hop.llops.convertvar(v_item, r_item_src, r_item_dst) return hop.r_result.newtuple(hop.llops, hop.r_result, [v_index, v_item]) diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -124,9 +124,5 @@ pass class TestOOtype(BaseTestStringBuilder, OORtypeMixin): - def test_string_getlength(self): - py.test.skip("getlength(): not implemented on ootype") - def test_unicode_getlength(self): - py.test.skip("getlength(): not implemented on ootype") def test_append_charpsize(self): py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -463,6 +463,31 @@ assert x1 == intmask(x0) assert x3 == intmask(x2) + def test_id_on_builtins(self): + from pypy.rlib.objectmodel import compute_unique_id + from pypy.rlib.rstring import StringBuilder, UnicodeBuilder + def fn(): + return (compute_unique_id("foo"), + compute_unique_id(u"bar"), + compute_unique_id([1]), + compute_unique_id({"foo": 3}), + compute_unique_id(StringBuilder()), + 
compute_unique_id(UnicodeBuilder())) + res = self.interpret(fn, []) + for id in self.ll_unpack_tuple(res, 6): + assert isinstance(id, (int, r_longlong)) + + def test_uniqueness_of_id_on_strings(self): + from pypy.rlib.objectmodel import compute_unique_id + def fn(s1, s2): + return (compute_unique_id(s1), compute_unique_id(s2)) + + s1 = "foo" + s2 = ''.join(['f','oo']) + res = self.interpret(fn, [self.string_to_ll(s1), self.string_to_ll(s2)]) + i1, i2 = self.ll_unpack_tuple(res, 2) + assert i1 != i2 + def test_cast_primitive(self): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): diff --git a/pypy/rpython/test/test_rclass.py b/pypy/rpython/test/test_rclass.py --- a/pypy/rpython/test/test_rclass.py +++ b/pypy/rpython/test/test_rclass.py @@ -1130,6 +1130,18 @@ assert sorted([u]) == [6] # 32-bit types assert sorted([i, r, d, l]) == [2, 3, 4, 5] # 64-bit types + def test_nonmovable(self): + for (nonmovable, opname) in [(True, 'malloc_nonmovable'), + (False, 'malloc')]: + class A(object): + _alloc_nonmovable_ = nonmovable + def f(): + return A() + t, typer, graph = self.gengraph(f, []) + assert summary(graph) == {opname: 1, + 'cast_pointer': 1, + 'setfield': 1} + class TestOOtype(BaseTestRclass, OORtypeMixin): diff --git a/pypy/rpython/test/test_rrange.py b/pypy/rpython/test/test_rrange.py --- a/pypy/rpython/test/test_rrange.py +++ b/pypy/rpython/test/test_rrange.py @@ -169,6 +169,22 @@ res = self.interpret(fn, [2]) assert res == 789 + def test_enumerate_instances(self): + class A: + pass + def fn(n): + a = A() + b = A() + a.k = 10 + b.k = 20 + for i, x in enumerate([a, b]): + if i == n: + return x.k + return 5 + res = self.interpret(fn, [1]) + assert res == 20 + + class TestLLtype(BaseTestRrange, LLRtypeMixin): from pypy.rpython.lltypesystem import rrange diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -140,13 +140,15 @@ bytecode_name 
= None is_bytecode = True inline_level = None + has_dmp = False def parse_code_data(self, arg): m = re.search('<code object ([<>\w]+)[\.,] file \'(.+?)\'[\.,] line (\d+)> #(\d+) (\w+)', arg) if m is None: # a non-code loop, like StrLiteralSearch or something - self.bytecode_name = arg + if arg: + self.bytecode_name = arg else: self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups() self.startlineno = int(lineno) @@ -218,7 +220,7 @@ self.inputargs = inputargs self.chunks = chunks for chunk in self.chunks: - if chunk.filename is not None: + if chunk.bytecode_name is not None: self.startlineno = chunk.startlineno self.filename = chunk.filename self.name = chunk.name diff --git a/pypy/tool/jitlogparser/test/test_parser.py b/pypy/tool/jitlogparser/test/test_parser.py --- a/pypy/tool/jitlogparser/test/test_parser.py +++ b/pypy/tool/jitlogparser/test/test_parser.py @@ -283,3 +283,13 @@ assert loops[-1].count == 1234 assert loops[1].count == 123 assert loops[2].count == 12 + +def test_parse_nonpython(): + loop = parse(""" + [] + debug_merge_point(0, 'random') + debug_merge_point(0, '<code object f. file 'x.py'. line 2> #15 COMPARE_OP') + """) + f = Function.from_operations(loop.operations, LoopStorage()) + assert f.chunks[-1].filename == 'x.py' + assert f.filename is None diff --git a/pypy/translator/jvm/builtin.py b/pypy/translator/jvm/builtin.py --- a/pypy/translator/jvm/builtin.py +++ b/pypy/translator/jvm/builtin.py @@ -84,6 +84,9 @@ (ootype.StringBuilder.__class__, "ll_build"): jvm.Method.v(jStringBuilder, "toString", (), jString), + (ootype.StringBuilder.__class__, "ll_getlength"): + jvm.Method.v(jStringBuilder, "length", (), jInt), + (ootype.String.__class__, "ll_hash"): jvm.Method.v(jString, "hashCode", (), jInt), diff --git a/pypy/translator/jvm/database.py b/pypy/translator/jvm/database.py --- a/pypy/translator/jvm/database.py +++ b/pypy/translator/jvm/database.py @@ -358,7 +358,7 @@ ootype.Unsigned:jvm.PYPYSERIALIZEUINT, ootype.SignedLongLong:jvm.LONGTOSTRINGL, ootype.UnsignedLongLong:
jvm.PYPYSERIALIZEULONG, - ootype.Float:jvm.DOUBLETOSTRINGD, + ootype.Float:jvm.PYPYSERIALIZEDOUBLE, ootype.Bool:jvm.PYPYSERIALIZEBOOLEAN, ootype.Void:jvm.PYPYSERIALIZEVOID, ootype.Char:jvm.PYPYESCAPEDCHAR, diff --git a/pypy/translator/jvm/metavm.py b/pypy/translator/jvm/metavm.py --- a/pypy/translator/jvm/metavm.py +++ b/pypy/translator/jvm/metavm.py @@ -92,6 +92,7 @@ CASTS = { # FROM TO (ootype.Signed, ootype.UnsignedLongLong): jvm.I2L, + (ootype.Unsigned, ootype.UnsignedLongLong): jvm.I2L, (ootype.SignedLongLong, ootype.Signed): jvm.L2I, (ootype.UnsignedLongLong, ootype.Unsigned): jvm.L2I, (ootype.UnsignedLongLong, ootype.Signed): jvm.L2I, diff --git a/pypy/translator/jvm/opcodes.py b/pypy/translator/jvm/opcodes.py --- a/pypy/translator/jvm/opcodes.py +++ b/pypy/translator/jvm/opcodes.py @@ -101,6 +101,7 @@ 'jit_force_virtualizable': Ignore, 'jit_force_virtual': DoNothing, 'jit_force_quasi_immutable': Ignore, + 'jit_is_virtual': PushPrimitive(ootype.Bool, False), 'debug_assert': [], # TODO: implement? 
'debug_start_traceback': Ignore, diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -283,6 +283,14 @@ } } + public double pypy__longlong2float(long l) { + return Double.longBitsToDouble(l); + } + + public long pypy__float2longlong(double d) { + return Double.doubleToRawLongBits(d); + } + public double ooparse_float(String s) { try { return Double.parseDouble(s); @@ -353,6 +361,19 @@ return "False"; } + public static String serialize_double(double d) { + if (Double.isNaN(d)) { + return "float(\"nan\")"; + } else if (Double.isInfinite(d)) { + if (d > 0) + return "float(\"inf\")"; + else + return "float(\"-inf\")"; + } else { + return Double.toString(d); + } + } + private static String format_char(char c) { String res = "\\x"; if (c <= 0x0F) res = res + "0"; diff --git a/pypy/translator/jvm/test/runtest.py b/pypy/translator/jvm/test/runtest.py --- a/pypy/translator/jvm/test/runtest.py +++ b/pypy/translator/jvm/test/runtest.py @@ -56,6 +56,7 @@ # CLI could-be duplicate class JvmGeneratedSourceWrapper(object): + def __init__(self, gensrc): """ gensrc is an instance of JvmGeneratedSource """ self.gensrc = gensrc diff --git a/pypy/translator/jvm/test/test_builder.py b/pypy/translator/jvm/test/test_builder.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_builder.py @@ -0,0 +1,7 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rpython.test.test_rbuilder import BaseTestStringBuilder +import py + +class TestJvmStringBuilder(JvmTest, BaseTestStringBuilder): + def test_append_charpsize(self): + py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/translator/jvm/test/test_longlong2float.py b/pypy/translator/jvm/test/test_longlong2float.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_longlong2float.py @@ -0,0 +1,20 @@ +from 
pypy.translator.jvm.test.runtest import JvmTest +from pypy.rlib.longlong2float import * +from pypy.rlib.test.test_longlong2float import enum_floats +from pypy.rlib.test.test_longlong2float import fn as float2longlong2float +import py + +class TestLongLong2Float(JvmTest): + + def test_float2longlong_and_longlong2float(self): + def func(f): + return float2longlong2float(f) + + for f in enum_floats(): + assert repr(f) == repr(self.interpret(func, [f])) + + def test_uint2singlefloat(self): + py.test.skip("uint2singlefloat is not implemented in ootype") + + def test_singlefloat2uint(self): + py.test.skip("singlefloat2uint is not implemented in ootype") diff --git a/pypy/translator/jvm/typesystem.py b/pypy/translator/jvm/typesystem.py --- a/pypy/translator/jvm/typesystem.py +++ b/pypy/translator/jvm/typesystem.py @@ -955,6 +955,7 @@ PYPYSERIALIZEUINT = Method.s(jPyPy, 'serialize_uint', (jInt,), jString) PYPYSERIALIZEULONG = Method.s(jPyPy, 'serialize_ulonglong', (jLong,),jString) PYPYSERIALIZEVOID = Method.s(jPyPy, 'serialize_void', (), jString) +PYPYSERIALIZEDOUBLE = Method.s(jPyPy, 'serialize_double', (jDouble,), jString) PYPYESCAPEDCHAR = Method.s(jPyPy, 'escaped_char', (jChar,), jString) PYPYESCAPEDUNICHAR = Method.s(jPyPy, 'escaped_unichar', (jChar,), jString) PYPYESCAPEDSTRING = Method.s(jPyPy, 'escaped_string', (jString,), jString) diff --git a/pypy/translator/oosupport/test_template/cast.py b/pypy/translator/oosupport/test_template/cast.py --- a/pypy/translator/oosupport/test_template/cast.py +++ b/pypy/translator/oosupport/test_template/cast.py @@ -13,6 +13,9 @@ def to_longlong(x): return r_longlong(x) +def to_ulonglong(x): + return r_ulonglong(x) + def uint_to_int(x): return intmask(x) @@ -56,6 +59,9 @@ def test_unsignedlonglong_to_unsigned4(self): self.check(to_uint, [r_ulonglong(18446744073709551615l)]) # max 64 bit num + def test_unsigned_to_usignedlonglong(self): + self.check(to_ulonglong, [r_uint(42)]) + def test_uint_to_int(self): self.check(uint_to_int, 
[r_uint(sys.maxint+1)]) From noreply at buildbot.pypy.org Fri Jan 20 17:26:35 2012 From: noreply at buildbot.pypy.org (cocoatomo) Date: Fri, 20 Jan 2012 17:26:35 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120120162635.DE0A082CF8@wyvern.cs.uni-duesseldorf.de> Author: cocoatomo Branch: Changeset: r51556:8853197b01a6 Date: 2012-01-21 01:04 +0900 http://bitbucket.org/pypy/pypy/changeset/8853197b01a6/ Log: merge diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -86,6 +86,8 @@ ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ('bitwise_and', 'bitwise_and'), + ('bitwise_or', 'bitwise_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -47,6 +47,10 @@ def getitem(self, storage, i): return self.itemtype.read(storage, self.itemtype.get_element_size(), i, 0) + def getitem_bool(self, storage, i): + isize = self.itemtype.get_element_size() + return self.itemtype.read_bool(storage, isize, i, 0) + def setitem(self, storage, i, box): self.itemtype.store(storage, self.itemtype.get_element_size(), i, 0, box) @@ -85,6 +89,12 @@ def descr_get_shape(self, space): return space.newtuple([]) + def is_int_type(self): + return self.kind == SIGNEDLTR or self.kind == UNSIGNEDLTR + + def is_bool_type(self): + return self.kind == BOOLLTR + W_Dtype.typedef = TypeDef("dtype", __module__ = "numpypy", __new__ = interp2app(W_Dtype.descr__new__.im_func), diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -4,6 +4,19 @@ from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ calculate_slice_strides 
+# structures to describe slicing + +class Chunk(object): + def __init__(self, start, stop, step, lgt): + self.start = start + self.stop = stop + self.step = step + self.lgt = lgt + + def extend_shape(self, shape): + if self.step != 0: + shape.append(self.lgt) + class BaseTransform(object): pass @@ -38,11 +51,18 @@ self.size = size def next(self, shapelen): + return self._next(1) + + def _next(self, ofs): arr = instantiate(ArrayIterator) arr.size = self.size - arr.offset = self.offset + 1 + arr.offset = self.offset + ofs return arr + def next_no_increase(self, shapelen): + # a hack to make JIT believe this is always virtual + return self._next(0) + def done(self): return self.offset >= self.size diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -2,14 +2,15 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature +from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature,\ + interp_boxes from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator + SkipLastAxisIterator, Chunk, ViewIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -39,7 +40,24 @@ get_printable_location=signature.new_printable_location('slice'), name='numpy_slice', ) - +count_driver = jit.JitDriver( + greens=['shapelen'], + virtualizables=['frame'], + reds=['s', 'frame', 'iter', 'arr'], + name='numpy_count' +) +filter_driver = 
jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['concr', 'argi', 'ri', 'frame', 'v', 'res', 'self'], + name='numpy_filter', +) +filter_set_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['idx', 'idxi', 'frame', 'arr'], + name='numpy_filterset', +) def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] @@ -270,6 +288,9 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + def _binop_right_impl(ufunc_name): def impl(self, space, w_other): w_other = scalar_w(space, @@ -479,11 +500,69 @@ def _prepare_slice_args(self, space, w_idx): if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [space.decode_index4(w_idx, self.shape[0])] - return [space.decode_index4(w_item, self.shape[i]) for i, w_item in + return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] + return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] + def count_all_true(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(self) + shapelen = len(arr.shape) + s = 0 + iter = None + while not frame.done(): + count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + shapelen=shapelen) + iter = frame.get_final_iter() + s += arr.dtype.getitem_bool(arr.storage, iter.offset) + frame.next(shapelen) + return s + + def getitem_filter(self, space, arr): + concr = arr.get_concrete() + size = self.count_all_true(concr) + res = W_NDimArray(size, [size], self.find_dtype()) + ri = ArrayIterator(size) + shapelen = len(self.shape) + argi = concr.create_iter() + sig = self.find_sig() + frame = sig.create_frame(self) + v = None + while not frame.done(): + filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, + frame=frame, v=v, res=res, sig=sig, + shapelen=shapelen, self=self) + if 
concr.dtype.getitem_bool(concr.storage, argi.offset): + v = sig.eval(frame, self) + res.setitem(ri.offset, v) + ri = ri.next(1) + else: + ri = ri.next_no_increase(1) + argi = argi.next(shapelen) + frame.next(shapelen) + return res + + def setitem_filter(self, space, idx, val): + size = self.count_all_true(idx) + arr = SliceArray([size], self.dtype, self, val) + sig = arr.find_sig() + shapelen = len(self.shape) + frame = sig.create_frame(arr) + idxi = idx.create_iter() + while not frame.done(): + filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, + frame=frame, arr=arr, + shapelen=shapelen) + if idx.dtype.getitem_bool(idx.storage, idxi.offset): + sig.eval(frame, arr) + frame.next_from_second(1) + frame.next_first(shapelen) + idxi = idxi.next(shapelen) + def descr_getitem(self, space, w_idx): + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.getitem_filter(space, w_idx) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -493,6 +572,11 @@ def descr_setitem(self, space, w_idx, w_value): self.invalidated() + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.get_concrete().setitem_filter(space, + w_idx.get_concrete(), + convert_to_array(space, w_value)) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -509,9 +593,8 @@ def create_slice(self, chunks): shape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - shape.append(lgt) + for i, chunk in enumerate(chunks): + chunk.extend_shape(shape) s = i + 1 assert s >= 0 shape += self.shape[s:] @@ -724,8 +807,7 @@ frame=frame, ri=ri, self=self, result=result) - result.dtype.setitem(result.storage, ri.offset, - sig.eval(frame, self)) + result.setitem(ri.offset, sig.eval(frame, self)) 
frame.next(shapelen) ri = ri.next(shapelen) return result @@ -945,7 +1027,7 @@ builder.append('\n' + indent) else: builder.append(indent) - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: @@ -962,7 +1044,7 @@ builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) i += 1 @@ -1091,6 +1173,10 @@ parent) self.start = start + def create_iter(self): + return ViewIterator(self.start, self.strides, self.backstrides, + self.shape) + def setshape(self, space, new_shape): if len(self.shape) < 1: return @@ -1137,6 +1223,9 @@ self.shape = new_shape self.calc_strides(new_shape) + def create_iter(self): + return ArrayIterator(self.size) + def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1191,6 +1280,7 @@ arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) shapelen = len(shape) arr_iter = ArrayIterator(arr.size) + # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] dtype.setitem(arr.storage, arr_iter.offset, @@ -1257,6 +1347,9 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -249,15 +249,16 @@ class W_Ufunc2(W_Ufunc): - 
_immutable_fields_ = ["comparison_func", "func", "name"] + _immutable_fields_ = ["comparison_func", "func", "name", "int_only"] argcount = 2 def __init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None, comparison_func=False): + identity=None, comparison_func=False, int_only=False): W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) self.func = func self.comparison_func = comparison_func + self.int_only = int_only def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, @@ -268,6 +269,7 @@ w_rhs = convert_to_array(space, w_rhs) calc_dtype = find_binop_result_dtype(space, w_lhs.find_dtype(), w_rhs.find_dtype(), + int_only=self.int_only, promote_to_float=self.promote_to_float, promote_bools=self.promote_bools, ) @@ -304,10 +306,12 @@ def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, - promote_bools=False): + promote_bools=False, int_only=False): # dt1.num should be <= dt2.num if dt1.num > dt2.num: dt1, dt2 = dt2, dt1 + if int_only and (not dt1.is_int_type() or not dt2.is_int_type()): + raise OperationError(space.w_TypeError, space.wrap("Unsupported types")) # Some operations promote op(bool, bool) to return int8, rather than bool if promote_bools and (dt1.kind == dt2.kind == interp_dtype.BOOLLTR): return interp_dtype.get_dtype_cache(space).w_int8dtype @@ -425,6 +429,10 @@ ("add", "add", 2, {"identity": 0}), ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), + ("bitwise_and", "bitwise_and", 2, {"identity": 1, + 'int_only': True}), + ("bitwise_or", "bitwise_or", 2, {"identity": 0, + 'int_only': True}), ("divide", "div", 2, {"promote_bools": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), @@ -476,7 +484,7 @@ extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, - comparison_func=extra_kwargs.get("comparison_func", False) + 
comparison_func=extra_kwargs.get("comparison_func", False), ) if argcount == 1: ufunc = W_Ufunc1(func, ufunc_name, **extra_kwargs) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -82,6 +82,16 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + def get_final_iter(self): final_iter = promote(self.final_iter) if final_iter < 0: diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -10,12 +10,12 @@ rstart = start rshape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - rstrides.append(strides[i] * step) - rbackstrides.append(strides[i] * (lgt - 1) * step) - rshape.append(lgt) - rstart += strides[i] * start_ + for i, chunk in enumerate(chunks): + if chunk.step != 0: + rstrides.append(strides[i] * chunk.step) + rbackstrides.append(strides[i] * (chunk.lgt - 1) * chunk.step) + rshape.append(chunk.lgt) + rstart += strides[i] * chunk.start # add a reminder s = i + 1 assert s >= 0 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -2,6 +2,7 @@ import py from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement +from pypy.module.micronumpy.interp_iter import Chunk from pypy.module.micronumpy import signature from pypy.interpreter.error 
import OperationError from pypy.conftest import gettestobjspace @@ -37,53 +38,54 @@ def test_create_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 3 assert s.strides == [10, 50] assert s.backstrides == [40, 100] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 1 assert s.strides == [2, 10, 50] assert s.backstrides == [6, 40, 100] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.strides == [3, 10] assert s.backstrides == [3, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] assert s.backstrides == [12, 2] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 15 assert s.strides == [30, 3, 1] assert s.backstrides == [90, 12, 2] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), + Chunk(1, 0, 0, 1)]) assert s.start == 19 assert s.shape == [2, 1] assert s.strides == [45, 3] assert s.backstrides == [45, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) 
assert s2.shape == [3] assert s2.strides == [50] assert s2.parent is a assert s2.backstrides == [100] assert s2.start == 35 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [3, 50] assert s2.backstrides == [3, 100] @@ -91,16 +93,16 @@ def test_slice_of_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [1] assert s2.parent is a assert s2.backstrides == [2] assert s2.start == 5 * 15 + 3 * 3 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [45, 1] assert s2.backstrides == [45, 2] @@ -108,14 +110,14 @@ def test_negative_step_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] @@ -124,7 +126,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = 
s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -134,7 +136,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -1302,15 +1304,25 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_array_indexing_one_elem(self): + skip("not yet") + from _numpypy import array, arange + raises(IndexError, 'arange(3)[array([3.5])]') + a = arange(3)[array([1])] + assert a == 1 + assert a[0] == 1 + raises(IndexError,'arange(3)[array([15])]') + assert arange(3)[array([-3])] == 0 + raises(IndexError,'arange(3)[array([-15])]') + assert arange(3)[array(1)] == 1 + def test_fill(self): from _numpypy import array - a = array([1, 2, 3]) a.fill(10) assert (a == [10, 10, 10]).all() a.fill(False) assert (a == [0, 0, 0]).all() - b = a[:1] b.fill(4) assert (b == [4]).all() @@ -1324,6 +1336,24 @@ d.fill(100) assert d == 100 + def test_array_indexing_bool(self): + from _numpypy import arange + a = arange(10) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + a = arange(10).reshape(5, 2) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + assert (a[a & 1 == 1] == [1, 3, 5, 7, 9]).all() + + def test_array_indexing_bool_setitem(self): + from _numpypy import arange, array + a = arange(6) + a[a > 3] = 15 + assert (a == [0, 1, 2, 3, 15, 15]).all() + a = arange(6).reshape(3, 2) + a[a & 1 == 1] = array([8, 9, 10]) + assert (a == [[0, 8], [2, 9], [4, 10]]).all() + + class 
AppTestSupport(BaseNumpyAppTest): def setup_class(cls): diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -355,11 +355,20 @@ raises(ValueError, maximum.reduce, []) def test_reduceND(self): - from numpypy import add, arange + from _numpypy import add, arange a = arange(12).reshape(3, 4) assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_bitwise(self): + from _numpypy import bitwise_and, bitwise_or, arange, array + a = arange(6).reshape(2, 3) + assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() + assert (a & 1 == bitwise_and(a, 1)).all() + assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() + assert (a | 1 == bitwise_or(a, 1)).all() + raises(TypeError, 'array([1.0]) & 1') + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -217,6 +217,7 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. 
+ py.test.skip("too fragile") self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, 'getfield_gc_pure': 8, diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -94,6 +94,9 @@ width, storage, i, offset )) + def read_bool(self, storage, width, i, offset): + raise NotImplementedError + def store(self, storage, width, i, offset, box): value = self.unbox(box) libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), @@ -168,6 +171,7 @@ @simple_binary_op def min(self, v1, v2): return min(v1, v2) + class Bool(BaseType, Primitive): T = lltype.Bool @@ -185,6 +189,11 @@ else: return self.False + + def read_bool(self, storage, width, i, offset): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + def coerce_subtype(self, space, w_subtype, w_item): # Doesn't return subclasses so it can return the constants. 
return self._coerce(space, w_item) @@ -253,6 +262,14 @@ assert v == 0 return 0 + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box diff --git a/pypy/tool/test/test_version.py b/pypy/tool/test/test_version.py --- a/pypy/tool/test/test_version.py +++ b/pypy/tool/test/test_version.py @@ -1,6 +1,22 @@ import os, sys import py -from pypy.tool.version import get_repo_version_info +from pypy.tool.version import get_repo_version_info, _get_hg_archive_version + +def test_hg_archival_version(tmpdir): + def version_for(name, **kw): + path = tmpdir.join(name) + path.write('\n'.join('%s: %s' % x for x in kw.items())) + return _get_hg_archive_version(str(path)) + + assert version_for('release', + tag='release-123', + node='000', + ) == ('PyPy', 'release-123', '000') + assert version_for('somebranch', + node='000', + branch='something', + ) == ('PyPy', 'something', '000') + def test_get_repo_version_info(): assert get_repo_version_info(None) diff --git a/pypy/tool/version.py b/pypy/tool/version.py --- a/pypy/tool/version.py +++ b/pypy/tool/version.py @@ -3,111 +3,139 @@ from subprocess import Popen, PIPE import pypy pypydir = os.path.dirname(os.path.abspath(pypy.__file__)) +pypyroot = os.path.dirname(pypydir) +default_retval = 'PyPy', '?', '?' + +def maywarn(err, repo_type='Mercurial'): + if not err: + return + + from pypy.tool.ansi_print import ansi_log + log = py.log.Producer("version") + py.log.setconsumer("version", ansi_log) + log.WARNING('Errors getting %s information: %s' % (repo_type, err)) def get_repo_version_info(hgexe=None): '''Obtain version information by invoking the 'hg' or 'git' commands.''' - # TODO: support extracting from .hg_archival.txt - - default_retval = 'PyPy', '?', '?' 
- pypyroot = os.path.abspath(os.path.join(pypydir, '..')) - - def maywarn(err, repo_type='Mercurial'): - if not err: - return - - from pypy.tool.ansi_print import ansi_log - log = py.log.Producer("version") - py.log.setconsumer("version", ansi_log) - log.WARNING('Errors getting %s information: %s' % (repo_type, err)) # Try to see if we can get info from Git if hgexe is not specified. if not hgexe: if os.path.isdir(os.path.join(pypyroot, '.git')): - gitexe = py.path.local.sysfind('git') - if gitexe: - try: - p = Popen( - [str(gitexe), 'rev-parse', 'HEAD'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - except OSError, e: - maywarn(e, 'Git') - return default_retval - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return default_retval - revision_id = p.stdout.read().strip()[:12] - p = Popen( - [str(gitexe), 'describe', '--tags', '--exact-match'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - if p.wait() != 0: - p = Popen( - [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, - cwd=pypyroot - ) - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return 'PyPy', '?', revision_id - branch = '?' - for line in p.stdout.read().strip().split('\n'): - if line.startswith('* '): - branch = line[1:].strip() - if branch == '(no branch)': - branch = '?' - break - return 'PyPy', branch, revision_id - return 'PyPy', p.stdout.read().strip(), revision_id + return _get_git_version() # Fallback to trying Mercurial. 
if hgexe is None: hgexe = py.path.local.sysfind('hg') - if not os.path.isdir(os.path.join(pypyroot, '.hg')): + if os.path.isfile(os.path.join(pypyroot, '.hg_archival.txt')): + return _get_hg_archive_version(os.path.join(pypyroot, '.hg_archival.txt')) + elif not os.path.isdir(os.path.join(pypyroot, '.hg')): maywarn('Not running from a Mercurial repository!') return default_retval elif not hgexe: maywarn('Cannot find Mercurial command!') return default_retval else: - env = dict(os.environ) - # get Mercurial into scripting mode - env['HGPLAIN'] = '1' - # disable user configuration, extensions, etc. - env['HGRCPATH'] = os.devnull + return _get_hg_version(hgexe) - try: - p = Popen([str(hgexe), 'version', '-q'], - stdout=PIPE, stderr=PIPE, env=env) - except OSError, e: - maywarn(e) - return default_retval - if not p.stdout.read().startswith('Mercurial Distributed SCM'): - maywarn('command does not identify itself as Mercurial') - return default_retval +def _get_hg_version(hgexe): + env = dict(os.environ) + # get Mercurial into scripting mode + env['HGPLAIN'] = '1' + # disable user configuration, extensions, etc. + env['HGRCPATH'] = os.devnull - p = Popen([str(hgexe), 'id', '-i', pypyroot], + try: + p = Popen([str(hgexe), 'version', '-q'], stdout=PIPE, stderr=PIPE, env=env) - hgid = p.stdout.read().strip() + except OSError, e: + maywarn(e) + return default_retval + + if not p.stdout.read().startswith('Mercurial Distributed SCM'): + maywarn('command does not identify itself as Mercurial') + return default_retval + + p = Popen([str(hgexe), 'id', '-i', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgid = p.stdout.read().strip() + maywarn(p.stderr.read()) + if p.wait() != 0: + hgid = '?' 
+ + p = Popen([str(hgexe), 'id', '-t', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] + maywarn(p.stderr.read()) + if p.wait() != 0: + hgtags = ['?'] + + if hgtags: + return 'PyPy', hgtags[0], hgid + else: + # use the branch instead + p = Popen([str(hgexe), 'id', '-b', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgbranch = p.stdout.read().strip() maywarn(p.stderr.read()) + + return 'PyPy', hgbranch, hgid + + +def _get_hg_archive_version(path): + fp = open(path) + try: + data = dict(x.split(': ', 1) for x in fp.read().splitlines()) + finally: + fp.close() + if 'tag' in data: + return 'PyPy', data['tag'], data['node'] + else: + return 'PyPy', data['branch'], data['node'] + + +def _get_git_version(): + #XXX: this function is a untested hack, + # so the git mirror tav made will work + gitexe = py.path.local.sysfind('git') + if not gitexe: + return default_retval + + try: + p = Popen( + [str(gitexe), 'rev-parse', 'HEAD'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + except OSError, e: + maywarn(e, 'Git') + return default_retval + if p.wait() != 0: + maywarn(p.stderr.read(), 'Git') + return default_retval + revision_id = p.stdout.read().strip()[:12] + p = Popen( + [str(gitexe), 'describe', '--tags', '--exact-match'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + if p.wait() != 0: + p = Popen( + [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, + cwd=pypyroot + ) if p.wait() != 0: - hgid = '?' + maywarn(p.stderr.read(), 'Git') + return 'PyPy', '?', revision_id + branch = '?' + for line in p.stdout.read().strip().split('\n'): + if line.startswith('* '): + branch = line[1:].strip() + if branch == '(no branch)': + branch = '?' 
+ break + return 'PyPy', branch, revision_id + return 'PyPy', p.stdout.read().strip(), revision_id - p = Popen([str(hgexe), 'id', '-t', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] - maywarn(p.stderr.read()) - if p.wait() != 0: - hgtags = ['?'] - if hgtags: - return 'PyPy', hgtags[0], hgid - else: - # use the branch instead - p = Popen([str(hgexe), 'id', '-b', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgbranch = p.stdout.read().strip() - maywarn(p.stderr.read()) - - return 'PyPy', hgbranch, hgid +if __name__ == '__main__': + print get_repo_version_info() From noreply at buildbot.pypy.org Fri Jan 20 18:01:54 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 20 Jan 2012 18:01:54 +0100 (CET) Subject: [pypy-commit] pypy stm: Propagate the exception that occurs in a transaction. Message-ID: <20120120170154.2B71A82CF8@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51558:b4fd35482b14 Date: 2012-01-20 18:01 +0100 http://bitbucket.org/pypy/pypy/changeset/b4fd35482b14/ Log: Propagate the exception that occurs in a transaction. diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -89,9 +89,13 @@ @staticmethod def _run_in_transaction(pending): - space = state.space - space.call_args(pending.w_callback, pending.args) - # xxx exceptions? 
+ if state.got_exception is not None: + return # return early if there is already a 'got_exception' + try: + space = state.space + space.call_args(pending.w_callback, pending.args) + except Exception, e: + state.got_exception = e def add(space, w_callback, __args__): @@ -159,6 +163,7 @@ state.num_waiting_threads = 0 state.finished = False state.running = True + state.got_exception = None # for i in range(state.num_threads): threadintf.start_new_thread(_run_thread, ()) @@ -170,3 +175,9 @@ assert state.pending_lists.keys() == [state.main_thread_id] assert not state.is_locked_no_tasks_pending() state.running = False + # + # now re-raise the exception that we got in a transaction + if state.got_exception is not None: + e = state.got_exception + state.got_exception = None + raise e diff --git a/pypy/module/transaction/test/test_transaction.py b/pypy/module/transaction/test/test_transaction.py --- a/pypy/module/transaction/test/test_transaction.py +++ b/pypy/module/transaction/test/test_transaction.py @@ -41,3 +41,21 @@ for index in range(7): assert lst[start + index] == lst[start] + index assert seen == set([10, 20, 30]) + + def test_propagate_exception(self): + import transaction, time + lst = [] + def f(n): + lst.append(n) + time.sleep(0.5) + raise ValueError(n) + transaction.add(f, 10) + transaction.add(f, 20) + transaction.add(f, 30) + try: + transaction.run() + assert 0, "should have raised ValueError" + except ValueError, e: + pass + assert len(lst) == 1 + assert lst[0] == e.args[0] From noreply at buildbot.pypy.org Fri Jan 20 18:46:33 2012 From: noreply at buildbot.pypy.org (bivab) Date: Fri, 20 Jan 2012 18:46:33 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: Forgot to pass the condition flag here. Argh!! 
Message-ID: <20120120174633.C8A3682CF8@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51559:0109b2cfd41f Date: 2012-01-20 18:46 +0100 http://bitbucket.org/pypy/pypy/changeset/0109b2cfd41f/ Log: Forgot to pass the condition flag here. Argh!! diff --git a/pypy/jit/backend/arm/codebuilder.py b/pypy/jit/backend/arm/codebuilder.py --- a/pypy/jit/backend/arm/codebuilder.py +++ b/pypy/jit/backend/arm/codebuilder.py @@ -179,7 +179,7 @@ def BL(self, addr, c=cond.AL): target = rffi.cast(rffi.INT, addr) self.gen_load_int(reg.ip.value, target, cond=c) - self.BLX(reg.ip.value) + self.BLX(reg.ip.value, c) def BLX(self, reg, c=cond.AL): self.write32(c << 28 | 0x12FFF3 << 4 | (reg & 0xF)) From noreply at buildbot.pypy.org Sat Jan 21 10:25:39 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sat, 21 Jan 2012 10:25:39 +0100 (CET) Subject: [pypy-commit] pypy py3k: Port the generator applevel tests to py3k Message-ID: <20120121092539.26C3282CB2@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51560:328a6033f00a Date: 2012-01-21 10:25 +0100 http://bitbucket.org/pypy/pypy/changeset/328a6033f00a/ Log: Port the generator applevel tests to py3k diff --git a/pypy/interpreter/test/test_generator.py b/pypy/interpreter/test/test_generator.py --- a/pypy/interpreter/test/test_generator.py +++ b/pypy/interpreter/test/test_generator.py @@ -3,14 +3,14 @@ def test_generator(self): def f(): yield 1 - assert f().next() == 1 + assert next(f()) == 1 def test_generator2(self): def f(): yield 1 g = f() - assert g.next() == 1 - raises(StopIteration, g.next) + assert next(g) == 1 + raises(StopIteration, next, g) def test_attributes(self): def f(): @@ -21,9 +21,9 @@ assert g.__name__ == 'f' assert g.gi_frame is not None assert not g.gi_running - g.next() + next(g) assert not g.gi_running - raises(StopIteration, g.next) + raises(StopIteration, next, g) assert not g.gi_running assert g.gi_frame is None assert g.gi_code is 
f.func_code @@ -43,13 +43,13 @@ def test_generator5(self): d = {} - exec """if 1: + exec("""if 1: def f(): v = (yield ) yield v g = f() - g.next() - """ in d + next(g) + """, d, d) g = d['g'] assert g.send(42) == 42 @@ -73,13 +73,13 @@ except: yield 3 g = f() - assert g.next() == 1 + assert next(g) == 1 assert g.throw(NameError("Error")) == 3 - raises(StopIteration, g.next) + raises(StopIteration, next, g) def test_throw4(self): d = {} - exec """if 1: + exec("""if 1: def f(): try: yield 1 @@ -87,12 +87,12 @@ except: yield 3 g = f() - """ in d + """, d, d) g = d['g'] - assert g.next() == 1 - assert g.next() == 2 + assert next(g) == 1 + assert next(g) == 2 assert g.throw(NameError("Error")) == 3 - raises(StopIteration, g.next) + raises(StopIteration, next, g) def test_throw5(self): def f(): @@ -105,7 +105,7 @@ except: pass g = f() - g.next() + next(g) # String exceptions are not allowed anymore raises(TypeError, g.throw, "Error") assert g.throw(Exception) == 3 @@ -158,9 +158,9 @@ def f(): yield 1 g = f() - res = g.next() + res = next(g) assert res == 1 - raises(StopIteration, g.next) + raises(StopIteration, next, g) raises(NameError, g.throw, NameError) def test_close(self): @@ -176,7 +176,7 @@ except GeneratorExit: raise StopIteration g = f() - g.next() + next(g) assert g.close() is None def test_close3(self): @@ -186,7 +186,7 @@ except GeneratorExit: raise NameError g = f() - g.next() + next(g) raises(NameError, g.close) def test_close_fail(self): @@ -196,22 +196,22 @@ except GeneratorExit: yield 2 g = f() - g.next() + next(g) raises(RuntimeError, g.close) def test_close_on_collect(self): ## we need to exec it, else it won't run on python2.4 d = {} - exec """ + exec(""" def f(): try: yield finally: f.x = 42 - """.strip() in d + """.strip(), d, d) g = d['f']() - g.next() + next(g) del g import gc gc.collect() @@ -233,30 +233,31 @@ def test_generator_propagate_stopiteration(self): def f(): it = iter([1]) - while 1: yield it.next() + while 1: yield next(it) g = f() 
assert [x for x in g] == [1] def test_generator_restart(self): def g(): - i = me.next() + i = next(me) yield i me = g() - raises(ValueError, me.next) + raises(ValueError, next, me) def test_generator_expression(self): - exec "res = sum(i*i for i in range(5))" - assert res == 30 + d = {} + exec("res = sum(i*i for i in range(5))", d, d) + assert d['res'] == 30 def test_generator_expression_2(self): d = {} - exec """ + exec(""" def f(): total = sum(i for i in [x for x in z]) return total, x z = [1, 2, 7] res = f() -""" in d +""", d, d) assert d['res'] == (10, 7) def test_repr(self): @@ -278,4 +279,4 @@ def f(): yield 1 raise StopIteration - assert tuple(f()) == (1,) \ No newline at end of file + assert tuple(f()) == (1,) From noreply at buildbot.pypy.org Sat Jan 21 10:42:02 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 21 Jan 2012 10:42:02 +0100 (CET) Subject: [pypy-commit] pypy default: Fix a complexity bug: if 'b' is a large bytearray, Message-ID: <20120121094202.F3BC382CB2@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51561:a9118db9d0bc Date: 2012-01-21 10:41 +0100 http://bitbucket.org/pypy/pypy/changeset/a9118db9d0bc/ Log: Fix a complexity bug: if 'b' is a large bytearray, b[small_slice] = small_string used to take a time proportional to how long the large bytearray is. 
diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -624,8 +624,8 @@ if step == 1: assert start >= 0 - assert slicelength >= 0 - del items[start:start+slicelength] + if slicelength > 0: + del items[start:start+slicelength] else: n = len(items) i = start @@ -662,10 +662,11 @@ while i >= lim: items[i] = items[i-delta] i -= 1 - elif start >= 0: + elif delta == 0: + pass + else: + assert start >= 0 # start<0 is only possible with slicelength==0 del items[start:start+delta] - else: - assert delta==0 # start<0 is only possible with slicelength==0 elif len2 != slicelength: # No resize for extended slices raise operationerrfmt(space.w_ValueError, "attempt to " "assign sequence of size %d to extended slice of size %d", diff --git a/pypy/objspace/std/test/test_bytearrayobject.py b/pypy/objspace/std/test/test_bytearrayobject.py --- a/pypy/objspace/std/test/test_bytearrayobject.py +++ b/pypy/objspace/std/test/test_bytearrayobject.py @@ -1,5 +1,9 @@ +from pypy import conftest class AppTestBytesArray: + def setup_class(cls): + cls.w_runappdirect = cls.space.wrap(conftest.option.runappdirect) + def test_basics(self): b = bytearray() assert type(b) is bytearray @@ -439,3 +443,15 @@ def test_reduce(self): assert bytearray('caf\xe9').__reduce__() == ( bytearray, (u'caf\xe9', 'latin-1'), None) + + def test_setitem_slice_performance(self): + # because of a complexity bug, this used to take forever on a + # translated pypy. On CPython2.6 -A, it takes around 8 seconds. 
+ if self.runappdirect: + count = 16*1024*1024 + else: + count = 1024 + b = bytearray(count) + for i in range(count): + b[i:i+1] = 'y' + assert str(b) == 'y' * count From noreply at buildbot.pypy.org Sat Jan 21 10:49:06 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sat, 21 Jan 2012 10:49:06 +0100 (CET) Subject: [pypy-commit] pypy py3k: Some applevel code porting for test_interpreter Message-ID: <20120121094906.6F92C82CB2@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51562:76974f372468 Date: 2012-01-21 10:48 +0100 http://bitbucket.org/pypy/pypy/changeset/76974f372468/ Log: Some applevel code porting for test_interpreter diff --git a/pypy/interpreter/test/test_interpreter.py b/pypy/interpreter/test/test_interpreter.py --- a/pypy/interpreter/test/test_interpreter.py +++ b/pypy/interpreter/test/test_interpreter.py @@ -265,7 +265,7 @@ out = Out() try: sys.stdout = out - print 10 + print(10) assert out.args == ['10','\n'] finally: sys.stdout = save @@ -281,15 +281,15 @@ self.data.append((type(x), x)) sys.stdout = out = Out() try: - print unichr(0xa2) + print(unichr(0xa2)) assert out.data == [(unicode, unichr(0xa2)), (str, "\n")] out.data = [] out.encoding = "cp424" # ignored! - print unichr(0xa2) + print(unichr(0xa2)) assert out.data == [(unicode, unichr(0xa2)), (str, "\n")] del out.data[:] del out.encoding - print u"foo\t", u"bar\n", u"trick", u"baz\n" # softspace handling + print "foo\t", "bar\n", "trick", "baz\n" # softspace handling assert out.data == [(unicode, "foo\t"), (unicode, "bar\n"), (unicode, "trick"), @@ -307,7 +307,7 @@ def f(): f() try: f() - except RuntimeError, e: + except RuntimeError as e: assert str(e) == "maximum recursion depth exceeded" else: assert 0, "should have raised!" 
From noreply at buildbot.pypy.org Sat Jan 21 11:14:33 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sat, 21 Jan 2012 11:14:33 +0100 (CET) Subject: [pypy-commit] pypy py3k: Port test_signal's applevel tests to py3k Message-ID: <20120121101433.2616582CB2@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51563:538110d792c0 Date: 2012-01-21 11:13 +0100 http://bitbucket.org/pypy/pypy/changeset/538110d792c0/ Log: Port test_signal's applevel tests to py3k diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -174,7 +174,7 @@ except OSError: pass else: - raise AssertionError, "os.read(fd_read, 1) succeeded?" + raise AssertionError("os.read(fd_read, 1) succeeded?") # fd_read, fd_write = posix.pipe() flags = fcntl.fcntl(fd_write, fcntl.F_GETFL, 0) @@ -189,7 +189,7 @@ cannot_read() posix.kill(posix.getpid(), signal.SIGUSR1) res = posix.read(fd_read, 1) - assert res == '\x00' + assert res == b'\x00' cannot_read() finally: old_wakeup = signal.set_wakeup_fd(old_wakeup) @@ -252,7 +252,7 @@ signal(SIGALRM, handler) alarm(1) try: - s.accept() + s._accept() except Alarm: pass else: From noreply at buildbot.pypy.org Sat Jan 21 11:18:18 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 21 Jan 2012 11:18:18 +0100 (CET) Subject: [pypy-commit] pypy default: lists have the same performance issue as bytearray objects. Fix. Message-ID: <20120121101818.9BC2A82CB2@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51564:5c3a4d558079 Date: 2012-01-21 11:18 +0100 http://bitbucket.org/pypy/pypy/changeset/5c3a4d558079/ Log: lists have the same performance issue as bytearray objects. Fix. 
diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -804,10 +804,11 @@ while i >= lim: items[i] = items[i-delta] i -= 1 - elif start >= 0: + elif delta == 0: + pass + else: + assert start >= 0 # start<0 is only possible with slicelength==0 del items[start:start+delta] - else: - assert delta==0 # start<0 is only possible with slicelength==0 elif len2 != slicelength: # No resize for extended slices raise operationerrfmt(self.space.w_ValueError, "attempt to " "assign sequence of size %d to extended slice of size %d", @@ -851,8 +852,8 @@ if step == 1: assert start >= 0 - assert slicelength >= 0 - del items[start:start+slicelength] + if slicelength > 0: + del items[start:start+slicelength] else: n = len(items) i = start diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -397,6 +397,7 @@ on_cpython = (option.runappdirect and not hasattr(sys, 'pypy_translation_info')) cls.w_on_cpython = cls.space.wrap(on_cpython) + cls.w_runappdirect = cls.space.wrap(option.runappdirect) def test_getstrategyfromlist_w(self): l0 = ["a", "2", "a", True] @@ -898,6 +899,18 @@ l[::-1] = l assert l == [6,5,4,3,2,1] + def test_setitem_slice_performance(self): + # because of a complexity bug, this used to take forever on a + # translated pypy. On CPython2.6 -A, it takes around 5 seconds. 
+ if self.runappdirect: + count = 16*1024*1024 + else: + count = 1024 + b = [None] * count + for i in range(count): + b[i:i+1] = ['y'] + assert b == ['y'] * count + def test_recursive_repr(self): l = [] assert repr(l) == '[]' From noreply at buildbot.pypy.org Sat Jan 21 11:22:08 2012 From: noreply at buildbot.pypy.org (bivab) Date: Sat, 21 Jan 2012 11:22:08 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: planning for today Message-ID: <20120121102208.4A8AF82CB2@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: extradoc Changeset: r4048:e398e08fdd6b Date: 2012-01-21 11:21 +0100 http://bitbucket.org/pypy/extradoc/changeset/e398e08fdd6b/ Log: planning for today diff --git a/sprintinfo/leysin-winter-2012/planning.txt b/sprintinfo/leysin-winter-2012/planning.txt --- a/sprintinfo/leysin-winter-2012/planning.txt +++ b/sprintinfo/leysin-winter-2012/planning.txt @@ -2,7 +2,6 @@ People present -------------- -* Antonio Cuni * Armin Rigo * Romain Guillebert * David Schneider @@ -14,19 +13,13 @@ * review the JVM backend pull request (DONE) -* py3k (romain, anto) (some progress) +* py3k (romain) (little bit of progress) -* ffistruct - -* Cython backend - -* Debug the ARM backend (bivab, armin around) (some progress) +* Debug the ARM backend (bivab, armin around) (some more progress) * STM - refactored the RPython API (DONE) - app-level transaction module (DONE) - - start work on the GC + - start work on the GC (armin) -* concurrent-marksweep GC - -* lightweight tracing experiment (everyone) +* lightweight tracing experiment (DONE but still experimental) From noreply at buildbot.pypy.org Sat Jan 21 11:38:02 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 21 Jan 2012 11:38:02 +0100 (CET) Subject: [pypy-commit] pypy default: Special-case in RPython the case of "x ** 2.0". 
Message-ID: <20120121103802.0FBD582CB2@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51565:c5d041657831 Date: 2012-01-21 11:37 +0100 http://bitbucket.org/pypy/pypy/changeset/c5d041657831/ Log: Special-case in RPython the case of "x ** 2.0". diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -292,6 +292,10 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 + if y == 2.0: + return x * x # this is always a correct answer, and is relatively + # common in user programs. + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 From noreply at buildbot.pypy.org Sat Jan 21 12:34:04 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 21 Jan 2012 12:34:04 +0100 (CET) Subject: [pypy-commit] pypy default: Disable the error about "pypy translate.py" overwriting the same "pypy" Message-ID: <20120121113404.7209D82CB2@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51566:75795d0a00d2 Date: 2012-01-21 12:33 +0100 http://bitbucket.org/pypy/pypy/changeset/75795d0a00d2/ Log: Disable the error about "pypy translate.py" overwriting the same "pypy" if we are actually only running it partly, e.g. "pypy --annotate translate.py". 
diff --git a/pypy/translator/goal/translate.py b/pypy/translator/goal/translate.py --- a/pypy/translator/goal/translate.py +++ b/pypy/translator/goal/translate.py @@ -293,17 +293,18 @@ drv.exe_name = targetspec_dic['__name__'] + '-%(backend)s' # Double check to ensure we are not overwriting the current interpreter - try: - this_exe = py.path.local(sys.executable).new(ext='') - exe_name = drv.compute_exe_name() - samefile = this_exe.samefile(exe_name) - assert not samefile, ( - 'Output file %s is the currently running ' - 'interpreter (use --output=...)'% exe_name) - except EnvironmentError: - pass + goals = translateconfig.goals + if not goals or 'compile' in goals: + try: + this_exe = py.path.local(sys.executable).new(ext='') + exe_name = drv.compute_exe_name() + samefile = this_exe.samefile(exe_name) + assert not samefile, ( + 'Output file %s is the currently running ' + 'interpreter (use --output=...)'% exe_name) + except EnvironmentError: + pass - goals = translateconfig.goals try: drv.proceed(goals) finally: From noreply at buildbot.pypy.org Sat Jan 21 13:12:06 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 13:12:06 +0100 (CET) Subject: [pypy-commit] pypy default: add a few asserts, propagate axis where it works Message-ID: <20120121121206.BA94182CB2@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51567:5a6305c3c894 Date: 2012-01-21 13:58 +0200 http://bitbucket.org/pypy/pypy/changeset/5a6305c3c894/ Log: add a few asserts, propagate axis where it works diff --git a/lib_pypy/numpypy/core/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/core/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -149,6 +149,7 @@ [5, 6]]) """ + assert order == 'C' if not hasattr(a, 'reshape'): a = numpypy.array(a) return a.reshape(newshape) @@ -690,6 +691,7 @@ 1 """ + assert axis is None if not hasattr(a, 'argmax'): a = numpypy.array(a) return a.argmax() @@ -705,6 +707,7 @@ documentation. 
""" + assert axis is None if not hasattr(a, 'argmin'): a = numpypy.array(a) return a.argmin() @@ -1354,9 +1357,11 @@ -128 """ + assert dtype is None + assert out is None if not hasattr(a, "sum"): a = numpypy.array(a) - return a.sum() + return a.sum(axis=axis) def product (a, axis=None, dtype=None, out=None): @@ -1382,6 +1387,8 @@ any : equivalent function """ + assert axis is None + assert out is None if not hasattr(a, 'any'): a = numpypy.array(a) return a.any() @@ -1396,6 +1403,8 @@ numpy.all : Equivalent function; see for details. """ + assert axis is None + assert out is None if not hasattr(a, 'all'): a = numpypy.array(a) return a.all() @@ -1464,6 +1473,8 @@ (191614240, 191614240) """ + assert axis is None + assert out is None if not hasattr(a, 'any'): a = numpypy.array(a) return a.any() @@ -1526,6 +1537,8 @@ (28293632, 28293632, array([ True], dtype=bool)) """ + assert axis is None + assert out is None if not hasattr(a, 'all'): a = numpypy.array(a) return a.all() @@ -1705,6 +1718,8 @@ 4.0 """ + assert axis is None + assert out is None if not hasattr(a, "max"): a = numpypy.array(a) return a.max() @@ -1766,6 +1781,8 @@ """ # amin() is equivalent to min() + assert axis is None + assert out is None if not hasattr(a, 'min'): a = numpypy.array(a) return a.min() @@ -2214,9 +2231,11 @@ 0.55000000074505806 """ + assert dtype is None + assert out is None if not hasattr(a, "mean"): a = numpypy.array(a) - return a.mean() + return a.mean(axis=axis) def std(a, axis=None, dtype=None, out=None, ddof=0): @@ -2305,6 +2324,10 @@ 0.44999999925552653 """ + assert axis is None + assert dtype is None + assert out is None + assert ddof == 0 if not hasattr(a, "std"): a = numpypy.array(a) return a.std() @@ -2398,6 +2421,10 @@ 0.20250000000000001 """ + assert axis is None + assert dtype is None + assert out is None + assert ddof == 0 if not hasattr(a, "var"): a = numpypy.array(a) return a.var() diff --git a/pypy/module/micronumpy/test/test_numarray.py 
b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -728,7 +728,7 @@ assert d[1] == 12 def test_mean(self): - from _numpypy import array + from _numpypy import array, arange a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 @@ -738,6 +738,7 @@ assert a.mean(axis=0)[0, 0] == 35 assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() + assert (arange(10).reshape(5, 2).mean(axis=1) == [0.5, 2.5, 4.5, 6.5, 8.5]).all() def test_sum(self): from _numpypy import array @@ -1021,11 +1022,15 @@ assert a[0].tolist() == [17.1, 27.2] def test_var(self): - from _numpypy import array + from _numpypy import array, arange a = array(range(10)) assert a.var() == 8.25 a = array([5.0]) assert a.var() == 0.0 + a = arange(10).reshape(5, 2) + assert a.var() == 8.25 + #assert (a.var(0) == [8, 8]).all() + #assert (a.var(1) == [.25] * 5).all() def test_std(self): from _numpypy import array diff --git a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -18,8 +18,8 @@ from numpypy import array, arange, argmin a = arange(6).reshape((2,3)) assert argmin(a) == 0 - assert (argmin(a, axis=0) == array([0, 0, 0])).all() - assert (argmin(a, axis=1) == array([0, 0])).all() + #assert (argmin(a, axis=0) == array([0, 0, 0])).all() + #assert (argmin(a, axis=1) == array([0, 0])).all() b = arange(6) b[1] = 0 assert argmin(b) == 0 @@ -40,8 +40,8 @@ assert sum([0.5, 1.5])== 2.0 assert sum([[0, 1], [0, 5]]) == 6 # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 - # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() - # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + assert 
(sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() # If the accumulator is too small, overflow occurs: # assert ones(128, dtype=int8).sum(dtype=int8) == -128 @@ -98,20 +98,22 @@ from numpypy import array, var a = array([[1,2],[3,4]]) assert var(a) == 1.25 - # assert (np.var(a,0) == array([ 1., 1.])).all() - # assert (np.var(a,1) == array([ 0.25, 0.25])).all() + #assert (var(a,0) == array([ 1., 1.])).all() + #assert (var(a,1) == array([ 0.25, 0.25])).all() def test_std(self): from numpypy import array, std a = array([[1, 2], [3, 4]]) assert std(a) == 1.1180339887498949 - # assert (std(a, axis=0) == array([ 1., 1.])).all() - # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() + #assert (std(a, axis=0) == array([ 1., 1.])).all() + #assert (std(a, axis=1) == array([ 0.5, 0.5])).all() def test_mean(self): - from numpypy import array, mean + from numpypy import array, mean, arange assert mean(array(range(5))) == 2.0 assert mean(range(5)) == 2.0 + assert (mean(arange(10).reshape(5, 2), axis=0) == [4, 5]).all() + assert (mean(arange(10).reshape(5, 2), axis=1) == [0.5, 2.5, 4.5, 6.5, 8.5]).all() def test_reshape(self): from numpypy import arange, array, dtype, reshape From noreply at buildbot.pypy.org Sat Jan 21 13:12:07 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 13:12:07 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120121121208.0029982CB2@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51568:6a0c4cca12c4 Date: 2012-01-21 14:11 +0200 http://bitbucket.org/pypy/pypy/changeset/6a0c4cca12c4/ Log: merge diff --git a/pypy/translator/goal/translate.py b/pypy/translator/goal/translate.py --- a/pypy/translator/goal/translate.py +++ b/pypy/translator/goal/translate.py @@ -293,17 +293,18 @@ drv.exe_name = targetspec_dic['__name__'] + '-%(backend)s' # Double check to ensure we are not overwriting the current interpreter - try: - this_exe 
= py.path.local(sys.executable).new(ext='') - exe_name = drv.compute_exe_name() - samefile = this_exe.samefile(exe_name) - assert not samefile, ( - 'Output file %s is the currently running ' - 'interpreter (use --output=...)'% exe_name) - except EnvironmentError: - pass + goals = translateconfig.goals + if not goals or 'compile' in goals: + try: + this_exe = py.path.local(sys.executable).new(ext='') + exe_name = drv.compute_exe_name() + samefile = this_exe.samefile(exe_name) + assert not samefile, ( + 'Output file %s is the currently running ' + 'interpreter (use --output=...)'% exe_name) + except EnvironmentError: + pass - goals = translateconfig.goals try: drv.proceed(goals) finally: From noreply at buildbot.pypy.org Sat Jan 21 13:19:20 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 21 Jan 2012 13:19:20 +0100 (CET) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120121121920.0480B82CB2@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51569:2c98a32d1484 Date: 2012-01-20 20:53 +0100 http://bitbucket.org/pypy/pypy/changeset/2c98a32d1484/ Log: hg merge default diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,2 @@ from _numpypy import * -from .fromnumeric import * +from .core import * diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/__init__.py @@ -0,0 +1,1 @@ +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py rename from lib_pypy/numpypy/fromnumeric.py rename to lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ 
-85,7 +85,7 @@ array([4, 3, 6]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') # not deprecated --- copy if necessary, view otherwise @@ -273,7 +273,7 @@ [-1, -2, -3, -4, -5]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def repeat(a, repeats, axis=None): @@ -315,7 +315,7 @@ [3, 4]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def put(a, ind, v, mode='raise'): @@ -366,7 +366,7 @@ array([ 0, 1, 2, 3, -5]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def swapaxes(a, axis1, axis2): @@ -410,7 +410,7 @@ [3, 7]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def transpose(a, axes=None): @@ -451,8 +451,11 @@ (2, 1, 3) """ - raise NotImplemented('Waiting on interp level method') - + if axes is not None: + raise NotImplementedError('No "axes" arg yet.') + if not hasattr(a, 'T'): + a = numpypy.array(a) + return a.T def sort(a, axis=-1, kind='quicksort', order=None): """ @@ -553,7 +556,7 @@ dtype=[('name', '|S10'), ('height', '= self.size @@ -157,6 +177,8 @@ offset += self.strides[i] break else: + if i == self.dim: + first_line = True indices[i] = 0 offset -= self.backstrides[i] else: diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -2,14 +2,15 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature +from pypy.module.micronumpy import 
interp_ufuncs, interp_dtype, signature,\ + interp_boxes from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator + SkipLastAxisIterator, Chunk, ViewIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -39,7 +40,24 @@ get_printable_location=signature.new_printable_location('slice'), name='numpy_slice', ) - +count_driver = jit.JitDriver( + greens=['shapelen'], + virtualizables=['frame'], + reds=['s', 'frame', 'iter', 'arr'], + name='numpy_count' +) +filter_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['concr', 'argi', 'ri', 'frame', 'v', 'res', 'self'], + name='numpy_filter', +) +filter_set_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['idx', 'idxi', 'frame', 'arr'], + name='numpy_filterset', +) def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] @@ -270,6 +288,9 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + def _binop_right_impl(ufunc_name): def impl(self, space, w_other): w_other = scalar_w(space, @@ -287,11 +308,11 @@ descr_rmod = _binop_right_impl("mod") def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): - def impl(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def impl(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, - self, True, promote_to_largest, w_dim) + self, True, promote_to_largest, w_axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = 
_reduce_ufunc_impl("add") @@ -479,11 +500,69 @@ def _prepare_slice_args(self, space, w_idx): if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [space.decode_index4(w_idx, self.shape[0])] - return [space.decode_index4(w_item, self.shape[i]) for i, w_item in + return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] + return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] + def count_all_true(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(self) + shapelen = len(arr.shape) + s = 0 + iter = None + while not frame.done(): + count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + shapelen=shapelen) + iter = frame.get_final_iter() + s += arr.dtype.getitem_bool(arr.storage, iter.offset) + frame.next(shapelen) + return s + + def getitem_filter(self, space, arr): + concr = arr.get_concrete() + size = self.count_all_true(concr) + res = W_NDimArray(size, [size], self.find_dtype()) + ri = ArrayIterator(size) + shapelen = len(self.shape) + argi = concr.create_iter() + sig = self.find_sig() + frame = sig.create_frame(self) + v = None + while not frame.done(): + filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, + frame=frame, v=v, res=res, sig=sig, + shapelen=shapelen, self=self) + if concr.dtype.getitem_bool(concr.storage, argi.offset): + v = sig.eval(frame, self) + res.setitem(ri.offset, v) + ri = ri.next(1) + else: + ri = ri.next_no_increase(1) + argi = argi.next(shapelen) + frame.next(shapelen) + return res + + def setitem_filter(self, space, idx, val): + size = self.count_all_true(idx) + arr = SliceArray([size], self.dtype, self, val) + sig = arr.find_sig() + shapelen = len(self.shape) + frame = sig.create_frame(arr) + idxi = idx.create_iter() + while not frame.done(): + filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, + frame=frame, arr=arr, + shapelen=shapelen) + if idx.dtype.getitem_bool(idx.storage, idxi.offset): + 
sig.eval(frame, arr) + frame.next_from_second(1) + frame.next_first(shapelen) + idxi = idxi.next(shapelen) + def descr_getitem(self, space, w_idx): + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.getitem_filter(space, w_idx) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -493,6 +572,11 @@ def descr_setitem(self, space, w_idx, w_value): self.invalidated() + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.get_concrete().setitem_filter(space, + w_idx.get_concrete(), + convert_to_array(space, w_value)) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -509,9 +593,8 @@ def create_slice(self, chunks): shape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - shape.append(lgt) + for i, chunk in enumerate(chunks): + chunk.extend_shape(shape) s = i + 1 assert s >= 0 shape += self.shape[s:] @@ -569,19 +652,19 @@ ) return w_result - def descr_mean(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def descr_mean(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) w_denom = space.wrap(self.size) else: - dim = space.int_w(w_dim) + dim = space.int_w(w_axis) w_denom = space.wrap(self.shape[dim]) - return space.div(self.descr_sum_promote(space, w_dim), w_denom) + return space.div(self.descr_sum_promote(space, w_axis), w_denom) def descr_var(self, space): # var = mean((values - mean(values)) ** 2) w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) - assert isinstance(w_res, BaseArray) + assert isinstance(w_res, BaseArray) w_res = w_res.descr_pow(space, space.wrap(2)) assert isinstance(w_res, BaseArray) return w_res.descr_mean(space, space.w_None) @@ 
-590,6 +673,10 @@ # std(v) = sqrt(var(v)) return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_fill(self, space, w_value): + concr = self.get_concrete_or_scalar() + concr.fill(space, w_value) + def descr_nonzero(self, space): if self.size > 1: raise OperationError(space.w_ValueError, space.wrap( @@ -682,6 +769,9 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + def create_sig(self): return signature.ScalarSignature(self.dtype) @@ -717,8 +807,7 @@ frame=frame, ri=ri, self=self, result=result) - result.dtype.setitem(result.storage, ri.offset, - sig.eval(frame, self)) + result.setitem(ri.offset, sig.eval(frame, self)) frame.next(shapelen) ri = ri.next(shapelen) return result @@ -788,7 +877,7 @@ Intermediate class for performing binary operations. """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -828,7 +917,7 @@ def __init__(self, shape, dtype, left, right): Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) - + def create_sig(self): lsig = self.left.create_sig() rsig = self.right.create_sig() @@ -847,7 +936,7 @@ when we'll make AxisReduce lazy """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) @@ -919,14 +1008,14 @@ if size < 1: builder.append('[]') return - elif size == 1: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True ndims = len(self.shape) + if ndims == 0: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return i = 0 builder.append('[') if ndims > 1: @@ -938,7 +1027,7 @@ builder.append('\n' + 
indent) else: builder.append(indent) - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: @@ -955,7 +1044,7 @@ builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) i += 1 @@ -1061,6 +1150,9 @@ array.setslice(space, self) return array + def fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): def create_sig(self): @@ -1081,6 +1173,10 @@ parent) self.start = start + def create_iter(self): + return ViewIterator(self.start, self.strides, self.backstrides, + self.shape) + def setshape(self, space, new_shape): if len(self.shape) < 1: return @@ -1127,6 +1223,9 @@ self.shape = new_shape self.calc_strides(new_shape) + def create_iter(self): + return ArrayIterator(self.size) + def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1181,6 +1280,7 @@ arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) shapelen = len(shape) arr_iter = ArrayIterator(arr.size) + # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] dtype.setitem(arr.storage, arr_iter.offset, @@ -1247,6 +1347,9 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1273,6 +1376,8 @@ var = interp2app(BaseArray.descr_var), std = interp2app(BaseArray.descr_std), + fill = 
interp2app(BaseArray.descr_fill), + copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -249,15 +249,16 @@ class W_Ufunc2(W_Ufunc): - _immutable_fields_ = ["comparison_func", "func", "name"] + _immutable_fields_ = ["comparison_func", "func", "name", "int_only"] argcount = 2 def __init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None, comparison_func=False): + identity=None, comparison_func=False, int_only=False): W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) self.func = func self.comparison_func = comparison_func + self.int_only = int_only def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, @@ -268,6 +269,7 @@ w_rhs = convert_to_array(space, w_rhs) calc_dtype = find_binop_result_dtype(space, w_lhs.find_dtype(), w_rhs.find_dtype(), + int_only=self.int_only, promote_to_float=self.promote_to_float, promote_bools=self.promote_bools, ) @@ -304,10 +306,12 @@ def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, - promote_bools=False): + promote_bools=False, int_only=False): # dt1.num should be <= dt2.num if dt1.num > dt2.num: dt1, dt2 = dt2, dt1 + if int_only and (not dt1.is_int_type() or not dt2.is_int_type()): + raise OperationError(space.w_TypeError, space.wrap("Unsupported types")) # Some operations promote op(bool, bool) to return int8, rather than bool if promote_bools and (dt1.kind == dt2.kind == interp_dtype.BOOLLTR): return interp_dtype.get_dtype_cache(space).w_int8dtype @@ -420,6 +424,10 @@ ("add", "add", 2, {"identity": 0}), ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), + ("bitwise_and", "bitwise_and", 2, {"identity": 1, + 'int_only': True}), + ("bitwise_or", 
"bitwise_or", 2, {"identity": 0, + 'int_only': True}), ("divide", "div", 2, {"promote_bools": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), @@ -444,6 +452,7 @@ ("fabs", "fabs", 1, {"promote_to_float": True}), ("floor", "floor", 1, {"promote_to_float": True}), + ("ceil", "ceil", 1, {"promote_to_float": True}), ("exp", "exp", 1, {"promote_to_float": True}), ('sqrt', 'sqrt', 1, {'promote_to_float': True}), @@ -470,7 +479,7 @@ extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, - comparison_func=extra_kwargs.get("comparison_func", False) + comparison_func=extra_kwargs.get("comparison_func", False), ) if argcount == 1: ufunc = W_Ufunc1(func, ufunc_name, **extra_kwargs) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -82,6 +82,16 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + def get_final_iter(self): final_iter = promote(self.final_iter) if final_iter < 0: diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -10,12 +10,12 @@ rstart = start rshape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - rstrides.append(strides[i] * step) - rbackstrides.append(strides[i] * (lgt - 1) * step) - rshape.append(lgt) - rstart += strides[i] * start_ + for i, chunk in enumerate(chunks): + if chunk.step != 0: + rstrides.append(strides[i] * chunk.step) + rbackstrides.append(strides[i] * 
(chunk.lgt - 1) * chunk.step) + rshape.append(chunk.lgt) + rstart += strides[i] * chunk.start # add a reminder s = i + 1 assert s >= 0 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,14 +166,11 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) - def test_new(self): - import _numpypy as np - assert np.int_(4) == 4 - assert np.float_(3.4) == 3.4 + def test_aliases(self): + from _numpypy import dtype - def test_pow(self): - from _numpypy import int_ - assert int_(4) ** 2 == 16 + assert dtype("float") is dtype(float) + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): @@ -189,6 +186,15 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): import _numpypy as numpy @@ -318,7 +324,7 @@ else: raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, numpy.int64, '9223372036854775808') diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -2,16 +2,11 @@ class AppTestNumPyModule(BaseNumpyAppTest): - def test_mean(self): - from _numpypy import array, mean - assert mean(array(range(5))) == 2.0 - assert mean(range(5)) == 2.0 - def test_average(self): from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 - + def test_sum(self): from _numpypy import array, sum assert sum(range(10)) == 45 @@ -21,7 +16,7 @@ from _numpypy 
import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 - + def test_max(self): from _numpypy import array, max assert max(range(10)) == 9 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -2,6 +2,7 @@ import py from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement +from pypy.module.micronumpy.interp_iter import Chunk from pypy.module.micronumpy import signature from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace @@ -37,53 +38,54 @@ def test_create_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 3 assert s.strides == [10, 50] assert s.backstrides == [40, 100] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 1 assert s.strides == [2, 10, 50] assert s.backstrides == [6, 40, 100] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.strides == [3, 10] assert s.backstrides == [3, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] assert s.backstrides == [12, 2] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 15 assert s.strides == [30, 3, 1] assert s.backstrides == [90, 12, 2] - s = 
a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), + Chunk(1, 0, 0, 1)]) assert s.start == 19 assert s.shape == [2, 1] assert s.strides == [45, 3] assert s.backstrides == [45, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [50] assert s2.parent is a assert s2.backstrides == [100] assert s2.start == 35 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [3, 50] assert s2.backstrides == [3, 100] @@ -91,16 +93,16 @@ def test_slice_of_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [1] assert s2.parent is a assert s2.backstrides == [2] assert s2.start == 5 * 15 + 3 * 3 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [45, 1] assert s2.backstrides == [45, 2] @@ -108,14 +110,14 @@ def test_negative_step_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) 
assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] @@ -124,7 +126,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -134,7 +136,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -726,15 +728,16 @@ assert d[1] == 12 def test_mean(self): - from _numpypy import array, mean + from _numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 a = array(range(105)).reshape(3, 5, 7) - b = mean(a, axis=0) - b[0,0]==35. + b = a.mean(axis=0) + b[0, 0]==35. 
+ assert a.mean(axis=0)[0, 0] == 35 assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() - assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() + assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() def test_sum(self): from _numpypy import array @@ -755,6 +758,7 @@ assert array([]).sum() == 0.0 raises(ValueError, 'array([]).max()') assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() @@ -769,6 +773,8 @@ assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() def test_identity(self): from _numpypy import identity, array @@ -1298,6 +1304,56 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_array_indexing_one_elem(self): + skip("not yet") + from _numpypy import array, arange + raises(IndexError, 'arange(3)[array([3.5])]') + a = arange(3)[array([1])] + assert a == 1 + assert a[0] == 1 + raises(IndexError,'arange(3)[array([15])]') + assert arange(3)[array([-3])] == 0 + raises(IndexError,'arange(3)[array([-15])]') + assert arange(3)[array(1)] == 1 + + def test_fill(self): + from _numpypy import array + a = array([1, 2, 3]) + a.fill(10) + assert (a == [10, 10, 10]).all() + a.fill(False) + assert (a == [0, 0, 0]).all() + b = a[:1] + b.fill(4) + assert (b == [4]).all() + assert (a == [4, 0, 0]).all() + + c = b + b + c.fill(27) + assert (c == [27]).all() + + d = array(10) + d.fill(100) + assert d == 100 + + def test_array_indexing_bool(self): + from _numpypy import arange + a = arange(10) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + a = arange(10).reshape(5, 2) + assert (a[a 
> 3] == [4, 5, 6, 7, 8, 9]).all() + assert (a[a & 1 == 1] == [1, 3, 5, 7, 9]).all() + + def test_array_indexing_bool_setitem(self): + from _numpypy import arange, array + a = arange(6) + a[a > 3] = 15 + assert (a == [0, 1, 2, 3, 15, 15]).all() + a = arange(6).reshape(3, 2) + a[a & 1 == 1] = array([8, 9, 10]) + assert (a == [[0, 8], [2, 9], [4, 10]]).all() + + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): @@ -1437,9 +1493,11 @@ assert repr(a) == "array(0.0)" a = array(0.2) assert repr(a) == "array(0.2)" + a = array([2]) + assert repr(a) == "array([2])" def test_repr_multi(self): - from _numpypy import arange, zeros + from _numpypy import arange, zeros, array a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1462,6 +1520,9 @@ [498, 999], [499, 1000], [500, 1001]])''' + a = arange(2).reshape((2,1)) + assert repr(a) == '''array([[0], + [1]])''' def test_repr_slice(self): from _numpypy import array, zeros @@ -1546,18 +1607,3 @@ a = arange(0, 0.8, 0.1) assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) - - -class AppTestRanges(BaseNumpyAppTest): - def test_app_reshape(self): - from _numpypy import arange, array, dtype, reshape - a = arange(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = range(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = array(range(105)).reshape(3, 5, 7) - assert a.reshape(1, -1).shape == (1, 105) - assert a.reshape(1, 1, -1).shape == (1, 1, 105) - assert a.reshape(-1, 1, 1).shape == (105, 1, 1) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -190,14 +190,24 @@ for i in range(3): assert c[i] == a[i] - b[i] - def test_floor(self): - from _numpypy import array, floor - - reference = [-2.0, -1.0, 0.0, 1.0, 1.0] - a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) + def test_floorceil(self): + from _numpypy 
import array, floor, ceil + import math + reference = [-2.0, -2.0, -1.0, 0.0, 1.0, 1.0, 0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) b = floor(a) for i in range(5): assert b[i] == reference[i] + reference = [-1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) + b = ceil(a) + assert (reference == b).all() + inf = float("inf") + data = [1.5, 2.9999, -1.999, inf] + results = [math.floor(x) for x in data] + assert (floor(data) == results).all() + results = [math.ceil(x) for x in data] + assert (ceil(data) == results).all() def test_copysign(self): from _numpypy import array, copysign @@ -238,7 +248,7 @@ assert b[i] == math.sin(a[i]) a = sin(array([True, False], dtype=bool)) - assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise + assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise assert a[1] == 0.0 def test_cos(self): @@ -259,7 +269,6 @@ for i in range(len(a)): assert b[i] == math.tan(a[i]) - def test_arcsin(self): import math from _numpypy import array, arcsin @@ -283,7 +292,6 @@ for i in range(len(a)): assert b[i] == math.acos(a[i]) - a = array([-10, -1.5, -1.01, 1.01, 1.5, 10, float('nan'), float('inf'), float('-inf')]) b = arccos(a) for f in b: @@ -347,11 +355,20 @@ raises(ValueError, maximum.reduce, []) def test_reduceND(self): - from numpypy import add, arange + from _numpypy import add, arange a = arange(12).reshape(3, 4) assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_bitwise(self): + from _numpypy import bitwise_and, bitwise_or, arange, array + a = arange(6).reshape(2, 3) + assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() + assert (a & 1 == bitwise_and(a, 1)).all() + assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() + assert (a | 1 == bitwise_or(a, 1)).all() + raises(TypeError, 'array([1.0]) & 1') + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal diff --git 
a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -217,6 +217,7 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. + py.test.skip("too fragile") self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, 'getfield_gc_pure': 8, @@ -349,7 +350,8 @@ self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, - 'int_eq': 1, 'guard_false': 1, 'jump': 1}) + 'int_eq': 1, 'guard_false': 1, 'jump': 1, + 'arraylen_gc': 1}) def define_virtual_slice(): return """ diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -94,6 +94,9 @@ width, storage, i, offset )) + def read_bool(self, storage, width, i, offset): + raise NotImplementedError + def store(self, storage, width, i, offset, box): value = self.unbox(box) libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), @@ -168,6 +171,7 @@ @simple_binary_op def min(self, v1, v2): return min(v1, v2) + class Bool(BaseType, Primitive): T = lltype.Bool @@ -185,6 +189,11 @@ else: return self.False + + def read_bool(self, storage, width, i, offset): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + def coerce_subtype(self, space, w_subtype, w_item): # Doesn't return subclasses so it can return the constants. 
return self._coerce(space, w_item) @@ -253,6 +262,14 @@ assert v == 0 return 0 + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box @@ -374,6 +391,10 @@ return math.floor(v) @simple_unary_op + def ceil(self, v): + return math.ceil(v) + + @simple_unary_op def exp(self, v): try: return math.exp(v) @@ -436,4 +457,4 @@ class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box - format_code = "d" \ No newline at end of file + format_code = "d" diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -11,6 +11,7 @@ 'set_optimize_hook': 'interp_resop.set_optimize_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 'interp_resop.WrappedOp', + 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', } diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,5 +1,6 @@ -from pypy.interpreter.typedef import TypeDef, GetSetProperty +from pypy.interpreter.typedef import (TypeDef, GetSetProperty, + interp_attrproperty, interp_attrproperty_w) from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode @@ -10,6 +11,7 @@ from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant from pypy.rlib import jit_hooks +from pypy.module.pypyjit.interp_jit import pypyjitdriver class Cache(object): in_recursion = False @@ -111,13 +113,24 @@ def wrap_oplist(space, logops, operations, ops_offset=None): l_w = [] + jitdrivers_sd = logops.metainterp_sd.jitdrivers_sd for op in 
operations: if ops_offset is None: ofs = -1 else: ofs = ops_offset.get(op, 0) - l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, - logops.repr_of_resop(op))) + if op.opnum == rop.DEBUG_MERGE_POINT: + jd_sd = jitdrivers_sd[op.getarg(0).getint()] + greenkey = op.getarglist()[2:] + repr = jd_sd.warmstate.get_location_str(greenkey) + w_greenkey = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) + l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), + logops.repr_of_resop(op), + jd_sd.jitdriver.name, + w_greenkey)) + else: + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) return l_w class WrappedBox(Wrappable): @@ -150,6 +163,15 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) +@unwrap_spec(repr=str, jd_name=str) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + num = rop.DEBUG_MERGE_POINT + return DebugMergePoint(space, + jit_hooks.resop_new(num, args, jit_hooks.emptyval()), + repr, jd_name, w_greenkey) + class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely """ @@ -182,6 +204,25 @@ box = space.interp_w(WrappedBox, w_box) jit_hooks.resop_setresult(self.op, box.llbox) +class DebugMergePoint(WrappedOp): + def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + WrappedOp.__init__(self, op, -1, repr_of_resop) + self.w_greenkey = w_greenkey + self.jd_name = jd_name + + def get_pycode(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(0)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_bytecode_no(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(1)) + raise OperationError(space.w_AttributeError, space.wrap("This
DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_jitdriver_name(self, space): + return space.wrap(self.jd_name) + WrappedOp.typedef = TypeDef( 'ResOperation', __doc__ = WrappedOp.__doc__, @@ -195,3 +236,15 @@ WrappedOp.descr_setresult) ) WrappedOp.acceptable_as_base_class = False + +DebugMergePoint.typedef = TypeDef( + 'DebugMergePoint', WrappedOp.typedef, + __new__ = interp2app(descr_new_dmp), + greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), + pycode = GetSetProperty(DebugMergePoint.get_pycode), + bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), +) +DebugMergePoint.acceptable_as_base_class = False + + diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -127,7 +127,7 @@ 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', '_codecs']: if modname == 'pypyjit' and 'interp_resop' in rest: return False return True diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -92,6 +92,7 @@ cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_dmp_num = space.wrap(rop.DEBUG_MERGE_POINT) cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) cls.orig_oplist = oplist @@ -117,6 +118,10 @@ assert elem[2][2] == False assert len(elem[3]) == 4 int_add = elem[3][0] + dmp = elem[3][1] + assert isinstance(dmp, pypyjit.DebugMergePoint) + assert dmp.pycode is self.f.func_code + assert dmp.greenkey == (self.f.func_code, 0, False) #assert 
int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() @@ -211,3 +216,18 @@ assert op.getarg(0).getint() == 4 op.result = box assert op.result.getint() == 1 + + def test_creation_dmp(self): + from pypyjit import DebugMergePoint, Box + + def f(): + pass + + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + assert op.bytecode_no == 0 + assert op.pycode is f.func_code + assert repr(op) == 'repr' + assert op.jitdriver_name == 'pypyjit' + assert op.num == self.dmp_num + op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + raises(AttributeError, 'op.pycode') diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (3, 2, 2, "final", 0) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py rename from lib_pypy/numpypy/test/test_fromnumeric.py rename to pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/lib_pypy/numpypy/test/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -1,7 +1,7 @@ - from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -class AppTestFromNumeric(BaseNumpyAppTest): + +class AppTestFromNumeric(BaseNumpyAppTest): def test_argmax(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, argmax @@ -18,12 +18,12 @@ from numpypy import array, arange, argmin a = arange(6).reshape((2,3)) assert argmin(a) == 0 - # assert (argmax(a, axis=0) == array([0, 0, 0])).all() - # assert (argmax(a, axis=1) == array([0, 0])).all() + 
assert (argmin(a, axis=0) == array([0, 0, 0])).all() + assert (argmin(a, axis=1) == array([0, 0])).all() b = arange(6) b[1] = 0 assert argmin(b) == 0 - + def test_shape(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, identity, shape @@ -44,7 +44,7 @@ # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() # If the accumulator is too small, overflow occurs: # assert ones(128, dtype=int8).sum(dtype=int8) == -128 - + def test_amin(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, amin @@ -86,14 +86,14 @@ assert ndim([[1,2,3],[4,5,6]]) == 2 assert ndim(array([[1,2,3],[4,5,6]])) == 2 assert ndim(1) == 0 - + def test_rank(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, rank assert rank([[1,2,3],[4,5,6]]) == 2 assert rank(array([[1,2,3],[4,5,6]])) == 2 assert rank(1) == 0 - + def test_var(self): from numpypy import array, var a = array([[1,2],[3,4]]) @@ -107,3 +107,31 @@ assert std(a) == 1.1180339887498949 # assert (std(a, axis=0) == array([ 1., 1.])).all() # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() + + def test_mean(self): + from numpypy import array, mean + assert mean(array(range(5))) == 2.0 + assert mean(range(5)) == 2.0 + + def test_reshape(self): + from numpypy import arange, array, dtype, reshape + a = arange(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = range(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert reshape(a, (1, -1)).shape == (1, 105) + assert reshape(a, (1, 1, -1)).shape == (1, 1, 105) + assert reshape(a, (-1, 1, 1)).shape == (105, 1, 1) + + def test_transpose(self): + from numpypy import arange, array, transpose, ones + x = arange(4).reshape((2,2)) + assert (transpose(x) == array([[0, 2],[1, 3]])).all() + # Once axes argument is implemented, add more tests + raises(NotImplementedError, "transpose(x, axes=(1, 0, 2))") + # x = ones((1, 2, 
3)) + # assert transpose(x, (1, 0, 2)).shape == (2, 1, 3) + diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -29,6 +29,8 @@ assert type(a) == type(b) check(', '.join(['a']), 'a') raises(TypeError, ','.join, [b'a']) + exc = raises(TypeError, ''.join, ['a', 2, 3]) + assert 'sequence item 1' in str(exc.value) def test_contains(self): assert '' in 'abc' diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -129,7 +129,7 @@ return space.wrap(l[0]) return space.wrap(w_self._value.join(l)) - list_w = space.unpackiterable(w_list) + list_w = space.listview(w_list) size = len(list_w) if size == 0: @@ -144,17 +144,23 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - sb = UnicodeBuilder() + prealloc_size = len(self) * (size - 1) + for i in range(size): + w_s = list_w[i] + try: + prealloc_size += len(space.unicode_w(w_s)) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + raise operationerrfmt(space.w_TypeError, + "sequence item %d: expected string, %s " + "found", i, space.type(w_s).getname(space)) + sb = UnicodeBuilder(prealloc_size) for i in range(size): if self and i != 0: sb.append(self) w_s = list_w[i] - if not isinstance(w_s, W_UnicodeObject): - raise operationerrfmt( - space.w_TypeError, - "sequence item %d: expected string, %s " - "found", i, space.type(w_s).getname(space)) - sb.append(w_s._value) + sb.append(space.unicode_w(w_s)) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -52,7 +52,10 @@ from pypy.jit.metainterp.history import ResOperation args = [_cast_to_box(llargs[i]) for i in
range(len(llargs))] - res = _cast_to_box(llres) + if llres: + res = _cast_to_box(llres) + else: + res = None return _cast_to_gcref(ResOperation(no, args, res)) @register_helper(annmodel.SomePtr(llmemory.GCREF)) diff --git a/pypy/rlib/longlong2float.py b/pypy/rlib/longlong2float.py --- a/pypy/rlib/longlong2float.py +++ b/pypy/rlib/longlong2float.py @@ -79,19 +79,23 @@ longlong2float = rffi.llexternal( "pypy__longlong2float", [rffi.LONGLONG], rffi.DOUBLE, _callable=longlong2float_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__longlong2float") float2longlong = rffi.llexternal( "pypy__float2longlong", [rffi.DOUBLE], rffi.LONGLONG, _callable=float2longlong_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__float2longlong") uint2singlefloat = rffi.llexternal( "pypy__uint2singlefloat", [rffi.UINT], rffi.FLOAT, _callable=uint2singlefloat_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__uint2singlefloat") singlefloat2uint = rffi.llexternal( "pypy__singlefloat2uint", [rffi.FLOAT], rffi.UINT, _callable=singlefloat2uint_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__singlefloat2uint") diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -424,7 +424,7 @@ vobj.concretetype.TO._gckind == 'gc') else: from pypy.rpython.ootypesystem import ootype - ok = isinstance(vobj.concretetype, ootype.Instance) + ok = isinstance(vobj.concretetype, (ootype.Instance, ootype.BuiltinType)) if not ok: from 
pypy.rpython.error import TyperError raise TyperError("compute_unique_id() cannot be applied to" diff --git a/pypy/rpython/lltypesystem/rclass.py b/pypy/rpython/lltypesystem/rclass.py --- a/pypy/rpython/lltypesystem/rclass.py +++ b/pypy/rpython/lltypesystem/rclass.py @@ -510,7 +510,13 @@ ctype = inputconst(Void, self.object_type) cflags = inputconst(Void, flags) vlist = [ctype, cflags] - vptr = llops.genop('malloc', vlist, + cnonmovable = self.classdef.classdesc.read_attribute( + '_alloc_nonmovable_', Constant(False)) + if cnonmovable.value: + opname = 'malloc_nonmovable' + else: + opname = 'malloc' + vptr = llops.genop(opname, vlist, resulttype = Ptr(self.object_type)) ctypeptr = inputconst(CLASSTYPE, self.rclass.getvtable()) self.setfield(vptr, '__class__', ctypeptr, llops) diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -512,6 +512,7 @@ "ll_append_char": Meth([CHARTP], Void), "ll_append": Meth([STRINGTP], Void), "ll_build": Meth([], STRINGTP), + "ll_getlength": Meth([], Signed), }) self._setup_methods({}) @@ -1376,6 +1377,9 @@ def _cast_to_object(self): return make_object(self) + def _identityhash(self): + return object.__hash__(self) + class _string(_builtin_type): def __init__(self, STRING, value = ''): @@ -1543,6 +1547,9 @@ else: return make_unicode(u''.join(self._buf)) + def ll_getlength(self): + return self.ll_build().ll_strlen() + class _null_string_builder(_null_mixin(_string_builder), _string_builder): def __init__(self, STRING_BUILDER): self.__dict__["_TYPE"] = STRING_BUILDER diff --git a/pypy/rpython/ootypesystem/rbuilder.py b/pypy/rpython/ootypesystem/rbuilder.py --- a/pypy/rpython/ootypesystem/rbuilder.py +++ b/pypy/rpython/ootypesystem/rbuilder.py @@ -21,6 +21,10 @@ builder.ll_append_char(char) @staticmethod + def ll_getlength(builder): + return builder.ll_getlength() + + @staticmethod def ll_append(builder, string): 
builder.ll_append(string) diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -124,9 +124,5 @@ pass class TestOOtype(BaseTestStringBuilder, OORtypeMixin): - def test_string_getlength(self): - py.test.skip("getlength(): not implemented on ootype") - def test_unicode_getlength(self): - py.test.skip("getlength(): not implemented on ootype") def test_append_charpsize(self): py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -463,6 +463,31 @@ assert x1 == intmask(x0) assert x3 == intmask(x2) + def test_id_on_builtins(self): + from pypy.rlib.objectmodel import compute_unique_id + from pypy.rlib.rstring import StringBuilder, UnicodeBuilder + def fn(): + return (compute_unique_id("foo"), + compute_unique_id(u"bar"), + compute_unique_id([1]), + compute_unique_id({"foo": 3}), + compute_unique_id(StringBuilder()), + compute_unique_id(UnicodeBuilder())) + res = self.interpret(fn, []) + for id in self.ll_unpack_tuple(res, 6): + assert isinstance(id, (int, r_longlong)) + + def test_uniqueness_of_id_on_strings(self): + from pypy.rlib.objectmodel import compute_unique_id + def fn(s1, s2): + return (compute_unique_id(s1), compute_unique_id(s2)) + + s1 = "foo" + s2 = ''.join(['f','oo']) + res = self.interpret(fn, [self.string_to_ll(s1), self.string_to_ll(s2)]) + i1, i2 = self.ll_unpack_tuple(res, 2) + assert i1 != i2 + def test_cast_primitive(self): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): diff --git a/pypy/rpython/test/test_rclass.py b/pypy/rpython/test/test_rclass.py --- a/pypy/rpython/test/test_rclass.py +++ b/pypy/rpython/test/test_rclass.py @@ -1130,6 +1130,18 @@ assert sorted([u]) == [6] # 32-bit types assert sorted([i, r, d, l]) == [2, 3, 4, 5] # 
64-bit types + def test_nonmovable(self): + for (nonmovable, opname) in [(True, 'malloc_nonmovable'), + (False, 'malloc')]: + class A(object): + _alloc_nonmovable_ = nonmovable + def f(): + return A() + t, typer, graph = self.gengraph(f, []) + assert summary(graph) == {opname: 1, + 'cast_pointer': 1, + 'setfield': 1} + class TestOOtype(BaseTestRclass, OORtypeMixin): diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -140,13 +140,15 @@ bytecode_name = None is_bytecode = True inline_level = None + has_dmp = False def parse_code_data(self, arg): m = re.search('\w]+)[\.,] file \'(.+?)\'[\.,] line (\d+)> #(\d+) (\w+)', arg) if m is None: # a non-code loop, like StrLiteralSearch or something - self.bytecode_name = arg + if arg: + self.bytecode_name = arg else: self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups() self.startlineno = int(lineno) @@ -218,7 +220,7 @@ self.inputargs = inputargs self.chunks = chunks for chunk in self.chunks: - if chunk.filename is not None: + if chunk.bytecode_name is not None: self.startlineno = chunk.startlineno self.filename = chunk.filename self.name = chunk.name diff --git a/pypy/tool/jitlogparser/test/test_parser.py b/pypy/tool/jitlogparser/test/test_parser.py --- a/pypy/tool/jitlogparser/test/test_parser.py +++ b/pypy/tool/jitlogparser/test/test_parser.py @@ -283,3 +283,13 @@ assert loops[-1].count == 1234 assert loops[1].count == 123 assert loops[2].count == 12 + +def test_parse_nonpython(): + loop = parse(""" + [] + debug_merge_point(0, 'random') + debug_merge_point(0, ' #15 COMPARE_OP') + """) + f = Function.from_operations(loop.operations, LoopStorage()) + assert f.chunks[-1].filename == 'x.py' + assert f.filename is None diff --git a/pypy/tool/test/test_version.py b/pypy/tool/test/test_version.py --- a/pypy/tool/test/test_version.py +++ b/pypy/tool/test/test_version.py @@ -1,6 +1,22 @@ import os, 
sys
import py
-from pypy.tool.version import get_repo_version_info
+from pypy.tool.version import get_repo_version_info, _get_hg_archive_version
+
+def test_hg_archival_version(tmpdir):
+    def version_for(name, **kw):
+        path = tmpdir.join(name)
+        path.write('\n'.join('%s: %s' % x for x in kw.items()))
+        return _get_hg_archive_version(str(path))
+
+    assert version_for('release',
+                       tag='release-123',
+                       node='000',
+                       ) == ('PyPy', 'release-123', '000')
+    assert version_for('somebranch',
+                       node='000',
+                       branch='something',
+                       ) == ('PyPy', 'something', '000')
+
 def test_get_repo_version_info():
     assert get_repo_version_info(None)
diff --git a/pypy/tool/version.py b/pypy/tool/version.py
--- a/pypy/tool/version.py
+++ b/pypy/tool/version.py
@@ -3,111 +3,139 @@
 from subprocess import Popen, PIPE
 import pypy
 pypydir = os.path.dirname(os.path.abspath(pypy.__file__))
+pypyroot = os.path.dirname(pypydir)
+default_retval = 'PyPy', '?', '?'
+
+def maywarn(err, repo_type='Mercurial'):
+    if not err:
+        return
+
+    from pypy.tool.ansi_print import ansi_log
+    log = py.log.Producer("version")
+    py.log.setconsumer("version", ansi_log)
+    log.WARNING('Errors getting %s information: %s' % (repo_type, err))

 def get_repo_version_info(hgexe=None):
     '''Obtain version information by invoking the 'hg' or 'git' commands.'''
-    # TODO: support extracting from .hg_archival.txt
-
-    default_retval = 'PyPy', '?', '?'
-    pypyroot = os.path.abspath(os.path.join(pypydir, '..'))
-
-    def maywarn(err, repo_type='Mercurial'):
-        if not err:
-            return
-
-        from pypy.tool.ansi_print import ansi_log
-        log = py.log.Producer("version")
-        py.log.setconsumer("version", ansi_log)
-        log.WARNING('Errors getting %s information: %s' % (repo_type, err))
     # Try to see if we can get info from Git if hgexe is not specified.
if not hgexe: if os.path.isdir(os.path.join(pypyroot, '.git')): - gitexe = py.path.local.sysfind('git') - if gitexe: - try: - p = Popen( - [str(gitexe), 'rev-parse', 'HEAD'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - except OSError, e: - maywarn(e, 'Git') - return default_retval - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return default_retval - revision_id = p.stdout.read().strip()[:12] - p = Popen( - [str(gitexe), 'describe', '--tags', '--exact-match'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - if p.wait() != 0: - p = Popen( - [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, - cwd=pypyroot - ) - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return 'PyPy', '?', revision_id - branch = '?' - for line in p.stdout.read().strip().split('\n'): - if line.startswith('* '): - branch = line[1:].strip() - if branch == '(no branch)': - branch = '?' - break - return 'PyPy', branch, revision_id - return 'PyPy', p.stdout.read().strip(), revision_id + return _get_git_version() # Fallback to trying Mercurial. if hgexe is None: hgexe = py.path.local.sysfind('hg') - if not os.path.isdir(os.path.join(pypyroot, '.hg')): + if os.path.isfile(os.path.join(pypyroot, '.hg_archival.txt')): + return _get_hg_archive_version(os.path.join(pypyroot, '.hg_archival.txt')) + elif not os.path.isdir(os.path.join(pypyroot, '.hg')): maywarn('Not running from a Mercurial repository!') return default_retval elif not hgexe: maywarn('Cannot find Mercurial command!') return default_retval else: - env = dict(os.environ) - # get Mercurial into scripting mode - env['HGPLAIN'] = '1' - # disable user configuration, extensions, etc. 
- env['HGRCPATH'] = os.devnull + return _get_hg_version(hgexe) - try: - p = Popen([str(hgexe), 'version', '-q'], - stdout=PIPE, stderr=PIPE, env=env) - except OSError, e: - maywarn(e) - return default_retval - if not p.stdout.read().startswith('Mercurial Distributed SCM'): - maywarn('command does not identify itself as Mercurial') - return default_retval +def _get_hg_version(hgexe): + env = dict(os.environ) + # get Mercurial into scripting mode + env['HGPLAIN'] = '1' + # disable user configuration, extensions, etc. + env['HGRCPATH'] = os.devnull - p = Popen([str(hgexe), 'id', '-i', pypyroot], + try: + p = Popen([str(hgexe), 'version', '-q'], stdout=PIPE, stderr=PIPE, env=env) - hgid = p.stdout.read().strip() + except OSError, e: + maywarn(e) + return default_retval + + if not p.stdout.read().startswith('Mercurial Distributed SCM'): + maywarn('command does not identify itself as Mercurial') + return default_retval + + p = Popen([str(hgexe), 'id', '-i', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgid = p.stdout.read().strip() + maywarn(p.stderr.read()) + if p.wait() != 0: + hgid = '?' 
+ + p = Popen([str(hgexe), 'id', '-t', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] + maywarn(p.stderr.read()) + if p.wait() != 0: + hgtags = ['?'] + + if hgtags: + return 'PyPy', hgtags[0], hgid + else: + # use the branch instead + p = Popen([str(hgexe), 'id', '-b', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgbranch = p.stdout.read().strip() maywarn(p.stderr.read()) + + return 'PyPy', hgbranch, hgid + + +def _get_hg_archive_version(path): + fp = open(path) + try: + data = dict(x.split(': ', 1) for x in fp.read().splitlines()) + finally: + fp.close() + if 'tag' in data: + return 'PyPy', data['tag'], data['node'] + else: + return 'PyPy', data['branch'], data['node'] + + +def _get_git_version(): + #XXX: this function is a untested hack, + # so the git mirror tav made will work + gitexe = py.path.local.sysfind('git') + if not gitexe: + return default_retval + + try: + p = Popen( + [str(gitexe), 'rev-parse', 'HEAD'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + except OSError, e: + maywarn(e, 'Git') + return default_retval + if p.wait() != 0: + maywarn(p.stderr.read(), 'Git') + return default_retval + revision_id = p.stdout.read().strip()[:12] + p = Popen( + [str(gitexe), 'describe', '--tags', '--exact-match'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + if p.wait() != 0: + p = Popen( + [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, + cwd=pypyroot + ) if p.wait() != 0: - hgid = '?' + maywarn(p.stderr.read(), 'Git') + return 'PyPy', '?', revision_id + branch = '?' + for line in p.stdout.read().strip().split('\n'): + if line.startswith('* '): + branch = line[1:].strip() + if branch == '(no branch)': + branch = '?' 
+ break + return 'PyPy', branch, revision_id + return 'PyPy', p.stdout.read().strip(), revision_id - p = Popen([str(hgexe), 'id', '-t', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] - maywarn(p.stderr.read()) - if p.wait() != 0: - hgtags = ['?'] - if hgtags: - return 'PyPy', hgtags[0], hgid - else: - # use the branch instead - p = Popen([str(hgexe), 'id', '-b', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgbranch = p.stdout.read().strip() - maywarn(p.stderr.read()) - - return 'PyPy', hgbranch, hgid +if __name__ == '__main__': + print get_repo_version_info() diff --git a/pypy/translator/jvm/builtin.py b/pypy/translator/jvm/builtin.py --- a/pypy/translator/jvm/builtin.py +++ b/pypy/translator/jvm/builtin.py @@ -84,6 +84,9 @@ (ootype.StringBuilder.__class__, "ll_build"): jvm.Method.v(jStringBuilder, "toString", (), jString), + (ootype.StringBuilder.__class__, "ll_getlength"): + jvm.Method.v(jStringBuilder, "length", (), jInt), + (ootype.String.__class__, "ll_hash"): jvm.Method.v(jString, "hashCode", (), jInt), diff --git a/pypy/translator/jvm/database.py b/pypy/translator/jvm/database.py --- a/pypy/translator/jvm/database.py +++ b/pypy/translator/jvm/database.py @@ -358,7 +358,7 @@ ootype.Unsigned:jvm.PYPYSERIALIZEUINT, ootype.SignedLongLong:jvm.LONGTOSTRINGL, ootype.UnsignedLongLong: jvm.PYPYSERIALIZEULONG, - ootype.Float:jvm.DOUBLETOSTRINGD, + ootype.Float:jvm.PYPYSERIALIZEDOUBLE, ootype.Bool:jvm.PYPYSERIALIZEBOOLEAN, ootype.Void:jvm.PYPYSERIALIZEVOID, ootype.Char:jvm.PYPYESCAPEDCHAR, diff --git a/pypy/translator/jvm/metavm.py b/pypy/translator/jvm/metavm.py --- a/pypy/translator/jvm/metavm.py +++ b/pypy/translator/jvm/metavm.py @@ -92,6 +92,7 @@ CASTS = { # FROM TO (ootype.Signed, ootype.UnsignedLongLong): jvm.I2L, + (ootype.Unsigned, ootype.UnsignedLongLong): jvm.I2L, (ootype.SignedLongLong, ootype.Signed): jvm.L2I, (ootype.UnsignedLongLong, ootype.Unsigned): jvm.L2I, 
(ootype.UnsignedLongLong, ootype.Signed): jvm.L2I, diff --git a/pypy/translator/jvm/opcodes.py b/pypy/translator/jvm/opcodes.py --- a/pypy/translator/jvm/opcodes.py +++ b/pypy/translator/jvm/opcodes.py @@ -101,6 +101,7 @@ 'jit_force_virtualizable': Ignore, 'jit_force_virtual': DoNothing, 'jit_force_quasi_immutable': Ignore, + 'jit_is_virtual': PushPrimitive(ootype.Bool, False), 'debug_assert': [], # TODO: implement? 'debug_start_traceback': Ignore, diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -283,6 +283,14 @@ } } + public double pypy__longlong2float(long l) { + return Double.longBitsToDouble(l); + } + + public long pypy__float2longlong(double d) { + return Double.doubleToRawLongBits(d); + } + public double ooparse_float(String s) { try { return Double.parseDouble(s); @@ -353,6 +361,19 @@ return "False"; } + public static String serialize_double(double d) { + if (Double.isNaN(d)) { + return "float(\"nan\")"; + } else if (Double.isInfinite(d)) { + if (d > 0) + return "float(\"inf\")"; + else + return "float(\"-inf\")"; + } else { + return Double.toString(d); + } + } + private static String format_char(char c) { String res = "\\x"; if (c <= 0x0F) res = res + "0"; diff --git a/pypy/translator/jvm/test/runtest.py b/pypy/translator/jvm/test/runtest.py --- a/pypy/translator/jvm/test/runtest.py +++ b/pypy/translator/jvm/test/runtest.py @@ -56,6 +56,7 @@ # CLI could-be duplicate class JvmGeneratedSourceWrapper(object): + def __init__(self, gensrc): """ gensrc is an instance of JvmGeneratedSource """ self.gensrc = gensrc diff --git a/pypy/translator/jvm/test/test_builder.py b/pypy/translator/jvm/test/test_builder.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_builder.py @@ -0,0 +1,7 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rpython.test.test_rbuilder import BaseTestStringBuilder 
+import py
+
+class TestJvmStringBuilder(JvmTest, BaseTestStringBuilder):
+    def test_append_charpsize(self):
+        py.test.skip("append_charpsize(): not implemented on ootype")
diff --git a/pypy/translator/jvm/test/test_longlong2float.py b/pypy/translator/jvm/test/test_longlong2float.py
new file mode 100644
--- /dev/null
+++ b/pypy/translator/jvm/test/test_longlong2float.py
@@ -0,0 +1,20 @@
+from pypy.translator.jvm.test.runtest import JvmTest
+from pypy.rlib.longlong2float import *
+from pypy.rlib.test.test_longlong2float import enum_floats
+from pypy.rlib.test.test_longlong2float import fn as float2longlong2float
+import py
+
+class TestLongLong2Float(JvmTest):
+
+    def test_float2longlong_and_longlong2float(self):
+        def func(f):
+            return float2longlong2float(f)
+
+        for f in enum_floats():
+            assert repr(f) == repr(self.interpret(func, [f]))
+
+    def test_uint2singlefloat(self):
+        py.test.skip("uint2singlefloat is not implemented in ootype")
+
+    def test_singlefloat2uint(self):
+        py.test.skip("singlefloat2uint is not implemented in ootype")
diff --git a/pypy/translator/jvm/typesystem.py b/pypy/translator/jvm/typesystem.py
--- a/pypy/translator/jvm/typesystem.py
+++ b/pypy/translator/jvm/typesystem.py
@@ -955,6 +955,7 @@
 PYPYSERIALIZEUINT = Method.s(jPyPy, 'serialize_uint', (jInt,), jString)
 PYPYSERIALIZEULONG = Method.s(jPyPy, 'serialize_ulonglong', (jLong,),jString)
 PYPYSERIALIZEVOID = Method.s(jPyPy, 'serialize_void', (), jString)
+PYPYSERIALIZEDOUBLE = Method.s(jPyPy, 'serialize_double', (jDouble,), jString)
 PYPYESCAPEDCHAR = Method.s(jPyPy, 'escaped_char', (jChar,), jString)
 PYPYESCAPEDUNICHAR = Method.s(jPyPy, 'escaped_unichar', (jChar,), jString)
 PYPYESCAPEDSTRING = Method.s(jPyPy, 'escaped_string', (jString,), jString)
diff --git a/pypy/translator/oosupport/test_template/cast.py b/pypy/translator/oosupport/test_template/cast.py
--- a/pypy/translator/oosupport/test_template/cast.py
+++ b/pypy/translator/oosupport/test_template/cast.py
@@ -13,6 +13,9 @@
 def to_longlong(x):
     return r_longlong(x)
+def to_ulonglong(x):
+    return r_ulonglong(x)
+
 def uint_to_int(x):
     return intmask(x)
@@ -56,6 +59,9 @@
     def test_unsignedlonglong_to_unsigned4(self):
         self.check(to_uint, [r_ulonglong(18446744073709551615l)]) # max 64 bit num
+    def test_unsigned_to_usignedlonglong(self):
+        self.check(to_ulonglong, [r_uint(42)])
+
     def test_uint_to_int(self):
         self.check(uint_to_int, [r_uint(sys.maxint+1)])

From noreply at buildbot.pypy.org Sat Jan 21 13:19:21 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sat, 21 Jan 2012 13:19:21 +0100 (CET)
Subject: [pypy-commit] pypy py3k: pyrepl has a lot of SyntaxErrors at the moment.
Message-ID: <20120121121921.3CA8482CB2@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r51570:e613284ce7f5
Date: 2012-01-21 11:22 +0100
http://bitbucket.org/pypy/pypy/changeset/e613284ce7f5/

Log:	pyrepl has a lot of SyntaxErrors at the moment. gracefully raise an
	ImportError in readline.py

diff --git a/lib_pypy/readline.py b/lib_pypy/readline.py
--- a/lib_pypy/readline.py
+++ b/lib_pypy/readline.py
@@ -6,4 +6,7 @@
 are only stubs at the moment.
""" -from pyrepl.readline import * +try: + from pyrepl.readline import * +except SyntaxError: + raise ImportError From noreply at buildbot.pypy.org Sat Jan 21 13:19:22 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 21 Jan 2012 13:19:22 +0100 (CET) Subject: [pypy-commit] pypy py3k: Merge heads Message-ID: <20120121121922.E00CE82CB2@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51571:af3dee783792 Date: 2012-01-21 11:36 +0100 http://bitbucket.org/pypy/pypy/changeset/af3dee783792/ Log: Merge heads diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,2 @@ from _numpypy import * -from .fromnumeric import * +from .core import * diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/__init__.py @@ -0,0 +1,1 @@ +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py rename from lib_pypy/numpypy/fromnumeric.py rename to lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -85,7 +85,7 @@ array([4, 3, 6]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') # not deprecated --- copy if necessary, view otherwise @@ -273,7 +273,7 @@ [-1, -2, -3, -4, -5]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def repeat(a, repeats, axis=None): @@ -315,7 +315,7 @@ [3, 4]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def put(a, ind, v, 
mode='raise'): @@ -366,7 +366,7 @@ array([ 0, 1, 2, 3, -5]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def swapaxes(a, axis1, axis2): @@ -410,7 +410,7 @@ [3, 7]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def transpose(a, axes=None): @@ -451,8 +451,11 @@ (2, 1, 3) """ - raise NotImplemented('Waiting on interp level method') - + if axes is not None: + raise NotImplementedError('No "axes" arg yet.') + if not hasattr(a, 'T'): + a = numpypy.array(a) + return a.T def sort(a, axis=-1, kind='quicksort', order=None): """ @@ -553,7 +556,7 @@ dtype=[('name', '|S10'), ('height', '= self.size @@ -157,6 +177,8 @@ offset += self.strides[i] break else: + if i == self.dim: + first_line = True indices[i] = 0 offset -= self.backstrides[i] else: diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -2,14 +2,15 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature +from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature,\ + interp_boxes from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator + SkipLastAxisIterator, Chunk, ViewIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -39,7 +40,24 @@ 
get_printable_location=signature.new_printable_location('slice'), name='numpy_slice', ) - +count_driver = jit.JitDriver( + greens=['shapelen'], + virtualizables=['frame'], + reds=['s', 'frame', 'iter', 'arr'], + name='numpy_count' +) +filter_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['concr', 'argi', 'ri', 'frame', 'v', 'res', 'self'], + name='numpy_filter', +) +filter_set_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['idx', 'idxi', 'frame', 'arr'], + name='numpy_filterset', +) def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] @@ -270,6 +288,9 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + def _binop_right_impl(ufunc_name): def impl(self, space, w_other): w_other = scalar_w(space, @@ -287,11 +308,11 @@ descr_rmod = _binop_right_impl("mod") def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): - def impl(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def impl(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, - self, True, promote_to_largest, w_dim) + self, True, promote_to_largest, w_axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") @@ -479,11 +500,69 @@ def _prepare_slice_args(self, space, w_idx): if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [space.decode_index4(w_idx, self.shape[0])] - return [space.decode_index4(w_item, self.shape[i]) for i, w_item in + return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] + return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] + def count_all_true(self, arr): + sig = 
arr.find_sig() + frame = sig.create_frame(self) + shapelen = len(arr.shape) + s = 0 + iter = None + while not frame.done(): + count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + shapelen=shapelen) + iter = frame.get_final_iter() + s += arr.dtype.getitem_bool(arr.storage, iter.offset) + frame.next(shapelen) + return s + + def getitem_filter(self, space, arr): + concr = arr.get_concrete() + size = self.count_all_true(concr) + res = W_NDimArray(size, [size], self.find_dtype()) + ri = ArrayIterator(size) + shapelen = len(self.shape) + argi = concr.create_iter() + sig = self.find_sig() + frame = sig.create_frame(self) + v = None + while not frame.done(): + filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, + frame=frame, v=v, res=res, sig=sig, + shapelen=shapelen, self=self) + if concr.dtype.getitem_bool(concr.storage, argi.offset): + v = sig.eval(frame, self) + res.setitem(ri.offset, v) + ri = ri.next(1) + else: + ri = ri.next_no_increase(1) + argi = argi.next(shapelen) + frame.next(shapelen) + return res + + def setitem_filter(self, space, idx, val): + size = self.count_all_true(idx) + arr = SliceArray([size], self.dtype, self, val) + sig = arr.find_sig() + shapelen = len(self.shape) + frame = sig.create_frame(arr) + idxi = idx.create_iter() + while not frame.done(): + filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, + frame=frame, arr=arr, + shapelen=shapelen) + if idx.dtype.getitem_bool(idx.storage, idxi.offset): + sig.eval(frame, arr) + frame.next_from_second(1) + frame.next_first(shapelen) + idxi = idxi.next(shapelen) + def descr_getitem(self, space, w_idx): + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.getitem_filter(space, w_idx) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -493,6 +572,11 @@ def descr_setitem(self, space, w_idx, w_value): self.invalidated() + if 
(isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.get_concrete().setitem_filter(space, + w_idx.get_concrete(), + convert_to_array(space, w_value)) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -509,9 +593,8 @@ def create_slice(self, chunks): shape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - shape.append(lgt) + for i, chunk in enumerate(chunks): + chunk.extend_shape(shape) s = i + 1 assert s >= 0 shape += self.shape[s:] @@ -569,19 +652,19 @@ ) return w_result - def descr_mean(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def descr_mean(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) w_denom = space.wrap(self.size) else: - dim = space.int_w(w_dim) + dim = space.int_w(w_axis) w_denom = space.wrap(self.shape[dim]) - return space.div(self.descr_sum_promote(space, w_dim), w_denom) + return space.div(self.descr_sum_promote(space, w_axis), w_denom) def descr_var(self, space): # var = mean((values - mean(values)) ** 2) w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) - assert isinstance(w_res, BaseArray) + assert isinstance(w_res, BaseArray) w_res = w_res.descr_pow(space, space.wrap(2)) assert isinstance(w_res, BaseArray) return w_res.descr_mean(space, space.w_None) @@ -590,6 +673,10 @@ # std(v) = sqrt(var(v)) return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_fill(self, space, w_value): + concr = self.get_concrete_or_scalar() + concr.fill(space, w_value) + def descr_nonzero(self, space): if self.size > 1: raise OperationError(space.w_ValueError, space.wrap( @@ -682,6 +769,9 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + def create_sig(self): 
return signature.ScalarSignature(self.dtype) @@ -717,8 +807,7 @@ frame=frame, ri=ri, self=self, result=result) - result.dtype.setitem(result.storage, ri.offset, - sig.eval(frame, self)) + result.setitem(ri.offset, sig.eval(frame, self)) frame.next(shapelen) ri = ri.next(shapelen) return result @@ -788,7 +877,7 @@ Intermediate class for performing binary operations. """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -828,7 +917,7 @@ def __init__(self, shape, dtype, left, right): Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) - + def create_sig(self): lsig = self.left.create_sig() rsig = self.right.create_sig() @@ -847,7 +936,7 @@ when we'll make AxisReduce lazy """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) @@ -919,14 +1008,14 @@ if size < 1: builder.append('[]') return - elif size == 1: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True ndims = len(self.shape) + if ndims == 0: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return i = 0 builder.append('[') if ndims > 1: @@ -938,7 +1027,7 @@ builder.append('\n' + indent) else: builder.append(indent) - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: @@ -955,7 +1044,7 @@ builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, 
comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) i += 1 @@ -1061,6 +1150,9 @@ array.setslice(space, self) return array + def fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): def create_sig(self): @@ -1081,6 +1173,10 @@ parent) self.start = start + def create_iter(self): + return ViewIterator(self.start, self.strides, self.backstrides, + self.shape) + def setshape(self, space, new_shape): if len(self.shape) < 1: return @@ -1127,6 +1223,9 @@ self.shape = new_shape self.calc_strides(new_shape) + def create_iter(self): + return ArrayIterator(self.size) + def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1181,6 +1280,7 @@ arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) shapelen = len(shape) arr_iter = ArrayIterator(arr.size) + # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] dtype.setitem(arr.storage, arr_iter.offset, @@ -1247,6 +1347,9 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1273,6 +1376,8 @@ var = interp2app(BaseArray.descr_var), std = interp2app(BaseArray.descr_std), + fill = interp2app(BaseArray.descr_fill), + copy = interp2app(BaseArray.descr_copy), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -249,15 +249,16 @@ class W_Ufunc2(W_Ufunc): - _immutable_fields_ = ["comparison_func", "func", "name"] + _immutable_fields_ = ["comparison_func", "func", "name", "int_only"] argcount = 2 def 
__init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None, comparison_func=False): + identity=None, comparison_func=False, int_only=False): W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) self.func = func self.comparison_func = comparison_func + self.int_only = int_only def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, @@ -268,6 +269,7 @@ w_rhs = convert_to_array(space, w_rhs) calc_dtype = find_binop_result_dtype(space, w_lhs.find_dtype(), w_rhs.find_dtype(), + int_only=self.int_only, promote_to_float=self.promote_to_float, promote_bools=self.promote_bools, ) @@ -304,10 +306,12 @@ def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, - promote_bools=False): + promote_bools=False, int_only=False): # dt1.num should be <= dt2.num if dt1.num > dt2.num: dt1, dt2 = dt2, dt1 + if int_only and (not dt1.is_int_type() or not dt2.is_int_type()): + raise OperationError(space.w_TypeError, space.wrap("Unsupported types")) # Some operations promote op(bool, bool) to return int8, rather than bool if promote_bools and (dt1.kind == dt2.kind == interp_dtype.BOOLLTR): return interp_dtype.get_dtype_cache(space).w_int8dtype @@ -420,6 +424,10 @@ ("add", "add", 2, {"identity": 0}), ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), + ("bitwise_and", "bitwise_and", 2, {"identity": 1, + 'int_only': True}), + ("bitwise_or", "bitwise_or", 2, {"identity": 0, + 'int_only': True}), ("divide", "div", 2, {"promote_bools": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), @@ -444,6 +452,7 @@ ("fabs", "fabs", 1, {"promote_to_float": True}), ("floor", "floor", 1, {"promote_to_float": True}), + ("ceil", "ceil", 1, {"promote_to_float": True}), ("exp", "exp", 1, {"promote_to_float": True}), ('sqrt', 'sqrt', 1, {'promote_to_float': True}), @@ -470,7 +479,7 @@ extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, 
ufunc_name, op_name, argcount, - comparison_func=extra_kwargs.get("comparison_func", False) + comparison_func=extra_kwargs.get("comparison_func", False), ) if argcount == 1: ufunc = W_Ufunc1(func, ufunc_name, **extra_kwargs) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -82,6 +82,16 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + def get_final_iter(self): final_iter = promote(self.final_iter) if final_iter < 0: diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -10,12 +10,12 @@ rstart = start rshape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - rstrides.append(strides[i] * step) - rbackstrides.append(strides[i] * (lgt - 1) * step) - rshape.append(lgt) - rstart += strides[i] * start_ + for i, chunk in enumerate(chunks): + if chunk.step != 0: + rstrides.append(strides[i] * chunk.step) + rbackstrides.append(strides[i] * (chunk.lgt - 1) * chunk.step) + rshape.append(chunk.lgt) + rstart += strides[i] * chunk.start # add a reminder s = i + 1 assert s >= 0 diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,14 +166,11 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) - def test_new(self): - import _numpypy as np - assert np.int_(4) == 4 - assert np.float_(3.4) == 3.4 + def 
test_aliases(self): + from _numpypy import dtype - def test_pow(self): - from _numpypy import int_ - assert int_(4) ** 2 == 16 + assert dtype("float") is dtype(float) + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): @@ -189,6 +186,15 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): import _numpypy as numpy @@ -318,7 +324,7 @@ else: raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, numpy.int64, '9223372036854775808') diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -2,16 +2,11 @@ class AppTestNumPyModule(BaseNumpyAppTest): - def test_mean(self): - from _numpypy import array, mean - assert mean(array(range(5))) == 2.0 - assert mean(range(5)) == 2.0 - def test_average(self): from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 - + def test_sum(self): from _numpypy import array, sum assert sum(range(10)) == 45 @@ -21,7 +16,7 @@ from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 - + def test_max(self): from _numpypy import array, max assert max(range(10)) == 9 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -2,6 +2,7 @@ import py from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest from pypy.module.micronumpy.interp_numarray import W_NDimArray, 
shape_agreement +from pypy.module.micronumpy.interp_iter import Chunk from pypy.module.micronumpy import signature from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace @@ -37,53 +38,54 @@ def test_create_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 3 assert s.strides == [10, 50] assert s.backstrides == [40, 100] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 1 assert s.strides == [2, 10, 50] assert s.backstrides == [6, 40, 100] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.strides == [3, 10] assert s.backstrides == [3, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] assert s.backstrides == [12, 2] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 15 assert s.strides == [30, 3, 1] assert s.backstrides == [90, 12, 2] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), + Chunk(1, 0, 0, 1)]) assert s.start == 19 assert s.shape == [2, 1] assert s.strides == [45, 3] assert s.backstrides == [45, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(5, 0, 0, 1)]) + s 
= a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [50] assert s2.parent is a assert s2.backstrides == [100] assert s2.start == 35 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [3, 50] assert s2.backstrides == [3, 100] @@ -91,16 +93,16 @@ def test_slice_of_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [1] assert s2.parent is a assert s2.backstrides == [2] assert s2.start == 5 * 15 + 3 * 3 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [45, 1] assert s2.backstrides == [45, 2] @@ -108,14 +110,14 @@ def test_negative_step_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] @@ -124,7 +126,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = 
a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -134,7 +136,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -726,15 +728,16 @@ assert d[1] == 12 def test_mean(self): - from _numpypy import array, mean + from _numpypy import array a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 a = array(range(105)).reshape(3, 5, 7) - b = mean(a, axis=0) - b[0,0]==35. + b = a.mean(axis=0) + b[0, 0]==35. 
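The `_index_of_single_item` tests above reduce to a dot product of the multi-index with the strides. A minimal pure-Python sketch of that arithmetic (the helper name and the absence of bounds checks are simplifications, not the real micronumpy code):

```python
# Flat offset of a multi-index: start + sum(index[i] * strides[i]).
# Negative indices and bounds checks are deliberately omitted.
def index_of_single_item(start, strides, index):
    return start + sum(i * s for i, s in zip(index, strides))

# C order, shape [10, 5, 3] -> strides [5 * 3, 3, 1] == [15, 3, 1]
assert index_of_single_item(0, [15, 3, 1], (1, 2, 2)) == 1 * 3 * 5 + 2 * 3 + 2
# F order, same shape -> strides [1, 10, 50]
assert index_of_single_item(0, [1, 10, 50], (1, 2, 2)) == 1 + 2 * 10 + 2 * 50
```

The same formula applied to a slice's adjusted `start` and `strides` is what makes `s._index_of_single_item` agree with the parent array's in the tests.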
+ assert a.mean(axis=0)[0, 0] == 35 assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() - assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() + assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() def test_sum(self): from _numpypy import array @@ -755,6 +758,7 @@ assert array([]).sum() == 0.0 raises(ValueError, 'array([]).max()') assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() @@ -769,6 +773,8 @@ assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() def test_identity(self): from _numpypy import identity, array @@ -1298,6 +1304,56 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_array_indexing_one_elem(self): + skip("not yet") + from _numpypy import array, arange + raises(IndexError, 'arange(3)[array([3.5])]') + a = arange(3)[array([1])] + assert a == 1 + assert a[0] == 1 + raises(IndexError,'arange(3)[array([15])]') + assert arange(3)[array([-3])] == 0 + raises(IndexError,'arange(3)[array([-15])]') + assert arange(3)[array(1)] == 1 + + def test_fill(self): + from _numpypy import array + a = array([1, 2, 3]) + a.fill(10) + assert (a == [10, 10, 10]).all() + a.fill(False) + assert (a == [0, 0, 0]).all() + b = a[:1] + b.fill(4) + assert (b == [4]).all() + assert (a == [4, 0, 0]).all() + + c = b + b + c.fill(27) + assert (c == [27]).all() + + d = array(10) + d.fill(100) + assert d == 100 + + def test_array_indexing_bool(self): + from _numpypy import arange + a = arange(10) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + a = arange(10).reshape(5, 2) + assert (a[a 
> 3] == [4, 5, 6, 7, 8, 9]).all() + assert (a[a & 1 == 1] == [1, 3, 5, 7, 9]).all() + + def test_array_indexing_bool_setitem(self): + from _numpypy import arange, array + a = arange(6) + a[a > 3] = 15 + assert (a == [0, 1, 2, 3, 15, 15]).all() + a = arange(6).reshape(3, 2) + a[a & 1 == 1] = array([8, 9, 10]) + assert (a == [[0, 8], [2, 9], [4, 10]]).all() + + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): @@ -1437,9 +1493,11 @@ assert repr(a) == "array(0.0)" a = array(0.2) assert repr(a) == "array(0.2)" + a = array([2]) + assert repr(a) == "array([2])" def test_repr_multi(self): - from _numpypy import arange, zeros + from _numpypy import arange, zeros, array a = zeros((3, 4)) assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1462,6 +1520,9 @@ [498, 999], [499, 1000], [500, 1001]])''' + a = arange(2).reshape((2,1)) + assert repr(a) == '''array([[0], + [1]])''' def test_repr_slice(self): from _numpypy import array, zeros @@ -1546,18 +1607,3 @@ a = arange(0, 0.8, 0.1) assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) - - -class AppTestRanges(BaseNumpyAppTest): - def test_app_reshape(self): - from _numpypy import arange, array, dtype, reshape - a = arange(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = range(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = array(range(105)).reshape(3, 5, 7) - assert a.reshape(1, -1).shape == (1, 105) - assert a.reshape(1, 1, -1).shape == (1, 1, 105) - assert a.reshape(-1, 1, 1).shape == (105, 1, 1) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -190,14 +190,24 @@ for i in range(3): assert c[i] == a[i] - b[i] - def test_floor(self): - from _numpypy import array, floor - - reference = [-2.0, -1.0, 0.0, 1.0, 1.0] - a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) + def test_floorceil(self): + from _numpypy 
import array, floor, ceil + import math + reference = [-2.0, -2.0, -1.0, 0.0, 1.0, 1.0, 0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) b = floor(a) for i in range(5): assert b[i] == reference[i] + reference = [-1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) + b = ceil(a) + assert (reference == b).all() + inf = float("inf") + data = [1.5, 2.9999, -1.999, inf] + results = [math.floor(x) for x in data] + assert (floor(data) == results).all() + results = [math.ceil(x) for x in data] + assert (ceil(data) == results).all() def test_copysign(self): from _numpypy import array, copysign @@ -238,7 +248,7 @@ assert b[i] == math.sin(a[i]) a = sin(array([True, False], dtype=bool)) - assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise + assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise assert a[1] == 0.0 def test_cos(self): @@ -259,7 +269,6 @@ for i in range(len(a)): assert b[i] == math.tan(a[i]) - def test_arcsin(self): import math from _numpypy import array, arcsin @@ -283,7 +292,6 @@ for i in range(len(a)): assert b[i] == math.acos(a[i]) - a = array([-10, -1.5, -1.01, 1.01, 1.5, 10, float('nan'), float('inf'), float('-inf')]) b = arccos(a) for f in b: @@ -347,11 +355,20 @@ raises(ValueError, maximum.reduce, []) def test_reduceND(self): - from numpypy import add, arange + from _numpypy import add, arange a = arange(12).reshape(3, 4) assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_bitwise(self): + from _numpypy import bitwise_and, bitwise_or, arange, array + a = arange(6).reshape(2, 3) + assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() + assert (a & 1 == bitwise_and(a, 1)).all() + assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() + assert (a | 1 == bitwise_or(a, 1)).all() + raises(TypeError, 'array([1.0]) & 1') + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal diff --git 
a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -217,6 +217,7 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. + py.test.skip("too fragile") self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, 'getfield_gc_pure': 8, @@ -349,7 +350,8 @@ self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, - 'int_eq': 1, 'guard_false': 1, 'jump': 1}) + 'int_eq': 1, 'guard_false': 1, 'jump': 1, + 'arraylen_gc': 1}) def define_virtual_slice(): return """ diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -94,6 +94,9 @@ width, storage, i, offset )) + def read_bool(self, storage, width, i, offset): + raise NotImplementedError + def store(self, storage, width, i, offset, box): value = self.unbox(box) libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), @@ -168,6 +171,7 @@ @simple_binary_op def min(self, v1, v2): return min(v1, v2) + class Bool(BaseType, Primitive): T = lltype.Bool @@ -185,6 +189,11 @@ else: return self.False + + def read_bool(self, storage, width, i, offset): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + def coerce_subtype(self, space, w_subtype, w_item): # Doesn't return subclasses so it can return the constants. 
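The `bitwise_and`/`bitwise_or` ufuncs introduced in this changeset are registered with `int_only=True`: they map an integer operation over every element and reject float inputs with a TypeError. A rough pure-Python model of that behaviour (the `elementwise` helper is an illustration, not the real ufunc machinery):

```python
import operator

def elementwise(op, a, scalar):
    # Apply op to every item of a (possibly nested) list, ufunc-style;
    # non-integer elements are rejected, mimicking int_only=True.
    if isinstance(a, list):
        return [elementwise(op, x, scalar) for x in a]
    if not isinstance(a, int):
        raise TypeError("Unsupported types")
    return op(a, scalar)

a = [[0, 1, 2], [3, 4, 5]]                     # arange(6).reshape(2, 3)
assert elementwise(operator.and_, a, 1) == [[0, 1, 0], [1, 0, 1]]
assert elementwise(operator.or_, a, 1) == [[1, 1, 3], [3, 5, 5]]
try:
    elementwise(operator.and_, [1.0], 1)       # like array([1.0]) & 1
except TypeError:
    pass
```

These are the same values `test_bitwise` checks against `a & 1` and `a | 1`.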
return self._coerce(space, w_item) @@ -253,6 +262,14 @@ assert v == 0 return 0 + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box @@ -374,6 +391,10 @@ return math.floor(v) @simple_unary_op + def ceil(self, v): + return math.ceil(v) + + @simple_unary_op def exp(self, v): try: return math.exp(v) @@ -436,4 +457,4 @@ class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box - format_code = "d" \ No newline at end of file + format_code = "d" diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -11,6 +11,7 @@ 'set_optimize_hook': 'interp_resop.set_optimize_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 'interp_resop.WrappedOp', + 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', } diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,5 +1,6 @@ -from pypy.interpreter.typedef import TypeDef, GetSetProperty +from pypy.interpreter.typedef import (TypeDef, GetSetProperty, + interp_attrproperty, interp_attrproperty_w) from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode @@ -10,6 +11,7 @@ from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant from pypy.rlib import jit_hooks +from pypy.module.pypyjit.interp_jit import pypyjitdriver class Cache(object): in_recursion = False @@ -111,13 +113,24 @@ def wrap_oplist(space, logops, operations, ops_offset=None): l_w = [] + jitdrivers_sd = logops.metainterp_sd.jitdrivers_sd for op in 
operations: if ops_offset is None: ofs = -1 else: ofs = ops_offset.get(op, 0) - l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, - logops.repr_of_resop(op))) + if op.opnum == rop.DEBUG_MERGE_POINT: + jd_sd = jitdrivers_sd[op.getarg(0).getint()] + greenkey = op.getarglist()[2:] + repr = jd_sd.warmstate.get_location_str(greenkey) + w_greenkey = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) + l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), + logops.repr_of_resop(op), + jd_sd.jitdriver.name, + w_greenkey)) + else: + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) return l_w class WrappedBox(Wrappable): @@ -150,6 +163,15 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) +@unwrap_spec(repr=str, jd_name=str) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + num = rop.DEBUG_MERGE_POINT + return DebugMergePoint(space, + jit_hooks.resop_new(num, args, jit_hooks.emptyval()), + repr, jd_name, w_greenkey) + class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely """ @@ -182,6 +204,25 @@ box = space.interp_w(WrappedBox, w_box) jit_hooks.resop_setresult(self.op, box.llbox) +class DebugMergePoint(WrappedOp): + def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + WrappedOp.__init__(self, op, -1, repr_of_resop) + self.w_greenkey = w_greenkey + self.jd_name = jd_name + + def get_pycode(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(0)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_bytecode_no(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(1)) + raise OperationError(space.w_AttributeError, space.wrap("This 
DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_jitdriver_name(self, space): + return space.wrap(self.jd_name) + WrappedOp.typedef = TypeDef( 'ResOperation', __doc__ = WrappedOp.__doc__, @@ -195,3 +236,15 @@ WrappedOp.descr_setresult) ) WrappedOp.acceptable_as_base_class = False + +DebugMergePoint.typedef = TypeDef( + 'DebugMergePoint', WrappedOp.typedef, + __new__ = interp2app(descr_new_dmp), + greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), + pycode = GetSetProperty(DebugMergePoint.get_pycode), + bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), +) +DebugMergePoint.acceptable_as_base_class = False + + diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -127,7 +127,7 @@ 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', '_codecs']: if modname == 'pypyjit' and 'interp_resop' in rest: return False return True diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -92,6 +92,7 @@ cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_dmp_num = space.wrap(rop.DEBUG_MERGE_POINT) cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) cls.orig_oplist = oplist @@ -117,6 +118,10 @@ assert elem[2][2] == False assert len(elem[3]) == 4 int_add = elem[3][0] + dmp = elem[3][1] + assert isinstance(dmp, pypyjit.DebugMergePoint) + assert dmp.pycode is self.f.func_code + assert dmp.greenkey == (self.f.func_code, 0, False) #assert 
int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() @@ -211,3 +216,18 @@ assert op.getarg(0).getint() == 4 op.result = box assert op.result.getint() == 1 + + def test_creation_dmp(self): + from pypyjit import DebugMergePoint, Box + + def f(): + pass + + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + assert op.bytecode_no == 0 + assert op.pycode is f.func_code + assert repr(op) == 'repr' + assert op.jitdriver_name == 'pypyjit' + assert op.num == self.dmp_num + op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + raises(AttributeError, 'op.pycode') diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (3, 2, 2, "final", 0) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py rename from lib_pypy/numpypy/test/test_fromnumeric.py rename to pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/lib_pypy/numpypy/test/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -1,7 +1,7 @@ - from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -class AppTestFromNumeric(BaseNumpyAppTest): + +class AppTestFromNumeric(BaseNumpyAppTest): def test_argmax(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, argmax @@ -18,12 +18,12 @@ from numpypy import array, arange, argmin a = arange(6).reshape((2,3)) assert argmin(a) == 0 - # assert (argmax(a, axis=0) == array([0, 0, 0])).all() - # assert (argmax(a, axis=1) == array([0, 0])).all() + 
assert (argmin(a, axis=0) == array([0, 0, 0])).all() + assert (argmin(a, axis=1) == array([0, 0])).all() b = arange(6) b[1] = 0 assert argmin(b) == 0 - + def test_shape(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, identity, shape @@ -44,7 +44,7 @@ # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() # If the accumulator is too small, overflow occurs: # assert ones(128, dtype=int8).sum(dtype=int8) == -128 - + def test_amin(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, amin @@ -86,14 +86,14 @@ assert ndim([[1,2,3],[4,5,6]]) == 2 assert ndim(array([[1,2,3],[4,5,6]])) == 2 assert ndim(1) == 0 - + def test_rank(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, rank assert rank([[1,2,3],[4,5,6]]) == 2 assert rank(array([[1,2,3],[4,5,6]])) == 2 assert rank(1) == 0 - + def test_var(self): from numpypy import array, var a = array([[1,2],[3,4]]) @@ -107,3 +107,31 @@ assert std(a) == 1.1180339887498949 # assert (std(a, axis=0) == array([ 1., 1.])).all() # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() + + def test_mean(self): + from numpypy import array, mean + assert mean(array(range(5))) == 2.0 + assert mean(range(5)) == 2.0 + + def test_reshape(self): + from numpypy import arange, array, dtype, reshape + a = arange(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = range(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert reshape(a, (1, -1)).shape == (1, 105) + assert reshape(a, (1, 1, -1)).shape == (1, 1, 105) + assert reshape(a, (-1, 1, 1)).shape == (105, 1, 1) + + def test_transpose(self): + from numpypy import arange, array, transpose, ones + x = arange(4).reshape((2,2)) + assert (transpose(x) == array([[0, 2],[1, 3]])).all() + # Once axes argument is implemented, add more tests + raises(NotImplementedError, "transpose(x, axes=(1, 0, 2))") + # x = ones((1, 2, 
3)) + # assert transpose(x, (1, 0, 2)).shape == (2, 1, 3) + diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -29,6 +29,8 @@ assert type(a) == type(b) check(', '.join(['a']), 'a') raises(TypeError, ','.join, [b'a']) + exc = raises(TypeError, ''.join, ['a', 2, 3]) + assert 'sequence item 1' in str(exc.value) def test_contains(self): assert '' in 'abc' diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -129,7 +129,7 @@ return space.wrap(l[0]) return space.wrap(w_self._value.join(l)) - list_w = space.unpackiterable(w_list) + list_w = space.listview(w_list) size = len(list_w) if size == 0: @@ -144,17 +144,23 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - sb = UnicodeBuilder() + prealloc_size = len(self) * (size - 1) + for i in range(size): + w_s = list_w[i] + try: + prealloc_size += len(space.unicode_w(w_s)) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + raise operationerrfmt(space.w_TypeError, + "sequence item %d: expected string, %s " + "found", i, space.type(w_s).getname(space)) + sb = UnicodeBuilder(prealloc_size) for i in range(size): if self and i != 0: sb.append(self) w_s = list_w[i] - if not isinstance(w_s, W_UnicodeObject): - raise operationerrfmt( - space.w_TypeError, - "sequence item %d: expected string, %s " - "found", i, space.type(w_s).getname(space)) - sb.append(w_s._value) + sb.append(space.unicode_w(w_s)) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -52,7 +52,10 @@ from pypy.jit.metainterp.history import ResOperation args = [_cast_to_box(llargs[i]) for i in
range(len(llargs))] - res = _cast_to_box(llres) + if llres: + res = _cast_to_box(llres) + else: + res = None return _cast_to_gcref(ResOperation(no, args, res)) @register_helper(annmodel.SomePtr(llmemory.GCREF)) diff --git a/pypy/rlib/longlong2float.py b/pypy/rlib/longlong2float.py --- a/pypy/rlib/longlong2float.py +++ b/pypy/rlib/longlong2float.py @@ -79,19 +79,23 @@ longlong2float = rffi.llexternal( "pypy__longlong2float", [rffi.LONGLONG], rffi.DOUBLE, _callable=longlong2float_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__longlong2float") float2longlong = rffi.llexternal( "pypy__float2longlong", [rffi.DOUBLE], rffi.LONGLONG, _callable=float2longlong_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__float2longlong") uint2singlefloat = rffi.llexternal( "pypy__uint2singlefloat", [rffi.UINT], rffi.FLOAT, _callable=uint2singlefloat_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__uint2singlefloat") singlefloat2uint = rffi.llexternal( "pypy__singlefloat2uint", [rffi.FLOAT], rffi.UINT, _callable=singlefloat2uint_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__singlefloat2uint") diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -424,7 +424,7 @@ vobj.concretetype.TO._gckind == 'gc') else: from pypy.rpython.ootypesystem import ootype - ok = isinstance(vobj.concretetype, ootype.Instance) + ok = isinstance(vobj.concretetype, (ootype.Instance, ootype.BuiltinType)) if not ok: from 
pypy.rpython.error import TyperError raise TyperError("compute_unique_id() cannot be applied to" diff --git a/pypy/rpython/lltypesystem/rclass.py b/pypy/rpython/lltypesystem/rclass.py --- a/pypy/rpython/lltypesystem/rclass.py +++ b/pypy/rpython/lltypesystem/rclass.py @@ -510,7 +510,13 @@ ctype = inputconst(Void, self.object_type) cflags = inputconst(Void, flags) vlist = [ctype, cflags] - vptr = llops.genop('malloc', vlist, + cnonmovable = self.classdef.classdesc.read_attribute( + '_alloc_nonmovable_', Constant(False)) + if cnonmovable.value: + opname = 'malloc_nonmovable' + else: + opname = 'malloc' + vptr = llops.genop(opname, vlist, resulttype = Ptr(self.object_type)) ctypeptr = inputconst(CLASSTYPE, self.rclass.getvtable()) self.setfield(vptr, '__class__', ctypeptr, llops) diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -512,6 +512,7 @@ "ll_append_char": Meth([CHARTP], Void), "ll_append": Meth([STRINGTP], Void), "ll_build": Meth([], STRINGTP), + "ll_getlength": Meth([], Signed), }) self._setup_methods({}) @@ -1376,6 +1377,9 @@ def _cast_to_object(self): return make_object(self) + def _identityhash(self): + return object.__hash__(self) + class _string(_builtin_type): def __init__(self, STRING, value = ''): @@ -1543,6 +1547,9 @@ else: return make_unicode(u''.join(self._buf)) + def ll_getlength(self): + return self.ll_build().ll_strlen() + class _null_string_builder(_null_mixin(_string_builder), _string_builder): def __init__(self, STRING_BUILDER): self.__dict__["_TYPE"] = STRING_BUILDER diff --git a/pypy/rpython/ootypesystem/rbuilder.py b/pypy/rpython/ootypesystem/rbuilder.py --- a/pypy/rpython/ootypesystem/rbuilder.py +++ b/pypy/rpython/ootypesystem/rbuilder.py @@ -21,6 +21,10 @@ builder.ll_append_char(char) @staticmethod + def ll_getlength(builder): + return builder.ll_getlength() + + @staticmethod def ll_append(builder, string): 
builder.ll_append(string) diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -124,9 +124,5 @@ pass class TestOOtype(BaseTestStringBuilder, OORtypeMixin): - def test_string_getlength(self): - py.test.skip("getlength(): not implemented on ootype") - def test_unicode_getlength(self): - py.test.skip("getlength(): not implemented on ootype") def test_append_charpsize(self): py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -463,6 +463,31 @@ assert x1 == intmask(x0) assert x3 == intmask(x2) + def test_id_on_builtins(self): + from pypy.rlib.objectmodel import compute_unique_id + from pypy.rlib.rstring import StringBuilder, UnicodeBuilder + def fn(): + return (compute_unique_id("foo"), + compute_unique_id(u"bar"), + compute_unique_id([1]), + compute_unique_id({"foo": 3}), + compute_unique_id(StringBuilder()), + compute_unique_id(UnicodeBuilder())) + res = self.interpret(fn, []) + for id in self.ll_unpack_tuple(res, 6): + assert isinstance(id, (int, r_longlong)) + + def test_uniqueness_of_id_on_strings(self): + from pypy.rlib.objectmodel import compute_unique_id + def fn(s1, s2): + return (compute_unique_id(s1), compute_unique_id(s2)) + + s1 = "foo" + s2 = ''.join(['f','oo']) + res = self.interpret(fn, [self.string_to_ll(s1), self.string_to_ll(s2)]) + i1, i2 = self.ll_unpack_tuple(res, 2) + assert i1 != i2 + def test_cast_primitive(self): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): diff --git a/pypy/rpython/test/test_rclass.py b/pypy/rpython/test/test_rclass.py --- a/pypy/rpython/test/test_rclass.py +++ b/pypy/rpython/test/test_rclass.py @@ -1130,6 +1130,18 @@ assert sorted([u]) == [6] # 32-bit types assert sorted([i, r, d, l]) == [2, 3, 4, 5] # 
64-bit types + def test_nonmovable(self): + for (nonmovable, opname) in [(True, 'malloc_nonmovable'), + (False, 'malloc')]: + class A(object): + _alloc_nonmovable_ = nonmovable + def f(): + return A() + t, typer, graph = self.gengraph(f, []) + assert summary(graph) == {opname: 1, + 'cast_pointer': 1, + 'setfield': 1} + class TestOOtype(BaseTestRclass, OORtypeMixin): diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -140,13 +140,15 @@ bytecode_name = None is_bytecode = True inline_level = None + has_dmp = False def parse_code_data(self, arg): m = re.search('\w]+)[\.,] file \'(.+?)\'[\.,] line (\d+)> #(\d+) (\w+)', arg) if m is None: # a non-code loop, like StrLiteralSearch or something - self.bytecode_name = arg + if arg: + self.bytecode_name = arg else: self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups() self.startlineno = int(lineno) @@ -218,7 +220,7 @@ self.inputargs = inputargs self.chunks = chunks for chunk in self.chunks: - if chunk.filename is not None: + if chunk.bytecode_name is not None: self.startlineno = chunk.startlineno self.filename = chunk.filename self.name = chunk.name diff --git a/pypy/tool/jitlogparser/test/test_parser.py b/pypy/tool/jitlogparser/test/test_parser.py --- a/pypy/tool/jitlogparser/test/test_parser.py +++ b/pypy/tool/jitlogparser/test/test_parser.py @@ -283,3 +283,13 @@ assert loops[-1].count == 1234 assert loops[1].count == 123 assert loops[2].count == 12 + +def test_parse_nonpython(): + loop = parse(""" + [] + debug_merge_point(0, 'random') + debug_merge_point(0, ' #15 COMPARE_OP') + """) + f = Function.from_operations(loop.operations, LoopStorage()) + assert f.chunks[-1].filename == 'x.py' + assert f.filename is None diff --git a/pypy/tool/test/test_version.py b/pypy/tool/test/test_version.py --- a/pypy/tool/test/test_version.py +++ b/pypy/tool/test/test_version.py @@ -1,6 +1,22 @@ import os, 
sys import py -from pypy.tool.version import get_repo_version_info +from pypy.tool.version import get_repo_version_info, _get_hg_archive_version + +def test_hg_archival_version(tmpdir): + def version_for(name, **kw): + path = tmpdir.join(name) + path.write('\n'.join('%s: %s' % x for x in kw.items())) + return _get_hg_archive_version(str(path)) + + assert version_for('release', + tag='release-123', + node='000', + ) == ('PyPy', 'release-123', '000') + assert version_for('somebranch', + node='000', + branch='something', + ) == ('PyPy', 'something', '000') + def test_get_repo_version_info(): assert get_repo_version_info(None) diff --git a/pypy/tool/version.py b/pypy/tool/version.py --- a/pypy/tool/version.py +++ b/pypy/tool/version.py @@ -3,111 +3,139 @@ from subprocess import Popen, PIPE import pypy pypydir = os.path.dirname(os.path.abspath(pypy.__file__)) +pypyroot = os.path.dirname(pypydir) +default_retval = 'PyPy', '?', '?' + +def maywarn(err, repo_type='Mercurial'): + if not err: + return + + from pypy.tool.ansi_print import ansi_log + log = py.log.Producer("version") + py.log.setconsumer("version", ansi_log) + log.WARNING('Errors getting %s information: %s' % (repo_type, err)) def get_repo_version_info(hgexe=None): '''Obtain version information by invoking the 'hg' or 'git' commands.''' - # TODO: support extracting from .hg_archival.txt - - default_retval = 'PyPy', '?', '?' - pypyroot = os.path.abspath(os.path.join(pypydir, '..')) - - def maywarn(err, repo_type='Mercurial'): - if not err: - return - - from pypy.tool.ansi_print import ansi_log - log = py.log.Producer("version") - py.log.setconsumer("version", ansi_log) - log.WARNING('Errors getting %s information: %s' % (repo_type, err)) # Try to see if we can get info from Git if hgexe is not specified. 
if not hgexe: if os.path.isdir(os.path.join(pypyroot, '.git')): - gitexe = py.path.local.sysfind('git') - if gitexe: - try: - p = Popen( - [str(gitexe), 'rev-parse', 'HEAD'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - except OSError, e: - maywarn(e, 'Git') - return default_retval - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return default_retval - revision_id = p.stdout.read().strip()[:12] - p = Popen( - [str(gitexe), 'describe', '--tags', '--exact-match'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - if p.wait() != 0: - p = Popen( - [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, - cwd=pypyroot - ) - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return 'PyPy', '?', revision_id - branch = '?' - for line in p.stdout.read().strip().split('\n'): - if line.startswith('* '): - branch = line[1:].strip() - if branch == '(no branch)': - branch = '?' - break - return 'PyPy', branch, revision_id - return 'PyPy', p.stdout.read().strip(), revision_id + return _get_git_version() # Fallback to trying Mercurial. if hgexe is None: hgexe = py.path.local.sysfind('hg') - if not os.path.isdir(os.path.join(pypyroot, '.hg')): + if os.path.isfile(os.path.join(pypyroot, '.hg_archival.txt')): + return _get_hg_archive_version(os.path.join(pypyroot, '.hg_archival.txt')) + elif not os.path.isdir(os.path.join(pypyroot, '.hg')): maywarn('Not running from a Mercurial repository!') return default_retval elif not hgexe: maywarn('Cannot find Mercurial command!') return default_retval else: - env = dict(os.environ) - # get Mercurial into scripting mode - env['HGPLAIN'] = '1' - # disable user configuration, extensions, etc. 
- env['HGRCPATH'] = os.devnull + return _get_hg_version(hgexe) - try: - p = Popen([str(hgexe), 'version', '-q'], - stdout=PIPE, stderr=PIPE, env=env) - except OSError, e: - maywarn(e) - return default_retval - if not p.stdout.read().startswith('Mercurial Distributed SCM'): - maywarn('command does not identify itself as Mercurial') - return default_retval +def _get_hg_version(hgexe): + env = dict(os.environ) + # get Mercurial into scripting mode + env['HGPLAIN'] = '1' + # disable user configuration, extensions, etc. + env['HGRCPATH'] = os.devnull - p = Popen([str(hgexe), 'id', '-i', pypyroot], + try: + p = Popen([str(hgexe), 'version', '-q'], stdout=PIPE, stderr=PIPE, env=env) - hgid = p.stdout.read().strip() + except OSError, e: + maywarn(e) + return default_retval + + if not p.stdout.read().startswith('Mercurial Distributed SCM'): + maywarn('command does not identify itself as Mercurial') + return default_retval + + p = Popen([str(hgexe), 'id', '-i', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgid = p.stdout.read().strip() + maywarn(p.stderr.read()) + if p.wait() != 0: + hgid = '?' 
+ + p = Popen([str(hgexe), 'id', '-t', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] + maywarn(p.stderr.read()) + if p.wait() != 0: + hgtags = ['?'] + + if hgtags: + return 'PyPy', hgtags[0], hgid + else: + # use the branch instead + p = Popen([str(hgexe), 'id', '-b', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgbranch = p.stdout.read().strip() maywarn(p.stderr.read()) + + return 'PyPy', hgbranch, hgid + + +def _get_hg_archive_version(path): + fp = open(path) + try: + data = dict(x.split(': ', 1) for x in fp.read().splitlines()) + finally: + fp.close() + if 'tag' in data: + return 'PyPy', data['tag'], data['node'] + else: + return 'PyPy', data['branch'], data['node'] + + +def _get_git_version(): + #XXX: this function is a untested hack, + # so the git mirror tav made will work + gitexe = py.path.local.sysfind('git') + if not gitexe: + return default_retval + + try: + p = Popen( + [str(gitexe), 'rev-parse', 'HEAD'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + except OSError, e: + maywarn(e, 'Git') + return default_retval + if p.wait() != 0: + maywarn(p.stderr.read(), 'Git') + return default_retval + revision_id = p.stdout.read().strip()[:12] + p = Popen( + [str(gitexe), 'describe', '--tags', '--exact-match'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + if p.wait() != 0: + p = Popen( + [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, + cwd=pypyroot + ) if p.wait() != 0: - hgid = '?' + maywarn(p.stderr.read(), 'Git') + return 'PyPy', '?', revision_id + branch = '?' + for line in p.stdout.read().strip().split('\n'): + if line.startswith('* '): + branch = line[1:].strip() + if branch == '(no branch)': + branch = '?' 
+ break + return 'PyPy', branch, revision_id + return 'PyPy', p.stdout.read().strip(), revision_id - p = Popen([str(hgexe), 'id', '-t', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] - maywarn(p.stderr.read()) - if p.wait() != 0: - hgtags = ['?'] - if hgtags: - return 'PyPy', hgtags[0], hgid - else: - # use the branch instead - p = Popen([str(hgexe), 'id', '-b', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgbranch = p.stdout.read().strip() - maywarn(p.stderr.read()) - - return 'PyPy', hgbranch, hgid +if __name__ == '__main__': + print get_repo_version_info() diff --git a/pypy/translator/jvm/builtin.py b/pypy/translator/jvm/builtin.py --- a/pypy/translator/jvm/builtin.py +++ b/pypy/translator/jvm/builtin.py @@ -84,6 +84,9 @@ (ootype.StringBuilder.__class__, "ll_build"): jvm.Method.v(jStringBuilder, "toString", (), jString), + (ootype.StringBuilder.__class__, "ll_getlength"): + jvm.Method.v(jStringBuilder, "length", (), jInt), + (ootype.String.__class__, "ll_hash"): jvm.Method.v(jString, "hashCode", (), jInt), diff --git a/pypy/translator/jvm/database.py b/pypy/translator/jvm/database.py --- a/pypy/translator/jvm/database.py +++ b/pypy/translator/jvm/database.py @@ -358,7 +358,7 @@ ootype.Unsigned:jvm.PYPYSERIALIZEUINT, ootype.SignedLongLong:jvm.LONGTOSTRINGL, ootype.UnsignedLongLong: jvm.PYPYSERIALIZEULONG, - ootype.Float:jvm.DOUBLETOSTRINGD, + ootype.Float:jvm.PYPYSERIALIZEDOUBLE, ootype.Bool:jvm.PYPYSERIALIZEBOOLEAN, ootype.Void:jvm.PYPYSERIALIZEVOID, ootype.Char:jvm.PYPYESCAPEDCHAR, diff --git a/pypy/translator/jvm/metavm.py b/pypy/translator/jvm/metavm.py --- a/pypy/translator/jvm/metavm.py +++ b/pypy/translator/jvm/metavm.py @@ -92,6 +92,7 @@ CASTS = { # FROM TO (ootype.Signed, ootype.UnsignedLongLong): jvm.I2L, + (ootype.Unsigned, ootype.UnsignedLongLong): jvm.I2L, (ootype.SignedLongLong, ootype.Signed): jvm.L2I, (ootype.UnsignedLongLong, ootype.Unsigned): jvm.L2I, 
(ootype.UnsignedLongLong, ootype.Signed): jvm.L2I, diff --git a/pypy/translator/jvm/opcodes.py b/pypy/translator/jvm/opcodes.py --- a/pypy/translator/jvm/opcodes.py +++ b/pypy/translator/jvm/opcodes.py @@ -101,6 +101,7 @@ 'jit_force_virtualizable': Ignore, 'jit_force_virtual': DoNothing, 'jit_force_quasi_immutable': Ignore, + 'jit_is_virtual': PushPrimitive(ootype.Bool, False), 'debug_assert': [], # TODO: implement? 'debug_start_traceback': Ignore, diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -283,6 +283,14 @@ } } + public double pypy__longlong2float(long l) { + return Double.longBitsToDouble(l); + } + + public long pypy__float2longlong(double d) { + return Double.doubleToRawLongBits(d); + } + public double ooparse_float(String s) { try { return Double.parseDouble(s); @@ -353,6 +361,19 @@ return "False"; } + public static String serialize_double(double d) { + if (Double.isNaN(d)) { + return "float(\"nan\")"; + } else if (Double.isInfinite(d)) { + if (d > 0) + return "float(\"inf\")"; + else + return "float(\"-inf\")"; + } else { + return Double.toString(d); + } + } + private static String format_char(char c) { String res = "\\x"; if (c <= 0x0F) res = res + "0"; diff --git a/pypy/translator/jvm/test/runtest.py b/pypy/translator/jvm/test/runtest.py --- a/pypy/translator/jvm/test/runtest.py +++ b/pypy/translator/jvm/test/runtest.py @@ -56,6 +56,7 @@ # CLI could-be duplicate class JvmGeneratedSourceWrapper(object): + def __init__(self, gensrc): """ gensrc is an instance of JvmGeneratedSource """ self.gensrc = gensrc diff --git a/pypy/translator/jvm/test/test_builder.py b/pypy/translator/jvm/test/test_builder.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_builder.py @@ -0,0 +1,7 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rpython.test.test_rbuilder import BaseTestStringBuilder 
+import py + +class TestJvmStringBuilder(JvmTest, BaseTestStringBuilder): + def test_append_charpsize(self): + py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/translator/jvm/test/test_longlong2float.py b/pypy/translator/jvm/test/test_longlong2float.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_longlong2float.py @@ -0,0 +1,20 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rlib.longlong2float import * +from pypy.rlib.test.test_longlong2float import enum_floats +from pypy.rlib.test.test_longlong2float import fn as float2longlong2float +import py + +class TestLongLong2Float(JvmTest): + + def test_float2longlong_and_longlong2float(self): + def func(f): + return float2longlong2float(f) + + for f in enum_floats(): + assert repr(f) == repr(self.interpret(func, [f])) + + def test_uint2singlefloat(self): + py.test.skip("uint2singlefloat is not implemented in ootype") + + def test_singlefloat2uint(self): + py.test.skip("singlefloat2uint is not implemented in ootype") diff --git a/pypy/translator/jvm/typesystem.py b/pypy/translator/jvm/typesystem.py --- a/pypy/translator/jvm/typesystem.py +++ b/pypy/translator/jvm/typesystem.py @@ -955,6 +955,7 @@ PYPYSERIALIZEUINT = Method.s(jPyPy, 'serialize_uint', (jInt,), jString) PYPYSERIALIZEULONG = Method.s(jPyPy, 'serialize_ulonglong', (jLong,),jString) PYPYSERIALIZEVOID = Method.s(jPyPy, 'serialize_void', (), jString) +PYPYSERIALIZEDOUBLE = Method.s(jPyPy, 'serialize_double', (jDouble,), jString) PYPYESCAPEDCHAR = Method.s(jPyPy, 'escaped_char', (jChar,), jString) PYPYESCAPEDUNICHAR = Method.s(jPyPy, 'escaped_unichar', (jChar,), jString) PYPYESCAPEDSTRING = Method.s(jPyPy, 'escaped_string', (jString,), jString) diff --git a/pypy/translator/oosupport/test_template/cast.py b/pypy/translator/oosupport/test_template/cast.py --- a/pypy/translator/oosupport/test_template/cast.py +++ b/pypy/translator/oosupport/test_template/cast.py @@ -13,6 +13,9 @@ def 
to_longlong(x): return r_longlong(x) +def to_ulonglong(x): + return r_ulonglong(x) + def uint_to_int(x): return intmask(x) @@ -56,6 +59,9 @@ def test_unsignedlonglong_to_unsigned4(self): self.check(to_uint, [r_ulonglong(18446744073709551615l)]) # max 64 bit num + def test_unsigned_to_usignedlonglong(self): + self.check(to_ulonglong, [r_uint(42)]) + def test_uint_to_int(self): self.check(uint_to_int, [r_uint(sys.maxint+1)]) From noreply at buildbot.pypy.org Sat Jan 21 13:19:24 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 21 Jan 2012 13:19:24 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fix translation Message-ID: <20120121121924.2C00A82CB2@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51572:a718c93124eb Date: 2012-01-21 12:47 +0100 http://bitbucket.org/pypy/pypy/changeset/a718c93124eb/ Log: Fix translation diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -9,7 +9,7 @@ class Signature(object): _immutable_ = True - _immutable_fields_ = ["argnames[*]"] + _immutable_fields_ = ["argnames[*]", "kwonlyargnames[*]"] __slots__ = ("argnames", "kwonlyargnames", "varargname", "kwargname") def __init__(self, argnames, varargname=None, kwargname=None, kwonlyargnames=None): @@ -51,7 +51,7 @@ argnames = self.argnames if self.varargname is not None: argnames = argnames + [self.varargname] - argnames += self.kwonlyargnames + argnames = argnames + self.kwonlyargnames if self.kwargname is not None: argnames = argnames + [self.kwargname] return argnames From noreply at buildbot.pypy.org Sat Jan 21 13:39:53 2012 From: noreply at buildbot.pypy.org (bivab) Date: Sat, 21 Jan 2012 13:39:53 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: off by one error when extracting call arguments in guard_call_xxx operations Message-ID: <20120121123953.4662A82CB2@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: 
r51573:ce782a82bacb Date: 2012-01-21 12:13 +0100 http://bitbucket.org/pypy/pypy/changeset/ce782a82bacb/ Log: off by one error when extracting call arguments in guard_call_xxx operations diff --git a/pypy/jit/backend/arm/opassembler.py b/pypy/jit/backend/arm/opassembler.py --- a/pypy/jit/backend/arm/opassembler.py +++ b/pypy/jit/backend/arm/opassembler.py @@ -1118,7 +1118,7 @@ fail_index = self.cpu.get_fail_descr_number(faildescr) self._write_fail_index(fail_index) numargs = op.numargs() - callargs = arglocs[2:numargs] + callargs = arglocs[2:numargs + 1] # extract the arguments to the call adr = arglocs[1] resloc = arglocs[0] self._emit_call(fail_index, adr, callargs, fcond, resloc) @@ -1134,9 +1134,9 @@ # first, close the stack in the sense of the asmgcc GC root tracker gcrootmap = self.cpu.gc_ll_descr.gcrootmap numargs = op.numargs() + callargs = arglocs[2:numargs + 1] # extract the arguments to the call + adr = arglocs[1] resloc = arglocs[0] - adr = arglocs[1] - callargs = arglocs[2:numargs] if gcrootmap: self.call_release_gil(gcrootmap, arglocs, fcond) From noreply at buildbot.pypy.org Sat Jan 21 13:39:54 2012 From: noreply at buildbot.pypy.org (bivab) Date: Sat, 21 Jan 2012 13:39:54 +0100 (CET) Subject: [pypy-commit] pypy arm-backend-2: merge default Message-ID: <20120121123954.9BBC882CB2@wyvern.cs.uni-duesseldorf.de> Author: David Schneider Branch: arm-backend-2 Changeset: r51574:f81818811f73 Date: 2012-01-21 12:32 +0100 http://bitbucket.org/pypy/pypy/changeset/f81818811f73/ Log: merge default diff --git a/pypy/doc/translation.rst b/pypy/doc/translation.rst --- a/pypy/doc/translation.rst +++ b/pypy/doc/translation.rst @@ -155,7 +155,7 @@ function. The two input variables are the exception class and the exception value, respectively. (No other block will actually link to the exceptblock if the function does not - explicitely raise exceptions.) + explicitly raise exceptions.) 
``Block`` @@ -325,7 +325,7 @@ Mutable objects need special treatment during annotation, because the annotation of contained values needs to be possibly updated to account for mutation operations, and consequently the annotation information -reflown through the relevant parts of the flow the graphs. +reflown through the relevant parts of the flow graphs. * ``SomeList`` stands for a list of homogeneous type (i.e. all the elements of the list are represented by a single common ``SomeXxx`` @@ -503,8 +503,8 @@ Since RPython is a garbage collected language there is a lot of heap memory allocation going on all the time, which would either not occur at all in a more -traditional explicitely managed language or results in an object which dies at -a time known in advance and can thus be explicitely deallocated. For example a +traditional explicitly managed language or results in an object which dies at +a time known in advance and can thus be explicitly deallocated. For example a loop of the following form:: for i in range(n): @@ -696,7 +696,7 @@ So far it is the second most mature high level backend after GenCLI: it still can't translate the full Standard Interpreter, but after the -Leysin sprint we were able to compile and run the rpytstone and +Leysin sprint we were able to compile and run the rpystone and richards benchmarks. 
GenJVM is almost entirely the work of Niko Matsakis, who worked on it diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1118,6 +1118,12 @@ for src, dst in singlefloats: self.mc.MOVD(dst, src) # Finally remap the arguments in the main regs + # If x is a register and is in dst_locs, then oups, it needs to + # be moved away: + if x in dst_locs: + src_locs.append(x) + dst_locs.append(r10) + x = r10 remap_frame_layout(self, src_locs, dst_locs, X86_64_SCRATCH_REG) self._regalloc.reserve_param(len(pass_on_stack)) @@ -2042,10 +2048,7 @@ size = sizeloc.value signloc = arglocs[1] - if isinstance(op.getarg(0), Const): - x = imm(op.getarg(0).getint()) - else: - x = arglocs[2] + x = arglocs[2] # the function address if x is eax: tmp = ecx else: diff --git a/pypy/jit/backend/x86/runner.py b/pypy/jit/backend/x86/runner.py --- a/pypy/jit/backend/x86/runner.py +++ b/pypy/jit/backend/x86/runner.py @@ -6,7 +6,7 @@ from pypy.jit.codewriter import longlong from pypy.jit.metainterp import history, compile from pypy.jit.backend.x86.assembler import Assembler386 -from pypy.jit.backend.x86.arch import FORCE_INDEX_OFS +from pypy.jit.backend.x86.arch import FORCE_INDEX_OFS, IS_X86_32 from pypy.jit.backend.x86.profagent import ProfileAgent from pypy.jit.backend.llsupport.llmodel import AbstractLLCPU from pypy.jit.backend.x86 import regloc @@ -142,7 +142,9 @@ cast_ptr_to_int._annspecialcase_ = 'specialize:arglltype(0)' cast_ptr_to_int = staticmethod(cast_ptr_to_int) - all_null_registers = lltype.malloc(rffi.LONGP.TO, 24, + all_null_registers = lltype.malloc(rffi.LONGP.TO, + IS_X86_32 and (16+8) # 16 + 8 regs + or (16+16), # 16 + 16 regs flavor='raw', zero=True, immortal=True) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -70,6 +70,7 @@ ("exp", "exp"), 
("fabs", "fabs"), ("floor", "floor"), + ("ceil", "ceil"), ("greater", "greater"), ("greater_equal", "greater_equal"), ("less", "less"), @@ -85,6 +86,8 @@ ("subtract", "subtract"), ('sqrt', 'sqrt'), ("tan", "tan"), + ('bitwise_and', 'bitwise_and'), + ('bitwise_or', 'bitwise_or'), ]: interpleveldefs[exposed] = "interp_ufuncs.get(space).%s" % impl diff --git a/pypy/module/micronumpy/interp_dtype.py b/pypy/module/micronumpy/interp_dtype.py --- a/pypy/module/micronumpy/interp_dtype.py +++ b/pypy/module/micronumpy/interp_dtype.py @@ -47,6 +47,10 @@ def getitem(self, storage, i): return self.itemtype.read(storage, self.itemtype.get_element_size(), i, 0) + def getitem_bool(self, storage, i): + isize = self.itemtype.get_element_size() + return self.itemtype.read_bool(storage, isize, i, 0) + def setitem(self, storage, i, box): self.itemtype.store(storage, self.itemtype.get_element_size(), i, 0, box) @@ -85,6 +89,12 @@ def descr_get_shape(self, space): return space.newtuple([]) + def is_int_type(self): + return self.kind == SIGNEDLTR or self.kind == UNSIGNEDLTR + + def is_bool_type(self): + return self.kind == BOOLLTR + W_Dtype.typedef = TypeDef("dtype", __module__ = "numpypy", __new__ = interp2app(W_Dtype.descr__new__.im_func), diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -4,6 +4,19 @@ from pypy.module.micronumpy.strides import calculate_broadcast_strides,\ calculate_slice_strides +# structures to describe slicing + +class Chunk(object): + def __init__(self, start, stop, step, lgt): + self.start = start + self.stop = stop + self.step = step + self.lgt = lgt + + def extend_shape(self, shape): + if self.step != 0: + shape.append(self.lgt) + class BaseTransform(object): pass @@ -38,11 +51,18 @@ self.size = size def next(self, shapelen): + return self._next(1) + + def _next(self, ofs): arr = instantiate(ArrayIterator) arr.size = self.size - 
arr.offset = self.offset + 1 + arr.offset = self.offset + ofs return arr + def next_no_increase(self, shapelen): + # a hack to make JIT believe this is always virtual + return self._next(0) + def done(self): return self.offset >= self.size diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -2,14 +2,15 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature +from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature,\ + interp_boxes from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator + SkipLastAxisIterator, Chunk, ViewIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -39,7 +40,24 @@ get_printable_location=signature.new_printable_location('slice'), name='numpy_slice', ) - +count_driver = jit.JitDriver( + greens=['shapelen'], + virtualizables=['frame'], + reds=['s', 'frame', 'iter', 'arr'], + name='numpy_count' +) +filter_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['concr', 'argi', 'ri', 'frame', 'v', 'res', 'self'], + name='numpy_filter', +) +filter_set_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['idx', 'idxi', 'frame', 'arr'], + name='numpy_filterset', +) def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] @@ -270,6 +288,9 @@ descr_gt = _binop_impl("greater") descr_ge = 
_binop_impl("greater_equal") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + def _binop_right_impl(ufunc_name): def impl(self, space, w_other): w_other = scalar_w(space, @@ -479,11 +500,69 @@ def _prepare_slice_args(self, space, w_idx): if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [space.decode_index4(w_idx, self.shape[0])] - return [space.decode_index4(w_item, self.shape[i]) for i, w_item in + return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] + return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] + def count_all_true(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(self) + shapelen = len(arr.shape) + s = 0 + iter = None + while not frame.done(): + count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + shapelen=shapelen) + iter = frame.get_final_iter() + s += arr.dtype.getitem_bool(arr.storage, iter.offset) + frame.next(shapelen) + return s + + def getitem_filter(self, space, arr): + concr = arr.get_concrete() + size = self.count_all_true(concr) + res = W_NDimArray(size, [size], self.find_dtype()) + ri = ArrayIterator(size) + shapelen = len(self.shape) + argi = concr.create_iter() + sig = self.find_sig() + frame = sig.create_frame(self) + v = None + while not frame.done(): + filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, + frame=frame, v=v, res=res, sig=sig, + shapelen=shapelen, self=self) + if concr.dtype.getitem_bool(concr.storage, argi.offset): + v = sig.eval(frame, self) + res.setitem(ri.offset, v) + ri = ri.next(1) + else: + ri = ri.next_no_increase(1) + argi = argi.next(shapelen) + frame.next(shapelen) + return res + + def setitem_filter(self, space, idx, val): + size = self.count_all_true(idx) + arr = SliceArray([size], self.dtype, self, val) + sig = arr.find_sig() + shapelen = len(self.shape) + frame = sig.create_frame(arr) + idxi = idx.create_iter() + while not 
frame.done(): + filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, + frame=frame, arr=arr, + shapelen=shapelen) + if idx.dtype.getitem_bool(idx.storage, idxi.offset): + sig.eval(frame, arr) + frame.next_from_second(1) + frame.next_first(shapelen) + idxi = idxi.next(shapelen) + def descr_getitem(self, space, w_idx): + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.getitem_filter(space, w_idx) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -493,6 +572,11 @@ def descr_setitem(self, space, w_idx, w_value): self.invalidated() + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.get_concrete().setitem_filter(space, + w_idx.get_concrete(), + convert_to_array(space, w_value)) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -509,9 +593,8 @@ def create_slice(self, chunks): shape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - shape.append(lgt) + for i, chunk in enumerate(chunks): + chunk.extend_shape(shape) s = i + 1 assert s >= 0 shape += self.shape[s:] @@ -724,8 +807,7 @@ frame=frame, ri=ri, self=self, result=result) - result.dtype.setitem(result.storage, ri.offset, - sig.eval(frame, self)) + result.setitem(ri.offset, sig.eval(frame, self)) frame.next(shapelen) ri = ri.next(shapelen) return result @@ -945,7 +1027,7 @@ builder.append('\n' + indent) else: builder.append(indent) - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: @@ -962,7 +1044,7 @@ builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = 
self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) i += 1 @@ -1091,6 +1173,10 @@ parent) self.start = start + def create_iter(self): + return ViewIterator(self.start, self.strides, self.backstrides, + self.shape) + def setshape(self, space, new_shape): if len(self.shape) < 1: return @@ -1137,6 +1223,9 @@ self.shape = new_shape self.calc_strides(new_shape) + def create_iter(self): + return ArrayIterator(self.size) + def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1191,6 +1280,7 @@ arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) shapelen = len(shape) arr_iter = ArrayIterator(arr.size) + # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] dtype.setitem(arr.storage, arr_iter.offset, @@ -1257,6 +1347,9 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -249,15 +249,16 @@ class W_Ufunc2(W_Ufunc): - _immutable_fields_ = ["comparison_func", "func", "name"] + _immutable_fields_ = ["comparison_func", "func", "name", "int_only"] argcount = 2 def __init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None, comparison_func=False): + identity=None, comparison_func=False, int_only=False): W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) self.func = func self.comparison_func = comparison_func + self.int_only = int_only def call(self, space, args_w): from 
pypy.module.micronumpy.interp_numarray import (Call2, @@ -268,6 +269,7 @@ w_rhs = convert_to_array(space, w_rhs) calc_dtype = find_binop_result_dtype(space, w_lhs.find_dtype(), w_rhs.find_dtype(), + int_only=self.int_only, promote_to_float=self.promote_to_float, promote_bools=self.promote_bools, ) @@ -304,10 +306,12 @@ def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, - promote_bools=False): + promote_bools=False, int_only=False): # dt1.num should be <= dt2.num if dt1.num > dt2.num: dt1, dt2 = dt2, dt1 + if int_only and (not dt1.is_int_type() or not dt2.is_int_type()): + raise OperationError(space.w_TypeError, space.wrap("Unsupported types")) # Some operations promote op(bool, bool) to return int8, rather than bool if promote_bools and (dt1.kind == dt2.kind == interp_dtype.BOOLLTR): return interp_dtype.get_dtype_cache(space).w_int8dtype @@ -425,6 +429,10 @@ ("add", "add", 2, {"identity": 0}), ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), + ("bitwise_and", "bitwise_and", 2, {"identity": 1, + 'int_only': True}), + ("bitwise_or", "bitwise_or", 2, {"identity": 0, + 'int_only': True}), ("divide", "div", 2, {"promote_bools": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), @@ -449,6 +457,7 @@ ("fabs", "fabs", 1, {"promote_to_float": True}), ("floor", "floor", 1, {"promote_to_float": True}), + ("ceil", "ceil", 1, {"promote_to_float": True}), ("exp", "exp", 1, {"promote_to_float": True}), ('sqrt', 'sqrt', 1, {'promote_to_float': True}), @@ -475,7 +484,7 @@ extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, - comparison_func=extra_kwargs.get("comparison_func", False) + comparison_func=extra_kwargs.get("comparison_func", False), ) if argcount == 1: ufunc = W_Ufunc1(func, ufunc_name, **extra_kwargs) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ 
b/pypy/module/micronumpy/signature.py @@ -82,6 +82,16 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + def get_final_iter(self): final_iter = promote(self.final_iter) if final_iter < 0: diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -10,12 +10,12 @@ rstart = start rshape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - rstrides.append(strides[i] * step) - rbackstrides.append(strides[i] * (lgt - 1) * step) - rshape.append(lgt) - rstart += strides[i] * start_ + for i, chunk in enumerate(chunks): + if chunk.step != 0: + rstrides.append(strides[i] * chunk.step) + rbackstrides.append(strides[i] * (chunk.lgt - 1) * chunk.step) + rshape.append(chunk.lgt) + rstart += strides[i] * chunk.start # add a reminder s = i + 1 assert s >= 0 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -2,6 +2,7 @@ import py from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement +from pypy.module.micronumpy.interp_iter import Chunk from pypy.module.micronumpy import signature from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace @@ -37,53 +38,54 @@ def test_create_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start 
== 3 assert s.strides == [10, 50] assert s.backstrides == [40, 100] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 1 assert s.strides == [2, 10, 50] assert s.backstrides == [6, 40, 100] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.strides == [3, 10] assert s.backstrides == [3, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] assert s.backstrides == [12, 2] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 15 assert s.strides == [30, 3, 1] assert s.backstrides == [90, 12, 2] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), + Chunk(1, 0, 0, 1)]) assert s.start == 19 assert s.shape == [2, 1] assert s.strides == [45, 3] assert s.backstrides == [45, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [50] assert s2.parent is a assert s2.backstrides == [100] assert s2.start == 35 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = 
s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [3, 50] assert s2.backstrides == [3, 100] @@ -91,16 +93,16 @@ def test_slice_of_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [1] assert s2.parent is a assert s2.backstrides == [2] assert s2.start == 5 * 15 + 3 * 3 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [45, 1] assert s2.backstrides == [45, 2] @@ -108,14 +110,14 @@ def test_negative_step_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] @@ -124,7 +126,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -134,7 +136,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') r = 
a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -1302,15 +1304,25 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_array_indexing_one_elem(self): + skip("not yet") + from _numpypy import array, arange + raises(IndexError, 'arange(3)[array([3.5])]') + a = arange(3)[array([1])] + assert a == 1 + assert a[0] == 1 + raises(IndexError,'arange(3)[array([15])]') + assert arange(3)[array([-3])] == 0 + raises(IndexError,'arange(3)[array([-15])]') + assert arange(3)[array(1)] == 1 + def test_fill(self): from _numpypy import array - a = array([1, 2, 3]) a.fill(10) assert (a == [10, 10, 10]).all() a.fill(False) assert (a == [0, 0, 0]).all() - b = a[:1] b.fill(4) assert (b == [4]).all() @@ -1324,6 +1336,24 @@ d.fill(100) assert d == 100 + def test_array_indexing_bool(self): + from _numpypy import arange + a = arange(10) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + a = arange(10).reshape(5, 2) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + assert (a[a & 1 == 1] == [1, 3, 5, 7, 9]).all() + + def test_array_indexing_bool_setitem(self): + from _numpypy import arange, array + a = arange(6) + a[a > 3] = 15 + assert (a == [0, 1, 2, 3, 15, 15]).all() + a = arange(6).reshape(3, 2) + a[a & 1 == 1] = array([8, 9, 10]) + assert (a == [[0, 8], [2, 9], [4, 10]]).all() + + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -190,14 +190,24 @@ for i in range(3): 
assert c[i] == a[i] - b[i] - def test_floor(self): - from _numpypy import array, floor - - reference = [-2.0, -1.0, 0.0, 1.0, 1.0] - a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) + def test_floorceil(self): + from _numpypy import array, floor, ceil + import math + reference = [-2.0, -2.0, -1.0, 0.0, 1.0, 1.0, 0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) b = floor(a) for i in range(5): assert b[i] == reference[i] + reference = [-1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) + b = ceil(a) + assert (reference == b).all() + inf = float("inf") + data = [1.5, 2.9999, -1.999, inf] + results = [math.floor(x) for x in data] + assert (floor(data) == results).all() + results = [math.ceil(x) for x in data] + assert (ceil(data) == results).all() def test_copysign(self): from _numpypy import array, copysign @@ -238,7 +248,7 @@ assert b[i] == math.sin(a[i]) a = sin(array([True, False], dtype=bool)) - assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise + assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise assert a[1] == 0.0 def test_cos(self): @@ -259,7 +269,6 @@ for i in range(len(a)): assert b[i] == math.tan(a[i]) - def test_arcsin(self): import math from _numpypy import array, arcsin @@ -283,7 +292,6 @@ for i in range(len(a)): assert b[i] == math.acos(a[i]) - a = array([-10, -1.5, -1.01, 1.01, 1.5, 10, float('nan'), float('inf'), float('-inf')]) b = arccos(a) for f in b: @@ -347,11 +355,20 @@ raises(ValueError, maximum.reduce, []) def test_reduceND(self): - from numpypy import add, arange + from _numpypy import add, arange a = arange(12).reshape(3, 4) assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_bitwise(self): + from _numpypy import bitwise_and, bitwise_or, arange, array + a = arange(6).reshape(2, 3) + assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() + assert (a & 1 == bitwise_and(a, 1)).all() + assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() + 
assert (a | 1 == bitwise_or(a, 1)).all() + raises(TypeError, 'array([1.0]) & 1') + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -217,6 +217,7 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. + py.test.skip("too fragile") self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, 'getfield_gc_pure': 8, diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -94,6 +94,9 @@ width, storage, i, offset )) + def read_bool(self, storage, width, i, offset): + raise NotImplementedError + def store(self, storage, width, i, offset, box): value = self.unbox(box) libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), @@ -168,6 +171,7 @@ @simple_binary_op def min(self, v1, v2): return min(v1, v2) + class Bool(BaseType, Primitive): T = lltype.Bool @@ -185,6 +189,11 @@ else: return self.False + + def read_bool(self, storage, width, i, offset): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + def coerce_subtype(self, space, w_subtype, w_item): # Doesn't return subclasses so it can return the constants. 
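The `int_only` flag threaded through `find_binop_result_dtype` above makes the new bitwise ufuncs reject non-integer dtypes with a TypeError, which is what `raises(TypeError, 'array([1.0]) & 1')` checks. A rough pure-Python sketch of that behaviour (the list-based helper here is illustrative only, not the interp-level API):

```python
def bitwise_and_list(xs, mask):
    # Mimics the int_only check: non-integer operands are rejected before
    # the elementwise & is applied.
    if not isinstance(mask, int) or any(not isinstance(x, int) for x in xs):
        raise TypeError("Unsupported types")
    return [x & mask for x in xs]

bitwise_and_list([0, 1, 2, 3, 4, 5], 1)  # -> [0, 1, 0, 1, 0, 1]
```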
return self._coerce(space, w_item) @@ -253,6 +262,14 @@ assert v == 0 return 0 + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box @@ -374,6 +391,10 @@ return math.floor(v) @simple_unary_op + def ceil(self, v): + return math.ceil(v) + + @simple_unary_op def exp(self, v): try: return math.exp(v) @@ -436,4 +457,4 @@ class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box - format_code = "d" \ No newline at end of file + format_code = "d" diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -624,8 +624,8 @@ if step == 1: assert start >= 0 - assert slicelength >= 0 - del items[start:start+slicelength] + if slicelength > 0: + del items[start:start+slicelength] else: n = len(items) i = start @@ -662,10 +662,11 @@ while i >= lim: items[i] = items[i-delta] i -= 1 - elif start >= 0: + elif delta == 0: + pass + else: + assert start >= 0 # start<0 is only possible with slicelength==0 del items[start:start+delta] - else: - assert delta==0 # start<0 is only possible with slicelength==0 elif len2 != slicelength: # No resize for extended slices raise operationerrfmt(space.w_ValueError, "attempt to " "assign sequence of size %d to extended slice of size %d", diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -804,10 +804,11 @@ while i >= lim: items[i] = items[i-delta] i -= 1 - elif start >= 0: + elif delta == 0: + pass + else: + assert start >= 0 # start<0 is only possible with slicelength==0 del items[start:start+delta] - else: - assert delta==0 # start<0 is only possible with slicelength==0 elif len2 != slicelength: # No resize for extended slices 
raise operationerrfmt(self.space.w_ValueError, "attempt to " "assign sequence of size %d to extended slice of size %d", @@ -851,8 +852,8 @@ if step == 1: assert start >= 0 - assert slicelength >= 0 - del items[start:start+slicelength] + if slicelength > 0: + del items[start:start+slicelength] else: n = len(items) i = start diff --git a/pypy/objspace/std/test/test_bytearrayobject.py b/pypy/objspace/std/test/test_bytearrayobject.py --- a/pypy/objspace/std/test/test_bytearrayobject.py +++ b/pypy/objspace/std/test/test_bytearrayobject.py @@ -1,5 +1,9 @@ +from pypy import conftest class AppTestBytesArray: + def setup_class(cls): + cls.w_runappdirect = cls.space.wrap(conftest.option.runappdirect) + def test_basics(self): b = bytearray() assert type(b) is bytearray @@ -439,3 +443,15 @@ def test_reduce(self): assert bytearray('caf\xe9').__reduce__() == ( bytearray, (u'caf\xe9', 'latin-1'), None) + + def test_setitem_slice_performance(self): + # because of a complexity bug, this used to take forever on a + # translated pypy. On CPython2.6 -A, it takes around 8 seconds. + if self.runappdirect: + count = 16*1024*1024 + else: + count = 1024 + b = bytearray(count) + for i in range(count): + b[i:i+1] = 'y' + assert str(b) == 'y' * count diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -397,6 +397,7 @@ on_cpython = (option.runappdirect and not hasattr(sys, 'pypy_translation_info')) cls.w_on_cpython = cls.space.wrap(on_cpython) + cls.w_runappdirect = cls.space.wrap(option.runappdirect) def test_getstrategyfromlist_w(self): l0 = ["a", "2", "a", True] @@ -898,6 +899,18 @@ l[::-1] = l assert l == [6,5,4,3,2,1] + def test_setitem_slice_performance(self): + # because of a complexity bug, this used to take forever on a + # translated pypy. On CPython2.6 -A, it takes around 5 seconds. 
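The bytearray and list changes above guard the `del items[start:start+slicelength]` call so that a zero-length deletion is skipped entirely, and the new `test_setitem_slice_performance` tests exercise exactly the repeated `b[i:i+1] = 'y'` pattern that used to hit the complexity bug. A plain-Python sketch of the guarded path (illustrative only, not the RPython code from the diff):

```python
def setitem_slice(items, start, stop, new_items):
    # Guard added by the patch: skip the deletion when the slice is empty
    # instead of always paying for the delete call.
    slicelength = stop - start
    if slicelength > 0:
        del items[start:stop]
    items[start:start] = list(new_items)
    return items

b = [None] * 8
for i in range(8):
    setitem_slice(b, i, i + 1, ['y'])
# b is now ['y'] * 8
```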
+ if self.runappdirect: + count = 16*1024*1024 + else: + count = 1024 + b = [None] * count + for i in range(count): + b[i:i+1] = ['y'] + assert b == ['y'] * count + def test_recursive_repr(self): l = [] assert repr(l) == '[]' diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -292,6 +292,10 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 + if y == 2.0: + return x * x # this is always a correct answer, and is relatively + # common in user programs. + if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 diff --git a/pypy/tool/test/test_version.py b/pypy/tool/test/test_version.py --- a/pypy/tool/test/test_version.py +++ b/pypy/tool/test/test_version.py @@ -1,6 +1,22 @@ import os, sys import py -from pypy.tool.version import get_repo_version_info +from pypy.tool.version import get_repo_version_info, _get_hg_archive_version + +def test_hg_archival_version(tmpdir): + def version_for(name, **kw): + path = tmpdir.join(name) + path.write('\n'.join('%s: %s' % x for x in kw.items())) + return _get_hg_archive_version(str(path)) + + assert version_for('release', + tag='release-123', + node='000', + ) == ('PyPy', 'release-123', '000') + assert version_for('somebranch', + node='000', + branch='something', + ) == ('PyPy', 'something', '000') + def test_get_repo_version_info(): assert get_repo_version_info(None) diff --git a/pypy/tool/version.py b/pypy/tool/version.py --- a/pypy/tool/version.py +++ b/pypy/tool/version.py @@ -3,111 +3,139 @@ from subprocess import Popen, PIPE import pypy pypydir = os.path.dirname(os.path.abspath(pypy.__file__)) +pypyroot = os.path.dirname(pypydir) +default_retval = 'PyPy', '?', '?' 
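The `ll_math` hunk above special-cases `y == 2.0` before the IEEE-specials handling, since squaring is common in user programs and `x * x` is always a correct answer for `x ** 2.0`. A hedged sketch of the same fast path (the wrapper name is made up for illustration):

```python
import math

def pow_fast(x, y):
    # Fast path from the diff: x ** 2.0 is always correctly computed as a
    # plain multiplication, and is relatively common in user programs.
    if y == 2.0:
        return x * x
    return math.pow(x, y)

pow_fast(3.0, 2.0)  # -> 9.0
```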
+ +def maywarn(err, repo_type='Mercurial'): + if not err: + return + + from pypy.tool.ansi_print import ansi_log + log = py.log.Producer("version") + py.log.setconsumer("version", ansi_log) + log.WARNING('Errors getting %s information: %s' % (repo_type, err)) def get_repo_version_info(hgexe=None): '''Obtain version information by invoking the 'hg' or 'git' commands.''' - # TODO: support extracting from .hg_archival.txt - - default_retval = 'PyPy', '?', '?' - pypyroot = os.path.abspath(os.path.join(pypydir, '..')) - - def maywarn(err, repo_type='Mercurial'): - if not err: - return - - from pypy.tool.ansi_print import ansi_log - log = py.log.Producer("version") - py.log.setconsumer("version", ansi_log) - log.WARNING('Errors getting %s information: %s' % (repo_type, err)) # Try to see if we can get info from Git if hgexe is not specified. if not hgexe: if os.path.isdir(os.path.join(pypyroot, '.git')): - gitexe = py.path.local.sysfind('git') - if gitexe: - try: - p = Popen( - [str(gitexe), 'rev-parse', 'HEAD'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - except OSError, e: - maywarn(e, 'Git') - return default_retval - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return default_retval - revision_id = p.stdout.read().strip()[:12] - p = Popen( - [str(gitexe), 'describe', '--tags', '--exact-match'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - if p.wait() != 0: - p = Popen( - [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, - cwd=pypyroot - ) - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return 'PyPy', '?', revision_id - branch = '?' - for line in p.stdout.read().strip().split('\n'): - if line.startswith('* '): - branch = line[1:].strip() - if branch == '(no branch)': - branch = '?' - break - return 'PyPy', branch, revision_id - return 'PyPy', p.stdout.read().strip(), revision_id + return _get_git_version() # Fallback to trying Mercurial. 
if hgexe is None: hgexe = py.path.local.sysfind('hg') - if not os.path.isdir(os.path.join(pypyroot, '.hg')): + if os.path.isfile(os.path.join(pypyroot, '.hg_archival.txt')): + return _get_hg_archive_version(os.path.join(pypyroot, '.hg_archival.txt')) + elif not os.path.isdir(os.path.join(pypyroot, '.hg')): maywarn('Not running from a Mercurial repository!') return default_retval elif not hgexe: maywarn('Cannot find Mercurial command!') return default_retval else: - env = dict(os.environ) - # get Mercurial into scripting mode - env['HGPLAIN'] = '1' - # disable user configuration, extensions, etc. - env['HGRCPATH'] = os.devnull + return _get_hg_version(hgexe) - try: - p = Popen([str(hgexe), 'version', '-q'], - stdout=PIPE, stderr=PIPE, env=env) - except OSError, e: - maywarn(e) - return default_retval - if not p.stdout.read().startswith('Mercurial Distributed SCM'): - maywarn('command does not identify itself as Mercurial') - return default_retval +def _get_hg_version(hgexe): + env = dict(os.environ) + # get Mercurial into scripting mode + env['HGPLAIN'] = '1' + # disable user configuration, extensions, etc. + env['HGRCPATH'] = os.devnull - p = Popen([str(hgexe), 'id', '-i', pypyroot], + try: + p = Popen([str(hgexe), 'version', '-q'], stdout=PIPE, stderr=PIPE, env=env) - hgid = p.stdout.read().strip() + except OSError, e: + maywarn(e) + return default_retval + + if not p.stdout.read().startswith('Mercurial Distributed SCM'): + maywarn('command does not identify itself as Mercurial') + return default_retval + + p = Popen([str(hgexe), 'id', '-i', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgid = p.stdout.read().strip() + maywarn(p.stderr.read()) + if p.wait() != 0: + hgid = '?' 
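The `_get_hg_version` helper above runs `hg` with `HGPLAIN=1` and `HGRCPATH` pointed at the null device so that user configuration and extensions cannot change the output format. A standalone sketch of building that environment and issuing the `hg id -i` query (helper names are invented for illustration):

```python
import os
import subprocess

def hg_scripting_env():
    env = dict(os.environ)
    env['HGPLAIN'] = '1'          # put Mercurial into scripting mode
    env['HGRCPATH'] = os.devnull  # disable user configuration, extensions, etc.
    return env

def hg_id(root, hgexe='hg'):
    # Same shape as the diff's `hg id -i <root>` call; '?' on any failure.
    try:
        p = subprocess.Popen([hgexe, 'id', '-i', root],
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                             env=hg_scripting_env())
    except OSError:
        return '?'
    out, _ = p.communicate()
    return out.decode().strip() if p.returncode == 0 else '?'
```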
+ + p = Popen([str(hgexe), 'id', '-t', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] + maywarn(p.stderr.read()) + if p.wait() != 0: + hgtags = ['?'] + + if hgtags: + return 'PyPy', hgtags[0], hgid + else: + # use the branch instead + p = Popen([str(hgexe), 'id', '-b', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgbranch = p.stdout.read().strip() maywarn(p.stderr.read()) + + return 'PyPy', hgbranch, hgid + + +def _get_hg_archive_version(path): + fp = open(path) + try: + data = dict(x.split(': ', 1) for x in fp.read().splitlines()) + finally: + fp.close() + if 'tag' in data: + return 'PyPy', data['tag'], data['node'] + else: + return 'PyPy', data['branch'], data['node'] + + +def _get_git_version(): + #XXX: this function is a untested hack, + # so the git mirror tav made will work + gitexe = py.path.local.sysfind('git') + if not gitexe: + return default_retval + + try: + p = Popen( + [str(gitexe), 'rev-parse', 'HEAD'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + except OSError, e: + maywarn(e, 'Git') + return default_retval + if p.wait() != 0: + maywarn(p.stderr.read(), 'Git') + return default_retval + revision_id = p.stdout.read().strip()[:12] + p = Popen( + [str(gitexe), 'describe', '--tags', '--exact-match'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + if p.wait() != 0: + p = Popen( + [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, + cwd=pypyroot + ) if p.wait() != 0: - hgid = '?' + maywarn(p.stderr.read(), 'Git') + return 'PyPy', '?', revision_id + branch = '?' + for line in p.stdout.read().strip().split('\n'): + if line.startswith('* '): + branch = line[1:].strip() + if branch == '(no branch)': + branch = '?' 
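The `_get_hg_archive_version` function added in this diff reads an `.hg_archival.txt` file of `key: value` lines and prefers a `tag` entry over the `branch` entry, as the new `test_hg_archival_version` verifies. The same parsing idea in isolation (a simplified sketch, not the PyPy module itself):

```python
def parse_hg_archival(text, project='PyPy'):
    # Each line of .hg_archival.txt is "key: value"; a tagged release
    # carries a "tag" line, otherwise fall back to the branch name.
    data = dict(line.split(': ', 1) for line in text.splitlines())
    if 'tag' in data:
        return project, data['tag'], data['node']
    return project, data.get('branch', '?'), data['node']

parse_hg_archival('tag: release-123\nnode: 000')
# -> ('PyPy', 'release-123', '000')
```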
+ break + return 'PyPy', branch, revision_id + return 'PyPy', p.stdout.read().strip(), revision_id - p = Popen([str(hgexe), 'id', '-t', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] - maywarn(p.stderr.read()) - if p.wait() != 0: - hgtags = ['?'] - if hgtags: - return 'PyPy', hgtags[0], hgid - else: - # use the branch instead - p = Popen([str(hgexe), 'id', '-b', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgbranch = p.stdout.read().strip() - maywarn(p.stderr.read()) - - return 'PyPy', hgbranch, hgid +if __name__ == '__main__': + print get_repo_version_info() From noreply at buildbot.pypy.org Sat Jan 21 14:12:06 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 14:12:06 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: start working on a branch that moves back stuff from interp-level to app-level Message-ID: <20120121131206.B683482D03@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51575:78fdc8d84157 Date: 2012-01-21 15:10 +0200 http://bitbucket.org/pypy/pypy/changeset/78fdc8d84157/ Log: start working on a branch that moves back stuff from interp-level to app-level diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, NoneNotWrapped +from pypy.interpreter.gateway import interp2app, NoneNotWrapped, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature,\ interp_boxes @@ -11,6 +11,7 @@ from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, 
OneDimIterator,\ SkipLastAxisIterator, Chunk, ViewIterator +from pypy.module.micronumpy.appbridge import get_appbridge_cache numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -661,17 +662,13 @@ w_denom = space.wrap(self.shape[dim]) return space.div(self.descr_sum_promote(space, w_axis), w_denom) - def descr_var(self, space): - # var = mean((values - mean(values)) ** 2) - w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) - assert isinstance(w_res, BaseArray) - w_res = w_res.descr_pow(space, space.wrap(2)) - assert isinstance(w_res, BaseArray) - return w_res.descr_mean(space, space.w_None) + def descr_var(self, space, w_axis=None): + return get_appbridge_cache(space).call_method(space, '_var', self, + w_axis) - def descr_std(self, space): - # std(v) = sqrt(var(v)) - return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_std(self, space, w_axis=None): + return get_appbridge_cache(space).call_method(space, '_std', self, + w_axis) def descr_fill(self, space, w_value): concr = self.get_concrete_or_scalar() @@ -1245,8 +1242,13 @@ shape.append(item) return size, shape -def array(space, w_item_or_iterable, w_dtype=None, w_order=NoneNotWrapped): + at unwrap_spec(subok=bool, copy=bool, ownmaskna=bool) +def array(space, w_item_or_iterable, w_dtype=None, w_order=None, + subok=True, copy=False, w_maskna=None, ownmaskna=False): # find scalar + if (not subok or copy or not space.is_w(w_maskna, space.w_None) or + ownmaskna): + raise OperationError(space.w_NotImplementedError, space.wrap("Unsupported args")) if not space.issequence_w(w_item_or_iterable): if space.is_w(w_dtype, space.w_None): w_dtype = interp_ufuncs.find_dtype_for_scalar(space, @@ -1255,7 +1257,7 @@ space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) return scalar_w(space, dtype, w_item_or_iterable) - if w_order is None: + if space.is_w(w_order, space.w_None): order = 'C' else: order = space.str_w(w_order) diff --git 
a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1029,8 +1029,8 @@ assert a.var() == 0.0 a = arange(10).reshape(5, 2) assert a.var() == 8.25 - #assert (a.var(0) == [8, 8]).all() - #assert (a.var(1) == [.25] * 5).all() + assert (a.var(0) == [8, 8]).all() + assert (a.var(1) == [.25] * 5).all() def test_std(self): from _numpypy import array From noreply at buildbot.pypy.org Sat Jan 21 14:34:28 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 14:34:28 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: add count_reduce_numbers Message-ID: <20120121133428.990F9710653@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51576:f53826f6f84a Date: 2012-01-21 15:33 +0200 http://bitbucket.org/pypy/pypy/changeset/f53826f6f84a/ Log: add count_reduce_numbers diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -28,6 +28,8 @@ 'fromstring': 'interp_support.fromstring', 'flatiter': 'interp_numarray.W_FlatIterator', + 'count_reduce_items': 'interp_numarray.count_reduce_items', + 'True_': 'types.Bool.True', 'False_': 'types.Bool.False', diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1308,6 +1308,23 @@ arr.dtype.fill(arr.storage, one, 0, size) return space.wrap(arr) + at unwrap_spec(arr=BaseArray, skipna=bool, keepdims=bool) +def count_reduce_items(space, arr, w_axis=None, skipna=False, keepdims=True): + if not keepdims: + raise OperationError(space.w_NotImplementedError, space.wrap("unsupported")) + if space.is_w(w_axis, space.w_None): + s = 1 + for elem in arr.shape: + s *= elem + return 
space.wrap(s) + if space.isinstance_w(w_axis, space.w_int): + return space.wrap(arr.shape[space.int_w(w_axis)]) + s = 1 + elems = space.fixedview(w_axis) + for w_elem in elems: + s *= arr.shape[space.int_w(w_elem)] + return space.wrap(s) + def dot(space, w_obj, w_obj2): w_arr = convert_to_array(space, w_obj) if isinstance(w_arr, Scalar): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty from pypy.module.micronumpy import interp_boxes, interp_dtype from pypy.module.micronumpy.signature import ReduceSignature,\ @@ -58,7 +58,9 @@ ) return self.call(space, __args__.arguments_w) - def descr_reduce(self, space, w_obj, w_dim=0): + @unwrap_spec(skipna=bool, keepdims=bool) + def descr_reduce(self, space, w_obj, w_axis=None, w_dtype=None, + skipna=False, keepdims=True): """reduce(...) 
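The interp-level `count_reduce_items` above multiplies the shape entries along the requested axes: all of them when `axis` is None, a single entry for an integer axis, and a product over a tuple of axes. The same semantics in a pure-Python sketch over a shape tuple:

```python
def count_reduce_items(shape, axis=None):
    # Number of array items a reduction along `axis` combines: the product
    # of the shape entries for the chosen axes, or of all axes when None.
    if axis is None:
        axes = list(range(len(shape)))
    elif isinstance(axis, int):
        axes = [axis]
    else:
        axes = list(axis)
    count = 1
    for a in axes:
        count *= shape[a]
    return count

count_reduce_items((2, 3, 4))          # -> 24
count_reduce_items((2, 3, 4), 1)       # -> 3
count_reduce_items((2, 3, 4), (1, 2))  # -> 12
```

These values match the assertions in the diff's `test_count_reduce_items` for `arange(24).reshape(2, 3, 4)`.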
reduce(a, axis=0) @@ -111,15 +113,18 @@ array([[ 1, 5], [ 9, 13]]) """ - return self.reduce(space, w_obj, False, False, w_dim) + if space.is_w(w_axis, space.w_None): + axis = -1 + else: + axis = space.int_w(w_axis) + return self.reduce(space, w_obj, False, False, axis) - def reduce(self, space, w_obj, multidim, promote_to_largest, w_dim): + def reduce(self, space, w_obj, multidim, promote_to_largest, dim): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ Scalar if self.argcount != 2: raise OperationError(space.w_ValueError, space.wrap("reduce only " "supported for binary functions")) - dim = space.int_w(w_dim) assert isinstance(self, W_Ufunc2) obj = convert_to_array(space, w_obj) if dim >= len(obj.shape): @@ -494,3 +499,4 @@ def get(space): return space.fromcache(UfuncState) + diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -394,3 +394,11 @@ (3, 3.5), ]: assert ufunc(a, b) == func(a, b) + + def test_count_reduce_items(self): + from _numpypy import count_reduce_items, arange + a = arange(24).reshape(2, 3, 4) + assert count_reduce_items(a) == 24 + assert count_reduce_items(a, 1) == 3 + assert count_reduce_items(a, (1, 2)) == 3 * 4 + From noreply at buildbot.pypy.org Sat Jan 21 14:42:10 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 14:42:10 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: Add missing files, to progress, we need out attribute (Which is pointless Message-ID: <20120121134210.2FE65710653@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51577:1ff91b2e045c Date: 2012-01-21 15:41 +0200 http://bitbucket.org/pypy/pypy/changeset/1ff91b2e045c/ Log: Add missing files, to progress, we need out attribute (Which is pointless on pypy btw) diff --git a/lib_pypy/numpypy/core/_methods.py 
b/lib_pypy/numpypy/core/_methods.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/_methods.py @@ -0,0 +1,98 @@ +# Array methods which are called by the both the C-code for the method +# and the Python code for the NumPy-namespace function + +import _numpypy as mu +um = mu +#from numpypy.core import umath as um +from numpypy.core.numeric import asanyarray + +def _amax(a, axis=None, out=None, skipna=False, keepdims=False): + return um.maximum.reduce(a, axis=axis, + out=out, skipna=skipna, keepdims=keepdims) + +def _amin(a, axis=None, out=None, skipna=False, keepdims=False): + return um.minimum.reduce(a, axis=axis, + out=out, skipna=skipna, keepdims=keepdims) + +def _sum(a, axis=None, dtype=None, out=None, skipna=False, keepdims=False): + return um.add.reduce(a, axis=axis, dtype=dtype, + out=out, skipna=skipna, keepdims=keepdims) + +def _prod(a, axis=None, dtype=None, out=None, skipna=False, keepdims=False): + return um.multiply.reduce(a, axis=axis, dtype=dtype, + out=out, skipna=skipna, keepdims=keepdims) + +def _mean(a, axis=None, dtype=None, out=None, skipna=False, keepdims=False): + arr = asanyarray(a) + + # Upgrade bool, unsigned int, and int to float64 + if dtype is None and arr.dtype.kind in ['b','u','i']: + ret = um.add.reduce(arr, axis=axis, dtype='f8', + out=out, skipna=skipna, keepdims=keepdims) + else: + ret = um.add.reduce(arr, axis=axis, dtype=dtype, + out=out, skipna=skipna, keepdims=keepdims) + rcount = mu.count_reduce_items(arr, axis=axis, + skipna=skipna, keepdims=keepdims) + if isinstance(ret, mu.ndarray): + ret = um.true_divide(ret, rcount, + out=ret, casting='unsafe', subok=False) + else: + ret = ret / float(rcount) + return ret + +def _var(a, axis=None, dtype=None, out=None, ddof=0, + skipna=False, keepdims=False): + arr = asanyarray(a) + + # First compute the mean, saving 'rcount' for reuse later + if dtype is None and arr.dtype.kind in ['b','u','i']: + arrmean = um.add.reduce(arr, axis=axis, dtype='f8', + skipna=skipna, 
keepdims=True) + else: + arrmean = um.add.reduce(arr, axis=axis, dtype=dtype, + skipna=skipna, keepdims=True) + rcount = mu.count_reduce_items(arr, axis=axis, + skipna=skipna, keepdims=True) + if isinstance(arrmean, mu.ndarray): + arrmean = um.true_divide(arrmean, rcount, + out=arrmean, casting='unsafe', subok=False) + else: + arrmean = arrmean / float(rcount) + + # arr - arrmean + x = arr - arrmean + + # (arr - arrmean) ** 2 + if arr.dtype.kind == 'c': + x = um.multiply(x, um.conjugate(x), out=x).real + else: + x = um.multiply(x, x, out=x) + + # add.reduce((arr - arrmean) ** 2, axis) + ret = um.add.reduce(x, axis=axis, dtype=dtype, out=out, + skipna=skipna, keepdims=keepdims) + + # add.reduce((arr - arrmean) ** 2, axis) / (n - ddof) + if not keepdims and isinstance(rcount, mu.ndarray): + rcount = rcount.squeeze(axis=axis) + rcount -= ddof + if isinstance(ret, mu.ndarray): + ret = um.true_divide(ret, rcount, + out=ret, casting='unsafe', subok=False) + else: + ret = ret / float(rcount) + + return ret + +def _std(a, axis=None, dtype=None, out=None, ddof=0, + skipna=False, keepdims=False): + ret = _var(a, axis=axis, dtype=dtype, out=out, ddof=ddof, + skipna=skipna, keepdims=keepdims) + + if isinstance(ret, mu.ndarray): + ret = um.sqrt(ret, out=ret) + else: + ret = um.sqrt(ret) + + return ret diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/numeric.py @@ -0,0 +1,62 @@ + +from _numpypy import array + +def asanyarray(a, dtype=None, order=None, maskna=None, ownmaskna=False): + """ + Convert the input to an ndarray, but pass ndarray subclasses through. + + Parameters + ---------- + a : array_like + Input data, in any form that can be converted to an array. This + includes scalars, lists, lists of tuples, tuples, tuples of tuples, + tuples of lists, and ndarrays. + dtype : data-type, optional + By default, the data-type is inferred from the input data. 
+ order : {'C', 'F'}, optional + Whether to use row-major ('C') or column-major ('F') memory + representation. Defaults to 'C'. + maskna : bool or None, optional + If this is set to True, it forces the array to have an NA mask. + If this is set to False, it forces the array to not have an NA + mask. + ownmaskna : bool, optional + If this is set to True, forces the array to have a mask which + it owns. + + Returns + ------- + out : ndarray or an ndarray subclass + Array interpretation of `a`. If `a` is an ndarray or a subclass + of ndarray, it is returned as-is and no copy is performed. + + See Also + -------- + asarray : Similar function which always returns ndarrays. + ascontiguousarray : Convert input to a contiguous array. + asfarray : Convert input to a floating point ndarray. + asfortranarray : Convert input to an ndarray with column-major + memory order. + asarray_chkfinite : Similar function which checks input for NaNs and + Infs. + fromiter : Create an array from an iterator. + fromfunction : Construct an array by executing a function on grid + positions. 
+ + Examples + -------- + Convert a list into an array: + + >>> a = [1, 2] + >>> np.asanyarray(a) + array([1, 2]) + + Instances of `ndarray` subclasses are passed through as-is: + + >>> a = np.matrix([1, 2]) + >>> np.asanyarray(a) is a + True + + """ + return array(a, dtype, copy=False, order=order, subok=True, + maskna=maskna, ownmaskna=ownmaskna) diff --git a/pypy/module/micronumpy/appbridge.py b/pypy/module/micronumpy/appbridge.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/appbridge.py @@ -0,0 +1,28 @@ + +from pypy.rlib.objectmodel import specialize + +class AppBridgeCache(object): + w__var = None + w__std = None + w_module = None + + def __init__(self, space): + self.w_import = space.appexec([], """(): + def f(): + mod = __import__('numpypy.core._methods', {}, {}, ['']) + return mod + return f + """) + + @specialize.arg(2) + def call_method(self, space, name, *args): + w_meth = getattr(self, 'w_' + name) + if w_meth is None: + if self.w_module is None: + self.w_module = space.call_function(self.w_import) + w_meth = space.getattr(self.w_module, space.wrap(name)) + setattr(self, 'w_' + name, w_meth) + return space.call_function(w_meth, *args) + +def get_appbridge_cache(space): + return space.fromcache(AppBridgeCache) From noreply at buildbot.pypy.org Sat Jan 21 14:48:41 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 14:48:41 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: rebinding is a bit harmful for now. Let's not do it Message-ID: <20120121134841.397EC710653@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51578:d6c604ffffa0 Date: 2012-01-21 15:48 +0200 http://bitbucket.org/pypy/pypy/changeset/d6c604ffffa0/ Log: rebinding is a bit harmful for now. 
Let's not do it diff --git a/lib_pypy/numpypy/core/_methods.py b/lib_pypy/numpypy/core/_methods.py --- a/lib_pypy/numpypy/core/_methods.py +++ b/lib_pypy/numpypy/core/_methods.py @@ -55,23 +55,23 @@ rcount = mu.count_reduce_items(arr, axis=axis, skipna=skipna, keepdims=True) if isinstance(arrmean, mu.ndarray): - arrmean = um.true_divide(arrmean, rcount, - out=arrmean, casting='unsafe', subok=False) + arrmean2 = um.true_divide(arrmean, rcount, + casting='unsafe', subok=False) else: - arrmean = arrmean / float(rcount) + arrmean2 = arrmean / float(rcount) # arr - arrmean - x = arr - arrmean + x = arr - arrmean2 # (arr - arrmean) ** 2 if arr.dtype.kind == 'c': - x = um.multiply(x, um.conjugate(x), out=x).real + y = um.multiply(x, um.conjugate(x)).real else: - x = um.multiply(x, x, out=x) + y = um.multiply(x, x) # add.reduce((arr - arrmean) ** 2, axis) - ret = um.add.reduce(x, axis=axis, dtype=dtype, out=out, - skipna=skipna, keepdims=keepdims) + ret = um.add.reduce(y, axis=axis, dtype=dtype, out=out, + skipna=skipna, keepdims=keepdims) # add.reduce((arr - arrmean) ** 2, axis) / (n - ddof) if not keepdims and isinstance(rcount, mu.ndarray): diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -60,7 +60,7 @@ @unwrap_spec(skipna=bool, keepdims=bool) def descr_reduce(self, space, w_obj, w_axis=None, w_dtype=None, - skipna=False, keepdims=True): + skipna=False, keepdims=True, w_out=None): """reduce(...) 
reduce(a, axis=0) @@ -113,6 +113,9 @@ array([[ 1, 5], [ 9, 13]]) """ + if not space.is_w(w_out, space.w_None): + raise OperationError(space.w_NotImplementedError, space.wrap( + "out not supported")) if space.is_w(w_axis, space.w_None): axis = -1 else: From noreply at buildbot.pypy.org Sat Jan 21 14:52:30 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 14:52:30 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: change rebindings, nonsense Message-ID: <20120121135230.EDE48710653@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51579:ac6129336b42 Date: 2012-01-21 15:51 +0200 http://bitbucket.org/pypy/pypy/changeset/ac6129336b42/ Log: change rebindings, nonsense diff --git a/lib_pypy/numpypy/core/_methods.py b/lib_pypy/numpypy/core/_methods.py --- a/lib_pypy/numpypy/core/_methods.py +++ b/lib_pypy/numpypy/core/_methods.py @@ -55,22 +55,22 @@ rcount = mu.count_reduce_items(arr, axis=axis, skipna=skipna, keepdims=True) if isinstance(arrmean, mu.ndarray): - arrmean2 = um.true_divide(arrmean, rcount, + arrmean = um.true_divide(arrmean, rcount, casting='unsafe', subok=False) else: - arrmean2 = arrmean / float(rcount) + arrmean = arrmean / float(rcount) # arr - arrmean - x = arr - arrmean2 + x = arr - arrmean # (arr - arrmean) ** 2 if arr.dtype.kind == 'c': - y = um.multiply(x, um.conjugate(x)).real + x = um.multiply(x, um.conjugate(x)).real else: - y = um.multiply(x, x) + x = um.multiply(x, x) # add.reduce((arr - arrmean) ** 2, axis) - ret = um.add.reduce(y, axis=axis, dtype=dtype, out=out, + ret = um.add.reduce(x, axis=axis, dtype=dtype, out=out, skipna=skipna, keepdims=keepdims) # add.reduce((arr - arrmean) ** 2, axis) / (n - ddof) From noreply at buildbot.pypy.org Sat Jan 21 15:54:37 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sat, 21 Jan 2012 15:54:37 +0100 (CET) Subject: [pypy-commit] pypy py3k: Start to port test_itertools' apptests to py3k Message-ID: 
<20120121145437.1553A82D3C@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51580:250aef313572 Date: 2012-01-21 15:51 +0100 http://bitbucket.org/pypy/pypy/changeset/250aef313572/ Log: Start to port test_itertools' apptests to py3k diff --git a/pypy/module/itertools/test/test_itertools.py b/pypy/module/itertools/test/test_itertools.py --- a/pypy/module/itertools/test/test_itertools.py +++ b/pypy/module/itertools/test/test_itertools.py @@ -11,21 +11,21 @@ it = itertools.count() for x in range(10): - assert it.next() == x + assert next(it) == x def test_count_firstval(self): import itertools it = itertools.count(3) for x in range(10): - assert it.next() == x + 3 + assert next(it) == x + 3 def test_count_repr(self): import itertools it = itertools.count(123) assert repr(it) == 'count(123)' - it.next() + next(it) assert repr(it) == 'count(124)' it = itertools.count(12.1, 1.0) assert repr(it) == 'count(12.1, 1.0)' @@ -43,7 +43,7 @@ it = itertools.repeat(o) for x in range(10): - assert o is it.next() + assert o is next(it) def test_repeat_times(self): import itertools @@ -51,8 +51,8 @@ times = 10 it = itertools.repeat(None, times) for i in range(times): - it.next() - raises(StopIteration, it.next) + next(it) + raises(StopIteration, next, it) #---does not work in CPython 2.5 #it = itertools.repeat(None, None) @@ -60,12 +60,12 @@ # it.next() # Should be no StopIteration it = itertools.repeat(None, 0) - raises(StopIteration, it.next) - raises(StopIteration, it.next) + raises(StopIteration, next, it) + raises(StopIteration, next, it) it = itertools.repeat(None, -1) - raises(StopIteration, it.next) - raises(StopIteration, it.next) + raises(StopIteration, next, it) + raises(StopIteration, next, it) def test_repeat_overflow(self): import itertools @@ -78,12 +78,12 @@ it = itertools.repeat('foobar') assert repr(it) == "repeat('foobar')" - it.next() + next(it) assert repr(it) == "repeat('foobar')" it = itertools.repeat('foobar', 10) assert repr(it) 
== "repeat('foobar', 10)" - it.next() + next(it) assert repr(it) == "repeat('foobar', 9)" list(it) assert repr(it) == "repeat('foobar', 0)" @@ -92,22 +92,22 @@ import itertools it = itertools.takewhile(bool, []) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.takewhile(bool, [False, True, True]) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.takewhile(bool, [1, 2, 3, 0, 1, 1]) for x in [1, 2, 3]: - assert it.next() == x + assert next(it) == x - raises(StopIteration, it.next) + raises(StopIteration, next, it) def test_takewhile_wrongargs(self): import itertools it = itertools.takewhile(None, [1]) - raises(TypeError, it.next) + raises(TypeError, next, it) raises(TypeError, itertools.takewhile, bool, None) @@ -115,25 +115,25 @@ import itertools it = itertools.dropwhile(bool, []) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.dropwhile(bool, [True, True, True]) - raises(StopIteration, it.next) + raises(StopIteration, next, it) def is_odd(arg): return (arg % 2 == 1) it = itertools.dropwhile(is_odd, [1, 3, 5, 2, 4, 6]) for x in [2, 4, 6]: - assert it.next() == x + assert next(it) == x - raises(StopIteration, it.next) + raises(StopIteration, next, it) def test_dropwhile_wrongargs(self): import itertools it = itertools.dropwhile(None, [1]) - raises(TypeError, it.next) + raises(TypeError, next, it) raises(TypeError, itertools.dropwhile, bool, None) @@ -141,26 +141,26 @@ import itertools it = itertools.ifilter(None, []) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.ifilter(None, [1, 0, 2, 3, 0]) for x in [1, 2, 3]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) def is_odd(arg): return (arg % 2 == 1) it = itertools.ifilter(is_odd, [1, 2, 3, 4, 5, 6]) for x in [1, 3, 5]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, 
next, it) def test_ifilter_wrongargs(self): import itertools it = itertools.ifilter(0, [1]) - raises(TypeError, it.next) + raises(TypeError, next, it) raises(TypeError, itertools.ifilter, bool, None) @@ -168,26 +168,26 @@ import itertools it = itertools.ifilterfalse(None, []) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.ifilterfalse(None, [1, 0, 2, 3, 0]) for x in [0, 0]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) def is_odd(arg): return (arg % 2 == 1) it = itertools.ifilterfalse(is_odd, [1, 2, 3, 4, 5, 6]) for x in [2, 4, 6]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) def test_ifilterfalse_wrongargs(self): import itertools it = itertools.ifilterfalse(0, [1]) - raises(TypeError, it.next) + raises(TypeError, next, it) raises(TypeError, itertools.ifilterfalse, bool, None) @@ -195,83 +195,83 @@ import itertools it = itertools.islice([], 0) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.islice([1, 2, 3], 0) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.islice([1, 2, 3, 4, 5], 3) for x in [1, 2, 3]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) it = itertools.islice([1, 2, 3, 4, 5], 1, 3) for x in [2, 3]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) it = itertools.islice([1, 2, 3, 4, 5], 0, 3, 2) for x in [1, 3]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) it = itertools.islice([1, 2, 3], 0, None) for x in [1, 2, 3]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) - assert list(itertools.islice(xrange(100), 10, 3)) == [] + assert 
list(itertools.islice(range(100), 10, 3)) == [] # new in 2.5: start=None or step=None - assert list(itertools.islice(xrange(10), None)) == range(10) - assert list(itertools.islice(xrange(10), None,None)) == range(10) - assert list(itertools.islice(xrange(10), None,None,None)) == range(10) + assert list(itertools.islice(range(10), None)) == range(10) + assert list(itertools.islice(range(10), None,None)) == range(10) + assert list(itertools.islice(range(10), None,None,None)) == range(10) def test_islice_dropitems_exact(self): import itertools it = iter("abcdefghij") itertools.islice(it, 2, 2) # doesn't eagerly drop anything - assert it.next() == "a" + assert next(it) == "a" itertools.islice(it, 3, 8, 2) # doesn't eagerly drop anything - assert it.next() == "b" - assert it.next() == "c" + assert next(it) == "b" + assert next(it) == "c" it = iter("abcdefghij") x = next(itertools.islice(it, 2, 3), None) # drops 2 items assert x == "c" - assert it.next() == "d" + assert next(it) == "d" it = iter("abcdefghij") x = next(itertools.islice(it, 3, 8, 2), None) # drops 3 items assert x == "d" - assert it.next() == "e" + assert next(it) == "e" it = iter("abcdefghij") x = next(itertools.islice(it, None, 8), None) # drops 0 items assert x == "a" - assert it.next() == "b" + assert next(it) == "b" it = iter("abcdefghij") x = next(itertools.islice(it, 3, 2), None) # drops 3 items assert x is None - assert it.next() == "d" + assert next(it) == "d" it = iter("abcdefghij") islc = itertools.islice(it, 3, 7, 2) - assert islc.next() == "d" # drops 0, 1, 2, returns item #3 - assert it.next() == "e" - assert islc.next() == "g" # drops the 4th and return item #5 - assert it.next() == "h" - raises(StopIteration, islc.next) # drops the 6th and raise - assert it.next() == "j" + assert next(islc) == "d" # drops 0, 1, 2, returns item #3 + assert next(it) == "e" + assert next(islc) == "g" # drops the 4th and return item #5 + assert next(it) == "h" + raises(StopIteration, next, islc) # drops the 6th 
and raise + assert next(it) == "j" it = iter("abcdefghij") islc = itertools.islice(it, 3, 4, 3) - assert islc.next() == "d" # drops 0, 1, 2, returns item #3 - assert it.next() == "e" - raises(StopIteration, islc.next) # item #4 is 'stop', so just raise - assert it.next() == "f" + assert next(islc) == "d" # drops 0, 1, 2, returns item #3 + assert next(it) == "e" + raises(StopIteration, next, islc) # item #4 is 'stop', so just raise + assert next(it) == "f" def test_islice_overflow(self): import itertools @@ -298,22 +298,22 @@ import itertools it = itertools.chain() - raises(StopIteration, it.next) - raises(StopIteration, it.next) + raises(StopIteration, next, it) + raises(StopIteration, next, it) it = itertools.chain([1, 2, 3]) for x in [1, 2, 3]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) it = itertools.chain([1, 2, 3], [4], [5, 6]) for x in [1, 2, 3, 4, 5, 6]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) it = itertools.chain([], [], [1], []) - assert it.next() == 1 - raises(StopIteration, it.next) + assert next(it) == 1 + raises(StopIteration, next, it) def test_imap(self): import itertools @@ -321,35 +321,35 @@ obj_list = [object(), object(), object()] it = itertools.imap(None, obj_list) for x in obj_list: - assert it.next() == (x, ) - raises(StopIteration, it.next) + assert next(it) == (x, ) + raises(StopIteration, next, it) it = itertools.imap(None, [1, 2, 3], [4], [5, 6]) - assert it.next() == (1, 4, 5) - raises(StopIteration, it.next) + assert next(it) == (1, 4, 5) + raises(StopIteration, next, it) it = itertools.imap(None, [], [], [1], []) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.imap(str, [0, 1, 0, 1]) for x in ['0', '1', '0', '1']: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) import operator it = 
itertools.imap(operator.add, [1, 2, 3], [4, 5, 6]) for x in [5, 7, 9]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) def test_imap_wrongargs(self): import itertools # Duplicate python 2.4 behaviour for invalid arguments it = itertools.imap(0, []) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.imap(0, [0]) - raises(TypeError, it.next) + raises(TypeError, next, it) raises(TypeError, itertools.imap, None, 0) raises(TypeError, itertools.imap, None) @@ -360,29 +360,29 @@ import itertools it = itertools.izip() - raises(StopIteration, it.next) + raises(StopIteration, next, it) obj_list = [object(), object(), object()] it = itertools.izip(obj_list) for x in obj_list: - assert it.next() == (x, ) - raises(StopIteration, it.next) + assert next(it) == (x, ) + raises(StopIteration, next, it) it = itertools.izip([1, 2, 3], [4], [5, 6]) - assert it.next() == (1, 4, 5) - raises(StopIteration, it.next) + assert next(it) == (1, 4, 5) + raises(StopIteration, next, it) it = itertools.izip([], [], [1], []) - raises(StopIteration, it.next) + raises(StopIteration, next, it) # Up to one additional item may be consumed per iterable, as per python docs it1 = iter([1, 2, 3, 4, 5, 6]) it2 = iter([5, 6]) it = itertools.izip(it1, it2) for x in [(1, 5), (2, 6)]: - assert it.next() == x - raises(StopIteration, it.next) - assert it1.next() in [3, 4] + assert next(it) == x + raises(StopIteration, next, it) + assert next(it1) in [3, 4] #---does not work in CPython 2.5 #raises(StopIteration, it.next) #assert it1.next() in [4, 5] @@ -398,7 +398,7 @@ args = [()] * x + [None] + [()] * (9 - x) try: itertools.izip(*args) - except TypeError, e: + except TypeError as e: assert str(e).find("#" + str(x + 1) + " ") >= 0 else: fail("TypeError expected") @@ -407,22 +407,22 @@ import itertools it = itertools.cycle([]) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.cycle([1, 2, 
3]) for x in [1, 2, 3, 1, 2, 3, 1, 2, 3]: - assert it.next() == x + assert next(it) == x def test_starmap(self): import itertools, operator it = itertools.starmap(operator.add, []) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.starmap(operator.add, [(0, 1), (2, 3), (4, 5)]) for x in [1, 5, 9]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) assert list(itertools.starmap(operator.add, [iter((40,2))])) == [42] @@ -430,37 +430,37 @@ import itertools it = itertools.starmap(None, [(1, )]) - raises(TypeError, it.next) + raises(TypeError, next, it) it = itertools.starmap(None, []) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.starmap(bool, [0]) - raises(TypeError, it.next) + raises(TypeError, next, it) def test_tee(self): import itertools it1, it2 = itertools.tee([]) - raises(StopIteration, it1.next) - raises(StopIteration, it2.next) + raises(StopIteration, next, it1) + raises(StopIteration, next, it2) it1, it2 = itertools.tee([1, 2, 3]) for x in [1, 2]: - assert it1.next() == x + assert next(it1) == x for x in [1, 2, 3]: - assert it2.next() == x - assert it1.next() == 3 - raises(StopIteration, it1.next) - raises(StopIteration, it2.next) + assert next(it2) == x + assert next(it1) == 3 + raises(StopIteration, next, it1) + raises(StopIteration, next, it2) assert itertools.tee([], 0) == () iterators = itertools.tee([1, 2, 3], 10) for it in iterators: for x in [1, 2, 3]: - assert it.next() == x - raises(StopIteration, it.next) + assert next(it) == x + raises(StopIteration, next, it) def test_tee_wrongargs(self): import itertools @@ -504,67 +504,67 @@ import itertools it = itertools.groupby([]) - raises(StopIteration, it.next) + raises(StopIteration, next, it) it = itertools.groupby([1, 2, 2, 3, 3, 3, 4, 4, 4, 4]) for x in [1, 2, 3, 4]: - k, g = it.next() + k, g = next(it) assert k == x assert len(list(g)) == x - raises(StopIteration, 
g.next) - raises(StopIteration, it.next) + raises(StopIteration, next, g) + raises(StopIteration, next ,it) it = itertools.groupby([0, 1, 2, 3, 4, 5], None) for x in [0, 1, 2, 3, 4, 5]: - k, g = it.next() + k, g = next(it) assert k == x - assert g.next() == x - raises(StopIteration, g.next) - raises(StopIteration, it.next) + assert next(g) == x + raises(StopIteration, next, g) + raises(StopIteration, next, it) # consumes after group started it = itertools.groupby([0, 0, 0, 0, 1]) - k1, g1 = it.next() - assert g1.next() == 0 - k2, g2 = it.next() - raises(StopIteration, g1.next) - assert g2.next() == 1 - raises(StopIteration, g2.next) + k1, g1 = next(it) + assert next(g1) == 0 + k2, g2 = next(it) + raises(StopIteration, next, g1) + assert next(g2) == 1 + raises(StopIteration, next, g2) # skips with not started group it = itertools.groupby([0, 0, 1]) - k1, g1 = it.next() - k2, g2 = it.next() - raises(StopIteration, g1.next) - assert g2.next() == 1 - raises(StopIteration, g2.next) + k1, g1 = next(it) + k2, g2 = next(it) + raises(StopIteration, next, g1) + assert next(g2) == 1 + raises(StopIteration, next, g2) it = itertools.groupby([0, 1, 2]) - k1, g1 = it.next() - k2, g2 = it.next() - k2, g3 = it.next() - raises(StopIteration, g1.next) - raises(StopIteration, g2.next) - assert g3.next() == 2 + k1, g1 = next(it) + k2, g2 = next(it) + k2, g3 = next(it) + raises(StopIteration, next, g1) + raises(StopIteration, next, g2) + assert next(g3) == 2 def half_floor(x): return x // 2 it = itertools.groupby([0, 1, 2, 3, 4, 5], half_floor) for x in [0, 1, 2]: - k, g = it.next() + k, g = next(it) assert k == x - assert half_floor(g.next()) == x - assert half_floor(g.next()) == x - raises(StopIteration, g.next) - raises(StopIteration, it.next) + assert half_floor(next(g)) == x + assert half_floor(next(g)) == x + raises(StopIteration, next, g) + raises(StopIteration, next, it) # keyword argument it = itertools.groupby([0, 1, 2, 3, 4, 5], key = half_floor) for x in [0, 1, 2]: - k, g = 
it.next() + k, g = next(it) assert k == x assert list(g) == [x*2, x*2+1] - raises(StopIteration, it.next) + raises(StopIteration, next, it) # Grouping is not based on key identity class NeverEqual(object): @@ -573,12 +573,12 @@ objects = [NeverEqual(), NeverEqual(), NeverEqual()] it = itertools.groupby(objects) for x in objects: - print "Trying", x - k, g = it.next() + print("Trying", x) + k, g = next(it) assert k is x - assert g.next() is x - raises(StopIteration, g.next) - raises(StopIteration, it.next) + assert next(g) is x + raises(StopIteration, next, g) + raises(StopIteration, next, it) # Grouping is based on key equality class AlwaysEqual(object): @@ -586,19 +586,19 @@ return True objects = [AlwaysEqual(), AlwaysEqual(), AlwaysEqual()] it = itertools.groupby(objects) - k, g = it.next() + k, g = next(it) assert k is objects[0] for x in objects: - assert g.next() is x - raises(StopIteration, g.next) - raises(StopIteration, it.next) + assert next(g) is x + raises(StopIteration, next, g) + raises(StopIteration, next, it) def test_groupby_wrongargs(self): import itertools raises(TypeError, itertools.groupby, 0) it = itertools.groupby([0], 1) - raises(TypeError, it.next) + raises(TypeError, next, it) def test_iterables(self): import itertools @@ -624,8 +624,8 @@ for it in iterables: assert hasattr(it, '__iter__') assert iter(it) is it - assert hasattr(it, 'next') - assert callable(it.next) + assert hasattr(it, '__next__') + assert callable(it.__next__) def test_docstrings(self): import itertools @@ -669,25 +669,25 @@ def test_count_overflow(self): import itertools, sys it = itertools.count(sys.maxint - 1) - assert it.next() == sys.maxint - 1 - assert it.next() == sys.maxint - assert it.next() == sys.maxint + 1 + assert next(it) == sys.maxint - 1 + assert next(it) == sys.maxint + assert next(it) == sys.maxint + 1 it = itertools.count(sys.maxint + 1) - assert it.next() == sys.maxint + 1 - assert it.next() == sys.maxint + 2 + assert next(it) == sys.maxint + 1 + 
assert next(it) == sys.maxint + 2 it = itertools.count(-sys.maxint-2) - assert it.next() == -sys.maxint - 2 - assert it.next() == -sys.maxint - 1 - assert it.next() == -sys.maxint - assert it.next() == -sys.maxint + 1 + assert next(it) == -sys.maxint - 2 + assert next(it) == -sys.maxint - 1 + assert next(it) == -sys.maxint + assert next(it) == -sys.maxint + 1 it = itertools.count(0, sys.maxint) - assert it.next() == sys.maxint * 0 - assert it.next() == sys.maxint * 1 - assert it.next() == sys.maxint * 2 + assert next(it) == sys.maxint * 0 + assert next(it) == sys.maxint * 1 + assert next(it) == sys.maxint * 2 it = itertools.count(0, sys.maxint + 1) - assert it.next() == (sys.maxint + 1) * 0 - assert it.next() == (sys.maxint + 1) * 1 - assert it.next() == (sys.maxint + 1) * 2 + assert next(it) == (sys.maxint + 1) * 0 + assert next(it) == (sys.maxint + 1) * 1 + assert next(it) == (sys.maxint + 1) * 2 def test_chain_fromiterable(self): import itertools @@ -832,8 +832,8 @@ def test_product_empty(self): from itertools import product prod = product('abc', repeat=0) - assert prod.next() == () - raises (StopIteration, prod.next) + assert next(prod) == () + raises (StopIteration, next, prod) def test_permutations(self): from itertools import permutations @@ -882,7 +882,7 @@ def test_permutations_r_gt_n(self): from itertools import permutations perm = permutations([1, 2], 3) - raises(StopIteration, perm.next) + raises(StopIteration, next, perm) def test_permutations_neg_r(self): from itertools import permutations @@ -905,14 +905,14 @@ def test_compress_diff_len(self): import itertools it = itertools.compress(['a'], []) - raises(StopIteration, it.next) + raises(StopIteration, next, it) def test_count_kwargs(self): import itertools it = itertools.count(start=2, step=3) - assert it.next() == 2 - assert it.next() == 5 - assert it.next() == 8 + assert next(it) == 2 + assert next(it) == 5 + assert next(it) == 8 def test_repeat_kwargs(self): import itertools @@ -961,10 +961,10 @@ 
r1 = Repeater(1, 3, RuntimeError) r2 = Repeater(2, 4, StopIteration) it = itertools.izip_longest(r1, r2, fillvalue=0) - assert it.next() == (1, 2) - assert it.next() == (1, 2) - assert it.next()== (1, 2) - raises(RuntimeError, it.next) + assert next(it) == (1, 2) + assert next(it) == (1, 2) + assert next(it)== (1, 2) + raises(RuntimeError, next, it) def test_subclassing(self): import itertools From noreply at buildbot.pypy.org Sat Jan 21 15:54:38 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sat, 21 Jan 2012 15:54:38 +0100 (CET) Subject: [pypy-commit] pypy py3k: Add a test for keyword-only arguments Message-ID: <20120121145438.54A1482D3D@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51581:39eb4a258003 Date: 2012-01-21 15:53 +0100 http://bitbucket.org/pypy/pypy/changeset/39eb4a258003/ Log: Add a test for keyword-only arguments diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -810,6 +810,14 @@ """ yield self.st, func, "f()", (1, [2, 3, 4], 5) + def test_kwonlyargs_default_parameters(self): + """ This actually test an interpreter bug, but since we can't parse + py3k only code in the interpreter tests right now, it's there waiting + for this feature""" + func = """ def f(a, b, c=3, *, d=4): + return a, b, c, d + """ + yield self.st, func, "f(1, 2)", (1, 2, 3, 4) class AppTestCompiler: From noreply at buildbot.pypy.org Sat Jan 21 17:03:23 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 17:03:23 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: shuffle stuff around Message-ID: <20120121160323.F17E482D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51582:b31c0d563d03 Date: 2012-01-21 18:02 +0200 http://bitbucket.org/pypy/pypy/changeset/b31c0d563d03/ 
Log: shuffle stuff around diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1,10 +1,11 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, NoneNotWrapped, unwrap_spec +from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature,\ - interp_boxes -from pypy.module.micronumpy.strides import calculate_slice_strides +from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature +from pypy.module.micronumpy.strides import calculate_slice_strides,\ + shape_agreement, find_shape_and_elems, get_shape_from_iterable,\ + calc_new_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name @@ -60,176 +61,6 @@ name='numpy_filterset', ) -def _find_shape_and_elems(space, w_iterable): - shape = [space.len_w(w_iterable)] - batch = space.listview(w_iterable) - while True: - new_batch = [] - if not batch: - return shape, [] - if not space.issequence_w(batch[0]): - for elem in batch: - if space.issequence_w(elem): - raise OperationError(space.w_ValueError, space.wrap( - "setting an array element with a sequence")) - return shape, batch - size = space.len_w(batch[0]) - for w_elem in batch: - if not space.issequence_w(w_elem) or space.len_w(w_elem) != size: - raise OperationError(space.w_ValueError, space.wrap( - "setting an array element with a sequence")) - new_batch += space.listview(w_elem) - shape.append(size) - batch = new_batch - -def shape_agreement(space, shape1, shape2): - ret = _shape_agreement(shape1, shape2) - if len(ret) < max(len(shape1), len(shape2)): - raise 
OperationError(space.w_ValueError, - space.wrap("operands could not be broadcast together with shapes (%s) (%s)" % ( - ",".join([str(x) for x in shape1]), - ",".join([str(x) for x in shape2]), - )) - ) - return ret - -def _shape_agreement(shape1, shape2): - """ Checks agreement about two shapes with respect to broadcasting. Returns - the resulting shape. - """ - lshift = 0 - rshift = 0 - if len(shape1) > len(shape2): - m = len(shape1) - n = len(shape2) - rshift = len(shape2) - len(shape1) - remainder = shape1 - else: - m = len(shape2) - n = len(shape1) - lshift = len(shape1) - len(shape2) - remainder = shape2 - endshape = [0] * m - indices1 = [True] * m - indices2 = [True] * m - for i in range(m - 1, m - n - 1, -1): - left = shape1[i + lshift] - right = shape2[i + rshift] - if left == right: - endshape[i] = left - elif left == 1: - endshape[i] = right - indices1[i + lshift] = False - elif right == 1: - endshape[i] = left - indices2[i + rshift] = False - else: - return [] - #raise OperationError(space.w_ValueError, space.wrap( - # "frames are not aligned")) - for i in range(m - n): - endshape[i] = remainder[i] - return endshape - -def get_shape_from_iterable(space, old_size, w_iterable): - new_size = 0 - new_shape = [] - if space.isinstance_w(w_iterable, space.w_int): - new_size = space.int_w(w_iterable) - if new_size < 0: - new_size = old_size - new_shape = [new_size] - else: - neg_dim = -1 - batch = space.listview(w_iterable) - new_size = 1 - if len(batch) < 1: - if old_size == 1: - # Scalars can have an empty size. 
- new_size = 1 - else: - new_size = 0 - new_shape = [] - i = 0 - for elem in batch: - s = space.int_w(elem) - if s < 0: - if neg_dim >= 0: - raise OperationError(space.w_ValueError, space.wrap( - "can only specify one unknown dimension")) - s = 1 - neg_dim = i - new_size *= s - new_shape.append(s) - i += 1 - if neg_dim >= 0: - new_shape[neg_dim] = old_size / new_size - new_size *= new_shape[neg_dim] - if new_size != old_size: - raise OperationError(space.w_ValueError, - space.wrap("total size of new array must be unchanged")) - return new_shape - -# Recalculating strides. Find the steps that the iteration does for each -# dimension, given the stride and shape. Then try to create a new stride that -# fits the new shape, using those steps. If there is a shape/step mismatch -# (meaning that the realignment of elements crosses from one step into another) -# return None so that the caller can raise an exception. -def calc_new_strides(new_shape, old_shape, old_strides): - # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and - # len(new_shape) > 0 - steps = [] - last_step = 1 - oldI = 0 - new_strides = [] - if old_strides[0] < old_strides[-1]: - #Start at old_shape[0], old_stides[0] - for i in range(len(old_shape)): - steps.append(old_strides[i] / last_step) - last_step *= old_shape[i] - cur_step = steps[0] - n_new_elems_used = 1 - n_old_elems_to_use = old_shape[0] - for s in new_shape: - new_strides.append(cur_step * n_new_elems_used) - n_new_elems_used *= s - while n_new_elems_used > n_old_elems_to_use: - oldI += 1 - if steps[oldI] != steps[oldI - 1]: - return None - n_old_elems_to_use *= old_shape[oldI] - if n_new_elems_used == n_old_elems_to_use: - oldI += 1 - if oldI >= len(old_shape): - continue - cur_step = steps[oldI] - n_old_elems_to_use *= old_shape[oldI] - else: - #Start at old_shape[-1], old_strides[-1] - for i in range(len(old_shape) - 1, -1, -1): - steps.insert(0, old_strides[i] / last_step) - last_step *= old_shape[i] - cur_step = 
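The `get_shape_from_iterable` logic being deleted here (and re-added in strides.py further down in this changeset) resolves at most one unknown (-1) dimension against the array's total size. A hypothetical pure-Python re-implementation of just that rule, assuming all other entries are positive (names are illustrative, not PyPy's):

```python
# Hypothetical re-implementation (not PyPy's RPython code) of resolving a
# single unknown (-1) dimension the way a reshape(-1, ...) call does.
def resolve_shape(old_size, shape):
    shape = list(shape)
    neg = [i for i, s in enumerate(shape) if s < 0]
    if len(neg) > 1:
        raise ValueError("can only specify one unknown dimension")
    known = 1
    for s in shape:
        if s > 0:
            known *= s
    if neg:
        if old_size % known:
            raise ValueError("total size of new array must be unchanged")
        shape[neg[0]] = old_size // known
        known *= shape[neg[0]]
    if known != old_size:
        raise ValueError("total size of new array must be unchanged")
    return shape

assert resolve_shape(12, (3, -1)) == [3, 4]
```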
steps[-1] - n_new_elems_used = 1 - oldI = -1 - n_old_elems_to_use = old_shape[-1] - for i in range(len(new_shape) - 1, -1, -1): - s = new_shape[i] - new_strides.insert(0, cur_step * n_new_elems_used) - n_new_elems_used *= s - while n_new_elems_used > n_old_elems_to_use: - oldI -= 1 - if steps[oldI] != steps[oldI + 1]: - return None - n_old_elems_to_use *= old_shape[oldI] - if n_new_elems_used == n_old_elems_to_use: - oldI -= 1 - if oldI < -len(old_shape): - continue - cur_step = steps[oldI] - n_old_elems_to_use *= old_shape[oldI] - return new_strides - class BaseArray(Wrappable): _attrs_ = ["invalidates", "shape", 'size'] @@ -1264,7 +1095,7 @@ if order != 'C': # or order != 'F': raise operationerrfmt(space.w_ValueError, "Unknown order: %s", order) - shape, elems_w = _find_shape_and_elems(space, w_item_or_iterable) + shape, elems_w = find_shape_and_elems(space, w_item_or_iterable) # they come back in C order size = len(elems_w) if space.is_w(w_dtype, space.w_None): diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -37,3 +37,174 @@ rstrides = [0] * (len(res_shape) - len(orig_shape)) + rstrides rbackstrides = [0] * (len(res_shape) - len(orig_shape)) + rbackstrides return rstrides, rbackstrides + + +def find_shape_and_elems(space, w_iterable): + shape = [space.len_w(w_iterable)] + batch = space.listview(w_iterable) + while True: + new_batch = [] + if not batch: + return shape, [] + if not space.issequence_w(batch[0]): + for elem in batch: + if space.issequence_w(elem): + raise OperationError(space.w_ValueError, space.wrap( + "setting an array element with a sequence")) + return shape, batch + size = space.len_w(batch[0]) + for w_elem in batch: + if not space.issequence_w(w_elem) or space.len_w(w_elem) != size: + raise OperationError(space.w_ValueError, space.wrap( + "setting an array element with a sequence")) + new_batch += space.listview(w_elem) + 
shape.append(size) + batch = new_batch + +def shape_agreement(space, shape1, shape2): + ret = _shape_agreement(shape1, shape2) + if len(ret) < max(len(shape1), len(shape2)): + raise OperationError(space.w_ValueError, + space.wrap("operands could not be broadcast together with shapes (%s) (%s)" % ( + ",".join([str(x) for x in shape1]), + ",".join([str(x) for x in shape2]), + )) + ) + return ret + +def _shape_agreement(shape1, shape2): + """ Checks agreement about two shapes with respect to broadcasting. Returns + the resulting shape. + """ + lshift = 0 + rshift = 0 + if len(shape1) > len(shape2): + m = len(shape1) + n = len(shape2) + rshift = len(shape2) - len(shape1) + remainder = shape1 + else: + m = len(shape2) + n = len(shape1) + lshift = len(shape1) - len(shape2) + remainder = shape2 + endshape = [0] * m + indices1 = [True] * m + indices2 = [True] * m + for i in range(m - 1, m - n - 1, -1): + left = shape1[i + lshift] + right = shape2[i + rshift] + if left == right: + endshape[i] = left + elif left == 1: + endshape[i] = right + indices1[i + lshift] = False + elif right == 1: + endshape[i] = left + indices2[i + rshift] = False + else: + return [] + #raise OperationError(space.w_ValueError, space.wrap( + # "frames are not aligned")) + for i in range(m - n): + endshape[i] = remainder[i] + return endshape + +def get_shape_from_iterable(space, old_size, w_iterable): + new_size = 0 + new_shape = [] + if space.isinstance_w(w_iterable, space.w_int): + new_size = space.int_w(w_iterable) + if new_size < 0: + new_size = old_size + new_shape = [new_size] + else: + neg_dim = -1 + batch = space.listview(w_iterable) + new_size = 1 + if len(batch) < 1: + if old_size == 1: + # Scalars can have an empty size. 
+ new_size = 1 + else: + new_size = 0 + new_shape = [] + i = 0 + for elem in batch: + s = space.int_w(elem) + if s < 0: + if neg_dim >= 0: + raise OperationError(space.w_ValueError, space.wrap( + "can only specify one unknown dimension")) + s = 1 + neg_dim = i + new_size *= s + new_shape.append(s) + i += 1 + if neg_dim >= 0: + new_shape[neg_dim] = old_size / new_size + new_size *= new_shape[neg_dim] + if new_size != old_size: + raise OperationError(space.w_ValueError, + space.wrap("total size of new array must be unchanged")) + return new_shape + +# Recalculating strides. Find the steps that the iteration does for each +# dimension, given the stride and shape. Then try to create a new stride that +# fits the new shape, using those steps. If there is a shape/step mismatch +# (meaning that the realignment of elements crosses from one step into another) +# return None so that the caller can raise an exception. +def calc_new_strides(new_shape, old_shape, old_strides): + # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and + # len(new_shape) > 0 + steps = [] + last_step = 1 + oldI = 0 + new_strides = [] + if old_strides[0] < old_strides[-1]: + #Start at old_shape[0], old_stides[0] + for i in range(len(old_shape)): + steps.append(old_strides[i] / last_step) + last_step *= old_shape[i] + cur_step = steps[0] + n_new_elems_used = 1 + n_old_elems_to_use = old_shape[0] + for s in new_shape: + new_strides.append(cur_step * n_new_elems_used) + n_new_elems_used *= s + while n_new_elems_used > n_old_elems_to_use: + oldI += 1 + if steps[oldI] != steps[oldI - 1]: + return None + n_old_elems_to_use *= old_shape[oldI] + if n_new_elems_used == n_old_elems_to_use: + oldI += 1 + if oldI >= len(old_shape): + continue + cur_step = steps[oldI] + n_old_elems_to_use *= old_shape[oldI] + else: + #Start at old_shape[-1], old_strides[-1] + for i in range(len(old_shape) - 1, -1, -1): + steps.insert(0, old_strides[i] / last_step) + last_step *= old_shape[i] + cur_step = 
steps[-1] + n_new_elems_used = 1 + oldI = -1 + n_old_elems_to_use = old_shape[-1] + for i in range(len(new_shape) - 1, -1, -1): + s = new_shape[i] + new_strides.insert(0, cur_step * n_new_elems_used) + n_new_elems_used *= s + while n_new_elems_used > n_old_elems_to_use: + oldI -= 1 + if steps[oldI] != steps[oldI + 1]: + return None + n_old_elems_to_use *= old_shape[oldI] + if n_new_elems_used == n_old_elems_to_use: + oldI -= 1 + if oldI < -len(old_shape): + continue + cur_step = steps[oldI] + n_old_elems_to_use *= old_shape[oldI] + return new_strides From noreply at buildbot.pypy.org Sat Jan 21 17:04:44 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sat, 21 Jan 2012 17:04:44 +0100 (CET) Subject: [pypy-commit] pypy pytest: syncronize pylib and pytest with current hg versions Message-ID: <20120121160444.32ECD82D45@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: pytest Changeset: r51583:513060cd54e5 Date: 2012-01-21 17:03 +0100 http://bitbucket.org/pypy/pypy/changeset/513060cd54e5/ Log: syncronize pylib and pytest with current hg versions diff --git a/_pytest/__init__.py b/_pytest/__init__.py --- a/_pytest/__init__.py +++ b/_pytest/__init__.py @@ -1,2 +1,2 @@ # -__version__ = '2.1.0.dev4' +__version__ = '2.2.2.dev6' diff --git a/_pytest/assertion/__init__.py b/_pytest/assertion/__init__.py --- a/_pytest/assertion/__init__.py +++ b/_pytest/assertion/__init__.py @@ -2,35 +2,25 @@ support for presenting detailed information in failing assertions. 
""" import py -import imp -import marshal -import struct import sys import pytest from _pytest.monkeypatch import monkeypatch -from _pytest.assertion import reinterpret, util - -try: - from _pytest.assertion.rewrite import rewrite_asserts -except ImportError: - rewrite_asserts = None -else: - import ast +from _pytest.assertion import util def pytest_addoption(parser): group = parser.getgroup("debugconfig") - group.addoption('--assertmode', action="store", dest="assertmode", - choices=("on", "old", "off", "default"), default="default", - metavar="on|old|off", + group.addoption('--assert', action="store", dest="assertmode", + choices=("rewrite", "reinterp", "plain",), + default="rewrite", metavar="MODE", help="""control assertion debugging tools. -'off' performs no assertion debugging. -'old' reinterprets the expressions in asserts to glean information. -'on' (the default) rewrites the assert statements in test modules to provide -sub-expression results.""") +'plain' performs no assertion debugging. +'reinterp' reinterprets assert statements after they failed to provide assertion expression information. +'rewrite' (the default) rewrites assert statements in test modules on import +to provide assert expression information. 
""") group.addoption('--no-assert', action="store_true", default=False, - dest="noassert", help="DEPRECATED equivalent to --assertmode=off") + dest="noassert", help="DEPRECATED equivalent to --assert=plain") group.addoption('--nomagic', action="store_true", default=False, - dest="nomagic", help="DEPRECATED equivalent to --assertmode=off") + dest="nomagic", help="DEPRECATED equivalent to --assert=plain") class AssertionState: """State for the assertion plugin.""" @@ -40,89 +30,90 @@ self.trace = config.trace.root.get("assertion") def pytest_configure(config): - warn_about_missing_assertion() mode = config.getvalue("assertmode") if config.getvalue("noassert") or config.getvalue("nomagic"): - if mode not in ("off", "default"): - raise pytest.UsageError("assertion options conflict") - mode = "off" - elif mode == "default": - mode = "on" - if mode != "off": - def callbinrepr(op, left, right): - hook_result = config.hook.pytest_assertrepr_compare( - config=config, op=op, left=left, right=right) - for new_expl in hook_result: - if new_expl: - return '\n~'.join(new_expl) + mode = "plain" + if mode == "rewrite": + try: + import ast + except ImportError: + mode = "reinterp" + else: + if sys.platform.startswith('java'): + mode = "reinterp" + if mode != "plain": + _load_modules(mode) m = monkeypatch() config._cleanup.append(m.undo) m.setattr(py.builtin.builtins, 'AssertionError', reinterpret.AssertionError) - m.setattr(util, '_reprcompare', callbinrepr) - if mode == "on" and rewrite_asserts is None: - mode = "old" + hook = None + if mode == "rewrite": + hook = rewrite.AssertionRewritingHook() + sys.meta_path.append(hook) + warn_about_missing_assertion(mode) config._assertstate = AssertionState(config, mode) + config._assertstate.hook = hook config._assertstate.trace("configured with mode set to %r" % (mode,)) -def _write_pyc(co, source_path): - if hasattr(imp, "cache_from_source"): - # Handle PEP 3147 pycs. 
- pyc = py.path.local(imp.cache_from_source(str(source_path))) - pyc.ensure() - else: - pyc = source_path + "c" - mtime = int(source_path.mtime()) - fp = pyc.open("wb") - try: - fp.write(imp.get_magic()) - fp.write(struct.pack(" 0 and - item.identifier != "__future__"): + elif (not isinstance(item, ast.ImportFrom) or item.level > 0 or + item.module != "__future__"): lineno = item.lineno break pos += 1 @@ -118,9 +358,9 @@ for alias in aliases] mod.body[pos:pos] = imports # Collect asserts. - nodes = collections.deque([mod]) + nodes = [mod] while nodes: - node = nodes.popleft() + node = nodes.pop() for name, field in ast.iter_fields(node): if isinstance(field, list): new = [] @@ -143,7 +383,7 @@ """Get a new variable.""" # Use a character invalid in python identifiers to avoid clashing. name = "@py_assert" + str(next(self.variable_counter)) - self.variables.add(name) + self.variables.append(name) return name def assign(self, expr): @@ -198,7 +438,8 @@ # There's already a message. Don't mess with it. return [assert_] self.statements = [] - self.variables = set() + self.cond_chain = () + self.variables = [] self.variable_counter = itertools.count() self.stack = [] self.on_failure = [] @@ -220,11 +461,11 @@ else: raise_ = ast.Raise(exc, None, None) body.append(raise_) - # Delete temporary variables. - names = [ast.Name(name, ast.Del()) for name in self.variables] - if names: - delete = ast.Delete(names) - self.statements.append(delete) + # Clear temporary variables by setting them to None. + if self.variables: + variables = [ast.Name(name, ast.Store()) for name in self.variables] + clear = ast.Assign(variables, ast.Name("None", ast.Load())) + self.statements.append(clear) # Fix line numbers. 
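The rewriter above walks the module's AST, collects `Assert` nodes, and replaces them with instrumented statements. A toy look at the underlying machinery, using only the stdlib `ast` module — this is an illustration of the building blocks, not pytest's actual rewrite pass:

```python
import ast

# Parse a module and locate its assert statements -- the nodes pytest's
# rewriter replaces with instrumented code that records sub-expressions.
tree = ast.parse("x = 1\nassert x == 2, 'boom'")
asserts = [node for node in ast.walk(tree) if isinstance(node, ast.Assert)]
assert len(asserts) == 1
assert isinstance(asserts[0].test, ast.Compare)
```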
for stmt in self.statements: set_location(stmt, assert_.lineno, assert_.col_offset) @@ -240,21 +481,38 @@ return name, self.explanation_param(expr) def visit_BoolOp(self, boolop): - operands = [] - explanations = [] + res_var = self.variable() + expl_list = self.assign(ast.List([], ast.Load())) + app = ast.Attribute(expl_list, "append", ast.Load()) + is_or = int(isinstance(boolop.op, ast.Or)) + body = save = self.statements + fail_save = self.on_failure + levels = len(boolop.values) - 1 self.push_format_context() - for operand in boolop.values: - res, explanation = self.visit(operand) - operands.append(res) - explanations.append(explanation) - expls = ast.Tuple([ast.Str(expl) for expl in explanations], ast.Load()) - is_or = ast.Num(isinstance(boolop.op, ast.Or)) - expl_template = self.helper("format_boolop", - ast.Tuple(operands, ast.Load()), expls, - is_or) + # Process each operand, short-circuting if needed. + for i, v in enumerate(boolop.values): + if i: + fail_inner = [] + self.on_failure.append(ast.If(cond, fail_inner, [])) + self.on_failure = fail_inner + self.push_format_context() + res, expl = self.visit(v) + body.append(ast.Assign([ast.Name(res_var, ast.Store())], res)) + expl_format = self.pop_format_context(ast.Str(expl)) + call = ast.Call(app, [expl_format], [], None, None) + self.on_failure.append(ast.Expr(call)) + if i < levels: + cond = res + if is_or: + cond = ast.UnaryOp(ast.Not(), cond) + inner = [] + self.statements.append(ast.If(cond, inner, [])) + self.statements = body = inner + self.statements = save + self.on_failure = fail_save + expl_template = self.helper("format_boolop", expl_list, ast.Num(is_or)) expl = self.pop_format_context(expl_template) - res = self.assign(ast.BoolOp(boolop.op, operands)) - return res, self.explanation_param(expl) + return ast.Name(res_var, ast.Load()), self.explanation_param(expl) def visit_UnaryOp(self, unary): pattern = unary_map[unary.op.__class__] @@ -288,7 +546,7 @@ new_star, expl = self.visit(call.starargs) 
arg_expls.append("*" + expl) if call.kwargs: - new_kwarg, expl = self.visit(call.kwarg) + new_kwarg, expl = self.visit(call.kwargs) arg_expls.append("**" + expl) expl = "%s(%s)" % (func_expl, ', '.join(arg_expls)) new_call = ast.Call(new_func, new_args, new_kwargs, new_star, new_kwarg) diff --git a/_pytest/capture.py b/_pytest/capture.py --- a/_pytest/capture.py +++ b/_pytest/capture.py @@ -11,22 +11,22 @@ group._addoption('-s', action="store_const", const="no", dest="capture", help="shortcut for --capture=no.") + at pytest.mark.tryfirst +def pytest_cmdline_parse(pluginmanager, args): + # we want to perform capturing already for plugin/conftest loading + if '-s' in args or "--capture=no" in args: + method = "no" + elif hasattr(os, 'dup') and '--capture=sys' not in args: + method = "fd" + else: + method = "sys" + capman = CaptureManager(method) + pluginmanager.register(capman, "capturemanager") + def addouterr(rep, outerr): - repr = getattr(rep, 'longrepr', None) - if not hasattr(repr, 'addsection'): - return for secname, content in zip(["out", "err"], outerr): if content: - repr.addsection("Captured std%s" % secname, content.rstrip()) - -def pytest_unconfigure(config): - # registered in config.py during early conftest.py loading - capman = config.pluginmanager.getplugin('capturemanager') - while capman._method2capture: - name, cap = capman._method2capture.popitem() - # XXX logging module may wants to close it itself on process exit - # otherwise we could do finalization here and call "reset()". 
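The CaptureManager's "sys" method swaps `sys.stdout`/`sys.stderr` for in-memory buffers, while the "fd" method dup()s the OS-level file descriptors. The "sys" variant in miniature, using stdlib tools rather than pytest's own capture classes:

```python
import contextlib
import io

# Redirect sys.stdout into an in-memory buffer for the duration of the block.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print("hello")
assert buf.getvalue() == "hello\n"
```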
- cap.suspend() + rep.sections.append(("Captured std%s" % secname, content)) class NoCapture: def startall(self): @@ -39,8 +39,9 @@ return "", "" class CaptureManager: - def __init__(self): + def __init__(self, defaultmethod=None): self._method2capture = {} + self._defaultmethod = defaultmethod def _maketempfile(self): f = py.std.tempfile.TemporaryFile() @@ -65,14 +66,6 @@ else: raise ValueError("unknown capturing method: %r" % method) - def _getmethod_preoptionparse(self, args): - if '-s' in args or "--capture=no" in args: - return "no" - elif hasattr(os, 'dup') and '--capture=sys' not in args: - return "fd" - else: - return "sys" - def _getmethod(self, config, fspath): if config.option.capture: method = config.option.capture @@ -85,16 +78,22 @@ method = "sys" return method + def reset_capturings(self): + for name, cap in self._method2capture.items(): + cap.reset() + def resumecapture_item(self, item): method = self._getmethod(item.config, item.fspath) if not hasattr(item, 'outerr'): item.outerr = ('', '') # we accumulate outerr on the item return self.resumecapture(method) - def resumecapture(self, method): + def resumecapture(self, method=None): if hasattr(self, '_capturing'): raise ValueError("cannot resume, already capturing with %r" % (self._capturing,)) + if method is None: + method = self._defaultmethod cap = self._method2capture.get(method) self._capturing = method if cap is None: @@ -164,17 +163,6 @@ def pytest_runtest_teardown(self, item): self.resumecapture_item(item) - def pytest__teardown_final(self, __multicall__, session): - method = self._getmethod(session.config, None) - self.resumecapture(method) - try: - rep = __multicall__.execute() - finally: - outerr = self.suspendcapture() - if rep: - addouterr(rep, outerr) - return rep - def pytest_keyboard_interrupt(self, excinfo): if hasattr(self, '_capturing'): self.suspendcapture() diff --git a/_pytest/config.py b/_pytest/config.py --- a/_pytest/config.py +++ b/_pytest/config.py @@ -8,13 +8,15 @@ def 
pytest_cmdline_parse(pluginmanager, args): config = Config(pluginmanager) config.parse(args) - if config.option.debug: - config.trace.root.setwriter(sys.stderr.write) return config def pytest_unconfigure(config): - for func in config._cleanup: - func() + while 1: + try: + fin = config._cleanup.pop() + except IndexError: + break + fin() class Parser: """ Parser for command line arguments. """ @@ -81,6 +83,7 @@ self._inidict[name] = (help, type, default) self._ininames.append(name) + class OptionGroup: def __init__(self, name, description="", parser=None): self.name = name @@ -256,11 +259,14 @@ self.hook = self.pluginmanager.hook self._inicache = {} self._cleanup = [] - + @classmethod def fromdictargs(cls, option_dict, args): """ constructor useable for subprocesses. """ config = cls() + # XXX slightly crude way to initialize capturing + import _pytest.capture + _pytest.capture.pytest_cmdline_parse(config.pluginmanager, args) config._preparse(args, addopts=False) config.option.__dict__.update(option_dict) for x in config.option.plugins: @@ -285,11 +291,10 @@ def _setinitialconftest(self, args): # capture output during conftest init (#issue93) - from _pytest.capture import CaptureManager - capman = CaptureManager() - self.pluginmanager.register(capman, 'capturemanager') - # will be unregistered in capture.py's unconfigure() - capman.resumecapture(capman._getmethod_preoptionparse(args)) + # XXX introduce load_conftest hook to avoid needing to know + # about capturing plugin here + capman = self.pluginmanager.getplugin("capturemanager") + capman.resumecapture() try: try: self._conftest.setinitial(args) @@ -334,6 +339,7 @@ # Note that this can only be called once per testing process. 
assert not hasattr(self, 'args'), ( "can only parse cmdline args at most once per Config object") + self._origargs = args self._preparse(args) self._parser.hints.extend(self.pluginmanager._hints) args = self._parser.parse_setoption(args, self.option) @@ -341,6 +347,14 @@ args.append(py.std.os.getcwd()) self.args = args + def addinivalue_line(self, name, line): + """ add a line to an ini-file option. The option must have been + declared but might not yet be set in which case the line becomes the + the first line in its value. """ + x = self.getini(name) + assert isinstance(x, list) + x.append(line) # modifies the cached list inline + def getini(self, name): """ return configuration value from an ini file. If the specified name hasn't been registered through a prior ``parse.addini`` @@ -422,7 +436,7 @@ def getcfg(args, inibasenames): - args = [x for x in args if str(x)[0] != "-"] + args = [x for x in args if not str(x).startswith("-")] if not args: args = [py.path.local()] for arg in args: diff --git a/_pytest/core.py b/_pytest/core.py --- a/_pytest/core.py +++ b/_pytest/core.py @@ -16,11 +16,10 @@ "junitxml resultlog doctest").split() class TagTracer: - def __init__(self, prefix="[pytest] "): + def __init__(self): self._tag2proc = {} self.writer = None self.indent = 0 - self.prefix = prefix def get(self, name): return TagTracerSub(self, (name,)) @@ -30,7 +29,7 @@ if args: indent = " " * self.indent content = " ".join(map(str, args)) - self.writer("%s%s%s\n" %(self.prefix, indent, content)) + self.writer("%s%s [%s]\n" %(indent, content, ":".join(tags))) try: self._tag2proc[tags](tags, args) except KeyError: @@ -212,6 +211,14 @@ self.register(mod, modname) self.consider_module(mod) + def pytest_configure(self, config): + config.addinivalue_line("markers", + "tryfirst: mark a hook implementation function such that the " + "plugin machinery will try to call it first/as early as possible.") + config.addinivalue_line("markers", + "trylast: mark a hook implementation 
function such that the " + "plugin machinery will try to call it last/as late as possible.") + def pytest_plugin_registered(self, plugin): import pytest dic = self.call_plugin(plugin, "pytest_namespace", {}) or {} @@ -432,10 +439,7 @@ def _preloadplugins(): _preinit.append(PluginManager(load=True)) -def main(args=None, plugins=None): - """ returned exit code integer, after an in-process testing run - with the given command line arguments, preloading an optional list - of passed in plugin objects. """ +def _prepareconfig(args=None, plugins=None): if args is None: args = sys.argv[1:] elif isinstance(args, py.path.local): @@ -449,13 +453,19 @@ else: # subsequent calls to main will create a fresh instance _pluginmanager = PluginManager(load=True) hook = _pluginmanager.hook + if plugins: + for plugin in plugins: + _pluginmanager.register(plugin) + return hook.pytest_cmdline_parse( + pluginmanager=_pluginmanager, args=args) + +def main(args=None, plugins=None): + """ returned exit code integer, after an in-process testing run + with the given command line arguments, preloading an optional list + of passed in plugin objects. """ try: - if plugins: - for plugin in plugins: - _pluginmanager.register(plugin) - config = hook.pytest_cmdline_parse( - pluginmanager=_pluginmanager, args=args) - exitstatus = hook.pytest_cmdline_main(config=config) + config = _prepareconfig(args, plugins) + exitstatus = config.hook.pytest_cmdline_main(config=config) except UsageError: e = sys.exc_info()[1] sys.stderr.write("ERROR: %s\n" %(e.args[0],)) diff --git a/_pytest/helpconfig.py b/_pytest/helpconfig.py --- a/_pytest/helpconfig.py +++ b/_pytest/helpconfig.py @@ -1,7 +1,7 @@ """ version info, help messages, tracing configuration. 
""" import py import pytest -import inspect, sys +import os, inspect, sys from _pytest.core import varnames def pytest_addoption(parser): @@ -18,7 +18,29 @@ help="trace considerations of conftest.py files."), group.addoption('--debug', action="store_true", dest="debug", default=False, - help="generate and show internal debugging information.") + help="store internal tracing debug information in 'pytestdebug.log'.") + + +def pytest_cmdline_parse(__multicall__): + config = __multicall__.execute() + if config.option.debug: + path = os.path.abspath("pytestdebug.log") + f = open(path, 'w') + config._debugfile = f + f.write("versions pytest-%s, py-%s, python-%s\ncwd=%s\nargs=%s\n\n" %( + pytest.__version__, py.__version__, ".".join(map(str, sys.version_info)), + os.getcwd(), config._origargs)) + config.trace.root.setwriter(f.write) + sys.stderr.write("writing pytestdebug information to %s\n" % path) + return config + + at pytest.mark.trylast +def pytest_unconfigure(config): + if hasattr(config, '_debugfile'): + config._debugfile.close() + sys.stderr.write("wrote pytestdebug information to %s\n" % + config._debugfile.name) + config.trace.root.setwriter(None) def pytest_cmdline_main(config): @@ -34,6 +56,7 @@ elif config.option.help: config.pluginmanager.do_configure(config) showhelp(config) + config.pluginmanager.do_unconfigure(config) return 0 def showhelp(config): @@ -91,7 +114,7 @@ verinfo = getpluginversioninfo(config) if verinfo: lines.extend(verinfo) - + if config.option.traceconfig: lines.append("active plugins:") plugins = [] diff --git a/_pytest/hookspec.py b/_pytest/hookspec.py --- a/_pytest/hookspec.py +++ b/_pytest/hookspec.py @@ -121,16 +121,23 @@ def pytest_itemstart(item, node=None): """ (deprecated, use pytest_runtest_logstart). """ -def pytest_runtest_protocol(item): - """ implements the standard runtest_setup/call/teardown protocol including - capturing exceptions and calling reporting hooks on the results accordingly. 
+def pytest_runtest_protocol(item, nextitem): + """ implements the runtest_setup/call/teardown protocol for + the given test item, including capturing exceptions and calling + reporting hooks. + + :arg item: test item for which the runtest protocol is performed. + + :arg nexitem: the scheduled-to-be-next test item (or None if this + is the end my friend). This argument is passed on to + :py:func:`pytest_runtest_teardown`. :return boolean: True if no further hook implementations should be invoked. """ pytest_runtest_protocol.firstresult = True def pytest_runtest_logstart(nodeid, location): - """ signal the start of a test run. """ + """ signal the start of running a single test item. """ def pytest_runtest_setup(item): """ called before ``pytest_runtest_call(item)``. """ @@ -138,8 +145,14 @@ def pytest_runtest_call(item): """ called to execute the test ``item``. """ -def pytest_runtest_teardown(item): - """ called after ``pytest_runtest_call``. """ +def pytest_runtest_teardown(item, nextitem): + """ called after ``pytest_runtest_call``. + + :arg nexitem: the scheduled-to-be-next test item (None if no further + test item is scheduled). This argument can be used to + perform exact teardowns, i.e. calling just enough finalizers + so that nextitem only needs to call setup-functions. + """ def pytest_runtest_makereport(item, call): """ return a :py:class:`_pytest.runner.TestReport` object @@ -149,15 +162,8 @@ pytest_runtest_makereport.firstresult = True def pytest_runtest_logreport(report): - """ process item test report. """ - -# special handling for final teardown - somewhat internal for now -def pytest__teardown_final(session): - """ called before test session finishes. """ -pytest__teardown_final.firstresult = True - -def pytest__teardown_final_logerror(report, session): - """ called if runtest_teardown_final failed. """ + """ process a test setup/call/teardown report relating to + the respective phase of executing a test. 
""" # ------------------------------------------------------------------------- # test session related hooks diff --git a/_pytest/junitxml.py b/_pytest/junitxml.py --- a/_pytest/junitxml.py +++ b/_pytest/junitxml.py @@ -25,6 +25,10 @@ long = int +class Junit(py.xml.Namespace): + pass + + # We need to get the subset of the invalid unicode ranges according to # XML 1.0 which are valid in this python build. Hence we calculate # this dynamically instead of hardcoding it. The spec range of valid @@ -40,6 +44,14 @@ del _illegal_unichrs del _illegal_ranges +def bin_xml_escape(arg): + def repl(matchobj): + i = ord(matchobj.group()) + if i <= 0xFF: + return unicode('#x%02X') % i + else: + return unicode('#x%04X') % i + return illegal_xml_re.sub(repl, py.xml.escape(arg)) def pytest_addoption(parser): group = parser.getgroup("terminal reporting") @@ -68,117 +80,97 @@ logfile = os.path.expanduser(os.path.expandvars(logfile)) self.logfile = os.path.normpath(logfile) self.prefix = prefix - self.test_logs = [] + self.tests = [] self.passed = self.skipped = 0 self.failed = self.errors = 0 - self._durations = {} def _opentestcase(self, report): names = report.nodeid.split("::") names[0] = names[0].replace("/", '.') - names = tuple(names) - d = {'time': self._durations.pop(report.nodeid, "0")} names = [x.replace(".py", "") for x in names if x != "()"] classnames = names[:-1] if self.prefix: classnames.insert(0, self.prefix) - d['classname'] = ".".join(classnames) - d['name'] = py.xml.escape(names[-1]) - attrs = ['%s="%s"' % item for item in sorted(d.items())] - self.test_logs.append("\n" % " ".join(attrs)) + self.tests.append(Junit.testcase( + classname=".".join(classnames), + name=names[-1], + time=getattr(report, 'duration', 0) + )) - def _closetestcase(self): - self.test_logs.append("") - - def appendlog(self, fmt, *args): - def repl(matchobj): - i = ord(matchobj.group()) - if i <= 0xFF: - return unicode('#x%02X') % i - else: - return unicode('#x%04X') % i - args = 
tuple([illegal_xml_re.sub(repl, py.xml.escape(arg)) - for arg in args]) - self.test_logs.append(fmt % args) + def append(self, obj): + self.tests[-1].append(obj) def append_pass(self, report): self.passed += 1 - self._opentestcase(report) - self._closetestcase() def append_failure(self, report): - self._opentestcase(report) #msg = str(report.longrepr.reprtraceback.extraline) if "xfail" in report.keywords: - self.appendlog( - '') + self.append( + Junit.skipped(message="xfail-marked test passes unexpectedly")) self.skipped += 1 else: - self.appendlog('%s', - report.longrepr) + sec = dict(report.sections) + fail = Junit.failure(message="test failure") + fail.append(str(report.longrepr)) + self.append(fail) + for name in ('out', 'err'): + content = sec.get("Captured std%s" % name) + if content: + tag = getattr(Junit, 'system-'+name) + self.append(tag(bin_xml_escape(content))) self.failed += 1 - self._closetestcase() def append_collect_failure(self, report): - self._opentestcase(report) #msg = str(report.longrepr.reprtraceback.extraline) - self.appendlog('%s', - report.longrepr) - self._closetestcase() + self.append(Junit.failure(str(report.longrepr), + message="collection failure")) self.errors += 1 def append_collect_skipped(self, report): - self._opentestcase(report) #msg = str(report.longrepr.reprtraceback.extraline) - self.appendlog('%s', - report.longrepr) - self._closetestcase() + self.append(Junit.skipped(str(report.longrepr), + message="collection skipped")) self.skipped += 1 def append_error(self, report): - self._opentestcase(report) - self.appendlog('%s', - report.longrepr) - self._closetestcase() + self.append(Junit.error(str(report.longrepr), + message="test setup failure")) self.errors += 1 def append_skipped(self, report): - self._opentestcase(report) if "xfail" in report.keywords: - self.appendlog( - '%s', - report.keywords['xfail']) + self.append(Junit.skipped(str(report.keywords['xfail']), + message="expected test failure")) else: filename, lineno, 
skipreason = report.longrepr if skipreason.startswith("Skipped: "): skipreason = skipreason[9:] - self.appendlog('%s', - skipreason, "%s:%s: %s" % report.longrepr, - ) - self._closetestcase() + self.append( + Junit.skipped("%s:%s: %s" % report.longrepr, + type="pytest.skip", + message=skipreason + )) self.skipped += 1 def pytest_runtest_logreport(self, report): if report.passed: - self.append_pass(report) + if report.when == "call": # ignore setup/teardown + self._opentestcase(report) + self.append_pass(report) elif report.failed: + self._opentestcase(report) if report.when != "call": self.append_error(report) else: self.append_failure(report) elif report.skipped: + self._opentestcase(report) self.append_skipped(report) - def pytest_runtest_call(self, item, __multicall__): - start = time.time() - try: - return __multicall__.execute() - finally: - self._durations[item.nodeid] = time.time() - start - def pytest_collectreport(self, report): if not report.passed: + self._opentestcase(report) if report.failed: self.append_collect_failure(report) else: @@ -187,10 +179,11 @@ def pytest_internalerror(self, excrepr): self.errors += 1 data = py.xml.escape(excrepr) - self.test_logs.append( - '\n' - ' ' - '%s' % data) + self.tests.append( + Junit.testcase( + Junit.error(data, message="internal error"), + classname="pytest", + name="internal")) def pytest_sessionstart(self, session): self.suite_start_time = time.time() @@ -204,17 +197,17 @@ suite_stop_time = time.time() suite_time_delta = suite_stop_time - self.suite_start_time numtests = self.passed + self.failed + logfile.write('') - logfile.write('') - logfile.writelines(self.test_logs) - logfile.write('') + logfile.write(Junit.testsuite( + self.tests, + name="", + errors=self.errors, + failures=self.failed, + skips=self.skipped, + tests=numtests, + time="%.3f" % suite_time_delta, + ).unicode(indent=0)) logfile.close() def pytest_terminal_summary(self, terminalreporter): diff --git a/_pytest/main.py b/_pytest/main.py --- 
a/_pytest/main.py +++ b/_pytest/main.py @@ -2,7 +2,7 @@ import py import pytest, _pytest -import os, sys +import os, sys, imp tracebackcutdir = py.path.local(_pytest.__file__).dirpath() # exitcodes for the command line @@ -11,6 +11,8 @@ EXIT_INTERRUPTED = 2 EXIT_INTERNALERROR = 3 +name_re = py.std.re.compile("^[a-zA-Z_]\w*$") + def pytest_addoption(parser): parser.addini("norecursedirs", "directory patterns to avoid for recursion", type="args", default=('.*', 'CVS', '_darcs', '{arch}')) @@ -27,6 +29,9 @@ action="store", type="int", dest="maxfail", default=0, help="exit after first num failures or errors.") + group._addoption('--strict', action="store_true", + help="run pytest in strict mode, warnings become errors.") + group = parser.getgroup("collect", "collection") group.addoption('--collectonly', action="store_true", dest="collectonly", @@ -48,7 +53,7 @@ def pytest_namespace(): collect = dict(Item=Item, Collector=Collector, File=File, Session=Session) return dict(collect=collect) - + def pytest_configure(config): py.test.config = config # compatibiltiy if config.option.exitfirst: @@ -77,11 +82,11 @@ session.exitstatus = EXIT_INTERNALERROR if excinfo.errisinstance(SystemExit): sys.stderr.write("mainloop: caught Spurious SystemExit!\n") + if initstate >= 2: + config.hook.pytest_sessionfinish(session=session, + exitstatus=session.exitstatus or (session._testsfailed and 1)) if not session.exitstatus and session._testsfailed: session.exitstatus = EXIT_TESTSFAILED - if initstate >= 2: - config.hook.pytest_sessionfinish(session=session, - exitstatus=session.exitstatus) if initstate >= 1: config.pluginmanager.do_unconfigure(config) return session.exitstatus @@ -101,8 +106,12 @@ def pytest_runtestloop(session): if session.config.option.collectonly: return True - for item in session.session.items: - item.config.hook.pytest_runtest_protocol(item=item) + for i, item in enumerate(session.items): + try: + nextitem = session.items[i+1] + except IndexError: + nextitem = None + 
item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) if session.shouldstop: raise session.Interrupted(session.shouldstop) return True @@ -132,7 +141,7 @@ return getattr(pytest, name) return property(fget, None, None, "deprecated attribute %r, use pytest.%s" % (name,name)) - + class Node(object): """ base class for all Nodes in the collection tree. Collector subclasses have children, Items are terminal nodes.""" @@ -143,13 +152,13 @@ #: the parent collector node. self.parent = parent - + #: the test config object self.config = config or parent.config #: the collection this node is part of self.session = session or parent.session - + #: filesystem path where this node was collected from self.fspath = getattr(parent, 'fspath', None) self.ihook = self.session.gethookproxy(self.fspath) @@ -224,13 +233,13 @@ def listchain(self): """ return list of all parent collectors up to self, starting from root of collection tree. """ - l = [self] - while 1: - x = l[0] - if x.parent is not None: # and x.parent.parent is not None: - l.insert(0, x.parent) - else: - return l + chain = [] + item = self + while item is not None: + chain.append(item) + item = item.parent + chain.reverse() + return chain def listnames(self): return [x.name for x in self.listchain()] @@ -325,6 +334,8 @@ """ a basic test invocation item. Note that for a single function there might be multiple test invocation items. """ + nextitem = None + def reportinfo(self): return self.fspath, None, "" @@ -469,16 +480,29 @@ return True def _tryconvertpyarg(self, x): - try: - mod = __import__(x, None, None, ['__doc__']) - except (ValueError, ImportError): - return x - p = py.path.local(mod.__file__) - if p.purebasename == "__init__": - p = p.dirpath() - else: - p = p.new(basename=p.purebasename+".py") - return str(p) + mod = None + path = [os.path.abspath('.')] + sys.path + for name in x.split('.'): + # ignore anything that's not a proper name here + # else something like --pyargs will mess up '.' 
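The `listchain()` rewrite above can be sketched in isolation (a minimal `Node` with only `name` and `parent` stands in for the real collection node):

```python
# Walk the parent links upward collecting nodes, then reverse once,
# instead of repeatedly inserting at index 0 of a growing list.
class Node(object):
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def listchain(self):
        """return all parent collectors up to self, root first."""
        chain = []
        item = self
        while item is not None:
            chain.append(item)
            item = item.parent
        chain.reverse()
        return chain
```

Each `insert(0, ...)` in the old loop is O(n), so building the chain was quadratic in its depth; append-then-reverse is linear and reads more directly.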
+ # since imp.find_module will actually sometimes work for it + # but it's supposed to be considered a filesystem path + # not a package + if name_re.match(name) is None: + return x + try: + fd, mod, type_ = imp.find_module(name, path) + except ImportError: + return x + else: + if fd is not None: + fd.close() + + if type_[2] != imp.PKG_DIRECTORY: + path = [os.path.dirname(mod)] + else: + path = [mod] + return mod def _parsearg(self, arg): """ return (fspath, names) tuple after checking the file exists. """ @@ -496,7 +520,7 @@ raise pytest.UsageError(msg + arg) parts[0] = path return parts - + def matchnodes(self, matching, names): self.trace("matchnodes", matching, names) self.trace.root.indent += 1 diff --git a/_pytest/mark.py b/_pytest/mark.py --- a/_pytest/mark.py +++ b/_pytest/mark.py @@ -14,12 +14,37 @@ "Terminate expression with ':' to make the first match match " "all subsequent tests (usually file-order). ") + group._addoption("-m", + action="store", dest="markexpr", default="", metavar="MARKEXPR", + help="only run tests matching given mark expression. " + "example: -m 'mark1 and not mark2'." 
+ ) + + group.addoption("--markers", action="store_true", help= + "show markers (builtin, plugin and per-project ones).") + + parser.addini("markers", "markers for test functions", 'linelist') + +def pytest_cmdline_main(config): + if config.option.markers: + config.pluginmanager.do_configure(config) + tw = py.io.TerminalWriter() + for line in config.getini("markers"): + name, rest = line.split(":", 1) + tw.write("@pytest.mark.%s:" % name, bold=True) + tw.line(rest) + tw.line() + config.pluginmanager.do_unconfigure(config) + return 0 +pytest_cmdline_main.tryfirst = True + def pytest_collection_modifyitems(items, config): keywordexpr = config.option.keyword - if not keywordexpr: + matchexpr = config.option.markexpr + if not keywordexpr and not matchexpr: return selectuntil = False - if keywordexpr[-1] == ":": + if keywordexpr[-1:] == ":": selectuntil = True keywordexpr = keywordexpr[:-1] @@ -29,21 +54,38 @@ if keywordexpr and skipbykeyword(colitem, keywordexpr): deselected.append(colitem) else: - remaining.append(colitem) if selectuntil: keywordexpr = None + if matchexpr: + if not matchmark(colitem, matchexpr): + deselected.append(colitem) + continue + remaining.append(colitem) if deselected: config.hook.pytest_deselected(items=deselected) items[:] = remaining +class BoolDict: + def __init__(self, mydict): + self._mydict = mydict + def __getitem__(self, name): + return name in self._mydict + +def matchmark(colitem, matchexpr): + return eval(matchexpr, {}, BoolDict(colitem.obj.__dict__)) + +def pytest_configure(config): + if config.option.strict: + pytest.mark._config = config + def skipbykeyword(colitem, keywordexpr): """ return True if they given keyword expression means to skip this collector/item. 
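The new `-m MARKEXPR` matching above works by evaluating the expression with a mapping that answers, for any name, whether that name is set on the item. A standalone sketch, with a plain dict standing in for `colitem.obj.__dict__`:

```python
# eval() accepts any mapping as its locals, so every identifier in the
# mark expression resolves through BoolDict.__getitem__, which reports
# membership instead of raising KeyError -- unknown marks become False.
class BoolDict:
    def __init__(self, mydict):
        self._mydict = mydict

    def __getitem__(self, name):
        return name in self._mydict

def matchmark(markdict, matchexpr):
    return eval(matchexpr, {}, BoolDict(markdict))
```

For example, a test carrying only a `slowtest` mark matches the expression `'slowtest and not webtest'`.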
""" if not keywordexpr: return - + itemkeywords = getkeywords(colitem) for key in filter(None, keywordexpr.split()): eor = key[:1] == '-' @@ -77,15 +119,31 @@ @py.test.mark.slowtest def test_function(): pass - + will set a 'slowtest' :class:`MarkInfo` object on the ``test_function`` object. """ def __getattr__(self, name): if name[0] == "_": raise AttributeError(name) + if hasattr(self, '_config'): + self._check(name) return MarkDecorator(name) + def _check(self, name): + try: + if name in self._markers: + return + except AttributeError: + pass + self._markers = l = set() + for line in self._config.getini("markers"): + beginning = line.split(":", 1) + x = beginning[0].split("(", 1)[0] + l.add(x) + if name not in self._markers: + raise AttributeError("%r not a registered marker" % (name,)) + class MarkDecorator: """ A decorator for test functions and test classes. When applied it will create :class:`MarkInfo` objects which may be @@ -133,8 +191,7 @@ holder = MarkInfo(self.markname, self.args, self.kwargs) setattr(func, self.markname, holder) else: - holder.kwargs.update(self.kwargs) - holder.args += self.args + holder.add(self.args, self.kwargs) return func kw = self.kwargs.copy() kw.update(kwargs) @@ -150,27 +207,20 @@ self.args = args #: keyword argument dictionary, empty if nothing specified self.kwargs = kwargs + self._arglist = [(args, kwargs.copy())] def __repr__(self): return "" % ( self.name, self.args, self.kwargs) -def pytest_itemcollected(item): - if not isinstance(item, pytest.Function): - return - try: - func = item.obj.__func__ - except AttributeError: - func = getattr(item.obj, 'im_func', item.obj) - pyclasses = (pytest.Class, pytest.Module) - for node in item.listchain(): - if isinstance(node, pyclasses): - marker = getattr(node.obj, 'pytestmark', None) - if marker is not None: - if isinstance(marker, list): - for mark in marker: - mark(func) - else: - marker(func) - node = node.parent - item.keywords.update(py.builtin._getfuncdict(func)) + def 
add(self, args, kwargs): + """ add a MarkInfo with the given args and kwargs. """ + self._arglist.append((args, kwargs)) + self.args += args + self.kwargs.update(kwargs) + + def __iter__(self): + """ yield MarkInfo objects each relating to a marking-call. """ + for args, kwargs in self._arglist: + yield MarkInfo(self.name, args, kwargs) + diff --git a/_pytest/monkeypatch.py b/_pytest/monkeypatch.py --- a/_pytest/monkeypatch.py +++ b/_pytest/monkeypatch.py @@ -13,6 +13,7 @@ monkeypatch.setenv(name, value, prepend=False) monkeypatch.delenv(name, value, raising=True) monkeypatch.syspath_prepend(path) + monkeypatch.chdir(path) All modifications will be undone after the requesting test function has finished. The ``raising`` @@ -30,6 +31,7 @@ def __init__(self): self._setattr = [] self._setitem = [] + self._cwd = None def setattr(self, obj, name, value, raising=True): """ set attribute ``name`` on ``obj`` to ``value``, by default @@ -83,6 +85,17 @@ self._savesyspath = sys.path[:] sys.path.insert(0, str(path)) + def chdir(self, path): + """ change the current working directory to the specified path + path can be a string or a py.path.local object + """ + if self._cwd is None: + self._cwd = os.getcwd() + if hasattr(path, "chdir"): + path.chdir() + else: + os.chdir(path) + def undo(self): """ undo previous changes. This call consumes the undo stack. 
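The reworked `MarkInfo` above records each decorator application separately in `_arglist` while keeping the merged `args`/`kwargs` view for backward compatibility. A self-contained sketch:

```python
class MarkInfo:
    def __init__(self, name, args, kwargs):
        self.name = name
        self.args = args
        self.kwargs = kwargs
        # copy the first kwargs so later update() calls don't mutate it
        self._arglist = [(args, kwargs.copy())]

    def add(self, args, kwargs):
        """record one more marking-call with the given args and kwargs."""
        self._arglist.append((args, kwargs))
        self.args += args
        self.kwargs.update(kwargs)

    def __iter__(self):
        """yield one MarkInfo per individual marking-call."""
        for args, kwargs in self._arglist:
            yield MarkInfo(self.name, args, kwargs)
```

So stacking `@pytest.mark.foo(1)` and `@pytest.mark.foo(2)` on one function leaves a merged `args == (1, 2)` but still lets a plugin iterate the two applications individually.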
Calling it a second time has no effect unless @@ -95,9 +108,17 @@ self._setattr[:] = [] for dictionary, name, value in self._setitem: if value is notset: - del dictionary[name] + try: + del dictionary[name] + except KeyError: + pass # was already deleted, so we have the desired state else: dictionary[name] = value self._setitem[:] = [] if hasattr(self, '_savesyspath'): sys.path[:] = self._savesyspath + del self._savesyspath + + if self._cwd is not None: + os.chdir(self._cwd) + self._cwd = None diff --git a/_pytest/nose.py b/_pytest/nose.py --- a/_pytest/nose.py +++ b/_pytest/nose.py @@ -13,6 +13,7 @@ call.excinfo = call2.excinfo + at pytest.mark.trylast def pytest_runtest_setup(item): if isinstance(item, (pytest.Function)): if isinstance(item.parent, pytest.Generator): diff --git a/_pytest/pastebin.py b/_pytest/pastebin.py --- a/_pytest/pastebin.py +++ b/_pytest/pastebin.py @@ -38,7 +38,11 @@ del tr._tw.__dict__['write'] def getproxy(): - return py.std.xmlrpclib.ServerProxy(url.xmlrpc).pastes + if sys.version_info < (3, 0): + from xmlrpclib import ServerProxy + else: + from xmlrpc.client import ServerProxy + return ServerProxy(url.xmlrpc).pastes def pytest_terminal_summary(terminalreporter): if terminalreporter.config.option.pastebin != "failed": diff --git a/_pytest/pdb.py b/_pytest/pdb.py --- a/_pytest/pdb.py +++ b/_pytest/pdb.py @@ -19,11 +19,13 @@ class pytestPDB: """ Pseudo PDB that defers to the real pdb. """ item = None + collector = None def set_trace(self): """ invoke PDB set_trace debugging, dropping any IO capturing. 
""" frame = sys._getframe().f_back - item = getattr(self, 'item', None) + item = self.item or self.collector + if item is not None: capman = item.config.pluginmanager.getplugin("capturemanager") out, err = capman.suspendcapture() @@ -38,6 +40,14 @@ pytestPDB.item = item pytest_runtest_setup = pytest_runtest_call = pytest_runtest_teardown = pdbitem + at pytest.mark.tryfirst +def pytest_make_collect_report(__multicall__, collector): + try: + pytestPDB.collector = collector + return __multicall__.execute() + finally: + pytestPDB.collector = None + def pytest_runtest_makereport(): pytestPDB.item = None @@ -60,7 +70,13 @@ tw.sep(">", "traceback") rep.toterminal(tw) tw.sep(">", "entering PDB") - post_mortem(call.excinfo._excinfo[2]) + # A doctest.UnexpectedException is not useful for post_mortem. + # Use the underlying exception instead: + if isinstance(call.excinfo.value, py.std.doctest.UnexpectedException): + tb = call.excinfo.value.exc_info[2] + else: + tb = call.excinfo._excinfo[2] + post_mortem(tb) rep._pdbshown = True return rep diff --git a/_pytest/pytester.py b/_pytest/pytester.py --- a/_pytest/pytester.py +++ b/_pytest/pytester.py @@ -25,6 +25,7 @@ _pytest_fullpath except NameError: _pytest_fullpath = os.path.abspath(pytest.__file__.rstrip("oc")) + _pytest_fullpath = _pytest_fullpath.replace("$py.class", ".py") def pytest_funcarg___pytest(request): return PytestArg(request) @@ -313,16 +314,6 @@ result.extend(session.genitems(colitem)) return result - def inline_genitems(self, *args): - #config = self.parseconfig(*args) - config = self.parseconfigure(*args) - rec = self.getreportrecorder(config) - session = Session(config) - config.hook.pytest_sessionstart(session=session) - session.perform_collect() - config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK) - return session.items, rec - def runitem(self, source): # used from runner functional tests item = self.getitem(source) @@ -343,64 +334,57 @@ l = list(args) + [p] reprec = self.inline_run(*l) 
reports = reprec.getreports("pytest_runtest_logreport") - assert len(reports) == 1, reports - return reports[0] + assert len(reports) == 3, reports # setup/call/teardown + return reports[1] + + def inline_genitems(self, *args): + return self.inprocess_run(list(args) + ['--collectonly']) def inline_run(self, *args): - args = ("-s", ) + args # otherwise FD leakage - config = self.parseconfig(*args) - reprec = self.getreportrecorder(config) - #config.pluginmanager.do_configure(config) - config.hook.pytest_cmdline_main(config=config) - #config.pluginmanager.do_unconfigure(config) - return reprec + items, rec = self.inprocess_run(args) + return rec - def config_preparse(self): - config = self.Config() - for plugin in self.plugins: - if isinstance(plugin, str): - config.pluginmanager.import_plugin(plugin) - else: - if isinstance(plugin, dict): - plugin = PseudoPlugin(plugin) - if not config.pluginmanager.isregistered(plugin): - config.pluginmanager.register(plugin) - return config + def inprocess_run(self, args, plugins=None): + rec = [] + items = [] + class Collect: + def pytest_configure(x, config): + rec.append(self.getreportrecorder(config)) + def pytest_itemcollected(self, item): + items.append(item) + if not plugins: + plugins = [] + plugins.append(Collect()) + ret = self.pytestmain(list(args), plugins=[Collect()]) + reprec = rec[0] + reprec.ret = ret + assert len(rec) == 1 + return items, reprec def parseconfig(self, *args): - if not args: - args = (self.tmpdir,) - config = self.config_preparse() - args = list(args) + args = [str(x) for x in args] for x in args: if str(x).startswith('--basetemp'): break else: args.append("--basetemp=%s" % self.tmpdir.dirpath('basetemp')) - config.parse(args) + import _pytest.core + config = _pytest.core._prepareconfig(args, self.plugins) + # the in-process pytest invocation needs to avoid leaking FDs + # so we register a "reset_capturings" call on the capturing manager + # and make sure it gets called + config._cleanup.append( +
config.pluginmanager.getplugin("capturemanager").reset_capturings) + import _pytest.config + self.request.addfinalizer( + lambda: _pytest.config.pytest_unconfigure(config)) return config - def reparseconfig(self, args=None): - """ this is used from tests that want to re-invoke parse(). """ - if not args: - args = [self.tmpdir] - oldconfig = getattr(py.test, 'config', None) - try: - c = py.test.config = self.Config() - c.basetemp = py.path.local.make_numbered_dir(prefix="reparse", - keep=0, rootdir=self.tmpdir, lock_timeout=None) - c.parse(args) - c.pluginmanager.do_configure(c) - self.request.addfinalizer(lambda: c.pluginmanager.do_unconfigure(c)) - return c - finally: - py.test.config = oldconfig - def parseconfigure(self, *args): config = self.parseconfig(*args) config.pluginmanager.do_configure(config) self.request.addfinalizer(lambda: - config.pluginmanager.do_unconfigure(config)) + config.pluginmanager.do_unconfigure(config)) return config def getitem(self, source, funcname="test_func"): @@ -420,7 +404,6 @@ self.makepyfile(__init__ = "#") self.config = config = self.parseconfigure(path, *configargs) node = self.getnode(config, path) - #config.pluginmanager.do_unconfigure(config) return node def collect_by_name(self, modcol, name): @@ -437,9 +420,16 @@ return py.std.subprocess.Popen(cmdargs, stdout=stdout, stderr=stderr, **kw) def pytestmain(self, *args, **kwargs): - ret = pytest.main(*args, **kwargs) - if ret == 2: - raise KeyboardInterrupt() + class ResetCapturing: + @pytest.mark.trylast + def pytest_unconfigure(self, config): + capman = config.pluginmanager.getplugin("capturemanager") + capman.reset_capturings() + plugins = kwargs.setdefault("plugins", []) + rc = ResetCapturing() + plugins.append(rc) + return pytest.main(*args, **kwargs) + def run(self, *cmdargs): return self._run(*cmdargs) @@ -528,6 +518,8 @@ pexpect = py.test.importorskip("pexpect", "2.4") if hasattr(sys, 'pypy_version_info') and '64' in py.std.platform.machine(): pytest.skip("pypy-64 bit 
not supported") + if sys.platform == "darwin": + pytest.xfail("pexpect does not work reliably on darwin?!") logfile = self.tmpdir.join("spawn.out") child = pexpect.spawn(cmd, logfile=logfile.open("w")) child.timeout = expect_timeout @@ -540,10 +532,6 @@ return "INTERNAL not-utf8-decodeable, truncated string:\n%s" % ( py.io.saferepr(out),) -class PseudoPlugin: - def __init__(self, vars): - self.__dict__.update(vars) - class ReportRecorder(object): def __init__(self, hook): self.hook = hook @@ -565,10 +553,17 @@ def getreports(self, names="pytest_runtest_logreport pytest_collectreport"): return [x.report for x in self.getcalls(names)] - def matchreport(self, inamepart="", names="pytest_runtest_logreport pytest_collectreport", when=None): + def matchreport(self, inamepart="", + names="pytest_runtest_logreport pytest_collectreport", when=None): """ return a testreport whose dotted import path matches """ l = [] for rep in self.getreports(names=names): + try: + if not when and rep.when != "call" and rep.passed: + # setup/teardown passing reports - let's ignore those + continue + except AttributeError: + pass if when and getattr(rep, 'when', None) != when: continue if not inamepart or inamepart in rep.nodeid.split("::"): diff --git a/_pytest/python.py b/_pytest/python.py --- a/_pytest/python.py +++ b/_pytest/python.py @@ -4,6 +4,7 @@ import sys import pytest from py._code.code import TerminalRepr +from _pytest.monkeypatch import monkeypatch import _pytest cutdir = py.path.local(_pytest.__file__).dirpath() @@ -26,6 +27,24 @@ showfuncargs(config) return 0 + +def pytest_generate_tests(metafunc): + try: + param = metafunc.function.parametrize + except AttributeError: + return + for p in param: + metafunc.parametrize(*p.args, **p.kwargs) + +def pytest_configure(config): + config.addinivalue_line("markers", + "parametrize(argnames, argvalues): call a test function multiple " + "times passing in multiple different argument value sets. 
Example: " + "@parametrize('arg1', [1,2]) would lead to two calls of the decorated " + "test function, one with arg1=1 and another with arg1=2." + ) + + @pytest.mark.trylast def pytest_namespace(): raises.Exception = pytest.fail.Exception @@ -138,6 +157,7 @@ obj = obj.place_as self._fslineno = py.code.getfslineno(obj) + assert isinstance(self._fslineno[1], int), obj return self._fslineno def reportinfo(self): @@ -155,6 +175,7 @@ else: fspath, lineno = self._getfslineno() modpath = self.getmodpath() + assert isinstance(lineno, int) return fspath, lineno, modpath class PyCollectorMixin(PyobjMixin, pytest.Collector): @@ -200,6 +221,7 @@ module = self.getparent(Module).obj clscol = self.getparent(Class) cls = clscol and clscol.obj or None + transfer_markers(funcobj, cls, module) metafunc = Metafunc(funcobj, config=self.config, cls=cls, module=module) gentesthook = self.config.hook.pytest_generate_tests @@ -219,6 +241,19 @@ l.append(function) return l +def transfer_markers(funcobj, cls, mod): + # XXX this should rather be code in the mark plugin or the mark + # plugin should merge with the python plugin. 
+ for holder in (cls, mod): + try: + pytestmark = holder.pytestmark + except AttributeError: + continue + if isinstance(pytestmark, list): + for mark in pytestmark: + mark(funcobj) + else: + pytestmark(funcobj) class Module(pytest.File, PyCollectorMixin): def _getobj(self): @@ -226,13 +261,8 @@ def _importtestmodule(self): # we assume we are only called once per module - from _pytest import assertion - assertion.before_module_import(self) try: - try: - mod = self.fspath.pyimport(ensuresyspath=True) - finally: - assertion.after_module_import(self) + mod = self.fspath.pyimport(ensuresyspath=True) except SyntaxError: excinfo = py.code.ExceptionInfo() raise self.CollectError(excinfo.getrepr(style="short")) @@ -244,7 +274,8 @@ " %s\n" "which is not the same as the test file we want to collect:\n" " %s\n" - "HINT: use a unique basename for your test file modules" + "HINT: remove __pycache__ / .pyc files and/or use a " + "unique basename for your test file modules" % e.args ) #print "imported test module", mod @@ -374,6 +405,7 @@ tw.line() tw.line("%s:%d" % (self.filename, self.firstlineno+1)) + class Generator(FunctionMixin, PyCollectorMixin, pytest.Collector): def collect(self): # test generators are seen as collectors but they also @@ -430,6 +462,7 @@ "yielded functions (deprecated) cannot have funcargs") else: if callspec is not None: + self.callspec = callspec self.funcargs = callspec.funcargs or {} self._genid = callspec.id if hasattr(callspec, "param"): @@ -506,15 +539,59 @@ request._fillfuncargs() _notexists = object() -class CallSpec: - def __init__(self, funcargs, id, param): - self.funcargs = funcargs - self.id = id + +class CallSpec2(object): + def __init__(self, metafunc): + self.metafunc = metafunc + self.funcargs = {} + self._idlist = [] + self.params = {} + self._globalid = _notexists + self._globalid_args = set() + self._globalparam = _notexists + + def copy(self, metafunc): + cs = CallSpec2(self.metafunc) + cs.funcargs.update(self.funcargs) + 
cs.params.update(self.params) + cs._idlist = list(self._idlist) + cs._globalid = self._globalid + cs._globalid_args = self._globalid_args + cs._globalparam = self._globalparam + return cs + + def _checkargnotcontained(self, arg): + if arg in self.params or arg in self.funcargs: + raise ValueError("duplicate %r" %(arg,)) + + def getparam(self, name): + try: + return self.params[name] + except KeyError: + if self._globalparam is _notexists: + raise ValueError(name) + return self._globalparam + + @property + def id(self): + return "-".join(map(str, filter(None, self._idlist))) + + def setmulti(self, valtype, argnames, valset, id): + for arg,val in zip(argnames, valset): + self._checkargnotcontained(arg) + getattr(self, valtype)[arg] = val + self._idlist.append(id) + + def setall(self, funcargs, id, param): + for x in funcargs: + self._checkargnotcontained(x) + self.funcargs.update(funcargs) + if id is not _notexists: + self._idlist.append(id) if param is not _notexists: - self.param = param - def __repr__(self): - return "" %( - self.id, getattr(self, 'param', '?'), self.funcargs) + assert self._globalparam is _notexists + self._globalparam = param + class Metafunc: def __init__(self, function, config=None, cls=None, module=None): @@ -528,31 +605,69 @@ self._calls = [] self._ids = py.builtin.set() + def parametrize(self, argnames, argvalues, indirect=False, ids=None): + """ Add new invocations to the underlying test function using the list + of argvalues for the given argnames. Parametrization is performed + during the collection phase. If you need to setup expensive resources + you may pass indirect=True and implement a funcarg factory which can + perform the expensive setup just before a test is actually run. + + :arg argnames: an argument name or a list of argument names + + :arg argvalues: a list of values for the argname or a list of tuples of + values for the list of argument names. 
+ + :arg indirect: if True each argvalue corresponding to an argument will + be passed as request.param to its respective funcarg factory so + that it can perform more expensive setups during the setup phase of + a test rather than at collection time. + + :arg ids: list of string ids each corresponding to the argvalues so + that they are part of the test id. If no ids are provided they will + be generated automatically from the argvalues. + """ + if not isinstance(argnames, (tuple, list)): + argnames = (argnames,) + argvalues = [(val,) for val in argvalues] + for arg in argnames: + if arg not in self.funcargnames: + raise ValueError("%r has no argument %r" %(self.function, arg)) + valtype = indirect and "params" or "funcargs" + if not ids: + idmaker = IDMaker() + ids = list(map(idmaker, argvalues)) + newcalls = [] + for callspec in self._calls or [CallSpec2(self)]: + for i, valset in enumerate(argvalues): + assert len(valset) == len(argnames) + newcallspec = callspec.copy(self) + newcallspec.setmulti(valtype, argnames, valset, ids[i]) + newcalls.append(newcallspec) + self._calls = newcalls + def addcall(self, funcargs=None, id=_notexists, param=_notexists): - """ add a new call to the underlying test function during the - collection phase of a test run. Note that request.addcall() is - called during the test collection phase prior and independently - to actual test execution. Therefore you should perform setup - of resources in a funcarg factory which can be instrumented - with the ``param``. + """ (deprecated, use parametrize) Add a new call to the underlying + test function during the collection phase of a test run. Note that + request.addcall() is called during the test collection phase prior and + independently to actual test execution. You should only use addcall() + if you need to specify multiple arguments of a test function. :arg funcargs: argument keyword dictionary used when invoking the test function. 
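The call-multiplication loop at the heart of `parametrize()` above can be sketched in simplified form. A call spec is modeled as a plain dict here; the real `CallSpec2` additionally tracks indirect params and global ids, so this is an illustrative assumption, not the actual class:

```python
# Each parametrize() call crosses the existing call specs with the new
# argvalues, so stacked parametrizations yield a cartesian product.
def parametrize(calls, argnames, argvalues, ids):
    if not isinstance(argnames, (tuple, list)):
        argnames = (argnames,)
        argvalues = [(val,) for val in argvalues]
    newcalls = []
    for callspec in (calls or [{"funcargs": {}, "ids": []}]):
        for i, valset in enumerate(argvalues):
            assert len(valset) == len(argnames)
            newspec = {"funcargs": dict(callspec["funcargs"]),
                       "ids": list(callspec["ids"])}
            newspec["funcargs"].update(zip(argnames, valset))
            newspec["ids"].append(ids[i])
            newcalls.append(newspec)
    return newcalls
```

Running it once for `"x"` with two values and again for `"y"` with two values produces the 2x2 = 4 combinations, which is exactly why stacked `parametrize` decorators multiply test invocations.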
:arg id: used for reporting and identification purposes. If you - don't supply an `id` the length of the currently - list of calls to the test function will be used. + don't supply an `id` an automatic unique id will be generated. - :arg param: will be exposed to a later funcarg factory invocation - through the ``request.param`` attribute. It allows to - defer test fixture setup activities to when an actual - test is run. + :arg param: a parameter which will be exposed to a later funcarg factory + invocation through the ``request.param`` attribute. """ assert funcargs is None or isinstance(funcargs, dict) if funcargs is not None: for name in funcargs: if name not in self.funcargnames: pytest.fail("funcarg %r not used in this function." % name) + else: + funcargs = {} if id is None: raise ValueError("id=None not allowed") if id is _notexists: @@ -561,11 +676,26 @@ if id in self._ids: raise ValueError("duplicate id %r" % id) self._ids.add(id) - self._calls.append(CallSpec(funcargs, id, param)) + + cs = CallSpec2(self) + cs.setall(funcargs, id, param) + self._calls.append(cs) + +class IDMaker: + def __init__(self): + self.counter = 0 + def __call__(self, valset): + l = [] + for val in valset: + if not isinstance(val, (int, str)): + val = "."+str(self.counter) + self.counter += 1 + l.append(str(val)) + return "-".join(l) class FuncargRequest: """ A request for function arguments from a test function. - + Note that there is an optional ``param`` attribute in case there was an invocation to metafunc.addcall(param=...). If no such call was done in a ``pytest_generate_tests`` @@ -637,7 +767,7 @@ def applymarker(self, marker): - """ apply a marker to a single test function invocation. + """ Apply a marker to a single test function invocation. This method is useful if you don't want to have a keyword/marker on all function invocations. 
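The `IDMaker` added above, standalone: ints and strings contribute to the generated id directly, while any other value is replaced by `".N"` with a running counter.

```python
class IDMaker:
    def __init__(self):
        self.counter = 0

    def __call__(self, valset):
        l = []
        for val in valset:
            if not isinstance(val, (int, str)):
                # non-printable-friendly values get a positional stand-in
                val = "." + str(self.counter)
                self.counter += 1
            l.append(str(val))
        return "-".join(l)
```

So `parametrize("x,y", [(1, "a")])` gets the readable id `1-a`, while a list or dict argument falls back to `.0`, `.1`, and so on.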
@@ -649,7 +779,7 @@ self._pyfuncitem.keywords[marker.markname] = marker def cached_setup(self, setup, teardown=None, scope="module", extrakey=None): - """ return a testing resource managed by ``setup`` & + """ Return a testing resource managed by ``setup`` & ``teardown`` calls. ``scope`` and ``extrakey`` determine when the ``teardown`` function will be called so that subsequent calls to ``setup`` would recreate the resource. @@ -698,11 +828,18 @@ self._raiselookupfailed(argname) funcargfactory = self._name2factory[argname].pop() oldarg = self._currentarg - self._currentarg = argname + mp = monkeypatch() + mp.setattr(self, '_currentarg', argname) + try: + param = self._pyfuncitem.callspec.getparam(argname) + except (AttributeError, ValueError): + pass + else: + mp.setattr(self, 'param', param, raising=False) try: self._funcargs[argname] = res = funcargfactory(request=self) finally: - self._currentarg = oldarg + mp.undo() return res def _getscopeitem(self, scope): @@ -817,8 +954,7 @@ >>> raises(ZeroDivisionError, f, x=0) - A third possibility is to use a string which which will - be executed:: + A third possibility is to use a string to be executed:: >>> raises(ZeroDivisionError, "f(0)") diff --git a/_pytest/resultlog.py b/_pytest/resultlog.py --- a/_pytest/resultlog.py +++ b/_pytest/resultlog.py @@ -63,6 +63,8 @@ self.write_log_entry(testpath, lettercode, longrepr) def pytest_runtest_logreport(self, report): + if report.when != "call" and report.passed: + return res = self.config.hook.pytest_report_teststatus(report=report) code = res[1] if code == 'x': @@ -89,5 +91,8 @@ self.log_outcome(report, code, longrepr) def pytest_internalerror(self, excrepr): - path = excrepr.reprcrash.path + reprcrash = getattr(excrepr, 'reprcrash', None) + path = getattr(reprcrash, "path", None) + if path is None: + path = "cwd:%s" % py.path.local() self.write_log_entry(path, '!', str(excrepr)) diff --git a/_pytest/runner.py b/_pytest/runner.py --- a/_pytest/runner.py +++ 
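`_getfuncargvalue()` now routes its temporary attribute changes (`_currentarg`, `param`) through `monkeypatch()` so they are undone even when the factory raises. A minimal sketch of that setattr/undo pattern — `MiniMonkeyPatch` is a toy stand-in, not the real `_pytest.monkeypatch` API:

```python
class MiniMonkeyPatch:
    """Record attribute changes so they can be undone in reverse order."""
    _notset = object()

    def __init__(self):
        self._setattr = []

    def setattr(self, target, name, value, raising=True):
        oldval = getattr(target, name, self._notset)
        if raising and oldval is self._notset:
            raise AttributeError(name)
        self._setattr.append((target, name, oldval))
        setattr(target, name, value)

    def undo(self):
        # restore in reverse order; attributes that did not exist
        # before are deleted again
        for target, name, oldval in reversed(self._setattr):
            if oldval is self._notset:
                delattr(target, name)
            else:
                setattr(target, name, oldval)
        self._setattr = []


class Request:
    _currentarg = None

req = Request()
mp = MiniMonkeyPatch()
mp.setattr(req, "_currentarg", "tmpdir")
mp.setattr(req, "param", 42, raising=False)  # raising=False: attr may not exist yet
try:
    assert req._currentarg == "tmpdir" and req.param == 42
finally:
    mp.undo()
```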
b/_pytest/runner.py @@ -1,6 +1,6 @@ """ basic collect and runtest protocol implementations """ -import py, sys +import py, sys, time from py._code.code import TerminalRepr def pytest_namespace(): @@ -14,33 +14,58 @@ # # pytest plugin hooks +def pytest_addoption(parser): + group = parser.getgroup("terminal reporting", "reporting", after="general") + group.addoption('--durations', + action="store", type="int", default=None, metavar="N", + help="show N slowest setup/test durations (N=0 for all)."), + +def pytest_terminal_summary(terminalreporter): + durations = terminalreporter.config.option.durations + if durations is None: + return + tr = terminalreporter + dlist = [] + for replist in tr.stats.values(): + for rep in replist: + if hasattr(rep, 'duration'): + dlist.append(rep) + if not dlist: + return + dlist.sort(key=lambda x: x.duration) + dlist.reverse() + if not durations: + tr.write_sep("=", "slowest test durations") + else: + tr.write_sep("=", "slowest %s test durations" % durations) + dlist = dlist[:durations] + + for rep in dlist: + nodeid = rep.nodeid.replace("::()::", "::") + tr.write_line("%02.2fs %-8s %s" % + (rep.duration, rep.when, nodeid)) + def pytest_sessionstart(session): session._setupstate = SetupState() -def pytest_sessionfinish(session, exitstatus): - hook = session.config.hook - rep = hook.pytest__teardown_final(session=session) - if rep: - hook.pytest__teardown_final_logerror(session=session, report=rep) - session.exitstatus = 1 - class NodeInfo: def __init__(self, location): self.location = location -def pytest_runtest_protocol(item): +def pytest_runtest_protocol(item, nextitem): item.ihook.pytest_runtest_logstart( nodeid=item.nodeid, location=item.location, ) - runtestprotocol(item) + runtestprotocol(item, nextitem=nextitem) return True -def runtestprotocol(item, log=True): +def runtestprotocol(item, log=True, nextitem=None): rep = call_and_report(item, "setup", log) reports = [rep] if rep.passed: reports.append(call_and_report(item, "call", 
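The new `pytest_terminal_summary` hook above collects every report that carries a `duration`, sorts them descending, and prints the `N` slowest (with `N=0` meaning all). That selection logic in isolation, using plain dicts in place of report objects:

```python
def slowest(reports, n):
    """Return formatted lines for the n slowest reports (n=0 means all),
    mirroring the --durations summary above."""
    dlist = [rep for rep in reports if "duration" in rep]
    dlist.sort(key=lambda rep: rep["duration"], reverse=True)
    if n:
        dlist = dlist[:n]
    return ["%02.2fs %-8s %s" % (rep["duration"], rep["when"], rep["nodeid"])
            for rep in dlist]


reports = [
    {"duration": 0.5, "when": "call", "nodeid": "test_a.py::test_slow"},
    {"duration": 0.1, "when": "setup", "nodeid": "test_a.py::test_slow"},
    {"duration": 1.25, "when": "call", "nodeid": "test_b.py::test_slower"},
]
lines = slowest(reports, 2)
```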
log)) - reports.append(call_and_report(item, "teardown", log)) + reports.append(call_and_report(item, "teardown", log, + nextitem=nextitem)) return reports def pytest_runtest_setup(item): @@ -49,16 +74,8 @@ def pytest_runtest_call(item): item.runtest() -def pytest_runtest_teardown(item): - item.session._setupstate.teardown_exact(item) - -def pytest__teardown_final(session): - call = CallInfo(session._setupstate.teardown_all, when="teardown") - if call.excinfo: - ntraceback = call.excinfo.traceback .cut(excludepath=py._pydir) - call.excinfo.traceback = ntraceback.filter() - longrepr = call.excinfo.getrepr(funcargs=True) - return TeardownErrorReport(longrepr) +def pytest_runtest_teardown(item, nextitem): + item.session._setupstate.teardown_exact(item, nextitem) def pytest_report_teststatus(report): if report.when in ("setup", "teardown"): @@ -74,18 +91,18 @@ # # Implementation -def call_and_report(item, when, log=True): - call = call_runtest_hook(item, when) +def call_and_report(item, when, log=True, **kwds): + call = call_runtest_hook(item, when, **kwds) hook = item.ihook report = hook.pytest_runtest_makereport(item=item, call=call) - if log and (when == "call" or not report.passed): + if log: hook.pytest_runtest_logreport(report=report) return report -def call_runtest_hook(item, when): +def call_runtest_hook(item, when, **kwds): hookname = "pytest_runtest_" + when ihook = getattr(item.ihook, hookname) - return CallInfo(lambda: ihook(item=item), when=when) + return CallInfo(lambda: ihook(item=item, **kwds), when=when) class CallInfo: """ Result/Exception info a function invocation. 
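`runtestprotocol()` above always runs setup, runs the call phase only when setup passed, and now threads `nextitem` into teardown so shared fixtures can survive until the next test needs different ones. A toy version of that control flow (`run_phase` is a hypothetical callable standing in for `call_and_report`):

```python
def runtestprotocol(run_phase, nextitem=None):
    """setup always runs; call only when setup passed; teardown
    receives the next item, mirroring the protocol above."""
    reports = [run_phase("setup")]
    if reports[0]["passed"]:
        reports.append(run_phase("call"))
    reports.append(run_phase("teardown", nextitem=nextitem))
    return reports


def make_runner(setup_passes, log):
    # record the phases that actually execute
    def run_phase(when, nextitem=None):
        log.append(when)
        return {"when": when, "passed": setup_passes or when != "setup"}
    return run_phase


ok_log = []
runtestprotocol(make_runner(True, ok_log))    # setup, call, teardown
fail_log = []
runtestprotocol(make_runner(False, fail_log))  # call phase skipped
```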
""" @@ -95,12 +112,16 @@ #: context of invocation: one of "setup", "call", #: "teardown", "memocollect" self.when = when + self.start = time.time() try: - self.result = func() - except KeyboardInterrupt: - raise - except: - self.excinfo = py.code.ExceptionInfo() + try: + self.result = func() + except KeyboardInterrupt: + raise + except: + self.excinfo = py.code.ExceptionInfo() + finally: + self.stop = time.time() def __repr__(self): if self.excinfo: @@ -120,6 +141,10 @@ return s class BaseReport(object): + + def __init__(self, **kw): + self.__dict__.update(kw) + def toterminal(self, out): longrepr = self.longrepr if hasattr(self, 'node'): @@ -139,6 +164,7 @@ def pytest_runtest_makereport(item, call): when = call.when + duration = call.stop-call.start keywords = dict([(x,1) for x in item.keywords]) excinfo = call.excinfo if not call.excinfo: @@ -160,14 +186,15 @@ else: # exception in setup or teardown longrepr = item._repr_failure_py(excinfo) return TestReport(item.nodeid, item.location, - keywords, outcome, longrepr, when) + keywords, outcome, longrepr, when, + duration=duration) class TestReport(BaseReport): """ Basic test report object (also used for setup and teardown calls if they fail). """ def __init__(self, nodeid, location, - keywords, outcome, longrepr, when): + keywords, outcome, longrepr, when, sections=(), duration=0, **extra): #: normalized collection node id self.nodeid = nodeid @@ -179,16 +206,25 @@ #: a name -> value dictionary containing all keywords and #: markers associated with a test invocation. self.keywords = keywords - + #: test outcome, always one of "passed", "failed", "skipped". self.outcome = outcome #: None or a failure representation. self.longrepr = longrepr - + #: one of 'setup', 'call', 'teardown' to indicate runtest phase. 
self.when = when + #: list of (secname, data) extra information which needs to + #: marshallable + self.sections = list(sections) + + #: time it took to run just the test + self.duration = duration + + self.__dict__.update(extra) + def __repr__(self): return "" % ( self.nodeid, self.when, self.outcome) @@ -196,8 +232,10 @@ class TeardownErrorReport(BaseReport): outcome = "failed" when = "teardown" - def __init__(self, longrepr): + def __init__(self, longrepr, **extra): self.longrepr = longrepr + self.sections = [] + self.__dict__.update(extra) def pytest_make_collect_report(collector): call = CallInfo(collector._memocollect, "memocollect") @@ -219,11 +257,13 @@ getattr(call, 'result', None)) class CollectReport(BaseReport): - def __init__(self, nodeid, outcome, longrepr, result): + def __init__(self, nodeid, outcome, longrepr, result, sections=(), **extra): self.nodeid = nodeid self.outcome = outcome self.longrepr = longrepr self.result = result or [] + self.sections = list(sections) + self.__dict__.update(extra) @property def location(self): @@ -277,20 +317,22 @@ self._teardown_with_finalization(None) assert not self._finalizers - def teardown_exact(self, item): - if self.stack and item == self.stack[-1]: + def teardown_exact(self, item, nextitem): + needed_collectors = nextitem and nextitem.listchain() or [] + self._teardown_towards(needed_collectors) + + def _teardown_towards(self, needed_collectors): + while self.stack: + if self.stack == needed_collectors[:len(self.stack)]: + break self._pop_and_teardown() - else: - self._callfinalizers(item) def prepare(self, colitem): """ setup objects along the collector chain to the test-method and teardown previously setup objects.""" needed_collectors = colitem.listchain() - while self.stack: - if self.stack == needed_collectors[:len(self.stack)]: - break - self._pop_and_teardown() + self._teardown_towards(needed_collectors) + # check if the last collection node has raised an error for col in self.stack: if hasattr(col, 
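The reworked `teardown_exact()`/`_teardown_towards()` above no longer tears everything down after each test: it pops setup-stack entries only until the remaining stack is a prefix of the next item's collector chain, so ancestors shared with the next test stay set up. The core comparison, with strings standing in for collector nodes:

```python
def teardown_towards(stack, needed_collectors, torn_down):
    """Pop (and record) stack entries until the remaining stack is a
    prefix of needed_collectors -- shared ancestors stay set up."""
    while stack:
        if stack == needed_collectors[:len(stack)]:
            break
        torn_down.append(stack.pop())


stack = ["session", "module_a", "class_X", "test_1"]
torn = []
# the next test lives in the same module but a different class:
teardown_towards(stack, ["session", "module_a", "class_Y", "test_2"], torn)
# only test_1 and class_X are torn down; session and module_a survive
```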
'_prepare_exc'): diff --git a/_pytest/skipping.py b/_pytest/skipping.py --- a/_pytest/skipping.py +++ b/_pytest/skipping.py @@ -9,6 +9,21 @@ action="store_true", dest="runxfail", default=False, help="run tests even if they are marked xfail") +def pytest_configure(config): + config.addinivalue_line("markers", + "skipif(*conditions): skip the given test function if evaluation " + "of all conditions has a True value. Evaluation happens within the " + "module global context. Example: skipif('sys.platform == \"win32\"') " + "skips the test if we are on the win32 platform. " + ) + config.addinivalue_line("markers", + "xfail(*conditions, reason=None, run=True): mark the the test function " + "as an expected failure. Optionally specify a reason and run=False " + "if you don't even want to execute the test function. Any positional " + "condition strings will be evaluated (like with skipif) and if one is " + "False the marker will not be applied." + ) + def pytest_namespace(): return dict(xfail=xfail) @@ -169,21 +184,23 @@ elif char == "X": show_xpassed(terminalreporter, lines) elif char in "fF": - show_failed(terminalreporter, lines) + show_simple(terminalreporter, lines, 'failed', "FAIL %s") elif char in "sS": show_skipped(terminalreporter, lines) + elif char == "E": + show_simple(terminalreporter, lines, 'error', "ERROR %s") if lines: tr._tw.sep("=", "short test summary info") for line in lines: tr._tw.line(line) -def show_failed(terminalreporter, lines): +def show_simple(terminalreporter, lines, stat, format): tw = terminalreporter._tw - failed = terminalreporter.stats.get("failed") + failed = terminalreporter.stats.get(stat) if failed: for rep in failed: pos = rep.nodeid - lines.append("FAIL %s" %(pos, )) + lines.append(format %(pos, )) def show_xfailed(terminalreporter, lines): xfailed = terminalreporter.stats.get("xfailed") diff --git a/_pytest/terminal.py b/_pytest/terminal.py --- a/_pytest/terminal.py +++ b/_pytest/terminal.py @@ -15,7 +15,7 @@ 
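`show_failed()` was generalized into `show_simple()`, driven by a stats key and a format string — that is what lets the new `E` report character reuse the same code path as `f`. Standalone, with dicts in place of report objects:

```python
def show_simple(stats, lines, stat, fmt):
    """Append one formatted line per report stored under stats[stat],
    mirroring the generalized helper above."""
    for rep in stats.get(stat) or []:
        lines.append(fmt % (rep["nodeid"],))


stats = {
    "failed": [{"nodeid": "test_x.py::test_a"}],
    "error": [{"nodeid": "test_y.py::test_b"}],
}
lines = []
show_simple(stats, lines, "failed", "FAIL %s")
show_simple(stats, lines, "error", "ERROR %s")
```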
group._addoption('-r', action="store", dest="reportchars", default=None, metavar="chars", help="show extra test summary info as specified by chars (f)ailed, " - "(s)skipped, (x)failed, (X)passed.") + "(E)error, (s)skipped, (x)failed, (X)passed.") group._addoption('-l', '--showlocals', action="store_true", dest="showlocals", default=False, help="show locals in tracebacks (disabled by default).") @@ -43,7 +43,8 @@ pass else: stdout = os.fdopen(newfd, stdout.mode, 1) - config._toclose = stdout + config._cleanup.append(lambda: stdout.close()) + reporter = TerminalReporter(config, stdout) config.pluginmanager.register(reporter, 'terminalreporter') if config.option.debug or config.option.traceconfig: @@ -52,11 +53,6 @@ reporter.write_line("[traceconfig] " + msg) config.trace.root.setprocessor("pytest:config", mywriter) -def pytest_unconfigure(config): - if hasattr(config, '_toclose'): - #print "closing", config._toclose, config._toclose.fileno() - config._toclose.close() - def getreportopt(config): reportopts = "" optvalue = config.option.report @@ -165,9 +161,6 @@ def pytest_deselected(self, items): self.stats.setdefault('deselected', []).extend(items) - def pytest__teardown_final_logerror(self, report): - self.stats.setdefault("error", []).append(report) - def pytest_runtest_logstart(self, nodeid, location): # ensure that the path is printed before the # 1st test of a module starts running @@ -259,7 +252,7 @@ msg = "platform %s -- Python %s" % (sys.platform, verinfo) if hasattr(sys, 'pypy_version_info'): verinfo = ".".join(map(str, sys.pypy_version_info[:3])) - msg += "[pypy-%s]" % verinfo + msg += "[pypy-%s-%s]" % (verinfo, sys.pypy_version_info[3]) msg += " -- pytest-%s" % (py.test.__version__) if self.verbosity > 0 or self.config.option.debug or \ getattr(self.config.option, 'pastebin', None): @@ -318,12 +311,17 @@ self.config.hook.pytest_terminal_summary(terminalreporter=self) if exitstatus == 2: self._report_keyboardinterrupt() + del self._keyboardinterrupt_memo 
self.summary_deselected() self.summary_stats() def pytest_keyboard_interrupt(self, excinfo): self._keyboardinterrupt_memo = excinfo.getrepr(funcargs=True) + def pytest_unconfigure(self): + if hasattr(self, '_keyboardinterrupt_memo'): + self._report_keyboardinterrupt() + def _report_keyboardinterrupt(self): excrepr = self._keyboardinterrupt_memo msg = excrepr.reprcrash.message @@ -388,7 +386,7 @@ else: msg = self._getfailureheadline(rep) self.write_sep("_", msg) - rep.toterminal(self._tw) + self._outrep_summary(rep) def summary_errors(self): if self.config.option.tbstyle != "no": @@ -406,7 +404,15 @@ elif rep.when == "teardown": msg = "ERROR at teardown of " + msg self.write_sep("_", msg) - rep.toterminal(self._tw) + self._outrep_summary(rep) + + def _outrep_summary(self, rep): + rep.toterminal(self._tw) + for secname, content in rep.sections: + self._tw.sep("-", secname) + if content[-1:] == "\n": + content = content[:-1] + self._tw.line(content) def summary_stats(self): session_duration = py.std.time.time() - self._sessionstarttime @@ -417,9 +423,10 @@ keys.append(key) parts = [] for key in keys: - val = self.stats.get(key, None) - if val: - parts.append("%d %s" %(len(val), key)) + if key: # setup/teardown reports have an empty key, ignore them + val = self.stats.get(key, None) + if val: + parts.append("%d %s" %(len(val), key)) line = ", ".join(parts) # XXX coloring msg = "%s in %.2f seconds" %(line, session_duration) @@ -430,8 +437,15 @@ def summary_deselected(self): if 'deselected' in self.stats: + l = [] + k = self.config.option.keyword + if k: + l.append("-k%s" % k) + m = self.config.option.markexpr + if m: + l.append("-m %r" % m) self.write_sep("=", "%d tests deselected by %r" %( - len(self.stats['deselected']), self.config.option.keyword), bold=True) + len(self.stats['deselected']), " ".join(l)), bold=True) def repr_pythonversion(v=None): if v is None: diff --git a/_pytest/tmpdir.py b/_pytest/tmpdir.py --- a/_pytest/tmpdir.py +++ b/_pytest/tmpdir.py @@ -46,7 
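The `summary_stats()` change above skips reports filed under an empty key (plain setup/teardown reports now logged unconditionally) so they do not pollute the final "N passed, M failed" line. The counting logic on its own:

```python
def summary_parts(stats):
    """Build the 'N failed, M passed' fragments the way summary_stats
    does, ignoring the empty key used for setup/teardown reports."""
    keys = "failed passed skipped deselected".split()
    for key in stats:
        if key not in keys:
            keys.append(key)
    parts = []
    for key in keys:
        if key:  # setup/teardown reports have an empty key, ignore them
            val = stats.get(key)
            if val:
                parts.append("%d %s" % (len(val), key))
    return ", ".join(parts)


line = summary_parts({"passed": [1, 2, 3], "failed": [1],
                      "": ["setup-report"], "error": [1]})
# -> "1 failed, 3 passed, 1 error"
```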
+46,7 @@ def finish(self): self.trace("finish") - + def pytest_configure(config): mp = monkeypatch() t = TempdirHandler(config) @@ -64,5 +64,5 @@ name = request._pyfuncitem.name name = py.std.re.sub("[\W]", "_", name) x = request.config._tmpdirhandler.mktemp(name, numbered=True) - return x.realpath() + return x diff --git a/_pytest/unittest.py b/_pytest/unittest.py --- a/_pytest/unittest.py +++ b/_pytest/unittest.py @@ -2,6 +2,9 @@ import pytest, py import sys, pdb +# for transfering markers +from _pytest.python import transfer_markers + def pytest_pycollect_makeitem(collector, name, obj): unittest = sys.modules.get('unittest') if unittest is None: @@ -19,7 +22,14 @@ class UnitTestCase(pytest.Class): def collect(self): loader = py.std.unittest.TestLoader() + module = self.getparent(pytest.Module).obj + cls = self.obj for name in loader.getTestCaseNames(self.obj): + x = getattr(self.obj, name) + funcobj = getattr(x, 'im_func', x) + transfer_markers(funcobj, cls, module) + if hasattr(funcobj, 'todo'): + pytest.mark.xfail(reason=str(funcobj.todo))(funcobj) yield TestCaseFunction(name, parent=self) def setup(self): @@ -37,15 +47,13 @@ class TestCaseFunction(pytest.Function): _excinfo = None - def __init__(self, name, parent): - super(TestCaseFunction, self).__init__(name, parent) - if hasattr(self._obj, 'todo'): - getattr(self._obj, 'im_func', self._obj).xfail = \ - pytest.mark.xfail(reason=str(self._obj.todo)) - def setup(self): self._testcase = self.parent.obj(self.name) self._obj = getattr(self._testcase, self.name) + if hasattr(self._testcase, 'skip'): + pytest.skip(self._testcase.skip) + if hasattr(self._obj, 'skip'): + pytest.skip(self._obj.skip) if hasattr(self._testcase, 'setup_method'): self._testcase.setup_method(self._obj) @@ -104,7 +112,10 @@ def _prunetraceback(self, excinfo): pytest.Function._prunetraceback(self, excinfo) - excinfo.traceback = excinfo.traceback.filter(lambda x:not x.frame.f_globals.get('__unittest')) + traceback = 
excinfo.traceback.filter( + lambda x:not x.frame.f_globals.get('__unittest')) + if traceback: + excinfo.traceback = traceback @pytest.mark.tryfirst def pytest_runtest_makereport(item, call): @@ -120,14 +131,19 @@ ut = sys.modules['twisted.python.failure'] Failure__init__ = ut.Failure.__init__.im_func check_testcase_implements_trial_reporter() - def excstore(self, exc_value=None, exc_type=None, exc_tb=None): + def excstore(self, exc_value=None, exc_type=None, exc_tb=None, + captureVars=None): if exc_value is None: self._rawexcinfo = sys.exc_info() else: if exc_type is None: exc_type = type(exc_value) self._rawexcinfo = (exc_type, exc_value, exc_tb) - Failure__init__(self, exc_value, exc_type, exc_tb) + try: + Failure__init__(self, exc_value, exc_type, exc_tb, + captureVars=captureVars) + except TypeError: + Failure__init__(self, exc_value, exc_type, exc_tb) ut.Failure.__init__ = excstore try: return __multicall__.execute() diff --git a/py/__init__.py b/py/__init__.py --- a/py/__init__.py +++ b/py/__init__.py @@ -8,7 +8,7 @@ (c) Holger Krekel and others, 2004-2010 """ -__version__ = '1.4.4.dev1' +__version__ = '1.4.7.dev3' from py import _apipkg @@ -70,6 +70,11 @@ 'getrawcode' : '._code.code:getrawcode', 'patch_builtins' : '._code.code:patch_builtins', 'unpatch_builtins' : '._code.code:unpatch_builtins', + '_AssertionError' : '._code.assertion:AssertionError', + '_reinterpret_old' : '._code.assertion:reinterpret_old', + '_reinterpret' : '._code.assertion:reinterpret', + '_reprcompare' : '._code.assertion:_reprcompare', + '_format_explanation' : '._code.assertion:_format_explanation', }, # backports and additions of builtins diff --git a/py/_builtin.py b/py/_builtin.py --- a/py/_builtin.py +++ b/py/_builtin.py @@ -142,7 +142,7 @@ del back elif locs is None: locs = globs - fp = open(fn, "rb") + fp = open(fn, "r") try: source = fp.read() finally: diff --git a/py/_code/_assertionnew.py b/py/_code/_assertionnew.py new file mode 100644 --- /dev/null +++ 
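The Twisted integration above now passes `captureVars` through to `Failure.__init__` but retries without it when an older Twisted signature rejects the keyword. That try/`TypeError`/fallback compatibility idiom in isolation (`new_init`/`old_init` are hypothetical callables standing in for the two Failure signatures):

```python
def call_with_optional_kwarg(init, *args, captureVars=None):
    """Try the newer signature first; retry without the extra keyword
    when the callee predates it and raises TypeError. Caveat (shared
    with the real code): a TypeError raised *inside* init also
    triggers the retry."""
    try:
        return init(*args, captureVars=captureVars)
    except TypeError:
        return init(*args)


def new_init(value, captureVars=None):   # newer signature
    return ("new", value, captureVars)

def old_init(value):                     # older signature, no captureVars
    return ("old", value)
```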
b/py/_code/_assertionnew.py @@ -0,0 +1,339 @@ +""" +Find intermediate evalutation results in assert statements through builtin AST. +This should replace _assertionold.py eventually. +""" + +import sys +import ast + +import py +from py._code.assertion import _format_explanation, BuiltinAssertionError + + +if sys.platform.startswith("java") and sys.version_info < (2, 5, 2): + # See http://bugs.jython.org/issue1497 + _exprs = ("BoolOp", "BinOp", "UnaryOp", "Lambda", "IfExp", "Dict", + "ListComp", "GeneratorExp", "Yield", "Compare", "Call", + "Repr", "Num", "Str", "Attribute", "Subscript", "Name", + "List", "Tuple") + _stmts = ("FunctionDef", "ClassDef", "Return", "Delete", "Assign", + "AugAssign", "Print", "For", "While", "If", "With", "Raise", + "TryExcept", "TryFinally", "Assert", "Import", "ImportFrom", + "Exec", "Global", "Expr", "Pass", "Break", "Continue") + _expr_nodes = set(getattr(ast, name) for name in _exprs) + _stmt_nodes = set(getattr(ast, name) for name in _stmts) + def _is_ast_expr(node): + return node.__class__ in _expr_nodes + def _is_ast_stmt(node): + return node.__class__ in _stmt_nodes +else: + def _is_ast_expr(node): + return isinstance(node, ast.expr) + def _is_ast_stmt(node): + return isinstance(node, ast.stmt) + + +class Failure(Exception): + """Error found while interpreting AST.""" + + def __init__(self, explanation=""): + self.cause = sys.exc_info() + self.explanation = explanation + + +def interpret(source, frame, should_fail=False): + mod = ast.parse(source) + visitor = DebugInterpreter(frame) + try: + visitor.visit(mod) + except Failure: + failure = sys.exc_info()[1] + return getfailure(failure) + if should_fail: + return ("(assertion failed, but when it was re-run for " + "printing intermediate values, it did not fail. 
Suggestions: " + "compute assert expression before the assert or use --no-assert)") + +def run(offending_line, frame=None): + if frame is None: + frame = py.code.Frame(sys._getframe(1)) + return interpret(offending_line, frame) + +def getfailure(failure): + explanation = _format_explanation(failure.explanation) + value = failure.cause[1] + if str(value): + lines = explanation.splitlines() + if not lines: + lines.append("") + lines[0] += " << %s" % (value,) + explanation = "\n".join(lines) + text = "%s: %s" % (failure.cause[0].__name__, explanation) + if text.startswith("AssertionError: assert "): + text = text[16:] + return text + + +operator_map = { + ast.BitOr : "|", + ast.BitXor : "^", + ast.BitAnd : "&", + ast.LShift : "<<", + ast.RShift : ">>", + ast.Add : "+", + ast.Sub : "-", + ast.Mult : "*", + ast.Div : "/", + ast.FloorDiv : "//", + ast.Mod : "%", + ast.Eq : "==", + ast.NotEq : "!=", + ast.Lt : "<", + ast.LtE : "<=", + ast.Gt : ">", + ast.GtE : ">=", + ast.Pow : "**", + ast.Is : "is", + ast.IsNot : "is not", + ast.In : "in", + ast.NotIn : "not in" +} + +unary_map = { + ast.Not : "not %s", + ast.Invert : "~%s", + ast.USub : "-%s", + ast.UAdd : "+%s" +} + + +class DebugInterpreter(ast.NodeVisitor): + """Interpret AST nodes to gleam useful debugging information. """ + + def __init__(self, frame): + self.frame = frame + + def generic_visit(self, node): + # Fallback when we don't have a special implementation. 
+ if _is_ast_expr(node): + mod = ast.Expression(node) + co = self._compile(mod) + try: + result = self.frame.eval(co) + except Exception: + raise Failure() + explanation = self.frame.repr(result) + return explanation, result + elif _is_ast_stmt(node): + mod = ast.Module([node]) + co = self._compile(mod, "exec") + try: + self.frame.exec_(co) + except Exception: + raise Failure() + return None, None + else: + raise AssertionError("can't handle %s" %(node,)) + + def _compile(self, source, mode="eval"): + return compile(source, "", mode) + + def visit_Expr(self, expr): + return self.visit(expr.value) + + def visit_Module(self, mod): + for stmt in mod.body: + self.visit(stmt) + + def visit_Name(self, name): + explanation, result = self.generic_visit(name) + # See if the name is local. + source = "%r in locals() is not globals()" % (name.id,) + co = self._compile(source) + try: + local = self.frame.eval(co) + except Exception: + # have to assume it isn't + local = False + if not local: + return name.id, result + return explanation, result + + def visit_Compare(self, comp): + left = comp.left + left_explanation, left_result = self.visit(left) + for op, next_op in zip(comp.ops, comp.comparators): + next_explanation, next_result = self.visit(next_op) + op_symbol = operator_map[op.__class__] + explanation = "%s %s %s" % (left_explanation, op_symbol, + next_explanation) + source = "__exprinfo_left %s __exprinfo_right" % (op_symbol,) + co = self._compile(source) + try: + result = self.frame.eval(co, __exprinfo_left=left_result, + __exprinfo_right=next_result) + except Exception: + raise Failure(explanation) + try: + if not result: + break + except KeyboardInterrupt: + raise + except: + break + left_explanation, left_result = next_explanation, next_result + + rcomp = py.code._reprcompare + if rcomp: + res = rcomp(op_symbol, left_result, next_result) + if res: + explanation = res + return explanation, result + + def visit_BoolOp(self, boolop): + is_or = isinstance(boolop.op, 
ast.Or) + explanations = [] + for operand in boolop.values: + explanation, result = self.visit(operand) + explanations.append(explanation) + if result == is_or: + break + name = is_or and " or " or " and " + explanation = "(" + name.join(explanations) + ")" + return explanation, result + + def visit_UnaryOp(self, unary): + pattern = unary_map[unary.op.__class__] + operand_explanation, operand_result = self.visit(unary.operand) + explanation = pattern % (operand_explanation,) + co = self._compile(pattern % ("__exprinfo_expr",)) + try: + result = self.frame.eval(co, __exprinfo_expr=operand_result) + except Exception: + raise Failure(explanation) + return explanation, result + + def visit_BinOp(self, binop): + left_explanation, left_result = self.visit(binop.left) + right_explanation, right_result = self.visit(binop.right) + symbol = operator_map[binop.op.__class__] + explanation = "(%s %s %s)" % (left_explanation, symbol, + right_explanation) + source = "__exprinfo_left %s __exprinfo_right" % (symbol,) + co = self._compile(source) + try: + result = self.frame.eval(co, __exprinfo_left=left_result, + __exprinfo_right=right_result) + except Exception: + raise Failure(explanation) + return explanation, result + + def visit_Call(self, call): + func_explanation, func = self.visit(call.func) + arg_explanations = [] + ns = {"__exprinfo_func" : func} + arguments = [] + for arg in call.args: + arg_explanation, arg_result = self.visit(arg) + arg_name = "__exprinfo_%s" % (len(ns),) + ns[arg_name] = arg_result + arguments.append(arg_name) + arg_explanations.append(arg_explanation) + for keyword in call.keywords: + arg_explanation, arg_result = self.visit(keyword.value) + arg_name = "__exprinfo_%s" % (len(ns),) + ns[arg_name] = arg_result + keyword_source = "%s=%%s" % (keyword.arg) + arguments.append(keyword_source % (arg_name,)) + arg_explanations.append(keyword_source % (arg_explanation,)) + if call.starargs: + arg_explanation, arg_result = self.visit(call.starargs) + arg_name = 
"__exprinfo_star" + ns[arg_name] = arg_result + arguments.append("*%s" % (arg_name,)) + arg_explanations.append("*%s" % (arg_explanation,)) + if call.kwargs: + arg_explanation, arg_result = self.visit(call.kwargs) + arg_name = "__exprinfo_kwds" + ns[arg_name] = arg_result + arguments.append("**%s" % (arg_name,)) + arg_explanations.append("**%s" % (arg_explanation,)) + args_explained = ", ".join(arg_explanations) + explanation = "%s(%s)" % (func_explanation, args_explained) + args = ", ".join(arguments) + source = "__exprinfo_func(%s)" % (args,) + co = self._compile(source) + try: + result = self.frame.eval(co, **ns) + except Exception: + raise Failure(explanation) + pattern = "%s\n{%s = %s\n}" + rep = self.frame.repr(result) + explanation = pattern % (rep, rep, explanation) + return explanation, result + + def _is_builtin_name(self, name): + pattern = "%r not in globals() and %r not in locals()" + source = pattern % (name.id, name.id) + co = self._compile(source) + try: + return self.frame.eval(co) + except Exception: + return False + + def visit_Attribute(self, attr): + if not isinstance(attr.ctx, ast.Load): + return self.generic_visit(attr) + source_explanation, source_result = self.visit(attr.value) + explanation = "%s.%s" % (source_explanation, attr.attr) + source = "__exprinfo_expr.%s" % (attr.attr,) + co = self._compile(source) + try: + result = self.frame.eval(co, __exprinfo_expr=source_result) + except Exception: + raise Failure(explanation) + explanation = "%s\n{%s = %s.%s\n}" % (self.frame.repr(result), + self.frame.repr(result), + source_explanation, attr.attr) + # Check if the attr is from an instance. 
+ source = "%r in getattr(__exprinfo_expr, '__dict__', {})" + source = source % (attr.attr,) + co = self._compile(source) + try: + from_instance = self.frame.eval(co, __exprinfo_expr=source_result) + except Exception: + from_instance = True + if from_instance: + rep = self.frame.repr(result) + pattern = "%s\n{%s = %s\n}" + explanation = pattern % (rep, rep, explanation) + return explanation, result + + def visit_Assert(self, assrt): + test_explanation, test_result = self.visit(assrt.test) + if test_explanation.startswith("False\n{False =") and \ + test_explanation.endswith("\n"): + test_explanation = test_explanation[15:-2] + explanation = "assert %s" % (test_explanation,) + if not test_result: + try: + raise BuiltinAssertionError + except Exception: + raise Failure(explanation) + return explanation, test_result + + def visit_Assign(self, assign): + value_explanation, value_result = self.visit(assign.value) + explanation = "... = %s" % (value_explanation,) + name = ast.Name("__exprinfo_expr", ast.Load(), + lineno=assign.value.lineno, + col_offset=assign.value.col_offset) + new_assign = ast.Assign(assign.targets, name, lineno=assign.lineno, + col_offset=assign.col_offset) + mod = ast.Module([new_assign]) + co = self._compile(mod, "exec") + try: + self.frame.exec_(co, __exprinfo_expr=value_result) + except Exception: + raise Failure(explanation) + return explanation, value_result diff --git a/py/_code/_assertionold.py b/py/_code/_assertionold.py new file mode 100644 --- /dev/null +++ b/py/_code/_assertionold.py @@ -0,0 +1,555 @@ +import py +import sys, inspect +from compiler import parse, ast, pycodegen +from py._code.assertion import BuiltinAssertionError, _format_explanation + +passthroughex = py.builtin._sysex + +class Failure: + def __init__(self, node): + self.exc, self.value, self.tb = sys.exc_info() + self.node = node + +class View(object): + """View base class. + + If C is a subclass of View, then C(x) creates a proxy object around + the object x. 
The actual class of the proxy is not C in general, + but a *subclass* of C determined by the rules below. To avoid confusion + we call view class the class of the proxy (a subclass of C, so of View) + and object class the class of x. + + Attributes and methods not found in the proxy are automatically read on x. + Other operations like setting attributes are performed on the proxy, as + determined by its view class. The object x is available from the proxy + as its __obj__ attribute. + + The view class selection is determined by the __view__ tuples and the + optional __viewkey__ method. By default, the selected view class is the + most specific subclass of C whose __view__ mentions the class of x. + If no such subclass is found, the search proceeds with the parent + object classes. For example, C(True) will first look for a subclass + of C with __view__ = (..., bool, ...) and only if it doesn't find any + look for one with __view__ = (..., int, ...), and then ..., object,... + If everything fails the class C itself is considered to be the default. + + Alternatively, the view class selection can be driven by another aspect + of the object x, instead of the class of x, by overriding __viewkey__. + See last example at the end of this module. 
+ """ + + _viewcache = {} + __view__ = () + + def __new__(rootclass, obj, *args, **kwds): + self = object.__new__(rootclass) + self.__obj__ = obj + self.__rootclass__ = rootclass + key = self.__viewkey__() + try: + self.__class__ = self._viewcache[key] + except KeyError: + self.__class__ = self._selectsubclass(key) + return self + + def __getattr__(self, attr): + # attributes not found in the normal hierarchy rooted on View + # are looked up in the object's real class + return getattr(self.__obj__, attr) + + def __viewkey__(self): + return self.__obj__.__class__ + + def __matchkey__(self, key, subclasses): + if inspect.isclass(key): + keys = inspect.getmro(key) + else: + keys = [key] + for key in keys: + result = [C for C in subclasses if key in C.__view__] + if result: + return result + return [] + + def _selectsubclass(self, key): + subclasses = list(enumsubclasses(self.__rootclass__)) + for C in subclasses: + if not isinstance(C.__view__, tuple): + C.__view__ = (C.__view__,) + choices = self.__matchkey__(key, subclasses) + if not choices: + return self.__rootclass__ + elif len(choices) == 1: + return choices[0] + else: + # combine the multiple choices + return type('?', tuple(choices), {}) + + def __repr__(self): + return '%s(%r)' % (self.__rootclass__.__name__, self.__obj__) + + +def enumsubclasses(cls): + for subcls in cls.__subclasses__(): + for subsubclass in enumsubclasses(subcls): + yield subsubclass + yield cls + + +class Interpretable(View): + """A parse tree node with a few extra methods.""" + explanation = None + + def is_builtin(self, frame): + return False + + def eval(self, frame): + # fall-back for unknown expression nodes + try: + expr = ast.Expression(self.__obj__) + expr.filename = '' + self.__obj__.filename = '' + co = pycodegen.ExpressionCodeGenerator(expr).getCode() + result = frame.eval(co) + except passthroughex: + raise + except: + raise Failure(self) + self.result = result + self.explanation = self.explanation or frame.repr(self.result) + 
+ def run(self, frame): + # fall-back for unknown statement nodes + try: + expr = ast.Module(None, ast.Stmt([self.__obj__])) + expr.filename = '' + co = pycodegen.ModuleCodeGenerator(expr).getCode() + frame.exec_(co) + except passthroughex: + raise + except: + raise Failure(self) + + def nice_explanation(self): + return _format_explanation(self.explanation) + + +class Name(Interpretable): + __view__ = ast.Name + + def is_local(self, frame): + source = '%r in locals() is not globals()' % self.name + try: + return frame.is_true(frame.eval(source)) + except passthroughex: + raise + except: + return False + + def is_global(self, frame): + source = '%r in globals()' % self.name + try: + return frame.is_true(frame.eval(source)) + except passthroughex: + raise + except: + return False + + def is_builtin(self, frame): + source = '%r not in locals() and %r not in globals()' % ( + self.name, self.name) + try: + return frame.is_true(frame.eval(source)) + except passthroughex: + raise + except: + return False + + def eval(self, frame): + super(Name, self).eval(frame) + if not self.is_local(frame): + self.explanation = self.name + +class Compare(Interpretable): + __view__ = ast.Compare + + def eval(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + for operation, expr2 in self.ops: + if hasattr(self, 'result'): + # shortcutting in chained expressions + if not frame.is_true(self.result): + break + expr2 = Interpretable(expr2) + expr2.eval(frame) + self.explanation = "%s %s %s" % ( + expr.explanation, operation, expr2.explanation) + source = "__exprinfo_left %s __exprinfo_right" % operation + try: + self.result = frame.eval(source, + __exprinfo_left=expr.result, + __exprinfo_right=expr2.result) + except passthroughex: + raise + except: + raise Failure(self) + expr = expr2 + +class And(Interpretable): + __view__ = ast.And + + def eval(self, frame): + explanations = [] + for expr in self.nodes: + expr = Interpretable(expr) + expr.eval(frame) + 
explanations.append(expr.explanation) + self.result = expr.result + if not frame.is_true(expr.result): + break + self.explanation = '(' + ' and '.join(explanations) + ')' + +class Or(Interpretable): + __view__ = ast.Or + + def eval(self, frame): + explanations = [] + for expr in self.nodes: + expr = Interpretable(expr) + expr.eval(frame) + explanations.append(expr.explanation) + self.result = expr.result + if frame.is_true(expr.result): + break + self.explanation = '(' + ' or '.join(explanations) + ')' + + +# == Unary operations == +keepalive = [] +for astclass, astpattern in { + ast.Not : 'not __exprinfo_expr', + ast.Invert : '(~__exprinfo_expr)', + }.items(): + + class UnaryArith(Interpretable): + __view__ = astclass + + def eval(self, frame, astpattern=astpattern): + expr = Interpretable(self.expr) + expr.eval(frame) + self.explanation = astpattern.replace('__exprinfo_expr', + expr.explanation) + try: + self.result = frame.eval(astpattern, + __exprinfo_expr=expr.result) + except passthroughex: + raise + except: + raise Failure(self) + + keepalive.append(UnaryArith) + +# == Binary operations == +for astclass, astpattern in { + ast.Add : '(__exprinfo_left + __exprinfo_right)', + ast.Sub : '(__exprinfo_left - __exprinfo_right)', + ast.Mul : '(__exprinfo_left * __exprinfo_right)', + ast.Div : '(__exprinfo_left / __exprinfo_right)', + ast.Mod : '(__exprinfo_left % __exprinfo_right)', + ast.Power : '(__exprinfo_left ** __exprinfo_right)', + }.items(): + + class BinaryArith(Interpretable): + __view__ = astclass + + def eval(self, frame, astpattern=astpattern): + left = Interpretable(self.left) + left.eval(frame) + right = Interpretable(self.right) + right.eval(frame) + self.explanation = (astpattern + .replace('__exprinfo_left', left .explanation) + .replace('__exprinfo_right', right.explanation)) + try: + self.result = frame.eval(astpattern, + __exprinfo_left=left.result, + __exprinfo_right=right.result) + except passthroughex: + raise + except: + raise Failure(self) 
+ + keepalive.append(BinaryArith) + + +class CallFunc(Interpretable): + __view__ = ast.CallFunc + + def is_bool(self, frame): + source = 'isinstance(__exprinfo_value, bool)' + try: + return frame.is_true(frame.eval(source, + __exprinfo_value=self.result)) + except passthroughex: + raise + except: + return False + + def eval(self, frame): + node = Interpretable(self.node) + node.eval(frame) + explanations = [] + vars = {'__exprinfo_fn': node.result} + source = '__exprinfo_fn(' + for a in self.args: + if isinstance(a, ast.Keyword): + keyword = a.name + a = a.expr + else: + keyword = None + a = Interpretable(a) + a.eval(frame) + argname = '__exprinfo_%d' % len(vars) + vars[argname] = a.result + if keyword is None: + source += argname + ',' + explanations.append(a.explanation) + else: + source += '%s=%s,' % (keyword, argname) + explanations.append('%s=%s' % (keyword, a.explanation)) + if self.star_args: + star_args = Interpretable(self.star_args) + star_args.eval(frame) + argname = '__exprinfo_star' + vars[argname] = star_args.result + source += '*' + argname + ',' + explanations.append('*' + star_args.explanation) + if self.dstar_args: + dstar_args = Interpretable(self.dstar_args) + dstar_args.eval(frame) + argname = '__exprinfo_kwds' + vars[argname] = dstar_args.result + source += '**' + argname + ',' + explanations.append('**' + dstar_args.explanation) + self.explanation = "%s(%s)" % ( + node.explanation, ', '.join(explanations)) + if source.endswith(','): + source = source[:-1] + source += ')' + try: + self.result = frame.eval(source, **vars) + except passthroughex: + raise + except: + raise Failure(self) + if not node.is_builtin(frame) or not self.is_bool(frame): + r = frame.repr(self.result) + self.explanation = '%s\n{%s = %s\n}' % (r, r, self.explanation) + +class Getattr(Interpretable): + __view__ = ast.Getattr + + def eval(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + source = '__exprinfo_expr.%s' % self.attrname + try: + self.result = 
frame.eval(source, __exprinfo_expr=expr.result) + except passthroughex: + raise + except: + raise Failure(self) + self.explanation = '%s.%s' % (expr.explanation, self.attrname) + # if the attribute comes from the instance, its value is interesting + source = ('hasattr(__exprinfo_expr, "__dict__") and ' + '%r in __exprinfo_expr.__dict__' % self.attrname) + try: + from_instance = frame.is_true( + frame.eval(source, __exprinfo_expr=expr.result)) + except passthroughex: + raise + except: + from_instance = True + if from_instance: + r = frame.repr(self.result) + self.explanation = '%s\n{%s = %s\n}' % (r, r, self.explanation) + +# == Re-interpretation of full statements == + +class Assert(Interpretable): + __view__ = ast.Assert + + def run(self, frame): + test = Interpretable(self.test) + test.eval(frame) + # simplify 'assert False where False = ...' + if (test.explanation.startswith('False\n{False = ') and + test.explanation.endswith('\n}')): + test.explanation = test.explanation[15:-2] + # print the result as 'assert ' + self.result = test.result + self.explanation = 'assert ' + test.explanation + if not frame.is_true(test.result): + try: + raise BuiltinAssertionError + except passthroughex: + raise + except: + raise Failure(self) + +class Assign(Interpretable): + __view__ = ast.Assign + + def run(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + self.result = expr.result + self.explanation = '... 
= ' + expr.explanation + # fall-back-run the rest of the assignment + ass = ast.Assign(self.nodes, ast.Name('__exprinfo_expr')) + mod = ast.Module(None, ast.Stmt([ass])) + mod.filename = '' + co = pycodegen.ModuleCodeGenerator(mod).getCode() + try: + frame.exec_(co, __exprinfo_expr=expr.result) + except passthroughex: + raise + except: + raise Failure(self) + +class Discard(Interpretable): + __view__ = ast.Discard + + def run(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + self.result = expr.result + self.explanation = expr.explanation + +class Stmt(Interpretable): + __view__ = ast.Stmt + + def run(self, frame): + for stmt in self.nodes: + stmt = Interpretable(stmt) + stmt.run(frame) + + +def report_failure(e): + explanation = e.node.nice_explanation() + if explanation: + explanation = ", in: " + explanation + else: + explanation = "" + sys.stdout.write("%s: %s%s\n" % (e.exc.__name__, e.value, explanation)) + +def check(s, frame=None): + if frame is None: + frame = sys._getframe(1) + frame = py.code.Frame(frame) + expr = parse(s, 'eval') + assert isinstance(expr, ast.Expression) + node = Interpretable(expr.node) + try: + node.eval(frame) + except passthroughex: + raise + except Failure: + e = sys.exc_info()[1] + report_failure(e) + else: + if not frame.is_true(node.result): + sys.stderr.write("assertion failed: %s\n" % node.nice_explanation()) + + +########################################################### +# API / Entry points +# ######################################################### + +def interpret(source, frame, should_fail=False): + module = Interpretable(parse(source, 'exec').node) + #print "got module", module + if isinstance(frame, py.std.types.FrameType): + frame = py.code.Frame(frame) + try: + module.run(frame) + except Failure: + e = sys.exc_info()[1] + return getfailure(e) + except passthroughex: + raise + except: + import traceback + traceback.print_exc() + if should_fail: + return ("(assertion failed, but when it was re-run 
for " + "printing intermediate values, it did not fail. Suggestions: " + "compute assert expression before the assert or use --nomagic)") + else: + return None + +def getmsg(excinfo): + if isinstance(excinfo, tuple): + excinfo = py.code.ExceptionInfo(excinfo) + #frame, line = gettbline(tb) + #frame = py.code.Frame(frame) + #return interpret(line, frame) + + tb = excinfo.traceback[-1] + source = str(tb.statement).strip() + x = interpret(source, tb.frame, should_fail=True) + if not isinstance(x, str): + raise TypeError("interpret returned non-string %r" % (x,)) + return x + +def getfailure(e): + explanation = e.node.nice_explanation() + if str(e.value): + lines = explanation.split('\n') + lines[0] += " << %s" % (e.value,) + explanation = '\n'.join(lines) + text = "%s: %s" % (e.exc.__name__, explanation) + if text.startswith('AssertionError: assert '): + text = text[16:] + return text + +def run(s, frame=None): + if frame is None: + frame = sys._getframe(1) + frame = py.code.Frame(frame) + module = Interpretable(parse(s, 'exec').node) + try: + module.run(frame) + except Failure: + e = sys.exc_info()[1] + report_failure(e) + + +if __name__ == '__main__': + # example: + def f(): + return 5 + def g(): + return 3 + def h(x): + return 'never' + check("f() * g() == 5") + check("not f()") + check("not (f() and g() or 0)") + check("f() == g()") + i = 4 + check("i == f()") + check("len(f()) == 0") + check("isinstance(2+3+4, float)") + + run("x = i") + check("x == 5") + + run("assert not f(), 'oops'") + run("a, b, c = 1, 2") + run("a, b, c = f()") + + check("max([f(),g()]) == 4") + check("'hello'[g()] == 'h'") + run("'guk%d' % h(f())") diff --git a/py/_code/assertion.py b/py/_code/assertion.py new file mode 100644 --- /dev/null +++ b/py/_code/assertion.py @@ -0,0 +1,94 @@ +import sys +import py + +BuiltinAssertionError = py.builtin.builtins.AssertionError + +_reprcompare = None # if set, will be called by assert reinterp for comparison ops + +def 
_format_explanation(explanation): + """This formats an explanation + + Normally all embedded newlines are escaped, however there are + three exceptions: \n{, \n} and \n~. The first two are intended + cover nested explanations, see function and attribute explanations + for examples (.visit_Call(), visit_Attribute()). The last one is + for when one explanation needs to span multiple lines, e.g. when + displaying diffs. + """ + raw_lines = (explanation or '').split('\n') + # escape newlines not followed by {, } and ~ + lines = [raw_lines[0]] + for l in raw_lines[1:]: + if l.startswith('{') or l.startswith('}') or l.startswith('~'): + lines.append(l) + else: + lines[-1] += '\\n' + l + + result = lines[:1] + stack = [0] + stackcnt = [0] + for line in lines[1:]: + if line.startswith('{'): + if stackcnt[-1]: + s = 'and ' + else: + s = 'where ' + stack.append(len(result)) + stackcnt[-1] += 1 + stackcnt.append(0) + result.append(' +' + ' '*(len(stack)-1) + s + line[1:]) + elif line.startswith('}'): + assert line.startswith('}') + stack.pop() + stackcnt.pop() + result[stack[-1]] += line[1:] + else: + assert line.startswith('~') + result.append(' '*len(stack) + line[1:]) + assert len(stack) == 1 + return '\n'.join(result) + + +class AssertionError(BuiltinAssertionError): + def __init__(self, *args): + BuiltinAssertionError.__init__(self, *args) + if args: + try: + self.msg = str(args[0]) + except py.builtin._sysex: + raise + except: + self.msg = "<[broken __repr__] %s at %0xd>" %( + args[0].__class__, id(args[0])) + else: + f = py.code.Frame(sys._getframe(1)) + try: + source = f.code.fullsource + if source is not None: + try: + source = source.getstatement(f.lineno, assertion=True) + except IndexError: + source = None + else: + source = str(source.deindent()).strip() + except py.error.ENOENT: + source = None + # this can also occur during reinterpretation, when the + # co_filename is set to "". 
+ if source: + self.msg = reinterpret(source, f, should_fail=True) + else: + self.msg = "" + if not self.args: + self.args = (self.msg,) + +if sys.version_info > (3, 0): + AssertionError.__module__ = "builtins" + reinterpret_old = "old reinterpretation not available for py3" +else: + from py._code._assertionold import interpret as reinterpret_old +if sys.version_info >= (2, 6) or (sys.platform.startswith("java")): + from py._code._assertionnew import interpret as reinterpret +else: + reinterpret = reinterpret_old + diff --git a/py/_code/code.py b/py/_code/code.py --- a/py/_code/code.py +++ b/py/_code/code.py @@ -145,6 +145,17 @@ return self.frame.f_locals locals = property(getlocals, None, None, "locals of underlaying frame") + def reinterpret(self): + """Reinterpret the failing statement and returns a detailed information + about what operations are performed.""" + if self.exprinfo is None: + source = str(self.statement).strip() + x = py.code._reinterpret(source, self.frame, should_fail=True) + if not isinstance(x, str): + raise TypeError("interpret returned non-string %r" % (x,)) + self.exprinfo = x + return self.exprinfo + def getfirstlinesource(self): # on Jython this firstlineno can be -1 apparently return max(self.frame.code.firstlineno, 0) @@ -158,13 +169,12 @@ end = self.lineno try: _, end = source.getstatementrange(end) - except IndexError: + except (IndexError, ValueError): end = self.lineno + 1 # heuristic to stop displaying source on e.g. 
# if something: # assume this causes a NameError # # _this_ lines and the one # below we don't want from entry.getsource() - end = min(end, len(source)) for i in range(self.lineno, end): if source[i].rstrip().endswith(':'): end = i + 1 @@ -273,7 +283,11 @@ """ cache = {} for i, entry in enumerate(self): - key = entry.frame.code.path, entry.lineno + # id for the code.raw is needed to work around + # the strange metaprogramming in the decorator lib from pypi + # which generates code objects that have hash/value equality + #XXX needs a test + key = entry.frame.code.path, id(entry.frame.code.raw), entry.lineno #print "checking for recursion at", key l = cache.setdefault(key, []) if l: @@ -308,7 +322,7 @@ self._striptext = 'AssertionError: ' self._excinfo = tup self.type, self.value, tb = self._excinfo - self.typename = getattr(self.type, "__name__", "???") + self.typename = self.type.__name__ self.traceback = py.code.Traceback(tb) def __repr__(self): @@ -347,14 +361,16 @@ showlocals: show locals per traceback entry style: long|short|no|native traceback style tbfilter: hide entries (where __tracebackhide__ is true) + + in case of style==native, tbfilter and showlocals is ignored. 
""" if style == 'native': - import traceback - return ''.join(traceback.format_exception( - self.type, - self.value, - self.traceback[0]._rawentry, - )) + return ReprExceptionInfo(ReprTracebackNative( + py.std.traceback.format_exception( + self.type, + self.value, + self.traceback[0]._rawentry, + )), self._getreprcrash()) fmt = FormattedExcinfo(showlocals=showlocals, style=style, abspath=abspath, tbfilter=tbfilter, funcargs=funcargs) @@ -452,7 +468,7 @@ def repr_locals(self, locals): if self.showlocals: lines = [] - keys = list(locals) + keys = [loc for loc in locals if loc[0] != "@"] keys.sort() for name in keys: value = locals[name] @@ -506,7 +522,10 @@ def _makepath(self, path): if not self.abspath: - np = py.path.local().bestrelpath(path) + try: + np = py.path.local().bestrelpath(path) + except OSError: + return path if len(np) < len(str(path)): path = np return path @@ -595,6 +614,19 @@ if self.extraline: tw.line(self.extraline) +class ReprTracebackNative(ReprTraceback): + def __init__(self, tblines): + self.style = "native" + self.reprentries = [ReprEntryNative(tblines)] + self.extraline = None + +class ReprEntryNative(TerminalRepr): + def __init__(self, tblines): + self.lines = tblines + + def toterminal(self, tw): + tw.write("".join(self.lines)) + class ReprEntry(TerminalRepr): localssep = "_ " @@ -680,19 +712,26 @@ oldbuiltins = {} -def patch_builtins(compile=True): - """ put compile builtins to Python's builtins. """ +def patch_builtins(assertion=True, compile=True): + """ put compile and AssertionError builtins to Python's builtins. 
""" + if assertion: + from py._code import assertion + l = oldbuiltins.setdefault('AssertionError', []) + l.append(py.builtin.builtins.AssertionError) + py.builtin.builtins.AssertionError = assertion.AssertionError if compile: l = oldbuiltins.setdefault('compile', []) l.append(py.builtin.builtins.compile) py.builtin.builtins.compile = py.code.compile -def unpatch_builtins(compile=True): +def unpatch_builtins(assertion=True, compile=True): """ remove compile and AssertionError builtins from Python builtins. """ + if assertion: + py.builtin.builtins.AssertionError = oldbuiltins['AssertionError'].pop() if compile: py.builtin.builtins.compile = oldbuiltins['compile'].pop() -def getrawcode(obj): +def getrawcode(obj, trycall=True): """ return code object for given function. """ try: return obj.__code__ @@ -701,5 +740,10 @@ obj = getattr(obj, 'func_code', obj) obj = getattr(obj, 'f_code', obj) obj = getattr(obj, '__code__', obj) + if trycall and not hasattr(obj, 'co_firstlineno'): + if hasattr(obj, '__call__') and not py.std.inspect.isclass(obj): + x = getrawcode(obj.__call__, trycall=False) + if hasattr(x, 'co_firstlineno'): + return x return obj diff --git a/py/_code/source.py b/py/_code/source.py --- a/py/_code/source.py +++ b/py/_code/source.py @@ -108,6 +108,7 @@ def getstatementrange(self, lineno, assertion=False): """ return (start, end) tuple which spans the minimal statement region which containing the given lineno. + raise an IndexError if no such statementrange can be found. """ # XXX there must be a better than these heuristic ways ... # XXX there may even be better heuristics :-) @@ -116,6 +117,7 @@ # 1. 
find the start of the statement from codeop import compile_command + end = None for start in range(lineno, -1, -1): if assertion: line = self.lines[start] @@ -139,7 +141,9 @@ trysource = self[start:end] if trysource.isparseable(): return start, end - return start, len(self) + if end is None: + raise IndexError("no valid source range around line %d " % (lineno,)) + return start, end def getblockend(self, lineno): # XXX @@ -257,23 +261,29 @@ def getfslineno(obj): + """ Return source location (path, lineno) for the given object. + If the source cannot be determined return ("", -1) + """ try: code = py.code.Code(obj) except TypeError: - # fallback to - fn = (py.std.inspect.getsourcefile(obj) or - py.std.inspect.getfile(obj)) + try: + fn = (py.std.inspect.getsourcefile(obj) or + py.std.inspect.getfile(obj)) + except TypeError: + return "", -1 + fspath = fn and py.path.local(fn) or None + lineno = -1 if fspath: try: _, lineno = findsource(obj) except IOError: - lineno = None - else: - lineno = None + pass else: fspath = code.path lineno = code.firstlineno + assert isinstance(lineno, int) return fspath, lineno # @@ -286,7 +296,7 @@ except py.builtin._sysex: raise except: - return None, None + return None, -1 source = Source() source.lines = [line.rstrip() for line in sourcelines] return source, lineno diff --git a/py/_iniconfig.py b/py/_iniconfig.py --- a/py/_iniconfig.py +++ b/py/_iniconfig.py @@ -103,6 +103,7 @@ def _parseline(self, line, lineno): # comments line = line.split('#')[0].rstrip() + line = line.split(';')[0].rstrip() # blank lines if not line: return None, None diff --git a/py/_io/capture.py b/py/_io/capture.py --- a/py/_io/capture.py +++ b/py/_io/capture.py @@ -258,6 +258,9 @@ f = getattr(self, name).tmpfile f.seek(0) res = f.read() + enc = getattr(f, 'encoding', None) + if enc: + res = py.builtin._totext(res, enc) f.truncate(0) f.seek(0) l.append(res) diff --git a/py/_io/terminalwriter.py b/py/_io/terminalwriter.py --- a/py/_io/terminalwriter.py +++ 
b/py/_io/terminalwriter.py @@ -105,6 +105,8 @@ Blue=44, Purple=45, Cyan=46, White=47, bold=1, light=2, blink=5, invert=7) + _newline = None # the last line printed + # XXX deprecate stringio argument def __init__(self, file=None, stringio=False, encoding=None): if file is None: @@ -112,11 +114,9 @@ self.stringio = file = py.io.TextIO() else: file = py.std.sys.stdout - if hasattr(file, 'encoding'): - encoding = file.encoding elif hasattr(file, '__call__'): file = WriteFile(file, encoding=encoding) - self.encoding = encoding + self.encoding = encoding or getattr(file, 'encoding', "utf-8") self._file = file self.fullwidth = get_terminal_width() self.hasmarkup = should_do_markup(file) @@ -182,8 +182,24 @@ return s def line(self, s='', **kw): + if self._newline == False: + self.write("\n") self.write(s, **kw) self.write('\n') + self._newline = True + + def reline(self, line, **opts): + if not self.hasmarkup: + raise ValueError("cannot use rewrite-line without terminal") + if not self._newline: + self.write("\r") + self.write(line, **opts) + lastlen = getattr(self, '_lastlinelen', None) + self._lastlinelen = lenlastline = len(line) + if lenlastline < lastlen: + self.write(" " * (lastlen - lenlastline + 1)) + self._newline = False + class Win32ConsoleWriter(TerminalWriter): def write(self, s, **kw): diff --git a/py/_path/common.py b/py/_path/common.py --- a/py/_path/common.py +++ b/py/_path/common.py @@ -368,6 +368,5 @@ else: name = str(path) # path.strpath # XXX svn? pattern = '*' + path.sep + pattern - from fnmatch import fnmatch - return fnmatch(name, pattern) + return py.std.fnmatch.fnmatch(name, pattern) diff --git a/py/_path/local.py b/py/_path/local.py --- a/py/_path/local.py +++ b/py/_path/local.py @@ -157,14 +157,16 @@ return str(self) < str(other) def samefile(self, other): - """ return True if 'other' references the same file as 'self'. 
""" - if not iswin32: - return py.error.checked_call( - os.path.samefile, str(self), str(other)) + """ return True if 'other' references the same file as 'self'. + """ + if not isinstance(other, py.path.local): + other = os.path.abspath(str(other)) if self == other: return True - other = os.path.abspath(str(other)) - return self == other + if iswin32: + return False # ther is no samefile + return py.error.checked_call( + os.path.samefile, str(self), str(other)) def remove(self, rec=1, ignore_errors=False): """ remove a file or directory (or a directory tree if rec=1). @@ -539,7 +541,11 @@ if self.basename != "__init__.py": modfile = modfile[:-12] - if not self.samefile(modfile): + try: + issame = self.samefile(modfile) + except py.error.ENOENT: + issame = False + if not issame: raise self.ImportMismatchError(modname, modfile, self) return mod else: diff --git a/py/_path/svnurl.py b/py/_path/svnurl.py --- a/py/_path/svnurl.py +++ b/py/_path/svnurl.py @@ -233,6 +233,8 @@ e = sys.exc_info()[1] if e.err.find('non-existent in that revision') != -1: raise py.error.ENOENT(self, e.err) + elif e.err.find("E200009:") != -1: + raise py.error.ENOENT(self, e.err) elif e.err.find('File not found') != -1: raise py.error.ENOENT(self, e.err) elif e.err.find('not part of a repository')!=-1: diff --git a/py/_path/svnwc.py b/py/_path/svnwc.py --- a/py/_path/svnwc.py +++ b/py/_path/svnwc.py @@ -482,10 +482,13 @@ except py.process.cmdexec.Error: e = sys.exc_info()[1] strerr = e.err.lower() - if strerr.find('file not found') != -1: + if strerr.find('not found') != -1: + raise py.error.ENOENT(self) + elif strerr.find("E200009:") != -1: raise py.error.ENOENT(self) if (strerr.find('file exists') != -1 or strerr.find('file already exists') != -1 or + strerr.find('w150002:') != -1 or strerr.find("can't create directory") != -1): raise py.error.EEXIST(self) raise @@ -593,7 +596,7 @@ out = self._authsvn('lock').strip() if not out: # warning or error, raise exception - raise Exception(out[4:]) + 
raise ValueError("unknown error in svn lock command") def unlock(self): """ unset a previously set lock """ @@ -1066,6 +1069,8 @@ modrev = '?' author = '?' date = '' + elif itemstatus == "replaced": + pass else: #print entryel.toxml() commitel = entryel.getElementsByTagName('commit')[0] @@ -1148,7 +1153,11 @@ raise ValueError("Not a versioned resource") #raise ValueError, "Not a versioned resource %r" % path self.kind = d['nodekind'] == 'directory' and 'dir' or d['nodekind'] - self.rev = int(d['revision']) + try: + self.rev = int(d['revision']) + except KeyError: + self.rev = None + self.path = py.path.local(d['path']) self.size = self.path.size() if 'lastchangedrev' in d: diff --git a/py/_xmlgen.py b/py/_xmlgen.py --- a/py/_xmlgen.py +++ b/py/_xmlgen.py @@ -136,7 +136,8 @@ def list(self, obj): assert id(obj) not in self.visited self.visited[id(obj)] = 1 - map(self.visit, obj) + for elem in obj: + self.visit(elem) def Tag(self, tag): assert id(tag) not in self.visited diff --git a/py/bin/_findpy.py b/py/bin/_findpy.py deleted file mode 100644 --- a/py/bin/_findpy.py +++ /dev/null @@ -1,38 +0,0 @@ -#!/usr/bin/env python - -# -# find and import a version of 'py' -# -import sys -import os -from os.path import dirname as opd, exists, join, basename, abspath - -def searchpy(current): - while 1: - last = current - initpy = join(current, '__init__.py') - if not exists(initpy): - pydir = join(current, 'py') - # recognize py-package and ensure it is importable - if exists(pydir) and exists(join(pydir, '__init__.py')): - #for p in sys.path: - # if p == current: - # return True - if current != sys.path[0]: # if we are already first, then ok - sys.stderr.write("inserting into sys.path: %s\n" % current) - sys.path.insert(0, current) - return True - current = opd(current) - if last == current: - return False - -if not searchpy(abspath(os.curdir)): - if not searchpy(opd(abspath(sys.argv[0]))): - if not searchpy(opd(__file__)): - pass # let's hope it is just on sys.path - -import 
py -import pytest - -if __name__ == '__main__': - print ("py lib is at %s" % py.__file__) diff --git a/py/bin/py.test b/py/bin/py.test deleted file mode 100755 --- a/py/bin/py.test +++ /dev/null @@ -1,3 +0,0 @@ -#!/usr/bin/env python -from _findpy import pytest -raise SystemExit(pytest.main()) From pullrequests-noreply at bitbucket.org Sat Jan 21 17:05:48 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Sat, 21 Jan 2012 16:05:48 -0000 Subject: [pypy-commit] [OPEN] Pull request #21 for pypy/pypy: datetime.py fix for issue972, with unit test In-Reply-To: References: Message-ID: <20120121160548.14690.86236@bitbucket03.managed.contegix.com> Pull request #21 has been updated by David Ripton to include new changes. https://bitbucket.org/pypy/pypy/pull-request/21/datetimepy-fix-for-issue972-with-unit-test Title: datetime.py fix for issue972, with unit test Creator: David Ripton Copying utcfromtimestamp from the CPython 3.2 version of datetime.py seems to fix the roundoff errors in time zone calculations. Updated list of changes: 07e667cb75a4 by David Ripton: "Add a test to prove that datetime issue972 (and dupe issue986) are fixed." 235d8b8434a8 by David Ripton: "Copy function utcfromtimestamp from CPython 3.2's datetime.py Fixes issue972, w?" -- This is an issue notification from bitbucket.org. You are receiving this either because you are participating in a pull request, or you are following it. From pullrequests-noreply at bitbucket.org Sat Jan 21 17:16:09 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Sat, 21 Jan 2012 16:16:09 -0000 Subject: [pypy-commit] [OPEN] Pull request #21 for pypy/pypy: datetime.py fix for issue972, with unit test In-Reply-To: References: Message-ID: <20120121161609.29507.44926@bitbucket01.managed.contegix.com> Pull request #21 has been updated by David Ripton to include new changes.
https://bitbucket.org/pypy/pypy/pull-request/21/datetimepy-fix-for-issue972-with-unit-test Title: datetime.py fix for issue972, with unit test Creator: David Ripton Copying utcfromtimestamp from the CPython 3.2 version of datetime.py seems to fix the roundoff errors in time zone calculations. Updated list of changes: eeedeffe6525 by David Ripton: "Clean up os.environ["TZ"] when we're done, in case other tests follow." 07e667cb75a4 by David Ripton: "Add a test to prove that datetime issue972 (and dupe issue986) are fixed." 235d8b8434a8 by David Ripton: "Copy function utcfromtimestamp from CPython 3.2's datetime.py Fixes issue972, w?" From noreply at buildbot.pypy.org Sat Jan 21 17:26:08 2012 From: noreply at buildbot.pypy.org (dripton) Date: Sat, 21 Jan 2012 17:26:08 +0100 (CET) Subject: [pypy-commit] pypy default: Copy function utcfromtimestamp from CPython 3.2's datetime.py Message-ID: <20120121162608.C116C82D45@wyvern.cs.uni-duesseldorf.de> Author: David Ripton Branch: Changeset: r51584:235d8b8434a8 Date: 2012-01-18 20:07 -0500 http://bitbucket.org/pypy/pypy/changeset/235d8b8434a8/ Log: Copy function utcfromtimestamp from CPython 3.2's datetime.py Fixes issue972, which was caused by rounding differences between fromtimestamp and utcfromtimestamp. diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -1440,17 +1440,22 @@ return result fromtimestamp = classmethod(fromtimestamp) + @classmethod + def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." - if 1 - (t % 1.0) < 0.0000005: - t = float(int(t)) + 1 - if t < 0: - t -= 1 + t, frac = divmod(t, 1.0) + us = round(frac * 1e6) + + # If timestamp is less than one microsecond smaller than a + # full second, us can be rounded up to 1000000.
In this case, + # roll over to seconds, otherwise, ValueError is raised + # by the constructor. + if us == 1000000: + t += 1 + us = 0 y, m, d, hh, mm, ss, weekday, jday, dst = _time.gmtime(t) - us = int((t % 1.0) * 1000000) ss = min(ss, 59) # clamp out leap seconds if the platform has them return cls(y, m, d, hh, mm, ss, us) - utcfromtimestamp = classmethod(utcfromtimestamp) # XXX This is supposed to do better than we *can* do by using time.time(), # XXX if the platform supports a more accurate way. The C implementation From noreply at buildbot.pypy.org Sat Jan 21 17:26:10 2012 From: noreply at buildbot.pypy.org (dripton) Date: Sat, 21 Jan 2012 17:26:10 +0100 (CET) Subject: [pypy-commit] pypy default: Add a test to prove that datetime issue972 (and dupe issue986) are fixed. Message-ID: <20120121162610.00FCD82D45@wyvern.cs.uni-duesseldorf.de> Author: David Ripton Branch: Changeset: r51585:07e667cb75a4 Date: 2012-01-21 11:01 -0500 http://bitbucket.org/pypy/pypy/changeset/07e667cb75a4/ Log: Add a test to prove that datetime issue972 (and dupe issue986) are fixed. diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -0,0 +1,17 @@ +"""Additional tests for datetime.""" + +import time +import datetime +import os + +def test_utcfromtimestamp(): + """Confirm that utcfromtimestamp and fromtimestamp give consistent results. 
+ + Based on danchr's test script in https://bugs.pypy.org/issue986 + """ + os.putenv("TZ", "GMT") + for unused in xrange(100): + now = time.time() + delta = (datetime.datetime.utcfromtimestamp(now) - + datetime.datetime.fromtimestamp(now)) + assert delta.days * 86400 + delta.seconds == 0 From pullrequests-noreply at bitbucket.org Sat Jan 21 17:26:10 2012 From: pullrequests-noreply at bitbucket.org (Bitbucket) Date: Sat, 21 Jan 2012 16:26:10 -0000 Subject: [pypy-commit] [ACCEPTED] Pull request #21 for pypy/pypy: datetime.py fix for issue972, with unit test In-Reply-To: References: Message-ID: <20120121162610.23241.5787@bitbucket05.managed.contegix.com> Pull request #21 has been accepted by Maciej Fijalkowski. Changes in dripton/pypy have been pulled into pypy/pypy. https://bitbucket.org/pypy/pypy/pull-request/21/datetimepy-fix-for-issue972-with-unit-test -- This is an issue notification from bitbucket.org. You are receiving this either because you are the participating in a pull request, or you are following it. From noreply at buildbot.pypy.org Sat Jan 21 17:26:11 2012 From: noreply at buildbot.pypy.org (dripton) Date: Sat, 21 Jan 2012 17:26:11 +0100 (CET) Subject: [pypy-commit] pypy default: Clean up os.environ["TZ"] when we're done, in case other tests follow. Message-ID: <20120121162611.3064682D45@wyvern.cs.uni-duesseldorf.de> Author: David Ripton Branch: Changeset: r51586:eeedeffe6525 Date: 2012-01-21 11:13 -0500 http://bitbucket.org/pypy/pypy/changeset/eeedeffe6525/ Log: Clean up os.environ["TZ"] when we're done, in case other tests follow. 
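The try/finally dance in this commit restores an environment variable that may or may not have been set beforehand. The same save/restore pattern can be packaged as a context manager — a hypothetical `temp_env` helper, not part of the PyPy tree:

```python
import os
from contextlib import contextmanager

@contextmanager
def temp_env(name, value):
    # Hypothetical helper: remember the previous value (None means
    # "was not set"), install the new one, and undo the change in the
    # finally block -- the same pattern the test fix applies to
    # os.environ["TZ"].
    prev = os.environ.get(name)
    os.environ[name] = value
    try:
        yield
    finally:
        if prev is None:
            del os.environ[name]
        else:
            os.environ[name] = prev

prev = os.environ.get("TZ")
with temp_env("TZ", "GMT"):
    assert os.environ["TZ"] == "GMT"
assert os.environ.get("TZ") == prev  # restored, even if it was unset
```

Note that on Unix a `time.tzset()` call is typically needed before already-imported time functions pick up a changed TZ; merely writing `os.environ["TZ"]` mainly affects subprocesses.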
diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py --- a/pypy/module/test_lib_pypy/test_datetime.py +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -9,9 +9,16 @@ Based on danchr's test script in https://bugs.pypy.org/issue986 """ - os.putenv("TZ", "GMT") - for unused in xrange(100): - now = time.time() - delta = (datetime.datetime.utcfromtimestamp(now) - - datetime.datetime.fromtimestamp(now)) - assert delta.days * 86400 + delta.seconds == 0 + try: + prev_tz = os.environ.get("TZ") + os.environ["TZ"] = "GMT" + for unused in xrange(100): + now = time.time() + delta = (datetime.datetime.utcfromtimestamp(now) - + datetime.datetime.fromtimestamp(now)) + assert delta.days * 86400 + delta.seconds == 0 + finally: + if prev_tz is None: + del os.environ["TZ"] + else: + os.environ["TZ"] = prev_tz From noreply at buildbot.pypy.org Sat Jan 21 17:47:19 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sat, 21 Jan 2012 17:47:19 +0100 (CET) Subject: [pypy-commit] pypy pytest: resuffle the code in pypy/test_all.py, so a ./pytest.py run will not confuse at collection time Message-ID: <20120121164719.CB4C882D45@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: pytest Changeset: r51588:8237f969655e Date: 2012-01-21 17:23 +0100 http://bitbucket.org/pypy/pypy/changeset/8237f969655e/ Log: resuffle the code in pypy/test_all.py, so a ./pytest.py run will not confuse at collection time diff --git a/pypy/test_all.py b/pypy/test_all.py --- a/pypy/test_all.py +++ b/pypy/test_all.py @@ -11,11 +11,12 @@ """ import sys, os -if len(sys.argv) == 1 and os.path.dirname(sys.argv[0]) in '.': - print >> sys.stderr, __doc__ - sys.exit(2) if __name__ == '__main__': + if len(sys.argv) == 1 and os.path.dirname(sys.argv[0]) in '.': + print >> sys.stderr, __doc__ + sys.exit(2) + import tool.autopath import pytest import pytest_cov From noreply at buildbot.pypy.org Sat Jan 21 17:47:21 2012 From: noreply at buildbot.pypy.org 
(RonnyPfannschmidt) Date: Sat, 21 Jan 2012 17:47:21 +0100 (CET) Subject: [pypy-commit] pypy pytest: switch pytest.ini over new option name Message-ID: <20120121164721.0D38E82D45@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: pytest Changeset: r51589:e17af4cf9825 Date: 2012-01-21 17:33 +0100 http://bitbucket.org/pypy/pypy/changeset/e17af4cf9825/ Log: switch pytest.ini over new option name diff --git a/pypy/pytest.ini b/pypy/pytest.ini --- a/pypy/pytest.ini +++ b/pypy/pytest.ini @@ -1,2 +1,2 @@ [pytest] -addopts = --assertmode=old -rf +addopts = --assert=plain -rf From noreply at buildbot.pypy.org Sat Jan 21 17:47:22 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sat, 21 Jan 2012 17:47:22 +0100 (CET) Subject: [pypy-commit] pypy pytest: add 2 missing __init__.py files so those test files can be discovered correctly Message-ID: <20120121164722.3FE2082D45@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: pytest Changeset: r51590:5a86baac8343 Date: 2012-01-21 17:39 +0100 http://bitbucket.org/pypy/pypy/changeset/5a86baac8343/ Log: add 2 missing __init__.py files so those test files can be discovered correctly diff --git a/pypy/module/pyexpat/test/__init__.py b/pypy/module/pyexpat/test/__init__.py new file mode 100644 diff --git a/pypy/tool/jitlogparser/test/__init__.py b/pypy/tool/jitlogparser/test/__init__.py new file mode 100644 From noreply at buildbot.pypy.org Sat Jan 21 17:56:34 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 21 Jan 2012 17:56:34 +0100 (CET) Subject: [pypy-commit] pypy stm: Adding a libitm wrapper. Not used so far, and crashes anyway Message-ID: <20120121165634.031A882D45@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51591:bee9a6779dd0 Date: 2012-01-21 17:56 +0100 http://bitbucket.org/pypy/pypy/changeset/bee9a6779dd0/ Log: Adding a libitm wrapper. Not used so far, and crashes anyway because libitm doesn't expect _ITM_RU?() calls when not in a transaction at all. 
diff --git a/pypy/translator/stm/using_libitm/stm2itm.h b/pypy/translator/stm/using_libitm/stm2itm.h new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/using_libitm/stm2itm.h @@ -0,0 +1,74 @@ +#include +#include +#include + + +static void stm_descriptor_init(void) { /* nothing */ } +static void stm_descriptor_done(void) { /* nothing */ } + +static void* stm_perform_transaction(void*(*f)(void*), void* arg) +{ + void *result; + int _i = _ITM_beginTransaction(pr_instrumentedCode); + assert(_i & a_runInstrumentedCode); + /**/ + result = f(arg); + /**/ + _ITM_commitTransaction(); + return result; +} + +#define STM_CCHARP1(arg) void +#define STM_EXPLAIN1(info) /* nothing */ + +static void stm_try_inevitable(STM_CCHARP1(why)) +{ + _ITM_changeTransactionMode(modeSerialIrrevocable); +} + +static void stm_abort_and_retry(void) +{ + abort(); /* XXX */ +} + +static long stm_debug_get_state(void) +{ + return _ITM_inTransaction(); +} + + +#if PYPY_LONG_BIT == 32 +# define stm_read_word(addr) _ITM_RU4(addr) +# define stm_write_word(addr, val) _ITM_WU4(addr, val) +#else +# define stm_read_word(addr) _ITM_RU8(addr) +# define stm_write_word(addr, val) _ITM_WU8(addr, val) +#endif + +// XXX little-endian only! 
+/* this macro is used if 'base' is a word-aligned pointer and 'offset' + is a compile-time constant */ +#define stm_fx_read_partial(base, offset) \ + (stm_read_word( \ + (long*)(((char*)(base)) + ((offset) & ~(sizeof(void*)-1)))) \ + >> (8 * ((offset) & (sizeof(void*)-1)))) + +#define stm_read_partial_1(addr) _ITM_RU1(addr) +#define stm_read_partial_2(addr) _ITM_RU2(addr) +#define stm_write_partial_1(addr, nval) _ITM_WU1(addr, nval) +#define stm_write_partial_2(addr, nval) _ITM_WU2(addr, nval) +#if PYPY_LONG_BIT == 64 +#define stm_read_partial_4(addr) _ITM_RU4(addr) +#define stm_write_partial_4(addr, nval) _ITM_WU4(addr, nval) +#endif + +#define stm_read_double(addr) _ITM_RD(addr) +#define stm_write_double(addr, val) _ITM_WD(addr, val) + +#define stm_read_float(addr) _ITM_RF(addr) +#define stm_write_float(addr, val) _ITM_WF(addr, val) + +#if PYPY_LONG_BIT == 32 +#define stm_read_doubleword(addr) _ITM_RU8(addr) +#define stm_write_doubleword(addr, val) _ITM_WU8(addr, val) +#endif From noreply at buildbot.pypy.org Sat Jan 21 18:02:28 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 21 Jan 2012 18:02:28 +0100 (CET) Subject: [pypy-commit] pypy default: Implement numpypy.ndarray.flatten Message-ID: <20120121170228.C277C82D45@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51592:6eb004a15a12 Date: 2012-01-21 11:02 -0600 http://bitbucket.org/pypy/pypy/changeset/6eb004a15a12/ Log: Implement numpypy.ndarray.flatten diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -430,9 +430,15 @@ def descr_copy(self, space): return self.copy(space) + def descr_flatten(self, space): + return self.flatten(space) + def copy(self, space): return self.get_concrete().copy(space) + def flatten(self, space): + return self.get_concrete().flatten(space) + def descr_len(self, space): if len(self.shape): return 
space.wrap(self.shape[0]) @@ -769,6 +775,11 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def flatten(self, space): + array = W_NDimArray(self.size, [self.size], self.dtype) + array.setitem(0, self.value) + return array + def fill(self, space, w_value): self.value = self.dtype.coerce(space, w_value) @@ -1150,6 +1161,15 @@ array.setslice(space, self) return array + def flatten(self, space): + array = W_NDimArray(self.size, [self.size], self.dtype, self.order) + if self.supports_fast_slicing(): + array._fast_setslice(space, self) + else: + arr = SliceArray(array.shape, array.dtype, array, self) + array._sliceloop(arr) + return array + def fill(self, space, w_value): self.setslice(space, scalar_w(space, self.dtype, w_value)) @@ -1379,6 +1399,7 @@ fill = interp2app(BaseArray.descr_fill), copy = interp2app(BaseArray.descr_copy), + flatten = interp2app(BaseArray.descr_flatten), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), ) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1039,6 +1039,20 @@ a = array([5.0]) assert a.std() == 0.0 + def test_flatten(self): + from _numpypy import array + + a = array([[1, 2, 3], [4, 5, 6]]) + assert (a.flatten() == [1, 2, 3, 4, 5, 6]).all() + a = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert (a.flatten() == [1, 2, 3, 4, 5, 6, 7, 8]).all() + a = array([1, 2, 3, 4, 5, 6, 7, 8]) + assert (a[::2].flatten() == [1, 3, 5, 7]).all() + a = array([1, 2, 3]) + assert ((a + a).flatten() == [2, 4, 6]).all() + a = array(2) + assert (a.flatten() == [2]).all() + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): From noreply at buildbot.pypy.org Sat Jan 21 18:13:26 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 21 Jan 2012 18:13:26 +0100 (CET) Subject: [pypy-commit] pypy default: A failing test 
for flatten Message-ID: <20120121171326.C88B782D45@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51593:637b4cfa4f65 Date: 2012-01-21 11:13 -0600 http://bitbucket.org/pypy/pypy/changeset/637b4cfa4f65/ Log: A failing test for flatten diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1052,6 +1052,8 @@ assert ((a + a).flatten() == [2, 4, 6]).all() a = array(2) assert (a.flatten() == [2]).all() + a = array([[1, 2], [3, 4]]) + assert (a.T.flatten() == [1, 3, 2, 4]).all() class AppTestMultiDim(BaseNumpyAppTest): From noreply at buildbot.pypy.org Sat Jan 21 18:28:42 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 21 Jan 2012 18:28:42 +0100 (CET) Subject: [pypy-commit] pypy default: Special-case for "complex ** 2" too. Message-ID: <20120121172842.A6DEE82D45@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51594:76e23aa1ef63 Date: 2012-01-21 18:28 +0100 http://bitbucket.org/pypy/pypy/changeset/76e23aa1ef63/ Log: Special-case for "complex ** 2" too. 
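The fast path added in this changeset turns `z ** 2` into a single complex multiplication instead of going through the general power code. In plain Python, a standalone sketch of the idea (not PyPy's actual `W_ComplexObject.pow_small_int`):

```python
def pow_small_int(z, n):
    # Raise a complex number to a small integer power.
    # n == 2 becomes one complex multiply; other non-negative n use
    # binary exponentiation; negative n take the reciprocal of the
    # corresponding positive power.
    if n >= 0:
        if n == 2:
            return z * z          # fast path: a single multiplication
        result = 1 + 0j
        base = z
        while n:                  # repeated squaring
            if n & 1:
                result *= base
            base *= base
            n >>= 1
        return result
    return (1 + 0j) / pow_small_int(z, -n)

assert pow_small_int(2j, 2) == (-4 + 0j)
```

Squaring directly avoids the floating-point error of computing the power via `exp`/`log`, which is why the test can assert `a ** 2 == a * a` exactly.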
diff --git a/pypy/objspace/std/complexobject.py b/pypy/objspace/std/complexobject.py --- a/pypy/objspace/std/complexobject.py +++ b/pypy/objspace/std/complexobject.py @@ -8,6 +8,7 @@ from pypy.rlib.rbigint import rbigint from pypy.rlib.rfloat import ( formatd, DTSF_STR_PRECISION, isinf, isnan, copysign) +from pypy.rlib import jit import math @@ -129,10 +130,10 @@ ir = len * math.sin(phase) return W_ComplexObject(rr, ir) - def pow_int(self, n): - if n > 100 or n < -100: - return self.pow(W_ComplexObject(1.0 * n, 0.0)) - elif n > 0: + def pow_small_int(self, n): + if n >= 0: + if jit.isconstant(n) and n == 2: + return self.mul(self) return self.pow_positive_int(n) else: return w_one.div(self.pow_positive_int(-n)) @@ -217,10 +218,10 @@ def pow__Complex_Complex_ANY(space, w_complex, w_exponent, thirdArg): if not space.is_w(thirdArg, space.w_None): raise OperationError(space.w_ValueError, space.wrap('complex modulo')) - int_exponent = int(w_exponent.realval) try: - if w_exponent.imagval == 0.0 and w_exponent.realval == int_exponent: - w_p = w_complex.pow_int(int_exponent) + r = w_exponent.realval + if w_exponent.imagval == 0.0 and -100.0 <= r <= 100.0 and r == int(r): + w_p = w_complex.pow_small_int(int(r)) else: w_p = w_complex.pow(w_exponent) except ZeroDivisionError: diff --git a/pypy/objspace/std/test/test_complexobject.py b/pypy/objspace/std/test/test_complexobject.py --- a/pypy/objspace/std/test/test_complexobject.py +++ b/pypy/objspace/std/test/test_complexobject.py @@ -71,7 +71,7 @@ assert _powu((0.0,1.0),2) == (-1.0,0.0) def _powi((r1, i1), n): - w_res = W_ComplexObject(r1, i1).pow_int(n) + w_res = W_ComplexObject(r1, i1).pow_small_int(n) return w_res.realval, w_res.imagval assert _powi((0.0,2.0),0) == (1.0,0.0) assert _powi((0.0,0.0),2) == (0.0,0.0) @@ -213,6 +213,7 @@ assert a ** 105 == a ** 105 assert a ** -105 == a ** -105 assert a ** -30 == a ** -30 + assert a ** 2 == a * a assert 0.0j ** 0 == 1 From noreply at buildbot.pypy.org Sat Jan 21 18:33:33 2012 
From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 21 Jan 2012 18:33:33 +0100 (CET) Subject: [pypy-commit] pypy default: Fix for flatten() with certain types of arrays. Message-ID: <20120121173333.5780382D45@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51595:fe8a844cd0d0 Date: 2012-01-21 11:30 -0600 http://bitbucket.org/pypy/pypy/changeset/fe8a844cd0d0/ Log: Fix for flatten() with certain types of arrays. diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -925,14 +925,15 @@ self.left.create_sig(), self.right.create_sig()) class SliceArray(Call2): - def __init__(self, shape, dtype, left, right): + def __init__(self, shape, dtype, left, right, no_broadcast=False): + self.no_broadcast = no_broadcast Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) def create_sig(self): lsig = self.left.create_sig() rsig = self.right.create_sig() - if self.shape != self.right.shape: + if not self.no_broadcast and self.shape != self.right.shape: return signature.SliceloopBroadcastSignature(self.ufunc, self.name, self.calc_dtype, @@ -1166,7 +1167,7 @@ if self.supports_fast_slicing(): array._fast_setslice(space, self) else: - arr = SliceArray(array.shape, array.dtype, array, self) + arr = SliceArray(array.shape, array.dtype, array, self, no_broadcast=True) array._sliceloop(arr) return array From noreply at buildbot.pypy.org Sat Jan 21 18:33:34 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 21 Jan 2012 18:33:34 +0100 (CET) Subject: [pypy-commit] pypy default: merged upstream Message-ID: <20120121173334.A31E182D45@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r51596:1ebae6842fb2 Date: 2012-01-21 11:33 -0600 http://bitbucket.org/pypy/pypy/changeset/1ebae6842fb2/ Log: merged upstream diff --git a/pypy/objspace/std/complexobject.py 
b/pypy/objspace/std/complexobject.py --- a/pypy/objspace/std/complexobject.py +++ b/pypy/objspace/std/complexobject.py @@ -8,6 +8,7 @@ from pypy.rlib.rbigint import rbigint from pypy.rlib.rfloat import ( formatd, DTSF_STR_PRECISION, isinf, isnan, copysign) +from pypy.rlib import jit import math @@ -129,10 +130,10 @@ ir = len * math.sin(phase) return W_ComplexObject(rr, ir) - def pow_int(self, n): - if n > 100 or n < -100: - return self.pow(W_ComplexObject(1.0 * n, 0.0)) - elif n > 0: + def pow_small_int(self, n): + if n >= 0: + if jit.isconstant(n) and n == 2: + return self.mul(self) return self.pow_positive_int(n) else: return w_one.div(self.pow_positive_int(-n)) @@ -217,10 +218,10 @@ def pow__Complex_Complex_ANY(space, w_complex, w_exponent, thirdArg): if not space.is_w(thirdArg, space.w_None): raise OperationError(space.w_ValueError, space.wrap('complex modulo')) - int_exponent = int(w_exponent.realval) try: - if w_exponent.imagval == 0.0 and w_exponent.realval == int_exponent: - w_p = w_complex.pow_int(int_exponent) + r = w_exponent.realval + if w_exponent.imagval == 0.0 and -100.0 <= r <= 100.0 and r == int(r): + w_p = w_complex.pow_small_int(int(r)) else: w_p = w_complex.pow(w_exponent) except ZeroDivisionError: diff --git a/pypy/objspace/std/test/test_complexobject.py b/pypy/objspace/std/test/test_complexobject.py --- a/pypy/objspace/std/test/test_complexobject.py +++ b/pypy/objspace/std/test/test_complexobject.py @@ -71,7 +71,7 @@ assert _powu((0.0,1.0),2) == (-1.0,0.0) def _powi((r1, i1), n): - w_res = W_ComplexObject(r1, i1).pow_int(n) + w_res = W_ComplexObject(r1, i1).pow_small_int(n) return w_res.realval, w_res.imagval assert _powi((0.0,2.0),0) == (1.0,0.0) assert _powi((0.0,0.0),2) == (0.0,0.0) @@ -213,6 +213,7 @@ assert a ** 105 == a ** 105 assert a ** -105 == a ** -105 assert a ** -30 == a ** -30 + assert a ** 2 == a * a assert 0.0j ** 0 == 1 From noreply at buildbot.pypy.org Sat Jan 21 18:35:20 2012 From: noreply at buildbot.pypy.org (fijal) Date: 
Sat, 21 Jan 2012 18:35:20 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: add true divide Message-ID: <20120121173520.E11F082D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51597:a045db574996 Date: 2012-01-21 18:31 +0200 http://bitbucket.org/pypy/pypy/changeset/a045db574996/ Log: add true divide diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -68,6 +68,7 @@ ("copysign", "copysign"), ("cos", "cos"), ("divide", "divide"), + ("true_divide", "true_divide"), ("equal", "equal"), ("exp", "exp"), ("fabs", "fabs"), diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -442,6 +442,7 @@ ("bitwise_or", "bitwise_or", 2, {"identity": 0, 'int_only': True}), ("divide", "div", 2, {"promote_bools": True}), + ("true_divide", "div", 2, {"promote_to_float": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": True}), diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -402,3 +402,6 @@ assert count_reduce_items(a, 1) == 3 assert count_reduce_items(a, (1, 2)) == 3 * 4 + def test_true_divide(self): + from _numpypy import arange, array, true_divide + assert (true_divide(arange(3), array([2, 2, 2])) == array([0, 0.5, 1])).all() From noreply at buildbot.pypy.org Sat Jan 21 18:35:22 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 18:35:22 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: implement keepdims=True Message-ID: <20120121173522.282CF82D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: 
r51598:0fcad0cba011 Date: 2012-01-21 19:34 +0200 http://bitbucket.org/pypy/pypy/changeset/0fcad0cba011/ Log: implement keepdims=True diff --git a/pypy/module/micronumpy/interp_iter.py b/pypy/module/micronumpy/interp_iter.py --- a/pypy/module/micronumpy/interp_iter.py +++ b/pypy/module/micronumpy/interp_iter.py @@ -153,8 +153,13 @@ class AxisIterator(BaseIterator): def __init__(self, start, dim, shape, strides, backstrides): self.res_shape = shape[:] - self.strides = strides[:dim] + [0] + strides[dim:] - self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] + if len(shape) == len(strides): + # keepdims = True + self.strides = strides[:dim] + [0] + strides[dim + 1:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim + 1:] + else: + self.strides = strides[:dim] + [0] + strides[dim:] + self.backstrides = backstrides[:dim] + [0] + backstrides[dim:] self.first_line = True self.indices = [0] * len(shape) self._done = False diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1077,6 +1077,8 @@ def array(space, w_item_or_iterable, w_dtype=None, w_order=None, subok=True, copy=False, w_maskna=None, ownmaskna=False): # find scalar + if w_maskna is None: + w_maskna = space.w_None if (not subok or copy or not space.is_w(w_maskna, space.w_None) or ownmaskna): raise OperationError(space.w_NotImplementedError, space.wrap("Unsupported args")) @@ -1088,7 +1090,7 @@ space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype) ) return scalar_w(space, dtype, w_item_or_iterable) - if space.is_w(w_order, space.w_None): + if space.is_w(w_order, space.w_None) or w_order is None: order = 'C' else: order = space.str_w(w_order) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -46,7 
+46,8 @@ return self.identity def descr_call(self, space, __args__): - if __args__.keywords or len(__args__.arguments_w) < self.argcount: + # XXX do something with strange keywords + if len(__args__.arguments_w) < self.argcount: raise OperationError(space.w_ValueError, space.wrap("invalid number of arguments") ) @@ -60,7 +61,7 @@ @unwrap_spec(skipna=bool, keepdims=bool) def descr_reduce(self, space, w_obj, w_axis=None, w_dtype=None, - skipna=False, keepdims=True, w_out=None): + skipna=False, keepdims=False, w_out=None): """reduce(...) reduce(a, axis=0) @@ -120,9 +121,9 @@ axis = -1 else: axis = space.int_w(w_axis) - return self.reduce(space, w_obj, False, False, axis) + return self.reduce(space, w_obj, False, False, axis, keepdims) - def reduce(self, space, w_obj, multidim, promote_to_largest, dim): + def reduce(self, space, w_obj, multidim, promote_to_largest, dim, keepdims): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ Scalar if self.argcount != 2: @@ -148,7 +149,7 @@ raise operationerrfmt(space.w_ValueError, "zero-size array to " "%s.reduce without identity", self.name) if shapelen > 1 and dim >= 0: - res = self.do_axis_reduce(obj, dtype, dim) + res = self.do_axis_reduce(obj, dtype, dim, keepdims) return space.wrap(res) scalarsig = ScalarSignature(dtype) sig = find_sig(ReduceSignature(self.func, self.name, dtype, @@ -162,11 +163,14 @@ value = self.identity.convert_to(dtype) return self.reduce_loop(shapelen, sig, frame, value, obj, dtype) - def do_axis_reduce(self, obj, dtype, dim): + def do_axis_reduce(self, obj, dtype, dim, keepdims): from pypy.module.micronumpy.interp_numarray import AxisReduce,\ W_NDimArray - - shape = obj.shape[0:dim] + obj.shape[dim + 1:len(obj.shape)] + + if keepdims: + shape = obj.shape[:dim] + [1] + obj.shape[dim + 1:] + else: + shape = obj.shape[:dim] + obj.shape[dim + 1:] size = 1 for s in shape: size *= s diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- 
a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -1,5 +1,5 @@ from pypy.rlib import jit - +from pypy.interpreter.error import OperationError @jit.look_inside_iff(lambda shape, start, strides, backstrides, chunks: jit.isconstant(len(chunks)) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -344,7 +344,7 @@ from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) - raises(ValueError, add.reduce, 1) + raises(TypeError, add.reduce, 1) def test_reduce_1d(self): from _numpypy import add, maximum @@ -360,6 +360,14 @@ assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_reduce_keepdims(self): + from _numpypy import add, arange + a = arange(12).reshape(3, 4) + b = add.reduce(a, 0, keepdims=True) + assert b.shape == (1, 4) + assert (add.reduce(a, 0, keepdims=True) == [12, 15, 18, 21]).all() + + def test_bitwise(self): from _numpypy import bitwise_and, bitwise_or, arange, array a = arange(6).reshape(2, 3) From noreply at buildbot.pypy.org Sat Jan 21 18:40:15 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 21 Jan 2012 18:40:15 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: note that .T is forcing arrays Message-ID: <20120121174015.0872082D45@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4049:dcdfbead3f5f Date: 2012-01-21 11:40 -0600 http://bitbucket.org/pypy/extradoc/changeset/dcdfbead3f5f/ Log: note that .T is forcing arrays diff --git a/planning/micronumpy.txt b/planning/micronumpy.txt --- a/planning/micronumpy.txt +++ b/planning/micronumpy.txt @@ -19,6 +19,8 @@ - expose ndarray.ctypes +- Make .T not force a lazy array. 
+ - subclassing ndarray (instantiating subcalsses curently returns the wrong type) * keep subclass type when slicing, __array_finalize__ From noreply at buildbot.pypy.org Sat Jan 21 18:51:03 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 18:51:03 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: a bit of shuffling until all the tests pass Message-ID: <20120121175103.660F682D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51599:890ac4e99e93 Date: 2012-01-21 19:50 +0200 http://bitbucket.org/pypy/pypy/changeset/890ac4e99e93/ Log: a bit of shuffling until all the tests pass diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -142,9 +142,11 @@ def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): def impl(self, space, w_axis=None): if space.is_w(w_axis, space.w_None): - w_axis = space.wrap(-1) + axis = -1 + else: + axis = space.int_w(w_axis) return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, - self, True, promote_to_largest, w_axis) + self, True, promote_to_largest, axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -1,6 +1,6 @@ from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.error import OperationError, operationerrfmt -from pypy.interpreter.gateway import interp2app, unwrap_spec +from pypy.interpreter.gateway import interp2app, unwrap_spec, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty, interp_attrproperty from pypy.module.micronumpy import interp_boxes, interp_dtype from pypy.module.micronumpy.signature import 
ReduceSignature,\ @@ -60,7 +60,7 @@ return self.call(space, __args__.arguments_w) @unwrap_spec(skipna=bool, keepdims=bool) - def descr_reduce(self, space, w_obj, w_axis=None, w_dtype=None, + def descr_reduce(self, space, w_obj, w_axis=NoneNotWrapped, w_dtype=None, skipna=False, keepdims=False, w_out=None): """reduce(...) reduce(a, axis=0) @@ -117,13 +117,16 @@ if not space.is_w(w_out, space.w_None): raise OperationError(space.w_NotImplementedError, space.wrap( "out not supported")) - if space.is_w(w_axis, space.w_None): + if w_axis is None: + axis = 0 + elif space.is_w(w_axis, space.w_None): axis = -1 else: axis = space.int_w(w_axis) return self.reduce(space, w_obj, False, False, axis, keepdims) - def reduce(self, space, w_obj, multidim, promote_to_largest, dim, keepdims): + def reduce(self, space, w_obj, multidim, promote_to_largest, dim, + keepdims=False): from pypy.module.micronumpy.interp_numarray import convert_to_array, \ Scalar if self.argcount != 2: From noreply at buildbot.pypy.org Sat Jan 21 18:51:47 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 18:51:47 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: fix one more test Message-ID: <20120121175147.2E7B082D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51600:92d15f3d3e45 Date: 2012-01-21 19:51 +0200 http://bitbucket.org/pypy/pypy/changeset/92d15f3d3e45/ Log: fix one more test diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -344,7 +344,7 @@ from _numpypy import sin, add raises(ValueError, sin.reduce, [1, 2, 3]) - raises(TypeError, add.reduce, 1) + raises((ValueError, TypeError), add.reduce, 1) def test_reduce_1d(self): from _numpypy import add, maximum From noreply at buildbot.pypy.org Sat Jan 21 18:56:56 2012 From: noreply at buildbot.pypy.org (fijal) Date: 
Sat, 21 Jan 2012 18:56:56 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: translation fix for test zjit Message-ID: <20120121175656.CBFDF82D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51601:220d6c657b35 Date: 2012-01-21 19:55 +0200 http://bitbucket.org/pypy/pypy/changeset/220d6c657b35/ Log: translation fix for test zjit diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -39,6 +39,7 @@ w_TypeError = None w_IndexError = None w_OverflowError = None + w_NotImplementedError = None w_None = None w_bool = "bool" From noreply at buildbot.pypy.org Sat Jan 21 19:00:14 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sat, 21 Jan 2012 19:00:14 +0100 (CET) Subject: [pypy-commit] pypy pytest: teach testsupport tests about reports now coming in unconditionally Message-ID: <20120121180014.9EB0A82D45@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: pytest Changeset: r51602:ab8337ed12e8 Date: 2012-01-21 18:59 +0100 http://bitbucket.org/pypy/pypy/changeset/ab8337ed12e8/ Log: teach testsupport tests about reports now coming in unconditionally diff --git a/pypy/tool/pytest/test/test_pytestsupport.py b/pypy/tool/pytest/test/test_pytestsupport.py --- a/pypy/tool/pytest/test/test_pytestsupport.py +++ b/pypy/tool/pytest/test/test_pytestsupport.py @@ -165,7 +165,10 @@ def test_one(self): exec 'blow' """) - ev, = sorter.getreports("pytest_runtest_logreport") + reports = sorter.getreports("pytest_runtest_logreport") + setup, ev, teardown = reports assert ev.failed + assert setup.passed + assert teardown.passed assert 'NameError' in ev.longrepr.reprcrash.message assert 'blow' in ev.longrepr.reprcrash.message From noreply at buildbot.pypy.org Sat Jan 21 19:28:30 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 19:28:30 +0100 (CET) Subject: [pypy-commit] 
pypy numpy-back-to-applevel: implement copy kwarg Message-ID: <20120121182830.5967682D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51603:cc80fb4d3ab9 Date: 2012-01-21 20:23 +0200 http://bitbucket.org/pypy/pypy/changeset/cc80fb4d3ab9/ Log: implement copy kwarg diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1077,11 +1077,11 @@ @unwrap_spec(subok=bool, copy=bool, ownmaskna=bool) def array(space, w_item_or_iterable, w_dtype=None, w_order=None, - subok=True, copy=False, w_maskna=None, ownmaskna=False): + subok=True, copy=True, w_maskna=None, ownmaskna=False): # find scalar if w_maskna is None: w_maskna = space.w_None - if (not subok or copy or not space.is_w(w_maskna, space.w_None) or + if (not subok or not space.is_w(w_maskna, space.w_None) or ownmaskna): raise OperationError(space.w_NotImplementedError, space.wrap("Unsupported args")) if not space.issequence_w(w_item_or_iterable): @@ -1099,6 +1099,14 @@ if order != 'C': # or order != 'F': raise operationerrfmt(space.w_ValueError, "Unknown order: %s", order) + if isinstance(w_item_or_iterable, BaseArray): + if (not space.is_w(w_dtype, space.w_None) and + w_item_or_iterable.find_dtype() is not w_dtype): + raise OperationError(space.w_NotImplementedError, space.wrap( + "copying over different dtypes unsupported")) + if copy: + return w_item_or_iterable.copy(space) + return w_item_or_iterable shape, elems_w = find_shape_and_elems(space, w_item_or_iterable) # they come back in C order size = len(elems_w) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1358,7 +1358,13 @@ a[a & 1 == 1] = array([8, 9, 10]) assert (a == [[0, 8], [2, 9], [4, 10]]).all() - + def 
test_copy_kwarg(self): + from _numpypy import array + x = array([1, 2, 3]) + assert (array(x) == x).all() + assert array(x) is not x + assert array(x, copy=False) is x + assert array(x, copy=True) is not x class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): From noreply at buildbot.pypy.org Sat Jan 21 20:04:52 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 21 Jan 2012 20:04:52 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: merged default Message-ID: <20120121190452.B0BB782D45@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-back-to-applevel Changeset: r51604:f49b248ef393 Date: 2012-01-21 11:57 -0600 http://bitbucket.org/pypy/pypy/changeset/f49b248ef393/ Log: merged default diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -1440,17 +1440,22 @@ return result fromtimestamp = classmethod(fromtimestamp) + @classmethod def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." - if 1 - (t % 1.0) < 0.0000005: - t = float(int(t)) + 1 - if t < 0: - t -= 1 + t, frac = divmod(t, 1.0) + us = round(frac * 1e6) + + # If timestamp is less than one microsecond smaller than a + # full second, us can be rounded up to 1000000. In this case, + # roll over to seconds, otherwise, ValueError is raised + # by the constructor. + if us == 1000000: + t += 1 + us = 0 y, m, d, hh, mm, ss, weekday, jday, dst = _time.gmtime(t) - us = int((t % 1.0) * 1000000) ss = min(ss, 59) # clamp out leap seconds if the platform has them return cls(y, m, d, hh, mm, ss, us) - utcfromtimestamp = classmethod(utcfromtimestamp) # XXX This is supposed to do better than we *can* do by using time.time(), # XXX if the platform supports a more accurate way. 
The C implementation diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -264,9 +264,15 @@ def descr_copy(self, space): return self.copy(space) + def descr_flatten(self, space): + return self.flatten(space) + def copy(self, space): return self.get_concrete().copy(space) + def flatten(self, space): + return self.get_concrete().flatten(space) + def descr_len(self, space): if len(self.shape): return space.wrap(self.shape[0]) @@ -599,6 +605,11 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def flatten(self, space): + array = W_NDimArray(self.size, [self.size], self.dtype) + array.setitem(0, self.value) + return array + def fill(self, space, w_value): self.value = self.dtype.coerce(space, w_value) @@ -744,14 +755,15 @@ self.left.create_sig(), self.right.create_sig()) class SliceArray(Call2): - def __init__(self, shape, dtype, left, right): + def __init__(self, shape, dtype, left, right, no_broadcast=False): + self.no_broadcast = no_broadcast Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) def create_sig(self): lsig = self.left.create_sig() rsig = self.right.create_sig() - if self.shape != self.right.shape: + if not self.no_broadcast and self.shape != self.right.shape: return signature.SliceloopBroadcastSignature(self.ufunc, self.name, self.calc_dtype, @@ -980,6 +992,15 @@ array.setslice(space, self) return array + def flatten(self, space): + array = W_NDimArray(self.size, [self.size], self.dtype, self.order) + if self.supports_fast_slicing(): + array._fast_setslice(space, self) + else: + arr = SliceArray(array.shape, array.dtype, array, self, no_broadcast=True) + array._sliceloop(arr) + return array + def fill(self, space, w_value): self.setslice(space, scalar_w(space, self.dtype, w_value)) @@ -1233,6 +1254,7 @@ fill = interp2app(BaseArray.descr_fill), copy = 
interp2app(BaseArray.descr_copy), + flatten = interp2app(BaseArray.descr_flatten), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), ) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1039,6 +1039,22 @@ a = array([5.0]) assert a.std() == 0.0 + def test_flatten(self): + from _numpypy import array + + a = array([[1, 2, 3], [4, 5, 6]]) + assert (a.flatten() == [1, 2, 3, 4, 5, 6]).all() + a = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert (a.flatten() == [1, 2, 3, 4, 5, 6, 7, 8]).all() + a = array([1, 2, 3, 4, 5, 6, 7, 8]) + assert (a[::2].flatten() == [1, 3, 5, 7]).all() + a = array([1, 2, 3]) + assert ((a + a).flatten() == [2, 4, 6]).all() + a = array(2) + assert (a.flatten() == [2]).all() + a = array([[1, 2], [3, 4]]) + assert (a.T.flatten() == [1, 3, 2, 4]).all() + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -0,0 +1,24 @@ +"""Additional tests for datetime.""" + +import time +import datetime +import os + +def test_utcfromtimestamp(): + """Confirm that utcfromtimestamp and fromtimestamp give consistent results. 
+ + Based on danchr's test script in https://bugs.pypy.org/issue986 + """ + try: + prev_tz = os.environ.get("TZ") + os.environ["TZ"] = "GMT" + for unused in xrange(100): + now = time.time() + delta = (datetime.datetime.utcfromtimestamp(now) - + datetime.datetime.fromtimestamp(now)) + assert delta.days * 86400 + delta.seconds == 0 + finally: + if prev_tz is None: + del os.environ["TZ"] + else: + os.environ["TZ"] = prev_tz diff --git a/pypy/objspace/std/complexobject.py b/pypy/objspace/std/complexobject.py --- a/pypy/objspace/std/complexobject.py +++ b/pypy/objspace/std/complexobject.py @@ -8,6 +8,7 @@ from pypy.rlib.rbigint import rbigint from pypy.rlib.rfloat import ( formatd, DTSF_STR_PRECISION, isinf, isnan, copysign) +from pypy.rlib import jit import math @@ -129,10 +130,10 @@ ir = len * math.sin(phase) return W_ComplexObject(rr, ir) - def pow_int(self, n): - if n > 100 or n < -100: - return self.pow(W_ComplexObject(1.0 * n, 0.0)) - elif n > 0: + def pow_small_int(self, n): + if n >= 0: + if jit.isconstant(n) and n == 2: + return self.mul(self) return self.pow_positive_int(n) else: return w_one.div(self.pow_positive_int(-n)) @@ -217,10 +218,10 @@ def pow__Complex_Complex_ANY(space, w_complex, w_exponent, thirdArg): if not space.is_w(thirdArg, space.w_None): raise OperationError(space.w_ValueError, space.wrap('complex modulo')) - int_exponent = int(w_exponent.realval) try: - if w_exponent.imagval == 0.0 and w_exponent.realval == int_exponent: - w_p = w_complex.pow_int(int_exponent) + r = w_exponent.realval + if w_exponent.imagval == 0.0 and -100.0 <= r <= 100.0 and r == int(r): + w_p = w_complex.pow_small_int(int(r)) else: w_p = w_complex.pow(w_exponent) except ZeroDivisionError: diff --git a/pypy/objspace/std/test/test_complexobject.py b/pypy/objspace/std/test/test_complexobject.py --- a/pypy/objspace/std/test/test_complexobject.py +++ b/pypy/objspace/std/test/test_complexobject.py @@ -71,7 +71,7 @@ assert _powu((0.0,1.0),2) == (-1.0,0.0) def _powi((r1, i1), n): 
- w_res = W_ComplexObject(r1, i1).pow_int(n) + w_res = W_ComplexObject(r1, i1).pow_small_int(n) return w_res.realval, w_res.imagval assert _powi((0.0,2.0),0) == (1.0,0.0) assert _powi((0.0,0.0),2) == (0.0,0.0) @@ -213,6 +213,7 @@ assert a ** 105 == a ** 105 assert a ** -105 == a ** -105 assert a ** -30 == a ** -30 + assert a ** 2 == a * a assert 0.0j ** 0 == 1 From noreply at buildbot.pypy.org Sat Jan 21 20:04:53 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 21 Jan 2012 20:04:53 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: merged upstream Message-ID: <20120121190453.E9A0F82D45@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-back-to-applevel Changeset: r51605:8b5bfccf07e6 Date: 2012-01-21 12:56 -0600 http://bitbucket.org/pypy/pypy/changeset/8b5bfccf07e6/ Log: merged upstream diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1098,11 +1098,11 @@ @unwrap_spec(subok=bool, copy=bool, ownmaskna=bool) def array(space, w_item_or_iterable, w_dtype=None, w_order=None, - subok=True, copy=False, w_maskna=None, ownmaskna=False): + subok=True, copy=True, w_maskna=None, ownmaskna=False): # find scalar if w_maskna is None: w_maskna = space.w_None - if (not subok or copy or not space.is_w(w_maskna, space.w_None) or + if (not subok or not space.is_w(w_maskna, space.w_None) or ownmaskna): raise OperationError(space.w_NotImplementedError, space.wrap("Unsupported args")) if not space.issequence_w(w_item_or_iterable): @@ -1120,6 +1120,14 @@ if order != 'C': # or order != 'F': raise operationerrfmt(space.w_ValueError, "Unknown order: %s", order) + if isinstance(w_item_or_iterable, BaseArray): + if (not space.is_w(w_dtype, space.w_None) and + w_item_or_iterable.find_dtype() is not w_dtype): + raise OperationError(space.w_NotImplementedError, space.wrap( + "copying over different dtypes 
unsupported")) + if copy: + return w_item_or_iterable.copy(space) + return w_item_or_iterable shape, elems_w = find_shape_and_elems(space, w_item_or_iterable) # they come back in C order size = len(elems_w) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1374,7 +1374,13 @@ a[a & 1 == 1] = array([8, 9, 10]) assert (a == [[0, 8], [2, 9], [4, 10]]).all() - + def test_copy_kwarg(self): + from _numpypy import array + x = array([1, 2, 3]) + assert (array(x) == x).all() + assert array(x) is not x + assert array(x, copy=False) is x + assert array(x, copy=True) is not x class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): From noreply at buildbot.pypy.org Sat Jan 21 20:12:14 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 20:12:14 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: import var/std, figure out extra kwargs for reduce Message-ID: <20120121191214.2175582D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51606:1c6f3a721515 Date: 2012-01-21 21:02 +0200 http://bitbucket.org/pypy/pypy/changeset/1c6f3a721515/ Log: import var/std, figure out extra kwargs for reduce diff --git a/lib_pypy/numpypy/core/_methods.py b/lib_pypy/numpypy/core/_methods.py --- a/lib_pypy/numpypy/core/_methods.py +++ b/lib_pypy/numpypy/core/_methods.py @@ -36,7 +36,7 @@ skipna=skipna, keepdims=keepdims) if isinstance(ret, mu.ndarray): ret = um.true_divide(ret, rcount, - out=ret, casting='unsafe', subok=False) + casting='unsafe', subok=False) else: ret = ret / float(rcount) return ret @@ -79,7 +79,7 @@ rcount -= ddof if isinstance(ret, mu.ndarray): ret = um.true_divide(ret, rcount, - out=ret, casting='unsafe', subok=False) + casting='unsafe', subok=False) else: ret = ret / float(rcount) @@ -91,7 +91,7 @@ skipna=skipna, keepdims=keepdims) if 
isinstance(ret, mu.ndarray): - ret = um.sqrt(ret, out=ret) + ret = um.sqrt(ret) else: ret = um.sqrt(ret) diff --git a/lib_pypy/numpypy/core/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/core/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -2324,13 +2324,12 @@ 0.44999999925552653 """ - assert axis is None assert dtype is None assert out is None assert ddof == 0 if not hasattr(a, "std"): a = numpypy.array(a) - return a.std() + return a.std(axis=axis) def var(a, axis=None, dtype=None, out=None, ddof=0): @@ -2421,10 +2420,9 @@ 0.20250000000000001 """ - assert axis is None assert dtype is None assert out is None assert ddof == 0 if not hasattr(a, "var"): a = numpypy.array(a) - return a.var() + return a.var(axis=axis) diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -46,18 +46,27 @@ return self.identity def descr_call(self, space, __args__): - # XXX do something with strange keywords - if len(__args__.arguments_w) < self.argcount: + args_w, kwds_w = __args__.unpack() + # it occurs to me that we don't support any datatypes that + # require casting, change it later when we do + kwds_w.pop('casting', None) + w_subok = kwds_w.pop('subok', None) + w_out = kwds_w.pop('out', space.w_None) + if ((w_subok is not None and space.is_true(w_subok)) or + not space.is_w(w_out, space.w_None)): + raise OperationError(space.w_NotImplementedError, + space.wrap("parameters unsupported")) + if kwds_w or len(args_w) < self.argcount: raise OperationError(space.w_ValueError, space.wrap("invalid number of arguments") ) - elif len(__args__.arguments_w) > self.argcount: + elif len(args_w) > self.argcount: # The extra arguments should actually be the output array, but we # don't support that yet. 
raise OperationError(space.w_TypeError, space.wrap("invalid number of arguments") ) - return self.call(space, __args__.arguments_w) + return self.call(space, args_w) @unwrap_spec(skipna=bool, keepdims=bool) def descr_reduce(self, space, w_obj, w_axis=NoneNotWrapped, w_dtype=None, diff --git a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -98,15 +98,15 @@ from numpypy import array, var a = array([[1,2],[3,4]]) assert var(a) == 1.25 - #assert (var(a,0) == array([ 1., 1.])).all() - #assert (var(a,1) == array([ 0.25, 0.25])).all() + assert (var(a,0) == array([ 1., 1.])).all() + assert (var(a,1) == array([ 0.25, 0.25])).all() def test_std(self): from numpypy import array, std a = array([[1, 2], [3, 4]]) assert std(a) == 1.1180339887498949 - #assert (std(a, axis=0) == array([ 1., 1.])).all() - #assert (std(a, axis=1) == array([ 0.5, 0.5])).all() + assert (std(a, axis=0) == array([ 1., 1.])).all() + assert (std(a, axis=1) == array([ 0.5, 0.5])).all() def test_mean(self): from numpypy import array, mean, arange From noreply at buildbot.pypy.org Sat Jan 21 20:12:15 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 20:12:15 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: merge Message-ID: <20120121191215.6160382D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51607:33707327eb21 Date: 2012-01-21 21:11 +0200 http://bitbucket.org/pypy/pypy/changeset/33707327eb21/ Log: merge diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -1440,17 +1440,22 @@ return result fromtimestamp = classmethod(fromtimestamp) + @classmethod def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." 
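[Editorial note] The `utcfromtimestamp` hunk in this patch replaces the old near-second special-casing with a `divmod` split plus an explicit microsecond rollover. A standalone sketch of just that rounding step, under a hypothetical helper name (`split_timestamp` is not part of the patch):

```python
def split_timestamp(t):
    # Split into whole seconds and fractional part; divmod keeps the
    # invariant 0.0 <= frac < 1.0 even for negative timestamps.
    t, frac = divmod(t, 1.0)
    us = round(frac * 1e6)
    # A fraction within half a microsecond of a full second rounds up
    # to 1000000; roll it over into the seconds instead, since the
    # datetime constructor rejects microsecond == 1000000.
    if us == 1000000:
        t += 1
        us = 0
    return t, us
```

This is why the patched version no longer needs the `if 1 - (t % 1.0) < 0.0000005` guard: the rollover case is handled after rounding rather than before.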
- if 1 - (t % 1.0) < 0.0000005: - t = float(int(t)) + 1 - if t < 0: - t -= 1 + t, frac = divmod(t, 1.0) + us = round(frac * 1e6) + + # If timestamp is less than one microsecond smaller than a + # full second, us can be rounded up to 1000000. In this case, + # roll over to seconds, otherwise, ValueError is raised + # by the constructor. + if us == 1000000: + t += 1 + us = 0 y, m, d, hh, mm, ss, weekday, jday, dst = _time.gmtime(t) - us = int((t % 1.0) * 1000000) ss = min(ss, 59) # clamp out leap seconds if the platform has them return cls(y, m, d, hh, mm, ss, us) - utcfromtimestamp = classmethod(utcfromtimestamp) # XXX This is supposed to do better than we *can* do by using time.time(), # XXX if the platform supports a more accurate way. The C implementation diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -264,9 +264,15 @@ def descr_copy(self, space): return self.copy(space) + def descr_flatten(self, space): + return self.flatten(space) + def copy(self, space): return self.get_concrete().copy(space) + def flatten(self, space): + return self.get_concrete().flatten(space) + def descr_len(self, space): if len(self.shape): return space.wrap(self.shape[0]) @@ -599,6 +605,11 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def flatten(self, space): + array = W_NDimArray(self.size, [self.size], self.dtype) + array.setitem(0, self.value) + return array + def fill(self, space, w_value): self.value = self.dtype.coerce(space, w_value) @@ -744,14 +755,15 @@ self.left.create_sig(), self.right.create_sig()) class SliceArray(Call2): - def __init__(self, shape, dtype, left, right): + def __init__(self, shape, dtype, left, right, no_broadcast=False): + self.no_broadcast = no_broadcast Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) def create_sig(self): lsig = self.left.create_sig() rsig = 
self.right.create_sig() - if self.shape != self.right.shape: + if not self.no_broadcast and self.shape != self.right.shape: return signature.SliceloopBroadcastSignature(self.ufunc, self.name, self.calc_dtype, @@ -980,6 +992,15 @@ array.setslice(space, self) return array + def flatten(self, space): + array = W_NDimArray(self.size, [self.size], self.dtype, self.order) + if self.supports_fast_slicing(): + array._fast_setslice(space, self) + else: + arr = SliceArray(array.shape, array.dtype, array, self, no_broadcast=True) + array._sliceloop(arr) + return array + def fill(self, space, w_value): self.setslice(space, scalar_w(space, self.dtype, w_value)) @@ -1241,6 +1262,7 @@ fill = interp2app(BaseArray.descr_fill), copy = interp2app(BaseArray.descr_copy), + flatten = interp2app(BaseArray.descr_flatten), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), ) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1039,6 +1039,22 @@ a = array([5.0]) assert a.std() == 0.0 + def test_flatten(self): + from _numpypy import array + + a = array([[1, 2, 3], [4, 5, 6]]) + assert (a.flatten() == [1, 2, 3, 4, 5, 6]).all() + a = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert (a.flatten() == [1, 2, 3, 4, 5, 6, 7, 8]).all() + a = array([1, 2, 3, 4, 5, 6, 7, 8]) + assert (a[::2].flatten() == [1, 3, 5, 7]).all() + a = array([1, 2, 3]) + assert ((a + a).flatten() == [2, 4, 6]).all() + a = array(2) + assert (a.flatten() == [2]).all() + a = array([[1, 2], [3, 4]]) + assert (a.T.flatten() == [1, 3, 2, 4]).all() + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -0,0 +1,24 @@ +"""Additional tests 
for datetime.""" + +import time +import datetime +import os + +def test_utcfromtimestamp(): + """Confirm that utcfromtimestamp and fromtimestamp give consistent results. + + Based on danchr's test script in https://bugs.pypy.org/issue986 + """ + try: + prev_tz = os.environ.get("TZ") + os.environ["TZ"] = "GMT" + for unused in xrange(100): + now = time.time() + delta = (datetime.datetime.utcfromtimestamp(now) - + datetime.datetime.fromtimestamp(now)) + assert delta.days * 86400 + delta.seconds == 0 + finally: + if prev_tz is None: + del os.environ["TZ"] + else: + os.environ["TZ"] = prev_tz diff --git a/pypy/objspace/std/complexobject.py b/pypy/objspace/std/complexobject.py --- a/pypy/objspace/std/complexobject.py +++ b/pypy/objspace/std/complexobject.py @@ -8,6 +8,7 @@ from pypy.rlib.rbigint import rbigint from pypy.rlib.rfloat import ( formatd, DTSF_STR_PRECISION, isinf, isnan, copysign) +from pypy.rlib import jit import math @@ -129,10 +130,10 @@ ir = len * math.sin(phase) return W_ComplexObject(rr, ir) - def pow_int(self, n): - if n > 100 or n < -100: - return self.pow(W_ComplexObject(1.0 * n, 0.0)) - elif n > 0: + def pow_small_int(self, n): + if n >= 0: + if jit.isconstant(n) and n == 2: + return self.mul(self) return self.pow_positive_int(n) else: return w_one.div(self.pow_positive_int(-n)) @@ -217,10 +218,10 @@ def pow__Complex_Complex_ANY(space, w_complex, w_exponent, thirdArg): if not space.is_w(thirdArg, space.w_None): raise OperationError(space.w_ValueError, space.wrap('complex modulo')) - int_exponent = int(w_exponent.realval) try: - if w_exponent.imagval == 0.0 and w_exponent.realval == int_exponent: - w_p = w_complex.pow_int(int_exponent) + r = w_exponent.realval + if w_exponent.imagval == 0.0 and -100.0 <= r <= 100.0 and r == int(r): + w_p = w_complex.pow_small_int(int(r)) else: w_p = w_complex.pow(w_exponent) except ZeroDivisionError: diff --git a/pypy/objspace/std/test/test_complexobject.py b/pypy/objspace/std/test/test_complexobject.py --- 
a/pypy/objspace/std/test/test_complexobject.py +++ b/pypy/objspace/std/test/test_complexobject.py @@ -71,7 +71,7 @@ assert _powu((0.0,1.0),2) == (-1.0,0.0) def _powi((r1, i1), n): - w_res = W_ComplexObject(r1, i1).pow_int(n) + w_res = W_ComplexObject(r1, i1).pow_small_int(n) return w_res.realval, w_res.imagval assert _powi((0.0,2.0),0) == (1.0,0.0) assert _powi((0.0,0.0),2) == (0.0,0.0) @@ -213,6 +213,7 @@ assert a ** 105 == a ** 105 assert a ** -105 == a ** -105 assert a ** -30 == a ** -30 + assert a ** 2 == a * a assert 0.0j ** 0 == 1 From noreply at buildbot.pypy.org Sat Jan 21 20:48:48 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 21 Jan 2012 20:48:48 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: Make dict.pop RPython. Message-ID: <20120121194848.81E1C82D45@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: numpy-back-to-applevel Changeset: r51608:81d4c62723ed Date: 2012-01-21 13:48 -0600 http://bitbucket.org/pypy/pypy/changeset/81d4c62723ed/ Log: Make dict.pop RPython. 
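[Editorial note] The semantics this commit teaches the rtyper — `d.pop(key)` raising KeyError on a miss, `d.pop(key, default)` swallowing it — can be sketched in plain Python. The helper names mirror the low-level functions in the diff but are otherwise hypothetical, not the rtyper's actual entry points:

```python
def ll_pop_sketch(d, key):
    # Mirrors ll_pop: a successful lookup removes the entry and
    # returns its value; a miss raises KeyError.
    if key in d:
        value = d[key]
        del d[key]
        return value
    raise KeyError(key)

def ll_pop_default_sketch(d, key, default):
    # Mirrors ll_pop_default: same as above, but a miss returns the
    # caller-supplied default instead of raising.
    try:
        return ll_pop_sketch(d, key)
    except KeyError:
        return default
```

The 2-arg/3-arg split matches `rtype_method_pop` dispatching on `hop.nb_args` below: two arguments select the raising variant, three select the defaulting one.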
diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -440,6 +440,12 @@ def method_popitem(dct): return dct.getanyitem('items') + def method_pop(dct, s_key, s_dfl=None): + dct.dictdef.generalize_key(s_key) + if s_dfl is not None: + dct.dictdef.generalize_value(s_dfl) + return dct.dictdef.read_value() + def _can_only_throw(dic, *ignore): if dic1.dictdef.dictkey.custom_eq_hash: return None # r_dict: can throw anything diff --git a/pypy/rpython/lltypesystem/rdict.py b/pypy/rpython/lltypesystem/rdict.py --- a/pypy/rpython/lltypesystem/rdict.py +++ b/pypy/rpython/lltypesystem/rdict.py @@ -323,6 +323,16 @@ hop.exception_is_here() return hop.gendirectcall(ll_popitem, cTUPLE, v_dict) + def rtype_method_pop(self, hop): + if hop.nb_args == 2: + v_args = hop.inputargs(self, self.key_repr) + target = ll_pop + elif hop.nb_args == 3: + v_args = hop.inputargs(self, self.key_repr, self.value_repr) + target = ll_pop_default + hop.exception_is_here() + return hop.gendirectcall(target, *v_args) + class __extend__(pairtype(DictRepr, rmodel.Repr)): def rtype_getitem((r_dict, r_key), hop): @@ -874,3 +884,18 @@ r.item1 = recast(ELEM.TO.item1, entry.value) _ll_dict_del(dic, i) return r + +def ll_pop(dic, key): + i = ll_dict_lookup(dic, key, dic.keyhash(key)) + if not i & HIGHEST_BIT: + value = ll_get_value(dic, i) + _ll_dict_del(dic, i) + return value + else: + raise KeyError + +def ll_pop_default(dic, key, dfl): + try: + return ll_pop(dic, key) + except KeyError: + return dfl diff --git a/pypy/rpython/ootypesystem/rdict.py b/pypy/rpython/ootypesystem/rdict.py --- a/pypy/rpython/ootypesystem/rdict.py +++ b/pypy/rpython/ootypesystem/rdict.py @@ -160,6 +160,16 @@ hop.exception_is_here() return hop.gendirectcall(ll_popitem, cTUPLE, v_dict) + def rtype_method_pop(self, hop): + if hop.nb_args == 2: + v_args = hop.inputargs(self, self.key_repr) + target = ll_pop + elif hop.nb_args == 3: + v_args = 
hop.inputargs(self, self.key_repr, self.value_repr) + target = ll_pop_default + hop.exception_is_here() + return hop.gendirectcall(target, *v_args) + def __get_func(self, interp, r_func, fn, TYPE): if isinstance(r_func, MethodOfFrozenPBCRepr): obj = r_func.r_im_self.convert_const(fn.im_self) @@ -370,6 +380,20 @@ return res raise KeyError +def ll_pop(d, key): + if d.ll_contains(key): + value = d.ll_get(key) + d.ll_remove(key) + return value + else: + raise KeyError + +def ll_pop_default(d, key, dfl): + try: + return ll_pop(d, key) + except KeyError: + return dfl + # ____________________________________________________________ # # Iteration. diff --git a/pypy/rpython/test/test_rdict.py b/pypy/rpython/test/test_rdict.py --- a/pypy/rpython/test/test_rdict.py +++ b/pypy/rpython/test/test_rdict.py @@ -622,6 +622,26 @@ res = self.interpret(func, []) assert res in [5263, 6352] + def test_dict_pop(self): + def f(n, default): + d = {} + d[2] = 3 + d[4] = 5 + if default == -1: + try: + x = d.pop(n) + except KeyError: + x = -1 + else: + x = d.pop(n, default) + return x * 10 + len(d) + res = self.interpret(f, [2, -1]) + assert res == 31 + res = self.interpret(f, [3, -1]) + assert res == -8 + res = self.interpret(f, [2, 5]) + assert res == 31 + class TestLLtype(BaseTestRdict, LLRtypeMixin): def test_dict_but_not_with_char_keys(self): def func(i): From noreply at buildbot.pypy.org Sat Jan 21 21:40:08 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sat, 21 Jan 2012 21:40:08 +0100 (CET) Subject: [pypy-commit] pypy pytest: use funcargs in the testrunner test to ease having recoverable output Message-ID: <20120121204008.0B52782D45@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: pytest Changeset: r51609:a696d22b84c6 Date: 2012-01-21 20:08 +0100 http://bitbucket.org/pypy/pypy/changeset/a696d22b84c6/ Log: use funcargs in the testrunner test to ease having recoverable output diff --git a/testrunner/test/conftest.py b/testrunner/test/conftest.py new file 
mode 100644 --- /dev/null +++ b/testrunner/test/conftest.py @@ -0,0 +1,6 @@ + +def pytest_runtest_makereport(__multicall__, item): + report = __multicall__.execute() + if 'out' in item.funcargs: + report.sections.append(('out', item.funcargs['out'].read())) + return report diff --git a/testrunner/test/test_runner.py b/testrunner/test/test_runner.py --- a/testrunner/test/test_runner.py +++ b/testrunner/test/test_runner.py @@ -53,49 +53,44 @@ assert not should_report_failure("F Def\n. Ghi\n. Jkl\n") + class TestRunHelper(object): + def pytest_funcarg__out(self, request): + tmpdir = request.getfuncargvalue('tmpdir') + return tmpdir.ensure('out') - def setup_method(self, meth): - h, self.fn = tempfile.mkstemp() - os.close(h) + def test_run(self, out): + res = runner.run([sys.executable, "-c", "print 42"], '.', out) + assert res == 0 + assert out.read() == "42\n" - def teardown_method(self, meth): - os.unlink(self.fn) - - def test_run(self): - res = runner.run([sys.executable, "-c", "print 42"], '.', - py.path.local(self.fn)) - assert res == 0 - out = py.path.local(self.fn).read('r') - assert out == "42\n" - - def test_error(self): - res = runner.run([sys.executable, "-c", "import sys; sys.exit(3)"], '.', py.path.local(self.fn)) + def test_error(self, out): + res = runner.run([sys.executable, "-c", "import sys; sys.exit(3)"], '.', out) assert res == 3 - def test_signal(self): + def test_signal(self, out): if sys.platform == 'win32': py.test.skip("no death by signal on windows") - res = runner.run([sys.executable, "-c", "import os; os.kill(os.getpid(), 9)"], '.', py.path.local(self.fn)) + res = runner.run([sys.executable, "-c", "import os; os.kill(os.getpid(), 9)"], '.', out) assert res == -9 - def test_timeout(self): - res = runner.run([sys.executable, "-c", "while True: pass"], '.', py.path.local(self.fn), timeout=3) + def test_timeout(self, out): + res = runner.run([sys.executable, "-c", "while True: pass"], '.', out, timeout=3) assert res == -999 - def 
test_timeout_lock(self): - res = runner.run([sys.executable, "-c", "import threading; l=threading.Lock(); l.acquire(); l.acquire()"], '.', py.path.local(self.fn), timeout=3) + def test_timeout_lock(self, out): + res = runner.run([sys.executable, "-c", "import threading; l=threading.Lock(); l.acquire(); l.acquire()"], '.', out, timeout=3) assert res == -999 - def test_timeout_syscall(self): - res = runner.run([sys.executable, "-c", "import socket; s=s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); s.bind(('', 0)); s.recv(1000)"], '.', py.path.local(self.fn), timeout=3) + def test_timeout_syscall(self, out): + res = runner.run([sys.executable, "-c", "import socket; s=s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); s.bind(('', 0)); s.recv(1000)"], '.', out, timeout=3) assert res == -999 - def test_timeout_success(self): + def test_timeout_success(self, out): res = runner.run([sys.executable, "-c", "print 42"], '.', - py.path.local(self.fn), timeout=2) + out, timeout=2) assert res == 0 - out = py.path.local(self.fn).read('r') + out = out.read() assert out == "42\n" From noreply at buildbot.pypy.org Sat Jan 21 21:40:09 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sat, 21 Jan 2012 21:40:09 +0100 (CET) Subject: [pypy-commit] pypy pytest: fix the remaining testrunner issues Message-ID: <20120121204009.418FA82D45@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: pytest Changeset: r51610:d423bf345f99 Date: 2012-01-21 21:39 +0100 http://bitbucket.org/pypy/pypy/changeset/d423bf345f99/ Log: fix the remaining testrunner issues diff --git a/testrunner/test/test_runner.py b/testrunner/test/test_runner.py --- a/testrunner/test/test_runner.py +++ b/testrunner/test/test_runner.py @@ -117,6 +117,7 @@ expected = ['INTERP', 'IARG', 'driver', 'darg', + '-p', 'resultlog', '--resultlog=LOGFILE', 'test_one'] @@ -133,6 +134,7 @@ expected = ['/wd' + os.sep + './INTERP', 'IARG', 'driver', 'darg', + '-p', 'resultlog', '--resultlog=LOGFILE', 
'test_one'] @@ -246,7 +248,7 @@ assert '\n' in log log_lines = log.splitlines() - assert log_lines[0] == ". test_normal/test_example.py:test_one" + assert ". test_normal/test_example.py::test_one" in log_lines nfailures = 0 noutcomes = 0 for line in log_lines: From noreply at buildbot.pypy.org Sat Jan 21 22:58:57 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sat, 21 Jan 2012 22:58:57 +0100 (CET) Subject: [pypy-commit] pypy py3k: Move the kwonly arguments test to the interpreter Message-ID: <20120121215857.475F582D45@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51611:31e2e6510fba Date: 2012-01-21 22:58 +0100 http://bitbucket.org/pypy/pypy/changeset/31e2e6510fba/ Log: Move the kwonly arguments test to the interpreter diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -810,15 +810,6 @@ """ yield self.st, func, "f()", (1, [2, 3, 4], 5) - def test_kwonlyargs_default_parameters(self): - """ This actually test an interpreter bug, but since we can't parse - py3k only code in the interpreter tests right now, it's there waiting - for this feature""" - func = """ def f(a, b, c=3, *, d=4): - return a, b, c, d - """ - yield self.st, func, "f(1, 2)", (1, 2, 3, 4) - class AppTestCompiler: def test_docstring_not_loaded(self): diff --git a/pypy/interpreter/test/test_interpreter.py b/pypy/interpreter/test/test_interpreter.py --- a/pypy/interpreter/test/test_interpreter.py +++ b/pypy/interpreter/test/test_interpreter.py @@ -240,6 +240,12 @@ ''' assert self.codetest(code, 'f', []) == os.name + def test_kwonlyargs_default_parameters(self): + code = """ def f(a, b, c=3, *, d=4): + return a, b, c, d + """ + assert self.codetest(code, "f", [1, 2]) == (1, 2, 3, 4) + class AppTestInterpreter: def test_trivial(self): From noreply at buildbot.pypy.org Sat Jan 21 
23:11:15 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 23:11:15 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: start adding necessary things for using applevel __str__ and __repr__ Message-ID: <20120121221115.6A78582D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51612:616ba3da7de7 Date: 2012-01-21 21:46 +0200 http://bitbucket.org/pypy/pypy/changeset/616ba3da7de7/ Log: start adding necessary things for using applevel __str__ and __repr__ diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py --- a/lib_pypy/numpypy/core/__init__.py +++ b/lib_pypy/numpypy/core/__init__.py @@ -1,1 +1,2 @@ from .fromnumeric import * +from .numeric import * diff --git a/lib_pypy/numpypy/core/arrayprint.py b/lib_pypy/numpypy/core/arrayprint.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/arrayprint.py @@ -0,0 +1,785 @@ +"""Array printing function + +$Id: arrayprint.py,v 1.9 2005/09/13 13:58:44 teoliphant Exp $ +""" +__all__ = ["array2string", "set_printoptions", "get_printoptions"] +__docformat__ = 'restructuredtext' + +# +# Written by Konrad Hinsen +# last revision: 1996-3-13 +# modified by Jim Hugunin 1997-3-3 for repr's and str's (and other details) +# and by Perry Greenfield 2000-4-1 for numarray +# and by Travis Oliphant 2005-8-22 for numpy + +import sys +import _numpypy as _nt +from _numpypy import maximum, minimum, absolute, not_equal #, isnan, isinf +#from _numpypy import format_longfloat, datetime_as_string, datetime_data, isna +from fromnumeric import ravel + + +def product(x, y): return x*y + +_summaryEdgeItems = 3 # repr N leading and trailing items of each dimension +_summaryThreshold = 1000 # total items > triggers array summarization + +_float_output_precision = 8 +_float_output_suppress_small = False +_line_width = 75 +_nan_str = 'nan' +_inf_str = 'inf' +_na_str = 'NA' +_formatter = None # formatting function for array elements + 
+if sys.version_info[0] >= 3: + from functools import reduce + +def set_printoptions(precision=None, threshold=None, edgeitems=None, + linewidth=None, suppress=None, + nanstr=None, infstr=None, nastr=None, + formatter=None): + """ + Set printing options. + + These options determine the way floating point numbers, arrays and + other NumPy objects are displayed. + + Parameters + ---------- + precision : int, optional + Number of digits of precision for floating point output (default 8). + threshold : int, optional + Total number of array elements which trigger summarization + rather than full repr (default 1000). + edgeitems : int, optional + Number of array items in summary at beginning and end of + each dimension (default 3). + linewidth : int, optional + The number of characters per line for the purpose of inserting + line breaks (default 75). + suppress : bool, optional + Whether or not suppress printing of small floating point values + using scientific notation (default False). + nanstr : str, optional + String representation of floating point not-a-number (default nan). + infstr : str, optional + String representation of floating point infinity (default inf). + nastr : str, optional + String representation of NA missing value (default NA). + formatter : dict of callables, optional + If not None, the keys should indicate the type(s) that the respective + formatting function applies to. Callables should return a string. + Types that are not specified (by their corresponding keys) are handled + by the default formatters. 
Individual types for which a formatter + can be set are:: + + - 'bool' + - 'int' + - 'timedelta' : a `numpy.timedelta64` + - 'datetime' : a `numpy.datetime64` + - 'float' + - 'longfloat' : 128-bit floats + - 'complexfloat' + - 'longcomplexfloat' : composed of two 128-bit floats + - 'numpy_str' : types `numpy.string_` and `numpy.unicode_` + - 'str' : all other strings + + Other keys that can be used to set a group of types at once are:: + + - 'all' : sets all types + - 'int_kind' : sets 'int' + - 'float_kind' : sets 'float' and 'longfloat' + - 'complex_kind' : sets 'complexfloat' and 'longcomplexfloat' + - 'str_kind' : sets 'str' and 'numpystr' + + See Also + -------- + get_printoptions, set_string_function, array2string + + Notes + ----- + `formatter` is always reset with a call to `set_printoptions`. + + Examples + -------- + Floating point precision can be set: + + >>> np.set_printoptions(precision=4) + >>> print np.array([1.123456789]) + [ 1.1235] + + Long arrays can be summarised: + + >>> np.set_printoptions(threshold=5) + >>> print np.arange(10) + [0 1 2 ..., 7 8 9] + + Small results can be suppressed: + + >>> eps = np.finfo(float).eps + >>> x = np.arange(4.) + >>> x**2 - (x + eps)**2 + array([ -4.9304e-32, -4.4409e-16, 0.0000e+00, 0.0000e+00]) + >>> np.set_printoptions(suppress=True) + >>> x**2 - (x + eps)**2 + array([-0., -0., 0., 0.]) + + A custom formatter can be used to display array elements as desired: + + >>> np.set_printoptions(formatter={'all':lambda x: 'int: '+str(-x)}) + >>> x = np.arange(3) + >>> x + array([int: 0, int: -1, int: -2]) + >>> np.set_printoptions() # formatter gets reset + >>> x + array([0, 1, 2]) + + To put back the default options, you can use: + + >>> np.set_printoptions(edgeitems=3,infstr='inf', + ... linewidth=75, nanstr='nan', precision=8, + ... 
suppress=False, threshold=1000, formatter=None) + """ + + global _summaryThreshold, _summaryEdgeItems, _float_output_precision, \ + _line_width, _float_output_suppress_small, _nan_str, _inf_str, \ + _na_str, _formatter + if linewidth is not None: + _line_width = linewidth + if threshold is not None: + _summaryThreshold = threshold + if edgeitems is not None: + _summaryEdgeItems = edgeitems + if precision is not None: + _float_output_precision = precision + if suppress is not None: + _float_output_suppress_small = not not suppress + if nanstr is not None: + _nan_str = nanstr + if infstr is not None: + _inf_str = infstr + if nastr is not None: + _na_str = nastr + _formatter = formatter + +def get_printoptions(): + """ + Return the current print options. + + Returns + ------- + print_opts : dict + Dictionary of current print options with keys + + - precision : int + - threshold : int + - edgeitems : int + - linewidth : int + - suppress : bool + - nanstr : str + - infstr : str + - formatter : dict of callables + + For a full description of these options, see `set_printoptions`. 
+ + See Also + -------- + set_printoptions, set_string_function + + """ + d = dict(precision=_float_output_precision, + threshold=_summaryThreshold, + edgeitems=_summaryEdgeItems, + linewidth=_line_width, + suppress=_float_output_suppress_small, + nanstr=_nan_str, + infstr=_inf_str, + nastr=_na_str, + formatter=_formatter) + return d + +def _leading_trailing(a): + import numeric as _nc + if a.ndim == 1: + if len(a) > 2*_summaryEdgeItems: + b = _nc.concatenate((a[:_summaryEdgeItems], + a[-_summaryEdgeItems:])) + else: + b = a + else: + if len(a) > 2*_summaryEdgeItems: + l = [_leading_trailing(a[i]) for i in range( + min(len(a), _summaryEdgeItems))] + l.extend([_leading_trailing(a[-i]) for i in range( + min(len(a), _summaryEdgeItems),0,-1)]) + else: + l = [_leading_trailing(a[i]) for i in range(0, len(a))] + b = _nc.concatenate(tuple(l)) + return b + +def _boolFormatter(x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + elif x: + return ' True' + else: + return 'False' + + +def repr_format(x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + return repr(x) + +def _array2string(a, max_line_width, precision, suppress_small, separator=' ', + prefix="", formatter=None): + + if max_line_width is None: + max_line_width = _line_width + + if precision is None: + precision = _float_output_precision + + if suppress_small is None: + suppress_small = _float_output_suppress_small + + if formatter is None: + formatter = _formatter + + if a.size > _summaryThreshold: + summary_insert = "..., " + data = _leading_trailing(a) + else: + summary_insert = "" + data = ravel(a) + + formatdict = {'bool' : _boolFormatter, + 'int' : IntegerFormat(data), + 'float' : FloatFormat(data, precision, suppress_small), + 'longfloat' : LongFloatFormat(precision), + 'complexfloat' : ComplexFormat(data, precision, + suppress_small), + 'longcomplexfloat' : LongComplexFormat(precision), + 'datetime' : DatetimeFormat(data), + 'timedelta' : TimedeltaFormat(data), + 'numpystr' : 
repr_format, + 'str' : str} + + if formatter is not None: + fkeys = [k for k in formatter.keys() if formatter[k] is not None] + if 'all' in fkeys: + for key in formatdict.keys(): + formatdict[key] = formatter['all'] + if 'int_kind' in fkeys: + for key in ['int']: + formatdict[key] = formatter['int_kind'] + if 'float_kind' in fkeys: + for key in ['float', 'longfloat']: + formatdict[key] = formatter['float_kind'] + if 'complex_kind' in fkeys: + for key in ['complexfloat', 'longcomplexfloat']: + formatdict[key] = formatter['complex_kind'] + if 'str_kind' in fkeys: + for key in ['numpystr', 'str']: + formatdict[key] = formatter['str_kind'] + for key in formatdict.keys(): + if key in fkeys: + formatdict[key] = formatter[key] + + try: + format_function = a._format + msg = "The `_format` attribute is deprecated in Numpy 2.0 and " \ + "will be removed in 2.1. Use the `formatter` kw instead." + import warnings + warnings.warn(msg, DeprecationWarning) + except AttributeError: + # find the right formatting function for the array + dtypeobj = a.dtype.type + if issubclass(dtypeobj, _nt.bool_): + format_function = formatdict['bool'] + elif issubclass(dtypeobj, _nt.integer): + if issubclass(dtypeobj, _nt.timedelta64): + format_function = formatdict['timedelta'] + else: + format_function = formatdict['int'] + elif issubclass(dtypeobj, _nt.floating): + if issubclass(dtypeobj, _nt.longfloat): + format_function = formatdict['longfloat'] + else: + format_function = formatdict['float'] + elif issubclass(dtypeobj, _nt.complexfloating): + if issubclass(dtypeobj, _nt.clongfloat): + format_function = formatdict['longcomplexfloat'] + else: + format_function = formatdict['complexfloat'] + elif issubclass(dtypeobj, (_nt.unicode_, _nt.string_)): + format_function = formatdict['numpystr'] + elif issubclass(dtypeobj, _nt.datetime64): + format_function = formatdict['datetime'] + else: + format_function = formatdict['str'] + + # skip over "[" + next_line_prefix = " " + # skip over array( + 
next_line_prefix += " "*len(prefix) + + lst = _formatArray(a, format_function, len(a.shape), max_line_width, + next_line_prefix, separator, + _summaryEdgeItems, summary_insert)[:-1] + return lst + +def _convert_arrays(obj): + import numeric as _nc + newtup = [] + for k in obj: + if isinstance(k, _nc.ndarray): + k = k.tolist() + elif isinstance(k, tuple): + k = _convert_arrays(k) + newtup.append(k) + return tuple(newtup) + + +def array2string(a, max_line_width=None, precision=None, + suppress_small=None, separator=' ', prefix="", + style=repr, formatter=None): + """ + Return a string representation of an array. + + Parameters + ---------- + a : ndarray + Input array. + max_line_width : int, optional + The maximum number of columns the string should span. Newline + characters splits the string appropriately after array elements. + precision : int, optional + Floating point precision. Default is the current printing + precision (usually 8), which can be altered using `set_printoptions`. + suppress_small : bool, optional + Represent very small numbers as zero. A number is "very small" if it + is smaller than the current printing precision. + separator : str, optional + Inserted between elements. + prefix : str, optional + An array is typically printed as:: + + 'prefix(' + array2string(a) + ')' + + The length of the prefix string is used to align the + output correctly. + style : function, optional + A function that accepts an ndarray and returns a string. Used only + when the shape of `a` is equal to ``()``, i.e. for 0-D arrays. + formatter : dict of callables, optional + If not None, the keys should indicate the type(s) that the respective + formatting function applies to. Callables should return a string. + Types that are not specified (by their corresponding keys) are handled + by the default formatters. 
Individual types for which a formatter + can be set are:: + + - 'bool' + - 'int' + - 'timedelta' : a `numpy.timedelta64` + - 'datetime' : a `numpy.datetime64` + - 'float' + - 'longfloat' : 128-bit floats + - 'complexfloat' + - 'longcomplexfloat' : composed of two 128-bit floats + - 'numpy_str' : types `numpy.string_` and `numpy.unicode_` + - 'str' : all other strings + + Other keys that can be used to set a group of types at once are:: + + - 'all' : sets all types + - 'int_kind' : sets 'int' + - 'float_kind' : sets 'float' and 'longfloat' + - 'complex_kind' : sets 'complexfloat' and 'longcomplexfloat' + - 'str_kind' : sets 'str' and 'numpystr' + + Returns + ------- + array_str : str + String representation of the array. + + Raises + ------ + TypeError : if a callable in `formatter` does not return a string. + + See Also + -------- + array_str, array_repr, set_printoptions, get_printoptions + + Notes + ----- + If a formatter is specified for a certain type, the `precision` keyword is + ignored for that type. + + Examples + -------- + >>> x = np.array([1e-16,1,2,3]) + >>> print np.array2string(x, precision=2, separator=',', + ... suppress_small=True) + [ 0., 1., 2., 3.] + + >>> x = np.arange(3.) + >>> np.array2string(x, formatter={'float_kind':lambda x: "%.2f" % x}) + '[0.00 1.00 2.00]' + + >>> x = np.arange(3) + >>> np.array2string(x, formatter={'int':lambda x: hex(x)}) + '[0x0L 0x1L 0x2L]' + + """ + + if a.shape == (): + x = a.item() + if isna(x): + lst = str(x).replace('NA', _na_str, 1) + else: + try: + lst = a._format(x) + msg = "The `_format` attribute is deprecated in Numpy " \ + "2.0 and will be removed in 2.1. Use the " \ + "`formatter` kw instead." 
+ import warnings + warnings.warn(msg, DeprecationWarning) + except AttributeError: + if isinstance(x, tuple): + x = _convert_arrays(x) + lst = style(x) + elif reduce(product, a.shape) == 0: + # treat as a null array if any of shape elements == 0 + lst = "[]" + else: + lst = _array2string(a, max_line_width, precision, suppress_small, + separator, prefix, formatter=formatter) + return lst + +def _extendLine(s, line, word, max_line_len, next_line_prefix): + if len(line.rstrip()) + len(word.rstrip()) >= max_line_len: + s += line.rstrip() + "\n" + line = next_line_prefix + line += word + return s, line + + +def _formatArray(a, format_function, rank, max_line_len, + next_line_prefix, separator, edge_items, summary_insert): + """formatArray is designed for two modes of operation: + + 1. Full output + + 2. Summarized output + + """ + if rank == 0: + obj = a.item() + if isinstance(obj, tuple): + obj = _convert_arrays(obj) + return str(obj) + + if summary_insert and 2*edge_items < len(a): + leading_items, trailing_items, summary_insert1 = \ + edge_items, edge_items, summary_insert + else: + leading_items, trailing_items, summary_insert1 = 0, len(a), "" + + if rank == 1: + s = "" + line = next_line_prefix + for i in xrange(leading_items): + word = format_function(a[i]) + separator + s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) + + if summary_insert1: + s, line = _extendLine(s, line, summary_insert1, max_line_len, next_line_prefix) + + for i in xrange(trailing_items, 1, -1): + word = format_function(a[-i]) + separator + s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) + + word = format_function(a[-1]) + s, line = _extendLine(s, line, word, max_line_len, next_line_prefix) + s += line + "]\n" + s = '[' + s[len(next_line_prefix):] + else: + s = '[' + sep = separator.rstrip() + for i in xrange(leading_items): + if i > 0: + s += next_line_prefix + s += _formatArray(a[i], format_function, rank-1, max_line_len, + " " + next_line_prefix, 
separator, edge_items, + summary_insert) + s = s.rstrip() + sep.rstrip() + '\n'*max(rank-1,1) + + if summary_insert1: + s += next_line_prefix + summary_insert1 + "\n" + + for i in xrange(trailing_items, 1, -1): + if leading_items or i != trailing_items: + s += next_line_prefix + s += _formatArray(a[-i], format_function, rank-1, max_line_len, + " " + next_line_prefix, separator, edge_items, + summary_insert) + s = s.rstrip() + sep.rstrip() + '\n'*max(rank-1,1) + if leading_items or trailing_items > 1: + s += next_line_prefix + s += _formatArray(a[-1], format_function, rank-1, max_line_len, + " " + next_line_prefix, separator, edge_items, + summary_insert).rstrip()+']\n' + return s + +class FloatFormat(object): + def __init__(self, data, precision, suppress_small, sign=False): + self.precision = precision + self.suppress_small = suppress_small + self.sign = sign + self.exp_format = False + self.large_exponent = False + self.max_str_len = 0 + try: + self.fillFormat(data) + except (TypeError, NotImplementedError): + # if reduce(data) fails, this instance will not be called, just + # instantiated in formatdict. + pass + + def fillFormat(self, data): + import numeric as _nc + errstate = _nc.seterr(all='ignore') + try: + special = isnan(data) | isinf(data) | isna(data) + special[isna(data)] = False + valid = not_equal(data, 0) & ~special + valid[isna(data)] = False + non_zero = absolute(data.compress(valid)) + if len(non_zero) == 0: + max_val = 0. + min_val = 0. 
+ else: + max_val = maximum.reduce(non_zero, skipna=True) + min_val = minimum.reduce(non_zero, skipna=True) + if max_val >= 1.e8: + self.exp_format = True + if not self.suppress_small and (min_val < 0.0001 + or max_val/min_val > 1000.): + self.exp_format = True + finally: + _nc.seterr(**errstate) + + if self.exp_format: + self.large_exponent = 0 < min_val < 1e-99 or max_val >= 1e100 + self.max_str_len = 8 + self.precision + if self.large_exponent: + self.max_str_len += 1 + if self.sign: + format = '%+' + else: + format = '%' + format = format + '%d.%de' % (self.max_str_len, self.precision) + else: + format = '%%.%df' % (self.precision,) + if len(non_zero): + precision = max([_digits(x, self.precision, format) + for x in non_zero]) + else: + precision = 0 + precision = min(self.precision, precision) + self.max_str_len = len(str(int(max_val))) + precision + 2 + if _nc.any(special): + self.max_str_len = max(self.max_str_len, + len(_nan_str), + len(_inf_str)+1, + len(_na_str)) + if self.sign: + format = '%#+' + else: + format = '%#' + format = format + '%d.%df' % (self.max_str_len, precision) + + self.special_fmt = '%%%ds' % (self.max_str_len,) + self.format = format + + def __call__(self, x, strip_zeros=True): + import numeric as _nc + err = _nc.seterr(invalid='ignore') + try: + if isna(x): + return self.special_fmt % (str(x).replace('NA', _na_str, 1),) + elif isnan(x): + if self.sign: + return self.special_fmt % ('+' + _nan_str,) + else: + return self.special_fmt % (_nan_str,) + elif isinf(x): + if x > 0: + if self.sign: + return self.special_fmt % ('+' + _inf_str,) + else: + return self.special_fmt % (_inf_str,) + else: + return self.special_fmt % ('-' + _inf_str,) + finally: + _nc.seterr(**err) + + s = self.format % x + if self.large_exponent: + # 3-digit exponent + expsign = s[-3] + if expsign == '+' or expsign == '-': + s = s[1:-2] + '0' + s[-2:] + elif self.exp_format: + # 2-digit exponent + if s[-3] == '0': + s = ' ' + s[:-3] + s[-2:] + elif strip_zeros: + z = 
s.rstrip('0') + s = z + ' '*(len(s)-len(z)) + return s + + +def _digits(x, precision, format): + s = format % x + z = s.rstrip('0') + return precision - len(s) + len(z) + + +_MAXINT = sys.maxint +_MININT = -sys.maxint-1 +class IntegerFormat(object): + def __init__(self, data): + try: + max_str_len = max(len(str(maximum.reduce(data, skipna=True))), + len(str(minimum.reduce(data, skipna=True)))) + self.format = '%' + str(max_str_len) + 'd' + except TypeError, NotImplementedError: + # if reduce(data) fails, this instance will not be called, just + # instantiated in formatdict. + pass + except ValueError: + # this occurs when everything is NA + pass + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + elif _MININT < x < _MAXINT: + return self.format % x + else: + return "%s" % x + +class LongFloatFormat(object): + # XXX Have to add something to determine the width to use a la FloatFormat + # Right now, things won't line up properly + def __init__(self, precision, sign=False): + self.precision = precision + self.sign = sign + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + elif isnan(x): + if self.sign: + return '+' + _nan_str + else: + return ' ' + _nan_str + elif isinf(x): + if x > 0: + if self.sign: + return '+' + _inf_str + else: + return ' ' + _inf_str + else: + return '-' + _inf_str + elif x >= 0: + if self.sign: + return '+' + format_longfloat(x, self.precision) + else: + return ' ' + format_longfloat(x, self.precision) + else: + return format_longfloat(x, self.precision) + + +class LongComplexFormat(object): + def __init__(self, precision): + self.real_format = LongFloatFormat(precision) + self.imag_format = LongFloatFormat(precision, sign=True) + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + r = self.real_format(x.real) + i = self.imag_format(x.imag) + return r + i + 'j' + + +class ComplexFormat(object): + def __init__(self, x, precision, 
suppress_small): + self.real_format = FloatFormat(x.real, precision, suppress_small) + self.imag_format = FloatFormat(x.imag, precision, suppress_small, + sign=True) + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + r = self.real_format(x.real, strip_zeros=False) + i = self.imag_format(x.imag, strip_zeros=False) + if not self.imag_format.exp_format: + z = i.rstrip('0') + i = z + 'j' + ' '*(len(i)-len(z)) + else: + i = i + 'j' + return r + i + +class DatetimeFormat(object): + def __init__(self, x, unit=None, + timezone=None, casting='same_kind'): + # Get the unit from the dtype + if unit is None: + if x.dtype.kind == 'M': + unit = datetime_data(x.dtype)[0] + else: + unit = 's' + + # If timezone is default, make it 'local' or 'UTC' based on the unit + if timezone is None: + # Date units -> UTC, time units -> local + if unit in ('Y', 'M', 'W', 'D'): + self.timezone = 'UTC' + else: + self.timezone = 'local' + else: + self.timezone = timezone + self.unit = unit + self.casting = casting + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + return "'%s'" % datetime_as_string(x, + unit=self.unit, + timezone=self.timezone, + casting=self.casting) + +class TimedeltaFormat(object): + def __init__(self, data): + if data.dtype.kind == 'm': + v = data.view('i8') + max_str_len = max(len(str(maximum.reduce(v))), + len(str(minimum.reduce(v)))) + self.format = '%' + str(max_str_len) + 'd' + + def __call__(self, x): + if isna(x): + return str(x).replace('NA', _na_str, 1) + else: + return self.format % x.astype('i8') + diff --git a/lib_pypy/numpypy/core/numeric.py b/lib_pypy/numpypy/core/numeric.py --- a/lib_pypy/numpypy/core/numeric.py +++ b/lib_pypy/numpypy/core/numeric.py @@ -1,5 +1,8 @@ -from _numpypy import array +from _numpypy import array, ndarray, int_, float_ #, complex_# , longlong +import sys +import _numpypy as multiarray # ARGH +from numpypy.core.arrayprint import array2string def 
asanyarray(a, dtype=None, order=None, maskna=None, ownmaskna=False): """ @@ -60,3 +63,245 @@ """ return array(a, dtype, copy=False, order=order, subok=True, maskna=maskna, ownmaskna=ownmaskna) + +def base_repr(number, base=2, padding=0): + """ + Return a string representation of a number in the given base system. + + Parameters + ---------- + number : int + The value to convert. Only positive values are handled. + base : int, optional + Convert `number` to the `base` number system. The valid range is 2-36, + the default value is 2. + padding : int, optional + Number of zeros padded on the left. Default is 0 (no padding). + + Returns + ------- + out : str + String representation of `number` in `base` system. + + See Also + -------- + binary_repr : Faster version of `base_repr` for base 2. + + Examples + -------- + >>> np.base_repr(5) + '101' + >>> np.base_repr(6, 5) + '11' + >>> np.base_repr(7, base=5, padding=3) + '00012' + + >>> np.base_repr(10, base=16) + 'A' + >>> np.base_repr(32, base=16) + '20' + + """ + digits = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' + if base > len(digits): + raise ValueError("Bases greater than 36 not handled in base_repr.") + + num = abs(number) + res = [] + while num: + res.append(digits[num % base]) + num //= base + if padding: + res.append('0' * padding) + if number < 0: + res.append('-') + return ''.join(reversed(res or '0')) + +_typelessdata = [int_, float_]#, complex_] +# XXX +#if issubclass(intc, int): +# _typelessdata.append(intc) + +#if issubclass(longlong, int): +# _typelessdata.append(longlong) + +def array_repr(arr, max_line_width=None, precision=None, suppress_small=None): + """ + Return the string representation of an array. + + Parameters + ---------- + arr : ndarray + Input array. + max_line_width : int, optional + The maximum number of columns the string should span. Newline + characters split the string appropriately after array elements. + precision : int, optional + Floating point precision. 
Default is the current printing precision + (usually 8), which can be altered using `set_printoptions`. + suppress_small : bool, optional + Represent very small numbers as zero, default is False. Very small + is defined by `precision`, if the precision is 8 then + numbers smaller than 5e-9 are represented as zero. + + Returns + ------- + string : str + The string representation of an array. + + See Also + -------- + array_str, array2string, set_printoptions + + Examples + -------- + >>> np.array_repr(np.array([1,2])) + 'array([1, 2])' + >>> np.array_repr(np.ma.array([0.])) + 'MaskedArray([ 0.])' + >>> np.array_repr(np.array([], np.int32)) + 'array([], dtype=int32)' + + >>> x = np.array([1e-6, 4e-7, 2, 3]) + >>> np.array_repr(x, precision=6, suppress_small=True) + 'array([ 0.000001, 0. , 2. , 3. ])' + + """ + if arr.size > 0 or arr.shape==(0,): + lst = array2string(arr, max_line_width, precision, suppress_small, + ', ', "array(") + else: # show zero-length shape unless it is (0,) + lst = "[], shape=%s" % (repr(arr.shape),) + + if arr.__class__ is not ndarray: + cName= arr.__class__.__name__ + else: + cName = "array" + + skipdtype = (arr.dtype.type in _typelessdata) and arr.size > 0 + + if arr.flags.maskna: + whichna = isna(arr) + # If nothing is NA, explicitly signal the NA-mask + if not any(whichna): + lst += ", maskna=True" + # If everything is NA, can't skip the dtype + if skipdtype and all(whichna): + skipdtype = False + + if skipdtype: + return "%s(%s)" % (cName, lst) + else: + typename = arr.dtype.name + # Quote typename in the output if it is "complex". 
+ if typename and not (typename[0].isalpha() and typename.isalnum()): + typename = "'%s'" % typename + + lf = '' + if 0 or issubclass(arr.dtype.type, flexible): + if arr.dtype.names: + typename = "%s" % str(arr.dtype) + else: + typename = "'%s'" % str(arr.dtype) + lf = '\n'+' '*len("array(") + return cName + "(%s, %sdtype=%s)" % (lst, lf, typename) + +def array_str(a, max_line_width=None, precision=None, suppress_small=None): + """ + Return a string representation of the data in an array. + + The data in the array is returned as a single string. This function is + similar to `array_repr`, the difference being that `array_repr` also + returns information on the kind of array and its data type. + + Parameters + ---------- + a : ndarray + Input array. + max_line_width : int, optional + Inserts newlines if text is longer than `max_line_width`. The + default is, indirectly, 75. + precision : int, optional + Floating point precision. Default is the current printing precision + (usually 8), which can be altered using `set_printoptions`. + suppress_small : bool, optional + Represent numbers "very close" to zero as zero; default is False. + Very close is defined by precision: if the precision is 8, e.g., + numbers smaller (in absolute value) than 5e-9 are represented as + zero. + + See Also + -------- + array2string, array_repr, set_printoptions + + Examples + -------- + >>> np.array_str(np.arange(3)) + '[0 1 2]' + + """ + return array2string(a, max_line_width, precision, suppress_small, ' ', "", str) + +def set_string_function(f, repr=True): + """ + Set a Python function to be used when pretty printing arrays. + + Parameters + ---------- + f : function or None + Function to be used to pretty print arrays. The function should expect + a single array argument and return a string of the representation of + the array. If None, the function is reset to the default NumPy function + to print arrays. 
+ repr : bool, optional + If True (default), the function for pretty printing (``__repr__``) + is set, if False the function that returns the default string + representation (``__str__``) is set. + + See Also + -------- + set_printoptions, get_printoptions + + Examples + -------- + >>> def pprint(arr): + ... return 'HA! - What are you going to do now?' + ... + >>> np.set_string_function(pprint) + >>> a = np.arange(10) + >>> a + HA! - What are you going to do now? + >>> print a + [0 1 2 3 4 5 6 7 8 9] + + We can reset the function to the default: + + >>> np.set_string_function(None) + >>> a + array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) + + `repr` affects either pretty printing or normal string representation. + Note that ``__repr__`` is still affected by setting ``__str__`` + because the width of each array element in the returned string becomes + equal to the length of the result of ``__str__()``. + + >>> x = np.arange(4) + >>> np.set_string_function(lambda x:'random', repr=False) + >>> x.__str__() + 'random' + >>> x.__repr__() + 'array([ 0, 1, 2, 3])' + + """ + if f is None: + if repr: + return multiarray.set_string_function(array_repr, 1) + else: + return multiarray.set_string_function(array_str, 0) + else: + return multiarray.set_string_function(f, repr) + +set_string_function(array_str, 0) +set_string_function(array_repr, 1) + +little_endian = (sys.byteorder == 'little') diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -28,6 +28,8 @@ 'fromstring': 'interp_support.fromstring', 'flatiter': 'interp_numarray.W_FlatIterator', + 'set_string_function': 'appbridge.set_string_function', + 'count_reduce_items': 'interp_numarray.count_reduce_items', 'True_': 'types.Bool.True', diff --git a/pypy/module/micronumpy/appbridge.py b/pypy/module/micronumpy/appbridge.py --- a/pypy/module/micronumpy/appbridge.py +++ b/pypy/module/micronumpy/appbridge.py @@ -5,6 +5,8 
@@ w__var = None w__std = None w_module = None + w_array_repr = None + w_array_str = None def __init__(self, space): self.w_import = space.appexec([], """(): @@ -24,5 +26,12 @@ setattr(self, 'w_' + name, w_meth) return space.call_function(w_meth, *args) +def set_string_function(space, w_f, w_repr): + cache = get_appbridge_cache(space) + if space.is_true(w_repr): + cache.w_array_repr = w_f + else: + cache.w_array_str = w_f + def get_appbridge_cache(space): return space.fromcache(AppBridgeCache) diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -303,6 +303,12 @@ return space.wrap(res.build()) def descr_str(self, space): + cache = get_appbridge_cache(space) + if cache.w_array_str is None: + raise OperationError(space.w_RuntimeError, space.wrap( + "str function not set")) + return space.call_function(cache.w_array_str, self) + ret = StringBuilder() concrete = self.get_concrete_or_scalar() concrete.to_str(space, 0, ret, ' ') diff --git a/pypy/module/test_lib_pypy/numpypy/core/test_numeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_numeric.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/numpypy/core/test_numeric.py @@ -0,0 +1,21 @@ + +from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest + + +class AppTestBaseRepr(BaseNumpyAppTest): + def test_base3(self): + from numpypy import base_repr + assert base_repr(3**5, 3) == '100000' + + def test_positive(self): + from numpypy import base_repr + assert base_repr(12, 10) == '12' + assert base_repr(12, 10, 4) == '000012' + assert base_repr(12, 4) == '30' + assert base_repr(3731624803700888, 36) == '10QR0ROFCEW' + + def test_negative(self): + from numpypy import base_repr + assert base_repr(-12, 10) == '-12' + assert base_repr(-12, 10, 4) == '-000012' + assert base_repr(-12, 4) == '-30' From noreply at buildbot.pypy.org Sat Jan 21 23:11:16 2012 
From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 23:11:16 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: Start working towards ravel and applevel array_string, nothing works! Message-ID: <20120121221116.AE00A82D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51613:eb5dcdf724f3 Date: 2012-01-21 22:44 +0200 http://bitbucket.org/pypy/pypy/changeset/eb5dcdf724f3/ Log: Start working towards ravel and applevel array_string, nothing works! diff --git a/lib_pypy/numpypy/core/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/core/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -1054,8 +1054,9 @@ array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) """ - raise NotImplementedError('Waiting on interp level method') - + if not hasattr(a, 'ravel'): + a = numpypy.array(a) + return a.ravel(order=order) def nonzero(a): """ diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -280,6 +280,11 @@ "len() of unsized object")) def descr_repr(self, space): + cache = get_appbridge_cache(space) + if cache.w_array_repr is None: + return space.wrap(self.dump_data()) + return space.call_function(cache.w_array_repr, self) + res = StringBuilder() res.append("array(") concrete = self.get_concrete_or_scalar() @@ -302,11 +307,26 @@ res.append(")") return space.wrap(res.build()) + def dump_data(self): + concr = self.get_concrete() + i = concr.create_iter() + first = True + s = StringBuilder() + s.append('array([') + while not i.done(): + if first: + first = False + else: + s.append(', ') + s.append(concr.dtype.itemtype.str_format(concr.getitem(i.offset))) + i = i.next(len(concr.shape)) + s.append('])') + return s.build() + def descr_str(self, space): cache = get_appbridge_cache(space) if cache.w_array_str is None: - raise 
OperationError(space.w_RuntimeError, space.wrap( - "str function not set")) + return space.wrap(self.dump_data()) return space.call_function(cache.w_array_str, self) ret = StringBuilder() @@ -546,6 +566,12 @@ return space.wrap(W_NDimSlice(concrete.start, strides, backstrides, shape, concrete)) + def descr_ravel(self, space, w_order=None): + if not space.is_w(w_order, space.w_None): + raise OperationError(space.w_NotImplementedError, space.wrap( + "order not implemented")) + return self.descr_reshape(space, [space.wrap(-1)]) + def descr_get_flatiter(self, space): return space.wrap(W_FlatIterator(self)) @@ -1251,6 +1277,7 @@ T = GetSetProperty(BaseArray.descr_get_transpose), flat = GetSetProperty(BaseArray.descr_get_flatiter), + ravel = interp2app(BaseArray.descr_ravel), mean = interp2app(BaseArray.descr_mean), sum = interp2app(BaseArray.descr_sum), diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -1382,6 +1382,14 @@ assert array(x, copy=False) is x assert array(x, copy=True) is not x + def test_ravel(self): + from _numpypy import arange + assert (arange(3).ravel() == arange(3)).all() + assert (arange(6).reshape(2, 3).ravel() == arange(6)).all() + print arange(6).reshape(2, 3).T.ravel() + assert (arange(6).reshape(2, 3).T.ravel() == [0, 3, 1, 4, 2, 5]).all() + + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): import struct diff --git a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -137,3 +137,6 @@ # x = ones((1, 2, 3)) # assert transpose(x, (1, 0, 2)).shape == (2, 1, 3) + def test_ravel(self): + from numpypy import ravel + diff --git a/pypy/module/test_lib_pypy/numpypy/core/test_numeric.py 
b/pypy/module/test_lib_pypy/numpypy/core/test_numeric.py --- a/pypy/module/test_lib_pypy/numpypy/core/test_numeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_numeric.py @@ -19,3 +19,8 @@ assert base_repr(-12, 10) == '-12' assert base_repr(-12, 10, 4) == '-000012' assert base_repr(-12, 4) == '-30' + +class AppTestRepr(BaseNumpyAppTest): + def test_repr(self): + from numpypy import array + assert repr(array([1, 2, 3, 4])) == 'array([1, 2, 3, 4])' From noreply at buildbot.pypy.org Sat Jan 21 23:11:17 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sat, 21 Jan 2012 23:11:17 +0100 (CET) Subject: [pypy-commit] pypy numpy-back-to-applevel: merge Message-ID: <20120121221117.EB7F982D45@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: numpy-back-to-applevel Changeset: r51614:da00788e9d17 Date: 2012-01-22 00:10 +0200 http://bitbucket.org/pypy/pypy/changeset/da00788e9d17/ Log: merge diff --git a/pypy/annotation/unaryop.py b/pypy/annotation/unaryop.py --- a/pypy/annotation/unaryop.py +++ b/pypy/annotation/unaryop.py @@ -440,6 +440,12 @@ def method_popitem(dct): return dct.getanyitem('items') + def method_pop(dct, s_key, s_dfl=None): + dct.dictdef.generalize_key(s_key) + if s_dfl is not None: + dct.dictdef.generalize_value(s_dfl) + return dct.dictdef.read_value() + def _can_only_throw(dic, *ignore): if dic1.dictdef.dictkey.custom_eq_hash: return None # r_dict: can throw anything diff --git a/pypy/rpython/lltypesystem/rdict.py b/pypy/rpython/lltypesystem/rdict.py --- a/pypy/rpython/lltypesystem/rdict.py +++ b/pypy/rpython/lltypesystem/rdict.py @@ -323,6 +323,16 @@ hop.exception_is_here() return hop.gendirectcall(ll_popitem, cTUPLE, v_dict) + def rtype_method_pop(self, hop): + if hop.nb_args == 2: + v_args = hop.inputargs(self, self.key_repr) + target = ll_pop + elif hop.nb_args == 3: + v_args = hop.inputargs(self, self.key_repr, self.value_repr) + target = ll_pop_default + hop.exception_is_here() + return hop.gendirectcall(target, *v_args) + class 
__extend__(pairtype(DictRepr, rmodel.Repr)): def rtype_getitem((r_dict, r_key), hop): @@ -874,3 +884,18 @@ r.item1 = recast(ELEM.TO.item1, entry.value) _ll_dict_del(dic, i) return r + +def ll_pop(dic, key): + i = ll_dict_lookup(dic, key, dic.keyhash(key)) + if not i & HIGHEST_BIT: + value = ll_get_value(dic, i) + _ll_dict_del(dic, i) + return value + else: + raise KeyError + +def ll_pop_default(dic, key, dfl): + try: + return ll_pop(dic, key) + except KeyError: + return dfl diff --git a/pypy/rpython/ootypesystem/rdict.py b/pypy/rpython/ootypesystem/rdict.py --- a/pypy/rpython/ootypesystem/rdict.py +++ b/pypy/rpython/ootypesystem/rdict.py @@ -160,6 +160,16 @@ hop.exception_is_here() return hop.gendirectcall(ll_popitem, cTUPLE, v_dict) + def rtype_method_pop(self, hop): + if hop.nb_args == 2: + v_args = hop.inputargs(self, self.key_repr) + target = ll_pop + elif hop.nb_args == 3: + v_args = hop.inputargs(self, self.key_repr, self.value_repr) + target = ll_pop_default + hop.exception_is_here() + return hop.gendirectcall(target, *v_args) + def __get_func(self, interp, r_func, fn, TYPE): if isinstance(r_func, MethodOfFrozenPBCRepr): obj = r_func.r_im_self.convert_const(fn.im_self) @@ -370,6 +380,20 @@ return res raise KeyError +def ll_pop(d, key): + if d.ll_contains(key): + value = d.ll_get(key) + d.ll_remove(key) + return value + else: + raise KeyError + +def ll_pop_default(d, key, dfl): + try: + return ll_pop(d, key) + except KeyError: + return dfl + # ____________________________________________________________ # # Iteration. 
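The two helpers added above, `ll_pop` and `ll_pop_default`, give RPython dictionaries the usual `dict.pop` contract: the two-argument form raises `KeyError` for a missing key, while the three-argument form returns the supplied default instead. A plain-Python sketch of that contract (the function names here are illustrative, not the actual RPython entry points):

```python
def pop(d, key):
    # two-argument form: remove the entry and return its value,
    # or raise KeyError if the key is absent
    if key in d:
        value = d[key]
        del d[key]
        return value
    raise KeyError(key)

def pop_default(d, key, default):
    # three-argument form: same, but fall back to the default
    # instead of raising
    try:
        return pop(d, key)
    except KeyError:
        return default

d = {2: 3, 4: 5}
assert pop(d, 2) == 3                 # entry removed, value returned
assert pop_default(d, 2, -1) == -1    # key already gone: default wins
assert len(d) == 1
```

The `test_dict_pop` test in the same changeset exercises exactly these two paths: `d.pop(n)` inside a `try`/`except KeyError`, and `d.pop(n, default)`.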
diff --git a/pypy/rpython/test/test_rdict.py b/pypy/rpython/test/test_rdict.py --- a/pypy/rpython/test/test_rdict.py +++ b/pypy/rpython/test/test_rdict.py @@ -622,6 +622,26 @@ res = self.interpret(func, []) assert res in [5263, 6352] + def test_dict_pop(self): + def f(n, default): + d = {} + d[2] = 3 + d[4] = 5 + if default == -1: + try: + x = d.pop(n) + except KeyError: + x = -1 + else: + x = d.pop(n, default) + return x * 10 + len(d) + res = self.interpret(f, [2, -1]) + assert res == 31 + res = self.interpret(f, [3, -1]) + assert res == -8 + res = self.interpret(f, [2, 5]) + assert res == 31 + class TestLLtype(BaseTestRdict, LLRtypeMixin): def test_dict_but_not_with_char_keys(self): def func(i): From noreply at buildbot.pypy.org Sat Jan 21 23:17:05 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sat, 21 Jan 2012 23:17:05 +0100 (CET) Subject: [pypy-commit] extradoc extradoc: english and a conclusion Message-ID: <20120121221705.B951B82D45@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: extradoc Changeset: r4050:1b7919a194e1 Date: 2012-01-21 16:16 -0600 http://bitbucket.org/pypy/extradoc/changeset/1b7919a194e1/ Log: english and a conclusion diff --git a/blog/draft/pypy-2011.rst b/blog/draft/pypy-2011.rst --- a/blog/draft/pypy-2011.rst +++ b/blog/draft/pypy-2011.rst @@ -3,35 +3,44 @@ Hello everyone. -PyPy development is fast, sometimes very fast. We try to do 3-4 releases a year -and yet a lot of time when someone complains about the last release being slow -we usually say "oh, but it's the release, that was AGES ago". That makes -something as big as summarizing a year next to impossible, but using my -external IQ amplifiers like internet and hg log, I will try to provide you -with some semi-serious statistics: +PyPy's development is fast, sometimes very fast. We try to do 3-4 releases a +year and still many times when someone complains about the latest release being +slow we usually say "oh, that release was AGES ago". 
This makes something as +big as summarizing a year next to impossible, but using my external IQ +amplifiers, like internet and hg log, I will try to provide you with some +semi-serious statistics: -* We made 3 releases last year, boringly, 1.5, 1.6 and 1.7 +* We did 3 releases last year, boringly named, 1.5, 1.6 and 1.7 -* Number of visitors on pypy.org grew from about 200 to 800 per day (averaged - weekly) +* The number of visitors on pypy.org grew from about 200 to 800 per day + (averaged weekly) * We spoke at 15 conferences/meetups + (XXX: can we put a list in a comment to be sure it's correct) -* We made 10660 commits, or 29 per day (mostly typos ;-) +* We made 10660 commits, or 29 per day (mostly typos ;-)) * We published 43 blog posts, keeping you entertained almost weekly -* We made PyPy over 2x faster (on a select set of benchmarks), +* We made PyPy over 2x faster (on a select set of benchmarks), see the `1.4 and nightly`_ comparison -* We got 71 new people which did at least one commit in the PyPy repository +* 71 brand new people contributed at least one commit to the PyPy repository. (XXX: this is what I got by doing wc on contributors.txt at 2011-01-01 and the current contributors.rst, but we still need to update the latter) -* We made PyPy 17x more compatible (of course this is unmeasurable, so why - not just claim it) +* We made PyPy 17x more compatible (of course this is not measurable, so we're + going to claim it!) -* We probably have [XXX insert unicode for infinity] infinitely many times more users now, since noone used - 1.4 in production +* We have [XXX insert unicode for infinity] infinitely many times more users + now, since no one used 1.4 in production + +2011 was a very exciting year for us. But we're pretty sure 2012 is going to be +even more exciting! So watch this space for more exciting news, try PyPy out on +your projects, and, as always, we invite you to join us and contribute to PyPy +in any way you can. 
+ +Signed, +The PyPy Developers .. _`1.4 and nightly` http://speed.pypy.org/comparison/?exe=1%2B172%2C1%2BL%2Bdefault&ben=1%2C34%2C27%2C2%2C25%2C3%2C4%2C5%2C22%2C6%2C39%2C7%2C8%2C23%2C24%2C9%2C10%2C11%2C12%2C13%2C14%2C15%2C35%2C36%2C37%2C38%2C16%2C28%2C30%2C32%2C29%2C33%2C17%2C18%2C19%2C20&env=1%2C2&hor=true&bas=1%2B172&chart=normal+bars From noreply at buildbot.pypy.org Sat Jan 21 23:27:10 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 21 Jan 2012 23:27:10 +0100 (CET) Subject: [pypy-commit] pypy default: Cosmetic changes in datetime.py, to reduce differences with the version in CPython 3.2 Message-ID: <20120121222710.91D3E82D45@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r51615:c5fa49a9207e Date: 2012-01-18 22:38 +0100 http://bitbucket.org/pypy/pypy/changeset/c5fa49a9207e/ Log: Cosmetic changes in datetime.py, to reduce differences with the version in CPython 3.2 diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -13,7 +13,7 @@ Sources for time zone and DST data: http://www.twinsun.com/tz/tz-link.htm This was originally copied from the sandbox of the CPython CVS repository. -Thanks to Tim Peters for suggesting using it. +Thanks to Tim Peters for suggesting using it. 
""" import time as _time @@ -271,6 +271,8 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): + if not isinstance(year, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -280,6 +282,8 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): + if not isinstance(hour, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: @@ -575,15 +579,26 @@ s = s + ".%06d" % self.__microseconds return s - days = property(lambda self: self.__days, doc="days") - seconds = property(lambda self: self.__seconds, doc="seconds") - microseconds = property(lambda self: self.__microseconds, - doc="microseconds") - def total_seconds(self): return ((self.days * 86400 + self.seconds) * 10**6 + self.microseconds) / 1e6 + # Read-only field accessors + @property + def days(self): + """days""" + return self.__days + + @property + def seconds(self): + """seconds""" + return self.__seconds + + @property + def microseconds(self): + """microseconds""" + return self.__microseconds + def __add__(self, other): if isinstance(other, timedelta): # for CPython compatibility, we cannot use @@ -756,18 +771,19 @@ # Additional constructors + @classmethod def fromtimestamp(cls, t): "Construct a date from a POSIX timestamp (like time.time())." y, m, d, hh, mm, ss, weekday, jday, dst = _time.localtime(t) return cls(y, m, d) - fromtimestamp = classmethod(fromtimestamp) + @classmethod def today(cls): "Construct a date from time.time()." t = _time.time() return cls.fromtimestamp(t) - today = classmethod(today) + @classmethod def fromordinal(cls, n): """Contruct a date from a proleptic Gregorian ordinal. 
@@ -776,7 +792,6 @@ """ y, m, d = _ord2ymd(n) return cls(y, m, d) - fromordinal = classmethod(fromordinal) # Conversions to string @@ -799,6 +814,14 @@ "Format using strftime()." return _wrap_strftime(self, fmt, self.timetuple()) + def __format__(self, format): + if not isinstance(format, (str, unicode)): + raise ValueError("__format__ excepts str or unicode, not %s" % + format.__class__.__name__) + if not format: + return str(self) + return self.strftime(format) + def isoformat(self): """Return the date formatted according to ISO. @@ -812,19 +835,21 @@ __str__ = isoformat - def __format__(self, format): - if not isinstance(format, (str, unicode)): - raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) + # Read-only field accessors + @property + def year(self): + """year (1-9999)""" + return self.__year - # Read-only field accessors - year = property(lambda self: self.__year, - doc="year (%d-%d)" % (MINYEAR, MAXYEAR)) - month = property(lambda self: self.__month, doc="month (1-12)") - day = property(lambda self: self.__day, doc="day (1-31)") + @property + def month(self): + """month (1-12)""" + return self.__month + + @property + def day(self): + """day (1-31)""" + return self.__day # Standard conversions, __cmp__, __hash__ (and helpers) @@ -852,7 +877,7 @@ _check_date_fields(year, month, day) return date(year, month, day) - # Comparisons. + # Comparisons of date objects with other. 
def __eq__(self, other): if isinstance(other, date): @@ -1126,16 +1151,34 @@ return self # Read-only field accessors - hour = property(lambda self: self.__hour, doc="hour (0-23)") - minute = property(lambda self: self.__minute, doc="minute (0-59)") - second = property(lambda self: self.__second, doc="second (0-59)") - microsecond = property(lambda self: self.__microsecond, - doc="microsecond (0-999999)") - tzinfo = property(lambda self: self._tzinfo, doc="timezone info object") + @property + def hour(self): + """hour (0-23)""" + return self.__hour + + @property + def minute(self): + """minute (0-59)""" + return self.__minute + + @property + def second(self): + """second (0-59)""" + return self.__second + + @property + def microsecond(self): + """microsecond (0-999999)""" + return self.__microsecond + + @property + def tzinfo(self): + """timezone info object""" + return self._tzinfo # Standard conversions, __hash__ (and helpers) - # Comparisons. + # Comparisons of time objects with other. def __eq__(self, other): if isinstance(other, time): @@ -1255,14 +1298,6 @@ __str__ = isoformat - def __format__(self, format): - if not isinstance(format, (str, unicode)): - raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) - def strftime(self, fmt): """Format using strftime(). The date part of the timestamp passed to underlying strftime should not be used. 
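The accessor rewrite in this hunk is purely mechanical: a read-only `property` built from a lambda, with its docstring passed via the `doc` argument, becomes the `@property` decorator form, where the getter's own docstring doubles as the property's. Both spellings create equivalent descriptors; a minimal sketch using a made-up `Clock` class:

```python
class ClockOld(object):
    def __init__(self, hour):
        self._hour = hour
    # old spelling: property from a lambda, docstring via doc=
    hour = property(lambda self: self._hour, doc="hour (0-23)")

class ClockNew(object):
    def __init__(self, hour):
        self._hour = hour
    # new spelling: decorator form, docstring on the getter itself
    @property
    def hour(self):
        """hour (0-23)"""
        return self._hour

assert ClockOld(7).hour == ClockNew(7).hour == 7
assert ClockOld.hour.__doc__ == ClockNew.hour.__doc__ == "hour (0-23)"
```

In either version assigning to `hour` raises `AttributeError`, since no setter is defined; the refactoring changes presentation only, not behavior.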
@@ -1274,6 +1309,14 @@ 0, 1, -1) return _wrap_strftime(self, fmt, timetuple) + def __format__(self, format): + if not isinstance(format, (str, unicode)): + raise ValueError("__format__ excepts str or unicode, not %s" % + format.__class__.__name__) + if not format: + return str(self) + return self.strftime(format) + # Timezone functions def utcoffset(self): @@ -1378,9 +1421,11 @@ time.resolution = timedelta(microseconds=1) class datetime(date): + """datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]]) - # XXX needs docstrings - # See http://www.zope.org/Members/fdrake/DateTimeWiki/TimeZoneInfo + The year, month and day arguments are required. tzinfo may be None, or an + instance of a tzinfo subclass. The remaining arguments may be ints or longs. + """ def __new__(cls, year, month=None, day=None, hour=0, minute=0, second=0, microsecond=0, tzinfo=None): @@ -1404,13 +1449,32 @@ return self # Read-only field accessors - hour = property(lambda self: self.__hour, doc="hour (0-23)") - minute = property(lambda self: self.__minute, doc="minute (0-59)") - second = property(lambda self: self.__second, doc="second (0-59)") - microsecond = property(lambda self: self.__microsecond, - doc="microsecond (0-999999)") - tzinfo = property(lambda self: self._tzinfo, doc="timezone info object") + @property + def hour(self): + """hour (0-23)""" + return self.__hour + @property + def minute(self): + """minute (0-59)""" + return self.__minute + + @property + def second(self): + """second (0-59)""" + return self.__second + + @property + def microsecond(self): + """microsecond (0-999999)""" + return self.__microsecond + + @property + def tzinfo(self): + """timezone info object""" + return self._tzinfo + + @classmethod def fromtimestamp(cls, t, tz=None): """Construct a datetime from a POSIX timestamp (like time.time()). 
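The constructors in this hunk trade the pre-decorator idiom `fromtimestamp = classmethod(fromtimestamp)` for the `@classmethod` decorator syntax available since Python 2.4. The two spellings build the same descriptor; a toy sketch with an illustrative `Point` class:

```python
class PointOld(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def origin(cls):
        return cls(0, 0)
    origin = classmethod(origin)   # pre-2.4 spelling

class PointNew(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
    @classmethod                   # decorator spelling, same descriptor
    def origin(cls):
        return cls(0, 0)

assert isinstance(PointOld.origin(), PointOld)
assert isinstance(PointNew.origin(), PointNew)

class Point3(PointNew):
    pass

# cls binds to the subclass, so alternate constructors stay polymorphic
assert isinstance(Point3.origin(), Point3)
```

That subclass-binding behavior is why `datetime.fromtimestamp` and friends are classmethods rather than module-level functions in the first place.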
@@ -1438,8 +1502,8 @@ if tz is not None: result = tz.fromutc(result) return result - fromtimestamp = classmethod(fromtimestamp) + @classmethod def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." if 1 - (t % 1.0) < 0.0000005: @@ -1450,25 +1514,25 @@ us = int((t % 1.0) * 1000000) ss = min(ss, 59) # clamp out leap seconds if the platform has them return cls(y, m, d, hh, mm, ss, us) - utcfromtimestamp = classmethod(utcfromtimestamp) # XXX This is supposed to do better than we *can* do by using time.time(), # XXX if the platform supports a more accurate way. The C implementation # XXX uses gettimeofday on platforms that have it, but that isn't # XXX available from Python. So now() may return different results # XXX across the implementations. + @classmethod def now(cls, tz=None): "Construct a datetime from time.time() and optional time zone info." t = _time.time() return cls.fromtimestamp(t, tz) - now = classmethod(now) + @classmethod def utcnow(cls): "Construct a UTC datetime from time.time()." t = _time.time() return cls.utcfromtimestamp(t) - utcnow = classmethod(utcnow) + @classmethod def combine(cls, date, time): "Construct a datetime from a given date and a given time." if not isinstance(date, _date_class): @@ -1478,7 +1542,6 @@ return cls(date.year, date.month, date.day, time.hour, time.minute, time.second, time.microsecond, time.tzinfo) - combine = classmethod(combine) def timetuple(self): "Return local time tuple compatible with time.localtime()." @@ -1596,13 +1659,13 @@ return s def __repr__(self): - "Convert to formal string, for repr()." + """Convert to formal string, for repr().""" L = [self.__year, self.__month, self.__day, # These are never zero self.__hour, self.__minute, self.__second, self.__microsecond] if L[-1] == 0: del L[-1] if L[-1] == 0: - del L[-1] + del L[-1] s = ", ".join(map(str, L)) s = "%s(%s)" % ('datetime.' 
+ self.__class__.__name__, s) if self._tzinfo is not None: @@ -2009,7 +2072,7 @@ Because we know z.d said z was in daylight time (else [5] would have held and we would have stopped then), and we know z.d != z'.d (else [8] would have held -and we we have stopped then), and there are only 2 possible values dst() can +and we have stopped then), and there are only 2 possible values dst() can return in Eastern, it follows that z'.d must be 0 (which it is in the example, but the reasoning doesn't depend on the example -- it depends on there being two possible dst() outcomes, one zero and the other non-zero). Therefore From noreply at buildbot.pypy.org Sat Jan 21 23:27:11 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 21 Jan 2012 23:27:11 +0100 (CET) Subject: [pypy-commit] pypy default: Slowly merge datetime.py from upstream: don't use "private" attributes. Message-ID: <20120121222711.DBB0A82D45@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r51616:5aa09e8483d3 Date: 2012-01-18 23:04 +0100 http://bitbucket.org/pypy/pypy/changeset/5aa09e8483d3/ Log: Slowly merge datetime.py from upstream: don't use "private" attributes. diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -547,36 +547,36 @@ self = object.__new__(cls) - self.__days = d - self.__seconds = s - self.__microseconds = us + self._days = d + self._seconds = s + self._microseconds = us if abs(d) > 999999999: raise OverflowError("timedelta # of days is too large: %d" % d) return self def __repr__(self): - if self.__microseconds: + if self._microseconds: return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__, - self.__days, - self.__seconds, - self.__microseconds) - if self.__seconds: + self._days, + self._seconds, + self._microseconds) + if self._seconds: return "%s(%d, %d)" % ('datetime.' + self.__class__.__name__, - self.__days, - self.__seconds) - return "%s(%d)" % ('datetime.' 
+ self.__class__.__name__, self.__days) + self._days, + self._seconds) + return "%s(%d)" % ('datetime.' + self.__class__.__name__, self._days) def __str__(self): - mm, ss = divmod(self.__seconds, 60) + mm, ss = divmod(self._seconds, 60) hh, mm = divmod(mm, 60) s = "%d:%02d:%02d" % (hh, mm, ss) - if self.__days: + if self._days: def plural(n): return n, abs(n) != 1 and "s" or "" - s = ("%d day%s, " % plural(self.__days)) + s - if self.__microseconds: - s = s + ".%06d" % self.__microseconds + s = ("%d day%s, " % plural(self._days)) + s + if self._microseconds: + s = s + ".%06d" % self._microseconds return s def total_seconds(self): @@ -587,32 +587,36 @@ @property def days(self): """days""" - return self.__days + return self._days @property def seconds(self): """seconds""" - return self.__seconds + return self._seconds @property def microseconds(self): """microseconds""" - return self.__microseconds + return self._microseconds def __add__(self, other): if isinstance(other, timedelta): # for CPython compatibility, we cannot use # our __class__ here, but need a real timedelta - return timedelta(self.__days + other.__days, - self.__seconds + other.__seconds, - self.__microseconds + other.__microseconds) + return timedelta(self._days + other._days, + self._seconds + other._seconds, + self._microseconds + other._microseconds) return NotImplemented __radd__ = __add__ def __sub__(self, other): if isinstance(other, timedelta): - return self + -other + # for CPython compatibility, we cannot use + # our __class__ here, but need a real timedelta + return timedelta(self._days - other._days, + self._seconds - other._seconds, + self._microseconds - other._microseconds) return NotImplemented def __rsub__(self, other): @@ -621,17 +625,17 @@ return NotImplemented def __neg__(self): - # for CPython compatibility, we cannot use - # our __class__ here, but need a real timedelta - return timedelta(-self.__days, - -self.__seconds, - -self.__microseconds) + # for CPython compatibility, we 
cannot use + # our __class__ here, but need a real timedelta + return timedelta(-self._days, + -self._seconds, + -self._microseconds) def __pos__(self): return self def __abs__(self): - if self.__days < 0: + if self._days < 0: return -self else: return self @@ -640,81 +644,81 @@ if isinstance(other, (int, long)): # for CPython compatibility, we cannot use # our __class__ here, but need a real timedelta - return timedelta(self.__days * other, - self.__seconds * other, - self.__microseconds * other) + return timedelta(self._days * other, + self._seconds * other, + self._microseconds * other) return NotImplemented __rmul__ = __mul__ def __div__(self, other): if isinstance(other, (int, long)): - usec = ((self.__days * (24*3600L) + self.__seconds) * 1000000 + - self.__microseconds) + usec = ((self._days * (24*3600L) + self._seconds) * 1000000 + + self._microseconds) return timedelta(0, 0, usec // other) return NotImplemented __floordiv__ = __div__ - # Comparisons. + # Comparisons of timedelta objects with other. 
def __eq__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 else: return False def __ne__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 else: return True def __le__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, timedelta) - return cmp(self.__getstate(), other.__getstate()) + return cmp(self._getstate(), other._getstate()) def __hash__(self): - return hash(self.__getstate()) + return hash(self._getstate()) def __nonzero__(self): - return (self.__days != 0 or - self.__seconds != 0 or - self.__microseconds != 0) + return (self._days != 0 or + self._seconds != 0 or + self._microseconds != 0) # Pickle support. 
__safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - return (self.__days, self.__seconds, self.__microseconds) + def _getstate(self): + return (self._days, self._seconds, self._microseconds) def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) timedelta.min = timedelta(-999999999) timedelta.max = timedelta(days=999999999, hours=23, minutes=59, seconds=59, @@ -764,9 +768,9 @@ return self _check_date_fields(year, month, day) self = object.__new__(cls) - self.__year = year - self.__month = month - self.__day = day + self._year = year + self._month = month + self._day = day return self # Additional constructors @@ -796,11 +800,20 @@ # Conversions to string def __repr__(self): - "Convert to formal string, for repr()." + """Convert to formal string, for repr(). + + >>> dt = datetime(2010, 1, 1) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0)' + + >>> dt = datetime(2010, 1, 1, tzinfo=timezone.utc) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)' + """ return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__, - self.__year, - self.__month, - self.__day) + self._year, + self._month, + self._day) # XXX These shouldn't depend on time.localtime(), because that # clips the usable dates to [1970 .. 2038). At least ctime() is # easily done without using strftime() -- that's better too because @@ -808,19 +821,19 @@ def ctime(self): "Format a la ctime()." - return tmxxx(self.__year, self.__month, self.__day).ctime() + return tmxxx(self._year, self._month, self._day).ctime() def strftime(self, fmt): "Format using strftime()." 
return _wrap_strftime(self, fmt, self.timetuple()) - def __format__(self, format): - if not isinstance(format, (str, unicode)): + def __format__(self, fmt): + if not isinstance(fmt, (str, unicode)): raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) + fmt.__class__.__name__) + if len(fmt) != 0: + return self.strftime(fmt) + return str(self) def isoformat(self): """Return the date formatted according to ISO. @@ -831,7 +844,7 @@ - http://www.w3.org/TR/NOTE-datetime - http://www.cl.cam.ac.uk/~mgk25/iso-time.html """ - return "%04d-%02d-%02d" % (self.__year, self.__month, self.__day) + return "%04d-%02d-%02d" % (self._year, self._month, self._day) __str__ = isoformat @@ -839,23 +852,23 @@ @property def year(self): """year (1-9999)""" - return self.__year + return self._year @property def month(self): """month (1-12)""" - return self.__month + return self._month @property def day(self): """day (1-31)""" - return self.__day + return self._day # Standard conversions, __cmp__, __hash__ (and helpers) def timetuple(self): "Return local time tuple compatible with time.localtime()." - return _build_struct_time(self.__year, self.__month, self.__day, + return _build_struct_time(self._year, self._month, self._day, 0, 0, 0, -1) def toordinal(self): @@ -864,16 +877,16 @@ January 1 of year 1 is day 1. Only the year, month and day values contribute to the result. 
""" - return _ymd2ord(self.__year, self.__month, self.__day) + return _ymd2ord(self._year, self._month, self._day) def replace(self, year=None, month=None, day=None): """Return a new date with new values for the specified fields.""" if year is None: - year = self.__year + year = self._year if month is None: - month = self.__month + month = self._month if day is None: - day = self.__day + day = self._day _check_date_fields(year, month, day) return date(year, month, day) @@ -881,7 +894,7 @@ def __eq__(self, other): if isinstance(other, date): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -889,7 +902,7 @@ def __ne__(self, other): if isinstance(other, date): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -897,7 +910,7 @@ def __le__(self, other): if isinstance(other, date): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -905,7 +918,7 @@ def __lt__(self, other): if isinstance(other, date): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -913,7 +926,7 @@ def __ge__(self, other): if isinstance(other, date): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -921,21 +934,21 @@ def __gt__(self, other): if isinstance(other, date): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 elif hasattr(other, "timetuple"): return NotImplemented else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, date) - y, m, d = self.__year, self.__month, self.__day - y2, m2, d2 = other.__year, other.__month, other.__day + y, m, d = self._year, self._month, self._day + y2, m2, d2 = other._year, other._month, other._day return cmp((y, m, d), (y2, 
m2, d2)) def __hash__(self): "Hash." - return hash(self.__getstate()) + return hash(self._getstate()) # Computations @@ -947,9 +960,9 @@ def __add__(self, other): "Add a date to a timedelta." if isinstance(other, timedelta): - t = tmxxx(self.__year, - self.__month, - self.__day + other.days) + t = tmxxx(self._year, + self._month, + self._day + other.days) self._checkOverflow(t.year) result = date(t.year, t.month, t.day) return result @@ -991,9 +1004,9 @@ ISO calendar algorithm taken from http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm """ - year = self.__year + year = self._year week1monday = _isoweek1monday(year) - today = _ymd2ord(self.__year, self.__month, self.__day) + today = _ymd2ord(self._year, self._month, self._day) # Internally, week and day have origin 0 week, day = divmod(today - week1monday, 7) if week < 0: @@ -1010,18 +1023,18 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - yhi, ylo = divmod(self.__year, 256) - return ("%c%c%c%c" % (yhi, ylo, self.__month, self.__day), ) + def _getstate(self): + yhi, ylo = divmod(self._year, 256) + return ("%c%c%c%c" % (yhi, ylo, self._month, self._day), ) def __setstate(self, string): if len(string) != 4 or not (1 <= ord(string[2]) <= 12): raise TypeError("not enough arguments") - yhi, ylo, self.__month, self.__day = map(ord, string) - self.__year = yhi * 256 + ylo + yhi, ylo, self._month, self._day = map(ord, string) + self._year = yhi * 256 + ylo def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) _date_class = date # so functions w/ args named "date" can get at the class @@ -1143,10 +1156,10 @@ return self _check_tzinfo_arg(tzinfo) _check_time_fields(hour, minute, second, microsecond) - self.__hour = hour - self.__minute = minute - self.__second = second - self.__microsecond = microsecond + self._hour = hour + self._minute = minute + self._second = second + self._microsecond = microsecond self._tzinfo = tzinfo return self 
@@ -1154,22 +1167,22 @@ @property def hour(self): """hour (0-23)""" - return self.__hour + return self._hour @property def minute(self): """minute (0-59)""" - return self.__minute + return self._minute @property def second(self): """second (0-59)""" - return self.__second + return self._second @property def microsecond(self): """microsecond (0-999999)""" - return self.__microsecond + return self._microsecond @property def tzinfo(self): @@ -1182,41 +1195,41 @@ def __eq__(self, other): if isinstance(other, time): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 else: return False def __ne__(self, other): if isinstance(other, time): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 else: return True def __le__(self, other): if isinstance(other, time): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, time): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, time): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, time): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, time) mytz = self._tzinfo ottz = other._tzinfo @@ -1230,23 +1243,23 @@ base_compare = myoff == otoff if base_compare: - return cmp((self.__hour, self.__minute, self.__second, - self.__microsecond), - (other.__hour, other.__minute, other.__second, - other.__microsecond)) + return cmp((self._hour, self._minute, self._second, + self._microsecond), + (other._hour, other._minute, other._second, + other._microsecond)) if myoff is None or otoff is None: # XXX Buggy in 2.2.2. 
raise TypeError("cannot compare naive and aware times") - myhhmm = self.__hour * 60 + self.__minute - myoff - othhmm = other.__hour * 60 + other.__minute - otoff - return cmp((myhhmm, self.__second, self.__microsecond), - (othhmm, other.__second, other.__microsecond)) + myhhmm = self._hour * 60 + self._minute - myoff + othhmm = other._hour * 60 + other._minute - otoff + return cmp((myhhmm, self._second, self._microsecond), + (othhmm, other._second, other._microsecond)) def __hash__(self): """Hash.""" tzoff = self._utcoffset() if not tzoff: # zero or None - return hash(self.__getstate()[0]) + return hash(self._getstate()[0]) h, m = divmod(self.hour * 60 + self.minute - tzoff, 60) if 0 <= h < 24: return hash(time(h, m, self.second, self.microsecond)) @@ -1270,14 +1283,14 @@ def __repr__(self): """Convert to formal string, for repr().""" - if self.__microsecond != 0: - s = ", %d, %d" % (self.__second, self.__microsecond) - elif self.__second != 0: - s = ", %d" % self.__second + if self._microsecond != 0: + s = ", %d, %d" % (self._second, self._microsecond) + elif self._second != 0: + s = ", %d" % self._second else: s = "" s= "%s(%d, %d%s)" % ('datetime.' + self.__class__.__name__, - self.__hour, self.__minute, s) + self._hour, self._minute, s) if self._tzinfo is not None: assert s[-1:] == ")" s = s[:-1] + ", tzinfo=%r" % self._tzinfo + ")" @@ -1289,8 +1302,8 @@ This is 'HH:MM:SS.mmmmmm+zz:zz', or 'HH:MM:SS+zz:zz' if self.microsecond == 0. """ - s = _format_time(self.__hour, self.__minute, self.__second, - self.__microsecond) + s = _format_time(self._hour, self._minute, self._second, + self._microsecond) tz = self._tzstr() if tz: s += tz @@ -1305,17 +1318,17 @@ # The year must be >= 1900 else Python's strftime implementation # can raise a bogus exception. 
timetuple = (1900, 1, 1, - self.__hour, self.__minute, self.__second, + self._hour, self._minute, self._second, 0, 1, -1) return _wrap_strftime(self, fmt, timetuple) - def __format__(self, format): - if not isinstance(format, (str, unicode)): + def __format__(self, fmt): + if not isinstance(fmt, (str, unicode)): raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) + fmt.__class__.__name__) + if len(fmt) != 0: + return self.strftime(fmt) + return str(self) # Timezone functions @@ -1393,10 +1406,10 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - us2, us3 = divmod(self.__microsecond, 256) + def _getstate(self): + us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) - basestate = ("%c" * 6) % (self.__hour, self.__minute, self.__second, + basestate = ("%c" * 6) % (self._hour, self._minute, self._second, us1, us2, us3) if self._tzinfo is None: return (basestate,) @@ -1406,13 +1419,13 @@ def __setstate(self, string, tzinfo): if len(string) != 6 or ord(string[0]) >= 24: raise TypeError("an integer is required") - self.__hour, self.__minute, self.__second, us1, us2, us3 = \ + self._hour, self._minute, self._second, us1, us2, us3 = \ map(ord, string) - self.__microsecond = (((us1 << 8) | us2) << 8) | us3 + self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce__(self): - return (time, self.__getstate()) + return (time, self._getstate()) _time_class = time # so functions w/ args named "time" can get at the class @@ -1438,13 +1451,13 @@ _check_time_fields(hour, minute, second, microsecond) self = date.__new__(cls, year, month, day) # XXX This duplicates __year, __month, __day for convenience :-( - self.__year = year - self.__month = month - self.__day = day - self.__hour = hour - self.__minute = minute - self.__second = second - self.__microsecond = microsecond + self._year = year + self._month = 
month + self._day = day + self._hour = hour + self._minute = minute + self._second = second + self._microsecond = microsecond self._tzinfo = tzinfo return self @@ -1452,22 +1465,22 @@ @property def hour(self): """hour (0-23)""" - return self.__hour + return self._hour @property def minute(self): """minute (0-59)""" - return self.__minute + return self._minute @property def second(self): """second (0-59)""" - return self.__second + return self._second @property def microsecond(self): """microsecond (0-999999)""" - return self.__microsecond + return self._microsecond @property def tzinfo(self): @@ -1567,7 +1580,7 @@ def date(self): "Return the date part." - return date(self.__year, self.__month, self.__day) + return date(self._year, self._month, self._day) def time(self): "Return the time part, with tzinfo None." @@ -1627,8 +1640,8 @@ def ctime(self): "Format a la ctime()." - t = tmxxx(self.__year, self.__month, self.__day, self.__hour, - self.__minute, self.__second) + t = tmxxx(self._year, self._month, self._day, self._hour, + self._minute, self._second) return t.ctime() def isoformat(self, sep='T'): @@ -1643,10 +1656,10 @@ Optional argument sep specifies the separator between date and time, default 'T'. 
""" - s = ("%04d-%02d-%02d%c" % (self.__year, self.__month, self.__day, + s = ("%04d-%02d-%02d%c" % (self._year, self._month, self._day, sep) + - _format_time(self.__hour, self.__minute, self.__second, - self.__microsecond)) + _format_time(self._hour, self._minute, self._second, + self._microsecond)) off = self._utcoffset() if off is not None: if off < 0: @@ -1660,8 +1673,8 @@ def __repr__(self): """Convert to formal string, for repr().""" - L = [self.__year, self.__month, self.__day, # These are never zero - self.__hour, self.__minute, self.__second, self.__microsecond] + L = [self._year, self._month, self._day, # These are never zero + self._hour, self._minute, self._second, self._microsecond] if L[-1] == 0: del L[-1] if L[-1] == 0: @@ -1738,7 +1751,7 @@ def __eq__(self, other): if isinstance(other, datetime): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1746,7 +1759,7 @@ def __ne__(self, other): if isinstance(other, datetime): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1754,7 +1767,7 @@ def __le__(self, other): if isinstance(other, datetime): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1762,7 +1775,7 @@ def __lt__(self, other): if isinstance(other, datetime): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1770,7 +1783,7 @@ def __ge__(self, other): if isinstance(other, datetime): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1778,13 +1791,13 @@ def __gt__(self, other): if isinstance(other, 
datetime): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, datetime) mytz = self._tzinfo ottz = other._tzinfo @@ -1800,12 +1813,12 @@ base_compare = myoff == otoff if base_compare: - return cmp((self.__year, self.__month, self.__day, - self.__hour, self.__minute, self.__second, - self.__microsecond), - (other.__year, other.__month, other.__day, - other.__hour, other.__minute, other.__second, - other.__microsecond)) + return cmp((self._year, self._month, self._day, + self._hour, self._minute, self._second, + self._microsecond), + (other._year, other._month, other._day, + other._hour, other._minute, other._second, + other._microsecond)) if myoff is None or otoff is None: # XXX Buggy in 2.2.2. raise TypeError("cannot compare naive and aware datetimes") @@ -1819,13 +1832,13 @@ "Add a datetime and a timedelta." 
if not isinstance(other, timedelta): return NotImplemented - t = tmxxx(self.__year, - self.__month, - self.__day + other.days, - self.__hour, - self.__minute, - self.__second + other.seconds, - self.__microsecond + other.microseconds) + t = tmxxx(self._year, + self._month, + self._day + other.days, + self._hour, + self._minute, + self._second + other.seconds, + self._microsecond + other.microseconds) self._checkOverflow(t.year) result = datetime(t.year, t.month, t.day, t.hour, t.minute, t.second, @@ -1843,11 +1856,11 @@ days1 = self.toordinal() days2 = other.toordinal() - secs1 = self.__second + self.__minute * 60 + self.__hour * 3600 - secs2 = other.__second + other.__minute * 60 + other.__hour * 3600 + secs1 = self._second + self._minute * 60 + self._hour * 3600 + secs2 = other._second + other._minute * 60 + other._hour * 3600 base = timedelta(days1 - days2, secs1 - secs2, - self.__microsecond - other.__microsecond) + self._microsecond - other._microsecond) if self._tzinfo is other._tzinfo: return base myoff = self._utcoffset() @@ -1855,13 +1868,13 @@ if myoff == otoff: return base if myoff is None or otoff is None: - raise TypeError, "cannot mix naive and timezone-aware time" + raise TypeError("cannot mix naive and timezone-aware time") return base + timedelta(minutes = otoff-myoff) def __hash__(self): tzoff = self._utcoffset() if tzoff is None: - return hash(self.__getstate()[0]) + return hash(self._getstate()[0]) days = _ymd2ord(self.year, self.month, self.day) seconds = self.hour * 3600 + (self.minute - tzoff) * 60 + self.second return hash(timedelta(days, seconds, self.microsecond)) @@ -1870,12 +1883,12 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - yhi, ylo = divmod(self.__year, 256) - us2, us3 = divmod(self.__microsecond, 256) + def _getstate(self): + yhi, ylo = divmod(self._year, 256) + us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) - basestate = ("%c" * 10) % (yhi, ylo, self.__month, self.__day, - 
self.__hour, self.__minute, self.__second, + basestate = ("%c" * 10) % (yhi, ylo, self._month, self._day, + self._hour, self._minute, self._second, us1, us2, us3) if self._tzinfo is None: return (basestate,) @@ -1883,14 +1896,14 @@ return (basestate, self._tzinfo) def __setstate(self, string, tzinfo): - (yhi, ylo, self.__month, self.__day, self.__hour, - self.__minute, self.__second, us1, us2, us3) = map(ord, string) - self.__year = yhi * 256 + ylo - self.__microsecond = (((us1 << 8) | us2) << 8) | us3 + (yhi, ylo, self._month, self._day, self._hour, + self._minute, self._second, us1, us2, us3) = map(ord, string) + self._year = yhi * 256 + ylo + self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) datetime.min = datetime(1, 1, 1) From noreply at buildbot.pypy.org Sat Jan 21 23:27:13 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sat, 21 Jan 2012 23:27:13 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: A branch to merge CPython 2.7.2 Message-ID: <20120121222713.2E40282D45@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51617:37357d7dabda Date: 2012-01-21 23:25 +0100 http://bitbucket.org/pypy/pypy/changeset/37357d7dabda/ Log: A branch to merge CPython 2.7.2 From noreply at buildbot.pypy.org Sun Jan 22 00:51:42 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 22 Jan 2012 00:51:42 +0100 (CET) Subject: [pypy-commit] pypy matrixmath-dot: remvoe debug cruft Message-ID: <20120121235142.A9D7B82D45@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: matrixmath-dot Changeset: r51618:a8252a946ad8 Date: 2012-01-21 20:01 +0200 http://bitbucket.org/pypy/pypy/changeset/a8252a946ad8/ Log: remvoe debug cruft diff --git a/pypy/module/micronumpy/dot.py b/pypy/module/micronumpy/dot.py --- a/pypy/module/micronumpy/dot.py +++ b/pypy/module/micronumpy/dot.py @@ -69,8 +69,6 @@ _r = 
calculate_dot_strides(right.strides, right.backstrides, broadcast_shape, right_skip) righti = ViewIterator(right.start, _r[0], _r[1], broadcast_shape) - if right.size==4: - xxx while not outi.done(): ''' dot_driver.jit_merge_point(left=left, From noreply at buildbot.pypy.org Sun Jan 22 00:51:43 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 22 Jan 2012 00:51:43 +0100 (CET) Subject: [pypy-commit] pypy numppy-flatitter: need to rethink getitem, setitem Message-ID: <20120121235143.E54E982D45@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numppy-flatitter Changeset: r51619:aa7917ad8e16 Date: 2012-01-21 23:14 +0200 http://bitbucket.org/pypy/pypy/changeset/aa7917ad8e16/ Log: need to rethink getitem, setitem diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -1299,10 +1299,13 @@ size = 1 for sh in arr.shape: size *= sh - if arr.strides[-1] < arr.strides[0]: + if arr.strides[-1] <= arr.strides[0]: self.strides = [arr.strides[-1]] self.backstrides = [arr.backstrides[-1]] else: + XXX + # This will not work: getitem and setitem will + # fail. 
Need to be smarter: calculate the indices from the int self.strides = [arr.strides[0]] self.backstrides = [arr.backstrides[0]] ViewArray.__init__(self, size, [size], arr.dtype, order=arr.order, From noreply at buildbot.pypy.org Sun Jan 22 00:51:45 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 22 Jan 2012 00:51:45 +0100 (CET) Subject: [pypy-commit] pypy matrixmath-dot: fix bad test Message-ID: <20120121235145.28E6E82D45@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: matrixmath-dot Changeset: r51620:0125f74ace80 Date: 2012-01-22 01:17 +0200 http://bitbucket.org/pypy/pypy/changeset/0125f74ace80/ Log: fix bad test diff --git a/pypy/module/micronumpy/dot.py b/pypy/module/micronumpy/dot.py --- a/pypy/module/micronumpy/dot.py +++ b/pypy/module/micronumpy/dot.py @@ -70,7 +70,6 @@ broadcast_shape, right_skip) righti = ViewIterator(right.start, _r[0], _r[1], broadcast_shape) while not outi.done(): - ''' dot_driver.jit_merge_point(left=left, right=right, shape_len=shape_len, @@ -81,18 +80,14 @@ dtype=dtype, sig=None, #For get_printable_location ) - ''' lval = left.getitem(lefti.offset).convert_to(dtype) rval = right.getitem(righti.offset).convert_to(dtype) outval = result.getitem(outi.offset).convert_to(dtype) v = dtype.itemtype.mul(lval, rval) value = dtype.itemtype.add(v, outval) #Do I need to convert it to result.dtype or does settiem do that? 
- assert outi.offset < result.size result.setitem(outi.offset, value) outi = outi.next(shape_len) righti = righti.next(shape_len) lefti = lefti.next(shape_len) - assert lefti.done() - assert righti.done() return result diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -891,7 +891,7 @@ assert (c == [[[14, 38, 62], [38, 126, 214], [62, 214, 366]], [[86, 302, 518], [110, 390, 670], [134, 478, 822]]]).all() c = dot(a, b[:, 2]) - assert (c == [[38, 126, 214], [302, 390, 478]]).all() + assert (c == [[62, 214, 366], [518, 670, 822]]).all() def test_dot_constant(self): from _numpypy import array diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -379,7 +379,17 @@ def test_dot(self): result = self.run("dot") assert result == 184 - self.check_simple_loop({}) + self.check_simple_loop({{'arraylen_gc': 9, + 'float_add': 1, + 'float_mul': 1, + 'getinteriorfield_raw': 3, + 'guard_false': 3, + 'guard_true': 3, + 'int_add': 6, + 'int_lt': 6, + 'int_sub': 3, + 'jump': 1, + 'setinteriorfield_raw': 1}}) class TestNumpyOld(LLJitMixin): From noreply at buildbot.pypy.org Sun Jan 22 00:51:46 2012 From: noreply at buildbot.pypy.org (mattip) Date: Sun, 22 Jan 2012 00:51:46 +0100 (CET) Subject: [pypy-commit] pypy numpypy-shape-bug: add failing setshape test, fix for test Message-ID: <20120121235146.6225282D45@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: numpypy-shape-bug Changeset: r51621:e11082ff75b9 Date: 2012-01-22 01:50 +0200 http://bitbucket.org/pypy/pypy/changeset/e11082ff75b9/ Log: add failing setshape test, fix for test diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ 
b/pypy/module/micronumpy/interp_numarray.py @@ -156,7 +156,7 @@ # fits the new shape, using those steps. If there is a shape/step mismatch # (meaning that the realignment of elements crosses from one step into another) # return None so that the caller can raise an exception. -def calc_new_strides(new_shape, old_shape, old_strides): +def calc_new_strides(new_shape, old_shape, old_strides, order): # Return the proper strides for new_shape, or None if the mapping crosses # stepping boundaries @@ -166,7 +166,7 @@ last_step = 1 oldI = 0 new_strides = [] - if old_strides[0] < old_strides[-1]: + if order == 'F': for i in range(len(old_shape)): steps.append(old_strides[i] / last_step) last_step *= old_shape[i] @@ -187,7 +187,7 @@ break cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] - else: + elif order == 'C': for i in range(len(old_shape) - 1, -1, -1): steps.insert(0, old_strides[i] / last_step) last_step *= old_shape[i] @@ -543,8 +543,8 @@ concrete = self.get_concrete() new_shape = get_shape_from_iterable(space, concrete.size, w_shape) # Since we got to here, prod(new_shape) == self.size - new_strides = calc_new_strides(new_shape, - concrete.shape, concrete.strides) + new_strides = calc_new_strides(new_shape, concrete.shape, + concrete.strides, concrete.order) if new_strides: # We can create a view, strides somehow match up. 
ndims = len(new_shape) @@ -1105,7 +1105,8 @@ self.backstrides = backstrides self.shape = new_shape return - new_strides = calc_new_strides(new_shape, self.shape, self.strides) + new_strides = calc_new_strides(new_shape, self.shape, self.strides, + self.order) if new_strides is None: raise OperationError(space.w_AttributeError, space.wrap( "incompatible shape for a non-contiguous array")) diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -152,11 +152,11 @@ def test_calc_new_strides(self): from pypy.module.micronumpy.interp_numarray import calc_new_strides - assert calc_new_strides([2, 4], [4, 2], [4, 2]) == [8, 2] - assert calc_new_strides([2, 4, 3], [8, 3], [1, 16]) == [1, 2, 16] - assert calc_new_strides([2, 3, 4], [8, 3], [1, 16]) is None - assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None - assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + assert calc_new_strides([2, 4], [4, 2], [4, 2], "C") == [8, 2] + assert calc_new_strides([2, 4, 3], [8, 3], [1, 16], 'F') == [1, 2, 16] + assert calc_new_strides([2, 3, 4], [8, 3], [1, 16], 'F') is None + assert calc_new_strides([24], [2, 4, 3], [48, 6, 1], 'C') is None + assert calc_new_strides([24], [2, 4, 3], [24, 6, 2], 'C') == [2] class AppTestNumArray(BaseNumpyAppTest): @@ -381,6 +381,8 @@ a.shape = () #numpy allows this a.shape = (1,) + a = array(range(6)).reshape(2,3).T + raises(AttributeError, 'a.shape = 6') def test_reshape(self): from _numpypy import array, zeros @@ -765,7 +767,7 @@ assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() raises (ValueError, 'a[:, 1, :].sum(2)') assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() - skip("Those are broken on reshape, fix!") + skip("Those are broken, fix after removing Scalar") assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) From noreply at 
buildbot.pypy.org Sun Jan 22 01:02:19 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 01:02:19 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Install CPython 2.7.2 version of the std library Message-ID: <20120122000219.73C0382D45@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51622:d74ec4c815b1 Date: 2012-01-22 00:07 +0100 http://bitbucket.org/pypy/pypy/changeset/d74ec4c815b1/ Log: Install CPython 2.7.2 version of the std library diff too long, truncating to 10000 out of 120278 lines diff --git a/lib-python/2.7/BaseHTTPServer.py b/lib-python/2.7/BaseHTTPServer.py --- a/lib-python/2.7/BaseHTTPServer.py +++ b/lib-python/2.7/BaseHTTPServer.py @@ -310,7 +310,13 @@ """ try: - self.raw_requestline = self.rfile.readline() + self.raw_requestline = self.rfile.readline(65537) + if len(self.raw_requestline) > 65536: + self.requestline = '' + self.request_version = '' + self.command = '' + self.send_error(414) + return if not self.raw_requestline: self.close_connection = 1 return diff --git a/lib-python/2.7/ConfigParser.py b/lib-python/2.7/ConfigParser.py --- a/lib-python/2.7/ConfigParser.py +++ b/lib-python/2.7/ConfigParser.py @@ -545,6 +545,38 @@ if isinstance(val, list): options[name] = '\n'.join(val) +import UserDict as _UserDict + +class _Chainmap(_UserDict.DictMixin): + """Combine multiple mappings for successive lookups. 
+ + For example, to emulate Python's normal lookup sequence: + + import __builtin__ + pylookup = _Chainmap(locals(), globals(), vars(__builtin__)) + """ + + def __init__(self, *maps): + self._maps = maps + + def __getitem__(self, key): + for mapping in self._maps: + try: + return mapping[key] + except KeyError: + pass + raise KeyError(key) + + def keys(self): + result = [] + seen = set() + for mapping in self_maps: + for key in mapping: + if key not in seen: + result.append(key) + seen.add(key) + return result + class ConfigParser(RawConfigParser): def get(self, section, option, raw=False, vars=None): @@ -559,16 +591,18 @@ The section DEFAULT is special. """ - d = self._defaults.copy() + sectiondict = {} try: - d.update(self._sections[section]) + sectiondict = self._sections[section] except KeyError: if section != DEFAULTSECT: raise NoSectionError(section) # Update with the entry specific variables + vardict = {} if vars: for key, value in vars.items(): - d[self.optionxform(key)] = value + vardict[self.optionxform(key)] = value + d = _Chainmap(vardict, sectiondict, self._defaults) option = self.optionxform(option) try: value = d[option] diff --git a/lib-python/2.7/Cookie.py b/lib-python/2.7/Cookie.py --- a/lib-python/2.7/Cookie.py +++ b/lib-python/2.7/Cookie.py @@ -258,6 +258,11 @@ '\033' : '\\033', '\034' : '\\034', '\035' : '\\035', '\036' : '\\036', '\037' : '\\037', + # Because of the way browsers really handle cookies (as opposed + # to what the RFC says) we also encode , and ; + + ',' : '\\054', ';' : '\\073', + '"' : '\\"', '\\' : '\\\\', '\177' : '\\177', '\200' : '\\200', '\201' : '\\201', diff --git a/lib-python/2.7/HTMLParser.py b/lib-python/2.7/HTMLParser.py --- a/lib-python/2.7/HTMLParser.py +++ b/lib-python/2.7/HTMLParser.py @@ -26,7 +26,7 @@ tagfind = re.compile('[a-zA-Z][-.a-zA-Z0-9:_]*') attrfind = re.compile( r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*' - r'(\'[^\']*\'|"[^"]*"|[-a-zA-Z0-9./,:;+*%?!&$\(\)_#=~@]*))?') + 
r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?') locatestarttagend = re.compile(r""" <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name @@ -99,7 +99,7 @@ markupbase.ParserBase.reset(self) def feed(self, data): - """Feed data to the parser. + r"""Feed data to the parser. Call this as often as you want, with as little or as much text as you want (may include '\n'). @@ -367,13 +367,16 @@ return s def replaceEntities(s): s = s.groups()[0] - if s[0] == "#": - s = s[1:] - if s[0] in ['x','X']: - c = int(s[1:], 16) - else: - c = int(s) - return unichr(c) + try: + if s[0] == "#": + s = s[1:] + if s[0] in ['x','X']: + c = int(s[1:], 16) + else: + c = int(s) + return unichr(c) + except ValueError: + return '&#'+s+';' else: # Cannot use name2codepoint directly, because HTMLParser supports apos, # which is not part of HTML 4 diff --git a/lib-python/2.7/SimpleHTTPServer.py b/lib-python/2.7/SimpleHTTPServer.py --- a/lib-python/2.7/SimpleHTTPServer.py +++ b/lib-python/2.7/SimpleHTTPServer.py @@ -15,6 +15,7 @@ import BaseHTTPServer import urllib import cgi +import sys import shutil import mimetypes try: @@ -131,7 +132,8 @@ length = f.tell() f.seek(0) self.send_response(200) - self.send_header("Content-type", "text/html") + encoding = sys.getfilesystemencoding() + self.send_header("Content-type", "text/html; charset=%s" % encoding) self.send_header("Content-Length", str(length)) self.end_headers() return f diff --git a/lib-python/2.7/SimpleXMLRPCServer.py b/lib-python/2.7/SimpleXMLRPCServer.py --- a/lib-python/2.7/SimpleXMLRPCServer.py +++ b/lib-python/2.7/SimpleXMLRPCServer.py @@ -246,7 +246,7 @@ marshalled data. For backwards compatibility, a dispatch function can be provided as an argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the - existing method through subclassing is the prefered means + existing method through subclassing is the preferred means of changing method dispatch behavior. 
""" diff --git a/lib-python/2.7/SocketServer.py b/lib-python/2.7/SocketServer.py --- a/lib-python/2.7/SocketServer.py +++ b/lib-python/2.7/SocketServer.py @@ -675,7 +675,7 @@ # A timeout to apply to the request socket, if not None. timeout = None - # Disable nagle algoritm for this socket, if True. + # Disable nagle algorithm for this socket, if True. # Use only when wbufsize != 0, to avoid small packets. disable_nagle_algorithm = False diff --git a/lib-python/2.7/StringIO.py b/lib-python/2.7/StringIO.py --- a/lib-python/2.7/StringIO.py +++ b/lib-python/2.7/StringIO.py @@ -266,6 +266,7 @@ 8th bit) will cause a UnicodeError to be raised when getvalue() is called. """ + _complain_ifclosed(self.closed) if self.buflist: self.buf += ''.join(self.buflist) self.buflist = [] diff --git a/lib-python/2.7/_abcoll.py b/lib-python/2.7/_abcoll.py --- a/lib-python/2.7/_abcoll.py +++ b/lib-python/2.7/_abcoll.py @@ -82,7 +82,7 @@ @classmethod def __subclasshook__(cls, C): if cls is Iterator: - if _hasattr(C, "next"): + if _hasattr(C, "next") and _hasattr(C, "__iter__"): return True return NotImplemented diff --git a/lib-python/2.7/_pyio.py b/lib-python/2.7/_pyio.py --- a/lib-python/2.7/_pyio.py +++ b/lib-python/2.7/_pyio.py @@ -16,6 +16,7 @@ import io from io import (__all__, SEEK_SET, SEEK_CUR, SEEK_END) +from errno import EINTR __metaclass__ = type @@ -559,7 +560,11 @@ if not data: break res += data - return bytes(res) + if res: + return bytes(res) + else: + # b'' or None + return data def readinto(self, b): """Read up to len(b) bytes into b. 
@@ -678,7 +683,7 @@ """ def __init__(self, raw): - self.raw = raw + self._raw = raw ### Positioning ### @@ -722,8 +727,8 @@ if self.raw is None: raise ValueError("raw stream already detached") self.flush() - raw = self.raw - self.raw = None + raw = self._raw + self._raw = None return raw ### Inquiries ### @@ -738,6 +743,10 @@ return self.raw.writable() @property + def raw(self): + return self._raw + + @property def closed(self): return self.raw.closed @@ -933,7 +942,12 @@ current_size = 0 while True: # Read until EOF or until read() would block. - chunk = self.raw.read() + try: + chunk = self.raw.read() + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -952,7 +966,12 @@ chunks = [buf[pos:]] wanted = max(self.buffer_size, n) while avail < n: - chunk = self.raw.read(wanted) + try: + chunk = self.raw.read(wanted) + except IOError as e: + if e.errno != EINTR: + raise + continue if chunk in empty_values: nodata_val = chunk break @@ -981,7 +1000,14 @@ have = len(self._read_buf) - self._read_pos if have < want or have <= 0: to_read = self.buffer_size - have - current = self.raw.read(to_read) + while True: + try: + current = self.raw.read(to_read) + except IOError as e: + if e.errno != EINTR: + raise + continue + break if current: self._read_buf = self._read_buf[self._read_pos:] + current self._read_pos = 0 @@ -1088,7 +1114,12 @@ written = 0 try: while self._write_buf: - n = self.raw.write(self._write_buf) + try: + n = self.raw.write(self._write_buf) + except IOError as e: + if e.errno != EINTR: + raise + continue if n > len(self._write_buf) or n < 0: raise IOError("write() returned incorrect number of bytes") del self._write_buf[:n] @@ -1456,7 +1487,7 @@ if not isinstance(errors, basestring): raise ValueError("invalid errors: %r" % errors) - self.buffer = buffer + self._buffer = buffer self._line_buffering = line_buffering self._encoding = encoding self._errors = errors @@ -1511,6 +1542,10 @@ def 
line_buffering(self): return self._line_buffering + @property + def buffer(self): + return self._buffer + def seekable(self): return self._seekable @@ -1724,8 +1759,8 @@ if self.buffer is None: raise ValueError("buffer is already detached") self.flush() - buffer = self.buffer - self.buffer = None + buffer = self._buffer + self._buffer = None return buffer def seek(self, cookie, whence=0): diff --git a/lib-python/2.7/_weakrefset.py b/lib-python/2.7/_weakrefset.py --- a/lib-python/2.7/_weakrefset.py +++ b/lib-python/2.7/_weakrefset.py @@ -66,7 +66,11 @@ return sum(x() is not None for x in self.data) def __contains__(self, item): - return ref(item) in self.data + try: + wr = ref(item) + except TypeError: + return False + return wr in self.data def __reduce__(self): return (self.__class__, (list(self),), diff --git a/lib-python/2.7/anydbm.py b/lib-python/2.7/anydbm.py --- a/lib-python/2.7/anydbm.py +++ b/lib-python/2.7/anydbm.py @@ -29,17 +29,8 @@ list = d.keys() # return a list of all existing keys (slow!) Future versions may change the order in which implementations are -tested for existence, add interfaces to other dbm-like +tested for existence, and add interfaces to other dbm-like implementations. - -The open function has an optional second argument. This can be 'r', -for read-only access, 'w', for read-write access of an existing -database, 'c' for read-write access to a new or existing database, and -'n' for read-write access to a new database. The default is 'r'. - -Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it -only if it doesn't exist; and 'n' always creates a new database. - """ class error(Exception): @@ -63,7 +54,18 @@ error = tuple(_errors) -def open(file, flag = 'r', mode = 0666): +def open(file, flag='r', mode=0666): + """Open or create database at path given by *file*. 
+ + Optional argument *flag* can be 'r' (default) for read-only access, 'w' + for read-write access of an existing database, 'c' for read-write access + to a new or existing database, and 'n' for read-write access to a new + database. + + Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it + only if it doesn't exist; and 'n' always creates a new database. + """ + # guess the type of an existing database from whichdb import whichdb result=whichdb(file) diff --git a/lib-python/2.7/argparse.py b/lib-python/2.7/argparse.py --- a/lib-python/2.7/argparse.py +++ b/lib-python/2.7/argparse.py @@ -82,6 +82,7 @@ ] +import collections as _collections import copy as _copy import os as _os import re as _re @@ -1037,7 +1038,7 @@ self._prog_prefix = prog self._parser_class = parser_class - self._name_parser_map = {} + self._name_parser_map = _collections.OrderedDict() self._choices_actions = [] super(_SubParsersAction, self).__init__( @@ -1080,7 +1081,7 @@ parser = self._name_parser_map[parser_name] except KeyError: tup = parser_name, ', '.join(self._name_parser_map) - msg = _('unknown parser %r (choices: %s)' % tup) + msg = _('unknown parser %r (choices: %s)') % tup raise ArgumentError(self, msg) # parse all the remaining options into the namespace @@ -1109,7 +1110,7 @@ the builtin open() function. 
""" - def __init__(self, mode='r', bufsize=None): + def __init__(self, mode='r', bufsize=-1): self._mode = mode self._bufsize = bufsize @@ -1121,18 +1122,19 @@ elif 'w' in self._mode: return _sys.stdout else: - msg = _('argument "-" with mode %r' % self._mode) + msg = _('argument "-" with mode %r') % self._mode raise ValueError(msg) # all other arguments are used as file names - if self._bufsize: + try: return open(string, self._mode, self._bufsize) - else: - return open(string, self._mode) + except IOError as e: + message = _("can't open '%s': %s") + raise ArgumentTypeError(message % (string, e)) def __repr__(self): - args = [self._mode, self._bufsize] - args_str = ', '.join([repr(arg) for arg in args if arg is not None]) + args = self._mode, self._bufsize + args_str = ', '.join(repr(arg) for arg in args if arg != -1) return '%s(%s)' % (type(self).__name__, args_str) # =========================== @@ -1275,13 +1277,20 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) if not _callable(action_class): - raise ValueError('unknown action "%s"' % action_class) + raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) if not _callable(type_func): - raise ValueError('%r is not callable' % type_func) + raise ValueError('%r is not callable' % (type_func,)) + + # raise an error if the metavar does not match the type + if hasattr(self, "_get_formatter"): + try: + self._get_formatter()._format_args(action, None) + except TypeError: + raise ValueError("length of metavar tuple does not match nargs") return self._add_action(action) @@ -1481,6 +1490,7 @@ self._defaults = container._defaults self._has_negative_number_optionals = \ container._has_negative_number_optionals + self._mutually_exclusive_groups = container._mutually_exclusive_groups def _add_action(self, action): action = 
super(_ArgumentGroup, self)._add_action(action) diff --git a/lib-python/2.7/ast.py b/lib-python/2.7/ast.py --- a/lib-python/2.7/ast.py +++ b/lib-python/2.7/ast.py @@ -29,12 +29,12 @@ from _ast import __version__ -def parse(expr, filename='', mode='exec'): +def parse(source, filename='', mode='exec'): """ - Parse an expression into an AST node. - Equivalent to compile(expr, filename, mode, PyCF_ONLY_AST). + Parse the source into an AST node. + Equivalent to compile(source, filename, mode, PyCF_ONLY_AST). """ - return compile(expr, filename, mode, PyCF_ONLY_AST) + return compile(source, filename, mode, PyCF_ONLY_AST) def literal_eval(node_or_string): @@ -152,8 +152,6 @@ Increment the line number of each node in the tree starting at *node* by *n*. This is useful to "move code" to a different location in a file. """ - if 'lineno' in node._attributes: - node.lineno = getattr(node, 'lineno', 0) + n for child in walk(node): if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n @@ -204,9 +202,9 @@ def walk(node): """ - Recursively yield all child nodes of *node*, in no specified order. This is - useful if you only want to modify nodes in place and don't care about the - context. + Recursively yield all descendant nodes in the tree starting at *node* + (including *node* itself), in no specified order. This is useful if you + only want to modify nodes in place and don't care about the context. 
""" from collections import deque todo = deque([node]) diff --git a/lib-python/2.7/asyncore.py b/lib-python/2.7/asyncore.py --- a/lib-python/2.7/asyncore.py +++ b/lib-python/2.7/asyncore.py @@ -54,7 +54,11 @@ import os from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \ - ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, errorcode + ENOTCONN, ESHUTDOWN, EINTR, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \ + errorcode + +_DISCONNECTED = frozenset((ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED, EPIPE, + EBADF)) try: socket_map @@ -109,7 +113,7 @@ if flags & (select.POLLHUP | select.POLLERR | select.POLLNVAL): obj.handle_close() except socket.error, e: - if e.args[0] not in (EBADF, ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + if e.args[0] not in _DISCONNECTED: obj.handle_error() else: obj.handle_close() @@ -353,7 +357,7 @@ except TypeError: return None except socket.error as why: - if why.args[0] in (EWOULDBLOCK, ECONNABORTED): + if why.args[0] in (EWOULDBLOCK, ECONNABORTED, EAGAIN): return None else: raise @@ -367,7 +371,7 @@ except socket.error, why: if why.args[0] == EWOULDBLOCK: return 0 - elif why.args[0] in (ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED): + elif why.args[0] in _DISCONNECTED: self.handle_close() return 0 else: @@ -385,7 +389,7 @@ return data except socket.error, why: # winsock sometimes throws ENOTCONN - if why.args[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED]: + if why.args[0] in _DISCONNECTED: self.handle_close() return '' else: diff --git a/lib-python/2.7/bdb.py b/lib-python/2.7/bdb.py --- a/lib-python/2.7/bdb.py +++ b/lib-python/2.7/bdb.py @@ -250,6 +250,12 @@ list.append(lineno) bp = Breakpoint(filename, lineno, temporary, cond, funcname) + def _prune_breaks(self, filename, lineno): + if (filename, lineno) not in Breakpoint.bplist: + self.breaks[filename].remove(lineno) + if not self.breaks[filename]: + del self.breaks[filename] + def clear_break(self, filename, lineno): filename = 
self.canonic(filename) if not filename in self.breaks: @@ -261,10 +267,7 @@ # pair, then remove the breaks entry for bp in Breakpoint.bplist[filename, lineno][:]: bp.deleteMe() - if (filename, lineno) not in Breakpoint.bplist: - self.breaks[filename].remove(lineno) - if not self.breaks[filename]: - del self.breaks[filename] + self._prune_breaks(filename, lineno) def clear_bpbynumber(self, arg): try: @@ -277,7 +280,8 @@ return 'Breakpoint number (%d) out of range' % number if not bp: return 'Breakpoint (%d) already deleted' % number - self.clear_break(bp.file, bp.line) + bp.deleteMe() + self._prune_breaks(bp.file, bp.line) def clear_all_file_breaks(self, filename): filename = self.canonic(filename) diff --git a/lib-python/2.7/collections.py b/lib-python/2.7/collections.py --- a/lib-python/2.7/collections.py +++ b/lib-python/2.7/collections.py @@ -6,59 +6,38 @@ __all__ += _abcoll.__all__ from _collections import deque, defaultdict -from operator import itemgetter as _itemgetter, eq as _eq +from operator import itemgetter as _itemgetter from keyword import iskeyword as _iskeyword import sys as _sys import heapq as _heapq -from itertools import repeat as _repeat, chain as _chain, starmap as _starmap, \ - ifilter as _ifilter, imap as _imap +from itertools import repeat as _repeat, chain as _chain, starmap as _starmap + try: - from thread import get_ident + from thread import get_ident as _get_ident except ImportError: - from dummy_thread import get_ident - -def _recursive_repr(user_function): - 'Decorator to make a repr function return "..." for a recursive call' - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return '...' 
- repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - return wrapper + from dummy_thread import get_ident as _get_ident ################################################################################ ### OrderedDict ################################################################################ -class OrderedDict(dict, MutableMapping): +class OrderedDict(dict): 'Dictionary that remembers insertion order' # An inherited dict maps keys to values. # The inherited dict provides __getitem__, __len__, __contains__, and get. # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. + # Big-O running times for all methods are the same as regular dictionaries. - # The internal self.__map dictionary maps keys to links in a doubly linked list. + # The internal self.__map dict maps keys to links in a doubly linked list. # The circular doubly linked list starts and ends with a sentinel element. # The sentinel element never gets deleted (this simplifies the algorithm). # Each link is stored as a list of length three: [PREV, NEXT, KEY]. def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. + '''Initialize an ordered dictionary. The signature is the same as + regular dictionaries, but keyword arguments are not recommended because + their insertion order is arbitrary. 
''' if len(args) > 1: @@ -66,17 +45,15 @@ try: self.__root except AttributeError: - self.__root = root = [None, None, None] # sentinel node - PREV = 0 - NEXT = 1 - root[PREV] = root[NEXT] = root + self.__root = root = [] # sentinel node + root[:] = [root, root, None] self.__map = {} - self.update(*args, **kwds) + self.__update(*args, **kwds) def __setitem__(self, key, value, PREV=0, NEXT=1, dict_setitem=dict.__setitem__): 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. + # Setting a new item creates a new link at the end of the linked list, + # and the inherited dictionary is updated with the new key/value pair. if key not in self: root = self.__root last = root[PREV] @@ -85,65 +62,160 @@ def __delitem__(self, key, PREV=0, NEXT=1, dict_delitem=dict.__delitem__): 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. + # Deleting an existing item uses self.__map to find the link which gets + # removed by updating the links in the predecessor and successor nodes. dict_delitem(self, key) - link = self.__map.pop(key) - link_prev = link[PREV] - link_next = link[NEXT] + link_prev, link_next, key = self.__map.pop(key) link_prev[NEXT] = link_next link_next[PREV] = link_prev - def __iter__(self, NEXT=1, KEY=2): + def __iter__(self): 'od.__iter__() <==> iter(od)' # Traverse the linked list in order. + NEXT, KEY = 1, 2 root = self.__root curr = root[NEXT] while curr is not root: yield curr[KEY] curr = curr[NEXT] - def __reversed__(self, PREV=0, KEY=2): + def __reversed__(self): 'od.__reversed__() <==> reversed(od)' # Traverse the linked list in reverse order. 
+ PREV, KEY = 0, 2 root = self.__root curr = root[PREV] while curr is not root: yield curr[KEY] curr = curr[PREV] + def clear(self): + 'od.clear() -> None. Remove all items from od.' + for node in self.__map.itervalues(): + del node[:] + root = self.__root + root[:] = [root, root, None] + self.__map.clear() + dict.clear(self) + + # -- the following methods do not depend on the internal structure -- + + def keys(self): + 'od.keys() -> list of keys in od' + return list(self) + + def values(self): + 'od.values() -> list of values in od' + return [self[key] for key in self] + + def items(self): + 'od.items() -> list of (key, value) pairs in od' + return [(key, self[key]) for key in self] + + def iterkeys(self): + 'od.iterkeys() -> an iterator over the keys in od' + return iter(self) + + def itervalues(self): + 'od.itervalues -> an iterator over the values in od' + for k in self: + yield self[k] + + def iteritems(self): + 'od.iteritems -> an iterator over the (key, value) pairs in od' + for k in self: + yield (k, self[k]) + + update = MutableMapping.update + + __update = update # let subclasses override update without breaking __init__ + + __marker = object() + + def pop(self, key, default=__marker): + '''od.pop(k[,d]) -> v, remove specified key and return the corresponding + value. If key is not found, d is returned if given, otherwise KeyError + is raised. + + ''' + if key in self: + result = self[key] + del self[key] + return result + if default is self.__marker: + raise KeyError(key) + return default + + def setdefault(self, key, default=None): + 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' + if key in self: + return self[key] + self[key] = default + return default + + def popitem(self, last=True): + '''od.popitem() -> (k, v), return and remove a (key, value) pair. + Pairs are returned in LIFO order if last is true or FIFO order if false. 
+ + ''' + if not self: + raise KeyError('dictionary is empty') + key = next(reversed(self) if last else iter(self)) + value = self.pop(key) + return key, value + + def __repr__(self, _repr_running={}): + 'od.__repr__() <==> repr(od)' + call_key = id(self), _get_ident() + if call_key in _repr_running: + return '...' + _repr_running[call_key] = 1 + try: + if not self: + return '%s()' % (self.__class__.__name__,) + return '%s(%r)' % (self.__class__.__name__, self.items()) + finally: + del _repr_running[call_key] + def __reduce__(self): 'Return state information for pickling' items = [[k, self[k]] for k in self] - tmp = self.__map, self.__root - del self.__map, self.__root inst_dict = vars(self).copy() - self.__map, self.__root = tmp + for k in vars(OrderedDict()): + inst_dict.pop(k, None) if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - self.__root[:] = [self.__root, self.__root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) + def copy(self): + 'od.copy() -> a shallow copy of od' + return self.__class__(self) - setdefault = MutableMapping.setdefault - update = MutableMapping.update - pop = MutableMapping.pop - keys = MutableMapping.keys - values = MutableMapping.values - items = MutableMapping.items - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - iteritems = MutableMapping.iteritems - __ne__ = MutableMapping.__ne__ + @classmethod + def fromkeys(cls, iterable, value=None): + '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S. + If not specified, the value defaults to None. + + ''' + self = cls() + for key in iterable: + self[key] = value + return self + + def __eq__(self, other): + '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive + while comparison to a regular mapping is order-insensitive. 
+ + ''' + if isinstance(other, OrderedDict): + return len(self)==len(other) and self.items() == other.items() + return dict.__eq__(self, other) + + def __ne__(self, other): + 'od.__ne__(y) <==> od!=y' + return not self == other + + # -- the following methods support python 3.x style dictionary views -- def viewkeys(self): "od.viewkeys() -> a set-like object providing a view on od's keys" @@ -157,49 +229,6 @@ "od.viewitems() -> a set-like object providing a view on od's items" return ItemsView(self) - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. - - ''' - if not self: - raise KeyError('dictionary is empty') - key = next(reversed(self) if last else iter(self)) - value = self.pop(key) - return key, value - - @_recursive_repr - def __repr__(self): - 'od.__repr__() <==> repr(od)' - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and \ - all(_imap(_eq, self.iteritems(), other.iteritems())) - return dict.__eq__(self, other) - ################################################################################ ### namedtuple @@ -328,16 +357,16 @@ or multiset. Elements are stored as dictionary keys and their counts are stored as dictionary values. 
- >>> c = Counter('abracadabra') # count elements from a string + >>> c = Counter('abcdeabcdabcaba') # count elements from a string >>> c.most_common(3) # three most common elements - [('a', 5), ('r', 2), ('b', 2)] + [('a', 5), ('b', 4), ('c', 3)] >>> sorted(c) # list all unique elements - ['a', 'b', 'c', 'd', 'r'] + ['a', 'b', 'c', 'd', 'e'] >>> ''.join(sorted(c.elements())) # list elements with repetitions - 'aaaaabbcdrr' + 'aaaaabbbbcccdde' >>> sum(c.values()) # total of all counts - 11 + 15 >>> c['a'] # count of letter 'a' 5 @@ -345,8 +374,8 @@ ... c[elem] += 1 # by adding 1 to each element's count >>> c['a'] # now there are seven 'a' 7 - >>> del c['r'] # remove all 'r' - >>> c['r'] # now there are zero 'r' + >>> del c['b'] # remove all 'b' + >>> c['b'] # now there are zero 'b' 0 >>> d = Counter('simsalabim') # make another counter @@ -385,6 +414,7 @@ >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' + super(Counter, self).__init__() self.update(iterable, **kwds) def __missing__(self, key): @@ -396,8 +426,8 @@ '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. - >>> Counter('abracadabra').most_common(3) - [('a', 5), ('r', 2), ('b', 2)] + >>> Counter('abcdeabcdabcaba').most_common(3) + [('a', 5), ('b', 4), ('c', 3)] ''' # Emulate Bag.sortedByCount from Smalltalk @@ -463,7 +493,7 @@ for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: - dict.update(self, iterable) # fast path when counter is empty + super(Counter, self).update(iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: @@ -499,13 +529,16 @@ self.subtract(kwds) def copy(self): - 'Like dict.copy() but returns a Counter instance instead of a dict.' - return Counter(self) + 'Return a shallow copy.' 
+ return self.__class__(self) + + def __reduce__(self): + return self.__class__, (dict(self),) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' if elem in self: - dict.__delitem__(self, elem) + super(Counter, self).__delitem__(elem) def __repr__(self): if not self: @@ -532,10 +565,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] + other[elem] + for elem, count in self.items(): + newcount = count + other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __sub__(self, other): @@ -548,10 +584,13 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - newcount = self[elem] - other[elem] + for elem, count in self.items(): + newcount = count - other[elem] if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count < 0: + result[elem] = 0 - count return result def __or__(self, other): @@ -564,11 +603,14 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - for elem in set(self) | set(other): - p, q = self[elem], other[elem] - newcount = q if p < q else p + for elem, count in self.items(): + other_count = other[elem] + newcount = other_count if count < other_count else count if newcount > 0: result[elem] = newcount + for elem, count in other.items(): + if elem not in self and count > 0: + result[elem] = count return result def __and__(self, other): @@ -581,11 +623,9 @@ if not isinstance(other, Counter): return NotImplemented result = Counter() - if len(self) < len(other): - self, other = other, self - for elem in _ifilter(self.__contains__, other): - p, q = self[elem], other[elem] - newcount = p if p < q else q + for elem, count in self.items(): + other_count = other[elem] + 
newcount = count if count < other_count else other_count if newcount > 0: result[elem] = newcount return result diff --git a/lib-python/2.7/compileall.py b/lib-python/2.7/compileall.py --- a/lib-python/2.7/compileall.py +++ b/lib-python/2.7/compileall.py @@ -9,7 +9,6 @@ packages -- for now, you'll have to deal with packages separately.) See module py_compile for details of the actual byte-compilation. - """ import os import sys @@ -31,7 +30,6 @@ directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ if not quiet: print 'Listing', dir, '...' @@ -61,15 +59,16 @@ return success def compile_file(fullname, ddir=None, force=0, rx=None, quiet=0): - """Byte-compile file. - file: the file to byte-compile + """Byte-compile one file. + + Arguments (only fullname is required): + + fullname: the file to byte-compile ddir: if given, purported directory name (this is the directory name that will show up in error messages) force: if 1, force compilation, even if timestamps are up-to-date quiet: if 1, be quiet during compilation - """ - success = 1 name = os.path.basename(fullname) if ddir is not None: @@ -120,7 +119,6 @@ maxlevels: max recursion level (default 0) force: as for compile_dir() (default 0) quiet: as for compile_dir() (default 0) - """ success = 1 for dir in sys.path: diff --git a/lib-python/2.7/csv.py b/lib-python/2.7/csv.py --- a/lib-python/2.7/csv.py +++ b/lib-python/2.7/csv.py @@ -281,7 +281,7 @@ an all or nothing approach, so we allow for small variations in this number. 1) build a table of the frequency of each character on every line. - 2) build a table of freqencies of this frequency (meta-frequency?), + 2) build a table of frequencies of this frequency (meta-frequency?), e.g. 
'x occurred 5 times in 10 rows, 6 times in 1000 rows, 7 times in 2 rows' 3) use the mode of the meta-frequency to determine the /expected/ diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/2.7/ctypes/test/test_as_parameter.py b/lib-python/2.7/ctypes/test/test_as_parameter.py --- a/lib-python/2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -206,6 +206,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, 
e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -116,7 +116,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, "restype", "i") def test_floatresult(self): diff --git a/lib-python/2.7/ctypes/test/test_init.py b/lib-python/2.7/ctypes/test/test_init.py --- a/lib-python/2.7/ctypes/test/test_init.py +++ b/lib-python/2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. 
x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -157,7 +157,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/2.7/ctypes/test/test_win32.py b/lib-python/2.7/ctypes/test/test_win32.py --- a/lib-python/2.7/ctypes/test/test_win32.py +++ b/lib-python/2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/2.7/curses/wrapper.py b/lib-python/2.7/curses/wrapper.py --- a/lib-python/2.7/curses/wrapper.py +++ b/lib-python/2.7/curses/wrapper.py @@ -43,7 +43,8 @@ return func(stdscr, *args, **kwds) finally: # Set everything back to normal - stdscr.keypad(0) - curses.echo() - curses.nocbreak() - curses.endwin() + if 'stdscr' in locals(): + stdscr.keypad(0) + curses.echo() + curses.nocbreak() + curses.endwin() diff --git a/lib-python/2.7/decimal.py b/lib-python/2.7/decimal.py --- a/lib-python/2.7/decimal.py +++ b/lib-python/2.7/decimal.py @@ -1068,14 +1068,16 @@ if ans: return ans - if not self: - # -Decimal('0') is Decimal('0'), not Decimal('-0') + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # -Decimal('0') is Decimal('0'), not Decimal('-0'), except + # in ROUND_FLOOR rounding mode. 
ans = self.copy_abs() else: ans = self.copy_negate() - if context is None: - context = getcontext() return ans._fix(context) def __pos__(self, context=None): @@ -1088,14 +1090,15 @@ if ans: return ans - if not self: - # + (-0) = 0 + if context is None: + context = getcontext() + + if not self and context.rounding != ROUND_FLOOR: + # + (-0) = 0, except in ROUND_FLOOR rounding mode. ans = self.copy_abs() else: ans = Decimal(self) - if context is None: - context = getcontext() return ans._fix(context) def __abs__(self, round=True, context=None): @@ -1680,7 +1683,7 @@ self = _dec_from_triple(self._sign, '1', exp_min-1) digits = 0 rounding_method = self._pick_rounding_function[context.rounding] - changed = getattr(self, rounding_method)(digits) + changed = rounding_method(self, digits) coeff = self._int[:digits] or '0' if changed > 0: coeff = str(int(coeff)+1) @@ -1720,8 +1723,6 @@ # here self was representable to begin with; return unchanged return Decimal(self) - _pick_rounding_function = {} - # for each of the rounding functions below: # self is a finite, nonzero Decimal # prec is an integer satisfying 0 <= prec < len(self._int) @@ -1788,6 +1789,17 @@ else: return -self._round_down(prec) + _pick_rounding_function = dict( + ROUND_DOWN = _round_down, + ROUND_UP = _round_up, + ROUND_HALF_UP = _round_half_up, + ROUND_HALF_DOWN = _round_half_down, + ROUND_HALF_EVEN = _round_half_even, + ROUND_CEILING = _round_ceiling, + ROUND_FLOOR = _round_floor, + ROUND_05UP = _round_05up, + ) + def fma(self, other, third, context=None): """Fused multiply-add. 
@@ -2492,8 +2504,8 @@ if digits < 0: self = _dec_from_triple(self._sign, '1', exp-1) digits = 0 - this_function = getattr(self, self._pick_rounding_function[rounding]) - changed = this_function(digits) + this_function = self._pick_rounding_function[rounding] + changed = this_function(self, digits) coeff = self._int[:digits] or '0' if changed == 1: coeff = str(int(coeff)+1) @@ -3705,18 +3717,6 @@ ##### Context class ####################################################### - -# get rounding method function: -rounding_functions = [name for name in Decimal.__dict__.keys() - if name.startswith('_round_')] -for name in rounding_functions: - # name is like _round_half_even, goes to the global ROUND_HALF_EVEN value. - globalname = name[1:].upper() - val = globals()[globalname] - Decimal._pick_rounding_function[val] = name - -del name, val, globalname, rounding_functions - class _ContextManager(object): """Context manager class to support localcontext(). @@ -5990,7 +5990,7 @@ def _format_align(sign, body, spec): """Given an unpadded, non-aligned numeric string 'body' and sign - string 'sign', add padding and aligment conforming to the given + string 'sign', add padding and alignment conforming to the given format specifier dictionary 'spec' (as produced by parse_format_specifier). 
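The decimal.py hunks above make two changes worth spelling out: negating a zero now yields `-0` only in `ROUND_FLOOR` mode, and the rounding helpers are dispatched through a dict of functions instead of `getattr()` on a stored method name. A minimal sketch of both, runnable against the modern `decimal` module (the toy dispatcher's helper names and sample values are illustrative, not the patch's actual internals):

```python
# Sketch of the two decimal.py behaviours changed in the patch above.
from decimal import Decimal, localcontext, ROUND_FLOOR, ROUND_HALF_EVEN

# 1. Negating zero: -Decimal('0') is Decimal('0') in every rounding mode
#    except ROUND_FLOOR, where the spec requires Decimal('-0').
with localcontext() as ctx:
    ctx.rounding = ROUND_HALF_EVEN
    default_neg = str(-Decimal('0'))    # '0'
    ctx.rounding = ROUND_FLOOR
    floor_neg = str(-Decimal('0'))      # '-0'
print(default_neg, floor_neg)

# 2. The patch replaces getattr(self, name_string)(digits) with a direct
#    dict-of-functions lookup; a toy dispatcher in the same shape:
def _round_down(self, digits):
    return 'down: %s[:%d]' % (self, digits)

def _round_up(self, digits):
    return 'up: %s[:%d]' % (self, digits)

_pick_rounding_function = dict(
    ROUND_DOWN=_round_down,
    ROUND_UP=_round_up,
)

rounder = _pick_rounding_function['ROUND_UP']
print(rounder('12345', 3))              # up: 12345[:3]
```

The dict-based form avoids a string-keyed indirection through the class namespace and lets the mapping be built once at class definition time.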
diff --git a/lib-python/2.7/difflib.py b/lib-python/2.7/difflib.py --- a/lib-python/2.7/difflib.py +++ b/lib-python/2.7/difflib.py @@ -1140,6 +1140,21 @@ return ch in ws +######################################################################## +### Unified Diff +######################################################################## + +def _format_range_unified(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if length == 1: + return '{}'.format(beginning) + if not length: + beginning -= 1 # empty ranges begin at line just before the range + return '{},{}'.format(beginning, length) + def unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): r""" @@ -1184,25 +1199,45 @@ started = False for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '--- %s%s%s' % (fromfile, fromdate, lineterm) - yield '+++ %s%s%s' % (tofile, todate, lineterm) started = True - i1, i2, j1, j2 = group[0][1], group[-1][2], group[0][3], group[-1][4] - yield "@@ -%d,%d +%d,%d @@%s" % (i1+1, i2-i1, j1+1, j2-j1, lineterm) + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '--- {}{}{}'.format(fromfile, fromdate, lineterm) + yield '+++ {}{}{}'.format(tofile, todate, lineterm) + + first, last = group[0], group[-1] + file1_range = _format_range_unified(first[1], last[2]) + file2_range = _format_range_unified(first[3], last[4]) + yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm) + for tag, i1, i2, j1, j2 in group: if tag == 'equal': for line in a[i1:i2]: yield ' ' + line continue - if tag == 'replace' or tag == 'delete': + if tag in ('replace', 'delete'): for line in 
a[i1:i2]: yield '-' + line - if tag == 'replace' or tag == 'insert': + if tag in ('replace', 'insert'): for line in b[j1:j2]: yield '+' + line + +######################################################################## +### Context Diff +######################################################################## + +def _format_range_context(start, stop): + 'Convert range to the "ed" format' + # Per the diff spec at http://www.unix.org/single_unix_specification/ + beginning = start + 1 # lines start numbering with one + length = stop - start + if not length: + beginning -= 1 # empty ranges begin at line just before the range + if length <= 1: + return '{}'.format(beginning) + return '{},{}'.format(beginning, beginning + length - 1) + # See http://www.unix.org/single_unix_specification/ def context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n'): @@ -1247,38 +1282,36 @@ four """ + prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ') started = False - prefixmap = {'insert':'+ ', 'delete':'- ', 'replace':'! 
', 'equal':' '} for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n): if not started: - fromdate = '\t%s' % fromfiledate if fromfiledate else '' - todate = '\t%s' % tofiledate if tofiledate else '' - yield '*** %s%s%s' % (fromfile, fromdate, lineterm) - yield '--- %s%s%s' % (tofile, todate, lineterm) started = True + fromdate = '\t{}'.format(fromfiledate) if fromfiledate else '' + todate = '\t{}'.format(tofiledate) if tofiledate else '' + yield '*** {}{}{}'.format(fromfile, fromdate, lineterm) + yield '--- {}{}{}'.format(tofile, todate, lineterm) - yield '***************%s' % (lineterm,) - if group[-1][2] - group[0][1] >= 2: - yield '*** %d,%d ****%s' % (group[0][1]+1, group[-1][2], lineterm) - else: - yield '*** %d ****%s' % (group[-1][2], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'delete')] - if visiblechanges: + first, last = group[0], group[-1] + yield '***************' + lineterm + + file1_range = _format_range_context(first[1], last[2]) + yield '*** {} ****{}'.format(file1_range, lineterm) + + if any(tag in ('replace', 'delete') for tag, _, _, _, _ in group): for tag, i1, i2, _, _ in group: if tag != 'insert': for line in a[i1:i2]: - yield prefixmap[tag] + line + yield prefix[tag] + line - if group[-1][4] - group[0][3] >= 2: - yield '--- %d,%d ----%s' % (group[0][3]+1, group[-1][4], lineterm) - else: - yield '--- %d ----%s' % (group[-1][4], lineterm) - visiblechanges = [e for e in group if e[0] in ('replace', 'insert')] - if visiblechanges: + file2_range = _format_range_context(first[3], last[4]) + yield '--- {} ----{}'.format(file2_range, lineterm) + + if any(tag in ('replace', 'insert') for tag, _, _, _, _ in group): for tag, _, _, j1, j2 in group: if tag != 'delete': for line in b[j1:j2]: - yield prefixmap[tag] + line + yield prefix[tag] + line def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK): r""" @@ -1714,7 +1747,7 @@ line = line.replace(' ','\0') # expand tabs into spaces line = 
line.expandtabs(self._tabsize)
-            # relace spaces from expanded tabs back into tab characters
+            # replace spaces from expanded tabs back into tab characters
             # (we'll replace them with markup after we do differencing)
             line = line.replace(' ','\t')
             return line.replace('\0',' ').rstrip('\n')
diff --git a/lib-python/2.7/distutils/__init__.py b/lib-python/2.7/distutils/__init__.py
--- a/lib-python/2.7/distutils/__init__.py
+++ b/lib-python/2.7/distutils/__init__.py
@@ -15,5 +15,5 @@
 # Updated automatically by the Python release process.
 #
 #--start constants--
-__version__ = "2.7.1"
+__version__ = "2.7.2"
 #--end constants--
diff --git a/lib-python/2.7/distutils/archive_util.py b/lib-python/2.7/distutils/archive_util.py
--- a/lib-python/2.7/distutils/archive_util.py
+++ b/lib-python/2.7/distutils/archive_util.py
@@ -121,7 +121,7 @@
 def make_zipfile(base_name, base_dir, verbose=0, dry_run=0):
     """Create a zip file from all the files under 'base_dir'.

-    The output zip file will be named 'base_dir' + ".zip".  Uses either the
+    The output zip file will be named 'base_name' + ".zip".  Uses either the
     "zipfile" Python module (if available) or the InfoZIP "zip" utility
     (if installed and found on the default search path).  If neither tool is
     available, raises DistutilsExecError.
Returns the name of the output zip diff --git a/lib-python/2.7/distutils/cmd.py b/lib-python/2.7/distutils/cmd.py --- a/lib-python/2.7/distutils/cmd.py +++ b/lib-python/2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -207,7 +207,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/2.7/distutils/command/sdist.py b/lib-python/2.7/distutils/command/sdist.py --- a/lib-python/2.7/distutils/command/sdist.py +++ b/lib-python/2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/2.7/distutils/command/upload.py b/lib-python/2.7/distutils/command/upload.py --- a/lib-python/2.7/distutils/command/upload.py +++ 
b/lib-python/2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/2.7/distutils/tests/__init__.py b/lib-python/2.7/distutils/tests/__init__.py --- a/lib-python/2.7/distutils/tests/__init__.py +++ b/lib-python/2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_archive_util.py b/lib-python/2.7/distutils/tests/test_archive_util.py --- a/lib-python/2.7/distutils/tests/test_archive_util.py +++ b/lib-python/2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import 
check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_bdist_msi.py b/lib-python/2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/2.7/distutils/tests/test_build.py b/lib-python/2.7/distutils/tests/test_build.py --- a/lib-python/2.7/distutils/tests/test_build.py +++ b/lib-python/2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_clib.py b/lib-python/2.7/distutils/tests/test_build_clib.py --- a/lib-python/2.7/distutils/tests/test_build_clib.py +++ b/lib-python/2.7/distutils/tests/test_build_clib.py @@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- 
a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = '%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return 
unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/2.7/distutils/tests/test_build_py.py b/lib-python/2.7/distutils/tests/test_build_py.py --- a/lib-python/2.7/distutils/tests/test_build_py.py +++ b/lib-python/2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? + if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. 
cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_build_scripts.py b/lib-python/2.7/distutils/tests/test_build_scripts.py --- a/lib-python/2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_check.py b/lib-python/2.7/distutils/tests/test_check.py --- a/lib-python/2.7/distutils/tests/test_check.py +++ b/lib-python/2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_clean.py b/lib-python/2.7/distutils/tests/test_clean.py --- a/lib-python/2.7/distutils/tests/test_clean.py +++ b/lib-python/2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not 
os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_cmd.py b/lib-python/2.7/distutils/tests/test_cmd.py --- a/lib-python/2.7/distutils/tests/test_cmd.py +++ b/lib-python/2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/2.7/distutils/tests/test_config.py b/lib-python/2.7/distutils/tests/test_config.py --- a/lib-python/2.7/distutils/tests/test_config.py +++ b/lib-python/2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 @@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_config_cmd.py b/lib-python/2.7/distutils/tests/test_config_cmd.py --- a/lib-python/2.7/distutils/tests/test_config_cmd.py +++ b/lib-python/2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_core.py b/lib-python/2.7/distutils/tests/test_core.py --- a/lib-python/2.7/distutils/tests/test_core.py +++ b/lib-python/2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dep_util.py b/lib-python/2.7/distutils/tests/test_dep_util.py --- a/lib-python/2.7/distutils/tests/test_dep_util.py +++ b/lib-python/2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dir_util.py b/lib-python/2.7/distutils/tests/test_dir_util.py --- a/lib-python/2.7/distutils/tests/test_dir_util.py +++ b/lib-python/2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_dist.py b/lib-python/2.7/distutils/tests/test_dist.py --- a/lib-python/2.7/distutils/tests/test_dist.py +++ 
b/lib-python/2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_file_util.py b/lib-python/2.7/distutils/tests/test_file_util.py --- a/lib-python/2.7/distutils/tests/test_file_util.py +++ b/lib-python/2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_filelist.py b/lib-python/2.7/distutils/tests/test_filelist.py --- a/lib-python/2.7/distutils/tests/test_filelist.py +++ b/lib-python/2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ 
b/lib-python/2.7/distutils/tests/test_install.py @@ -3,6 +3,8 @@ import os import unittest +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -52,4 +54,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_data.py b/lib-python/2.7/distutils/tests/test_install_data.py --- a/lib-python/2.7/distutils/tests/test_install_data.py +++ b/lib-python/2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_headers.py b/lib-python/2.7/distutils/tests/test_install_headers.py --- a/lib-python/2.7/distutils/tests/test_install_headers.py +++ b/lib-python/2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_lib.py b/lib-python/2.7/distutils/tests/test_install_lib.py --- a/lib-python/2.7/distutils/tests/test_install_lib.py +++ b/lib-python/2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from 
distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_install_scripts.py b/lib-python/2.7/distutils/tests/test_install_scripts.py --- a/lib-python/2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_msvc9compiler.py b/lib-python/2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_register.py b/lib-python/2.7/distutils/tests/test_register.py --- a/lib-python/2.7/distutils/tests/test_register.py +++ b/lib-python/2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # 
let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_sdist.py b/lib-python/2.7/distutils/tests/test_sdist.py --- a/lib-python/2.7/distutils/tests/test_sdist.py +++ b/lib-python/2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') 
build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_spawn.py b/lib-python/2.7/distutils/tests/test_spawn.py --- a/lib-python/2.7/distutils/tests/test_spawn.py +++ b/lib-python/2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_text_file.py b/lib-python/2.7/distutils/tests/test_text_file.py --- a/lib-python/2.7/distutils/tests/test_text_file.py +++ b/lib-python/2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_unixccompiler.py 
b/lib-python/2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_upload.py b/lib-python/2.7/distutils/tests/test_upload.py --- a/lib-python/2.7/distutils/tests/test_upload.py +++ b/lib-python/2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_util.py b/lib-python/2.7/distutils/tests/test_util.py --- a/lib-python/2.7/distutils/tests/test_util.py +++ b/lib-python/2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/2.7/distutils/tests/test_version.py b/lib-python/2.7/distutils/tests/test_version.py --- a/lib-python/2.7/distutils/tests/test_version.py +++ b/lib-python/2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/tests/test_versionpredicate.py b/lib-python/2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/2.7/distutils/util.py b/lib-python/2.7/distutils/util.py --- a/lib-python/2.7/distutils/util.py +++ b/lib-python/2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/2.7/doctest.py b/lib-python/2.7/doctest.py --- a/lib-python/2.7/doctest.py +++ b/lib-python/2.7/doctest.py @@ -1217,7 +1217,7 @@ # Process each example. for examplenum, example in enumerate(test.examples): - # If REPORT_ONLY_FIRST_FAILURE is set, then supress + # If REPORT_ONLY_FIRST_FAILURE is set, then suppress # reporting after the first failure. 
quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and failures > 0) @@ -2186,7 +2186,7 @@ caller can catch the errors and initiate post-mortem debugging. The DocTestCase provides a debug method that raises - UnexpectedException errors if there is an unexepcted + UnexpectedException errors if there is an unexpected exception: >>> test = DocTestParser().get_doctest('>>> raise KeyError\n42', diff --git a/lib-python/2.7/email/charset.py b/lib-python/2.7/email/charset.py --- a/lib-python/2.7/email/charset.py +++ b/lib-python/2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/2.7/email/generator.py b/lib-python/2.7/email/generator.py --- a/lib-python/2.7/email/generator.py +++ b/lib-python/2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? - boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. 
+ alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/2.7/email/header.py b/lib-python/2.7/email/header.py --- a/lib-python/2.7/email/header.py +++ b/lib-python/2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. +_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/2.7/email/message.py b/lib-python/2.7/email/message.py --- a/lib-python/2.7/email/message.py +++ b/lib-python/2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. 
""" if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. """ return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/2.7/email/mime/application.py b/lib-python/2.7/email/mime/application.py --- a/lib-python/2.7/email/mime/application.py +++ b/lib-python/2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. 
diff --git a/lib-python/2.7/email/test/data/msg_26.txt b/lib-python/2.7/email/test/data/msg_26.txt --- a/lib-python/2.7/email/test/data/msg_26.txt +++ b/lib-python/2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/2.7/email/test/test_email.py b/lib-python/2.7/email/test/test_email.py --- a/lib-python/2.7/email/test/test_email.py +++ b/lib-python/2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3113,6 +3135,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/2.7/ftplib.py b/lib-python/2.7/ftplib.py --- a/lib-python/2.7/ftplib.py +++ b/lib-python/2.7/ftplib.py @@ -599,7 +599,7 @@ Usage example: >>> from ftplib import FTP_TLS >>> ftps = FTP_TLS('ftp.python.org') - >>> ftps.login() # login anonimously previously securing control channel + >>> ftps.login() # login anonymously previously securing control channel '230 Guest login ok, access restrictions apply.' 
>>> ftps.prot_p() # switch to secure data connection '200 Protection level set to P' diff --git a/lib-python/2.7/functools.py b/lib-python/2.7/functools.py --- a/lib-python/2.7/functools.py +++ b/lib-python/2.7/functools.py @@ -53,17 +53,17 @@ def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { - '__lt__': [('__gt__', lambda self, other: other < self), - ('__le__', lambda self, other: not other < self), + '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), + ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], - '__le__': [('__ge__', lambda self, other: other <= self), - ('__lt__', lambda self, other: not other <= self), + '__le__': [('__ge__', lambda self, other: not self <= other or self == other), + ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], - '__gt__': [('__lt__', lambda self, other: other > self), - ('__ge__', lambda self, other: not other > self), + '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), + ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], - '__ge__': [('__le__', lambda self, other: other >= self), - ('__gt__', lambda self, other: not other >= self), + '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), + ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) @@ -80,6 +80,7 @@ def cmp_to_key(mycmp): """Convert a cmp= function into a key= function""" class K(object): + __slots__ = ['obj'] def __init__(self, obj, *args): self.obj = obj def __lt__(self, other): diff --git a/lib-python/2.7/getpass.py b/lib-python/2.7/getpass.py --- a/lib-python/2.7/getpass.py +++ b/lib-python/2.7/getpass.py @@ -62,7 +62,7 @@ try: old = 
termios.tcgetattr(fd) # a copy to save new = old[:] - new[3] &= ~(termios.ECHO|termios.ISIG) # 3 == 'lflags' + new[3] &= ~termios.ECHO # 3 == 'lflags' tcsetattr_flags = termios.TCSAFLUSH if hasattr(termios, 'TCSASOFT'): tcsetattr_flags |= termios.TCSASOFT diff --git a/lib-python/2.7/gettext.py b/lib-python/2.7/gettext.py --- a/lib-python/2.7/gettext.py +++ b/lib-python/2.7/gettext.py @@ -316,7 +316,7 @@ # Note: we unconditionally convert both msgids and msgstrs to # Unicode using the character encoding specified in the charset # parameter of the Content-Type header. The gettext documentation - # strongly encourages msgids to be us-ascii, but some appliations + # strongly encourages msgids to be us-ascii, but some applications # require alternative encodings (e.g. Zope's ZCML and ZPT). For # traditional gettext applications, the msgid conversion will # cause no problems since us-ascii should always be a subset of diff --git a/lib-python/2.7/hashlib.py b/lib-python/2.7/hashlib.py --- a/lib-python/2.7/hashlib.py +++ b/lib-python/2.7/hashlib.py @@ -64,26 +64,29 @@ def __get_builtin_constructor(name): - if name in ('SHA1', 'sha1'): - import _sha - return _sha.new - elif name in ('MD5', 'md5'): - import _md5 - return _md5.new - elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): - import _sha256 - bs = name[3:] - if bs == '256': - return _sha256.sha256 - elif bs == '224': - return _sha256.sha224 - elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): - import _sha512 - bs = name[3:] - if bs == '512': - return _sha512.sha512 - elif bs == '384': - return _sha512.sha384 + try: + if name in ('SHA1', 'sha1'): + import _sha + return _sha.new + elif name in ('MD5', 'md5'): + import _md5 + return _md5.new + elif name in ('SHA256', 'sha256', 'SHA224', 'sha224'): + import _sha256 + bs = name[3:] + if bs == '256': + return _sha256.sha256 + elif bs == '224': + return _sha256.sha224 + elif name in ('SHA512', 'sha512', 'SHA384', 'sha384'): + import _sha512 + bs = name[3:] + if bs 
== '512': + return _sha512.sha512 + elif bs == '384': + return _sha512.sha384 + except ImportError: + pass # no extension module, this hash is unsupported. raise ValueError('unsupported hash type %s' % name) diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. + return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -215,11 +220,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +244,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +299,7 @@ while childpos < endpos: # Set childpos to index of smaller child. 
rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. heap[pos] = heap[childpos] @@ -364,7 +368,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +406,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any 
trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. if isinstance(message_body, str): msg += message_body message_body = None @@ -1233,6 +1247,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. 
+ m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. + text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. 
+ return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: 
webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) 
Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" % ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + 
title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. 
Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. + tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<<run-module-event-2>>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<<run-module-event-2>>')) + return 'break' + def getfilename(self): """Get source filename.
If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" - mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. 
The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. 
+ """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. """ def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<<about-idle>>', about_dialog) root.bind('<<open-config-dialog>>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<<close-all-windows>>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'.
- if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<<about-idle>>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<<open-config-dialog>>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0
- while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. 
""" if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -943,8 +943,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no arguments (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -251,7 +251,7 @@ if (_one_shot and c_make_encoder is not None - and not 
self.indent and not self.sort_keys): + and self.indent is None and not self.sort_keys): _iterencode = c_make_encoder( markers, self.default, _encoder, self.indent, self.key_separator, self.item_separator, self.sort_keys, diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + +@unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = 
unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ 
b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, 
OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git 
a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, 
PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - 
self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- 
a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' 
-class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: 
self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading highly-nested objects doesn't segfault when C + # accelerations are used. See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - 
self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff 
--git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ -import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - 
encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,68 +14,72 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - 
self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. 
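The test_object_pairs_hook_with_unicode case above checks two properties worth stating plainly: object_pairs_hook receives the (key, value) pairs in document order, and it takes priority over object_hook when both are passed. A quick standalone sketch:

```python
import json
from collections import OrderedDict

s = '{"xkd": 1, "kcw": 2, "art": 3}'

# The hook sees the pairs in the order they appear in the text.
pairs = json.loads(s, object_pairs_hook=lambda kv: kv)
print(pairs)  # [('xkd', 1), ('kcw', 2), ('art', 3)]

# object_pairs_hook wins over object_hook, so the OrderedDict survives
# even though object_hook would have mapped every object to None.
od = json.loads(s, object_pairs_hook=OrderedDict,
                object_hook=lambda d: None)
print(list(od))  # ['xkd', 'kcw', 'art']
```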
- self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/lib-tk/Tix.py b/lib-python/2.7/lib-tk/Tix.py --- a/lib-python/2.7/lib-tk/Tix.py +++ b/lib-python/2.7/lib-tk/Tix.py @@ -163,7 +163,7 @@ extensions) exist, then the image type is chosen according to the depth of the X display: xbm images are chosen on monochrome displays and color images are chosen on color displays. By using - tix_ getimage, you can advoid hard coding the pathnames of the + tix_ getimage, you can avoid hard coding the pathnames of the image files in your application. When successful, this command returns the name of the newly created image, which can be used to configure the -image option of the Tk and Tix widgets. @@ -171,7 +171,7 @@ return self.tk.call('tix', 'getimage', name) def tix_option_get(self, name): - """Gets the options manitained by the Tix + """Gets the options maintained by the Tix scheme mechanism. Available options include: active_bg active_fg bg @@ -576,7 +576,7 @@ class ComboBox(TixWidget): """ComboBox - an Entry field with a dropdown menu. The user can select a - choice by either typing in the entry subwdget or selecting from the + choice by either typing in the entry subwidget or selecting from the listbox subwidget. Subwidget Class @@ -869,7 +869,7 @@ """HList - Hierarchy display widget can be used to display any data that have a hierarchical structure, for example, file system directory trees. The list entries are indented and connected by branch lines - according to their places in the hierachy. + according to their places in the hierarchy. 
Subwidgets - None""" @@ -1520,7 +1520,7 @@ self.tk.call(self._w, 'selection', 'set', first, last) class Tree(TixWidget): - """Tree - The tixTree widget can be used to display hierachical + """Tree - The tixTree widget can be used to display hierarchical data in a tree form. The user can adjust the view of the tree by opening or closing parts of the tree.""" diff --git a/lib-python/2.7/lib-tk/Tkinter.py b/lib-python/2.7/lib-tk/Tkinter.py --- a/lib-python/2.7/lib-tk/Tkinter.py +++ b/lib-python/2.7/lib-tk/Tkinter.py @@ -1660,7 +1660,7 @@ class Tk(Misc, Wm): """Toplevel widget of Tk which represents mostly the main window - of an appliation. It has an associated Tcl interpreter.""" + of an application. It has an associated Tcl interpreter.""" _w = '.' def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None): diff --git a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py --- a/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py +++ b/lib-python/2.7/lib-tk/test/test_ttk/test_functions.py @@ -136,7 +136,7 @@ # minimum acceptable for image type self.assertEqual(ttk._format_elemcreate('image', False, 'test'), ("test ", ())) - # specifiyng a state spec + # specifying a state spec self.assertEqual(ttk._format_elemcreate('image', False, 'test', ('', 'a')), ("test {} a", ())) # state spec with multiple states diff --git a/lib-python/2.7/lib-tk/ttk.py b/lib-python/2.7/lib-tk/ttk.py --- a/lib-python/2.7/lib-tk/ttk.py +++ b/lib-python/2.7/lib-tk/ttk.py @@ -707,7 +707,7 @@ textvariable, values, width """ # The "values" option may need special formatting, so leave to - # _format_optdict the responsability to format it + # _format_optdict the responsibility to format it if "values" in kw: kw["values"] = _format_optdict({'v': kw["values"]})[1] @@ -993,7 +993,7 @@ pane is either an integer index or the name of a managed subwindow. 
If kw is not given, returns a dict of the pane option values. If option is specified then the value for that option is returned. - Otherwise, sets the options to the correspoding values.""" + Otherwise, sets the options to the corresponding values.""" if option is not None: kw[option] = None return _val_or_dict(kw, self.tk.call, self._w, "pane", pane) diff --git a/lib-python/2.7/lib-tk/turtle.py b/lib-python/2.7/lib-tk/turtle.py --- a/lib-python/2.7/lib-tk/turtle.py +++ b/lib-python/2.7/lib-tk/turtle.py @@ -1385,7 +1385,7 @@ Optional argument: picname -- a string, name of a gif-file or "nopic". - If picname is a filename, set the corresponing image as background. + If picname is a filename, set the corresponding image as background. If picname is "nopic", delete backgroundimage, if present. If picname is None, return the filename of the current backgroundimage. @@ -1409,7 +1409,7 @@ Optional arguments: canvwidth -- positive integer, new width of canvas in pixels canvheight -- positive integer, new height of canvas in pixels - bg -- colorstring or color-tupel, new backgroundcolor + bg -- colorstring or color-tuple, new backgroundcolor If no arguments are given, return current (canvaswidth, canvasheight) Do not alter the drawing window. To observe hidden parts of @@ -3079,9 +3079,9 @@ fill="", width=ps) # Turtle now at position old, self._position = old - ## if undo is done during crating a polygon, the last vertex - ## will be deleted. if the polygon is entirel deleted, - ## creatigPoly will be set to False. + ## if undo is done during creating a polygon, the last vertex + ## will be deleted. if the polygon is entirely deleted, + ## creatingPoly will be set to False. ## Polygons created before the last one will not be affected by undo() if self._creatingPoly: if len(self._poly) > 0: @@ -3221,7 +3221,7 @@ def dot(self, size=None, *color): """Draw a dot with diameter size, using color. 
- Optional argumentS: + Optional arguments: size -- an integer >= 1 (if given) color -- a colorstring or a numeric color tuple @@ -3691,7 +3691,7 @@ class Turtle(RawTurtle): - """RawTurtle auto-crating (scrolled) canvas. + """RawTurtle auto-creating (scrolled) canvas. When a Turtle object is created or a function derived from some Turtle method is called a TurtleScreen object is automatically created. @@ -3731,7 +3731,7 @@ filename -- a string, used as filename default value is turtle_docstringdict - Has to be called explicitely, (not used by the turtle-graphics classes) + Has to be called explicitly, (not used by the turtle-graphics classes) The docstring dictionary will be written to the Python script .py It is intended to serve as a template for translation of the docstrings into different languages. diff --git a/lib-python/2.7/lib2to3/__main__.py b/lib-python/2.7/lib2to3/__main__.py new file mode 100644 --- /dev/null +++ b/lib-python/2.7/lib2to3/__main__.py @@ -0,0 +1,4 @@ +import sys +from .main import main + +sys.exit(main("lib2to3.fixes")) diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools.py b/lib-python/2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py --- 
a/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! oneliners have no suite node, we have to fake one up diff --git a/lib-python/2.7/lib2to3/fixes/fix_urllib.py b/lib-python/2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/2.7/lib2to3/main.py b/lib-python/2.7/lib2to3/main.py --- a/lib-python/2.7/lib2to3/main.py +++ b/lib-python/2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") 
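The `member_name[1] == u'f'` ternary in the fix_itertools_imports hunk above selects the new spelling by the second letter of the old name; in both cases it amounts to dropping the leading "i". A hypothetical standalone restatement of the full rename table (not part of lib2to3; imap/izip/ifilter become builtins, so the fixer removes their itertools import entirely):

```python
def fix_itertools_name(name):
    """Map a Python 2 itertools member to its Python 3 spelling.

    Hypothetical helper mirroring the rules in the patched fixers.
    """
    if name in ('imap', 'izip', 'ifilter'):
        return name[1:]  # now builtins: map, zip, filter
    if name in ('ifilterfalse', 'izip_longest'):
        # Same rule the fixer's ternary encodes: strip the leading 'i'.
        return 'filterfalse' if name[1] == 'f' else 'zip_longest'
    return name  # e.g. chain, count are unchanged

print(fix_itertools_name('izip_longest'))  # zip_longest
```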
parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/2.7/lib2to3/patcomp.py b/lib-python/2.7/lib2to3/patcomp.py --- a/lib-python/2.7/lib2to3/patcomp.py +++ b/lib-python/2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/2.7/lib2to3/pgen2/conv.py b/lib-python/2.7/lib2to3/pgen2/conv.py --- a/lib-python/2.7/lib2to3/pgen2/conv.py +++ b/lib-python/2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/2.7/lib2to3/pgen2/driver.py b/lib-python/2.7/lib2to3/pgen2/driver.py --- a/lib-python/2.7/lib2to3/pgen2/driver.py +++ b/lib-python/2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/2.7/lib2to3/pytree.py b/lib-python/2.7/lib2to3/pytree.py --- a/lib-python/2.7/lib2to3/pytree.py +++ b/lib-python/2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -743,9 +743,11 @@ else: # The reason for this is that hitting the recursion limit usually # results in some ugly messages about how RuntimeErrors are being - # ignored. - save_stderr = sys.stderr - sys.stderr = StringIO() + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. 
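The driver.py hunk above replaces the hand-rolled generate_lines() generator with StringIO(text).readline, which already provides the contract tokenize expects: each line in turn, then "" forever. A quick sketch against Python 3's io and tokenize (the patch itself targets the Python 2 StringIO module, so the import spelling differs):

```python
import io
import token
import tokenize

source = "x = 1\n"

# After the text is exhausted, StringIO.readline() keeps returning "",
# which is exactly what the removed generate_lines() helper emulated
# by yielding each line and then an endless stream of empty strings.
buf = io.StringIO(source)
assert buf.readline() == "x = 1\n" and buf.readline() == ""

tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
print(token.tok_name[tokens[0].type], tokens[0].string)  # NAME x
```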
+ if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,7 +761,8 @@ r[self.name] = nodes[:count] yield count, r finally: - sys.stderr = save_stderr + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/2.7/lib2to3/refactor.py b/lib-python/2.7/lib2to3/refactor.py --- a/lib-python/2.7/lib2to3/refactor.py +++ b/lib-python/2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. """ + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with 
semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/2.7/lib2to3/tests/test_fixers.py b/lib-python/2.7/lib2to3/tests/test_fixers.py --- a/lib-python/2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, 
i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/2.7/lib2to3/tests/test_parser.py b/lib-python/2.7/lib2to3/tests/test_parser.py --- a/lib-python/2.7/lib2to3/tests/test_parser.py +++ b/lib-python/2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + + +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/2.7/lib2to3/tests/test_refactor.py b/lib-python/2.7/lib2to3/tests/test_refactor.py --- a/lib-python/2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/2.7/lib2to3/tests/test_util.py b/lib-python/2.7/lib2to3/tests/test_util.py --- a/lib-python/2.7/lib2to3/tests/test_util.py +++ b/lib-python/2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/2.7/locale.py b/lib-python/2.7/locale.py --- a/lib-python/2.7/locale.py +++ b/lib-python/2.7/locale.py @@ -621,7 +621,7 @@ 'tactis': 'TACTIS', 'euc_jp': 'eucJP', 'euc_kr': 'eucKR', - 'utf_8': 'UTF8', + 'utf_8': 'UTF-8', 'koi8_r': 'KOI8-R', 'koi8_u': 'KOI8-U', # XXX This list is 
still incomplete. If you know more diff --git a/lib-python/2.7/logging/__init__.py b/lib-python/2.7/logging/__init__.py --- a/lib-python/2.7/logging/__init__.py +++ b/lib-python/2.7/logging/__init__.py @@ -1627,6 +1627,7 @@ h = wr() if h: try: + h.acquire() h.flush() h.close() except (IOError, ValueError): @@ -1635,6 +1636,8 @@ # references to them are still around at # application exit. pass + finally: + h.release() except: if raiseExceptions: raise diff --git a/lib-python/2.7/logging/config.py b/lib-python/2.7/logging/config.py --- a/lib-python/2.7/logging/config.py +++ b/lib-python/2.7/logging/config.py @@ -226,14 +226,14 @@ propagate = 1 logger = logging.getLogger(qn) if qn in existing: - i = existing.index(qn) + i = existing.index(qn) + 1 # start with the entry after qn prefixed = qn + "." pflen = len(prefixed) num_existing = len(existing) - i = i + 1 # look at the entry after qn - while (i < num_existing) and (existing[i][:pflen] == prefixed): - child_loggers.append(existing[i]) - i = i + 1 + while i < num_existing: + if existing[i][:pflen] == prefixed: + child_loggers.append(existing[i]) + i += 1 existing.remove(qn) if "level" in opts: level = cp.get(sectname, "level") diff --git a/lib-python/2.7/logging/handlers.py b/lib-python/2.7/logging/handlers.py --- a/lib-python/2.7/logging/handlers.py +++ b/lib-python/2.7/logging/handlers.py @@ -125,6 +125,7 @@ """ if self.stream: self.stream.close() + self.stream = None if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) @@ -324,6 +325,7 @@ """ if self.stream: self.stream.close() + self.stream = None # get the time that this sequence started at and make it a TimeTuple t = self.rolloverAt - self.interval if self.utc: diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -234,27 +234,35 @@ def __init__(self, dirname, factory=rfc822.Message, create=True): """Initialize a Maildir 
instance.""" Mailbox.__init__(self, dirname, factory, create) + self._paths = { + 'tmp': os.path.join(self._path, 'tmp'), + 'new': os.path.join(self._path, 'new'), + 'cur': os.path.join(self._path, 'cur'), + } if not os.path.exists(self._path): if create: os.mkdir(self._path, 0700) - os.mkdir(os.path.join(self._path, 'tmp'), 0700) - os.mkdir(os.path.join(self._path, 'new'), 0700) - os.mkdir(os.path.join(self._path, 'cur'), 0700) + for path in self._paths.values(): + os.mkdir(path, 0o700) else: raise NoSuchMailboxError(self._path) self._toc = {} - self._last_read = None # Records last time we read cur/new - # NOTE: we manually invalidate _last_read each time we do any - # modifications ourselves, otherwise we might get tripped up by - # bogus mtime behaviour on some systems (see issue #6896). + self._toc_mtimes = {} + for subdir in ('cur', 'new'): + self._toc_mtimes[subdir] = os.path.getmtime(self._paths[subdir]) + self._last_read = time.time() # Records last time we read cur/new + self._skewfactor = 0.1 # Adjust if os/fs clocks are skewing def add(self, message): """Add message and return assigned key.""" tmp_file = self._create_tmp() try: self._dump_message(message, tmp_file) - finally: - _sync_close(tmp_file) + except BaseException: + tmp_file.close() + os.remove(tmp_file.name) + raise + _sync_close(tmp_file) if isinstance(message, MaildirMessage): subdir = message.get_subdir() suffix = self.colon + message.get_info() @@ -280,15 +288,11 @@ raise if isinstance(message, MaildirMessage): os.utime(dest, (os.path.getatime(dest), message.get_date())) - # Invalidate cached toc - self._last_read = None return uniq def remove(self, key): """Remove the keyed message; raise KeyError if it doesn't exist.""" os.remove(os.path.join(self._path, self._lookup(key))) - # Invalidate cached toc (only on success) - self._last_read = None def discard(self, key): """If the keyed message exists, remove it.""" @@ -323,8 +327,6 @@ if isinstance(message, MaildirMessage): os.utime(new_path, 
(os.path.getatime(new_path), message.get_date())) - # Invalidate cached toc - self._last_read = None def get_message(self, key): """Return a Message representation or raise a KeyError.""" @@ -380,8 +382,8 @@ def flush(self): """Write any pending changes to disk.""" # Maildir changes are always written immediately, so there's nothing - # to do except invalidate our cached toc. - self._last_read = None + # to do. + pass def lock(self): """Lock the mailbox.""" @@ -479,36 +481,39 @@ def _refresh(self): """Update table of contents mapping.""" - if self._last_read is not None: - for subdir in ('new', 'cur'): - mtime = os.path.getmtime(os.path.join(self._path, subdir)) - if mtime > self._last_read: - break - else: + # If it has been less than two seconds since the last _refresh() call, + # we have to unconditionally re-read the mailbox just in case it has + # been modified, because os.path.mtime() has a 2 sec resolution in the + # most common worst case (FAT) and a 1 sec resolution typically. This + # results in a few unnecessary re-reads when _refresh() is called + # multiple times in that interval, but once the clock ticks over, we + # will only re-read as needed. Because the filesystem might be being + # served by an independent system with its own clock, we record and + # compare with the mtimes from the filesystem. Because the other + # system's clock might be skewing relative to our clock, we add an + # extra delta to our wait. The default is one tenth second, but is an + # instance variable and so can be adjusted if dealing with a + # particularly skewed or irregular system. 
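The long comment above boils down to a two-step decision: within roughly two seconds (plus a skew allowance) of the last read, coarse filesystem mtimes cannot be trusted, so rescan unconditionally; outside that window, rescan only when a subdirectory's recorded mtime has advanced. A hypothetical pure-function restatement (names invented for illustration; the real logic lives inline in Maildir._refresh()):

```python
def maildir_needs_rescan(now, last_read, cached_mtimes, current_mtimes,
                         skewfactor=0.1):
    """Decide whether a Maildir table of contents must be rebuilt."""
    # Within the mtime-granularity window (2 sec on FAT, 1 sec
    # typically), plus a skew allowance for a remote filesystem's
    # clock, changes may be invisible: always rescan.
    if now - last_read <= 2 + skewfactor:
        return True
    # Outside the window the mtimes are trustworthy: rescan only if
    # 'new' or 'cur' was modified since we last recorded it.
    return any(current_mtimes[d] > cached_mtimes[d] for d in cached_mtimes)

print(maildir_needs_rescan(100.0, 99.5,
                           {'new': 0, 'cur': 0},
                           {'new': 0, 'cur': 0}))  # True: inside the window
```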
+ if time.time() - self._last_read > 2 + self._skewfactor: + refresh = False + for subdir in self._toc_mtimes: + mtime = os.path.getmtime(self._paths[subdir]) + if mtime > self._toc_mtimes[subdir]: + refresh = True + self._toc_mtimes[subdir] = mtime + if not refresh: return - - # We record the current time - 1sec so that, if _refresh() is called - # again in the same second, we will always re-read the mailbox - # just in case it's been modified. (os.path.mtime() only has - # 1sec resolution.) This results in a few unnecessary re-reads - # when _refresh() is called multiple times in the same second, - # but once the clock ticks over, we will only re-read as needed. - now = time.time() - 1 - + # Refresh toc self._toc = {} - def update_dir (subdir): - path = os.path.join(self._path, subdir) + for subdir in self._toc_mtimes: + path = self._paths[subdir] for entry in os.listdir(path): p = os.path.join(path, entry) if os.path.isdir(p): continue uniq = entry.split(self.colon)[0] self._toc[uniq] = os.path.join(subdir, entry) - - update_dir('new') - update_dir('cur') - - self._last_read = now + self._last_read = time.time() def _lookup(self, key): """Use TOC to return subpath for given key, or raise a KeyError.""" @@ -551,7 +556,7 @@ f = open(self._path, 'wb+') else: raise NoSuchMailboxError(self._path) - elif e.errno == errno.EACCES: + elif e.errno in (errno.EACCES, errno.EROFS): f = open(self._path, 'rb') else: raise @@ -700,9 +705,14 @@ def _append_message(self, message): """Append message to mailbox and return (start, stop) offsets.""" self._file.seek(0, 2) - self._pre_message_hook(self._file) - offsets = self._install_message(message) - self._post_message_hook(self._file) + before = self._file.tell() + try: + self._pre_message_hook(self._file) + offsets = self._install_message(message) + self._post_message_hook(self._file) + except BaseException: + self._file.truncate(before) + raise self._file.flush() self._file_length = self._file.tell() # Record current length of 
mailbox return offsets @@ -868,18 +878,29 @@ new_key = max(keys) + 1 new_path = os.path.join(self._path, str(new_key)) f = _create_carefully(new_path) + closed = False try: if self._locked: _lock_file(f) try: - self._dump_message(message, f) + try: + self._dump_message(message, f) + except BaseException: + # Unlock and close so it can be deleted on Windows + if self._locked: + _unlock_file(f) + _sync_close(f) + closed = True + os.remove(new_path) + raise if isinstance(message, MHMessage): self._dump_sequences(message, new_key) finally: if self._locked: _unlock_file(f) finally: - _sync_close(f) + if not closed: + _sync_close(f) return new_key def remove(self, key): @@ -1886,7 +1907,7 @@ try: fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB) except IOError, e: - if e.errno in (errno.EAGAIN, errno.EACCES): + if e.errno in (errno.EAGAIN, errno.EACCES, errno.EROFS): raise ExternalClashError('lockf: lock unavailable: %s' % f.name) else: @@ -1896,7 +1917,7 @@ pre_lock = _create_temporary(f.name + '.lock') pre_lock.close() except IOError, e: - if e.errno == errno.EACCES: + if e.errno in (errno.EACCES, errno.EROFS): return # Without write access, just skip dotlocking. 
else: raise diff --git a/lib-python/2.7/msilib/__init__.py b/lib-python/2.7/msilib/__init__.py --- a/lib-python/2.7/msilib/__init__.py +++ b/lib-python/2.7/msilib/__init__.py @@ -173,11 +173,10 @@ add_data(db, table, getattr(module, table)) def make_id(str): - #str = str.replace(".", "_") # colons are allowed - str = str.replace(" ", "_") - str = str.replace("-", "_") - if str[0] in string.digits: - str = "_"+str + identifier_chars = string.ascii_letters + string.digits + "._" + str = "".join([c if c in identifier_chars else "_" for c in str]) + if str[0] in (string.digits + "."): + str = "_" + str assert re.match("^[A-Za-z_][A-Za-z0-9_.]*$", str), "FILE"+str return str @@ -285,19 +284,28 @@ [(feature.id, component)]) def make_short(self, file): + oldfile = file + file = file.replace('+', '_') + file = ''.join(c for c in file if not c in ' "/\[]:;=,') parts = file.split(".") - if len(parts)>1: + if len(parts) > 1: + prefix = "".join(parts[:-1]).upper() suffix = parts[-1].upper() + if not prefix: + prefix = suffix + suffix = None else: + prefix = file.upper() suffix = None - prefix = parts[0].upper() - if len(prefix) <= 8 and (not suffix or len(suffix)<=3): + if len(parts) < 3 and len(prefix) <= 8 and file == oldfile and ( + not suffix or len(suffix) <= 3): if suffix: file = prefix+"."+suffix else: file = prefix - assert file not in self.short_names else: + file = None + if file is None or file in self.short_names: prefix = prefix[:6] if suffix: suffix = suffix[:3] diff --git a/lib-python/2.7/multiprocessing/__init__.py b/lib-python/2.7/multiprocessing/__init__.py --- a/lib-python/2.7/multiprocessing/__init__.py +++ b/lib-python/2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. 
# __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/2.7/multiprocessing/connection.py b/lib-python/2.7/multiprocessing/connection.py --- a/lib-python/2.7/multiprocessing/connection.py +++ b/lib-python/2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/dummy/__init__.py b/lib-python/2.7/multiprocessing/dummy/__init__.py --- a/lib-python/2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/dummy/connection.py b/lib-python/2.7/multiprocessing/dummy/connection.py --- a/lib-python/2.7/multiprocessing/dummy/connection.py +++ b/lib-python/2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -172,6 +198,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -181,7 +208,7 @@ # People embedding Python want to modify it. # - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -371,7 +398,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/2.7/multiprocessing/heap.py b/lib-python/2.7/multiprocessing/heap.py --- a/lib-python/2.7/multiprocessing/heap.py +++ b/lib-python/2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/2.7/multiprocessing/managers.py b/lib-python/2.7/multiprocessing/managers.py --- a/lib-python/2.7/multiprocessing/managers.py +++ b/lib-python/2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/2.7/multiprocessing/pool.py b/lib-python/2.7/multiprocessing/pool.py --- a/lib-python/2.7/multiprocessing/pool.py +++ b/lib-python/2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. 
# __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. + debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/2.7/multiprocessing/process.py b/lib-python/2.7/multiprocessing/process.py --- a/lib-python/2.7/multiprocessing/process.py +++ b/lib-python/2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. 
Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/2.7/multiprocessing/queues.py b/lib-python/2.7/multiprocessing/queues.py --- a/lib-python/2.7/multiprocessing/queues.py +++ b/lib-python/2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. 
Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/2.7/multiprocessing/reduction.py b/lib-python/2.7/multiprocessing/reduction.py --- a/lib-python/2.7/multiprocessing/reduction.py +++ b/lib-python/2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. 
Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/2.7/multiprocessing/sharedctypes.py b/lib-python/2.7/multiprocessing/sharedctypes.py --- a/lib-python/2.7/multiprocessing/sharedctypes.py +++ b/lib-python/2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. 
Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/2.7/multiprocessing/synchronize.py b/lib-python/2.7/multiprocessing/synchronize.py --- a/lib-python/2.7/multiprocessing/synchronize.py +++ b/lib-python/2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. 
Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/2.7/multiprocessing/util.py b/lib-python/2.7/multiprocessing/util.py --- a/lib-python/2.7/multiprocessing/util.py +++ b/lib-python/2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import itertools diff --git a/lib-python/2.7/netrc.py b/lib-python/2.7/netrc.py --- a/lib-python/2.7/netrc.py +++ b/lib-python/2.7/netrc.py @@ -34,11 +34,19 @@ def _parse(self, file, fp): lexer = shlex.shlex(fp) lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" + lexer.commenters = lexer.commenters.replace('#', '') while 1: # Look for a machine, default, or macdef top-level keyword toplevel = tt = lexer.get_token() if not tt: break + elif tt[0] == '#': + # seek to beginning of comment, in case reading the token put + # us on a new line, and then skip the rest of the line. 
+ pos = len(tt) + 1 + lexer.instream.seek(-pos, 1) + lexer.instream.readline() + continue elif tt == 'machine': entryname = lexer.get_token() elif tt == 'default': @@ -64,8 +72,8 @@ self.hosts[entryname] = {} while 1: tt = lexer.get_token() - if (tt=='' or tt == 'machine' or - tt == 'default' or tt =='macdef'): + if (tt.startswith('#') or + tt in {'', 'machine', 'default', 'macdef'}): if password: self.hosts[entryname] = (login, account, password) lexer.push_token(tt) diff --git a/lib-python/2.7/nntplib.py b/lib-python/2.7/nntplib.py --- a/lib-python/2.7/nntplib.py +++ b/lib-python/2.7/nntplib.py @@ -103,7 +103,7 @@ readermode is sometimes necessary if you are connecting to an NNTP server on the local machine and intend to call - reader-specific comamnds, such as `group'. If you get + reader-specific commands, such as `group'. If you get unexpected NNTPPermanentErrors, you might need to set readermode. """ diff --git a/lib-python/2.7/ntpath.py b/lib-python/2.7/ntpath.py --- a/lib-python/2.7/ntpath.py +++ b/lib-python/2.7/ntpath.py @@ -310,7 +310,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. 
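As a side note on the netrc.py hunks above: the patch clears `lexer.commenters` so that shlex stops swallowing `#` comments silently, then skips comment lines itself. The following standalone sketch (not part of the patch; written for a modern Python, while the patched file targets 2.7 where shlex behaves the same way) shows the underlying shlex behavior the fix relies on:

```python
import io
import shlex

# Hypothetical input, loosely shaped like a .netrc file.
src = "machine example.com  # trailing comment\nlogin bob\n"

# Default shlex: '#' starts a comment, so the comment text never
# appears in the token stream at all.
default_lexer = shlex.shlex(io.StringIO(src))
default_tokens = list(iter(default_lexer.get_token, ''))

# With `commenters` cleared (as the patch does), '#' comes back as an
# ordinary token, letting the caller detect and skip the comment line.
raw_lexer = shlex.shlex(io.StringIO(src))
raw_lexer.commenters = ''
raw_tokens = list(iter(raw_lexer.get_token, ''))

print('#' in default_tokens)  # False: comments are dropped silently
print('#' in raw_tokens)      # True: the parser sees the comment marker
```

The patch additionally extends `wordchars` and rewinds the stream to consume the rest of the comment line, which this sketch does not reproduce.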
diff --git a/lib-python/2.7/nturl2path.py b/lib-python/2.7/nturl2path.py --- a/lib-python/2.7/nturl2path.py +++ b/lib-python/2.7/nturl2path.py @@ -25,11 +25,14 @@ error = 'Bad URL: ' + url raise IOError, error drive = comp[0][-1].upper() + path = drive + ':' components = comp[1].split('/') - path = drive + ':' - for comp in components: + for comp in components: if comp: path = path + '\\' + urllib.unquote(comp) + # Issue #11474: url like '/C|/' should convert into 'C:\\' + if path.endswith(':') and url.endswith('/'): + path += '\\' return path def pathname2url(p): diff --git a/lib-python/2.7/numbers.py b/lib-python/2.7/numbers.py --- a/lib-python/2.7/numbers.py +++ b/lib-python/2.7/numbers.py @@ -63,7 +63,7 @@ @abstractproperty def imag(self): - """Retrieve the real component of this number. + """Retrieve the imaginary component of this number. This should subclass Real. """ diff --git a/lib-python/2.7/optparse.py b/lib-python/2.7/optparse.py --- a/lib-python/2.7/optparse.py +++ b/lib-python/2.7/optparse.py @@ -1131,6 +1131,11 @@ prog : string the name of the current program (to override os.path.basename(sys.argv[0])). + description : string + A paragraph of text giving a brief overview of your program. + optparse reformats this paragraph to fit the current terminal + width and prints it when the user requests help (after usage, + but before the list of options). epilog : string paragraph of help text to print after option help diff --git a/lib-python/2.7/pickletools.py b/lib-python/2.7/pickletools.py --- a/lib-python/2.7/pickletools.py +++ b/lib-python/2.7/pickletools.py @@ -1370,7 +1370,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. 
"""), diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/2.7/platform.py b/lib-python/2.7/platform.py --- a/lib-python/2.7/platform.py +++ b/lib-python/2.7/platform.py @@ -503,7 +503,7 @@ info = pipe.read() if pipe.close(): raise os.error,'command failed' - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error,why: #print 'Command %s failed: %s' % (cmd,why) @@ -1448,9 +1448,10 @@ """ Returns a string identifying the Python implementation. Currently, the following implementations are identified: - 'CPython' (C implementation of Python), - 'IronPython' (.NET implementation of Python), - 'Jython' (Java implementation of Python). + 'CPython' (C implementation of Python), + 'IronPython' (.NET implementation of Python), + 'Jython' (Java implementation of Python), + 'PyPy' (Python implementation of Python). """ return _sys_version()[0] diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. _hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. 
if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '<td width="%d%%" valign=top>' % (100/cols) + result = result + '<td width="%d%%" valign=top>' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '<br>
    \n' @@ -627,7 +630,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -643,13 +646,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -773,7 +776,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1042,18 +1045,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. 
if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1113,7 +1116,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1186,7 +1189,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1718,8 +1721,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/2.7/pydoc_data/topics.py b/lib-python/2.7/pydoc_data/topics.py --- a/lib-python/2.7/pydoc_data/topics.py +++ b/lib-python/2.7/pydoc_data/topics.py @@ -1,16 +1,16 @@ -# Autogenerated by Sphinx on Sat Jul 3 08:52:04 2010 +# Autogenerated by Sphinx on Sat Jun 11 09:49:30 2011 topics = {'assert': u'\nThe ``assert`` statement\n************************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, ``assert expression``, is equivalent to\n\n if __debug__:\n if not expression: raise AssertionError\n\nThe extended form, ``assert expression1, expression2``, is equivalent\nto\n\n if __debug__:\n if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that 
``__debug__`` and ``AssertionError``\nrefer to the built-in variables with those names. In the current\nimplementation, the built-in variable ``__debug__`` is ``True`` under\nnormal circumstances, ``False`` when optimization is requested\n(command line option -O). The current code generator emits no code\nfor an assert statement when optimization is requested at compile\ntime. Note that it is unnecessary to include the source code for the\nexpression that failed in the error message; it will be displayed as\npart of the stack trace.\n\nAssignments to ``__debug__`` are illegal. The value for the built-in\nvariable is determined when the interpreter starts.\n', - 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. 
The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets. (This rule is relaxed as of\n Python 1.5; in earlier versions, the object had to be a tuple.\n Since strings are sequences, an assignment like ``a, b = "xy"`` is\n now legal as long as the string has the right length.)\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. 
That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. 
This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! 
For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. 
Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', + 'assignment': u'\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n target_list ::= target ("," target)* [","]\n target ::= identifier\n | "(" target_list ")"\n | "[" target_list "]"\n | attributeref\n | subscription\n | slicing\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable. 
The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list is recursively defined as\nfollows.\n\n* If the target list is a single target: The object is assigned to\n that target.\n\n* If the target list is a comma-separated list of targets: The object\n must be an iterable with the same number of items as there are\n targets in the target list, and the items are assigned, from left to\n right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n * If the name does not occur in a ``global`` statement in the\n current code block: the name is bound to the object in the current\n local namespace.\n\n * Otherwise: the name is bound to the object in the current global\n namespace.\n\n The name is rebound if it was already bound. This may cause the\n reference count for the object previously bound to the name to reach\n zero, causing the object to be deallocated and its destructor (if it\n has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in square\n brackets: The object must be an iterable with the same number of\n items as there are targets in the target list, and its items are\n assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n the reference is evaluated. It should yield an object with\n assignable attributes; if this is not the case, ``TypeError`` is\n raised. 
That object is then asked to assign the assigned object to\n the given attribute; if it cannot perform the assignment, it raises\n an exception (usually but not necessarily ``AttributeError``).\n\n Note: If the object is a class instance and the attribute reference\n occurs on both sides of the assignment operator, the RHS expression,\n ``a.x`` can access either an instance attribute or (if no instance\n attribute exists) a class attribute. The LHS target ``a.x`` is\n always set as an instance attribute, creating it if necessary.\n Thus, the two occurrences of ``a.x`` do not necessarily refer to the\n same attribute: if the RHS expression refers to a class attribute,\n the LHS creates a new instance attribute as the target of the\n assignment:\n\n class Cls:\n x = 3 # class variable\n inst = Cls()\n inst.x = inst.x + 1 # writes inst.x as 4 leaving Cls.x as 3\n\n This description does not necessarily apply to descriptor\n attributes, such as properties created with ``property()``.\n\n* If the target is a subscription: The primary expression in the\n reference is evaluated. It should yield either a mutable sequence\n object (such as a list) or a mapping object (such as a dictionary).\n Next, the subscript expression is evaluated.\n\n If the primary is a mutable sequence object (such as a list), the\n subscript must yield a plain integer. If it is negative, the\n sequence\'s length is added to it. The resulting value must be a\n nonnegative integer less than the sequence\'s length, and the\n sequence is asked to assign the assigned object to its item with\n that index. If the index is out of range, ``IndexError`` is raised\n (assignment to a subscripted sequence cannot add new items to a\n list).\n\n If the primary is a mapping object (such as a dictionary), the\n subscript must have a type compatible with the mapping\'s key type,\n and the mapping is then asked to create a key/datum pair which maps\n the subscript to the assigned object. 
This can either replace an\n existing key/value pair with the same key value, or insert a new\n key/value pair (if no key with the same value existed).\n\n* If the target is a slicing: The primary expression in the reference\n is evaluated. It should yield a mutable sequence object (such as a\n list). The assigned object should be a sequence object of the same\n type. Next, the lower and upper bound expressions are evaluated,\n insofar they are present; defaults are zero and the sequence\'s\n length. The bounds should evaluate to (small) integers. If either\n bound is negative, the sequence\'s length is added to it. The\n resulting bounds are clipped to lie between zero and the sequence\'s\n length, inclusive. Finally, the sequence object is asked to replace\n the slice with the items of the assigned sequence. The length of\n the slice may be different from the length of the assigned sequence,\n thus changing the length of the target sequence, if the object\n allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample ``a, b = b, a`` swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe! 
For instance, the\nfollowing program prints ``[0, 2]``:\n\n x = [0, 1]\n i = 0\n i, x[i] = 1, 2\n print x\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'atom-identifiers': u'\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name. 
See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a ``NameError`` exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them. The transformation inserts the\nclass name in front of the name, with leading underscores removed, and\na single underscore inserted in front of the class name. For example,\nthe identifier ``__spam`` occurring in a class named ``Ham`` will be\ntransformed to ``_Ham__spam``. This transformation is independent of\nthe syntactical context in which the identifier is used. If the\ntransformed name is extremely long (longer than 255 characters),\nimplementation defined truncation may happen. If the class name\nconsists only of underscores, no transformation is done.\n', 'atom-literals': u"\nLiterals\n********\n\nPython supports string literals and various numeric literals:\n\n literal ::= stringliteral | integer | longinteger\n | floatnumber | imagnumber\n\nEvaluation of a literal yields an object of the given type (string,\ninteger, long integer, floating point number, complex number) with the\ngiven value. The value may be approximated in the case of floating\npoint and imaginary (complex) literals. See section *Literals* for\ndetails.\n\nAll literals correspond to immutable data types, and hence the\nobject's identity is less important than its value. 
Multiple\nevaluations of literals with the same value (either the same\noccurrence in the program text or a different occurrence) may obtain\nthe same object or a different object with the same value.\n", - 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). *name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. 
For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. 
Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. 
Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. 
This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. 
If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', + 'attribute-access': u'\nCustomizing attribute access\n****************************\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). 
*name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n===========================================\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. 
See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n========================\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n====================\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. 
For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non the which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. 
Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n=========\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. 
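The descriptor protocol described above can be sketched with a hypothetical data descriptor, `Positive`; it is a *data* descriptor because it defines both `__get__()` and `__set__()`, so it takes precedence over the instance dictionary:

```python
class Positive(object):
    # A data descriptor: defines both __get__() and __set__(),
    # so it overrides a value stored in the instance dictionary.
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self                      # accessed through the owner class
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if value <= 0:
            raise ValueError("must be positive")
        instance.__dict__[self.name] = value

class Account(object):
    balance = Positive('balance')            # lives in the owner's __dict__

a = Account()
a.balance = 10       # type(a).__dict__['balance'].__set__(a, 10)
```

Accessing `a.balance` is transformed into `type(a).__dict__['balance'].__get__(a, type(a))`, matching the "Instance Binding" rule quoted above.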
If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. 
As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n', 'attribute-references': u'\nAttribute references\n********************\n\nAn attribute reference is a primary followed by a period and a name:\n\n attributeref ::= primary "." identifier\n\nThe primary must evaluate to an object of a type that supports\nattribute references, e.g., a module, list, or an instance. This\nobject is then asked to produce the attribute whose name is the\nidentifier. If this attribute is not available, the exception\n``AttributeError`` is raised. Otherwise, the type and value of the\nobject produced is determined by the object. 
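The *__slots__* notes above can be demonstrated with a small sketch (the class `Point` is a hypothetical example):

```python
class Point(object):
    # Reserves space for exactly these two instance variables;
    # no per-instance __dict__ is created.
    __slots__ = ('x', 'y')

p = Point()
p.x, p.y = 1, 2

# Assigning a name not listed in __slots__ raises AttributeError,
# since there is no __dict__ to fall back on.
try:
    p.z = 3
    slot_error = False
except AttributeError:
    slot_error = True
```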
Multiple evaluations of\nthe same attribute reference may yield different objects.\n', 'augassign': u'\nAugmented assignment statements\n*******************************\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n augtarget ::= identifier | attributeref | subscription | slicing\n augop ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like ``x += 1`` can be rewritten as\n``x = x + 1`` to achieve a similar, but not exactly equal effect. In\nthe augmented version, ``x`` is only evaluated once. Also, when\npossible, the actual operation is performed *in-place*, meaning that\nrather than creating a new object and assigning that to the target,\nthe old object is modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n', 'binary': u'\nBinary arithmetic operations\n****************************\n\nThe binary arithmetic operations have the conventional priority\nlevels. 
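The in-place behavior of augmented assignment described above is easiest to see with a mutable type such as a list, where `x += y` and `x = x + y` are observably different:

```python
# For mutable objects, x += y modifies the object in place,
# while x = x + y creates a new object and rebinds the name.
a = [1, 2]
alias = a
a += [3]                       # in-place: extends the same list object
in_place_same = alias is a     # the alias sees the change

b = [1, 2]
alias_b = b
b = b + [3]                    # builds a new list, rebinds b
new_object = alias_b is b      # the alias still names the old list
```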
Note that some of these operations also apply to certain non-\nnumeric types. Apart from the power operator, there are only two\nlevels, one for multiplicative operators and one for additive\noperators:\n\n m_expr ::= u_expr | m_expr "*" u_expr | m_expr "//" u_expr | m_expr "/" u_expr\n | m_expr "%" u_expr\n a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr\n\nThe ``*`` (multiplication) operator yields the product of its\narguments. The arguments must either both be numbers, or one argument\nmust be an integer (plain or long) and the other must be a sequence.\nIn the former case, the numbers are converted to a common type and\nthen multiplied together. In the latter case, sequence repetition is\nperformed; a negative repetition factor yields an empty sequence.\n\nThe ``/`` (division) and ``//`` (floor division) operators yield the\nquotient of their arguments. The numeric arguments are first\nconverted to a common type. Plain or long integer division yields an\ninteger of the same type; the result is that of mathematical division\nwith the \'floor\' function applied to the result. Division by zero\nraises the ``ZeroDivisionError`` exception.\n\nThe ``%`` (modulo) operator yields the remainder from the division of\nthe first argument by the second. The numeric arguments are first\nconverted to a common type. A zero right argument raises the\n``ZeroDivisionError`` exception. The arguments may be floating point\nnumbers, e.g., ``3.14%0.7`` equals ``0.34`` (since ``3.14`` equals\n``4*0.7 + 0.34``.) The modulo operator always yields a result with\nthe same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second\noperand [2].\n\nThe integer division and modulo operators are connected by the\nfollowing identity: ``x == (x/y)*y + (x%y)``. Integer division and\nmodulo are also connected with the built-in function ``divmod()``:\n``divmod(x, y) == (x/y, x%y)``. 
These identities don\'t hold for\nfloating point numbers; there similar identities hold approximately\nwhere ``x/y`` is replaced by ``floor(x/y)`` or ``floor(x/y) - 1`` [3].\n\nIn addition to performing the modulo operation on numbers, the ``%``\noperator is also overloaded by string and unicode objects to perform\nstring formatting (also known as interpolation). The syntax for string\nformatting is described in the Python Library Reference, section\n*String Formatting Operations*.\n\nDeprecated since version 2.3: The floor division operator, the modulo\noperator, and the ``divmod()`` function are no longer defined for\ncomplex numbers. Instead, convert to a floating point number using\nthe ``abs()`` function if appropriate.\n\nThe ``+`` (addition) operator yields the sum of its arguments. The\narguments must either both be numbers or both sequences of the same\ntype. In the former case, the numbers are converted to a common type\nand then added together. In the latter case, the sequences are\nconcatenated.\n\nThe ``-`` (subtraction) operator yields the difference of its\narguments. The numeric arguments are first converted to a common\ntype.\n', 'bitwise': u'\nBinary bitwise operations\n*************************\n\nEach of the three bitwise operations has a different priority level:\n\n and_expr ::= shift_expr | and_expr "&" shift_expr\n xor_expr ::= and_expr | xor_expr "^" and_expr\n or_expr ::= xor_expr | or_expr "|" xor_expr\n\nThe ``&`` operator yields the bitwise AND of its arguments, which must\nbe plain or long integers. The arguments are converted to a common\ntype.\n\nThe ``^`` operator yields the bitwise XOR (exclusive OR) of its\narguments, which must be plain or long integers. The arguments are\nconverted to a common type.\n\nThe ``|`` operator yields the bitwise (inclusive) OR of its arguments,\nwhich must be plain or long integers. 
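The identities connecting integer division, modulo, and `divmod()` quoted above can be checked directly. The sketch below uses the explicit floor-division operator `//`, which behaves like the plain-integer `/` described in the text:

```python
x, y = 7, -3

# The modulo result takes the sign of the second operand (or is zero)...
assert x % y == -2

# ...and the documented identities hold:
assert (x // y) * y + (x % y) == x
assert divmod(x, y) == (x // y, x % y)
```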
The arguments are converted to\na common type.\n', 'bltin-code-objects': u'\nCode Objects\n************\n\nCode objects are used by the implementation to represent "pseudo-\ncompiled" executable Python code such as a function body. They differ\nfrom function objects because they don\'t contain a reference to their\nglobal execution environment. Code objects are returned by the built-\nin ``compile()`` function and can be extracted from function objects\nthrough their ``func_code`` attribute. See also the ``code`` module.\n\nA code object can be executed or evaluated by passing it (instead of a\nsource string) to the ``exec`` statement or the built-in ``eval()``\nfunction.\n\nSee *The standard type hierarchy* for more information.\n', 'bltin-ellipsis-object': u'\nThe Ellipsis Object\n*******************\n\nThis object is used by extended slice notation (see *Slicings*). It\nsupports no special operations. There is exactly one ellipsis object,\nnamed ``Ellipsis`` (a built-in name).\n\nIt is written as ``Ellipsis``.\n', - 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. 
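A quick sketch of the three binary bitwise operators and the priority levels given in the grammar above:

```python
# AND, XOR, and (inclusive) OR on integers.
assert 0b1100 & 0b1010 == 0b1000
assert 0b1100 ^ 0b1010 == 0b0110
assert 0b1100 | 0b1010 == 0b1110

# Per the grammar, & binds tighter than ^, which binds tighter than |.
assert 1 | 2 ^ 2 & 3 == (1 | (2 ^ (2 & 3)))
```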
A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
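The `with`-statement idiom recommended above can be sketched with a throwaway file (the path is built with the `tempfile` module mentioned earlier; the filename is arbitrary):

```python
import os
import tempfile

# The with blocks guarantee close() is called on exit,
# even if an exception is raised inside the block.
path = os.path.join(tempfile.mkdtemp(), "hello.txt")

with open(path, "w") as f:
    f.write("hello\n")
assert f.closed                    # closed automatically on block exit

with open(path) as f:
    lines = [line for line in f]   # a file object is its own iterator
```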
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. 
An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. An empty string is\n returned *only* when EOF is encountered immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. 
The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. 
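The `seek()`/`tell()` behavior documented above can be sketched with an in-memory binary stream; `io.BytesIO` supports the same interface as a real binary-mode file object, and binary mode sidesteps the text-mode offset restrictions noted in the text:

```python
import io
import os

f = io.BytesIO(b"abcdefgh")

f.seek(2, os.SEEK_SET)        # absolute positioning (whence = 0)
assert f.tell() == 2

f.seek(2, os.SEEK_CUR)        # relative to the current position (whence = 1)
assert f.tell() == 4

f.seek(-3, os.SEEK_END)       # relative to the end (whence = 2)
assert f.read() == b"fgh"     # the last three bytes
```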
Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. 
This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with the *--with-universal-newlines* option to\n **configure** (the default) this read-only attribute exists, and\n for files opened in universal newline read mode it keeps track of\n the types of newlines encountered while reading the file. The\n values it can take are ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None``\n (unknown, no newlines read yet) or a tuple containing all the\n newline types seen, to indicate that multiple newline conventions\n were encountered. For files not opened in universal newline read\n mode the value of this attribute will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', + 'bltin-file-objects': u'\nFile Objects\n************\n\nFile objects are implemented using C\'s ``stdio`` package and can be\ncreated with the built-in ``open()`` function. File objects are also\nreturned by some other built-in functions and methods, such as\n``os.popen()`` and ``os.fdopen()`` and the ``makefile()`` method of\nsocket objects. 
Temporary files can be created using the ``tempfile``\nmodule, and high-level file operations such as copying, moving, and\ndeleting files and directories can be achieved with the ``shutil``\nmodule.\n\nWhen a file operation fails for an I/O-related reason, the exception\n``IOError`` is raised. This includes situations where the operation\nis not defined for some reason, like ``seek()`` on a tty device or\nwriting a file opened for reading.\n\nFiles have the following methods:\n\nfile.close()\n\n Close the file. A closed file cannot be read or written any more.\n Any operation which requires that the file be open will raise a\n ``ValueError`` after the file has been closed. Calling ``close()``\n more than once is allowed.\n\n As of Python 2.5, you can avoid having to call this method\n explicitly if you use the ``with`` statement. For example, the\n following code will automatically close *f* when the ``with`` block\n is exited:\n\n from __future__ import with_statement # This isn\'t required in Python 2.6\n\n with open("hello.txt") as f:\n for line in f:\n print line\n\n In older versions of Python, you would have needed to do this to\n get the same effect:\n\n f = open("hello.txt")\n try:\n for line in f:\n print line\n finally:\n f.close()\n\n Note: Not all "file-like" types in Python support use as a context\n manager for the ``with`` statement. If your code is intended to\n work with any file-like object, you can use the function\n ``contextlib.closing()`` instead of using the object directly.\n\nfile.flush()\n\n Flush the internal buffer, like ``stdio``\'s ``fflush()``. 
This may\n be a no-op on some file-like objects.\n\n Note: ``flush()`` does not necessarily write the file\'s data to disk.\n Use ``flush()`` followed by ``os.fsync()`` to ensure this\n behavior.\n\nfile.fileno()\n\n Return the integer "file descriptor" that is used by the underlying\n implementation to request I/O operations from the operating system.\n This can be useful for other, lower level interfaces that use file\n descriptors, such as the ``fcntl`` module or ``os.read()`` and\n friends.\n\n Note: File-like objects which do not have a real file descriptor should\n *not* provide this method!\n\nfile.isatty()\n\n Return ``True`` if the file is connected to a tty(-like) device,\n else ``False``.\n\n Note: If a file-like object is not associated with a real file, this\n method should *not* be implemented.\n\nfile.next()\n\n A file object is its own iterator, for example ``iter(f)`` returns\n *f* (unless *f* is closed). When a file is used as an iterator,\n typically in a ``for`` loop (for example, ``for line in f: print\n line``), the ``next()`` method is called repeatedly. This method\n returns the next input line, or raises ``StopIteration`` when EOF\n is hit when the file is open for reading (behavior is undefined\n when the file is open for writing). In order to make a ``for``\n loop the most efficient way of looping over the lines of a file (a\n very common operation), the ``next()`` method uses a hidden read-\n ahead buffer. As a consequence of using a read-ahead buffer,\n combining ``next()`` with other file methods (like ``readline()``)\n does not work right. However, using ``seek()`` to reposition the\n file to an absolute position will flush the read-ahead buffer.\n\n New in version 2.3.\n\nfile.read([size])\n\n Read at most *size* bytes from the file (less if the read hits EOF\n before obtaining *size* bytes). If the *size* argument is negative\n or omitted, read all data until EOF is reached. The bytes are\n returned as a string object. 
An empty string is returned when EOF\n is encountered immediately. (For certain files, like ttys, it\n makes sense to continue reading after an EOF is hit.) Note that\n this method may call the underlying C function ``fread()`` more\n than once in an effort to acquire as close to *size* bytes as\n possible. Also note that when in non-blocking mode, less data than\n was requested may be returned, even if no *size* parameter was\n given.\n\n Note: This function is simply a wrapper for the underlying ``fread()``\n C function, and will behave the same in corner cases, such as\n whether the EOF value is cached.\n\nfile.readline([size])\n\n Read one entire line from the file. A trailing newline character\n is kept in the string (but may be absent when a file ends with an\n incomplete line). [5] If the *size* argument is present and non-\n negative, it is a maximum byte count (including the trailing\n newline) and an incomplete line may be returned. When *size* is not\n 0, an empty string is returned *only* when EOF is encountered\n immediately.\n\n Note: Unlike ``stdio``\'s ``fgets()``, the returned string contains null\n characters (``\'\\0\'``) if they occurred in the input.\n\nfile.readlines([sizehint])\n\n Read until EOF using ``readline()`` and return a list containing\n the lines thus read. If the optional *sizehint* argument is\n present, instead of reading up to EOF, whole lines totalling\n approximately *sizehint* bytes (possibly after rounding up to an\n internal buffer size) are read. Objects implementing a file-like\n interface may choose to ignore *sizehint* if it cannot be\n implemented, or cannot be implemented efficiently.\n\nfile.xreadlines()\n\n This method returns the same thing as ``iter(f)``.\n\n New in version 2.1.\n\n Deprecated since version 2.3: Use ``for line in file`` instead.\n\nfile.seek(offset[, whence])\n\n Set the file\'s current position, like ``stdio``\'s ``fseek()``. 
The\n *whence* argument is optional and defaults to ``os.SEEK_SET`` or\n ``0`` (absolute file positioning); other values are ``os.SEEK_CUR``\n or ``1`` (seek relative to the current position) and\n ``os.SEEK_END`` or ``2`` (seek relative to the file\'s end). There\n is no return value.\n\n For example, ``f.seek(2, os.SEEK_CUR)`` advances the position by\n two and ``f.seek(-3, os.SEEK_END)`` sets the position to the third\n to last.\n\n Note that if the file is opened for appending (mode ``\'a\'`` or\n ``\'a+\'``), any ``seek()`` operations will be undone at the next\n write. If the file is only opened for writing in append mode (mode\n ``\'a\'``), this method is essentially a no-op, but it remains useful\n for files opened in append mode with reading enabled (mode\n ``\'a+\'``). If the file is opened in text mode (without ``\'b\'``),\n only offsets returned by ``tell()`` are legal. Use of other\n offsets causes undefined behavior.\n\n Note that not all file objects are seekable.\n\n Changed in version 2.6: Passing float values as offset has been\n deprecated.\n\nfile.tell()\n\n Return the file\'s current position, like ``stdio``\'s ``ftell()``.\n\n Note: On Windows, ``tell()`` can return illegal values (after an\n ``fgets()``) when reading files with Unix-style line-endings. Use\n binary mode (``\'rb\'``) to circumvent this problem.\n\nfile.truncate([size])\n\n Truncate the file\'s size. If the optional *size* argument is\n present, the file is truncated to (at most) that size. The size\n defaults to the current position. The current file position is not\n changed. Note that if a specified size exceeds the file\'s current\n size, the result is platform-dependent: possibilities include that\n the file may remain unchanged, increase to the specified size as if\n zero-filled, or increase to the specified size with undefined new\n content. Availability: Windows, many Unix variants.\n\nfile.write(str)\n\n Write a string to the file. There is no return value. 
Due to\n buffering, the string may not actually show up in the file until\n the ``flush()`` or ``close()`` method is called.\n\nfile.writelines(sequence)\n\n Write a sequence of strings to the file. The sequence can be any\n iterable object producing strings, typically a list of strings.\n There is no return value. (The name is intended to match\n ``readlines()``; ``writelines()`` does not add line separators.)\n\nFiles support the iterator protocol. Each iteration returns the same\nresult as ``file.readline()``, and iteration ends when the\n``readline()`` method returns an empty string.\n\nFile objects also offer a number of other interesting attributes.\nThese are not required for file-like objects, but should be\nimplemented if they make sense for the particular object.\n\nfile.closed\n\n bool indicating the current state of the file object. This is a\n read-only attribute; the ``close()`` method changes the value. It\n may not be available on all file-like objects.\n\nfile.encoding\n\n The encoding that this file uses. When Unicode strings are written\n to a file, they will be converted to byte strings using this\n encoding. In addition, when the file is connected to a terminal,\n the attribute gives the encoding that the terminal is likely to use\n (that information might be incorrect if the user has misconfigured\n the terminal). The attribute is read-only and may not be present\n on all file-like objects. It may also be ``None``, in which case\n the file uses the system default encoding for converting Unicode\n strings.\n\n New in version 2.3.\n\nfile.errors\n\n The Unicode error handler used along with the encoding.\n\n New in version 2.6.\n\nfile.mode\n\n The I/O mode for the file. If the file was created using the\n ``open()`` built-in function, this will be the value of the *mode*\n parameter. 
This is a read-only attribute and may not be present on\n all file-like objects.\n\nfile.name\n\n If the file object was created using ``open()``, the name of the\n file. Otherwise, some string that indicates the source of the file\n object, of the form ``<...>``. This is a read-only attribute and\n may not be present on all file-like objects.\n\nfile.newlines\n\n If Python was built with universal newlines enabled (the default)\n this read-only attribute exists, and for files opened in universal\n newline read mode it keeps track of the types of newlines\n encountered while reading the file. The values it can take are\n ``\'\\r\'``, ``\'\\n\'``, ``\'\\r\\n\'``, ``None`` (unknown, no newlines read\n yet) or a tuple containing all the newline types seen, to indicate\n that multiple newline conventions were encountered. For files not\n opened in universal newline read mode the value of this attribute\n will be ``None``.\n\nfile.softspace\n\n Boolean that indicates whether a space character needs to be\n printed before another value when using the ``print`` statement.\n Classes that are trying to simulate a file object should also have\n a writable ``softspace`` attribute, which should be initialized to\n zero. This will be automatic for most classes implemented in\n Python (care may be needed for objects that override attribute\n access); types implemented in C will have to provide a writable\n ``softspace`` attribute.\n\n Note: This attribute is not used to control the ``print`` statement,\n but to allow the implementation of ``print`` to keep track of its\n internal state.\n', 'bltin-null-object': u"\nThe Null Object\n***************\n\nThis object is returned by functions that don't explicitly return a\nvalue. It supports no special operations. There is exactly one null\nobject, named ``None`` (a built-in name).\n\nIt is written as ``None``.\n", 'bltin-type-objects': u"\nType Objects\n************\n\nType objects represent the various object types. 
An object's type is\naccessed by the built-in function ``type()``. There are no special\noperations on types. The standard module ``types`` defines names for\nall standard built-in types.\n\nTypes are written like this: ````.\n", 'booleans': u'\nBoolean operations\n******************\n\n or_test ::= and_test | or_test "or" and_test\n and_test ::= not_test | and_test "and" not_test\n not_test ::= comparison | "not" not_test\n\nIn the context of Boolean operations, and also when expressions are\nused by control flow statements, the following values are interpreted\nas false: ``False``, ``None``, numeric zero of all types, and empty\nstrings and containers (including strings, tuples, lists,\ndictionaries, sets and frozensets). All other values are interpreted\nas true. (See the ``__nonzero__()`` special method for a way to\nchange this.)\n\nThe operator ``not`` yields ``True`` if its argument is false,\n``False`` otherwise.\n\nThe expression ``x and y`` first evaluates *x*; if *x* is false, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\nThe expression ``x or y`` first evaluates *x*; if *x* is true, its\nvalue is returned; otherwise, *y* is evaluated and the resulting value\nis returned.\n\n(Note that neither ``and`` nor ``or`` restrict the value and type they\nreturn to ``False`` and ``True``, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if ``s`` is a string that\nshould be replaced by a default value if it is empty, the expression\n``s or \'foo\'`` yields the desired value. 
Because ``not`` has to\ninvent a value anyway, it does not bother to return a value of the\nsame type as its argument, so e.g., ``not \'foo\'`` yields ``False``,\nnot ``\'\'``.)\n', @@ -20,39 +20,39 @@ 'class': u'\nClass definitions\n*****************\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. 
For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'coercion-rules': u"\nCoercion rules\n**************\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. 
Mixed-\n mode operations on types that don't define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. For example, for the\n operator '``+``', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base's ``__rop__()`` method, the right operand's ``__rop__()``\n method is tried *before* the left operand's ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand's ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type's ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. 
If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like '``+=``') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n", 'comparisons': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. 
Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. 
In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmake sense for many other object types to support membership tests\nwithout being a sequence. 
In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. [7]\n', - 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. 
In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. 
Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. 
A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. 
When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. 
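The matching rule for except clauses with an expression can be sketched as follows; a tuple of exception classes matches when any item is compatible with the raised exception (the helper name is illustrative):

```python
# An except clause with a tuple expression matches if the raised
# exception is an instance of (or subclass of) any item in the tuple.
def classify(x):
    try:
        return 10 // x
    except (ZeroDivisionError, TypeError) as exc:
        return type(exc).__name__

print(classify(5))       # -> 2
print(classify(0))       # -> ZeroDivisionError
print(classify("oops"))  # -> TypeError
```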
An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. (This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. 
Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). 
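A minimal sketch of the context-manager protocol the `with` statement drives (the `Suppress` class is a hypothetical example, assuming the `__enter__`/`__exit__` semantics described in the steps below):

```python
# __enter__ supplies the value bound by ``as``; __exit__ receives the
# exception triple and, by returning a true value, suppresses the
# exception raised in the suite.
class Suppress(object):
    def __init__(self, exc_class):
        self.exc_class = exc_class

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # True -> suppress the exception; false/None -> re-raise it.
        return exc_type is not None and issubclass(exc_type, self.exc_class)

with Suppress(ZeroDivisionError):
    1 / 0
print("execution continues after the with statement")
```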
This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). 
This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. 
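The evaluate-once behaviour of default values can be demonstrated directly (the function name is illustrative):

```python
# The default list is created once, when ``def`` executes, so every
# call that omits the argument mutates the same object.
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket

print(append_item(1))  # -> [1]
print(append_item(2))  # -> [1, 2]  (not [2]!)
```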
A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. 
It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. 
The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', + 'compound': u'\nCompound statements\n*******************\n\nCompound statements contain (groups of) other statements; they affect\nor control the execution of those other statements in some way. In\ngeneral, compound statements span multiple lines, although in simple\nincarnations a whole compound statement may be contained in one line.\n\nThe ``if``, ``while`` and ``for`` statements implement traditional\ncontrol flow constructs. ``try`` specifies exception handlers and/or\ncleanup code for a group of statements. Function and class\ndefinitions are also syntactically compound statements.\n\nCompound statements consist of one or more \'clauses.\' A clause\nconsists of a header and a \'suite.\' The clause headers of a\nparticular compound statement are all at the same indentation level.\nEach clause header begins with a uniquely identifying keyword and ends\nwith a colon. A suite is a group of statements controlled by a\nclause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\'s\ncolon, or it can be one or more indented statements on subsequent\nlines. 
Only the latter form of suite can contain nested compound\nstatements; the following is illegal, mostly because it wouldn\'t be\nclear to which ``if`` clause a following ``else`` clause would belong:\n\n if test1: if test2: print x\n\nAlso note that the semicolon binds tighter than the colon in this\ncontext, so that in the following example, either all or none of the\n``print`` statements are executed:\n\n if x < y < z: print x; print y; print z\n\nSummarizing:\n\n compound_stmt ::= if_stmt\n | while_stmt\n | for_stmt\n | try_stmt\n | with_stmt\n | funcdef\n | classdef\n | decorated\n suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT\n statement ::= stmt_list NEWLINE | compound_stmt\n stmt_list ::= simple_stmt (";" simple_stmt)* [";"]\n\nNote that statements always end in a ``NEWLINE`` possibly followed by\na ``DEDENT``. Also note that optional continuation clauses always\nbegin with a keyword that cannot start a statement, thus there are no\nambiguities (the \'dangling ``else``\' problem is solved in Python by\nrequiring nested ``if`` statements to be indented).\n\nThe formatting of the grammar rules in the following sections places\neach clause on a separate line for clarity.\n\n\nThe ``if`` statement\n====================\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n\n\nThe ``while`` statement\n=======================\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" 
":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n\n\nThe ``for`` statement\n=====================\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. 
Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n\n\nThe ``try`` statement\n=====================\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. 
An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. 
(This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. 
The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n\n\nThe ``with`` statement\n======================\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. 
If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nFunction definitions\n====================\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. 
Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. 
A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n\n\nClass definitions\n=================\n\nA class definition defines a class object (see section *The standard\ntype hierarchy*):\n\n classdef ::= "class" classname [inheritance] ":" suite\n inheritance ::= "(" [expression_list] ")"\n classname ::= identifier\n\nA class definition is an executable statement. 
It first evaluates the\ninheritance list, if present. Each item in the inheritance list\nshould evaluate to a class object or class type which allows\nsubclassing. The class\'s suite is then executed in a new execution\nframe (see section *Naming and binding*), using a newly created local\nnamespace and the original global namespace. (Usually, the suite\ncontains only function definitions.) When the class\'s suite finishes\nexecution, its execution frame is discarded but its local namespace is\nsaved. [4] A class object is then created using the inheritance list\nfor the base classes and the saved local namespace for the attribute\ndictionary. The class name is bound to this class object in the\noriginal local namespace.\n\n**Programmer\'s note:** Variables defined in the class definition are\nclass variables; they are shared by all instances. To create instance\nvariables, they can be set in a method with ``self.name = value``.\nBoth class and instance variables are accessible through the notation\n"``self.name``", and an instance variable hides a class variable with\nthe same name when accessed in this way. Class variables can be used\nas defaults for instance variables, but using mutable values there can\nlead to unexpected results. For *new-style class*es, descriptors can\nbe used to create instance variables with different implementation\ndetails.\n\nClass definitions, like function definitions, may be wrapped by one or\nmore *decorator* expressions. The evaluation rules for the decorator\nexpressions are the same as for functions. 
The result must be a class\nobject, which is then bound to the class name.\n\n-[ Footnotes ]-\n\n[1] The exception is propagated to the invocation stack only if there\n is no ``finally`` clause that negates the exception.\n\n[2] Currently, control "flows off the end" except in the case of an\n exception or the execution of a ``return``, ``continue``, or\n ``break`` statement.\n\n[3] A string literal appearing as the first statement in the function\n body is transformed into the function\'s ``__doc__`` attribute and\n therefore the function\'s *docstring*.\n\n[4] A string literal appearing as the first statement in the class\n body is transformed into the namespace\'s ``__doc__`` item and\n therefore the class\'s *docstring*.\n', 'context-managers': u'\nWith Statement Context Managers\n*******************************\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. The parameters\n describe the exception that caused the context to be exited. 
If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'continue': u'\nThe ``continue`` statement\n**************************\n\n continue_stmt ::= "continue"\n\n``continue`` may only occur syntactically nested in a ``for`` or\n``while`` loop, but not nested in a function or class definition or\n``finally`` clause within that loop. It continues with the next cycle\nof the nearest enclosing loop.\n\nWhen ``continue`` passes control out of a ``try`` statement with a\n``finally`` clause, that ``finally`` clause is executed before really\nstarting the next loop cycle.\n', 'conversions': u'\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," the arguments\nare coerced using the coercion rules listed at *Coercion rules*. If\nboth arguments are standard numeric types, the following coercions are\napplied:\n\n* If either argument is a complex number, the other is converted to\n complex;\n\n* otherwise, if either argument is a floating point number, the other\n is converted to floating point;\n\n* otherwise, if either argument is a long integer, the other is\n converted to long integer;\n\n* otherwise, both must be plain integers and no conversion is\n necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator). 
Extensions can define their own\ncoercions.\n', 'customization': u'\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. 
As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. 
Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. 
If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls ``x.__le__(y)``,\n ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and ``x<>y`` call\n ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. 
See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. 
using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that a object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. 
by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n', - 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. 
It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > <string>(0)?()\n (Pdb) continue\n > <string>(1)?()\n (Pdb) continue\n NameError: \'spam\'\n > <string>(1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. 
You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "<stdin>", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. 
when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run_*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', + 'debugger': u'\n``pdb`` --- The Python Debugger\n*******************************\n\nThe module ``pdb`` defines an interactive source code debugger for\nPython programs. It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame. It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible --- it is actually defined as the class\n``Pdb``. 
This is currently undocumented but easily understood by\nreading the source. The extension interface uses the modules ``bdb``\nand ``cmd``.\n\nThe debugger\'s prompt is ``(Pdb)``. Typical usage to run a program\nunder control of the debugger is:\n\n >>> import pdb\n >>> import mymodule\n >>> pdb.run(\'mymodule.test()\')\n > <string>(0)?()\n (Pdb) continue\n > <string>(1)?()\n (Pdb) continue\n NameError: \'spam\'\n > <string>(1)?()\n (Pdb)\n\n``pdb.py`` can also be invoked as a script to debug other scripts.\nFor example:\n\n python -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally. After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program. Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 2.4: Restarting post-mortem behavior added.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger. 
You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the ``c`` command.\n\nThe typical usage to inspect a crashed program is:\n\n >>> import pdb\n >>> import mymodule\n >>> mymodule.test()\n Traceback (most recent call last):\n File "<stdin>", line 1, in ?\n File "./mymodule.py", line 4, in test\n test2()\n File "./mymodule.py", line 3, in test2\n print spam\n NameError: spam\n >>> pdb.pm()\n > ./mymodule.py(3)test2()\n -> print spam\n (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement[, globals[, locals]])\n\n Execute the *statement* (given as a string) under debugger control.\n The debugger prompt appears before any code is executed; you can\n set breakpoints and type ``continue``, or you can step through the\n statement using ``step`` or ``next`` (all these commands are\n explained below). The optional *globals* and *locals* arguments\n specify the environment in which the code is executed; by default\n the dictionary of the module ``__main__`` is used. (See the\n explanation of the ``exec`` statement or the ``eval()`` built-in\n function.)\n\npdb.runeval(expression[, globals[, locals]])\n\n Evaluate the *expression* (given as a string) under debugger\n control. When ``runeval()`` returns, it returns the value of the\n expression. Otherwise this function is similar to ``run()``.\n\npdb.runcall(function[, argument, ...])\n\n Call the *function* (a function or method object, not a string)\n with the given arguments. When ``runcall()`` returns, it returns\n whatever the function call returned. The debugger prompt appears\n as soon as the function is entered.\n\npdb.set_trace()\n\n Enter the debugger at the calling stack frame. This is useful to\n hard-code a breakpoint at a given point in a program, even if the\n code is not otherwise being debugged (e.g. 
when an assertion\n fails).\n\npdb.post_mortem([traceback])\n\n Enter post-mortem debugging of the given *traceback* object. If no\n *traceback* is given, it uses the one of the exception that is\n currently being handled (an exception must be being handled if the\n default is to be used).\n\npdb.pm()\n\n Enter post-mortem debugging of the traceback found in\n ``sys.last_traceback``.\n\nThe ``run*`` functions and ``set_trace()`` are aliases for\ninstantiating the ``Pdb`` class and calling the method of the same\nname. If you want to access further features, you have to do this\nyourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None)\n\n ``Pdb`` is the debugger class.\n\n The *completekey*, *stdin* and *stdout* arguments are passed to the\n underlying ``cmd.Cmd`` class; see the description there.\n\n The *skip* argument, if given, must be an iterable of glob-style\n module name patterns. The debugger will not step into frames that\n originate in a module that matches one of these patterns. [1]\n\n Example call to enable tracing with *skip*:\n\n import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n New in version 2.7: The *skip* argument.\n\n run(statement[, globals[, locals]])\n runeval(expression[, globals[, locals]])\n runcall(function[, argument, ...])\n set_trace()\n\n See the documentation for the functions explained above.\n', 'del': u'\nThe ``del`` statement\n*********************\n\n del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similar to the way assignment is\ndefined. Rather that spelling it out in full details, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a ``global``\nstatement in the same code block. 
If the name is unbound, a\n``NameError`` exception will be raised.\n\nIt is illegal to delete a name from the local namespace if it occurs\nas a free variable in a nested block.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n', 'dict': u'\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n dict_display ::= "{" [key_datum_list | dict_comprehension] "}"\n key_datum_list ::= key_datum ("," key_datum)* [","]\n key_datum ::= expression ":" expression\n dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum. This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*. (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.) 
Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n', 'dynamic-features': u'\nInteraction with dynamic features\n*********************************\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. 
[1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n', 'else': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'exceptions': u'\nExceptions\n**********\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. 
The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exec': u'\nThe ``exec`` statement\n**********************\n\n exec_stmt ::= "exec" or_expr ["in" expression ["," expression]]\n\nThis statement supports dynamic execution of Python code. 
The first\nexpression should evaluate to either a string, an open file object, or\na code object. If it is a string, the string is parsed as a suite of\nPython statements which is then executed (unless a syntax error\noccurs). [1] If it is an open file, the file is parsed until EOF and\nexecuted. If it is a code object, it is simply executed. In all\ncases, the code that\'s executed is expected to be valid as file input\n(see section *File input*). Be aware that the ``return`` and\n``yield`` statements may not be used outside of function definitions\neven within the context of code passed to the ``exec`` statement.\n\nIn all cases, if the optional parts are omitted, the code is executed\nin the current scope. If only the first expression after ``in`` is\nspecified, it should be a dictionary, which will be used for both the\nglobal and the local variables. If two expressions are given, they\nare used for the global and local variables, respectively. If\nprovided, *locals* can be any mapping object.\n\nChanged in version 2.4: Formerly, *locals* was required to be a\ndictionary.\n\nAs a side effect, an implementation may insert additional keys into\nthe dictionaries given besides those corresponding to variable names\nset by the executed code. For example, the current implementation may\nadd a reference to the dictionary of the built-in module\n``__builtin__`` under the key ``__builtins__`` (!).\n\n**Programmer\'s hints:** dynamic evaluation of expressions is supported\nby the built-in function ``eval()``. The built-in functions\n``globals()`` and ``locals()`` return the current global and local\ndictionary, respectively, which may be useful to pass around for use\nby ``exec``.\n\n-[ Footnotes ]-\n\n[1] Note that the parser only accepts the Unix-style end of line\n convention. 
If you are reading the code from a file, make sure to\n use universal newline mode to convert Windows or Mac-style\n newlines.\n', - 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. 
This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. 
This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. 
If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. 
An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. 
Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', + 'execmodel': u'\nExecution model\n***************\n\n\nNaming and binding\n==================\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. 
If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). 
It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\'s dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no \'s\'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. 
``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no \'s\') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n---------------------------------\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. 
(In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n\n\nExceptions\n==========\n\nExceptions are a means of breaking out of the normal flow of control\nof a code block in order to handle errors or other exceptional\nconditions. An exception is *raised* at the point where the error is\ndetected; it may be *handled* by the surrounding code block or by any\ncode block that directly or indirectly invoked the code block where\nthe error occurred.\n\nThe Python interpreter raises an exception when it detects a run-time\nerror (such as division by zero). A Python program can also\nexplicitly raise an exception with the ``raise`` statement. Exception\nhandlers are specified with the ``try`` ... ``except`` statement. The\n``finally`` clause of such a statement can be used to specify cleanup\ncode which does not handle the exception, but is executed whether an\nexception occurred or not in the preceding code.\n\nPython uses the "termination" model of error handling: an exception\nhandler can find out what happened and continue execution at an outer\nlevel, but it cannot repair the cause of the error and retry the\nfailing operation (except by re-entering the offending piece of code\nfrom the top).\n\nWhen an exception is not handled at all, the interpreter terminates\nexecution of the program, or returns to its interactive main loop. 
In\neither case, it prints a stack backtrace, except when the exception is\n``SystemExit``.\n\nExceptions are identified by class instances. The ``except`` clause\nis selected depending on the class of the instance: it must reference\nthe class of the instance or a base class thereof. The instance can\nbe received by the handler and can carry additional information about\nthe exceptional condition.\n\nExceptions can also be identified by strings, in which case the\n``except`` clause is selected by object identity. An arbitrary value\ncan be raised along with the identifying string which can be passed to\nthe handler.\n\nNote: Messages to exceptions are not part of the Python API. Their\n contents may change from one version of Python to the next without\n warning and should not be relied on by code which will run under\n multiple versions of the interpreter.\n\nSee also the description of the ``try`` statement in section *The try\nstatement* and ``raise`` statement in section *The raise statement*.\n\n-[ Footnotes ]-\n\n[1] This limitation occurs because the code that is executed by these\n operations is not available at the time the module is compiled.\n', 'exprlists': u'\nExpression lists\n****************\n\n expression_list ::= expression ( "," expression )* [","]\n\nAn expression list containing at least one comma yields a tuple. The\nlength of the tuple is the number of expressions in the list. The\nexpressions are evaluated from left to right.\n\nThe trailing comma is required only to create a single tuple (a.k.a. a\n*singleton*); it is optional in all other cases. A single expression\nwithout a trailing comma doesn\'t create a tuple, but rather yields the\nvalue of that expression. 
(To create an empty tuple, use an empty pair\nof parentheses: ``()``.)\n', 'floating': u'\nFloating point literals\n***********************\n\nFloating point literals are described by the following lexical\ndefinitions:\n\n floatnumber ::= pointfloat | exponentfloat\n pointfloat ::= [intpart] fraction | intpart "."\n exponentfloat ::= (intpart | pointfloat) exponent\n intpart ::= digit+\n fraction ::= "." digit+\n exponent ::= ("e" | "E") ["+" | "-"] digit+\n\nNote that the integer and exponent parts of floating point numbers can\nlook like octal integers, but are interpreted using radix 10. For\nexample, ``077e010`` is legal, and denotes the same number as\n``77e10``. The allowed range of floating point literals is\nimplementation-dependent. Some examples of floating point literals:\n\n 3.14 10. .001 1e100 3.14e-10 0e0\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator ``-`` and the\nliteral ``1``.\n', 'for': u'\nThe ``for`` statement\n*********************\n\nThe ``for`` statement is used to iterate over the elements of a\nsequence (such as a string, tuple or list) or other iterable object:\n\n for_stmt ::= "for" target_list "in" expression_list ":" suite\n ["else" ":" suite]\n\nThe expression list is evaluated once; it should yield an iterable\nobject. An iterator is created for the result of the\n``expression_list``. The suite is then executed once for each item\nprovided by the iterator, in the order of ascending indices. Each\nitem in turn is assigned to the target list using the standard rules\nfor assignments, and then the suite is executed. When the items are\nexhausted (which is immediately when the sequence is empty), the suite\nin the ``else`` clause, if present, is executed, and the loop\nterminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. 
A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ncontinues with the next item, or with the ``else`` clause if there was\nno next item.\n\nThe suite may assign to the variable(s) in the target list; this does\nnot affect the next item assigned to it.\n\nThe target list is not deleted when the loop is finished, but if the\nsequence is empty, it will not have been assigned to at all by the\nloop. Hint: the built-in function ``range()`` returns a sequence of\nintegers suitable to emulate the effect of Pascal\'s ``for i := a to b\ndo``; e.g., ``range(3)`` returns the list ``[0, 1, 2]``.\n\nNote: There is a subtlety when the sequence is being modified by the loop\n (this can only occur for mutable sequences, i.e. lists). An internal\n counter is used to keep track of which item is used next, and this\n is incremented on each iteration. When this counter has reached the\n length of the sequence the loop terminates. This means that if the\n suite deletes the current (or a previous) item from the sequence,\n the next item will be skipped (since it gets the index of the\n current item which has already been treated). Likewise, if the\n suite inserts an item in the sequence before the current item, the\n current item will be treated again the next time through the loop.\n This can lead to nasty bugs that can be avoided by making a\n temporary copy using a slice of the whole sequence, e.g.,\n\n for x in a[:]:\n if x < 0: a.remove(x)\n', - 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. 
If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. 
An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. 
These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'}\' (which\nsignifies the end of the field). The presence of a fill character is\nsignaled by the *next* character, which must be one of the alignment\noptions. 
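The fill-then-align rule just described (a fill character is recognized only when the *next* character is an alignment option) can be seen directly; the variable names are illustrative:

```python
# A leading fill character counts only if the second character of the
# spec is an alignment option ('<', '>', '=', '^').
starred = "{:*>6}".format("ab")   # '*' is the fill, '>' right-aligns
plain   = "{:>6}".format("ab")    # no fill given: pads with spaces
signed  = "{:=+8}".format(42)     # '=' places the padding after the sign
```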
If the second character of *format_spec* is not a valid\nalignment option, then it is assumed that both the fill character and\nthe alignment option are absent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. |\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). 
|\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. 
|\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. |\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). 
When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. |\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. 
Postive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. 
For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... 
return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, 
text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{align}{fill}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', + 'formatstrings': u'\nFormat String Syntax\n********************\n\nThe ``str.format()`` method and the ``Formatter`` class share the same\nsyntax for format strings (although in the case of ``Formatter``,\nsubclasses can define their own format string syntax).\n\nFormat strings contain "replacement fields" surrounded by curly braces\n``{}``. Anything that is not contained in braces is considered literal\ntext, which is copied unchanged to the output. If you need to include\na brace character in the literal text, it can be escaped by doubling:\n``{{`` and ``}}``.\n\nThe grammar for a replacement field is as follows:\n\n replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"\n field_name ::= arg_name ("." attribute_name | "[" element_index "]")*\n arg_name ::= [identifier | integer]\n attribute_name ::= identifier\n element_index ::= integer | index_string\n index_string ::= +\n conversion ::= "r" | "s"\n format_spec ::= \n\nIn less formal terms, the replacement field can start with a\n*field_name* that specifies the object whose value is to be formatted\nand inserted into the output instead of the replacement field. The\n*field_name* is optionally followed by a *conversion* field, which is\npreceded by an exclamation point ``\'!\'``, and a *format_spec*, which\nis preceded by a colon ``\':\'``. 
These specify a non-default format\nfor the replacement value.\n\nSee also the *Format Specification Mini-Language* section.\n\nThe *field_name* itself begins with an *arg_name* that is either\neither a number or a keyword. If it\'s a number, it refers to a\npositional argument, and if it\'s a keyword, it refers to a named\nkeyword argument. If the numerical arg_names in a format string are\n0, 1, 2, ... in sequence, they can all be omitted (not just some) and\nthe numbers 0, 1, 2, ... will be automatically inserted in that order.\nThe *arg_name* can be followed by any number of index or attribute\nexpressions. An expression of the form ``\'.name\'`` selects the named\nattribute using ``getattr()``, while an expression of the form\n``\'[index]\'`` does an index lookup using ``__getitem__()``.\n\nChanged in version 2.7: The positional argument specifiers can be\nomitted, so ``\'{} {}\'`` is equivalent to ``\'{0} {1}\'``.\n\nSome simple format string examples:\n\n "First, thou shalt count to {0}" # References first positional argument\n "Bring me a {}" # Implicitly references the first positional argument\n "From {} to {}" # Same as "From {0} to {1}"\n "My quest is {name}" # References keyword argument \'name\'\n "Weight in tons {0.weight}" # \'weight\' attribute of first positional arg\n "Units destroyed: {players[0]}" # First element of keyword argument \'players\'.\n\nThe *conversion* field causes a type coercion before formatting.\nNormally, the job of formatting a value is done by the\n``__format__()`` method of the value itself. However, in some cases\nit is desirable to force a type to be formatted as a string,\noverriding its own definition of formatting. 
By converting the value\nto a string before calling ``__format__()``, the normal formatting\nlogic is bypassed.\n\nTwo conversion flags are currently supported: ``\'!s\'`` which calls\n``str()`` on the value, and ``\'!r\'`` which calls ``repr()``.\n\nSome examples:\n\n "Harold\'s a clever {0!s}" # Calls str() on the argument first\n "Bring out the holy {name!r}" # Calls repr() on the argument first\n\nThe *format_spec* field contains a specification of how the value\nshould be presented, including such details as field width, alignment,\npadding, decimal precision and so on. Each value type can define its\nown "formatting mini-language" or interpretation of the *format_spec*.\n\nMost built-in types support a common formatting mini-language, which\nis described in the next section.\n\nA *format_spec* field can also include nested replacement fields\nwithin it. These nested replacement fields can contain only a field\nname; conversion flags and format specifications are not allowed. The\nreplacement fields within the format_spec are substituted before the\n*format_spec* string is interpreted. This allows the formatting of a\nvalue to be dynamically specified.\n\nSee the *Format examples* section for some examples.\n\n\nFormat Specification Mini-Language\n==================================\n\n"Format specifications" are used within replacement fields contained\nwithin a format string to define how individual values are presented\n(see *Format String Syntax*). They can also be passed directly to the\nbuilt-in ``format()`` function. Each formattable type may define how\nthe format specification is to be interpreted.\n\nMost built-in types implement the following options for format\nspecifications, although some of the formatting options are only\nsupported by the numeric types.\n\nA general convention is that an empty format string (``""``) produces\nthe same result as if you had called ``str()`` on the value. 
A non-\nempty format string typically modifies the result.\n\nThe general form of a *standard format specifier* is:\n\n format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]\n fill ::= \n align ::= "<" | ">" | "=" | "^"\n sign ::= "+" | "-" | " "\n width ::= integer\n precision ::= integer\n type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"\n\nThe *fill* character can be any character other than \'{\' or \'}\'. The\npresence of a fill character is signaled by the character following\nit, which must be one of the alignment options. If the second\ncharacter of *format_spec* is not a valid alignment option, then it is\nassumed that both the fill character and the alignment option are\nabsent.\n\nThe meaning of the various alignment options is as follows:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'<\'`` | Forces the field to be left-aligned within the available |\n | | space (this is the default for most objects). |\n +-----------+------------------------------------------------------------+\n | ``\'>\'`` | Forces the field to be right-aligned within the available |\n | | space (this is the default for numbers). |\n +-----------+------------------------------------------------------------+\n | ``\'=\'`` | Forces the padding to be placed after the sign (if any) |\n | | but before the digits. This is used for printing fields |\n | | in the form \'+000000120\'. This alignment option is only |\n | | valid for numeric types. |\n +-----------+------------------------------------------------------------+\n | ``\'^\'`` | Forces the field to be centered within the available |\n | | space. 
|\n +-----------+------------------------------------------------------------+\n\nNote that unless a minimum field width is defined, the field width\nwill always be the same size as the data to fill it, so that the\nalignment option has no meaning in this case.\n\nThe *sign* option is only valid for number types, and can be one of\nthe following:\n\n +-----------+------------------------------------------------------------+\n | Option | Meaning |\n +===========+============================================================+\n | ``\'+\'`` | indicates that a sign should be used for both positive as |\n | | well as negative numbers. |\n +-----------+------------------------------------------------------------+\n | ``\'-\'`` | indicates that a sign should be used only for negative |\n | | numbers (this is the default behavior). |\n +-----------+------------------------------------------------------------+\n | space | indicates that a leading space should be used on positive |\n | | numbers, and a minus sign on negative numbers. |\n +-----------+------------------------------------------------------------+\n\nThe ``\'#\'`` option is only valid for integers, and only for binary,\noctal, or hexadecimal output. If present, it specifies that the\noutput will be prefixed by ``\'0b\'``, ``\'0o\'``, or ``\'0x\'``,\nrespectively.\n\nThe ``\',\'`` option signals the use of a comma for a thousands\nseparator. For a locale aware separator, use the ``\'n\'`` integer\npresentation type instead.\n\nChanged in version 2.7: Added the ``\',\'`` option (see also **PEP\n378**).\n\n*width* is a decimal integer defining the minimum field width. If not\nspecified, then the field width will be determined by the content.\n\nIf the *width* field is preceded by a zero (``\'0\'``) character, this\nenables zero-padding. 
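The ``'0'``, ``'#'``, and ``','`` options discussed above can be sketched together; the variable names are illustrative:

```python
# '0' before the width is shorthand for fill '0' with '=' alignment;
# '#' adds a base prefix; ',' inserts a thousands separator.
zero_padded = "{:08.2f}".format(-3.5)   # padding goes after the sign
prefixed    = "{:#x}".format(255)
grouped     = "{:,}".format(1234567)
```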
This is equivalent to an *alignment* type of\n``\'=\'`` and a *fill* character of ``\'0\'``.\n\nThe *precision* is a decimal number indicating how many digits should\nbe displayed after the decimal point for a floating point value\nformatted with ``\'f\'`` and ``\'F\'``, or before and after the decimal\npoint for a floating point value formatted with ``\'g\'`` or ``\'G\'``.\nFor non-number types the field indicates the maximum field size - in\nother words, how many characters will be used from the field content.\nThe *precision* is not allowed for integer values.\n\nFinally, the *type* determines how the data should be presented.\n\nThe available string presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'s\'`` | String format. This is the default type for strings and |\n | | may be omitted. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'s\'``. |\n +-----------+------------------------------------------------------------+\n\nThe available integer presentation types are:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'b\'`` | Binary format. Outputs the number in base 2. |\n +-----------+------------------------------------------------------------+\n | ``\'c\'`` | Character. Converts the integer to the corresponding |\n | | unicode character before printing. |\n +-----------+------------------------------------------------------------+\n | ``\'d\'`` | Decimal Integer. Outputs the number in base 10. |\n +-----------+------------------------------------------------------------+\n | ``\'o\'`` | Octal format. Outputs the number in base 8. 
|\n +-----------+------------------------------------------------------------+\n | ``\'x\'`` | Hex format. Outputs the number in base 16, using lower- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'X\'`` | Hex format. Outputs the number in base 16, using upper- |\n | | case letters for the digits above 9. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'d\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'d\'``. |\n +-----------+------------------------------------------------------------+\n\nIn addition to the above presentation types, integers can be formatted\nwith the floating point presentation types listed below (except\n``\'n\'`` and None). When doing so, ``float()`` is used to convert the\ninteger to a floating point number before formatting.\n\nThe available presentation types for floating point and decimal values\nare:\n\n +-----------+------------------------------------------------------------+\n | Type | Meaning |\n +===========+============================================================+\n | ``\'e\'`` | Exponent notation. Prints the number in scientific |\n | | notation using the letter \'e\' to indicate the exponent. |\n +-----------+------------------------------------------------------------+\n | ``\'E\'`` | Exponent notation. Same as ``\'e\'`` except it uses an upper |\n | | case \'E\' as the separator character. |\n +-----------+------------------------------------------------------------+\n | ``\'f\'`` | Fixed point. Displays the number as a fixed-point number. |\n +-----------+------------------------------------------------------------+\n | ``\'F\'`` | Fixed point. Same as ``\'f\'``. 
|\n +-----------+------------------------------------------------------------+\n | ``\'g\'`` | General format. For a given precision ``p >= 1``, this |\n | | rounds the number to ``p`` significant digits and then |\n | | formats the result in either fixed-point format or in |\n | | scientific notation, depending on its magnitude. The |\n | | precise rules are as follows: suppose that the result |\n | | formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1`` would have exponent ``exp``. Then if ``-4 <= exp |\n | | < p``, the number is formatted with presentation type |\n | | ``\'f\'`` and precision ``p-1-exp``. Otherwise, the number |\n | | is formatted with presentation type ``\'e\'`` and precision |\n | | ``p-1``. In both cases insignificant trailing zeros are |\n | | removed from the significand, and the decimal point is |\n | | also removed if there are no remaining digits following |\n | | it. Positive and negative infinity, positive and negative |\n | | zero, and nans, are formatted as ``inf``, ``-inf``, ``0``, |\n | | ``-0`` and ``nan`` respectively, regardless of the |\n | | precision. A precision of ``0`` is treated as equivalent |\n | | to a precision of ``1``. |\n +-----------+------------------------------------------------------------+\n | ``\'G\'`` | General format. Same as ``\'g\'`` except switches to ``\'E\'`` |\n | | if the number gets too large. The representations of |\n | | infinity and NaN are uppercased, too. |\n +-----------+------------------------------------------------------------+\n | ``\'n\'`` | Number. This is the same as ``\'g\'``, except that it uses |\n | | the current locale setting to insert the appropriate |\n | | number separator characters. |\n +-----------+------------------------------------------------------------+\n | ``\'%\'`` | Percentage. Multiplies the number by 100 and displays in |\n | | fixed (``\'f\'``) format, followed by a percent sign. 
|\n +-----------+------------------------------------------------------------+\n | None | The same as ``\'g\'``. |\n +-----------+------------------------------------------------------------+\n\n\nFormat examples\n===============\n\nThis section contains examples of the new format syntax and comparison\nwith the old ``%``-formatting.\n\nIn most of the cases the syntax is similar to the old\n``%``-formatting, with the addition of the ``{}`` and with ``:`` used\ninstead of ``%``. For example, ``\'%03.2f\'`` can be translated to\n``\'{:03.2f}\'``.\n\nThe new format syntax also supports new and different options, shown\nin the follow examples.\n\nAccessing arguments by position:\n\n >>> \'{0}, {1}, {2}\'.format(\'a\', \'b\', \'c\')\n \'a, b, c\'\n >>> \'{}, {}, {}\'.format(\'a\', \'b\', \'c\') # 2.7+ only\n \'a, b, c\'\n >>> \'{2}, {1}, {0}\'.format(\'a\', \'b\', \'c\')\n \'c, b, a\'\n >>> \'{2}, {1}, {0}\'.format(*\'abc\') # unpacking argument sequence\n \'c, b, a\'\n >>> \'{0}{1}{0}\'.format(\'abra\', \'cad\') # arguments\' indices can be repeated\n \'abracadabra\'\n\nAccessing arguments by name:\n\n >>> \'Coordinates: {latitude}, {longitude}\'.format(latitude=\'37.24N\', longitude=\'-115.81W\')\n \'Coordinates: 37.24N, -115.81W\'\n >>> coord = {\'latitude\': \'37.24N\', \'longitude\': \'-115.81W\'}\n >>> \'Coordinates: {latitude}, {longitude}\'.format(**coord)\n \'Coordinates: 37.24N, -115.81W\'\n\nAccessing arguments\' attributes:\n\n >>> c = 3-5j\n >>> (\'The complex number {0} is formed from the real part {0.real} \'\n ... \'and the imaginary part {0.imag}.\').format(c)\n \'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.\'\n >>> class Point(object):\n ... def __init__(self, x, y):\n ... self.x, self.y = x, y\n ... def __str__(self):\n ... 
return \'Point({self.x}, {self.y})\'.format(self=self)\n ...\n >>> str(Point(4, 2))\n \'Point(4, 2)\'\n\nAccessing arguments\' items:\n\n >>> coord = (3, 5)\n >>> \'X: {0[0]}; Y: {0[1]}\'.format(coord)\n \'X: 3; Y: 5\'\n\nReplacing ``%s`` and ``%r``:\n\n >>> "repr() shows quotes: {!r}; str() doesn\'t: {!s}".format(\'test1\', \'test2\')\n "repr() shows quotes: \'test1\'; str() doesn\'t: test2"\n\nAligning the text and specifying a width:\n\n >>> \'{:<30}\'.format(\'left aligned\')\n \'left aligned \'\n >>> \'{:>30}\'.format(\'right aligned\')\n \' right aligned\'\n >>> \'{:^30}\'.format(\'centered\')\n \' centered \'\n >>> \'{:*^30}\'.format(\'centered\') # use \'*\' as a fill char\n \'***********centered***********\'\n\nReplacing ``%+f``, ``%-f``, and ``% f`` and specifying a sign:\n\n >>> \'{:+f}; {:+f}\'.format(3.14, -3.14) # show it always\n \'+3.140000; -3.140000\'\n >>> \'{: f}; {: f}\'.format(3.14, -3.14) # show a space for positive numbers\n \' 3.140000; -3.140000\'\n >>> \'{:-f}; {:-f}\'.format(3.14, -3.14) # show only the minus -- same as \'{:f}; {:f}\'\n \'3.140000; -3.140000\'\n\nReplacing ``%x`` and ``%o`` and converting the value to different\nbases:\n\n >>> # format also supports binary numbers\n >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42)\n \'int: 42; hex: 2a; oct: 52; bin: 101010\'\n >>> # with 0x, 0o, or 0b as prefix:\n >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42)\n \'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010\'\n\nUsing the comma as a thousands separator:\n\n >>> \'{:,}\'.format(1234567890)\n \'1,234,567,890\'\n\nExpressing a percentage:\n\n >>> points = 19.5\n >>> total = 22\n >>> \'Correct answers: {:.2%}.\'.format(points/total)\n \'Correct answers: 88.64%\'\n\nUsing type-specific formatting:\n\n >>> import datetime\n >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58)\n >>> \'{:%Y-%m-%d %H:%M:%S}\'.format(d)\n \'2010-07-04 12:15:58\'\n\nNesting arguments and more complex examples:\n\n >>> for align, 
text in zip(\'<^>\', [\'left\', \'center\', \'right\']):\n ... \'{0:{fill}{align}16}\'.format(text, fill=align, align=align)\n ...\n \'left<<<<<<<<<<<<\'\n \'^^^^^center^^^^^\'\n \'>>>>>>>>>>>right\'\n >>>\n >>> octets = [192, 168, 0, 1]\n >>> \'{:02X}{:02X}{:02X}{:02X}\'.format(*octets)\n \'C0A80001\'\n >>> int(_, 16)\n 3232235521\n >>>\n >>> width = 5\n >>> for num in range(5,12):\n ... for base in \'dXob\':\n ... print \'{0:{width}{base}}\'.format(num, base=base, width=width),\n ... print\n ...\n 5 5 5 101\n 6 6 6 110\n 7 7 7 111\n 8 8 10 1000\n 9 9 11 1001\n 10 A 12 1010\n 11 B 13 1011\n', 'function': u'\nFunction definitions\n********************\n\nA function definition defines a user-defined function object (see\nsection *The standard type hierarchy*):\n\n decorated ::= decorators (classdef | funcdef)\n decorators ::= decorator+\n decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE\n funcdef ::= "def" funcname "(" [parameter_list] ")" ":" suite\n dotted_name ::= identifier ("." identifier)*\n parameter_list ::= (defparameter ",")*\n ( "*" identifier [, "**" identifier]\n | "**" identifier\n | defparameter [","] )\n defparameter ::= parameter ["=" expression]\n sublist ::= parameter ("," parameter)* [","]\n parameter ::= identifier | "(" sublist ")"\n funcname ::= identifier\n\nA function definition is an executable statement. Its execution binds\nthe function name in the current local namespace to a function object\n(a wrapper around the executable code for the function). This\nfunction object contains a reference to the current global namespace\nas the global namespace to be used when the function is called.\n\nThe function definition does not execute the function body; this gets\nexecuted only when the function is called. [3]\n\nA function definition may be wrapped by one or more *decorator*\nexpressions. Decorator expressions are evaluated when the function is\ndefined, in the scope that contains the function definition. 
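A few of the format-spec behaviours shown in the examples above (nested replacement fields, fill/align, the ``#`` alternate form) condensed into plain assignments; the variable names here are illustrative only:

```python
# Nested replacement field: the output width is itself an argument.
width = 5
padded = '{0:{width}d}'.format(42, width=width)

# Fill/align and the '#' alternate form, as in the docstring examples.
centered = '{:*^11}'.format('mid')   # center 'mid' in 11 chars, '*' fill
as_hex = '{0:#x}'.format(255)        # '#' adds the '0x' prefix
```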
The\nresult must be a callable, which is invoked with the function object\nas the only argument. The returned value is bound to the function name\ninstead of the function object. Multiple decorators are applied in\nnested fashion. For example, the following code:\n\n @f1(arg)\n @f2\n def func(): pass\n\nis equivalent to:\n\n def func(): pass\n func = f1(arg)(f2(func))\n\nWhen one or more top-level parameters have the form *parameter* ``=``\n*expression*, the function is said to have "default parameter values."\nFor a parameter with a default value, the corresponding argument may\nbe omitted from a call, in which case the parameter\'s default value is\nsubstituted. If a parameter has a default value, all following\nparameters must also have a default value --- this is a syntactic\nrestriction that is not expressed by the grammar.\n\n**Default parameter values are evaluated when the function definition\nis executed.** This means that the expression is evaluated once, when\nthe function is defined, and that that same "pre-computed" value is\nused for each call. This is especially important to understand when a\ndefault parameter is a mutable object, such as a list or a dictionary:\nif the function modifies the object (e.g. by appending an item to a\nlist), the default value is in effect modified. This is generally not\nwhat was intended. A way around this is to use ``None`` as the\ndefault, and explicitly test for it in the body of the function, e.g.:\n\n def whats_on_the_telly(penguin=None):\n if penguin is None:\n penguin = []\n penguin.append("property of the zoo")\n return penguin\n\nFunction call semantics are described in more detail in section\n*Calls*. A function call always assigns values to all parameters\nmentioned in the parameter list, either from position arguments, from\nkeyword arguments, or from default values. 
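The decorator equivalence stated above (``func = f1(arg)(f2(func))``) can be checked with two toy decorators; ``f1`` and ``f2`` here are stand-ins, not real library code:

```python
def f1(arg):
    # Decorator factory: called first, returns the actual decorator.
    def deco(fn):
        fn.tag = arg
        return fn
    return deco

def f2(fn):
    # Plain decorator: applied before f1(arg), per the nesting rule.
    fn.wrapped = True
    return fn

@f1('x')
@f2
def func():
    return 'body'
```

Both attributes end up on ``func``, confirming the nested application order.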
If the form\n"``*identifier``" is present, it is initialized to a tuple receiving\nany excess positional parameters, defaulting to the empty tuple. If\nthe form "``**identifier``" is present, it is initialized to a new\ndictionary receiving any excess keyword arguments, defaulting to a new\nempty dictionary.\n\nIt is also possible to create anonymous functions (functions not bound\nto a name), for immediate use in expressions. This uses lambda forms,\ndescribed in section *Lambdas*. Note that the lambda form is merely a\nshorthand for a simplified function definition; a function defined in\na "``def``" statement can be passed around or assigned to another name\njust like a function defined by a lambda form. The "``def``" form is\nactually more powerful since it allows the execution of multiple\nstatements.\n\n**Programmer\'s note:** Functions are first-class objects. A "``def``"\nform executed inside a function definition defines a local function\nthat can be returned or passed around. Free variables used in the\nnested function can access the local variables of the function\ncontaining the def. See section *Naming and binding* for details.\n', 'global': u'\nThe ``global`` statement\n************************\n\n global_stmt ::= "global" identifier ("," identifier)*\n\nThe ``global`` statement is a declaration which holds for the entire\ncurrent code block. It means that the listed identifiers are to be\ninterpreted as globals. 
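The excess-argument binding described above ("``*identifier``" to a tuple, "``**identifier``" to a dict, each defaulting to empty) in a minimal sketch:

```python
def collect(first, *rest, **extras):
    # 'rest' receives excess positional arguments as a tuple;
    # 'extras' receives excess keyword arguments as a dict.
    return first, rest, extras
```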
It would be impossible to assign to a global\nvariable without ``global``, although free variables may refer to\nglobals without being declared global.\n\nNames listed in a ``global`` statement must not be used in the same\ncode block textually preceding that ``global`` statement.\n\nNames listed in a ``global`` statement must not be defined as formal\nparameters or in a ``for`` loop control target, ``class`` definition,\nfunction definition, or ``import`` statement.\n\n**CPython implementation detail:** The current implementation does not\nenforce the latter two restrictions, but programs should not abuse\nthis freedom, as future implementations may enforce them or silently\nchange the meaning of the program.\n\n**Programmer\'s note:** the ``global`` is a directive to the parser.\nIt applies only to code parsed at the same time as the ``global``\nstatement. In particular, a ``global`` statement contained in an\n``exec`` statement does not affect the code block *containing* the\n``exec`` statement, and code contained in an ``exec`` statement is\nunaffected by ``global`` statements in the code containing the\n``exec`` statement. The same applies to the ``eval()``,\n``execfile()`` and ``compile()`` functions.\n', - 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. 
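The ``global`` rules above, sketched: assigning to a module-level name requires the declaration, while merely reading it does not.

```python
counter = 0

def bump():
    global counter        # required: the function assigns to the name
    counter += 1

def peek():
    return counter        # reading a global needs no declaration

bump()
bump()
```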
See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', - 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. 
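The ``__*`` class-private mangling mentioned above can be observed directly; ``Base`` and ``Derived`` are illustrative names:

```python
class Base(object):
    def __init__(self):
        self.__token = 'base'       # mangled to _Base__token

class Derived(Base):
    def __init__(self):
        Base.__init__(self)
        self.__token = 'derived'    # mangled to _Derived__token: no clash

d = Derived()
```

The two attributes coexist under their mangled names, which is exactly the clash-avoidance the text describes.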
It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library);\n applications should not expect to define additional names using\n this convention. The set of names of this class defined by Python\n may be extended in future versions. See section *Special method\n names*.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'id-classes': u'\nReserved classes of identifiers\n*******************************\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. 
The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. *Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', + 'identifiers': u'\nIdentifiers and keywords\n************************\n\nIdentifiers (also referred to as *names*) are described by the\nfollowing lexical definitions:\n\n identifier ::= (letter|"_") (letter | digit | "_")*\n letter ::= lowercase | uppercase\n lowercase ::= "a"..."z"\n uppercase ::= "A"..."Z"\n digit ::= "0"..."9"\n\nIdentifiers are unlimited in length. Case is significant.\n\n\nKeywords\n========\n\nThe following identifiers are used as reserved words, or *keywords* of\nthe language, and cannot be used as ordinary identifiers. 
They must\nbe spelled exactly as written here:\n\n and del from not while\n as elif global or with\n assert else if pass yield\n break except import print\n class exec in raise\n continue finally is return\n def for lambda try\n\nChanged in version 2.4: ``None`` became a constant and is now\nrecognized by the compiler as a name for the built-in object ``None``.\nAlthough it is not a keyword, you cannot assign a different object to\nit.\n\nChanged in version 2.5: Both ``as`` and ``with`` are only recognized\nwhen the ``with_statement`` future feature has been enabled. It will\nalways be enabled in Python 2.6. See section *The with statement* for\ndetails. Note that using ``as`` and ``with`` as identifiers will\nalways issue a warning, even when the ``with_statement`` future\ndirective is not in effect.\n\n\nReserved classes of identifiers\n===============================\n\nCertain classes of identifiers (besides keywords) have special\nmeanings. These classes are identified by the patterns of leading and\ntrailing underscore characters:\n\n``_*``\n Not imported by ``from module import *``. The special identifier\n ``_`` is used in the interactive interpreter to store the result of\n the last evaluation; it is stored in the ``__builtin__`` module.\n When not in interactive mode, ``_`` has no special meaning and is\n not defined. See section *The import statement*.\n\n Note: The name ``_`` is often used in conjunction with\n internationalization; refer to the documentation for the\n ``gettext`` module for more information on this convention.\n\n``__*__``\n System-defined names. These names are defined by the interpreter\n and its implementation (including the standard library). Current\n system names are discussed in the *Special method names* section\n and elsewhere. More will likely be defined in future versions of\n Python. 
*Any* use of ``__*__`` names, in any context, that does\n not follow explicitly documented use, is subject to breakage\n without warning.\n\n``__*``\n Class-private names. Names in this category, when used within the\n context of a class definition, are re-written to use a mangled form\n to help avoid name clashes between "private" attributes of base and\n derived classes. See section *Identifiers (Names)*.\n', 'if': u'\nThe ``if`` statement\n********************\n\nThe ``if`` statement is used for conditional execution:\n\n if_stmt ::= "if" expression ":" suite\n ( "elif" expression ":" suite )*\n ["else" ":" suite]\n\nIt selects exactly one of the suites by evaluating the expressions one\nby one until one is found to be true (see section *Boolean operations*\nfor the definition of true and false); then that suite is executed\n(and no other part of the ``if`` statement is executed or evaluated).\nIf all expressions are false, the suite of the ``else`` clause, if\npresent, is executed.\n', 'imaginary': u'\nImaginary literals\n******************\n\nImaginary literals are described by the following lexical definitions:\n\n imagnumber ::= (floatnumber | intpart) ("j" | "J")\n\nAn imaginary literal yields a complex number with a real part of 0.0.\nComplex numbers are represented as a pair of floating point numbers\nand have the same restrictions on their range. To create a complex\nnumber with a nonzero real part, add a floating point number to it,\ne.g., ``(3+4j)``. 
Some examples of imaginary literals:\n\n 3.14j 10.j 10j .001j 1e100j 3.14e-10j\n', - 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. 
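The imaginary-literal rule above (a literal always has real part 0.0; add a real number for a nonzero real part) checks out directly:

```python
pure = 4j         # imaginary literal: real part is 0.0
mixed = 3 + 4j    # adding an int/float gives a full complex number
```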
If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. 
If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. ``__package__``\nis optional but should be set to the name of package that contains the\nmodule or package (the empty string is used for module not contained\nin a package). 
``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement..\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. 
If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimprt mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. 
It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. 
Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. 
If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', + 'import': u'\nThe ``import`` statement\n************************\n\n import_stmt ::= "import" module ["as" name] ( "," module ["as" name] )*\n | "from" relative_module "import" identifier ["as" name]\n ( "," identifier ["as" name] )*\n | "from" relative_module "import" "(" identifier ["as" name]\n ( "," identifier ["as" name] )* [","] ")"\n | "from" module "import" "*"\n module ::= (identifier ".")* identifier\n relative_module ::= "."* module | "."+\n name ::= identifier\n\nImport statements are executed in two steps: (1) find a module, and\ninitialize it if necessary; (2) define a name or names in the local\nnamespace (of the scope where the ``import`` statement occurs). The\nstatement comes in two forms differing on whether it uses the ``from``\nkeyword. The first form (without ``from``) repeats these steps for\neach identifier in the list. The form with ``from`` performs step (1)\nonce, and then performs step (2) repeatedly.\n\nTo understand how step (1) occurs, one must first understand how\nPython handles hierarchical naming of modules. To help organize\nmodules and provide a hierarchy in naming, Python has a concept of\npackages. A package can contain other packages and modules while\nmodules cannot contain other modules or packages. From a file system\nperspective, packages are directories and modules are files. The\noriginal specification for packages is still available to read,\nalthough minor details have changed since the writing of that\ndocument.\n\nOnce the name of the module is known (unless otherwise specified, the\nterm "module" will refer to both packages and modules), searching for\nthe module or package can begin. 
The first place checked is\n``sys.modules``, the cache of all modules that have been imported\npreviously. If the module is found there then it is used in step (2)\nof import.\n\nIf the module is not found in the cache, then ``sys.meta_path`` is\nsearched (the specification for ``sys.meta_path`` can be found in\n**PEP 302**). The object is a list of *finder* objects which are\nqueried in order as to whether they know how to load the module by\ncalling their ``find_module()`` method with the name of the module. If\nthe module happens to be contained within a package (as denoted by the\nexistence of a dot in the name), then a second argument to\n``find_module()`` is given as the value of the ``__path__`` attribute\nfrom the parent package (everything up to the last dot in the name of\nthe module being imported). If a finder can find the module it returns\na *loader* (discussed later) or returns ``None``.\n\nIf none of the finders on ``sys.meta_path`` are able to find the\nmodule then some implicitly defined finders are queried.\nImplementations of Python vary in what implicit meta path finders are\ndefined. The one they all do define, though, is one that handles\n``sys.path_hooks``, ``sys.path_importer_cache``, and ``sys.path``.\n\nThe implicit finder searches for the requested module in the "paths"\nspecified in one of two places ("paths" do not have to be file system\npaths). If the module being imported is supposed to be contained\nwithin a package then the second argument passed to ``find_module()``,\n``__path__`` on the parent package, is used as the source of paths. If\nthe module is not contained in a package then ``sys.path`` is used as\nthe source of paths.\n\nOnce the source of paths is chosen it is iterated over to find a\nfinder that can handle that path. The dict at\n``sys.path_importer_cache`` caches finders for paths and is checked\nfor a finder. 
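Step (1)'s first check, the ``sys.modules`` cache described above, is visible from ordinary code: re-importing an already-loaded module is a cache hit, not a reload.

```python
import sys
import math

# 'math' was initialized once; subsequent imports return the cached object.
cached = sys.modules['math']
```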
If the path does not have a finder cached then\n``sys.path_hooks`` is searched by calling each object in the list with\na single argument of the path, returning a finder or raises\n``ImportError``. If a finder is returned then it is cached in\n``sys.path_importer_cache`` and then used for that path entry. If no\nfinder can be found but the path exists then a value of ``None`` is\nstored in ``sys.path_importer_cache`` to signify that an implicit,\nfile-based finder that handles modules stored as individual files\nshould be used for that path. If the path does not exist then a finder\nwhich always returns ``None`` is placed in the cache for the path.\n\nIf no finder can find the module then ``ImportError`` is raised.\nOtherwise some finder returned a loader whose ``load_module()`` method\nis called with the name of the module to load (see **PEP 302** for the\noriginal definition of loaders). A loader has several responsibilities\nto perform on a module it loads. First, if the module already exists\nin ``sys.modules`` (a possibility if the loader is called outside of\nthe import machinery) then it is to use that module for initialization\nand not a new module. But if the module does not exist in\n``sys.modules`` then it is to be added to that dict before\ninitialization begins. If an error occurs during loading of the module\nand it was added to ``sys.modules`` it is to be removed from the dict.\nIf an error occurs but the module was already in ``sys.modules`` it is\nleft in the dict.\n\nThe loader must set several attributes on the module. ``__name__`` is\nto be set to the name of the module. ``__file__`` is to be the "path"\nto the file unless the module is built-in (and thus listed in\n``sys.builtin_module_names``) in which case the attribute is not set.\nIf what is being imported is a package then ``__path__`` is to be set\nto a list of paths to be searched when looking for modules and\npackages contained within the package being imported. 
``__package__``\nis optional but should be set to the name of the package that contains\nthe module or package (the empty string is used for modules not\ncontained in a package). ``__loader__`` is also optional but should be set to\nthe loader object that is loading the module.\n\nIf an error occurs during loading then the loader raises\n``ImportError`` if some other exception is not already being\npropagated. Otherwise the loader returns the module that was loaded\nand initialized.\n\nWhen step (1) finishes without raising an exception, step (2) can\nbegin.\n\nThe first form of ``import`` statement binds the module name in the\nlocal namespace to the module object, and then goes on to import the\nnext identifier, if any. If the module name is followed by ``as``,\nthe name following ``as`` is used as the local name for the module.\n\nThe ``from`` form does not bind the module name: it goes through the\nlist of identifiers, looks each one of them up in the module found in\nstep (1), and binds the name in the local namespace to the object thus\nfound. As with the first form of ``import``, an alternate local name\ncan be supplied by specifying "``as`` localname". If a name is not\nfound, ``ImportError`` is raised. If the list of identifiers is\nreplaced by a star (``\'*\'``), all public names defined in the module\nare bound in the local namespace of the ``import`` statement.\n\nThe *public names* defined by a module are determined by checking the\nmodule\'s namespace for a variable named ``__all__``; if defined, it\nmust be a sequence of strings which are names defined or imported by\nthat module. The names given in ``__all__`` are all considered public\nand are required to exist. If ``__all__`` is not defined, the set of\npublic names includes all names found in the module\'s namespace which\ndo not begin with an underscore character (``\'_\'``). ``__all__``\nshould contain the entire public API. 
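The public-name rules for ``from m import *`` can be checked against a hypothetical in-memory module (this mirrors the rules stated above; it is not the import machinery itself):

```python
import types

# A made-up module with one public and one underscore-prefixed name.
mod = types.ModuleType("demo")
mod.public = 1
mod._private = 2

# Without __all__: every name not starting with '_' is public
# (the dunder attributes set by ModuleType all start with '_').
no_all = sorted(n for n in vars(mod) if not n.startswith("_"))
assert no_all == ["public"]

# With __all__: exactly the listed names are public, underscores included.
mod.__all__ = ["public", "_private"]
assert mod.__all__ == ["public", "_private"]
```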
It is intended to avoid\naccidentally exporting items that are not part of the API (such as\nlibrary modules which were imported and used within the module).\n\nThe ``from`` form with ``*`` may only occur in a module scope. If the\nwild card form of import --- ``import *`` --- is used in a function\nand the function contains or is a nested block with free variables,\nthe compiler will raise a ``SyntaxError``.\n\nWhen specifying what module to import you do not have to specify the\nabsolute name of the module. When a module or package is contained\nwithin another package it is possible to make a relative import within\nthe same top package without having to mention the package name. By\nusing leading dots in the specified module or package after ``from``\nyou can specify how high to traverse up the current package hierarchy\nwithout specifying exact names. One leading dot means the current\npackage where the module making the import exists. Two dots means up\none package level. Three dots is up two levels, etc. So if you execute\n``from . import mod`` from a module in the ``pkg`` package then you\nwill end up importing ``pkg.mod``. If you execute ``from ..subpkg2\nimport mod`` from within ``pkg.subpkg1`` you will import\n``pkg.subpkg2.mod``. The specification for relative imports is\ncontained within **PEP 328**.\n\n``importlib.import_module()`` is provided to support applications that\ndetermine which modules need to be loaded dynamically.\n\n\nFuture statements\n=================\n\nA *future statement* is a directive to the compiler that a particular\nmodule should be compiled using syntax or semantics that will be\navailable in a specified future release of Python. The future\nstatement is intended to ease migration to future versions of Python\nthat introduce incompatible changes to the language. 
It allows use of\nthe new features on a per-module basis before the release in which the\nfeature becomes standard.\n\n future_statement ::= "from" "__future__" "import" feature ["as" name]\n ("," feature ["as" name])*\n | "from" "__future__" "import" "(" feature ["as" name]\n ("," feature ["as" name])* [","] ")"\n feature ::= identifier\n name ::= identifier\n\nA future statement must appear near the top of the module. The only\nlines that can appear before a future statement are:\n\n* the module docstring (if any),\n\n* comments,\n\n* blank lines, and\n\n* other future statements.\n\nThe features recognized by Python 2.6 are ``unicode_literals``,\n``print_function``, ``absolute_import``, ``division``, ``generators``,\n``nested_scopes`` and ``with_statement``. ``generators``,\n``with_statement``, ``nested_scopes`` are redundant in Python version\n2.6 and above because they are always enabled.\n\nA future statement is recognized and treated specially at compile\ntime: Changes to the semantics of core constructs are often\nimplemented by generating different code. It may even be the case\nthat a new feature introduces new incompatible syntax (such as a new\nreserved word), in which case the compiler may need to parse the\nmodule differently. 
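A future statement in action, using two of the features listed above; both are harmless no-ops on Python 3, so this sketch runs on either line:

```python
# The future statement must appear near the top of the module,
# after at most a docstring, comments, and blank lines.
from __future__ import division, print_function

# With 'division' in effect, '/' on ints is true division;
# floor division is spelled '//'.
assert 1 / 2 == 0.5
assert 1 // 2 == 0
```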
Such decisions cannot be pushed off until\nruntime.\n\nFor any given release, the compiler knows which feature names have\nbeen defined, and raises a compile-time error if a future statement\ncontains a feature not known to it.\n\nThe direct runtime semantics are the same as for any import statement:\nthere is a standard module ``__future__``, described later, and it\nwill be imported in the usual way at the time the future statement is\nexecuted.\n\nThe interesting runtime semantics depend on the specific feature\nenabled by the future statement.\n\nNote that there is nothing special about the statement:\n\n import __future__ [as name]\n\nThat is not a future statement; it\'s an ordinary import statement with\nno special semantics or syntax restrictions.\n\nCode compiled by an ``exec`` statement or calls to the built-in\nfunctions ``compile()`` and ``execfile()`` that occur in a module\n``M`` containing a future statement will, by default, use the new\nsyntax or semantics associated with the future statement. This can,\nstarting with Python 2.2 be controlled by optional arguments to\n``compile()`` --- see the documentation of that function for details.\n\nA future statement typed at an interactive interpreter prompt will\ntake effect for the rest of the interpreter session. If an\ninterpreter is started with the *-i* option, is passed a script name\nto execute, and the script includes a future statement, it will be in\neffect in the interactive session started after the script is\nexecuted.\n\nSee also:\n\n **PEP 236** - Back to the __future__\n The original proposal for the __future__ mechanism.\n', 'in': u'\nComparisons\n***********\n\nUnlike C, all comparison operations in Python have the same priority,\nwhich is lower than that of any arithmetic, shifting or bitwise\noperation. 
Also unlike C, expressions like ``a < b < c`` have the\ninterpretation that is conventional in mathematics:\n\n comparison ::= or_expr ( comp_operator or_expr )*\n comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "<>" | "!="\n | "is" ["not"] | ["not"] "in"\n\nComparisons yield boolean values: ``True`` or ``False``.\n\nComparisons can be chained arbitrarily, e.g., ``x < y <= z`` is\nequivalent to ``x < y and y <= z``, except that ``y`` is evaluated\nonly once (but in both cases ``z`` is not evaluated at all when ``x <\ny`` is found to be false).\n\nFormally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,\n*op2*, ..., *opN* are comparison operators, then ``a op1 b op2 c ... y\nopN z`` is equivalent to ``a op1 b and b op2 c and ... y opN z``,\nexcept that each expression is evaluated at most once.\n\nNote that ``a op1 b op2 c`` doesn\'t imply any kind of comparison\nbetween *a* and *c*, so that, e.g., ``x < y > z`` is perfectly legal\n(though perhaps not pretty).\n\nThe forms ``<>`` and ``!=`` are equivalent; for consistency with C,\n``!=`` is preferred; where ``!=`` is mentioned below ``<>`` is also\naccepted. The ``<>`` spelling is considered obsolescent.\n\nThe operators ``<``, ``>``, ``==``, ``>=``, ``<=``, and ``!=`` compare\nthe values of two objects. The objects need not have the same type.\nIf both are numbers, they are converted to a common type. Otherwise,\nobjects of different types *always* compare unequal, and are ordered\nconsistently but arbitrarily. You can control comparison behavior of\nobjects of non-built-in types by defining a ``__cmp__`` method or rich\ncomparison methods like ``__gt__``, described in section *Special\nmethod names*.\n\n(This unusual definition of comparison was used to simplify the\ndefinition of operations like sorting and the ``in`` and ``not in``\noperators. 
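The chaining rules above can be demonstrated directly: each operand is evaluated at most once, left to right, and the chain short-circuits like ``and``:

```python
seen = []
def val(v):
    # Record every evaluation so we can observe the order and count.
    seen.append(v)
    return v

# 'x < y <= z' behaves like 'x < y and y <= z', middle operand once.
assert val(1) < val(2) <= val(3)
assert seen == [1, 2, 3]

# Chaining implies no comparison between the outer operands:
assert (1 < 5 > 2) is True      # legal, though perhaps not pretty

# Short-circuiting: once 'x < y' is false, 'z' is never evaluated.
seen = []
assert not (val(2) < val(1) < val(3))
assert seen == [2, 1]
```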
In the future, the comparison rules for objects of\ndifferent types are likely to change.)\n\nComparison of objects of the same type depends on the type:\n\n* Numbers are compared arithmetically.\n\n* Strings are compared lexicographically using the numeric equivalents\n (the result of the built-in function ``ord()``) of their characters.\n Unicode and 8-bit strings are fully interoperable in this behavior.\n [4]\n\n* Tuples and lists are compared lexicographically using comparison of\n corresponding elements. This means that to compare equal, each\n element must compare equal and the two sequences must be of the same\n type and have the same length.\n\n If not equal, the sequences are ordered the same as their first\n differing elements. For example, ``cmp([1,2,x], [1,2,y])`` returns\n the same as ``cmp(x,y)``. If the corresponding element does not\n exist, the shorter sequence is ordered first (for example, ``[1,2] <\n [1,2,3]``).\n\n* Mappings (dictionaries) compare equal if and only if their sorted\n (key, value) lists compare equal. [5] Outcomes other than equality\n are resolved consistently, but are not otherwise defined. [6]\n\n* Most other objects of built-in types compare unequal unless they are\n the same object; the choice whether one object is considered smaller\n or larger than another one is made arbitrarily but consistently\n within one execution of a program.\n\nThe operators ``in`` and ``not in`` test for collection membership.\n``x in s`` evaluates to true if *x* is a member of the collection *s*,\nand false otherwise. ``x not in s`` returns the negation of ``x in\ns``. The collection membership test has traditionally been bound to\nsequences; an object is a member of a collection if the collection is\na sequence and contains an element equal to that object. However, it\nmakes sense for many other object types to support membership tests\nwithout being a sequence. 
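The lexicographic ordering of sequences described above, in a few self-checking lines (restricted to comparisons that behave identically on Python 2 and 3):

```python
# Decided by the first differing element:
assert (1, 2, 0) < (1, 2, 9)

# If one sequence is a prefix of the other, the shorter orders first:
assert [1, 2] < [1, 2, 3]

# Equal requires same length and pairwise-equal elements:
assert [1, 2, 3] == [1, 2, 3]

# Mappings compare equal iff their (key, value) contents agree:
assert {"a": 1} == {"a": 1}
assert {"a": 1} != {"a": 2}
```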
In particular, dictionaries (for keys) and\nsets support membership testing.\n\nFor the list and tuple types, ``x in y`` is true if and only if there\nexists an index *i* such that ``x == y[i]`` is true.\n\nFor the Unicode and string types, ``x in y`` is true if and only if\n*x* is a substring of *y*. An equivalent test is ``y.find(x) != -1``.\nNote, *x* and *y* need not be the same type; consequently, ``u\'ab\' in\n\'abc\'`` will return ``True``. Empty strings are always considered to\nbe a substring of any other string, so ``"" in "abc"`` will return\n``True``.\n\nChanged in version 2.3: Previously, *x* was required to be a string of\nlength ``1``.\n\nFor user-defined classes which define the ``__contains__()`` method,\n``x in y`` is true if and only if ``y.__contains__(x)`` is true.\n\nFor user-defined classes which do not define ``__contains__()`` but do\ndefine ``__iter__()``, ``x in y`` is true if some value ``z`` with ``x\n== z`` is produced while iterating over ``y``. If an exception is\nraised during the iteration, it is as if ``in`` raised that exception.\n\nLastly, the old-style iteration protocol is tried: if a class defines\n``__getitem__()``, ``x in y`` is true if and only if there is a non-\nnegative integer index *i* such that ``x == y[i]``, and all lower\ninteger indices do not raise ``IndexError`` exception. (If any other\nexception is raised, it is as if ``in`` raised that exception).\n\nThe operator ``not in`` is defined to have the inverse true value of\n``in``.\n\nThe operators ``is`` and ``is not`` test for object identity: ``x is\ny`` is true if and only if *x* and *y* are the same object. ``x is\nnot y`` yields the inverse truth value. 
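The membership-test protocols above (``__contains__``, the ``__iter__`` fallback, and the substring rule for strings) can be exercised with two made-up classes:

```python
class EvenNumbers(object):
    # __contains__ gives 'in' its meaning directly.
    def __contains__(self, x):
        return isinstance(x, int) and x % 2 == 0

class Digits(object):
    # No __contains__: 'in' falls back to iterating and testing '=='.
    def __iter__(self):
        return iter([10, 20, 30])

assert 4 in EvenNumbers()
assert 5 not in EvenNumbers()
assert 20 in Digits()
assert 7 not in Digits()

# For strings, 'in' is a substring test; "" is a substring of anything.
assert "ab" in "abc"
assert "" in "abc"
```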
[7]\n', 'integers': u'\nInteger and long integer literals\n*********************************\n\nInteger and long integer literals are described by the following\nlexical definitions:\n\n longinteger ::= integer ("l" | "L")\n integer ::= decimalinteger | octinteger | hexinteger | bininteger\n decimalinteger ::= nonzerodigit digit* | "0"\n octinteger ::= "0" ("o" | "O") octdigit+ | "0" octdigit+\n hexinteger ::= "0" ("x" | "X") hexdigit+\n bininteger ::= "0" ("b" | "B") bindigit+\n nonzerodigit ::= "1"..."9"\n octdigit ::= "0"..."7"\n bindigit ::= "0" | "1"\n hexdigit ::= digit | "a"..."f" | "A"..."F"\n\nAlthough both lower case ``\'l\'`` and upper case ``\'L\'`` are allowed as\nsuffix for long integers, it is strongly recommended to always use\n``\'L\'``, since the letter ``\'l\'`` looks too much like the digit\n``\'1\'``.\n\nPlain integer literals that are above the largest representable plain\ninteger (e.g., 2147483647 when using 32-bit arithmetic) are accepted\nas if they were long integers instead. [1] There is no limit for long\ninteger literals apart from what can be stored in available memory.\n\nSome examples of plain integer literals (first row) and long integer\nliterals (second and third rows):\n\n 7 2147483647 0177\n 3L 79228162514264337593543950336L 0377L 0x100000000L\n 79228162514264337593543950336 0xdeadbeef\n', 'lambda': u'\nLambdas\n*******\n\n lambda_form ::= "lambda" [parameter_list]: expression\n old_lambda_form ::= "lambda" [parameter_list]: old_expression\n\nLambda forms (lambda expressions) have the same syntactic position as\nexpressions. 
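A few of the integer-literal spellings from the grammar above, checked against their values. This sketch is restricted to forms accepted by both Python 2.6+ and Python 3; the Python-2-only spellings (bare-``0`` octals like ``0377`` and the ``L`` suffix) are deliberately omitted:

```python
assert 0x100 == 256              # hexadecimal
assert 0o377 == 255              # octal with the '0o' prefix
assert 0b101 == 5                # binary
assert 2147483647 == 0x7FFFFFFF  # same value, two spellings
```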
They are a shorthand to create anonymous functions; the\nexpression ``lambda arguments: expression`` yields a function object.\nThe unnamed object behaves like a function object defined with\n\n def name(arguments):\n return expression\n\nSee section *Function definitions* for the syntax of parameter lists.\nNote that functions created with lambda forms cannot contain\nstatements.\n', 'lists': u'\nList displays\n*************\n\nA list display is a possibly empty series of expressions enclosed in\nsquare brackets:\n\n list_display ::= "[" [expression_list | list_comprehension] "]"\n list_comprehension ::= expression list_for\n list_for ::= "for" target_list "in" old_expression_list [list_iter]\n old_expression_list ::= old_expression [("," old_expression)+ [","]]\n old_expression ::= or_test | old_lambda_form\n list_iter ::= list_for | list_if\n list_if ::= "if" old_expression [list_iter]\n\nA list display yields a new list object. Its contents are specified\nby providing either a list of expressions or a list comprehension.\nWhen a comma-separated list of expressions is supplied, its elements\nare evaluated from left to right and placed into the list object in\nthat order. When a list comprehension is supplied, it consists of a\nsingle expression followed by at least one ``for`` clause and zero or\nmore ``for`` or ``if`` clauses. In this case, the elements of the new\nlist are those that would be produced by considering each of the\n``for`` or ``if`` clauses a block, nesting from left to right, and\nevaluating the expression to produce a list element each time the\ninnermost block is reached [1].\n', - 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. 
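The nesting rule for list comprehensions above, checked against the equivalent explicit loops:

```python
# The 'for'/'if' clauses nest from left to right; the expression is
# evaluated each time the innermost block is reached.
pairs = [(x, y) for x in range(3) for y in range(3) if x != y]

nested = []
for x in range(3):          # equivalent explicit nesting
    for y in range(3):
        if x != y:
            nested.append((x, y))

assert pairs == nested
assert pairs[0] == (0, 1)
```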
The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) 
If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. 
Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe global statement has the same scope as a name binding operation in\nthe same block. If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. 
Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", + 'naming': u"\nNaming and binding\n******************\n\n*Names* refer to objects. Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block. 
A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the '**-c**' option) is a code block. The file read by the\nbuilt-in function ``execfile()`` is a code block. The string argument\npassed to the built-in function ``eval()`` and to the ``exec``\nstatement is a code block. The expression read and evaluated by the\nbuilt-in function ``input()`` is a code block.\n\nA code block is executed in an *execution frame*. A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block's execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block. If a local\nvariable is defined in a block, its scope includes that block. If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name. The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes generator expressions since\nthey are implemented using a function scope. This means that the\nfollowing will fail:\n\n class A:\n a = 42\n b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope. The set of all such scopes visible to a code block\nis called the block's *environment*.\n\nIf a name is bound in a block, it is a local variable of that block.\nIf a name is bound at the module level, it is a global variable. (The\nvariables of the module code block are local and global.) 
If a\nvariable is used in a code block but not defined there, it is a *free\nvariable*.\n\nWhen a name is not found at all, a ``NameError`` exception is raised.\nIf the name refers to a local variable that has not been bound, a\n``UnboundLocalError`` exception is raised. ``UnboundLocalError`` is a\nsubclass of ``NameError``.\n\nThe following constructs bind names: formal parameters to functions,\n``import`` statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, ``for`` loop header, in the\nsecond position of an ``except`` clause header or after ``as`` in a\n``with`` statement. The ``import`` statement of the form ``from ...\nimport *`` binds all names defined in the imported module, except\nthose beginning with an underscore. This form may only be used at the\nmodule level.\n\nA target occurring in a ``del`` statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name). It\nis illegal to unbind a name that is referenced by an enclosing scope;\nthe compiler will report a ``SyntaxError``.\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block. This can lead to errors when a name is used within a\nblock before it is bound. This rule is subtle. Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block. The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the global statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace. 
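The ``UnboundLocalError`` behaviour described above: a name assigned anywhere in a block is local throughout that block, even before the assignment is reached.

```python
x = 10  # a module-level binding that f() does NOT see

def f():
    # 'x' is local to f because of the assignment below, so reading it
    # here raises UnboundLocalError rather than finding the global.
    try:
        return x
    except UnboundLocalError:
        return "unbound"
    x = 20  # never reached, but it still makes 'x' local to the block

assert f() == "unbound"
assert issubclass(UnboundLocalError, NameError)
```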
Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module ``__builtin__``. The global namespace is searched\nfirst. If the name is not found there, the builtins namespace is\nsearched. The global statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name ``__builtins__`` in its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module's dictionary is used). By default, when in the\n``__main__`` module, ``__builtins__`` is the built-in module\n``__builtin__`` (note: no 's'); when in any other module,\n``__builtins__`` is an alias for the dictionary of the ``__builtin__``\nmodule itself. ``__builtins__`` can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n``__builtins__``; it is strictly an implementation detail. Users\nwanting to override values in the builtins namespace should ``import``\nthe ``__builtin__`` (no 's') module and modify its attributes\nappropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported. The main module for a script is always called\n``__main__``.\n\nThe ``global`` statement has the same scope as a name binding\noperation in the same block. If the nearest enclosing scope for a\nfree variable contains a global statement, the free variable is\ntreated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class. 
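A short sketch of the ``global`` statement's effect as described above: all uses of the listed name inside the block refer to the top-level binding.

```python
counter = 0

def bump():
    global counter   # 'counter' now refers to the module-level name
    counter += 1     # rebinds the global, not a fresh local

bump()
bump()
assert counter == 2
```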
Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name. An error will be reported at compile time.\n\nIf the wild card form of import --- ``import *`` --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a ``SyntaxError``.\n\nIf ``exec`` is used in a function and the function contains or is a\nnested block with free variables, the compiler will raise a\n``SyntaxError`` unless the exec explicitly specifies the local\nnamespace for the ``exec``. (In other words, ``exec obj`` would be\nillegal, but ``exec obj in ns`` would be legal.)\n\nThe ``eval()``, ``execfile()``, and ``input()`` functions and the\n``exec`` statement do not have access to the full environment for\nresolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the\nnearest enclosing namespace, but in the global namespace. [1] The\n``exec`` statement and the ``eval()`` and ``execfile()`` functions\nhave optional arguments to override the global and local namespace.\nIf only one namespace is specified, it is used for both.\n", 'numbers': u"\nNumeric literals\n****************\n\nThere are four types of numeric literals: plain integers, long\nintegers, floating point numbers, and imaginary numbers. 
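The optional namespace-override arguments mentioned above can be shown with ``eval()``, which keeps the sketch runnable on both Python 2 and 3 (the ``exec`` statement and ``execfile()`` are Python-2-only forms of the same idea):

```python
# Explicit globals and locals replace the caller's environment
# when the evaluated expression resolves names.
glb = {"x": 1}
loc = {"y": 2}
assert eval("x + y", glb, loc) == 3

# With a single namespace given, it is used as both globals and locals.
assert eval("x * 2", {"x": 21}) == 42
```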
There are no\ncomplex literals (complex numbers can be formed by adding a real\nnumber and an imaginary number).\n\nNote that numeric literals do not include a sign; a phrase like ``-1``\nis actually an expression composed of the unary operator '``-``' and\nthe literal ``1``.\n", 'numeric-types': u'\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. 
The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. 
This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n', - 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. 
(The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change.\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. 
The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', - 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. 
Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | |\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, 
``//``, ``%`` | Multiplication, division, remainder |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. Function\n ``fmod()`` in the ``math`` module returns a result whose sign\n matches the sign of the first argument instead, and so returns\n ``-1e-100`` in this case. Which approach is more appropriate\n depends on the application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. 
In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', + 'objects': u'\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data. All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value. An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory. 
The \'``is``\' operator compares the\nidentity of two objects; the ``id()`` function returns an integer\nrepresenting its identity (currently implemented as its address). An\nobject\'s *type* is also unchangeable. [1] An object\'s type determines\nthe operations that the object supports (e.g., "does it have a\nlength?") and also defines the possible values for objects of that\ntype. The ``type()`` function returns an object\'s type (which is an\nobject itself). The *value* of some objects can change. Objects\nwhose value can change are said to be *mutable*; objects whose value\nis unchangeable once they are created are called *immutable*. (The\nvalue of an immutable container object that contains a reference to a\nmutable object can change when the latter\'s value is changed; however\nthe container is still considered immutable, because the collection of\nobjects it contains cannot be changed. So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected. An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references. See the documentation of the ``gc`` module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. 
Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'``try``...``except``\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows. It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a ``close()`` method. Programs\nare strongly recommended to explicitly close such objects. The\n\'``try``...``finally``\' statement provides a convenient way to do\nthis.\n\nSome objects contain references to other objects; these are called\n*containers*. Examples of containers are tuples, lists and\ndictionaries. The references are part of a container\'s value. In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied. So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior. Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed. 
E.g., after ``a = 1; b =\n1``, ``a`` and ``b`` may or may not refer to the same object with the\nvalue one, depending on the implementation, but after ``c = []; d =\n[]``, ``c`` and ``d`` are guaranteed to refer to two different,\nunique, newly created empty lists. (Note that ``c = d = []`` assigns\nthe same object to both ``c`` and ``d``.)\n', + 'operator-summary': u'\nSummary\n*******\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding). Operators in the same box have the same precedence. Unless\nthe syntax is explicitly given, operators are binary. Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator | Description |\n+=================================================+=======================================+\n| ``lambda`` | Lambda expression |\n+-------------------------------------------------+---------------------------------------+\n| ``if`` -- ``else`` | Conditional expression |\n+-------------------------------------------------+---------------------------------------+\n| ``or`` | Boolean OR |\n+-------------------------------------------------+---------------------------------------+\n| ``and`` | Boolean AND |\n+-------------------------------------------------+---------------------------------------+\n| ``not`` *x* | Boolean NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``in``, ``not`` ``in``, ``is``, ``is not``, | Comparisons, including membership |\n| ``<``, ``<=``, ``>``, ``>=``, ``<>``, ``!=``, | tests and identity tests, |\n| ``==`` | 
|\n+-------------------------------------------------+---------------------------------------+\n| ``|`` | Bitwise OR |\n+-------------------------------------------------+---------------------------------------+\n| ``^`` | Bitwise XOR |\n+-------------------------------------------------+---------------------------------------+\n| ``&`` | Bitwise AND |\n+-------------------------------------------------+---------------------------------------+\n| ``<<``, ``>>`` | Shifts |\n+-------------------------------------------------+---------------------------------------+\n| ``+``, ``-`` | Addition and subtraction |\n+-------------------------------------------------+---------------------------------------+\n| ``*``, ``/``, ``//``, ``%`` | Multiplication, division, remainder |\n| | [8] |\n+-------------------------------------------------+---------------------------------------+\n| ``+x``, ``-x``, ``~x`` | Positive, negative, bitwise NOT |\n+-------------------------------------------------+---------------------------------------+\n| ``**`` | Exponentiation [9] |\n+-------------------------------------------------+---------------------------------------+\n| ``x[index]``, ``x[index:index]``, | Subscription, slicing, call, |\n| ``x(arguments...)``, ``x.attribute`` | attribute reference |\n+-------------------------------------------------+---------------------------------------+\n| ``(expressions...)``, ``[expressions...]``, | Binding or tuple display, list |\n| ``{key:datum...}``, ```expressions...``` | display, dictionary display, string |\n| | conversion |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] In Python 2.3 and later releases, a list comprehension "leaks" the\n control variables of each ``for`` it contains into the containing\n scope. 
However, this behavior is deprecated, and relying on it\n will not work in Python 3.0\n\n[2] While ``abs(x%y) < abs(y)`` is true mathematically, for floats it\n may not be true numerically due to roundoff. For example, and\n assuming a platform on which a Python float is an IEEE 754 double-\n precision number, in order that ``-1e-100 % 1e100`` have the same\n sign as ``1e100``, the computed result is ``-1e-100 + 1e100``,\n which is numerically exactly equal to ``1e100``. The function\n ``math.fmod()`` returns a result whose sign matches the sign of\n the first argument instead, and so returns ``-1e-100`` in this\n case. Which approach is more appropriate depends on the\n application.\n\n[3] If x is very close to an exact integer multiple of y, it\'s\n possible for ``floor(x/y)`` to be one larger than ``(x-x%y)/y``\n due to rounding. In such cases, Python returns the latter result,\n in order to preserve that ``divmod(x,y)[0] * y + x % y`` be very\n close to ``x``.\n\n[4] While comparisons between unicode strings make sense at the byte\n level, they may be counter-intuitive to users. For example, the\n strings ``u"\\u00C7"`` and ``u"\\u0043\\u0327"`` compare differently,\n even though they both represent the same unicode character (LATIN\n CAPITAL LETTER C WITH CEDILLA). To compare strings in a human\n recognizable way, compare using ``unicodedata.normalize()``.\n\n[5] The implementation computes this efficiently, without constructing\n lists or sorting.\n\n[6] Earlier versions of Python used lexicographic comparison of the\n sorted (key, value) lists, but this was very expensive for the\n common case of comparing for equality. 
An even earlier version of\n Python compared dictionaries by identity only, but this caused\n surprises because people expected to be able to test a dictionary\n for emptiness by comparing it to ``{}``.\n\n[7] Due to automatic garbage-collection, free lists, and the dynamic\n nature of descriptors, you may notice seemingly unusual behaviour\n in certain uses of the ``is`` operator, like those involving\n comparisons between instance methods, or constants. Check their\n documentation for more info.\n\n[8] The ``%`` operator is also used for string formatting; the same\n precedence applies.\n\n[9] The power operator ``**`` binds less tightly than an arithmetic or\n bitwise unary operator on its right, that is, ``2**-1`` is\n ``0.5``.\n', 'pass': u'\nThe ``pass`` statement\n**********************\n\n pass_stmt ::= "pass"\n\n``pass`` is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n def f(arg): pass # a function that does nothing (yet)\n\n class C: pass # a class with no methods (yet)\n', 'power': u'\nThe power operator\n******************\n\nThe power operator binds more tightly than unary operators on its\nleft; it binds less tightly than unary operators on its right. The\nsyntax is:\n\n power ::= primary ["**" u_expr]\n\nThus, in an unparenthesized sequence of power and unary operators, the\noperators are evaluated from right to left (this does not constrain\nthe evaluation order for the operands): ``-1**2`` results in ``-1``.\n\nThe power operator has the same semantics as the built-in ``pow()``\nfunction, when called with two arguments: it yields its left argument\nraised to the power of its right argument. The numeric arguments are\nfirst converted to a common type. The result type is that of the\narguments after coercion.\n\nWith mixed operand types, the coercion rules for binary arithmetic\noperators apply. 
For int and long int operands, the result has the\nsame type as the operands (after coercion) unless the second argument\nis negative; in that case, all arguments are converted to float and a\nfloat result is delivered. For example, ``10**2`` returns ``100``, but\n``10**-2`` returns ``0.01``. (This last feature was added in Python\n2.2. In Python 2.1 and before, if both arguments were of integer types\nand the second argument was negative, an exception was raised).\n\nRaising ``0.0`` to a negative power results in a\n``ZeroDivisionError``. Raising a negative number to a fractional power\nresults in a ``ValueError``.\n', 'print': u'\nThe ``print`` statement\n***********************\n\n print_stmt ::= "print" ([expression ("," expression)* [","]]\n | ">>" expression [("," expression)+ [","]])\n\n``print`` evaluates each expression in turn and writes the resulting\nobject to standard output (see below). If an object is not a string,\nit is first converted to a string using the rules for string\nconversions. The (resulting or original) string is then written. A\nspace is written before each object is (converted and) written, unless\nthe output system believes it is positioned at the beginning of a\nline. This is the case (1) when no characters have yet been written\nto standard output, (2) when the last character written to standard\noutput is a whitespace character except ``\' \'``, or (3) when the last\nwrite operation on standard output was not a ``print`` statement. (In\nsome cases it may be functional to write an empty string to standard\noutput for this reason.)\n\nNote: Objects which act like file objects but which are not the built-in\n file objects often do not properly emulate this aspect of the file\n object\'s behavior, so it is best not to rely on this.\n\nA ``\'\\n\'`` character is written at the end, unless the ``print``\nstatement ends with a comma. 
This is the only action if the statement\ncontains just the keyword ``print``.\n\nStandard output is defined as the file object named ``stdout`` in the\nbuilt-in module ``sys``. If no such object exists, or if it does not\nhave a ``write()`` method, a ``RuntimeError`` exception is raised.\n\n``print`` also has an extended form, defined by the second portion of\nthe syntax described above. This form is sometimes referred to as\n"``print`` chevron." In this form, the first expression after the\n``>>`` must evaluate to a "file-like" object, specifically an object\nthat has a ``write()`` method as described above. With this extended\nform, the subsequent expressions are printed to this file object. If\nthe first expression evaluates to ``None``, then ``sys.stdout`` is\nused as the file for output.\n', @@ -63,21 +63,21 @@ 'shifting': u'\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept plain or long integers as arguments. The\narguments are converted to a common type. They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as division by ``pow(2, n)``. A\nleft shift by *n* bits is defined as multiplication with ``pow(2,\nn)``. Negative shift counts raise a ``ValueError`` exception.\n\nNote: In the current implementation, the right-hand operand is required to\n be at most ``sys.maxsize``. If the right-hand operand is larger\n than ``sys.maxsize`` an ``OverflowError`` exception is raised.\n', 'slicings': u'\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list). Slicings may be used as expressions or as\ntargets in assignment or ``del`` statements. 
The syntax for a\nslicing:\n\n slicing ::= simple_slicing | extended_slicing\n simple_slicing ::= primary "[" short_slice "]"\n extended_slicing ::= primary "[" slice_list "]"\n slice_list ::= slice_item ("," slice_item)* [","]\n slice_item ::= expression | proper_slice | ellipsis\n proper_slice ::= short_slice | long_slice\n short_slice ::= [lower_bound] ":" [upper_bound]\n long_slice ::= short_slice ":" [stride]\n lower_bound ::= expression\n upper_bound ::= expression\n stride ::= expression\n ellipsis ::= "..."\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing. Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice nor ellipses). Similarly, when the slice\nlist has exactly one short slice and no trailing comma, the\ninterpretation as a simple slicing takes priority over that as an\nextended slicing.\n\nThe semantics for a simple slicing are as follows. The primary must\nevaluate to a sequence object. The lower and upper bound expressions,\nif present, must evaluate to plain integers; defaults are zero and the\n``sys.maxint``, respectively. If either bound is negative, the\nsequence\'s length is added to it. The slicing now selects all items\nwith index *k* such that ``i <= k < j`` where *i* and *j* are the\nspecified lower and upper bounds. This may be an empty sequence. It\nis not an error if *i* or *j* lie outside the range of valid indexes\n(such items don\'t exist so they aren\'t selected).\n\nThe semantics for an extended slicing are as follows. The primary\nmust evaluate to a mapping object, and it is indexed with a key that\nis constructed from the slice list, as follows. 
If the slice list\ncontains at least one comma, the key is a tuple containing the\nconversion of the slice items; otherwise, the conversion of the lone\nslice item is the key. The conversion of a slice item that is an\nexpression is that expression. The conversion of an ellipsis slice\nitem is the built-in ``Ellipsis`` object. The conversion of a proper\nslice is a slice object (see section *The standard type hierarchy*)\nwhose ``start``, ``stop`` and ``step`` attributes are the values of\nthe expressions given as lower bound, upper bound and stride,\nrespectively, substituting ``None`` for missing expressions.\n', 'specialattrs': u"\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant. Some of these are not reported\nby the ``dir()`` built-in function.\n\nobject.__dict__\n\n A dictionary or other mapping object used to store an object's\n (writable) attributes.\n\nobject.__methods__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\nobject.__members__\n\n Deprecated since version 2.2: Use the built-in function ``dir()``\n to get a list of an object's attributes. This attribute is no\n longer available.\n\ninstance.__class__\n\n The class to which a class instance belongs.\n\nclass.__bases__\n\n The tuple of base classes of a class object.\n\nclass.__name__\n\n The name of the class or type.\n\nThe following attributes are only supported by *new-style class*es.\n\nclass.__mro__\n\n This attribute is a tuple of classes that are considered when\n looking for base classes during method resolution.\n\nclass.mro()\n\n This method can be overridden by a metaclass to customize the\n method resolution order for its instances. 
It is called at class\n instantiation, and its result is stored in ``__mro__``.\n\nclass.__subclasses__()\n\n Each new-style class keeps a list of weak references to its\n immediate subclasses. This method returns a list of all those\n references still alive. Example:\n\n >>> int.__subclasses__()\n []\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found in\n the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list ``[1, 2]`` is considered equal to\n ``[1.0, 2.0]``, and similarly for tuples.\n\n[3] They must have since the parser can't tell the type of the\n operands.\n\n[4] To format only a tuple you should therefore provide a singleton\n tuple whose only element is the tuple to be formatted.\n\n[5] The advantage of leaving the newline on is that returning an empty\n string is then an unambiguous EOF indication. It is also possible\n (in cases where it might matter, for example, if you want to make\n an exact copy of a file while scanning its lines) to tell whether\n the last line of a file ended in a newline or not (yes this\n happens!).\n", - 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
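[Editorial illustration, not part of the quoted topics.py text: the ``x[i]`` / ``type(x).__getitem__(x, i)`` equivalence described above can be checked directly; the ``Doubler`` class name is invented.]

```python
# Sketch of the special-method dispatch described above: for a
# new-style class, x[i] is roughly equivalent to
# type(x).__getitem__(x, i).  The class name is hypothetical.
class Doubler(object):
    def __getitem__(self, i):
        # Return twice the index, just to make dispatch observable.
        return 2 * i

x = Doubler()
via_syntax = x[21]                     # invokes the special method
via_type = type(x).__getitem__(x, 21)  # the equivalent explicit call
```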
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. 
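[Editorial illustration of the immutable-subclass pattern just described; the ``UpperStr`` name is invented. A minimal sketch, not the canonical implementation:]

```python
# Sketch of customizing instance creation of an immutable type via
# __new__(), as described above.  Hypothetical class name.
class UpperStr(str):
    """A str subclass that normalizes its contents at creation time."""
    def __new__(cls, value=""):
        # str is immutable, so the transformation must happen in
        # __new__(), before the instance exists; __init__() would be
        # too late to change the value.
        return super(UpperStr, cls).__new__(cls, value.upper())

s = UpperStr("hello")
```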
It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the option cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. 
Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. 
The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and\n ``x<>y`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. 
Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
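[Editorial illustration of the ``__eq__()``/``__hash__()`` contract described above: objects that compare equal must hash equal, so the hash mixes exactly the components used in comparison. The ``Point`` class is invented for illustration:]

```python
# Sketch of a hashable value class: __eq__(), __ne__() and __hash__()
# are defined together and consistently, as the text recommends.
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        return (isinstance(other, Point)
                and (self.x, self.y) == (other.x, other.y))
    def __ne__(self, other):
        # Defined alongside __eq__(), since the truth of x==y does
        # not otherwise imply that x!=y is false.
        return not self == other
    def __hash__(self):
        # Mix together the components that take part in comparison.
        return hash((self.x, self.y))

d = {Point(1, 2): "home"}
```

Because equal points hash equally, a freshly constructed ``Point(1, 2)`` finds the entry stored under another ``Point(1, 2)``.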
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that an object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. 
If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). 
*name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. 
See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in the\nclass dictionary of another new-style class, known as the *owner*\nclass. In the examples below, "the attribute" refers to the attribute\nwhose name is the key of the property in the owner class\'\n``__dict__``. Descriptors can only be implemented as new-style\nclasses themselves.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. 
For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, A)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. 
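[Editorial illustration of a data descriptor as just described, defining both ``__get__()`` and ``__set__()``; the ``Positive``/``Account`` names are invented:]

```python
# Sketch of a data descriptor: because it defines __set__(), it takes
# precedence over the instance dictionary, as described above.
class Positive(object):
    def __init__(self, name):
        self.name = name  # key used in the instance __dict__
    def __get__(self, instance, owner):
        if instance is None:
            return self   # class attribute access returns the descriptor
        try:
            return instance.__dict__[self.name]
        except KeyError:
            raise AttributeError(self.name)
    def __set__(self, instance, value):
        if value <= 0:
            raise ValueError("must be positive")
        instance.__dict__[self.name] = value

class Account(object):
    balance = Positive("balance")  # descriptor lives in the class dict

a = Account()
a.balance = 10    # routed through Positive.__set__()
```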
Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. 
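[Editorial sketch of the *__slots__* behavior described above; the ``SlottedPoint`` name is invented:]

```python
# Sketch of __slots__: instances get no per-instance __dict__, and
# only the declared names can be assigned, as described above.
class SlottedPoint(object):
    __slots__ = ('x', 'y')

p = SlottedPoint()
p.x, p.y = 3, 4
has_dict = hasattr(p, '__dict__')   # False: no instance dictionary
```

Assigning a name not listed in ``__slots__`` (e.g. ``p.z = 1``) raises ``AttributeError``.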
If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. 
As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. 
This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. 
Some ideas that have\nbeen explored include logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. 
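[Editorial sketch of mapping emulation along the lines described above, implementing only a few of the recommended methods; the ``LowerDict`` name is invented:]

```python
# Sketch of a mapping-emulating class: __getitem__(), __setitem__(),
# __len__(), __contains__() and __iter__() give dict-like behavior.
class LowerDict(object):
    """A mapping that normalizes string keys to lower case."""
    def __init__(self):
        self._data = {}
    def __setitem__(self, key, value):
        self._data[key.lower()] = value
    def __getitem__(self, key):
        try:
            return self._data[key.lower()]
        except KeyError:
            # Missing keys raise KeyError, as the text specifies.
            raise KeyError(key)
    def __len__(self):
        return len(self._data)
    def __contains__(self, key):
        # Membership considers the keys, as recommended for mappings.
        return key.lower() in self._data
    def __iter__(self):
        # For mappings, iteration is over the keys.
        return iter(self._data)

m = LowerDict()
m["Content-Type"] = "text/plain"
```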
The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. 
If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. 
It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequences methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. 
The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrates how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. 
Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. 
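The subclass rule in the note above can be seen in a small sketch; the class names ``Base`` and ``Tracked`` are invented for illustration:

```python
class Base(object):
    def __init__(self, v):
        self.v = v

    def __sub__(self, other):
        return ('sub', self.v - other.v)

class Tracked(Base):
    # Tracked is a proper subclass of Base and provides the reflected
    # method, so for Base() - Tracked() this runs *before* Base.__sub__.
    def __rsub__(self, other):
        return ('rsub', other.v - self.v)

print(Base(10) - Tracked(3))   # ('rsub', 7)
print(Base(10) - Base(3))      # ('sub', 7)
```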
This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. 
For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operand is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. 
The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... 
pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "<stdin>", line 1, in <module>\n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "<stdin>", line 1, in <module>\n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... 
return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', + 'specialnames': u'\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators. For instance, if a class defines\na method named ``__getitem__()``, and ``x`` is an instance of this\nclass, then ``x[i]`` is roughly equivalent to ``x.__getitem__(i)`` for\nold-style classes and ``type(x).__getitem__(x, i)`` for new-style\nclasses. 
Except where mentioned, attempts to execute an operation\nraise an exception when no appropriate method is defined (typically\n``AttributeError`` or ``TypeError``).\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled. For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense. (One example of this is the\n``NodeList`` interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n Called to create a new instance of class *cls*. ``__new__()`` is a\n static method (special-cased so you need not declare it as such)\n that takes the class of which an instance was requested as its\n first argument. The remaining arguments are those passed to the\n object constructor expression (the call to the class). The return\n value of ``__new__()`` should be the new object instance (usually\n an instance of *cls*).\n\n Typical implementations create a new instance of the class by\n invoking the superclass\'s ``__new__()`` method using\n ``super(currentclass, cls).__new__(cls[, ...])`` with appropriate\n arguments and then modifying the newly-created instance as\n necessary before returning it.\n\n If ``__new__()`` returns an instance of *cls*, then the new\n instance\'s ``__init__()`` method will be invoked like\n ``__init__(self[, ...])``, where *self* is the new instance and the\n remaining arguments are the same as were passed to ``__new__()``.\n\n If ``__new__()`` does not return an instance of *cls*, then the new\n instance\'s ``__init__()`` method will not be invoked.\n\n ``__new__()`` is intended mainly to allow subclasses of immutable\n types (like int, str, or tuple) to customize instance creation. 
It\n is also commonly overridden in custom metaclasses in order to\n customize class creation.\n\nobject.__init__(self[, ...])\n\n Called when the instance is created. The arguments are those\n passed to the class constructor expression. If a base class has an\n ``__init__()`` method, the derived class\'s ``__init__()`` method,\n if any, must explicitly call it to ensure proper initialization of\n the base class part of the instance; for example:\n ``BaseClass.__init__(self, [args...])``. As a special constraint\n on constructors, no value may be returned; doing so will cause a\n ``TypeError`` to be raised at runtime.\n\nobject.__del__(self)\n\n Called when the instance is about to be destroyed. This is also\n called a destructor. If a base class has a ``__del__()`` method,\n the derived class\'s ``__del__()`` method, if any, must explicitly\n call it to ensure proper deletion of the base class part of the\n instance. Note that it is possible (though not recommended!) for\n the ``__del__()`` method to postpone destruction of the instance by\n creating a new reference to it. It may then be called at a later\n time when this new reference is deleted. It is not guaranteed that\n ``__del__()`` methods are called for objects that still exist when\n the interpreter exits.\n\n Note: ``del x`` doesn\'t directly call ``x.__del__()`` --- the former\n decrements the reference count for ``x`` by one, and the latter\n is only called when ``x``\'s reference count reaches zero. 
Some\n common situations that may prevent the reference count of an\n object from going to zero include: circular references between\n objects (e.g., a doubly-linked list or a tree data structure with\n parent and child pointers); a reference to the object on the\n stack frame of a function that caught an exception (the traceback\n stored in ``sys.exc_traceback`` keeps the stack frame alive); or\n a reference to the object on the stack frame that raised an\n unhandled exception in interactive mode (the traceback stored in\n ``sys.last_traceback`` keeps the stack frame alive). The first\n situation can only be remedied by explicitly breaking the cycles;\n the latter two situations can be resolved by storing ``None`` in\n ``sys.exc_traceback`` or ``sys.last_traceback``. Circular\n references which are garbage are detected when the optional cycle\n detector is enabled (it\'s on by default), but can only be cleaned\n up if there are no Python-level ``__del__()`` methods involved.\n Refer to the documentation for the ``gc`` module for more\n information about how ``__del__()`` methods are handled by the\n cycle detector, particularly the description of the ``garbage``\n value.\n\n Warning: Due to the precarious circumstances under which ``__del__()``\n methods are invoked, exceptions that occur during their execution\n are ignored, and a warning is printed to ``sys.stderr`` instead.\n Also, when ``__del__()`` is invoked in response to a module being\n deleted (e.g., when execution of the program is done), other\n globals referenced by the ``__del__()`` method may already have\n been deleted or in the process of being torn down (e.g. the\n import machinery shutting down). For this reason, ``__del__()``\n methods should do the absolute minimum needed to maintain\n external invariants. 
Starting with version 1.5, Python\n guarantees that globals whose name begins with a single\n underscore are deleted from their module before other globals are\n deleted; if no other references to such globals exist, this may\n help in assuring that imported modules are still available at the\n time when the ``__del__()`` method is called.\n\nobject.__repr__(self)\n\n Called by the ``repr()`` built-in function and by string\n conversions (reverse quotes) to compute the "official" string\n representation of an object. If at all possible, this should look\n like a valid Python expression that could be used to recreate an\n object with the same value (given an appropriate environment). If\n this is not possible, a string of the form ``<...some useful\n description...>`` should be returned. The return value must be a\n string object. If a class defines ``__repr__()`` but not\n ``__str__()``, then ``__repr__()`` is also used when an "informal"\n string representation of instances of that class is required.\n\n This is typically used for debugging, so it is important that the\n representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n Called by the ``str()`` built-in function and by the ``print``\n statement to compute the "informal" string representation of an\n object. This differs from ``__repr__()`` in that it does not have\n to be a valid Python expression: a more convenient or concise\n representation may be used instead. The return value must be a\n string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n New in version 2.1.\n\n These are the so-called "rich comparison" methods, and are called\n for comparison operators in preference to ``__cmp__()`` below. 
The\n correspondence between operator symbols and method names is as\n follows: ``x<y`` calls ``x.__lt__(y)``, ``x<=y`` calls\n ``x.__le__(y)``, ``x==y`` calls ``x.__eq__(y)``, ``x!=y`` and\n ``x<>y`` call ``x.__ne__(y)``, ``x>y`` calls ``x.__gt__(y)``, and\n ``x>=y`` calls ``x.__ge__(y)``.\n\n A rich comparison method may return the singleton\n ``NotImplemented`` if it does not implement the operation for a\n given pair of arguments. By convention, ``False`` and ``True`` are\n returned for a successful comparison. However, these methods can\n return any value, so if the comparison operator is used in a\n Boolean context (e.g., in the condition of an ``if`` statement),\n Python will call ``bool()`` on the value to determine if the result\n is true or false.\n\n There are no implied relationships among the comparison operators.\n The truth of ``x==y`` does not imply that ``x!=y`` is false.\n Accordingly, when defining ``__eq__()``, one should also define\n ``__ne__()`` so that the operators will behave as expected. See\n the paragraph on ``__hash__()`` for some important notes on\n creating *hashable* objects which support custom comparison\n operations and are usable as dictionary keys.\n\n There are no swapped-argument versions of these methods (to be used\n when the left argument does not support the operation but the right\n argument does); rather, ``__lt__()`` and ``__gt__()`` are each\n other\'s reflection, ``__le__()`` and ``__ge__()`` are each other\'s\n reflection, and ``__eq__()`` and ``__ne__()`` are their own\n reflection.\n\n Arguments to rich comparison methods are never coerced.\n\n To automatically generate ordering operations from a single root\n operation, see ``functools.total_ordering()``.\n\nobject.__cmp__(self, other)\n\n Called by comparison operations if rich comparison (see above) is\n not defined. 
Should return a negative integer if ``self < other``,\n zero if ``self == other``, a positive integer if ``self > other``.\n If no ``__cmp__()``, ``__eq__()`` or ``__ne__()`` operation is\n defined, class instances are compared by object identity\n ("address"). See also the description of ``__hash__()`` for some\n important notes on creating *hashable* objects which support custom\n comparison operations and are usable as dictionary keys. (Note: the\n restriction that exceptions are not propagated by ``__cmp__()`` has\n been removed since Python 1.5.)\n\nobject.__rcmp__(self, other)\n\n Changed in version 2.1: No longer supported.\n\nobject.__hash__(self)\n\n Called by built-in function ``hash()`` and for operations on\n members of hashed collections including ``set``, ``frozenset``, and\n ``dict``. ``__hash__()`` should return an integer. The only\n required property is that objects which compare equal have the same\n hash value; it is advised to somehow mix together (e.g. using\n exclusive or) the hash values for the components of the object that\n also play a part in comparison of objects.\n\n If a class does not define a ``__cmp__()`` or ``__eq__()`` method\n it should not define a ``__hash__()`` operation either; if it\n defines ``__cmp__()`` or ``__eq__()`` but not ``__hash__()``, its\n instances will not be usable in hashed collections. 
If a class\n defines mutable objects and implements a ``__cmp__()`` or\n ``__eq__()`` method, it should not implement ``__hash__()``, since\n hashable collection implementations require that an object\'s hash\n value is immutable (if the object\'s hash value changes, it will be\n in the wrong hash bucket).\n\n User-defined classes have ``__cmp__()`` and ``__hash__()`` methods\n by default; with them, all objects compare unequal (except with\n themselves) and ``x.__hash__()`` returns ``id(x)``.\n\n Classes which inherit a ``__hash__()`` method from a parent class\n but change the meaning of ``__cmp__()`` or ``__eq__()`` such that\n the hash value returned is no longer appropriate (e.g. by switching\n to a value-based concept of equality instead of the default\n identity based equality) can explicitly flag themselves as being\n unhashable by setting ``__hash__ = None`` in the class definition.\n Doing so means that not only will instances of the class raise an\n appropriate ``TypeError`` when a program attempts to retrieve their\n hash value, but they will also be correctly identified as\n unhashable when checking ``isinstance(obj, collections.Hashable)``\n (unlike classes which define their own ``__hash__()`` to explicitly\n raise ``TypeError``).\n\n Changed in version 2.5: ``__hash__()`` may now also return a long\n integer object; the 32-bit integer is then derived from the hash of\n that object.\n\n Changed in version 2.6: ``__hash__`` may now be set to ``None`` to\n explicitly flag instances of a class as unhashable.\n\nobject.__nonzero__(self)\n\n Called to implement truth value testing and the built-in operation\n ``bool()``; should return ``False`` or ``True``, or their integer\n equivalents ``0`` or ``1``. When this method is not defined,\n ``__len__()`` is called, if it is defined, and the object is\n considered true if its result is nonzero. 
If a class defines\n neither ``__len__()`` nor ``__nonzero__()``, all its instances are\n considered true.\n\nobject.__unicode__(self)\n\n Called to implement ``unicode()`` built-in; should return a Unicode\n object. When this method is not defined, string conversion is\n attempted, and the result of string conversion is converted to\n Unicode using the system default encoding.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of ``x.name``)\nfor class instances.\n\nobject.__getattr__(self, name)\n\n Called when an attribute lookup has not found the attribute in the\n usual places (i.e. it is not an instance attribute nor is it found\n in the class tree for ``self``). ``name`` is the attribute name.\n This method should return the (computed) attribute value or raise\n an ``AttributeError`` exception.\n\n Note that if the attribute is found through the normal mechanism,\n ``__getattr__()`` is not called. (This is an intentional asymmetry\n between ``__getattr__()`` and ``__setattr__()``.) This is done both\n for efficiency reasons and because otherwise ``__getattr__()``\n would have no way to access other attributes of the instance. Note\n that at least for instance variables, you can fake total control by\n not inserting any values in the instance attribute dictionary (but\n instead inserting them in another object). See the\n ``__getattribute__()`` method below for a way to actually get total\n control in new-style classes.\n\nobject.__setattr__(self, name, value)\n\n Called when an attribute assignment is attempted. This is called\n instead of the normal mechanism (i.e. store the value in the\n instance dictionary). 
*name* is the attribute name, *value* is the\n value to be assigned to it.\n\n If ``__setattr__()`` wants to assign to an instance attribute, it\n should not simply execute ``self.name = value`` --- this would\n cause a recursive call to itself. Instead, it should insert the\n value in the dictionary of instance attributes, e.g.,\n ``self.__dict__[name] = value``. For new-style classes, rather\n than accessing the instance dictionary, it should call the base\n class method with the same name, for example,\n ``object.__setattr__(self, name, value)``.\n\nobject.__delattr__(self, name)\n\n Like ``__setattr__()`` but for attribute deletion instead of\n assignment. This should only be implemented if ``del obj.name`` is\n meaningful for the object.\n\n\nMore attribute access for new-style classes\n-------------------------------------------\n\nThe following methods only apply to new-style classes.\n\nobject.__getattribute__(self, name)\n\n Called unconditionally to implement attribute accesses for\n instances of the class. If the class also defines\n ``__getattr__()``, the latter will not be called unless\n ``__getattribute__()`` either calls it explicitly or raises an\n ``AttributeError``. This method should return the (computed)\n attribute value or raise an ``AttributeError`` exception. In order\n to avoid infinite recursion in this method, its implementation\n should always call the base class method with the same name to\n access any attributes it needs, for example,\n ``object.__getattribute__(self, name)``.\n\n Note: This method may still be bypassed when looking up special methods\n as the result of implicit invocation via language syntax or\n built-in functions. 
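The ``__setattr__()`` recursion-avoidance pattern described above can be sketched as follows (the ``Logged`` class and its ``changes`` list are invented for illustration; the code runs on Python 2 new-style classes and Python 3):

```python
# Sketch: __setattr__() delegates to object.__setattr__() instead of
# executing self.name = value, which would call itself recursively.
class Logged(object):
    def __init__(self):
        # Bypass our own __setattr__() while setting up the log itself.
        object.__setattr__(self, 'changes', [])

    def __setattr__(self, name, value):
        self.changes.append(name)              # reading is safe here
        object.__setattr__(self, name, value)  # the non-recursive store

obj = Logged()
obj.color = 'red'
obj.size = 3
assert obj.changes == ['color', 'size']
assert obj.color == 'red'
```

Attribute *reads* do not go through ``__setattr__()``, so appending to ``self.changes`` inside the hook is safe.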
See *Special method lookup for new-style\n classes*.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents). In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' ``__dict__``.\n\nobject.__get__(self, instance, owner)\n\n Called to get the attribute of the owner class (class attribute\n access) or of an instance of that class (instance attribute\n access). *owner* is always the owner class, while *instance* is the\n instance that the attribute was accessed through, or ``None`` when\n the attribute is accessed through the *owner*. This method should\n return the (computed) attribute value or raise an\n ``AttributeError`` exception.\n\nobject.__set__(self, instance, value)\n\n Called to set the attribute on an instance *instance* of the owner\n class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n Called to delete the attribute on an instance *instance* of the\n owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol: ``__get__()``, ``__set__()``, and\n``__delete__()``. If any of those methods are defined for an object,\nit is said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. 
For instance, ``a.x`` has a\nlookup chain starting with ``a.__dict__[\'x\']``, then\n``type(a).__dict__[\'x\']``, and continuing through the base classes of\n``type(a)`` excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead. Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called. Note that descriptors are only invoked for new\nstyle objects or classes (ones that subclass ``object()`` or\n``type()``).\n\nThe starting point for descriptor invocation is a binding, ``a.x``.\nHow the arguments are assembled depends on ``a``:\n\nDirect Call\n The simplest and least common call is when user code directly\n invokes a descriptor method: ``x.__get__(a)``.\n\nInstance Binding\n If binding to a new-style object instance, ``a.x`` is transformed\n into the call: ``type(a).__dict__[\'x\'].__get__(a, type(a))``.\n\nClass Binding\n If binding to a new-style class, ``A.x`` is transformed into the\n call: ``A.__dict__[\'x\'].__get__(None, A)``.\n\nSuper Binding\n If ``a`` is an instance of ``super``, then the binding ``super(B,\n obj).m()`` searches ``obj.__class__.__mro__`` for the base class\n ``A`` immediately preceding ``B`` and then invokes the descriptor\n with the call: ``A.__dict__[\'m\'].__get__(obj, obj.__class__)``.\n\nFor instance bindings, the precedence of descriptor invocation depends\non which descriptor methods are defined. A descriptor can define\nany combination of ``__get__()``, ``__set__()`` and ``__delete__()``.\nIf it does not define ``__get__()``, then accessing the attribute will\nreturn the descriptor object itself unless there is a value in the\nobject\'s instance dictionary. If the descriptor defines ``__set__()``\nand/or ``__delete__()``, it is a data descriptor; if it defines\nneither, it is a non-data descriptor. 
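A data descriptor (one defining both ``__get__()`` and ``__set__()``) can be sketched as follows; ``Positive`` and ``Account`` are invented names, and the code runs on both Python 2 new-style classes and Python 3:

```python
# Sketch of a data descriptor: because it defines __set__(), it takes
# precedence over a value of the same name in the instance dictionary.
class Positive(object):
    def __init__(self, name):
        self.name = name                 # key used in the instance __dict__

    def __get__(self, instance, owner):
        if instance is None:
            return self                  # accessed on the class itself
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if value <= 0:
            raise ValueError("must be positive")
        instance.__dict__[self.name] = value

class Account(object):
    balance = Positive('balance')        # lives in Account.__dict__

a = Account()
a.balance = 10                           # routed through __set__()
assert a.balance == 10                   # routed through __get__()
try:
    a.balance = -1                       # rejected by the descriptor
except ValueError:
    pass
assert a.balance == 10
```

Even after ``'balance'`` appears in ``a.__dict__``, assignment still goes through ``__set__()``, which is exactly the data-descriptor precedence described above.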
Normally, data descriptors\ndefine both ``__get__()`` and ``__set__()``, while non-data\ndescriptors have just the ``__get__()`` method. Data descriptors with\n``__set__()`` and ``__get__()`` defined always override a redefinition\nin an instance dictionary. In contrast, non-data descriptors can be\noverridden by instances.\n\nPython methods (including ``staticmethod()`` and ``classmethod()``)\nare implemented as non-data descriptors. Accordingly, instances can\nredefine and override methods. This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe ``property()`` function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of both old and new-style classes have a\ndictionary for attribute storage. This wastes space for objects\nhaving very few instance variables. The space consumption can become\nacute when creating large numbers of instances.\n\nThe default can be overridden by defining *__slots__* in a new-style\nclass definition. The *__slots__* declaration takes a sequence of\ninstance variables and reserves just enough space in each instance to\nhold a value for each variable. Space is saved because *__dict__* is\nnot created for each instance.\n\n__slots__\n\n This class variable can be assigned a string, iterable, or sequence\n of strings with variable names used by instances. 
If defined in a\n new-style class, *__slots__* reserves space for the declared\n variables and prevents the automatic creation of *__dict__* and\n *__weakref__* for each instance.\n\n New in version 2.2.\n\nNotes on using *__slots__*\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n attribute of that class will always be accessible, so a *__slots__*\n definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n variables not listed in the *__slots__* definition. Attempts to\n assign to an unlisted variable name raises ``AttributeError``. If\n dynamic assignment of new variables is desired, then add\n ``\'__dict__\'`` to the sequence of strings in the *__slots__*\n declaration.\n\n Changed in version 2.3: Previously, adding ``\'__dict__\'`` to the\n *__slots__* declaration would not enable the assignment of new\n attributes not specifically listed in the sequence of instance\n variable names.\n\n* Without a *__weakref__* variable for each instance, classes defining\n *__slots__* do not support weak references to its instances. If weak\n reference support is needed, then add ``\'__weakref__\'`` to the\n sequence of strings in the *__slots__* declaration.\n\n Changed in version 2.3: Previously, adding ``\'__weakref__\'`` to the\n *__slots__* declaration would not enable support for weak\n references.\n\n* *__slots__* are implemented at the class level by creating\n descriptors (*Implementing Descriptors*) for each variable name. As\n a result, class attributes cannot be used to set default values for\n instance variables defined by *__slots__*; otherwise, the class\n attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n where it is defined. 
As a result, subclasses will have a *__dict__*\n unless they also define *__slots__* (which must only contain names\n of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the instance\n variable defined by the base class slot is inaccessible (except by\n retrieving its descriptor directly from the base class). This\n renders the meaning of the program undefined. In the future, a\n check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n "variable-length" built-in types such as ``long``, ``str`` and\n ``tuple``.\n\n* Any non-string iterable may be assigned to *__slots__*. Mappings may\n also be used; however, in the future, special meaning may be\n assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n *__slots__*.\n\n Changed in version 2.6: Previously, *__class__* assignment raised an\n error if either new or old class had *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, new-style classes are constructed using ``type()``. A\nclass definition is read into a separate namespace and the value of\nclass name is bound to the result of ``type(name, bases, dict)``.\n\nWhen the class definition is read, if *__metaclass__* is defined then\nthe callable assigned to it will be called instead of ``type()``. This\nallows classes or functions to be written which monitor or alter the\nclass creation process:\n\n* Modifying the class dictionary prior to the class being created.\n\n* Returning an instance of another class -- essentially performing the\n role of a factory function.\n\nThese steps will have to be performed in the metaclass\'s ``__new__()``\nmethod -- ``type.__new__()`` can then be called from this method to\ncreate a class with different properties. 
This example adds a new\nelement to the class dictionary before creating the class:\n\n class metacls(type):\n def __new__(mcs, name, bases, dict):\n dict[\'foo\'] = \'metacls was here\'\n return type.__new__(mcs, name, bases, dict)\n\nYou can of course also override other class methods (or add new\nmethods); for example defining a custom ``__call__()`` method in the\nmetaclass allows custom behavior when the class is called, e.g. not\nalways creating a new instance.\n\n__metaclass__\n\n This variable can be any callable accepting arguments for ``name``,\n ``bases``, and ``dict``. Upon class creation, the callable is used\n instead of the built-in ``type()``.\n\n New in version 2.2.\n\nThe appropriate metaclass is determined by the following precedence\nrules:\n\n* If ``dict[\'__metaclass__\']`` exists, it is used.\n\n* Otherwise, if there is at least one base class, its metaclass is\n used (this looks for a *__class__* attribute first and if not found,\n uses its type).\n\n* Otherwise, if a global variable named __metaclass__ exists, it is\n used.\n\n* Otherwise, the old-style, classic metaclass (types.ClassType) is\n used.\n\nThe potential uses for metaclasses are boundless. 
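The ``metacls`` example above can be exercised by calling the metaclass directly with ``(name, bases, dict)`` arguments, which is equivalent to what the *__metaclass__* hook arranges at class-definition time (the definition is repeated here so this sketch is self-contained; it runs on Python 2 and 3):

```python
# Sketch: invoke a metaclass directly instead of via __metaclass__.
class metacls(type):
    def __new__(mcs, name, bases, dict):
        dict['foo'] = 'metacls was here'
        return type.__new__(mcs, name, bases, dict)

# Equivalent to defining "class C(object): __metaclass__ = metacls".
C = metacls('C', (object,), {})
assert C.foo == 'metacls was here'   # injected by metacls.__new__()
assert type(C) is metacls            # the class's type is the metaclass
```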
Some ideas that have\nbeen explored include logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\n\nCustomizing instance and subclass checks\n========================================\n\nNew in version 2.6.\n\nThe following methods are used to override the default behavior of the\n``isinstance()`` and ``issubclass()`` built-in functions.\n\nIn particular, the metaclass ``abc.ABCMeta`` implements these methods\nin order to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n Return true if *instance* should be considered a (direct or\n indirect) instance of *class*. If defined, called to implement\n ``isinstance(instance, class)``.\n\nclass.__subclasscheck__(self, subclass)\n\n Return true if *subclass* should be considered a (direct or\n indirect) subclass of *class*. If defined, called to implement\n ``issubclass(subclass, class)``.\n\nNote that these methods are looked up on the type (metaclass) of a\nclass. 
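Because ``isinstance()`` looks ``__instancecheck__()`` up on the type of the class, the hook must be defined on a metaclass. A minimal sketch (``DuckMeta``, ``Duck`` and ``Mallard`` are invented names; the class is created by calling the metaclass directly so the same code runs on Python 2 and 3):

```python
# Sketch: __instancecheck__() lives on the metaclass, not the class.
class DuckMeta(type):
    def __instancecheck__(cls, instance):
        # Consider anything with a quack() method an instance.
        return hasattr(instance, 'quack')

Duck = DuckMeta('Duck', (object,), {})

class Mallard(object):               # unrelated class, but it quacks
    def quack(self):
        return 'quack'

assert isinstance(Mallard(), Duck)   # virtual instance via the hook
assert not isinstance(42, Duck)
```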
They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also:\n\n **PEP 3119** - Introducing Abstract Base Classes\n Includes the specification for customizing ``isinstance()`` and\n ``issubclass()`` behavior through ``__instancecheck__()`` and\n ``__subclasscheck__()``, with motivation for this functionality\n in the context of adding Abstract Base Classes (see the ``abc``\n module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n Called when the instance is "called" as a function; if this method\n is defined, ``x(arg1, arg2, ...)`` is a shorthand for\n ``x.__call__(arg1, arg2, ...)``.\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well. The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which ``0 <= k < N``\nwhere *N* is the length of the sequence, or slice objects, which\ndefine a range of items. (For backwards compatibility, the method\n``__getslice__()`` (see below) can also be defined to handle simple,\nbut not extended slices.) It is also recommended that mappings provide\nthe methods ``keys()``, ``values()``, ``items()``, ``has_key()``,\n``get()``, ``clear()``, ``setdefault()``, ``iterkeys()``,\n``itervalues()``, ``iteritems()``, ``pop()``, ``popitem()``,\n``copy()``, and ``update()`` behaving similar to those for Python\'s\nstandard dictionary objects. 
The ``UserDict`` module provides a\n``DictMixin`` class to help create those methods from a base set of\n``__getitem__()``, ``__setitem__()``, ``__delitem__()``, and\n``keys()``. Mutable sequences should provide methods ``append()``,\n``count()``, ``index()``, ``extend()``, ``insert()``, ``pop()``,\n``remove()``, ``reverse()`` and ``sort()``, like Python standard list\nobjects. Finally, sequence types should implement addition (meaning\nconcatenation) and multiplication (meaning repetition) by defining the\nmethods ``__add__()``, ``__radd__()``, ``__iadd__()``, ``__mul__()``,\n``__rmul__()`` and ``__imul__()`` described below; they should not\ndefine ``__coerce__()`` or other numerical operators. It is\nrecommended that both mappings and sequences implement the\n``__contains__()`` method to allow efficient use of the ``in``\noperator; for mappings, ``in`` should be equivalent of ``has_key()``;\nfor sequences, it should search through the values. It is further\nrecommended that both mappings and sequences implement the\n``__iter__()`` method to allow efficient iteration through the\ncontainer; for mappings, ``__iter__()`` should be the same as\n``iterkeys()``; for sequences, it should iterate through the values.\n\nobject.__len__(self)\n\n Called to implement the built-in function ``len()``. Should return\n the length of the object, an integer ``>=`` 0. Also, an object\n that doesn\'t define a ``__nonzero__()`` method and whose\n ``__len__()`` method returns zero is considered to be false in a\n Boolean context.\n\nobject.__getitem__(self, key)\n\n Called to implement evaluation of ``self[key]``. For sequence\n types, the accepted keys should be integers and slice objects.\n Note that the special interpretation of negative indexes (if the\n class wishes to emulate a sequence type) is up to the\n ``__getitem__()`` method. 
If *key* is of an inappropriate type,\n ``TypeError`` may be raised; if of a value outside the set of\n indexes for the sequence (after any special interpretation of\n negative values), ``IndexError`` should be raised. For mapping\n types, if *key* is missing (not in the container), ``KeyError``\n should be raised.\n\n Note: ``for`` loops expect that an ``IndexError`` will be raised for\n illegal indexes to allow proper detection of the end of the\n sequence.\n\nobject.__setitem__(self, key, value)\n\n Called to implement assignment to ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support changes to the values for keys, or if new keys\n can be added, or for sequences if elements can be replaced. The\n same exceptions should be raised for improper *key* values as for\n the ``__getitem__()`` method.\n\nobject.__delitem__(self, key)\n\n Called to implement deletion of ``self[key]``. Same note as for\n ``__getitem__()``. This should only be implemented for mappings if\n the objects support removal of keys, or for sequences if elements\n can be removed from the sequence. The same exceptions should be\n raised for improper *key* values as for the ``__getitem__()``\n method.\n\nobject.__iter__(self)\n\n This method is called when an iterator is required for a container.\n This method should return a new iterator object that can iterate\n over all the objects in the container. For mappings, it should\n iterate over the keys of the container, and should also be made\n available as the method ``iterkeys()``.\n\n Iterator objects also need to implement this method; they are\n required to return themselves. For more information on iterator\n objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n Called (if present) by the ``reversed()`` built-in to implement\n reverse iteration. 
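The fallbacks described in this section can be sketched with a class that defines only ``__len__()`` and ``__getitem__()``; iteration, membership tests and ``reversed()`` all work through the sequence protocol (``Squares`` is an invented name; the code runs on Python 2 and 3):

```python
# Sketch: the sequence protocol synthesizes iteration, 'in' and
# reversed() from __len__() and __getitem__() alone.
class Squares(object):
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, index):
        if not 0 <= index < self.n:
            raise IndexError(index)      # ends iteration cleanly
        return index * index

s = Squares(4)
assert list(s) == [0, 1, 4, 9]           # old iteration protocol
assert 9 in s                            # membership via __getitem__()
assert list(reversed(s)) == [9, 4, 1, 0] # reversed() fallback
assert len(s) == 4
```

Raising ``IndexError`` for out-of-range indexes is what lets ``for`` loops and the membership test detect the end of the sequence, as noted above.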
It should return a new iterator object that\n iterates over all the objects in the container in reverse order.\n\n If the ``__reversed__()`` method is not provided, the\n ``reversed()`` built-in will fall back to using the sequence\n protocol (``__len__()`` and ``__getitem__()``). Objects that\n support the sequence protocol should only provide\n ``__reversed__()`` if they can provide an implementation that is\n more efficient than the one provided by ``reversed()``.\n\n New in version 2.6.\n\nThe membership test operators (``in`` and ``not in``) are normally\nimplemented as an iteration through a sequence. However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n Called to implement membership test operators. Should return true\n if *item* is in *self*, false otherwise. For mapping objects, this\n should consider the keys of the mapping rather than the values or\n the key-item pairs.\n\n For objects that don\'t define ``__contains__()``, the membership\n test first tries iteration via ``__iter__()``, then the old\n sequence iteration protocol via ``__getitem__()``, see *this\n section in the language reference*.\n\n\nAdditional methods for emulation of sequence types\n==================================================\n\nThe following optional methods can be defined to further emulate\nsequence objects. Immutable sequence methods should at most only\ndefine ``__getslice__()``; mutable sequences might define all three\nmethods.\n\nobject.__getslice__(self, i, j)\n\n Deprecated since version 2.0: Support slice objects as parameters\n to the ``__getitem__()`` method. (However, built-in types in\n CPython currently still implement ``__getslice__()``. Therefore,\n you have to override it in derived classes when implementing\n slicing.)\n\n Called to implement evaluation of ``self[i:j]``. 
The returned\n object should be of the same type as *self*. Note that missing *i*\n or *j* in the slice expression are replaced by zero or\n ``sys.maxint``, respectively. If negative indexes are used in the\n slice, the length of the sequence is added to that index. If the\n instance does not implement the ``__len__()`` method, an\n ``AttributeError`` is raised. No guarantee is made that indexes\n adjusted this way are not still negative. Indexes which are\n greater than the length of the sequence are not modified. If no\n ``__getslice__()`` is found, a slice object is created instead, and\n passed to ``__getitem__()`` instead.\n\nobject.__setslice__(self, i, j, sequence)\n\n Called to implement assignment to ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``.\n\n This method is deprecated. If no ``__setslice__()`` is found, or\n for extended slicing of the form ``self[i:j:k]``, a slice object is\n created, and passed to ``__setitem__()``, instead of\n ``__setslice__()`` being called.\n\nobject.__delslice__(self, i, j)\n\n Called to implement deletion of ``self[i:j]``. Same notes for *i*\n and *j* as for ``__getslice__()``. This method is deprecated. If no\n ``__delslice__()`` is found, or for extended slicing of the form\n ``self[i:j:k]``, a slice object is created, and passed to\n ``__delitem__()``, instead of ``__delslice__()`` being called.\n\nNotice that these methods are only invoked when a single slice with a\nsingle colon is used, and the slice method is available. 
For slice\noperations involving extended slice notation, or in absence of the\nslice methods, ``__getitem__()``, ``__setitem__()`` or\n``__delitem__()`` is called with a slice object as argument.\n\nThe following example demonstrates how to make your program or module\ncompatible with earlier versions of Python (assuming that methods\n``__getitem__()``, ``__setitem__()`` and ``__delitem__()`` support\nslice objects as arguments):\n\n class MyClass:\n ...\n def __getitem__(self, index):\n ...\n def __setitem__(self, index, value):\n ...\n def __delitem__(self, index):\n ...\n\n if sys.version_info < (2, 0):\n # They won\'t be defined if version is at least 2.0 final\n\n def __getslice__(self, i, j):\n return self[max(0, i):max(0, j):]\n def __setslice__(self, i, j, seq):\n self[max(0, i):max(0, j):] = seq\n def __delslice__(self, i, j):\n del self[max(0, i):max(0, j):]\n ...\n\nNote the calls to ``max()``; these are necessary because of the\nhandling of negative indices before the ``__*slice__()`` methods are\ncalled. When negative indexes are used, the ``__*item__()`` methods\nreceive them as provided, but the ``__*slice__()`` methods get a\n"cooked" form of the index values. For each negative index value, the\nlength of the sequence is added to the index before calling the method\n(which may still result in a negative index); this is the customary\nhandling of negative indexes by the built-in sequence types, and the\n``__*item__()`` methods are expected to do this as well. However,\nsince they should already be doing that, negative indexes cannot be\npassed in; they must be constrained to the bounds of the sequence\nbefore being passed to the ``__*item__()`` methods. 
Calling ``max(0,\ni)`` conveniently returns the proper value.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``//``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``). For\n instance, to evaluate the expression ``x + y``, where *x* is an\n instance of a class that has an ``__add__()`` method,\n ``x.__add__(y)`` is called. The ``__divmod__()`` method should be\n the equivalent to using ``__floordiv__()`` and ``__mod__()``; it\n should not be related to ``__truediv__()`` (described below). Note\n that ``__pow__()`` should be defined to accept an optional third\n argument if the ternary version of the built-in ``pow()`` function\n is to be supported.\n\n If one of those methods does not support the operation with the\n supplied arguments, it should return ``NotImplemented``.\n\nobject.__div__(self, other)\nobject.__truediv__(self, other)\n\n The division operator (``/``) is implemented by these methods. The\n ``__truediv__()`` method is used when ``__future__.division`` is in\n effect, otherwise ``__div__()`` is used. 
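The ``NotImplemented`` convention for binary operators described above can be sketched as follows (``Meters`` is an invented class; the code runs on Python 2 and 3, and sidesteps the ``__div__``/``__truediv__`` split by using ``+``):

```python
# Sketch: __add__() returns NotImplemented for operands it does not
# understand, which gives the right operand's __radd__() a chance.
class Meters(object):
    def __init__(self, n):
        self.n = n

    def __add__(self, other):
        if isinstance(other, Meters):
            return Meters(self.n + other.n)
        return NotImplemented            # let the other operand try

    def __radd__(self, other):
        if isinstance(other, int):       # e.g. support int + Meters
            return Meters(other + self.n)
        return NotImplemented

assert (Meters(2) + Meters(3)).n == 5
assert (10 + Meters(3)).n == 13          # int.__add__ fails; __radd__ runs
```

In ``10 + Meters(3)``, ``int.__add__`` returns ``NotImplemented`` for the unknown right operand, so Python calls ``Meters.__radd__`` with the reflected arguments.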
If only one of these two\n methods is defined, the object will not support division in the\n alternate context; ``TypeError`` will be raised instead.\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rdiv__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n These methods are called to implement the binary arithmetic\n operations (``+``, ``-``, ``*``, ``/``, ``%``, ``divmod()``,\n ``pow()``, ``**``, ``<<``, ``>>``, ``&``, ``^``, ``|``) with\n reflected (swapped) operands. These functions are only called if\n the left operand does not support the corresponding operation and\n the operands are of different types. [2] For instance, to evaluate\n the expression ``x - y``, where *y* is an instance of a class that\n has an ``__rsub__()`` method, ``y.__rsub__(x)`` is called if\n ``x.__sub__(y)`` returns *NotImplemented*.\n\n Note that ternary ``pow()`` will not try calling ``__rpow__()``\n (the coercion rules would become too complicated).\n\n Note: If the right operand\'s type is a subclass of the left operand\'s\n type and that subclass provides the reflected method for the\n operation, this method will be called before the left operand\'s\n non-reflected method. 
This behavior allows subclasses to\n override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__idiv__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n These methods are called to implement the augmented arithmetic\n assignments (``+=``, ``-=``, ``*=``, ``/=``, ``//=``, ``%=``,\n ``**=``, ``<<=``, ``>>=``, ``&=``, ``^=``, ``|=``). These methods\n should attempt to do the operation in-place (modifying *self*) and\n return the result (which could be, but does not have to be,\n *self*). If a specific method is not defined, the augmented\n assignment falls back to the normal methods. For instance, to\n execute the statement ``x += y``, where *x* is an instance of a\n class that has an ``__iadd__()`` method, ``x.__iadd__(y)`` is\n called. If *x* is an instance of a class that does not define a\n ``__iadd__()`` method, ``x.__add__(y)`` and ``y.__radd__(x)`` are\n considered, as with the evaluation of ``x + y``.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n Called to implement the unary arithmetic operations (``-``, ``+``,\n ``abs()`` and ``~``).\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__long__(self)\nobject.__float__(self)\n\n Called to implement the built-in functions ``complex()``,\n ``int()``, ``long()``, and ``float()``. Should return a value of\n the appropriate type.\n\nobject.__oct__(self)\nobject.__hex__(self)\n\n Called to implement the built-in functions ``oct()`` and ``hex()``.\n Should return a string value.\n\nobject.__index__(self)\n\n Called to implement ``operator.index()``. Also called whenever\n Python needs an integer object (such as in slicing). 
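``__index__()`` can be sketched as follows (``Nibble`` is an invented class; the code runs on Python 2.5+ and 3):

```python
# Sketch: an object with __index__() can stand in wherever Python
# needs an integer, e.g. as a sequence index or a slice bound.
class Nibble(object):
    def __init__(self, value):
        self.value = value

    def __index__(self):
        return self.value & 0xF          # must return a plain integer

data = ['a', 'b', 'c', 'd']
assert data[Nibble(2)] == 'c'                    # used as an index
assert data[Nibble(1):Nibble(3)] == ['b', 'c']   # and in slicing
```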
Must return\n an integer (int or long).\n\n New in version 2.5.\n\nobject.__coerce__(self, other)\n\n Called to implement "mixed-mode" numeric arithmetic. Should either\n return a 2-tuple containing *self* and *other* converted to a\n common numeric type, or ``None`` if conversion is impossible. When\n the common type would be the type of ``other``, it is sufficient to\n return ``None``, since the interpreter will also ask the other\n object to attempt a coercion (but sometimes, if the implementation\n of the other type cannot be changed, it is useful to do the\n conversion to the other type here). A return value of\n ``NotImplemented`` is equivalent to returning ``None``.\n\n\nCoercion rules\n==============\n\nThis section used to document the rules for coercion. As the language\nhas evolved, the coercion rules have become hard to document\nprecisely; documenting what one version of one particular\nimplementation does is undesirable. Instead, here are some informal\nguidelines regarding coercion. In Python 3.0, coercion will not be\nsupported.\n\n* If the left operand of a % operator is a string or Unicode object,\n no coercion takes place and the string formatting operation is\n invoked instead.\n\n* It is no longer recommended to define a coercion operation. Mixed-\n mode operations on types that don\'t define coercion pass the\n original arguments to the operation.\n\n* New-style classes (those derived from ``object``) never invoke the\n ``__coerce__()`` method in response to a binary operator; the only\n time ``__coerce__()`` is invoked is when the built-in function\n ``coerce()`` is called.\n\n* For most intents and purposes, an operator that returns\n ``NotImplemented`` is treated the same as one that is not\n implemented at all.\n\n* Below, ``__op__()`` and ``__rop__()`` are used to signify the\n generic method names corresponding to an operator; ``__iop__()`` is\n used for the corresponding in-place operator. 
For example, for the\n operator \'``+``\', ``__add__()`` and ``__radd__()`` are used for the\n left and right variant of the binary operator, and ``__iadd__()``\n for the in-place variant.\n\n* For objects *x* and *y*, first ``x.__op__(y)`` is tried. If this is\n not implemented or returns ``NotImplemented``, ``y.__rop__(x)`` is\n tried. If this is also not implemented or returns\n ``NotImplemented``, a ``TypeError`` exception is raised. But see\n the following exception:\n\n* Exception to the previous item: if the left operand is an instance\n of a built-in type or a new-style class, and the right operand is an\n instance of a proper subclass of that type or class and overrides\n the base\'s ``__rop__()`` method, the right operand\'s ``__rop__()``\n method is tried *before* the left operand\'s ``__op__()`` method.\n\n This is done so that a subclass can completely override binary\n operators. Otherwise, the left operand\'s ``__op__()`` method would\n always accept the right operand: when an instance of a given class\n is expected, an instance of a subclass of that class is always\n acceptable.\n\n* When either operand type defines a coercion, this coercion is called\n before that type\'s ``__op__()`` or ``__rop__()`` method is called,\n but no sooner. If the coercion returns an object of a different\n type for the operand whose coercion is invoked, part of the process\n is redone using the new object.\n\n* When an in-place operator (like \'``+=``\') is used, if the left\n operand implements ``__iop__()``, it is invoked without any\n coercion. 
When the operation falls back to ``__op__()`` and/or\n ``__rop__()``, the normal coercion rules apply.\n\n* In ``x + y``, if *x* is a sequence that implements sequence\n concatenation, sequence concatenation is invoked.\n\n* In ``x * y``, if one operator is a sequence that implements sequence\n repetition, and the other is an integer (``int`` or ``long``),\n sequence repetition is invoked.\n\n* Rich comparisons (implemented by methods ``__eq__()`` and so on)\n never use coercion. Three-way comparison (implemented by\n ``__cmp__()``) does use coercion under the same conditions as other\n binary operations use it.\n\n* In the current implementation, the built-in numeric types ``int``,\n ``long``, ``float``, and ``complex`` do not use coercion. All these\n types implement a ``__coerce__()`` method, for use by the built-in\n ``coerce()`` function.\n\n Changed in version 2.7.\n\n\nWith Statement Context Managers\n===============================\n\nNew in version 2.5.\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a ``with`` statement. The context\nmanager handles the entry into, and the exit from, the desired runtime\ncontext for the execution of the block of code. Context managers are\nnormally invoked using the ``with`` statement (described in section\n*The with statement*), but can also be used by directly invoking their\nmethods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n Enter the runtime context related to this object. The ``with``\n statement will bind this method\'s return value to the target(s)\n specified in the ``as`` clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n Exit the runtime context related to this object. 
The parameters\n describe the exception that caused the context to be exited. If the\n context was exited without an exception, all three arguments will\n be ``None``.\n\n If an exception is supplied, and the method wishes to suppress the\n exception (i.e., prevent it from being propagated), it should\n return a true value. Otherwise, the exception will be processed\n normally upon exit from this method.\n\n Note that ``__exit__()`` methods should not reraise the passed-in\n exception; this is the caller\'s responsibility.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n\n\nSpecial method lookup for old-style classes\n===========================================\n\nFor old-style classes, special methods are always looked up in exactly\nthe same way as any other method or attribute. This is the case\nregardless of whether the method is being looked up explicitly as in\n``x.__getitem__(i)`` or implicitly as in ``x[i]``.\n\nThis behaviour means that special methods may exhibit different\nbehaviour for different instances of a single old-style class if the\nappropriate special attributes are set differently:\n\n >>> class C:\n ... pass\n ...\n >>> c1 = C()\n >>> c2 = C()\n >>> c1.__len__ = lambda: 5\n >>> c2.__len__ = lambda: 9\n >>> len(c1)\n 5\n >>> len(c2)\n 9\n\n\nSpecial method lookup for new-style classes\n===========================================\n\nFor new-style classes, implicit invocations of special methods are\nonly guaranteed to work correctly if defined on an object\'s type, not\nin the object\'s instance dictionary. That behaviour is the reason why\nthe following code raises an exception (unlike the equivalent example\nwith old-style classes):\n\n >>> class C(object):\n ... 
pass\n ...\n >>> c = C()\n >>> c.__len__ = lambda: 5\n >>> len(c)\n Traceback (most recent call last):\n File "<stdin>", line 1, in <module>\n TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as ``__hash__()`` and ``__repr__()`` that are implemented\nby all objects, including type objects. If the implicit lookup of\nthese methods used the conventional lookup process, they would fail\nwhen invoked on the type object itself:\n\n >>> 1 .__hash__() == hash(1)\n True\n >>> int.__hash__() == hash(int)\n Traceback (most recent call last):\n File "<stdin>", line 1, in <module>\n TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n >>> type(1).__hash__(1) == hash(1)\n True\n >>> type(int).__hash__(int) == hash(int)\n True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe ``__getattribute__()`` method even of the object\'s metaclass:\n\n >>> class Meta(type):\n ... def __getattribute__(*args):\n ... print "Metaclass getattribute invoked"\n ... return type.__getattribute__(*args)\n ...\n >>> class C(object):\n ... __metaclass__ = Meta\n ... def __len__(self):\n ... return 10\n ... def __getattribute__(*args):\n ... print "Class getattribute invoked"\n ... 
return object.__getattribute__(*args)\n ...\n >>> c = C()\n >>> c.__len__() # Explicit lookup via instance\n Class getattribute invoked\n 10\n >>> type(c).__len__(c) # Explicit lookup via type\n Metaclass getattribute invoked\n 10\n >>> len(c) # Implicit lookup\n 10\n\nBypassing the ``__getattribute__()`` machinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method *must* be set on the class object\nitself in order to be consistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type, under\n certain controlled conditions. It generally isn\'t a good idea\n though, since it can lead to some very strange behaviour if it is\n handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n reflected method (such as ``__add__()``) fails the operation is\n not supported, which is why the reflected method is not called.\n', 'string-conversions': u'\nString conversions\n******************\n\nA string conversion is an expression list enclosed in reverse (a.k.a.\nbackward) quotes:\n\n string_conversion ::= "\'" expression_list "\'"\n\nA string conversion evaluates the contained expression list and\nconverts the resulting object into a string according to rules\nspecific to its type.\n\nIf the object is a string, a number, ``None``, or a tuple, list or\ndictionary containing only objects whose type is one of these, the\nresulting string is a valid Python expression which can be passed to\nthe built-in function ``eval()`` to yield an expression with the same\nvalue (or an approximation, if floating point numbers are involved).\n\n(In particular, converting a string adds quotes around it and converts\n"funny" characters to escape sequences that are safe to print.)\n\nRecursive objects (for example, lists or dictionaries that contain a\nreference to themselves, 
directly or indirectly) use ``...`` to\nindicate a recursive reference, and the result cannot be passed to\n``eval()`` to get an equal value (``SyntaxError`` will be raised\ninstead).\n\nThe built-in function ``repr()`` performs exactly the same conversion\nin its argument as enclosing it in parentheses and reverse quotes\ndoes. The built-in function ``str()`` performs a similar but more\nuser-friendly conversion.\n', - 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. 
The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. 
This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there 
is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. 
The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. 
If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', - 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= <any source character except "\\" or newline or the quote>\n longstringchar ::= <any source character except "\\">\n escapeseq ::= "\\" <any ASCII character>\n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. 
The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. The two prefix characters may be\ncombined; in this case, ``\'u\'`` must appear before ``\'r\'``.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
+ 'string-methods': u'\nString Methods\n**************\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. 
Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. 
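As a quick illustration of ``capitalize()``, ``center()`` and ``count()`` described above (the results are the same in Python 2 and 3):

```python
# capitalize() uppercases the first character and lowercases the rest.
print('pYTHON'.capitalize())   # 'Python'
# center() pads on both sides with fillchar (default: a space).
print('py'.center(8, '-'))     # '---py---'
# count() counts non-overlapping occurrences.
print('banana'.count('an'))    # 2
```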
For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. 
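A short sketch of ``endswith()``, ``expandtabs()`` and ``find()`` as described above, including the recommended ``in`` operator for pure membership tests (behavior is identical in Python 2 and 3):

```python
# endswith() accepts a tuple of suffixes.
print('photo.png'.endswith(('.png', '.jpg')))  # True
# expandtabs() pads to the next multiple of the tab size.
print('a\tbc'.expandtabs(4))                   # 'a   bc'
# find() returns the lowest index, or -1 when the substring is absent.
print('Python'.find('th'))                     # 2
print('Python'.find('java'))                   # -1
# For a membership test, prefer the 'in' operator.
print('Py' in 'Python')                        # True
```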
Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string 
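The ``format()`` replacement fields and the ``is*()`` predicates above can be exercised directly (same results in Python 2.6+ and Python 3):

```python
# Replacement fields may be numbered or named.
print('The sum of 1 + 2 is {0}'.format(1 + 2))          # 'The sum of 1 + 2 is 3'
print('{name} turns {age}'.format(name='Ada', age=36))  # 'Ada turns 36'
# The is*() predicates require at least one character.
print('abc123'.isalnum())       # True
print(''.isalnum())             # False
print('Hello World'.istitle())  # True
```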
are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n iterable *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \'   spacious   \'.lstrip()\n \'spacious   \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. 
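A brief sketch of ``join()``, ``lstrip()``, ``partition()`` and ``replace()`` as described above (identical behavior in Python 2 and 3):

```python
# join() concatenates with the target string as the separator.
print('-'.join(['a', 'b', 'c']))            # 'a-b-c'
# lstrip() removes any leading run of the given characters.
print('www.example.com'.lstrip('cmowz.'))   # 'example.com'
# partition() always returns a 3-tuple.
print('key=value'.partition('='))           # ('key', '=', 'value')
print('no-separator'.partition('='))        # ('no-separator', '', '')
# replace() honors the optional count.
print('aaaa'.replace('a', 'b', 2))          # 'bbaa'
```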
If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. 
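The right-hand variants above mirror their left-hand counterparts; a quick comparison (same results in Python 2 and 3):

```python
# rfind() searches from the right; rsplit() splits from the right.
print('abcabc'.rfind('b'))       # 4 (the highest index)
print('a,b,c'.rsplit(',', 1))    # ['a,b', 'c']  -- rightmost split
print('a,b,c'.split(',', 1))     # ['a', 'b,c']  -- leftmost split
# rpartition() mirrors partition() from the end of the string.
print('a=b=c'.rpartition('='))   # ('a=b', '=', 'c')
```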
The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. 
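The two splitting algorithms described above differ in how they treat empty fields and edges; a compact sketch (identical in Python 2 and 3):

```python
# With an explicit separator, empty fields are preserved.
print('1,,2'.split(','))       # ['1', '', '2']
print('1<>2<>3'.split('<>'))   # ['1', '2', '3']  -- multi-character separator
print(''.split(','))           # ['']
# With sep=None, whitespace runs collapse and edges are trimmed.
print('  1  2  3  '.split())   # ['1', '2', '3']
print('   '.split())           # []
# splitlines() drops line breaks unless keepends is true.
print('a\nb\n'.splitlines())       # ['a', 'b']
print('a\nb\n'.splitlines(True))   # ['a\n', 'b\n']
```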
With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
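The apostrophe quirk of ``title()`` described above is easy to reproduce, alongside ``strip()`` and ``swapcase()`` (same results in Python 2 and 3):

```python
# title() treats any non-letter, including an apostrophe, as a word
# boundary -- exactly the quirk noted above.
print("they're bill's friends".title())   # "They'Re Bill'S Friends"
# strip() trims both ends; swapcase() inverts the case of letters.
print('   spacious   '.strip())           # 'spacious'
print('Hello World'.swapcase())           # 'hELLO wORLD'
```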
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n', + 'strings': u'\nString literals\n***************\n\nString literals are described by the following lexical definitions:\n\n stringliteral ::= [stringprefix](shortstring | longstring)\n stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"\n | "b" | "B" | "br" | "Br" | "bR" | "BR"\n shortstring ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n longstring ::= "\'\'\'" longstringitem* "\'\'\'"\n | \'"""\' longstringitem* \'"""\'\n shortstringitem ::= shortstringchar | escapeseq\n longstringitem ::= longstringchar | escapeseq\n shortstringchar ::= \n longstringchar ::= \n escapeseq ::= "\\" \n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the **stringprefix** and the rest of\nthe string literal. 
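``zfill()`` and the unicode-only predicates above can be sketched briefly; the ``u`` prefix is shown for Python 2 and is simply redundant in Python 3, where every ``str`` is unicode:

```python
# zfill() pads with zeros after any sign character.
print('42'.zfill(5))     # '00042'
print('-42'.zfill(5))    # '-0042'
# isnumeric()/isdecimal() on unicode text:
print(u'\u2155'.isnumeric())   # VULGAR FRACTION ONE FIFTH -> True
print(u'\u2155'.isdecimal())   # numeric but not decimal -> False
print(u'\u0660'.isdecimal())   # ARABIC-INDIC DIGIT ZERO -> True
```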
The source character set is defined by the\nencoding declaration; it is ASCII if no encoding declaration is given\nin the source file; see section *Encoding declarations*.\n\nIn plain English: String literals can be enclosed in matching single\nquotes (``\'``) or double quotes (``"``). They can also be enclosed in\nmatching groups of three single or double quotes (these are generally\nreferred to as *triple-quoted strings*). The backslash (``\\``)\ncharacter is used to escape characters that otherwise have a special\nmeaning, such as newline, backslash itself, or the quote character.\nString literals may optionally be prefixed with a letter ``\'r\'`` or\n``\'R\'``; such strings are called *raw strings* and use different rules\nfor interpreting backslash escape sequences. A prefix of ``\'u\'`` or\n``\'U\'`` makes the string a Unicode string. Unicode strings use the\nUnicode character set as defined by the Unicode Consortium and ISO\n10646. Some additional escape sequences, described below, are\navailable in Unicode strings. A prefix of ``\'b\'`` or ``\'B\'`` is\nignored in Python 2; it indicates that the literal should become a\nbytes literal in Python 3 (e.g. when code is automatically converted\nwith 2to3). A ``\'u\'`` or ``\'b\'`` prefix may be followed by an ``\'r\'``\nprefix.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string. (A "quote" is the character used to open the\nstring, i.e. either ``\'`` or ``"``.)\n\nUnless an ``\'r\'`` or ``\'R\'`` prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C. 
The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence | Meaning | Notes |\n+===================+===================================+=========+\n| ``\\newline`` | Ignored | |\n+-------------------+-----------------------------------+---------+\n| ``\\\\`` | Backslash (``\\``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\\'`` | Single quote (``\'``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\"`` | Double quote (``"``) | |\n+-------------------+-----------------------------------+---------+\n| ``\\a`` | ASCII Bell (BEL) | |\n+-------------------+-----------------------------------+---------+\n| ``\\b`` | ASCII Backspace (BS) | |\n+-------------------+-----------------------------------+---------+\n| ``\\f`` | ASCII Formfeed (FF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\n`` | ASCII Linefeed (LF) | |\n+-------------------+-----------------------------------+---------+\n| ``\\N{name}`` | Character named *name* in the | |\n| | Unicode database (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\r`` | ASCII Carriage Return (CR) | |\n+-------------------+-----------------------------------+---------+\n| ``\\t`` | ASCII Horizontal Tab (TAB) | |\n+-------------------+-----------------------------------+---------+\n| ``\\uxxxx`` | Character with 16-bit hex value | (1) |\n| | *xxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\Uxxxxxxxx`` | Character with 32-bit hex value | (2) |\n| | *xxxxxxxx* (Unicode only) | |\n+-------------------+-----------------------------------+---------+\n| ``\\v`` | ASCII Vertical Tab (VT) | |\n+-------------------+-----------------------------------+---------+\n| ``\\ooo`` | Character with octal value *ooo* | (3,5) 
|\n+-------------------+-----------------------------------+---------+\n| ``\\xhh`` | Character with hex value *hh* | (4,5) |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. Individual code units which form parts of a surrogate pair can be\n encoded using this escape sequence.\n\n2. Any Unicode character can be encoded this way, but characters\n outside the Basic Multilingual Plane (BMP) will be encoded using a\n surrogate pair if Python is compiled to use 16-bit code units (the\n default). Individual code units which form parts of a surrogate\n pair can be encoded using this escape sequence.\n\n3. As in Standard C, up to three octal digits are accepted.\n\n4. Unlike in Standard C, exactly two hex digits are required.\n\n5. In a string literal, hexadecimal and octal escapes denote the byte\n with the given value; it is not necessary that the byte encodes a\n character in the source character set. In a Unicode literal, these\n escapes denote a Unicode character with the given value.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*. (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.) It is also\nimportant to note that the escape sequences marked as "(Unicode only)"\nin the table above fall into the category of unrecognized escapes for\nnon-Unicode string literals.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is present, a character following a\nbackslash is included in the string without change, and *all\nbackslashes are left in the string*. For example, the string literal\n``r"\\n"`` consists of two characters: a backslash and a lowercase\n``\'n\'``. 
String quotes can be escaped with a backslash, but the\nbackslash remains in the string; for example, ``r"\\""`` is a valid\nstring literal consisting of two characters: a backslash and a double\nquote; ``r"\\"`` is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes). Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character). Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n\nWhen an ``\'r\'`` or ``\'R\'`` prefix is used in conjunction with a\n``\'u\'`` or ``\'U\'`` prefix, then the ``\\uXXXX`` and ``\\UXXXXXXXX``\nescape sequences are processed while *all other backslashes are left\nin the string*. For example, the string literal ``ur"\\u0062\\n"``\nconsists of three Unicode characters: \'LATIN SMALL LETTER B\', \'REVERSE\nSOLIDUS\', and \'LATIN SMALL LETTER N\'. Backslashes can be escaped with\na preceding backslash; however, both remain in the string. As a\nresult, ``\\uXXXX`` escape sequences are only recognized when there are\nan odd number of backslashes.\n', 'subscriptions': u'\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object of a sequence or mapping type.\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey. (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to a\nplain integer. If this value is negative, the length of the sequence\nis added to it (so that, e.g., ``x[-1]`` selects the last item of\n``x``.) 
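The escape-sequence and raw-string rules above can be checked in a few lines (byte-string behavior shown here is the same in Python 2 and 3):

```python
# Hex and octal escapes each denote one character.
print('\x41')        # 'A'
print('\101')        # 'A' again, via an octal escape
# An unrecognized escape keeps its backslash, so this is two characters.
print(len('\q'))     # 2
# Raw strings keep every backslash.
print(len(r'\n'))    # 2: a backslash and an 'n'
print(len(r'\"'))    # 2: a backslash and a double quote
```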
The resulting value must be a nonnegative integer less than\nthe number of items in the sequence, and the subscription selects the\nitem whose index is that value (counting from zero).\n\nA string\'s items are characters. A character is not a separate data\ntype but a string of exactly one character.\n', 'truth': u"\nTruth Value Testing\n*******************\n\nAny object can be tested for truth value, for use in an ``if`` or\n``while`` condition or as operand of the Boolean operations below. The\nfollowing values are considered false:\n\n* ``None``\n\n* ``False``\n\n* zero of any numeric type, for example, ``0``, ``0L``, ``0.0``,\n ``0j``.\n\n* any empty sequence, for example, ``''``, ``()``, ``[]``.\n\n* any empty mapping, for example, ``{}``.\n\n* instances of user-defined classes, if the class defines a\n ``__nonzero__()`` or ``__len__()`` method, when that method returns\n the integer zero or ``bool`` value ``False``. [1]\n\nAll other values are considered true --- so objects of many types are\nalways true.\n\nOperations and built-in functions that have a Boolean result always\nreturn ``0`` or ``False`` for false and ``1`` or ``True`` for true,\nunless otherwise stated. (Important exception: the Boolean operations\n``or`` and ``and`` always return one of their operands.)\n", 'try': u'\nThe ``try`` statement\n*********************\n\nThe ``try`` statement specifies exception handlers and/or cleanup code\nfor a group of statements:\n\n try_stmt ::= try1_stmt | try2_stmt\n try1_stmt ::= "try" ":" suite\n ("except" [expression [("as" | ",") target]] ":" suite)+\n ["else" ":" suite]\n ["finally" ":" suite]\n try2_stmt ::= "try" ":" suite\n "finally" ":" suite\n\nChanged in version 2.5: In previous versions of Python,\n``try``...``except``...``finally`` did not work. ``try``...``except``\nhad to be nested in ``try``...``finally``.\n\nThe ``except`` clause(s) specify one or more exception handlers. 
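The subscription and truth-testing rules above, sketched (same results in Python 2 and 3):

```python
seq = ['a', 'b', 'c']
print(seq[-1])            # negative index: len(seq) + (-1) == 2 -> 'c'

mapping = {(1, 2): 'pair'}
print(mapping[1, 2])      # the expression list forms a tuple key

# Truth testing: zeros and empty containers are false...
print(bool(0))            # False
print(bool(''))           # False
# ...and the Boolean operators return one of their operands, not a bool:
print(0 or 'fallback')    # 'fallback'
print('' and 'never')     # ''
```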
When\nno exception occurs in the ``try`` clause, no exception handler is\nexecuted. When an exception occurs in the ``try`` suite, a search for\nan exception handler is started. This search inspects the except\nclauses in turn until one is found that matches the exception. An\nexpression-less except clause, if present, must be last; it matches\nany exception. For an except clause with an expression, that\nexpression is evaluated, and the clause matches the exception if the\nresulting object is "compatible" with the exception. An object is\ncompatible with an exception if it is the class or a base class of the\nexception object, a tuple containing an item compatible with the\nexception, or, in the (deprecated) case of string exceptions, is the\nraised string itself (note that the object identities must match, i.e.\nit must be the same string object, not just a string with the same\nvalue).\n\nIf no except clause matches the exception, the search for an exception\nhandler continues in the surrounding code and on the invocation stack.\n[1]\n\nIf the evaluation of an expression in the header of an except clause\nraises an exception, the original search for a handler is canceled and\na search starts for the new exception in the surrounding code and on\nthe call stack (it is treated as if the entire ``try`` statement\nraised the exception).\n\nWhen a matching except clause is found, the exception is assigned to\nthe target specified in that except clause, if present, and the except\nclause\'s suite is executed. All except clauses must have an\nexecutable block. When the end of this block is reached, execution\ncontinues normally after the entire try statement. 
(This means that\nif two nested handlers exist for the same exception, and the exception\noccurs in the try clause of the inner handler, the outer handler will\nnot handle the exception.)\n\nBefore an except clause\'s suite is executed, details about the\nexception are assigned to three variables in the ``sys`` module:\n``sys.exc_type`` receives the object identifying the exception;\n``sys.exc_value`` receives the exception\'s parameter;\n``sys.exc_traceback`` receives a traceback object (see section *The\nstandard type hierarchy*) identifying the point in the program where\nthe exception occurred. These details are also available through the\n``sys.exc_info()`` function, which returns a tuple ``(exc_type,\nexc_value, exc_traceback)``. Use of the corresponding variables is\ndeprecated in favor of this function, since their use is unsafe in a\nthreaded program. As of Python 1.5, the variables are restored to\ntheir previous values (before the call) when returning from a function\nthat handled an exception.\n\nThe optional ``else`` clause is executed if and when control flows off\nthe end of the ``try`` clause. [2] Exceptions in the ``else`` clause\nare not handled by the preceding ``except`` clauses.\n\nIf ``finally`` is present, it specifies a \'cleanup\' handler. The\n``try`` clause is executed, including any ``except`` and ``else``\nclauses. If an exception occurs in any of the clauses and is not\nhandled, the exception is temporarily saved. The ``finally`` clause is\nexecuted. If there is a saved exception, it is re-raised at the end\nof the ``finally`` clause. If the ``finally`` clause raises another\nexception or executes a ``return`` or ``break`` statement, the saved\nexception is lost. 
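The matching rules above (a tuple of exception classes, and an ``else`` suite that runs only when no exception occurred) can be sketched as follows; ``to_int`` is a hypothetical helper, and the ``as`` form works in Python 2.6+ and Python 3:

```python
def to_int(text):
    try:
        value = int(text)
    except (ValueError, TypeError) as exc:  # a tuple matches either class
        return 'error: ' + type(exc).__name__
    else:                                   # runs only when no exception occurred
        return value

print(to_int('7'))     # 7
print(to_int('oops'))  # 'error: ValueError'
print(to_int(None))    # 'error: TypeError'
```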
The exception information is not available to the\nprogram during execution of the ``finally`` clause.\n\nWhen a ``return``, ``break`` or ``continue`` statement is executed in\nthe ``try`` suite of a ``try``...``finally`` statement, the\n``finally`` clause is also executed \'on the way out.\' A ``continue``\nstatement is illegal in the ``finally`` clause. (The reason is a\nproblem with the current implementation --- this restriction may be\nlifted in the future).\n\nAdditional information on exceptions can be found in section\n*Exceptions*, and information on using the ``raise`` statement to\ngenerate exceptions may be found in section *The raise statement*.\n', - 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. 
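Two consequences of the ``finally`` semantics described above, sketched with hypothetical helper functions: the clause runs "on the way out" of a ``return``, and a ``return`` inside ``finally`` discards the saved exception:

```python
events = []

def leaves_early():
    try:
        return 'from try'
    finally:
        events.append('finally ran')   # executed on the way out

def swallows():
    try:
        raise ValueError('doomed')
    finally:
        return 'from finally'          # the saved exception is lost

print(leaves_early())   # 'from try'
print(events)           # ['finally ran']
print(swallows())       # 'from finally' -- no ValueError escapes
```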
(The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. 
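The 2's-complement view of negative integers for shift and mask operations can be observed directly. This sketch uses Python 3, whose unified ``int`` behaves like the long integers described here: an unbounded range with the illusion of an infinite string of sign bits extending to the left.

```python
# -1 behaves as ...111111, so masking exposes the low 32 bits as all ones
# (one of the 4294967296 distinct 32-bit patterns mentioned above).
low32 = -1 & 0xFFFFFFFF      # 4294967295

# Right shift is arithmetic: the sign bits are preserved.
shifted = -8 >> 1            # -4

# In Python 3 there is no overflow into a separate long type;
# results past the old 2147483647 boundary simply stay ints.
big = 2147483647 + 1
```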
For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. 
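The Boolean behaviour described above (a subtype of integers, acting as 0 and 1 everywhere except string conversion) still holds in Python 3 and can be checked directly; ``checks`` is an illustrative name.

```python
checks = {
    "bool_is_int": isinstance(True, int),   # bool is a subtype of int
    "arithmetic": True + True,              # behaves like 1 + 1
    "indexing": ["no", "yes"][True],        # usable wherever 0/1 are
    "as_string": str(False),                # the one exception: "False"/"True"
}
```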
The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. 
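The sequence indexing and slicing rules above can be made concrete with a short sketch (valid in both Python 2 and 3; ``a`` is an illustrative name):

```python
a = list("python")    # ['p', 'y', 't', 'h', 'o', 'n'], len(a) == 6

basic = a[1:4]        # items with index k where 1 <= k < 4
step = a[0:6:2]       # extended slicing: indices 0, 2, 4

# A slice is a sequence of the same type, renumbered from 0:
renumbered = a[2:5][0]   # the first item of the slice, not a[0]
```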
The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. 
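The tuple-formation rules just described (comma, not parentheses, makes the tuple) can be sketched as:

```python
empty = ()            # an empty pair of parentheses forms the empty tuple
singleton = (1,)      # the affixed comma makes a one-item tuple
grouped = (1)         # parentheses alone are only grouping: still an int
pair = 1, 2           # a bare comma-separated list also forms a tuple
```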
Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. (Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. 
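The set behaviour described above, including the rule that numbers comparing equal (such as ``1`` and ``1.0``) cannot both be contained in a set, and the hashability of ``frozenset``, can be sketched as:

```python
s = set([1, 1.0, 2, "a", "a"])   # 1 and 1.0 compare equal: only one is kept
s.add(3)                         # sets are mutable
membership = 1.0 in s            # fast membership testing

frozen = frozenset({1, 2})       # immutable and hashable ...
nested = {frozen: "set-as-key"}  # ... so usable as a dictionary key
```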
The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
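The dictionary-key rules just stated (equal numbers index the same entry; mutable values such as lists are not acceptable keys) can be checked directly; ``d`` and ``unhashable_ok`` are illustrative names.

```python
d = {}
d[1] = "one"
d[1.0] = "still one"     # 1 and 1.0 compare equal: same entry, value replaced

try:
    d[[1, 2]] = "nope"   # a list key raises TypeError (no constant hash)
    unhashable_ok = True
except TypeError:
    unhashable_ok = False
```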
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. 
| |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also 
available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. 
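The attribute table above uses the Python 2 ``func_*`` spellings; the double-underscore aliases (``__doc__``, ``__name__``, ``__defaults__``, ``__closure__``, ``__dict__``) are the forms that survived into Python 3, which this illustrative sketch uses (``area`` and ``make_adder`` are made-up names):

```python
def area(w, h=1):
    """Rectangle area."""
    return w * h

area.units = "m^2"            # arbitrary function attributes live in __dict__

name = area.__name__          # func_name in Python 2
doc = area.__doc__            # func_doc in Python 2
defaults = area.__defaults__  # func_defaults in Python 2: (1,)

def make_adder(k):
    def add(n):
        return n + k          # k is a free variable of add
    return add

cells = make_adder(10).__closure__   # func_closure: one cell binding k
```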
In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. 
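The calling equivalences described here (``x.f(1)`` is ``C.f(x, 1)``; for a class method the "instance" inserted in front is the class itself) still hold in Python 3, which dropped unbound methods but kept bound ones. ``C`` and ``x`` are illustrative names; ``__func__``/``__self__`` are the Python 3 spellings of ``im_func``/``im_self``.

```python
class C(object):
    def f(self, n):
        return ("instance", self is x, n)

    @classmethod
    def g(cls, n):
        return ("class", cls is C, n)

x = C()
via_instance = x.f(1)        # equivalent to C.f(x, 1)
via_class = C.f(x, 1)

# For a class method, the class itself is inserted in front,
# so x.g(1) and C.g(1) invoke the same underlying function with cls == C.
cm1 = x.g(1)
cm2 = C.g(1)

bound = x.f                  # retrieval creates a bound method object
same = bound.__func__ is C.f and bound.__self__ is x
```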
Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. 
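The generator-function behaviour described above (calling returns an iterator; each advance runs to the next ``yield``; falling off the end raises ``StopIteration``) can be sketched as follows, using the Python 3 built-in ``next()`` where Python 2 would call the iterator's ``next()`` method:

```python
def countdown(n):
    while n > 0:
        yield n
        n -= 1
    # falling off the end signals StopIteration to the caller

it = countdown(2)            # calling the function runs none of the body yet
first = next(it)             # body runs until the first yield
second = next(it)
try:
    next(it)
    exhausted = False
except StopIteration:
    exhausted = True
```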
In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *list*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. 
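The two call protocols just described, calling a class (arguments forwarded to ``__init__()``) and calling an instance (shorthand for ``__call__()``), fit in a few lines; ``Adder`` is an illustrative name:

```python
class Adder(object):
    def __init__(self, k):
        self.k = k            # class-call arguments initialize the instance

    def __call__(self, n):    # makes instances themselves callable
        return n + self.k

add3 = Adder(3)               # calling the class builds a new instance
result = add3(4)              # shorthand for add3.__call__(4)
```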
A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. 
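The module-namespace equivalences stated above (``m.x`` is ``m.__dict__["x"]``, and attribute assignment updates the same dictionary) can be checked on any module, here the standard library's ``os``:

```python
import os

same = os.__dict__["sep"] is os.sep   # m.x  <=>  m.__dict__["x"]

os.example_flag = True                # m.x = v  <=>  m.__dict__["x"] = v
stored = os.__dict__["example_flag"]

modname = os.__name__                 # predefined writable attribute
```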
Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. 
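The 'diamond' case where the C3 method resolution order differs from classic depth-first, left-to-right search can be seen via ``__mro__`` (illustrative classes ``A`` through ``D``):

```python
class A(object): pass
class B(A): pass
class C(A): pass
class D(B, C): pass          # two inheritance paths lead back to A

mro_names = [cls.__name__ for cls in D.__mro__]
# C3 places C before the common ancestor A; a classic depth-first
# search would have reached A (via B) before ever considering C.
```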
If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. 
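The lookup-and-hook order just described (instance dictionary first, then the class; ``__getattr__()`` only on failure; ``__setattr__()`` instead of a direct dictionary update) can be sketched with an illustrative class ``Tracked``:

```python
class Tracked(object):
    def __init__(self):
        # bypass our own __setattr__ hook while bootstrapping
        object.__setattr__(self, "writes", [])

    def __getattr__(self, name):
        # called only when normal lookup (instance dict, then class) fails
        return "missing:" + name

    def __setattr__(self, name, value):
        self.writes.append(name)            # record every assignment
        object.__setattr__(self, name, value)

t = Tracked()
t.x = 1
found = t.x        # instance-dict hit: __getattr__ is not consulted
fallback = t.y     # lookup fails, so __getattr__ supplies the value
```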
See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including 
local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
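The code-object attributes listed above can be inspected on any user-defined function; Python 3 spells the function's code attribute ``__code__`` rather than ``func_code``, and the generator bit ``0x20`` of ``co_flags`` is set exactly as described:

```python
def clamp(x, lo=0, hi=10):
    y = min(max(x, lo), hi)
    return y

code = clamp.__code__        # func_code in Python 2
# co_varnames lists arguments first, then the other local variables
facts = (code.co_name, code.co_argcount, code.co_varnames)

def gen():
    yield 1

is_gen = bool(gen.__code__.co_flags & 0x20)   # generator flag bit
```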
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. 
When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. 
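The ``slice.indices()`` behaviour described above, normalizing missing and out-of-bounds values against a given sequence length, can be checked directly:

```python
s = slice(None, None, 2)            # same slice object as the syntax a[::2]
triple = s.indices(10)              # (start, stop, step) for a length-10 seq

# Out-of-bounds and negative values are handled like regular slices:
big = slice(2, 100).indices(5)      # stop is clamped to the length
neg = slice(-3, None).indices(5)    # negative start counts from the end
```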
A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', + 'types': u'\nThe standard type hierarchy\n***************************\n\nBelow is a list of the types that are built into Python. Extension\nmodules (written in C, Java, or other languages, depending on the\nimplementation) can define additional types. Future versions of\nPython may add types to the type hierarchy (e.g., rational numbers,\nefficiently stored arrays of integers, etc.).\n\nSome of the type descriptions below contain a paragraph listing\n\'special attributes.\' These are attributes that provide access to the\nimplementation and are not intended for general use. Their definition\nmay change in the future.\n\nNone\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name ``None``.\n It is used to signify the absence of a value in many situations,\n e.g., it is returned from functions that don\'t explicitly return\n anything. Its truth value is false.\n\nNotImplemented\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``NotImplemented``. 
Numeric methods and rich comparison methods may\n return this value if they do not implement the operation for the\n operands provided. (The interpreter will then try the reflected\n operation, or some other fallback, depending on the operator.) Its\n truth value is true.\n\nEllipsis\n This type has a single value. There is a single object with this\n value. This object is accessed through the built-in name\n ``Ellipsis``. It is used to indicate the presence of the ``...``\n syntax in a slice. Its truth value is true.\n\n``numbers.Number``\n These are created by numeric literals and returned as results by\n arithmetic operators and arithmetic built-in functions. Numeric\n objects are immutable; once created their value never changes.\n Python numbers are of course strongly related to mathematical\n numbers, but subject to the limitations of numerical representation\n in computers.\n\n Python distinguishes between integers, floating point numbers, and\n complex numbers:\n\n ``numbers.Integral``\n These represent elements from the mathematical set of integers\n (positive and negative).\n\n There are three types of integers:\n\n Plain integers\n These represent numbers in the range -2147483648 through\n 2147483647. (The range may be larger on machines with a\n larger natural word size, but not smaller.) When the result\n of an operation would fall outside this range, the result is\n normally returned as a long integer (in some cases, the\n exception ``OverflowError`` is raised instead). For the\n purpose of shift and mask operations, integers are assumed to\n have a binary, 2\'s complement notation using 32 or more bits,\n and hiding no bits from the user (i.e., all 4294967296\n different bit patterns correspond to different values).\n\n Long integers\n These represent numbers in an unlimited range, subject to\n available (virtual) memory only. 
For the purpose of shift\n and mask operations, a binary representation is assumed, and\n negative numbers are represented in a variant of 2\'s\n complement which gives the illusion of an infinite string of\n sign bits extending to the left.\n\n Booleans\n These represent the truth values False and True. The two\n objects representing the values False and True are the only\n Boolean objects. The Boolean type is a subtype of plain\n integers, and Boolean values behave like the values 0 and 1,\n respectively, in almost all contexts, the exception being\n that when converted to a string, the strings ``"False"`` or\n ``"True"`` are returned, respectively.\n\n The rules for integer representation are intended to give the\n most meaningful interpretation of shift and mask operations\n involving negative integers and the least surprises when\n switching between the plain and long integer domains. Any\n operation, if it yields a result in the plain integer domain,\n will yield the same result in the long integer domain or when\n using mixed operands. The switch between domains is transparent\n to the programmer.\n\n ``numbers.Real`` (``float``)\n These represent machine-level double precision floating point\n numbers. You are at the mercy of the underlying machine\n architecture (and C or Java implementation) for the accepted\n range and handling of overflow. Python does not support single-\n precision floating point numbers; the savings in processor and\n memory usage that are usually the reason for using these is\n dwarfed by the overhead of using objects in Python, so there is\n no reason to complicate the language with two kinds of floating\n point numbers.\n\n ``numbers.Complex``\n These represent complex numbers as a pair of machine-level\n double precision floating point numbers. The same caveats apply\n as for floating point numbers. 
The real and imaginary parts of a\n complex number ``z`` can be retrieved through the read-only\n attributes ``z.real`` and ``z.imag``.\n\nSequences\n These represent finite ordered sets indexed by non-negative\n numbers. The built-in function ``len()`` returns the number of\n items of a sequence. When the length of a sequence is *n*, the\n index set contains the numbers 0, 1, ..., *n*-1. Item *i* of\n sequence *a* is selected by ``a[i]``.\n\n Sequences also support slicing: ``a[i:j]`` selects all items with\n index *k* such that *i* ``<=`` *k* ``<`` *j*. When used as an\n expression, a slice is a sequence of the same type. This implies\n that the index set is renumbered so that it starts at 0.\n\n Some sequences also support "extended slicing" with a third "step"\n parameter: ``a[i:j:k]`` selects all items of *a* with index *x*\n where ``x = i + n*k``, *n* ``>=`` ``0`` and *i* ``<=`` *x* ``<``\n *j*.\n\n Sequences are distinguished according to their mutability:\n\n Immutable sequences\n An object of an immutable sequence type cannot change once it is\n created. (If the object contains references to other objects,\n these other objects may be mutable and may be changed; however,\n the collection of objects directly referenced by an immutable\n object cannot change.)\n\n The following types are immutable sequences:\n\n Strings\n The items of a string are characters. There is no separate\n character type; a character is represented by a string of one\n item. Characters represent (at least) 8-bit bytes. The\n built-in functions ``chr()`` and ``ord()`` convert between\n characters and nonnegative integers representing the byte\n values. Bytes with the values 0-127 usually represent the\n corresponding ASCII values, but the interpretation of values\n is up to the program. 
The string data type is also used to\n represent arrays of bytes, e.g., to hold data read from a\n file.\n\n (On systems whose native character set is not ASCII, strings\n may use EBCDIC in their internal representation, provided the\n functions ``chr()`` and ``ord()`` implement a mapping between\n ASCII and EBCDIC, and string comparison preserves the ASCII\n order. Or perhaps someone can propose a better rule?)\n\n Unicode\n The items of a Unicode object are Unicode code units. A\n Unicode code unit is represented by a Unicode object of one\n item and can hold either a 16-bit or 32-bit value\n representing a Unicode ordinal (the maximum value for the\n ordinal is given in ``sys.maxunicode``, and depends on how\n Python is configured at compile time). Surrogate pairs may\n be present in the Unicode object, and will be reported as two\n separate items. The built-in functions ``unichr()`` and\n ``ord()`` convert between code units and nonnegative integers\n representing the Unicode ordinals as defined in the Unicode\n Standard 3.0. Conversion from and to other encodings are\n possible through the Unicode method ``encode()`` and the\n built-in function ``unicode()``.\n\n Tuples\n The items of a tuple are arbitrary Python objects. Tuples of\n two or more items are formed by comma-separated lists of\n expressions. A tuple of one item (a \'singleton\') can be\n formed by affixing a comma to an expression (an expression by\n itself does not create a tuple, since parentheses must be\n usable for grouping of expressions). An empty tuple can be\n formed by an empty pair of parentheses.\n\n Mutable sequences\n Mutable sequences can be changed after they are created. The\n subscription and slicing notations can be used as the target of\n assignment and ``del`` (delete) statements.\n\n There are currently two intrinsic mutable sequence types:\n\n Lists\n The items of a list are arbitrary Python objects. 
Lists are\n formed by placing a comma-separated list of expressions in\n square brackets. (Note that there are no special cases needed\n to form lists of length 0 or 1.)\n\n Byte Arrays\n A bytearray object is a mutable array. They are created by\n the built-in ``bytearray()`` constructor. Aside from being\n mutable (and hence unhashable), byte arrays otherwise provide\n the same interface and functionality as immutable bytes\n objects.\n\n The extension module ``array`` provides an additional example of\n a mutable sequence type.\n\nSet types\n These represent unordered, finite sets of unique, immutable\n objects. As such, they cannot be indexed by any subscript. However,\n they can be iterated over, and the built-in function ``len()``\n returns the number of items in a set. Common uses for sets are fast\n membership testing, removing duplicates from a sequence, and\n computing mathematical operations such as intersection, union,\n difference, and symmetric difference.\n\n For set elements, the same immutability rules apply as for\n dictionary keys. Note that numeric types obey the normal rules for\n numeric comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``), only one of them can be contained in a set.\n\n There are currently two intrinsic set types:\n\n Sets\n These represent a mutable set. They are created by the built-in\n ``set()`` constructor and can be modified afterwards by several\n methods, such as ``add()``.\n\n Frozen sets\n These represent an immutable set. They are created by the\n built-in ``frozenset()`` constructor. As a frozenset is\n immutable and *hashable*, it can be used again as an element of\n another set, or as a dictionary key.\n\nMappings\n These represent finite sets of objects indexed by arbitrary index\n sets. The subscript notation ``a[k]`` selects the item indexed by\n ``k`` from the mapping ``a``; this can be used in expressions and\n as the target of assignments or ``del`` statements. 
The built-in\n function ``len()`` returns the number of items in a mapping.\n\n There is currently a single intrinsic mapping type:\n\n Dictionaries\n These represent finite sets of objects indexed by nearly\n arbitrary values. The only types of values not acceptable as\n keys are values containing lists or dictionaries or other\n mutable types that are compared by value rather than by object\n identity, the reason being that the efficient implementation of\n dictionaries requires a key\'s hash value to remain constant.\n Numeric types used for keys obey the normal rules for numeric\n comparison: if two numbers compare equal (e.g., ``1`` and\n ``1.0``) then they can be used interchangeably to index the same\n dictionary entry.\n\n Dictionaries are mutable; they can be created by the ``{...}``\n notation (see section *Dictionary displays*).\n\n The extension modules ``dbm``, ``gdbm``, and ``bsddb`` provide\n additional examples of mapping types.\n\nCallable types\n These are the types to which the function call operation (see\n section *Calls*) can be applied:\n\n User-defined functions\n A user-defined function object is created by a function\n definition (see section *Function definitions*). 
It should be\n called with an argument list containing the same number of items\n as the function\'s formal parameter list.\n\n Special attributes:\n\n +-------------------------+---------------------------------+-------------+\n | Attribute | Meaning | |\n +=========================+=================================+=============+\n | ``func_doc`` | The function\'s documentation | Writable |\n | | string, or ``None`` if | |\n | | unavailable | |\n +-------------------------+---------------------------------+-------------+\n | ``__doc__`` | Another way of spelling | Writable |\n | | ``func_doc`` | |\n +-------------------------+---------------------------------+-------------+\n | ``func_name`` | The function\'s name | Writable |\n +-------------------------+---------------------------------+-------------+\n | ``__name__`` | Another way of spelling | Writable |\n | | ``func_name`` | |\n +-------------------------+---------------------------------+-------------+\n | ``__module__`` | The name of the module the | Writable |\n | | function was defined in, or | |\n | | ``None`` if unavailable. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_defaults`` | A tuple containing default | Writable |\n | | argument values for those | |\n | | arguments that have defaults, | |\n | | or ``None`` if no arguments | |\n | | have a default value | |\n +-------------------------+---------------------------------+-------------+\n | ``func_code`` | The code object representing | Writable |\n | | the compiled function body. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_globals`` | A reference to the dictionary | Read-only |\n | | that holds the function\'s | |\n | | global variables --- the global | |\n | | namespace of the module in | |\n | | which the function was defined. 
| |\n +-------------------------+---------------------------------+-------------+\n | ``func_dict`` | The namespace supporting | Writable |\n | | arbitrary function attributes. | |\n +-------------------------+---------------------------------+-------------+\n | ``func_closure`` | ``None`` or a tuple of cells | Read-only |\n | | that contain bindings for the | |\n | | function\'s free variables. | |\n +-------------------------+---------------------------------+-------------+\n\n Most of the attributes labelled "Writable" check the type of the\n assigned value.\n\n Changed in version 2.4: ``func_name`` is now writable.\n\n Function objects also support getting and setting arbitrary\n attributes, which can be used, for example, to attach metadata\n to functions. Regular attribute dot-notation is used to get and\n set such attributes. *Note that the current implementation only\n supports function attributes on user-defined functions. Function\n attributes on built-in functions may be supported in the\n future.*\n\n Additional information about a function\'s definition can be\n retrieved from its code object; see the description of internal\n types below.\n\n User-defined methods\n A user-defined method object combines a class, a class instance\n (or ``None``) and any callable object (normally a user-defined\n function).\n\n Special read-only attributes: ``im_self`` is the class instance\n object, ``im_func`` is the function object; ``im_class`` is the\n class of ``im_self`` for bound methods or the class that asked\n for the method for unbound methods; ``__doc__`` is the method\'s\n documentation (same as ``im_func.__doc__``); ``__name__`` is the\n method name (same as ``im_func.__name__``); ``__module__`` is\n the name of the module the method was defined in, or ``None`` if\n unavailable.\n\n Changed in version 2.2: ``im_self`` used to refer to the class\n that defined the method.\n\n Changed in version 2.6: For 3.0 forward-compatibility,\n ``im_func`` is also 
available as ``__func__``, and ``im_self``\n as ``__self__``.\n\n Methods also support accessing (but not setting) the arbitrary\n function attributes on the underlying function object.\n\n User-defined method objects may be created when getting an\n attribute of a class (perhaps via an instance of that class), if\n that attribute is a user-defined function object, an unbound\n user-defined method object, or a class method object. When the\n attribute is a user-defined method object, a new method object\n is only created if the class from which it is being retrieved is\n the same as, or a derived class of, the class stored in the\n original method object; otherwise, the original method object is\n used as it is.\n\n When a user-defined method object is created by retrieving a\n user-defined function object from a class, its ``im_self``\n attribute is ``None`` and the method object is said to be\n unbound. When one is created by retrieving a user-defined\n function object from a class via one of its instances, its\n ``im_self`` attribute is the instance, and the method object is\n said to be bound. 
In either case, the new method\'s ``im_class``\n attribute is the class from which the retrieval takes place, and\n its ``im_func`` attribute is the original function object.\n\n When a user-defined method object is created by retrieving\n another method object from a class or instance, the behaviour is\n the same as for a function object, except that the ``im_func``\n attribute of the new instance is not the original method object\n but its ``im_func`` attribute.\n\n When a user-defined method object is created by retrieving a\n class method object from a class or instance, its ``im_self``\n attribute is the class itself (the same as the ``im_class``\n attribute), and its ``im_func`` attribute is the function object\n underlying the class method.\n\n When an unbound user-defined method object is called, the\n underlying function (``im_func``) is called, with the\n restriction that the first argument must be an instance of the\n proper class (``im_class``) or of a derived class thereof.\n\n When a bound user-defined method object is called, the\n underlying function (``im_func``) is called, inserting the class\n instance (``im_self``) in front of the argument list. For\n instance, when ``C`` is a class which contains a definition for\n a function ``f()``, and ``x`` is an instance of ``C``, calling\n ``x.f(1)`` is equivalent to calling ``C.f(x, 1)``.\n\n When a user-defined method object is derived from a class method\n object, the "class instance" stored in ``im_self`` will actually\n be the class itself, so that calling either ``x.f(1)`` or\n ``C.f(1)`` is equivalent to calling ``f(C,1)`` where ``f`` is\n the underlying function.\n\n Note that the transformation from function object to (unbound or\n bound) method object happens each time the attribute is\n retrieved from the class or instance. In some cases, a fruitful\n optimization is to assign the attribute to a local variable and\n call that local variable. 
Also notice that this transformation\n only happens for user-defined functions; other callable objects\n (and all non-callable objects) are retrieved without\n transformation. It is also important to note that user-defined\n functions which are attributes of a class instance are not\n converted to bound methods; this *only* happens when the\n function is an attribute of the class.\n\n Generator functions\n A function or method which uses the ``yield`` statement (see\n section *The yield statement*) is called a *generator function*.\n Such a function, when called, always returns an iterator object\n which can be used to execute the body of the function: calling\n the iterator\'s ``next()`` method will cause the function to\n execute until it provides a value using the ``yield`` statement.\n When the function executes a ``return`` statement or falls off\n the end, a ``StopIteration`` exception is raised and the\n iterator will have reached the end of the set of values to be\n returned.\n\n Built-in functions\n A built-in function object is a wrapper around a C function.\n Examples of built-in functions are ``len()`` and ``math.sin()``\n (``math`` is a standard built-in module). The number and type of\n the arguments are determined by the C function. Special read-\n only attributes: ``__doc__`` is the function\'s documentation\n string, or ``None`` if unavailable; ``__name__`` is the\n function\'s name; ``__self__`` is set to ``None`` (but see the\n next item); ``__module__`` is the name of the module the\n function was defined in or ``None`` if unavailable.\n\n Built-in methods\n This is really a different disguise of a built-in function, this\n time containing an object passed to the C function as an\n implicit extra argument. An example of a built-in method is\n ``alist.append()``, assuming *alist* is a list object. 
In this\n case, the special read-only attribute ``__self__`` is set to the\n object denoted by *alist*.\n\n Class Types\n Class types, or "new-style classes," are callable. These\n objects normally act as factories for new instances of\n themselves, but variations are possible for class types that\n override ``__new__()``. The arguments of the call are passed to\n ``__new__()`` and, in the typical case, to ``__init__()`` to\n initialize the new instance.\n\n Classic Classes\n Class objects are described below. When a class object is\n called, a new class instance (also described below) is created\n and returned. This implies a call to the class\'s ``__init__()``\n method if it has one. Any arguments are passed on to the\n ``__init__()`` method. If there is no ``__init__()`` method,\n the class must be called without arguments.\n\n Class instances\n Class instances are described below. Class instances are\n callable only when the class has a ``__call__()`` method;\n ``x(arguments)`` is a shorthand for ``x.__call__(arguments)``.\n\nModules\n Modules are imported by the ``import`` statement (see section *The\n import statement*). A module object has a namespace implemented by\n a dictionary object (this is the dictionary referenced by the\n func_globals attribute of functions defined in the module).\n Attribute references are translated to lookups in this dictionary,\n e.g., ``m.x`` is equivalent to ``m.__dict__["x"]``. 
A module object\n does not contain the code object used to initialize the module\n (since it isn\'t needed once the initialization is done).\n\n Attribute assignment updates the module\'s namespace dictionary,\n e.g., ``m.x = 1`` is equivalent to ``m.__dict__["x"] = 1``.\n\n Special read-only attribute: ``__dict__`` is the module\'s namespace\n as a dictionary object.\n\n **CPython implementation detail:** Because of the way CPython\n clears module dictionaries, the module dictionary will be cleared\n when the module falls out of scope even if the dictionary still has\n live references. To avoid this, copy the dictionary or keep the\n module around while using its dictionary directly.\n\n Predefined (writable) attributes: ``__name__`` is the module\'s\n name; ``__doc__`` is the module\'s documentation string, or ``None``\n if unavailable; ``__file__`` is the pathname of the file from which\n the module was loaded, if it was loaded from a file. The\n ``__file__`` attribute is not present for C modules that are\n statically linked into the interpreter; for extension modules\n loaded dynamically from a shared library, it is the pathname of the\n shared library file.\n\nClasses\n Both class types (new-style classes) and class objects (old-\n style/classic classes) are typically created by class definitions\n (see section *Class definitions*). A class has a namespace\n implemented by a dictionary object. Class attribute references are\n translated to lookups in this dictionary, e.g., ``C.x`` is\n translated to ``C.__dict__["x"]`` (although for new-style classes\n in particular there are a number of hooks which allow for other\n means of locating attributes). When the attribute name is not found\n there, the attribute search continues in the base classes. For\n old-style classes, the search is depth-first, left-to-right in the\n order of occurrence in the base class list. 
New-style classes use\n the more complex C3 method resolution order which behaves correctly\n even in the presence of \'diamond\' inheritance structures where\n there are multiple inheritance paths leading back to a common\n ancestor. Additional details on the C3 MRO used by new-style\n classes can be found in the documentation accompanying the 2.3\n release at http://www.python.org/download/releases/2.3/mro/.\n\n When a class attribute reference (for class ``C``, say) would yield\n a user-defined function object or an unbound user-defined method\n object whose associated class is either ``C`` or one of its base\n classes, it is transformed into an unbound user-defined method\n object whose ``im_class`` attribute is ``C``. When it would yield a\n class method object, it is transformed into a bound user-defined\n method object whose ``im_class`` and ``im_self`` attributes are\n both ``C``. When it would yield a static method object, it is\n transformed into the object wrapped by the static method object.\n See section *Implementing Descriptors* for another way in which\n attributes retrieved from a class may differ from those actually\n contained in its ``__dict__`` (note that only new-style classes\n support descriptors).\n\n Class attribute assignments update the class\'s dictionary, never\n the dictionary of a base class.\n\n A class object can be called (see above) to yield a class instance\n (see below).\n\n Special attributes: ``__name__`` is the class name; ``__module__``\n is the module name in which the class was defined; ``__dict__`` is\n the dictionary containing the class\'s namespace; ``__bases__`` is a\n tuple (possibly empty or a singleton) containing the base classes,\n in the order of their occurrence in the base class list;\n ``__doc__`` is the class\'s documentation string, or None if\n undefined.\n\nClass instances\n A class instance is created by calling a class object (see above).\n A class instance has a namespace implemented as a dictionary 
which\n is the first place in which attribute references are searched.\n When an attribute is not found there, and the instance\'s class has\n an attribute by that name, the search continues with the class\n attributes. If a class attribute is found that is a user-defined\n function object or an unbound user-defined method object whose\n associated class is the class (call it ``C``) of the instance for\n which the attribute reference was initiated or one of its bases, it\n is transformed into a bound user-defined method object whose\n ``im_class`` attribute is ``C`` and whose ``im_self`` attribute is\n the instance. Static method and class method objects are also\n transformed, as if they had been retrieved from class ``C``; see\n above under "Classes". See section *Implementing Descriptors* for\n another way in which attributes of a class retrieved via its\n instances may differ from the objects actually stored in the\n class\'s ``__dict__``. If no class attribute is found, and the\n object\'s class has a ``__getattr__()`` method, that is called to\n satisfy the lookup.\n\n Attribute assignments and deletions update the instance\'s\n dictionary, never a class\'s dictionary. If the class has a\n ``__setattr__()`` or ``__delattr__()`` method, this is called\n instead of updating the instance dictionary directly.\n\n Class instances can pretend to be numbers, sequences, or mappings\n if they have methods with certain special names. See section\n *Special method names*.\n\n Special attributes: ``__dict__`` is the attribute dictionary;\n ``__class__`` is the instance\'s class.\n\nFiles\n A file object represents an open file. File objects are created by\n the ``open()`` built-in function, and also by ``os.popen()``,\n ``os.fdopen()``, and the ``makefile()`` method of socket objects\n (and perhaps by other functions or methods provided by extension\n modules). 
The objects ``sys.stdin``, ``sys.stdout`` and\n ``sys.stderr`` are initialized to file objects corresponding to the\n interpreter\'s standard input, output and error streams. See *File\n Objects* for complete documentation of file objects.\n\nInternal types\n A few types used internally by the interpreter are exposed to the\n user. Their definitions may change with future versions of the\n interpreter, but they are mentioned here for completeness.\n\n Code objects\n Code objects represent *byte-compiled* executable Python code,\n or *bytecode*. The difference between a code object and a\n function object is that the function object contains an explicit\n reference to the function\'s globals (the module in which it was\n defined), while a code object contains no context; also the\n default argument values are stored in the function object, not\n in the code object (because they represent values calculated at\n run-time). Unlike function objects, code objects are immutable\n and contain no references (directly or indirectly) to mutable\n objects.\n\n Special read-only attributes: ``co_name`` gives the function\n name; ``co_argcount`` is the number of positional arguments\n (including arguments with default values); ``co_nlocals`` is the\n number of local variables used by the function (including\n arguments); ``co_varnames`` is a tuple containing the names of\n the local variables (starting with the argument names);\n ``co_cellvars`` is a tuple containing the names of local\n variables that are referenced by nested functions;\n ``co_freevars`` is a tuple containing the names of free\n variables; ``co_code`` is a string representing the sequence of\n bytecode instructions; ``co_consts`` is a tuple containing the\n literals used by the bytecode; ``co_names`` is a tuple\n containing the names used by the bytecode; ``co_filename`` is\n the filename from which the code was compiled;\n ``co_firstlineno`` is the first line number of the function;\n ``co_lnotab`` is a string 
encoding the mapping from bytecode\n offsets to line numbers (for details see the source code of the\n interpreter); ``co_stacksize`` is the required stack size\n (including local variables); ``co_flags`` is an integer encoding\n a number of flags for the interpreter.\n\n The following flag bits are defined for ``co_flags``: bit\n ``0x04`` is set if the function uses the ``*arguments`` syntax\n to accept an arbitrary number of positional arguments; bit\n ``0x08`` is set if the function uses the ``**keywords`` syntax\n to accept arbitrary keyword arguments; bit ``0x20`` is set if\n the function is a generator.\n\n Future feature declarations (``from __future__ import\n division``) also use bits in ``co_flags`` to indicate whether a\n code object was compiled with a particular feature enabled: bit\n ``0x2000`` is set if the function was compiled with future\n division enabled; bits ``0x10`` and ``0x1000`` were used in\n earlier versions of Python.\n\n Other bits in ``co_flags`` are reserved for internal use.\n\n If a code object represents a function, the first item in\n ``co_consts`` is the documentation string of the function, or\n ``None`` if undefined.\n\n Frame objects\n Frame objects represent execution frames. 
They may occur in\n traceback objects (see below).\n\n Special read-only attributes: ``f_back`` is to the previous\n stack frame (towards the caller), or ``None`` if this is the\n bottom stack frame; ``f_code`` is the code object being executed\n in this frame; ``f_locals`` is the dictionary used to look up\n local variables; ``f_globals`` is used for global variables;\n ``f_builtins`` is used for built-in (intrinsic) names;\n ``f_restricted`` is a flag indicating whether the function is\n executing in restricted execution mode; ``f_lasti`` gives the\n precise instruction (this is an index into the bytecode string\n of the code object).\n\n Special writable attributes: ``f_trace``, if not ``None``, is a\n function called at the start of each source code line (this is\n used by the debugger); ``f_exc_type``, ``f_exc_value``,\n ``f_exc_traceback`` represent the last exception raised in the\n parent frame provided another exception was ever raised in the\n current frame (in all other cases they are None); ``f_lineno``\n is the current line number of the frame --- writing to this from\n within a trace function jumps to the given line (only for the\n bottom-most frame). A debugger can implement a Jump command\n (aka Set Next Statement) by writing to f_lineno.\n\n Traceback objects\n Traceback objects represent a stack trace of an exception. A\n traceback object is created when an exception occurs. When the\n search for an exception handler unwinds the execution stack, at\n each unwound level a traceback object is inserted in front of\n the current traceback. When an exception handler is entered,\n the stack trace is made available to the program. (See section\n *The try statement*.) It is accessible as ``sys.exc_traceback``,\n and also as the third item of the tuple returned by\n ``sys.exc_info()``. The latter is the preferred interface,\n since it works correctly when the program is using multiple\n threads. 
When the program contains no suitable handler, the\n stack trace is written (nicely formatted) to the standard error\n stream; if the interpreter is interactive, it is also made\n available to the user as ``sys.last_traceback``.\n\n Special read-only attributes: ``tb_next`` is the next level in\n the stack trace (towards the frame where the exception\n occurred), or ``None`` if there is no next level; ``tb_frame``\n points to the execution frame of the current level;\n ``tb_lineno`` gives the line number where the exception\n occurred; ``tb_lasti`` indicates the precise instruction. The\n line number and last instruction in the traceback may differ\n from the line number of its frame object if the exception\n occurred in a ``try`` statement with no matching except clause\n or with a finally clause.\n\n Slice objects\n Slice objects are used to represent slices when *extended slice\n syntax* is used. This is a slice using two colons, or multiple\n slices or ellipses separated by commas, e.g., ``a[i:j:step]``,\n ``a[i:j, k:l]``, or ``a[..., i:j]``. They are also created by\n the built-in ``slice()`` function.\n\n Special read-only attributes: ``start`` is the lower bound;\n ``stop`` is the upper bound; ``step`` is the step value; each is\n ``None`` if omitted. These attributes can have any type.\n\n Slice objects support one method:\n\n slice.indices(self, length)\n\n This method takes a single integer argument *length* and\n computes information about the extended slice that the slice\n object would describe if applied to a sequence of *length*\n items. It returns a tuple of three integers; respectively\n these are the *start* and *stop* indices and the *step* or\n stride length of the slice. Missing or out-of-bounds indices\n are handled in a manner consistent with regular slices.\n\n New in version 2.3.\n\n Static method objects\n Static method objects provide a way of defeating the\n transformation of function objects to method objects described\n above. 
A static method object is a wrapper around any other\n object, usually a user-defined method object. When a static\n method object is retrieved from a class or a class instance, the\n object actually returned is the wrapped object, which is not\n subject to any further transformation. Static method objects are\n not themselves callable, although the objects they wrap usually\n are. Static method objects are created by the built-in\n ``staticmethod()`` constructor.\n\n Class method objects\n A class method object, like a static method object, is a wrapper\n around another object that alters the way in which that object\n is retrieved from classes and class instances. The behaviour of\n class method objects upon such retrieval is described above,\n under "User-defined methods". Class method objects are created\n by the built-in ``classmethod()`` constructor.\n', 'typesfunctions': u'\nFunctions\n*********\n\nFunction objects are created by function definitions. The only\noperation on a function object is to call it: ``func(argument-list)``.\n\nThere are really two flavors of function objects: built-in functions\nand user-defined functions. Both support the same operation (to call\nthe function), but the implementation is different, hence the\ndifferent object types.\n\nSee *Function definitions* for more information.\n', - 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. 
Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. 
For example, these all return a dictionary equal to\n ``{"one": 2, "two": 3}``:\n\n * ``dict(one=2, two=3)``\n\n * ``dict({\'one\': 2, \'two\': 3})``\n\n * ``dict(zip((\'one\', \'two\'), (2, 3)))``\n\n * ``dict([[\'two\', 3], [\'one\', 2]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. ``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. 
This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. 
See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. *default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as a tuple or other iterable of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. 
See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. 
Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', + 'typesmapping': u'\nMapping Types --- ``dict``\n**************************\n\nA *mapping* object maps *hashable* values to arbitrary objects.\nMappings are mutable objects. There is currently only one standard\nmapping type, the *dictionary*. (For other containers see the built\nin ``list``, ``set``, and ``tuple`` classes, and the ``collections``\nmodule.)\n\nA dictionary\'s keys are *almost* arbitrary values. Values that are\nnot *hashable*, that is, values containing lists, dictionaries or\nother mutable types (that are compared by value rather than by object\nidentity) may not be used as keys. Numeric types used for keys obey\nthe normal rules for numeric comparison: if two numbers compare equal\n(such as ``1`` and ``1.0``) then they can be used interchangeably to\nindex the same dictionary entry. (Note however, that since computers\nstore floating-point numbers as approximations it is usually unwise to\nuse them as dictionary keys.)\n\nDictionaries can be created by placing a comma-separated list of\n``key: value`` pairs within braces, for example: ``{\'jack\': 4098,\n\'sjoerd\': 4127}`` or ``{4098: \'jack\', 4127: \'sjoerd\'}``, or by the\n``dict`` constructor.\n\nclass class dict([arg])\n\n Return a new dictionary initialized from an optional positional\n argument or from a set of keyword arguments. If no arguments are\n given, return a new empty dictionary. If the positional argument\n *arg* is a mapping object, return a dictionary mapping the same\n keys to the same values as does the mapping object. 
Otherwise the\n positional argument must be a sequence, a container that supports\n iteration, or an iterator object. The elements of the argument\n must each also be of one of those kinds, and each must in turn\n contain exactly two objects. The first is used as a key in the new\n dictionary, and the second as the key\'s value. If a given key is\n seen more than once, the last value associated with it is retained\n in the new dictionary.\n\n If keyword arguments are given, the keywords themselves with their\n associated values are added as items to the dictionary. If a key is\n specified both in the positional argument and as a keyword\n argument, the value associated with the keyword is retained in the\n dictionary. For example, these all return a dictionary equal to\n ``{"one": 1, "two": 2}``:\n\n * ``dict(one=1, two=2)``\n\n * ``dict({\'one\': 1, \'two\': 2})``\n\n * ``dict(zip((\'one\', \'two\'), (1, 2)))``\n\n * ``dict([[\'two\', 2], [\'one\', 1]])``\n\n The first example only works for keys that are valid Python\n identifiers; the others work with any valid keys.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for building a dictionary from\n keyword arguments added.\n\n These are the operations that dictionaries support (and therefore,\n custom mapping types should support too):\n\n len(d)\n\n Return the number of items in the dictionary *d*.\n\n d[key]\n\n Return the item of *d* with key *key*. Raises a ``KeyError`` if\n *key* is not in the map.\n\n New in version 2.5: If a subclass of dict defines a method\n ``__missing__()``, if the key *key* is not present, the\n ``d[key]`` operation calls that method with the key *key* as\n argument. The ``d[key]`` operation then returns or raises\n whatever is returned or raised by the ``__missing__(key)`` call\n if the key is not present. No other operations or methods invoke\n ``__missing__()``. If ``__missing__()`` is not defined,\n ``KeyError`` is raised. 
``__missing__()`` must be a method; it\n cannot be an instance variable. For an example, see\n ``collections.defaultdict``.\n\n d[key] = value\n\n Set ``d[key]`` to *value*.\n\n del d[key]\n\n Remove ``d[key]`` from *d*. Raises a ``KeyError`` if *key* is\n not in the map.\n\n key in d\n\n Return ``True`` if *d* has a key *key*, else ``False``.\n\n New in version 2.2.\n\n key not in d\n\n Equivalent to ``not key in d``.\n\n New in version 2.2.\n\n iter(d)\n\n Return an iterator over the keys of the dictionary. This is a\n shortcut for ``iterkeys()``.\n\n clear()\n\n Remove all items from the dictionary.\n\n copy()\n\n Return a shallow copy of the dictionary.\n\n fromkeys(seq[, value])\n\n Create a new dictionary with keys from *seq* and values set to\n *value*.\n\n ``fromkeys()`` is a class method that returns a new dictionary.\n *value* defaults to ``None``.\n\n New in version 2.3.\n\n get(key[, default])\n\n Return the value for *key* if *key* is in the dictionary, else\n *default*. If *default* is not given, it defaults to ``None``,\n so that this method never raises a ``KeyError``.\n\n has_key(key)\n\n Test for the presence of *key* in the dictionary. ``has_key()``\n is deprecated in favor of ``key in d``.\n\n items()\n\n Return a copy of the dictionary\'s list of ``(key, value)``\n pairs.\n\n **CPython implementation detail:** Keys and values are listed in\n an arbitrary order which is non-random, varies across Python\n implementations, and depends on the dictionary\'s history of\n insertions and deletions.\n\n If ``items()``, ``keys()``, ``values()``, ``iteritems()``,\n ``iterkeys()``, and ``itervalues()`` are called with no\n intervening modifications to the dictionary, the lists will\n directly correspond. This allows the creation of ``(value,\n key)`` pairs using ``zip()``: ``pairs = zip(d.values(),\n d.keys())``. 
The same relationship holds for the ``iterkeys()``\n and ``itervalues()`` methods: ``pairs = zip(d.itervalues(),\n d.iterkeys())`` provides the same value for ``pairs``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.iteritems()]``.\n\n iteritems()\n\n Return an iterator over the dictionary\'s ``(key, value)`` pairs.\n See the note for ``dict.items()``.\n\n Using ``iteritems()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n iterkeys()\n\n Return an iterator over the dictionary\'s keys. See the note for\n ``dict.items()``.\n\n Using ``iterkeys()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n itervalues()\n\n Return an iterator over the dictionary\'s values. See the note\n for ``dict.items()``.\n\n Using ``itervalues()`` while adding or deleting entries in the\n dictionary may raise a ``RuntimeError`` or fail to iterate over\n all entries.\n\n New in version 2.2.\n\n keys()\n\n Return a copy of the dictionary\'s list of keys. See the note\n for ``dict.items()``.\n\n pop(key[, default])\n\n If *key* is in the dictionary, remove it and return its value,\n else return *default*. If *default* is not given and *key* is\n not in the dictionary, a ``KeyError`` is raised.\n\n New in version 2.3.\n\n popitem()\n\n Remove and return an arbitrary ``(key, value)`` pair from the\n dictionary.\n\n ``popitem()`` is useful to destructively iterate over a\n dictionary, as often used in set algorithms. If the dictionary\n is empty, calling ``popitem()`` raises a ``KeyError``.\n\n setdefault(key[, default])\n\n If *key* is in the dictionary, return its value. If not, insert\n *key* with a value of *default* and return *default*. 
*default*\n defaults to ``None``.\n\n update([other])\n\n Update the dictionary with the key/value pairs from *other*,\n overwriting existing keys. Return ``None``.\n\n ``update()`` accepts either another dictionary object or an\n iterable of key/value pairs (as tuples or other iterables of\n length two). If keyword arguments are specified, the dictionary\n is then updated with those key/value pairs: ``d.update(red=1,\n blue=2)``.\n\n Changed in version 2.4: Allowed the argument to be an iterable\n of key/value pairs and allowed keyword arguments.\n\n values()\n\n Return a copy of the dictionary\'s list of values. See the note\n for ``dict.items()``.\n\n viewitems()\n\n Return a new view of the dictionary\'s items (``(key, value)``\n pairs). See below for documentation of view objects.\n\n New in version 2.7.\n\n viewkeys()\n\n Return a new view of the dictionary\'s keys. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n viewvalues()\n\n Return a new view of the dictionary\'s values. See below for\n documentation of view objects.\n\n New in version 2.7.\n\n\nDictionary view objects\n=======================\n\nThe objects returned by ``dict.viewkeys()``, ``dict.viewvalues()`` and\n``dict.viewitems()`` are *view objects*. They provide a dynamic view\non the dictionary\'s entries, which means that when the dictionary\nchanges, the view reflects these changes.\n\nDictionary views can be iterated over to yield their respective data,\nand support membership tests:\n\nlen(dictview)\n\n Return the number of entries in the dictionary.\n\niter(dictview)\n\n Return an iterator over the keys, values or items (represented as\n tuples of ``(key, value)``) in the dictionary.\n\n Keys and values are iterated over in an arbitrary order which is\n non-random, varies across Python implementations, and depends on\n the dictionary\'s history of insertions and deletions. 
If keys,\n values and items views are iterated over with no intervening\n modifications to the dictionary, the order of items will directly\n correspond. This allows the creation of ``(value, key)`` pairs\n using ``zip()``: ``pairs = zip(d.values(), d.keys())``. Another\n way to create the same list is ``pairs = [(v, k) for (k, v) in\n d.items()]``.\n\n Iterating views while adding or deleting entries in the dictionary\n may raise a ``RuntimeError`` or fail to iterate over all entries.\n\nx in dictview\n\n Return ``True`` if *x* is in the underlying dictionary\'s keys,\n values or items (in the latter case, *x* should be a ``(key,\n value)`` tuple).\n\nKeys views are set-like since their entries are unique and hashable.\nIf all values are hashable, so that (key, value) pairs are unique and\nhashable, then the items view is also set-like. (Values views are not\ntreated as set-like since the entries are generally not unique.) Then\nthese set operations are available ("other" refers either to another\nview or a set):\n\ndictview & other\n\n Return the intersection of the dictview and the other object as a\n new set.\n\ndictview | other\n\n Return the union of the dictview and the other object as a new set.\n\ndictview - other\n\n Return the difference between the dictview and the other object\n (all elements in *dictview* that aren\'t in *other*) as a new set.\n\ndictview ^ other\n\n Return the symmetric difference (all elements either in *dictview*\n or *other*, but not in both) of the dictview and the other object\n as a new set.\n\nAn example of dictionary view usage:\n\n >>> dishes = {\'eggs\': 2, \'sausage\': 1, \'bacon\': 1, \'spam\': 500}\n >>> keys = dishes.viewkeys()\n >>> values = dishes.viewvalues()\n\n >>> # iteration\n >>> n = 0\n >>> for val in values:\n ... 
n += val\n >>> print(n)\n 504\n\n >>> # keys and values are iterated over in the same order\n >>> list(keys)\n [\'eggs\', \'bacon\', \'sausage\', \'spam\']\n >>> list(values)\n [2, 1, 1, 500]\n\n >>> # view objects are dynamic and reflect dict changes\n >>> del dishes[\'eggs\']\n >>> del dishes[\'sausage\']\n >>> list(keys)\n [\'spam\', \'bacon\']\n\n >>> # set operations\n >>> keys & {\'eggs\', \'bacon\', \'salad\'}\n {\'bacon\'}\n', 'typesmethods': u"\nMethods\n*******\n\nMethods are functions that are called using the attribute notation.\nThere are two flavors: built-in methods (such as ``append()`` on\nlists) and class instance methods. Built-in methods are described\nwith the types that support them.\n\nThe implementation adds two special read-only attributes to class\ninstance methods: ``m.im_self`` is the object on which the method\noperates, and ``m.im_func`` is the function implementing the method.\nCalling ``m(arg-1, arg-2, ..., arg-n)`` is completely equivalent to\ncalling ``m.im_func(m.im_self, arg-1, arg-2, ..., arg-n)``.\n\nClass instance methods are either *bound* or *unbound*, referring to\nwhether the method was accessed through an instance or a class,\nrespectively. When a method is unbound, its ``im_self`` attribute\nwill be ``None`` and if called, an explicit ``self`` object must be\npassed as the first argument. In this case, ``self`` must be an\ninstance of the unbound method's class (or a subclass of that class),\notherwise a ``TypeError`` is raised.\n\nLike function objects, methods objects support getting arbitrary\nattributes. However, since method attributes are actually stored on\nthe underlying function object (``meth.im_func``), setting method\nattributes on either bound or unbound methods is disallowed.\nAttempting to set a method attribute results in a ``TypeError`` being\nraised. 
In order to set a method attribute, you need to explicitly\nset it on the underlying function object:\n\n class C:\n def method(self):\n pass\n\n c = C()\n c.method.im_func.whoami = 'my name is c'\n\nSee *The standard type hierarchy* for more information.\n", 'typesmodules': u"\nModules\n*******\n\nThe only special operation on a module is attribute access:\n``m.name``, where *m* is a module and *name* accesses a name defined\nin *m*'s symbol table. Module attributes can be assigned to. (Note\nthat the ``import`` statement is not, strictly speaking, an operation\non a module object; ``import foo`` does not require a module object\nnamed *foo* to exist, rather it requires an (external) *definition*\nfor a module named *foo* somewhere.)\n\nA special member of every module is ``__dict__``. This is the\ndictionary containing the module's symbol table. Modifying this\ndictionary will actually change the module's symbol table, but direct\nassignment to the ``__dict__`` attribute is not possible (you can\nwrite ``m.__dict__['a'] = 1``, which defines ``m.a`` to be ``1``, but\nyou can't write ``m.__dict__ = {}``). Modifying ``__dict__`` directly\nis not recommended.\n\nModules built into the interpreter are written like this: ``<module 'sys' (built-in)>``. If loaded from a file, they are written as\n``<module 'os' from '/usr/local/lib/pythonX.Y/os.pyc'>``.\n",
In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). 
In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item of *s* | |\n+--------------------+----------------------------------+------------+\n| ``max(s)`` | largest item of *s* | |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. 
When *s* is a string or Unicode string object the ``in`` and ``not\n in`` operations act like a substring test. In Python versions\n before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n empty sequence of the same type as *s*). Note also that the copies\n are shallow; nested structures are not copied. This often haunts\n new Python programmers; consider:\n\n >>> lists = [[]] * 3\n >>> lists\n [[], [], []]\n >>> lists[0].append(3)\n >>> lists\n [[3], [3], [3]]\n\n What has happened is that ``[[]]`` is a one-element list containing\n an empty list, so all three elements of ``[[]] * 3`` are (pointers\n to) this single empty list. Modifying any of the elements of\n ``lists`` modifies this single list. You can create a list of\n different lists this way:\n\n >>> lists = [[] for i in range(3)]\n >>> lists[0].append(3)\n >>> lists[1].append(5)\n >>> lists[2].append(7)\n >>> lists\n [[3], [5], [7]]\n\n3. If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). 
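The index and slice rules in notes 3-5 can be verified with a minimal check (plain built-in behaviour, no extra assumptions):

```python
s = 'abcdef'
# Note 3: negative indices count from the end; len(s) + i is used.
assert s[-1] == s[len(s) - 1] == 'f'
# Note 4: out-of-range slice bounds are clamped to len(s), and the
# slice is empty when i >= j.
assert s[2:100] == 'cdef'
assert s[4:2] == ''
# Note 5: extended slicing takes items i, i+k, i+2*k, ... stopping
# before j; omitted bounds become "end" values depending on sign of k.
assert s[::2] == 'ace'
assert s[::-1] == 'fedcba'   # negative step walks backwards
```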
Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbuffer, xrange* section. To output formatted strings use template\nstrings or the ``%`` operator described in the *String Formatting\nOperations* section. Also, see the ``re`` module for string functions\nbased on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with only its first character\n capitalized.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. 
*errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. 
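As a quick illustration of the preceding entries, ``count()`` and the tuple form of ``endswith()`` behave as described (a small self-contained check):

```python
s = 'mississippi'
# count(): non-overlapping occurrences, optionally restricted to the
# [start, end] range interpreted as in slice notation.
assert s.count('ss') == 2
assert s.count('i', 4) == 3          # count only from index 4 on
# endswith() accepts a tuple of candidate suffixes (Python 2.5+),
# and optional start/end positions.
assert s.endswith(('pi', 'xy'))
assert not s.endswith('pi', 0, 4)    # compares against s[0:4] == 'miss'
```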
This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there 
is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. 
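The ``join()`` and ``ljust()`` descriptions above can be exercised directly (values chosen purely for illustration):

```python
words = ['spam', 'eggs', 'ham']
# The string providing the join() method acts as the separator.
assert ', '.join(words) == 'spam, eggs, ham'
# ljust() pads on the right with fillchar up to the requested width...
assert 'hi'.ljust(5, '.') == 'hi...'
# ...but returns the original string when width <= len(s).
assert 'hello'.ljust(3) == 'hello'
```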
The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. 
If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. 
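The ``split()`` behaviours spelled out above are easy to verify as a self-contained check:

```python
# With an explicit separator, consecutive delimiters delimit empty
# strings, and a multi-character separator is matched as a whole.
assert '1,,2'.split(',') == ['1', '', '2']
assert '1<>2<>3'.split('<>') == ['1', '2', '3']
assert ''.split(',') == ['']
# With sep omitted (or None), runs of whitespace act as a single
# separator and no empty strings appear at the start or end.
assert '  1  2   3  '.split() == ['1', '2', '3']
assert ''.split() == []
```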
Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n Note, a more flexible approach is to create a custom character\n mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n for an example).\n\nstr.upper()\n\n Return a copy of the string converted to uppercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n Return the numeric string left filled with zeros in a string of\n length *width*. A sign prefix is handled correctly. The original\n string is returned if *width* is less than ``len(s)``.\n\n New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n Return ``True`` if there are only numeric characters in S,\n ``False`` otherwise. Numeric characters include digit characters,\n and all characters that have the Unicode numeric value property,\n e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n Return ``True`` if there are only decimal characters in S,\n ``False`` otherwise. Decimal characters include digit characters,\n and all characters that can be used to form decimal-radix\n numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
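The basic shapes of ``format % values`` introduced above can be sketched in a few lines (works unchanged on 8-bit strings):

```python
# A single non-tuple value may stand in for *values* when the format
# requires exactly one argument.
assert 'hello %s' % 'world' == 'hello world'
# Otherwise *values* is a tuple with one item per conversion specifier.
assert '%d + %d = %d' % (1, 2, 3) == '1 + 2 = 3'
# A tuple value itself must therefore be wrapped in a one-item tuple.
assert 'pair: %s' % ((1, 2),) == 'pair: (1, 2)'
```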
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(#)03d quote types.\' % \\\n... {\'language\': "Python", "#": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). 
|\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). 
| (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. 
The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. 
There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. 
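Several of the equivalences in the table above can be checked directly on a small list:

```python
s = [1, 2, 3]
s.append(4)              # same as s[len(s):len(s)] = [4]
assert s == [1, 2, 3, 4]
s.extend([5, 6])         # x may be any iterable
assert s == [1, 2, 3, 4, 5, 6]
s.insert(0, 0)           # same as s[0:0] = [0]
assert s == [0, 1, 2, 3, 4, 5, 6]
assert s.pop() == 6      # i defaults to -1: last item removed and returned
s.remove(0)              # same as del s[s.index(0)]
assert s == [1, 2, 3, 4, 5]
del s[1:3]               # same as s[1:3] = []
assert s == [1, 4, 5]
```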
If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. 
Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', - 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList objects support additional operations that allow in-place\nmodification of the object. Other mutable sequence types (when added\nto the language) should also support these operations. Strings and\ntuples are immutable sequence types: such objects cannot be modified\nonce created. 
The following operations are defined on mutable sequence\ntypes (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. 
The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. 
**CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", + 'typesseq': u'\nSequence Types --- ``str``, ``unicode``, ``list``, ``tuple``, ``bytearray``, ``buffer``, ``xrange``\n***************************************************************************************************\n\nThere are seven sequence types: strings, Unicode strings, lists,\ntuples, bytearrays, buffers, and xrange objects.\n\nFor other containers see the built in ``dict`` and ``set`` classes,\nand the ``collections`` module.\n\nString literals are written in single or double quotes: ``\'xyzzy\'``,\n``"frobozz"``. See *String literals* for more about string literals.\nUnicode strings are much like strings, but are specified in the syntax\nusing a preceding ``\'u\'`` character: ``u\'abc\'``, ``u"def"``. In\naddition to the functionality described here, there are also string-\nspecific methods described in the *String Methods* section. Lists are\nconstructed with square brackets, separating items with commas: ``[a,\nb, c]``. Tuples are constructed by the comma operator (not within\nsquare brackets), with or without enclosing parentheses, but an empty\ntuple must have the enclosing parentheses, such as ``a, b, c`` or\n``()``. A single item tuple must have a trailing comma, such as\n``(d,)``.\n\nBytearray objects are created with the built-in function\n``bytearray()``.\n\nBuffer objects are not directly supported by Python syntax, but can be\ncreated by calling the built-in function ``buffer()``. They don\'t\nsupport concatenation or repetition.\n\nObjects of type xrange are similar to buffers in that there is no\nspecific syntax to create them, but they are created using the\n``xrange()`` function. 
They don\'t support slicing, concatenation or\nrepetition, and using ``in``, ``not in``, ``min()`` or ``max()`` on\nthem is inefficient.\n\nMost sequence types support the following operations. The ``in`` and\n``not in`` operations have the same priorities as the comparison\noperations. The ``+`` and ``*`` operations have the same priority as\nthe corresponding numeric operations. [3] Additional methods are\nprovided for *Mutable Sequence Types*.\n\nThis table lists the sequence operations sorted in ascending priority\n(operations in the same box have the same priority). In the table,\n*s* and *t* are sequences of the same type; *n*, *i* and *j* are\nintegers:\n\n+--------------------+----------------------------------+------------+\n| Operation | Result | Notes |\n+====================+==================================+============+\n| ``x in s`` | ``True`` if an item of *s* is | (1) |\n| | equal to *x*, else ``False`` | |\n+--------------------+----------------------------------+------------+\n| ``x not in s`` | ``False`` if an item of *s* is | (1) |\n| | equal to *x*, else ``True`` | |\n+--------------------+----------------------------------+------------+\n| ``s + t`` | the concatenation of *s* and *t* | (6) |\n+--------------------+----------------------------------+------------+\n| ``s * n, n * s`` | *n* shallow copies of *s* | (2) |\n| | concatenated | |\n+--------------------+----------------------------------+------------+\n| ``s[i]`` | *i*\'th item of *s*, origin 0 | (3) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j]`` | slice of *s* from *i* to *j* | (3)(4) |\n+--------------------+----------------------------------+------------+\n| ``s[i:j:k]`` | slice of *s* from *i* to *j* | (3)(5) |\n| | with step *k* | |\n+--------------------+----------------------------------+------------+\n| ``len(s)`` | length of *s* | |\n+--------------------+----------------------------------+------------+\n| ``min(s)`` | smallest item 
of *s*             |            |\n+--------------------+----------------------------------+------------+\n| ``max(s)``         | largest item of *s*              |            |\n+--------------------+----------------------------------+------------+\n| ``s.index(i)``     | index of the first occurrence of |            |\n|                    | *i* in *s*                       |            |\n+--------------------+----------------------------------+------------+\n| ``s.count(i)``     | total number of occurrences of   |            |\n|                    | *i* in *s*                       |            |\n+--------------------+----------------------------------+------------+\n\nSequence types also support comparisons. In particular, tuples and\nlists are compared lexicographically by comparing corresponding\nelements. This means that to compare equal, every element must compare\nequal and the two sequences must be of the same type and have the same\nlength. (For full details see *Comparisons* in the language\nreference.)\n\nNotes:\n\n1. When *s* is a string or Unicode string object the ``in`` and ``not\n   in`` operations act like a substring test. In Python versions\n   before 2.3, *x* had to be a string of length 1. In Python 2.3 and\n   beyond, *x* may be a string of any length.\n\n2. Values of *n* less than ``0`` are treated as ``0`` (which yields an\n   empty sequence of the same type as *s*). Note also that the copies\n   are shallow; nested structures are not copied. This often haunts\n   new Python programmers; consider:\n\n   >>> lists = [[]] * 3\n   >>> lists\n   [[], [], []]\n   >>> lists[0].append(3)\n   >>> lists\n   [[3], [3], [3]]\n\n   What has happened is that ``[[]]`` is a one-element list containing\n   an empty list, so all three elements of ``[[]] * 3`` are (pointers\n   to) this single empty list. Modifying any of the elements of\n   ``lists`` modifies this single list. You can create a list of\n   different lists this way:\n\n   >>> lists = [[] for i in range(3)]\n   >>> lists[0].append(3)\n   >>> lists[1].append(5)\n   >>> lists[2].append(7)\n   >>> lists\n   [[3], [5], [7]]\n\n3. 
If *i* or *j* is negative, the index is relative to the end of the\n string: ``len(s) + i`` or ``len(s) + j`` is substituted. But note\n that ``-0`` is still ``0``.\n\n4. The slice of *s* from *i* to *j* is defined as the sequence of\n items with index *k* such that ``i <= k < j``. If *i* or *j* is\n greater than ``len(s)``, use ``len(s)``. If *i* is omitted or\n ``None``, use ``0``. If *j* is omitted or ``None``, use\n ``len(s)``. If *i* is greater than or equal to *j*, the slice is\n empty.\n\n5. The slice of *s* from *i* to *j* with step *k* is defined as the\n sequence of items with index ``x = i + n*k`` such that ``0 <= n <\n (j-i)/k``. In other words, the indices are ``i``, ``i+k``,\n ``i+2*k``, ``i+3*k`` and so on, stopping when *j* is reached (but\n never including *j*). If *i* or *j* is greater than ``len(s)``,\n use ``len(s)``. If *i* or *j* are omitted or ``None``, they become\n "end" values (which end depends on the sign of *k*). Note, *k*\n cannot be zero. If *k* is ``None``, it is treated like ``1``.\n\n6. **CPython implementation detail:** If *s* and *t* are both strings,\n some Python implementations such as CPython can usually perform an\n in-place optimization for assignments of the form ``s = s + t`` or\n ``s += t``. When applicable, this optimization makes quadratic\n run-time much less likely. This optimization is both version and\n implementation dependent. For performance sensitive code, it is\n preferable to use the ``str.join()`` method which assures\n consistent linear concatenation performance across versions and\n implementations.\n\n Changed in version 2.4: Formerly, string concatenation never\n occurred in-place.\n\n\nString Methods\n==============\n\nBelow are listed the string methods which both 8-bit strings and\nUnicode objects support. 
Some of them are also available on\n``bytearray`` objects.\n\nIn addition, Python\'s strings support the sequence type methods\ndescribed in the *Sequence Types --- str, unicode, list, tuple,\nbytearray, buffer, xrange* section. To output formatted strings use\ntemplate strings or the ``%`` operator described in the *String\nFormatting Operations* section. Also, see the ``re`` module for string\nfunctions based on regular expressions.\n\nstr.capitalize()\n\n Return a copy of the string with its first character capitalized\n and the rest lowercased.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.center(width[, fillchar])\n\n Return centered in a string of length *width*. Padding is done\n using the specified *fillchar* (default is a space).\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.count(sub[, start[, end]])\n\n Return the number of non-overlapping occurrences of substring *sub*\n in the range [*start*, *end*]. Optional arguments *start* and\n *end* are interpreted as in slice notation.\n\nstr.decode([encoding[, errors]])\n\n Decodes the string using the codec registered for *encoding*.\n *encoding* defaults to the default string encoding. *errors* may\n be given to set a different error handling scheme. The default is\n ``\'strict\'``, meaning that encoding errors raise ``UnicodeError``.\n Other possible values are ``\'ignore\'``, ``\'replace\'`` and any other\n name registered via ``codecs.register_error()``, see section *Codec\n Base Classes*.\n\n New in version 2.2.\n\n Changed in version 2.3: Support for other error handling schemes\n added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.encode([encoding[, errors]])\n\n Return an encoded version of the string. Default encoding is the\n current default string encoding. *errors* may be given to set a\n different error handling scheme. The default for *errors* is\n ``\'strict\'``, meaning that encoding errors raise a\n ``UnicodeError``. 
Other possible values are ``\'ignore\'``,\n ``\'replace\'``, ``\'xmlcharrefreplace\'``, ``\'backslashreplace\'`` and\n any other name registered via ``codecs.register_error()``, see\n section *Codec Base Classes*. For a list of possible encodings, see\n section *Standard Encodings*.\n\n New in version 2.0.\n\n Changed in version 2.3: Support for ``\'xmlcharrefreplace\'`` and\n ``\'backslashreplace\'`` and other error handling schemes added.\n\n Changed in version 2.7: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n Return ``True`` if the string ends with the specified *suffix*,\n otherwise return ``False``. *suffix* can also be a tuple of\n suffixes to look for. With optional *start*, test beginning at\n that position. With optional *end*, stop comparing at that\n position.\n\n Changed in version 2.5: Accept tuples as *suffix*.\n\nstr.expandtabs([tabsize])\n\n Return a copy of the string where all tab characters are replaced\n by one or more spaces, depending on the current column and the\n given tab size. The column number is reset to zero after each\n newline occurring in the string. If *tabsize* is not given, a tab\n size of ``8`` characters is assumed. This doesn\'t understand other\n non-printing characters or escape sequences.\n\nstr.find(sub[, start[, end]])\n\n Return the lowest index in the string where substring *sub* is\n found, such that *sub* is contained in the slice ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` if *sub* is not found.\n\n Note: The ``find()`` method should be used only if you need to know the\n position of *sub*. To check if *sub* is a substring or not, use\n the ``in`` operator:\n\n >>> \'Py\' in \'Python\'\n True\n\nstr.format(*args, **kwargs)\n\n Perform a string formatting operation. The string on which this\n method is called can contain literal text or replacement fields\n delimited by braces ``{}``. 
Each replacement field contains either\n the numeric index of a positional argument, or the name of a\n keyword argument. Returns a copy of the string where each\n replacement field is replaced with the string value of the\n corresponding argument.\n\n >>> "The sum of 1 + 2 is {0}".format(1+2)\n \'The sum of 1 + 2 is 3\'\n\n See *Format String Syntax* for a description of the various\n formatting options that can be specified in format strings.\n\n This method of string formatting is the new standard in Python 3.0,\n and should be preferred to the ``%`` formatting described in\n *String Formatting Operations* in new code.\n\n New in version 2.6.\n\nstr.index(sub[, start[, end]])\n\n Like ``find()``, but raise ``ValueError`` when the substring is not\n found.\n\nstr.isalnum()\n\n Return true if all characters in the string are alphanumeric and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isalpha()\n\n Return true if all characters in the string are alphabetic and\n there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isdigit()\n\n Return true if all characters in the string are digits and there is\n at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.islower()\n\n Return true if all cased characters in the string are lowercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.isspace()\n\n Return true if there are only whitespace characters in the string\n and there is at least one character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.istitle()\n\n Return true if the string is a titlecased string and there is at\n least one character, for example uppercase characters may only\n follow uncased characters and lowercase characters only cased ones.\n Return false otherwise.\n\n For 
8-bit strings, this method is locale-dependent.\n\nstr.isupper()\n\n Return true if all cased characters in the string are uppercase and\n there is at least one cased character, false otherwise.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.join(iterable)\n\n Return a string which is the concatenation of the strings in the\n *iterable* *iterable*. The separator between elements is the\n string providing this method.\n\nstr.ljust(width[, fillchar])\n\n Return the string left justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.lower()\n\n Return a copy of the string converted to lowercase.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.lstrip([chars])\n\n Return a copy of the string with leading characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. The *chars* argument is not a prefix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.lstrip()\n \'spacious \'\n >>> \'www.example.com\'.lstrip(\'cmowz.\')\n \'example.com\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.partition(sep)\n\n Split the string at the first occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing the string itself, followed by\n two empty strings.\n\n New in version 2.5.\n\nstr.replace(old, new[, count])\n\n Return a copy of the string with all occurrences of substring *old*\n replaced by *new*. 
If the optional argument *count* is given, only\n the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n Return the highest index in the string where substring *sub* is\n found, such that *sub* is contained within ``s[start:end]``.\n Optional arguments *start* and *end* are interpreted as in slice\n notation. Return ``-1`` on failure.\n\nstr.rindex(sub[, start[, end]])\n\n Like ``rfind()`` but raises ``ValueError`` when the substring *sub*\n is not found.\n\nstr.rjust(width[, fillchar])\n\n Return the string right justified in a string of length *width*.\n Padding is done using the specified *fillchar* (default is a\n space). The original string is returned if *width* is less than\n ``len(s)``.\n\n Changed in version 2.4: Support for the *fillchar* argument.\n\nstr.rpartition(sep)\n\n Split the string at the last occurrence of *sep*, and return a\n 3-tuple containing the part before the separator, the separator\n itself, and the part after the separator. If the separator is not\n found, return a 3-tuple containing two empty strings, followed by\n the string itself.\n\n New in version 2.5.\n\nstr.rsplit([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n are done, the *rightmost* ones. If *sep* is not specified or\n ``None``, any whitespace string is a separator. Except for\n splitting from the right, ``rsplit()`` behaves like ``split()``\n which is described in detail below.\n\n New in version 2.4.\n\nstr.rstrip([chars])\n\n Return a copy of the string with trailing characters removed. The\n *chars* argument is a string specifying the set of characters to be\n removed. If omitted or ``None``, the *chars* argument defaults to\n removing whitespace. 
The *chars* argument is not a suffix; rather,\n all combinations of its values are stripped:\n\n >>> \' spacious \'.rstrip()\n \' spacious\'\n >>> \'mississippi\'.rstrip(\'ipz\')\n \'mississ\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.split([sep[, maxsplit]])\n\n Return a list of the words in the string, using *sep* as the\n delimiter string. If *maxsplit* is given, at most *maxsplit*\n splits are done (thus, the list will have at most ``maxsplit+1``\n elements). If *maxsplit* is not specified, then there is no limit\n on the number of splits (all possible splits are made).\n\n If *sep* is given, consecutive delimiters are not grouped together\n and are deemed to delimit empty strings (for example,\n ``\'1,,2\'.split(\',\')`` returns ``[\'1\', \'\', \'2\']``). The *sep*\n argument may consist of multiple characters (for example,\n ``\'1<>2<>3\'.split(\'<>\')`` returns ``[\'1\', \'2\', \'3\']``). Splitting\n an empty string with a specified separator returns ``[\'\']``.\n\n If *sep* is not specified or is ``None``, a different splitting\n algorithm is applied: runs of consecutive whitespace are regarded\n as a single separator, and the result will contain no empty strings\n at the start or end if the string has leading or trailing\n whitespace. Consequently, splitting an empty string or a string\n consisting of just whitespace with a ``None`` separator returns\n ``[]``.\n\n For example, ``\' 1 2 3 \'.split()`` returns ``[\'1\', \'2\', \'3\']``,\n and ``\' 1 2 3 \'.split(None, 1)`` returns ``[\'1\', \'2 3 \']``.\n\nstr.splitlines([keepends])\n\n Return a list of the lines in the string, breaking at line\n boundaries. Line breaks are not included in the resulting list\n unless *keepends* is given and true.\n\nstr.startswith(prefix[, start[, end]])\n\n Return ``True`` if string starts with the *prefix*, otherwise\n return ``False``. *prefix* can also be a tuple of prefixes to look\n for. 
With optional *start*, test string beginning at that\n position. With optional *end*, stop comparing string at that\n position.\n\n Changed in version 2.5: Accept tuples as *prefix*.\n\nstr.strip([chars])\n\n Return a copy of the string with the leading and trailing\n characters removed. The *chars* argument is a string specifying the\n set of characters to be removed. If omitted or ``None``, the\n *chars* argument defaults to removing whitespace. The *chars*\n argument is not a prefix or suffix; rather, all combinations of its\n values are stripped:\n\n >>> \' spacious \'.strip()\n \'spacious\'\n >>> \'www.example.com\'.strip(\'cmowz.\')\n \'example\'\n\n Changed in version 2.2.2: Support for the *chars* argument.\n\nstr.swapcase()\n\n Return a copy of the string with uppercase characters converted to\n lowercase and vice versa.\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.title()\n\n Return a titlecased version of the string where words start with an\n uppercase character and the remaining characters are lowercase.\n\n The algorithm uses a simple language-independent definition of a\n word as groups of consecutive letters. 
The definition works in\n many contexts but it means that apostrophes in contractions and\n possessives form word boundaries, which may not be the desired\n result:\n\n >>> "they\'re bill\'s friends from the UK".title()\n "They\'Re Bill\'S Friends From The Uk"\n\n A workaround for apostrophes can be constructed using regular\n expressions:\n\n >>> import re\n >>> def titlecase(s):\n return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n lambda mo: mo.group(0)[0].upper() +\n mo.group(0)[1:].lower(),\n s)\n\n >>> titlecase("they\'re bill\'s friends.")\n "They\'re Bill\'s Friends."\n\n For 8-bit strings, this method is locale-dependent.\n\nstr.translate(table[, deletechars])\n\n Return a copy of the string where all characters occurring in the\n optional argument *deletechars* are removed, and the remaining\n characters have been mapped through the given translation table,\n which must be a string of length 256.\n\n You can use the ``maketrans()`` helper function in the ``string``\n module to create a translation table. For string objects, set the\n *table* argument to ``None`` for translations that only delete\n characters:\n\n >>> \'read this short text\'.translate(None, \'aeiou\')\n \'rd ths shrt txt\'\n\n New in version 2.6: Support for a ``None`` *table* argument.\n\n For Unicode objects, the ``translate()`` method does not accept the\n optional *deletechars* argument. Instead, it returns a copy of the\n *s* where all characters have been mapped through the given\n translation table which must be a mapping of Unicode ordinals to\n Unicode ordinals, Unicode strings or ``None``. Unmapped characters\n are left untouched. 
Characters mapped to ``None`` are deleted.\n   Note, a more flexible approach is to create a custom character\n   mapping codec using the ``codecs`` module (see ``encodings.cp1251``\n   for an example).\n\nstr.upper()\n\n   Return a copy of the string converted to uppercase.\n\n   For 8-bit strings, this method is locale-dependent.\n\nstr.zfill(width)\n\n   Return the numeric string left filled with zeros in a string of\n   length *width*. A sign prefix is handled correctly. The original\n   string is returned if *width* is less than ``len(s)``.\n\n   New in version 2.2.2.\n\nThe following methods are present only on unicode objects:\n\nunicode.isnumeric()\n\n   Return ``True`` if there are only numeric characters in S,\n   ``False`` otherwise. Numeric characters include digit characters,\n   and all characters that have the Unicode numeric value property,\n   e.g. U+2155, VULGAR FRACTION ONE FIFTH.\n\nunicode.isdecimal()\n\n   Return ``True`` if there are only decimal characters in S,\n   ``False`` otherwise. Decimal characters include digit characters,\n   and all characters that can be used to form decimal-radix\n   numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\n\nString Formatting Operations\n============================\n\nString and Unicode objects have one unique built-in operation: the\n``%`` operator (modulo). This is also known as the string\n*formatting* or *interpolation* operator. Given ``format % values``\n(where *format* is a string or Unicode object), ``%`` conversion\nspecifications in *format* are replaced with zero or more elements of\n*values*. The effect is similar to using ``sprintf()`` in the C\nlanguage. If *format* is a Unicode object, or if any of the objects\nbeing converted using the ``%s`` conversion are Unicode objects, the\nresult will also be a Unicode object.\n\nIf *format* requires a single argument, *values* may be a single non-\ntuple object. 
[4] Otherwise, *values* must be a tuple with exactly\nthe number of items specified by the format string, or a single\nmapping object (for example, a dictionary).\n\nA conversion specifier contains two or more characters and has the\nfollowing components, which must occur in this order:\n\n1. The ``\'%\'`` character, which marks the start of the specifier.\n\n2. Mapping key (optional), consisting of a parenthesised sequence of\n characters (for example, ``(somename)``).\n\n3. Conversion flags (optional), which affect the result of some\n conversion types.\n\n4. Minimum field width (optional). If specified as an ``\'*\'``\n (asterisk), the actual width is read from the next element of the\n tuple in *values*, and the object to convert comes after the\n minimum field width and optional precision.\n\n5. Precision (optional), given as a ``\'.\'`` (dot) followed by the\n precision. If specified as ``\'*\'`` (an asterisk), the actual width\n is read from the next element of the tuple in *values*, and the\n value to convert comes after the precision.\n\n6. Length modifier (optional).\n\n7. Conversion type.\n\nWhen the right argument is a dictionary (or other mapping type), then\nthe formats in the string *must* include a parenthesised mapping key\ninto that dictionary inserted immediately after the ``\'%\'`` character.\nThe mapping key selects the value to be formatted from the mapping.\nFor example:\n\n>>> print \'%(language)s has %(number)03d quote types.\' % \\\n... {"language": "Python", "number": 2}\nPython has 002 quote types.\n\nIn this case no ``*`` specifiers may occur in a format (since they\nrequire a sequential parameter list).\n\nThe conversion flag characters are:\n\n+-----------+-----------------------------------------------------------------------+\n| Flag | Meaning |\n+===========+=======================================================================+\n| ``\'#\'`` | The value conversion will use the "alternate form" (where defined |\n| | below). 
|\n+-----------+-----------------------------------------------------------------------+\n| ``\'0\'`` | The conversion will be zero padded for numeric values. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'-\'`` | The converted value is left adjusted (overrides the ``\'0\'`` |\n| | conversion if both are given). |\n+-----------+-----------------------------------------------------------------------+\n| ``\' \'`` | (a space) A blank should be left before a positive number (or empty |\n| | string) produced by a signed conversion. |\n+-----------+-----------------------------------------------------------------------+\n| ``\'+\'`` | A sign character (``\'+\'`` or ``\'-\'``) will precede the conversion |\n| | (overrides a "space" flag). |\n+-----------+-----------------------------------------------------------------------+\n\nA length modifier (``h``, ``l``, or ``L``) may be present, but is\nignored as it is not necessary for Python -- so e.g. ``%ld`` is\nidentical to ``%d``.\n\nThe conversion types are:\n\n+--------------+-------------------------------------------------------+---------+\n| Conversion | Meaning | Notes |\n+==============+=======================================================+=========+\n| ``\'d\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'i\'`` | Signed integer decimal. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'o\'`` | Signed octal value. | (1) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'u\'`` | Obsolete type -- it is identical to ``\'d\'``. | (7) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'x\'`` | Signed hexadecimal (lowercase). | (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'X\'`` | Signed hexadecimal (uppercase). 
| (2) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'e\'`` | Floating point exponential format (lowercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'E\'`` | Floating point exponential format (uppercase). | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'f\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'F\'`` | Floating point decimal format. | (3) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'g\'`` | Floating point format. Uses lowercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'G\'`` | Floating point format. Uses uppercase exponential | (4) |\n| | format if exponent is less than -4 or not less than | |\n| | precision, decimal format otherwise. | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'c\'`` | Single character (accepts integer or single character | |\n| | string). | |\n+--------------+-------------------------------------------------------+---------+\n| ``\'r\'`` | String (converts any Python object using ``repr()``). | (5) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'s\'`` | String (converts any Python object using ``str()``). | (6) |\n+--------------+-------------------------------------------------------+---------+\n| ``\'%\'`` | No argument is converted, results in a ``\'%\'`` | |\n| | character in the result. | |\n+--------------+-------------------------------------------------------+---------+\n\nNotes:\n\n1. 
The alternate form causes a leading zero (``\'0\'``) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n2. The alternate form causes a leading ``\'0x\'`` or ``\'0X\'`` (depending\n on whether the ``\'x\'`` or ``\'X\'`` format was used) to be inserted\n between left-hand padding and the formatting of the number if the\n leading character of the result is not already a zero.\n\n3. The alternate form causes the result to always contain a decimal\n point, even if no digits follow it.\n\n The precision determines the number of digits after the decimal\n point and defaults to 6.\n\n4. The alternate form causes the result to always contain a decimal\n point, and trailing zeroes are not removed as they would otherwise\n be.\n\n The precision determines the number of significant digits before\n and after the decimal point and defaults to 6.\n\n5. The ``%r`` conversion was added in Python 2.0.\n\n The precision determines the maximal number of characters used.\n\n6. If the object or format provided is a ``unicode`` string, the\n resulting string will also be ``unicode``.\n\n The precision determines the maximal number of characters used.\n\n7. See **PEP 237**.\n\nSince Python strings have an explicit length, ``%s`` conversions do\nnot assume that ``\'\\0\'`` is the end of the string.\n\nChanged in version 2.7: ``%f`` conversions for numbers whose absolute\nvalue is over 1e50 are no longer replaced by ``%g`` conversions.\n\nAdditional string operations are defined in standard modules\n``string`` and ``re``.\n\n\nXRange Type\n===========\n\nThe ``xrange`` type is an immutable sequence which is commonly used\nfor looping. The advantage of the ``xrange`` type is that an\n``xrange`` object will always take the same amount of memory, no\nmatter the size of the range it represents. 
There are no consistent\nperformance advantages.\n\nXRange objects have very little behavior: they only support indexing,\niteration, and the ``len()`` function.\n\n\nMutable Sequence Types\n======================\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*\'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. 
If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn\'t have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don\'t return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. 
Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. **CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n', + 'typesseq-mutable': u"\nMutable Sequence Types\n**********************\n\nList and ``bytearray`` objects support additional operations that\nallow in-place modification of the object. Other mutable sequence\ntypes (when added to the language) should also support these\noperations. Strings and tuples are immutable sequence types: such\nobjects cannot be modified once created. 
The following operations are\ndefined on mutable sequence types (where *x* is an arbitrary object):\n\n+--------------------------------+----------------------------------+-----------------------+\n| Operation | Result | Notes |\n+================================+==================================+=======================+\n| ``s[i] = x`` | item *i* of *s* is replaced by | |\n| | *x* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j] = t`` | slice of *s* from *i* to *j* is | |\n| | replaced by the contents of the | |\n| | iterable *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j]`` | same as ``s[i:j] = []`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s[i:j:k] = t`` | the elements of ``s[i:j:k]`` are | (1) |\n| | replaced by those of *t* | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``del s[i:j:k]`` | removes the elements of | |\n| | ``s[i:j:k]`` from the list | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.append(x)`` | same as ``s[len(s):len(s)] = | (2) |\n| | [x]`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.extend(x)`` | same as ``s[len(s):len(s)] = x`` | (3) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.count(x)`` | return number of *i*'s for which | |\n| | ``s[i] == x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.index(x[, i[, j]])`` | return smallest *k* such that | (4) |\n| | ``s[k] == x`` and ``i <= k < j`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.insert(i, x)`` | same as ``s[i:i] = [x]`` | (5) 
|\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.pop([i])`` | same as ``x = s[i]; del s[i]; | (6) |\n| | return x`` | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.remove(x)`` | same as ``del s[s.index(x)]`` | (4) |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.reverse()`` | reverses the items of *s* in | (7) |\n| | place | |\n+--------------------------------+----------------------------------+-----------------------+\n| ``s.sort([cmp[, key[, | sort the items of *s* in place | (7)(8)(9)(10) |\n| reverse]]])`` | | |\n+--------------------------------+----------------------------------+-----------------------+\n\nNotes:\n\n1. *t* must have the same length as the slice it is replacing.\n\n2. The C implementation of Python has historically accepted multiple\n parameters and implicitly joined them into a tuple; this no longer\n works in Python 2.0. Use of this misfeature has been deprecated\n since Python 1.4.\n\n3. *x* can be any iterable object.\n\n4. Raises ``ValueError`` when *x* is not found in *s*. When a negative\n index is passed as the second or third parameter to the ``index()``\n method, the list length is added, as for slice indices. If it is\n still negative, it is truncated to zero, as for slice indices.\n\n Changed in version 2.3: Previously, ``index()`` didn't have\n arguments for specifying start and stop positions.\n\n5. When a negative index is passed as the first parameter to the\n ``insert()`` method, the list length is added, as for slice\n indices. If it is still negative, it is truncated to zero, as for\n slice indices.\n\n Changed in version 2.3: Previously, all negative indices were\n truncated to zero.\n\n6. 
The ``pop()`` method is only supported by the list and array types.\n The optional argument *i* defaults to ``-1``, so that by default\n the last item is removed and returned.\n\n7. The ``sort()`` and ``reverse()`` methods modify the list in place\n for economy of space when sorting or reversing a large list. To\n remind you that they operate by side effect, they don't return the\n sorted or reversed list.\n\n8. The ``sort()`` method takes optional arguments for controlling the\n comparisons.\n\n *cmp* specifies a custom comparison function of two arguments (list\n items) which should return a negative, zero or positive number\n depending on whether the first argument is considered smaller than,\n equal to, or larger than the second argument: ``cmp=lambda x,y:\n cmp(x.lower(), y.lower())``. The default value is ``None``.\n\n *key* specifies a function of one argument that is used to extract\n a comparison key from each list element: ``key=str.lower``. The\n default value is ``None``.\n\n *reverse* is a boolean value. If set to ``True``, then the list\n elements are sorted as if each comparison were reversed.\n\n In general, the *key* and *reverse* conversion processes are much\n faster than specifying an equivalent *cmp* function. This is\n because *cmp* is called multiple times for each list element while\n *key* and *reverse* touch each element only once. Use\n ``functools.cmp_to_key()`` to convert an old-style *cmp* function\n to a *key* function.\n\n Changed in version 2.3: Support for ``None`` as an equivalent to\n omitting *cmp* was added.\n\n Changed in version 2.4: Support for *key* and *reverse* was added.\n\n9. Starting with Python 2.3, the ``sort()`` method is guaranteed to be\n stable. A sort is stable if it guarantees not to change the\n relative order of elements that compare equal --- this is helpful\n for sorting in multiple passes (for example, sort by department,\n then by salary grade).\n\n10. 
**CPython implementation detail:** While a list is being sorted,\n the effect of attempting to mutate, or even inspect, the list is\n undefined. The C implementation of Python 2.3 and newer makes the\n list appear empty for the duration, and raises ``ValueError`` if\n it can detect that the list has been mutated during a sort.\n", 'unary': u'\nUnary arithmetic and bitwise operations\n***************************************\n\nAll unary arithmetic and bitwise operations have the same priority:\n\n u_expr ::= power | "-" u_expr | "+" u_expr | "~" u_expr\n\nThe unary ``-`` (minus) operator yields the negation of its numeric\nargument.\n\nThe unary ``+`` (plus) operator yields its numeric argument unchanged.\n\nThe unary ``~`` (invert) operator yields the bitwise inversion of its\nplain or long integer argument. The bitwise inversion of ``x`` is\ndefined as ``-(x+1)``. It only applies to integral numbers.\n\nIn all three cases, if the argument does not have the proper type, a\n``TypeError`` exception is raised.\n', 'while': u'\nThe ``while`` statement\n***********************\n\nThe ``while`` statement is used for repeated execution as long as an\nexpression is true:\n\n while_stmt ::= "while" expression ":" suite\n ["else" ":" suite]\n\nThis repeatedly tests the expression and, if it is true, executes the\nfirst suite; if the expression is false (which may be the first time\nit is tested) the suite of the ``else`` clause, if present, is\nexecuted and the loop terminates.\n\nA ``break`` statement executed in the first suite terminates the loop\nwithout executing the ``else`` clause\'s suite. A ``continue``\nstatement executed in the first suite skips the rest of the suite and\ngoes back to testing the expression.\n', - 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). 
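The ``while``/``else`` semantics quoted in the topic above (the ``else`` suite runs only when the condition becomes false, never after a ``break``) can be checked with a small version-agnostic snippet; the function name is only illustrative:

```python
# while/else: the else suite runs only when the loop exits because the
# condition became false, not when it exits via break.
def find_first_even(items):
    i = 0
    while i < len(items):
        if items[i] % 2 == 0:
            result = items[i]
            break
        i += 1
    else:
        result = None  # loop finished without executing break
    return result

assert find_first_even([1, 3, 4, 5]) == 4   # break taken, else skipped
assert find_first_even([1, 3, 5]) is None   # condition went false, else ran
```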
This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. 
If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', + 'with': u'\nThe ``with`` statement\n**********************\n\nNew in version 2.5.\n\nThe ``with`` statement is used to wrap the execution of a block with\nmethods defined by a context manager (see section *With Statement\nContext Managers*). This allows common\n``try``...``except``...``finally`` usage patterns to be encapsulated\nfor convenient reuse.\n\n with_stmt ::= "with" with_item ("," with_item)* ":" suite\n with_item ::= expression ["as" target]\n\nThe execution of the ``with`` statement with one "item" proceeds as\nfollows:\n\n1. The context expression (the expression given in the **with_item**)\n is evaluated to obtain a context manager.\n\n2. The context manager\'s ``__exit__()`` is loaded for later use.\n\n3. The context manager\'s ``__enter__()`` method is invoked.\n\n4. If a target was included in the ``with`` statement, the return\n value from ``__enter__()`` is assigned to it.\n\n Note: The ``with`` statement guarantees that if the ``__enter__()``\n method returns without an error, then ``__exit__()`` will always\n be called. 
Thus, if an error occurs during the assignment to the\n target list, it will be treated the same as an error occurring\n within the suite would be. See step 6 below.\n\n5. The suite is executed.\n\n6. The context manager\'s ``__exit__()`` method is invoked. If an\n exception caused the suite to be exited, its type, value, and\n traceback are passed as arguments to ``__exit__()``. Otherwise,\n three ``None`` arguments are supplied.\n\n If the suite was exited due to an exception, and the return value\n from the ``__exit__()`` method was false, the exception is\n reraised. If the return value was true, the exception is\n suppressed, and execution continues with the statement following\n the ``with`` statement.\n\n If the suite was exited for any reason other than an exception, the\n return value from ``__exit__()`` is ignored, and execution proceeds\n at the normal location for the kind of exit that was taken.\n\nWith more than one item, the context managers are processed as if\nmultiple ``with`` statements were nested:\n\n with A() as a, B() as b:\n suite\n\nis equivalent to\n\n with A() as a:\n with B() as b:\n suite\n\nNote: In Python 2.5, the ``with`` statement is only allowed when the\n ``with_statement`` feature has been enabled. 
It is always enabled\n in Python 2.6.\n\nChanged in version 2.7: Support for multiple context expressions.\n\nSee also:\n\n **PEP 0343** - The "with" statement\n The specification, background, and examples for the Python\n ``with`` statement.\n', 'yield': u'\nThe ``yield`` statement\n***********************\n\n yield_stmt ::= yield_expression\n\nThe ``yield`` statement is only used when defining a generator\nfunction, and is only used in the body of the generator function.\nUsing a ``yield`` statement in a function definition is sufficient to\ncause that definition to create a generator function instead of a\nnormal function.\n\nWhen a generator function is called, it returns an iterator known as a\ngenerator iterator, or more commonly, a generator. The body of the\ngenerator function is executed by calling the generator\'s ``next()``\nmethod repeatedly until it raises an exception.\n\nWhen a ``yield`` statement is executed, the state of the generator is\nfrozen and the value of **expression_list** is returned to\n``next()``\'s caller. By "frozen" we mean that all local state is\nretained, including the current bindings of local variables, the\ninstruction pointer, and the internal evaluation stack: enough\ninformation is saved so that the next time ``next()`` is invoked, the\nfunction can proceed exactly as if the ``yield`` statement were just\nanother external call.\n\nAs of Python version 2.5, the ``yield`` statement is now allowed in\nthe ``try`` clause of a ``try`` ... ``finally`` construct. If the\ngenerator is not resumed before it is finalized (by reaching a zero\nreference count or by being garbage collected), the generator-\niterator\'s ``close()`` method will be called, allowing any pending\n``finally`` clauses to execute.\n\nNote: In Python 2.2, the ``yield`` statement was only allowed when the\n ``generators`` feature has been enabled. 
This ``__future__`` import\n statement was used to enable the feature:\n\n from __future__ import generators\n\nSee also:\n\n **PEP 0255** - Simple Generators\n The proposal for adding generators and the ``yield`` statement\n to Python.\n\n **PEP 0342** - Coroutines via Enhanced Generators\n The proposal that, among other generator enhancements, proposed\n allowing ``yield`` to appear inside a ``try`` ... ``finally``\n block.\n'} diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -317,7 +317,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -490,6 +490,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. + The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -592,7 +598,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/2.7/re.py b/lib-python/2.7/re.py --- a/lib-python/2.7/re.py +++ b/lib-python/2.7/re.py @@ -207,8 +207,7 @@ "Escape all non-alphanumeric characters in pattern." 
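The generator behaviour described in the ``yield`` topic above — calling the function returns a generator iterator whose local state is frozen at each ``yield`` — can be sketched as follows (``countdown`` is a hypothetical example, not part of the diff):

```python
# A generator function: calling it does not run the body; each call to
# next() resumes the frozen state until the next yield, as described in
# the ``yield`` topic above.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

g = countdown(3)
assert next(g) == 3        # body runs up to the first yield
assert list(g) == [2, 1]   # remaining frozen state resumes on demand
```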
s = list(pattern) alphanum = _alphanum - for i in range(len(pattern)): - c = pattern[i] + for i, c in enumerate(pattern): if c not in alphanum: if c == "\000": s[i] = "\\000" diff --git a/lib-python/2.7/shutil.py b/lib-python/2.7/shutil.py --- a/lib-python/2.7/shutil.py +++ b/lib-python/2.7/shutil.py @@ -277,6 +277,12 @@ """ real_dst = dst if os.path.isdir(dst): + if _samefile(src, dst): + # We might be on a case insensitive filesystem, + # perform the rename anyway. + os.rename(src, dst) + return + real_dst = os.path.join(dst, _basename(src)) if os.path.exists(real_dst): raise Error, "Destination path '%s' already exists" % real_dst @@ -336,7 +342,7 @@ archive that is being built. If not provided, the current owner and group will be used. - The output tar file will be named 'base_dir' + ".tar", possibly plus + The output tar file will be named 'base_name' + ".tar", possibly plus the appropriate compression extension (".gz", or ".bz2"). Returns the output filename. @@ -406,7 +412,7 @@ def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises ExecError. 
Returns the name of the output zip diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -155,17 +156,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/2.7/smtplib.py b/lib-python/2.7/smtplib.py --- a/lib-python/2.7/smtplib.py +++ b/lib-python/2.7/smtplib.py @@ -49,17 +49,18 @@ from email.base64mime import encode as encode_base64 from sys import stderr -__all__ = ["SMTPException","SMTPServerDisconnected","SMTPResponseException", - "SMTPSenderRefused","SMTPRecipientsRefused","SMTPDataError", - "SMTPConnectError","SMTPHeloError","SMTPAuthenticationError", - "quoteaddr","quotedata","SMTP"] +__all__ = ["SMTPException", "SMTPServerDisconnected", "SMTPResponseException", + "SMTPSenderRefused", "SMTPRecipientsRefused", "SMTPDataError", + "SMTPConnectError", "SMTPHeloError", "SMTPAuthenticationError", + 
"quoteaddr", "quotedata", "SMTP"] SMTP_PORT = 25 SMTP_SSL_PORT = 465 -CRLF="\r\n" +CRLF = "\r\n" OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I) + # Exception classes used by this module. class SMTPException(Exception): """Base class for all exceptions raised by this module.""" @@ -109,7 +110,7 @@ def __init__(self, recipients): self.recipients = recipients - self.args = ( recipients,) + self.args = (recipients,) class SMTPDataError(SMTPResponseException): @@ -128,6 +129,7 @@ combination provided. """ + def quoteaddr(addr): """Quote a subset of the email addresses defined by RFC 821. @@ -138,7 +140,7 @@ m = email.utils.parseaddr(addr)[1] except AttributeError: pass - if m == (None, None): # Indicates parse failure or AttributeError + if m == (None, None): # Indicates parse failure or AttributeError # something weird here.. punt -ddm return "<%s>" % addr elif m is None: @@ -175,7 +177,8 @@ chr = None while chr != "\n": chr = self.sslobj.read(1) - if not chr: break + if not chr: + break str += chr return str @@ -219,6 +222,7 @@ ehlo_msg = "ehlo" ehlo_resp = None does_esmtp = 0 + default_port = SMTP_PORT def __init__(self, host='', port=0, local_hostname=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): @@ -234,7 +238,6 @@ """ self.timeout = timeout self.esmtp_features = {} - self.default_port = SMTP_PORT if host: (code, msg) = self.connect(host, port) if code != 220: @@ -269,10 +272,11 @@ def _get_socket(self, port, host, timeout): # This makes it simpler for SMTP_SSL to use the SMTP connect code # and just alter the socket connection bit. - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) return socket.create_connection((port, host), timeout) - def connect(self, host='localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to a host on a given port. 
If the hostname ends with a colon (`:') followed by a number, and @@ -286,20 +290,25 @@ if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: - host, port = host[:i], host[i+1:] - try: port = int(port) + host, port = host[:i], host[i + 1:] + try: + port = int(port) except ValueError: raise socket.error, "nonnumeric port" - if not port: port = self.default_port - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if not port: + port = self.default_port + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) self.sock = self._get_socket(host, port, self.timeout) (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) def send(self, str): """Send `str' to the server.""" - if self.debuglevel > 0: print>>stderr, 'send:', repr(str) + if self.debuglevel > 0: + print>>stderr, 'send:', repr(str) if hasattr(self, 'sock') and self.sock: try: self.sock.sendall(str) @@ -330,7 +339,7 @@ Raises SMTPServerDisconnected if end-of-file is reached. """ - resp=[] + resp = [] if self.file is None: self.file = self.sock.makefile('rb') while 1: @@ -341,9 +350,10 @@ if line == '': self.close() raise SMTPServerDisconnected("Connection unexpectedly closed") - if self.debuglevel > 0: print>>stderr, 'reply:', repr(line) + if self.debuglevel > 0: + print>>stderr, 'reply:', repr(line) resp.append(line[4:].strip()) - code=line[:3] + code = line[:3] # Check that the error code is syntactically correct. # Don't attempt to read a continuation line if it is broken. try: @@ -352,17 +362,17 @@ errcode = -1 break # Check if multiline response. 
- if line[3:4]!="-": + if line[3:4] != "-": break errmsg = "\n".join(resp) if self.debuglevel > 0: - print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode,errmsg) + print>>stderr, 'reply: retcode (%s); Msg: %s' % (errcode, errmsg) return errcode, errmsg def docmd(self, cmd, args=""): """Send a command, and return its response code.""" - self.putcmd(cmd,args) + self.putcmd(cmd, args) return self.getreply() # std smtp commands @@ -372,9 +382,9 @@ host. """ self.putcmd("helo", name or self.local_hostname) - (code,msg)=self.getreply() - self.helo_resp=msg - return (code,msg) + (code, msg) = self.getreply() + self.helo_resp = msg + return (code, msg) def ehlo(self, name=''): """ SMTP 'ehlo' command. @@ -383,19 +393,19 @@ """ self.esmtp_features = {} self.putcmd(self.ehlo_msg, name or self.local_hostname) - (code,msg)=self.getreply() + (code, msg) = self.getreply() # According to RFC1869 some (badly written) # MTA's will disconnect on an ehlo. Toss an exception if # that happens -ddm if code == -1 and len(msg) == 0: self.close() raise SMTPServerDisconnected("Server not connected") - self.ehlo_resp=msg + self.ehlo_resp = msg if code != 250: - return (code,msg) - self.does_esmtp=1 + return (code, msg) + self.does_esmtp = 1 #parse the ehlo response -ddm - resp=self.ehlo_resp.split('\n') + resp = self.ehlo_resp.split('\n') del resp[0] for each in resp: # To be able to communicate with as many SMTP servers as possible, @@ -415,16 +425,16 @@ # It's actually stricter, in that only spaces are allowed between # parameters, but were not going to check for that here. Note # that the space isn't present if there are no parameters. 
- m=re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?',each) + m = re.match(r'(?P[A-Za-z0-9][A-Za-z0-9\-]*) ?', each) if m: - feature=m.group("feature").lower() - params=m.string[m.end("feature"):].strip() + feature = m.group("feature").lower() + params = m.string[m.end("feature"):].strip() if feature == "auth": self.esmtp_features[feature] = self.esmtp_features.get(feature, "") \ + " " + params else: - self.esmtp_features[feature]=params - return (code,msg) + self.esmtp_features[feature] = params + return (code, msg) def has_extn(self, opt): """Does the server support a given SMTP service extension?""" @@ -444,23 +454,23 @@ """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop") - def mail(self,sender,options=[]): + def mail(self, sender, options=[]): """SMTP 'mail' command -- begins mail xfer session.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender) ,optionlist)) + self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist)) return self.getreply() - def rcpt(self,recip,options=[]): + def rcpt(self, recip, options=[]): """SMTP 'rcpt' command -- indicates 1 recipient for this mail.""" optionlist = '' if options and self.does_esmtp: optionlist = ' ' + ' '.join(options) - self.putcmd("rcpt","TO:%s%s" % (quoteaddr(recip),optionlist)) + self.putcmd("rcpt", "TO:%s%s" % (quoteaddr(recip), optionlist)) return self.getreply() - def data(self,msg): + def data(self, msg): """SMTP 'DATA' command -- sends message data to server. Automatically quotes lines beginning with a period per rfc821. @@ -469,26 +479,28 @@ response code received when the all data is sent. 
""" self.putcmd("data") - (code,repl)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,repl) + (code, repl) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, repl) if code != 354: - raise SMTPDataError(code,repl) + raise SMTPDataError(code, repl) else: q = quotedata(msg) if q[-2:] != CRLF: q = q + CRLF q = q + "." + CRLF self.send(q) - (code,msg)=self.getreply() - if self.debuglevel >0 : print>>stderr, "data:", (code,msg) - return (code,msg) + (code, msg) = self.getreply() + if self.debuglevel > 0: + print>>stderr, "data:", (code, msg) + return (code, msg) def verify(self, address): """SMTP 'verify' command -- checks for address validity.""" self.putcmd("vrfy", quoteaddr(address)) return self.getreply() # a.k.a. - vrfy=verify + vrfy = verify def expn(self, address): """SMTP 'expn' command -- expands a mailing list.""" @@ -592,7 +604,7 @@ raise SMTPAuthenticationError(code, resp) return (code, resp) - def starttls(self, keyfile = None, certfile = None): + def starttls(self, keyfile=None, certfile=None): """Puts the connection to the SMTP server into TLS mode. 
If there has been no previous EHLO or HELO command this session, this @@ -695,22 +707,22 @@ for option in mail_options: esmtp_opts.append(option) - (code,resp) = self.mail(from_addr, esmtp_opts) + (code, resp) = self.mail(from_addr, esmtp_opts) if code != 250: self.rset() raise SMTPSenderRefused(code, resp, from_addr) - senderrs={} + senderrs = {} if isinstance(to_addrs, basestring): to_addrs = [to_addrs] for each in to_addrs: - (code,resp)=self.rcpt(each, rcpt_options) + (code, resp) = self.rcpt(each, rcpt_options) if (code != 250) and (code != 251): - senderrs[each]=(code,resp) - if len(senderrs)==len(to_addrs): + senderrs[each] = (code, resp) + if len(senderrs) == len(to_addrs): # the server refused all our recipients self.rset() raise SMTPRecipientsRefused(senderrs) - (code,resp) = self.data(msg) + (code, resp) = self.data(msg) if code != 250: self.rset() raise SMTPDataError(code, resp) @@ -744,16 +756,19 @@ are also optional - they can contain a PEM formatted private key and certificate chain file for the SSL connection. 
""" + + default_port = SMTP_SSL_PORT + def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): self.keyfile = keyfile self.certfile = certfile SMTP.__init__(self, host, port, local_hostname, timeout) - self.default_port = SMTP_SSL_PORT def _get_socket(self, host, port, timeout): - if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) + if self.debuglevel > 0: + print>>stderr, 'connect:', (host, port) new_socket = socket.create_connection((host, port), timeout) new_socket = ssl.wrap_socket(new_socket, self.keyfile, self.certfile) self.file = SSLFakeFile(new_socket) @@ -781,11 +796,11 @@ ehlo_msg = "lhlo" - def __init__(self, host = '', port = LMTP_PORT, local_hostname = None): + def __init__(self, host='', port=LMTP_PORT, local_hostname=None): """Initialize a new instance.""" SMTP.__init__(self, host, port, local_hostname) - def connect(self, host = 'localhost', port = 0): + def connect(self, host='localhost', port=0): """Connect to the LMTP daemon, on either a Unix or a TCP socket.""" if host[0] != '/': return SMTP.connect(self, host, port) @@ -795,13 +810,15 @@ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.sock.connect(host) except socket.error, msg: - if self.debuglevel > 0: print>>stderr, 'connect fail:', host + if self.debuglevel > 0: + print>>stderr, 'connect fail:', host if self.sock: self.sock.close() self.sock = None raise socket.error, msg (code, msg) = self.getreply() - if self.debuglevel > 0: print>>stderr, "connect:", msg + if self.debuglevel > 0: + print>>stderr, "connect:", msg return (code, msg) @@ -815,7 +832,7 @@ return sys.stdin.readline().strip() fromaddr = prompt("From") - toaddrs = prompt("To").split(',') + toaddrs = prompt("To").split(',') print "Enter message, end with ^D:" msg = '' while 1: diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -121,9 +121,11 @@ if e.errno != 
errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -293,21 +295,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. 
class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -883,7 +894,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
raise WindowsError(*e.args) finally: @@ -956,7 +967,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1051,14 +1066,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1128,21 +1146,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. 
+ closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1194,7 +1216,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1240,7 +1266,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. + sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1317,9 +1351,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1358,11 +1399,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + 
write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/2.7/symbol.py b/lib-python/2.7/symbol.py --- a/lib-python/2.7/symbol.py +++ b/lib-python/2.7/symbol.py @@ -82,20 +82,19 @@ sliceop = 325 exprlist = 326 testlist = 327 -dictmaker = 328 -dictorsetmaker = 329 -classdef = 330 -arglist = 331 -argument = 332 -list_iter = 333 -list_for = 334 -list_if = 335 -comp_iter = 336 -comp_for = 337 -comp_if = 338 -testlist1 = 339 -encoding_decl = 340 -yield_expr = 341 +dictorsetmaker = 328 +classdef = 329 +arglist = 330 +argument = 331 +list_iter = 332 +list_for = 333 +list_if = 334 +comp_iter = 335 +comp_for = 336 +comp_if = 337 +testlist1 = 338 +encoding_decl = 339 +yield_expr = 340 #--end constants-- sym_name = {} diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -271,7 +271,7 @@ def _get_makefile_filename(): if _PYTHON_BUILD: return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('stdlib'), "config", "Makefile") + return os.path.join(get_path('platstdlib'), "config", "Makefile") def _init_posix(vars): @@ -297,21 +297,6 @@ msg = msg + " (%s)" % e.strerror raise IOError(msg) - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. 
- # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in vars: - cfg_target = vars['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" ' - 'during configure' % (cur_target, cfg_target)) - raise IOError(msg) - # On AIX, there are wrong paths to the linker scripts in the Makefile # -- these paths are relative to the Python source, but when installed # the scripts are in another directory. @@ -616,9 +601,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -639,7 +622,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -2239,10 +2239,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). 
if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/2.7/telnetlib.py b/lib-python/2.7/telnetlib.py --- a/lib-python/2.7/telnetlib.py +++ b/lib-python/2.7/telnetlib.py @@ -236,7 +236,7 @@ """ if self.debuglevel > 0: - print 'Telnet(%s,%d):' % (self.host, self.port), + print 'Telnet(%s,%s):' % (self.host, self.port), if args: print msg % args else: diff --git a/lib-python/2.7/test/cjkencodings/big5-utf8.txt b/lib-python/2.7/test/cjkencodings/big5-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5-utf8.txt @@ -0,0 +1,9 @@ +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/big5.txt b/lib-python/2.7/test/cjkencodings/big5.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5.txt @@ -0,0 +1,9 @@ +�p��b Python ���ϥάJ���� C library? +�@�b��T��ާֳt�o�i������, �}�o�δ��ճn�骺�t�׬O���e������ +���D. ���[�ֶ}�o�δ��ժ��t��, �ڭ̫K�`�Ʊ��Q�Τ@�Ǥw�}�o�n�� +library, �æ��@�� fast prototyping �� programming language �i +�Ѩϥ�. �ثe���\�\�h�h�� library �O�H C �g��, �� Python �O�@�� +fast prototyping �� programming language. �G�ڭ̧Ʊ��N�J���� +C library ���� Python �����Ҥ����դξ�X. 
�䤤�̥D�n�]�O�ڭ̩� +�n�Q�ת����D�N�O: + diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs-utf8.txt @@ -0,0 +1,2 @@ +𠄌Ě鵮罓洆 +ÊÊ̄ê êê̄ diff --git a/lib-python/2.7/test/cjkencodings/big5hkscs.txt b/lib-python/2.7/test/cjkencodings/big5hkscs.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/big5hkscs.txt @@ -0,0 +1,2 @@ +�E�\�s�ڍ� +�f�b�� ���� diff --git a/lib-python/2.7/test/cjkencodings/cp949-utf8.txt b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! ㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/cp949.txt b/lib-python/2.7/test/cjkencodings/cp949.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/cp949.txt @@ -0,0 +1,9 @@ +�c�氢�� �����ݶ� + +������!! �������В�p�� �ި��R������ ���� �ѵ� ��. . +䬿��Ѵ��� . . . . ����� ������ ʫ�R ! ! !��.�� +������ �������٤�_�� � ����O ���� �h������ ����O +���j ʫ�R . . . . ���֚f �ѱ� �ސt�ƒO ���������� +�;��R ! ! 䬿��� ʫ�ɱ� ��߾�� ���ɱŴ� 䬴ɵ��R �۾֊� +�޷����� ��Ǵ���R � ����������Ĩ���!! 
�������� �ҡ�* + diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jisx0213.txt @@ -0,0 +1,8 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + +�Τ� �� �ȥ����� ���� ��ԏ���� diff --git a/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python 
の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/euc_jp.txt b/lib-python/2.7/test/cjkencodings/euc_jp.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_jp.txt @@ -0,0 +1,7 @@ +Python �γ�ȯ�ϡ�1990 ǯ�����鳫�Ϥ���Ƥ��ޤ��� +��ȯ�Ԥ� Guido van Rossum �϶����ѤΥץ���ߥ󥰸����ABC�פγ�ȯ�˻��ä��Ƥ��ޤ�������ABC �ϼ��Ѿ����Ū�ˤϤ��ޤ�Ŭ���Ƥ��ޤ���Ǥ����� +���Τ��ᡢGuido �Ϥ�����Ū�ʥץ���ߥ󥰸���γ�ȯ�򳫻Ϥ����ѹ� BBS �����Υ���ǥ����ȡ֥��ƥ� �ѥ�����פΥե���Ǥ��� Guido �Ϥ��θ�����Python�פ�̾�Ť��ޤ����� +���Τ褦���طʤ������ޤ줿 Python �θ����߷פϡ��֥���ץ�פǡֽ������ưספȤ�����ɸ�˽������֤���Ƥ��ޤ��� +¿���Υ�����ץȷϸ���Ǥϥ桼�����������������ͥ�褷�ƿ����ʵ�ǽ��������ǤȤ��Ƽ��������礬¿���ΤǤ�����Python �ǤϤ������ä����ٹ����ɲä���뤳�ȤϤ��ޤꤢ��ޤ��� +���켫�Τε�ǽ�ϺǾ��¤˲�������ɬ�פʵ�ǽ�ϳ�ĥ�⥸�塼��Ȥ����ɲä��롢�Ȥ����Τ� Python �Υݥꥷ���Ǥ��� + diff --git a/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr-utf8.txt @@ -0,0 +1,7 @@ +◎ 파이썬(Python)은 배우기 쉽고, 강력한 프로그래밍 언어입니다. 파이썬은 +효율적인 고수준 데이터 구조와 간단하지만 효율적인 객체지향프로그래밍을 +지원합니다. 파이썬의 우아(優雅)한 문법과 동적 타이핑, 그리고 인터프리팅 +환경은 파이썬을 스크립팅과 여러 분야에서와 대부분의 플랫폼에서의 빠른 +애플리케이션 개발을 할 수 있는 이상적인 언어로 만들어줍니다. + +☆첫가끝: 날아라 쓔쓔쓩~ 닁큼! 뜽금없이 전홥니다. 뷁. 그런거 읎다. diff --git a/lib-python/2.7/test/cjkencodings/euc_kr.txt b/lib-python/2.7/test/cjkencodings/euc_kr.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/euc_kr.txt @@ -0,0 +1,7 @@ +�� ���̽�(Python)�� ���� ����, ������ ���α׷��� ����Դϴ�. ���̽��� +ȿ������ ����� ������ ������ ���������� ȿ������ ��ü�������α׷����� +�����մϴ�. ���̽��� ���(���)�� ������ ���� Ÿ����, �׸��� ���������� +ȯ���� ���̽��� ��ũ���ð� ���� �о߿����� ��κ��� �÷��������� ���� +���ø����̼� ������ �� �� �ִ� �̻����� ���� ������ݴϴ�. + +��ù����: ���ƶ� �Ԥ��ФԤԤ��ФԾ�~ �Ԥ��Ҥ�ŭ! 
�Ԥ��Ѥ��ݾ��� ���Ԥ��Ȥ��ϴ�. �Ԥ��Τ�. �׷��� �Ԥ��Ѥ���. diff --git a/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030-utf8.txt @@ -0,0 +1,15 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 其中最主要也是我們所 +要討論的問題就是: +파이썬은 강력한 기능을 지닌 범용 컴퓨터 프로그래밍 언어다. + diff --git a/lib-python/2.7/test/cjkencodings/gb18030.txt b/lib-python/2.7/test/cjkencodings/gb18030.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb18030.txt @@ -0,0 +1,15 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: +�5�1�3�3�2�1�3�1 �7�6�0�4�6�3 �8�5�8�6�3�5 �3�1�9�5 �0�9�3�0 �4�3�5�7�5�5 �5�5�0�9�8�9�9�3�0�4 �2�9�2�5�9�9. 
+ diff --git a/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312-utf8.txt @@ -0,0 +1,6 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 + diff --git a/lib-python/2.7/test/cjkencodings/gb2312.txt b/lib-python/2.7/test/cjkencodings/gb2312.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gb2312.txt @@ -0,0 +1,6 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ + diff --git a/lib-python/2.7/test/cjkencodings/gbk-utf8.txt b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk-utf8.txt @@ -0,0 +1,14 @@ +Python(派森)语言是一种功能强大而完善的通用型计算机程序设计语言, +已经具有十多年的发展历史,成熟且稳定。这种语言具有非常简捷而清晰 +的语法特点,适合完成各种高层任务,几乎可以在所有的操作系统中 +运行。这种语言简单而强大,适合各种人士学习使用。目前,基于这 +种语言的相关技术正在飞速的发展,用户数量急剧扩大,相关的资源非常多。 +如何在 Python 中使用既有的 C library? + 在資訊科技快速發展的今天, 開發及測試軟體的速度是不容忽視的 +課題. 為加快開發及測試的速度, 我們便常希望能利用一些已開發好的 +library, 並有一個 fast prototyping 的 programming language 可 +供使用. 目前有許許多多的 library 是以 C 寫成, 而 Python 是一個 +fast prototyping 的 programming language. 故我們希望能將既有的 +C library 拿到 Python 的環境中測試及整合. 
其中最主要也是我們所 +要討論的問題就是: + diff --git a/lib-python/2.7/test/cjkencodings/gbk.txt b/lib-python/2.7/test/cjkencodings/gbk.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/gbk.txt @@ -0,0 +1,14 @@ +Python����ɭ��������һ�ֹ���ǿ������Ƶ�ͨ���ͼ��������������ԣ� +�Ѿ�����ʮ����ķ�չ��ʷ���������ȶ����������Ծ��зdz���ݶ����� +���﷨�ص㣬�ʺ���ɸ��ָ߲����񣬼������������еIJ���ϵͳ�� +���С��������Լ򵥶�ǿ���ʺϸ�����ʿѧϰʹ�á�Ŀǰ�������� +�����Ե���ؼ������ڷ��ٵķ�չ���û���������������ص���Դ�dz��ࡣ +����� Python ��ʹ�ü��е� C library? +�����YӍ�Ƽ����ٰlչ�Ľ���, �_�l���yԇܛ�w���ٶ��Dz��ݺ�ҕ�� +�n�}. ��ӿ��_�l���yԇ���ٶ�, �҂��㳣ϣ��������һЩ���_�l�õ� +library, �K��һ�� fast prototyping �� programming language �� +��ʹ��. Ŀǰ���S�S���� library ���� C ����, �� Python ��һ�� +fast prototyping �� programming language. ���҂�ϣ���܌����е� +C library �õ� Python �ĭh���Мyԇ������. ��������ҪҲ���҂��� +ҪӑՓ�Ć��}����: + diff --git a/lib-python/2.7/test/cjkencodings/hz-utf8.txt b/lib-python/2.7/test/cjkencodings/hz-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz-utf8.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.己所不欲,勿施於人。Bye. diff --git a/lib-python/2.7/test/cjkencodings/hz.txt b/lib-python/2.7/test/cjkencodings/hz.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/hz.txt @@ -0,0 +1,2 @@ +This sentence is in ASCII. +The next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye. diff --git a/lib-python/2.7/test/cjkencodings/johab-utf8.txt b/lib-python/2.7/test/cjkencodings/johab-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab-utf8.txt @@ -0,0 +1,9 @@ +똠방각하 펲시콜라 + +㉯㉯납!! 因九月패믤릔궈 ⓡⓖ훀¿¿¿ 긍뒙 ⓔ뎨 ㉯. . +亞영ⓔ능횹 . . . . 서울뤄 뎐학乙 家훀 ! ! !ㅠ.ㅠ +흐흐흐 ㄱㄱㄱ☆ㅠ_ㅠ 어릨 탸콰긐 뎌응 칑九들乙 ㉯드긐 +설릌 家훀 . . . . 굴애쉌 ⓔ궈 ⓡ릘㉱긐 因仁川女中까즼 +와쒀훀 ! ! 亞영ⓔ 家능궈 ☆上관 없능궈능 亞능뒈훀 글애듴 +ⓡ려듀九 싀풔숴훀 어릨 因仁川女中싁⑨들앜!! 
㉯㉯납♡ ⌒⌒* + diff --git a/lib-python/2.7/test/cjkencodings/johab.txt b/lib-python/2.7/test/cjkencodings/johab.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/johab.txt @@ -0,0 +1,9 @@ +���w�b�a �\��ũ�a + +�����s!! �g��Ú������ �����zٯٯٯ �w�� �ѕ� ��. . +�<�w�ѓw�s . . . . �ᶉ�� �e�b�� �;�z ! ! !�A.�A +�a�a�a �A�A�A�i�A_�A �៚ ȡ���z �a�w ×✗i�� ���a�z +��z �;�z . . . . ������ �ъ� �ޟ��‹z �g�b�I����a�� +�����z ! ! �<�w�� �;�w�� �i꾉� ���w���w �<�w���z �i���z +�ޝa�A� ��Ρ���z �៚ �g�b�I���鯂��i�z!! �����sٽ �b�b* + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis-utf8.txt @@ -0,0 +1,7 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + diff --git a/lib-python/2.7/test/cjkencodings/shift_jis.txt b/lib-python/2.7/test/cjkencodings/shift_jis.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jis.txt @@ -0,0 +1,7 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B 
+���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213-utf8.txt @@ -0,0 +1,8 @@ +Python の開発は、1990 年ごろから開始されています。 +開発者の Guido van Rossum は教育用のプログラミング言語「ABC」の開発に参加していましたが、ABC は実用上の目的にはあまり適していませんでした。 +このため、Guido はより実用的なプログラミング言語の開発を開始し、英国 BBS 放送のコメディ番組「モンティ パイソン」のファンである Guido はこの言語を「Python」と名づけました。 +このような背景から生まれた Python の言語設計は、「シンプル」で「習得が容易」という目標に重点が置かれています。 +多くのスクリプト系言語ではユーザの目先の利便性を優先して色々な機能を言語要素として取り入れる場合が多いのですが、Python ではそういった小細工が追加されることはあまりありません。 +言語自体の機能は最小限に押さえ、必要な機能は拡張モジュールとして追加する、というのが Python のポリシーです。 + +ノか゚ ト゚ トキ喝塀 𡚴𪎌 麀齁𩛰 diff --git a/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt new file mode 100644 --- /dev/null +++ b/lib-python/2.7/test/cjkencodings/shift_jisx0213.txt @@ -0,0 +1,8 @@ +Python �̊J���́A1990 �N���납��J�n����Ă��܂��B +�J���҂� Guido van Rossum �͋���p�̃v���O���~���O����uABC�v�̊J���ɎQ�����Ă��܂������AABC �͎��p��̖ړI�ɂ͂��܂�K���Ă��܂���ł����B +���̂��߁AGuido �͂����p�I�ȃv���O���~���O����̊J�����J�n���A�p�� BBS �����̃R���f�B�ԑg�u�����e�B �p�C�\���v�̃t�@���ł��� Guido �͂��̌�����uPython�v�Ɩ��Â��܂����B +���̂悤�Ȕw�i���琶�܂ꂽ Python �̌���݌v�́A�u�V���v���v�Łu�K�����e�Ձv�Ƃ����ڕW�ɏd�_���u����Ă��܂��B +�����̃X�N���v�g�n����ł̓��[�U�̖ڐ�̗��֐���D�悵�ĐF�X�ȋ@�\������v�f�Ƃ��Ď������ꍇ�������̂ł����APython �ł͂������������׍H���lj�����邱�Ƃ͂��܂肠��܂���B +���ꎩ�̂̋@�\�͍ŏ����ɉ������A�K�v�ȋ@�\�͊g�����W���[���Ƃ��Ēlj�����A�Ƃ����̂� Python �̃|���V�[�ł��B + +�m�� �� �g�L�K�y ���� ������ diff --git a/lib-python/2.7/test/cjkencodings_test.py b/lib-python/2.7/test/cjkencodings_test.py deleted file mode 100644 --- a/lib-python/2.7/test/cjkencodings_test.py +++ /dev/null @@ -1,1019 +0,0 @@ -teststring = { -'big5': ( -"\xa6\x70\xa6\xf3\xa6\x62\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xa4" 
-"\xa8\xcf\xa5\xce\xac\x4a\xa6\xb3\xaa\xba\x20\x43\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x3f\x0a\xa1\x40\xa6\x62\xb8\xea\xb0\x54\xac\xec" -"\xa7\xde\xa7\xd6\xb3\x74\xb5\x6f\xae\x69\xaa\xba\xa4\xb5\xa4\xd1" -"\x2c\x20\xb6\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xb3\x6e\xc5\xe9" -"\xaa\xba\xb3\x74\xab\xd7\xac\x4f\xa4\xa3\xae\x65\xa9\xbf\xb5\xf8" -"\xaa\xba\x0a\xbd\xd2\xc3\x44\x2e\x20\xac\xb0\xa5\x5b\xa7\xd6\xb6" -"\x7d\xb5\x6f\xa4\xce\xb4\xfa\xb8\xd5\xaa\xba\xb3\x74\xab\xd7\x2c" -"\x20\xa7\xda\xad\xcc\xab\x4b\xb1\x60\xa7\xc6\xb1\xe6\xaf\xe0\xa7" -"\x51\xa5\xce\xa4\x40\xa8\xc7\xa4\x77\xb6\x7d\xb5\x6f\xa6\x6e\xaa" -"\xba\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xa8\xc3\xa6\xb3\xa4" -"\x40\xad\xd3\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79" -"\x70\x69\x6e\x67\x20\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d" -"\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20\xa5\x69\x0a" -"\xa8\xd1\xa8\xcf\xa5\xce\x2e\x20\xa5\xd8\xab\x65\xa6\xb3\xb3\x5c" -"\xb3\x5c\xa6\x68\xa6\x68\xaa\xba\x20\x6c\x69\x62\x72\x61\x72\x79" -"\x20\xac\x4f\xa5\x48\x20\x43\x20\xbc\x67\xa6\xa8\x2c\x20\xa6\xd3" -"\x20\x50\x79\x74\x68\x6f\x6e\x20\xac\x4f\xa4\x40\xad\xd3\x0a\x66" -"\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20" -"\xaa\xba\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c" -"\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xac\x47\xa7\xda\xad\xcc\xa7" -"\xc6\xb1\xe6\xaf\xe0\xb1\x4e\xac\x4a\xa6\xb3\xaa\xba\x0a\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x20\xae\xb3\xa8\xec\x20\x50\x79\x74" -"\x68\x6f\x6e\x20\xaa\xba\xc0\xf4\xb9\xd2\xa4\xa4\xb4\xfa\xb8\xd5" -"\xa4\xce\xbe\xe3\xa6\x58\x2e\x20\xa8\xe4\xa4\xa4\xb3\xcc\xa5\x44" -"\xad\x6e\xa4\x5d\xac\x4f\xa7\xda\xad\xcc\xa9\xd2\x0a\xad\x6e\xb0" -"\x51\xbd\xd7\xaa\xba\xb0\xdd\xc3\x44\xb4\x4e\xac\x4f\x3a\x0a\x0a", -"\xe5\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3" 
-"\x80\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a" -"\x80\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84" -"\xe4\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f" -"\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84" -"\xe9\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5" -"\xbf\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e" -"\x20\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc" -"\xe5\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5" -"\xba\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8" -"\xe5\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4" -"\xb8\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5" -"\xbd\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8" -"\xa6\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7" -"\x94\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1" -"\xe8\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62" -"\x72\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf" -"\xab\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20" -"\x70\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20" -"\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67" -"\x75\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c" -"\x89\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6" -"\x8b\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84" -"\xe7\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5" -"\x8f\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad" 
-"\xe6\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6" -"\x88\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8" -"\xab\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98" -"\xaf\x3a\x0a\x0a"), -'big5hkscs': ( -"\x88\x45\x88\x5c\x8a\x73\x8b\xda\x8d\xd8\x0a\x88\x66\x88\x62\x88" -"\xa7\x20\x88\xa7\x88\xa3\x0a", -"\xf0\xa0\x84\x8c\xc4\x9a\xe9\xb5\xae\xe7\xbd\x93\xe6\xb4\x86\x0a" -"\xc3\x8a\xc3\x8a\xcc\x84\xc3\xaa\x20\xc3\xaa\xc3\xaa\xcc\x84\x0a"), -'cp949': ( -"\x8c\x63\xb9\xe6\xb0\xa2\xc7\xcf\x20\xbc\x84\xbd\xc3\xc4\xdd\xb6" -"\xf3\x0a\x0a\xa8\xc0\xa8\xc0\xb3\xb3\x21\x21\x20\xec\xd7\xce\xfa" -"\xea\xc5\xc6\xd0\x92\xe6\x90\x70\xb1\xc5\x20\xa8\xde\xa8\xd3\xc4" -"\x52\xa2\xaf\xa2\xaf\xa2\xaf\x20\xb1\xe0\x8a\x96\x20\xa8\xd1\xb5" -"\xb3\x20\xa8\xc0\x2e\x20\x2e\x0a\xe4\xac\xbf\xb5\xa8\xd1\xb4\xc9" -"\xc8\xc2\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xbc\xad\xbf\xef\xb7" -"\xef\x20\xb5\xaf\xc7\xd0\xeb\xe0\x20\xca\xab\xc4\x52\x20\x21\x20" -"\x21\x20\x21\xa4\xd0\x2e\xa4\xd0\x0a\xc8\xe5\xc8\xe5\xc8\xe5\x20" -"\xa4\xa1\xa4\xa1\xa4\xa1\xa1\xd9\xa4\xd0\x5f\xa4\xd0\x20\xbe\xee" -"\x90\x8a\x20\xc5\xcb\xc4\xe2\x83\x4f\x20\xb5\xae\xc0\xc0\x20\xaf" -"\x68\xce\xfa\xb5\xe9\xeb\xe0\x20\xa8\xc0\xb5\xe5\x83\x4f\x0a\xbc" -"\xb3\x90\x6a\x20\xca\xab\xc4\x52\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\xb1\xbc\xbe\xd6\x9a\x66\x20\xa8\xd1\xb1\xc5\x20\xa8\xde\x90" -"\x74\xa8\xc2\x83\x4f\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9" -"\xb1\xee\xa3\x8e\x0a\xbf\xcd\xbe\xac\xc4\x52\x20\x21\x20\x21\x20" -"\xe4\xac\xbf\xb5\xa8\xd1\x20\xca\xab\xb4\xc9\xb1\xc5\x20\xa1\xd9" -"\xdf\xbe\xb0\xfc\x20\xbe\xf8\xb4\xc9\xb1\xc5\xb4\xc9\x20\xe4\xac" -"\xb4\xc9\xb5\xd8\xc4\x52\x20\xb1\xdb\xbe\xd6\x8a\xdb\x0a\xa8\xde" -"\xb7\xc1\xb5\xe0\xce\xfa\x20\x9a\xc3\xc7\xb4\xbd\xa4\xc4\x52\x20" -"\xbe\xee\x90\x8a\x20\xec\xd7\xec\xd2\xf4\xb9\xe5\xfc\xf1\xe9\x9a" -"\xc4\xa8\xef\xb5\xe9\x9d\xda\x21\x21\x20\xa8\xc0\xa8\xc0\xb3\xb3" -"\xa2\xbd\x20\xa1\xd2\xa1\xd2\x2a\x0a\x0a", 
-"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" -"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" 
-"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'euc_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" -"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" 
-"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a\xa5\xce\xa4\xf7\x20\xa5\xfe\x20" -"\xa5\xc8\xa5\xad\xaf\xac\xaf\xda\x20\xcf\xe3\x8f\xfe\xd8\x20\x8f" -"\xfe\xd4\x8f\xfe\xe8\x8f\xfc\xd6\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" 
-"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" 
-"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" 
-"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -'euc_jp': ( -"\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb3\xab\xc8\xaf\xa4\xcf\xa1" -"\xa2\x31\x39\x39\x30\x20\xc7\xaf\xa4\xb4\xa4\xed\xa4\xab\xa4\xe9" -"\xb3\xab\xbb\xcf\xa4\xb5\xa4\xec\xa4\xc6\xa4\xa4\xa4\xde\xa4\xb9" -"\xa1\xa3\x0a\xb3\xab\xc8\xaf\xbc\xd4\xa4\xce\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\xa4\xcf\xb6" -"\xb5\xb0\xe9\xcd\xd1\xa4\xce\xa5\xd7\xa5\xed\xa5\xb0\xa5\xe9\xa5" -"\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa1\xd6\x41\x42\x43\xa1\xd7" -"\xa4\xce\xb3\xab\xc8\xaf\xa4\xcb\xbb\xb2\xb2\xc3\xa4\xb7\xa4\xc6" -"\xa4\xa4\xa4\xde\xa4\xb7\xa4\xbf\xa4\xac\xa1\xa2\x41\x42\x43\x20" -"\xa4\xcf\xbc\xc2\xcd\xd1\xbe\xe5\xa4\xce\xcc\xdc\xc5\xaa\xa4\xcb" -"\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xc5\xac\xa4\xb7\xa4\xc6\xa4\xa4" -"\xa4\xde\xa4\xbb\xa4\xf3\xa4\xc7\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4" -"\xb3\xa4\xce\xa4\xbf\xa4\xe1\xa1\xa2\x47\x75\x69\x64\x6f\x20\xa4" -"\xcf\xa4\xe8\xa4\xea\xbc\xc2\xcd\xd1\xc5\xaa\xa4\xca\xa5\xd7\xa5" -"\xed\xa5\xb0\xa5\xe9\xa5\xdf\xa5\xf3\xa5\xb0\xb8\xc0\xb8\xec\xa4" -"\xce\xb3\xab\xc8\xaf\xa4\xf2\xb3\xab\xbb\xcf\xa4\xb7\xa1\xa2\xb1" -"\xd1\xb9\xf1\x20\x42\x42\x53\x20\xca\xfc\xc1\xf7\xa4\xce\xa5\xb3" -"\xa5\xe1\xa5\xc7\xa5\xa3\xc8\xd6\xc1\xc8\xa1\xd6\xa5\xe2\xa5\xf3" -"\xa5\xc6\xa5\xa3\x20\xa5\xd1\xa5\xa4\xa5\xbd\xa5\xf3\xa1\xd7\xa4" -"\xce\xa5\xd5\xa5\xa1\xa5\xf3\xa4\xc7\xa4\xa2\xa4\xeb\x20\x47\x75" -"\x69\x64\x6f\x20\xa4\xcf\xa4\xb3\xa4\xce\xb8\xc0\xb8\xec\xa4\xf2" -"\xa1\xd6\x50\x79\x74\x68\x6f\x6e\xa1\xd7\xa4\xc8\xcc\xbe\xa4\xc5" 
-"\xa4\xb1\xa4\xde\xa4\xb7\xa4\xbf\xa1\xa3\x0a\xa4\xb3\xa4\xce\xa4" -"\xe8\xa4\xa6\xa4\xca\xc7\xd8\xb7\xca\xa4\xab\xa4\xe9\xc0\xb8\xa4" -"\xde\xa4\xec\xa4\xbf\x20\x50\x79\x74\x68\x6f\x6e\x20\xa4\xce\xb8" -"\xc0\xb8\xec\xc0\xdf\xb7\xd7\xa4\xcf\xa1\xa2\xa1\xd6\xa5\xb7\xa5" -"\xf3\xa5\xd7\xa5\xeb\xa1\xd7\xa4\xc7\xa1\xd6\xbd\xac\xc6\xc0\xa4" -"\xac\xcd\xc6\xb0\xd7\xa1\xd7\xa4\xc8\xa4\xa4\xa4\xa6\xcc\xdc\xc9" -"\xb8\xa4\xcb\xbd\xc5\xc5\xc0\xa4\xac\xc3\xd6\xa4\xab\xa4\xec\xa4" -"\xc6\xa4\xa4\xa4\xde\xa4\xb9\xa1\xa3\x0a\xc2\xbf\xa4\xaf\xa4\xce" -"\xa5\xb9\xa5\xaf\xa5\xea\xa5\xd7\xa5\xc8\xb7\xcf\xb8\xc0\xb8\xec" -"\xa4\xc7\xa4\xcf\xa5\xe6\xa1\xbc\xa5\xb6\xa4\xce\xcc\xdc\xc0\xe8" -"\xa4\xce\xcd\xf8\xca\xd8\xc0\xad\xa4\xf2\xcd\xa5\xc0\xe8\xa4\xb7" -"\xa4\xc6\xbf\xa7\xa1\xb9\xa4\xca\xb5\xa1\xc7\xbd\xa4\xf2\xb8\xc0" -"\xb8\xec\xcd\xd7\xc1\xc7\xa4\xc8\xa4\xb7\xa4\xc6\xbc\xe8\xa4\xea" -"\xc6\xfe\xa4\xec\xa4\xeb\xbe\xec\xb9\xe7\xa4\xac\xc2\xbf\xa4\xa4" -"\xa4\xce\xa4\xc7\xa4\xb9\xa4\xac\xa1\xa2\x50\x79\x74\x68\x6f\x6e" -"\x20\xa4\xc7\xa4\xcf\xa4\xbd\xa4\xa6\xa4\xa4\xa4\xc3\xa4\xbf\xbe" -"\xae\xba\xd9\xb9\xa9\xa4\xac\xc4\xc9\xb2\xc3\xa4\xb5\xa4\xec\xa4" -"\xeb\xa4\xb3\xa4\xc8\xa4\xcf\xa4\xa2\xa4\xde\xa4\xea\xa4\xa2\xa4" -"\xea\xa4\xde\xa4\xbb\xa4\xf3\xa1\xa3\x0a\xb8\xc0\xb8\xec\xbc\xab" -"\xc2\xce\xa4\xce\xb5\xa1\xc7\xbd\xa4\xcf\xba\xc7\xbe\xae\xb8\xc2" -"\xa4\xcb\xb2\xa1\xa4\xb5\xa4\xa8\xa1\xa2\xc9\xac\xcd\xd7\xa4\xca" -"\xb5\xa1\xc7\xbd\xa4\xcf\xb3\xc8\xc4\xa5\xa5\xe2\xa5\xb8\xa5\xe5" -"\xa1\xbc\xa5\xeb\xa4\xc8\xa4\xb7\xa4\xc6\xc4\xc9\xb2\xc3\xa4\xb9" -"\xa4\xeb\xa1\xa2\xa4\xc8\xa4\xa4\xa4\xa6\xa4\xce\xa4\xac\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\xa4\xce\xa5\xdd\xa5\xea\xa5\xb7\xa1\xbc" -"\xa4\xc7\xa4\xb9\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" 
-"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" 
-"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" 
-"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'euc_kr': ( -"\xa1\xdd\x20\xc6\xc4\xc0\xcc\xbd\xe3\x28\x50\x79\x74\x68\x6f\x6e" -"\x29\xc0\xba\x20\xb9\xe8\xbf\xec\xb1\xe2\x20\xbd\xb1\xb0\xed\x2c" -"\x20\xb0\xad\xb7\xc2\xc7\xd1\x20\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1" -"\xb9\xd6\x20\xbe\xf0\xbe\xee\xc0\xd4\xb4\xcf\xb4\xd9\x2e\x20\xc6" -"\xc4\xc0\xcc\xbd\xe3\xc0\xba\x0a\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce" -"\x20\xb0\xed\xbc\xf6\xc1\xd8\x20\xb5\xa5\xc0\xcc\xc5\xcd\x20\xb1" -"\xb8\xc1\xb6\xbf\xcd\x20\xb0\xa3\xb4\xdc\xc7\xcf\xc1\xf6\xb8\xb8" -"\x20\xc8\xbf\xc0\xb2\xc0\xfb\xc0\xce\x20\xb0\xb4\xc3\xbc\xc1\xf6" -"\xc7\xe2\xc7\xc1\xb7\xce\xb1\xd7\xb7\xa1\xb9\xd6\xc0\xbb\x0a\xc1" -"\xf6\xbf\xf8\xc7\xd5\xb4\xcf\xb4\xd9\x2e\x20\xc6\xc4\xc0\xcc\xbd" -"\xe3\xc0\xc7\x20\xbf\xec\xbe\xc6\x28\xe9\xd0\xe4\xba\x29\xc7\xd1" -"\x20\xb9\xae\xb9\xfd\xb0\xfa\x20\xb5\xbf\xc0\xfb\x20\xc5\xb8\xc0" -"\xcc\xc7\xce\x2c\x20\xb1\xd7\xb8\xae\xb0\xed\x20\xc0\xce\xc5\xcd" -"\xc7\xc1\xb8\xae\xc6\xc3\x0a\xc8\xaf\xb0\xe6\xc0\xba\x20\xc6\xc4" -"\xc0\xcc\xbd\xe3\xc0\xbb\x20\xbd\xba\xc5\xa9\xb8\xb3\xc6\xc3\xb0" -"\xfa\x20\xbf\xa9\xb7\xaf\x20\xba\xd0\xbe\xdf\xbf\xa1\xbc\xad\xbf" -"\xcd\x20\xb4\xeb\xba\xce\xba\xd0\xc0\xc7\x20\xc7\xc3\xb7\xa7\xc6" -"\xfb\xbf\xa1\xbc\xad\xc0\xc7\x20\xba\xfc\xb8\xa5\x0a\xbe\xd6\xc7" -"\xc3\xb8\xae\xc4\xc9\xc0\xcc\xbc\xc7\x20\xb0\xb3\xb9\xdf\xc0\xbb" -"\x20\xc7\xd2\x20\xbc\xf6\x20\xc0\xd6\xb4\xc2\x20\xc0\xcc\xbb\xf3" -"\xc0\xfb\xc0\xce\x20\xbe\xf0\xbe\xee\xb7\xce\x20\xb8\xb8\xb5\xe9" 
-"\xbe\xee\xc1\xdd\xb4\xcf\xb4\xd9\x2e\x0a\x0a\xa1\xd9\xc3\xb9\xb0" -"\xa1\xb3\xa1\x3a\x20\xb3\xaf\xbe\xc6\xb6\xf3\x20\xa4\xd4\xa4\xb6" -"\xa4\xd0\xa4\xd4\xa4\xd4\xa4\xb6\xa4\xd0\xa4\xd4\xbe\xb1\x7e\x20" -"\xa4\xd4\xa4\xa4\xa4\xd2\xa4\xb7\xc5\xad\x21\x20\xa4\xd4\xa4\xa8" -"\xa4\xd1\xa4\xb7\xb1\xdd\xbe\xf8\xc0\xcc\x20\xc0\xfc\xa4\xd4\xa4" -"\xbe\xa4\xc8\xa4\xb2\xb4\xcf\xb4\xd9\x2e\x20\xa4\xd4\xa4\xb2\xa4" -"\xce\xa4\xaa\x2e\x20\xb1\xd7\xb7\xb1\xb0\xc5\x20\xa4\xd4\xa4\xb7" -"\xa4\xd1\xa4\xb4\xb4\xd9\x2e\x0a", -"\xe2\x97\x8e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\x28\x50\x79" -"\x74\x68\x6f\x6e\x29\xec\x9d\x80\x20\xeb\xb0\xb0\xec\x9a\xb0\xea" -"\xb8\xb0\x20\xec\x89\xbd\xea\xb3\xa0\x2c\x20\xea\xb0\x95\xeb\xa0" -"\xa5\xed\x95\x9c\x20\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e" -"\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96\xb4\xec\x9e\x85\xeb\x8b" -"\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec" -"\x9d\x80\x0a\xed\x9a\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20" -"\xea\xb3\xa0\xec\x88\x98\xec\xa4\x80\x20\xeb\x8d\xb0\xec\x9d\xb4" -"\xed\x84\xb0\x20\xea\xb5\xac\xec\xa1\xb0\xec\x99\x80\x20\xea\xb0" -"\x84\xeb\x8b\xa8\xed\x95\x98\xec\xa7\x80\xeb\xa7\x8c\x20\xed\x9a" -"\xa8\xec\x9c\xa8\xec\xa0\x81\xec\x9d\xb8\x20\xea\xb0\x9d\xec\xb2" -"\xb4\xec\xa7\x80\xed\x96\xa5\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8" -"\xeb\x9e\x98\xeb\xb0\x8d\xec\x9d\x84\x0a\xec\xa7\x80\xec\x9b\x90" -"\xed\x95\xa9\xeb\x8b\x88\xeb\x8b\xa4\x2e\x20\xed\x8c\x8c\xec\x9d" -"\xb4\xec\x8d\xac\xec\x9d\x98\x20\xec\x9a\xb0\xec\x95\x84\x28\xe5" -"\x84\xaa\xe9\x9b\x85\x29\xed\x95\x9c\x20\xeb\xac\xb8\xeb\xb2\x95" -"\xea\xb3\xbc\x20\xeb\x8f\x99\xec\xa0\x81\x20\xed\x83\x80\xec\x9d" -"\xb4\xed\x95\x91\x2c\x20\xea\xb7\xb8\xeb\xa6\xac\xea\xb3\xa0\x20" -"\xec\x9d\xb8\xed\x84\xb0\xed\x94\x84\xeb\xa6\xac\xed\x8c\x85\x0a" -"\xed\x99\x98\xea\xb2\xbd\xec\x9d\x80\x20\xed\x8c\x8c\xec\x9d\xb4" -"\xec\x8d\xac\xec\x9d\x84\x20\xec\x8a\xa4\xed\x81\xac\xeb\xa6\xbd" 
-"\xed\x8c\x85\xea\xb3\xbc\x20\xec\x97\xac\xeb\x9f\xac\x20\xeb\xb6" -"\x84\xec\x95\xbc\xec\x97\x90\xec\x84\x9c\xec\x99\x80\x20\xeb\x8c" -"\x80\xeb\xb6\x80\xeb\xb6\x84\xec\x9d\x98\x20\xed\x94\x8c\xeb\x9e" -"\xab\xed\x8f\xbc\xec\x97\x90\xec\x84\x9c\xec\x9d\x98\x20\xeb\xb9" -"\xa0\xeb\xa5\xb8\x0a\xec\x95\xa0\xed\x94\x8c\xeb\xa6\xac\xec\xbc" -"\x80\xec\x9d\xb4\xec\x85\x98\x20\xea\xb0\x9c\xeb\xb0\x9c\xec\x9d" -"\x84\x20\xed\x95\xa0\x20\xec\x88\x98\x20\xec\x9e\x88\xeb\x8a\x94" -"\x20\xec\x9d\xb4\xec\x83\x81\xec\xa0\x81\xec\x9d\xb8\x20\xec\x96" -"\xb8\xec\x96\xb4\xeb\xa1\x9c\x20\xeb\xa7\x8c\xeb\x93\xa4\xec\x96" -"\xb4\xec\xa4\x8d\xeb\x8b\x88\xeb\x8b\xa4\x2e\x0a\x0a\xe2\x98\x86" -"\xec\xb2\xab\xea\xb0\x80\xeb\x81\x9d\x3a\x20\xeb\x82\xa0\xec\x95" -"\x84\xeb\x9d\xbc\x20\xec\x93\x94\xec\x93\x94\xec\x93\xa9\x7e\x20" -"\xeb\x8b\x81\xed\x81\xbc\x21\x20\xeb\x9c\xbd\xea\xb8\x88\xec\x97" -"\x86\xec\x9d\xb4\x20\xec\xa0\x84\xed\x99\xa5\xeb\x8b\x88\xeb\x8b" -"\xa4\x2e\x20\xeb\xb7\x81\x2e\x20\xea\xb7\xb8\xeb\x9f\xb0\xea\xb1" -"\xb0\x20\xec\x9d\x8e\xeb\x8b\xa4\x2e\x0a"), -'gb18030': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" 
-"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" -"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" 
-"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x83\x35\xc7\x31\x83\x33\x9a\x33\x83\x32\xb1\x31\x83\x33" -"\x95\x31\x20\x82\x37\xd1\x36\x83\x30\x8c\x34\x83\x36\x84\x33\x20" -"\x82\x38\x89\x35\x82\x38\xfb\x36\x83\x33\x95\x35\x20\x83\x33\xd5" -"\x31\x82\x39\x81\x35\x20\x83\x30\xfd\x39\x83\x33\x86\x30\x20\x83" -"\x34\xdc\x33\x83\x35\xf6\x37\x83\x35\x97\x35\x20\x83\x35\xf9\x35" -"\x83\x30\x91\x39\x82\x38\x83\x39\x82\x39\xfc\x33\x83\x30\xf0\x34" -"\x20\x83\x32\xeb\x39\x83\x32\xeb\x35\x82\x39\x83\x39\x2e\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" 
-"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" 
-"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" -"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\xed\x8c\x8c\xec\x9d\xb4\xec\x8d\xac\xec\x9d\x80\x20\xea" -"\xb0\x95\xeb\xa0\xa5\xed\x95\x9c\x20\xea\xb8\xb0\xeb\x8a\xa5\xec" -"\x9d\x84\x20\xec\xa7\x80\xeb\x8b\x8c\x20\xeb\xb2\x94\xec\x9a\xa9" -"\x20\xec\xbb\xb4\xed\x93\xa8\xed\x84\xb0\x20\xed\x94\x84\xeb\xa1" -"\x9c\xea\xb7\xb8\xeb\x9e\x98\xeb\xb0\x8d\x20\xec\x96\xb8\xec\x96" -"\xb4\xeb\x8b\xa4\x2e\x0a\x0a"), -'gb2312': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" 
-"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" -"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" 
-"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" -"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\x0a"), -'gbk': ( -"\x50\x79\x74\x68\x6f\x6e\xa3\xa8\xc5\xc9\xc9\xad\xa3\xa9\xd3\xef" -"\xd1\xd4\xca\xc7\xd2\xbb\xd6\xd6\xb9\xa6\xc4\xdc\xc7\xbf\xb4\xf3" -"\xb6\xf8\xcd\xea\xc9\xc6\xb5\xc4\xcd\xa8\xd3\xc3\xd0\xcd\xbc\xc6" -"\xcb\xe3\xbb\xfa\xb3\xcc\xd0\xf2\xc9\xe8\xbc\xc6\xd3\xef\xd1\xd4" -"\xa3\xac\x0a\xd2\xd1\xbe\xad\xbe\xdf\xd3\xd0\xca\xae\xb6\xe0\xc4" -"\xea\xb5\xc4\xb7\xa2\xd5\xb9\xc0\xfa\xca\xb7\xa3\xac\xb3\xc9\xca" -"\xec\xc7\xd2\xce\xc8\xb6\xa8\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1" -"\xd4\xbe\xdf\xd3\xd0\xb7\xc7\xb3\xa3\xbc\xf2\xbd\xdd\xb6\xf8\xc7" -"\xe5\xce\xfa\x0a\xb5\xc4\xd3\xef\xb7\xa8\xcc\xd8\xb5\xe3\xa3\xac" -"\xca\xca\xba\xcf\xcd\xea\xb3\xc9\xb8\xf7\xd6\xd6\xb8\xdf\xb2\xe3" -"\xc8\xce\xce\xf1\xa3\xac\xbc\xb8\xba\xf5\xbf\xc9\xd2\xd4\xd4\xda" -"\xcb\xf9\xd3\xd0\xb5\xc4\xb2\xd9\xd7\xf7\xcf\xb5\xcd\xb3\xd6\xd0" 
-"\x0a\xd4\xcb\xd0\xd0\xa1\xa3\xd5\xe2\xd6\xd6\xd3\xef\xd1\xd4\xbc" -"\xf2\xb5\xa5\xb6\xf8\xc7\xbf\xb4\xf3\xa3\xac\xca\xca\xba\xcf\xb8" -"\xf7\xd6\xd6\xc8\xcb\xca\xbf\xd1\xa7\xcf\xb0\xca\xb9\xd3\xc3\xa1" -"\xa3\xc4\xbf\xc7\xb0\xa3\xac\xbb\xf9\xd3\xda\xd5\xe2\x0a\xd6\xd6" -"\xd3\xef\xd1\xd4\xb5\xc4\xcf\xe0\xb9\xd8\xbc\xbc\xca\xf5\xd5\xfd" -"\xd4\xda\xb7\xc9\xcb\xd9\xb5\xc4\xb7\xa2\xd5\xb9\xa3\xac\xd3\xc3" -"\xbb\xa7\xca\xfd\xc1\xbf\xbc\xb1\xbe\xe7\xc0\xa9\xb4\xf3\xa3\xac" -"\xcf\xe0\xb9\xd8\xb5\xc4\xd7\xca\xd4\xb4\xb7\xc7\xb3\xa3\xb6\xe0" -"\xa1\xa3\x0a\xc8\xe7\xba\xce\xd4\xda\x20\x50\x79\x74\x68\x6f\x6e" -"\x20\xd6\xd0\xca\xb9\xd3\xc3\xbc\xc8\xd3\xd0\xb5\xc4\x20\x43\x20" -"\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xa1\xa1\xd4\xda\xd9\x59\xd3" -"\x8d\xbf\xc6\xbc\xbc\xbf\xec\xcb\xd9\xb0\x6c\xd5\xb9\xb5\xc4\xbd" -"\xf1\xcc\xec\x2c\x20\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xdc" -"\x9b\xf3\x77\xb5\xc4\xcb\xd9\xb6\xc8\xca\xc7\xb2\xbb\xc8\xdd\xba" -"\xf6\xd2\x95\xb5\xc4\x0a\xd5\x6e\xee\x7d\x2e\x20\x9e\xe9\xbc\xd3" -"\xbf\xec\xe9\x5f\xb0\x6c\xbc\xb0\x9c\x79\xd4\x87\xb5\xc4\xcb\xd9" -"\xb6\xc8\x2c\x20\xce\xd2\x82\x83\xb1\xe3\xb3\xa3\xcf\xa3\xcd\xfb" -"\xc4\xdc\xc0\xfb\xd3\xc3\xd2\xbb\xd0\xa9\xd2\xd1\xe9\x5f\xb0\x6c" -"\xba\xc3\xb5\xc4\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\x81\x4b" -"\xd3\xd0\xd2\xbb\x82\x80\x20\x66\x61\x73\x74\x20\x70\x72\x6f\x74" -"\x6f\x74\x79\x70\x69\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72" -"\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x20" -"\xbf\xc9\x0a\xb9\xa9\xca\xb9\xd3\xc3\x2e\x20\xc4\xbf\xc7\xb0\xd3" -"\xd0\xd4\x53\xd4\x53\xb6\xe0\xb6\xe0\xb5\xc4\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xca\xc7\xd2\xd4\x20\x43\x20\x8c\x91\xb3\xc9\x2c" -"\x20\xb6\xf8\x20\x50\x79\x74\x68\x6f\x6e\x20\xca\xc7\xd2\xbb\x82" -"\x80\x0a\x66\x61\x73\x74\x20\x70\x72\x6f\x74\x6f\x74\x79\x70\x69" -"\x6e\x67\x20\xb5\xc4\x20\x70\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e" -"\x67\x20\x6c\x61\x6e\x67\x75\x61\x67\x65\x2e\x20\xb9\xca\xce\xd2" 
-"\x82\x83\xcf\xa3\xcd\xfb\xc4\xdc\x8c\xa2\xbc\xc8\xd3\xd0\xb5\xc4" -"\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xc4\xc3\xb5\xbd\x20" -"\x50\x79\x74\x68\x6f\x6e\x20\xb5\xc4\xad\x68\xbe\xb3\xd6\xd0\x9c" -"\x79\xd4\x87\xbc\xb0\xd5\xfb\xba\xcf\x2e\x20\xc6\xe4\xd6\xd0\xd7" -"\xee\xd6\xf7\xd2\xaa\xd2\xb2\xca\xc7\xce\xd2\x82\x83\xcb\xf9\x0a" -"\xd2\xaa\xd3\x91\xd5\x93\xb5\xc4\x86\x96\xee\x7d\xbe\xcd\xca\xc7" -"\x3a\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\xef\xbc\x88\xe6\xb4\xbe\xe6\xa3\xae\xef" -"\xbc\x89\xe8\xaf\xad\xe8\xa8\x80\xe6\x98\xaf\xe4\xb8\x80\xe7\xa7" -"\x8d\xe5\x8a\x9f\xe8\x83\xbd\xe5\xbc\xba\xe5\xa4\xa7\xe8\x80\x8c" -"\xe5\xae\x8c\xe5\x96\x84\xe7\x9a\x84\xe9\x80\x9a\xe7\x94\xa8\xe5" -"\x9e\x8b\xe8\xae\xa1\xe7\xae\x97\xe6\x9c\xba\xe7\xa8\x8b\xe5\xba" -"\x8f\xe8\xae\xbe\xe8\xae\xa1\xe8\xaf\xad\xe8\xa8\x80\xef\xbc\x8c" -"\x0a\xe5\xb7\xb2\xe7\xbb\x8f\xe5\x85\xb7\xe6\x9c\x89\xe5\x8d\x81" -"\xe5\xa4\x9a\xe5\xb9\xb4\xe7\x9a\x84\xe5\x8f\x91\xe5\xb1\x95\xe5" -"\x8e\x86\xe5\x8f\xb2\xef\xbc\x8c\xe6\x88\x90\xe7\x86\x9f\xe4\xb8" -"\x94\xe7\xa8\xb3\xe5\xae\x9a\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d" -"\xe8\xaf\xad\xe8\xa8\x80\xe5\x85\xb7\xe6\x9c\x89\xe9\x9d\x9e\xe5" -"\xb8\xb8\xe7\xae\x80\xe6\x8d\xb7\xe8\x80\x8c\xe6\xb8\x85\xe6\x99" -"\xb0\x0a\xe7\x9a\x84\xe8\xaf\xad\xe6\xb3\x95\xe7\x89\xb9\xe7\x82" -"\xb9\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\xae\x8c\xe6\x88\x90" -"\xe5\x90\x84\xe7\xa7\x8d\xe9\xab\x98\xe5\xb1\x82\xe4\xbb\xbb\xe5" -"\x8a\xa1\xef\xbc\x8c\xe5\x87\xa0\xe4\xb9\x8e\xe5\x8f\xaf\xe4\xbb" -"\xa5\xe5\x9c\xa8\xe6\x89\x80\xe6\x9c\x89\xe7\x9a\x84\xe6\x93\x8d" -"\xe4\xbd\x9c\xe7\xb3\xbb\xe7\xbb\x9f\xe4\xb8\xad\x0a\xe8\xbf\x90" -"\xe8\xa1\x8c\xe3\x80\x82\xe8\xbf\x99\xe7\xa7\x8d\xe8\xaf\xad\xe8" -"\xa8\x80\xe7\xae\x80\xe5\x8d\x95\xe8\x80\x8c\xe5\xbc\xba\xe5\xa4" -"\xa7\xef\xbc\x8c\xe9\x80\x82\xe5\x90\x88\xe5\x90\x84\xe7\xa7\x8d" -"\xe4\xba\xba\xe5\xa3\xab\xe5\xad\xa6\xe4\xb9\xa0\xe4\xbd\xbf\xe7" -"\x94\xa8\xe3\x80\x82\xe7\x9b\xae\xe5\x89\x8d\xef\xbc\x8c\xe5\x9f" 
-"\xba\xe4\xba\x8e\xe8\xbf\x99\x0a\xe7\xa7\x8d\xe8\xaf\xad\xe8\xa8" -"\x80\xe7\x9a\x84\xe7\x9b\xb8\xe5\x85\xb3\xe6\x8a\x80\xe6\x9c\xaf" -"\xe6\xad\xa3\xe5\x9c\xa8\xe9\xa3\x9e\xe9\x80\x9f\xe7\x9a\x84\xe5" -"\x8f\x91\xe5\xb1\x95\xef\xbc\x8c\xe7\x94\xa8\xe6\x88\xb7\xe6\x95" -"\xb0\xe9\x87\x8f\xe6\x80\xa5\xe5\x89\xa7\xe6\x89\xa9\xe5\xa4\xa7" -"\xef\xbc\x8c\xe7\x9b\xb8\xe5\x85\xb3\xe7\x9a\x84\xe8\xb5\x84\xe6" -"\xba\x90\xe9\x9d\x9e\xe5\xb8\xb8\xe5\xa4\x9a\xe3\x80\x82\x0a\xe5" -"\xa6\x82\xe4\xbd\x95\xe5\x9c\xa8\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe4\xb8\xad\xe4\xbd\xbf\xe7\x94\xa8\xe6\x97\xa2\xe6\x9c\x89\xe7" -"\x9a\x84\x20\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x3f\x0a\xe3\x80" -"\x80\xe5\x9c\xa8\xe8\xb3\x87\xe8\xa8\x8a\xe7\xa7\x91\xe6\x8a\x80" -"\xe5\xbf\xab\xe9\x80\x9f\xe7\x99\xbc\xe5\xb1\x95\xe7\x9a\x84\xe4" -"\xbb\x8a\xe5\xa4\xa9\x2c\x20\xe9\x96\x8b\xe7\x99\xbc\xe5\x8f\x8a" -"\xe6\xb8\xac\xe8\xa9\xa6\xe8\xbb\x9f\xe9\xab\x94\xe7\x9a\x84\xe9" -"\x80\x9f\xe5\xba\xa6\xe6\x98\xaf\xe4\xb8\x8d\xe5\xae\xb9\xe5\xbf" -"\xbd\xe8\xa6\x96\xe7\x9a\x84\x0a\xe8\xaa\xb2\xe9\xa1\x8c\x2e\x20" -"\xe7\x82\xba\xe5\x8a\xa0\xe5\xbf\xab\xe9\x96\x8b\xe7\x99\xbc\xe5" -"\x8f\x8a\xe6\xb8\xac\xe8\xa9\xa6\xe7\x9a\x84\xe9\x80\x9f\xe5\xba" -"\xa6\x2c\x20\xe6\x88\x91\xe5\x80\x91\xe4\xbe\xbf\xe5\xb8\xb8\xe5" -"\xb8\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\x88\xa9\xe7\x94\xa8\xe4\xb8" -"\x80\xe4\xba\x9b\xe5\xb7\xb2\xe9\x96\x8b\xe7\x99\xbc\xe5\xa5\xbd" -"\xe7\x9a\x84\x0a\x6c\x69\x62\x72\x61\x72\x79\x2c\x20\xe4\xb8\xa6" -"\xe6\x9c\x89\xe4\xb8\x80\xe5\x80\x8b\x20\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x20\xe5\x8f\xaf\x0a\xe4\xbe\x9b\xe4\xbd\xbf\xe7\x94" -"\xa8\x2e\x20\xe7\x9b\xae\xe5\x89\x8d\xe6\x9c\x89\xe8\xa8\xb1\xe8" -"\xa8\xb1\xe5\xa4\x9a\xe5\xa4\x9a\xe7\x9a\x84\x20\x6c\x69\x62\x72" -"\x61\x72\x79\x20\xe6\x98\xaf\xe4\xbb\xa5\x20\x43\x20\xe5\xaf\xab" 
-"\xe6\x88\x90\x2c\x20\xe8\x80\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20" -"\xe6\x98\xaf\xe4\xb8\x80\xe5\x80\x8b\x0a\x66\x61\x73\x74\x20\x70" -"\x72\x6f\x74\x6f\x74\x79\x70\x69\x6e\x67\x20\xe7\x9a\x84\x20\x70" -"\x72\x6f\x67\x72\x61\x6d\x6d\x69\x6e\x67\x20\x6c\x61\x6e\x67\x75" -"\x61\x67\x65\x2e\x20\xe6\x95\x85\xe6\x88\x91\xe5\x80\x91\xe5\xb8" -"\x8c\xe6\x9c\x9b\xe8\x83\xbd\xe5\xb0\x87\xe6\x97\xa2\xe6\x9c\x89" -"\xe7\x9a\x84\x0a\x43\x20\x6c\x69\x62\x72\x61\x72\x79\x20\xe6\x8b" -"\xbf\xe5\x88\xb0\x20\x50\x79\x74\x68\x6f\x6e\x20\xe7\x9a\x84\xe7" -"\x92\xb0\xe5\xa2\x83\xe4\xb8\xad\xe6\xb8\xac\xe8\xa9\xa6\xe5\x8f" -"\x8a\xe6\x95\xb4\xe5\x90\x88\x2e\x20\xe5\x85\xb6\xe4\xb8\xad\xe6" -"\x9c\x80\xe4\xb8\xbb\xe8\xa6\x81\xe4\xb9\x9f\xe6\x98\xaf\xe6\x88" -"\x91\xe5\x80\x91\xe6\x89\x80\x0a\xe8\xa6\x81\xe8\xa8\x8e\xe8\xab" -"\x96\xe7\x9a\x84\xe5\x95\x8f\xe9\xa1\x8c\xe5\xb0\xb1\xe6\x98\xaf" -"\x3a\x0a\x0a"), -'johab': ( -"\x99\xb1\xa4\x77\x88\x62\xd0\x61\x20\xcd\x5c\xaf\xa1\xc5\xa9\x9c" -"\x61\x0a\x0a\xdc\xc0\xdc\xc0\x90\x73\x21\x21\x20\xf1\x67\xe2\x9c" -"\xf0\x55\xcc\x81\xa3\x89\x9f\x85\x8a\xa1\x20\xdc\xde\xdc\xd3\xd2" -"\x7a\xd9\xaf\xd9\xaf\xd9\xaf\x20\x8b\x77\x96\xd3\x20\xdc\xd1\x95" -"\x81\x20\xdc\xc0\x2e\x20\x2e\x0a\xed\x3c\xb5\x77\xdc\xd1\x93\x77" -"\xd2\x73\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xac\xe1\xb6\x89\x9e" -"\xa1\x20\x95\x65\xd0\x62\xf0\xe0\x20\xe0\x3b\xd2\x7a\x20\x21\x20" -"\x21\x20\x21\x87\x41\x2e\x87\x41\x0a\xd3\x61\xd3\x61\xd3\x61\x20" -"\x88\x41\x88\x41\x88\x41\xd9\x69\x87\x41\x5f\x87\x41\x20\xb4\xe1" -"\x9f\x9a\x20\xc8\xa1\xc5\xc1\x8b\x7a\x20\x95\x61\xb7\x77\x20\xc3" -"\x97\xe2\x9c\x97\x69\xf0\xe0\x20\xdc\xc0\x97\x61\x8b\x7a\x0a\xac" -"\xe9\x9f\x7a\x20\xe0\x3b\xd2\x7a\x20\x2e\x20\x2e\x20\x2e\x20\x2e" -"\x20\x8a\x89\xb4\x81\xae\xba\x20\xdc\xd1\x8a\xa1\x20\xdc\xde\x9f" -"\x89\xdc\xc2\x8b\x7a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9" -"\x8c\x61\xbb\x9a\x0a\xb5\xc1\xb2\xa1\xd2\x7a\x20\x21\x20\x21\x20" 
-"\xed\x3c\xb5\x77\xdc\xd1\x20\xe0\x3b\x93\x77\x8a\xa1\x20\xd9\x69" -"\xea\xbe\x89\xc5\x20\xb4\xf4\x93\x77\x8a\xa1\x93\x77\x20\xed\x3c" -"\x93\x77\x96\xc1\xd2\x7a\x20\x8b\x69\xb4\x81\x97\x7a\x0a\xdc\xde" -"\x9d\x61\x97\x41\xe2\x9c\x20\xaf\x81\xce\xa1\xae\xa1\xd2\x7a\x20" -"\xb4\xe1\x9f\x9a\x20\xf1\x67\xf1\x62\xf5\x49\xed\xfc\xf3\xe9\xaf" -"\x82\xdc\xef\x97\x69\xb4\x7a\x21\x21\x20\xdc\xc0\xdc\xc0\x90\x73" -"\xd9\xbd\x20\xd9\x62\xd9\x62\x2a\x0a\x0a", -"\xeb\x98\xa0\xeb\xb0\xa9\xea\xb0\x81\xed\x95\x98\x20\xed\x8e\xb2" -"\xec\x8b\x9c\xec\xbd\x9c\xeb\x9d\xbc\x0a\x0a\xe3\x89\xaf\xe3\x89" -"\xaf\xeb\x82\xa9\x21\x21\x20\xe5\x9b\xa0\xe4\xb9\x9d\xe6\x9c\x88" -"\xed\x8c\xa8\xeb\xaf\xa4\xeb\xa6\x94\xea\xb6\x88\x20\xe2\x93\xa1" -"\xe2\x93\x96\xed\x9b\x80\xc2\xbf\xc2\xbf\xc2\xbf\x20\xea\xb8\x8d" -"\xeb\x92\x99\x20\xe2\x93\x94\xeb\x8e\xa8\x20\xe3\x89\xaf\x2e\x20" -"\x2e\x0a\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94\xeb\x8a\xa5\xed\x9a" -"\xb9\x20\x2e\x20\x2e\x20\x2e\x20\x2e\x20\xec\x84\x9c\xec\x9a\xb8" -"\xeb\xa4\x84\x20\xeb\x8e\x90\xed\x95\x99\xe4\xb9\x99\x20\xe5\xae" -"\xb6\xed\x9b\x80\x20\x21\x20\x21\x20\x21\xe3\x85\xa0\x2e\xe3\x85" -"\xa0\x0a\xed\x9d\x90\xed\x9d\x90\xed\x9d\x90\x20\xe3\x84\xb1\xe3" -"\x84\xb1\xe3\x84\xb1\xe2\x98\x86\xe3\x85\xa0\x5f\xe3\x85\xa0\x20" -"\xec\x96\xb4\xeb\xa6\xa8\x20\xed\x83\xb8\xec\xbd\xb0\xea\xb8\x90" -"\x20\xeb\x8e\x8c\xec\x9d\x91\x20\xec\xb9\x91\xe4\xb9\x9d\xeb\x93" -"\xa4\xe4\xb9\x99\x20\xe3\x89\xaf\xeb\x93\x9c\xea\xb8\x90\x0a\xec" -"\x84\xa4\xeb\xa6\x8c\x20\xe5\xae\xb6\xed\x9b\x80\x20\x2e\x20\x2e" -"\x20\x2e\x20\x2e\x20\xea\xb5\xb4\xec\x95\xa0\xec\x89\x8c\x20\xe2" -"\x93\x94\xea\xb6\x88\x20\xe2\x93\xa1\xeb\xa6\x98\xe3\x89\xb1\xea" -"\xb8\x90\x20\xe5\x9b\xa0\xe4\xbb\x81\xe5\xb7\x9d\xef\xa6\x81\xe4" -"\xb8\xad\xea\xb9\x8c\xec\xa6\xbc\x0a\xec\x99\x80\xec\x92\x80\xed" -"\x9b\x80\x20\x21\x20\x21\x20\xe4\xba\x9e\xec\x98\x81\xe2\x93\x94" -"\x20\xe5\xae\xb6\xeb\x8a\xa5\xea\xb6\x88\x20\xe2\x98\x86\xe4\xb8" 
-"\x8a\xea\xb4\x80\x20\xec\x97\x86\xeb\x8a\xa5\xea\xb6\x88\xeb\x8a" -"\xa5\x20\xe4\xba\x9e\xeb\x8a\xa5\xeb\x92\x88\xed\x9b\x80\x20\xea" -"\xb8\x80\xec\x95\xa0\xeb\x93\xb4\x0a\xe2\x93\xa1\xeb\xa0\xa4\xeb" -"\x93\x80\xe4\xb9\x9d\x20\xec\x8b\x80\xed\x92\x94\xec\x88\xb4\xed" -"\x9b\x80\x20\xec\x96\xb4\xeb\xa6\xa8\x20\xe5\x9b\xa0\xe4\xbb\x81" -"\xe5\xb7\x9d\xef\xa6\x81\xe4\xb8\xad\xec\x8b\x81\xe2\x91\xa8\xeb" -"\x93\xa4\xec\x95\x9c\x21\x21\x20\xe3\x89\xaf\xe3\x89\xaf\xeb\x82" -"\xa9\xe2\x99\xa1\x20\xe2\x8c\x92\xe2\x8c\x92\x2a\x0a\x0a"), -'shift_jis': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" -"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" 
-"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a", -"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" 
-"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" 
-"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" -"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" 
-"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a"), -'shift_jisx0213': ( -"\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8a\x4a\x94\xad\x82\xcd\x81" -"\x41\x31\x39\x39\x30\x20\x94\x4e\x82\xb2\x82\xeb\x82\xa9\x82\xe7" -"\x8a\x4a\x8e\x6e\x82\xb3\x82\xea\x82\xc4\x82\xa2\x82\xdc\x82\xb7" -"\x81\x42\x0a\x8a\x4a\x94\xad\x8e\xd2\x82\xcc\x20\x47\x75\x69\x64" -"\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73\x73\x75\x6d\x20\x82\xcd\x8b" -"\xb3\x88\xe7\x97\x70\x82\xcc\x83\x76\x83\x8d\x83\x4f\x83\x89\x83" -"\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x81\x75\x41\x42\x43\x81\x76" -"\x82\xcc\x8a\x4a\x94\xad\x82\xc9\x8e\x51\x89\xc1\x82\xb5\x82\xc4" -"\x82\xa2\x82\xdc\x82\xb5\x82\xbd\x82\xaa\x81\x41\x41\x42\x43\x20" -"\x82\xcd\x8e\xc0\x97\x70\x8f\xe3\x82\xcc\x96\xda\x93\x49\x82\xc9" -"\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x93\x4b\x82\xb5\x82\xc4\x82\xa2" -"\x82\xdc\x82\xb9\x82\xf1\x82\xc5\x82\xb5\x82\xbd\x81\x42\x0a\x82" -"\xb1\x82\xcc\x82\xbd\x82\xdf\x81\x41\x47\x75\x69\x64\x6f\x20\x82" -"\xcd\x82\xe6\x82\xe8\x8e\xc0\x97\x70\x93\x49\x82\xc8\x83\x76\x83" -"\x8d\x83\x4f\x83\x89\x83\x7e\x83\x93\x83\x4f\x8c\xbe\x8c\xea\x82" -"\xcc\x8a\x4a\x94\xad\x82\xf0\x8a\x4a\x8e\x6e\x82\xb5\x81\x41\x89" -"\x70\x8d\x91\x20\x42\x42\x53\x20\x95\xfa\x91\x97\x82\xcc\x83\x52" -"\x83\x81\x83\x66\x83\x42\x94\xd4\x91\x67\x81\x75\x83\x82\x83\x93" -"\x83\x65\x83\x42\x20\x83\x70\x83\x43\x83\x5c\x83\x93\x81\x76\x82" -"\xcc\x83\x74\x83\x40\x83\x93\x82\xc5\x82\xa0\x82\xe9\x20\x47\x75" 
-"\x69\x64\x6f\x20\x82\xcd\x82\xb1\x82\xcc\x8c\xbe\x8c\xea\x82\xf0" -"\x81\x75\x50\x79\x74\x68\x6f\x6e\x81\x76\x82\xc6\x96\xbc\x82\xc3" -"\x82\xaf\x82\xdc\x82\xb5\x82\xbd\x81\x42\x0a\x82\xb1\x82\xcc\x82" -"\xe6\x82\xa4\x82\xc8\x94\x77\x8c\x69\x82\xa9\x82\xe7\x90\xb6\x82" -"\xdc\x82\xea\x82\xbd\x20\x50\x79\x74\x68\x6f\x6e\x20\x82\xcc\x8c" -"\xbe\x8c\xea\x90\xdd\x8c\x76\x82\xcd\x81\x41\x81\x75\x83\x56\x83" -"\x93\x83\x76\x83\x8b\x81\x76\x82\xc5\x81\x75\x8f\x4b\x93\xbe\x82" -"\xaa\x97\x65\x88\xd5\x81\x76\x82\xc6\x82\xa2\x82\xa4\x96\xda\x95" -"\x57\x82\xc9\x8f\x64\x93\x5f\x82\xaa\x92\x75\x82\xa9\x82\xea\x82" -"\xc4\x82\xa2\x82\xdc\x82\xb7\x81\x42\x0a\x91\xbd\x82\xad\x82\xcc" -"\x83\x58\x83\x4e\x83\x8a\x83\x76\x83\x67\x8c\x6e\x8c\xbe\x8c\xea" -"\x82\xc5\x82\xcd\x83\x86\x81\x5b\x83\x55\x82\xcc\x96\xda\x90\xe6" -"\x82\xcc\x97\x98\x95\xd6\x90\xab\x82\xf0\x97\x44\x90\xe6\x82\xb5" -"\x82\xc4\x90\x46\x81\x58\x82\xc8\x8b\x40\x94\x5c\x82\xf0\x8c\xbe" -"\x8c\xea\x97\x76\x91\x66\x82\xc6\x82\xb5\x82\xc4\x8e\xe6\x82\xe8" -"\x93\xfc\x82\xea\x82\xe9\x8f\xea\x8d\x87\x82\xaa\x91\xbd\x82\xa2" -"\x82\xcc\x82\xc5\x82\xb7\x82\xaa\x81\x41\x50\x79\x74\x68\x6f\x6e" -"\x20\x82\xc5\x82\xcd\x82\xbb\x82\xa4\x82\xa2\x82\xc1\x82\xbd\x8f" -"\xac\x8d\xd7\x8d\x48\x82\xaa\x92\xc7\x89\xc1\x82\xb3\x82\xea\x82" -"\xe9\x82\xb1\x82\xc6\x82\xcd\x82\xa0\x82\xdc\x82\xe8\x82\xa0\x82" -"\xe8\x82\xdc\x82\xb9\x82\xf1\x81\x42\x0a\x8c\xbe\x8c\xea\x8e\xa9" -"\x91\xcc\x82\xcc\x8b\x40\x94\x5c\x82\xcd\x8d\xc5\x8f\xac\x8c\xc0" -"\x82\xc9\x89\x9f\x82\xb3\x82\xa6\x81\x41\x95\x4b\x97\x76\x82\xc8" -"\x8b\x40\x94\x5c\x82\xcd\x8a\x67\x92\xa3\x83\x82\x83\x57\x83\x85" -"\x81\x5b\x83\x8b\x82\xc6\x82\xb5\x82\xc4\x92\xc7\x89\xc1\x82\xb7" -"\x82\xe9\x81\x41\x82\xc6\x82\xa2\x82\xa4\x82\xcc\x82\xaa\x20\x50" -"\x79\x74\x68\x6f\x6e\x20\x82\xcc\x83\x7c\x83\x8a\x83\x56\x81\x5b" -"\x82\xc5\x82\xb7\x81\x42\x0a\x0a\x83\x6d\x82\xf5\x20\x83\x9e\x20" -"\x83\x67\x83\x4c\x88\x4b\x88\x79\x20\x98\x83\xfc\xd6\x20\xfc\xd2" -"\xfc\xe6\xfb\xd4\x0a", 
-"\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xaf\xe3\x80\x81\x31\x39\x39\x30\x20\xe5\xb9\xb4\xe3\x81" -"\x94\xe3\x82\x8d\xe3\x81\x8b\xe3\x82\x89\xe9\x96\x8b\xe5\xa7\x8b" -"\xe3\x81\x95\xe3\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3" -"\x81\x99\xe3\x80\x82\x0a\xe9\x96\x8b\xe7\x99\xba\xe8\x80\x85\xe3" -"\x81\xae\x20\x47\x75\x69\x64\x6f\x20\x76\x61\x6e\x20\x52\x6f\x73" -"\x73\x75\x6d\x20\xe3\x81\xaf\xe6\x95\x99\xe8\x82\xb2\xe7\x94\xa8" -"\xe3\x81\xae\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83\xa9\xe3" -"\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e\xe3\x80" -"\x8c\x41\x42\x43\xe3\x80\x8d\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba" -"\xe3\x81\xab\xe5\x8f\x82\xe5\x8a\xa0\xe3\x81\x97\xe3\x81\xa6\xe3" -"\x81\x84\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x81\x8c\xe3\x80" -"\x81\x41\x42\x43\x20\xe3\x81\xaf\xe5\xae\x9f\xe7\x94\xa8\xe4\xb8" -"\x8a\xe3\x81\xae\xe7\x9b\xae\xe7\x9a\x84\xe3\x81\xab\xe3\x81\xaf" -"\xe3\x81\x82\xe3\x81\xbe\xe3\x82\x8a\xe9\x81\xa9\xe3\x81\x97\xe3" -"\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x9b\xe3\x82\x93\xe3\x81" -"\xa7\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a\xe3\x81\x93\xe3\x81" -"\xae\xe3\x81\x9f\xe3\x82\x81\xe3\x80\x81\x47\x75\x69\x64\x6f\x20" -"\xe3\x81\xaf\xe3\x82\x88\xe3\x82\x8a\xe5\xae\x9f\xe7\x94\xa8\xe7" -"\x9a\x84\xe3\x81\xaa\xe3\x83\x97\xe3\x83\xad\xe3\x82\xb0\xe3\x83" -"\xa9\xe3\x83\x9f\xe3\x83\xb3\xe3\x82\xb0\xe8\xa8\x80\xe8\xaa\x9e" -"\xe3\x81\xae\xe9\x96\x8b\xe7\x99\xba\xe3\x82\x92\xe9\x96\x8b\xe5" -"\xa7\x8b\xe3\x81\x97\xe3\x80\x81\xe8\x8b\xb1\xe5\x9b\xbd\x20\x42" -"\x42\x53\x20\xe6\x94\xbe\xe9\x80\x81\xe3\x81\xae\xe3\x82\xb3\xe3" -"\x83\xa1\xe3\x83\x87\xe3\x82\xa3\xe7\x95\xaa\xe7\xb5\x84\xe3\x80" -"\x8c\xe3\x83\xa2\xe3\x83\xb3\xe3\x83\x86\xe3\x82\xa3\x20\xe3\x83" -"\x91\xe3\x82\xa4\xe3\x82\xbd\xe3\x83\xb3\xe3\x80\x8d\xe3\x81\xae" -"\xe3\x83\x95\xe3\x82\xa1\xe3\x83\xb3\xe3\x81\xa7\xe3\x81\x82\xe3" -"\x82\x8b\x20\x47\x75\x69\x64\x6f\x20\xe3\x81\xaf\xe3\x81\x93\xe3" 
-"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe3\x82\x92\xe3\x80\x8c\x50\x79" -"\x74\x68\x6f\x6e\xe3\x80\x8d\xe3\x81\xa8\xe5\x90\x8d\xe3\x81\xa5" -"\xe3\x81\x91\xe3\x81\xbe\xe3\x81\x97\xe3\x81\x9f\xe3\x80\x82\x0a" -"\xe3\x81\x93\xe3\x81\xae\xe3\x82\x88\xe3\x81\x86\xe3\x81\xaa\xe8" -"\x83\x8c\xe6\x99\xaf\xe3\x81\x8b\xe3\x82\x89\xe7\x94\x9f\xe3\x81" -"\xbe\xe3\x82\x8c\xe3\x81\x9f\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3" -"\x81\xae\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa8\xad\xe8\xa8\x88\xe3\x81" -"\xaf\xe3\x80\x81\xe3\x80\x8c\xe3\x82\xb7\xe3\x83\xb3\xe3\x83\x97" -"\xe3\x83\xab\xe3\x80\x8d\xe3\x81\xa7\xe3\x80\x8c\xe7\xbf\x92\xe5" -"\xbe\x97\xe3\x81\x8c\xe5\xae\xb9\xe6\x98\x93\xe3\x80\x8d\xe3\x81" -"\xa8\xe3\x81\x84\xe3\x81\x86\xe7\x9b\xae\xe6\xa8\x99\xe3\x81\xab" -"\xe9\x87\x8d\xe7\x82\xb9\xe3\x81\x8c\xe7\xbd\xae\xe3\x81\x8b\xe3" -"\x82\x8c\xe3\x81\xa6\xe3\x81\x84\xe3\x81\xbe\xe3\x81\x99\xe3\x80" -"\x82\x0a\xe5\xa4\x9a\xe3\x81\x8f\xe3\x81\xae\xe3\x82\xb9\xe3\x82" -"\xaf\xe3\x83\xaa\xe3\x83\x97\xe3\x83\x88\xe7\xb3\xbb\xe8\xa8\x80" -"\xe8\xaa\x9e\xe3\x81\xa7\xe3\x81\xaf\xe3\x83\xa6\xe3\x83\xbc\xe3" -"\x82\xb6\xe3\x81\xae\xe7\x9b\xae\xe5\x85\x88\xe3\x81\xae\xe5\x88" -"\xa9\xe4\xbe\xbf\xe6\x80\xa7\xe3\x82\x92\xe5\x84\xaa\xe5\x85\x88" -"\xe3\x81\x97\xe3\x81\xa6\xe8\x89\xb2\xe3\x80\x85\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x82\x92\xe8\xa8\x80\xe8\xaa\x9e\xe8\xa6" -"\x81\xe7\xb4\xa0\xe3\x81\xa8\xe3\x81\x97\xe3\x81\xa6\xe5\x8f\x96" -"\xe3\x82\x8a\xe5\x85\xa5\xe3\x82\x8c\xe3\x82\x8b\xe5\xa0\xb4\xe5" -"\x90\x88\xe3\x81\x8c\xe5\xa4\x9a\xe3\x81\x84\xe3\x81\xae\xe3\x81" -"\xa7\xe3\x81\x99\xe3\x81\x8c\xe3\x80\x81\x50\x79\x74\x68\x6f\x6e" -"\x20\xe3\x81\xa7\xe3\x81\xaf\xe3\x81\x9d\xe3\x81\x86\xe3\x81\x84" -"\xe3\x81\xa3\xe3\x81\x9f\xe5\xb0\x8f\xe7\xb4\xb0\xe5\xb7\xa5\xe3" -"\x81\x8c\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x95\xe3\x82\x8c\xe3\x82" -"\x8b\xe3\x81\x93\xe3\x81\xa8\xe3\x81\xaf\xe3\x81\x82\xe3\x81\xbe" -"\xe3\x82\x8a\xe3\x81\x82\xe3\x82\x8a\xe3\x81\xbe\xe3\x81\x9b\xe3" 
-"\x82\x93\xe3\x80\x82\x0a\xe8\xa8\x80\xe8\xaa\x9e\xe8\x87\xaa\xe4" -"\xbd\x93\xe3\x81\xae\xe6\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x9c" -"\x80\xe5\xb0\x8f\xe9\x99\x90\xe3\x81\xab\xe6\x8a\xbc\xe3\x81\x95" -"\xe3\x81\x88\xe3\x80\x81\xe5\xbf\x85\xe8\xa6\x81\xe3\x81\xaa\xe6" -"\xa9\x9f\xe8\x83\xbd\xe3\x81\xaf\xe6\x8b\xa1\xe5\xbc\xb5\xe3\x83" -"\xa2\xe3\x82\xb8\xe3\x83\xa5\xe3\x83\xbc\xe3\x83\xab\xe3\x81\xa8" -"\xe3\x81\x97\xe3\x81\xa6\xe8\xbf\xbd\xe5\x8a\xa0\xe3\x81\x99\xe3" -"\x82\x8b\xe3\x80\x81\xe3\x81\xa8\xe3\x81\x84\xe3\x81\x86\xe3\x81" -"\xae\xe3\x81\x8c\x20\x50\x79\x74\x68\x6f\x6e\x20\xe3\x81\xae\xe3" -"\x83\x9d\xe3\x83\xaa\xe3\x82\xb7\xe3\x83\xbc\xe3\x81\xa7\xe3\x81" -"\x99\xe3\x80\x82\x0a\x0a\xe3\x83\x8e\xe3\x81\x8b\xe3\x82\x9a\x20" -"\xe3\x83\x88\xe3\x82\x9a\x20\xe3\x83\x88\xe3\x82\xad\xef\xa8\xb6" -"\xef\xa8\xb9\x20\xf0\xa1\x9a\xb4\xf0\xaa\x8e\x8c\x20\xe9\xba\x80" -"\xe9\xbd\x81\xf0\xa9\x9b\xb0\x0a"), -} diff --git a/lib-python/2.7/test/crashers/README b/lib-python/2.7/test/crashers/README --- a/lib-python/2.7/test/crashers/README +++ b/lib-python/2.7/test/crashers/README @@ -1,20 +1,16 @@ -This directory only contains tests for outstanding bugs that cause -the interpreter to segfault. Ideally this directory should always -be empty. Sometimes it may not be easy to fix the underlying cause. +This directory only contains tests for outstanding bugs that cause the +interpreter to segfault. Ideally this directory should always be empty, but +sometimes it may not be easy to fix the underlying cause and the bug is deemed +too obscure to invest the effort. Each test should fail when run from the command line: ./python Lib/test/crashers/weakref_in_del.py -Each test should have a link to the bug report: +Put as much info into a docstring or comments to help determine the cause of the +failure, as well as a bugs.python.org issue number if it exists. Particularly +note if the cause is system or environment dependent and what the variables are. 
- # http://python.org/sf/BUG#
-
-Put as much info into a docstring or comments to help determine
-the cause of the failure. Particularly note if the cause is
-system or environment dependent and what the variables are.
-
-Once the crash is fixed, the test case should be moved into an appropriate
-test (even if it was originally from the test suite). This ensures the
-regression doesn't happen again. And if it does, it should be easier
-to track down.
+Once the crash is fixed, the test case should be moved into an appropriate test
+(even if it was originally from the test suite). This ensures the regression
+doesn't happen again. And if it does, it should be easier to track down.
diff --git a/lib-python/2.7/test/crashers/recursion_limit_too_high.py b/lib-python/2.7/test/crashers/recursion_limit_too_high.py
--- a/lib-python/2.7/test/crashers/recursion_limit_too_high.py
+++ b/lib-python/2.7/test/crashers/recursion_limit_too_high.py
@@ -5,7 +5,7 @@
 # file handles.
 
 # The point of this example is to show that sys.setrecursionlimit() is a
-# hack, and not a robust solution. This example simply exercices a path
+# hack, and not a robust solution. This example simply exercises a path
 # where it takes many C-level recursions, consuming a lot of stack
 # space, for each Python-level recursion. So 1000 times this amount of
 # stack space may be too much for standard platforms already.
diff --git a/lib-python/2.7/test/decimaltestdata/and.decTest b/lib-python/2.7/test/decimaltestdata/and.decTest
--- a/lib-python/2.7/test/decimaltestdata/and.decTest
+++ b/lib-python/2.7/test/decimaltestdata/and.decTest
@@ -1,338 +1,338 @@
-------------------------------------------------------------------------
--- and.decTest -- digitwise logical AND --
--- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. --
-------------------------------------------------------------------------
--- Please see the document "General Decimal Arithmetic Testcases" --
--- at http://www2.hursley.ibm.com/decimal for the description of --
--- these testcases. --
--- --
--- These testcases are experimental ('beta' versions), and they --
--- may contain errors. They are offered on an as-is basis. In --
--- particular, achieving the same results as the tests here is not --
--- a guarantee that an implementation complies with any Standard --
--- or specification. The tests are not exhaustive. --
--- --
--- Please send comments, suggestions, and corrections to the author: --
--- Mike Cowlishaw, IBM Fellow --
--- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK --
--- mfc at uk.ibm.com --
-------------------------------------------------------------------------
-version: 2.59
-
-extended: 1
-precision: 9
-rounding: half_up
-maxExponent: 999
-minExponent: -999
-
--- Sanity check (truth table)
-andx001 and 0 0 -> 0
-andx002 and 0 1 -> 0
-andx003 and 1 0 -> 0
-andx004 and 1 1 -> 1
-andx005 and 1100 1010 -> 1000
-andx006 and 1111 10 -> 10
-andx007 and 1111 1010 -> 1010
-
--- and at msd and msd-1
-andx010 and 000000000 000000000 -> 0
-andx011 and 000000000 100000000 -> 0
-andx012 and 100000000 000000000 -> 0
-andx013 and 100000000 100000000 -> 100000000
-andx014 and 000000000 000000000 -> 0
-andx015 and 000000000 010000000 -> 0
-andx016 and 010000000 000000000 -> 0
-andx017 and 010000000 010000000 -> 10000000
-
--- Various lengths
--- 123456789 123456789 123456789
-andx021 and 111111111 111111111 -> 111111111
-andx022 and 111111111111 111111111 -> 111111111
-andx023 and 111111111111 11111111 -> 11111111
-andx024 and 111111111 11111111 -> 11111111
-andx025 and 111111111 1111111 -> 1111111
-andx026 and 111111111111 111111 -> 111111
-andx027 and 111111111111 11111 -> 11111
-andx028 and 111111111111 1111 -> 1111
-andx029 and 111111111111 111 -> 111
-andx031 and 111111111111 11 -> 11
-andx032 and 111111111111 1 -> 1
-andx033 and 111111111111 1111111111 -> 111111111
-andx034 and 11111111111 11111111111 -> 111111111
-andx035 and 1111111111 111111111111 -> 111111111
-andx036 and 111111111 1111111111111 -> 111111111
-
-andx040 and 111111111 111111111111 -> 111111111
-andx041 and 11111111 111111111111 -> 11111111
-andx042 and 11111111 111111111 -> 11111111
-andx043 and 1111111 111111111 -> 1111111
-andx044 and 111111 111111111 -> 111111
-andx045 and 11111 111111111 -> 11111
-andx046 and 1111 111111111 -> 1111
-andx047 and 111 111111111 -> 111
-andx048 and 11 111111111 -> 11
-andx049 and 1 111111111 -> 1
-
-andx050 and 1111111111 1 -> 1
-andx051 and 111111111 1 -> 1
-andx052 and 11111111 1 -> 1
-andx053 and 1111111 1 -> 1
-andx054 and 111111 1 -> 1
-andx055 and 11111 1 -> 1
-andx056 and 1111 1 -> 1
-andx057 and 111 1 -> 1
-andx058 and 11 1 -> 1
-andx059 and 1 1 -> 1
-
-andx060 and 1111111111 0 -> 0
-andx061 and 111111111 0 -> 0
-andx062 and 11111111 0 -> 0
-andx063 and 1111111 0 -> 0
-andx064 and 111111 0 -> 0
-andx065 and 11111 0 -> 0
-andx066 and 1111 0 -> 0
-andx067 and 111 0 -> 0
-andx068 and 11 0 -> 0
-andx069 and 1 0 -> 0
-
-andx070 and 1 1111111111 -> 1
-andx071 and 1 111111111 -> 1
-andx072 and 1 11111111 -> 1
-andx073 and 1 1111111 -> 1
-andx074 and 1 111111 -> 1
-andx075 and 1 11111 -> 1
-andx076 and 1 1111 -> 1
-andx077 and 1 111 -> 1
-andx078 and 1 11 -> 1
-andx079 and 1 1 -> 1
-
-andx080 and 0 1111111111 -> 0
-andx081 and 0 111111111 -> 0
-andx082 and 0 11111111 -> 0
-andx083 and 0 1111111 -> 0
-andx084 and 0 111111 -> 0
-andx085 and 0 11111 -> 0
-andx086 and 0 1111 -> 0
-andx087 and 0 111 -> 0
-andx088 and 0 11 -> 0
-andx089 and 0 1 -> 0
-
-andx090 and 011111111 111111111 -> 11111111
-andx091 and 101111111 111111111 -> 101111111
-andx092 and 110111111 111111111 -> 110111111
-andx093 and 111011111 111111111 -> 111011111
-andx094 and 111101111 111111111 -> 111101111
-andx095 and 111110111 111111111 -> 111110111
-andx096 and 111111011 111111111 -> 111111011
-andx097 and 111111101 111111111 -> 111111101
-andx098 and 111111110 111111111 -> 111111110
-
-andx100 and 111111111 011111111 -> 11111111
-andx101 and 111111111 101111111 -> 101111111
-andx102 and 111111111 110111111 -> 110111111
-andx103 and 111111111 111011111 -> 111011111
-andx104 and 111111111 111101111 -> 111101111
-andx105 and 111111111 111110111 -> 111110111
-andx106 and 111111111 111111011 -> 111111011
-andx107 and 111111111 111111101 -> 111111101
-andx108 and 111111111 111111110 -> 111111110
-
--- non-0/1 should not be accepted, nor should signs
-andx220 and 111111112 111111111 -> NaN Invalid_operation
-andx221 and 333333333 333333333 -> NaN Invalid_operation
-andx222 and 555555555 555555555 -> NaN Invalid_operation
-andx223 and 777777777 777777777 -> NaN Invalid_operation
-andx224 and 999999999 999999999 -> NaN Invalid_operation
-andx225 and 222222222 999999999 -> NaN Invalid_operation
-andx226 and 444444444 999999999 -> NaN Invalid_operation
-andx227 and 666666666 999999999 -> NaN Invalid_operation
-andx228 and 888888888 999999999 -> NaN Invalid_operation
-andx229 and 999999999 222222222 -> NaN Invalid_operation
-andx230 and 999999999 444444444 -> NaN Invalid_operation
-andx231 and 999999999 666666666 -> NaN Invalid_operation
-andx232 and 999999999 888888888 -> NaN Invalid_operation
--- a few randoms
-andx240 and 567468689 -934981942 -> NaN Invalid_operation
-andx241 and 567367689 934981942 -> NaN Invalid_operation
-andx242 and -631917772 -706014634 -> NaN Invalid_operation
-andx243 and -756253257 138579234 -> NaN Invalid_operation
-andx244 and 835590149 567435400 -> NaN Invalid_operation
--- test MSD
-andx250 and 200000000 100000000 -> NaN Invalid_operation
-andx251 and 700000000 100000000 -> NaN Invalid_operation
-andx252 and 800000000 100000000 -> NaN Invalid_operation
-andx253 and 900000000 100000000 -> NaN Invalid_operation
-andx254 and 200000000 000000000 -> NaN Invalid_operation
-andx255 and 700000000 000000000 -> NaN Invalid_operation
-andx256 and 800000000 000000000 -> NaN Invalid_operation
-andx257 and 900000000 000000000 -> NaN Invalid_operation
-andx258 and 100000000 200000000 -> NaN Invalid_operation
-andx259 and 100000000 700000000 -> NaN Invalid_operation
-andx260 and 100000000 800000000 -> NaN Invalid_operation
-andx261 and 100000000 900000000 -> NaN Invalid_operation
-andx262 and 000000000 200000000 -> NaN Invalid_operation
-andx263 and 000000000 700000000 -> NaN Invalid_operation
-andx264 and 000000000 800000000 -> NaN Invalid_operation
-andx265 and 000000000 900000000 -> NaN Invalid_operation
--- test MSD-1
-andx270 and 020000000 100000000 -> NaN Invalid_operation
-andx271 and 070100000 100000000 -> NaN Invalid_operation
-andx272 and 080010000 100000001 -> NaN Invalid_operation
-andx273 and 090001000 100000010 -> NaN Invalid_operation
-andx274 and 100000100 020010100 -> NaN Invalid_operation
-andx275 and 100000000 070001000 -> NaN Invalid_operation
-andx276 and 100000010 080010100 -> NaN Invalid_operation
-andx277 and 100000000 090000010 -> NaN Invalid_operation
--- test LSD
-andx280 and 001000002 100000000 -> NaN Invalid_operation
-andx281 and 000000007 100000000 -> NaN Invalid_operation
-andx282 and 000000008 100000000 -> NaN Invalid_operation
-andx283 and 000000009 100000000 -> NaN Invalid_operation
-andx284 and 100000000 000100002 -> NaN Invalid_operation
-andx285 and 100100000 001000007 -> NaN Invalid_operation
-andx286 and 100010000 010000008 -> NaN Invalid_operation
-andx287 and 100001000 100000009 -> NaN Invalid_operation
--- test Middie
-andx288 and 001020000 100000000 -> NaN Invalid_operation
-andx289 and 000070001 100000000 -> NaN Invalid_operation
-andx290 and 000080000 100010000 -> NaN Invalid_operation
-andx291 and 000090000 100001000 -> NaN Invalid_operation
-andx292 and 100000010 000020100 -> NaN Invalid_operation
-andx293 and 100100000 000070010 -> NaN Invalid_operation
-andx294 and 100010100 000080001 -> NaN Invalid_operation
-andx295 and 100001000 000090000 -> NaN Invalid_operation
--- signs
-andx296 and -100001000 -000000000 -> NaN Invalid_operation
-andx297 and -100001000 000010000 -> NaN Invalid_operation
-andx298 and 100001000 -000000000 -> NaN Invalid_operation
-andx299 and 100001000 000011000 -> 1000
-
--- Nmax, Nmin, Ntiny
-andx331 and 2 9.99999999E+999 -> NaN Invalid_operation
-andx332 and 3 1E-999 -> NaN Invalid_operation
-andx333 and 4 1.00000000E-999 -> NaN Invalid_operation
-andx334 and 5 1E-1007 -> NaN Invalid_operation
-andx335 and 6 -1E-1007 -> NaN Invalid_operation
-andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation
-andx337 and 8 -1E-999 -> NaN Invalid_operation
-andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation
-andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation
-andx342 and 1E-999 01 -> NaN Invalid_operation
-andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation
-andx344 and 1E-1007 18 -> NaN Invalid_operation
-andx345 and -1E-1007 -10 -> NaN Invalid_operation
-andx346 and -1.00000000E-999 18 -> NaN Invalid_operation
-andx347 and -1E-999 10 -> NaN Invalid_operation
-andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation
-
--- A few other non-integers
-andx361 and 1.0 1 -> NaN Invalid_operation
-andx362 and 1E+1 1 -> NaN Invalid_operation
-andx363 and 0.0 1 -> NaN Invalid_operation
-andx364 and 0E+1 1 -> NaN Invalid_operation
-andx365 and 9.9 1 -> NaN Invalid_operation
-andx366 and 9E+1 1 -> NaN Invalid_operation
-andx371 and 0 1.0 -> NaN Invalid_operation
-andx372 and 0 1E+1 -> NaN Invalid_operation
-andx373 and 0 0.0 -> NaN Invalid_operation
-andx374 and 0 0E+1 -> NaN Invalid_operation
-andx375 and 0 9.9 -> NaN Invalid_operation
-andx376 and 0 9E+1 -> NaN Invalid_operation
-
--- All Specials are in error
-andx780 and -Inf -Inf -> NaN Invalid_operation
-andx781 and -Inf -1000 -> NaN Invalid_operation
-andx782 and -Inf -1 -> NaN Invalid_operation
-andx783 and -Inf -0 -> NaN Invalid_operation
-andx784 and -Inf 0 -> NaN Invalid_operation
-andx785 and -Inf 1 -> NaN Invalid_operation
-andx786 and -Inf 1000 -> NaN Invalid_operation
-andx787 and -1000 -Inf -> NaN Invalid_operation
-andx788 and -Inf -Inf -> NaN Invalid_operation
-andx789 and -1 -Inf -> NaN Invalid_operation
-andx790 and -0 -Inf -> NaN Invalid_operation
-andx791 and 0 -Inf -> NaN Invalid_operation
-andx792 and 1 -Inf -> NaN Invalid_operation
-andx793 and 1000 -Inf -> NaN Invalid_operation
-andx794 and Inf -Inf -> NaN Invalid_operation
-
-andx800 and Inf -Inf -> NaN Invalid_operation
-andx801 and Inf -1000 -> NaN Invalid_operation
-andx802 and Inf -1 -> NaN Invalid_operation
-andx803 and Inf -0 -> NaN Invalid_operation
-andx804 and Inf 0 -> NaN Invalid_operation
-andx805 and Inf 1 -> NaN Invalid_operation
-andx806 and Inf 1000 -> NaN Invalid_operation
-andx807 and Inf Inf -> NaN Invalid_operation
-andx808 and -1000 Inf -> NaN Invalid_operation
-andx809 and -Inf Inf -> NaN Invalid_operation
-andx810 and -1 Inf -> NaN Invalid_operation
-andx811 and -0 Inf -> NaN Invalid_operation
-andx812 and 0 Inf -> NaN Invalid_operation
-andx813 and 1 Inf -> NaN Invalid_operation
-andx814 and 1000 Inf -> NaN Invalid_operation
-andx815 and Inf Inf -> NaN Invalid_operation
-
-andx821 and NaN -Inf -> NaN Invalid_operation
-andx822 and NaN -1000 -> NaN Invalid_operation
-andx823 and NaN -1 -> NaN Invalid_operation
-andx824 and NaN -0 -> NaN Invalid_operation
-andx825 and NaN 0 -> NaN Invalid_operation
-andx826 and NaN 1 -> NaN Invalid_operation
-andx827 and NaN 1000 -> NaN Invalid_operation
-andx828 and NaN Inf -> NaN Invalid_operation
-andx829 and NaN NaN -> NaN Invalid_operation
-andx830 and -Inf NaN -> NaN Invalid_operation
-andx831 and -1000 NaN -> NaN Invalid_operation
-andx832 and -1 NaN -> NaN Invalid_operation
-andx833 and -0 NaN -> NaN Invalid_operation
-andx834 and 0 NaN -> NaN Invalid_operation
-andx835 and 1 NaN -> NaN Invalid_operation
-andx836 and 1000 NaN -> NaN Invalid_operation
-andx837 and Inf NaN -> NaN Invalid_operation
-
-andx841 and sNaN -Inf -> NaN Invalid_operation
-andx842 and sNaN -1000 -> NaN Invalid_operation
-andx843 and sNaN -1 -> NaN Invalid_operation
-andx844 and sNaN -0 -> NaN Invalid_operation
-andx845 and sNaN 0 -> NaN Invalid_operation
-andx846 and sNaN 1 -> NaN Invalid_operation
-andx847 and sNaN 1000 -> NaN Invalid_operation
-andx848 and sNaN NaN -> NaN Invalid_operation
-andx849 and sNaN sNaN -> NaN Invalid_operation
-andx850 and NaN sNaN -> NaN Invalid_operation
-andx851 and -Inf sNaN -> NaN Invalid_operation
-andx852 and -1000 sNaN -> NaN Invalid_operation
-andx853 and -1 sNaN -> NaN Invalid_operation
-andx854 and -0 sNaN -> NaN Invalid_operation
-andx855 and 0 sNaN -> NaN Invalid_operation
-andx856 and 1 sNaN -> NaN Invalid_operation
-andx857 and 1000 sNaN -> NaN Invalid_operation
-andx858 and Inf sNaN -> NaN Invalid_operation
-andx859 and NaN sNaN -> NaN Invalid_operation
-
--- propagating NaNs
-andx861 and NaN1 -Inf -> NaN Invalid_operation
-andx862 and +NaN2 -1000 -> NaN Invalid_operation
-andx863 and NaN3 1000 -> NaN Invalid_operation
-andx864 and NaN4 Inf -> NaN Invalid_operation
-andx865 and NaN5 +NaN6 -> NaN Invalid_operation
-andx866 and -Inf NaN7 -> NaN Invalid_operation
-andx867 and -1000 NaN8 -> NaN Invalid_operation
-andx868 and 1000 NaN9 -> NaN Invalid_operation
-andx869 and Inf +NaN10 -> NaN Invalid_operation
-andx871 and sNaN11 -Inf -> NaN Invalid_operation
-andx872 and sNaN12 -1000 -> NaN Invalid_operation
-andx873 and sNaN13 1000 -> NaN Invalid_operation
-andx874 and sNaN14 NaN17 -> NaN Invalid_operation
-andx875 and sNaN15 sNaN18 -> NaN Invalid_operation
-andx876 and NaN16 sNaN19 -> NaN Invalid_operation
-andx877 and -Inf +sNaN20 -> NaN Invalid_operation
-andx878 and -1000 sNaN21 -> NaN Invalid_operation
-andx879 and 1000 sNaN22 -> NaN Invalid_operation
-andx880 and Inf sNaN23 -> NaN Invalid_operation
-andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation
-andx882 and -NaN26 NaN28 -> NaN Invalid_operation
-andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation
-andx884 and 1000 -NaN30 -> NaN Invalid_operation
-andx885 and 1000 -sNaN31 -> NaN Invalid_operation
+------------------------------------------------------------------------
+-- and.decTest -- digitwise logical AND --
+-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. --
+------------------------------------------------------------------------
+-- Please see the document "General Decimal Arithmetic Testcases" --
+-- at http://www2.hursley.ibm.com/decimal for the description of --
+-- these testcases. --
+-- --
+-- These testcases are experimental ('beta' versions), and they --
+-- may contain errors. They are offered on an as-is basis. In --
+-- particular, achieving the same results as the tests here is not --
+-- a guarantee that an implementation complies with any Standard --
+-- or specification. The tests are not exhaustive. --
+-- --
+-- Please send comments, suggestions, and corrections to the author: --
+-- Mike Cowlishaw, IBM Fellow --
+-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK --
+-- mfc at uk.ibm.com --
+------------------------------------------------------------------------
+version: 2.59
+
+extended: 1
+precision: 9
+rounding: half_up
+maxExponent: 999
+minExponent: -999
+
+-- Sanity check (truth table)
+andx001 and 0 0 -> 0
+andx002 and 0 1 -> 0
+andx003 and 1 0 -> 0
+andx004 and 1 1 -> 1
+andx005 and 1100 1010 -> 1000
+andx006 and 1111 10 -> 10
+andx007 and 1111 1010 -> 1010
+
+-- and at msd and msd-1
+andx010 and 000000000 000000000 -> 0
+andx011 and 000000000 100000000 -> 0
+andx012 and 100000000 000000000 -> 0
+andx013 and 100000000 100000000 -> 100000000
+andx014 and 000000000 000000000 -> 0
+andx015 and 000000000 010000000 -> 0
+andx016 and 010000000 000000000 -> 0
+andx017 and 010000000 010000000 -> 10000000
+
+-- Various lengths
+-- 123456789 123456789 123456789
+andx021 and 111111111 111111111 -> 111111111
+andx022 and 111111111111 111111111 -> 111111111
+andx023 and 111111111111 11111111 -> 11111111
+andx024 and 111111111 11111111 -> 11111111
+andx025 and 111111111 1111111 -> 1111111
+andx026 and 111111111111 111111 -> 111111
+andx027 and 111111111111 11111 -> 11111
+andx028 and 111111111111 1111 -> 1111
+andx029 and 111111111111 111 -> 111
+andx031 and 111111111111 11 -> 11
+andx032 and 111111111111 1 -> 1
+andx033 and 111111111111 1111111111 -> 111111111
+andx034 and 11111111111 11111111111 -> 111111111
+andx035 and 1111111111 111111111111 -> 111111111
+andx036 and 111111111 1111111111111 -> 111111111
+
+andx040 and 111111111 111111111111 -> 111111111
+andx041 and 11111111 111111111111 -> 11111111
+andx042 and 11111111 111111111 -> 11111111
+andx043 and 1111111 111111111 -> 1111111
+andx044 and 111111 111111111 -> 111111
+andx045 and 11111 111111111 -> 11111
+andx046 and 1111 111111111 -> 1111
+andx047 and 111 111111111 -> 111
+andx048 and 11 111111111 -> 11
+andx049 and 1 111111111 -> 1
+
+andx050 and 1111111111 1 -> 1
+andx051 and 111111111 1 -> 1
+andx052 and 11111111 1 -> 1
+andx053 and 1111111 1 -> 1
+andx054 and 111111 1 -> 1
+andx055 and 11111 1 -> 1
+andx056 and 1111 1 -> 1
+andx057 and 111 1 -> 1
+andx058 and 11 1 -> 1
+andx059 and 1 1 -> 1
+
+andx060 and 1111111111 0 -> 0
+andx061 and 111111111 0 -> 0
+andx062 and 11111111 0 -> 0
+andx063 and 1111111 0 -> 0
+andx064 and 111111 0 -> 0
+andx065 and 11111 0 -> 0
+andx066 and 1111 0 -> 0
+andx067 and 111 0 -> 0
+andx068 and 11 0 -> 0
+andx069 and 1 0 -> 0
+
+andx070 and 1 1111111111 -> 1
+andx071 and 1 111111111 -> 1
+andx072 and 1 11111111 -> 1
+andx073 and 1 1111111 -> 1
+andx074 and 1 111111 -> 1
+andx075 and 1 11111 -> 1
+andx076 and 1 1111 -> 1
+andx077 and 1 111 -> 1
+andx078 and 1 11 -> 1
+andx079 and 1 1 -> 1
+
+andx080 and 0 1111111111 -> 0
+andx081 and 0 111111111 -> 0
+andx082 and 0 11111111 -> 0
+andx083 and 0 1111111 -> 0
+andx084 and 0 111111 -> 0
+andx085 and 0 11111 -> 0
+andx086 and 0 1111 -> 0
+andx087 and 0 111 -> 0
+andx088 and 0 11 -> 0
+andx089 and 0 1 -> 0
+
+andx090 and 011111111 111111111 -> 11111111
+andx091 and 101111111 111111111 -> 101111111
+andx092 and 110111111 111111111 -> 110111111
+andx093 and 111011111 111111111 -> 111011111
+andx094 and 111101111 111111111 -> 111101111
+andx095 and 111110111 111111111 -> 111110111
+andx096 and 111111011 111111111 -> 111111011
+andx097 and 111111101 111111111 -> 111111101
+andx098 and 111111110 111111111 -> 111111110
+
+andx100 and 111111111 011111111 -> 11111111
+andx101 and 111111111 101111111 -> 101111111
+andx102 and 111111111 110111111 -> 110111111
+andx103 and 111111111 111011111 -> 111011111
+andx104 and 111111111 111101111 -> 111101111
+andx105 and 111111111 111110111 -> 111110111
+andx106 and 111111111 111111011 -> 111111011
+andx107 and 111111111 111111101 -> 111111101
+andx108 and 111111111 111111110 -> 111111110
+
+-- non-0/1 should not be accepted, nor should signs
+andx220 and 111111112 111111111 -> NaN Invalid_operation
+andx221 and 333333333 333333333 -> NaN Invalid_operation
+andx222 and 555555555 555555555 -> NaN Invalid_operation
+andx223 and 777777777 777777777 -> NaN Invalid_operation
+andx224 and 999999999 999999999 -> NaN Invalid_operation
+andx225 and 222222222 999999999 -> NaN Invalid_operation
+andx226 and 444444444 999999999 -> NaN Invalid_operation
+andx227 and 666666666 999999999 -> NaN Invalid_operation
+andx228 and 888888888 999999999 -> NaN Invalid_operation
+andx229 and 999999999 222222222 -> NaN Invalid_operation
+andx230 and 999999999 444444444 -> NaN Invalid_operation
+andx231 and 999999999 666666666 -> NaN Invalid_operation
+andx232 and 999999999 888888888 -> NaN Invalid_operation
+-- a few randoms
+andx240 and 567468689 -934981942 -> NaN Invalid_operation
+andx241 and 567367689 934981942 -> NaN Invalid_operation
+andx242 and -631917772 -706014634 -> NaN Invalid_operation
+andx243 and -756253257 138579234 -> NaN Invalid_operation
+andx244 and 835590149 567435400 -> NaN Invalid_operation
+-- test MSD
+andx250 and 200000000 100000000 -> NaN Invalid_operation
+andx251 and 700000000 100000000 -> NaN Invalid_operation
+andx252 and 800000000 100000000 -> NaN Invalid_operation
+andx253 and 900000000 100000000 -> NaN Invalid_operation
+andx254 and 200000000 000000000 -> NaN Invalid_operation
+andx255 and 700000000 000000000 -> NaN Invalid_operation
+andx256 and 800000000 000000000 -> NaN Invalid_operation
+andx257 and 900000000 000000000 -> NaN Invalid_operation
+andx258 and 100000000 200000000 -> NaN Invalid_operation
+andx259 and 100000000 700000000 -> NaN Invalid_operation
+andx260 and 100000000 800000000 -> NaN Invalid_operation
+andx261 and 100000000 900000000 -> NaN Invalid_operation
+andx262 and 000000000 200000000 -> NaN Invalid_operation
+andx263 and 000000000 700000000 -> NaN Invalid_operation
+andx264 and 000000000 800000000 -> NaN Invalid_operation
+andx265 and 000000000 900000000 -> NaN Invalid_operation
+-- test MSD-1
+andx270 and 020000000 100000000 -> NaN Invalid_operation
+andx271 and 070100000 100000000 -> NaN Invalid_operation
+andx272 and 080010000 100000001 -> NaN Invalid_operation
+andx273 and 090001000 100000010 -> NaN Invalid_operation
+andx274 and 100000100 020010100 -> NaN Invalid_operation
+andx275 and 100000000 070001000 -> NaN Invalid_operation
+andx276 and 100000010 080010100 -> NaN Invalid_operation
+andx277 and 100000000 090000010 -> NaN Invalid_operation
+-- test LSD
+andx280 and 001000002 100000000 -> NaN Invalid_operation
+andx281 and 000000007 100000000 -> NaN Invalid_operation
+andx282 and 000000008 100000000 -> NaN Invalid_operation
+andx283 and 000000009 100000000 -> NaN Invalid_operation
+andx284 and 100000000 000100002 -> NaN Invalid_operation
+andx285 and 100100000 001000007 -> NaN Invalid_operation
+andx286 and 100010000 010000008 -> NaN Invalid_operation
+andx287 and 100001000 100000009 -> NaN Invalid_operation
+-- test Middie
+andx288 and 001020000 100000000 -> NaN Invalid_operation
+andx289 and 000070001 100000000 -> NaN Invalid_operation
+andx290 and 000080000 100010000 -> NaN Invalid_operation
+andx291 and 000090000 100001000 -> NaN Invalid_operation
+andx292 and 100000010 000020100 -> NaN Invalid_operation
+andx293 and 100100000 000070010 -> NaN Invalid_operation
+andx294 and 100010100 000080001 -> NaN Invalid_operation
+andx295 and 100001000 000090000 -> NaN Invalid_operation
+-- signs
+andx296 and -100001000 -000000000 -> NaN Invalid_operation
+andx297 and -100001000 000010000 -> NaN Invalid_operation
+andx298 and 100001000 -000000000 -> NaN Invalid_operation
+andx299 and 100001000 000011000 -> 1000
+
+-- Nmax, Nmin, Ntiny
+andx331 and 2 9.99999999E+999 -> NaN Invalid_operation
+andx332 and 3 1E-999 -> NaN Invalid_operation
+andx333 and 4 1.00000000E-999 -> NaN Invalid_operation
+andx334 and 5 1E-1007 -> NaN Invalid_operation
+andx335 and 6 -1E-1007 -> NaN Invalid_operation
+andx336 and 7 -1.00000000E-999 -> NaN Invalid_operation
+andx337 and 8 -1E-999 -> NaN Invalid_operation
+andx338 and 9 -9.99999999E+999 -> NaN Invalid_operation
+andx341 and 9.99999999E+999 -18 -> NaN Invalid_operation
+andx342 and 1E-999 01 -> NaN Invalid_operation
+andx343 and 1.00000000E-999 -18 -> NaN Invalid_operation
+andx344 and 1E-1007 18 -> NaN Invalid_operation
+andx345 and -1E-1007 -10 -> NaN Invalid_operation
+andx346 and -1.00000000E-999 18 -> NaN Invalid_operation
+andx347 and -1E-999 10 -> NaN Invalid_operation
+andx348 and -9.99999999E+999 -18 -> NaN Invalid_operation
+
+-- A few other non-integers
+andx361 and 1.0 1 -> NaN Invalid_operation
+andx362 and 1E+1 1 -> NaN Invalid_operation
+andx363 and 0.0 1 -> NaN Invalid_operation
+andx364 and 0E+1 1 -> NaN Invalid_operation
+andx365 and 9.9 1 -> NaN Invalid_operation
+andx366 and 9E+1 1 -> NaN Invalid_operation
+andx371 and 0 1.0 -> NaN Invalid_operation
+andx372 and 0 1E+1 -> NaN Invalid_operation
+andx373 and 0 0.0 -> NaN Invalid_operation
+andx374 and 0 0E+1 -> NaN Invalid_operation
+andx375 and 0 9.9 -> NaN Invalid_operation
+andx376 and 0 9E+1 -> NaN Invalid_operation
+
+-- All Specials are in error
+andx780 and -Inf -Inf -> NaN Invalid_operation
+andx781 and -Inf -1000 -> NaN Invalid_operation
+andx782 and -Inf -1 -> NaN Invalid_operation
+andx783 and -Inf -0 -> NaN Invalid_operation
+andx784 and -Inf 0 -> NaN Invalid_operation
+andx785 and -Inf 1 -> NaN Invalid_operation
+andx786 and -Inf 1000 -> NaN Invalid_operation
+andx787 and -1000 -Inf -> NaN Invalid_operation
+andx788 and -Inf -Inf -> NaN Invalid_operation
+andx789 and -1 -Inf -> NaN Invalid_operation
+andx790 and -0 -Inf -> NaN Invalid_operation
+andx791 and 0 -Inf -> NaN Invalid_operation
+andx792 and 1 -Inf -> NaN Invalid_operation
+andx793 and 1000 -Inf -> NaN Invalid_operation
+andx794 and Inf -Inf -> NaN Invalid_operation
+
+andx800 and Inf -Inf -> NaN Invalid_operation
+andx801 and Inf -1000 -> NaN Invalid_operation
+andx802 and Inf -1 -> NaN Invalid_operation
+andx803 and Inf -0 -> NaN Invalid_operation
+andx804 and Inf 0 -> NaN Invalid_operation
+andx805 and Inf 1 -> NaN Invalid_operation
+andx806 and Inf 1000 -> NaN Invalid_operation
+andx807 and Inf Inf -> NaN Invalid_operation
+andx808 and -1000 Inf -> NaN Invalid_operation
+andx809 and -Inf Inf -> NaN Invalid_operation
+andx810 and -1 Inf -> NaN Invalid_operation
+andx811 and -0 Inf -> NaN Invalid_operation
+andx812 and 0 Inf -> NaN Invalid_operation
+andx813 and 1 Inf -> NaN Invalid_operation
+andx814 and 1000 Inf -> NaN Invalid_operation
+andx815 and Inf Inf -> NaN Invalid_operation
+
+andx821 and NaN -Inf -> NaN Invalid_operation
+andx822 and NaN -1000 -> NaN Invalid_operation
+andx823 and NaN -1 -> NaN Invalid_operation
+andx824 and NaN -0 -> NaN Invalid_operation
+andx825 and NaN 0 -> NaN Invalid_operation
+andx826 and NaN 1 -> NaN Invalid_operation
+andx827 and NaN 1000 -> NaN Invalid_operation
+andx828 and NaN Inf -> NaN Invalid_operation
+andx829 and NaN NaN -> NaN Invalid_operation
+andx830 and -Inf NaN -> NaN Invalid_operation
+andx831 and -1000 NaN -> NaN Invalid_operation
+andx832 and -1 NaN -> NaN Invalid_operation
+andx833 and -0 NaN -> NaN Invalid_operation
+andx834 and 0 NaN -> NaN Invalid_operation
+andx835 and 1 NaN -> NaN Invalid_operation
+andx836 and 1000 NaN -> NaN Invalid_operation
+andx837 and Inf NaN -> NaN Invalid_operation
+
+andx841 and sNaN -Inf -> NaN Invalid_operation
+andx842 and sNaN -1000 -> NaN Invalid_operation
+andx843 and sNaN -1 -> NaN Invalid_operation
+andx844 and sNaN -0 -> NaN Invalid_operation
+andx845 and sNaN 0 -> NaN Invalid_operation
+andx846 and sNaN 1 -> NaN Invalid_operation
+andx847 and sNaN 1000 -> NaN Invalid_operation
+andx848 and sNaN NaN -> NaN Invalid_operation
+andx849 and sNaN sNaN -> NaN Invalid_operation
+andx850 and NaN sNaN -> NaN Invalid_operation
+andx851 and -Inf sNaN -> NaN Invalid_operation
+andx852 and -1000 sNaN -> NaN Invalid_operation
+andx853 and -1 sNaN -> NaN Invalid_operation
+andx854 and -0 sNaN -> NaN Invalid_operation
+andx855 and 0 sNaN -> NaN Invalid_operation
+andx856 and 1 sNaN -> NaN Invalid_operation
+andx857 and 1000 sNaN -> NaN Invalid_operation
+andx858 and Inf sNaN -> NaN Invalid_operation
+andx859 and NaN sNaN -> NaN Invalid_operation
+
+-- propagating NaNs
+andx861 and NaN1 -Inf -> NaN Invalid_operation
+andx862 and +NaN2 -1000 -> NaN Invalid_operation
+andx863 and NaN3 1000 -> NaN Invalid_operation
+andx864 and NaN4 Inf -> NaN Invalid_operation
+andx865 and NaN5 +NaN6 -> NaN Invalid_operation
+andx866 and -Inf NaN7 -> NaN Invalid_operation
+andx867 and -1000 NaN8 -> NaN Invalid_operation
+andx868 and 1000 NaN9 -> NaN Invalid_operation
+andx869 and Inf +NaN10 -> NaN Invalid_operation
+andx871 and sNaN11 -Inf -> NaN Invalid_operation
+andx872 and sNaN12 -1000 -> NaN Invalid_operation
+andx873 and sNaN13 1000 -> NaN Invalid_operation
+andx874 and sNaN14 NaN17 -> NaN Invalid_operation
+andx875 and sNaN15 sNaN18 -> NaN Invalid_operation
+andx876 and NaN16 sNaN19 -> NaN Invalid_operation
+andx877 and -Inf +sNaN20 -> NaN Invalid_operation
+andx878 and -1000 sNaN21 -> NaN Invalid_operation
+andx879 and 1000 sNaN22 -> NaN Invalid_operation
+andx880 and Inf sNaN23 -> NaN Invalid_operation
+andx881 and +NaN25 +sNaN24 -> NaN Invalid_operation
+andx882 and -NaN26 NaN28 -> NaN Invalid_operation
+andx883 and -sNaN27 sNaN29 -> NaN Invalid_operation
+andx884 and 1000 -NaN30 -> NaN Invalid_operation
+andx885 and 1000 -sNaN31 -> NaN Invalid_operation
diff --git a/lib-python/2.7/test/decimaltestdata/class.decTest b/lib-python/2.7/test/decimaltestdata/class.decTest
--- a/lib-python/2.7/test/decimaltestdata/class.decTest
+++ b/lib-python/2.7/test/decimaltestdata/class.decTest
@@ -1,131 +1,131 @@
-------------------------------------------------------------------------
--- class.decTest -- Class operations --
--- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. --
-------------------------------------------------------------------------
--- Please see the document "General Decimal Arithmetic Testcases" --
--- at http://www2.hursley.ibm.com/decimal for the description of --
--- these testcases. --
--- --
--- These testcases are experimental ('beta' versions), and they --
--- may contain errors. They are offered on an as-is basis. In --
--- particular, achieving the same results as the tests here is not --
--- a guarantee that an implementation complies with any Standard --
--- or specification. The tests are not exhaustive.
-- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- [New 2006.11.27] - -precision: 9 -maxExponent: 999 -minExponent: -999 -extended: 1 -clamp: 1 -rounding: half_even - -clasx001 class 0 -> +Zero -clasx002 class 0.00 -> +Zero -clasx003 class 0E+5 -> +Zero -clasx004 class 1E-1007 -> +Subnormal -clasx005 class 0.1E-999 -> +Subnormal -clasx006 class 0.99999999E-999 -> +Subnormal -clasx007 class 1.00000000E-999 -> +Normal -clasx008 class 1E-999 -> +Normal -clasx009 class 1E-100 -> +Normal -clasx010 class 1E-10 -> +Normal -clasx012 class 1E-1 -> +Normal -clasx013 class 1 -> +Normal -clasx014 class 2.50 -> +Normal -clasx015 class 100.100 -> +Normal -clasx016 class 1E+30 -> +Normal -clasx017 class 1E+999 -> +Normal -clasx018 class 9.99999999E+999 -> +Normal -clasx019 class Inf -> +Infinity - -clasx021 class -0 -> -Zero -clasx022 class -0.00 -> -Zero -clasx023 class -0E+5 -> -Zero -clasx024 class -1E-1007 -> -Subnormal -clasx025 class -0.1E-999 -> -Subnormal -clasx026 class -0.99999999E-999 -> -Subnormal -clasx027 class -1.00000000E-999 -> -Normal -clasx028 class -1E-999 -> -Normal -clasx029 class -1E-100 -> -Normal -clasx030 class -1E-10 -> -Normal -clasx032 class -1E-1 -> -Normal -clasx033 class -1 -> -Normal -clasx034 class -2.50 -> -Normal -clasx035 class -100.100 -> -Normal -clasx036 class -1E+30 -> -Normal -clasx037 class -1E+999 -> -Normal -clasx038 class -9.99999999E+999 -> -Normal -clasx039 class -Inf -> -Infinity - -clasx041 class NaN -> NaN -clasx042 class -NaN -> NaN -clasx043 class +NaN12345 -> NaN -clasx044 class sNaN -> sNaN -clasx045 class -sNaN -> sNaN -clasx046 class +sNaN12345 -> sNaN - - --- decimal64 bounds - -precision: 16 -maxExponent: 384 -minExponent: -383 -clamp: 1 -rounding: half_even - -clasx201 class 
0 -> +Zero -clasx202 class 0.00 -> +Zero -clasx203 class 0E+5 -> +Zero -clasx204 class 1E-396 -> +Subnormal -clasx205 class 0.1E-383 -> +Subnormal -clasx206 class 0.999999999999999E-383 -> +Subnormal -clasx207 class 1.000000000000000E-383 -> +Normal -clasx208 class 1E-383 -> +Normal -clasx209 class 1E-100 -> +Normal -clasx210 class 1E-10 -> +Normal -clasx212 class 1E-1 -> +Normal -clasx213 class 1 -> +Normal -clasx214 class 2.50 -> +Normal -clasx215 class 100.100 -> +Normal -clasx216 class 1E+30 -> +Normal -clasx217 class 1E+384 -> +Normal -clasx218 class 9.999999999999999E+384 -> +Normal -clasx219 class Inf -> +Infinity - -clasx221 class -0 -> -Zero -clasx222 class -0.00 -> -Zero -clasx223 class -0E+5 -> -Zero -clasx224 class -1E-396 -> -Subnormal -clasx225 class -0.1E-383 -> -Subnormal -clasx226 class -0.999999999999999E-383 -> -Subnormal -clasx227 class -1.000000000000000E-383 -> -Normal -clasx228 class -1E-383 -> -Normal -clasx229 class -1E-100 -> -Normal -clasx230 class -1E-10 -> -Normal -clasx232 class -1E-1 -> -Normal -clasx233 class -1 -> -Normal -clasx234 class -2.50 -> -Normal -clasx235 class -100.100 -> -Normal -clasx236 class -1E+30 -> -Normal -clasx237 class -1E+384 -> -Normal -clasx238 class -9.999999999999999E+384 -> -Normal -clasx239 class -Inf -> -Infinity - -clasx241 class NaN -> NaN -clasx242 class -NaN -> NaN -clasx243 class +NaN12345 -> NaN -clasx244 class sNaN -> sNaN -clasx245 class -sNaN -> sNaN -clasx246 class +sNaN12345 -> sNaN - - - +------------------------------------------------------------------------ +-- class.decTest -- Class operations -- +-- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- +------------------------------------------------------------------------ +-- Please see the document "General Decimal Arithmetic Testcases" -- +-- at http://www2.hursley.ibm.com/decimal for the description of -- +-- these testcases. 
-- +-- -- +-- These testcases are experimental ('beta' versions), and they -- +-- may contain errors. They are offered on an as-is basis. In -- +-- particular, achieving the same results as the tests here is not -- +-- a guarantee that an implementation complies with any Standard -- +-- or specification. The tests are not exhaustive. -- +-- -- +-- Please send comments, suggestions, and corrections to the author: -- +-- Mike Cowlishaw, IBM Fellow -- +-- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- +-- mfc at uk.ibm.com -- +------------------------------------------------------------------------ +version: 2.59 + +-- [New 2006.11.27] + +precision: 9 +maxExponent: 999 +minExponent: -999 +extended: 1 +clamp: 1 +rounding: half_even + +clasx001 class 0 -> +Zero +clasx002 class 0.00 -> +Zero +clasx003 class 0E+5 -> +Zero +clasx004 class 1E-1007 -> +Subnormal +clasx005 class 0.1E-999 -> +Subnormal +clasx006 class 0.99999999E-999 -> +Subnormal +clasx007 class 1.00000000E-999 -> +Normal +clasx008 class 1E-999 -> +Normal +clasx009 class 1E-100 -> +Normal +clasx010 class 1E-10 -> +Normal +clasx012 class 1E-1 -> +Normal +clasx013 class 1 -> +Normal +clasx014 class 2.50 -> +Normal +clasx015 class 100.100 -> +Normal +clasx016 class 1E+30 -> +Normal +clasx017 class 1E+999 -> +Normal +clasx018 class 9.99999999E+999 -> +Normal +clasx019 class Inf -> +Infinity + +clasx021 class -0 -> -Zero +clasx022 class -0.00 -> -Zero +clasx023 class -0E+5 -> -Zero +clasx024 class -1E-1007 -> -Subnormal +clasx025 class -0.1E-999 -> -Subnormal +clasx026 class -0.99999999E-999 -> -Subnormal +clasx027 class -1.00000000E-999 -> -Normal +clasx028 class -1E-999 -> -Normal +clasx029 class -1E-100 -> -Normal +clasx030 class -1E-10 -> -Normal +clasx032 class -1E-1 -> -Normal +clasx033 class -1 -> -Normal +clasx034 class -2.50 -> -Normal +clasx035 class -100.100 -> -Normal +clasx036 class -1E+30 -> -Normal +clasx037 class -1E+999 -> -Normal +clasx038 class -9.99999999E+999 -> -Normal +clasx039 
class -Inf -> -Infinity + +clasx041 class NaN -> NaN +clasx042 class -NaN -> NaN +clasx043 class +NaN12345 -> NaN +clasx044 class sNaN -> sNaN +clasx045 class -sNaN -> sNaN +clasx046 class +sNaN12345 -> sNaN + + +-- decimal64 bounds + +precision: 16 +maxExponent: 384 +minExponent: -383 +clamp: 1 +rounding: half_even + +clasx201 class 0 -> +Zero +clasx202 class 0.00 -> +Zero +clasx203 class 0E+5 -> +Zero +clasx204 class 1E-396 -> +Subnormal +clasx205 class 0.1E-383 -> +Subnormal +clasx206 class 0.999999999999999E-383 -> +Subnormal +clasx207 class 1.000000000000000E-383 -> +Normal +clasx208 class 1E-383 -> +Normal +clasx209 class 1E-100 -> +Normal +clasx210 class 1E-10 -> +Normal +clasx212 class 1E-1 -> +Normal +clasx213 class 1 -> +Normal +clasx214 class 2.50 -> +Normal +clasx215 class 100.100 -> +Normal +clasx216 class 1E+30 -> +Normal +clasx217 class 1E+384 -> +Normal +clasx218 class 9.999999999999999E+384 -> +Normal +clasx219 class Inf -> +Infinity + +clasx221 class -0 -> -Zero +clasx222 class -0.00 -> -Zero +clasx223 class -0E+5 -> -Zero +clasx224 class -1E-396 -> -Subnormal +clasx225 class -0.1E-383 -> -Subnormal +clasx226 class -0.999999999999999E-383 -> -Subnormal +clasx227 class -1.000000000000000E-383 -> -Normal +clasx228 class -1E-383 -> -Normal +clasx229 class -1E-100 -> -Normal +clasx230 class -1E-10 -> -Normal +clasx232 class -1E-1 -> -Normal +clasx233 class -1 -> -Normal +clasx234 class -2.50 -> -Normal +clasx235 class -100.100 -> -Normal +clasx236 class -1E+30 -> -Normal +clasx237 class -1E+384 -> -Normal +clasx238 class -9.999999999999999E+384 -> -Normal +clasx239 class -Inf -> -Infinity + +clasx241 class NaN -> NaN +clasx242 class -NaN -> NaN +clasx243 class +NaN12345 -> NaN +clasx244 class sNaN -> sNaN +clasx245 class -sNaN -> sNaN +clasx246 class +sNaN12345 -> sNaN + + + diff --git a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest --- 
a/lib-python/2.7/test/decimaltestdata/comparetotal.decTest +++ b/lib-python/2.7/test/decimaltestdata/comparetotal.decTest @@ -1,798 +1,798 @@ ------------------------------------------------------------------------- --- comparetotal.decTest -- decimal comparison using total ordering -- --- Copyright (c) IBM Corporation, 1981, 2008. All rights reserved. -- ------------------------------------------------------------------------- --- Please see the document "General Decimal Arithmetic Testcases" -- --- at http://www2.hursley.ibm.com/decimal for the description of -- --- these testcases. -- --- -- --- These testcases are experimental ('beta' versions), and they -- --- may contain errors. They are offered on an as-is basis. In -- --- particular, achieving the same results as the tests here is not -- --- a guarantee that an implementation complies with any Standard -- --- or specification. The tests are not exhaustive. -- --- -- --- Please send comments, suggestions, and corrections to the author: -- --- Mike Cowlishaw, IBM Fellow -- --- IBM UK, PO Box 31, Birmingham Road, Warwick CV34 5JL, UK -- --- mfc at uk.ibm.com -- ------------------------------------------------------------------------- -version: 2.59 - --- Note that we cannot assume add/subtract tests cover paths adequately, --- here, because the code might be quite different (comparison cannot --- overflow or underflow, so actual subtractions are not necessary). --- Similarly, comparetotal will have some radically different paths --- than compare. 
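[Editor's note, not part of the patch: the comment just above explains that `comparetotal` uses the specification's total ordering rather than numeric comparison — numerically equal values with different exponents compare unequal, and NaNs are ordered instead of unordered. The cotx094–cotx099 examples below can be sanity-checked against CPython's `decimal` module, whose `Decimal.compare_total` implements the same operation:]

```python
from decimal import Decimal

# Total ordering distinguishes representations that compare numerically equal:
# 12.30 has exponent -2 and 12.3 has exponent -1, so 12.30 sorts first (cotx096).
assert Decimal("12.73").compare_total(Decimal("127.9")) == Decimal("-1")  # cotx094
assert Decimal("12.30").compare_total(Decimal("12.3")) == Decimal("-1")   # cotx096
assert Decimal("12.30").compare_total(Decimal("12.30")) == Decimal("0")   # cotx097
assert Decimal("12.3").compare_total(Decimal("12.300")) == Decimal("1")   # cotx098
# Unlike compare(), NaN participates in the ordering (it sorts above finites):
assert Decimal("12.3").compare_total(Decimal("NaN")) == Decimal("-1")     # cotx099
```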
- -extended: 1 -precision: 16 -rounding: half_up -maxExponent: 384 -minExponent: -383 - --- sanity checks -cotx001 comparetotal -2 -2 -> 0 -cotx002 comparetotal -2 -1 -> -1 -cotx003 comparetotal -2 0 -> -1 -cotx004 comparetotal -2 1 -> -1 -cotx005 comparetotal -2 2 -> -1 -cotx006 comparetotal -1 -2 -> 1 -cotx007 comparetotal -1 -1 -> 0 -cotx008 comparetotal -1 0 -> -1 -cotx009 comparetotal -1 1 -> -1 -cotx010 comparetotal -1 2 -> -1 -cotx011 comparetotal 0 -2 -> 1 -cotx012 comparetotal 0 -1 -> 1 -cotx013 comparetotal 0 0 -> 0 -cotx014 comparetotal 0 1 -> -1 -cotx015 comparetotal 0 2 -> -1 -cotx016 comparetotal 1 -2 -> 1 -cotx017 comparetotal 1 -1 -> 1 -cotx018 comparetotal 1 0 -> 1 -cotx019 comparetotal 1 1 -> 0 -cotx020 comparetotal 1 2 -> -1 -cotx021 comparetotal 2 -2 -> 1 -cotx022 comparetotal 2 -1 -> 1 -cotx023 comparetotal 2 0 -> 1 -cotx025 comparetotal 2 1 -> 1 -cotx026 comparetotal 2 2 -> 0 - -cotx031 comparetotal -20 -20 -> 0 -cotx032 comparetotal -20 -10 -> -1 -cotx033 comparetotal -20 00 -> -1 -cotx034 comparetotal -20 10 -> -1 -cotx035 comparetotal -20 20 -> -1 -cotx036 comparetotal -10 -20 -> 1 -cotx037 comparetotal -10 -10 -> 0 -cotx038 comparetotal -10 00 -> -1 -cotx039 comparetotal -10 10 -> -1 -cotx040 comparetotal -10 20 -> -1 -cotx041 comparetotal 00 -20 -> 1 -cotx042 comparetotal 00 -10 -> 1 -cotx043 comparetotal 00 00 -> 0 -cotx044 comparetotal 00 10 -> -1 -cotx045 comparetotal 00 20 -> -1 -cotx046 comparetotal 10 -20 -> 1 -cotx047 comparetotal 10 -10 -> 1 -cotx048 comparetotal 10 00 -> 1 -cotx049 comparetotal 10 10 -> 0 -cotx050 comparetotal 10 20 -> -1 -cotx051 comparetotal 20 -20 -> 1 -cotx052 comparetotal 20 -10 -> 1 -cotx053 comparetotal 20 00 -> 1 -cotx055 comparetotal 20 10 -> 1 -cotx056 comparetotal 20 20 -> 0 - -cotx061 comparetotal -2.0 -2.0 -> 0 -cotx062 comparetotal -2.0 -1.0 -> -1 -cotx063 comparetotal -2.0 0.0 -> -1 -cotx064 comparetotal -2.0 1.0 -> -1 -cotx065 comparetotal -2.0 2.0 -> -1 -cotx066 comparetotal -1.0 -2.0 -> 1 
-cotx067 comparetotal -1.0 -1.0 -> 0 -cotx068 comparetotal -1.0 0.0 -> -1 -cotx069 comparetotal -1.0 1.0 -> -1 -cotx070 comparetotal -1.0 2.0 -> -1 -cotx071 comparetotal 0.0 -2.0 -> 1 -cotx072 comparetotal 0.0 -1.0 -> 1 -cotx073 comparetotal 0.0 0.0 -> 0 -cotx074 comparetotal 0.0 1.0 -> -1 -cotx075 comparetotal 0.0 2.0 -> -1 -cotx076 comparetotal 1.0 -2.0 -> 1 -cotx077 comparetotal 1.0 -1.0 -> 1 -cotx078 comparetotal 1.0 0.0 -> 1 -cotx079 comparetotal 1.0 1.0 -> 0 -cotx080 comparetotal 1.0 2.0 -> -1 -cotx081 comparetotal 2.0 -2.0 -> 1 -cotx082 comparetotal 2.0 -1.0 -> 1 -cotx083 comparetotal 2.0 0.0 -> 1 -cotx085 comparetotal 2.0 1.0 -> 1 -cotx086 comparetotal 2.0 2.0 -> 0 - --- now some cases which might overflow if subtract were used -maxexponent: 999999999 -minexponent: -999999999 -cotx090 comparetotal 9.99999999E+999999999 9.99999999E+999999999 -> 0 -cotx091 comparetotal -9.99999999E+999999999 9.99999999E+999999999 -> -1 -cotx092 comparetotal 9.99999999E+999999999 -9.99999999E+999999999 -> 1 -cotx093 comparetotal -9.99999999E+999999999 -9.99999999E+999999999 -> 0 - --- Examples -cotx094 comparetotal 12.73 127.9 -> -1 -cotx095 comparetotal -127 12 -> -1 -cotx096 comparetotal 12.30 12.3 -> -1 -cotx097 comparetotal 12.30 12.30 -> 0 -cotx098 comparetotal 12.3 12.300 -> 1 -cotx099 comparetotal 12.3 NaN -> -1 - --- some differing length/exponent cases --- in this first group, compare would compare all equal -cotx100 comparetotal 7.0 7.0 -> 0 -cotx101 comparetotal 7.0 7 -> -1 -cotx102 comparetotal 7 7.0 -> 1 -cotx103 comparetotal 7E+0 7.0 -> 1 -cotx104 comparetotal 70E-1 7.0 -> 0 -cotx105 comparetotal 0.7E+1 7 -> 0 -cotx106 comparetotal 70E-1 7 -> -1 -cotx107 comparetotal 7.0 7E+0 -> -1 -cotx108 comparetotal 7.0 70E-1 -> 0 -cotx109 comparetotal 7 0.7E+1 -> 0 -cotx110 comparetotal 7 70E-1 -> 1 - -cotx120 comparetotal 8.0 7.0 -> 1 -cotx121 comparetotal 8.0 7 -> 1 -cotx122 comparetotal 8 7.0 -> 1 -cotx123 comparetotal 8E+0 7.0 -> 1 -cotx124 comparetotal 80E-1 7.0 -> 1 
-cotx125 comparetotal 0.8E+1 7 -> 1 -cotx126 comparetotal 80E-1 7 -> 1 -cotx127 comparetotal 8.0 7E+0 -> 1 -cotx128 comparetotal 8.0 70E-1 -> 1 -cotx129 comparetotal 8 0.7E+1 -> 1 -cotx130 comparetotal 8 70E-1 -> 1 - -cotx140 comparetotal 8.0 9.0 -> -1 -cotx141 comparetotal 8.0 9 -> -1 -cotx142 comparetotal 8 9.0 -> -1 -cotx143 comparetotal 8E+0 9.0 -> -1 -cotx144 comparetotal 80E-1 9.0 -> -1 -cotx145 comparetotal 0.8E+1 9 -> -1 -cotx146 comparetotal 80E-1 9 -> -1 -cotx147 comparetotal 8.0 9E+0 -> -1 -cotx148 comparetotal 8.0 90E-1 -> -1 -cotx149 comparetotal 8 0.9E+1 -> -1 -cotx150 comparetotal 8 90E-1 -> -1 - --- and again, with sign changes -+ .. -cotx200 comparetotal -7.0 7.0 -> -1 -cotx201 comparetotal -7.0 7 -> -1 -cotx202 comparetotal -7 7.0 -> -1 -cotx203 comparetotal -7E+0 7.0 -> -1 -cotx204 comparetotal -70E-1 7.0 -> -1 -cotx205 comparetotal -0.7E+1 7 -> -1 -cotx206 comparetotal -70E-1 7 -> -1 -cotx207 comparetotal -7.0 7E+0 -> -1 -cotx208 comparetotal -7.0 70E-1 -> -1 -cotx209 comparetotal -7 0.7E+1 -> -1 -cotx210 comparetotal -7 70E-1 -> -1 - -cotx220 comparetotal -8.0 7.0 -> -1 -cotx221 comparetotal -8.0 7 -> -1 -cotx222 comparetotal -8 7.0 -> -1 -cotx223 comparetotal -8E+0 7.0 -> -1 -cotx224 comparetotal -80E-1 7.0 -> -1 -cotx225 comparetotal -0.8E+1 7 -> -1 -cotx226 comparetotal -80E-1 7 -> -1 -cotx227 comparetotal -8.0 7E+0 -> -1 -cotx228 comparetotal -8.0 70E-1 -> -1 -cotx229 comparetotal -8 0.7E+1 -> -1 -cotx230 comparetotal -8 70E-1 -> -1 - -cotx240 comparetotal -8.0 9.0 -> -1 -cotx241 comparetotal -8.0 9 -> -1 -cotx242 comparetotal -8 9.0 -> -1 -cotx243 comparetotal -8E+0 9.0 -> -1 -cotx244 comparetotal -80E-1 9.0 -> -1 -cotx245 comparetotal -0.8E+1 9 -> -1 -cotx246 comparetotal -80E-1 9 -> -1 -cotx247 comparetotal -8.0 9E+0 -> -1 -cotx248 comparetotal -8.0 90E-1 -> -1 -cotx249 comparetotal -8 0.9E+1 -> -1 -cotx250 comparetotal -8 90E-1 -> -1 - --- and again, with sign changes +- .. 
-cotx300 comparetotal 7.0 -7.0 -> 1 -cotx301 comparetotal 7.0 -7 -> 1 -cotx302 comparetotal 7 -7.0 -> 1 -cotx303 comparetotal 7E+0 -7.0 -> 1 -cotx304 comparetotal 70E-1 -7.0 -> 1 -cotx305 comparetotal .7E+1 -7 -> 1 -cotx306 comparetotal 70E-1 -7 -> 1 -cotx307 comparetotal 7.0 -7E+0 -> 1 -cotx308 comparetotal 7.0 -70E-1 -> 1 -cotx309 comparetotal 7 -.7E+1 -> 1 -cotx310 comparetotal 7 -70E-1 -> 1 - -cotx320 comparetotal 8.0 -7.0 -> 1 -cotx321 comparetotal 8.0 -7 -> 1 -cotx322 comparetotal 8 -7.0 -> 1 -cotx323 comparetotal 8E+0 -7.0 -> 1 -cotx324 comparetotal 80E-1 -7.0 -> 1 -cotx325 comparetotal .8E+1 -7 -> 1 -cotx326 comparetotal 80E-1 -7 -> 1 -cotx327 comparetotal 8.0 -7E+0 -> 1 -cotx328 comparetotal 8.0 -70E-1 -> 1 -cotx329 comparetotal 8 -.7E+1 -> 1 -cotx330 comparetotal 8 -70E-1 -> 1 - -cotx340 comparetotal 8.0 -9.0 -> 1 -cotx341 comparetotal 8.0 -9 -> 1 -cotx342 comparetotal 8 -9.0 -> 1 -cotx343 comparetotal 8E+0 -9.0 -> 1 -cotx344 comparetotal 80E-1 -9.0 -> 1 -cotx345 comparetotal .8E+1 -9 -> 1 -cotx346 comparetotal 80E-1 -9 -> 1 -cotx347 comparetotal 8.0 -9E+0 -> 1 -cotx348 comparetotal 8.0 -90E-1 -> 1 -cotx349 comparetotal 8 -.9E+1 -> 1 -cotx350 comparetotal 8 -90E-1 -> 1 - --- and again, with sign changes -- .. 
-cotx400 comparetotal -7.0 -7.0 -> 0 -cotx401 comparetotal -7.0 -7 -> 1 -cotx402 comparetotal -7 -7.0 -> -1 -cotx403 comparetotal -7E+0 -7.0 -> -1 -cotx404 comparetotal -70E-1 -7.0 -> 0 -cotx405 comparetotal -.7E+1 -7 -> 0 -cotx406 comparetotal -70E-1 -7 -> 1 -cotx407 comparetotal -7.0 -7E+0 -> 1 -cotx408 comparetotal -7.0 -70E-1 -> 0 -cotx409 comparetotal -7 -.7E+1 -> 0 From noreply at buildbot.pypy.org Sun Jan 22 01:02:21 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 01:02:21 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Merge 2.7/ recent changes to modified-2.7/ Message-ID: <20120122000221.6086E82D45@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51623:725c5aa57558 Date: 2012-01-22 01:00 +0100 http://bitbucket.org/pypy/pypy/changeset/725c5aa57558/ Log: Merge 2.7/ recent changes to modified-2.7/ diff --git a/lib-python/modified-2.7/ctypes/test/test_arrays.py b/lib-python/modified-2.7/ctypes/test/test_arrays.py --- a/lib-python/modified-2.7/ctypes/test/test_arrays.py +++ b/lib-python/modified-2.7/ctypes/test/test_arrays.py @@ -37,7 +37,7 @@ values = [ia[i] for i in range(len(init))] self.assertEqual(values, [0] * len(init)) - # Too many in itializers should be caught + # Too many initializers should be caught self.assertRaises(IndexError, int_array, *range(alen*2)) CharArray = ARRAY(c_char, 3) diff --git a/lib-python/modified-2.7/ctypes/test/test_as_parameter.py b/lib-python/modified-2.7/ctypes/test/test_as_parameter.py --- a/lib-python/modified-2.7/ctypes/test/test_as_parameter.py +++ b/lib-python/modified-2.7/ctypes/test/test_as_parameter.py @@ -187,6 +187,18 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + def test_recursive_as_param(self): + from ctypes import c_int + + class A(object): + pass + + a = A() + a._as_parameter_ = a + with self.assertRaises(RuntimeError): + c_int.from_param(a) + + 
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class AsParamWrapper(object): diff --git a/lib-python/modified-2.7/ctypes/test/test_callbacks.py b/lib-python/modified-2.7/ctypes/test/test_callbacks.py --- a/lib-python/modified-2.7/ctypes/test/test_callbacks.py +++ b/lib-python/modified-2.7/ctypes/test/test_callbacks.py @@ -208,6 +208,42 @@ windll.user32.EnumWindows(EnumWindowsCallbackFunc, 0) + def test_callback_register_int(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_int, c_int, c_int, c_int, c_int, c_int) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_int + func.argtypes = (c_int, c_int, c_int, c_int, c_int, CALLBACK) + func.restype = c_int + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(2, 3, 4, 5, 6, CALLBACK(callback)) + self.assertEqual(result, callback(2*2, 3*3, 4*4, 5*5, 6*6)) + + def test_callback_register_double(self): + # Issue #8275: buggy handling of callback args under Win64 + # NOTE: should be run on release builds as well + dll = CDLL(_ctypes_test.__file__) + CALLBACK = CFUNCTYPE(c_double, c_double, c_double, c_double, + c_double, c_double) + # All this function does is call the callback with its args squared + func = dll._testfunc_cbk_reg_double + func.argtypes = (c_double, c_double, c_double, + c_double, c_double, CALLBACK) + func.restype = c_double + + def callback(a, b, c, d, e): + return a + b + c + d + e + + result = func(1.1, 2.2, 3.3, 4.4, 5.5, CALLBACK(callback)) + self.assertEqual(result, + callback(1.1*1.1, 2.2*2.2, 3.3*3.3, 4.4*4.4, 5.5*5.5)) + + ################################################################ if __name__ == '__main__': diff --git a/lib-python/modified-2.7/ctypes/test/test_functions.py b/lib-python/modified-2.7/ctypes/test/test_functions.py --- a/lib-python/modified-2.7/ctypes/test/test_functions.py 
+++ b/lib-python/modified-2.7/ctypes/test/test_functions.py @@ -118,7 +118,7 @@ self.assertEqual(result, 21) self.assertEqual(type(result), int) - # You cannot assing character format codes as restype any longer + # You cannot assign character format codes as restype any longer self.assertRaises(TypeError, setattr, f, "restype", "i") def test_floatresult(self): diff --git a/lib-python/modified-2.7/ctypes/test/test_init.py b/lib-python/modified-2.7/ctypes/test/test_init.py --- a/lib-python/modified-2.7/ctypes/test/test_init.py +++ b/lib-python/modified-2.7/ctypes/test/test_init.py @@ -27,7 +27,7 @@ self.assertEqual((y.x.a, y.x.b), (0, 0)) self.assertEqual(y.x.new_was_called, False) - # But explicitely creating an X structure calls __new__ and __init__, of course. + # But explicitly creating an X structure calls __new__ and __init__, of course. x = X() self.assertEqual((x.a, x.b), (9, 12)) self.assertEqual(x.new_was_called, True) diff --git a/lib-python/modified-2.7/ctypes/test/test_numbers.py b/lib-python/modified-2.7/ctypes/test/test_numbers.py --- a/lib-python/modified-2.7/ctypes/test/test_numbers.py +++ b/lib-python/modified-2.7/ctypes/test/test_numbers.py @@ -162,7 +162,7 @@ def test_int_from_address(self): from array import array for t in signed_types + unsigned_types: - # the array module doesn't suppport all format codes + # the array module doesn't support all format codes # (no 'q' or 'Q') try: array(t._type_) diff --git a/lib-python/modified-2.7/ctypes/test/test_win32.py b/lib-python/modified-2.7/ctypes/test/test_win32.py --- a/lib-python/modified-2.7/ctypes/test/test_win32.py +++ b/lib-python/modified-2.7/ctypes/test/test_win32.py @@ -17,7 +17,7 @@ # ValueError: Procedure probably called with not enough arguments (4 bytes missing) self.assertRaises(ValueError, IsWindow) - # This one should succeeed... + # This one should succeed... 
self.assertEqual(0, IsWindow(0)) # ValueError: Procedure probably called with too many arguments (8 bytes in excess) diff --git a/lib-python/modified-2.7/distutils/__init__.py b/lib-python/modified-2.7/distutils/__init__.py --- a/lib-python/modified-2.7/distutils/__init__.py +++ b/lib-python/modified-2.7/distutils/__init__.py @@ -15,5 +15,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "2.7.1" +__version__ = "2.7.2" #--end constants-- diff --git a/lib-python/modified-2.7/distutils/archive_util.py b/lib-python/modified-2.7/distutils/archive_util.py --- a/lib-python/modified-2.7/distutils/archive_util.py +++ b/lib-python/modified-2.7/distutils/archive_util.py @@ -121,7 +121,7 @@ def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): """Create a zip file from all the files under 'base_dir'. - The output zip file will be named 'base_dir' + ".zip". Uses either the + The output zip file will be named 'base_name' + ".zip". Uses either the "zipfile" Python module (if available) or the InfoZIP "zip" utility (if installed and found on the default search path). If neither tool is available, raises DistutilsExecError. 
Returns the name of the output zip diff --git a/lib-python/modified-2.7/distutils/cmd.py b/lib-python/modified-2.7/distutils/cmd.py --- a/lib-python/modified-2.7/distutils/cmd.py +++ b/lib-python/modified-2.7/distutils/cmd.py @@ -377,7 +377,7 @@ dry_run=self.dry_run) def move_file (self, src, dst, level=1): - """Move a file respectin dry-run flag.""" + """Move a file respecting dry-run flag.""" return file_util.move_file(src, dst, dry_run = self.dry_run) def spawn (self, cmd, search_path=1, level=1): diff --git a/lib-python/modified-2.7/distutils/command/build_ext.py b/lib-python/modified-2.7/distutils/command/build_ext.py --- a/lib-python/modified-2.7/distutils/command/build_ext.py +++ b/lib-python/modified-2.7/distutils/command/build_ext.py @@ -212,7 +212,7 @@ elif MSVC_VERSION == 8: self.library_dirs.append(os.path.join(sys.exec_prefix, - 'PC', 'VS8.0', 'win32release')) + 'PC', 'VS8.0')) elif MSVC_VERSION == 7: self.library_dirs.append(os.path.join(sys.exec_prefix, 'PC', 'VS7.1')) diff --git a/lib-python/modified-2.7/distutils/command/sdist.py b/lib-python/modified-2.7/distutils/command/sdist.py --- a/lib-python/modified-2.7/distutils/command/sdist.py +++ b/lib-python/modified-2.7/distutils/command/sdist.py @@ -306,17 +306,20 @@ rstrip_ws=1, collapse_join=1) - while 1: - line = template.readline() - if line is None: # end of file - break + try: + while 1: + line = template.readline() + if line is None: # end of file + break - try: - self.filelist.process_template_line(line) - except DistutilsTemplateError, msg: - self.warn("%s, line %d: %s" % (template.filename, - template.current_line, - msg)) + try: + self.filelist.process_template_line(line) + except DistutilsTemplateError, msg: + self.warn("%s, line %d: %s" % (template.filename, + template.current_line, + msg)) + finally: + template.close() def prune_file_list(self): """Prune off branches that might slip into the file list as created diff --git a/lib-python/modified-2.7/distutils/command/upload.py 
b/lib-python/modified-2.7/distutils/command/upload.py --- a/lib-python/modified-2.7/distutils/command/upload.py +++ b/lib-python/modified-2.7/distutils/command/upload.py @@ -176,6 +176,9 @@ result = urlopen(request) status = result.getcode() reason = result.msg + if self.show_response: + msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) + self.announce(msg, log.INFO) except socket.error, e: self.announce(str(e), log.ERROR) return @@ -189,6 +192,3 @@ else: self.announce('Upload failed (%s): %s' % (status, reason), log.ERROR) - if self.show_response: - msg = '\n'.join(('-' * 75, r.read(), '-' * 75)) - self.announce(msg, log.INFO) diff --git a/lib-python/modified-2.7/distutils/sysconfig.py b/lib-python/modified-2.7/distutils/sysconfig.py --- a/lib-python/modified-2.7/distutils/sysconfig.py +++ b/lib-python/modified-2.7/distutils/sysconfig.py @@ -9,21 +9,563 @@ Email: """ -__revision__ = "$Id: sysconfig.py 85358 2010-10-10 09:54:59Z antoine.pitrou $" +__revision__ = "$Id$" +import os +import re +import string import sys +from distutils.errors import DistutilsPlatformError -# The content of this file is redirected from -# sysconfig_cpython or sysconfig_pypy. +# These are needed in a couple of spots, so just compute them once. +PREFIX = os.path.normpath(sys.prefix) +EXEC_PREFIX = os.path.normpath(sys.exec_prefix) -if '__pypy__' in sys.builtin_module_names: - from distutils.sysconfig_pypy import * - from distutils.sysconfig_pypy import _config_vars # needed by setuptools - from distutils.sysconfig_pypy import _variable_rx # read_setup_file() -else: - from distutils.sysconfig_cpython import * - from distutils.sysconfig_cpython import _config_vars # needed by setuptools - from distutils.sysconfig_cpython import _variable_rx # read_setup_file() +# Path to the base directory of the project. On Windows the binary may +# live in project/PCBuild9. If we're dealing with an x64 Windows build, +# it'll live in project/PCbuild/amd64. 
+project_base = os.path.dirname(os.path.abspath(sys.executable)) +if os.name == "nt" and "pcbuild" in project_base[-8:].lower(): + project_base = os.path.abspath(os.path.join(project_base, os.path.pardir)) +# PC/VS7.1 +if os.name == "nt" and "\\pc\\v" in project_base[-10:].lower(): + project_base = os.path.abspath(os.path.join(project_base, os.path.pardir, + os.path.pardir)) +# PC/AMD64 +if os.name == "nt" and "\\pcbuild\\amd64" in project_base[-14:].lower(): + project_base = os.path.abspath(os.path.join(project_base, os.path.pardir, + os.path.pardir)) +# python_build: (Boolean) if true, we're either building Python or +# building an extension with an un-installed Python, so we use +# different (hard-wired) directories. +# Setup.local is available for Makefile builds including VPATH builds, +# Setup.dist is available on Windows +def _python_build(): + for fn in ("Setup.dist", "Setup.local"): + if os.path.isfile(os.path.join(project_base, "Modules", fn)): + return True + return False +python_build = _python_build() + +def get_python_version(): + """Return a string containing the major and minor Python version, + leaving off the patchlevel. Sample return values could be '1.5' + or '2.2'. + """ + return sys.version[:3] + + +def get_python_inc(plat_specific=0, prefix=None): + """Return the directory containing installed Python header files. + + If 'plat_specific' is false (the default), this is the path to the + non-platform-specific header files, i.e. Python.h and so on; + otherwise, this is the path to platform-specific header files + (namely pyconfig.h). + + If 'prefix' is supplied, use it instead of sys.prefix or + sys.exec_prefix -- i.e., ignore 'plat_specific'. 
+ """ + if prefix is None: + prefix = plat_specific and EXEC_PREFIX or PREFIX + + if os.name == "posix": + if python_build: + buildir = os.path.dirname(sys.executable) + if plat_specific: + # python.h is located in the buildir + inc_dir = buildir + else: + # the source dir is relative to the buildir + srcdir = os.path.abspath(os.path.join(buildir, + get_config_var('srcdir'))) + # Include is located in the srcdir + inc_dir = os.path.join(srcdir, "Include") + return inc_dir + return os.path.join(prefix, "include", "python" + get_python_version()) + elif os.name == "nt": + return os.path.join(prefix, "include") + elif os.name == "os2": + return os.path.join(prefix, "Include") + else: + raise DistutilsPlatformError( + "I don't know where Python installs its C header files " + "on platform '%s'" % os.name) + + +def get_python_lib(plat_specific=0, standard_lib=0, prefix=None): + """Return the directory containing the Python library (standard or + site additions). + + If 'plat_specific' is true, return the directory containing + platform-specific modules, i.e. any module from a non-pure-Python + module distribution; otherwise, return the platform-shared library + directory. If 'standard_lib' is true, return the directory + containing standard Python library modules; otherwise, return the + directory for site-specific modules. + + If 'prefix' is supplied, use it instead of sys.prefix or + sys.exec_prefix -- i.e., ignore 'plat_specific'. 
+ """ + if prefix is None: + prefix = plat_specific and EXEC_PREFIX or PREFIX + + if os.name == "posix": + libpython = os.path.join(prefix, + "lib", "python" + get_python_version()) + if standard_lib: + return libpython + else: + return os.path.join(libpython, "site-packages") + + elif os.name == "nt": + if standard_lib: + return os.path.join(prefix, "Lib") + else: + if get_python_version() < "2.2": + return prefix + else: + return os.path.join(prefix, "Lib", "site-packages") + + elif os.name == "os2": + if standard_lib: + return os.path.join(prefix, "Lib") + else: + return os.path.join(prefix, "Lib", "site-packages") + + else: + raise DistutilsPlatformError( + "I don't know where Python installs its library " + "on platform '%s'" % os.name) + + +def customize_compiler(compiler): + """Do any platform-specific customization of a CCompiler instance. + + Mainly needed on Unix, so we can plug in the information that + varies across Unices and is stored in Python's Makefile. + """ + if compiler.compiler_type == "unix": + (cc, cxx, opt, cflags, ccshared, ldshared, so_ext) = \ + get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS', + 'CCSHARED', 'LDSHARED', 'SO') + + if 'CC' in os.environ: + cc = os.environ['CC'] + if 'CXX' in os.environ: + cxx = os.environ['CXX'] + if 'LDSHARED' in os.environ: + ldshared = os.environ['LDSHARED'] + if 'CPP' in os.environ: + cpp = os.environ['CPP'] + else: + cpp = cc + " -E" # not always + if 'LDFLAGS' in os.environ: + ldshared = ldshared + ' ' + os.environ['LDFLAGS'] + if 'CFLAGS' in os.environ: + cflags = opt + ' ' + os.environ['CFLAGS'] + ldshared = ldshared + ' ' + os.environ['CFLAGS'] + if 'CPPFLAGS' in os.environ: + cpp = cpp + ' ' + os.environ['CPPFLAGS'] + cflags = cflags + ' ' + os.environ['CPPFLAGS'] + ldshared = ldshared + ' ' + os.environ['CPPFLAGS'] + + cc_cmd = cc + ' ' + cflags + compiler.set_executables( + preprocessor=cpp, + compiler=cc_cmd, + compiler_so=cc_cmd + ' ' + ccshared, + compiler_cxx=cxx, + linker_so=ldshared, + 
linker_exe=cc) + + compiler.shared_lib_extension = so_ext + + +def get_config_h_filename(): + """Return full pathname of installed pyconfig.h file.""" + if python_build: + if os.name == "nt": + inc_dir = os.path.join(project_base, "PC") + else: + inc_dir = project_base + else: + inc_dir = get_python_inc(plat_specific=1) + if get_python_version() < '2.2': + config_h = 'config.h' + else: + # The name of the config.h file changed in 2.2 + config_h = 'pyconfig.h' + return os.path.join(inc_dir, config_h) + + +def get_makefile_filename(): + """Return full pathname of installed Makefile from the Python build.""" + if python_build: + return os.path.join(os.path.dirname(sys.executable), "Makefile") + lib_dir = get_python_lib(plat_specific=1, standard_lib=1) + return os.path.join(lib_dir, "config", "Makefile") + + +def parse_config_h(fp, g=None): + """Parse a config.h-style file. + + A dictionary containing name/value pairs is returned. If an + optional dictionary is passed in as the second argument, it is + used instead of a new dictionary. + """ + if g is None: + g = {} + define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n") + undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n") + # + while 1: + line = fp.readline() + if not line: + break + m = define_rx.match(line) + if m: + n, v = m.group(1, 2) + try: v = int(v) + except ValueError: pass + g[n] = v + else: + m = undef_rx.match(line) + if m: + g[m.group(1)] = 0 + return g + + +# Regexes needed for parsing Makefile (and similar syntaxes, +# like old-style Setup files). +_variable_rx = re.compile("([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") +_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") +_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") + +def parse_makefile(fn, g=None): + """Parse a Makefile-style file. + + A dictionary containing name/value pairs is returned. If an + optional dictionary is passed in as the second argument, it is + used instead of a new dictionary. 
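The `parse_config_h` function added above recognizes both `#define NAME value` lines and commented-out `/* #undef NAME */` lines, coercing numeric values to ints. A self-contained sketch of the same parsing; the `\s*` around `#undef` is a small generalization of the original patterns:

```python
import re

# Mirrors the regexes in parse_config_h above.
define_rx = re.compile(r"#define ([A-Z][A-Za-z0-9_]+) (.*)\n")
undef_rx = re.compile(r"/\*\s*#undef ([A-Z][A-Za-z0-9_]+)\s*\*/\n")

def parse_config_lines(lines, g=None):
    """Parse config.h-style lines into a dict, as parse_config_h does."""
    if g is None:
        g = {}
    for line in lines:
        m = define_rx.match(line)
        if m:
            name, value = m.group(1, 2)
            try:
                value = int(value)  # numeric values become ints
            except ValueError:
                pass
            g[name] = value
        else:
            m = undef_rx.match(line)
            if m:
                g[m.group(1)] = 0  # a commented-out #undef maps to 0
    return g
```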
+ """ + from distutils.text_file import TextFile + fp = TextFile(fn, strip_comments=1, skip_blanks=1, join_lines=1) + + if g is None: + g = {} + done = {} + notdone = {} + + while 1: + line = fp.readline() + if line is None: # eof + break + m = _variable_rx.match(line) + if m: + n, v = m.group(1, 2) + v = v.strip() + # `$$' is a literal `$' in make + tmpv = v.replace('$$', '') + + if "$" in tmpv: + notdone[n] = v + else: + try: + v = int(v) + except ValueError: + # insert literal `$' + done[n] = v.replace('$$', '$') + else: + done[n] = v + + # do variable interpolation here + while notdone: + for name in notdone.keys(): + value = notdone[name] + m = _findvar1_rx.search(value) or _findvar2_rx.search(value) + if m: + n = m.group(1) + found = True + if n in done: + item = str(done[n]) + elif n in notdone: + # get it on a subsequent round + found = False + elif n in os.environ: + # do it like make: fall back to environment + item = os.environ[n] + else: + done[n] = item = "" + if found: + after = value[m.end():] + value = value[:m.start()] + item + after + if "$" in after: + notdone[name] = value + else: + try: value = int(value) + except ValueError: + done[name] = value.strip() + else: + done[name] = value + del notdone[name] + else: + # bogus variable reference; just drop it since we can't deal + del notdone[name] + + fp.close() + + # strip spurious spaces + for k, v in done.items(): + if isinstance(v, str): + done[k] = v.strip() + + # save the results in the global dictionary + g.update(done) + return g + + +def expand_makefile_vars(s, vars): + """Expand Makefile-style variables -- "${foo}" or "$(foo)" -- in + 'string' according to 'vars' (a dictionary mapping variable names to + values). Variables not present in 'vars' are silently expanded to the + empty string. The variable values in 'vars' should not contain further + variable expansions; if 'vars' is the output of 'parse_makefile()', + you're fine. Returns a variable-expanded version of 's'. 
+ """ + + # This algorithm does multiple expansion, so if vars['foo'] contains + # "${bar}", it will expand ${foo} to ${bar}, and then expand + # ${bar}... and so forth. This is fine as long as 'vars' comes from + # 'parse_makefile()', which takes care of such expansions eagerly, + # according to make's variable expansion semantics. + + while 1: + m = _findvar1_rx.search(s) or _findvar2_rx.search(s) + if m: + (beg, end) = m.span() + s = s[0:beg] + vars.get(m.group(1)) + s[end:] + else: + break + return s + + +_config_vars = None + +def _init_posix(): + """Initialize the module as appropriate for POSIX systems.""" + g = {} + # load the installed Makefile: + try: + filename = get_makefile_filename() + parse_makefile(filename, g) + except IOError, msg: + my_msg = "invalid Python installation: unable to open %s" % filename + if hasattr(msg, "strerror"): + my_msg = my_msg + " (%s)" % msg.strerror + + raise DistutilsPlatformError(my_msg) + + # load the installed pyconfig.h: + try: + filename = get_config_h_filename() + parse_config_h(file(filename), g) + except IOError, msg: + my_msg = "invalid Python installation: unable to open %s" % filename + if hasattr(msg, "strerror"): + my_msg = my_msg + " (%s)" % msg.strerror + + raise DistutilsPlatformError(my_msg) + + # On MacOSX we need to check the setting of the environment variable + # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so + # it needs to be compatible. 
+ # If it isn't set we set it to the configure-time value + if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in g: + cfg_target = g['MACOSX_DEPLOYMENT_TARGET'] + cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') + if cur_target == '': + cur_target = cfg_target + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target + elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): + my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' + % (cur_target, cfg_target)) + raise DistutilsPlatformError(my_msg) + + # On AIX, there are wrong paths to the linker scripts in the Makefile + # -- these paths are relative to the Python source, but when installed + # the scripts are in another directory. + if python_build: + g['LDSHARED'] = g['BLDSHARED'] + + elif get_python_version() < '2.1': + # The following two branches are for 1.5.2 compatibility. + if sys.platform == 'aix4': # what about AIX 3.x ? + # Linker script is in the config directory, not in Modules as the + # Makefile says. + python_lib = get_python_lib(standard_lib=1) + ld_so_aix = os.path.join(python_lib, 'config', 'ld_so_aix') + python_exp = os.path.join(python_lib, 'config', 'python.exp') + + g['LDSHARED'] = "%s %s -bI:%s" % (ld_so_aix, g['CC'], python_exp) + + elif sys.platform == 'beos': + # Linker script is in the config directory. In the Makefile it is + # relative to the srcdir, which after installation no longer makes + # sense. + python_lib = get_python_lib(standard_lib=1) + linkerscript_path = string.split(g['LDSHARED'])[0] + linkerscript_name = os.path.basename(linkerscript_path) + linkerscript = os.path.join(python_lib, 'config', + linkerscript_name) + + # XXX this isn't the right place to do this: adding the Python + # library to the link, if needed, should be in the "build_ext" + # command. (It's also needed for non-MS compilers on Windows, and + # it's taken care of for them by the 'build_ext.get_libraries()' + # method.) 
+ g['LDSHARED'] = ("%s -L%s/lib -lpython%s" % + (linkerscript, PREFIX, get_python_version())) + + global _config_vars + _config_vars = g + + +def _init_nt(): + """Initialize the module as appropriate for NT""" + g = {} + # set basic install directories + g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) + g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) + + # XXX hmmm.. a normal install puts include files here + g['INCLUDEPY'] = get_python_inc(plat_specific=0) + + g['SO'] = '.pyd' + g['EXE'] = ".exe" + g['VERSION'] = get_python_version().replace(".", "") + g['BINDIR'] = os.path.dirname(os.path.abspath(sys.executable)) + + global _config_vars + _config_vars = g + + +def _init_os2(): + """Initialize the module as appropriate for OS/2""" + g = {} + # set basic install directories + g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) + g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) + + # XXX hmmm.. a normal install puts include files here + g['INCLUDEPY'] = get_python_inc(plat_specific=0) + + g['SO'] = '.pyd' + g['EXE'] = ".exe" + + global _config_vars + _config_vars = g + + +def get_config_vars(*args): + """With no arguments, return a dictionary of all configuration + variables relevant for the current platform. Generally this includes + everything needed to build extensions and install both pure modules and + extensions. On Unix, this means every variable defined in Python's + installed Makefile; on Windows and Mac OS it's a much smaller set. + + With arguments, return a list of values that result from looking up + each argument in the configuration variable dictionary. + """ + global _config_vars + if _config_vars is None: + func = globals().get("_init_" + os.name) + if func: + func() + else: + _config_vars = {} + + # Normalized versions of prefix and exec_prefix are handy to have; + # in fact, these are the standard versions used most places in the + # Distutils. 
+ _config_vars['prefix'] = PREFIX + _config_vars['exec_prefix'] = EXEC_PREFIX + + if sys.platform == 'darwin': + kernel_version = os.uname()[2] # Kernel version (8.4.3) + major_version = int(kernel_version.split('.')[0]) + + if major_version < 8: + # On Mac OS X before 10.4, check if -arch and -isysroot + # are in CFLAGS or LDFLAGS and remove them if they are. + # This is needed when building extensions on a 10.3 system + # using a universal build of python. + for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', + # a number of derived variables. These need to be + # patched up as well. + 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): + flags = _config_vars[key] + flags = re.sub('-arch\s+\w+\s', ' ', flags) + flags = re.sub('-isysroot [^ \t]*', ' ', flags) + _config_vars[key] = flags + + else: + + # Allow the user to override the architecture flags using + # an environment variable. + # NOTE: This name was introduced by Apple in OSX 10.5 and + # is used by several scripting languages distributed with + # that OS release. + + if 'ARCHFLAGS' in os.environ: + arch = os.environ['ARCHFLAGS'] + for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', + # a number of derived variables. These need to be + # patched up as well. + 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): + + flags = _config_vars[key] + flags = re.sub('-arch\s+\w+\s', ' ', flags) + flags = flags + ' ' + arch + _config_vars[key] = flags + + # If we're on OSX 10.5 or later and the user tries to + # compiles an extension using an SDK that is not present + # on the current machine it is better to not use an SDK + # than to fail. + # + # The major usecase for this is users using a Python.org + # binary installer on OSX 10.6: that installer uses + # the 10.4u SDK, but that SDK is not installed by default + # when you install Xcode. 
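The pre-10.4 darwin branch above strips `-arch <name>` and `-isysroot <path>` pairs out of the flag variables with `re.sub`. The same two substitutions in isolation, using the patterns from the patch:

```python
import re

def strip_universal_flags(flags):
    """Remove '-arch <name>' and '-isysroot <path>' from a flag string,
    as the pre-10.4 branch of get_config_vars does."""
    flags = re.sub(r"-arch\s+\w+\s", " ", flags)
    flags = re.sub(r"-isysroot [^ \t]*", " ", flags)
    return flags
```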
+ # + m = re.search('-isysroot\s+(\S+)', _config_vars['CFLAGS']) + if m is not None: + sdk = m.group(1) + if not os.path.exists(sdk): + for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', + # a number of derived variables. These need to be + # patched up as well. + 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): + + flags = _config_vars[key] + flags = re.sub('-isysroot\s+\S+(\s|$)', ' ', flags) + _config_vars[key] = flags + + if args: + vals = [] + for name in args: + vals.append(_config_vars.get(name)) + return vals + else: + return _config_vars + +def get_config_var(name): + """Return the value of a single variable using the dictionary + returned by 'get_config_vars()'. Equivalent to + get_config_vars().get(name) + """ + return get_config_vars().get(name) diff --git a/lib-python/modified-2.7/distutils/sysconfig_cpython.py b/lib-python/modified-2.7/distutils/sysconfig_cpython.py --- a/lib-python/modified-2.7/distutils/sysconfig_cpython.py +++ b/lib-python/modified-2.7/distutils/sysconfig_cpython.py @@ -389,7 +389,7 @@ cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') if cur_target == '': cur_target = cfg_target - os.putenv('MACOSX_DEPLOYMENT_TARGET', cfg_target) + os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' % (cur_target, cfg_target)) diff --git a/lib-python/modified-2.7/distutils/tests/__init__.py b/lib-python/modified-2.7/distutils/tests/__init__.py --- a/lib-python/modified-2.7/distutils/tests/__init__.py +++ b/lib-python/modified-2.7/distutils/tests/__init__.py @@ -15,9 +15,10 @@ import os import sys import unittest +from test.test_support import run_unittest -here = os.path.dirname(__file__) +here = os.path.dirname(__file__) or os.curdir def test_suite(): @@ -32,4 +33,4 @@ if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git 
a/lib-python/modified-2.7/distutils/tests/test_archive_util.py b/lib-python/modified-2.7/distutils/tests/test_archive_util.py --- a/lib-python/modified-2.7/distutils/tests/test_archive_util.py +++ b/lib-python/modified-2.7/distutils/tests/test_archive_util.py @@ -12,7 +12,7 @@ ARCHIVE_FORMATS) from distutils.spawn import find_executable, spawn from distutils.tests import support -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest try: import grp @@ -281,4 +281,4 @@ return unittest.makeSuite(ArchiveUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_bdist_msi.py b/lib-python/modified-2.7/distutils/tests/test_bdist_msi.py --- a/lib-python/modified-2.7/distutils/tests/test_bdist_msi.py +++ b/lib-python/modified-2.7/distutils/tests/test_bdist_msi.py @@ -11,7 +11,7 @@ support.LoggingSilencer, unittest.TestCase): - def test_minial(self): + def test_minimal(self): # minimal test XXX need more tests from distutils.command.bdist_msi import bdist_msi pkg_pth, dist = self.create_dist() diff --git a/lib-python/modified-2.7/distutils/tests/test_build.py b/lib-python/modified-2.7/distutils/tests/test_build.py --- a/lib-python/modified-2.7/distutils/tests/test_build.py +++ b/lib-python/modified-2.7/distutils/tests/test_build.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.build import build from distutils.tests import support @@ -51,4 +52,4 @@ return unittest.makeSuite(BuildTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_build_clib.py b/lib-python/modified-2.7/distutils/tests/test_build_clib.py --- a/lib-python/modified-2.7/distutils/tests/test_build_clib.py +++ b/lib-python/modified-2.7/distutils/tests/test_build_clib.py 
@@ -3,6 +3,8 @@ import os import sys +from test.test_support import run_unittest + from distutils.command.build_clib import build_clib from distutils.errors import DistutilsSetupError from distutils.tests import support @@ -140,4 +142,4 @@ return unittest.makeSuite(BuildCLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_build_ext.py b/lib-python/modified-2.7/distutils/tests/test_build_ext.py --- a/lib-python/modified-2.7/distutils/tests/test_build_ext.py +++ b/lib-python/modified-2.7/distutils/tests/test_build_ext.py @@ -3,12 +3,13 @@ import tempfile import shutil from StringIO import StringIO +import textwrap from distutils.core import Extension, Distribution from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, CompileError import unittest from test import test_support @@ -430,6 +431,59 @@ wanted = os.path.join(cmd.build_lib, 'UpdateManager', 'fdsend' + ext) self.assertEqual(ext_path, wanted) + @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX') + def test_deployment_target(self): + self._try_compile_deployment_target() + + orig_environ = os.environ + os.environ = orig_environ.copy() + self.addCleanup(setattr, os, 'environ', orig_environ) + + os.environ['MACOSX_DEPLOYMENT_TARGET']='10.1' + self._try_compile_deployment_target() + + + def _try_compile_deployment_target(self): + deptarget_c = os.path.join(self.tmp_dir, 'deptargetmodule.c') + + with open(deptarget_c, 'w') as fp: + fp.write(textwrap.dedent('''\ + #include + + int dummy; + + #if TARGET != MAC_OS_X_VERSION_MIN_REQUIRED + #error "Unexpected target" + #endif + + ''')) + + target = sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET') + target = tuple(map(int, target.split('.'))) + target = 
'%02d%01d0' % target + + deptarget_ext = Extension( + 'deptarget', + [deptarget_c], + extra_compile_args=['-DTARGET=%s'%(target,)], + ) + dist = Distribution({ + 'name': 'deptarget', + 'ext_modules': [deptarget_ext] + }) + dist.package_dir = self.tmp_dir + cmd = build_ext(dist) + cmd.build_lib = self.tmp_dir + cmd.build_temp = self.tmp_dir + + try: + old_stdout = sys.stdout + cmd.ensure_finalized() + cmd.run() + + except CompileError: + self.fail("Wrong deployment target during compilation") + def test_suite(): return unittest.makeSuite(BuildExtTestCase) diff --git a/lib-python/modified-2.7/distutils/tests/test_build_py.py b/lib-python/modified-2.7/distutils/tests/test_build_py.py --- a/lib-python/modified-2.7/distutils/tests/test_build_py.py +++ b/lib-python/modified-2.7/distutils/tests/test_build_py.py @@ -10,13 +10,14 @@ from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class BuildPyTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase): - def _setup_package_data(self): + def test_package_data(self): sources = self.mkdtemp() f = open(os.path.join(sources, "__init__.py"), "w") try: @@ -56,20 +57,15 @@ self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) - return files + self.assertIn("__init__.py", files) + self.assertIn("README.txt", files) + # XXX even with -O, distutils writes pyc, not pyo; bug? 
+ if sys.dont_write_bytecode: + self.assertNotIn("__init__.pyc", files) + else: + self.assertIn("__init__.pyc", files) - def test_package_data(self): - files = self._setup_package_data() - self.assertTrue("__init__.py" in files) - self.assertTrue("README.txt" in files) - - @unittest.skipIf(sys.flags.optimize >= 2, - "pyc files are not written with -O2 and above") - def test_package_data_pyc(self): - files = self._setup_package_data() - self.assertTrue("__init__.pyc" in files) - - def test_empty_package_dir (self): + def test_empty_package_dir(self): # See SF 1668596/1720897. cwd = os.getcwd() @@ -117,10 +113,10 @@ finally: sys.dont_write_bytecode = old_dont_write_bytecode - self.assertTrue('byte-compiling is disabled' in self.logs[0][1]) + self.assertIn('byte-compiling is disabled', self.logs[0][1]) def test_suite(): return unittest.makeSuite(BuildPyTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_build_scripts.py b/lib-python/modified-2.7/distutils/tests/test_build_scripts.py --- a/lib-python/modified-2.7/distutils/tests/test_build_scripts.py +++ b/lib-python/modified-2.7/distutils/tests/test_build_scripts.py @@ -8,6 +8,7 @@ import sysconfig from distutils.tests import support +from test.test_support import run_unittest class BuildScriptsTestCase(support.TempdirManager, @@ -108,4 +109,4 @@ return unittest.makeSuite(BuildScriptsTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_check.py b/lib-python/modified-2.7/distutils/tests/test_check.py --- a/lib-python/modified-2.7/distutils/tests/test_check.py +++ b/lib-python/modified-2.7/distutils/tests/test_check.py @@ -1,5 +1,6 @@ """Tests for distutils.command.check.""" import unittest +from test.test_support import run_unittest from distutils.command.check import check, HAS_DOCUTILS from 
distutils.tests import support @@ -95,4 +96,4 @@ return unittest.makeSuite(CheckTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_clean.py b/lib-python/modified-2.7/distutils/tests/test_clean.py --- a/lib-python/modified-2.7/distutils/tests/test_clean.py +++ b/lib-python/modified-2.7/distutils/tests/test_clean.py @@ -6,6 +6,7 @@ from distutils.command.clean import clean from distutils.tests import support +from test.test_support import run_unittest class cleanTestCase(support.TempdirManager, support.LoggingSilencer, @@ -38,7 +39,7 @@ self.assertTrue(not os.path.exists(path), '%s was not removed' % path) - # let's run the command again (should spit warnings but suceed) + # let's run the command again (should spit warnings but succeed) cmd.all = 1 cmd.ensure_finalized() cmd.run() @@ -47,4 +48,4 @@ return unittest.makeSuite(cleanTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_cmd.py b/lib-python/modified-2.7/distutils/tests/test_cmd.py --- a/lib-python/modified-2.7/distutils/tests/test_cmd.py +++ b/lib-python/modified-2.7/distutils/tests/test_cmd.py @@ -99,7 +99,7 @@ def test_ensure_dirname(self): cmd = self.cmd - cmd.option1 = os.path.dirname(__file__) + cmd.option1 = os.path.dirname(__file__) or os.curdir cmd.ensure_dirname('option1') cmd.option2 = 'xxx' self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2') diff --git a/lib-python/modified-2.7/distutils/tests/test_config.py b/lib-python/modified-2.7/distutils/tests/test_config.py --- a/lib-python/modified-2.7/distutils/tests/test_config.py +++ b/lib-python/modified-2.7/distutils/tests/test_config.py @@ -11,6 +11,7 @@ from distutils.log import WARN from distutils.tests import support +from test.test_support import run_unittest PYPIRC = """\ [distutils] @@ -119,4 +120,4 
@@ return unittest.makeSuite(PyPIRCCommandTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_config_cmd.py b/lib-python/modified-2.7/distutils/tests/test_config_cmd.py --- a/lib-python/modified-2.7/distutils/tests/test_config_cmd.py +++ b/lib-python/modified-2.7/distutils/tests/test_config_cmd.py @@ -2,6 +2,7 @@ import unittest import os import sys +from test.test_support import run_unittest from distutils.command.config import dump_file, config from distutils.tests import support @@ -86,4 +87,4 @@ return unittest.makeSuite(ConfigTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_core.py b/lib-python/modified-2.7/distutils/tests/test_core.py --- a/lib-python/modified-2.7/distutils/tests/test_core.py +++ b/lib-python/modified-2.7/distutils/tests/test_core.py @@ -6,7 +6,7 @@ import shutil import sys import test.test_support -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest import unittest from distutils.tests import support @@ -105,4 +105,4 @@ return unittest.makeSuite(CoreTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_dep_util.py b/lib-python/modified-2.7/distutils/tests/test_dep_util.py --- a/lib-python/modified-2.7/distutils/tests/test_dep_util.py +++ b/lib-python/modified-2.7/distutils/tests/test_dep_util.py @@ -6,6 +6,7 @@ from distutils.dep_util import newer, newer_pairwise, newer_group from distutils.errors import DistutilsFileError from distutils.tests import support +from test.test_support import run_unittest class DepUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(DepUtilTestCase) if __name__ == 
"__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_dir_util.py b/lib-python/modified-2.7/distutils/tests/test_dir_util.py --- a/lib-python/modified-2.7/distutils/tests/test_dir_util.py +++ b/lib-python/modified-2.7/distutils/tests/test_dir_util.py @@ -10,6 +10,7 @@ from distutils import log from distutils.tests import support +from test.test_support import run_unittest class DirUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -112,4 +113,4 @@ return unittest.makeSuite(DirUtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_dist.py b/lib-python/modified-2.7/distutils/tests/test_dist.py --- a/lib-python/modified-2.7/distutils/tests/test_dist.py +++ b/lib-python/modified-2.7/distutils/tests/test_dist.py @@ -11,7 +11,7 @@ from distutils.dist import Distribution, fix_help_options, DistributionMetadata from distutils.cmd import Command import distutils.dist -from test.test_support import TESTFN, captured_stdout +from test.test_support import TESTFN, captured_stdout, run_unittest from distutils.tests import support class test_dist(Command): @@ -433,4 +433,4 @@ return suite if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_file_util.py b/lib-python/modified-2.7/distutils/tests/test_file_util.py --- a/lib-python/modified-2.7/distutils/tests/test_file_util.py +++ b/lib-python/modified-2.7/distutils/tests/test_file_util.py @@ -6,6 +6,7 @@ from distutils.file_util import move_file, write_file, copy_file from distutils import log from distutils.tests import support +from test.test_support import run_unittest class FileUtilTestCase(support.TempdirManager, unittest.TestCase): @@ -77,4 +78,4 @@ return unittest.makeSuite(FileUtilTestCase) if __name__ == 
"__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_filelist.py b/lib-python/modified-2.7/distutils/tests/test_filelist.py --- a/lib-python/modified-2.7/distutils/tests/test_filelist.py +++ b/lib-python/modified-2.7/distutils/tests/test_filelist.py @@ -1,7 +1,7 @@ """Tests for distutils.filelist.""" from os.path import join import unittest -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.filelist import glob_to_re, FileList from distutils import debug @@ -82,4 +82,4 @@ return unittest.makeSuite(FileListTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_install.py b/lib-python/modified-2.7/distutils/tests/test_install.py --- a/lib-python/modified-2.7/distutils/tests/test_install.py +++ b/lib-python/modified-2.7/distutils/tests/test_install.py @@ -4,6 +4,8 @@ import unittest from test import test_support +from test.test_support import run_unittest + from distutils.command.install import install from distutils.core import Distribution @@ -54,4 +56,4 @@ return unittest.makeSuite(InstallTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_install_data.py b/lib-python/modified-2.7/distutils/tests/test_install_data.py --- a/lib-python/modified-2.7/distutils/tests/test_install_data.py +++ b/lib-python/modified-2.7/distutils/tests/test_install_data.py @@ -6,6 +6,7 @@ from distutils.command.install_data import install_data from distutils.tests import support +from test.test_support import run_unittest class InstallDataTestCase(support.TempdirManager, support.LoggingSilencer, @@ -73,4 +74,4 @@ return unittest.makeSuite(InstallDataTestCase) if __name__ == "__main__": - 
unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_install_headers.py b/lib-python/modified-2.7/distutils/tests/test_install_headers.py --- a/lib-python/modified-2.7/distutils/tests/test_install_headers.py +++ b/lib-python/modified-2.7/distutils/tests/test_install_headers.py @@ -6,6 +6,7 @@ from distutils.command.install_headers import install_headers from distutils.tests import support +from test.test_support import run_unittest class InstallHeadersTestCase(support.TempdirManager, support.LoggingSilencer, @@ -37,4 +38,4 @@ return unittest.makeSuite(InstallHeadersTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_install_lib.py b/lib-python/modified-2.7/distutils/tests/test_install_lib.py --- a/lib-python/modified-2.7/distutils/tests/test_install_lib.py +++ b/lib-python/modified-2.7/distutils/tests/test_install_lib.py @@ -7,6 +7,7 @@ from distutils.extension import Extension from distutils.tests import support from distutils.errors import DistutilsOptionError +from test.test_support import run_unittest class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, @@ -103,4 +104,4 @@ return unittest.makeSuite(InstallLibTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_install_scripts.py b/lib-python/modified-2.7/distutils/tests/test_install_scripts.py --- a/lib-python/modified-2.7/distutils/tests/test_install_scripts.py +++ b/lib-python/modified-2.7/distutils/tests/test_install_scripts.py @@ -7,6 +7,7 @@ from distutils.core import Distribution from distutils.tests import support +from test.test_support import run_unittest class InstallScriptsTestCase(support.TempdirManager, @@ -78,4 +79,4 @@ return unittest.makeSuite(InstallScriptsTestCase) if __name__ 
== "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_msvc9compiler.py b/lib-python/modified-2.7/distutils/tests/test_msvc9compiler.py --- a/lib-python/modified-2.7/distutils/tests/test_msvc9compiler.py +++ b/lib-python/modified-2.7/distutils/tests/test_msvc9compiler.py @@ -5,6 +5,7 @@ from distutils.errors import DistutilsPlatformError from distutils.tests import support +from test.test_support import run_unittest _MANIFEST = """\ @@ -137,4 +138,4 @@ return unittest.makeSuite(msvc9compilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_register.py b/lib-python/modified-2.7/distutils/tests/test_register.py --- a/lib-python/modified-2.7/distutils/tests/test_register.py +++ b/lib-python/modified-2.7/distutils/tests/test_register.py @@ -7,7 +7,7 @@ import urllib2 import warnings -from test.test_support import check_warnings +from test.test_support import check_warnings, run_unittest from distutils.command import register as register_module from distutils.command.register import register @@ -138,7 +138,7 @@ # let's see what the server received : we should # have 2 similar requests - self.assertTrue(self.conn.reqs, 2) + self.assertEqual(len(self.conn.reqs), 2) req1 = dict(self.conn.reqs[0].headers) req2 = dict(self.conn.reqs[1].headers) self.assertEqual(req2['Content-length'], req1['Content-length']) @@ -168,7 +168,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = dict(req.headers) self.assertEqual(headers['Content-length'], '608') @@ -186,7 +186,7 @@ del register_module.raw_input # we should have send a request - self.assertTrue(self.conn.reqs, 1) + self.assertEqual(len(self.conn.reqs), 1) req = self.conn.reqs[0] headers = 
dict(req.headers) self.assertEqual(headers['Content-length'], '290') @@ -258,4 +258,4 @@ return unittest.makeSuite(RegisterTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_sdist.py b/lib-python/modified-2.7/distutils/tests/test_sdist.py --- a/lib-python/modified-2.7/distutils/tests/test_sdist.py +++ b/lib-python/modified-2.7/distutils/tests/test_sdist.py @@ -24,11 +24,9 @@ import tempfile import warnings -from test.test_support import check_warnings -from test.test_support import captured_stdout +from test.test_support import captured_stdout, check_warnings, run_unittest -from distutils.command.sdist import sdist -from distutils.command.sdist import show_formats +from distutils.command.sdist import sdist, show_formats from distutils.core import Distribution from distutils.tests.test_config import PyPIRCCommandTestCase from distutils.errors import DistutilsExecError, DistutilsOptionError @@ -372,7 +370,7 @@ # adding a file self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#') - # make sure build_py is reinitinialized, like a fresh run + # make sure build_py is reinitialized, like a fresh run build_py = dist.get_command_obj('build_py') build_py.finalized = False build_py.ensure_finalized() @@ -390,6 +388,7 @@ self.assertEqual(len(manifest2), 6) self.assertIn('doc2.txt', manifest2[-1]) + @unittest.skipUnless(zlib, "requires zlib") def test_manifest_marker(self): # check that autogenerated MANIFESTs have a marker dist, cmd = self.get_cmd() @@ -406,6 +405,7 @@ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit') + @unittest.skipUnless(zlib, "requires zlib") def test_manual_manifest(self): # check that a MANIFEST without a marker is left alone dist, cmd = self.get_cmd() @@ -426,4 +426,4 @@ return unittest.makeSuite(SDistTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) 
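The `test_register.py` hunks above replace `self.assertTrue(self.conn.reqs, 2)` with `self.assertEqual(len(self.conn.reqs), 2)`. The two-argument form of `assertTrue` treats its second argument as the failure *message*, not an expected value, so the original assertion passed whenever the list was merely non-empty. A minimal sketch of the pitfall and the fix (test names are ours, for illustration):

```python
import unittest

class AssertTruePitfall(unittest.TestCase):
    def test_looks_like_a_length_check_but_is_not(self):
        reqs = ['only-one-request']
        # The second argument is the failure *message*, not an expected
        # value, so this "passes" even though len(reqs) != 2 -- the bug
        # the diff above fixes in test_register.py.
        self.assertTrue(reqs, 2)

    def test_actual_length_check(self):
        # The corrected form: compare the length explicitly.
        reqs = ['req1', 'req2']
        self.assertEqual(len(reqs), 2)
```

Both tests pass, which is exactly the problem with the first one: it asserts nothing about the count.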
diff --git a/lib-python/modified-2.7/distutils/tests/test_spawn.py b/lib-python/modified-2.7/distutils/tests/test_spawn.py --- a/lib-python/modified-2.7/distutils/tests/test_spawn.py +++ b/lib-python/modified-2.7/distutils/tests/test_spawn.py @@ -2,7 +2,7 @@ import unittest import os import time -from test.test_support import captured_stdout +from test.test_support import captured_stdout, run_unittest from distutils.spawn import _nt_quote_args from distutils.spawn import spawn, find_executable @@ -57,4 +57,4 @@ return unittest.makeSuite(SpawnTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_text_file.py b/lib-python/modified-2.7/distutils/tests/test_text_file.py --- a/lib-python/modified-2.7/distutils/tests/test_text_file.py +++ b/lib-python/modified-2.7/distutils/tests/test_text_file.py @@ -3,6 +3,7 @@ import unittest from distutils.text_file import TextFile from distutils.tests import support +from test.test_support import run_unittest TEST_DATA = """# test file @@ -103,4 +104,4 @@ return unittest.makeSuite(TextFileTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_unixccompiler.py b/lib-python/modified-2.7/distutils/tests/test_unixccompiler.py --- a/lib-python/modified-2.7/distutils/tests/test_unixccompiler.py +++ b/lib-python/modified-2.7/distutils/tests/test_unixccompiler.py @@ -1,6 +1,7 @@ """Tests for distutils.unixccompiler.""" import sys import unittest +from test.test_support import run_unittest from distutils import sysconfig from distutils.unixccompiler import UnixCCompiler @@ -126,4 +127,4 @@ return unittest.makeSuite(UnixCCompilerTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_upload.py 
b/lib-python/modified-2.7/distutils/tests/test_upload.py --- a/lib-python/modified-2.7/distutils/tests/test_upload.py +++ b/lib-python/modified-2.7/distutils/tests/test_upload.py @@ -1,14 +1,13 @@ +# -*- encoding: utf8 -*- """Tests for distutils.command.upload.""" -# -*- encoding: utf8 -*- -import sys import os import unittest +from test.test_support import run_unittest from distutils.command import upload as upload_mod from distutils.command.upload import upload from distutils.core import Distribution -from distutils.tests import support from distutils.tests.test_config import PYPIRC, PyPIRCCommandTestCase PYPIRC_LONG_PASSWORD = """\ @@ -129,4 +128,4 @@ return unittest.makeSuite(uploadTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_util.py b/lib-python/modified-2.7/distutils/tests/test_util.py --- a/lib-python/modified-2.7/distutils/tests/test_util.py +++ b/lib-python/modified-2.7/distutils/tests/test_util.py @@ -1,6 +1,7 @@ """Tests for distutils.util.""" import sys import unittest +from test.test_support import run_unittest from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError from distutils.util import byte_compile @@ -21,4 +22,4 @@ return unittest.makeSuite(UtilTestCase) if __name__ == "__main__": - unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_version.py b/lib-python/modified-2.7/distutils/tests/test_version.py --- a/lib-python/modified-2.7/distutils/tests/test_version.py +++ b/lib-python/modified-2.7/distutils/tests/test_version.py @@ -2,6 +2,7 @@ import unittest from distutils.version import LooseVersion from distutils.version import StrictVersion +from test.test_support import run_unittest class VersionTestCase(unittest.TestCase): @@ -67,4 +68,4 @@ return unittest.makeSuite(VersionTestCase) if __name__ == "__main__": - 
unittest.main(defaultTest="test_suite") + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/tests/test_versionpredicate.py b/lib-python/modified-2.7/distutils/tests/test_versionpredicate.py --- a/lib-python/modified-2.7/distutils/tests/test_versionpredicate.py +++ b/lib-python/modified-2.7/distutils/tests/test_versionpredicate.py @@ -4,6 +4,10 @@ import distutils.versionpredicate import doctest +from test.test_support import run_unittest def test_suite(): return doctest.DocTestSuite(distutils.versionpredicate) + +if __name__ == '__main__': + run_unittest(test_suite()) diff --git a/lib-python/modified-2.7/distutils/util.py b/lib-python/modified-2.7/distutils/util.py --- a/lib-python/modified-2.7/distutils/util.py +++ b/lib-python/modified-2.7/distutils/util.py @@ -97,9 +97,7 @@ from distutils.sysconfig import get_config_vars cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, diff --git a/lib-python/modified-2.7/email/charset.py b/lib-python/modified-2.7/email/charset.py --- a/lib-python/modified-2.7/email/charset.py +++ b/lib-python/modified-2.7/email/charset.py @@ -209,7 +209,7 @@ input_charset = unicode(input_charset, 'ascii') except UnicodeError: raise errors.CharsetError(input_charset) - input_charset = input_charset.lower() + input_charset = input_charset.lower().encode('ascii') # Set the input charset after filtering through the aliases and/or codecs if not (input_charset in ALIASES or input_charset in CHARSETS): try: diff --git a/lib-python/modified-2.7/email/generator.py b/lib-python/modified-2.7/email/generator.py --- a/lib-python/modified-2.7/email/generator.py +++ b/lib-python/modified-2.7/email/generator.py @@ -202,18 +202,13 @@ g = self.clone(s) g.flatten(part, unixfrom=False) msgtexts.append(s.getvalue()) - # Now make 
sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) # BAW: What about boundaries that are wrapped in double-quotes? - boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() != boundary: + boundary = msg.get_boundary() + if not boundary: + # Create a boundary that doesn't appear in any of the + # message texts. + alltext = NL.join(msgtexts) + boundary = _make_boundary(alltext) msg.set_boundary(boundary) # If there's a preamble, write it out, with a trailing CRLF if msg.preamble is not None: @@ -292,7 +287,7 @@ _FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' class DecodedGenerator(Generator): - """Generator a text representation of a message. + """Generates a text representation of a message. Like the Generator base class, except that non-text parts are substituted with a format string representing the part. diff --git a/lib-python/modified-2.7/email/header.py b/lib-python/modified-2.7/email/header.py --- a/lib-python/modified-2.7/email/header.py +++ b/lib-python/modified-2.7/email/header.py @@ -47,6 +47,10 @@ # For use with .match() fcre = re.compile(r'[\041-\176]+:$') +# Find a header embedded in a putative header value. Used to check for +# header injection attack. 
+_embeded_header = re.compile(r'\n[^ \t]+:') + # Helpers @@ -403,7 +407,11 @@ newchunks += self._split(s, charset, targetlen, splitchars) lastchunk, lastcharset = newchunks[-1] lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) + value = self._encode_chunks(newchunks, maxlinelen) + if _embeded_header.search(value): + raise HeaderParseError("header value appears to contain " + "an embedded header: {!r}".format(value)) + return value diff --git a/lib-python/modified-2.7/email/message.py b/lib-python/modified-2.7/email/message.py --- a/lib-python/modified-2.7/email/message.py +++ b/lib-python/modified-2.7/email/message.py @@ -38,7 +38,9 @@ def _formatparam(param, value=None, quote=True): """Convenience function to format and return a key=value pair. - This will quote the value if needed or if quote is true. + This will quote the value if needed or if quote is true. If value is a + three tuple (charset, language, value), it will be encoded according + to RFC2231 rules. """ if value is not None and len(value) > 0: # A tuple is used for RFC 2231 encoded parameter values where items @@ -97,7 +99,7 @@ objects, otherwise it is a string. Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers + there is exactly one occurrence of the header per message. Some headers do in fact appear multiple times (e.g. Received) and for those headers, you must use the explicit API to set or get all the headers. Not all of the mapping methods are implemented. @@ -286,7 +288,7 @@ Return None if the header is missing instead of raising an exception. Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all + occurrence gets returned is undefined. Use get_all() to get all the values matching a header field name. 
""" return self.get(name) @@ -389,7 +391,10 @@ name is the header field to add. keyword arguments can be used to set additional parameters for the header field, with underscores converted to dashes. Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. + value is None, in which case only the key will be added. If a + parameter value contains non-ASCII characters it must be specified as a + three-tuple of (charset, language, value), in which case it will be + encoded according to RFC2231 rules. Example: diff --git a/lib-python/modified-2.7/email/mime/application.py b/lib-python/modified-2.7/email/mime/application.py --- a/lib-python/modified-2.7/email/mime/application.py +++ b/lib-python/modified-2.7/email/mime/application.py @@ -17,7 +17,7 @@ _encoder=encoders.encode_base64, **_params): """Create an application/* type MIME document. - _data is a string containing the raw applicatoin data. + _data is a string containing the raw application data. _subtype is the MIME content type subtype, defaulting to 'octet-stream'. 
diff --git a/lib-python/modified-2.7/email/test/data/msg_26.txt b/lib-python/modified-2.7/email/test/data/msg_26.txt --- a/lib-python/modified-2.7/email/test/data/msg_26.txt +++ b/lib-python/modified-2.7/email/test/data/msg_26.txt @@ -42,4 +42,4 @@ MzMAAAAACH97tzAAAAALu3c3gAAAAAAL+7tzDABAu7f7cAAAAAAACA+3MA7EQAv/sIAA AAAAAAAIAAAAAAAAAIAAAAAA ---1618492860--2051301190--113853680-- +--1618492860--2051301190--113853680-- \ No newline at end of file diff --git a/lib-python/modified-2.7/email/test/test_email.py b/lib-python/modified-2.7/email/test/test_email.py --- a/lib-python/modified-2.7/email/test/test_email.py +++ b/lib-python/modified-2.7/email/test/test_email.py @@ -179,6 +179,17 @@ self.assertRaises(Errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') + def test_make_boundary(self): + msg = MIMEMultipart('form-data') + # Note that when the boundary gets created is an implementation + # detail and might change. + self.assertEqual(msg.items()[0][1], 'multipart/form-data') + # Trigger creation of boundary + msg.as_string() + self.assertEqual(msg.items()[0][1][:33], + 'multipart/form-data; boundary="==') + # XXX: there ought to be tests of the uniqueness of the boundary, too. + def test_message_rfc822_only(self): # Issue 7970: message/rfc822 not in multipart parsed by # HeaderParser caused an exception when flattened. @@ -542,6 +553,17 @@ msg.set_charset(u'us-ascii') self.assertEqual('us-ascii', msg.get_content_charset()) + # Issue 5871: reject an attempt to embed a header inside a header value + # (header injection attack). 
+ def test_embeded_header_via_Header_rejected(self): + msg = Message() + msg['Dummy'] = Header('dummy\nX-Injected-Header: test') + self.assertRaises(Errors.HeaderParseError, msg.as_string) + + def test_embeded_header_via_string_rejected(self): + msg = Message() + msg['Dummy'] = 'dummy\nX-Injected-Header: test' + self.assertRaises(Errors.HeaderParseError, msg.as_string) # Test the email.Encoders module @@ -3114,6 +3136,28 @@ s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3I ?=' raises(Errors.HeaderParseError, decode_header, s) + # Issue 1078919 + def test_ascii_add_header(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename='bud.gif') + self.assertEqual('attachment; filename="bud.gif"', + msg['Content-Disposition']) + + def test_nonascii_add_header_via_triple(self): + msg = Message() + msg.add_header('Content-Disposition', 'attachment', + filename=('iso-8859-1', '', 'Fu\xdfballer.ppt')) + self.assertEqual( + 'attachment; filename*="iso-8859-1\'\'Fu%DFballer.ppt"', + msg['Content-Disposition']) + + def test_encode_unaliased_charset(self): + # Issue 1379416: when the charset has no output conversion, + # output was accidentally getting coerced to unicode. + res = Header('abc','iso-8859-2').encode() + self.assertEqual(res, '=?iso-8859-2?q?abc?=') + self.assertIsInstance(res, str) # Test RFC 2231 header parameters (en/de)coding diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py --- a/lib-python/modified-2.7/heapq.py +++ b/lib-python/modified-2.7/heapq.py @@ -133,6 +133,11 @@ from operator import itemgetter import bisect +def cmp_lt(x, y): + # Use __lt__ if available; otherwise, try __le__. + # In Py3.x, only __lt__ will be called. 
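The `heapq` hunks above route every comparison through the new `cmp_lt` shim, so heap elements need only supply `__lt__` (with a `__le__` fallback on Python 2); as the added comment notes, Python 3 consults `__lt__` alone. The practical upshot, sketched here with a hypothetical `Task` class, is that defining just `__lt__` is enough for the heap functions:

```python
import heapq

class Task:
    # Only __lt__ is defined; with the cmp_lt change (and on Python 3
    # generally) that is all the heap operations require.
    def __init__(self, priority, name):
        self.priority = priority
        self.name = name

    def __lt__(self, other):
        return self.priority < other.priority

heap = []
for prio, name in [(3, 'low'), (1, 'high'), (2, 'mid')]:
    heapq.heappush(heap, Task(prio, name))

# Pops come out in priority order, smallest first.
order = [heapq.heappop(heap).name for _ in range(3)]
```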
+ return (x < y) if hasattr(x, '__lt__') else (not y <= x) + def heappush(heap, item): """Push item onto heap, maintaining the heap invariant.""" heap.append(item) @@ -167,13 +172,13 @@ def heappushpop(heap, item): """Fast version of a heappush followed by a heappop.""" - if heap and heap[0] < item: + if heap and cmp_lt(heap[0], item): item, heap[0] = heap[0], item _siftup(heap, 0) return item def heapify(x): - """Transform list into a heap, in-place, in O(len(heap)) time.""" + """Transform list into a heap, in-place, in O(len(x)) time.""" n = len(x) # Transform bottom-up. The largest index there's any point to looking at # is the largest with a child index in-range, so must have 2*i + 1 < n, @@ -219,11 +224,10 @@ pop = result.pop los = result[-1] # los --> Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -244,7 +248,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -299,7 +303,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. 
heap[pos] = heap[childpos] @@ -368,7 +372,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -406,7 +410,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/modified-2.7/httplib.py b/lib-python/modified-2.7/httplib.py --- a/lib-python/modified-2.7/httplib.py +++ b/lib-python/modified-2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). +_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! 
while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. if isinstance(message_body, str): msg += message_body message_body = None @@ -1237,6 +1251,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/modified-2.7/idlelib/Bindings.py b/lib-python/modified-2.7/idlelib/Bindings.py --- a/lib-python/modified-2.7/idlelib/Bindings.py +++ b/lib-python/modified-2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/modified-2.7/idlelib/EditorWindow.py b/lib-python/modified-2.7/idlelib/EditorWindow.py --- a/lib-python/modified-2.7/idlelib/EditorWindow.py +++ b/lib-python/modified-2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a 
straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. 
+ text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. + return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") 
self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/modified-2.7/idlelib/FileList.py b/lib-python/modified-2.7/idlelib/FileList.py --- a/lib-python/modified-2.7/idlelib/FileList.py +++ b/lib-python/modified-2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def 
close_all_callback(self, event): + def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/modified-2.7/idlelib/FormatParagraph.py b/lib-python/modified-2.7/idlelib/FormatParagraph.py --- a/lib-python/modified-2.7/idlelib/FormatParagraph.py +++ b/lib-python/modified-2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/modified-2.7/idlelib/HISTORY.txt b/lib-python/modified-2.7/idlelib/HISTORY.txt --- a/lib-python/modified-2.7/idlelib/HISTORY.txt +++ b/lib-python/modified-2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/modified-2.7/idlelib/IOBinding.py b/lib-python/modified-2.7/idlelib/IOBinding.py --- a/lib-python/modified-2.7/idlelib/IOBinding.py +++ b/lib-python/modified-2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" 
% ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output 
tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git a/lib-python/modified-2.7/idlelib/NEWS.txt b/lib-python/modified-2.7/idlelib/NEWS.txt --- a/lib-python/modified-2.7/idlelib/NEWS.txt +++ b/lib-python/modified-2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/modified-2.7/idlelib/PyShell.py b/lib-python/modified-2.7/idlelib/PyShell.py --- a/lib-python/modified-2.7/idlelib/PyShell.py +++ b/lib-python/modified-2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. 
+ tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/modified-2.7/idlelib/ScriptBinding.py b/lib-python/modified-2.7/idlelib/ScriptBinding.py --- a/lib-python/modified-2.7/idlelib/ScriptBinding.py +++ b/lib-python/modified-2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" 
- mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... diff --git a/lib-python/modified-2.7/idlelib/config-keys.def b/lib-python/modified-2.7/idlelib/config-keys.def --- a/lib-python/modified-2.7/idlelib/config-keys.def +++ b/lib-python/modified-2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/modified-2.7/idlelib/extend.txt b/lib-python/modified-2.7/idlelib/extend.txt --- a/lib-python/modified-2.7/idlelib/extend.txt +++ b/lib-python/modified-2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. 
The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/modified-2.7/idlelib/idle.bat b/lib-python/modified-2.7/idlelib/idle.bat --- a/lib-python/modified-2.7/idlelib/idle.bat +++ b/lib-python/modified-2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/modified-2.7/idlelib/idlever.py b/lib-python/modified-2.7/idlelib/idlever.py --- a/lib-python/modified-2.7/idlelib/idlever.py +++ b/lib-python/modified-2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/modified-2.7/idlelib/macosxSupport.py b/lib-python/modified-2.7/idlelib/macosxSupport.py --- a/lib-python/modified-2.7/idlelib/macosxSupport.py +++ b/lib-python/modified-2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. 
+ """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. """ def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. 
- if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/modified-2.7/inspect.py b/lib-python/modified-2.7/inspect.py --- a/lib-python/modified-2.7/inspect.py +++ b/lib-python/modified-2.7/inspect.py @@ -952,8 +952,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no argument (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use 
num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no argument (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/modified-2.7/json/decoder.py b/lib-python/modified-2.7/json/decoder.py --- a/lib-python/modified-2.7/json/decoder.py +++ b/lib-python/modified-2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -356,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/modified-2.7/json/tests/__init__.py b/lib-python/modified-2.7/json/tests/__init__.py --- a/lib-python/modified-2.7/json/tests/__init__.py +++ b/lib-python/modified-2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + + at unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def 
test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/modified-2.7/json/tests/test_check_circular.py b/lib-python/modified-2.7/json/tests/test_check_circular.py --- a/lib-python/modified-2.7/json/tests/test_check_circular.py +++ b/lib-python/modified-2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = [] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - 
json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_decode.py b/lib-python/modified-2.7/json/tests/test_decode.py --- a/lib-python/modified-2.7/json/tests/test_decode.py +++ b/lib-python/modified-2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,7 +19,7 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # exercise the uncommon cases. The array cases are already covered. 
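All of the json test diffs in this series follow one refactoring pattern: the test methods move into a plain-`object` mixin that calls `self.loads`/`self.dumps`, and thin `PyTest`/`CTest` subclasses bind those names to either the pure-Python or the `_json`-accelerated implementation. A minimal standalone sketch of the pattern, with illustrative class names and only the stdlib `json` bound in:

```python
import json
import unittest

class RoundtripTests(object):
    # Mixin only: it holds the tests but is not itself a TestCase,
    # so the test loader never collects it on its own.
    def test_roundtrip(self):
        data = {"k": [1, 2.5, None, True]}
        self.assertEqual(self.loads(self.dumps(data)), data)

class PyRoundtripTests(RoundtripTests, unittest.TestCase):
    # Bind the implementation under test; a CTest-style twin would
    # bind the accelerated module obtained via import_fresh_module.
    dumps = staticmethod(json.dumps)
    loads = staticmethod(json.loads)
```

Because the mixin precedes `unittest.TestCase` in the bases, its methods still reach the `assert*` helpers through the MRO, while each concrete subclass runs the shared tests against exactly one implementation.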
- rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) def test_empty_objects(self): @@ -35,15 +34,19 @@ s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_default.py b/lib-python/modified-2.7/json/tests/test_default.py --- a/lib-python/modified-2.7/json/tests/test_default.py +++ b/lib-python/modified-2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) + + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_dump.py b/lib-python/modified-2.7/json/tests/test_dump.py --- 
a/lib-python/modified-2.7/json/tests/test_dump.py +++ b/lib-python/modified-2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/modified-2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/modified-2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/modified-2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not 
json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_fail.py b/lib-python/modified-2.7/json/tests/test_fail.py --- a/lib-python/modified-2.7/json/tests/test_fail.py +++ b/lib-python/modified-2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - 
self.assertRaises(TypeError, json.dumps, data, indent=True) + self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_float.py b/lib-python/modified-2.7/json/tests/test_float.py --- a/lib-python/modified-2.7/json/tests/test_float.py +++ b/lib-python/modified-2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_indent.py b/lib-python/modified-2.7/json/tests/test_indent.py --- a/lib-python/modified-2.7/json/tests/test_indent.py +++ b/lib-python/modified-2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class TestIndent(TestCase): 
+class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_pass1.py b/lib-python/modified-2.7/json/tests/test_pass1.py --- a/lib-python/modified-2.7/json/tests/test_pass1.py +++ b/lib-python/modified-2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git 
a/lib-python/modified-2.7/json/tests/test_pass2.py b/lib-python/modified-2.7/json/tests/test_pass2.py --- a/lib-python/modified-2.7/json/tests/test_pass2.py +++ b/lib-python/modified-2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_pass3.py b/lib-python/modified-2.7/json/tests/test_pass3.py --- a/lib-python/modified-2.7/json/tests/test_pass3.py +++ b/lib-python/modified-2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_recursion.py b/lib-python/modified-2.7/json/tests/test_recursion.py --- a/lib-python/modified-2.7/json/tests/test_recursion.py +++ b/lib-python/modified-2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class 
RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading highly-nested objects doesn't segfault when C + # accelerations are used. 
See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_scanstring.py b/lib-python/modified-2.7/json/tests/test_scanstring.py --- a/lib-python/modified-2.7/json/tests/test_scanstring.py +++ b/lib-python/modified-2.7/json/tests/test_scanstring.py @@ -1,20 +1,10 @@ import sys -import decimal -from unittest import TestCase -from test import test_support +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - @test_support.impl_detail() - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def 
test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -105,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_separators.py b/lib-python/modified-2.7/json/tests/test_separators.py --- a/lib-python/modified-2.7/json/tests/test_separators.py +++ b/lib-python/modified-2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/modified-2.7/json/tests/test_speedups.py 
b/lib-python/modified-2.7/json/tests/test_speedups.py --- a/lib-python/modified-2.7/json/tests/test_speedups.py +++ b/lib-python/modified-2.7/json/tests/test_speedups.py @@ -1,29 +1,23 @@ -import decimal -from unittest import TestCase -from test import test_support +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): - @test_support.impl_detail() +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) - @test_support.impl_detail() def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): - @test_support.impl_detail() +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) - @test_support.impl_detail() def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/modified-2.7/json/tests/test_unicode.py b/lib-python/modified-2.7/json/tests/test_unicode.py --- a/lib-python/modified-2.7/json/tests/test_unicode.py +++ b/lib-python/modified-2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from 
json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ -15,74 +14,78 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') def test_encoding4(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u]) + j = self.dumps([u]) self.assertEqual(j, '["\\u03b1\\u03a9"]') def test_encoding5(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u, ensure_ascii=False) + j = self.dumps(u, ensure_ascii=False) self.assertEqual(j, u'"{0}"'.format(u)) def test_encoding6(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps([u], ensure_ascii=False) + j = self.dumps([u], ensure_ascii=False) self.assertEqual(j, u'["{0}"]'.format(u)) def test_big_unicode_encode(self): u = u'\U0001d120' - self.assertEqual(json.dumps(u), '"\\ud834\\udd20"') - self.assertEqual(json.dumps(u, ensure_ascii=False), u'"\U0001d120"') + self.assertEqual(self.dumps(u), '"\\ud834\\udd20"') + self.assertEqual(self.dumps(u, ensure_ascii=False), u'"\U0001d120"') def test_big_unicode_decode(self): u = u'z\U0001d120x' - self.assertEqual(json.loads('"' + u + '"'), u) - self.assertEqual(json.loads('"z\\ud834\\udd20x"'), u) + self.assertEqual(self.loads('"' + u + '"'), u) + 
self.assertEqual(self.loads('"z\\ud834\\udd20x"'), u) def test_unicode_decode(self): for i in range(0, 0xd7ff): u = unichr(i) s = '"\\u{0:04x}"'.format(i) - self.assertEqual(json.loads(s), u) + self.assertEqual(self.loads(s), u) def test_object_pairs_hook_with_unicode(self): s = u'{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [(u"xkd", 1), (u"kcw", 2), (u"art", 3), (u"hxm", 4), (u"qrt", 5), (u"pad", 6), (u"hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook = lambda x: x), p) - od = json.loads(s, object_pairs_hook = OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook = lambda x: x), p) + od = self.loads(s, object_pairs_hook = OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook = OrderedDict, object_hook = lambda x: None), OrderedDict(p)) def test_default_encoding(self): - self.assertEqual(json.loads(u'{"a": "\xe9"}'.encode('utf-8')), + self.assertEqual(self.loads(u'{"a": "\xe9"}'.encode('utf-8')), {'a': u'\xe9'}) def test_unicode_preservation(self): - self.assertEqual(type(json.loads(u'""')), unicode) - self.assertEqual(type(json.loads(u'"a"')), unicode) - self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) + self.assertEqual(type(self.loads(u'""')), unicode) + self.assertEqual(type(self.loads(u'"a"')), unicode) + self.assertEqual(type(self.loads(u'["a"]')[0]), unicode) # Issue 10038. 
- self.assertEqual(type(json.loads('"foo"')), unicode) + self.assertEqual(type(self.loads('"foo"')), unicode) def test_encode_not_utf_8(self): self.assertEqual(json.dumps('\xb1\xe6', encoding='iso8859-2'), '"\\u0105\\u0107"') self.assertEqual(json.dumps(['\xb1\xe6'], encoding='iso8859-2'), '["\\u0105\\u0107"]') + + +class TestPyUnicode(TestUnicode, PyTest): pass +class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/modified-2.7/lib2to3/fixes/fix_itertools.py b/lib-python/modified-2.7/lib2to3/fixes/fix_itertools.py --- a/lib-python/modified-2.7/lib2to3/fixes/fix_itertools.py +++ b/lib-python/modified-2.7/lib2to3/fixes/fix_itertools.py @@ -13,7 +13,7 @@ class FixItertools(fixer_base.BaseFix): BM_compatible = True - it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" + it_funcs = "('imap'|'ifilter'|'izip'|'izip_longest'|'ifilterfalse')" PATTERN = """ power< it='itertools' trailer< @@ -28,7 +28,8 @@ def transform(self, node, results): prefix = None func = results['func'][0] - if 'it' in results and func.value != u'ifilterfalse': + if ('it' in results and + func.value not in (u'ifilterfalse', u'izip_longest')): dot, it = (results['dot'], results['it']) # Remove the 'itertools' prefix = it.prefix diff --git a/lib-python/modified-2.7/lib2to3/fixes/fix_itertools_imports.py b/lib-python/modified-2.7/lib2to3/fixes/fix_itertools_imports.py --- a/lib-python/modified-2.7/lib2to3/fixes/fix_itertools_imports.py +++ b/lib-python/modified-2.7/lib2to3/fixes/fix_itertools_imports.py @@ -31,9 +31,10 @@ if member_name in (u'imap', u'izip', u'ifilter'): child.value = None child.remove() - elif member_name == u'ifilterfalse': + elif member_name in (u'ifilterfalse', u'izip_longest'): node.changed() - name_node.value = u'filterfalse' + name_node.value = (u'filterfalse' if member_name[1] == u'f' + else u'zip_longest') # Make sure the import statement is still sane children = imports.children[:] or [imports] diff --git 
a/lib-python/modified-2.7/lib2to3/fixes/fix_metaclass.py b/lib-python/modified-2.7/lib2to3/fixes/fix_metaclass.py --- a/lib-python/modified-2.7/lib2to3/fixes/fix_metaclass.py +++ b/lib-python/modified-2.7/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! oneliners have no suite node, we have to fake one up diff --git a/lib-python/modified-2.7/lib2to3/fixes/fix_urllib.py b/lib-python/modified-2.7/lib2to3/fixes/fix_urllib.py --- a/lib-python/modified-2.7/lib2to3/fixes/fix_urllib.py +++ b/lib-python/modified-2.7/lib2to3/fixes/fix_urllib.py @@ -12,7 +12,7 @@ MAPPING = {"urllib": [ ("urllib.request", - ["URLOpener", "FancyURLOpener", "urlretrieve", + ["URLopener", "FancyURLopener", "urlretrieve", "_urlopener", "urlopen", "urlcleanup", "pathname2url", "url2pathname"]), ("urllib.parse", diff --git a/lib-python/modified-2.7/lib2to3/main.py b/lib-python/modified-2.7/lib2to3/main.py --- a/lib-python/modified-2.7/lib2to3/main.py +++ b/lib-python/modified-2.7/lib2to3/main.py @@ -101,7 +101,7 @@ parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], - help="Prevent a fixer from being run.") + help="Prevent a transformation from being run") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", @@ -113,7 +113,7 @@ parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, - help="Don't write backups for modified files.") + help="Don't write backups for modified files") # Parse command line arguments refactor_stdin = False diff --git a/lib-python/modified-2.7/lib2to3/patcomp.py 
b/lib-python/modified-2.7/lib2to3/patcomp.py --- a/lib-python/modified-2.7/lib2to3/patcomp.py +++ b/lib-python/modified-2.7/lib2to3/patcomp.py @@ -12,6 +12,7 @@ # Python imports import os +import StringIO # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar @@ -32,7 +33,7 @@ def tokenize_wrapper(input): """Tokenizes a string suppressing significant whitespace.""" skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) - tokens = tokenize.generate_tokens(driver.generate_lines(input).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(input).readline) for quintuple in tokens: type, value, start, end, line_text = quintuple if type not in skip: diff --git a/lib-python/modified-2.7/lib2to3/pgen2/conv.py b/lib-python/modified-2.7/lib2to3/pgen2/conv.py --- a/lib-python/modified-2.7/lib2to3/pgen2/conv.py +++ b/lib-python/modified-2.7/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/lib-python/modified-2.7/lib2to3/pgen2/driver.py b/lib-python/modified-2.7/lib2to3/pgen2/driver.py --- a/lib-python/modified-2.7/lib2to3/pgen2/driver.py +++ b/lib-python/modified-2.7/lib2to3/pgen2/driver.py @@ -19,6 +19,7 @@ import codecs import os import logging +import StringIO import sys # Pgen imports @@ -101,18 +102,10 @@ def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" - tokens = tokenize.generate_tokens(generate_lines(text).next) + tokens = tokenize.generate_tokens(StringIO.StringIO(text).readline) return self.parse_tokens(tokens, debug) -def generate_lines(text): - """Generator that behaves like readline without using StringIO.""" - for line in text.splitlines(True): - yield line - while True: - yield "" - - def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" diff --git a/lib-python/modified-2.7/lib2to3/pytree.py b/lib-python/modified-2.7/lib2to3/pytree.py --- a/lib-python/modified-2.7/lib2to3/pytree.py +++ b/lib-python/modified-2.7/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is @@ -741,12 +741,13 @@ elif self.name == "bare_name": yield self._bare_name_matches(nodes) else: - # There used to be some monkey patching of sys.stderr here, to - # silence the error message from the RuntimError, PyPy has removed - # this because it relied on reference counting. 
This is because the - # caller of this function doesn't consume this generator fully, so - # the finally statement that used to be here would only be executed - # when the gc happened to run. + # The reason for this is that hitting the recursion limit usually + # results in some ugly messages about how RuntimeErrors are being + # ignored. We don't do this on non-CPython implementation because + # they don't have this problem. + if hasattr(sys, "getrefcount"): + save_stderr = sys.stderr + sys.stderr = StringIO() try: for count, r in self._recursive_matches(nodes, 0): if self.name: @@ -759,6 +760,9 @@ if self.name: r[self.name] = nodes[:count] yield count, r + finally: + if hasattr(sys, "getrefcount"): + sys.stderr = save_stderr def _iterative_matches(self, nodes): """Helper to iteratively yield the matches.""" diff --git a/lib-python/modified-2.7/lib2to3/refactor.py b/lib-python/modified-2.7/lib2to3/refactor.py --- a/lib-python/modified-2.7/lib2to3/refactor.py +++ b/lib-python/modified-2.7/lib2to3/refactor.py @@ -302,13 +302,14 @@ Files and subdirectories starting with '.' are skipped. 
""" + py_ext = os.extsep + "py" for dirpath, dirnames, filenames in os.walk(dir_name): self.log_debug("Descending into %s", dirpath) dirnames.sort() filenames.sort() for name in filenames: - if not name.startswith(".") and \ - os.path.splitext(name)[1].endswith("py"): + if (not name.startswith(".") and + os.path.splitext(name)[1] == py_ext): fullname = os.path.join(dirpath, name) self.refactor_file(fullname, write, doctests_only) # Modify dirnames in-place to remove subdirs with leading dots diff --git a/lib-python/modified-2.7/lib2to3/tests/data/py2_test_grammar.py b/lib-python/modified-2.7/lib2to3/tests/data/py2_test_grammar.py --- a/lib-python/modified-2.7/lib2to3/tests/data/py2_test_grammar.py +++ b/lib-python/modified-2.7/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/modified-2.7/lib2to3/tests/data/py3_test_grammar.py b/lib-python/modified-2.7/lib2to3/tests/data/py3_test_grammar.py --- a/lib-python/modified-2.7/lib2to3/tests/data/py3_test_grammar.py +++ b/lib-python/modified-2.7/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/lib-python/modified-2.7/lib2to3/tests/test_fixers.py b/lib-python/modified-2.7/lib2to3/tests/test_fixers.py --- a/lib-python/modified-2.7/lib2to3/tests/test_fixers.py +++ b/lib-python/modified-2.7/lib2to3/tests/test_fixers.py @@ -3623,16 +3623,24 @@ a = """%s(f, a)""" self.checkall(b, a) - def test_2(self): + def test_qualified(self): b = """itertools.ifilterfalse(a, b)""" a = """itertools.filterfalse(a, b)""" self.check(b, a) - def test_4(self): + b = """itertools.izip_longest(a, 
b)""" + a = """itertools.zip_longest(a, b)""" + self.check(b, a) + + def test_2(self): b = """ifilterfalse(a, b)""" a = """filterfalse(a, b)""" self.check(b, a) + b = """izip_longest(a, b)""" + a = """zip_longest(a, b)""" + self.check(b, a) + def test_space_1(self): b = """ %s(f, a)""" a = """ %s(f, a)""" @@ -3643,9 +3651,14 @@ a = """ itertools.filterfalse(a, b)""" self.check(b, a) + b = """ itertools.izip_longest(a, b)""" + a = """ itertools.zip_longest(a, b)""" + self.check(b, a) + def test_run_order(self): self.assert_runs_after('map', 'zip', 'filter') + class Test_itertools_imports(FixerTestCase): fixer = 'itertools_imports' @@ -3696,18 +3709,19 @@ s = "from itertools import bar as bang" self.unchanged(s) - def test_ifilter(self): - b = "from itertools import ifilterfalse" - a = "from itertools import filterfalse" - self.check(b, a) - - b = "from itertools import imap, ifilterfalse, foo" - a = "from itertools import filterfalse, foo" - self.check(b, a) - - b = "from itertools import bar, ifilterfalse, foo" - a = "from itertools import bar, filterfalse, foo" - self.check(b, a) + def test_ifilter_and_zip_longest(self): + for name in "filterfalse", "zip_longest": + b = "from itertools import i%s" % (name,) + a = "from itertools import %s" % (name,) + self.check(b, a) + + b = "from itertools import imap, i%s, foo" % (name,) + a = "from itertools import %s, foo" % (name,) + self.check(b, a) + + b = "from itertools import bar, i%s, foo" % (name,) + a = "from itertools import bar, %s, foo" % (name,) + self.check(b, a) def test_import_star(self): s = "from itertools import *" diff --git a/lib-python/modified-2.7/lib2to3/tests/test_parser.py b/lib-python/modified-2.7/lib2to3/tests/test_parser.py --- a/lib-python/modified-2.7/lib2to3/tests/test_parser.py +++ b/lib-python/modified-2.7/lib2to3/tests/test_parser.py @@ -19,6 +19,16 @@ # Local imports from lib2to3.pgen2 import tokenize from ..pgen2.parse import ParseError +from lib2to3.pygram import python_symbols as syms + 
+ +class TestDriver(support.TestCase): + + def test_formfeed(self): + s = """print 1\n\x0Cprint 2\n""" + t = driver.parse_string(s) + self.assertEqual(t.children[0].children[0].type, syms.print_stmt) + self.assertEqual(t.children[1].children[0].type, syms.print_stmt) class GrammarTest(support.TestCase): diff --git a/lib-python/modified-2.7/lib2to3/tests/test_refactor.py b/lib-python/modified-2.7/lib2to3/tests/test_refactor.py --- a/lib-python/modified-2.7/lib2to3/tests/test_refactor.py +++ b/lib-python/modified-2.7/lib2to3/tests/test_refactor.py @@ -223,6 +223,7 @@ "hi.py", ".dumb", ".after.py", + "notpy.npy", "sappy"] expected = ["hi.py"] check(tree, expected) diff --git a/lib-python/modified-2.7/lib2to3/tests/test_util.py b/lib-python/modified-2.7/lib2to3/tests/test_util.py --- a/lib-python/modified-2.7/lib2to3/tests/test_util.py +++ b/lib-python/modified-2.7/lib2to3/tests/test_util.py @@ -568,8 +568,8 @@ def test_from_import(self): node = parse('bar()') - fixer_util.touch_import("cgi", "escape", node) - self.assertEqual(str(node), 'from cgi import escape\nbar()\n\n') + fixer_util.touch_import("html", "escape", node) + self.assertEqual(str(node), 'from html import escape\nbar()\n\n') def test_name_import(self): node = parse('bar()') diff --git a/lib-python/modified-2.7/multiprocessing/__init__.py b/lib-python/modified-2.7/multiprocessing/__init__.py --- a/lib-python/modified-2.7/multiprocessing/__init__.py +++ b/lib-python/modified-2.7/multiprocessing/__init__.py @@ -38,6 +38,7 @@ # HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY # OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. 
# __version__ = '0.70a1' @@ -115,8 +116,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/lib-python/modified-2.7/multiprocessing/connection.py b/lib-python/modified-2.7/multiprocessing/connection.py --- a/lib-python/modified-2.7/multiprocessing/connection.py +++ b/lib-python/modified-2.7/multiprocessing/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/modified-2.7/multiprocessing/dummy/__init__.py b/lib-python/modified-2.7/multiprocessing/dummy/__init__.py --- a/lib-python/modified-2.7/multiprocessing/dummy/__init__.py +++ b/lib-python/modified-2.7/multiprocessing/dummy/__init__.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/__init__.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/modified-2.7/multiprocessing/dummy/connection.py b/lib-python/modified-2.7/multiprocessing/dummy/connection.py --- a/lib-python/modified-2.7/multiprocessing/dummy/connection.py +++ b/lib-python/modified-2.7/multiprocessing/dummy/connection.py @@ -3,7 +3,33 @@ # # multiprocessing/dummy/connection.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'Client', 'Listener', 'Pipe' ] diff --git a/lib-python/modified-2.7/multiprocessing/forking.py b/lib-python/modified-2.7/multiprocessing/forking.py --- a/lib-python/modified-2.7/multiprocessing/forking.py +++ b/lib-python/modified-2.7/multiprocessing/forking.py @@ -3,7 +3,33 @@ # # multiprocessing/forking.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import os @@ -169,6 +195,7 @@ TERMINATE = 0x10000 WINEXE = (sys.platform == 'win32' and getattr(sys, 'frozen', False)) + WINSERVICE = sys.executable.lower().endswith("pythonservice.exe") exit = win32.ExitProcess close = win32.CloseHandle @@ -178,7 +205,7 @@ # People embedding Python want to modify it. # - if sys.executable.lower().endswith('pythonservice.exe'): + if WINSERVICE: _python_exe = os.path.join(sys.exec_prefix, 'python.exe') else: _python_exe = sys.executable @@ -368,7 +395,7 @@ if _logger is not None: d['log_level'] = _logger.getEffectiveLevel() - if not WINEXE: + if not WINEXE and not WINSERVICE: main_path = getattr(sys.modules['__main__'], '__file__', None) if not main_path and sys.argv[0] not in ('', '-c'): main_path = sys.argv[0] diff --git a/lib-python/modified-2.7/multiprocessing/heap.py b/lib-python/modified-2.7/multiprocessing/heap.py --- a/lib-python/modified-2.7/multiprocessing/heap.py +++ b/lib-python/modified-2.7/multiprocessing/heap.py @@ -3,7 +3,33 @@ # # multiprocessing/heap.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import bisect diff --git a/lib-python/modified-2.7/multiprocessing/managers.py b/lib-python/modified-2.7/multiprocessing/managers.py --- a/lib-python/modified-2.7/multiprocessing/managers.py +++ b/lib-python/modified-2.7/multiprocessing/managers.py @@ -4,7 +4,33 @@ # # multiprocessing/managers.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ 'BaseManager', 'SyncManager', 'BaseProxy', 'Token' ] diff --git a/lib-python/modified-2.7/multiprocessing/pool.py b/lib-python/modified-2.7/multiprocessing/pool.py --- a/lib-python/modified-2.7/multiprocessing/pool.py +++ b/lib-python/modified-2.7/multiprocessing/pool.py @@ -3,7 +3,33 @@ # # multiprocessing/pool.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. 
Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. 
# __all__ = ['Pool'] @@ -269,6 +295,8 @@ while pool._worker_handler._state == RUN and pool._state == RUN: pool._maintain_pool() time.sleep(0.1) + # send sentinel to stop workers + pool._taskqueue.put(None) debug('worker handler exiting') @staticmethod @@ -387,7 +415,6 @@ if self._state == RUN: self._state = CLOSE self._worker_handler._state = CLOSE - self._taskqueue.put(None) def terminate(self): debug('terminating pool') @@ -421,7 +448,6 @@ worker_handler._state = TERMINATE task_handler._state = TERMINATE - taskqueue.put(None) # sentinel debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) @@ -431,6 +457,11 @@ result_handler._state = TERMINATE outqueue.put(None) # sentinel + # We must wait for the worker handler to exit before terminating + # workers because we don't want workers to be restarted behind our back. + debug('joining worker handler') + worker_handler.join() + # Terminate workers which haven't already finished. if pool and hasattr(pool[0], 'terminate'): debug('terminating workers') diff --git a/lib-python/modified-2.7/multiprocessing/process.py b/lib-python/modified-2.7/multiprocessing/process.py --- a/lib-python/modified-2.7/multiprocessing/process.py +++ b/lib-python/modified-2.7/multiprocessing/process.py @@ -3,7 +3,33 @@ # # multiprocessing/process.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. 
Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Process', 'current_process', 'active_children'] diff --git a/lib-python/modified-2.7/multiprocessing/queues.py b/lib-python/modified-2.7/multiprocessing/queues.py --- a/lib-python/modified-2.7/multiprocessing/queues.py +++ b/lib-python/modified-2.7/multiprocessing/queues.py @@ -3,7 +3,33 @@ # # multiprocessing/queues.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. 
Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = ['Queue', 'SimpleQueue', 'JoinableQueue'] diff --git a/lib-python/modified-2.7/multiprocessing/reduction.py b/lib-python/modified-2.7/multiprocessing/reduction.py --- a/lib-python/modified-2.7/multiprocessing/reduction.py +++ b/lib-python/modified-2.7/multiprocessing/reduction.py @@ -4,7 +4,33 @@ # # multiprocessing/reduction.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. 
Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [] diff --git a/lib-python/modified-2.7/multiprocessing/sharedctypes.py b/lib-python/modified-2.7/multiprocessing/sharedctypes.py --- a/lib-python/modified-2.7/multiprocessing/sharedctypes.py +++ b/lib-python/modified-2.7/multiprocessing/sharedctypes.py @@ -3,7 +3,33 @@ # # multiprocessing/sharedctypes.py # -# Copyright (c) 2007-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. 
Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # import sys @@ -52,9 +78,11 @@ Returns a ctypes array allocated from shared memory ''' type_ = typecode_to_type.get(typecode_or_type, typecode_or_type) - if isinstance(size_or_initializer, int): + if isinstance(size_or_initializer, (int, long)): type_ = type_ * size_or_initializer - return _new_value(type_) + obj = _new_value(type_) + ctypes.memset(ctypes.addressof(obj), 0, ctypes.sizeof(obj)) + return obj else: type_ = type_ * len(size_or_initializer) result = _new_value(type_) diff --git a/lib-python/modified-2.7/multiprocessing/synchronize.py b/lib-python/modified-2.7/multiprocessing/synchronize.py --- a/lib-python/modified-2.7/multiprocessing/synchronize.py +++ b/lib-python/modified-2.7/multiprocessing/synchronize.py @@ -3,7 +3,33 @@ # # multiprocessing/synchronize.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. # __all__ = [ diff --git a/lib-python/modified-2.7/multiprocessing/util.py b/lib-python/modified-2.7/multiprocessing/util.py --- a/lib-python/modified-2.7/multiprocessing/util.py +++ b/lib-python/modified-2.7/multiprocessing/util.py @@ -3,7 +3,33 @@ # # multiprocessing/util.py # -# Copyright (c) 2006-2008, R Oudkerk --- see COPYING.txt +# Copyright (c) 2006-2008, R Oudkerk +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. Neither the name of author nor the names of any contributors may be +# used to endorse or promote products derived from this software +# without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE +# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY +# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF +# SUCH DAMAGE. 
# import itertools diff --git a/lib-python/modified-2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py --- a/lib-python/modified-2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -11,7 +11,7 @@ __all__ = [ 'get_importer', 'iter_importers', 'get_loader', 'find_loader', - 'walk_packages', 'iter_modules', + 'walk_packages', 'iter_modules', 'get_data', 'ImpImporter', 'ImpLoader', 'read_code', 'extend_path', ] diff --git a/lib-python/modified-2.7/pydoc.py b/lib-python/modified-2.7/pydoc.py --- a/lib-python/modified-2.7/pydoc.py +++ b/lib-python/modified-2.7/pydoc.py @@ -156,7 +156,7 @@ no.append(x) return yes, no -def visiblename(name, all=None): +def visiblename(name, all=None, obj=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. _hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', @@ -164,6 +164,9 @@ if name in _hidden_names: return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 + # Namedtuples have public fields and methods with a single leading underscore + if name.startswith('_') and hasattr(obj, '_fields'): + return 1 if all is not None: # only document that which the programmer exported in __all__ return name in all @@ -475,9 +478,9 @@ def multicolumn(self, list, format, cols=4): """Format a list of items into a multi-column list.""" result = '' - rows = (len(list)+cols-1)/cols + rows = (len(list)+cols-1)//cols for col in range(cols): - result = result + '' % (100/cols) + result = result + '' % (100//cols) for i in range(rows*col, rows*col+rows): if i < len(list): result = result + format(list[i]) + '
    \n' @@ -629,7 +632,7 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) cdict[key] = cdict[value] = '#' + key for key, value in classes: @@ -645,13 +648,13 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) fdict[key] = '#-' + key if inspect.isfunction(value): fdict[value] = fdict[key] data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) doc = self.markup(getdoc(object), self.preformat, fdict, cdict) @@ -775,7 +778,7 @@ push('\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) mdict = {} for key, kind, homecls, value in attrs: @@ -1044,18 +1047,18 @@ # if __all__ exists, believe it. Otherwise use old heuristic. if (all is not None or (inspect.getmodule(value) or object) is object): - if visiblename(key, all): + if visiblename(key, all, object): classes.append((key, value)) funcs = [] for key, value in inspect.getmembers(object, inspect.isroutine): # if __all__ exists, believe it. Otherwise use old heuristic. 
if (all is not None or inspect.isbuiltin(value) or inspect.getmodule(value) is object): - if visiblename(key, all): + if visiblename(key, all, object): funcs.append((key, value)) data = [] for key, value in inspect.getmembers(object, isdata): - if visiblename(key, all): + if visiblename(key, all, object): data.append((key, value)) modpkgs = [] @@ -1115,7 +1118,7 @@ result = result + self.section('CREDITS', str(object.__credits__)) return result - def docclass(self, object, name=None, mod=None): + def docclass(self, object, name=None, mod=None, *ignored): """Produce text documentation for a given class object.""" realname = object.__name__ name = name or realname @@ -1188,7 +1191,7 @@ name, mod, maxlen=70, doc=doc) + '\n') return attrs - attrs = filter(lambda data: visiblename(data[0]), + attrs = filter(lambda data: visiblename(data[0], obj=object), classify_class_attrs(object)) while attrs: if mro: @@ -1720,8 +1723,9 @@ return '' return '' - def __call__(self, request=None): - if request is not None: + _GoInteractive = object() + def __call__(self, request=_GoInteractive): + if request is not self._GoInteractive: self.help(request) else: self.intro() diff --git a/lib-python/modified-2.7/random.py b/lib-python/modified-2.7/random.py --- a/lib-python/modified-2.7/random.py +++ b/lib-python/modified-2.7/random.py @@ -316,7 +316,7 @@ n = len(population) if not 0 <= k <= n: - raise ValueError, "sample larger than population" + raise ValueError("sample larger than population") random = self.random _int = int result = [None] * k @@ -489,6 +489,12 @@ Conditions on the parameters are alpha > 0 and beta > 0. 
+ The probability distribution function is: + + x ** (alpha - 1) * math.exp(-x / beta) + pdf(x) = -------------------------------------- + math.gamma(alpha) * beta ** alpha + """ # alpha > 0, beta > 0, mean is alpha*beta, variance is alpha*beta**2 @@ -591,7 +597,7 @@ ## -------------------- beta -------------------- ## See -## http://sourceforge.net/bugs/?func=detailbug&bug_id=130030&group_id=5470 +## http://mail.python.org/pipermail/python-bugs-list/2001-January/003752.html ## for Ivan Frohne's insightful analysis of why the original implementation: ## ## def betavariate(self, alpha, beta): diff --git a/lib-python/modified-2.7/site.py b/lib-python/modified-2.7/site.py --- a/lib-python/modified-2.7/site.py +++ b/lib-python/modified-2.7/site.py @@ -61,6 +61,7 @@ import sys import os import __builtin__ +import traceback # Prefixes for site-packages; add additional prefixes like /usr/local here PREFIXES = [sys.prefix, sys.exec_prefix] @@ -157,17 +158,26 @@ except IOError: return with f: - for line in f: + for n, line in enumerate(f): if line.startswith("#"): continue - if line.startswith(("import ", "import\t")): - exec line - continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths.add(dircase) + try: + if line.startswith(("import ", "import\t")): + exec line + continue + line = line.rstrip() + dir, dircase = makepath(sitedir, line) + if not dircase in known_paths and os.path.exists(dir): + sys.path.append(dir) + known_paths.add(dircase) + except Exception as err: + print >>sys.stderr, "Error processing line {:d} of {}:\n".format( + n+1, fullname) + for record in traceback.format_exception(*sys.exc_info()): + for line in record.splitlines(): + print >>sys.stderr, ' '+line + print >>sys.stderr, "\nRemainder of file ignored" + break if reset: known_paths = None return known_paths diff --git a/lib-python/modified-2.7/ssl.py b/lib-python/modified-2.7/ssl.py --- 
a/lib-python/modified-2.7/ssl.py +++ b/lib-python/modified-2.7/ssl.py @@ -113,9 +113,11 @@ if e.errno != errno.ENOTCONN: raise # no, no connection yet + self._connected = False self._sslobj = None else: # yes, create the SSL object + self._connected = True self._sslobj = _ssl.sslwrap(self._sock, server_side, keyfile, certfile, cert_reqs, ssl_version, ca_certs, @@ -285,21 +287,36 @@ self._sslobj.do_handshake() - def connect(self, addr): - - """Connects to remote ADDR, and then wraps the connection in - an SSL channel.""" - + def _real_connect(self, addr, return_errno): # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. - if self._sslobj: + if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") - socket.connect(self, addr) self._sslobj = _ssl.sslwrap(self._sock, False, self.keyfile, self.certfile, self.cert_reqs, self.ssl_version, self.ca_certs, self.ciphers) - if self.do_handshake_on_connect: - self.do_handshake() + try: + socket.connect(self, addr) + if self.do_handshake_on_connect: + self.do_handshake() + except socket_error as e: + if return_errno: + return e.errno + else: + self._sslobj = None + raise e + self._connected = True + return 0 + + def connect(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + self._real_connect(addr, False) + + def connect_ex(self, addr): + """Connects to remote ADDR, and then wraps the connection in + an SSL channel.""" + return self._real_connect(addr, True) def accept(self): diff --git a/lib-python/modified-2.7/subprocess.py b/lib-python/modified-2.7/subprocess.py --- a/lib-python/modified-2.7/subprocess.py +++ b/lib-python/modified-2.7/subprocess.py @@ -396,6 +396,7 @@ import traceback import gc import signal +import errno # Exception classes used by this module. 
class CalledProcessError(Exception): @@ -427,7 +428,6 @@ else: import select _has_poll = hasattr(select, 'poll') - import errno import fcntl import pickle @@ -441,8 +441,15 @@ "check_output", "CalledProcessError"] if mswindows: - from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP - __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP"]) + from _subprocess import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP, + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, + STD_ERROR_HANDLE, SW_HIDE, + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW) + + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP", + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE", + "STD_ERROR_HANDLE", "SW_HIDE", + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW"]) try: MAXFD = os.sysconf("SC_OPEN_MAX") except: @@ -726,7 +733,11 @@ stderr = None if self.stdin: if input: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE and e.errno != errno.EINVAL: + raise self.stdin.close() elif self.stdout: stdout = self.stdout.read() @@ -887,7 +898,7 @@ except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? 
raise WindowsError(*e.args) finally: @@ -960,7 +971,11 @@ if self.stdin: if input is not None: - self.stdin.write(input) + try: + self.stdin.write(input) + except IOError as e: + if e.errno != errno.EPIPE: + raise self.stdin.close() if self.stdout: @@ -1055,14 +1070,17 @@ errread, errwrite) - def _set_cloexec_flag(self, fd): + def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) - fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + if cloexec: + fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) + else: + fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _close_fds(self, but): @@ -1132,21 +1150,25 @@ os.close(errpipe_read) # Dup fds for child - if p2cread is not None: - os.dup2(p2cread, 0) - if c2pwrite is not None: - os.dup2(c2pwrite, 1) - if errwrite is not None: - os.dup2(errwrite, 2) + def _dup2(a, b): + # dup2() removes the CLOEXEC flag but + # we must do it ourselves if dup2() + # would be a no-op (issue #10806). + if a == b: + self._set_cloexec_flag(a, False) + elif a is not None: + os.dup2(a, b) + _dup2(p2cread, 0) + _dup2(c2pwrite, 1) + _dup2(errwrite, 2) - # Close pipe fds. Make sure we don't close the same - # fd more than once, or standard fds. - if p2cread is not None and p2cread not in (0,): - os.close(p2cread) - if c2pwrite is not None and c2pwrite not in (p2cread, 1): - os.close(c2pwrite) - if errwrite is not None and errwrite not in (p2cread, c2pwrite, 2): - os.close(errwrite) + # Close pipe fds. Make sure we don't close the + # same fd more than once, or standard fds. 
+ closed = { None } + for fd in [p2cread, c2pwrite, errwrite]: + if fd not in closed and fd > 2: + os.close(fd) + closed.add(fd) # Close all other fds, if asked for if close_fds: @@ -1198,7 +1220,11 @@ os.close(errpipe_read) if data != "": - _eintr_retry_call(os.waitpid, self.pid, 0) + try: + _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None: @@ -1244,7 +1270,15 @@ """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode is None: - pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + try: + pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0) + except OSError as e: + if e.errno != errno.ECHILD: + raise + # This happens if SIGCLD is set to be ignored or waiting + # for child processes has otherwise been disabled for our + # process. This child is dead, we can't get the status. + sts = 0 self._handle_exitstatus(sts) return self.returncode @@ -1321,9 +1355,16 @@ for fd, mode in ready: if mode & select.POLLOUT: chunk = input[input_offset : input_offset + _PIPE_BUF] - input_offset += os.write(fd, chunk) - if input_offset >= len(input): - close_unregister_and_remove(fd) + try: + input_offset += os.write(fd, chunk) + except OSError as e: + if e.errno == errno.EPIPE: + close_unregister_and_remove(fd) + else: + raise + else: + if input_offset >= len(input): + close_unregister_and_remove(fd) elif mode & select_POLLIN_POLLPRI: data = os.read(fd, 4096) if not data: @@ -1362,11 +1403,19 @@ if self.stdin in wlist: chunk = input[input_offset : input_offset + _PIPE_BUF] - bytes_written = os.write(self.stdin.fileno(), chunk) - input_offset += bytes_written - if input_offset >= len(input): - self.stdin.close() - write_set.remove(self.stdin) + try: + bytes_written = os.write(self.stdin.fileno(), chunk) + except OSError as e: + if e.errno == errno.EPIPE: + self.stdin.close() + 
write_set.remove(self.stdin) + else: + raise + else: + input_offset += bytes_written + if input_offset >= len(input): + self.stdin.close() + write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) diff --git a/lib-python/modified-2.7/sysconfig.py b/lib-python/modified-2.7/sysconfig.py --- a/lib-python/modified-2.7/sysconfig.py +++ b/lib-python/modified-2.7/sysconfig.py @@ -497,9 +497,7 @@ # machine is going to compile and link as if it were # MACOSX_DEPLOYMENT_TARGET. cfgvars = get_config_vars() - macver = os.environ.get('MACOSX_DEPLOYMENT_TARGET') - if not macver: - macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') if 1: # Always calculate the release of the running machine, @@ -520,7 +518,6 @@ m = re.search( r'ProductUserVisibleVersion\s*' + r'(.*?)', f.read()) - f.close() if m is not None: macrelease = '.'.join(m.group(1).split('.')[:2]) # else: fall back to the default behaviour diff --git a/lib-python/modified-2.7/tarfile.py b/lib-python/modified-2.7/tarfile.py --- a/lib-python/modified-2.7/tarfile.py +++ b/lib-python/modified-2.7/tarfile.py @@ -2236,10 +2236,14 @@ if hasattr(os, "symlink") and hasattr(os, "link"): # For systems that support symbolic and hard links. if tarinfo.issym(): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.symlink(tarinfo.linkname, targetpath) else: # See extract(). 
if os.path.exists(tarinfo._link_target): + if os.path.lexists(targetpath): + os.unlink(targetpath) os.link(tarinfo._link_target, targetpath) else: self._extract_member(self._find_link_target(tarinfo), targetpath) diff --git a/lib-python/modified-2.7/test/pyclbr_input.py b/lib-python/modified-2.7/test/pyclbr_input.py --- a/lib-python/modified-2.7/test/pyclbr_input.py +++ b/lib-python/modified-2.7/test/pyclbr_input.py @@ -19,7 +19,7 @@ # XXX: This causes test_pyclbr.py to fail, but only because the # introspection-based is_method() code in the test can't - # distinguish between this and a geniune method function like m(). + # distinguish between this and a genuine method function like m(). # The pyclbr.py module gets this right as it parses the text. # #f = f diff --git a/lib-python/modified-2.7/test/regrtest.py b/lib-python/modified-2.7/test/regrtest.py --- a/lib-python/modified-2.7/test/regrtest.py +++ b/lib-python/modified-2.7/test/regrtest.py @@ -28,10 +28,12 @@ -W/--verbose3 -- re-run failed tests in verbose mode immediately -q/--quiet -- no output unless one or more tests fail -S/--slow -- print the slowest 10 tests + --header -- print header with interpreter info Selecting tests -r/--random -- randomize test execution order (see below) + --randseed -- pass a random seed to reproduce a previous random run -f/--fromfile -- read names of tests to run from a file (see below) -x/--exclude -- arguments are tests to *exclude* -s/--single -- single step through a set of tests (see below) @@ -227,7 +229,8 @@ exclude=False, single=False, randomize=False, fromfile=None, findleaks=False, use_resources=None, trace=False, coverdir='coverage', runleaks=False, huntrleaks=False, verbose2=False, print_slow=False, - random_seed=None, use_mp=None, verbose3=False, forever=False): + random_seed=None, use_mp=None, verbose3=False, forever=False, + header=False): """Execute a test suite. 
This also parses command-line options and modifies its behavior @@ -258,7 +261,7 @@ 'exclude', 'single', 'slow', 'random', 'fromfile', 'findleaks', 'use=', 'threshold=', 'trace', 'coverdir=', 'nocoverdir', 'runleaks', 'huntrleaks=', 'memlimit=', 'randseed=', - 'multiprocess=', 'slaveargs=', 'forever']) + 'multiprocess=', 'slaveargs=', 'forever', 'header']) except getopt.error, msg: usage(2, msg) @@ -342,6 +345,8 @@ forever = True elif o in ('-j', '--multiprocess'): use_mp = int(a) + elif o == '--header': + header = True elif o == '--slaveargs': args, kwargs = json.loads(a) try: @@ -415,13 +420,14 @@ args = [] # For a partial run, we do not need to clutter the output. - if verbose or not (quiet or single or tests or args): + if verbose or header or not (quiet or single or tests or args): # Print basic platform information print "==", platform.python_implementation(), \ " ".join(sys.version.split()) print "== ", platform.platform(aliased=True), \ "%s-endian" % sys.byteorder print "== ", os.getcwd() + print "Testing with flags:", sys.flags alltests = findtests(testdir, stdtests, nottests) selected = tests or args or alltests @@ -484,7 +490,7 @@ def tests_and_args(): for test in tests: args_tuple = ( - (test, verbose, quiet, testdir), + (test, verbose, quiet), dict(huntrleaks=huntrleaks, use_resources=use_resources) ) yield (test, args_tuple) @@ -551,16 +557,15 @@ if trace: # If we're tracing code coverage, then we don't exit with status # if on a false return value from main. 
-                tracer.runctx('runtest(test, verbose, quiet, testdir)',
+                tracer.runctx('runtest(test, verbose, quiet)',
                               globals=globals(), locals=vars())
             else:
                 try:
-                    result = runtest(test, verbose, quiet,
-                                     testdir, huntrleaks)
+                    result = runtest(test, verbose, quiet, huntrleaks)
                     accumulate_result(test, result)
                     if verbose3 and result[0] == FAILED:
                         print "Re-running test %r in verbose mode" % test
-                        runtest(test, True, quiet, testdir, huntrleaks)
+                        runtest(test, True, quiet, huntrleaks)
                 except KeyboardInterrupt:
                     interrupted = True
                     break
@@ -630,8 +635,7 @@
             sys.stdout.flush()
             try:
                 test_support.verbose = True
-                ok = runtest(test, True, quiet, testdir,
-                             huntrleaks)
+                ok = runtest(test, True, quiet, huntrleaks)
             except KeyboardInterrupt:
                 # print a newline separate from the ^C
                 print
@@ -692,14 +696,13 @@
     return stdtests + sorted(tests)
 
 def runtest(test, verbose, quiet,
-            testdir=None, huntrleaks=False, use_resources=None):
+            huntrleaks=False, use_resources=None):
     """Run a single test.
 
     test -- the name of the test
     verbose -- if true, print more messages
     quiet -- if true, don't print 'skipped' messages (probably redundant)
     test_times -- a list of (time, test_name) pairs
-    testdir -- test directory
     huntrleaks -- run multiple times to test for leaks; requires a debug
                   build; a triple corresponding to -R's three arguments
     Returns one of the test result constants:
@@ -715,8 +718,7 @@
     if use_resources is not None:
         test_support.use_resources = use_resources
     try:
-        return runtest_inner(test, verbose, quiet,
-                             testdir, huntrleaks)
+        return runtest_inner(test, verbose, quiet, huntrleaks)
     finally:
         cleanup_test_droppings(test, verbose)
 
@@ -849,8 +851,7 @@
     return False
 
 
-def runtest_inner(test, verbose, quiet,
-                  testdir=None, huntrleaks=False):
+def runtest_inner(test, verbose, quiet, huntrleaks=False):
     test_support.unload(test)
     if verbose:
         capture_stdout = None
@@ -898,16 +899,16 @@
     except KeyboardInterrupt:
         raise
     except test_support.TestFailed, msg:
-        print "test", test, "failed --", msg
-        sys.stdout.flush()
+        print >>sys.stderr, "test", test, "failed --", msg
+        sys.stderr.flush()
         return FAILED, test_time
     except:
         type, value = sys.exc_info()[:2]
-        print "test", test, "crashed --", str(type) + ":", value
-        sys.stdout.flush()
+        print >>sys.stderr, "test", test, "crashed --", str(type) + ":", value
+        sys.stderr.flush()
         if verbose:
-            traceback.print_exc(file=sys.stdout)
-            sys.stdout.flush()
+            traceback.print_exc(file=sys.stderr)
+            sys.stderr.flush()
         return FAILED, test_time
     else:
         if refleak:
@@ -1504,7 +1505,7 @@
         # is distributed with Python
         WIN_ONLY = ["test_unicode_file", "test_winreg",
                     "test_winsound", "test_startfile",
-                    "test_sqlite"]
+                    "test_sqlite", "test_msilib"]
         for skip in WIN_ONLY:
             self.expected.add(skip)
diff --git a/lib-python/modified-2.7/test/string_tests.py b/lib-python/modified-2.7/test/string_tests.py
--- a/lib-python/modified-2.7/test/string_tests.py
+++ b/lib-python/modified-2.7/test/string_tests.py
@@ -1180,6 +1180,63 @@
         # mixed use of str and unicode
         self.assertEqual('a/b/c'.rpartition(u'/'), ('a/b', '/', 'c'))
 
+    def test_none_arguments(self):
+        # issue 11828
+        s = 'hello'
+        self.checkequal(2, s, 'find', 'l', None)
+        self.checkequal(3, s, 'find', 'l', -2, None)
+        self.checkequal(2, s, 'find', 'l', None, -2)
+        self.checkequal(0, s, 'find', 'h', None, None)
+
+        self.checkequal(3, s, 'rfind', 'l', None)
+        self.checkequal(3, s, 'rfind', 'l', -2, None)
+        self.checkequal(2, s, 'rfind', 'l', None, -2)
+        self.checkequal(0, s, 'rfind', 'h', None, None)
+
+        self.checkequal(2, s, 'index', 'l', None)
+        self.checkequal(3, s, 'index', 'l', -2, None)
+        self.checkequal(2, s, 'index', 'l', None, -2)
+        self.checkequal(0, s, 'index', 'h', None, None)
+
+        self.checkequal(3, s, 'rindex', 'l', None)
+        self.checkequal(3, s, 'rindex', 'l', -2, None)
+        self.checkequal(2, s, 'rindex', 'l', None, -2)
+        self.checkequal(0, s, 'rindex', 'h', None, None)
+
+        self.checkequal(2, s, 'count', 'l', None)
+        self.checkequal(1, s, 'count', 'l', -2, None)
+        self.checkequal(1, s, 'count', 'l', None, -2)
+        self.checkequal(0, s, 'count', 'x', None, None)
+
+        self.checkequal(True, s, 'endswith', 'o', None)
+        self.checkequal(True, s, 'endswith', 'lo', -2, None)
+        self.checkequal(True, s, 'endswith', 'l', None, -2)
+        self.checkequal(False, s, 'endswith', 'x', None, None)
+
+        self.checkequal(True, s, 'startswith', 'h', None)
+        self.checkequal(True, s, 'startswith', 'l', -2, None)
+        self.checkequal(True, s, 'startswith', 'h', None, -2)
+        self.checkequal(False, s, 'startswith', 'x', None, None)
+
+    def test_find_etc_raise_correct_error_messages(self):
+        # issue 11828
+        s = 'hello'
+        x = 'x'
+        self.assertRaisesRegexp(TypeError, r'\bfind\b', s.find,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'\brfind\b', s.rfind,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'\bindex\b', s.index,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'\brindex\b', s.rindex,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'^count\(', s.count,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'^startswith\(', s.startswith,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'^endswith\(', s.endswith,
+                                x, None, None, None)
+
 
 class MixinStrStringUserStringTest:
     # Additional tests for 8bit strings, i.e. str, UserString and
     # the string module
diff --git a/lib-python/modified-2.7/test/test_argparse.py b/lib-python/modified-2.7/test/test_argparse.py
--- a/lib-python/modified-2.7/test/test_argparse.py
+++ b/lib-python/modified-2.7/test/test_argparse.py
@@ -4,6 +4,7 @@
 import inspect
 import os
 import shutil
+import stat
 import sys
 import textwrap
 import tempfile
@@ -46,15 +47,13 @@
     def tearDown(self):
         os.chdir(self.old_dir)
-        while True:
-            try:
-                shutil.rmtree(self.temp_dir)
-            except WindowsError:
-                test_support.gc_collect()
-                continue
-            else:
-                break
-
+        shutil.rmtree(self.temp_dir, True)
+
+    def create_readonly_file(self, filename):
+        file_path = os.path.join(self.temp_dir, filename)
+        with open(file_path, 'w') as file:
+            file.write(filename)
+        os.chmod(file_path, stat.S_IREAD)
 
 class Sig(object):
 
@@ -1452,17 +1451,19 @@
             file = open(os.path.join(self.temp_dir, file_name), 'w')
             file.write(file_name)
             file.close()
+        self.create_readonly_file('readonly')
 
     argument_signatures = [
         Sig('-x', type=argparse.FileType()),
         Sig('spam', type=argparse.FileType('r')),
     ]
-    failures = ['-x', '-x bar']
+    failures = ['-x', '-x bar', 'non-existent-file.txt']
     successes = [
         ('foo', NS(x=None, spam=RFile('foo'))),
         ('-x foo bar', NS(x=RFile('foo'), spam=RFile('bar'))),
         ('bar -x foo', NS(x=RFile('foo'), spam=RFile('bar'))),
         ('-x - -', NS(x=sys.stdin, spam=sys.stdin)),
+        ('readonly', NS(x=None, spam=RFile('readonly'))),
     ]
 
@@ -1511,11 +1512,16 @@
 class TestFileTypeW(TempDirMixin, ParserTestCase):
     """Test the FileType option/argument type for writing files"""
 
+    def setUp(self):
+        super(TestFileTypeW, self).setUp()
+        self.create_readonly_file('readonly')
+
     argument_signatures = [
         Sig('-x', type=argparse.FileType('w')),
         Sig('spam', type=argparse.FileType('w')),
     ]
-    failures = ['-x', '-x bar']
+    failures = ['-x', '-x bar', 'readonly']
     successes = [
         ('foo', NS(x=None, spam=WFile('foo'))),
         ('-x foo bar', NS(x=WFile('foo'), spam=WFile('bar'))),
@@ -2499,6 +2505,46 @@
     '''
 
+class TestMutuallyExclusiveInGroup(MEMixin, TestCase):
+
+    def get_parser(self, required=None):
+        parser = ErrorRaisingArgumentParser(prog='PROG')
+        titled_group = parser.add_argument_group(
+            title='Titled group', description='Group description')
+        mutex_group = \
+            titled_group.add_mutually_exclusive_group(required=required)
+        mutex_group.add_argument('--bar', help='bar help')
+        mutex_group.add_argument('--baz', help='baz help')
+        return parser
+
+    failures = ['--bar X --baz Y', '--baz X --bar Y']
+    successes = [
+        ('--bar X', NS(bar='X', baz=None)),
+        ('--baz Y', NS(bar=None, baz='Y')),
+    ]
+    successes_when_not_required = [
+        ('', NS(bar=None, baz=None)),
+    ]
+
+    usage_when_not_required = '''\
+        usage: PROG [-h] [--bar BAR | --baz BAZ]
+        '''
+    usage_when_required = '''\
+        usage: PROG [-h] (--bar BAR | --baz BAZ)
+        '''
+    help = '''\
+
+        optional arguments:
+          -h, --help  show this help message and exit
+
+        Titled group:
+          Group description
+
+          --bar BAR   bar help
+          --baz BAZ   baz help
+        '''
+
 
 class TestMutuallyExclusiveOptionalsAndPositionalsMixed(MEMixin, TestCase):
 
     def get_parser(self, required):
@@ -2756,16 +2802,22 @@
         parser = argparse.ArgumentParser(
             *tester.parser_signature.args,
             **tester.parser_signature.kwargs)
-        for argument_sig in tester.argument_signatures:
+        for argument_sig in getattr(tester, 'argument_signatures', []):
             parser.add_argument(*argument_sig.args,
                                 **argument_sig.kwargs)
-        group_signatures = tester.argument_group_signatures
-        for group_sig, argument_sigs in group_signatures:
+        group_sigs = getattr(tester, 'argument_group_signatures', [])
+        for group_sig, argument_sigs in group_sigs:
             group = parser.add_argument_group(*group_sig.args,
                                               **group_sig.kwargs)
             for argument_sig in argument_sigs:
                 group.add_argument(*argument_sig.args,
                                    **argument_sig.kwargs)
+        subparsers_sigs = getattr(tester, 'subparsers_signatures', [])
+        if subparsers_sigs:
+            subparsers = parser.add_subparsers()
+            for subparser_sig in subparsers_sigs:
+                subparsers.add_parser(*subparser_sig.args,
+                                      **subparser_sig.kwargs)
         return parser
 
     def _test(self, tester, parser_text):
@@ -3859,6 +3911,77 @@
     '''
     version = ''
 
+class TestHelpSubparsersOrdering(HelpTestCase):
+    """Test ordering of subcommands in help matches the code"""
+    parser_signature = Sig(prog='PROG',
+                           description='display some subcommands',
+                           version='0.1')
+
+    subparsers_signatures = [Sig(name=name)
+                             for name in ('a', 'b', 'c', 'd', 'e')]
+
+    usage = '''\
+        usage: PROG [-h] [-v] {a,b,c,d,e} ...
+        '''
+
+    help = usage + '''\
+
+        display some subcommands
+
+        positional arguments:
+          {a,b,c,d,e}
+
+        optional arguments:
+          -h, --help     show this help message and exit
+          -v, --version  show program's version number and exit
+        '''
+
+    version = '''\
+        0.1
+        '''
+
+class TestHelpSubparsersWithHelpOrdering(HelpTestCase):
+    """Test ordering of subcommands in help matches the code"""
+    parser_signature = Sig(prog='PROG',
+                           description='display some subcommands',
+                           version='0.1')
+
+    subcommand_data = (('a', 'a subcommand help'),
+                       ('b', 'b subcommand help'),
+                       ('c', 'c subcommand help'),
+                       ('d', 'd subcommand help'),
+                       ('e', 'e subcommand help'),
+                       )
+
+    subparsers_signatures = [Sig(name=name, help=help)
+                             for name, help in subcommand_data]
+
+    usage = '''\
+        usage: PROG [-h] [-v] {a,b,c,d,e} ...
+        '''
+
+    help = usage + '''\
+
+        display some subcommands
+
+        positional arguments:
+          {a,b,c,d,e}
+            a            a subcommand help
+            b            b subcommand help
+            c            c subcommand help
+            d            d subcommand help
+            e            e subcommand help
+
+        optional arguments:
+          -h, --help     show this help message and exit
+          -v, --version  show program's version number and exit
+        '''
+
+    version = '''\
+        0.1
+        '''
+
+
 # =====================================
 # Optional/Positional constructor tests
 # =====================================
@@ -3893,10 +4016,12 @@
     def test_invalid_type(self):
         self.assertValueError('--foo', type='int')
+        self.assertValueError('--foo', type=(int, float))
 
     def test_invalid_action(self):
         self.assertValueError('-x', action='foo')
         self.assertValueError('foo', action='baz')
+        self.assertValueError('--foo', action=('store', 'append'))
         parser = argparse.ArgumentParser()
         try:
             parser.add_argument("--foo", action="store-true")
@@ -4271,7 +4396,7 @@
 # ArgumentTypeError tests
 # =======================
 
-class TestArgumentError(TestCase):
+class TestArgumentTypeError(TestCase):
 
     def test_argument_type_error(self):
 
@@ -4313,6 +4438,177 @@
         self.assertEqual(NS(v=3, spam=True, badger="B"), args)
         self.assertEqual(["C", "--foo", "4"], extras)
 
+# ==========================
+# add_argument metavar tests
+# ==========================
+
+class TestAddArgumentMetavar(TestCase):
+
+    EXPECTED_MESSAGE = "length of metavar tuple does not match nargs"
+
+    def do_test_no_exception(self, nargs, metavar):
+        parser = argparse.ArgumentParser()
+        parser.add_argument("--foo", nargs=nargs, metavar=metavar)
+
+    def do_test_exception(self, nargs, metavar):
+        parser = argparse.ArgumentParser()
+        with self.assertRaises(ValueError) as cm:
+            parser.add_argument("--foo", nargs=nargs, metavar=metavar)
+        self.assertEqual(cm.exception.args[0], self.EXPECTED_MESSAGE)
+
+    # Unit tests for different values of metavar when nargs=None
+
+    def test_nargs_None_metavar_string(self):
+        self.do_test_no_exception(nargs=None, metavar="1")
+
+    def test_nargs_None_metavar_length0(self):
+        self.do_test_exception(nargs=None, metavar=tuple())
+
+    def test_nargs_None_metavar_length1(self):
+        self.do_test_no_exception(nargs=None, metavar=("1"))
+
+    def test_nargs_None_metavar_length2(self):
+        self.do_test_exception(nargs=None, metavar=("1", "2"))
+
+    def test_nargs_None_metavar_length3(self):
+        self.do_test_exception(nargs=None, metavar=("1", "2", "3"))
+
+    # Unit tests for different values of metavar when nargs=?
+
+    def test_nargs_optional_metavar_string(self):
+        self.do_test_no_exception(nargs="?", metavar="1")
+
+    def test_nargs_optional_metavar_length0(self):
+        self.do_test_exception(nargs="?", metavar=tuple())
+
+    def test_nargs_optional_metavar_length1(self):
+        self.do_test_no_exception(nargs="?", metavar=("1"))
+
+    def test_nargs_optional_metavar_length2(self):
+        self.do_test_exception(nargs="?", metavar=("1", "2"))
+
+    def test_nargs_optional_metavar_length3(self):
+        self.do_test_exception(nargs="?", metavar=("1", "2", "3"))
+
+    # Unit tests for different values of metavar when nargs=*
+
+    def test_nargs_zeroormore_metavar_string(self):
+        self.do_test_no_exception(nargs="*", metavar="1")
+
+    def test_nargs_zeroormore_metavar_length0(self):
+        self.do_test_exception(nargs="*", metavar=tuple())
+
+    def test_nargs_zeroormore_metavar_length1(self):
+        self.do_test_no_exception(nargs="*", metavar=("1"))
+
+    def test_nargs_zeroormore_metavar_length2(self):
+        self.do_test_no_exception(nargs="*", metavar=("1", "2"))
+
+    def test_nargs_zeroormore_metavar_length3(self):
+        self.do_test_exception(nargs="*", metavar=("1", "2", "3"))
+
+    # Unit tests for different values of metavar when nargs=+
+
+    def test_nargs_oneormore_metavar_string(self):
+        self.do_test_no_exception(nargs="+", metavar="1")
+
+    def test_nargs_oneormore_metavar_length0(self):
+        self.do_test_exception(nargs="+", metavar=tuple())
+
+    def test_nargs_oneormore_metavar_length1(self):
+        self.do_test_no_exception(nargs="+", metavar=("1"))
+
+    def test_nargs_oneormore_metavar_length2(self):
+        self.do_test_no_exception(nargs="+", metavar=("1", "2"))
+
+    def test_nargs_oneormore_metavar_length3(self):
+        self.do_test_exception(nargs="+", metavar=("1", "2", "3"))
+
+    # Unit tests for different values of metavar when nargs=...
+
+    def test_nargs_remainder_metavar_string(self):
+        self.do_test_no_exception(nargs="...", metavar="1")
+
+    def test_nargs_remainder_metavar_length0(self):
+        self.do_test_no_exception(nargs="...", metavar=tuple())
+
+    def test_nargs_remainder_metavar_length1(self):
+        self.do_test_no_exception(nargs="...", metavar=("1"))
+
+    def test_nargs_remainder_metavar_length2(self):
+        self.do_test_no_exception(nargs="...", metavar=("1", "2"))
+
+    def test_nargs_remainder_metavar_length3(self):
+        self.do_test_no_exception(nargs="...", metavar=("1", "2", "3"))
+
+    # Unit tests for different values of metavar when nargs=A...
+
+    def test_nargs_parser_metavar_string(self):
+        self.do_test_no_exception(nargs="A...", metavar="1")
+
+    def test_nargs_parser_metavar_length0(self):
+        self.do_test_exception(nargs="A...", metavar=tuple())
+
+    def test_nargs_parser_metavar_length1(self):
+        self.do_test_no_exception(nargs="A...", metavar=("1"))
+
+    def test_nargs_parser_metavar_length2(self):
+        self.do_test_exception(nargs="A...", metavar=("1", "2"))
+
+    def test_nargs_parser_metavar_length3(self):
+        self.do_test_exception(nargs="A...", metavar=("1", "2", "3"))
+
+    # Unit tests for different values of metavar when nargs=1
+
+    def test_nargs_1_metavar_string(self):
+        self.do_test_no_exception(nargs=1, metavar="1")
+
+    def test_nargs_1_metavar_length0(self):
+        self.do_test_exception(nargs=1, metavar=tuple())
+
+    def test_nargs_1_metavar_length1(self):
+        self.do_test_no_exception(nargs=1, metavar=("1"))
+
+    def test_nargs_1_metavar_length2(self):
+        self.do_test_exception(nargs=1, metavar=("1", "2"))
+
+    def test_nargs_1_metavar_length3(self):
+        self.do_test_exception(nargs=1, metavar=("1", "2", "3"))
+
+    # Unit tests for different values of metavar when nargs=2
+
+    def test_nargs_2_metavar_string(self):
+        self.do_test_no_exception(nargs=2, metavar="1")
+
+    def test_nargs_2_metavar_length0(self):
+        self.do_test_exception(nargs=2, metavar=tuple())
+
+    def test_nargs_2_metavar_length1(self):
+        self.do_test_no_exception(nargs=2, metavar=("1"))
+
+    def test_nargs_2_metavar_length2(self):
+        self.do_test_no_exception(nargs=2, metavar=("1", "2"))
+
+    def test_nargs_2_metavar_length3(self):
+        self.do_test_exception(nargs=2, metavar=("1", "2", "3"))
+
+    # Unit tests for different values of metavar when nargs=3
+
+    def test_nargs_3_metavar_string(self):
+        self.do_test_no_exception(nargs=3, metavar="1")
+
+    def test_nargs_3_metavar_length0(self):
+        self.do_test_exception(nargs=3, metavar=tuple())
+
+    def test_nargs_3_metavar_length1(self):
+        self.do_test_no_exception(nargs=3, metavar=("1"))
+
+    def test_nargs_3_metavar_length2(self):
+        self.do_test_exception(nargs=3, metavar=("1", "2"))
+
+    def test_nargs_3_metavar_length3(self):
+        self.do_test_no_exception(nargs=3, metavar=("1", "2", "3"))
+
 # ============================
 # from argparse import * tests
 # ============================
diff --git a/lib-python/modified-2.7/test/test_ast.py b/lib-python/modified-2.7/test/test_ast.py
--- a/lib-python/modified-2.7/test/test_ast.py
+++ b/lib-python/modified-2.7/test/test_ast.py
@@ -466,6 +466,14 @@
             'op=Add(), right=Num(n=1, lineno=4, col_offset=4), lineno=4, '
             'col_offset=0))'
         )
+        # issue10869: do not increment lineno of root twice
+        src = ast.parse('1 + 1', mode='eval')
+        self.assertEqual(ast.increment_lineno(src.body, n=3), src.body)
+        self.assertEqual(ast.dump(src, include_attributes=True),
+            'Expression(body=BinOp(left=Num(n=1, lineno=4, col_offset=0), '
+            'op=Add(), right=Num(n=1, lineno=4, col_offset=4), lineno=4, '
+            'col_offset=0))'
+        )
 
     def test_iter_fields(self):
         node = ast.parse('foo()', mode='eval')
diff --git a/lib-python/modified-2.7/test/test_asyncore.py b/lib-python/modified-2.7/test/test_asyncore.py
--- a/lib-python/modified-2.7/test/test_asyncore.py
+++ b/lib-python/modified-2.7/test/test_asyncore.py
@@ -330,7 +330,7 @@
         if hasattr(os, 'strerror'):
             self.assertEqual(err, os.strerror(errno.EPERM))
         err = asyncore._strerror(-1)
-        self.assertIn("unknown error", err.lower())
+        self.assertTrue(err != "")
 
 class dispatcherwithsend_noread(asyncore.dispatcher_with_send):
@@ -398,8 +398,6 @@
 class FileWrapperTest(unittest.TestCase):
     def setUp(self):
         self.d = "It's not dead, it's sleeping!"
-        # Fixed in CPython 2.7.2 (release27-maint branch
-        # revision 88046.)
         with file(TESTFN, 'w') as h:
             h.write(self.d)
diff --git a/lib-python/modified-2.7/test/test_builtin.py b/lib-python/modified-2.7/test/test_builtin.py
--- a/lib-python/modified-2.7/test/test_builtin.py
+++ b/lib-python/modified-2.7/test/test_builtin.py
@@ -701,7 +701,7 @@
         # provide too much opportunity for insane things to happen.
         # We don't want them in the interned dict and if they aren't
         # actually interned, we don't want to create the appearance
-        # that they are by allowing intern() to succeeed.
+        # that they are by allowing intern() to succeed.
         class S(str):
             def __hash__(self):
                 return 123
diff --git a/lib-python/modified-2.7/test/test_bytes.py b/lib-python/modified-2.7/test/test_bytes.py
--- a/lib-python/modified-2.7/test/test_bytes.py
+++ b/lib-python/modified-2.7/test/test_bytes.py
@@ -456,6 +456,68 @@
         self.assertEqual([ord(b[i:i+1]) for i in range(len(b))],
                          [0, 65, 127, 128, 255])
 
+    def test_none_arguments(self):
+        # issue 11828
+        b = self.type2test(b'hello')
+        l = self.type2test(b'l')
+        h = self.type2test(b'h')
+        x = self.type2test(b'x')
+        o = self.type2test(b'o')
+
+        self.assertEqual(2, b.find(l, None))
+        self.assertEqual(3, b.find(l, -2, None))
+        self.assertEqual(2, b.find(l, None, -2))
+        self.assertEqual(0, b.find(h, None, None))
+
+        self.assertEqual(3, b.rfind(l, None))
+        self.assertEqual(3, b.rfind(l, -2, None))
+        self.assertEqual(2, b.rfind(l, None, -2))
+        self.assertEqual(0, b.rfind(h, None, None))
+
+        self.assertEqual(2, b.index(l, None))
+        self.assertEqual(3, b.index(l, -2, None))
+        self.assertEqual(2, b.index(l, None, -2))
+        self.assertEqual(0, b.index(h, None, None))
+
+        self.assertEqual(3, b.rindex(l, None))
+        self.assertEqual(3, b.rindex(l, -2, None))
+        self.assertEqual(2, b.rindex(l, None, -2))
+        self.assertEqual(0, b.rindex(h, None, None))
+
+        self.assertEqual(2, b.count(l, None))
+        self.assertEqual(1, b.count(l, -2, None))
+        self.assertEqual(1, b.count(l, None, -2))
+        self.assertEqual(0, b.count(x, None, None))
+
+        self.assertEqual(True, b.endswith(o, None))
+        self.assertEqual(True, b.endswith(o, -2, None))
+        self.assertEqual(True, b.endswith(l, None, -2))
+        self.assertEqual(False, b.endswith(x, None, None))
+
+        self.assertEqual(True, b.startswith(h, None))
+        self.assertEqual(True, b.startswith(l, -2, None))
+        self.assertEqual(True, b.startswith(h, None, -2))
+        self.assertEqual(False, b.startswith(x, None, None))
+
+    def test_find_etc_raise_correct_error_messages(self):
+        # issue 11828
+        b = self.type2test(b'hello')
+        x = self.type2test(b'x')
+        self.assertRaisesRegexp(TypeError, r'\bfind\b', b.find,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'\brfind\b', b.rfind,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'\bindex\b', b.index,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'\brindex\b', b.rindex,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'\bcount\b', b.count,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'\bstartswith\b', b.startswith,
+                                x, None, None, None)
+        self.assertRaisesRegexp(TypeError, r'\bendswith\b', b.endswith,
+                                x, None, None, None)
+
 
 class ByteArrayTest(BaseBytesTest):
     type2test = bytearray
@@ -696,7 +758,7 @@
         self.assertEqual(b.pop(0), ord('w'))
         self.assertEqual(b.pop(-2), ord('r'))
         self.assertRaises(IndexError, lambda: b.pop(10))
-        self.assertRaises(OverflowError, lambda: bytearray().pop())
+        self.assertRaises(IndexError, lambda: bytearray().pop())
         # test for issue #6846
         self.assertEqual(bytearray(b'\xff').pop(), 0xff)
diff --git a/lib-python/modified-2.7/test/test_bz2.py b/lib-python/modified-2.7/test/test_bz2.py
--- a/lib-python/modified-2.7/test/test_bz2.py
+++ b/lib-python/modified-2.7/test/test_bz2.py
@@ -86,7 +86,7 @@
             if not str:
                 break
             text += str
-        self.assertEqual(text, text)
+        self.assertEqual(text, self.TEXT)
 
     def testRead100(self):
         # "Test BZ2File.read(100)"
diff --git a/lib-python/modified-2.7/test/test_compile.py b/lib-python/modified-2.7/test/test_compile.py
--- a/lib-python/modified-2.7/test/test_compile.py
+++ b/lib-python/modified-2.7/test/test_compile.py
@@ -254,7 +254,7 @@
             self.assertEqual(eval("-" + all_one_bits), -18446744073709551615L)
         else:
            self.fail("How many bits *does* this machine have???")
-        # Verify treatment of contant folding on -(sys.maxint+1)
+        # Verify treatment of constant folding on -(sys.maxint+1)
         # i.e. -2147483648 on 32 bit platforms.  Should return int, not long.
         self.assertIsInstance(eval("%s" % (-sys.maxint - 1)), int)
         self.assertIsInstance(eval("%s" % (-sys.maxint - 2)), long)
diff --git a/lib-python/modified-2.7/test/test_csv.py b/lib-python/modified-2.7/test/test_csv.py
--- a/lib-python/modified-2.7/test/test_csv.py
+++ b/lib-python/modified-2.7/test/test_csv.py
@@ -328,22 +328,17 @@
         expected_dialects = csv.list_dialects() + [name]
         expected_dialects.sort()
         csv.register_dialect(name, myexceltsv)
-        try:
-            self.assertTrue(csv.get_dialect(name).delimiter, '\t')
-            got_dialects = csv.list_dialects()
-            got_dialects.sort()
-            self.assertEqual(expected_dialects, got_dialects)
-        finally:
-            csv.unregister_dialect(name)
+        self.addCleanup(csv.unregister_dialect, name)
+        self.assertEqual(csv.get_dialect(name).delimiter, '\t')
+        got_dialects = sorted(csv.list_dialects())
+        self.assertEqual(expected_dialects, got_dialects)
 
     def test_register_kwargs(self):
         name = 'fedcba'
         csv.register_dialect(name, delimiter=';')
-        try:
-            self.assertTrue(csv.get_dialect(name).delimiter, '\t')
-            self.assertTrue(list(csv.reader('X;Y;Z', name)), ['X', 'Y', 'Z'])
-        finally:
-            csv.unregister_dialect(name)
+        self.addCleanup(csv.unregister_dialect, name)
+        self.assertEqual(csv.get_dialect(name).delimiter, ';')
+        self.assertEqual([['X', 'Y', 'Z']], list(csv.reader(['X;Y;Z'], name)))
 
     def test_incomplete_dialect(self):
         class myexceltsv(csv.Dialect):
diff --git a/lib-python/modified-2.7/test/test_deque.py b/lib-python/modified-2.7/test/test_deque.py
--- a/lib-python/modified-2.7/test/test_deque.py
+++ b/lib-python/modified-2.7/test/test_deque.py
@@ -137,6 +137,15 @@
             m.d = d
             self.assertRaises(RuntimeError, d.count, 3)
 
+        # test issue11004
+        # block advance failed after rotation aligned elements on right side of block
+        d = deque([None]*16)
+        for i in range(len(d)):
+            d.rotate(-1)
+        d.rotate(1)
+        self.assertEqual(d.count(1), 0)
+        self.assertEqual(d.count(None), 16)
+
     def test_comparisons(self):
         d = deque('xabc'); d.popleft()
         for e in [d, deque('abc'), deque('ab'), deque(), list(d)]:
diff --git a/lib-python/modified-2.7/test/test_descr.py b/lib-python/modified-2.7/test/test_descr.py
--- a/lib-python/modified-2.7/test/test_descr.py
+++ b/lib-python/modified-2.7/test/test_descr.py
@@ -877,7 +877,7 @@
     # see "A Monotonic Superclass Linearization for Dylan",
     # by Kim Barrett et al. (OOPSLA 1996)
     def test_consistency_with_epg(self):
-        # Testing consistentcy with EPG...
+        # Testing consistency with EPG...
         class Pane(object): pass
         class ScrollingMixin(object): pass
         class EditingMixin(object): pass
@@ -1719,6 +1719,7 @@
             ("__exit__", run_context, swallow, set(), {"__enter__" : iden}),
             ("__complex__", complex, complex_num, set(), {}),
             ("__format__", format, format_impl, set(), {}),
+            ("__dir__", dir, empty_seq, set(), {}),
             ]
 
         class Checker(object):
@@ -4299,7 +4300,7 @@
         except TypeError:
             pass
         else:
-            self.fail("Carlo Verre __setattr__ suceeded!")
+            self.fail("Carlo Verre __setattr__ succeeded!")
         try:
             object.__delattr__(str, "lower")
         except TypeError:
@@ -4567,6 +4568,33 @@
         self.assertRaises(AttributeError, getattr, EvilGetattribute(), "attr")
 
+    def test_abstractmethods(self):
+        # type pretends not to have __abstractmethods__.
+        self.assertRaises(AttributeError, getattr, type, "__abstractmethods__")
+        class meta(type):
+            pass
+        self.assertRaises(AttributeError, getattr, meta, "__abstractmethods__")
+        class X(object):
+            pass
+        with self.assertRaises(AttributeError):
+            del X.__abstractmethods__
+
+    def test_proxy_call(self):
+        class FakeStr(object):
+            __class__ = str
+
+        fake_str = FakeStr()
+        # isinstance() reads __class__ on new style classes
+        self.assertTrue(isinstance(fake_str, str))
+
+        # call a method descriptor
+        with self.assertRaises(TypeError):
+            str.split(fake_str)
+
+        # call a slot wrapper descriptor
+        with self.assertRaises(TypeError):
+            str.__add__(fake_str, "abc")
+
 class DictProxyTests(unittest.TestCase):
     def setUp(self):
diff --git a/lib-python/modified-2.7/test/test_doctest.py b/lib-python/modified-2.7/test/test_doctest.py
--- a/lib-python/modified-2.7/test/test_doctest.py
+++ b/lib-python/modified-2.7/test/test_doctest.py
@@ -1292,7 +1292,7 @@
     ?     +  ++    ^
     TestResults(failed=1, attempted=1)
 
-The REPORT_ONLY_FIRST_FAILURE supresses result output after the first
+The REPORT_ONLY_FIRST_FAILURE suppresses result output after the first
 failing example:
 
     >>> def f(x):
@@ -1322,7 +1322,7 @@
     2
     TestResults(failed=3, attempted=5)
 
-However, output from `report_start` is not supressed:
+However, output from `report_start` is not suppressed:
 
     >>> doctest.DocTestRunner(verbose=True, optionflags=flags).run(test)
     ... # doctest: +ELLIPSIS
@@ -2334,7 +2334,7 @@
     TestResults(failed=0, attempted=2)
     >>> doctest.master = None  # Reset master.
 
-Verbosity can be increased with the optional `verbose` paremter:
+Verbosity can be increased with the optional `verbose` parameter:
 
     >>> doctest.testfile('test_doctest.txt', globs=globs, verbose=True)
     Trying:
@@ -2371,7 +2371,7 @@
     TestResults(failed=1, attempted=2)
     >>> doctest.master = None  # Reset master.
-The summary report may be supressed with the optional `report` +The summary report may be suppressed with the optional `report` parameter: >>> doctest.testfile('test_doctest.txt', report=False) diff --git a/lib-python/modified-2.7/test/test_extcall.py b/lib-python/modified-2.7/test/test_extcall.py --- a/lib-python/modified-2.7/test/test_extcall.py +++ b/lib-python/modified-2.7/test/test_extcall.py @@ -234,7 +234,7 @@ TypeError: unbound method method() must be called with Foo instance as \ first argument (got int instance instead) -A PyCFunction that takes only positional parameters shoud allow an +A PyCFunction that takes only positional parameters should allow an empty keyword dictionary to pass without a complaint, but raise a TypeError if te dictionary is not empty diff --git a/lib-python/modified-2.7/test/test_functools.py b/lib-python/modified-2.7/test/test_functools.py --- a/lib-python/modified-2.7/test/test_functools.py +++ b/lib-python/modified-2.7/test/test_functools.py @@ -364,6 +364,8 @@ self.value = value def __lt__(self, other): return self.value < other.value + def __eq__(self, other): + return self.value == other.value self.assertTrue(A(1) < A(2)) self.assertTrue(A(2) > A(1)) self.assertTrue(A(1) <= A(2)) @@ -378,6 +380,8 @@ self.value = value def __le__(self, other): return self.value <= other.value + def __eq__(self, other): + return self.value == other.value self.assertTrue(A(1) < A(2)) self.assertTrue(A(2) > A(1)) self.assertTrue(A(1) <= A(2)) @@ -392,6 +396,8 @@ self.value = value def __gt__(self, other): return self.value > other.value + def __eq__(self, other): + return self.value == other.value self.assertTrue(A(1) < A(2)) self.assertTrue(A(2) > A(1)) self.assertTrue(A(1) <= A(2)) @@ -406,6 +412,8 @@ self.value = value def __ge__(self, other): return self.value >= other.value + def __eq__(self, other): + return self.value == other.value self.assertTrue(A(1) < A(2)) self.assertTrue(A(2) > A(1)) self.assertTrue(A(1) <= A(2)) @@ -431,6 +439,22 
@@ class A: pass + def test_bug_10042(self): + @functools.total_ordering + class TestTO: + def __init__(self, value): + self.value = value + def __eq__(self, other): + if isinstance(other, TestTO): + return self.value == other.value + return False + def __lt__(self, other): + if isinstance(other, TestTO): + return self.value < other.value + raise TypeError + with self.assertRaises(TypeError): + TestTO(8) <= () + def test_main(verbose=None): test_classes = ( TestPartial, diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py --- a/lib-python/modified-2.7/test/test_heapq.py +++ b/lib-python/modified-2.7/test/test_heapq.py @@ -1,18 +1,31 @@ """Unittests for heapq.""" +import sys import random -import unittest + from test import test_support -import sys +from unittest import TestCase, skipUnless -# We do a bit of trickery here to be able to test both the C implementation -# and the Python implementation of the module. -import heapq as c_heapq -if '_heapq' not in sys.modules: - c_heapq = None # we don't have a _heapq module to test at all py_heapq = test_support.import_fresh_module('heapq', blocked=['_heapq']) +c_heapq = test_support.import_fresh_module('heapq', fresh=['_heapq']) -class TestHeap(unittest.TestCase): +# _heapq.nlargest/nsmallest are saved in heapq._nlargest/_smallest when +# _heapq is imported, so check them there +func_names = ['heapify', 'heappop', 'heappush', 'heappushpop', + 'heapreplace', '_nlargest', '_nsmallest'] + +class TestModules(TestCase): + def test_py_functions(self): + for fname in func_names: + self.assertEqual(getattr(py_heapq, fname).__module__, 'heapq') + + @skipUnless(c_heapq, 'requires _heapq') + def test_c_functions(self): + for fname in func_names: + self.assertEqual(getattr(c_heapq, fname).__module__, '_heapq') + + +class TestHeap(TestCase): module = None def test_push_pop(self): @@ -177,26 +190,8 @@ self.assertEqual(self.module.nlargest(n, data, key=f), sorted(data, key=f, 
reverse=True)[:n]) -class TestHeapPython(TestHeap): - module = py_heapq - - # As an early adopter, we sanity check the - # test_support.import_fresh_module utility function - def test_pure_python(self): - self.assertFalse(sys.modules['heapq'] is self.module) - self.assertTrue(hasattr(self.module.heapify, 'func_code')) - - def test_islice_protection(self): - m = self.module - self.assertFalse(m.nsmallest(-1, [1])) - self.assertFalse(m.nlargest(-1, [1])) - - -class TestHeapC(TestHeap): - module = c_heapq - def test_comparison_operator(self): - # Issue 3501: Make sure heapq works with both __lt__ and __le__ + # Issue 3051: Make sure heapq works with both __lt__ and __le__ def hsort(data, comp): data = map(comp, data) self.module.heapify(data) @@ -216,11 +211,19 @@ self.assertEqual(hsort(data, LT), target) self.assertEqual(hsort(data, LE), target) - # As an early adopter, we sanity check the - # test_support.import_fresh_module utility function - def test_accelerated(self): - self.assertTrue(sys.modules['heapq'] is self.module) - self.assertFalse(hasattr(self.module.heapify, 'func_code')) + +class TestHeapPython(TestHeap): + module = py_heapq + + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + + +@skipUnless(c_heapq, 'requires _heapq') +class TestHeapC(TestHeap): + module = c_heapq #============================================================================== @@ -317,22 +320,21 @@ 'Test multiple tiers of iterators' return chain(imap(lambda x:x, R(Ig(G(seqn))))) -class TestErrorHandling(unittest.TestCase): - # only for C implementation - module = c_heapq +class TestErrorHandling(TestCase): + module = None def test_non_sequence(self): for f in (self.module.heapify, self.module.heappop): - self.assertRaises(TypeError, f, 10) + self.assertRaises((TypeError, AttributeError), f, 10) for f in (self.module.heappush, self.module.heapreplace, self.module.nlargest, self.module.nsmallest): -
self.assertRaises(TypeError, f, 10, 10) + self.assertRaises((TypeError, AttributeError), f, 10, 10) def test_len_only(self): for f in (self.module.heapify, self.module.heappop): - self.assertRaises(TypeError, f, LenOnly()) + self.assertRaises((TypeError, AttributeError), f, LenOnly()) for f in (self.module.heappush, self.module.heapreplace): - self.assertRaises(TypeError, f, LenOnly(), 10) + self.assertRaises((TypeError, AttributeError), f, LenOnly(), 10) for f in (self.module.nlargest, self.module.nsmallest): self.assertRaises(TypeError, f, 2, LenOnly()) @@ -349,7 +351,7 @@ for f in (self.module.heapify, self.module.heappop, self.module.heappush, self.module.heapreplace, self.module.nlargest, self.module.nsmallest): - self.assertRaises(TypeError, f, 10) + self.assertRaises((TypeError, AttributeError), f, 10) def test_iterable_args(self): for f in (self.module.nlargest, self.module.nsmallest): @@ -365,13 +367,21 @@ self.assertRaises(ZeroDivisionError, f, 2, E(s)) +class TestErrorHandlingPython(TestErrorHandling): + module = py_heapq + + +@skipUnless(c_heapq, 'requires _heapq') +class TestErrorHandlingC(TestErrorHandling): + module = c_heapq + + #============================================================================== def test_main(verbose=None): - test_classes = [TestHeapPython] - if c_heapq is not None: - test_classes += [TestHeapC, TestErrorHandling] + test_classes = [TestModules, TestHeapPython, TestHeapC, + TestErrorHandlingPython, TestErrorHandlingC] test_support.run_unittest(*test_classes) # verify reference counting diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py --- a/lib-python/modified-2.7/test/test_import.py +++ b/lib-python/modified-2.7/test/test_import.py @@ -9,7 +9,8 @@ from test.test_support import (unlink, TESTFN, unload, run_unittest, rmtree, is_jython, check_warnings, EnvironmentVarGuard, impl_detail, check_impl_detail) - +import textwrap +from test import script_helper def
remove_files(name): for f in (name + os.extsep + "py", @@ -64,7 +65,6 @@ except ImportError, err: self.fail("import from %s failed: %s" % (ext, err)) else: - # XXX importing .pyw is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, @@ -259,6 +259,17 @@ self.assertEqual("Import by filename is not supported.", c.exception.args[0]) + def test_import_in_del_does_not_crash(self): + # Issue 4236 + testfn = script_helper.make_script('', TESTFN, textwrap.dedent("""\ + import sys + class C: + def __del__(self): + import imp + sys.argv.insert(0, C()) + """)) + script_helper.assert_python_ok(testfn) + class PycRewritingTests(unittest.TestCase): # Test that the `co_filename` attribute on code objects always points diff --git a/lib-python/modified-2.7/test/test_inspect.py b/lib-python/modified-2.7/test/test_inspect.py --- a/lib-python/modified-2.7/test/test_inspect.py +++ b/lib-python/modified-2.7/test/test_inspect.py @@ -634,6 +634,16 @@ self.assertEqualCallArgs(f, '2, c=4, **{u"b":3}') self.assertEqualCallArgs(f, 'b=2, **{u"a":3, u"c":4}') + def test_varkw_only(self): + # issue11256: + f = self.makeCallable('**c') + self.assertEqualCallArgs(f, '') + self.assertEqualCallArgs(f, 'a=1') + self.assertEqualCallArgs(f, 'a=1, b=2') + self.assertEqualCallArgs(f, 'c=3, **{"a": 1, "b": 2}') + self.assertEqualCallArgs(f, '**UserDict(a=1, b=2)') + self.assertEqualCallArgs(f, 'c=3, **UserDict(a=1, b=2)') + def test_tupleargs(self): f = self.makeCallable('(b,c), (d,(e,f))=(0,[1,2])') self.assertEqualCallArgs(f, '(2,3)') @@ -695,6 +705,10 @@ self.assertEqualException(f, '1') self.assertEqualException(f, '[1]') self.assertEqualException(f, '(1,2,3)') + # issue11256: + f3 = self.makeCallable('**c') + self.assertEqualException(f3, '1, 2') + self.assertEqualException(f3, '1, 2, a=1, b=2') class TestGetcallargsMethods(TestGetcallargsFunctions): diff --git a/lib-python/modified-2.7/test/test_io.py 
b/lib-python/modified-2.7/test/test_io.py --- a/lib-python/modified-2.7/test/test_io.py +++ b/lib-python/modified-2.7/test/test_io.py @@ -701,6 +701,13 @@ b.close() self.assertRaises(ValueError, b.flush) + def test_readonly_attributes(self): + raw = self.MockRawIO() + buf = self.tp(raw) + x = self.MockRawIO() + with self.assertRaises((AttributeError, TypeError)): + buf.raw = x + class BufferedReaderTest(unittest.TestCase, CommonBufferedTests): read_mode = "rb" @@ -791,14 +798,17 @@ # Inject some None's in there to simulate EWOULDBLOCK rawio = self.MockRawIO((b"abc", b"d", None, b"efg", None, None, None)) bufio = self.tp(rawio) - self.assertEqual(b"abcd", bufio.read(6)) self.assertEqual(b"e", bufio.read(1)) self.assertEqual(b"fg", bufio.read()) self.assertEqual(b"", bufio.peek(1)) - self.assertTrue(None is bufio.read()) + self.assertIsNone(bufio.read()) self.assertEqual(b"", bufio.read()) + rawio = self.MockRawIO((b"a", None, None)) + self.assertEqual(b"a", rawio.readall()) + self.assertIsNone(rawio.readall()) + def test_read_past_eof(self): rawio = self.MockRawIO((b"abc", b"d", b"efg")) bufio = self.tp(rawio) @@ -1455,6 +1465,32 @@ self.assertEqual(s, b"A" + b"B" * overwrite_size + b"A" * (9 - overwrite_size)) + def test_write_rewind_write(self): + # Various combinations of reading / writing / seeking backwards / writing again + def mutate(bufio, pos1, pos2): + assert pos2 >= pos1 + # Fill the buffer + bufio.seek(pos1) + bufio.read(pos2 - pos1) + bufio.write(b'\x02') + # This writes earlier than the previous write, but still inside + # the buffer. 
+ bufio.seek(pos1) + bufio.write(b'\x01') + + b = b"\x80\x81\x82\x83\x84" + for i in range(0, len(b)): + for j in range(i, len(b)): + raw = self.BytesIO(b) + bufio = self.tp(raw, 100) + mutate(bufio, i, j) + bufio.flush() + expected = bytearray(b) + expected[j] = 2 + expected[i] = 1 + self.assertEqual(raw.getvalue(), expected, + "failed result for i=%d, j=%d" % (i, j)) + def test_truncate_after_read_or_write(self): raw = self.BytesIO(b"A" * 10) bufio = self.tp(raw, 100) @@ -2211,6 +2247,12 @@ txt.close() self.assertRaises(ValueError, txt.flush) + def test_readonly_attributes(self): + txt = self.TextIOWrapper(self.BytesIO(self.testdata), encoding="ascii") + buf = self.BytesIO(self.testdata) + with self.assertRaises((AttributeError, TypeError)): + txt.buffer = buf + class CTextIOWrapperTest(TextIOWrapperTest): def test_initialization(self): @@ -2429,6 +2471,8 @@ self.assertRaises(ValueError, f.read) if hasattr(f, "read1"): self.assertRaises(ValueError, f.read1, 1024) + if hasattr(f, "readall"): + self.assertRaises(ValueError, f.readall) if hasattr(f, "readinto"): self.assertRaises(ValueError, f.readinto, bytearray(1024)) self.assertRaises(ValueError, f.readline) @@ -2515,7 +2559,8 @@ @unittest.skipUnless(threading, 'Threading required for this test.') def check_interrupted_write(self, item, bytes, **fdopen_kwargs): """Check that a partial write, when it gets interrupted, properly - invokes the signal handler.""" + invokes the signal handler, and bubbles up the exception raised + in the latter.""" # XXX This test has three flaws that appear when objects are # XXX not reference counted. 
@@ -2586,12 +2631,142 @@ def test_interrupted_write_text(self): self.check_interrupted_write("xy", b"xy", mode="w", encoding="ascii") + def check_reentrant_write(self, data, **fdopen_kwargs): + def on_alarm(*args): + # Will be called reentrantly from the same thread + wio.write(data) + 1/0 + signal.signal(signal.SIGALRM, on_alarm) + r, w = os.pipe() + wio = self.io.open(w, **fdopen_kwargs) + try: + signal.alarm(1) + # Either the reentrant call to wio.write() fails with RuntimeError, + # or the signal handler raises ZeroDivisionError. + with self.assertRaises((ZeroDivisionError, RuntimeError)) as cm: + while 1: + for i in range(100): + wio.write(data) + wio.flush() + # Make sure the buffer doesn't fill up and block further writes + os.read(r, len(data) * 100) + exc = cm.exception + if isinstance(exc, RuntimeError): + self.assertTrue(str(exc).startswith("reentrant call"), str(exc)) + finally: + wio.close() + os.close(r) + + def test_reentrant_write_buffered(self): + self.check_reentrant_write(b"xy", mode="wb") + + def test_reentrant_write_text(self): + self.check_reentrant_write("xy", mode="w", encoding="ascii") + + def check_interrupted_read_retry(self, decode, **fdopen_kwargs): + """Check that a buffered read, when it gets interrupted (either + returning a partial result or EINTR), properly invokes the signal + handler and retries if the latter returned successfully.""" + r, w = os.pipe() + fdopen_kwargs["closefd"] = False + def alarm_handler(sig, frame): + os.write(w, b"bar") + signal.signal(signal.SIGALRM, alarm_handler) + try: + rio = self.io.open(r, **fdopen_kwargs) + os.write(w, b"foo") + signal.alarm(1) + # Expected behaviour: + # - first raw read() returns partial b"foo" + # - second raw read() returns EINTR + # - third raw read() returns b"bar" + self.assertEqual(decode(rio.read(6)), "foobar") + finally: + rio.close() + os.close(w) + os.close(r) + + def test_interrupterd_read_retry_buffered(self): + self.check_interrupted_read_retry(lambda x: 
x.decode('latin1'), + mode="rb") + + def test_interrupterd_read_retry_text(self): + self.check_interrupted_read_retry(lambda x: x, + mode="r") + + @unittest.skipUnless(threading, 'Threading required for this test.') + def check_interrupted_write_retry(self, item, **fdopen_kwargs): + """Check that a buffered write, when it gets interrupted (either + returning a partial result or EINTR), properly invokes the signal + handler and retries if the latter returned successfully.""" + select = support.import_module("select") + # A quantity that exceeds the buffer size of an anonymous pipe's + # write end. + N = 1024 * 1024 + r, w = os.pipe() + fdopen_kwargs["closefd"] = False + # We need a separate thread to read from the pipe and allow the + # write() to finish. This thread is started after the SIGALRM is + # received (forcing a first EINTR in write()). + read_results = [] + write_finished = False + def _read(): + while not write_finished: + while r in select.select([r], [], [], 1.0)[0]: + s = os.read(r, 1024) + read_results.append(s) + t = threading.Thread(target=_read) + t.daemon = True + def alarm1(sig, frame): + signal.signal(signal.SIGALRM, alarm2) + signal.alarm(1) + def alarm2(sig, frame): + t.start() + signal.signal(signal.SIGALRM, alarm1) + try: + wio = self.io.open(w, **fdopen_kwargs) + signal.alarm(1) + # Expected behaviour: + # - first raw write() is partial (because of the limited pipe buffer + # and the first alarm) + # - second raw write() returns EINTR (because of the second alarm) + # - subsequent write()s are successful (either partial or complete) + self.assertEqual(N, wio.write(item * N)) + wio.flush() + write_finished = True + t.join() + self.assertEqual(N, sum(len(x) for x in read_results)) + finally: + write_finished = True + os.close(w) + os.close(r) + # This is deliberate. If we didn't close the file descriptor + # before closing wio, wio would try to flush its internal + # buffer, and could block (in case of failure). 
+ try: + wio.close() + except IOError as e: + if e.errno != errno.EBADF: + raise + + def test_interrupterd_write_retry_buffered(self): + self.check_interrupted_write_retry(b"x", mode="wb") + + def test_interrupterd_write_retry_text(self): + self.check_interrupted_write_retry("x", mode="w", encoding="latin1") + + class CSignalsTest(SignalsTest): io = io class PySignalsTest(SignalsTest): io = pyio + # Handling reentrancy issues would slow down _pyio even more, so the + # tests are disabled. + test_reentrant_write_buffered = None + test_reentrant_write_text = None + def test_main(): tests = (CIOTest, PyIOTest, diff --git a/lib-python/modified-2.7/test/test_itertools.py b/lib-python/modified-2.7/test/test_itertools.py --- a/lib-python/modified-2.7/test/test_itertools.py +++ b/lib-python/modified-2.7/test/test_itertools.py @@ -785,13 +785,18 @@ self.assertRaises(ValueError, islice, xrange(10), 1, -5, -1) self.assertRaises(ValueError, islice, xrange(10), 1, 10, -1) self.assertRaises(ValueError, islice, xrange(10), 1, 10, 0) - self.assertRaises((ValueError,TypeError), islice, xrange(10), 'a') - self.assertRaises((ValueError,TypeError), islice, xrange(10), 'a', 1) - self.assertRaises((ValueError,TypeError), islice, xrange(10), 1, 'a') - self.assertRaises((ValueError,TypeError), islice, xrange(10), 'a', 1, 1) - self.assertRaises((ValueError,TypeError), islice, xrange(10), 1, 'a', 1) + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a') + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a', 1) + self.assertRaises((ValueError, TypeError), islice, xrange(10), 1, 'a') + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a', 1, 1) + self.assertRaises((ValueError, TypeError), islice, xrange(10), 1, 'a', 1) self.assertEqual(len(list(islice(count(), 1, 10, maxsize))), 1) + # Issue #10323: Less islice in a predictable state + c = count() + self.assertEqual(list(islice(c, 1, 3, 50)), [1]) + self.assertEqual(next(c), 3) + def 
test_takewhile(self): data = [1, 3, 5, 20, 2, 4, 6, 8] underten = lambda x: x<10 @@ -1494,7 +1499,7 @@ ... return chain(iterable, repeat(None)) >>> def ncycles(iterable, n): -... "Returns the seqeuence elements n times" +... "Returns the sequence elements n times" ... return chain(*repeat(iterable, n)) >>> def dotproduct(vec1, vec2): diff --git a/lib-python/modified-2.7/test/test_linecache.py b/lib-python/modified-2.7/test/test_linecache.py --- a/lib-python/modified-2.7/test/test_linecache.py +++ b/lib-python/modified-2.7/test/test_linecache.py @@ -9,7 +9,7 @@ FILENAME = linecache.__file__ INVALID_NAME = '!@$)(!@#_1' EMPTY = '' -TESTS = 'cjkencodings_test inspect_fodder inspect_fodder2 mapping_tests' +TESTS = 'inspect_fodder inspect_fodder2 mapping_tests' TESTS = TESTS.split() TEST_PATH = os.path.dirname(support.__file__) MODULES = "linecache abc".split() diff --git a/lib-python/modified-2.7/test/test_marshal.py b/lib-python/modified-2.7/test/test_marshal.py --- a/lib-python/modified-2.7/test/test_marshal.py +++ b/lib-python/modified-2.7/test/test_marshal.py @@ -196,7 +196,7 @@ # >>> type(loads(dumps(Int()))) # for typ in (int, long, float, complex, tuple, list, dict, set, frozenset): - # Note: str and unicode sublclasses are not tested because they get handled + # Note: str and unicode subclasses are not tested because they get handled # by marshal's routines for objects supporting the buffer API. subtyp = type('subtyp', (typ,), {}) self.assertRaises(ValueError, marshal.dumps, subtyp()) diff --git a/lib-python/modified-2.7/test/test_memoryio.py b/lib-python/modified-2.7/test/test_memoryio.py --- a/lib-python/modified-2.7/test/test_memoryio.py +++ b/lib-python/modified-2.7/test/test_memoryio.py @@ -372,7 +372,7 @@ # Pickle expects the class to be on the module level. 
Here we use a # little hack to allow the PickleTestMemIO class to derive from - # self.ioclass without having to define all combinations explictly on + # self.ioclass without having to define all combinations explicitly on # the module-level. import __main__ PickleTestMemIO.__module__ = '__main__' diff --git a/lib-python/modified-2.7/test/test_memoryview.py b/lib-python/modified-2.7/test/test_memoryview.py --- a/lib-python/modified-2.7/test/test_memoryview.py +++ b/lib-python/modified-2.7/test/test_memoryview.py @@ -9,6 +9,7 @@ import weakref import array from test import test_support +import io class AbstractMemoryTests: @@ -236,6 +237,16 @@ gc.collect() self.assertTrue(wr() is None, wr()) + def test_writable_readonly(self): + # Issue #10451: memoryview incorrectly exposes a readonly + # buffer as writable causing a segfault if using mmap + tp = self.ro_type + if tp is None: + return + b = tp(self._source) + m = self._view(b) + i = io.BytesIO(b'ZZZZ') + self.assertRaises(TypeError, i.readinto, m) # Variations on source objects for the buffer: bytes-like objects, then arrays # with itemsize > 1. 
diff --git a/lib-python/modified-2.7/test/test_mmap.py b/lib-python/modified-2.7/test/test_mmap.py --- a/lib-python/modified-2.7/test/test_mmap.py +++ b/lib-python/modified-2.7/test/test_mmap.py @@ -1,6 +1,7 @@ -from test.test_support import TESTFN, run_unittest, import_module +from test.test_support import (TESTFN, run_unittest, import_module, unlink, + requires, _2G, _4G) import unittest -import os, re, itertools, socket +import os, re, itertools, socket, sys mmap = import_module('mmap') @@ -244,6 +245,14 @@ prot=mmap.PROT_READ, access=mmap.ACCESS_WRITE) f.close() + # Try writing with PROT_EXEC and without PROT_WRITE + prot = mmap.PROT_READ | getattr(mmap, 'PROT_EXEC', 0) + with open(TESTFN, "r+b") as f: + m = mmap.mmap(f.fileno(), mapsize, prot=prot) + self.assertRaises(TypeError, m.write, b"abcdef") + self.assertRaises(TypeError, m.write_byte, 0) + m.close() + def test_bad_file_desc(self): # Try opening a bad file descriptor... self.assertRaises(mmap.error, mmap.mmap, -2, 4096) @@ -333,6 +342,36 @@ mf.close() f.close() + def test_length_0_offset(self): + # Issue #10916: test mapping of remainder of file by passing 0 for + # map length with an offset doesn't cause a segfault. + if not hasattr(os, "stat"): + self.skipTest("needs os.stat") + # NOTE: allocation granularity is currently 65536 under Win64, + # and therefore the minimum offset alignment. + with open(TESTFN, "wb") as f: + f.write((65536 * 2) * b'm') # Arbitrary character + + with open(TESTFN, "rb") as f: + mf = mmap.mmap(f.fileno(), 0, offset=65536, access=mmap.ACCESS_READ) + try: + self.assertRaises(IndexError, mf.__getitem__, 80000) + finally: + mf.close() + + def test_length_0_large_offset(self): + # Issue #10959: test mapping of a file by passing 0 for + # map length with a large offset doesn't cause a segfault. 
+ if not hasattr(os, "stat"): + self.skipTest("needs os.stat") + + with open(TESTFN, "wb") as f: + f.write(115699 * b'm') # Arbitrary character + + with open(TESTFN, "w+b") as f: + self.assertRaises(ValueError, mmap.mmap, f.fileno(), 0, + offset=2147418112) + def test_move(self): # make move works everywhere (64-bit format problem earlier) f = open(TESTFN, 'w+') @@ -562,7 +601,7 @@ m2.close() m1.close() - # Test differnt tag + # Test different tag m1 = mmap.mmap(-1, len(data1), tagname="foo") m1[:] = data1 m2 = mmap.mmap(-1, len(data2), tagname="boo") @@ -608,8 +647,69 @@ finally: s.close() + +class LargeMmapTests(unittest.TestCase): + + def setUp(self): + unlink(TESTFN) + + def tearDown(self): + unlink(TESTFN) + + def _make_test_file(self, num_zeroes, tail): + if sys.platform[:3] == 'win' or sys.platform == 'darwin': + requires('largefile', + 'test requires %s bytes and a long time to run' % str(0x180000000)) + f = open(TESTFN, 'w+b') + try: + f.seek(num_zeroes) + f.write(tail) + f.flush() + except (IOError, OverflowError): + f.close() + raise unittest.SkipTest("filesystem does not have largefile support") + return f + + def test_large_offset(self): + with self._make_test_file(0x14FFFFFFF, b" ") as f: + m = mmap.mmap(f.fileno(), 0, offset=0x140000000, access=mmap.ACCESS_READ) + try: + self.assertEqual(m[0xFFFFFFF], b" ") + finally: + m.close() + + def test_large_filesize(self): + with self._make_test_file(0x17FFFFFFF, b" ") as f: + m = mmap.mmap(f.fileno(), 0x10000, access=mmap.ACCESS_READ) + try: + self.assertEqual(m.size(), 0x180000000) + finally: + m.close() + + # Issue 11277: mmap() with large (~4GB) sparse files crashes on OS X. 
+ + def _test_around_boundary(self, boundary): + tail = b' DEARdear ' + start = boundary - len(tail) // 2 + end = start + len(tail) + with self._make_test_file(start, tail) as f: + m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) + try: + self.assertEqual(m[start:end], tail) + finally: + m.close() + + @unittest.skipUnless(sys.maxsize > _4G, "test cannot run on 32-bit systems") + def test_around_2GB(self): + self._test_around_boundary(_2G) + + @unittest.skipUnless(sys.maxsize > _4G, "test cannot run on 32-bit systems") + def test_around_4GB(self): + self._test_around_boundary(_4G) + + def test_main(): - run_unittest(MmapTests) + run_unittest(MmapTests, LargeMmapTests) if __name__ == '__main__': test_main() diff --git a/lib-python/modified-2.7/test/test_multibytecodec.py b/lib-python/modified-2.7/test/test_multibytecodec.py --- a/lib-python/modified-2.7/test/test_multibytecodec.py +++ b/lib-python/modified-2.7/test/test_multibytecodec.py @@ -238,6 +238,36 @@ # Any ISO 2022 codec will cause the segfault myunichr(x).encode('iso_2022_jp', 'ignore') +class TestStateful(unittest.TestCase): + text = u'\u4E16\u4E16' + encoding = 'iso-2022-jp' + expected = b'\x1b$B@$@$' + expected_reset = b'\x1b$B@$@$\x1b(B' + + def test_encode(self): + self.assertEqual(self.text.encode(self.encoding), self.expected_reset) + + def test_incrementalencoder(self): + encoder = codecs.getincrementalencoder(self.encoding)() + output = b''.join( + encoder.encode(char) + for char in self.text) + self.assertEqual(output, self.expected) + + def test_incrementalencoder_final(self): + encoder = codecs.getincrementalencoder(self.encoding)() + last_index = len(self.text) - 1 + output = b''.join( + encoder.encode(char, index == last_index) + for index, char in enumerate(self.text)) + self.assertEqual(output, self.expected_reset) + +class TestHZStateful(TestStateful): + text = u'\u804a\u804a' + encoding = 'hz' + expected = b'~{ADAD' + expected_reset = b'~{ADAD~}' + def test_main(): 
test_support.run_unittest(__name__) diff --git a/lib-python/modified-2.7/test/test_multibytecodec_support.py b/lib-python/modified-2.7/test/test_multibytecodec_support.py --- a/lib-python/modified-2.7/test/test_multibytecodec_support.py +++ b/lib-python/modified-2.7/test/test_multibytecodec_support.py @@ -4,8 +4,11 @@ # Common Unittest Routines for CJK codecs # -import sys, codecs -import unittest, re +import codecs +import os +import re +import sys +import unittest from httplib import HTTPException from test import test_support from StringIO import StringIO @@ -326,6 +329,10 @@ self.fail('Decoding failed while testing %s -> %s: %s' % ( repr(csetch), repr(unich), exc.reason)) -def load_teststring(encoding): - from test import cjkencodings_test - return cjkencodings_test.teststring[encoding] +def load_teststring(name): + dir = os.path.join(os.path.dirname(__file__), 'cjkencodings') + with open(os.path.join(dir, name + '.txt'), 'rb') as f: + encoded = f.read() + with open(os.path.join(dir, name + '-utf8.txt'), 'rb') as f: + utf8 = f.read() + return encoded, utf8 diff --git a/lib-python/modified-2.7/test/test_multiprocessing.py b/lib-python/modified-2.7/test/test_multiprocessing.py --- a/lib-python/modified-2.7/test/test_multiprocessing.py +++ b/lib-python/modified-2.7/test/test_multiprocessing.py @@ -18,7 +18,7 @@ from test import test_support from StringIO import StringIO _multiprocessing = test_support.import_module('_multiprocessing') -# import threading after _multiprocessing to raise a more revelant error +# import threading after _multiprocessing to raise a more relevant error # message: "No module named _multiprocessing". _multiprocessing is not compiled # without thread support. import threading @@ -780,7 +780,7 @@ event = self.Event() wait = TimingWrapper(event.wait) - # Removed temporaily, due to API shear, this does not + # Removed temporarily, due to API shear, this does not # work with threading._Event objects. 
is_set == isSet self.assertEqual(event.is_set(), False) @@ -914,10 +914,32 @@ self.assertEqual(list(arr[:]), seq) @unittest.skipIf(c_int is None, "requires _ctypes") + def test_array_from_size(self): + size = 10 + # Test for zeroing (see issue #11675). + # The repetition below strengthens the test by increasing the chances + # of previously allocated non-zero memory being used for the new array + # on the 2nd and 3rd loops. + for _ in range(3): + arr = self.Array('i', size) + self.assertEqual(len(arr), size) + self.assertEqual(list(arr), [0] * size) + arr[:] = range(10) + self.assertEqual(list(arr), range(10)) + del arr + + @unittest.skipIf(c_int is None, "requires _ctypes") def test_rawarray(self): self.test_array(raw=True) @unittest.skipIf(c_int is None, "requires _ctypes") + def test_array_accepts_long(self): + arr = self.Array('i', 10L) + self.assertEqual(len(arr), 10) + raw_arr = self.RawArray('i', 10L) + self.assertEqual(len(raw_arr), 10) + + @unittest.skipIf(c_int is None, "requires _ctypes") def test_getobj_getlock_obj(self): arr1 = self.Array('i', range(10)) lock1 = arr1.get_lock() @@ -1104,7 +1126,8 @@ # Refill the pool p._repopulate_pool() # Wait until all workers are alive - countdown = 5 + # (countdown * DELTA = 5 seconds max startup process time) + countdown = 50 while countdown and not all(w.is_alive() for w in p._pool): countdown -= 1 time.sleep(DELTA) @@ -1711,7 +1734,7 @@ util.Finalize(None, conn.send, args=('STOP',), exitpriority=-100) - # call mutliprocessing's cleanup function then exit process without + # call multiprocessing's cleanup function then exit process without # garbage collecting locals util._exit_function() conn.close() diff --git a/lib-python/modified-2.7/test/test_peepholer.py b/lib-python/modified-2.7/test/test_peepholer.py --- a/lib-python/modified-2.7/test/test_peepholer.py +++ b/lib-python/modified-2.7/test/test_peepholer.py @@ -143,6 +143,24 @@ asm = dis_single('a="x"*1000') self.assertIn('(1000)', asm) + def 
test_binary_subscr_on_unicode(self): + # valid code get optimized + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + + # invalid code doesn't get optimized + # out of range + asm = dis_single('u"fuu"[10]') + self.assertIn('BINARY_SUBSCR', asm) + # non-BMP char (see #5057) + asm = dis_single('u"\U00012345"[0]') + self.assertIn('BINARY_SUBSCR', asm) + + def test_folding_of_unaryops_on_constants(self): for line, elem in ( ('`1`', "('1')"), # unary convert diff --git a/lib-python/modified-2.7/test/test_pydoc.py b/lib-python/modified-2.7/test/test_pydoc.py --- a/lib-python/modified-2.7/test/test_pydoc.py +++ b/lib-python/modified-2.7/test/test_pydoc.py @@ -6,12 +6,14 @@ import re import pydoc import inspect +import keyword import unittest import xml.etree import test.test_support from contextlib import contextmanager +from collections import namedtuple from test.test_support import ( - TESTFN, forget, rmtree, EnvironmentVarGuard, reap_children) + TESTFN, forget, rmtree, EnvironmentVarGuard, reap_children, captured_stdout) from test import pydoc_mod @@ -340,10 +342,26 @@ expected = 'C in module %s object' % __name__ self.assertIn(expected, pydoc.render_doc(c)) + def test_namedtuple_public_underscore(self): + NT = namedtuple('NT', ['abc', 'def'], rename=True) + with captured_stdout() as help_io: + help(NT) + helptext = help_io.getvalue() + self.assertIn('_1', helptext) + self.assertIn('_replace', helptext) + self.assertIn('_asdict', helptext) + + +class TestHelper(unittest.TestCase): + def test_keywords(self): + self.assertEqual(sorted(pydoc.Helper.keywords), + sorted(keyword.kwlist)) + def test_main(): test.test_support.run_unittest(PyDocDocTest, - TestDescriptions) + TestDescriptions, + TestHelper) if __name__ == "__main__": test_main() diff --git a/lib-python/modified-2.7/test/test_pyexpat.py 
b/lib-python/modified-2.7/test/test_pyexpat.py --- a/lib-python/modified-2.7/test/test_pyexpat.py +++ b/lib-python/modified-2.7/test/test_pyexpat.py @@ -6,6 +6,7 @@ from xml.parsers import expat +from test import test_support from test.test_support import sortdict, run_unittest @@ -217,6 +218,16 @@ self.assertEqual(op[15], "External entity ref: (None, u'entity.file', None)") self.assertEqual(op[16], "End element: u'root'") + # Issue 4877: expat.ParseFile causes segfault on a closed file. + fp = open(test_support.TESTFN, 'wb') + try: + fp.close() + parser = expat.ParserCreate() + with self.assertRaises(ValueError): + parser.ParseFile(fp) + finally: + test_support.unlink(test_support.TESTFN) + class NamespaceSeparatorTest(unittest.TestCase): def test_legal(self): diff --git a/lib-python/modified-2.7/test/test_set.py b/lib-python/modified-2.7/test/test_set.py --- a/lib-python/modified-2.7/test/test_set.py +++ b/lib-python/modified-2.7/test/test_set.py @@ -1592,6 +1592,39 @@ self.assertRaises(TypeError, getattr(set('january'), methname), N(data)) self.assertRaises(ZeroDivisionError, getattr(set('january'), methname), E(data)) +class bad_eq: + def __eq__(self, other): + if be_bad: + set2.clear() + raise ZeroDivisionError + return self is other + def __hash__(self): + return 0 + +class bad_dict_clear: + def __eq__(self, other): + if be_bad: + dict2.clear() + return self is other + def __hash__(self): + return 0 + +class TestWeirdBugs(unittest.TestCase): + def test_8420_set_merge(self): + # This used to segfault + global be_bad, set2, dict2 + be_bad = False + set1 = {bad_eq()} + set2 = {bad_eq() for i in range(75)} + be_bad = True + self.assertRaises(ZeroDivisionError, set1.update, set2) + + be_bad = False + set1 = {bad_dict_clear()} + dict2 = {bad_dict_clear(): None} + be_bad = True + set1.symmetric_difference_update(dict2) + # Application tests (based on David Eppstein's graph recipes ==================================== def powerset(U): @@ -1733,6 +1766,7 @@ 
TestIdentities, TestVariousIteratorArgs, TestGraphs, + TestWeirdBugs, ) test_support.run_unittest(*test_classes) diff --git a/lib-python/modified-2.7/test/test_site.py b/lib-python/modified-2.7/test/test_site.py --- a/lib-python/modified-2.7/test/test_site.py +++ b/lib-python/modified-2.7/test/test_site.py @@ -6,9 +6,11 @@ """ import unittest from test.test_support import run_unittest, TESTFN, EnvironmentVarGuard +from test.test_support import captured_output import __builtin__ import os import sys +import re import encodings import subprocess import sysconfig @@ -94,6 +96,58 @@ finally: pth_file.cleanup() + def make_pth(self, contents, pth_dir='.', pth_name=TESTFN): + # Create a .pth file and return its (abspath, basename). + pth_dir = os.path.abspath(pth_dir) + pth_basename = pth_name + '.pth' + pth_fn = os.path.join(pth_dir, pth_basename) + pth_file = open(pth_fn, 'w') + self.addCleanup(lambda: os.remove(pth_fn)) + pth_file.write(contents) + pth_file.close() + return pth_dir, pth_basename + + def test_addpackage_import_bad_syntax(self): + # Issue 10642 + pth_dir, pth_fn = self.make_pth("import bad)syntax\n") + with captured_output("stderr") as err_out: + site.addpackage(pth_dir, pth_fn, set()) + self.assertRegexpMatches(err_out.getvalue(), "line 1") + self.assertRegexpMatches(err_out.getvalue(), + re.escape(os.path.join(pth_dir, pth_fn))) + # XXX: the previous two should be independent checks so that the + # order doesn't matter. The next three could be a single check + # but my regex foo isn't good enough to write it. 
+ self.assertRegexpMatches(err_out.getvalue(), 'Traceback') + self.assertRegexpMatches(err_out.getvalue(), r'import bad\)syntax') + self.assertRegexpMatches(err_out.getvalue(), 'SyntaxError') + + def test_addpackage_import_bad_exec(self): + # Issue 10642 + pth_dir, pth_fn = self.make_pth("randompath\nimport nosuchmodule\n") + with captured_output("stderr") as err_out: + site.addpackage(pth_dir, pth_fn, set()) + self.assertRegexpMatches(err_out.getvalue(), "line 2") + self.assertRegexpMatches(err_out.getvalue(), + re.escape(os.path.join(pth_dir, pth_fn))) + # XXX: ditto previous XXX comment. + self.assertRegexpMatches(err_out.getvalue(), 'Traceback') + self.assertRegexpMatches(err_out.getvalue(), 'ImportError') + + @unittest.skipIf(sys.platform == "win32", "Windows does not raise an " + "error for file paths containing null characters") + def test_addpackage_import_bad_pth_file(self): + # Issue 5258 + pth_dir, pth_fn = self.make_pth("abc\x00def\n") + with captured_output("stderr") as err_out: + site.addpackage(pth_dir, pth_fn, set()) + self.assertRegexpMatches(err_out.getvalue(), "line 1") + self.assertRegexpMatches(err_out.getvalue(), + re.escape(os.path.join(pth_dir, pth_fn))) + # XXX: ditto previous XXX comment. 
+ self.assertRegexpMatches(err_out.getvalue(), 'Traceback') + self.assertRegexpMatches(err_out.getvalue(), 'TypeError') + def test_addsitedir(self): # Same tests for test_addpackage since addsitedir() essentially just # calls addpackage() for every .pth file in the directory @@ -111,13 +165,17 @@ usersite = site.USER_SITE self.assertIn(usersite, sys.path) + env = os.environ.copy() rc = subprocess.call([sys.executable, '-c', - 'import sys; sys.exit(%r in sys.path)' % usersite]) + 'import sys; sys.exit(%r in sys.path)' % usersite], + env=env) self.assertEqual(rc, 1, "%r is not in sys.path (sys.exit returned %r)" % (usersite, rc)) + env = os.environ.copy() rc = subprocess.call([sys.executable, '-s', '-c', - 'import sys; sys.exit(%r in sys.path)' % usersite]) + 'import sys; sys.exit(%r in sys.path)' % usersite], + env=env) self.assertEqual(rc, 0) env = os.environ.copy() diff --git a/lib-python/modified-2.7/test/test_socket.py b/lib-python/modified-2.7/test/test_socket.py --- a/lib-python/modified-2.7/test/test_socket.py +++ b/lib-python/modified-2.7/test/test_socket.py @@ -275,6 +275,45 @@ self.assertRaises(socket.error, raise_gaierror, "Error raising socket exception.") + def testSendtoErrors(self): + # Testing that sendto doesn't mask failures. See #10169. 
+ s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + self.addCleanup(s.close) + s.bind(('', 0)) + sockname = s.getsockname() + # 2 args + with self.assertRaises(UnicodeEncodeError): + s.sendto(u'\u2620', sockname) + with self.assertRaises(TypeError) as cm: + s.sendto(5j, sockname) + self.assertIn('not complex', str(cm.exception)) + with self.assertRaises(TypeError) as cm: + s.sendto('foo', None) + self.assertIn('not NoneType', str(cm.exception)) + # 3 args + with self.assertRaises(UnicodeEncodeError): + s.sendto(u'\u2620', 0, sockname) + with self.assertRaises(TypeError) as cm: + s.sendto(5j, 0, sockname) + self.assertIn('not complex', str(cm.exception)) + with self.assertRaises(TypeError) as cm: + s.sendto('foo', 0, None) + self.assertIn('not NoneType', str(cm.exception)) + with self.assertRaises(TypeError) as cm: + s.sendto('foo', 'bar', sockname) + self.assertIn('an integer is required', str(cm.exception)) + with self.assertRaises(TypeError) as cm: + s.sendto('foo', None, None) + self.assertIn('an integer is required', str(cm.exception)) + # wrong number of args + with self.assertRaises(TypeError) as cm: + s.sendto('foo') + self.assertIn('(1 given)', str(cm.exception)) + with self.assertRaises(TypeError) as cm: + s.sendto('foo', 0, sockname, 4) + self.assertIn('(4 given)', str(cm.exception)) + + def testCrucialConstants(self): # Testing for mission critical constants socket.AF_INET @@ -662,6 +701,13 @@ def test_sendall_interrupted_with_timeout(self): self.check_sendall_interrupted(True) + def testListenBacklog0(self): + srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + srv.bind((HOST, 0)) + # backlog = 0 + srv.listen(0) + srv.close() + @unittest.skipUnless(thread, 'Threading required for this test.') class BasicTCPTest(SocketConnectedTest): diff --git a/lib-python/modified-2.7/test/test_ssl.py b/lib-python/modified-2.7/test/test_ssl.py --- a/lib-python/modified-2.7/test/test_ssl.py +++ b/lib-python/modified-2.7/test/test_ssl.py @@ -20,12 +20,7 @@ 
from BaseHTTPServer import HTTPServer from SimpleHTTPServer import SimpleHTTPRequestHandler -# Optionally test SSL support, if we have it in the tested platform -skip_expected = False -try: - import ssl -except ImportError: - skip_expected = True +ssl = test_support.import_module("ssl") HOST = test_support.HOST CERTFILE = None @@ -228,6 +223,50 @@ finally: s.close() + def test_connect_ex(self): + # Issue #11326: check connect_ex() implementation + with test_support.transient_internet("svn.python.org"): + s = ssl.wrap_socket(socket.socket(socket.AF_INET), + cert_reqs=ssl.CERT_REQUIRED, + ca_certs=SVN_PYTHON_ORG_ROOT_CERT) + try: + self.assertEqual(0, s.connect_ex(("svn.python.org", 443))) + self.assertTrue(s.getpeercert()) + finally: + s.close() + + def test_non_blocking_connect_ex(self): + # Issue #11326: non-blocking connect_ex() should allow handshake + # to proceed after the socket gets ready. + with test_support.transient_internet("svn.python.org"): + s = ssl.wrap_socket(socket.socket(socket.AF_INET), + cert_reqs=ssl.CERT_REQUIRED, + ca_certs=SVN_PYTHON_ORG_ROOT_CERT, + do_handshake_on_connect=False) + try: + s.setblocking(False) + rc = s.connect_ex(('svn.python.org', 443)) + # EWOULDBLOCK under Windows, EINPROGRESS elsewhere + self.assertIn(rc, (0, errno.EINPROGRESS, errno.EWOULDBLOCK)) + # Wait for connect to finish + select.select([], [s], [], 5.0) + # Non-blocking handshake + while True: + try: + s.do_handshake() + break + except ssl.SSLError as err: + if err.args[0] == ssl.SSL_ERROR_WANT_READ: + select.select([s], [], [], 5.0) + elif err.args[0] == ssl.SSL_ERROR_WANT_WRITE: + select.select([], [s], [], 5.0) + else: + raise + # SSL established + self.assertTrue(s.getpeercert()) + finally: + s.close() + @unittest.skipIf(os.name == "nt", "Can't use a socket as a file under Windows") def test_makefile_close(self): # Issue #5238: creating a file-like object with makefile() shouldn't @@ -300,9 +339,9 @@ if ssl.OPENSSL_VERSION_INFO < (0, 9, 8, 0, 15): 
self.skipTest("SHA256 not available on %r" % ssl.OPENSSL_VERSION) # NOTE: https://sha256.tbs-internet.com is another possible test host - remote = ("sha2.hboeck.de", 443) + remote = ("sha256.tbs-internet.com", 443) sha256_cert = os.path.join(os.path.dirname(__file__), "sha256.pem") - with test_support.transient_internet("sha2.hboeck.de"): + with test_support.transient_internet("sha256.tbs-internet.com"): s = ssl.wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=sha256_cert,) @@ -1292,12 +1331,10 @@ def test_main(verbose=False): - if skip_expected: - raise unittest.SkipTest("No SSL support") - global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT CERTFILE = test_support.findfile("keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( + os.path.dirname(__file__) or os.curdir, "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/modified-2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/modified-2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -575,7 +575,8 @@ subprocess.Popen(['nonexisting_i_hope'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) - if c.exception.errno != errno.ENOENT: # ignore "no such file" + # ignore errors that indicate the command was not found + if c.exception.errno not in (errno.ENOENT, errno.EACCES): raise c.exception def test_handles_closed_on_exception(self): @@ -598,6 +599,24 @@ self.assertFalse(os.path.exists(ofname)) self.assertFalse(os.path.exists(efname)) + def test_communicate_epipe(self): + # Issue 10963: communicate() should hide EPIPE + p = subprocess.Popen([sys.executable, "-c", 'pass'], + stdin=subprocess.PIPE, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE) + self.addCleanup(p.stdout.close) + self.addCleanup(p.stderr.close) + self.addCleanup(p.stdin.close) + p.communicate("x" * 2**20) + + def 
test_communicate_epipe_only_stdin(self): + # Issue 10963: communicate() should hide EPIPE + p = subprocess.Popen([sys.executable, "-c", 'pass'], + stdin=subprocess.PIPE) + self.addCleanup(p.stdin.close) + time.sleep(2) + p.communicate("x" * 2**20) # context manager class _SuppressCoreFiles(object): @@ -780,6 +799,68 @@ self.assertStderrEqual(stderr, '') self.assertEqual(p.wait(), -signal.SIGTERM) + def check_close_std_fds(self, fds): + # Issue #9905: test that subprocess pipes still work properly with + # some standard fds closed + stdin = 0 + newfds = [] + for a in fds: + b = os.dup(a) + newfds.append(b) + if a == 0: + stdin = b + try: + for fd in fds: + os.close(fd) + out, err = subprocess.Popen([sys.executable, "-c", + 'import sys;' + 'sys.stdout.write("apple");' + 'sys.stdout.flush();' + 'sys.stderr.write("orange")'], + stdin=stdin, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE).communicate() + err = test_support.strip_python_stderr(err) + self.assertEqual((out, err), (b'apple', b'orange')) + finally: + for b, a in zip(newfds, fds): + os.dup2(b, a) + for b in newfds: + os.close(b) + + def test_close_fd_0(self): + self.check_close_std_fds([0]) + + def test_close_fd_1(self): + self.check_close_std_fds([1]) + + def test_close_fd_2(self): + self.check_close_std_fds([2]) + + def test_close_fds_0_1(self): + self.check_close_std_fds([0, 1]) + + def test_close_fds_0_2(self): + self.check_close_std_fds([0, 2]) + + def test_close_fds_1_2(self): + self.check_close_std_fds([1, 2]) + + def test_close_fds_0_1_2(self): + # Issue #10806: test that subprocess pipes still work properly with + # all standard fds closed. + self.check_close_std_fds([0, 1, 2]) + + def test_wait_when_sigchild_ignored(self): + # NOTE: sigchild_ignore.py may not be an effective test on all OSes. 
+ sigchild_ignore = test_support.findfile("sigchild_ignore.py", + subdir="subprocessdata") + p = subprocess.Popen([sys.executable, sigchild_ignore], + stdout=subprocess.PIPE, stderr=subprocess.PIPE) + stdout, stderr = p.communicate() + self.assertEqual(0, p.returncode, "sigchild_ignore.py exited" + " non-zero with this error:\n%s" % stderr) + @unittest.skipUnless(mswindows, "Windows specific tests") class Win32ProcessTestCase(BaseTestCase): diff --git a/lib-python/modified-2.7/test/test_support.py b/lib-python/modified-2.7/test/test_support.py --- a/lib-python/modified-2.7/test/test_support.py +++ b/lib-python/modified-2.7/test/test_support.py @@ -35,7 +35,8 @@ "run_with_locale", "set_memlimit", "bigmemtest", "bigaddrspacetest", "BasicTestRunner", "run_unittest", "run_doctest", "threading_setup", "threading_cleanup", "reap_children", "cpython_only", - "check_impl_detail", "get_attribute", "py3k_bytes"] + "check_impl_detail", "get_attribute", "py3k_bytes", + "import_fresh_module"] class Error(Exception): @@ -83,23 +84,20 @@ def _save_and_remove_module(name, orig_modules): """Helper function to save and remove a module from sys.modules - Return value is True if the module was in sys.modules and - False otherwise.""" - saved = True - try: - orig_modules[name] = sys.modules[name] - except KeyError: - saved = False - else: + Raise ImportError if the module can't be imported.""" + # try to import the module and raise an error if it can't be imported + if name not in sys.modules: + __import__(name) del sys.modules[name] - return saved - + for modname in list(sys.modules): + if modname == name or modname.startswith(name + '.'): + orig_modules[modname] = sys.modules[modname] + del sys.modules[modname] def _save_and_block_module(name, orig_modules): """Helper function to save and block a module in sys.modules - Return value is True if the module was in sys.modules and - False otherwise.""" + Return True if the module was in sys.modules, False otherwise.""" saved = True try: 
orig_modules[name] = sys.modules[name] @@ -115,14 +113,15 @@ the sys.modules cache is restored to its original state. Modules named in fresh are also imported anew if needed by the import. + If one of these modules can't be imported, None is returned. Importing of modules named in blocked is prevented while the fresh import takes place. If deprecated is True, any module or package deprecation messages will be suppressed.""" - # NOTE: test_heapq and test_warnings include extra sanity checks to make - # sure that this utility function is working as expected + # NOTE: test_heapq, test_json, and test_warnings include extra sanity + # checks to make sure that this utility function is working as expected with _ignore_deprecated_imports(deprecated): # Keep track of modules saved for later restoration as well # as those which just need a blocking entry removed @@ -136,6 +135,8 @@ if not _save_and_block_module(blocked_name, orig_modules): names_to_remove.append(blocked_name) fresh_module = importlib.import_module(name) + except ImportError: + fresh_module = None finally: for orig_name, module in orig_modules.items(): sys.modules[orig_name] = module @@ -761,6 +762,7 @@ default_errnos = [ ('ECONNREFUSED', 111), ('ECONNRESET', 104), + ('EHOSTUNREACH', 113), ('ENETUNREACH', 101), ('ETIMEDOUT', 110), ] @@ -816,14 +818,8 @@ @contextlib.contextmanager def captured_output(stream_name): - """Run the 'with' statement body using a StringIO object in place of a - specific attribute on the sys module. 
- Example use (with 'stream_name=stdout'):: - - with captured_stdout() as s: - print "hello" - assert s.getvalue() == "hello" - """ + """Return a context manager used by captured_stdout and captured_stdin + that temporarily replaces the sys stream *stream_name* with a StringIO.""" import StringIO orig_stdout = getattr(sys, stream_name) setattr(sys, stream_name, StringIO.StringIO()) @@ -833,6 +829,12 @@ setattr(sys, stream_name, orig_stdout) def captured_stdout(): + """Capture the output of sys.stdout: + + with captured_stdout() as s: + print "hello" + self.assertEqual(s.getvalue(), "hello") + """ return captured_output("stdout") def captured_stdin(): @@ -1270,3 +1272,13 @@ if v > 0: args.append('-' + opt * v) return args + +def strip_python_stderr(stderr): + """Strip the stderr of a Python process from potential debug output + emitted by the interpreter. + + This will typically be run on the result of the communicate() method + of a subprocess.Popen object. + """ + stderr = re.sub(br"\[\d+ refs\]\r?\n?$", b"", stderr).strip() + return stderr diff --git a/lib-python/modified-2.7/test/test_syntax.py b/lib-python/modified-2.7/test/test_syntax.py --- a/lib-python/modified-2.7/test/test_syntax.py +++ b/lib-python/modified-2.7/test/test_syntax.py @@ -267,7 +267,7 @@ Test continue in finally in weird combinations. -continue in for loop under finally shouuld be ok. +continue in for loop under finally should be ok. >>> def test(): ... 
try: diff --git a/lib-python/modified-2.7/test/test_sysconfig.py b/lib-python/modified-2.7/test/test_sysconfig.py --- a/lib-python/modified-2.7/test/test_sysconfig.py +++ b/lib-python/modified-2.7/test/test_sysconfig.py @@ -141,7 +141,7 @@ ('Darwin Kernel Version 8.11.1: ' 'Wed Oct 10 18:23:28 PDT 2007; ' 'root:xnu-792.25.20~1/RELEASE_I386'), 'PowerPC')) - os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.3' + get_config_vars()['MACOSX_DEPLOYMENT_TARGET'] = '10.3' get_config_vars()['CFLAGS'] = ('-fno-strict-aliasing -DNDEBUG -g ' '-fwrapv -O3 -Wall -Wstrict-prototypes') @@ -161,7 +161,6 @@ 'Wed Oct 10 18:23:28 PDT 2007; ' 'root:xnu-792.25.20~1/RELEASE_I386'), 'i386')) get_config_vars()['MACOSX_DEPLOYMENT_TARGET'] = '10.3' - os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.3' get_config_vars()['CFLAGS'] = ('-fno-strict-aliasing -DNDEBUG -g ' '-fwrapv -O3 -Wall -Wstrict-prototypes') @@ -176,7 +175,7 @@ sys.maxint = maxint # macbook with fat binaries (fat, universal or fat64) - os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.4' + get_config_vars()['MACOSX_DEPLOYMENT_TARGET'] = '10.4' get_config_vars()['CFLAGS'] = ('-arch ppc -arch i386 -isysroot ' '/Developer/SDKs/MacOSX10.4u.sdk ' '-fno-strict-aliasing -fno-common ' @@ -274,6 +273,51 @@ user_path = get_path(name, 'posix_user') self.assertEqual(user_path, global_path.replace(base, user)) + @unittest.skipUnless(sys.platform == "darwin", "test only relevant on MacOSX") + def test_platform_in_subprocess(self): + my_platform = sysconfig.get_platform() + + # Test without MACOSX_DEPLOYMENT_TARGET in the environment + + env = os.environ.copy() + if 'MACOSX_DEPLOYMENT_TARGET' in env: + del env['MACOSX_DEPLOYMENT_TARGET'] + + with open('/dev/null', 'w') as devnull_fp: + p = subprocess.Popen([ + sys.executable, '-c', + 'import sysconfig; print(sysconfig.get_platform())', + ], + stdout=subprocess.PIPE, + stderr=devnull_fp, + env=env) + test_platform = p.communicate()[0].strip() + test_platform = test_platform.decode('utf-8') + status = 
p.wait() + + self.assertEqual(status, 0) + self.assertEqual(my_platform, test_platform) + + + # Test with MACOSX_DEPLOYMENT_TARGET in the environment, and + # using a value that is unlikely to be the default one. + env = os.environ.copy() + env['MACOSX_DEPLOYMENT_TARGET'] = '10.1' + + p = subprocess.Popen([ + sys.executable, '-c', + 'import sysconfig; print(sysconfig.get_platform())', + ], + stdout=subprocess.PIPE, + stderr=open('/dev/null'), + env=env) + test_platform = p.communicate()[0].strip() + test_platform = test_platform.decode('utf-8') + status = p.wait() + + self.assertEqual(status, 0) + self.assertEqual(my_platform, test_platform) + def test_main(): run_unittest(TestSysConfig) diff --git a/lib-python/modified-2.7/test/test_tarfile.py b/lib-python/modified-2.7/test/test_tarfile.py --- a/lib-python/modified-2.7/test/test_tarfile.py +++ b/lib-python/modified-2.7/test/test_tarfile.py @@ -855,6 +855,94 @@ finally: os.chdir(cwd) + @unittest.skipUnless(hasattr(os, 'symlink'), "needs os.symlink") + def test_extractall_symlinks(self): + # Test if extractall works properly when tarfile contains symlinks + tempdir = os.path.join(TEMPDIR, "testsymlinks") + temparchive = os.path.join(TEMPDIR, "testsymlinks.tar") + os.mkdir(tempdir) + try: + source_file = os.path.join(tempdir,'source') + target_file = os.path.join(tempdir,'symlink') + with open(source_file,'w') as f: + f.write('something\n') + os.symlink(source_file, target_file) + tar = tarfile.open(temparchive,'w') + tar.add(source_file, arcname=os.path.basename(source_file)) + tar.add(target_file, arcname=os.path.basename(target_file)) + tar.close() + # Let's extract it to the location which contains the symlink + tar = tarfile.open(temparchive,'r') + # this should not raise OSError: [Errno 17] File exists + try: + tar.extractall(path=tempdir) + except OSError: + self.fail("extractall failed with symlinked files") + finally: + tar.close() + finally: + os.unlink(temparchive) + shutil.rmtree(tempdir) + + 
@unittest.skipUnless(hasattr(os, 'symlink'), "needs os.symlink") + def test_extractall_broken_symlinks(self): + # Test if extractall works properly when tarfile contains broken + # symlinks + tempdir = os.path.join(TEMPDIR, "testsymlinks") + temparchive = os.path.join(TEMPDIR, "testsymlinks.tar") + os.mkdir(tempdir) + try: + source_file = os.path.join(tempdir,'source') + target_file = os.path.join(tempdir,'symlink') + with open(source_file,'w') as f: + f.write('something\n') + os.symlink(source_file, target_file) + tar = tarfile.open(temparchive,'w') + tar.add(target_file, arcname=os.path.basename(target_file)) + tar.close() + # remove the real file + os.unlink(source_file) + # Let's extract it to the location which contains the symlink + tar = tarfile.open(temparchive,'r') + # this should not raise OSError: [Errno 17] File exists + try: + tar.extractall(path=tempdir) + except OSError: + self.fail("extractall failed with broken symlinked files") + finally: + tar.close() + finally: + os.unlink(temparchive) + shutil.rmtree(tempdir) + + @unittest.skipUnless(hasattr(os, 'link'), "needs os.link") + def test_extractall_hardlinks(self): + # Test if extractall works properly when tarfile contains symlinks + tempdir = os.path.join(TEMPDIR, "testsymlinks") + temparchive = os.path.join(TEMPDIR, "testsymlinks.tar") + os.mkdir(tempdir) + try: + source_file = os.path.join(tempdir,'source') + target_file = os.path.join(tempdir,'symlink') + with open(source_file,'w') as f: + f.write('something\n') + os.link(source_file, target_file) + tar = tarfile.open(temparchive,'w') + tar.add(source_file, arcname=os.path.basename(source_file)) + tar.add(target_file, arcname=os.path.basename(target_file)) + tar.close() + # Let's extract it to the location which contains the symlink + tar = tarfile.open(temparchive,'r') + # this should not raise OSError: [Errno 17] File exists + try: + tar.extractall(path=tempdir) + except OSError: + self.fail("extractall failed with linked files") + finally: + 
tar.close() + finally: + os.unlink(temparchive) + shutil.rmtree(tempdir) class StreamWriteTest(WriteTestBase): diff --git a/lib-python/modified-2.7/test/test_telnetlib.py b/lib-python/modified-2.7/test/test_telnetlib.py --- a/lib-python/modified-2.7/test/test_telnetlib.py +++ b/lib-python/modified-2.7/test/test_telnetlib.py @@ -34,12 +34,12 @@ data += item written = conn.send(data) data = data[written:] - conn.close() except socket.timeout: pass + else: + conn.close() finally: serv.close() - conn.close() evt.set() class GeneralTests(TestCase): diff --git a/lib-python/modified-2.7/test/test_tempfile.py b/lib-python/modified-2.7/test/test_tempfile.py --- a/lib-python/modified-2.7/test/test_tempfile.py +++ b/lib-python/modified-2.7/test/test_tempfile.py @@ -693,6 +693,23 @@ f.write('x') self.assertTrue(f._rolled) + def test_writelines(self): + # Verify writelines with a SpooledTemporaryFile + f = self.do_create() + f.writelines((b'x', b'y', b'z')) + f.seek(0) + buf = f.read() + self.assertEqual(buf, b'xyz') + + def test_writelines_sequential(self): + # A SpooledTemporaryFile should hold exactly max_size bytes, and roll + # over afterward + f = self.do_create(max_size=35) + f.writelines((b'x' * 20, b'x' * 10, b'x' * 5)) + self.assertFalse(f._rolled) + f.write(b'x') + self.assertTrue(f._rolled) + def test_sparse(self): # A SpooledTemporaryFile that is written late in the file will extend # when that occurs diff --git a/lib-python/modified-2.7/test/test_threading.py b/lib-python/modified-2.7/test/test_threading.py --- a/lib-python/modified-2.7/test/test_threading.py +++ b/lib-python/modified-2.7/test/test_threading.py @@ -10,6 +10,8 @@ import time import unittest import weakref +import os +import subprocess from test import lock_tests @@ -277,7 +279,6 @@ print("test_finalize_with_runnning_thread can't import ctypes") return # can't do anything - import subprocess rc = subprocess.call([sys.executable, "-c", """if 1: import ctypes, sys, time, thread @@ -308,7 +309,6 @@ def 
test_finalize_with_trace(self): # Issue1733757 # Avoid a deadlock when sys.settrace steps into threading._shutdown - import subprocess p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, threading @@ -343,7 +343,6 @@ def test_join_nondaemon_on_shutdown(self): # Issue 1722344 # Raising SystemExit skipped threading._shutdown - import subprocess p = subprocess.Popen([sys.executable, "-c", """if 1: import threading from time import sleep @@ -434,7 +433,6 @@ sys.stdout.flush() \n""" + script - import subprocess p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE) rc = p.wait() data = p.stdout.read().replace('\r', '') @@ -506,6 +504,152 @@ """ self._run_and_join(script) + def assertScriptHasOutput(self, script, expected_output): + p = subprocess.Popen([sys.executable, "-c", script], + stdout=subprocess.PIPE) + rc = p.wait() + data = p.stdout.read().decode().replace('\r', '') + self.assertEqual(rc, 0, "Unexpected error") + self.assertEqual(data, expected_output) + + @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") + def test_4_joining_across_fork_in_worker_thread(self): + # There used to be a possible deadlock when forking from a child + # thread. See http://bugs.python.org/issue6643. + + # Skip platforms with known problems forking from a worker thread. + # See http://bugs.python.org/issue3863. + if sys.platform in ('freebsd4', 'freebsd5', 'freebsd6', 'os2emx'): + raise unittest.SkipTest('due to known OS bugs on ' + sys.platform) + + # The script takes the following steps: + # - The main thread in the parent process starts a new thread and then + # tries to join it. + # - The join operation acquires the Lock inside the thread's _block + # Condition. (See threading.py:Thread.join().) + # - We stub out the acquire method on the condition to force it to wait + # until the child thread forks. (See LOCK ACQUIRED HERE) + # - The child thread forks. 
(See LOCK HELD and WORKER THREAD FORKS + # HERE) + # - The main thread of the parent process enters Condition.wait(), + # which releases the lock on the child thread. + # - The child process returns. Without the necessary fix, when the + # main thread of the child process (which used to be the child thread + # in the parent process) attempts to exit, it will try to acquire the + # lock in the Thread._block Condition object and hang, because the + # lock was held across the fork. + + script = """if 1: + import os, time, threading + + finish_join = False + start_fork = False + + def worker(): + # Wait until this thread's lock is acquired before forking to + # create the deadlock. + global finish_join + while not start_fork: + time.sleep(0.01) + # LOCK HELD: Main thread holds lock across this call. + childpid = os.fork() + finish_join = True + if childpid != 0: + # Parent process just waits for child. + os.waitpid(childpid, 0) + # Child process should just return. + + w = threading.Thread(target=worker) + + # Stub out the private condition variable's lock acquire method. + # This acquires the lock and then waits until the child has forked + # before returning, which will release the lock soon after. If + # someone else tries to fix this test case by acquiring this lock + # before forking instead of resetting it, the test case will + # deadlock when it shouldn't. 
+ condition = w._block + orig_acquire = condition.acquire + call_count_lock = threading.Lock() + call_count = 0 + def my_acquire(): + global call_count + global start_fork + orig_acquire() # LOCK ACQUIRED HERE + start_fork = True + if call_count == 0: + while not finish_join: + time.sleep(0.01) # WORKER THREAD FORKS HERE + with call_count_lock: + call_count += 1 + condition.acquire = my_acquire + + w.start() + w.join() + print('end of main') + """ + self.assertScriptHasOutput(script, "end of main\n") + + @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") + def test_5_clear_waiter_locks_to_avoid_crash(self): + # Check that a spawned thread that forks doesn't segfault on certain + # platforms, namely OS X. This used to happen if there was a waiter + # lock in the thread's condition variable's waiters list. Even though + # we know the lock will be held across the fork, it is not safe to + # release locks held across forks on all platforms, so releasing the + # waiter lock caused a segfault on OS X. Furthermore, since locks on + # OS X are (as of this writing) implemented with a mutex + condition + # variable instead of a semaphore, while we know that the Python-level + # lock will be acquired, we can't know if the internal mutex will be + # acquired at the time of the fork. + + # Skip platforms with known problems forking from a worker thread. + # See http://bugs.python.org/issue3863. + if sys.platform in ('freebsd4', 'freebsd5', 'freebsd6', 'os2emx'): + raise unittest.SkipTest('due to known OS bugs on ' + sys.platform) + script = """if True: + import os, time, threading + + start_fork = False + + def worker(): + # Wait until the main thread has attempted to join this thread + # before continuing. + while not start_fork: + time.sleep(0.01) + childpid = os.fork() + if childpid != 0: + # Parent process just waits for child. 
+ (cpid, rc) = os.waitpid(childpid, 0) + assert cpid == childpid + assert rc == 0 + print('end of worker thread') + else: + # Child process should just return. + pass + + w = threading.Thread(target=worker) + + # Stub out the private condition variable's _release_save method. + # This releases the condition's lock and flips the global that + # causes the worker to fork. At this point, the problematic waiter + # lock has been acquired once by the waiter and has been put onto + # the waiters list. + condition = w._block + orig_release_save = condition._release_save + def my_release_save(): + global start_fork + orig_release_save() + # Waiter lock held here, condition lock released. + start_fork = True + condition._release_save = my_release_save + + w.start() + w.join() + print('end of main thread') + """ + output = "end of worker thread\nend of main thread\n" + self.assertScriptHasOutput(script, output) + class ThreadingExceptionTests(BaseTestCase): # A RuntimeError should be raised if Thread.start() is called @@ -551,6 +695,37 @@ class BoundedSemaphoreTests(lock_tests.BoundedSemaphoreTests): semtype = staticmethod(threading.BoundedSemaphore) + @unittest.skipUnless(sys.platform == 'darwin', 'test macosx problem') + def test_recursion_limit(self): + # Issue 9670 + # test that excessive recursion within a non-main thread causes + # an exception rather than crashing the interpreter on platforms + # like Mac OS X or FreeBSD which have small default stack sizes + # for threads + script = """if True: + import threading + + def recurse(): + return recurse() + + def outer(): + try: + recurse() + except RuntimeError: + pass + + w = threading.Thread(target=outer) + w.start() + w.join() + print('end of main thread') + """ + expected_output = "end of main thread\n" + p = subprocess.Popen([sys.executable, "-c", script], + stdout=subprocess.PIPE) + stdout, stderr = p.communicate() + data = stdout.decode().replace('\r', '') + self.assertEqual(p.returncode, 0, "Unexpected error") + 
self.assertEqual(data, expected_output) def test_main(): test.test_support.run_unittest(LockTests, RLockTests, EventTests, diff --git a/lib-python/modified-2.7/test/test_unicode.py b/lib-python/modified-2.7/test/test_unicode.py --- a/lib-python/modified-2.7/test/test_unicode.py +++ b/lib-python/modified-2.7/test/test_unicode.py @@ -442,6 +442,17 @@ return u'\u1234' self.assertEqual('%s' % Wrapper(), u'\u1234') + def test_startswith_endswith_errors(self): + for meth in (u'foo'.startswith, u'foo'.endswith): + with self.assertRaises(UnicodeDecodeError): + meth('\xff') + with self.assertRaises(TypeError) as cm: + meth(['f']) + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) + @test_support.run_with_locale('LC_ALL', 'de_DE', 'fr_FR') def test_format_float(self): # should not format with a comma, but always with C locale @@ -667,11 +678,17 @@ # see http://www.unicode.org/versions/Unicode5.2.0/ch03.pdf # (table 3-7) and http://www.rfc-editor.org/rfc/rfc3629.txt #for cb in map(chr, range(0xA0, 0xC0)): - #sys.__stdout__.write('\\xED\\x%02x\\x80\n' % ord(cb)) #self.assertRaises(UnicodeDecodeError, #('\xED'+cb+'\x80').decode, 'utf-8') #self.assertRaises(UnicodeDecodeError, #('\xED'+cb+'\xBF').decode, 'utf-8') + # but since they are valid on Python 2 add a test for that: + for cb, surrogate in zip(map(chr, range(0xA0, 0xC0)), + map(unichr, range(0xd800, 0xe000, 64))): + encoded = '\xED'+cb+'\x80' + self.assertEqual(encoded.decode('utf-8'), surrogate) + self.assertEqual(surrogate.encode('utf-8'), encoded) + for cb in map(chr, range(0x80, 0x90)): self.assertRaises(UnicodeDecodeError, ('\xF0'+cb+'\x80\x80').decode, 'utf-8') diff --git a/lib-python/modified-2.7/test/test_unicodedata.py b/lib-python/modified-2.7/test/test_unicodedata.py --- a/lib-python/modified-2.7/test/test_unicodedata.py +++ b/lib-python/modified-2.7/test/test_unicodedata.py @@ -188,9 +188,22 @@ def test_pr29(self): # 
http://www.unicode.org/review/pr-29.html
-        for text in (u"\u0b47\u0300\u0b3e", u"\u1100\u0300\u1161"):
+        # See issues #1054943 and #10254.
+        composed = (u"\u0b47\u0300\u0b3e", u"\u1100\u0300\u1161",
+                    u'Li\u030dt-s\u1e73\u0301',
+                    u'\u092e\u093e\u0930\u094d\u0915 \u091c\u093c' +
+                    u'\u0941\u0915\u0947\u0930\u092c\u0930\u094d\u0917',
+                    u'\u0915\u093f\u0930\u094d\u0917\u093f\u091c\u093c' +
+                    u'\u0938\u094d\u0924\u093e\u0928')
+        for text in composed:
             self.assertEqual(self.db.normalize('NFC', text), text)
 
+    def test_issue10254(self):
+        # Crash reported in #10254
+        a = u'C\u0338' * 20 + u'C\u0327'
+        b = u'C\u0338' * 20 + u'\xC7'
+        self.assertEqual(self.db.normalize('NFC', a), b)
+
     def test_east_asian_width(self):
         eaw = self.db.east_asian_width
         self.assertRaises(TypeError, eaw, 'a')
diff --git a/lib-python/modified-2.7/test/test_urllib2.py b/lib-python/modified-2.7/test/test_urllib2.py
--- a/lib-python/modified-2.7/test/test_urllib2.py
+++ b/lib-python/modified-2.7/test/test_urllib2.py
@@ -972,6 +972,28 @@
         self.assertEqual(count,
                          urllib2.HTTPRedirectHandler.max_redirections)
 
+    def test_invalid_redirect(self):
+        from_url = "http://example.com/a.html"
+        valid_schemes = ['http', 'https', 'ftp']
+        invalid_schemes = ['file', 'imap', 'ldap']
+        schemeless_url = "example.com/b.html"
+        h = urllib2.HTTPRedirectHandler()
+        o = h.parent = MockOpener()
+        req = Request(from_url)
+        req.timeout = socket._GLOBAL_DEFAULT_TIMEOUT
+
+        for scheme in invalid_schemes:
+            invalid_url = scheme + '://' + schemeless_url
+            self.assertRaises(urllib2.HTTPError, h.http_error_302,
+                              req, MockFile(), 302, "Security Loophole",
+                              MockHeaders({"location": invalid_url}))
+
+        for scheme in valid_schemes:
+            valid_url = scheme + '://' + schemeless_url
+            h.http_error_302(req, MockFile(), 302, "That's fine",
+                             MockHeaders({"location": valid_url}))
+            self.assertEqual(o.req.get_full_url(), valid_url)
+
     def test_cookie_redirect(self):
         # cookies shouldn't leak into redirected requests
         from cookielib import CookieJar
@@ -988,6 +1010,15 @@
         o.open("http://www.example.com/")
         self.assertTrue(not hh.req.has_header("Cookie"))
 
+    def test_redirect_fragment(self):
+        redirected_url = 'http://www.example.com/index.html#OK\r\n\r\n'
+        hh = MockHTTPHandler(302, 'Location: ' + redirected_url)
+        hdeh = urllib2.HTTPDefaultErrorHandler()
+        hrh = urllib2.HTTPRedirectHandler()
+        o = build_test_opener(hh, hdeh, hrh)
+        fp = o.open('http://www.example.com')
+        self.assertEqual(fp.geturl(), redirected_url.strip())
+
     def test_proxy(self):
         o = OpenerDirector()
         ph = urllib2.ProxyHandler(dict(http="proxy.example.com:3128"))
@@ -1273,12 +1304,16 @@
         req = Request("<URL:http://www.python.org>")
         self.assertEqual("www.python.org", req.get_host())
 
-    def test_urlwith_fragment(self):
+    def test_url_fragment(self):
         req = Request("http://www.python.org/?qs=query#fragment=true")
         self.assertEqual("/?qs=query", req.get_selector())
         req = Request("http://www.python.org/#fun=true")
         self.assertEqual("/", req.get_selector())
+        # Issue 11703: geturl() omits fragment in the original URL.
+        url = 'http://docs.python.org/library/urllib2.html#OK'
+        req = Request(url)
+        self.assertEqual(req.get_full_url(), url)
 
 def test_main(verbose=None):
     from test import test_urllib2
diff --git a/lib-python/modified-2.7/test/test_warnings.py b/lib-python/modified-2.7/test/test_warnings.py
--- a/lib-python/modified-2.7/test/test_warnings.py
+++ b/lib-python/modified-2.7/test/test_warnings.py
@@ -320,7 +320,7 @@
             sys.argv = argv
 
     def test_warn_explicit_type_errors(self):
-        # warn_explicit() shoud error out gracefully if it is given objects
+        # warn_explicit() should error out gracefully if it is given objects
         # of the wrong types.
         # lineno is expected to be an integer.
self.assertRaises(TypeError, self.module.warn_explicit, diff --git a/lib-python/modified-2.7/test/test_weakset.py b/lib-python/modified-2.7/test/test_weakset.py --- a/lib-python/modified-2.7/test/test_weakset.py +++ b/lib-python/modified-2.7/test/test_weakset.py @@ -63,7 +63,8 @@ def test_contains(self): for c in self.letters: self.assertEqual(c in self.s, c in self.d) - self.assertRaises(TypeError, self.s.__contains__, [[]]) + # 1 is not weakref'able, but that TypeError is caught by __contains__ + self.assertNotIn(1, self.s) self.assertIn(self.obj, self.fs) del self.obj test_support.gc_collect() diff --git a/lib-python/modified-2.7/test/test_zlib.py b/lib-python/modified-2.7/test/test_zlib.py --- a/lib-python/modified-2.7/test/test_zlib.py +++ b/lib-python/modified-2.7/test/test_zlib.py @@ -1,11 +1,17 @@ import unittest -from test import test_support +from test.test_support import TESTFN, run_unittest, import_module, unlink, requires import binascii import os import random -from test.test_support import precisionbigmemtest, _1G +from test.test_support import precisionbigmemtest, _1G, _4G +import sys -zlib = test_support.import_module('zlib') +try: + import mmap +except ImportError: + mmap = None + +zlib = import_module('zlib') class ChecksumTestCase(unittest.TestCase): @@ -298,6 +304,15 @@ self.assertRaises(ValueError, dco.decompress, "", -1) self.assertEqual('', dco.unconsumed_tail) + def test_clear_unconsumed_tail(self): + # Issue #12050: calling decompress() without providing max_length + # should clear the unconsumed_tail attribute. + cdata = "x\x9cKLJ\x06\x00\x02M\x01" # "abc" + dco = zlib.decompressobj() + ddata = dco.decompress(cdata, 1) + ddata += dco.decompress(dco.unconsumed_tail) + self.assertEqual(dco.unconsumed_tail, "") + def test_flushes(self): # Test flush() with the various options, using all the # different levels in order to provide more variations. 
@@ -540,7 +555,7 @@
 
 def test_main():
-    test_support.run_unittest(
+    run_unittest(
         ChecksumTestCase,
         ExceptionTestCase,
         CompressTestCase,
diff --git a/lib-python/modified-2.7/trace.py b/lib-python/modified-2.7/trace.py
--- a/lib-python/modified-2.7/trace.py
+++ b/lib-python/modified-2.7/trace.py
@@ -335,7 +335,7 @@
                                      lnotab, count)
 
             if summary and n_lines:
-                percent = int(100 * n_hits / n_lines)
+                percent = 100 * n_hits // n_lines
                 sums[modulename] = n_lines, percent, modulename, filename
 
         if summary and sums:
diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py
--- a/lib-python/modified-2.7/urllib2.py
+++ b/lib-python/modified-2.7/urllib2.py
@@ -190,7 +190,7 @@
                  origin_req_host=None, unverifiable=False):
         # unwrap('<URL:type://host/path>') --> 'type://host/path'
         self.__original = unwrap(url)
-        self.__original, fragment = splittag(self.__original)
+        self.__original, self.__fragment = splittag(self.__original)
         self.type = None
         # self.__r_type is what's left after doing the splittype
         self.host = None
@@ -236,7 +236,10 @@
         return self.data
 
     def get_full_url(self):
-        return self.__original
+        if self.__fragment:
+            return '%s#%s' % (self.__original, self.__fragment)
+        else:
+            return self.__original
 
     def get_type(self):
         if self.type is None:
@@ -299,8 +302,9 @@
     def __init__(self):
         client_version = "Python-urllib/%s" % __version__
         self.addheaders = [('User-agent', client_version)]
+        # self.handlers is retained only for backward compatibility
+        self.handlers = []
         # manage the individual handlers
-        self.handlers = []
         self.handle_open = {}
         self.handle_error = {}
         self.process_response = {}
@@ -350,8 +354,6 @@
                 added = True
 
         if added:
-            # the handlers must work in an specific order, the order
-            # is specified in a Handler attribute
             bisect.insort(self.handlers, handler)
             handler.add_parent(self)
 
@@ -579,6 +581,17 @@
 
         newurl = urlparse.urljoin(req.get_full_url(), newurl)
 
+        # For security reasons we do not allow redirects to protocols
+        # other than HTTP, HTTPS or FTP.
+        newurl_lower = newurl.lower()
+        if not (newurl_lower.startswith('http://') or
+                newurl_lower.startswith('https://') or
+                newurl_lower.startswith('ftp://')):
+            raise HTTPError(newurl, code,
+                            msg + " - Redirection to url '%s' is not allowed" %
+                            newurl,
+                            headers, fp)
+
         # XXX Probably want to forget about the state of the current
         # request, although that might interact poorly with other
         # handlers that also use handler-specific request attributes

From noreply at buildbot.pypy.org  Sun Jan 22 10:17:14 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sun, 22 Jan 2012 10:17:14 +0100 (CET)
Subject: [pypy-commit] pypy merge-2.7.2: List a test that was recently added, otherwise the test suite won't run :(
Message-ID: <20120122091714.EEFB8821FA@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: merge-2.7.2
Changeset: r51624:ff154856cb9b
Date: 2012-01-22 10:16 +0100
http://bitbucket.org/pypy/pypy/changeset/ff154856cb9b/

Log:	List a test that was recently added, otherwise the test suite won't
	run :(

diff --git a/lib-python/conftest.py b/lib-python/conftest.py
--- a/lib-python/conftest.py
+++ b/lib-python/conftest.py
@@ -314,6 +314,7 @@
     RegrTest('test_mmap.py'),
     RegrTest('test_module.py', core=True),
     RegrTest('test_modulefinder.py'),
+    RegrTest('test_msilib.py', skip=only_win32),
     RegrTest('test_multibytecodec.py', usemodules='_multibytecodec'),
     RegrTest('test_multibytecodec_support.py', skip="not a test"),
     RegrTest('test_multifile.py'),

From noreply at buildbot.pypy.org  Sun Jan 22 10:33:50 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Sun, 22 Jan 2012 10:33:50 +0100 (CET)
Subject: [pypy-commit] pypy default: merge heads
Message-ID: <20120122093350.D8630821FA@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: 
Changeset: r51625:01999b68d9cc
Date: 2012-01-22 10:33 +0100
http://bitbucket.org/pypy/pypy/changeset/01999b68d9cc/

Log:	merge heads

diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py
--- a/lib_pypy/datetime.py
+++
b/lib_pypy/datetime.py @@ -13,7 +13,7 @@ Sources for time zone and DST data: http://www.twinsun.com/tz/tz-link.htm This was originally copied from the sandbox of the CPython CVS repository. -Thanks to Tim Peters for suggesting using it. +Thanks to Tim Peters for suggesting using it. """ import time as _time @@ -271,6 +271,8 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): + if not isinstance(year, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -280,6 +282,8 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): + if not isinstance(hour, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: @@ -543,61 +547,76 @@ self = object.__new__(cls) - self.__days = d - self.__seconds = s - self.__microseconds = us + self._days = d + self._seconds = s + self._microseconds = us if abs(d) > 999999999: raise OverflowError("timedelta # of days is too large: %d" % d) return self def __repr__(self): - if self.__microseconds: + if self._microseconds: return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__, - self.__days, - self.__seconds, - self.__microseconds) - if self.__seconds: + self._days, + self._seconds, + self._microseconds) + if self._seconds: return "%s(%d, %d)" % ('datetime.' + self.__class__.__name__, - self.__days, - self.__seconds) - return "%s(%d)" % ('datetime.' + self.__class__.__name__, self.__days) + self._days, + self._seconds) + return "%s(%d)" % ('datetime.' 
+ self.__class__.__name__, self._days) def __str__(self): - mm, ss = divmod(self.__seconds, 60) + mm, ss = divmod(self._seconds, 60) hh, mm = divmod(mm, 60) s = "%d:%02d:%02d" % (hh, mm, ss) - if self.__days: + if self._days: def plural(n): return n, abs(n) != 1 and "s" or "" - s = ("%d day%s, " % plural(self.__days)) + s - if self.__microseconds: - s = s + ".%06d" % self.__microseconds + s = ("%d day%s, " % plural(self._days)) + s + if self._microseconds: + s = s + ".%06d" % self._microseconds return s - days = property(lambda self: self.__days, doc="days") - seconds = property(lambda self: self.__seconds, doc="seconds") - microseconds = property(lambda self: self.__microseconds, - doc="microseconds") - def total_seconds(self): return ((self.days * 86400 + self.seconds) * 10**6 + self.microseconds) / 1e6 + # Read-only field accessors + @property + def days(self): + """days""" + return self._days + + @property + def seconds(self): + """seconds""" + return self._seconds + + @property + def microseconds(self): + """microseconds""" + return self._microseconds + def __add__(self, other): if isinstance(other, timedelta): # for CPython compatibility, we cannot use # our __class__ here, but need a real timedelta - return timedelta(self.__days + other.__days, - self.__seconds + other.__seconds, - self.__microseconds + other.__microseconds) + return timedelta(self._days + other._days, + self._seconds + other._seconds, + self._microseconds + other._microseconds) return NotImplemented __radd__ = __add__ def __sub__(self, other): if isinstance(other, timedelta): - return self + -other + # for CPython compatibility, we cannot use + # our __class__ here, but need a real timedelta + return timedelta(self._days - other._days, + self._seconds - other._seconds, + self._microseconds - other._microseconds) return NotImplemented def __rsub__(self, other): @@ -606,17 +625,17 @@ return NotImplemented def __neg__(self): - # for CPython compatibility, we cannot use - # our __class__ here, 
but need a real timedelta - return timedelta(-self.__days, - -self.__seconds, - -self.__microseconds) + # for CPython compatibility, we cannot use + # our __class__ here, but need a real timedelta + return timedelta(-self._days, + -self._seconds, + -self._microseconds) def __pos__(self): return self def __abs__(self): - if self.__days < 0: + if self._days < 0: return -self else: return self @@ -625,81 +644,81 @@ if isinstance(other, (int, long)): # for CPython compatibility, we cannot use # our __class__ here, but need a real timedelta - return timedelta(self.__days * other, - self.__seconds * other, - self.__microseconds * other) + return timedelta(self._days * other, + self._seconds * other, + self._microseconds * other) return NotImplemented __rmul__ = __mul__ def __div__(self, other): if isinstance(other, (int, long)): - usec = ((self.__days * (24*3600L) + self.__seconds) * 1000000 + - self.__microseconds) + usec = ((self._days * (24*3600L) + self._seconds) * 1000000 + + self._microseconds) return timedelta(0, 0, usec // other) return NotImplemented __floordiv__ = __div__ - # Comparisons. + # Comparisons of timedelta objects with other. 
def __eq__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 else: return False def __ne__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 else: return True def __le__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, timedelta) - return cmp(self.__getstate(), other.__getstate()) + return cmp(self._getstate(), other._getstate()) def __hash__(self): - return hash(self.__getstate()) + return hash(self._getstate()) def __nonzero__(self): - return (self.__days != 0 or - self.__seconds != 0 or - self.__microseconds != 0) + return (self._days != 0 or + self._seconds != 0 or + self._microseconds != 0) # Pickle support. 
__safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - return (self.__days, self.__seconds, self.__microseconds) + def _getstate(self): + return (self._days, self._seconds, self._microseconds) def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) timedelta.min = timedelta(-999999999) timedelta.max = timedelta(days=999999999, hours=23, minutes=59, seconds=59, @@ -749,25 +768,26 @@ return self _check_date_fields(year, month, day) self = object.__new__(cls) - self.__year = year - self.__month = month - self.__day = day + self._year = year + self._month = month + self._day = day return self # Additional constructors + @classmethod def fromtimestamp(cls, t): "Construct a date from a POSIX timestamp (like time.time())." y, m, d, hh, mm, ss, weekday, jday, dst = _time.localtime(t) return cls(y, m, d) - fromtimestamp = classmethod(fromtimestamp) + @classmethod def today(cls): "Construct a date from time.time()." t = _time.time() return cls.fromtimestamp(t) - today = classmethod(today) + @classmethod def fromordinal(cls, n): """Contruct a date from a proleptic Gregorian ordinal. @@ -776,16 +796,24 @@ """ y, m, d = _ord2ymd(n) return cls(y, m, d) - fromordinal = classmethod(fromordinal) # Conversions to string def __repr__(self): - "Convert to formal string, for repr()." + """Convert to formal string, for repr(). + + >>> dt = datetime(2010, 1, 1) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0)' + + >>> dt = datetime(2010, 1, 1, tzinfo=timezone.utc) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)' + """ return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__, - self.__year, - self.__month, - self.__day) + self._year, + self._month, + self._day) # XXX These shouldn't depend on time.localtime(), because that # clips the usable dates to [1970 .. 2038). 
At least ctime() is # easily done without using strftime() -- that's better too because @@ -793,12 +821,20 @@ def ctime(self): "Format a la ctime()." - return tmxxx(self.__year, self.__month, self.__day).ctime() + return tmxxx(self._year, self._month, self._day).ctime() def strftime(self, fmt): "Format using strftime()." return _wrap_strftime(self, fmt, self.timetuple()) + def __format__(self, fmt): + if not isinstance(fmt, (str, unicode)): + raise ValueError("__format__ excepts str or unicode, not %s" % + fmt.__class__.__name__) + if len(fmt) != 0: + return self.strftime(fmt) + return str(self) + def isoformat(self): """Return the date formatted according to ISO. @@ -808,29 +844,31 @@ - http://www.w3.org/TR/NOTE-datetime - http://www.cl.cam.ac.uk/~mgk25/iso-time.html """ - return "%04d-%02d-%02d" % (self.__year, self.__month, self.__day) + return "%04d-%02d-%02d" % (self._year, self._month, self._day) __str__ = isoformat - def __format__(self, format): - if not isinstance(format, (str, unicode)): - raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) + # Read-only field accessors + @property + def year(self): + """year (1-9999)""" + return self._year - # Read-only field accessors - year = property(lambda self: self.__year, - doc="year (%d-%d)" % (MINYEAR, MAXYEAR)) - month = property(lambda self: self.__month, doc="month (1-12)") - day = property(lambda self: self.__day, doc="day (1-31)") + @property + def month(self): + """month (1-12)""" + return self._month + + @property + def day(self): + """day (1-31)""" + return self._day # Standard conversions, __cmp__, __hash__ (and helpers) def timetuple(self): "Return local time tuple compatible with time.localtime()." 
- return _build_struct_time(self.__year, self.__month, self.__day, + return _build_struct_time(self._year, self._month, self._day, 0, 0, 0, -1) def toordinal(self): @@ -839,24 +877,24 @@ January 1 of year 1 is day 1. Only the year, month and day values contribute to the result. """ - return _ymd2ord(self.__year, self.__month, self.__day) + return _ymd2ord(self._year, self._month, self._day) def replace(self, year=None, month=None, day=None): """Return a new date with new values for the specified fields.""" if year is None: - year = self.__year + year = self._year if month is None: - month = self.__month + month = self._month if day is None: - day = self.__day + day = self._day _check_date_fields(year, month, day) return date(year, month, day) - # Comparisons. + # Comparisons of date objects with other. def __eq__(self, other): if isinstance(other, date): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -864,7 +902,7 @@ def __ne__(self, other): if isinstance(other, date): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -872,7 +910,7 @@ def __le__(self, other): if isinstance(other, date): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -880,7 +918,7 @@ def __lt__(self, other): if isinstance(other, date): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -888,7 +926,7 @@ def __ge__(self, other): if isinstance(other, date): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -896,21 +934,21 @@ def __gt__(self, other): if isinstance(other, date): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 elif hasattr(other, "timetuple"): return NotImplemented else: _cmperror(self, 
other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, date) - y, m, d = self.__year, self.__month, self.__day - y2, m2, d2 = other.__year, other.__month, other.__day + y, m, d = self._year, self._month, self._day + y2, m2, d2 = other._year, other._month, other._day return cmp((y, m, d), (y2, m2, d2)) def __hash__(self): "Hash." - return hash(self.__getstate()) + return hash(self._getstate()) # Computations @@ -922,9 +960,9 @@ def __add__(self, other): "Add a date to a timedelta." if isinstance(other, timedelta): - t = tmxxx(self.__year, - self.__month, - self.__day + other.days) + t = tmxxx(self._year, + self._month, + self._day + other.days) self._checkOverflow(t.year) result = date(t.year, t.month, t.day) return result @@ -966,9 +1004,9 @@ ISO calendar algorithm taken from http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm """ - year = self.__year + year = self._year week1monday = _isoweek1monday(year) - today = _ymd2ord(self.__year, self.__month, self.__day) + today = _ymd2ord(self._year, self._month, self._day) # Internally, week and day have origin 0 week, day = divmod(today - week1monday, 7) if week < 0: @@ -985,18 +1023,18 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - yhi, ylo = divmod(self.__year, 256) - return ("%c%c%c%c" % (yhi, ylo, self.__month, self.__day), ) + def _getstate(self): + yhi, ylo = divmod(self._year, 256) + return ("%c%c%c%c" % (yhi, ylo, self._month, self._day), ) def __setstate(self, string): if len(string) != 4 or not (1 <= ord(string[2]) <= 12): raise TypeError("not enough arguments") - yhi, ylo, self.__month, self.__day = map(ord, string) - self.__year = yhi * 256 + ylo + yhi, ylo, self._month, self._day = map(ord, string) + self._year = yhi * 256 + ylo def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) _date_class = date # so functions w/ args named "date" can get at the class @@ -1118,62 +1156,80 @@ return self 
_check_tzinfo_arg(tzinfo) _check_time_fields(hour, minute, second, microsecond) - self.__hour = hour - self.__minute = minute - self.__second = second - self.__microsecond = microsecond + self._hour = hour + self._minute = minute + self._second = second + self._microsecond = microsecond self._tzinfo = tzinfo return self # Read-only field accessors - hour = property(lambda self: self.__hour, doc="hour (0-23)") - minute = property(lambda self: self.__minute, doc="minute (0-59)") - second = property(lambda self: self.__second, doc="second (0-59)") - microsecond = property(lambda self: self.__microsecond, - doc="microsecond (0-999999)") - tzinfo = property(lambda self: self._tzinfo, doc="timezone info object") + @property + def hour(self): + """hour (0-23)""" + return self._hour + + @property + def minute(self): + """minute (0-59)""" + return self._minute + + @property + def second(self): + """second (0-59)""" + return self._second + + @property + def microsecond(self): + """microsecond (0-999999)""" + return self._microsecond + + @property + def tzinfo(self): + """timezone info object""" + return self._tzinfo # Standard conversions, __hash__ (and helpers) - # Comparisons. + # Comparisons of time objects with other. 
def __eq__(self, other): if isinstance(other, time): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 else: return False def __ne__(self, other): if isinstance(other, time): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 else: return True def __le__(self, other): if isinstance(other, time): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, time): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, time): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, time): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, time) mytz = self._tzinfo ottz = other._tzinfo @@ -1187,23 +1243,23 @@ base_compare = myoff == otoff if base_compare: - return cmp((self.__hour, self.__minute, self.__second, - self.__microsecond), - (other.__hour, other.__minute, other.__second, - other.__microsecond)) + return cmp((self._hour, self._minute, self._second, + self._microsecond), + (other._hour, other._minute, other._second, + other._microsecond)) if myoff is None or otoff is None: # XXX Buggy in 2.2.2. 
raise TypeError("cannot compare naive and aware times") - myhhmm = self.__hour * 60 + self.__minute - myoff - othhmm = other.__hour * 60 + other.__minute - otoff - return cmp((myhhmm, self.__second, self.__microsecond), - (othhmm, other.__second, other.__microsecond)) + myhhmm = self._hour * 60 + self._minute - myoff + othhmm = other._hour * 60 + other._minute - otoff + return cmp((myhhmm, self._second, self._microsecond), + (othhmm, other._second, other._microsecond)) def __hash__(self): """Hash.""" tzoff = self._utcoffset() if not tzoff: # zero or None - return hash(self.__getstate()[0]) + return hash(self._getstate()[0]) h, m = divmod(self.hour * 60 + self.minute - tzoff, 60) if 0 <= h < 24: return hash(time(h, m, self.second, self.microsecond)) @@ -1227,14 +1283,14 @@ def __repr__(self): """Convert to formal string, for repr().""" - if self.__microsecond != 0: - s = ", %d, %d" % (self.__second, self.__microsecond) - elif self.__second != 0: - s = ", %d" % self.__second + if self._microsecond != 0: + s = ", %d, %d" % (self._second, self._microsecond) + elif self._second != 0: + s = ", %d" % self._second else: s = "" s= "%s(%d, %d%s)" % ('datetime.' + self.__class__.__name__, - self.__hour, self.__minute, s) + self._hour, self._minute, s) if self._tzinfo is not None: assert s[-1:] == ")" s = s[:-1] + ", tzinfo=%r" % self._tzinfo + ")" @@ -1246,8 +1302,8 @@ This is 'HH:MM:SS.mmmmmm+zz:zz', or 'HH:MM:SS+zz:zz' if self.microsecond == 0. """ - s = _format_time(self.__hour, self.__minute, self.__second, - self.__microsecond) + s = _format_time(self._hour, self._minute, self._second, + self._microsecond) tz = self._tzstr() if tz: s += tz @@ -1255,14 +1311,6 @@ __str__ = isoformat - def __format__(self, format): - if not isinstance(format, (str, unicode)): - raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) - def strftime(self, fmt): """Format using strftime(). 
The date part of the timestamp passed to underlying strftime should not be used. @@ -1270,10 +1318,18 @@ # The year must be >= 1900 else Python's strftime implementation # can raise a bogus exception. timetuple = (1900, 1, 1, - self.__hour, self.__minute, self.__second, + self._hour, self._minute, self._second, 0, 1, -1) return _wrap_strftime(self, fmt, timetuple) + def __format__(self, fmt): + if not isinstance(fmt, (str, unicode)): + raise ValueError("__format__ excepts str or unicode, not %s" % + fmt.__class__.__name__) + if len(fmt) != 0: + return self.strftime(fmt) + return str(self) + # Timezone functions def utcoffset(self): @@ -1350,10 +1406,10 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - us2, us3 = divmod(self.__microsecond, 256) + def _getstate(self): + us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) - basestate = ("%c" * 6) % (self.__hour, self.__minute, self.__second, + basestate = ("%c" * 6) % (self._hour, self._minute, self._second, us1, us2, us3) if self._tzinfo is None: return (basestate,) @@ -1363,13 +1419,13 @@ def __setstate(self, string, tzinfo): if len(string) != 6 or ord(string[0]) >= 24: raise TypeError("an integer is required") - self.__hour, self.__minute, self.__second, us1, us2, us3 = \ + self._hour, self._minute, self._second, us1, us2, us3 = \ map(ord, string) - self.__microsecond = (((us1 << 8) | us2) << 8) | us3 + self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce__(self): - return (time, self.__getstate()) + return (time, self._getstate()) _time_class = time # so functions w/ args named "time" can get at the class @@ -1378,9 +1434,11 @@ time.resolution = timedelta(microseconds=1) class datetime(date): + """datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]]) - # XXX needs docstrings - # See http://www.zope.org/Members/fdrake/DateTimeWiki/TimeZoneInfo + The year, month and day arguments are required. 
tzinfo may be None, or an + instance of a tzinfo subclass. The remaining arguments may be ints or longs. + """ def __new__(cls, year, month=None, day=None, hour=0, minute=0, second=0, microsecond=0, tzinfo=None): @@ -1393,24 +1451,43 @@ _check_time_fields(hour, minute, second, microsecond) self = date.__new__(cls, year, month, day) # XXX This duplicates __year, __month, __day for convenience :-( - self.__year = year - self.__month = month - self.__day = day - self.__hour = hour - self.__minute = minute - self.__second = second - self.__microsecond = microsecond + self._year = year + self._month = month + self._day = day + self._hour = hour + self._minute = minute + self._second = second + self._microsecond = microsecond self._tzinfo = tzinfo return self # Read-only field accessors - hour = property(lambda self: self.__hour, doc="hour (0-23)") - minute = property(lambda self: self.__minute, doc="minute (0-59)") - second = property(lambda self: self.__second, doc="second (0-59)") - microsecond = property(lambda self: self.__microsecond, - doc="microsecond (0-999999)") - tzinfo = property(lambda self: self._tzinfo, doc="timezone info object") + @property + def hour(self): + """hour (0-23)""" + return self._hour + @property + def minute(self): + """minute (0-59)""" + return self._minute + + @property + def second(self): + """second (0-59)""" + return self._second + + @property + def microsecond(self): + """microsecond (0-999999)""" + return self._microsecond + + @property + def tzinfo(self): + """timezone info object""" + return self._tzinfo + + @classmethod def fromtimestamp(cls, t, tz=None): """Construct a datetime from a POSIX timestamp (like time.time()). @@ -1438,7 +1515,6 @@ if tz is not None: result = tz.fromutc(result) return result - fromtimestamp = classmethod(fromtimestamp) @classmethod def utcfromtimestamp(cls, t): @@ -1462,18 +1538,19 @@ # XXX uses gettimeofday on platforms that have it, but that isn't # XXX available from Python. 
So now() may return different results # XXX across the implementations. + @classmethod def now(cls, tz=None): "Construct a datetime from time.time() and optional time zone info." t = _time.time() return cls.fromtimestamp(t, tz) - now = classmethod(now) + @classmethod def utcnow(cls): "Construct a UTC datetime from time.time()." t = _time.time() return cls.utcfromtimestamp(t) - utcnow = classmethod(utcnow) + @classmethod def combine(cls, date, time): "Construct a datetime from a given date and a given time." if not isinstance(date, _date_class): @@ -1483,7 +1560,6 @@ return cls(date.year, date.month, date.day, time.hour, time.minute, time.second, time.microsecond, time.tzinfo) - combine = classmethod(combine) def timetuple(self): "Return local time tuple compatible with time.localtime()." @@ -1509,7 +1585,7 @@ def date(self): "Return the date part." - return date(self.__year, self.__month, self.__day) + return date(self._year, self._month, self._day) def time(self): "Return the time part, with tzinfo None." @@ -1569,8 +1645,8 @@ def ctime(self): "Format a la ctime()." - t = tmxxx(self.__year, self.__month, self.__day, self.__hour, - self.__minute, self.__second) + t = tmxxx(self._year, self._month, self._day, self._hour, + self._minute, self._second) return t.ctime() def isoformat(self, sep='T'): @@ -1585,10 +1661,10 @@ Optional argument sep specifies the separator between date and time, default 'T'. """ - s = ("%04d-%02d-%02d%c" % (self.__year, self.__month, self.__day, + s = ("%04d-%02d-%02d%c" % (self._year, self._month, self._day, sep) + - _format_time(self.__hour, self.__minute, self.__second, - self.__microsecond)) + _format_time(self._hour, self._minute, self._second, + self._microsecond)) off = self._utcoffset() if off is not None: if off < 0: @@ -1601,13 +1677,13 @@ return s def __repr__(self): - "Convert to formal string, for repr()." 
- L = [self.__year, self.__month, self.__day, # These are never zero - self.__hour, self.__minute, self.__second, self.__microsecond] + """Convert to formal string, for repr().""" + L = [self._year, self._month, self._day, # These are never zero + self._hour, self._minute, self._second, self._microsecond] if L[-1] == 0: del L[-1] if L[-1] == 0: - del L[-1] + del L[-1] s = ", ".join(map(str, L)) s = "%s(%s)" % ('datetime.' + self.__class__.__name__, s) if self._tzinfo is not None: @@ -1680,7 +1756,7 @@ def __eq__(self, other): if isinstance(other, datetime): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1688,7 +1764,7 @@ def __ne__(self, other): if isinstance(other, datetime): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1696,7 +1772,7 @@ def __le__(self, other): if isinstance(other, datetime): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1704,7 +1780,7 @@ def __lt__(self, other): if isinstance(other, datetime): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1712,7 +1788,7 @@ def __ge__(self, other): if isinstance(other, datetime): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1720,13 +1796,13 @@ def __gt__(self, other): if isinstance(other, datetime): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, 
datetime) mytz = self._tzinfo ottz = other._tzinfo @@ -1742,12 +1818,12 @@ base_compare = myoff == otoff if base_compare: - return cmp((self.__year, self.__month, self.__day, - self.__hour, self.__minute, self.__second, - self.__microsecond), - (other.__year, other.__month, other.__day, - other.__hour, other.__minute, other.__second, - other.__microsecond)) + return cmp((self._year, self._month, self._day, + self._hour, self._minute, self._second, + self._microsecond), + (other._year, other._month, other._day, + other._hour, other._minute, other._second, + other._microsecond)) if myoff is None or otoff is None: # XXX Buggy in 2.2.2. raise TypeError("cannot compare naive and aware datetimes") @@ -1761,13 +1837,13 @@ "Add a datetime and a timedelta." if not isinstance(other, timedelta): return NotImplemented - t = tmxxx(self.__year, - self.__month, - self.__day + other.days, - self.__hour, - self.__minute, - self.__second + other.seconds, - self.__microsecond + other.microseconds) + t = tmxxx(self._year, + self._month, + self._day + other.days, + self._hour, + self._minute, + self._second + other.seconds, + self._microsecond + other.microseconds) self._checkOverflow(t.year) result = datetime(t.year, t.month, t.day, t.hour, t.minute, t.second, @@ -1785,11 +1861,11 @@ days1 = self.toordinal() days2 = other.toordinal() - secs1 = self.__second + self.__minute * 60 + self.__hour * 3600 - secs2 = other.__second + other.__minute * 60 + other.__hour * 3600 + secs1 = self._second + self._minute * 60 + self._hour * 3600 + secs2 = other._second + other._minute * 60 + other._hour * 3600 base = timedelta(days1 - days2, secs1 - secs2, - self.__microsecond - other.__microsecond) + self._microsecond - other._microsecond) if self._tzinfo is other._tzinfo: return base myoff = self._utcoffset() @@ -1797,13 +1873,13 @@ if myoff == otoff: return base if myoff is None or otoff is None: - raise TypeError, "cannot mix naive and timezone-aware time" + raise TypeError("cannot mix naive and 
timezone-aware time") return base + timedelta(minutes = otoff-myoff) def __hash__(self): tzoff = self._utcoffset() if tzoff is None: - return hash(self.__getstate()[0]) + return hash(self._getstate()[0]) days = _ymd2ord(self.year, self.month, self.day) seconds = self.hour * 3600 + (self.minute - tzoff) * 60 + self.second return hash(timedelta(days, seconds, self.microsecond)) @@ -1812,12 +1888,12 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - yhi, ylo = divmod(self.__year, 256) - us2, us3 = divmod(self.__microsecond, 256) + def _getstate(self): + yhi, ylo = divmod(self._year, 256) + us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) - basestate = ("%c" * 10) % (yhi, ylo, self.__month, self.__day, - self.__hour, self.__minute, self.__second, + basestate = ("%c" * 10) % (yhi, ylo, self._month, self._day, + self._hour, self._minute, self._second, us1, us2, us3) if self._tzinfo is None: return (basestate,) @@ -1825,14 +1901,14 @@ return (basestate, self._tzinfo) def __setstate(self, string, tzinfo): - (yhi, ylo, self.__month, self.__day, self.__hour, - self.__minute, self.__second, us1, us2, us3) = map(ord, string) - self.__year = yhi * 256 + ylo - self.__microsecond = (((us1 << 8) | us2) << 8) | us3 + (yhi, ylo, self._month, self._day, self._hour, + self._minute, self._second, us1, us2, us3) = map(ord, string) + self._year = yhi * 256 + ylo + self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) datetime.min = datetime(1, 1, 1) @@ -2014,7 +2090,7 @@ Because we know z.d said z was in daylight time (else [5] would have held and we would have stopped then), and we know z.d != z'.d (else [8] would have held -and we we have stopped then), and there are only 2 possible values dst() can +and we have stopped then), and there are only 2 possible values dst() can return in Eastern, it follows that 
z'.d must be 0 (which it is in the example, but the reasoning doesn't depend on the example -- it depends on there being two possible dst() outcomes, one zero and the other non-zero). Therefore From noreply at buildbot.pypy.org Sun Jan 22 10:53:05 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 22 Jan 2012 10:53:05 +0100 (CET) Subject: [pypy-commit] pypy stm: Add an argument to the callback invoked by stm_perform_transaction: Message-ID: <20120122095305.5D298821FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51626:b8a1df61795a Date: 2012-01-22 10:51 +0100 http://bitbucket.org/pypy/pypy/changeset/b8a1df61795a/ Log: Add an argument to the callback invoked by stm_perform_transaction: a retry counter starting at 0. diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -84,11 +84,18 @@ self.w_callback = w_callback self.args = args + def register(self): + id = threadintf.thread_id() + state.pending_lists[id].append(self) + def run(self): rstm.perform_transaction(Pending._run_in_transaction, Pending, self) @staticmethod - def _run_in_transaction(pending): + def _run_in_transaction(pending, retry_counter): + if retry_counter > 0: + self.register() # retrying: will be done later, try others first + return if state.got_exception is not None: return # return early if there is already a 'got_exception' try: @@ -99,8 +106,7 @@ def add(space, w_callback, __args__): - id = threadintf.thread_id() - state.pending_lists[id].append(Pending(w_callback, __args__)) + Pending(w_callback, __args__).register() def add_list(new_pending_list): diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -10,13 +10,13 @@ @specialize.memo() def _get_stm_callback(func, argcls): - def _stm_callback(llarg): + def _stm_callback(llarg, retry_counter): if
we_are_translated(): llarg = rffi.cast(rclass.OBJECTPTR, llarg) arg = cast_base_ptr_to_instance(argcls, llarg) else: arg = lltype.TLS.stm_callback_arg - res = func(arg) + res = func(arg, retry_counter) assert res is None return lltype.nullptr(rffi.VOIDP.TO) return _stm_callback diff --git a/pypy/rlib/test/test_rstm.py b/pypy/rlib/test/test_rstm.py --- a/pypy/rlib/test/test_rstm.py +++ b/pypy/rlib/test/test_rstm.py @@ -7,7 +7,7 @@ class Arg(object): _alloc_nonmovable_ = True -def setx(arg): +def setx(arg, retry_counter): debug_print(arg.x) assert rstm.debug_get_state() == 1 if arg.x == 303: diff --git a/pypy/translator/stm/_rffi_stm.py b/pypy/translator/stm/_rffi_stm.py --- a/pypy/translator/stm/_rffi_stm.py +++ b/pypy/translator/stm/_rffi_stm.py @@ -32,7 +32,7 @@ stm_write_word = llexternal('stm_write_word', [SignedP, lltype.Signed], lltype.Void) -CALLBACK = lltype.Ptr(lltype.FuncType([rffi.VOIDP], rffi.VOIDP)) +CALLBACK = lltype.Ptr(lltype.FuncType([rffi.VOIDP, lltype.Signed], rffi.VOIDP)) stm_perform_transaction = llexternal('stm_perform_transaction', [CALLBACK, rffi.VOIDP], rffi.VOIDP) diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -622,16 +622,20 @@ return d->end_time; } -void* stm_perform_transaction(void*(*callback)(void*), void *arg) +void* stm_perform_transaction(void*(*callback)(void*, long), void *arg) { void *result; jmp_buf _jmpbuf; + volatile long v_counter = 0; + long counter; /* you need to call descriptor_init() before calling stm_perform_transaction() */ assert(thread_descriptor != NULL_TX); setjmp(_jmpbuf); begin_transaction(&_jmpbuf); - result = callback(arg); + counter = v_counter; + v_counter = counter + 1; + result = callback(arg, counter); commit_transaction(); return result; } diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ 
b/pypy/translator/stm/src_stm/et.h @@ -22,7 +22,7 @@ void stm_descriptor_init(void); void stm_descriptor_done(void); -void* stm_perform_transaction(void*(*)(void*), void*); +void* stm_perform_transaction(void*(*)(void*, long), void*); long stm_read_word(long* addr); void stm_write_word(long* addr, long val); void stm_try_inevitable(STM_CCHARP1(why)); diff --git a/pypy/translator/stm/test/targetdemo.py b/pypy/translator/stm/test/targetdemo.py --- a/pypy/translator/stm/test/targetdemo.py +++ b/pypy/translator/stm/test/targetdemo.py @@ -20,7 +20,7 @@ _alloc_nonmovable_ = True -def add_at_end_of_chained_list(arg): +def add_at_end_of_chained_list(arg, retry_counter): node = arg.anchor value = arg.value x = Node(value) @@ -52,7 +52,7 @@ print "check ok!" -def increment_done(arg): +def increment_done(arg, retry_counter): print "thread done." glob.done += 1 diff --git a/pypy/translator/stm/test/test_funcgen.py b/pypy/translator/stm/test/test_funcgen.py --- a/pypy/translator/stm/test/test_funcgen.py +++ b/pypy/translator/stm/test/test_funcgen.py @@ -93,7 +93,7 @@ return a a_prebuilt = make_a_1() -def _play_with_getfield(dummy_arg): +def _play_with_getfield(dummy_arg, retry_counter): a = a_prebuilt assert a.x == -611 assert a.c1 == '/' @@ -106,7 +106,7 @@ assert float(a.sb) == float(rs1b) return NULL -def _play_with_setfields(dummy_arg): +def _play_with_setfields(dummy_arg, retry_counter): a = a_prebuilt # a.x = 12871981 @@ -125,10 +125,10 @@ a.sb = rs2b # read the values which have not been commited yet, but are local to the # transaction - _check_values_of_fields(dummy_arg) + _check_values_of_fields(dummy_arg, retry_counter) return NULL -def _check_values_of_fields(dummy_arg): +def _check_values_of_fields(dummy_arg, retry_counter): a = a_prebuilt assert a.x == 12871981 assert a.c1 == '(' @@ -162,14 +162,14 @@ array[i] = rffi.cast(lltype.typeOf(array).TO.OF, newvalues[i]) change._annspecialcase_ = 'specialize:ll' -def _play_with_getarrayitem(dummy_arg): +def 
_play_with_getarrayitem(dummy_arg, retry_counter): check(prebuilt_array_signed, [1, 10, -1, -10, 42]) check(prebuilt_array_char, [chr(1), chr(10), chr(255), chr(246), chr(42)]) return NULL -def _play_with_setarrayitem_1(dummy_arg): +def _play_with_setarrayitem_1(dummy_arg, retry_counter): change(prebuilt_array_signed, [500000, -10000000, 3]) check(prebuilt_array_signed, [500000, -10000000, 3, -10, 42]) prebuilt_array_char[0] = 'A' @@ -180,7 +180,7 @@ check(prebuilt_array_char, ['A', chr(10), chr(255), 'B', 'C']) return NULL -def _play_with_setarrayitem_2(dummy_arg): +def _play_with_setarrayitem_2(dummy_arg, retry_counter): check(prebuilt_array_char, ['A', chr(10), chr(255), 'B', 'C']) prebuilt_array_char[1] = 'D' check(prebuilt_array_char, ['A', 'D', chr(255), 'B', 'C']) @@ -188,7 +188,7 @@ check(prebuilt_array_char, ['A', 'D', 'E', 'B', 'C']) return NULL -def _play_with_setarrayitem_3(dummy_arg): +def _play_with_setarrayitem_3(dummy_arg, retry_counter): check(prebuilt_array_char, ['A', 'D', 'E', 'B', 'C']) return NULL @@ -222,14 +222,14 @@ array[i].y = rffi.cast(lltype.typeOf(array).TO.OF.y, newvalues2[i]) change2._annspecialcase_ = 'specialize:ll' -def _play_with_getinteriorfield(dummy_arg): +def _play_with_getinteriorfield(dummy_arg, retry_counter): check2(prebuilt_array_signed_signed, [1, -1, -50], [10, 20, -30]) check2(prebuilt_array_char_char, [chr(1), chr(255), chr(206)], [chr(10), chr(20), chr(226)]) return NULL -def _play_with_setinteriorfield_1(dummy_arg): +def _play_with_setinteriorfield_1(dummy_arg, retry_counter): change2(prebuilt_array_signed_signed, [500000, -10000000], [102101202]) check2(prebuilt_array_signed_signed, [500000, -10000000, -50], [102101202, 20, -30]) @@ -238,7 +238,7 @@ ['b', chr(20), chr(226)]) return NULL -def _play_with_setinteriorfield_2(dummy_arg): +def _play_with_setinteriorfield_2(dummy_arg, retry_counter): check2(prebuilt_array_signed_signed, [500000, -10000000, -50], [102101202, 20, -30]) check2(prebuilt_array_char_char, 
['a', chr(255), chr(206)], @@ -253,7 +253,7 @@ def test_getfield_all_sizes(self): def do_stm_getfield(argv): - _play_with_getfield(None) + _play_with_getfield(None, 0) return 0 t, cbuilder = self.compile(do_stm_getfield) cbuilder.cmdexec('') @@ -272,7 +272,7 @@ def do_stm_getfield(argv): stm_descriptor_init() # we have a descriptor, but we don't call it in a transaction - _play_with_getfield(None) + _play_with_getfield(None, 0) stm_descriptor_done() return 0 t, cbuilder = self.compile(do_stm_getfield) @@ -280,7 +280,7 @@ def test_setfield_all_sizes(self): def do_stm_setfield(argv): - _play_with_setfields(None) + _play_with_setfields(None, 0) return 0 t, cbuilder = self.compile(do_stm_setfield) cbuilder.cmdexec('') @@ -301,7 +301,7 @@ def test_setfield_all_sizes_outside_transaction(self): def do_stm_setfield(argv): stm_descriptor_init() - _play_with_setfields(None) + _play_with_setfields(None, 0) stm_descriptor_done() return 0 t, cbuilder = self.compile(do_stm_setfield) @@ -309,7 +309,7 @@ def test_getarrayitem_all_sizes(self): def do_stm_getarrayitem(argv): - _play_with_getarrayitem(None) + _play_with_getarrayitem(None, 0) return 0 t, cbuilder = self.compile(do_stm_getarrayitem) cbuilder.cmdexec('') @@ -327,9 +327,9 @@ def test_setarrayitem_all_sizes(self): def do_stm_setarrayitem(argv): - _play_with_setarrayitem_1(None) - _play_with_setarrayitem_2(None) - _play_with_setarrayitem_3(None) + _play_with_setarrayitem_1(None, 0) + _play_with_setarrayitem_2(None, 0) + _play_with_setarrayitem_3(None, 0) return 0 t, cbuilder = self.compile(do_stm_setarrayitem) cbuilder.cmdexec('') @@ -351,7 +351,7 @@ def test_getinteriorfield_all_sizes(self): def do_stm_getinteriorfield(argv): - _play_with_getinteriorfield(None) + _play_with_getinteriorfield(None, 0) return 0 t, cbuilder = self.compile(do_stm_getinteriorfield) cbuilder.cmdexec('') @@ -369,8 +369,8 @@ def test_setinteriorfield_all_sizes(self): def do_stm_setinteriorfield(argv): - _play_with_setinteriorfield_1(None) - 
_play_with_setinteriorfield_2(None) + _play_with_setinteriorfield_1(None, 0) + _play_with_setinteriorfield_2(None, 0) return 0 t, cbuilder = self.compile(do_stm_setinteriorfield) cbuilder.cmdexec('') diff --git a/pypy/translator/stm/test/test_llstm.py b/pypy/translator/stm/test/test_llstm.py --- a/pypy/translator/stm/test/test_llstm.py +++ b/pypy/translator/stm/test/test_llstm.py @@ -19,8 +19,9 @@ rs1b = r_singlefloat(40.121) rs2b = r_singlefloat(-9e9) -def callback1(a): +def callback1(a, retry_counter): a = rffi.cast(lltype.Ptr(A), a) + assert retry_counter == a.y # non-transactionally assert a.x == -611 assert a.c1 == '/' assert a.c2 == '\\' @@ -76,7 +77,7 @@ assert a.y == 10 lltype.free(a, flavor='raw') -def callback2(a): +def callback2(a, retry_counter): a = rffi.cast(lltype.Ptr(A), a) assert a.x == -611 assert a.c1 == '&' diff --git a/pypy/translator/stm/test/test_rffi_stm.py b/pypy/translator/stm/test/test_rffi_stm.py --- a/pypy/translator/stm/test/test_rffi_stm.py +++ b/pypy/translator/stm/test/test_rffi_stm.py @@ -6,7 +6,7 @@ stm_descriptor_done() def test_stm_perform_transaction(): - def callback1(x): + def callback1(x, retry_counter): return lltype.nullptr(rffi.VOIDP.TO) stm_descriptor_init() stm_perform_transaction(llhelper(CALLBACK, callback1), @@ -17,7 +17,8 @@ A = lltype.Struct('A', ('x', lltype.Signed), ('y', lltype.Signed)) a = lltype.malloc(A, immortal=True, flavor='raw') a.y = 0 - def callback1(x): + def callback1(x, retry_counter): + assert retry_counter == a.y if a.y < 10: a.y += 1 # non-transactionally stm_abort_and_retry() @@ -35,7 +36,8 @@ a = lltype.malloc(A, immortal=True, flavor='raw') a.x = -611 a.y = 0 - def callback1(x): + def callback1(x, retry_counter): + assert retry_counter == a.y assert a.x == -611 p = lltype.direct_fieldptr(a, 'x') p = rffi.cast(SignedP, p) @@ -55,7 +57,7 @@ assert a.x == 420 def test_stm_debug_get_state(): - def callback1(x): + def callback1(x, retry_counter): assert stm_debug_get_state() == 1 
stm_try_inevitable() assert stm_debug_get_state() == 2 From noreply at buildbot.pypy.org Sun Jan 22 10:53:06 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 22 Jan 2012 10:53:06 +0100 (CET) Subject: [pypy-commit] pypy stm: Fix test_interp_transaction. Message-ID: <20120122095306.A0119821FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51627:a18852e15aae Date: 2012-01-22 10:52 +0100 http://bitbucket.org/pypy/pypy/changeset/a18852e15aae/ Log: Fix test_interp_transaction. diff --git a/pypy/module/transaction/test/test_interp_transaction.py b/pypy/module/transaction/test/test_interp_transaction.py --- a/pypy/module/transaction/test/test_interp_transaction.py +++ b/pypy/module/transaction/test/test_interp_transaction.py @@ -3,8 +3,14 @@ class FakeSpace: - def new_exception_class(self, name): - return "some error class" + def getbuiltinmodule(self, name): + assert name == 'transaction' + return 'transaction module' + def getattr(self, w_obj, w_name): + assert w_obj == 'transaction module' + return 'some stuff from the transaction module' + def wrap(self, x): + return 'wrapped stuff' def call_args(self, w_callback, args): w_callback(*args) From noreply at buildbot.pypy.org Sun Jan 22 10:58:15 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sun, 22 Jan 2012 10:58:15 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fixes the combined use of kwonly arguments and default parameters Message-ID: <20120122095815.5BD38821FA@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51628:99083d2bfef1 Date: 2012-01-22 10:57 +0100 http://bitbucket.org/pypy/pypy/changeset/99083d2bfef1/ Log: Fixes the combined use of kwonly arguments and default parameters diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -35,6 +35,9 @@ def num_argnames(self): return len(self.argnames) + def num_kwonlyargnames(self): + return 
len(self.kwonlyargnames) + def has_vararg(self): return self.varargname is not None @@ -291,6 +294,7 @@ # so all values coming from there can be assumed constant. It assumes # that the length of the defaults_w does not vary too much. co_argcount = signature.num_argnames() # expected formal arguments, without */** + co_kwonlyargcount = signature.num_kwonlyargnames() has_vararg = signature.has_vararg() has_kwarg = signature.has_kwarg() extravarargs = None @@ -379,9 +383,9 @@ used_keywords[i] = True # mark as used num_remainingkwds -= 1 missing = 0 - if input_argcount < co_argcount: - def_first = co_argcount - (0 if defaults_w is None else len(defaults_w)) - for i in range(input_argcount, co_argcount): + if input_argcount < co_argcount + co_kwonlyargcount: + def_first = co_argcount + co_kwonlyargcount - (0 if defaults_w is None else len(defaults_w)) + for i in range(input_argcount, co_argcount + co_kwonlyargcount): if scope_w[i] is not None: continue defnum = i - def_first @@ -393,6 +397,10 @@ # keyword arguments, which will be checked for below. 
missing += 1 + # TODO: Put a nice error message + #if co_kwonlyargcount: + # assert co_kwonlyargcount == len(signature.kwonlyargnames) + # collect extra keyword arguments into the **kwarg if has_kwarg: w_kwds = self.space.newdict() @@ -423,7 +431,7 @@ co_argcount, has_vararg, has_kwarg, defaults_w, missing) - return co_argcount + has_vararg + has_kwarg + return co_argcount + has_vararg + has_kwarg + co_kwonlyargcount diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py --- a/pypy/interpreter/pycode.py +++ b/pypy/interpreter/pycode.py @@ -106,6 +106,7 @@ if self.co_cellvars: argcount = self.co_argcount + argcount += self.co_kwonlyargcount assert argcount >= 0 # annotator hint if self.co_flags & CO_VARARGS: argcount += 1 @@ -208,6 +209,8 @@ return if len(self._args_as_cellvars) > 0: return + if self.co_kwonlyargcount > 0: + return if self.co_argcount > 0xff: return From noreply at buildbot.pypy.org Sun Jan 22 11:11:46 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sun, 22 Jan 2012 11:11:46 +0100 (CET) Subject: [pypy-commit] pypy py3k: Add support for keyword arguments in the test suite, add a test for the order Message-ID: <20120122101146.05355821FA@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51629:2457af5955f1 Date: 2012-01-22 11:11 +0100 http://bitbucket.org/pypy/pypy/changeset/2457af5955f1/ Log: Add support for keyword arguments in the test suite, add a test for the order of kwonly arguments diff --git a/pypy/interpreter/test/test_interpreter.py b/pypy/interpreter/test/test_interpreter.py --- a/pypy/interpreter/test/test_interpreter.py +++ b/pypy/interpreter/test/test_interpreter.py @@ -4,7 +4,7 @@ class TestInterpreter: - def codetest(self, source, functionname, args): + def codetest(self, source, functionname, args, kwargs={}): """Compile and run the given code string, and then call its function named by 'functionname' with arguments 'args'.""" space = self.space @@ -22,10 +22,11 @@ code = 
space.unwrap(w_code) code.exec_code(space, w_glob, w_glob) - wrappedargs = [w(a) for a in args] + wrappedargs = w(args) + wrappedkwargs = w(kwargs) wrappedfunc = space.getitem(w_glob, w(functionname)) try: - w_output = space.call_function(wrappedfunc, *wrappedargs) + w_output = space.call(wrappedfunc, wrappedargs, wrappedkwargs) except error.OperationError, e: #e.print_detailed_traceback(space) return '<<<%s>>>' % e.errorstr(space) @@ -246,6 +247,13 @@ """ assert self.codetest(code, "f", [1, 2]) == (1, 2, 3, 4) + def test_kwonlyargs_order(self): + code = """ def f(a, b, *, c, d): + return a, b, c, d + """ + assert self.codetest(code, "f", [1, 2], {"d" : 3, "c" : 4}) == (1, 2, 3, 4) + + class AppTestInterpreter: def test_trivial(self): From noreply at buildbot.pypy.org Sun Jan 22 11:18:45 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 22 Jan 2012 11:18:45 +0100 (CET) Subject: [pypy-commit] pypy default: fix newlines Message-ID: <20120122101845.3DE6A821FA@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51630:444a1d6d447d Date: 2012-01-22 12:04 +0200 http://bitbucket.org/pypy/pypy/changeset/444a1d6d447d/ Log: fix newlines diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c --- a/pypy/translator/c/src/profiling.c +++ b/pypy/translator/c/src/profiling.c @@ -29,33 +29,33 @@ profiling_setup = 0; } } - -#elif defined(_WIN32) -#include - -DWORD_PTR base_affinity_mask; -int profiling_setup = 0; - -void pypy_setup_profiling() { - if (!profiling_setup) { - DWORD_PTR affinity_mask, system_affinity_mask; - GetProcessAffinityMask(GetCurrentProcess(), - &base_affinity_mask, &system_affinity_mask); - affinity_mask = 1; - /* Pick one cpu allowed by the system */ - if (system_affinity_mask) - while ((affinity_mask & system_affinity_mask) == 0) - affinity_mask <<= 1; - SetProcessAffinityMask(GetCurrentProcess(), affinity_mask); - profiling_setup = 1; - } -} - -void pypy_teardown_profiling() { - if 
(profiling_setup) { - SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask); - profiling_setup = 0; - } + +#elif defined(_WIN32) +#include + +DWORD_PTR base_affinity_mask; +int profiling_setup = 0; + +void pypy_setup_profiling() { + if (!profiling_setup) { + DWORD_PTR affinity_mask, system_affinity_mask; + GetProcessAffinityMask(GetCurrentProcess(), + &base_affinity_mask, &system_affinity_mask); + affinity_mask = 1; + /* Pick one cpu allowed by the system */ + if (system_affinity_mask) + while ((affinity_mask & system_affinity_mask) == 0) + affinity_mask <<= 1; + SetProcessAffinityMask(GetCurrentProcess(), affinity_mask); + profiling_setup = 1; + } +} + +void pypy_teardown_profiling() { + if (profiling_setup) { + SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask); + profiling_setup = 0; + } } #else From noreply at buildbot.pypy.org Sun Jan 22 11:18:46 2012 From: noreply at buildbot.pypy.org (fijal) Date: Sun, 22 Jan 2012 11:18:46 +0100 (CET) Subject: [pypy-commit] pypy default: merge Message-ID: <20120122101846.7CA9C821FA@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r51631:eeb977421a88 Date: 2012-01-22 12:18 +0200 http://bitbucket.org/pypy/pypy/changeset/eeb977421a88/ Log: merge diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -13,7 +13,7 @@ Sources for time zone and DST data: http://www.twinsun.com/tz/tz-link.htm This was originally copied from the sandbox of the CPython CVS repository. -Thanks to Tim Peters for suggesting using it. +Thanks to Tim Peters for suggesting using it. 
""" import time as _time @@ -271,6 +271,8 @@ raise ValueError("%s()=%d, must be in -1439..1439" % (name, offset)) def _check_date_fields(year, month, day): + if not isinstance(year, (int, long)): + raise TypeError('int expected') if not MINYEAR <= year <= MAXYEAR: raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year) if not 1 <= month <= 12: @@ -280,6 +282,8 @@ raise ValueError('day must be in 1..%d' % dim, day) def _check_time_fields(hour, minute, second, microsecond): + if not isinstance(hour, (int, long)): + raise TypeError('int expected') if not 0 <= hour <= 23: raise ValueError('hour must be in 0..23', hour) if not 0 <= minute <= 59: @@ -543,61 +547,76 @@ self = object.__new__(cls) - self.__days = d - self.__seconds = s - self.__microseconds = us + self._days = d + self._seconds = s + self._microseconds = us if abs(d) > 999999999: raise OverflowError("timedelta # of days is too large: %d" % d) return self def __repr__(self): - if self.__microseconds: + if self._microseconds: return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__, - self.__days, - self.__seconds, - self.__microseconds) - if self.__seconds: + self._days, + self._seconds, + self._microseconds) + if self._seconds: return "%s(%d, %d)" % ('datetime.' + self.__class__.__name__, - self.__days, - self.__seconds) - return "%s(%d)" % ('datetime.' + self.__class__.__name__, self.__days) + self._days, + self._seconds) + return "%s(%d)" % ('datetime.' 
+ self.__class__.__name__, self._days) def __str__(self): - mm, ss = divmod(self.__seconds, 60) + mm, ss = divmod(self._seconds, 60) hh, mm = divmod(mm, 60) s = "%d:%02d:%02d" % (hh, mm, ss) - if self.__days: + if self._days: def plural(n): return n, abs(n) != 1 and "s" or "" - s = ("%d day%s, " % plural(self.__days)) + s - if self.__microseconds: - s = s + ".%06d" % self.__microseconds + s = ("%d day%s, " % plural(self._days)) + s + if self._microseconds: + s = s + ".%06d" % self._microseconds return s - days = property(lambda self: self.__days, doc="days") - seconds = property(lambda self: self.__seconds, doc="seconds") - microseconds = property(lambda self: self.__microseconds, - doc="microseconds") - def total_seconds(self): return ((self.days * 86400 + self.seconds) * 10**6 + self.microseconds) / 1e6 + # Read-only field accessors + @property + def days(self): + """days""" + return self._days + + @property + def seconds(self): + """seconds""" + return self._seconds + + @property + def microseconds(self): + """microseconds""" + return self._microseconds + def __add__(self, other): if isinstance(other, timedelta): # for CPython compatibility, we cannot use # our __class__ here, but need a real timedelta - return timedelta(self.__days + other.__days, - self.__seconds + other.__seconds, - self.__microseconds + other.__microseconds) + return timedelta(self._days + other._days, + self._seconds + other._seconds, + self._microseconds + other._microseconds) return NotImplemented __radd__ = __add__ def __sub__(self, other): if isinstance(other, timedelta): - return self + -other + # for CPython compatibility, we cannot use + # our __class__ here, but need a real timedelta + return timedelta(self._days - other._days, + self._seconds - other._seconds, + self._microseconds - other._microseconds) return NotImplemented def __rsub__(self, other): @@ -606,17 +625,17 @@ return NotImplemented def __neg__(self): - # for CPython compatibility, we cannot use - # our __class__ here, 
but need a real timedelta - return timedelta(-self.__days, - -self.__seconds, - -self.__microseconds) + # for CPython compatibility, we cannot use + # our __class__ here, but need a real timedelta + return timedelta(-self._days, + -self._seconds, + -self._microseconds) def __pos__(self): return self def __abs__(self): - if self.__days < 0: + if self._days < 0: return -self else: return self @@ -625,81 +644,81 @@ if isinstance(other, (int, long)): # for CPython compatibility, we cannot use # our __class__ here, but need a real timedelta - return timedelta(self.__days * other, - self.__seconds * other, - self.__microseconds * other) + return timedelta(self._days * other, + self._seconds * other, + self._microseconds * other) return NotImplemented __rmul__ = __mul__ def __div__(self, other): if isinstance(other, (int, long)): - usec = ((self.__days * (24*3600L) + self.__seconds) * 1000000 + - self.__microseconds) + usec = ((self._days * (24*3600L) + self._seconds) * 1000000 + + self._microseconds) return timedelta(0, 0, usec // other) return NotImplemented __floordiv__ = __div__ - # Comparisons. + # Comparisons of timedelta objects with other. 
def __eq__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 else: return False def __ne__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 else: return True def __le__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, timedelta): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, timedelta) - return cmp(self.__getstate(), other.__getstate()) + return cmp(self._getstate(), other._getstate()) def __hash__(self): - return hash(self.__getstate()) + return hash(self._getstate()) def __nonzero__(self): - return (self.__days != 0 or - self.__seconds != 0 or - self.__microseconds != 0) + return (self._days != 0 or + self._seconds != 0 or + self._microseconds != 0) # Pickle support. 
__safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - return (self.__days, self.__seconds, self.__microseconds) + def _getstate(self): + return (self._days, self._seconds, self._microseconds) def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) timedelta.min = timedelta(-999999999) timedelta.max = timedelta(days=999999999, hours=23, minutes=59, seconds=59, @@ -749,25 +768,26 @@ return self _check_date_fields(year, month, day) self = object.__new__(cls) - self.__year = year - self.__month = month - self.__day = day + self._year = year + self._month = month + self._day = day return self # Additional constructors + @classmethod def fromtimestamp(cls, t): "Construct a date from a POSIX timestamp (like time.time())." y, m, d, hh, mm, ss, weekday, jday, dst = _time.localtime(t) return cls(y, m, d) - fromtimestamp = classmethod(fromtimestamp) + @classmethod def today(cls): "Construct a date from time.time()." t = _time.time() return cls.fromtimestamp(t) - today = classmethod(today) + @classmethod def fromordinal(cls, n): """Contruct a date from a proleptic Gregorian ordinal. @@ -776,16 +796,24 @@ """ y, m, d = _ord2ymd(n) return cls(y, m, d) - fromordinal = classmethod(fromordinal) # Conversions to string def __repr__(self): - "Convert to formal string, for repr()." + """Convert to formal string, for repr(). + + >>> dt = datetime(2010, 1, 1) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0)' + + >>> dt = datetime(2010, 1, 1, tzinfo=timezone.utc) + >>> repr(dt) + 'datetime.datetime(2010, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)' + """ return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__, - self.__year, - self.__month, - self.__day) + self._year, + self._month, + self._day) # XXX These shouldn't depend on time.localtime(), because that # clips the usable dates to [1970 .. 2038). 
At least ctime() is # easily done without using strftime() -- that's better too because @@ -793,12 +821,20 @@ def ctime(self): "Format a la ctime()." - return tmxxx(self.__year, self.__month, self.__day).ctime() + return tmxxx(self._year, self._month, self._day).ctime() def strftime(self, fmt): "Format using strftime()." return _wrap_strftime(self, fmt, self.timetuple()) + def __format__(self, fmt): + if not isinstance(fmt, (str, unicode)): + raise ValueError("__format__ excepts str or unicode, not %s" % + fmt.__class__.__name__) + if len(fmt) != 0: + return self.strftime(fmt) + return str(self) + def isoformat(self): """Return the date formatted according to ISO. @@ -808,29 +844,31 @@ - http://www.w3.org/TR/NOTE-datetime - http://www.cl.cam.ac.uk/~mgk25/iso-time.html """ - return "%04d-%02d-%02d" % (self.__year, self.__month, self.__day) + return "%04d-%02d-%02d" % (self._year, self._month, self._day) __str__ = isoformat - def __format__(self, format): - if not isinstance(format, (str, unicode)): - raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) + # Read-only field accessors + @property + def year(self): + """year (1-9999)""" + return self._year - # Read-only field accessors - year = property(lambda self: self.__year, - doc="year (%d-%d)" % (MINYEAR, MAXYEAR)) - month = property(lambda self: self.__month, doc="month (1-12)") - day = property(lambda self: self.__day, doc="day (1-31)") + @property + def month(self): + """month (1-12)""" + return self._month + + @property + def day(self): + """day (1-31)""" + return self._day # Standard conversions, __cmp__, __hash__ (and helpers) def timetuple(self): "Return local time tuple compatible with time.localtime()." 
- return _build_struct_time(self.__year, self.__month, self.__day, + return _build_struct_time(self._year, self._month, self._day, 0, 0, 0, -1) def toordinal(self): @@ -839,24 +877,24 @@ January 1 of year 1 is day 1. Only the year, month and day values contribute to the result. """ - return _ymd2ord(self.__year, self.__month, self.__day) + return _ymd2ord(self._year, self._month, self._day) def replace(self, year=None, month=None, day=None): """Return a new date with new values for the specified fields.""" if year is None: - year = self.__year + year = self._year if month is None: - month = self.__month + month = self._month if day is None: - day = self.__day + day = self._day _check_date_fields(year, month, day) return date(year, month, day) - # Comparisons. + # Comparisons of date objects with other. def __eq__(self, other): if isinstance(other, date): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -864,7 +902,7 @@ def __ne__(self, other): if isinstance(other, date): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -872,7 +910,7 @@ def __le__(self, other): if isinstance(other, date): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -880,7 +918,7 @@ def __lt__(self, other): if isinstance(other, date): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -888,7 +926,7 @@ def __ge__(self, other): if isinstance(other, date): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 elif hasattr(other, "timetuple"): return NotImplemented else: @@ -896,21 +934,21 @@ def __gt__(self, other): if isinstance(other, date): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 elif hasattr(other, "timetuple"): return NotImplemented else: _cmperror(self, 
other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, date) - y, m, d = self.__year, self.__month, self.__day - y2, m2, d2 = other.__year, other.__month, other.__day + y, m, d = self._year, self._month, self._day + y2, m2, d2 = other._year, other._month, other._day return cmp((y, m, d), (y2, m2, d2)) def __hash__(self): "Hash." - return hash(self.__getstate()) + return hash(self._getstate()) # Computations @@ -922,9 +960,9 @@ def __add__(self, other): "Add a date to a timedelta." if isinstance(other, timedelta): - t = tmxxx(self.__year, - self.__month, - self.__day + other.days) + t = tmxxx(self._year, + self._month, + self._day + other.days) self._checkOverflow(t.year) result = date(t.year, t.month, t.day) return result @@ -966,9 +1004,9 @@ ISO calendar algorithm taken from http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm """ - year = self.__year + year = self._year week1monday = _isoweek1monday(year) - today = _ymd2ord(self.__year, self.__month, self.__day) + today = _ymd2ord(self._year, self._month, self._day) # Internally, week and day have origin 0 week, day = divmod(today - week1monday, 7) if week < 0: @@ -985,18 +1023,18 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - yhi, ylo = divmod(self.__year, 256) - return ("%c%c%c%c" % (yhi, ylo, self.__month, self.__day), ) + def _getstate(self): + yhi, ylo = divmod(self._year, 256) + return ("%c%c%c%c" % (yhi, ylo, self._month, self._day), ) def __setstate(self, string): if len(string) != 4 or not (1 <= ord(string[2]) <= 12): raise TypeError("not enough arguments") - yhi, ylo, self.__month, self.__day = map(ord, string) - self.__year = yhi * 256 + ylo + yhi, ylo, self._month, self._day = map(ord, string) + self._year = yhi * 256 + ylo def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) _date_class = date # so functions w/ args named "date" can get at the class @@ -1118,62 +1156,80 @@ return self 
_check_tzinfo_arg(tzinfo) _check_time_fields(hour, minute, second, microsecond) - self.__hour = hour - self.__minute = minute - self.__second = second - self.__microsecond = microsecond + self._hour = hour + self._minute = minute + self._second = second + self._microsecond = microsecond self._tzinfo = tzinfo return self # Read-only field accessors - hour = property(lambda self: self.__hour, doc="hour (0-23)") - minute = property(lambda self: self.__minute, doc="minute (0-59)") - second = property(lambda self: self.__second, doc="second (0-59)") - microsecond = property(lambda self: self.__microsecond, - doc="microsecond (0-999999)") - tzinfo = property(lambda self: self._tzinfo, doc="timezone info object") + @property + def hour(self): + """hour (0-23)""" + return self._hour + + @property + def minute(self): + """minute (0-59)""" + return self._minute + + @property + def second(self): + """second (0-59)""" + return self._second + + @property + def microsecond(self): + """microsecond (0-999999)""" + return self._microsecond + + @property + def tzinfo(self): + """timezone info object""" + return self._tzinfo # Standard conversions, __hash__ (and helpers) - # Comparisons. + # Comparisons of time objects with other. 
def __eq__(self, other): if isinstance(other, time): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 else: return False def __ne__(self, other): if isinstance(other, time): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 else: return True def __le__(self, other): if isinstance(other, time): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 else: _cmperror(self, other) def __lt__(self, other): if isinstance(other, time): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 else: _cmperror(self, other) def __ge__(self, other): if isinstance(other, time): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 else: _cmperror(self, other) def __gt__(self, other): if isinstance(other, time): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, time) mytz = self._tzinfo ottz = other._tzinfo @@ -1187,23 +1243,23 @@ base_compare = myoff == otoff if base_compare: - return cmp((self.__hour, self.__minute, self.__second, - self.__microsecond), - (other.__hour, other.__minute, other.__second, - other.__microsecond)) + return cmp((self._hour, self._minute, self._second, + self._microsecond), + (other._hour, other._minute, other._second, + other._microsecond)) if myoff is None or otoff is None: # XXX Buggy in 2.2.2. 
raise TypeError("cannot compare naive and aware times") - myhhmm = self.__hour * 60 + self.__minute - myoff - othhmm = other.__hour * 60 + other.__minute - otoff - return cmp((myhhmm, self.__second, self.__microsecond), - (othhmm, other.__second, other.__microsecond)) + myhhmm = self._hour * 60 + self._minute - myoff + othhmm = other._hour * 60 + other._minute - otoff + return cmp((myhhmm, self._second, self._microsecond), + (othhmm, other._second, other._microsecond)) def __hash__(self): """Hash.""" tzoff = self._utcoffset() if not tzoff: # zero or None - return hash(self.__getstate()[0]) + return hash(self._getstate()[0]) h, m = divmod(self.hour * 60 + self.minute - tzoff, 60) if 0 <= h < 24: return hash(time(h, m, self.second, self.microsecond)) @@ -1227,14 +1283,14 @@ def __repr__(self): """Convert to formal string, for repr().""" - if self.__microsecond != 0: - s = ", %d, %d" % (self.__second, self.__microsecond) - elif self.__second != 0: - s = ", %d" % self.__second + if self._microsecond != 0: + s = ", %d, %d" % (self._second, self._microsecond) + elif self._second != 0: + s = ", %d" % self._second else: s = "" s= "%s(%d, %d%s)" % ('datetime.' + self.__class__.__name__, - self.__hour, self.__minute, s) + self._hour, self._minute, s) if self._tzinfo is not None: assert s[-1:] == ")" s = s[:-1] + ", tzinfo=%r" % self._tzinfo + ")" @@ -1246,8 +1302,8 @@ This is 'HH:MM:SS.mmmmmm+zz:zz', or 'HH:MM:SS+zz:zz' if self.microsecond == 0. """ - s = _format_time(self.__hour, self.__minute, self.__second, - self.__microsecond) + s = _format_time(self._hour, self._minute, self._second, + self._microsecond) tz = self._tzstr() if tz: s += tz @@ -1255,14 +1311,6 @@ __str__ = isoformat - def __format__(self, format): - if not isinstance(format, (str, unicode)): - raise ValueError("__format__ excepts str or unicode, not %s" % - format.__class__.__name__) - if not format: - return str(self) - return self.strftime(format) - def strftime(self, fmt): """Format using strftime(). 
The date part of the timestamp passed to underlying strftime should not be used. @@ -1270,10 +1318,18 @@ # The year must be >= 1900 else Python's strftime implementation # can raise a bogus exception. timetuple = (1900, 1, 1, - self.__hour, self.__minute, self.__second, + self._hour, self._minute, self._second, 0, 1, -1) return _wrap_strftime(self, fmt, timetuple) + def __format__(self, fmt): + if not isinstance(fmt, (str, unicode)): + raise ValueError("__format__ excepts str or unicode, not %s" % + fmt.__class__.__name__) + if len(fmt) != 0: + return self.strftime(fmt) + return str(self) + # Timezone functions def utcoffset(self): @@ -1350,10 +1406,10 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - us2, us3 = divmod(self.__microsecond, 256) + def _getstate(self): + us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) - basestate = ("%c" * 6) % (self.__hour, self.__minute, self.__second, + basestate = ("%c" * 6) % (self._hour, self._minute, self._second, us1, us2, us3) if self._tzinfo is None: return (basestate,) @@ -1363,13 +1419,13 @@ def __setstate(self, string, tzinfo): if len(string) != 6 or ord(string[0]) >= 24: raise TypeError("an integer is required") - self.__hour, self.__minute, self.__second, us1, us2, us3 = \ + self._hour, self._minute, self._second, us1, us2, us3 = \ map(ord, string) - self.__microsecond = (((us1 << 8) | us2) << 8) | us3 + self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce__(self): - return (time, self.__getstate()) + return (time, self._getstate()) _time_class = time # so functions w/ args named "time" can get at the class @@ -1378,9 +1434,11 @@ time.resolution = timedelta(microseconds=1) class datetime(date): + """datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]]) - # XXX needs docstrings - # See http://www.zope.org/Members/fdrake/DateTimeWiki/TimeZoneInfo + The year, month and day arguments are required. 
tzinfo may be None, or an + instance of a tzinfo subclass. The remaining arguments may be ints or longs. + """ def __new__(cls, year, month=None, day=None, hour=0, minute=0, second=0, microsecond=0, tzinfo=None): @@ -1393,24 +1451,43 @@ _check_time_fields(hour, minute, second, microsecond) self = date.__new__(cls, year, month, day) # XXX This duplicates __year, __month, __day for convenience :-( - self.__year = year - self.__month = month - self.__day = day - self.__hour = hour - self.__minute = minute - self.__second = second - self.__microsecond = microsecond + self._year = year + self._month = month + self._day = day + self._hour = hour + self._minute = minute + self._second = second + self._microsecond = microsecond self._tzinfo = tzinfo return self # Read-only field accessors - hour = property(lambda self: self.__hour, doc="hour (0-23)") - minute = property(lambda self: self.__minute, doc="minute (0-59)") - second = property(lambda self: self.__second, doc="second (0-59)") - microsecond = property(lambda self: self.__microsecond, - doc="microsecond (0-999999)") - tzinfo = property(lambda self: self._tzinfo, doc="timezone info object") + @property + def hour(self): + """hour (0-23)""" + return self._hour + @property + def minute(self): + """minute (0-59)""" + return self._minute + + @property + def second(self): + """second (0-59)""" + return self._second + + @property + def microsecond(self): + """microsecond (0-999999)""" + return self._microsecond + + @property + def tzinfo(self): + """timezone info object""" + return self._tzinfo + + @classmethod def fromtimestamp(cls, t, tz=None): """Construct a datetime from a POSIX timestamp (like time.time()). @@ -1438,7 +1515,6 @@ if tz is not None: result = tz.fromutc(result) return result - fromtimestamp = classmethod(fromtimestamp) @classmethod def utcfromtimestamp(cls, t): @@ -1462,18 +1538,19 @@ # XXX uses gettimeofday on platforms that have it, but that isn't # XXX available from Python. 
So now() may return different results # XXX across the implementations. + @classmethod def now(cls, tz=None): "Construct a datetime from time.time() and optional time zone info." t = _time.time() return cls.fromtimestamp(t, tz) - now = classmethod(now) + @classmethod def utcnow(cls): "Construct a UTC datetime from time.time()." t = _time.time() return cls.utcfromtimestamp(t) - utcnow = classmethod(utcnow) + @classmethod def combine(cls, date, time): "Construct a datetime from a given date and a given time." if not isinstance(date, _date_class): @@ -1483,7 +1560,6 @@ return cls(date.year, date.month, date.day, time.hour, time.minute, time.second, time.microsecond, time.tzinfo) - combine = classmethod(combine) def timetuple(self): "Return local time tuple compatible with time.localtime()." @@ -1509,7 +1585,7 @@ def date(self): "Return the date part." - return date(self.__year, self.__month, self.__day) + return date(self._year, self._month, self._day) def time(self): "Return the time part, with tzinfo None." @@ -1569,8 +1645,8 @@ def ctime(self): "Format a la ctime()." - t = tmxxx(self.__year, self.__month, self.__day, self.__hour, - self.__minute, self.__second) + t = tmxxx(self._year, self._month, self._day, self._hour, + self._minute, self._second) return t.ctime() def isoformat(self, sep='T'): @@ -1585,10 +1661,10 @@ Optional argument sep specifies the separator between date and time, default 'T'. """ - s = ("%04d-%02d-%02d%c" % (self.__year, self.__month, self.__day, + s = ("%04d-%02d-%02d%c" % (self._year, self._month, self._day, sep) + - _format_time(self.__hour, self.__minute, self.__second, - self.__microsecond)) + _format_time(self._hour, self._minute, self._second, + self._microsecond)) off = self._utcoffset() if off is not None: if off < 0: @@ -1601,13 +1677,13 @@ return s def __repr__(self): - "Convert to formal string, for repr()." 
- L = [self.__year, self.__month, self.__day, # These are never zero - self.__hour, self.__minute, self.__second, self.__microsecond] + """Convert to formal string, for repr().""" + L = [self._year, self._month, self._day, # These are never zero + self._hour, self._minute, self._second, self._microsecond] if L[-1] == 0: del L[-1] if L[-1] == 0: - del L[-1] + del L[-1] s = ", ".join(map(str, L)) s = "%s(%s)" % ('datetime.' + self.__class__.__name__, s) if self._tzinfo is not None: @@ -1680,7 +1756,7 @@ def __eq__(self, other): if isinstance(other, datetime): - return self.__cmp(other) == 0 + return self._cmp(other) == 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1688,7 +1764,7 @@ def __ne__(self, other): if isinstance(other, datetime): - return self.__cmp(other) != 0 + return self._cmp(other) != 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1696,7 +1772,7 @@ def __le__(self, other): if isinstance(other, datetime): - return self.__cmp(other) <= 0 + return self._cmp(other) <= 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1704,7 +1780,7 @@ def __lt__(self, other): if isinstance(other, datetime): - return self.__cmp(other) < 0 + return self._cmp(other) < 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1712,7 +1788,7 @@ def __ge__(self, other): if isinstance(other, datetime): - return self.__cmp(other) >= 0 + return self._cmp(other) >= 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: @@ -1720,13 +1796,13 @@ def __gt__(self, other): if isinstance(other, datetime): - return self.__cmp(other) > 0 + return self._cmp(other) > 0 elif hasattr(other, "timetuple") and not isinstance(other, date): return NotImplemented else: _cmperror(self, other) - def __cmp(self, other): + def _cmp(self, other): assert isinstance(other, 
datetime) mytz = self._tzinfo ottz = other._tzinfo @@ -1742,12 +1818,12 @@ base_compare = myoff == otoff if base_compare: - return cmp((self.__year, self.__month, self.__day, - self.__hour, self.__minute, self.__second, - self.__microsecond), - (other.__year, other.__month, other.__day, - other.__hour, other.__minute, other.__second, - other.__microsecond)) + return cmp((self._year, self._month, self._day, + self._hour, self._minute, self._second, + self._microsecond), + (other._year, other._month, other._day, + other._hour, other._minute, other._second, + other._microsecond)) if myoff is None or otoff is None: # XXX Buggy in 2.2.2. raise TypeError("cannot compare naive and aware datetimes") @@ -1761,13 +1837,13 @@ "Add a datetime and a timedelta." if not isinstance(other, timedelta): return NotImplemented - t = tmxxx(self.__year, - self.__month, - self.__day + other.days, - self.__hour, - self.__minute, - self.__second + other.seconds, - self.__microsecond + other.microseconds) + t = tmxxx(self._year, + self._month, + self._day + other.days, + self._hour, + self._minute, + self._second + other.seconds, + self._microsecond + other.microseconds) self._checkOverflow(t.year) result = datetime(t.year, t.month, t.day, t.hour, t.minute, t.second, @@ -1785,11 +1861,11 @@ days1 = self.toordinal() days2 = other.toordinal() - secs1 = self.__second + self.__minute * 60 + self.__hour * 3600 - secs2 = other.__second + other.__minute * 60 + other.__hour * 3600 + secs1 = self._second + self._minute * 60 + self._hour * 3600 + secs2 = other._second + other._minute * 60 + other._hour * 3600 base = timedelta(days1 - days2, secs1 - secs2, - self.__microsecond - other.__microsecond) + self._microsecond - other._microsecond) if self._tzinfo is other._tzinfo: return base myoff = self._utcoffset() @@ -1797,13 +1873,13 @@ if myoff == otoff: return base if myoff is None or otoff is None: - raise TypeError, "cannot mix naive and timezone-aware time" + raise TypeError("cannot mix naive and 
timezone-aware time") return base + timedelta(minutes = otoff-myoff) def __hash__(self): tzoff = self._utcoffset() if tzoff is None: - return hash(self.__getstate()[0]) + return hash(self._getstate()[0]) days = _ymd2ord(self.year, self.month, self.day) seconds = self.hour * 3600 + (self.minute - tzoff) * 60 + self.second return hash(timedelta(days, seconds, self.microsecond)) @@ -1812,12 +1888,12 @@ __safe_for_unpickling__ = True # For Python 2.2 - def __getstate(self): - yhi, ylo = divmod(self.__year, 256) - us2, us3 = divmod(self.__microsecond, 256) + def _getstate(self): + yhi, ylo = divmod(self._year, 256) + us2, us3 = divmod(self._microsecond, 256) us1, us2 = divmod(us2, 256) - basestate = ("%c" * 10) % (yhi, ylo, self.__month, self.__day, - self.__hour, self.__minute, self.__second, + basestate = ("%c" * 10) % (yhi, ylo, self._month, self._day, + self._hour, self._minute, self._second, us1, us2, us3) if self._tzinfo is None: return (basestate,) @@ -1825,14 +1901,14 @@ return (basestate, self._tzinfo) def __setstate(self, string, tzinfo): - (yhi, ylo, self.__month, self.__day, self.__hour, - self.__minute, self.__second, us1, us2, us3) = map(ord, string) - self.__year = yhi * 256 + ylo - self.__microsecond = (((us1 << 8) | us2) << 8) | us3 + (yhi, ylo, self._month, self._day, self._hour, + self._minute, self._second, us1, us2, us3) = map(ord, string) + self._year = yhi * 256 + ylo + self._microsecond = (((us1 << 8) | us2) << 8) | us3 self._tzinfo = tzinfo def __reduce__(self): - return (self.__class__, self.__getstate()) + return (self.__class__, self._getstate()) datetime.min = datetime(1, 1, 1) @@ -2014,7 +2090,7 @@ Because we know z.d said z was in daylight time (else [5] would have held and we would have stopped then), and we know z.d != z'.d (else [8] would have held -and we we have stopped then), and there are only 2 possible values dst() can +and we have stopped then), and there are only 2 possible values dst() can return in Eastern, it follows that 
z'.d must be 0 (which it is in the example, but the reasoning doesn't depend
on the example -- it depends on there being two possible dst() outcomes, one
zero and the other non-zero).  Therefore

From noreply at buildbot.pypy.org  Sun Jan 22 11:29:32 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sun, 22 Jan 2012 11:29:32 +0100 (CET)
Subject: [pypy-commit] pypy stm: Oups.
Message-ID: <20120122102932.52A1D821FA@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo <arigo@tunes.org>
Branch: stm
Changeset: r51632:4ae4d737bd18
Date: 2012-01-22 11:16 +0100
http://bitbucket.org/pypy/pypy/changeset/4ae4d737bd18/

Log:	Oups.

diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py
--- a/pypy/module/transaction/interp_transaction.py
+++ b/pypy/module/transaction/interp_transaction.py
@@ -94,7 +94,7 @@
     @staticmethod
     def _run_in_transaction(pending, retry_counter):
         if retry_counter > 0:
-            self.register()    # retrying: will be done later, try others first
+            pending.register() # retrying: will be done later, try others first
             return
         if state.got_exception is not None:
             return   # return early if there is already a 'got_exception'

From noreply at buildbot.pypy.org  Sun Jan 22 11:29:33 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sun, 22 Jan 2012 11:29:33 +0100 (CET)
Subject: [pypy-commit] pypy stm: Add an explicit fifo queue implementation,
 instead of using
Message-ID: <20120122102933.90B8A821FA@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo <arigo@tunes.org>
Branch: stm
Changeset: r51633:5ab80750fca1
Date: 2012-01-22 11:17 +0100
http://bitbucket.org/pypy/pypy/changeset/5ab80750fca1/

Log:	Add an explicit fifo queue implementation, instead of using
	list.pop(0).
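For readers following along: `list.pop(0)` shifts every remaining element, so it costs O(n) per pop, while the linked structure introduced by this commit makes `append`, `popleft` and `steal` all O(1). Below is a minimal, self-contained sketch of the same idea. It is not the committed code: this standalone version wraps values in separate `Node` objects and `popleft` returns the value, whereas the committed `Fifo` is intrusive (it stores the `next` link directly on the pending items and returns the item itself).

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class Fifo:
    """Singly-linked FIFO: O(1) append, popleft and steal."""
    def __init__(self):
        self.first = None
        self.last = None

    def is_empty(self):
        return self.first is None

    def append(self, value):
        node = Node(value)
        if self.last is None:
            self.first = node
        else:
            self.last.next = node
        self.last = node

    def popleft(self):
        node = self.first
        self.first = node.next
        if self.first is None:
            self.last = None
        return node.value

    def steal(self, other):
        # Splice all of other's nodes onto the end of self in O(1),
        # leaving other empty (this mirrors Fifo.steal in the commit).
        if other.last is not None:
            if self.last is None:
                self.first = other.first
            else:
                self.last.next = other.first
            self.last = other.last
            other.first = other.last = None

f = Fifo()
g = Fifo()
for i in (1, 2):
    f.append(i)
g.append(3)
f.steal(g)
print([f.popleft() for _ in range(3)], g.is_empty())  # -> [1, 2, 3] True
```

The `steal` operation is what lets each worker thread hand its whole per-thread pending list back to the global queue without copying element by element.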
diff --git a/pypy/module/transaction/fifo.py b/pypy/module/transaction/fifo.py
new file mode 100644
--- /dev/null
+++ b/pypy/module/transaction/fifo.py
@@ -0,0 +1,34 @@
+
+class Fifo(object):
+    def __init__(self):
+        self.first = None
+        self.last = None
+
+    def append(self, newitem):
+        newitem.next = None
+        if self.last is None:
+            self.first = newitem
+        else:
+            self.last.next = newitem
+        self.last = newitem
+
+    def is_empty(self):
+        assert (self.first is None) == (self.last is None)
+        return self.first is None
+
+    def popleft(self):
+        item = self.first
+        self.first = item.next
+        if self.first is None:
+            self.last = None
+        return item
+
+    def steal(self, otherfifo):
+        if otherfifo.last is not None:
+            if self.last is None:
+                self.first = otherfifo.first
+            else:
+                self.last.next = otherfifo.first
+            self.last = otherfifo.last
+        otherfifo.first = None
+        otherfifo.last = None
diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py
--- a/pypy/module/transaction/interp_transaction.py
+++ b/pypy/module/transaction/interp_transaction.py
@@ -1,6 +1,7 @@
 from pypy.interpreter.error import OperationError
 from pypy.interpreter.gateway import unwrap_spec
 from pypy.module.transaction import threadintf
+from pypy.module.transaction.fifo import Fifo
 from pypy.rlib import rstm
@@ -13,7 +14,7 @@
         self.__dict__.clear()
         self.running = False
         self.num_threads = NUM_THREADS_DEFAULT
-        self.pending = []
+        self.pending = Fifo()
         self.pending_lists = {0: self.pending}
         self.ll_lock = threadintf.null_ll_lock
         self.ll_no_tasks_pending_lock = threadintf.null_ll_lock
@@ -110,24 +111,23 @@
 
 
 def add_list(new_pending_list):
-    if len(new_pending_list) == 0:
+    if new_pending_list.is_empty():
         return
-    was_empty = len(state.pending) == 0
-    state.pending += new_pending_list
-    del new_pending_list[:]
+    was_empty = state.pending.is_empty()
+    state.pending.steal(new_pending_list)
     if was_empty:
         state.unlock_no_tasks_pending()
 
 
 def _run_thread():
     state.lock()
-    my_pending_list = []
+    my_pending_list = Fifo()
    my_thread_id = threadintf.thread_id()
    state.pending_lists[my_thread_id] = my_pending_list
    rstm.descriptor_init()
    #
    while True:
-        if len(state.pending) == 0:
+        if state.pending.is_empty():
            assert state.is_locked_no_tasks_pending()
            state.num_waiting_threads += 1
            if state.num_waiting_threads == state.num_threads:
@@ -143,8 +143,8 @@
             if state.finished:
                 break
         else:
-            pending = state.pending.pop(0)
-            if len(state.pending) == 0:
+            pending = state.pending.popleft()
+            if state.pending.is_empty():
                 state.lock_no_tasks_pending()
             state.unlock()
             pending.run()
@@ -164,7 +164,7 @@
             state.w_error,
             space.wrap("recursive invocation of transaction.run()"))
     assert not state.is_locked_no_tasks_pending()
-    if len(state.pending) == 0:
+    if state.pending.is_empty():
         return
     state.num_waiting_threads = 0
     state.finished = False
@@ -177,7 +177,7 @@
     state.lock_unfinished()  # wait for all threads to finish
     #
     assert state.num_waiting_threads == 0
-    assert len(state.pending) == 0
+    assert state.pending.is_empty()
     assert state.pending_lists.keys() == [state.main_thread_id]
     assert not state.is_locked_no_tasks_pending()
     state.running = False
diff --git a/pypy/module/transaction/test/test_fifo.py b/pypy/module/transaction/test/test_fifo.py
new file mode 100644
--- /dev/null
+++ b/pypy/module/transaction/test/test_fifo.py
@@ -0,0 +1,40 @@
+from pypy.module.transaction.fifo import Fifo
+
+class Item:
+    def __init__(self, value):
+        self.value = value
+
+def test_one_item():
+    f = Fifo()
+    assert f.is_empty()
+    f.append(Item(123))
+    assert not f.is_empty()
+    item = f.popleft()
+    assert f.is_empty()
+    assert item.value == 123
+
+def test_three_items():
+    f = Fifo()
+    for i in [10, 20, 30]:
+        f.append(Item(i))
+        assert not f.is_empty()
+    for i in [10, 20, 30]:
+        assert not f.is_empty()
+        item = f.popleft()
+        assert item.value == i
+    assert f.is_empty()
+
+def test_steal():
+    for n1 in range(3):
+        for n2 in range(3):
+            f1 = Fifo()
+            f2 = Fifo()
+            for i in range(n1): f1.append(Item(10 + i))
+            for i in range(n2): f2.append(Item(20 + i))
+            f1.steal(f2)
+            assert f2.is_empty()
+            for x in range(10, 10+n1) + range(20, 20+n2):
+                assert not f1.is_empty()
+                item = f1.popleft()
+                assert item.value == x
+            assert f1.is_empty()

From noreply at buildbot.pypy.org  Sun Jan 22 11:29:34 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sun, 22 Jan 2012 11:29:34 +0100 (CET)
Subject: [pypy-commit] pypy stm: Hacks to make checkmodule() work.
Message-ID: <20120122102934.CB7C4821FA@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo <arigo@tunes.org>
Branch: stm
Changeset: r51634:69e39e76c0d2
Date: 2012-01-22 11:29 +0100
http://bitbucket.org/pypy/pypy/changeset/69e39e76c0d2/

Log:	Hacks to make checkmodule() work.

diff --git a/pypy/module/transaction/__init__.py b/pypy/module/transaction/__init__.py
--- a/pypy/module/transaction/__init__.py
+++ b/pypy/module/transaction/__init__.py
@@ -18,3 +18,8 @@
     def startup(self, space):
         from pypy.module.transaction import interp_transaction
         interp_transaction.state.startup(space)
+
+    def translating_for_checkmodule(self, space):
+        from pypy.module.transaction import interp_transaction
+        interp_transaction.state.space = space
+        interp_transaction.state._freeze_ = lambda: None   # hack!
diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py --- a/pypy/module/transaction/interp_transaction.py +++ b/pypy/module/transaction/interp_transaction.py @@ -19,6 +19,8 @@ self.ll_lock = threadintf.null_ll_lock self.ll_no_tasks_pending_lock = threadintf.null_ll_lock self.ll_unfinished_lock = threadintf.null_ll_lock + self.main_thread_id = 0 + self.w_error = None def startup(self, space): self.space = space diff --git a/pypy/objspace/fake/checkmodule.py b/pypy/objspace/fake/checkmodule.py --- a/pypy/objspace/fake/checkmodule.py +++ b/pypy/objspace/fake/checkmodule.py @@ -10,5 +10,7 @@ module = mod.Module(space, W_Root()) for name in module.loaders: module._load_lazily(space, name) + if hasattr(module, 'translating_for_checkmodule'): + module.translating_for_checkmodule(space) # space.translates(**{'translation.list_comprehension_operations':True}) From noreply at buildbot.pypy.org Sun Jan 22 11:56:45 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 22 Jan 2012 11:56:45 +0100 (CET) Subject: [pypy-commit] pypy stm: print the numeric reason when aborting. Message-ID: <20120122105645.57BD1821FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm Changeset: r51635:3c073f6abf1a Date: 2012-01-22 11:56 +0100 http://bitbucket.org/pypy/pypy/changeset/3c073f6abf1a/ Log: print the numeric reason when aborting. 
diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c --- a/pypy/translator/stm/src_stm/et.c +++ b/pypy/translator/stm/src_stm/et.c @@ -271,8 +271,9 @@ d->num_aborts[reason]++; #ifdef RPY_STM_DEBUG_PRINT PYPY_DEBUG_START("stm-abort"); - if (PYPY_HAVE_DEBUG_PRINTS) fprintf(PYPY_DEBUG_FILE, "thread %lx aborting\n", - (long)pthread_self()); + if (PYPY_HAVE_DEBUG_PRINTS) + fprintf(PYPY_DEBUG_FILE, "thread %lx aborting %d\n", + (long)pthread_self(), reason); PYPY_DEBUG_STOP("stm-abort"); #endif tx_restart(d); From noreply at buildbot.pypy.org Sun Jan 22 12:06:39 2012 From: noreply at buildbot.pypy.org (rguillebert) Date: Sun, 22 Jan 2012 12:06:39 +0100 (CET) Subject: [pypy-commit] pypy py3k: The test was wrong... Message-ID: <20120122110639.D3F05821FA@wyvern.cs.uni-duesseldorf.de> Author: Romain Guillebert Branch: py3k Changeset: r51636:0b136e560f89 Date: 2012-01-22 12:05 +0100 http://bitbucket.org/pypy/pypy/changeset/0b136e560f89/ Log: The test was wrong... diff --git a/pypy/interpreter/test/test_interpreter.py b/pypy/interpreter/test/test_interpreter.py --- a/pypy/interpreter/test/test_interpreter.py +++ b/pypy/interpreter/test/test_interpreter.py @@ -251,7 +251,7 @@ code = """ def f(a, b, *, c, d): return a, b, c, d """ - assert self.codetest(code, "f", [1, 2], {"d" : 3, "c" : 4}) == (1, 2, 3, 4) + assert self.codetest(code, "f", [1, 2], {"d" : 4, "c" : 3}) == (1, 2, 3, 4) From noreply at buildbot.pypy.org Sun Jan 22 12:09:59 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 12:09:59 +0100 (CET) Subject: [pypy-commit] pypy py3k: Easy fixes in test_compiler Message-ID: <20120122110959.29806821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51637:52eaf273d8ed Date: 2012-01-21 22:36 +0100 http://bitbucket.org/pypy/pypy/changeset/52eaf273d8ed/ Log: Easy fixes in test_compiler diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py 
b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -226,8 +226,7 @@ def test_funccalls(self): decl = py.code.Source(""" def f(*args, **kwds): - kwds = kwds.items() - kwds.sort() + kwds = sorted(kwds.items()) return list(args) + kwds """) decl = str(decl) + '\n' @@ -328,7 +327,7 @@ from __foo__.bar import x try: A().m() - except ImportError, e: + except ImportError as e: msg = str(e) ''', "msg", "No module named __foo__") @@ -519,8 +518,8 @@ """, 'x', [True, 3, 4, 6] def test_type_of_constants(self): - yield self.simple_test, "x=[0, 0L]", 'type(x[1])', long - yield self.simple_test, "x=[(1,0), (1,0L)]", 'type(x[1][1])', long + yield self.simple_test, "x=[0, 0.]", 'type(x[1])', float + yield self.simple_test, "x=[(1,0), (1,0.)]", 'type(x[1][1])', float yield self.simple_test, "x=['2?-', '2?-']", 'id(x[0])==id(x[1])', True def test_pprint(self): @@ -646,17 +645,15 @@ #Indexing for key, value in self.reference.items(): self.assertEqual(d[key], value) - knownkey = self.other.keys()[0] + knownkey = next(iter(self.other)) self.failUnlessRaises(KeyError, lambda:d[knownkey]) #len self.assertEqual(len(p), 0) self.assertEqual(len(d), len(self.reference)) #has_key for k in self.reference: - self.assert_(d.has_key(k)) self.assert_(k in d) for k in self.other: - self.failIf(d.has_key(k)) self.failIf(k in d) #cmp self.assertEqual(cmp(p,p), 0) diff --git a/pypy/interpreter/pyparser/test/expressions.py b/pypy/interpreter/pyparser/test/expressions.py --- a/pypy/interpreter/pyparser/test/expressions.py +++ b/pypy/interpreter/pyparser/test/expressions.py @@ -4,11 +4,11 @@ constants = [ "0", + "00", "7", "-3", - "053", + "0o53", "0x18", - "14L", "1.0", "3.9", "-3.6", From noreply at buildbot.pypy.org Sun Jan 22 12:10:00 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 12:10:00 +0100 (CET) Subject: [pypy-commit] pypy py3k: One more fix for 
test_compiler Message-ID: <20120122111000.67318821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51638:e27203d55679 Date: 2012-01-21 22:39 +0100 http://bitbucket.org/pypy/pypy/changeset/e27203d55679/ Log: One more fix for test_compiler diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -588,7 +588,7 @@ w_iterable = self.popvalue() items = self.space.fixedview(w_iterable) itemcount = len(items) - if right < itemcount: + if right > itemcount: count = left + right if count == 1: plural = '' From noreply at buildbot.pypy.org Sun Jan 22 12:10:01 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 12:10:01 +0100 (CET) Subject: [pypy-commit] pypy py3k: The exception handler target "except ValueError as exc" was always compiled as a global variable. Test and fix. Message-ID: <20120122111001.A2D7B821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51639:e325e4d3227a Date: 2012-01-22 12:02 +0100 http://bitbucket.org/pypy/pypy/changeset/e325e4d3227a/ Log: The exception handler target "except ValueError as exc" was always compiled as a global variable. Test and fix. 
diff --git a/pypy/interpreter/astcompiler/symtable.py b/pypy/interpreter/astcompiler/symtable.py --- a/pypy/interpreter/astcompiler/symtable.py +++ b/pypy/interpreter/astcompiler/symtable.py @@ -417,6 +417,11 @@ def visit_alias(self, alias): self._visit_alias(alias) + def visit_ExceptHandler(self, handler): + if handler.name: + self.note_symbol(handler.name, SYM_ASSIGNED) + ast.GenericASTVisitor.visit_ExceptHandler(self, handler) + def visit_Yield(self, yie): self.scope.note_yield(yie) ast.GenericASTVisitor.visit_Yield(self, yie) diff --git a/pypy/interpreter/astcompiler/test/test_symtable.py b/pypy/interpreter/astcompiler/test/test_symtable.py --- a/pypy/interpreter/astcompiler/test/test_symtable.py +++ b/pypy/interpreter/astcompiler/test/test_symtable.py @@ -142,6 +142,10 @@ scp = self.func_scope("def f(): x") assert scp.lookup("x") == symtable.SCOPE_GLOBAL_IMPLICIT + def test_exception_variable(self): + scp = self.mod_scope("try: pass\nexcept ValueError as e: pass") + assert scp.lookup("e") == symtable.SCOPE_LOCAL + def test_nested_scopes(self): def nested_scope(*bodies): names = enumerate("f" + string.ascii_letters) From noreply at buildbot.pypy.org Sun Jan 22 12:10:02 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 12:10:02 +0100 (CET) Subject: [pypy-commit] pypy py3k: Fixes in test_interpreter Message-ID: <20120122111002.DE1F4821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r51640:4ce0e0c33cd2 Date: 2012-01-22 12:04 +0100 http://bitbucket.org/pypy/pypy/changeset/4ce0e0c33cd2/ Log: Fixes in test_interpreter diff --git a/pypy/interpreter/test/test_interpreter.py b/pypy/interpreter/test/test_interpreter.py --- a/pypy/interpreter/test/test_interpreter.py +++ b/pypy/interpreter/test/test_interpreter.py @@ -38,7 +38,7 @@ def f(): try: raise Exception() - except Exception, e: + except Exception as e: return 1 return 2 ''', 'f', []) @@ -48,8 +48,8 @@ x = self.codetest(''' def f(): try: - raise 
Exception, 1 - except Exception, e: + raise Exception(1) + except Exception as e: return e.args[0] ''', 'f', []) assert x == 1 @@ -83,7 +83,7 @@ z = 0 try: "x"+1 - except TypeError, e: + except TypeError as e: z = 5 raise e except TypeError: @@ -97,7 +97,7 @@ z = 0 try: z = 1//v - except ZeroDivisionError, e: + except ZeroDivisionError as e: z = "infinite result" return z ''' @@ -295,20 +295,23 @@ self.data.append((type(x), x)) sys.stdout = out = Out() try: - print(unichr(0xa2)) - assert out.data == [(unicode, unichr(0xa2)), (str, "\n")] + print(chr(0xa2)) + assert out.data == [(str, chr(0xa2)), (str, "\n")] out.data = [] out.encoding = "cp424" # ignored! - print(unichr(0xa2)) - assert out.data == [(unicode, unichr(0xa2)), (str, "\n")] + print(chr(0xa2)) + assert out.data == [(str, chr(0xa2)), (str, "\n")] del out.data[:] del out.encoding - print "foo\t", "bar\n", "trick", "baz\n" # softspace handling - assert out.data == [(unicode, "foo\t"), - (unicode, "bar\n"), - (unicode, "trick"), + # we don't care about softspace anymore + print("foo\t", "bar\n", "trick", "baz\n") + assert out.data == [(str, "foo\t"), (str, " "), - (unicode, "baz\n"), + (str, "bar\n"), + (str, " "), + (str, "trick"), + (str, " "), + (str, "baz\n"), (str, "\n")] finally: sys.stdout = save From noreply at buildbot.pypy.org Sun Jan 22 12:22:34 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 22 Jan 2012 12:22:34 +0100 (CET) Subject: [pypy-commit] pypy default: hg backout c5d041657831: I will move it to the stdobjspace instead, Message-ID: <20120122112234.8F7BE821FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51641:937fb53e76ac Date: 2012-01-22 12:19 +0100 http://bitbucket.org/pypy/pypy/changeset/937fb53e76ac/ Log: hg backout c5d041657831: I will move it to the stdobjspace instead, where it can be special-cased before we do all the "isinf()" and "isnan()" checks. 
diff --git a/pypy/rpython/lltypesystem/module/ll_math.py b/pypy/rpython/lltypesystem/module/ll_math.py --- a/pypy/rpython/lltypesystem/module/ll_math.py +++ b/pypy/rpython/lltypesystem/module/ll_math.py @@ -292,10 +292,6 @@ # deal directly with IEEE specials, to cope with problems on various # platforms whose semantics don't exactly match C99 - if y == 2.0: - return x * x # this is always a correct answer, and is relatively - # common in user programs. - if isnan(y): if x == 1.0: return 1.0 # 1**Nan = 1 From noreply at buildbot.pypy.org Sun Jan 22 12:22:35 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 22 Jan 2012 12:22:35 +0100 (CET) Subject: [pypy-commit] pypy default: Special-case "x ** 2" here instead. Message-ID: <20120122112235.CD675821FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51642:f1fb181c13a3 Date: 2012-01-22 12:22 +0100 http://bitbucket.org/pypy/pypy/changeset/f1fb181c13a3/ Log: Special-case "x ** 2" here instead. diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py --- a/pypy/objspace/std/floatobject.py +++ b/pypy/objspace/std/floatobject.py @@ -443,6 +443,8 @@ y = w_float2.floatval # Sort out special cases here instead of relying on pow() + if y == 2.0: # special case for performance: + return W_FloatObject(x * x) # x * x is always correct if y == 0.0: # x**0 is 1, even 0**0 return W_FloatObject(1.0) From noreply at buildbot.pypy.org Sun Jan 22 12:31:00 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 22 Jan 2012 12:31:00 +0100 (CET) Subject: [pypy-commit] pypy default: Write a test_pypy_c checking that "f ** 2" gets us just a float_mul Message-ID: <20120122113100.2D5EF821FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51643:e82f6594d309 Date: 2012-01-22 12:30 +0100 http://bitbucket.org/pypy/pypy/changeset/e82f6594d309/ Log: Write a test_pypy_c checking that "f ** 2" gets us just a float_mul operation. 
(Needs to check if the test really passes or if we are getting a few extra stuff too.) diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -90,3 +90,18 @@ --TICK-- jump(..., descr=) """) + + def test_pow_two(self): + def main(n): + s = 0.123 + while n > 0: + s -= s ** 2 # ID: pow + n -= 1 + return s + log = self.run(main, [500]) + assert abs(log.result - main(500)) < 1e-9 + loop, = log.loops_by_filename(self.filepath) + assert loop.match_by_id("pow", """ + f2 = float_mul(f1, f1) + f3 = float_sub(f1, f2) + """) From noreply at buildbot.pypy.org Sun Jan 22 12:38:02 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 12:38:02 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Fix method used to find cjkencodings data files Message-ID: <20120122113802.78872821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51644:70241d66bf5f Date: 2012-01-22 12:36 +0100 http://bitbucket.org/pypy/pypy/changeset/70241d66bf5f/ Log: Fix method used to find cjkencodings data files diff --git a/lib-python/modified-2.7/test/test_multibytecodec_support.py b/lib-python/modified-2.7/test/test_multibytecodec_support.py --- a/lib-python/modified-2.7/test/test_multibytecodec_support.py +++ b/lib-python/modified-2.7/test/test_multibytecodec_support.py @@ -330,7 +330,7 @@ repr(csetch), repr(unich), exc.reason)) def load_teststring(name): - dir = os.path.join(os.path.dirname(__file__), 'cjkencodings') + dir = test_support.findfile('cjkencodings') with open(os.path.join(dir, name + '.txt'), 'rb') as f: encoded = f.read() with open(os.path.join(dir, name + '-utf8.txt'), 'rb') as f: From noreply at buildbot.pypy.org Sun Jan 22 14:35:38 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 14:35:38 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Respect 
__subclasscheck__ when rebinding a method to a more specific class. Message-ID: <20120122133538.B1CB6821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51645:24cea2c24401 Date: 2012-01-22 14:33 +0100 http://bitbucket.org/pypy/pypy/changeset/24cea2c24401/ Log: Respect __subclasscheck__ when rebinding a method to a more specific class. This change is needed to make OrderedDict() work at all, but it may have unexpected performance impact. diff --git a/pypy/interpreter/function.py b/pypy/interpreter/function.py --- a/pypy/interpreter/function.py +++ b/pypy/interpreter/function.py @@ -497,7 +497,8 @@ # only allow binding to a more specific class than before if (w_cls is not None and not space.is_w(w_cls, space.w_None) and - not space.abstract_issubclass_w(w_cls, self.w_class)): + not space.is_true( + space.issubtype_allow_override(w_cls, self.w_class))): return space.wrap(self) # subclass test failed else: return descr_function_get(space, self.w_function, w_obj, w_cls) diff --git a/pypy/objspace/test/test_descroperation.py b/pypy/objspace/test/test_descroperation.py --- a/pypy/objspace/test/test_descroperation.py +++ b/pypy/objspace/test/test_descroperation.py @@ -641,6 +641,19 @@ assert issubclass(B, B) assert issubclass(23, B) + def test_issubclass_and_method(self): + class Meta(type): + def __subclasscheck__(cls, sub): + if sub is Dict: + return True + class A: + __metaclass__ = Meta + def method(self): + return 42 + class Dict: + method = A.method + assert Dict().method() == 42 + def test_truth_of_long(self): class X(object): def __len__(self): return 1L From notifications-noreply at bitbucket.org Sun Jan 22 16:33:00 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Sun, 22 Jan 2012 15:33:00 -0000 Subject: [pypy-commit] Notification: benchmarks Message-ID: <20120122153300.3632.18640@bitbucket13.managed.contegix.com> You have received a notification from Carsten Senger. Hi, I forked benchmarks. 
My fork is at https://bitbucket.org/csenger/benchmarks. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Sun Jan 22 17:27:09 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 17:27:09 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Undo the previous change, and just add "allow_override=True". Message-ID: <20120122162709.9CB7D821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51646:89f238f17ecb Date: 2012-01-22 17:25 +0100 http://bitbucket.org/pypy/pypy/changeset/89f238f17ecb/ Log: Undo the previous change, and just add "allow_override=True". This fixes the cases when an old-style class in involved. diff --git a/pypy/interpreter/function.py b/pypy/interpreter/function.py --- a/pypy/interpreter/function.py +++ b/pypy/interpreter/function.py @@ -497,8 +497,8 @@ # only allow binding to a more specific class than before if (w_cls is not None and not space.is_w(w_cls, space.w_None) and - not space.is_true( - space.issubtype_allow_override(w_cls, self.w_class))): + not space.abstract_issubclass_w(w_cls, self.w_class, + allow_override=True)): return space.wrap(self) # subclass test failed else: return descr_function_get(space, self.w_function, w_obj, w_cls) diff --git a/pypy/interpreter/test/test_function.py b/pypy/interpreter/test/test_function.py --- a/pypy/interpreter/test/test_function.py +++ b/pypy/interpreter/test/test_function.py @@ -597,6 +597,17 @@ # --- with an incompatible class w_meth5 = meth3.descr_method_get(space.wrap('hello'), space.w_str) assert space.is_w(w_meth5, w_meth3) + # Same thing, with an old-style class + w_oldclass = space.call_function( + space.builtin.get('__metaclass__'), + space.wrap('OldClass'), space.newtuple([]), space.newdict()) + w_meth6 = meth3.descr_method_get(space.wrap('hello'), w_oldclass) + assert space.is_w(w_meth6, w_meth3) + # Reverse order of old/new styles + w_meth7 = descr_function_get(space, 
func, space.w_None, w_oldclass) + meth7 = space.unwrap(w_meth7) + w_meth8 = meth7.descr_method_get(space.wrap('hello'), space.w_str) + assert space.is_w(w_meth8, w_meth7) class TestShortcuts(object): From noreply at buildbot.pypy.org Sun Jan 22 17:30:01 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 22 Jan 2012 17:30:01 +0100 (CET) Subject: [pypy-commit] pypy default: fix the test. Message-ID: <20120122163001.DEBCC821FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r51647:7cd209e0414e Date: 2012-01-22 17:29 +0100 http://bitbucket.org/pypy/pypy/changeset/7cd209e0414e/ Log: fix the test. diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -102,6 +102,7 @@ assert abs(log.result - main(500)) < 1e-9 loop, = log.loops_by_filename(self.filepath) assert loop.match_by_id("pow", """ - f2 = float_mul(f1, f1) - f3 = float_sub(f1, f2) + guard_not_invalidated(descr=...) 
+ f38 = float_mul(f30, f30) + f39 = float_sub(f30, f38) """) From noreply at buildbot.pypy.org Sun Jan 22 18:53:27 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 22 Jan 2012 18:53:27 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: fix JSON tests Message-ID: <20120122175327.2E399821FA@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: merge-2.7.2 Changeset: r51648:e5417ef18814 Date: 2012-01-22 11:53 -0600 http://bitbucket.org/pypy/pypy/changeset/e5417ef18814/ Log: fix JSON tests diff --git a/lib-python/modified-2.7/json/tests/test_decode.py b/lib-python/modified-2.7/json/tests/test_decode.py --- a/lib-python/modified-2.7/json/tests/test_decode.py +++ b/lib-python/modified-2.7/json/tests/test_decode.py @@ -23,12 +23,10 @@ self.assertEqual(rval, {"key":"value", "k":"v"}) def test_empty_objects(self): - s = '{}' - self.assertEqual(json.loads(s), eval(s)) - s = '[]' - self.assertEqual(json.loads(s), eval(s)) - s = '""' - self.assertEqual(json.loads(s), eval(s)) + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' diff --git a/lib-python/modified-2.7/json/tests/test_unicode.py b/lib-python/modified-2.7/json/tests/test_unicode.py --- a/lib-python/modified-2.7/json/tests/test_unicode.py +++ b/lib-python/modified-2.7/json/tests/test_unicode.py @@ -81,9 +81,9 @@ self.assertEqual(type(self.loads('"foo"')), unicode) def test_encode_not_utf_8(self): - self.assertEqual(json.dumps('\xb1\xe6', encoding='iso8859-2'), + self.assertEqual(self.dumps('\xb1\xe6', encoding='iso8859-2'), '"\\u0105\\u0107"') - self.assertEqual(json.dumps(['\xb1\xe6'], encoding='iso8859-2'), + self.assertEqual(self.dumps(['\xb1\xe6'], encoding='iso8859-2'), '["\\u0105\\u0107"]') From noreply at buildbot.pypy.org Sun Jan 22 19:09:51 2012 From: noreply at 
buildbot.pypy.org (alex_gaynor) Date: Sun, 22 Jan 2012 19:09:51 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Fix for time.mktime, -1 is a valid return value in some cases. Message-ID: <20120122180951.60E1F821FA@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: merge-2.7.2 Changeset: r51649:a0e53cbd4f7c Date: 2012-01-22 12:09 -0600 http://bitbucket.org/pypy/pypy/changeset/a0e53cbd4f7c/ Log: Fix for time.mktime, -1 is a valid return value in some cases. diff --git a/pypy/module/rctime/interp_time.py b/pypy/module/rctime/interp_time.py --- a/pypy/module/rctime/interp_time.py +++ b/pypy/module/rctime/interp_time.py @@ -498,8 +498,11 @@ Convert a time tuple in local time to seconds since the Epoch.""" buf = _gettmarg(space, w_tup, allowNone=False) + rffi.setintfield(buf, "c_tm_wday", -1) tt = c_mktime(buf) - if tt == -1: + # A return value of -1 does not necessarily mean an error, but tm_wday + # cannot remain set to -1 if mktime succeeds. + if tt == -1 and rffi.getintfield(buf, "c_tm_wday") == -1: raise OperationError(space.w_OverflowError, space.wrap("mktime argument out of range")) diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -14,7 +14,7 @@ assert isinstance(rctime.timezone, int) assert isinstance(rctime.tzname, tuple) assert isinstance(rctime.__doc__, str) - + def test_sleep(self): import time as rctime import sys @@ -46,7 +46,7 @@ assert isinstance(res, str) rctime.ctime(rctime.time()) raises(ValueError, rctime.ctime, 1E200) - + def test_gmtime(self): import time as rctime raises(TypeError, rctime.gmtime, "foo") @@ -75,7 +75,7 @@ assert 0 <= (t1 - t0) < 1.2 t = rctime.time() assert rctime.localtime(t) == rctime.localtime(t) - + def test_mktime(self): import time as rctime import os, sys @@ -85,30 +85,32 @@ raises(TypeError, rctime.mktime, (1, 2, 3, 4, 5, 6, 'f', 8, 9)) res = rctime.mktime(rctime.localtime()) 
assert isinstance(res, float) - + ltime = rctime.localtime() rctime.accept2dyear == 0 ltime = list(ltime) ltime[0] = 1899 raises(ValueError, rctime.mktime, tuple(ltime)) rctime.accept2dyear == 1 - + ltime = list(ltime) ltime[0] = 67 ltime = tuple(ltime) if os.name != "nt" and sys.maxint < 1<<32: # time_t may be 64bit raises(OverflowError, rctime.mktime, ltime) - + ltime = list(ltime) ltime[0] = 100 raises(ValueError, rctime.mktime, tuple(ltime)) - + t = rctime.time() assert long(rctime.mktime(rctime.localtime(t))) == long(t) assert long(rctime.mktime(rctime.gmtime(t))) - rctime.timezone == long(t) ltime = rctime.localtime() assert rctime.mktime(tuple(ltime)) == rctime.mktime(ltime) - + + assert rctime.mktime(rctime.localtime(-1)) == -1 + def test_asctime(self): import time as rctime rctime.asctime() @@ -138,18 +140,18 @@ st_time = rctime.struct_time(tup) assert str(st_time).startswith('time.struct_time(tm_year=1, ') assert len(st_time) == len(tup) - + def test_tzset(self): import time as rctime import os - + if not os.name == "posix": skip("tzset available only under Unix") - + # epoch time of midnight Dec 25th 2002. Never DST in northern # hemisphere. xmas2002 = 1040774400.0 - + # these formats are correct for 2002, and possibly future years # this format is the 'standard' as documented at: # http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap08.html @@ -158,7 +160,7 @@ eastern = 'EST+05EDT,M4.1.0,M10.5.0' victoria = 'AEST-10AEDT-11,M10.5.0,M3.5.0' utc = 'UTC+0' - + org_TZ = os.environ.get('TZ', None) try: # Make sure we can switch to UTC time and results are correct @@ -172,7 +174,7 @@ assert rctime.daylight == 0 assert rctime.timezone == 0 assert rctime.localtime(xmas2002).tm_isdst == 0 - + # make sure we can switch to US/Eastern os.environ['TZ'] = eastern rctime.tzset() @@ -183,7 +185,7 @@ assert rctime.timezone == 18000 assert rctime.altzone == 14400 assert rctime.localtime(xmas2002).tm_isdst == 0 - + # now go to the southern hemisphere. 
os.environ['TZ'] = victoria rctime.tzset() @@ -206,7 +208,7 @@ def test_strftime(self): import time as rctime - + t = rctime.time() tt = rctime.gmtime(t) for directive in ('a', 'A', 'b', 'B', 'c', 'd', 'H', 'I', @@ -214,7 +216,7 @@ 'U', 'w', 'W', 'x', 'X', 'y', 'Y', 'Z', '%'): format = ' %' + directive rctime.strftime(format, tt) - + raises(TypeError, rctime.strftime, ()) raises(TypeError, rctime.strftime, (1,)) raises(TypeError, rctime.strftime, range(8)) @@ -234,10 +236,10 @@ def test_strftime_bounds_checking(self): import time as rctime - + # make sure that strftime() checks the bounds of the various parts # of the time tuple. - + # check year raises(ValueError, rctime.strftime, '', (1899, 1, 1, 0, 0, 0, 0, 1, -1)) if rctime.accept2dyear: From noreply at buildbot.pypy.org Sun Jan 22 19:16:41 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 22 Jan 2012 19:16:41 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Change to the new exception for bytearray().pop() Message-ID: <20120122181641.8E37A821FA@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: merge-2.7.2 Changeset: r51650:f17c2e5a8629 Date: 2012-01-22 12:16 -0600 http://bitbucket.org/pypy/pypy/changeset/f17c2e5a8629/ Log: Change to the new exception for bytearray().pop() diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -414,8 +414,8 @@ result = w_bytearray.data.pop(index) except IndexError: if not w_bytearray.data: - raise OperationError(space.w_OverflowError, space.wrap( - "cannot pop an empty bytearray")) + raise OperationError(space.w_IndexError, space.wrap( + "pop from empty bytearray")) raise OperationError(space.w_IndexError, space.wrap( "pop index out of range")) return space.wrap(ord(result)) diff --git a/pypy/objspace/std/test/test_bytearrayobject.py b/pypy/objspace/std/test/test_bytearrayobject.py --- 
a/pypy/objspace/std/test/test_bytearrayobject.py +++ b/pypy/objspace/std/test/test_bytearrayobject.py @@ -266,7 +266,7 @@ assert b.pop(0) == ord('w') assert b.pop(-2) == ord('r') raises(IndexError, b.pop, 10) - raises(OverflowError, bytearray().pop) + raises(IndexError, bytearray().pop) assert bytearray('\xff').pop() == 0xff def test_remove(self): From noreply at buildbot.pypy.org Sun Jan 22 19:23:34 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 19:23:34 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: CPython issue11391: Fix a mmap crasher Message-ID: <20120122182334.75D16821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51651:9def34947826 Date: 2012-01-22 17:55 +0100 http://bitbucket.org/pypy/pypy/changeset/9def34947826/ Log: CPython issue11391: Fix a mmap crasher diff --git a/pypy/rlib/rmmap.py b/pypy/rlib/rmmap.py --- a/pypy/rlib/rmmap.py +++ b/pypy/rlib/rmmap.py @@ -625,13 +625,16 @@ flags = MAP_PRIVATE prot = PROT_READ | PROT_WRITE elif access == _ACCESS_DEFAULT: - pass + # map prot to access type + if prot & PROT_READ and prot & PROT_WRITE: + pass # _ACCESS_DEFAULT + elif prot & PROT_WRITE: + access = ACCESS_WRITE + else: + access = ACCESS_READ else: raise RValueError("mmap invalid access parameter.") - if prot == PROT_READ: - access = ACCESS_READ - # check file size try: st = os.fstat(fd) diff --git a/pypy/rlib/test/test_rmmap.py b/pypy/rlib/test/test_rmmap.py --- a/pypy/rlib/test/test_rmmap.py +++ b/pypy/rlib/test/test_rmmap.py @@ -263,10 +263,23 @@ f.flush() m = mmap.mmap(f.fileno(), 6, prot=mmap.PROT_READ) raises(RTypeError, m.write, "foo") + m.close() + f.close() + + def test_write_without_protwrite(self): + if os.name == "nt": + skip("Needs PROT_WRITE") + f = open(self.tmpname + "l2", "w+") + f.write("foobar") + f.flush() + m = mmap.mmap(f.fileno(), 6, prot=~mmap.PROT_WRITE) + raises(RTypeError, m.write_byte, 'a') + raises(RTypeError, m.write, "foo") + m.close() f.close() 
def test_size(self): - f = open(self.tmpname + "l", "w+") + f = open(self.tmpname + "l3", "w+") f.write("foobar") f.flush() From noreply at buildbot.pypy.org Sun Jan 22 19:23:35 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 19:23:35 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: test_ssl: Correctly use the helper function to find data files Message-ID: <20120122182335.A9480821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51652:a49166fe832a Date: 2012-01-22 18:01 +0100 http://bitbucket.org/pypy/pypy/changeset/a49166fe832a/ Log: test_ssl: Correctly use the helper function to find data files diff --git a/lib-python/modified-2.7/test/test_ssl.py b/lib-python/modified-2.7/test/test_ssl.py --- a/lib-python/modified-2.7/test/test_ssl.py +++ b/lib-python/modified-2.7/test/test_ssl.py @@ -1334,7 +1334,6 @@ global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT CERTFILE = test_support.findfile("keycert.pem") SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( - os.path.dirname(__file__) or os.curdir, "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or From noreply at buildbot.pypy.org Sun Jan 22 19:23:36 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 19:23:36 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Fix our copy of "sysconfig.py", which I broke during the merge Message-ID: <20120122182336.DE394821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51653:9f554debdd92 Date: 2012-01-22 18:03 +0100 http://bitbucket.org/pypy/pypy/changeset/9f554debdd92/ Log: Fix our copy of "sysconfig.py", which I broke during the merge diff --git a/lib-python/modified-2.7/distutils/sysconfig.py b/lib-python/modified-2.7/distutils/sysconfig.py --- a/lib-python/modified-2.7/distutils/sysconfig.py +++ b/lib-python/modified-2.7/distutils/sysconfig.py @@ -9,563 +9,21 @@ Email: """ -__revision__ = "$Id$" +__revision__ = "$Id: sysconfig.py 
85358 2010-10-10 09:54:59Z antoine.pitrou $" -import os -import re -import string import sys -from distutils.errors import DistutilsPlatformError -# These are needed in a couple of spots, so just compute them once. -PREFIX = os.path.normpath(sys.prefix) -EXEC_PREFIX = os.path.normpath(sys.exec_prefix) +# The content of this file is redirected from +# sysconfig_cpython or sysconfig_pypy. -# Path to the base directory of the project. On Windows the binary may -# live in project/PCBuild9. If we're dealing with an x64 Windows build, -# it'll live in project/PCbuild/amd64. -project_base = os.path.dirname(os.path.abspath(sys.executable)) -if os.name == "nt" and "pcbuild" in project_base[-8:].lower(): - project_base = os.path.abspath(os.path.join(project_base, os.path.pardir)) -# PC/VS7.1 -if os.name == "nt" and "\\pc\\v" in project_base[-10:].lower(): - project_base = os.path.abspath(os.path.join(project_base, os.path.pardir, - os.path.pardir)) -# PC/AMD64 -if os.name == "nt" and "\\pcbuild\\amd64" in project_base[-14:].lower(): - project_base = os.path.abspath(os.path.join(project_base, os.path.pardir, - os.path.pardir)) +if '__pypy__' in sys.builtin_module_names: + from distutils.sysconfig_pypy import * + from distutils.sysconfig_pypy import _config_vars # needed by setuptools + from distutils.sysconfig_pypy import _variable_rx # read_setup_file() +else: + from distutils.sysconfig_cpython import * + from distutils.sysconfig_cpython import _config_vars # needed by setuptools + from distutils.sysconfig_cpython import _variable_rx # read_setup_file() -# python_build: (Boolean) if true, we're either building Python or -# building an extension with an un-installed Python, so we use -# different (hard-wired) directories. 
-# Setup.local is available for Makefile builds including VPATH builds, -# Setup.dist is available on Windows -def _python_build(): - for fn in ("Setup.dist", "Setup.local"): - if os.path.isfile(os.path.join(project_base, "Modules", fn)): - return True - return False -python_build = _python_build() - -def get_python_version(): - """Return a string containing the major and minor Python version, - leaving off the patchlevel. Sample return values could be '1.5' - or '2.2'. - """ - return sys.version[:3] - - -def get_python_inc(plat_specific=0, prefix=None): - """Return the directory containing installed Python header files. - - If 'plat_specific' is false (the default), this is the path to the - non-platform-specific header files, i.e. Python.h and so on; - otherwise, this is the path to platform-specific header files - (namely pyconfig.h). - - If 'prefix' is supplied, use it instead of sys.prefix or - sys.exec_prefix -- i.e., ignore 'plat_specific'. - """ - if prefix is None: - prefix = plat_specific and EXEC_PREFIX or PREFIX - - if os.name == "posix": - if python_build: - buildir = os.path.dirname(sys.executable) - if plat_specific: - # python.h is located in the buildir - inc_dir = buildir - else: - # the source dir is relative to the buildir - srcdir = os.path.abspath(os.path.join(buildir, - get_config_var('srcdir'))) - # Include is located in the srcdir - inc_dir = os.path.join(srcdir, "Include") - return inc_dir - return os.path.join(prefix, "include", "python" + get_python_version()) - elif os.name == "nt": - return os.path.join(prefix, "include") - elif os.name == "os2": - return os.path.join(prefix, "Include") - else: - raise DistutilsPlatformError( - "I don't know where Python installs its C header files " - "on platform '%s'" % os.name) - - -def get_python_lib(plat_specific=0, standard_lib=0, prefix=None): - """Return the directory containing the Python library (standard or - site additions). 
- - If 'plat_specific' is true, return the directory containing - platform-specific modules, i.e. any module from a non-pure-Python - module distribution; otherwise, return the platform-shared library - directory. If 'standard_lib' is true, return the directory - containing standard Python library modules; otherwise, return the - directory for site-specific modules. - - If 'prefix' is supplied, use it instead of sys.prefix or - sys.exec_prefix -- i.e., ignore 'plat_specific'. - """ - if prefix is None: - prefix = plat_specific and EXEC_PREFIX or PREFIX - - if os.name == "posix": - libpython = os.path.join(prefix, - "lib", "python" + get_python_version()) - if standard_lib: - return libpython - else: - return os.path.join(libpython, "site-packages") - - elif os.name == "nt": - if standard_lib: - return os.path.join(prefix, "Lib") - else: - if get_python_version() < "2.2": - return prefix - else: - return os.path.join(prefix, "Lib", "site-packages") - - elif os.name == "os2": - if standard_lib: - return os.path.join(prefix, "Lib") - else: - return os.path.join(prefix, "Lib", "site-packages") - - else: - raise DistutilsPlatformError( - "I don't know where Python installs its library " - "on platform '%s'" % os.name) - - -def customize_compiler(compiler): - """Do any platform-specific customization of a CCompiler instance. - - Mainly needed on Unix, so we can plug in the information that - varies across Unices and is stored in Python's Makefile. 
- """ - if compiler.compiler_type == "unix": - (cc, cxx, opt, cflags, ccshared, ldshared, so_ext) = \ - get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS', - 'CCSHARED', 'LDSHARED', 'SO') - - if 'CC' in os.environ: - cc = os.environ['CC'] - if 'CXX' in os.environ: - cxx = os.environ['CXX'] - if 'LDSHARED' in os.environ: - ldshared = os.environ['LDSHARED'] - if 'CPP' in os.environ: - cpp = os.environ['CPP'] - else: - cpp = cc + " -E" # not always - if 'LDFLAGS' in os.environ: - ldshared = ldshared + ' ' + os.environ['LDFLAGS'] - if 'CFLAGS' in os.environ: - cflags = opt + ' ' + os.environ['CFLAGS'] - ldshared = ldshared + ' ' + os.environ['CFLAGS'] - if 'CPPFLAGS' in os.environ: - cpp = cpp + ' ' + os.environ['CPPFLAGS'] - cflags = cflags + ' ' + os.environ['CPPFLAGS'] - ldshared = ldshared + ' ' + os.environ['CPPFLAGS'] - - cc_cmd = cc + ' ' + cflags - compiler.set_executables( - preprocessor=cpp, - compiler=cc_cmd, - compiler_so=cc_cmd + ' ' + ccshared, - compiler_cxx=cxx, - linker_so=ldshared, - linker_exe=cc) - - compiler.shared_lib_extension = so_ext - - -def get_config_h_filename(): - """Return full pathname of installed pyconfig.h file.""" - if python_build: - if os.name == "nt": - inc_dir = os.path.join(project_base, "PC") - else: - inc_dir = project_base - else: - inc_dir = get_python_inc(plat_specific=1) - if get_python_version() < '2.2': - config_h = 'config.h' - else: - # The name of the config.h file changed in 2.2 - config_h = 'pyconfig.h' - return os.path.join(inc_dir, config_h) - - -def get_makefile_filename(): - """Return full pathname of installed Makefile from the Python build.""" - if python_build: - return os.path.join(os.path.dirname(sys.executable), "Makefile") - lib_dir = get_python_lib(plat_specific=1, standard_lib=1) - return os.path.join(lib_dir, "config", "Makefile") - - -def parse_config_h(fp, g=None): - """Parse a config.h-style file. - - A dictionary containing name/value pairs is returned. 
If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. - """ - if g is None: - g = {} - define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n") - undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n") - # - while 1: - line = fp.readline() - if not line: - break - m = define_rx.match(line) - if m: - n, v = m.group(1, 2) - try: v = int(v) - except ValueError: pass - g[n] = v - else: - m = undef_rx.match(line) - if m: - g[m.group(1)] = 0 - return g - - -# Regexes needed for parsing Makefile (and similar syntaxes, -# like old-style Setup files). -_variable_rx = re.compile("([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") -_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") -_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") - -def parse_makefile(fn, g=None): - """Parse a Makefile-style file. - - A dictionary containing name/value pairs is returned. If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. 
- """ - from distutils.text_file import TextFile - fp = TextFile(fn, strip_comments=1, skip_blanks=1, join_lines=1) - - if g is None: - g = {} - done = {} - notdone = {} - - while 1: - line = fp.readline() - if line is None: # eof - break - m = _variable_rx.match(line) - if m: - n, v = m.group(1, 2) - v = v.strip() - # `$$' is a literal `$' in make - tmpv = v.replace('$$', '') - - if "$" in tmpv: - notdone[n] = v - else: - try: - v = int(v) - except ValueError: - # insert literal `$' - done[n] = v.replace('$$', '$') - else: - done[n] = v - - # do variable interpolation here - while notdone: - for name in notdone.keys(): - value = notdone[name] - m = _findvar1_rx.search(value) or _findvar2_rx.search(value) - if m: - n = m.group(1) - found = True - if n in done: - item = str(done[n]) - elif n in notdone: - # get it on a subsequent round - found = False - elif n in os.environ: - # do it like make: fall back to environment - item = os.environ[n] - else: - done[n] = item = "" - if found: - after = value[m.end():] - value = value[:m.start()] + item + after - if "$" in after: - notdone[name] = value - else: - try: value = int(value) - except ValueError: - done[name] = value.strip() - else: - done[name] = value - del notdone[name] - else: - # bogus variable reference; just drop it since we can't deal - del notdone[name] - - fp.close() - - # strip spurious spaces - for k, v in done.items(): - if isinstance(v, str): - done[k] = v.strip() - - # save the results in the global dictionary - g.update(done) - return g - - -def expand_makefile_vars(s, vars): - """Expand Makefile-style variables -- "${foo}" or "$(foo)" -- in - 'string' according to 'vars' (a dictionary mapping variable names to - values). Variables not present in 'vars' are silently expanded to the - empty string. The variable values in 'vars' should not contain further - variable expansions; if 'vars' is the output of 'parse_makefile()', - you're fine. Returns a variable-expanded version of 's'. 
- """ - - # This algorithm does multiple expansion, so if vars['foo'] contains - # "${bar}", it will expand ${foo} to ${bar}, and then expand - # ${bar}... and so forth. This is fine as long as 'vars' comes from - # 'parse_makefile()', which takes care of such expansions eagerly, - # according to make's variable expansion semantics. - - while 1: - m = _findvar1_rx.search(s) or _findvar2_rx.search(s) - if m: - (beg, end) = m.span() - s = s[0:beg] + vars.get(m.group(1)) + s[end:] - else: - break - return s - - -_config_vars = None - -def _init_posix(): - """Initialize the module as appropriate for POSIX systems.""" - g = {} - # load the installed Makefile: - try: - filename = get_makefile_filename() - parse_makefile(filename, g) - except IOError, msg: - my_msg = "invalid Python installation: unable to open %s" % filename - if hasattr(msg, "strerror"): - my_msg = my_msg + " (%s)" % msg.strerror - - raise DistutilsPlatformError(my_msg) - - # load the installed pyconfig.h: - try: - filename = get_config_h_filename() - parse_config_h(file(filename), g) - except IOError, msg: - my_msg = "invalid Python installation: unable to open %s" % filename - if hasattr(msg, "strerror"): - my_msg = my_msg + " (%s)" % msg.strerror - - raise DistutilsPlatformError(my_msg) - - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. 
- # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in g: - cfg_target = g['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' - % (cur_target, cfg_target)) - raise DistutilsPlatformError(my_msg) - - # On AIX, there are wrong paths to the linker scripts in the Makefile - # -- these paths are relative to the Python source, but when installed - # the scripts are in another directory. - if python_build: - g['LDSHARED'] = g['BLDSHARED'] - - elif get_python_version() < '2.1': - # The following two branches are for 1.5.2 compatibility. - if sys.platform == 'aix4': # what about AIX 3.x ? - # Linker script is in the config directory, not in Modules as the - # Makefile says. - python_lib = get_python_lib(standard_lib=1) - ld_so_aix = os.path.join(python_lib, 'config', 'ld_so_aix') - python_exp = os.path.join(python_lib, 'config', 'python.exp') - - g['LDSHARED'] = "%s %s -bI:%s" % (ld_so_aix, g['CC'], python_exp) - - elif sys.platform == 'beos': - # Linker script is in the config directory. In the Makefile it is - # relative to the srcdir, which after installation no longer makes - # sense. - python_lib = get_python_lib(standard_lib=1) - linkerscript_path = string.split(g['LDSHARED'])[0] - linkerscript_name = os.path.basename(linkerscript_path) - linkerscript = os.path.join(python_lib, 'config', - linkerscript_name) - - # XXX this isn't the right place to do this: adding the Python - # library to the link, if needed, should be in the "build_ext" - # command. (It's also needed for non-MS compilers on Windows, and - # it's taken care of for them by the 'build_ext.get_libraries()' - # method.) 
- g['LDSHARED'] = ("%s -L%s/lib -lpython%s" % - (linkerscript, PREFIX, get_python_version())) - - global _config_vars - _config_vars = g - - -def _init_nt(): - """Initialize the module as appropriate for NT""" - g = {} - # set basic install directories - g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) - g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) - - # XXX hmmm.. a normal install puts include files here - g['INCLUDEPY'] = get_python_inc(plat_specific=0) - - g['SO'] = '.pyd' - g['EXE'] = ".exe" - g['VERSION'] = get_python_version().replace(".", "") - g['BINDIR'] = os.path.dirname(os.path.abspath(sys.executable)) - - global _config_vars - _config_vars = g - - -def _init_os2(): - """Initialize the module as appropriate for OS/2""" - g = {} - # set basic install directories - g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) - g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) - - # XXX hmmm.. a normal install puts include files here - g['INCLUDEPY'] = get_python_inc(plat_specific=0) - - g['SO'] = '.pyd' - g['EXE'] = ".exe" - - global _config_vars - _config_vars = g - - -def get_config_vars(*args): - """With no arguments, return a dictionary of all configuration - variables relevant for the current platform. Generally this includes - everything needed to build extensions and install both pure modules and - extensions. On Unix, this means every variable defined in Python's - installed Makefile; on Windows and Mac OS it's a much smaller set. - - With arguments, return a list of values that result from looking up - each argument in the configuration variable dictionary. - """ - global _config_vars - if _config_vars is None: - func = globals().get("_init_" + os.name) - if func: - func() - else: - _config_vars = {} - - # Normalized versions of prefix and exec_prefix are handy to have; - # in fact, these are the standard versions used most places in the - # Distutils. 
- _config_vars['prefix'] = PREFIX - _config_vars['exec_prefix'] = EXEC_PREFIX - - if sys.platform == 'darwin': - kernel_version = os.uname()[2] # Kernel version (8.4.3) - major_version = int(kernel_version.split('.')[0]) - - if major_version < 8: - # On Mac OS X before 10.4, check if -arch and -isysroot - # are in CFLAGS or LDFLAGS and remove them if they are. - # This is needed when building extensions on a 10.3 system - # using a universal build of python. - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - flags = _config_vars[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = re.sub('-isysroot [^ \t]*', ' ', flags) - _config_vars[key] = flags - - else: - - # Allow the user to override the architecture flags using - # an environment variable. - # NOTE: This name was introduced by Apple in OSX 10.5 and - # is used by several scripting languages distributed with - # that OS release. - - if 'ARCHFLAGS' in os.environ: - arch = os.environ['ARCHFLAGS'] - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - - flags = _config_vars[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = flags + ' ' + arch - _config_vars[key] = flags - - # If we're on OSX 10.5 or later and the user tries to - # compiles an extension using an SDK that is not present - # on the current machine it is better to not use an SDK - # than to fail. - # - # The major usecase for this is users using a Python.org - # binary installer on OSX 10.6: that installer uses - # the 10.4u SDK, but that SDK is not installed by default - # when you install Xcode. 
- # - m = re.search('-isysroot\s+(\S+)', _config_vars['CFLAGS']) - if m is not None: - sdk = m.group(1) - if not os.path.exists(sdk): - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - - flags = _config_vars[key] - flags = re.sub('-isysroot\s+\S+(\s|$)', ' ', flags) - _config_vars[key] = flags - - if args: - vals = [] - for name in args: - vals.append(_config_vars.get(name)) - return vals - else: - return _config_vars - -def get_config_var(name): - """Return the value of a single variable using the dictionary - returned by 'get_config_vars()'. Equivalent to - get_config_vars().get(name) - """ - return get_config_vars().get(name) From noreply at buildbot.pypy.org Sun Jan 22 19:23:38 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 19:23:38 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Fix edge cases in float.__mod__ Message-ID: <20120122182338.2042F821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51654:40000ebe74d7 Date: 2012-01-22 18:31 +0100 http://bitbucket.org/pypy/pypy/changeset/40000ebe74d7/ Log: Fix edge cases in float.__mod__ diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py --- a/pypy/objspace/std/floatobject.py +++ b/pypy/objspace/std/floatobject.py @@ -383,8 +383,16 @@ except ValueError: mod = rfloat.NAN else: - if (mod and ((y < 0.0) != (mod < 0.0))): - mod += y + if mod: + # ensure the remainder has the same sign as the denominator + if (y < 0.0) != (mod < 0.0): + mod += y + else: + # the remainder is zero, and in the presence of signed zeroes + # fmod returns different results across platforms; ensure + # it has the same sign as the denominator; we'd like to do + # "mod = y * 0.0", but that may get optimized away + mod = copysign(0.0, y) return W_FloatObject(mod) diff --git a/pypy/objspace/std/test/test_floatobject.py 
b/pypy/objspace/std/test/test_floatobject.py --- a/pypy/objspace/std/test/test_floatobject.py +++ b/pypy/objspace/std/test/test_floatobject.py @@ -789,3 +789,26 @@ raises(ZeroDivisionError, lambda: inf % 0) raises(ZeroDivisionError, lambda: inf // 0) raises(ZeroDivisionError, divmod, inf, 0) + + def test_modulo_edgecases(self): + # Check behaviour of % operator for IEEE 754 special cases. + # In particular, check signs of zeros. + mod = float.__mod__ + import math + + def check(a, b): + assert (a, math.copysign(1.0, a)) == (b, math.copysign(1.0, b)) + + check(mod(-1.0, 1.0), 0.0) + check(mod(-1e-100, 1.0), 1.0) + check(mod(-0.0, 1.0), 0.0) + check(mod(0.0, 1.0), 0.0) + check(mod(1e-100, 1.0), 1e-100) + check(mod(1.0, 1.0), 0.0) + + check(mod(-1.0, -1.0), -0.0) + check(mod(-1e-100, -1.0), -1e-100) + check(mod(-0.0, -1.0), -0.0) + check(mod(0.0, -1.0), -0.0) + check(mod(1e-100, -1.0), -1.0) + check(mod(1.0, -1.0), -0.0) From noreply at buildbot.pypy.org Sun Jan 22 19:32:08 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 22 Jan 2012 19:32:08 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Added in the new behavior for RawIOBase.readall, if read() returns None then readall() also returns None Message-ID: <20120122183208.DCE18821FA@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: merge-2.7.2 Changeset: r51655:12b899970671 Date: 2012-01-22 12:31 -0600 http://bitbucket.org/pypy/pypy/changeset/12b899970671/ Log: Added in the new behavior for RawIOBase.readall, if read() returns None then readall() also returns None diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -282,6 +282,10 @@ while True: w_data = space.call_method(self, "read", space.wrap(DEFAULT_BUFFER_SIZE)) + if space.is_w(w_data, space.w_None): + if not builder.getlength(): + return w_data + break if not space.isinstance_w(w_data, space.w_str): raise 
OperationError(space.w_TypeError, space.wrap( diff --git a/pypy/module/_io/test/test_io.py b/pypy/module/_io/test/test_io.py --- a/pypy/module/_io/test/test_io.py +++ b/pypy/module/_io/test/test_io.py @@ -135,10 +135,27 @@ assert r.read(2) == 'ab' assert r.read(2) == 'c' assert r.read(2) == 'de' - assert r.read(2) == None + assert r.read(2) is None assert r.read(2) == 'fg' assert r.read(2) == '' + def test_rawio_readall_none(self): + import _io + class MockRawIO(_io._RawIOBase): + read_stack = [None, None, "a"] + def readinto(self, buf): + v = self.read_stack.pop() + if v is None: + return v + buf[:len(v)] = v + return len(v) + + r = MockRawIO() + s = r.readall() + assert s =="a" + s = r.readall() + assert s is None + class AppTestOpen: def setup_class(cls): cls.space = gettestobjspace(usemodules=['_io', '_locale']) From noreply at buildbot.pypy.org Sun Jan 22 19:32:10 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 22 Jan 2012 19:32:10 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: merged upstream Message-ID: <20120122183210.275CB821FA@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: merge-2.7.2 Changeset: r51656:1119c3892bfc Date: 2012-01-22 12:31 -0600 http://bitbucket.org/pypy/pypy/changeset/1119c3892bfc/ Log: merged upstream diff --git a/lib-python/modified-2.7/distutils/sysconfig.py b/lib-python/modified-2.7/distutils/sysconfig.py --- a/lib-python/modified-2.7/distutils/sysconfig.py +++ b/lib-python/modified-2.7/distutils/sysconfig.py @@ -9,563 +9,21 @@ Email: """ -__revision__ = "$Id$" +__revision__ = "$Id: sysconfig.py 85358 2010-10-10 09:54:59Z antoine.pitrou $" -import os -import re -import string import sys -from distutils.errors import DistutilsPlatformError -# These are needed in a couple of spots, so just compute them once. -PREFIX = os.path.normpath(sys.prefix) -EXEC_PREFIX = os.path.normpath(sys.exec_prefix) +# The content of this file is redirected from +# sysconfig_cpython or sysconfig_pypy. 
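[Editorial aside on changeset 12b899970671 above: the new behaviour is that the default readall() propagates None when the underlying read() signals "no data available" (non-blocking mode) and nothing has been accumulated yet. CPython 3's io module kept the same contract, so it can be sketched against plain io — the class name and chunk list below are illustrative, not PyPy's RPython code:]

```python
import io

class NonBlockingRaw(io.RawIOBase):
    """Raw stream whose readinto() can signal "no data yet" with None."""
    def __init__(self, chunks):
        super().__init__()
        self._chunks = list(chunks)

    def readable(self):
        return True

    def readinto(self, b):
        if not self._chunks:
            return 0            # nothing left: EOF
        chunk = self._chunks.pop(0)
        if chunk is None:
            return None         # would block: no data available right now
        b[:len(chunk)] = chunk
        return len(chunk)

r = NonBlockingRaw([None, b"a"])
# First call: read() returns None before any data was accumulated,
# so readall() itself returns None rather than raising.
assert r.readall() is None
# Second call: data is available, then readinto() returns 0 (EOF).
assert r.readall() == b"a"
```

[This mirrors the guard added in interp_iobase.py: only return None when the builder is still empty; once some bytes have been read, a None from read() just terminates the loop.]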
-# Path to the base directory of the project. On Windows the binary may -# live in project/PCBuild9. If we're dealing with an x64 Windows build, -# it'll live in project/PCbuild/amd64. -project_base = os.path.dirname(os.path.abspath(sys.executable)) -if os.name == "nt" and "pcbuild" in project_base[-8:].lower(): - project_base = os.path.abspath(os.path.join(project_base, os.path.pardir)) -# PC/VS7.1 -if os.name == "nt" and "\\pc\\v" in project_base[-10:].lower(): - project_base = os.path.abspath(os.path.join(project_base, os.path.pardir, - os.path.pardir)) -# PC/AMD64 -if os.name == "nt" and "\\pcbuild\\amd64" in project_base[-14:].lower(): - project_base = os.path.abspath(os.path.join(project_base, os.path.pardir, - os.path.pardir)) +if '__pypy__' in sys.builtin_module_names: + from distutils.sysconfig_pypy import * + from distutils.sysconfig_pypy import _config_vars # needed by setuptools + from distutils.sysconfig_pypy import _variable_rx # read_setup_file() +else: + from distutils.sysconfig_cpython import * + from distutils.sysconfig_cpython import _config_vars # needed by setuptools + from distutils.sysconfig_cpython import _variable_rx # read_setup_file() -# python_build: (Boolean) if true, we're either building Python or -# building an extension with an un-installed Python, so we use -# different (hard-wired) directories. -# Setup.local is available for Makefile builds including VPATH builds, -# Setup.dist is available on Windows -def _python_build(): - for fn in ("Setup.dist", "Setup.local"): - if os.path.isfile(os.path.join(project_base, "Modules", fn)): - return True - return False -python_build = _python_build() - -def get_python_version(): - """Return a string containing the major and minor Python version, - leaving off the patchlevel. Sample return values could be '1.5' - or '2.2'. - """ - return sys.version[:3] - - -def get_python_inc(plat_specific=0, prefix=None): - """Return the directory containing installed Python header files. 
- - If 'plat_specific' is false (the default), this is the path to the - non-platform-specific header files, i.e. Python.h and so on; - otherwise, this is the path to platform-specific header files - (namely pyconfig.h). - - If 'prefix' is supplied, use it instead of sys.prefix or - sys.exec_prefix -- i.e., ignore 'plat_specific'. - """ - if prefix is None: - prefix = plat_specific and EXEC_PREFIX or PREFIX - - if os.name == "posix": - if python_build: - buildir = os.path.dirname(sys.executable) - if plat_specific: - # python.h is located in the buildir - inc_dir = buildir - else: - # the source dir is relative to the buildir - srcdir = os.path.abspath(os.path.join(buildir, - get_config_var('srcdir'))) - # Include is located in the srcdir - inc_dir = os.path.join(srcdir, "Include") - return inc_dir - return os.path.join(prefix, "include", "python" + get_python_version()) - elif os.name == "nt": - return os.path.join(prefix, "include") - elif os.name == "os2": - return os.path.join(prefix, "Include") - else: - raise DistutilsPlatformError( - "I don't know where Python installs its C header files " - "on platform '%s'" % os.name) - - -def get_python_lib(plat_specific=0, standard_lib=0, prefix=None): - """Return the directory containing the Python library (standard or - site additions). - - If 'plat_specific' is true, return the directory containing - platform-specific modules, i.e. any module from a non-pure-Python - module distribution; otherwise, return the platform-shared library - directory. If 'standard_lib' is true, return the directory - containing standard Python library modules; otherwise, return the - directory for site-specific modules. - - If 'prefix' is supplied, use it instead of sys.prefix or - sys.exec_prefix -- i.e., ignore 'plat_specific'. 
- """ - if prefix is None: - prefix = plat_specific and EXEC_PREFIX or PREFIX - - if os.name == "posix": - libpython = os.path.join(prefix, - "lib", "python" + get_python_version()) - if standard_lib: - return libpython - else: - return os.path.join(libpython, "site-packages") - - elif os.name == "nt": - if standard_lib: - return os.path.join(prefix, "Lib") - else: - if get_python_version() < "2.2": - return prefix - else: - return os.path.join(prefix, "Lib", "site-packages") - - elif os.name == "os2": - if standard_lib: - return os.path.join(prefix, "Lib") - else: - return os.path.join(prefix, "Lib", "site-packages") - - else: - raise DistutilsPlatformError( - "I don't know where Python installs its library " - "on platform '%s'" % os.name) - - -def customize_compiler(compiler): - """Do any platform-specific customization of a CCompiler instance. - - Mainly needed on Unix, so we can plug in the information that - varies across Unices and is stored in Python's Makefile. - """ - if compiler.compiler_type == "unix": - (cc, cxx, opt, cflags, ccshared, ldshared, so_ext) = \ - get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS', - 'CCSHARED', 'LDSHARED', 'SO') - - if 'CC' in os.environ: - cc = os.environ['CC'] - if 'CXX' in os.environ: - cxx = os.environ['CXX'] - if 'LDSHARED' in os.environ: - ldshared = os.environ['LDSHARED'] - if 'CPP' in os.environ: - cpp = os.environ['CPP'] - else: - cpp = cc + " -E" # not always - if 'LDFLAGS' in os.environ: - ldshared = ldshared + ' ' + os.environ['LDFLAGS'] - if 'CFLAGS' in os.environ: - cflags = opt + ' ' + os.environ['CFLAGS'] - ldshared = ldshared + ' ' + os.environ['CFLAGS'] - if 'CPPFLAGS' in os.environ: - cpp = cpp + ' ' + os.environ['CPPFLAGS'] - cflags = cflags + ' ' + os.environ['CPPFLAGS'] - ldshared = ldshared + ' ' + os.environ['CPPFLAGS'] - - cc_cmd = cc + ' ' + cflags - compiler.set_executables( - preprocessor=cpp, - compiler=cc_cmd, - compiler_so=cc_cmd + ' ' + ccshared, - compiler_cxx=cxx, - linker_so=ldshared, - 
linker_exe=cc) - - compiler.shared_lib_extension = so_ext - - -def get_config_h_filename(): - """Return full pathname of installed pyconfig.h file.""" - if python_build: - if os.name == "nt": - inc_dir = os.path.join(project_base, "PC") - else: - inc_dir = project_base - else: - inc_dir = get_python_inc(plat_specific=1) - if get_python_version() < '2.2': - config_h = 'config.h' - else: - # The name of the config.h file changed in 2.2 - config_h = 'pyconfig.h' - return os.path.join(inc_dir, config_h) - - -def get_makefile_filename(): - """Return full pathname of installed Makefile from the Python build.""" - if python_build: - return os.path.join(os.path.dirname(sys.executable), "Makefile") - lib_dir = get_python_lib(plat_specific=1, standard_lib=1) - return os.path.join(lib_dir, "config", "Makefile") - - -def parse_config_h(fp, g=None): - """Parse a config.h-style file. - - A dictionary containing name/value pairs is returned. If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. - """ - if g is None: - g = {} - define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n") - undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n") - # - while 1: - line = fp.readline() - if not line: - break - m = define_rx.match(line) - if m: - n, v = m.group(1, 2) - try: v = int(v) - except ValueError: pass - g[n] = v - else: - m = undef_rx.match(line) - if m: - g[m.group(1)] = 0 - return g - - -# Regexes needed for parsing Makefile (and similar syntaxes, -# like old-style Setup files). -_variable_rx = re.compile("([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") -_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") -_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") - -def parse_makefile(fn, g=None): - """Parse a Makefile-style file. - - A dictionary containing name/value pairs is returned. If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. 
- """ - from distutils.text_file import TextFile - fp = TextFile(fn, strip_comments=1, skip_blanks=1, join_lines=1) - - if g is None: - g = {} - done = {} - notdone = {} - - while 1: - line = fp.readline() - if line is None: # eof - break - m = _variable_rx.match(line) - if m: - n, v = m.group(1, 2) - v = v.strip() - # `$$' is a literal `$' in make - tmpv = v.replace('$$', '') - - if "$" in tmpv: - notdone[n] = v - else: - try: - v = int(v) - except ValueError: - # insert literal `$' - done[n] = v.replace('$$', '$') - else: - done[n] = v - - # do variable interpolation here - while notdone: - for name in notdone.keys(): - value = notdone[name] - m = _findvar1_rx.search(value) or _findvar2_rx.search(value) - if m: - n = m.group(1) - found = True - if n in done: - item = str(done[n]) - elif n in notdone: - # get it on a subsequent round - found = False - elif n in os.environ: - # do it like make: fall back to environment - item = os.environ[n] - else: - done[n] = item = "" - if found: - after = value[m.end():] - value = value[:m.start()] + item + after - if "$" in after: - notdone[name] = value - else: - try: value = int(value) - except ValueError: - done[name] = value.strip() - else: - done[name] = value - del notdone[name] - else: - # bogus variable reference; just drop it since we can't deal - del notdone[name] - - fp.close() - - # strip spurious spaces - for k, v in done.items(): - if isinstance(v, str): - done[k] = v.strip() - - # save the results in the global dictionary - g.update(done) - return g - - -def expand_makefile_vars(s, vars): - """Expand Makefile-style variables -- "${foo}" or "$(foo)" -- in - 'string' according to 'vars' (a dictionary mapping variable names to - values). Variables not present in 'vars' are silently expanded to the - empty string. The variable values in 'vars' should not contain further - variable expansions; if 'vars' is the output of 'parse_makefile()', - you're fine. Returns a variable-expanded version of 's'. 
- """ - - # This algorithm does multiple expansion, so if vars['foo'] contains - # "${bar}", it will expand ${foo} to ${bar}, and then expand - # ${bar}... and so forth. This is fine as long as 'vars' comes from - # 'parse_makefile()', which takes care of such expansions eagerly, - # according to make's variable expansion semantics. - - while 1: - m = _findvar1_rx.search(s) or _findvar2_rx.search(s) - if m: - (beg, end) = m.span() - s = s[0:beg] + vars.get(m.group(1)) + s[end:] - else: - break - return s - - -_config_vars = None - -def _init_posix(): - """Initialize the module as appropriate for POSIX systems.""" - g = {} - # load the installed Makefile: - try: - filename = get_makefile_filename() - parse_makefile(filename, g) - except IOError, msg: - my_msg = "invalid Python installation: unable to open %s" % filename - if hasattr(msg, "strerror"): - my_msg = my_msg + " (%s)" % msg.strerror - - raise DistutilsPlatformError(my_msg) - - # load the installed pyconfig.h: - try: - filename = get_config_h_filename() - parse_config_h(file(filename), g) - except IOError, msg: - my_msg = "invalid Python installation: unable to open %s" % filename - if hasattr(msg, "strerror"): - my_msg = my_msg + " (%s)" % msg.strerror - - raise DistutilsPlatformError(my_msg) - - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. 
- # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in g: - cfg_target = g['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' - % (cur_target, cfg_target)) - raise DistutilsPlatformError(my_msg) - - # On AIX, there are wrong paths to the linker scripts in the Makefile - # -- these paths are relative to the Python source, but when installed - # the scripts are in another directory. - if python_build: - g['LDSHARED'] = g['BLDSHARED'] - - elif get_python_version() < '2.1': - # The following two branches are for 1.5.2 compatibility. - if sys.platform == 'aix4': # what about AIX 3.x ? - # Linker script is in the config directory, not in Modules as the - # Makefile says. - python_lib = get_python_lib(standard_lib=1) - ld_so_aix = os.path.join(python_lib, 'config', 'ld_so_aix') - python_exp = os.path.join(python_lib, 'config', 'python.exp') - - g['LDSHARED'] = "%s %s -bI:%s" % (ld_so_aix, g['CC'], python_exp) - - elif sys.platform == 'beos': - # Linker script is in the config directory. In the Makefile it is - # relative to the srcdir, which after installation no longer makes - # sense. - python_lib = get_python_lib(standard_lib=1) - linkerscript_path = string.split(g['LDSHARED'])[0] - linkerscript_name = os.path.basename(linkerscript_path) - linkerscript = os.path.join(python_lib, 'config', - linkerscript_name) - - # XXX this isn't the right place to do this: adding the Python - # library to the link, if needed, should be in the "build_ext" - # command. (It's also needed for non-MS compilers on Windows, and - # it's taken care of for them by the 'build_ext.get_libraries()' - # method.) 
- g['LDSHARED'] = ("%s -L%s/lib -lpython%s" % - (linkerscript, PREFIX, get_python_version())) - - global _config_vars - _config_vars = g - - -def _init_nt(): - """Initialize the module as appropriate for NT""" - g = {} - # set basic install directories - g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) - g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) - - # XXX hmmm.. a normal install puts include files here - g['INCLUDEPY'] = get_python_inc(plat_specific=0) - - g['SO'] = '.pyd' - g['EXE'] = ".exe" - g['VERSION'] = get_python_version().replace(".", "") - g['BINDIR'] = os.path.dirname(os.path.abspath(sys.executable)) - - global _config_vars - _config_vars = g - - -def _init_os2(): - """Initialize the module as appropriate for OS/2""" - g = {} - # set basic install directories - g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) - g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) - - # XXX hmmm.. a normal install puts include files here - g['INCLUDEPY'] = get_python_inc(plat_specific=0) - - g['SO'] = '.pyd' - g['EXE'] = ".exe" - - global _config_vars - _config_vars = g - - -def get_config_vars(*args): - """With no arguments, return a dictionary of all configuration - variables relevant for the current platform. Generally this includes - everything needed to build extensions and install both pure modules and - extensions. On Unix, this means every variable defined in Python's - installed Makefile; on Windows and Mac OS it's a much smaller set. - - With arguments, return a list of values that result from looking up - each argument in the configuration variable dictionary. - """ - global _config_vars - if _config_vars is None: - func = globals().get("_init_" + os.name) - if func: - func() - else: - _config_vars = {} - - # Normalized versions of prefix and exec_prefix are handy to have; - # in fact, these are the standard versions used most places in the - # Distutils. 
- _config_vars['prefix'] = PREFIX - _config_vars['exec_prefix'] = EXEC_PREFIX - - if sys.platform == 'darwin': - kernel_version = os.uname()[2] # Kernel version (8.4.3) - major_version = int(kernel_version.split('.')[0]) - - if major_version < 8: - # On Mac OS X before 10.4, check if -arch and -isysroot - # are in CFLAGS or LDFLAGS and remove them if they are. - # This is needed when building extensions on a 10.3 system - # using a universal build of python. - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - flags = _config_vars[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = re.sub('-isysroot [^ \t]*', ' ', flags) - _config_vars[key] = flags - - else: - - # Allow the user to override the architecture flags using - # an environment variable. - # NOTE: This name was introduced by Apple in OSX 10.5 and - # is used by several scripting languages distributed with - # that OS release. - - if 'ARCHFLAGS' in os.environ: - arch = os.environ['ARCHFLAGS'] - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - - flags = _config_vars[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = flags + ' ' + arch - _config_vars[key] = flags - - # If we're on OSX 10.5 or later and the user tries to - # compiles an extension using an SDK that is not present - # on the current machine it is better to not use an SDK - # than to fail. - # - # The major usecase for this is users using a Python.org - # binary installer on OSX 10.6: that installer uses - # the 10.4u SDK, but that SDK is not installed by default - # when you install Xcode. 
-                #
-                m = re.search('-isysroot\s+(\S+)', _config_vars['CFLAGS'])
-                if m is not None:
-                    sdk = m.group(1)
-                    if not os.path.exists(sdk):
-                        for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED',
-                            # a number of derived variables. These need to be
-                            # patched up as well.
-                            'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'):
-
-                            flags = _config_vars[key]
-                            flags = re.sub('-isysroot\s+\S+(\s|$)', ' ', flags)
-                            _config_vars[key] = flags
-
-    if args:
-        vals = []
-        for name in args:
-            vals.append(_config_vars.get(name))
-        return vals
-    else:
-        return _config_vars
-
-def get_config_var(name):
-    """Return the value of a single variable using the dictionary
-    returned by 'get_config_vars()'. Equivalent to
-    get_config_vars().get(name)
-    """
-    return get_config_vars().get(name)

diff --git a/lib-python/modified-2.7/test/test_ssl.py b/lib-python/modified-2.7/test/test_ssl.py
--- a/lib-python/modified-2.7/test/test_ssl.py
+++ b/lib-python/modified-2.7/test/test_ssl.py
@@ -1334,7 +1334,6 @@
     global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT
     CERTFILE = test_support.findfile("keycert.pem")
     SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile(
-        os.path.dirname(__file__) or os.curdir,
         "https_svn_python_org_root.pem")
     if (not os.path.exists(CERTFILE) or
diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py
--- a/pypy/objspace/std/floatobject.py
+++ b/pypy/objspace/std/floatobject.py
@@ -383,8 +383,16 @@
         except ValueError:
             mod = rfloat.NAN
         else:
-            if (mod and ((y < 0.0) != (mod < 0.0))):
-                mod += y
+            if mod:
+                # ensure the remainder has the same sign as the denominator
+                if (y < 0.0) != (mod < 0.0):
+                    mod += y
+            else:
+                # the remainder is zero, and in the presence of signed zeroes
+                # fmod returns different results across platforms; ensure
+                # it has the same sign as the denominator; we'd like to do
+                # "mod = y * 0.0", but that may get optimized away
+                mod = copysign(0.0, y)
         return W_FloatObject(mod)
diff --git a/pypy/objspace/std/test/test_floatobject.py
b/pypy/objspace/std/test/test_floatobject.py
--- a/pypy/objspace/std/test/test_floatobject.py
+++ b/pypy/objspace/std/test/test_floatobject.py
@@ -789,3 +789,26 @@
         raises(ZeroDivisionError, lambda: inf % 0)
         raises(ZeroDivisionError, lambda: inf // 0)
         raises(ZeroDivisionError, divmod, inf, 0)
+
+    def test_modulo_edgecases(self):
+        # Check behaviour of % operator for IEEE 754 special cases.
+        # In particular, check signs of zeros.
+        mod = float.__mod__
+        import math
+
+        def check(a, b):
+            assert (a, math.copysign(1.0, a)) == (b, math.copysign(1.0, b))
+
+        check(mod(-1.0, 1.0), 0.0)
+        check(mod(-1e-100, 1.0), 1.0)
+        check(mod(-0.0, 1.0), 0.0)
+        check(mod(0.0, 1.0), 0.0)
+        check(mod(1e-100, 1.0), 1e-100)
+        check(mod(1.0, 1.0), 0.0)
+
+        check(mod(-1.0, -1.0), -0.0)
+        check(mod(-1e-100, -1.0), -1e-100)
+        check(mod(-0.0, -1.0), -0.0)
+        check(mod(0.0, -1.0), -0.0)
+        check(mod(1e-100, -1.0), -1.0)
+        check(mod(1.0, -1.0), -0.0)
diff --git a/pypy/rlib/rmmap.py b/pypy/rlib/rmmap.py
--- a/pypy/rlib/rmmap.py
+++ b/pypy/rlib/rmmap.py
@@ -625,13 +625,16 @@
         flags = MAP_PRIVATE
         prot = PROT_READ | PROT_WRITE
     elif access == _ACCESS_DEFAULT:
-        pass
+        # map prot to access type
+        if prot & PROT_READ and prot & PROT_WRITE:
+            pass # _ACCESS_DEFAULT
+        elif prot & PROT_WRITE:
+            access = ACCESS_WRITE
+        else:
+            access = ACCESS_READ
     else:
         raise RValueError("mmap invalid access parameter.")
 
-    if prot == PROT_READ:
-        access = ACCESS_READ
-
     # check file size
     try:
         st = os.fstat(fd)
diff --git a/pypy/rlib/test/test_rmmap.py b/pypy/rlib/test/test_rmmap.py
--- a/pypy/rlib/test/test_rmmap.py
+++ b/pypy/rlib/test/test_rmmap.py
@@ -263,10 +263,23 @@
         f.flush()
         m = mmap.mmap(f.fileno(), 6, prot=mmap.PROT_READ)
         raises(RTypeError, m.write, "foo")
+        m.close()
+        f.close()
+
+    def test_write_without_protwrite(self):
+        if os.name == "nt":
+            skip("Needs PROT_WRITE")
+        f = open(self.tmpname + "l2", "w+")
+        f.write("foobar")
+        f.flush()
+        m = mmap.mmap(f.fileno(), 6, prot=~mmap.PROT_WRITE)
+        raises(RTypeError,
m.write_byte, 'a')
+        raises(RTypeError, m.write, "foo")
+        m.close()
         f.close()
 
     def test_size(self):
-        f = open(self.tmpname + "l", "w+")
+        f = open(self.tmpname + "l3", "w+")
         f.write("foobar")
         f.flush()

From noreply at buildbot.pypy.org  Sun Jan 22 20:27:41 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sun, 22 Jan 2012 20:27:41 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: Fix: conflict with global name 'error'.
Message-ID: <20120122192741.70CD1821FA@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51657:d1c21fed7569
Date: 2012-01-12 14:10 +0100
http://bitbucket.org/pypy/pypy/changeset/d1c21fed7569/

Log:	Fix: conflict with global name 'error'.

diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py
--- a/pypy/module/thread/ll_thread.py
+++ b/pypy/module/thread/ll_thread.py
@@ -122,9 +122,9 @@
 
     def release(self):
         # Sanity check: the lock must be locked
-        error = self.acquire(False)
+        err = self.acquire(False)
         c_thread_releaselock(self._lock)
-        if error:
+        if err:
             raise error("bad lock")
 
     def __del__(self):

From noreply at buildbot.pypy.org  Sun Jan 22 20:27:43 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sun, 22 Jan 2012 20:27:43 +0100 (CET)
Subject: [pypy-commit] pypy concurrent-marksweep: merge heads
Message-ID: <20120122192743.023E9821FA@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: concurrent-marksweep
Changeset: r51658:edaf872702ab
Date: 2012-01-22 20:26 +0100
http://bitbucket.org/pypy/pypy/changeset/edaf872702ab/

Log:	merge heads

diff --git a/pypy/module/thread/ll_thread.py b/pypy/module/thread/ll_thread.py
--- a/pypy/module/thread/ll_thread.py
+++ b/pypy/module/thread/ll_thread.py
@@ -122,9 +122,9 @@
 
     def release(self):
         # Sanity check: the lock must be locked
-        error = self.acquire(False)
+        err = self.acquire(False)
         c_thread_releaselock(self._lock)
-        if error:
+        if err:
             raise error("bad lock")
 
     def __del__(self):

From noreply at buildbot.pypy.org  Sun Jan 22
20:28:04 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 22 Jan 2012 20:28:04 +0100 (CET) Subject: [pypy-commit] pypy 32ptr-on-64bit: hg merge default Message-ID: <20120122192804.7FAC1821FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: 32ptr-on-64bit Changeset: r51659:35da68422201 Date: 2012-01-22 20:12 +0100 http://bitbucket.org/pypy/pypy/changeset/35da68422201/ Log: hg merge default diff too long, truncating to 10000 out of 192957 lines diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -1,6 +1,10 @@ syntax: glob *.py[co] *~ +.*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ @@ -24,6 +28,8 @@ ^pypy/translator/c/src/libffi_msvc/.+\.dll$ ^pypy/translator/c/src/libffi_msvc/.+\.lib$ ^pypy/translator/c/src/libffi_msvc/.+\.exp$ +^pypy/translator/c/src/cjkcodecs/.+\.o$ +^pypy/translator/c/src/cjkcodecs/.+\.obj$ ^pypy/translator/jvm/\.project$ ^pypy/translator/jvm/\.classpath$ ^pypy/translator/jvm/eclipse-bin$ @@ -36,6 +42,8 @@ ^pypy/translator/benchmark/shootout_benchmarks$ ^pypy/translator/goal/pypy-translation-snapshot$ ^pypy/translator/goal/pypy-c +^pypy/translator/goal/pypy-jvm +^pypy/translator/goal/pypy-jvm.jar ^pypy/translator/goal/.+\.exe$ ^pypy/translator/goal/.+\.dll$ ^pypy/translator/goal/target.+-c$ @@ -62,6 +70,7 @@ ^pypy/doc/image/lattice3\.png$ ^pypy/doc/image/stackless_informal\.png$ ^pypy/doc/image/parsing_example.+\.png$ +^pypy/module/test_lib_pypy/ctypes_tests/_ctypes_test\.o$ ^compiled ^.git/ ^release/ diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -1,1 +1,4 @@ b590cf6de4190623aad9aa698694c22e614d67b9 release-1.5 +b48df0bf4e75b81d98f19ce89d4a7dc3e1dab5e5 benchmarked +d8ac7d23d3ec5f9a0fa1264972f74a010dbfd07f release-1.6 +ff4af8f318821f7f5ca998613a60fca09aa137da release-1.7 diff --git a/LICENSE b/LICENSE --- a/LICENSE +++ b/LICENSE @@ -27,7 +27,7 @@ DEALINGS IN THE SOFTWARE. 
-PyPy Copyright holders 2003-2011 +PyPy Copyright holders 2003-2012 ----------------------------------- Except when otherwise stated (look for LICENSE files or information at @@ -42,34 +42,39 @@ Samuele Pedroni Michael Hudson Holger Krekel + Alex Gaynor Christian Tismer + Hakan Ardo Benjamin Peterson + David Schneider Eric van Riet Paap - Anders Chrigström - Håkan Ardö + Anders Chrigstrom Richard Emslie Dan Villiom Podlaski Christiansen Alexander Schremmer - Alex Gaynor - David Schneider - Aurelién Campeas + Lukas Diekmann + Aurelien Campeas Anders Lehmann Camillo Bruni Niklaus Haldimann + Sven Hager Leonardo Santagada Toon Verwaest Seo Sanghyeon + Justin Peel Lawrence Oluyede Bartosz Skowron Jakub Gustak Guido Wesdorp + Daniel Roberts + Laura Creighton Adrien Di Mascio - Laura Creighton Ludovic Aubry Niko Matsakis - Daniel Roberts + Wim Lavrijsen + Matti Picus Jason Creighton - Jacob Hallén + Jacob Hallen Alex Martelli Anders Hammarquist Jan de Mooij @@ -83,29 +88,36 @@ Alexandre Fayolle Marius Gedminas Simon Burton + David Edelsohn Jean-Paul Calderone John Witulski - Wim Lavrijsen + Timo Paulssen + holger krekel + Dario Bertini + Mark Pearse Andreas Stührk Jean-Philippe St. 
Pierre Guido van Rossum Pavel Vinogradov Valentino Volonghi Paul deGrandis + Ilya Osadchiy + Ronny Pfannschmidt Adrian Kuhn tav Georg Brandl + Philip Jenvey Gerald Klix Wanja Saatkamp Boris Feigin Oscar Nierstrasz - Dario Bertini David Malcolm Eugene Oden Henry Mason + Jeff Terrace Lukas Renggli Guenter Jantzen - Ronny Pfannschmidt + Ned Batchelder Bert Freudenberg Amit Regmi Ben Young @@ -122,8 +134,8 @@ Jared Grubb Karl Bartel Gabriel Lavoie + Victor Stinner Brian Dorsey - Victor Stinner Stuart Williams Toby Watson Antoine Pitrou @@ -134,19 +146,22 @@ Jonathan David Riehl Elmo Mäntynen Anders Qvist - Beatrice Düring + Beatrice During Alexander Sedov + Corbin Simpson Vincent Legoll + Romain Guillebert Alan McIntyre - Romain Guillebert Alex Perry Jens-Uwe Mager + Simon Cross Dan Stromberg - Lukas Diekmann + Guillebert Romain Carl Meyer Pieter Zieschang Alejandro J. Cura Sylvain Thenault + Christoph Gerum Travis Francis Athougies Henrik Vendelbo Lutz Paelike @@ -155,8 +170,10 @@ Lucio Torre Lene Wagner Miguel de Val Borro + Artur Lisiecki + Bruno Gola Ignas Mikalajunas - Artur Lisiecki + Stefano Rivera Joshua Gilbert Godefroid Chappelle Yusei Tahara @@ -165,26 +182,31 @@ Gustavo Niemeyer William Leslie Akira Li - Kristján Valur Jónsson + Kristjan Valur Jonsson Bobby Impollonia + Michael Hudson-Doyle + Laurence Tratt + Yasir Suhail Andrew Thompson Anders Sigfridsson + Floris Bruynooghe Jacek Generowicz Dan Colish - Sven Hager Zooko Wilcox-O Hearn - Anders Hammarquist + Dan Loewenherz + Chris Lambacher Dinu Gherman - Dan Colish + Brett Cannon Daniel Neuhäuser Michael Chermside Konrad Delong Anna Ravencroft Greg Price Armin Ronacher + Christian Muirhead Jim Baker - Philip Jenvey Rodrigo Araújo + Romain Guillebert Heinrich-Heine University, Germany Open End AB (formerly AB Strakt), Sweden diff --git a/README b/README --- a/README +++ b/README @@ -15,10 +15,10 @@ The getting-started document will help guide you: - 
http://codespeak.net/pypy/dist/pypy/doc/getting-started.html + http://doc.pypy.org/en/latest/getting-started.html It will also point you to the rest of the documentation which is generated from files in the pypy/doc directory within the source repositories. Enjoy and send us feedback! - the pypy-dev team + the pypy-dev team diff --git a/_pytest/__init__.py b/_pytest/__init__.py --- a/_pytest/__init__.py +++ b/_pytest/__init__.py @@ -1,2 +1,2 @@ # -__version__ = '2.0.3' +__version__ = '2.1.0.dev4' diff --git a/_pytest/assertion.py b/_pytest/assertion.py deleted file mode 100644 --- a/_pytest/assertion.py +++ /dev/null @@ -1,177 +0,0 @@ -""" -support for presented detailed information in failing assertions. -""" -import py -import sys -from _pytest.monkeypatch import monkeypatch - -def pytest_addoption(parser): - group = parser.getgroup("debugconfig") - group._addoption('--no-assert', action="store_true", default=False, - dest="noassert", - help="disable python assert expression reinterpretation."), - -def pytest_configure(config): - # The _reprcompare attribute on the py.code module is used by - # py._code._assertionnew to detect this plugin was loaded and in - # turn call the hooks defined here as part of the - # DebugInterpreter. - m = monkeypatch() - config._cleanup.append(m.undo) - warn_about_missing_assertion() - if not config.getvalue("noassert") and not config.getvalue("nomagic"): - def callbinrepr(op, left, right): - hook_result = config.hook.pytest_assertrepr_compare( - config=config, op=op, left=left, right=right) - for new_expl in hook_result: - if new_expl: - return '\n~'.join(new_expl) - m.setattr(py.builtin.builtins, - 'AssertionError', py.code._AssertionError) - m.setattr(py.code, '_reprcompare', callbinrepr) - -def warn_about_missing_assertion(): - try: - assert False - except AssertionError: - pass - else: - sys.stderr.write("WARNING: failing tests may report as passing because " - "assertions are turned off! 
(are you using python -O?)\n") - -# Provide basestring in python3 -try: - basestring = basestring -except NameError: - basestring = str - - -def pytest_assertrepr_compare(op, left, right): - """return specialised explanations for some operators/operands""" - width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op - left_repr = py.io.saferepr(left, maxsize=int(width/2)) - right_repr = py.io.saferepr(right, maxsize=width-len(left_repr)) - summary = '%s %s %s' % (left_repr, op, right_repr) - - issequence = lambda x: isinstance(x, (list, tuple)) - istext = lambda x: isinstance(x, basestring) - isdict = lambda x: isinstance(x, dict) - isset = lambda x: isinstance(x, set) - - explanation = None - try: - if op == '==': - if istext(left) and istext(right): - explanation = _diff_text(left, right) - elif issequence(left) and issequence(right): - explanation = _compare_eq_sequence(left, right) - elif isset(left) and isset(right): - explanation = _compare_eq_set(left, right) - elif isdict(left) and isdict(right): - explanation = _diff_text(py.std.pprint.pformat(left), - py.std.pprint.pformat(right)) - elif op == 'not in': - if istext(left) and istext(right): - explanation = _notin_text(left, right) - except py.builtin._sysex: - raise - except: - excinfo = py.code.ExceptionInfo() - explanation = ['(pytest_assertion plugin: representation of ' - 'details failed. Probably an object has a faulty __repr__.)', - str(excinfo) - ] - - - if not explanation: - return None - - # Don't include pageloads of data, should be configurable - if len(''.join(explanation)) > 80*8: - explanation = ['Detailed information too verbose, truncated'] - - return [summary] + explanation - - -def _diff_text(left, right): - """Return the explanation for the diff between text - - This will skip leading and trailing characters which are - identical to keep the diff minimal. 
- """ - explanation = [] - i = 0 # just in case left or right has zero length - for i in range(min(len(left), len(right))): - if left[i] != right[i]: - break - if i > 42: - i -= 10 # Provide some context - explanation = ['Skipping %s identical ' - 'leading characters in diff' % i] - left = left[i:] - right = right[i:] - if len(left) == len(right): - for i in range(len(left)): - if left[-i] != right[-i]: - break - if i > 42: - i -= 10 # Provide some context - explanation += ['Skipping %s identical ' - 'trailing characters in diff' % i] - left = left[:-i] - right = right[:-i] - explanation += [line.strip('\n') - for line in py.std.difflib.ndiff(left.splitlines(), - right.splitlines())] - return explanation - - -def _compare_eq_sequence(left, right): - explanation = [] - for i in range(min(len(left), len(right))): - if left[i] != right[i]: - explanation += ['At index %s diff: %r != %r' % - (i, left[i], right[i])] - break - if len(left) > len(right): - explanation += ['Left contains more items, ' - 'first extra item: %s' % py.io.saferepr(left[len(right)],)] - elif len(left) < len(right): - explanation += ['Right contains more items, ' - 'first extra item: %s' % py.io.saferepr(right[len(left)],)] - return explanation # + _diff_text(py.std.pprint.pformat(left), - # py.std.pprint.pformat(right)) - - -def _compare_eq_set(left, right): - explanation = [] - diff_left = left - right - diff_right = right - left - if diff_left: - explanation.append('Extra items in the left set:') - for item in diff_left: - explanation.append(py.io.saferepr(item)) - if diff_right: - explanation.append('Extra items in the right set:') - for item in diff_right: - explanation.append(py.io.saferepr(item)) - return explanation - - -def _notin_text(term, text): - index = text.find(term) - head = text[:index] - tail = text[index+len(term):] - correct_text = head + tail - diff = _diff_text(correct_text, text) - newdiff = ['%s is contained here:' % py.io.saferepr(term, maxsize=42)] - for line in diff: - 
if line.startswith('Skipping'): - continue - if line.startswith('- '): - continue - if line.startswith('+ '): - newdiff.append(' ' + line[2:]) - else: - newdiff.append(line) - return newdiff diff --git a/_pytest/assertion/__init__.py b/_pytest/assertion/__init__.py new file mode 100644 --- /dev/null +++ b/_pytest/assertion/__init__.py @@ -0,0 +1,128 @@ +""" +support for presenting detailed information in failing assertions. +""" +import py +import imp +import marshal +import struct +import sys +import pytest +from _pytest.monkeypatch import monkeypatch +from _pytest.assertion import reinterpret, util + +try: + from _pytest.assertion.rewrite import rewrite_asserts +except ImportError: + rewrite_asserts = None +else: + import ast + +def pytest_addoption(parser): + group = parser.getgroup("debugconfig") + group.addoption('--assertmode', action="store", dest="assertmode", + choices=("on", "old", "off", "default"), default="default", + metavar="on|old|off", + help="""control assertion debugging tools. +'off' performs no assertion debugging. +'old' reinterprets the expressions in asserts to glean information. 
+'on' (the default) rewrites the assert statements in test modules to provide +sub-expression results.""") + group.addoption('--no-assert', action="store_true", default=False, + dest="noassert", help="DEPRECATED equivalent to --assertmode=off") + group.addoption('--nomagic', action="store_true", default=False, + dest="nomagic", help="DEPRECATED equivalent to --assertmode=off") + +class AssertionState: + """State for the assertion plugin.""" + + def __init__(self, config, mode): + self.mode = mode + self.trace = config.trace.root.get("assertion") + +def pytest_configure(config): + warn_about_missing_assertion() + mode = config.getvalue("assertmode") + if config.getvalue("noassert") or config.getvalue("nomagic"): + if mode not in ("off", "default"): + raise pytest.UsageError("assertion options conflict") + mode = "off" + elif mode == "default": + mode = "on" + if mode != "off": + def callbinrepr(op, left, right): + hook_result = config.hook.pytest_assertrepr_compare( + config=config, op=op, left=left, right=right) + for new_expl in hook_result: + if new_expl: + return '\n~'.join(new_expl) + m = monkeypatch() + config._cleanup.append(m.undo) + m.setattr(py.builtin.builtins, 'AssertionError', + reinterpret.AssertionError) + m.setattr(util, '_reprcompare', callbinrepr) + if mode == "on" and rewrite_asserts is None: + mode = "old" + config._assertstate = AssertionState(config, mode) + config._assertstate.trace("configured with mode set to %r" % (mode,)) + +def _write_pyc(co, source_path): + if hasattr(imp, "cache_from_source"): + # Handle PEP 3147 pycs. 
+ pyc = py.path.local(imp.cache_from_source(str(source_path))) + pyc.ensure() + else: + pyc = source_path + "c" + mtime = int(source_path.mtime()) + fp = pyc.open("wb") + try: + fp.write(imp.get_magic()) + fp.write(struct.pack(">", + ast.Add : "+", + ast.Sub : "-", + ast.Mult : "*", + ast.Div : "/", + ast.FloorDiv : "//", + ast.Mod : "%", + ast.Eq : "==", + ast.NotEq : "!=", + ast.Lt : "<", + ast.LtE : "<=", + ast.Gt : ">", + ast.GtE : ">=", + ast.Pow : "**", + ast.Is : "is", + ast.IsNot : "is not", + ast.In : "in", + ast.NotIn : "not in" +} + +unary_map = { + ast.Not : "not %s", + ast.Invert : "~%s", + ast.USub : "-%s", + ast.UAdd : "+%s" +} + + +class DebugInterpreter(ast.NodeVisitor): + """Interpret AST nodes to gleam useful debugging information. """ + + def __init__(self, frame): + self.frame = frame + + def generic_visit(self, node): + # Fallback when we don't have a special implementation. + if _is_ast_expr(node): + mod = ast.Expression(node) + co = self._compile(mod) + try: + result = self.frame.eval(co) + except Exception: + raise Failure() + explanation = self.frame.repr(result) + return explanation, result + elif _is_ast_stmt(node): + mod = ast.Module([node]) + co = self._compile(mod, "exec") + try: + self.frame.exec_(co) + except Exception: + raise Failure() + return None, None + else: + raise AssertionError("can't handle %s" %(node,)) + + def _compile(self, source, mode="eval"): + return compile(source, "", mode) + + def visit_Expr(self, expr): + return self.visit(expr.value) + + def visit_Module(self, mod): + for stmt in mod.body: + self.visit(stmt) + + def visit_Name(self, name): + explanation, result = self.generic_visit(name) + # See if the name is local. 
+ source = "%r in locals() is not globals()" % (name.id,) + co = self._compile(source) + try: + local = self.frame.eval(co) + except Exception: + # have to assume it isn't + local = None + if local is None or not self.frame.is_true(local): + return name.id, result + return explanation, result + + def visit_Compare(self, comp): + left = comp.left + left_explanation, left_result = self.visit(left) + for op, next_op in zip(comp.ops, comp.comparators): + next_explanation, next_result = self.visit(next_op) + op_symbol = operator_map[op.__class__] + explanation = "%s %s %s" % (left_explanation, op_symbol, + next_explanation) + source = "__exprinfo_left %s __exprinfo_right" % (op_symbol,) + co = self._compile(source) + try: + result = self.frame.eval(co, __exprinfo_left=left_result, + __exprinfo_right=next_result) + except Exception: + raise Failure(explanation) + try: + if not self.frame.is_true(result): + break + except KeyboardInterrupt: + raise + except: + break + left_explanation, left_result = next_explanation, next_result + + if util._reprcompare is not None: + res = util._reprcompare(op_symbol, left_result, next_result) + if res: + explanation = res + return explanation, result + + def visit_BoolOp(self, boolop): + is_or = isinstance(boolop.op, ast.Or) + explanations = [] + for operand in boolop.values: + explanation, result = self.visit(operand) + explanations.append(explanation) + if result == is_or: + break + name = is_or and " or " or " and " + explanation = "(" + name.join(explanations) + ")" + return explanation, result + + def visit_UnaryOp(self, unary): + pattern = unary_map[unary.op.__class__] + operand_explanation, operand_result = self.visit(unary.operand) + explanation = pattern % (operand_explanation,) + co = self._compile(pattern % ("__exprinfo_expr",)) + try: + result = self.frame.eval(co, __exprinfo_expr=operand_result) + except Exception: + raise Failure(explanation) + return explanation, result + + def visit_BinOp(self, binop): + 
left_explanation, left_result = self.visit(binop.left) + right_explanation, right_result = self.visit(binop.right) + symbol = operator_map[binop.op.__class__] + explanation = "(%s %s %s)" % (left_explanation, symbol, + right_explanation) + source = "__exprinfo_left %s __exprinfo_right" % (symbol,) + co = self._compile(source) + try: + result = self.frame.eval(co, __exprinfo_left=left_result, + __exprinfo_right=right_result) + except Exception: + raise Failure(explanation) + return explanation, result + + def visit_Call(self, call): + func_explanation, func = self.visit(call.func) + arg_explanations = [] + ns = {"__exprinfo_func" : func} + arguments = [] + for arg in call.args: + arg_explanation, arg_result = self.visit(arg) + arg_name = "__exprinfo_%s" % (len(ns),) + ns[arg_name] = arg_result + arguments.append(arg_name) + arg_explanations.append(arg_explanation) + for keyword in call.keywords: + arg_explanation, arg_result = self.visit(keyword.value) + arg_name = "__exprinfo_%s" % (len(ns),) + ns[arg_name] = arg_result + keyword_source = "%s=%%s" % (keyword.arg) + arguments.append(keyword_source % (arg_name,)) + arg_explanations.append(keyword_source % (arg_explanation,)) + if call.starargs: + arg_explanation, arg_result = self.visit(call.starargs) + arg_name = "__exprinfo_star" + ns[arg_name] = arg_result + arguments.append("*%s" % (arg_name,)) + arg_explanations.append("*%s" % (arg_explanation,)) + if call.kwargs: + arg_explanation, arg_result = self.visit(call.kwargs) + arg_name = "__exprinfo_kwds" + ns[arg_name] = arg_result + arguments.append("**%s" % (arg_name,)) + arg_explanations.append("**%s" % (arg_explanation,)) + args_explained = ", ".join(arg_explanations) + explanation = "%s(%s)" % (func_explanation, args_explained) + args = ", ".join(arguments) + source = "__exprinfo_func(%s)" % (args,) + co = self._compile(source) + try: + result = self.frame.eval(co, **ns) + except Exception: + raise Failure(explanation) + pattern = "%s\n{%s = %s\n}" + rep = 
self.frame.repr(result) + explanation = pattern % (rep, rep, explanation) + return explanation, result + + def _is_builtin_name(self, name): + pattern = "%r not in globals() and %r not in locals()" + source = pattern % (name.id, name.id) + co = self._compile(source) + try: + return self.frame.eval(co) + except Exception: + return False + + def visit_Attribute(self, attr): + if not isinstance(attr.ctx, ast.Load): + return self.generic_visit(attr) + source_explanation, source_result = self.visit(attr.value) + explanation = "%s.%s" % (source_explanation, attr.attr) + source = "__exprinfo_expr.%s" % (attr.attr,) + co = self._compile(source) + try: + result = self.frame.eval(co, __exprinfo_expr=source_result) + except Exception: + raise Failure(explanation) + explanation = "%s\n{%s = %s.%s\n}" % (self.frame.repr(result), + self.frame.repr(result), + source_explanation, attr.attr) + # Check if the attr is from an instance. + source = "%r in getattr(__exprinfo_expr, '__dict__', {})" + source = source % (attr.attr,) + co = self._compile(source) + try: + from_instance = self.frame.eval(co, __exprinfo_expr=source_result) + except Exception: + from_instance = None + if from_instance is None or self.frame.is_true(from_instance): + rep = self.frame.repr(result) + pattern = "%s\n{%s = %s\n}" + explanation = pattern % (rep, rep, explanation) + return explanation, result + + def visit_Assert(self, assrt): + test_explanation, test_result = self.visit(assrt.test) + explanation = "assert %s" % (test_explanation,) + if not self.frame.is_true(test_result): + try: + raise BuiltinAssertionError + except Exception: + raise Failure(explanation) + return explanation, test_result + + def visit_Assign(self, assign): + value_explanation, value_result = self.visit(assign.value) + explanation = "... 
= %s" % (value_explanation,) + name = ast.Name("__exprinfo_expr", ast.Load(), + lineno=assign.value.lineno, + col_offset=assign.value.col_offset) + new_assign = ast.Assign(assign.targets, name, lineno=assign.lineno, + col_offset=assign.col_offset) + mod = ast.Module([new_assign]) + co = self._compile(mod, "exec") + try: + self.frame.exec_(co, __exprinfo_expr=value_result) + except Exception: + raise Failure(explanation) + return explanation, value_result diff --git a/_pytest/assertion/oldinterpret.py b/_pytest/assertion/oldinterpret.py new file mode 100644 --- /dev/null +++ b/_pytest/assertion/oldinterpret.py @@ -0,0 +1,552 @@ +import py +import sys, inspect +from compiler import parse, ast, pycodegen +from _pytest.assertion.util import format_explanation +from _pytest.assertion.reinterpret import BuiltinAssertionError + +passthroughex = py.builtin._sysex + +class Failure: + def __init__(self, node): + self.exc, self.value, self.tb = sys.exc_info() + self.node = node + +class View(object): + """View base class. + + If C is a subclass of View, then C(x) creates a proxy object around + the object x. The actual class of the proxy is not C in general, + but a *subclass* of C determined by the rules below. To avoid confusion + we call view class the class of the proxy (a subclass of C, so of View) + and object class the class of x. + + Attributes and methods not found in the proxy are automatically read on x. + Other operations like setting attributes are performed on the proxy, as + determined by its view class. The object x is available from the proxy + as its __obj__ attribute. + + The view class selection is determined by the __view__ tuples and the + optional __viewkey__ method. By default, the selected view class is the + most specific subclass of C whose __view__ mentions the class of x. + If no such subclass is found, the search proceeds with the parent + object classes. For example, C(True) will first look for a subclass + of C with __view__ = (..., bool, ...) 
and only if it doesn't find any + look for one with __view__ = (..., int, ...), and then ..., object,... + If everything fails the class C itself is considered to be the default. + + Alternatively, the view class selection can be driven by another aspect + of the object x, instead of the class of x, by overriding __viewkey__. + See last example at the end of this module. + """ + + _viewcache = {} + __view__ = () + + def __new__(rootclass, obj, *args, **kwds): + self = object.__new__(rootclass) + self.__obj__ = obj + self.__rootclass__ = rootclass + key = self.__viewkey__() + try: + self.__class__ = self._viewcache[key] + except KeyError: + self.__class__ = self._selectsubclass(key) + return self + + def __getattr__(self, attr): + # attributes not found in the normal hierarchy rooted on View + # are looked up in the object's real class + return getattr(self.__obj__, attr) + + def __viewkey__(self): + return self.__obj__.__class__ + + def __matchkey__(self, key, subclasses): + if inspect.isclass(key): + keys = inspect.getmro(key) + else: + keys = [key] + for key in keys: + result = [C for C in subclasses if key in C.__view__] + if result: + return result + return [] + + def _selectsubclass(self, key): + subclasses = list(enumsubclasses(self.__rootclass__)) + for C in subclasses: + if not isinstance(C.__view__, tuple): + C.__view__ = (C.__view__,) + choices = self.__matchkey__(key, subclasses) + if not choices: + return self.__rootclass__ + elif len(choices) == 1: + return choices[0] + else: + # combine the multiple choices + return type('?', tuple(choices), {}) + + def __repr__(self): + return '%s(%r)' % (self.__rootclass__.__name__, self.__obj__) + + +def enumsubclasses(cls): + for subcls in cls.__subclasses__(): + for subsubclass in enumsubclasses(subcls): + yield subsubclass + yield cls + + +class Interpretable(View): + """A parse tree node with a few extra methods.""" + explanation = None + + def is_builtin(self, frame): + return False + + def eval(self, frame): 
+ # fall-back for unknown expression nodes + try: + expr = ast.Expression(self.__obj__) + expr.filename = '' + self.__obj__.filename = '' + co = pycodegen.ExpressionCodeGenerator(expr).getCode() + result = frame.eval(co) + except passthroughex: + raise + except: + raise Failure(self) + self.result = result + self.explanation = self.explanation or frame.repr(self.result) + + def run(self, frame): + # fall-back for unknown statement nodes + try: + expr = ast.Module(None, ast.Stmt([self.__obj__])) + expr.filename = '' + co = pycodegen.ModuleCodeGenerator(expr).getCode() + frame.exec_(co) + except passthroughex: + raise + except: + raise Failure(self) + + def nice_explanation(self): + return format_explanation(self.explanation) + + +class Name(Interpretable): + __view__ = ast.Name + + def is_local(self, frame): + source = '%r in locals() is not globals()' % self.name + try: + return frame.is_true(frame.eval(source)) + except passthroughex: + raise + except: + return False + + def is_global(self, frame): + source = '%r in globals()' % self.name + try: + return frame.is_true(frame.eval(source)) + except passthroughex: + raise + except: + return False + + def is_builtin(self, frame): + source = '%r not in locals() and %r not in globals()' % ( + self.name, self.name) + try: + return frame.is_true(frame.eval(source)) + except passthroughex: + raise + except: + return False + + def eval(self, frame): + super(Name, self).eval(frame) + if not self.is_local(frame): + self.explanation = self.name + +class Compare(Interpretable): + __view__ = ast.Compare + + def eval(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + for operation, expr2 in self.ops: + if hasattr(self, 'result'): + # shortcutting in chained expressions + if not frame.is_true(self.result): + break + expr2 = Interpretable(expr2) + expr2.eval(frame) + self.explanation = "%s %s %s" % ( + expr.explanation, operation, expr2.explanation) + source = "__exprinfo_left %s __exprinfo_right" % operation + 
try: + self.result = frame.eval(source, + __exprinfo_left=expr.result, + __exprinfo_right=expr2.result) + except passthroughex: + raise + except: + raise Failure(self) + expr = expr2 + +class And(Interpretable): + __view__ = ast.And + + def eval(self, frame): + explanations = [] + for expr in self.nodes: + expr = Interpretable(expr) + expr.eval(frame) + explanations.append(expr.explanation) + self.result = expr.result + if not frame.is_true(expr.result): + break + self.explanation = '(' + ' and '.join(explanations) + ')' + +class Or(Interpretable): + __view__ = ast.Or + + def eval(self, frame): + explanations = [] + for expr in self.nodes: + expr = Interpretable(expr) + expr.eval(frame) + explanations.append(expr.explanation) + self.result = expr.result + if frame.is_true(expr.result): + break + self.explanation = '(' + ' or '.join(explanations) + ')' + + +# == Unary operations == +keepalive = [] +for astclass, astpattern in { + ast.Not : 'not __exprinfo_expr', + ast.Invert : '(~__exprinfo_expr)', + }.items(): + + class UnaryArith(Interpretable): + __view__ = astclass + + def eval(self, frame, astpattern=astpattern): + expr = Interpretable(self.expr) + expr.eval(frame) + self.explanation = astpattern.replace('__exprinfo_expr', + expr.explanation) + try: + self.result = frame.eval(astpattern, + __exprinfo_expr=expr.result) + except passthroughex: + raise + except: + raise Failure(self) + + keepalive.append(UnaryArith) + +# == Binary operations == +for astclass, astpattern in { + ast.Add : '(__exprinfo_left + __exprinfo_right)', + ast.Sub : '(__exprinfo_left - __exprinfo_right)', + ast.Mul : '(__exprinfo_left * __exprinfo_right)', + ast.Div : '(__exprinfo_left / __exprinfo_right)', + ast.Mod : '(__exprinfo_left % __exprinfo_right)', + ast.Power : '(__exprinfo_left ** __exprinfo_right)', + }.items(): + + class BinaryArith(Interpretable): + __view__ = astclass + + def eval(self, frame, astpattern=astpattern): + left = Interpretable(self.left) + left.eval(frame) + right 
= Interpretable(self.right) + right.eval(frame) + self.explanation = (astpattern + .replace('__exprinfo_left', left .explanation) + .replace('__exprinfo_right', right.explanation)) + try: + self.result = frame.eval(astpattern, + __exprinfo_left=left.result, + __exprinfo_right=right.result) + except passthroughex: + raise + except: + raise Failure(self) + + keepalive.append(BinaryArith) + + +class CallFunc(Interpretable): + __view__ = ast.CallFunc + + def is_bool(self, frame): + source = 'isinstance(__exprinfo_value, bool)' + try: + return frame.is_true(frame.eval(source, + __exprinfo_value=self.result)) + except passthroughex: + raise + except: + return False + + def eval(self, frame): + node = Interpretable(self.node) + node.eval(frame) + explanations = [] + vars = {'__exprinfo_fn': node.result} + source = '__exprinfo_fn(' + for a in self.args: + if isinstance(a, ast.Keyword): + keyword = a.name + a = a.expr + else: + keyword = None + a = Interpretable(a) + a.eval(frame) + argname = '__exprinfo_%d' % len(vars) + vars[argname] = a.result + if keyword is None: + source += argname + ',' + explanations.append(a.explanation) + else: + source += '%s=%s,' % (keyword, argname) + explanations.append('%s=%s' % (keyword, a.explanation)) + if self.star_args: + star_args = Interpretable(self.star_args) + star_args.eval(frame) + argname = '__exprinfo_star' + vars[argname] = star_args.result + source += '*' + argname + ',' + explanations.append('*' + star_args.explanation) + if self.dstar_args: + dstar_args = Interpretable(self.dstar_args) + dstar_args.eval(frame) + argname = '__exprinfo_kwds' + vars[argname] = dstar_args.result + source += '**' + argname + ',' + explanations.append('**' + dstar_args.explanation) + self.explanation = "%s(%s)" % ( + node.explanation, ', '.join(explanations)) + if source.endswith(','): + source = source[:-1] + source += ')' + try: + self.result = frame.eval(source, **vars) + except passthroughex: + raise + except: + raise Failure(self) + if not 
node.is_builtin(frame) or not self.is_bool(frame): + r = frame.repr(self.result) + self.explanation = '%s\n{%s = %s\n}' % (r, r, self.explanation) + +class Getattr(Interpretable): + __view__ = ast.Getattr + + def eval(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + source = '__exprinfo_expr.%s' % self.attrname + try: + self.result = frame.eval(source, __exprinfo_expr=expr.result) + except passthroughex: + raise + except: + raise Failure(self) + self.explanation = '%s.%s' % (expr.explanation, self.attrname) + # if the attribute comes from the instance, its value is interesting + source = ('hasattr(__exprinfo_expr, "__dict__") and ' + '%r in __exprinfo_expr.__dict__' % self.attrname) + try: + from_instance = frame.is_true( + frame.eval(source, __exprinfo_expr=expr.result)) + except passthroughex: + raise + except: + from_instance = True + if from_instance: + r = frame.repr(self.result) + self.explanation = '%s\n{%s = %s\n}' % (r, r, self.explanation) + +# == Re-interpretation of full statements == + +class Assert(Interpretable): + __view__ = ast.Assert + + def run(self, frame): + test = Interpretable(self.test) + test.eval(frame) + # print the result as 'assert ' + self.result = test.result + self.explanation = 'assert ' + test.explanation + if not frame.is_true(test.result): + try: + raise BuiltinAssertionError + except passthroughex: + raise + except: + raise Failure(self) + +class Assign(Interpretable): + __view__ = ast.Assign + + def run(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + self.result = expr.result + self.explanation = '... 
= ' + expr.explanation + # fall-back-run the rest of the assignment + ass = ast.Assign(self.nodes, ast.Name('__exprinfo_expr')) + mod = ast.Module(None, ast.Stmt([ass])) + mod.filename = '' + co = pycodegen.ModuleCodeGenerator(mod).getCode() + try: + frame.exec_(co, __exprinfo_expr=expr.result) + except passthroughex: + raise + except: + raise Failure(self) + +class Discard(Interpretable): + __view__ = ast.Discard + + def run(self, frame): + expr = Interpretable(self.expr) + expr.eval(frame) + self.result = expr.result + self.explanation = expr.explanation + +class Stmt(Interpretable): + __view__ = ast.Stmt + + def run(self, frame): + for stmt in self.nodes: + stmt = Interpretable(stmt) + stmt.run(frame) + + +def report_failure(e): + explanation = e.node.nice_explanation() + if explanation: + explanation = ", in: " + explanation + else: + explanation = "" + sys.stdout.write("%s: %s%s\n" % (e.exc.__name__, e.value, explanation)) + +def check(s, frame=None): + if frame is None: + frame = sys._getframe(1) + frame = py.code.Frame(frame) + expr = parse(s, 'eval') + assert isinstance(expr, ast.Expression) + node = Interpretable(expr.node) + try: + node.eval(frame) + except passthroughex: + raise + except Failure: + e = sys.exc_info()[1] + report_failure(e) + else: + if not frame.is_true(node.result): + sys.stderr.write("assertion failed: %s\n" % node.nice_explanation()) + + +########################################################### +# API / Entry points +# ######################################################### + +def interpret(source, frame, should_fail=False): + module = Interpretable(parse(source, 'exec').node) + #print "got module", module + if isinstance(frame, py.std.types.FrameType): + frame = py.code.Frame(frame) + try: + module.run(frame) + except Failure: + e = sys.exc_info()[1] + return getfailure(e) + except passthroughex: + raise + except: + import traceback + traceback.print_exc() + if should_fail: + return ("(assertion failed, but when it was re-run 
for " + "printing intermediate values, it did not fail. Suggestions: " + "compute assert expression before the assert or use --nomagic)") + else: + return None + +def getmsg(excinfo): + if isinstance(excinfo, tuple): + excinfo = py.code.ExceptionInfo(excinfo) + #frame, line = gettbline(tb) + #frame = py.code.Frame(frame) + #return interpret(line, frame) + + tb = excinfo.traceback[-1] + source = str(tb.statement).strip() + x = interpret(source, tb.frame, should_fail=True) + if not isinstance(x, str): + raise TypeError("interpret returned non-string %r" % (x,)) + return x + +def getfailure(e): + explanation = e.node.nice_explanation() + if str(e.value): + lines = explanation.split('\n') + lines[0] += " << %s" % (e.value,) + explanation = '\n'.join(lines) + text = "%s: %s" % (e.exc.__name__, explanation) + if text.startswith('AssertionError: assert '): + text = text[16:] + return text + +def run(s, frame=None): + if frame is None: + frame = sys._getframe(1) + frame = py.code.Frame(frame) + module = Interpretable(parse(s, 'exec').node) + try: + module.run(frame) + except Failure: + e = sys.exc_info()[1] + report_failure(e) + + +if __name__ == '__main__': + # example: + def f(): + return 5 + def g(): + return 3 + def h(x): + return 'never' + check("f() * g() == 5") + check("not f()") + check("not (f() and g() or 0)") + check("f() == g()") + i = 4 + check("i == f()") + check("len(f()) == 0") + check("isinstance(2+3+4, float)") + + run("x = i") + check("x == 5") + + run("assert not f(), 'oops'") + run("a, b, c = 1, 2") + run("a, b, c = f()") + + check("max([f(),g()]) == 4") + check("'hello'[g()] == 'h'") + run("'guk%d' % h(f())") diff --git a/_pytest/assertion/reinterpret.py b/_pytest/assertion/reinterpret.py new file mode 100644 --- /dev/null +++ b/_pytest/assertion/reinterpret.py @@ -0,0 +1,48 @@ +import sys +import py + +BuiltinAssertionError = py.builtin.builtins.AssertionError + +class AssertionError(BuiltinAssertionError): + def __init__(self, *args): + 
BuiltinAssertionError.__init__(self, *args) + if args: + try: + self.msg = str(args[0]) + except py.builtin._sysex: + raise + except: + self.msg = "<[broken __repr__] %s at %0xd>" %( + args[0].__class__, id(args[0])) + else: + f = py.code.Frame(sys._getframe(1)) + try: + source = f.code.fullsource + if source is not None: + try: + source = source.getstatement(f.lineno, assertion=True) + except IndexError: + source = None + else: + source = str(source.deindent()).strip() + except py.error.ENOENT: + source = None + # this can also occur during reinterpretation, when the + # co_filename is set to "". + if source: + self.msg = reinterpret(source, f, should_fail=True) + else: + self.msg = "" + if not self.args: + self.args = (self.msg,) + +if sys.version_info > (3, 0): + AssertionError.__module__ = "builtins" + reinterpret_old = "old reinterpretation not available for py3" +else: + from _pytest.assertion.oldinterpret import interpret as reinterpret_old +if sys.version_info >= (2, 6) or (sys.platform.startswith("java")): + from _pytest.assertion.newinterpret import interpret as reinterpret +else: + reinterpret = reinterpret_old + diff --git a/_pytest/assertion/rewrite.py b/_pytest/assertion/rewrite.py new file mode 100644 --- /dev/null +++ b/_pytest/assertion/rewrite.py @@ -0,0 +1,340 @@ +"""Rewrite assertion AST to produce nice error messages""" + +import ast +import collections +import itertools +import sys + +import py +from _pytest.assertion import util + + +def rewrite_asserts(mod): + """Rewrite the assert statements in mod.""" + AssertionRewriter().run(mod) + + +_saferepr = py.io.saferepr +from _pytest.assertion.util import format_explanation as _format_explanation + +def _format_boolop(operands, explanations, is_or): + show_explanations = [] + for operand, expl in zip(operands, explanations): + show_explanations.append(expl) + if operand == is_or: + break + return "(" + (is_or and " or " or " and ").join(show_explanations) + ")" + +def _call_reprcompare(ops, 
results, expls, each_obj): + for i, res, expl in zip(range(len(ops)), results, expls): + try: + done = not res + except Exception: + done = True + if done: + break + if util._reprcompare is not None: + custom = util._reprcompare(ops[i], each_obj[i], each_obj[i + 1]) + if custom is not None: + return custom + return expl + + +unary_map = { + ast.Not : "not %s", + ast.Invert : "~%s", + ast.USub : "-%s", + ast.UAdd : "+%s" +} + +binop_map = { + ast.BitOr : "|", + ast.BitXor : "^", + ast.BitAnd : "&", + ast.LShift : "<<", + ast.RShift : ">>", + ast.Add : "+", + ast.Sub : "-", + ast.Mult : "*", + ast.Div : "/", + ast.FloorDiv : "//", + ast.Mod : "%", + ast.Eq : "==", + ast.NotEq : "!=", + ast.Lt : "<", + ast.LtE : "<=", + ast.Gt : ">", + ast.GtE : ">=", + ast.Pow : "**", + ast.Is : "is", + ast.IsNot : "is not", + ast.In : "in", + ast.NotIn : "not in" +} + + +def set_location(node, lineno, col_offset): + """Set node location information recursively.""" + def _fix(node, lineno, col_offset): + if "lineno" in node._attributes: + node.lineno = lineno + if "col_offset" in node._attributes: + node.col_offset = col_offset + for child in ast.iter_child_nodes(node): + _fix(child, lineno, col_offset) + _fix(node, lineno, col_offset) + return node + + +class AssertionRewriter(ast.NodeVisitor): + + def run(self, mod): + """Find all assert statements in *mod* and rewrite them.""" + if not mod.body: + # Nothing to do. + return + # Insert some special imports at the top of the module but after any + # docstrings and __future__ imports. + aliases = [ast.alias(py.builtin.builtins.__name__, "@py_builtins"), + ast.alias("_pytest.assertion.rewrite", "@pytest_ar")] + expect_docstring = True + pos = 0 + lineno = 0 + for item in mod.body: + if (expect_docstring and isinstance(item, ast.Expr) and + isinstance(item.value, ast.Str)): + doc = item.value.s + if "PYTEST_DONT_REWRITE" in doc: + # The module has disabled assertion rewriting. 
+                    return
+                lineno += len(doc) - 1
+                expect_docstring = False
+            elif (not isinstance(item, ast.ImportFrom) or item.level > 0 and
+                  item.module != "__future__"):
+                lineno = item.lineno
+                break
+            pos += 1
+        imports = [ast.Import([alias], lineno=lineno, col_offset=0)
+                   for alias in aliases]
+        mod.body[pos:pos] = imports
+        # Collect asserts.
+        nodes = collections.deque([mod])
+        while nodes:
+            node = nodes.popleft()
+            for name, field in ast.iter_fields(node):
+                if isinstance(field, list):
+                    new = []
+                    for i, child in enumerate(field):
+                        if isinstance(child, ast.Assert):
+                            # Transform assert.
+                            new.extend(self.visit(child))
+                        else:
+                            new.append(child)
+                        if isinstance(child, ast.AST):
+                            nodes.append(child)
+                    setattr(node, name, new)
+                elif (isinstance(field, ast.AST) and
+                      # Don't recurse into expressions as they can't contain
+                      # asserts.
+                      not isinstance(field, ast.expr)):
+                    nodes.append(field)
+ name = "@py_assert" + str(next(self.variable_counter)) + self.variables.add(name) + return name + + def assign(self, expr): + """Give *expr* a name.""" + name = self.variable() + self.statements.append(ast.Assign([ast.Name(name, ast.Store())], expr)) + return ast.Name(name, ast.Load()) + + def display(self, expr): + """Call py.io.saferepr on the expression.""" + return self.helper("saferepr", expr) + + def helper(self, name, *args): + """Call a helper in this module.""" + py_name = ast.Name("@pytest_ar", ast.Load()) + attr = ast.Attribute(py_name, "_" + name, ast.Load()) + return ast.Call(attr, list(args), [], None, None) + + def builtin(self, name): + """Return the builtin called *name*.""" + builtin_name = ast.Name("@py_builtins", ast.Load()) + return ast.Attribute(builtin_name, name, ast.Load()) + + def explanation_param(self, expr): + specifier = "py" + str(next(self.variable_counter)) + self.explanation_specifiers[specifier] = expr + return "%(" + specifier + ")s" + + def push_format_context(self): + self.explanation_specifiers = {} + self.stack.append(self.explanation_specifiers) + + def pop_format_context(self, expl_expr): + current = self.stack.pop() + if self.stack: + self.explanation_specifiers = self.stack[-1] + keys = [ast.Str(key) for key in current.keys()] + format_dict = ast.Dict(keys, list(current.values())) + form = ast.BinOp(expl_expr, ast.Mod(), format_dict) + name = "@py_format" + str(next(self.variable_counter)) + self.on_failure.append(ast.Assign([ast.Name(name, ast.Store())], form)) + return ast.Name(name, ast.Load()) + + def generic_visit(self, node): + """Handle expressions we don't have custom code for.""" + assert isinstance(node, ast.expr) + res = self.assign(node) + return res, self.explanation_param(self.display(res)) + + def visit_Assert(self, assert_): + if assert_.msg: + # There's already a message. Don't mess with it. 
+ return [assert_] + self.statements = [] + self.variables = set() + self.variable_counter = itertools.count() + self.stack = [] + self.on_failure = [] + self.push_format_context() + # Rewrite assert into a bunch of statements. + top_condition, explanation = self.visit(assert_.test) + # Create failure message. + body = self.on_failure + negation = ast.UnaryOp(ast.Not(), top_condition) + self.statements.append(ast.If(negation, body, [])) + explanation = "assert " + explanation + template = ast.Str(explanation) + msg = self.pop_format_context(template) + fmt = self.helper("format_explanation", msg) + err_name = ast.Name("AssertionError", ast.Load()) + exc = ast.Call(err_name, [fmt], [], None, None) + if sys.version_info[0] >= 3: + raise_ = ast.Raise(exc, None) + else: + raise_ = ast.Raise(exc, None, None) + body.append(raise_) + # Delete temporary variables. + names = [ast.Name(name, ast.Del()) for name in self.variables] + if names: + delete = ast.Delete(names) + self.statements.append(delete) + # Fix line numbers. + for stmt in self.statements: + set_location(stmt, assert_.lineno, assert_.col_offset) + return self.statements + + def visit_Name(self, name): + # Check if the name is local or not. 
+ locs = ast.Call(self.builtin("locals"), [], [], None, None) + globs = ast.Call(self.builtin("globals"), [], [], None, None) + ops = [ast.In(), ast.IsNot()] + test = ast.Compare(ast.Str(name.id), ops, [locs, globs]) + expr = ast.IfExp(test, self.display(name), ast.Str(name.id)) + return name, self.explanation_param(expr) + + def visit_BoolOp(self, boolop): + operands = [] + explanations = [] + self.push_format_context() + for operand in boolop.values: + res, explanation = self.visit(operand) + operands.append(res) + explanations.append(explanation) + expls = ast.Tuple([ast.Str(expl) for expl in explanations], ast.Load()) + is_or = ast.Num(isinstance(boolop.op, ast.Or)) + expl_template = self.helper("format_boolop", + ast.Tuple(operands, ast.Load()), expls, + is_or) + expl = self.pop_format_context(expl_template) + res = self.assign(ast.BoolOp(boolop.op, operands)) + return res, self.explanation_param(expl) + + def visit_UnaryOp(self, unary): + pattern = unary_map[unary.op.__class__] + operand_res, operand_expl = self.visit(unary.operand) + res = self.assign(ast.UnaryOp(unary.op, operand_res)) + return res, pattern % (operand_expl,) + + def visit_BinOp(self, binop): + symbol = binop_map[binop.op.__class__] + left_expr, left_expl = self.visit(binop.left) + right_expr, right_expl = self.visit(binop.right) + explanation = "(%s %s %s)" % (left_expl, symbol, right_expl) + res = self.assign(ast.BinOp(left_expr, binop.op, right_expr)) + return res, explanation + + def visit_Call(self, call): + new_func, func_expl = self.visit(call.func) + arg_expls = [] + new_args = [] + new_kwargs = [] + new_star = new_kwarg = None + for arg in call.args: + res, expl = self.visit(arg) + new_args.append(res) + arg_expls.append(expl) + for keyword in call.keywords: + res, expl = self.visit(keyword.value) + new_kwargs.append(ast.keyword(keyword.arg, res)) + arg_expls.append(keyword.arg + "=" + expl) + if call.starargs: + new_star, expl = self.visit(call.starargs) + arg_expls.append("*" + 
expl)
+        if call.kwargs:
+            new_kwarg, expl = self.visit(call.kwargs)
+            arg_expls.append("**" + expl)
+        expl = "%s(%s)" % (func_expl, ', '.join(arg_expls))
+        new_call = ast.Call(new_func, new_args, new_kwargs, new_star, new_kwarg)
+        res = self.assign(new_call)
+        res_expl = self.explanation_param(self.display(res))
+        outer_expl = "%s\n{%s = %s\n}" % (res_expl, res_expl, expl)
+        return res, outer_expl
+
+    def visit_Attribute(self, attr):
+        if not isinstance(attr.ctx, ast.Load):
+            return self.generic_visit(attr)
+        value, value_expl = self.visit(attr.value)
+        res = self.assign(ast.Attribute(value, attr.attr, ast.Load()))
+        res_expl = self.explanation_param(self.display(res))
+        pat = "%s\n{%s = %s.%s\n}"
+        expl = pat % (res_expl, res_expl, value_expl, attr.attr)
+        return res, expl
+
+    def visit_Compare(self, comp):
+        self.push_format_context()
+        left_res, left_expl = self.visit(comp.left)
+        res_variables = [self.variable() for i in range(len(comp.ops))]
+        load_names = [ast.Name(v, ast.Load()) for v in res_variables]
+        store_names = [ast.Name(v, ast.Store()) for v in res_variables]
+        it = zip(range(len(comp.ops)), comp.ops, comp.comparators)
+        expls = []
+        syms = []
+        results = [left_res]
+        for i, op, next_operand in it:
+            next_res, next_expl = self.visit(next_operand)
+            results.append(next_res)
+            sym = binop_map[op.__class__]
+            syms.append(ast.Str(sym))
+            expl = "%s %s %s" % (left_expl, sym, next_expl)
+            expls.append(ast.Str(expl))
+            res_expr = ast.Compare(left_res, [op], [next_res])
+            self.statements.append(ast.Assign([store_names[i]], res_expr))
+            left_res, left_expl = next_res, next_expl
+        # Use py.code._reprcompare if that's available.
+        expl_call = self.helper("call_reprcompare",
+                                ast.Tuple(syms, ast.Load()),
+                                ast.Tuple(load_names, ast.Load()),
+                                ast.Tuple(expls, ast.Load()),
+                                ast.Tuple(results, ast.Load()))
+        if len(comp.ops) > 1:
+            res = ast.BoolOp(ast.And(), load_names)
+        else:
+            res = load_names[0]
+        return res, self.explanation_param(self.pop_format_context(expl_call))
diff --git a/_pytest/assertion/util.py b/_pytest/assertion/util.py
new file mode 100644
--- /dev/null
+++ b/_pytest/assertion/util.py
@@ -0,0 +1,213 @@
+"""Utilities for assertion debugging"""
+
+import py
+
+
+# The _reprcompare attribute on the util module is used by the new assertion
+# interpretation code and assertion rewriter to detect this plugin was
+# loaded and in turn call the hooks defined here as part of the
+# DebugInterpreter.
+_reprcompare = None
+
+def format_explanation(explanation):
+    """This formats an explanation
+
+    Normally all embedded newlines are escaped, however there are
+    three exceptions: \n{, \n} and \n~.  The first two are intended to
+    cover nested explanations, see function and attribute explanations
+    for examples (.visit_Call(), visit_Attribute()).  The last one is
+    for when one explanation needs to span multiple lines, e.g. when
+    displaying diffs.
+    """
+    # simplify 'assert False where False = ...'
+    where = 0
+    while True:
+        start = where = explanation.find("False\n{False = ", where)
+        if where == -1:
+            break
+        level = 0
+        for i, c in enumerate(explanation[start:]):
+            if c == "{":
+                level += 1
+            elif c == "}":
+                level -= 1
+                if not level:
+                    break
+        else:
+            raise AssertionError("unbalanced braces: %r" % (explanation,))
+        end = start + i
+        where = end
+        if explanation[end - 1] == '\n':
+            explanation = (explanation[:start] + explanation[start+15:end-1] +
+                           explanation[end+1:])
+            where -= 17
+    raw_lines = (explanation or '').split('\n')
+    # escape newlines not followed by {, } and ~
+    lines = [raw_lines[0]]
+    for l in raw_lines[1:]:
+        if l.startswith('{') or l.startswith('}') or l.startswith('~'):
+            lines.append(l)
+        else:
+            lines[-1] += '\\n' + l
+
+    result = lines[:1]
+    stack = [0]
+    stackcnt = [0]
+    for line in lines[1:]:
+        if line.startswith('{'):
+            if stackcnt[-1]:
+                s = 'and   '
+            else:
+                s = 'where '
+            stack.append(len(result))
+            stackcnt[-1] += 1
+            stackcnt.append(0)
+            result.append(' +' + '  '*(len(stack)-1) + s + line[1:])
+        elif line.startswith('}'):
+            assert line.startswith('}')
+            stack.pop()
+            stackcnt.pop()
+            result[stack[-1]] += line[1:]
+        else:
+            assert line.startswith('~')
+            result.append('  '*len(stack) + line[1:])
+    assert len(stack) == 1
+    return '\n'.join(result)
+
+
+# Provide basestring in python3
+try:
+    basestring = basestring
+except NameError:
+    basestring = str
+
+
+def assertrepr_compare(op, left, right):
+    """return specialised explanations for some operators/operands"""
+    width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op
+    left_repr = py.io.saferepr(left, maxsize=int(width/2))
+    right_repr = py.io.saferepr(right, maxsize=width-len(left_repr))
+    summary = '%s %s %s' % (left_repr, op, right_repr)
+
+    issequence = lambda x: isinstance(x, (list, tuple))
+    istext = lambda x: isinstance(x, basestring)
+    isdict = lambda x: isinstance(x, dict)
+    isset = lambda x: isinstance(x, set)
+
+    explanation = None
+    try:
+        if op == '==':
+            if istext(left) and istext(right):
+                explanation = _diff_text(left, right)
+            elif issequence(left) and issequence(right):
+                explanation = _compare_eq_sequence(left, right)
+            elif isset(left) and isset(right):
+                explanation = _compare_eq_set(left, right)
+            elif isdict(left) and isdict(right):
+                explanation = _diff_text(py.std.pprint.pformat(left),
+                                         py.std.pprint.pformat(right))
+        elif op == 'not in':
+            if istext(left) and istext(right):
+                explanation = _notin_text(left, right)
+    except py.builtin._sysex:
+        raise
+    except:
+        excinfo = py.code.ExceptionInfo()
+        explanation = ['(pytest_assertion plugin: representation of '
+            'details failed. Probably an object has a faulty __repr__.)',
+            str(excinfo)
+            ]
+
+
+    if not explanation:
+        return None
+
+    # Don't include pageloads of data, should be configurable
+    if len(''.join(explanation)) > 80*8:
+        explanation = ['Detailed information too verbose, truncated']
+
+    return [summary] + explanation
+
+
+def _diff_text(left, right):
+    """Return the explanation for the diff between text
+
+    This will skip leading and trailing characters which are
+    identical to keep the diff minimal.
+ """ + explanation = [] + i = 0 # just in case left or right has zero length + for i in range(min(len(left), len(right))): + if left[i] != right[i]: + break + if i > 42: + i -= 10 # Provide some context + explanation = ['Skipping %s identical ' + 'leading characters in diff' % i] + left = left[i:] + right = right[i:] + if len(left) == len(right): + for i in range(len(left)): + if left[-i] != right[-i]: + break + if i > 42: + i -= 10 # Provide some context + explanation += ['Skipping %s identical ' + 'trailing characters in diff' % i] + left = left[:-i] + right = right[:-i] + explanation += [line.strip('\n') + for line in py.std.difflib.ndiff(left.splitlines(), + right.splitlines())] + return explanation + + +def _compare_eq_sequence(left, right): + explanation = [] + for i in range(min(len(left), len(right))): + if left[i] != right[i]: + explanation += ['At index %s diff: %r != %r' % + (i, left[i], right[i])] + break + if len(left) > len(right): + explanation += ['Left contains more items, ' + 'first extra item: %s' % py.io.saferepr(left[len(right)],)] + elif len(left) < len(right): + explanation += ['Right contains more items, ' + 'first extra item: %s' % py.io.saferepr(right[len(left)],)] + return explanation # + _diff_text(py.std.pprint.pformat(left), + # py.std.pprint.pformat(right)) + + +def _compare_eq_set(left, right): + explanation = [] + diff_left = left - right + diff_right = right - left + if diff_left: + explanation.append('Extra items in the left set:') + for item in diff_left: + explanation.append(py.io.saferepr(item)) + if diff_right: + explanation.append('Extra items in the right set:') + for item in diff_right: + explanation.append(py.io.saferepr(item)) + return explanation + + +def _notin_text(term, text): + index = text.find(term) + head = text[:index] + tail = text[index+len(term):] + correct_text = head + tail + diff = _diff_text(correct_text, text) + newdiff = ['%s is contained here:' % py.io.saferepr(term, maxsize=42)] + for line in diff: + 
if line.startswith('Skipping'): + continue + if line.startswith('- '): + continue + if line.startswith('+ '): + newdiff.append(' ' + line[2:]) + else: + newdiff.append(line) + return newdiff diff --git a/_pytest/doctest.py b/_pytest/doctest.py --- a/_pytest/doctest.py +++ b/_pytest/doctest.py @@ -59,7 +59,7 @@ inner_excinfo = py.code.ExceptionInfo(excinfo.value.exc_info) lines += ["UNEXPECTED EXCEPTION: %s" % repr(inner_excinfo.value)] - + lines += py.std.traceback.format_exception(*excinfo.value.exc_info) return ReprFailDoctest(reprlocation, lines) else: return super(DoctestItem, self).repr_failure(excinfo) diff --git a/_pytest/helpconfig.py b/_pytest/helpconfig.py --- a/_pytest/helpconfig.py +++ b/_pytest/helpconfig.py @@ -16,9 +16,6 @@ group.addoption('--traceconfig', action="store_true", dest="traceconfig", default=False, help="trace considerations of conftest.py files."), - group._addoption('--nomagic', - action="store_true", dest="nomagic", default=False, - help="don't reinterpret asserts, no traceback cutting. 
") group.addoption('--debug', action="store_true", dest="debug", default=False, help="generate and show internal debugging information.") diff --git a/_pytest/junitxml.py b/_pytest/junitxml.py --- a/_pytest/junitxml.py +++ b/_pytest/junitxml.py @@ -65,7 +65,8 @@ class LogXML(object): def __init__(self, logfile, prefix): - self.logfile = logfile + logfile = os.path.expanduser(os.path.expandvars(logfile)) + self.logfile = os.path.normpath(logfile) self.prefix = prefix self.test_logs = [] self.passed = self.skipped = 0 @@ -76,7 +77,7 @@ names = report.nodeid.split("::") names[0] = names[0].replace("/", '.') names = tuple(names) - d = {'time': self._durations.pop(names, "0")} + d = {'time': self._durations.pop(report.nodeid, "0")} names = [x.replace(".py", "") for x in names if x != "()"] classnames = names[:-1] if self.prefix: @@ -170,12 +171,11 @@ self.append_skipped(report) def pytest_runtest_call(self, item, __multicall__): - names = tuple(item.listnames()) start = time.time() try: return __multicall__.execute() finally: - self._durations[names] = time.time() - start + self._durations[item.nodeid] = time.time() - start def pytest_collectreport(self, report): if not report.passed: diff --git a/_pytest/main.py b/_pytest/main.py --- a/_pytest/main.py +++ b/_pytest/main.py @@ -46,23 +46,25 @@ def pytest_namespace(): - return dict(collect=dict(Item=Item, Collector=Collector, File=File)) + collect = dict(Item=Item, Collector=Collector, File=File, Session=Session) + return dict(collect=collect) def pytest_configure(config): py.test.config = config # compatibiltiy if config.option.exitfirst: config.option.maxfail = 1 -def pytest_cmdline_main(config): - """ default command line protocol for initialization, session, - running tests and reporting. 
""" +def wrap_session(config, doit): + """Skeleton command line program""" session = Session(config) session.exitstatus = EXIT_OK + initstate = 0 try: config.pluginmanager.do_configure(config) + initstate = 1 config.hook.pytest_sessionstart(session=session) - config.hook.pytest_collection(session=session) - config.hook.pytest_runtestloop(session=session) + initstate = 2 + doit(config, session) except pytest.UsageError: raise except KeyboardInterrupt: @@ -77,18 +79,24 @@ sys.stderr.write("mainloop: caught Spurious SystemExit!\n") if not session.exitstatus and session._testsfailed: session.exitstatus = EXIT_TESTSFAILED - config.hook.pytest_sessionfinish(session=session, - exitstatus=session.exitstatus) - config.pluginmanager.do_unconfigure(config) + if initstate >= 2: + config.hook.pytest_sessionfinish(session=session, + exitstatus=session.exitstatus) + if initstate >= 1: + config.pluginmanager.do_unconfigure(config) return session.exitstatus +def pytest_cmdline_main(config): + return wrap_session(config, _main) + +def _main(config, session): + """ default command line protocol for initialization, session, + running tests and reporting. 
""" + config.hook.pytest_collection(session=session) + config.hook.pytest_runtestloop(session=session) + def pytest_collection(session): - session.perform_collect() - hook = session.config.hook - hook.pytest_collection_modifyitems(session=session, - config=session.config, items=session.items) - hook.pytest_collection_finish(session=session) - return True + return session.perform_collect() def pytest_runtestloop(session): if session.config.option.collectonly: @@ -374,6 +382,16 @@ return HookProxy(fspath, self.config) def perform_collect(self, args=None, genitems=True): + hook = self.config.hook + try: + items = self._perform_collect(args, genitems) + hook.pytest_collection_modifyitems(session=self, + config=self.config, items=items) + finally: + hook.pytest_collection_finish(session=self) + return items + + def _perform_collect(self, args, genitems): if args is None: args = self.config.args self.trace("perform_collect", self, args) diff --git a/_pytest/mark.py b/_pytest/mark.py --- a/_pytest/mark.py +++ b/_pytest/mark.py @@ -153,7 +153,7 @@ def __repr__(self): return "" % ( - self._name, self.args, self.kwargs) + self.name, self.args, self.kwargs) def pytest_itemcollected(item): if not isinstance(item, pytest.Function): diff --git a/_pytest/pytester.py b/_pytest/pytester.py --- a/_pytest/pytester.py +++ b/_pytest/pytester.py @@ -6,7 +6,7 @@ import inspect import time from fnmatch import fnmatch -from _pytest.main import Session +from _pytest.main import Session, EXIT_OK from py.builtin import print_ from _pytest.core import HookRelay @@ -292,13 +292,19 @@ assert '::' not in str(arg) p = py.path.local(arg) x = session.fspath.bestrelpath(p) - return session.perform_collect([x], genitems=False)[0] + config.hook.pytest_sessionstart(session=session) + res = session.perform_collect([x], genitems=False)[0] + config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK) + return res def getpathnode(self, path): - config = self.parseconfig(path) + config = 
self.parseconfigure(path) session = Session(config) x = session.fspath.bestrelpath(path) - return session.perform_collect([x], genitems=False)[0] + config.hook.pytest_sessionstart(session=session) + res = session.perform_collect([x], genitems=False)[0] + config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK) + return res def genitems(self, colitems): session = colitems[0].session @@ -312,7 +318,9 @@ config = self.parseconfigure(*args) rec = self.getreportrecorder(config) session = Session(config) + config.hook.pytest_sessionstart(session=session) session.perform_collect() + config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK) return session.items, rec def runitem(self, source): @@ -382,6 +390,8 @@ c.basetemp = py.path.local.make_numbered_dir(prefix="reparse", keep=0, rootdir=self.tmpdir, lock_timeout=None) c.parse(args) + c.pluginmanager.do_configure(c) + self.request.addfinalizer(lambda: c.pluginmanager.do_unconfigure(c)) return c finally: py.test.config = oldconfig diff --git a/_pytest/python.py b/_pytest/python.py --- a/_pytest/python.py +++ b/_pytest/python.py @@ -226,8 +226,13 @@ def _importtestmodule(self): # we assume we are only called once per module + from _pytest import assertion + assertion.before_module_import(self) try: - mod = self.fspath.pyimport(ensuresyspath=True) + try: + mod = self.fspath.pyimport(ensuresyspath=True) + finally: + assertion.after_module_import(self) except SyntaxError: excinfo = py.code.ExceptionInfo() raise self.CollectError(excinfo.getrepr(style="short")) @@ -374,7 +379,7 @@ # test generators are seen as collectors but they also # invoke setup/teardown on popular request # (induced by the common "test_*" naming shared with normal tests) - self.config._setupstate.prepare(self) + self.session._setupstate.prepare(self) # see FunctionMixin.setup and test_setupstate_is_preserved_134 self._preservedparent = self.parent.obj l = [] @@ -721,7 +726,7 @@ def _addfinalizer(self, finalizer, scope): colitem = 
self._getscopeitem(scope) - self.config._setupstate.addfinalizer( + self._pyfuncitem.session._setupstate.addfinalizer( finalizer=finalizer, colitem=colitem) def __repr__(self): @@ -742,8 +747,10 @@ raise self.LookupError(msg) def showfuncargs(config): - from _pytest.main import Session - session = Session(config) + from _pytest.main import wrap_session + return wrap_session(config, _showfuncargs_main) + +def _showfuncargs_main(config, session): session.perform_collect() if session.items: plugins = session.items[0].getplugins() diff --git a/_pytest/runner.py b/_pytest/runner.py --- a/_pytest/runner.py +++ b/_pytest/runner.py @@ -14,17 +14,15 @@ # # pytest plugin hooks -# XXX move to pytest_sessionstart and fix py.test owns tests -def pytest_configure(config): - config._setupstate = SetupState() +def pytest_sessionstart(session): + session._setupstate = SetupState() def pytest_sessionfinish(session, exitstatus): - if hasattr(session.config, '_setupstate'): - hook = session.config.hook - rep = hook.pytest__teardown_final(session=session) - if rep: - hook.pytest__teardown_final_logerror(session=session, report=rep) - session.exitstatus = 1 + hook = session.config.hook + rep = hook.pytest__teardown_final(session=session) + if rep: + hook.pytest__teardown_final_logerror(session=session, report=rep) + session.exitstatus = 1 class NodeInfo: def __init__(self, location): @@ -46,16 +44,16 @@ return reports def pytest_runtest_setup(item): - item.config._setupstate.prepare(item) + item.session._setupstate.prepare(item) def pytest_runtest_call(item): item.runtest() def pytest_runtest_teardown(item): - item.config._setupstate.teardown_exact(item) + item.session._setupstate.teardown_exact(item) def pytest__teardown_final(session): - call = CallInfo(session.config._setupstate.teardown_all, when="teardown") + call = CallInfo(session._setupstate.teardown_all, when="teardown") if call.excinfo: ntraceback = call.excinfo.traceback .cut(excludepath=py._pydir) call.excinfo.traceback = 
ntraceback.filter() diff --git a/ctypes_configure/configure.py b/ctypes_configure/configure.py --- a/ctypes_configure/configure.py +++ b/ctypes_configure/configure.py @@ -559,7 +559,9 @@ C_HEADER = """ #include #include /* for offsetof() */ -#include /* FreeBSD: for uint64_t */ +#ifndef _WIN32 +# include /* FreeBSD: for uint64_t */ +#endif void dump(char* key, int value) { printf("%s: %d\\n", key, value); diff --git a/ctypes_configure/stdoutcapture.py b/ctypes_configure/stdoutcapture.py --- a/ctypes_configure/stdoutcapture.py +++ b/ctypes_configure/stdoutcapture.py @@ -15,6 +15,15 @@ not hasattr(os, 'fdopen')): self.dummy = 1 else: + try: + self.tmpout = os.tmpfile() + if mixed_out_err: + self.tmperr = self.tmpout + else: + self.tmperr = os.tmpfile() + except OSError: # bah? on at least one Windows box + self.dummy = 1 + return self.dummy = 0 # make new stdout/stderr files if needed self.localoutfd = os.dup(1) @@ -29,11 +38,6 @@ sys.stderr = os.fdopen(self.localerrfd, 'w', 0) else: self.saved_stderr = None - self.tmpout = os.tmpfile() - if mixed_out_err: - self.tmperr = self.tmpout - else: - self.tmperr = os.tmpfile() os.dup2(self.tmpout.fileno(), 1) os.dup2(self.tmperr.fileno(), 2) diff --git a/dotviewer/graphparse.py b/dotviewer/graphparse.py --- a/dotviewer/graphparse.py +++ b/dotviewer/graphparse.py @@ -36,48 +36,45 @@ print >> sys.stderr, "Warning: could not guess file type, using 'dot'" return 'unknown' -def dot2plain(content, contenttype, use_codespeak=False): - if contenttype == 'plain': - # already a .plain file - return content +def dot2plain_graphviz(content, contenttype, use_codespeak=False): + if contenttype != 'neato': + cmdline = 'dot -Tplain' + else: + cmdline = 'neato -Tplain' + #print >> sys.stderr, '* running:', cmdline + close_fds = sys.platform != 'win32' + p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, + stdin=subprocess.PIPE, stdout=subprocess.PIPE) + (child_in, child_out) = (p.stdin, p.stdout) + try: + import thread + except 
ImportError: + bkgndwrite(child_in, content) + else: + thread.start_new_thread(bkgndwrite, (child_in, content)) + plaincontent = child_out.read() + child_out.close() + if not plaincontent: # 'dot' is likely not installed + raise PlainParseError("no result from running 'dot'") + return plaincontent - if not use_codespeak: - if contenttype != 'neato': - cmdline = 'dot -Tplain' - else: - cmdline = 'neato -Tplain' - #print >> sys.stderr, '* running:', cmdline - close_fds = sys.platform != 'win32' - p = subprocess.Popen(cmdline, shell=True, close_fds=close_fds, - stdin=subprocess.PIPE, stdout=subprocess.PIPE) - (child_in, child_out) = (p.stdin, p.stdout) - try: - import thread - except ImportError: - bkgndwrite(child_in, content) - else: - thread.start_new_thread(bkgndwrite, (child_in, content)) - plaincontent = child_out.read() - child_out.close() - if not plaincontent: # 'dot' is likely not installed - raise PlainParseError("no result from running 'dot'") - else: - import urllib - request = urllib.urlencode({'dot': content}) - url = 'http://codespeak.net/pypy/convertdot.cgi' - print >> sys.stderr, '* posting:', url - g = urllib.urlopen(url, data=request) - result = [] - while True: - data = g.read(16384) - if not data: - break - result.append(data) - g.close() - plaincontent = ''.join(result) - # very simple-minded way to give a somewhat better error message - if plaincontent.startswith('> sys.stderr, '* posting:', url + g = urllib.urlopen(url, data=request) + result = [] + while True: + data = g.read(16384) + if not data: + break + result.append(data) + g.close() + plaincontent = ''.join(result) + # very simple-minded way to give a somewhat better error message + if plaincontent.startswith('" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ 
self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + f = open(name, "w") + f.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return 
func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -964,7 +967,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -976,7 +980,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/TODO b/lib-python/TODO deleted file mode 100644 --- a/lib-python/TODO +++ /dev/null @@ -1,100 +0,0 @@ -TODO list for 2.7.0 -=================== - -You can find the results of the most recent buildbot run at: -http://buildbot.pypy.org/ - - -Probably easy tasks -------------------- - -- (unicode|bytearray).(index|find) should accept None as indices (see - test_unicode.py) - -- missing posix.confstr and posix.confstr_names - -- remove code duplication: bit_length() and _count_bits() in rlib/rbigint.py, - objspace/std/longobject.py and objspace/std/longtype.py. 
- -- missing module pyexpat.errors - -- support for PYTHONIOENCODING, this needs a way to update file.encoding - -- implement format__Complex_ANY() in pypy/objspace/std/complexobject.py - -- Code like this does not work, for two reasons:: - - \ - from __future__ import (with_statement, - unicode_literals) - assert type("") is unicode - -- Code like:: - - assert(x is not None, "error message") - - should emit a SyntaxWarning when compiled (the tuple is always true) - - -Medium tasks ------------- - -- socket module has a couple of changes (including AF_TIPC packet range) - -Longer tasks ------------- - -- Fix usage of __cmp__ in subclasses:: - - class badint(int): - def __cmp__(self, other): - raise RuntimeError - raises(RuntimeError, cmp, 0, badint(1)) - -- Fix comparison of objects layout: if two classes have the same __slots__, it - should be possible to change the instances __class__:: - - class A(object): __slots__ = ('a', 'b') - class B(object): __slots__ = ('b', 'a') - a = A() - a.__class__ = B - -- Show a ResourceWarning when a file/socket is not explicitely closed, like - CPython did for 3.2: http://svn.python.org/view?view=rev&revision=85920 - in PyPy this should be enabled by default - -Won't do for this release -------------------------- - -Note: when you give up with a missing feature, please mention it here, as well -as the various skips added to the test suite. - -- py3k warnings - - * the -3 flag is accepted on the command line, but displays a warning (see - `translator/goal/app_main.py`) - -- CJK codecs. - - * In `./conftest.py`, skipped all `test_codecencodings_*.py` and - `test_codecmaps_*.py`. - - * In test_codecs, commented out various items in `all_unicode_encodings`. - -- Error messages about ill-formed calls (like "argument after ** must be a - mapping") don't always show the function name. 
That's hard to fix for - the case of errors raised when the Argument object is created (as opposed - to when parsing for a given target function, which occurs later). - - * Some "..." were added to doctests in test_extcall.py - -- CPython's builtin methods are both functions and unbound methods (for - example, `str.upper is dict(str.__dict__)['upper']`). This is not the case - in pypy, and assertions like `object.__str__ is object.__str__` are False - with pypy. Use the `==` operator instead. - - * pprint.py, _threading_local.py - -- When importing a nested module fails, the ImportError message mentions the - name of the package up to the component that could not be imported (CPython - prefers to display the names starting with the failing part). diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -61,7 +61,7 @@ usemodules = '', skip=None): self.basename = basename - self._usemodules = usemodules.split() + self._usemodules = usemodules.split() + ['signal'] self._compiler = compiler self.core = core self.skip = skip @@ -154,18 +154,18 @@ RegrTest('test_cmd.py'), RegrTest('test_cmd_line_script.py'), RegrTest('test_codeccallbacks.py', core=True), - RegrTest('test_codecencodings_cn.py', skip="encodings not available"), - RegrTest('test_codecencodings_hk.py', skip="encodings not available"), - RegrTest('test_codecencodings_jp.py', skip="encodings not available"), - RegrTest('test_codecencodings_kr.py', skip="encodings not available"), - RegrTest('test_codecencodings_tw.py', skip="encodings not available"), + RegrTest('test_codecencodings_cn.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_hk.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_jp.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_kr.py', usemodules='_multibytecodec'), + RegrTest('test_codecencodings_tw.py', usemodules='_multibytecodec'), - RegrTest('test_codecmaps_cn.py', 
skip="encodings not available"), - RegrTest('test_codecmaps_hk.py', skip="encodings not available"), - RegrTest('test_codecmaps_jp.py', skip="encodings not available"), - RegrTest('test_codecmaps_kr.py', skip="encodings not available"), - RegrTest('test_codecmaps_tw.py', skip="encodings not available"), - RegrTest('test_codecs.py', core=True), + RegrTest('test_codecmaps_cn.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_hk.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_jp.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_kr.py', usemodules='_multibytecodec'), + RegrTest('test_codecmaps_tw.py', usemodules='_multibytecodec'), + RegrTest('test_codecs.py', core=True, usemodules='_multibytecodec'), RegrTest('test_codeop.py', core=True), RegrTest('test_coercion.py', core=True), RegrTest('test_collections.py'), @@ -201,7 +201,7 @@ RegrTest('test_difflib.py'), RegrTest('test_dircache.py', core=True), RegrTest('test_dis.py'), - RegrTest('test_distutils.py'), + RegrTest('test_distutils.py', skip=True), RegrTest('test_dl.py', skip=True), RegrTest('test_doctest.py', usemodules="thread"), RegrTest('test_doctest2.py'), @@ -314,10 +314,10 @@ RegrTest('test_mmap.py'), RegrTest('test_module.py', core=True), RegrTest('test_modulefinder.py'), - RegrTest('test_multibytecodec.py', skip="unsupported codecs"), + RegrTest('test_multibytecodec.py', usemodules='_multibytecodec'), RegrTest('test_multibytecodec_support.py', skip="not a test"), RegrTest('test_multifile.py'), - RegrTest('test_multiprocessing.py', skip='FIXME leaves subprocesses'), + RegrTest('test_multiprocessing.py', skip="FIXME leaves subprocesses"), RegrTest('test_mutants.py', core="possibly"), RegrTest('test_mutex.py'), RegrTest('test_netrc.py'), @@ -359,7 +359,7 @@ RegrTest('test_property.py', core=True), RegrTest('test_pstats.py'), RegrTest('test_pty.py', skip="unsupported extension module"), - RegrTest('test_pwd.py', skip=skip_win32), + RegrTest('test_pwd.py', usemodules="pwd", 
skip=skip_win32), RegrTest('test_py3kwarn.py'), RegrTest('test_pyclbr.py'), RegrTest('test_pydoc.py'), @@ -400,7 +400,7 @@ RegrTest('test_softspace.py', core=True), RegrTest('test_sort.py', core=True), - RegrTest('test_ssl.py'), + RegrTest('test_ssl.py', usemodules='_ssl _socket select'), RegrTest('test_str.py', core=True), RegrTest('test_strftime.py'), @@ -569,7 +569,6 @@ # import os import time -import socket import getpass class ReallyRunFileExternal(py.test.collect.Item): diff --git a/lib-python/modified-2.7/ctypes/__init__.py b/lib-python/modified-2.7/ctypes/__init__.py --- a/lib-python/modified-2.7/ctypes/__init__.py +++ b/lib-python/modified-2.7/ctypes/__init__.py @@ -7,6 +7,7 @@ __version__ = "1.1.0" +import _ffi from _ctypes import Union, Structure, Array from _ctypes import _Pointer from _ctypes import CFuncPtr as _CFuncPtr @@ -350,7 +351,7 @@ self._FuncPtr = _FuncPtr if handle is None: - self._handle = _dlopen(self._name, mode) + self._handle = _ffi.CDLL(name, mode) else: self._handle = handle @@ -488,9 +489,12 @@ _flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI return CFunctionType -_cast = PYFUNCTYPE(py_object, c_void_p, py_object, py_object)(_cast_addr) def cast(obj, typ): - return _cast(obj, obj, typ) + try: + c_void_p.from_param(obj) + except TypeError, e: + raise ArgumentError(str(e)) + return _cast_addr(obj, obj, typ) _string_at = PYFUNCTYPE(py_object, c_void_p, c_int)(_string_at_addr) def string_at(ptr, size=-1): diff --git a/lib-python/modified-2.7/ctypes/test/test_callbacks.py b/lib-python/modified-2.7/ctypes/test/test_callbacks.py --- a/lib-python/modified-2.7/ctypes/test/test_callbacks.py +++ b/lib-python/modified-2.7/ctypes/test/test_callbacks.py @@ -1,5 +1,6 @@ import unittest from ctypes import * +from ctypes.test import xfail import _ctypes_test class Callbacks(unittest.TestCase): @@ -98,6 +99,7 @@ ## self.check_type(c_char_p, "abc") ## self.check_type(c_char_p, "def") + @xfail def test_pyobject(self): o = () from sys import getrefcount as 
grc diff --git a/lib-python/modified-2.7/ctypes/test/test_cfuncs.py b/lib-python/modified-2.7/ctypes/test/test_cfuncs.py --- a/lib-python/modified-2.7/ctypes/test/test_cfuncs.py +++ b/lib-python/modified-2.7/ctypes/test/test_cfuncs.py @@ -3,8 +3,8 @@ import unittest from ctypes import * - import _ctypes_test +from test.test_support import impl_detail class CFunctions(unittest.TestCase): _dll = CDLL(_ctypes_test.__file__) @@ -158,12 +158,14 @@ self.assertEqual(self._dll.tf_bd(0, 42.), 14.) self.assertEqual(self.S(), 42) + @impl_detail('long double not supported by PyPy', pypy=False) def test_longdouble(self): self._dll.tf_D.restype = c_longdouble self._dll.tf_D.argtypes = (c_longdouble,) self.assertEqual(self._dll.tf_D(42.), 14.) self.assertEqual(self.S(), 42) - + + @impl_detail('long double not supported by PyPy', pypy=False) def test_longdouble_plus(self): self._dll.tf_bD.restype = c_longdouble self._dll.tf_bD.argtypes = (c_byte, c_longdouble) diff --git a/lib-python/modified-2.7/ctypes/test/test_functions.py b/lib-python/modified-2.7/ctypes/test/test_functions.py --- a/lib-python/modified-2.7/ctypes/test/test_functions.py +++ b/lib-python/modified-2.7/ctypes/test/test_functions.py @@ -8,6 +8,7 @@ from ctypes import * import sys, unittest from ctypes.test import xfail +from test.test_support import impl_detail try: WINFUNCTYPE @@ -144,6 +145,7 @@ self.assertEqual(result, -21) self.assertEqual(type(result), float) + @impl_detail('long double not supported by PyPy', pypy=False) def test_longdoubleresult(self): f = dll._testfunc_D_bhilfD f.argtypes = [c_byte, c_short, c_int, c_long, c_float, c_longdouble] diff --git a/lib-python/modified-2.7/ctypes/test/test_libc.py b/lib-python/modified-2.7/ctypes/test/test_libc.py --- a/lib-python/modified-2.7/ctypes/test/test_libc.py +++ b/lib-python/modified-2.7/ctypes/test/test_libc.py @@ -25,7 +25,11 @@ lib.my_qsort(chars, len(chars)-1, sizeof(c_char), comparefunc(sort)) self.assertEqual(chars.raw, " ,,aaaadmmmnpppsss\x00") - 
def test_no_more_xfail(self): + def SKIPPED_test_no_more_xfail(self): + # We decided to not explicitly support the whole ctypes-2.7 + # and instead go for a case-by-case, demand-driven approach. + # So this test is skipped instead of failing. + import socket import ctypes.test self.assertTrue(not hasattr(ctypes.test, 'xfail'), "You should incrementally grep for '@xfail' and remove them, they are real failures") diff --git a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py --- a/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py +++ b/lib-python/modified-2.7/ctypes/test/test_simplesubclasses.py @@ -1,6 +1,5 @@ import unittest from ctypes import * -from ctypes.test import xfail class MyInt(c_int): def __cmp__(self, other): @@ -27,7 +26,6 @@ self.assertEqual(None, cb()) - @xfail def test_int_callback(self): args = [] def func(arg): diff --git a/lib-python/modified-2.7/ctypes/util.py b/lib-python/modified-2.7/ctypes/util.py --- a/lib-python/modified-2.7/ctypes/util.py +++ b/lib-python/modified-2.7/ctypes/util.py @@ -72,8 +72,8 @@ return name if os.name == "posix" and sys.platform == "darwin": - from ctypes.macholib.dyld import dyld_find as _dyld_find def find_library(name): + from ctypes.macholib.dyld import dyld_find as _dyld_find possible = ['lib%s.dylib' % name, '%s.dylib' % name, '%s.framework/%s' % (name, name)] diff --git a/lib-python/modified-2.7/distutils/cygwinccompiler.py b/lib-python/modified-2.7/distutils/cygwinccompiler.py --- a/lib-python/modified-2.7/distutils/cygwinccompiler.py +++ b/lib-python/modified-2.7/distutils/cygwinccompiler.py @@ -75,6 +75,9 @@ elif msc_ver == '1500': # VS2008 / MSVC 9.0 return ['msvcr90'] + elif msc_ver == '1600': + # VS2010 / MSVC 10.0 + return ['msvcr100'] else: raise ValueError("Unknown MS Compiler version %s " % msc_ver) diff --git a/lib-python/modified-2.7/distutils/sysconfig.py b/lib-python/modified-2.7/distutils/sysconfig.py --- 
a/lib-python/modified-2.7/distutils/sysconfig.py +++ b/lib-python/modified-2.7/distutils/sysconfig.py @@ -20,8 +20,10 @@ if '__pypy__' in sys.builtin_module_names: from distutils.sysconfig_pypy import * from distutils.sysconfig_pypy import _config_vars # needed by setuptools + from distutils.sysconfig_pypy import _variable_rx # read_setup_file() else: from distutils.sysconfig_cpython import * from distutils.sysconfig_cpython import _config_vars # needed by setuptools + from distutils.sysconfig_cpython import _variable_rx # read_setup_file() diff --git a/lib-python/modified-2.7/distutils/sysconfig_pypy.py b/lib-python/modified-2.7/distutils/sysconfig_pypy.py --- a/lib-python/modified-2.7/distutils/sysconfig_pypy.py +++ b/lib-python/modified-2.7/distutils/sysconfig_pypy.py @@ -116,3 +116,13 @@ if compiler.compiler_type == "unix": compiler.compiler_so.extend(['-fPIC', '-Wimplicit']) compiler.shared_lib_extension = get_config_var('SO') + if "CFLAGS" in os.environ: + cflags = os.environ["CFLAGS"] + compiler.compiler.append(cflags) + compiler.compiler_so.append(cflags) + compiler.linker_so.append(cflags) + + +from sysconfig_cpython import ( + parse_makefile, _variable_rx, expand_makefile_vars) + diff --git a/lib-python/modified-2.7/distutils/unixccompiler.py b/lib-python/modified-2.7/distutils/unixccompiler.py --- a/lib-python/modified-2.7/distutils/unixccompiler.py +++ b/lib-python/modified-2.7/distutils/unixccompiler.py @@ -324,7 +324,7 @@ # On OSX users can specify an alternate SDK using # '-isysroot', calculate the SDK root if it is specified # (and use it further on) - cflags = sysconfig.get_config_var('CFLAGS') + cflags = sysconfig.get_config_var('CFLAGS') or '' m = re.search(r'-isysroot\s+(\S+)', cflags) if m is None: sysroot = '/' diff --git a/lib-python/modified-2.7/heapq.py b/lib-python/modified-2.7/heapq.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/heapq.py @@ -0,0 +1,442 @@ +# -*- coding: latin-1 -*- + +"""Heap queue algorithm (a.k.a. 
priority queue). + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +Usage: + +heap = [] # creates an empty heap +heappush(heap, item) # pushes a new item on the heap +item = heappop(heap) # pops the smallest item from the heap +item = heap[0] # smallest item on the heap without popping it +heapify(x) # transforms list into a heap, in-place, in linear time +item = heapreplace(heap, item) # pops and returns smallest item, and adds + # new item; the heap size is unchanged + +Our API differs from textbook heap algorithms as follows: + +- We use 0-based indexing. This makes the relationship between the + index for a node and the indexes for its children slightly less + obvious, but is more suitable since Python uses 0-based indexing. + +- Our heappop() method returns the smallest item, not the largest. + +These two make it possible to view the heap as a regular Python list +without surprises: heap[0] is the smallest item, and heap.sort() +maintains the heap invariant! +""" + +# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger + +__about__ = """Heap queues + +[explanation by François Pinard] + +Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for +all k, counting elements from 0. For the sake of comparison, +non-existing elements are considered to be infinite. The interesting +property of a heap is that a[0] is always its smallest element. + +The strange invariant above is meant to be an efficient memory +representation for a tournament. The numbers below are `k', not a[k]: + + 0 + + 1 2 + + 3 4 5 6 + + 7 8 9 10 11 12 13 14 + + 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 + + +In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'.
In +a usual binary tournament we see in sports, each cell is the winner +over the two cells it tops, and we can trace the winner down the tree +to see all opponents s/he had. However, in many computer applications +of such tournaments, we do not need to trace the history of a winner. +To be more memory efficient, when a winner is promoted, we try to +replace it by something else at a lower level, and the rule becomes +that a cell and the two cells it tops contain three different items, +but the top cell "wins" over the two topped cells. + +If this heap invariant is protected at all times, index 0 is clearly +the overall winner. The simplest algorithmic way to remove it and +find the "next" winner is to move some loser (let's say cell 30 in the +diagram above) into the 0 position, and then percolate this new 0 down +the tree, exchanging values, until the invariant is re-established. +This is clearly logarithmic on the total number of items in the tree. +By iterating over all items, you get an O(n ln n) sort. + +A nice feature of this sort is that you can efficiently insert new +items while the sort is going on, provided that the inserted items are +not "better" than the last 0'th element you extracted. This is +especially useful in simulation contexts, where the tree holds all +incoming events, and the "win" condition means the smallest scheduled +time. When an event schedules other events for execution, they are +scheduled into the future, so they can easily go into the heap. So, a +heap is a good structure for implementing schedulers (this is what I +used for my MIDI sequencer :-). + +Various structures for implementing schedulers have been extensively +studied, and heaps are good for this, as they are reasonably speedy, +the speed is almost constant, and the worst case is not much different +than the average case. However, there are other representations which +are more efficient overall, yet the worst cases might be terrible.
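[editor's note: the scheduler use-case described in the docstring above can be sketched with the heapq API added in this patch; the `Scheduler`, `at`, and `pop_next` names below are illustrative only and not part of heapq]

```python
import heapq

# Minimal event-scheduler sketch following the description above: events
# live in a heap keyed by their scheduled time, so heap[0] is always the
# next event to fire.  Illustrative only; not part of heapq itself.
class Scheduler(object):
    def __init__(self):
        self._events = []          # heap of (time, event) pairs

    def at(self, time, event):
        heapq.heappush(self._events, (time, event))

    def pop_next(self):
        # the "win" condition: the smallest scheduled time
        return heapq.heappop(self._events)

sched = Scheduler()
sched.at(5.0, 'noteon')
sched.at(1.0, 'tempo')
sched.at(3.0, 'noteoff')
assert sched.pop_next() == (1.0, 'tempo')
assert sched.pop_next() == (3.0, 'noteoff')
```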
+ +Heaps are also very useful in big disk sorts. You most probably all +know that a big sort implies producing "runs" (which are pre-sorted +sequences, whose size is usually related to the amount of CPU memory), +followed by merging passes for these runs, and this merging is often +very cleverly organised[1]. It is very important that the initial +sort produces the longest runs possible. Tournaments are a good way +to do that. If, using all the memory available to hold a tournament, you +replace and percolate items that happen to fit the current run, you'll +produce runs which are twice the size of the memory for random input, +and much better for input fuzzily ordered. + +Moreover, if you output the 0'th item on disk and get an input which +may not fit in the current tournament (because the value "wins" over +the last output value), it cannot fit in the heap, so the size of the +heap decreases. The freed memory could be cleverly reused immediately +for progressively building a second heap, which grows at exactly the +same rate the first heap is melting. When the first heap completely +vanishes, you switch heaps and start a new run. Clever and quite +effective! + +In a word, heaps are useful memory structures to know. I use them in +a few applications, and I think it is good to keep a `heap' module +around. :-) + +-------------------- +[1] The disk balancing algorithms which are current, nowadays, are +more annoying than clever, and this is a consequence of the seeking +capabilities of the disks. On devices which cannot seek, like big +tape drives, the story was quite different, and one had to be very +clever to ensure (far in advance) that each tape movement will be the +most effective possible (that is, will best participate at +"progressing" the merge). Some tapes were even able to read +backwards, and this was also used to avoid the rewinding time. +Believe me, real good tape sorts were quite spectacular to watch!
+From all times, sorting has always been a Great Art! :-) +""" + +__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge', + 'nlargest', 'nsmallest', 'heappushpop'] + +from itertools import islice, repeat, count, imap, izip, tee, chain +from operator import itemgetter +import bisect + +def heappush(heap, item): + """Push item onto heap, maintaining the heap invariant.""" + heap.append(item) + _siftdown(heap, 0, len(heap)-1) + +def heappop(heap): + """Pop the smallest item off the heap, maintaining the heap invariant.""" + lastelt = heap.pop() # raises appropriate IndexError if heap is empty + if heap: + returnitem = heap[0] + heap[0] = lastelt + _siftup(heap, 0) + else: + returnitem = lastelt + return returnitem + +def heapreplace(heap, item): + """Pop and return the current smallest value, and add the new item. + + This is more efficient than heappop() followed by heappush(), and can be + more appropriate when using a fixed-size heap. Note that the value + returned may be larger than item! That constrains reasonable uses of + this routine unless written as part of a conditional replacement: + + if item > heap[0]: + item = heapreplace(heap, item) + """ + returnitem = heap[0] # raises appropriate IndexError if heap is empty + heap[0] = item + _siftup(heap, 0) + return returnitem + +def heappushpop(heap, item): + """Fast version of a heappush followed by a heappop.""" + if heap and heap[0] < item: + item, heap[0] = heap[0], item + _siftup(heap, 0) + return item + +def heapify(x): + """Transform list into a heap, in-place, in O(len(heap)) time.""" + n = len(x) + # Transform bottom-up. The largest index there's any point to looking at + # is the largest with a child index in-range, so must have 2*i + 1 < n, + # or i < (n-1)/2. If n is even = 2*j, this is (2*j-1)/2 = j-1/2 so + # j-1 is the largest, which is n//2 - 1. If n is odd = 2*j+1, this is + # (2*j+1-1)/2 = j so j-1 is the largest, and that's again n//2-1. 
+ for i in reversed(xrange(n//2)): + _siftup(x, i) + +def nlargest(n, iterable): + """Find the n largest elements in a dataset. + + Equivalent to: sorted(iterable, reverse=True)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + it = iter(iterable) + result = list(islice(it, n)) + if not result: + return result + heapify(result) + _heappushpop = heappushpop + for elem in it: + _heappushpop(result, elem) + result.sort(reverse=True) + return result + +def nsmallest(n, iterable): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable)[:n] + """ + if n < 0: # for consistency with the c impl + return [] + if hasattr(iterable, '__len__') and n * 10 <= len(iterable): + # For smaller values of n, the bisect method is faster than a minheap. + # It is also memory efficient, consuming only n elements of space. + it = iter(iterable) + result = sorted(islice(it, 0, n)) + if not result: + return result + insort = bisect.insort + pop = result.pop + los = result[-1] # los --> Largest of the nsmallest + for elem in it: + if los <= elem: + continue + insort(result, elem) + pop() + los = result[-1] + return result + # An alternative approach manifests the whole iterable in memory but + # saves comparisons by heapifying all at once. Also, saves time + # over bisect.insort() which has O(n) data movement time for every + # insertion. Finding the n smallest of an m length iterable requires + # O(m) + O(n log m) comparisons. + h = list(iterable) + heapify(h) + return map(heappop, repeat(h, min(n, len(h)))) + +# 'heap' is a heap at all indices >= startpos, except possibly for pos. pos +# is the index of a leaf with a possibly out-of-order value. Restore the +# heap invariant. +def _siftdown(heap, startpos, pos): + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. 
+ while pos > startpos: + parentpos = (pos - 1) >> 1 + parent = heap[parentpos] + if newitem < parent: + heap[pos] = parent + pos = parentpos + continue + break + heap[pos] = newitem + +# The child indices of heap index pos are already heaps, and we want to make +# a heap at index pos too. We do this by bubbling the smaller child of +# pos up (and so on with that child's children, etc) until hitting a leaf, +# then using _siftdown to move the oddball originally at index pos into place. +# +# We *could* break out of the loop as soon as we find a pos where newitem <= +# both its children, but turns out that's not a good idea, and despite that +# many books write the algorithm that way. During a heap pop, the last array +# element is sifted in, and that tends to be large, so that comparing it +# against values starting from the root usually doesn't pay (= usually doesn't +# get us out of the loop early). See Knuth, Volume 3, where this is +# explained and quantified in an exercise. +# +# Cutting the # of comparisons is important, since these routines have no +# way to extract "the priority" from an array element, so that intelligence +# is likely to be hiding in custom __cmp__ methods, or in array elements +# storing (priority, record) tuples. Comparisons are thus potentially +# expensive. +# +# On random arrays of length 1000, making this change cut the number of +# comparisons made by heapify() a little, and those made by exhaustive +# heappop() a lot, in accord with theory. Here are typical results from 3 +# runs (3 just to demonstrate how small the variance is): +# +# Compares needed by heapify Compares needed by 1000 heappops +# -------------------------- -------------------------------- +# 1837 cut to 1663 14996 cut to 8680 +# 1855 cut to 1659 14966 cut to 8678 +# 1847 cut to 1660 15024 cut to 8703 +# +# Building the heap by using heappush() 1000 times instead required +# 2198, 2148, and 2219 compares: heapify() is more efficient, when +# you can use it. 
+# +# The total compares needed by list.sort() on the same lists were 8627, +# 8627, and 8632 (this should be compared to the sum of heapify() and +# heappop() compares): list.sort() is (unsurprisingly!) more efficient +# for sorting. + +def _siftup(heap, pos): + endpos = len(heap) + startpos = pos + newitem = heap[pos] + # Bubble up the smaller child until hitting a leaf. + childpos = 2*pos + 1 # leftmost child position + while childpos < endpos: + # Set childpos to index of smaller child. + rightpos = childpos + 1 + if rightpos < endpos and not heap[childpos] < heap[rightpos]: + childpos = rightpos + # Move the smaller child up. + heap[pos] = heap[childpos] + pos = childpos + childpos = 2*pos + 1 + # The leaf at pos is empty now. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). + heap[pos] = newitem + _siftdown(heap, startpos, pos) + +# If available, use C implementation +try: + from _heapq import * +except ImportError: + pass + +def merge(*iterables): + '''Merge multiple sorted inputs into a single sorted output. + + Similar to sorted(itertools.chain(*iterables)) but returns a generator, + does not pull the data into memory all at once, and assumes that each of + the input streams is already sorted (smallest to largest). 
+ + >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25])) + [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25] + + ''' + _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration + + h = [] + h_append = h.append + for itnum, it in enumerate(map(iter, iterables)): + try: + next = it.next + h_append([next(), itnum, next]) + except _StopIteration: + pass + heapify(h) + + while 1: + try: + while 1: + v, itnum, next = s = h[0] # raises IndexError when h is empty + yield v + s[0] = next() # raises StopIteration when exhausted + _heapreplace(h, s) # restore heap condition + except _StopIteration: + _heappop(h) # remove empty iterator + except IndexError: + return + +# Extend the implementations of nsmallest and nlargest to use a key= argument +_nsmallest = nsmallest +def nsmallest(n, iterable, key=None): + """Find the n smallest elements in a dataset. + + Equivalent to: sorted(iterable, key=key)[:n] + """ + # Short-cut for n==1 is to use min() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [min(chain(head, it))] + return [min(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count()) # decorate + result = _nsmallest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(), in2) # decorate + result = _nsmallest(n, it) + return map(itemgetter(2), result) # undecorate + +_nlargest = nlargest +def nlargest(n, iterable, key=None): + """Find the n largest elements in a dataset. 
+ + Equivalent to: sorted(iterable, key=key, reverse=True)[:n] + """ + + # Short-cut for n==1 is to use max() when len(iterable)>0 + if n == 1: + it = iter(iterable) + head = list(islice(it, 1)) + if not head: + return [] + if key is None: + return [max(chain(head, it))] + return [max(chain(head, it), key=key)] + + # When n>=size, it's faster to use sort() + try: + size = len(iterable) + except (TypeError, AttributeError): + pass + else: + if n >= size: + return sorted(iterable, key=key, reverse=True)[:n] + + # When key is none, use simpler decoration + if key is None: + it = izip(iterable, count(0,-1)) # decorate + result = _nlargest(n, it) + return map(itemgetter(0), result) # undecorate + + # General case, slowest method + in1, in2 = tee(iterable) + it = izip(imap(key, in1), count(0,-1), in2) # decorate + result = _nlargest(n, it) + return map(itemgetter(2), result) # undecorate + +if __name__ == "__main__": + # Simple sanity test + heap = [] + data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] + for item in data: + heappush(heap, item) + sort = [] + while heap: + sort.append(heappop(heap)) + print sort + + import doctest + doctest.testmod() diff --git a/lib-python/modified-2.7/httplib.py b/lib-python/modified-2.7/httplib.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/httplib.py @@ -0,0 +1,1377 @@ +"""HTTP/1.1 client library + + + + +HTTPConnection goes through a number of "states", which define when a client +may legally make another request or fetch the response for a particular +request. 
This diagram details these state transitions: + + (null) + | + | HTTPConnection() + v + Idle + | + | putrequest() + v + Request-started + | + | ( putheader() )* endheaders() + v + Request-sent + | + | response = getresponse() + v + Unread-response [Response-headers-read] + |\____________________ + | | + | response.read() | putrequest() + v v + Idle Req-started-unread-response + ______/| + / | + response.read() | | ( putheader() )* endheaders() + v v + Request-started Req-sent-unread-response + | + | response.read() + v + Request-sent + +This diagram presents the following rules: + -- a second request may not be started until {response-headers-read} + -- a response [object] cannot be retrieved until {request-sent} + -- there is no differentiation between an unread response body and a + partially read response body + +Note: this enforcement is applied by the HTTPConnection class. The + HTTPResponse class does not enforce this state machine, which + implies sophisticated clients may accelerate the request/response + pipeline. Caution should be taken, though: accelerating the states + beyond the above pattern may imply knowledge of the server's + connection-close behavior for certain requests. For example, it + is impossible to tell whether the server will close the connection + UNTIL the response headers have been read; this means that further + requests cannot be placed into the pipeline until it is known that + the server will NOT be closing the connection. 
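[editor's note: the legal transitions from the diagram above can be written down as a lookup table; the states and method names mirror the diagram, but the table itself is an illustrative sketch, not part of httplib]

```python
# Sketch of the HTTPConnection state machine described above, as a
# transition table.  A missing key corresponds to an illegal call
# sequence (illustrative only; httplib enforces this differently).
TRANSITIONS = {
    ('Idle', 'putrequest'): 'Request-started',
    ('Request-started', 'putheader'): 'Request-started',
    ('Request-started', 'endheaders'): 'Request-sent',
    ('Request-sent', 'getresponse'): 'Unread-response',
    ('Unread-response', 'response.read'): 'Idle',
}

def drive(calls, state='Idle'):
    # a KeyError here means the client violated the state machine
    for call in calls:
        state = TRANSITIONS[(state, call)]
    return state

assert drive(['putrequest', 'putheader', 'endheaders']) == 'Request-sent'
```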
+ +Logical State __state __response +------------- ------- ---------- +Idle _CS_IDLE None +Request-started _CS_REQ_STARTED None +Request-sent _CS_REQ_SENT None +Unread-response _CS_IDLE +Req-started-unread-response _CS_REQ_STARTED +Req-sent-unread-response _CS_REQ_SENT +""" + +from array import array +import os +import socket +from sys import py3kwarning +from urlparse import urlsplit +import warnings +with warnings.catch_warnings(): + if py3kwarning: + warnings.filterwarnings("ignore", ".*mimetools has been removed", + DeprecationWarning) + import mimetools + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +__all__ = ["HTTP", "HTTPResponse", "HTTPConnection", + "HTTPException", "NotConnected", "UnknownProtocol", + "UnknownTransferEncoding", "UnimplementedFileMode", + "IncompleteRead", "InvalidURL", "ImproperConnectionState", + "CannotSendRequest", "CannotSendHeader", "ResponseNotReady", + "BadStatusLine", "error", "responses"] + +HTTP_PORT = 80 +HTTPS_PORT = 443 + +_UNKNOWN = 'UNKNOWN' + +# connection states +_CS_IDLE = 'Idle' +_CS_REQ_STARTED = 'Request-started' +_CS_REQ_SENT = 'Request-sent' + +# status codes +# informational +CONTINUE = 100 +SWITCHING_PROTOCOLS = 101 +PROCESSING = 102 + +# successful +OK = 200 +CREATED = 201 +ACCEPTED = 202 +NON_AUTHORITATIVE_INFORMATION = 203 +NO_CONTENT = 204 +RESET_CONTENT = 205 +PARTIAL_CONTENT = 206 +MULTI_STATUS = 207 +IM_USED = 226 + +# redirection +MULTIPLE_CHOICES = 300 +MOVED_PERMANENTLY = 301 +FOUND = 302 +SEE_OTHER = 303 +NOT_MODIFIED = 304 +USE_PROXY = 305 +TEMPORARY_REDIRECT = 307 + +# client error +BAD_REQUEST = 400 +UNAUTHORIZED = 401 +PAYMENT_REQUIRED = 402 +FORBIDDEN = 403 +NOT_FOUND = 404 +METHOD_NOT_ALLOWED = 405 +NOT_ACCEPTABLE = 406 +PROXY_AUTHENTICATION_REQUIRED = 407 +REQUEST_TIMEOUT = 408 +CONFLICT = 409 +GONE = 410 +LENGTH_REQUIRED = 411 +PRECONDITION_FAILED = 412 +REQUEST_ENTITY_TOO_LARGE = 413 +REQUEST_URI_TOO_LONG = 414 +UNSUPPORTED_MEDIA_TYPE = 415 
+REQUESTED_RANGE_NOT_SATISFIABLE = 416 +EXPECTATION_FAILED = 417 +UNPROCESSABLE_ENTITY = 422 +LOCKED = 423 +FAILED_DEPENDENCY = 424 +UPGRADE_REQUIRED = 426 + +# server error +INTERNAL_SERVER_ERROR = 500 +NOT_IMPLEMENTED = 501 +BAD_GATEWAY = 502 +SERVICE_UNAVAILABLE = 503 +GATEWAY_TIMEOUT = 504 +HTTP_VERSION_NOT_SUPPORTED = 505 +INSUFFICIENT_STORAGE = 507 +NOT_EXTENDED = 510 + +# Mapping status codes to official W3C names +responses = { + 100: 'Continue', + 101: 'Switching Protocols', + + 200: 'OK', + 201: 'Created', + 202: 'Accepted', + 203: 'Non-Authoritative Information', + 204: 'No Content', + 205: 'Reset Content', + 206: 'Partial Content', + + 300: 'Multiple Choices', + 301: 'Moved Permanently', + 302: 'Found', + 303: 'See Other', + 304: 'Not Modified', + 305: 'Use Proxy', + 306: '(Unused)', + 307: 'Temporary Redirect', + + 400: 'Bad Request', + 401: 'Unauthorized', + 402: 'Payment Required', + 403: 'Forbidden', + 404: 'Not Found', + 405: 'Method Not Allowed', + 406: 'Not Acceptable', + 407: 'Proxy Authentication Required', + 408: 'Request Timeout', + 409: 'Conflict', + 410: 'Gone', + 411: 'Length Required', + 412: 'Precondition Failed', + 413: 'Request Entity Too Large', + 414: 'Request-URI Too Long', + 415: 'Unsupported Media Type', + 416: 'Requested Range Not Satisfiable', + 417: 'Expectation Failed', + + 500: 'Internal Server Error', + 501: 'Not Implemented', + 502: 'Bad Gateway', + 503: 'Service Unavailable', + 504: 'Gateway Timeout', + 505: 'HTTP Version Not Supported', +} + +# maximal amount of data to read at one time in _safe_read +MAXAMOUNT = 1048576 + +class HTTPMessage(mimetools.Message): + + def addheader(self, key, value): + """Add header for field key handling repeats.""" + prev = self.dict.get(key) + if prev is None: + self.dict[key] = value + else: + combined = ", ".join((prev, value)) + self.dict[key] = combined + + def addcontinue(self, key, more): + """Add more field data from a continuation line.""" + prev = self.dict[key] + self.dict[key] 
= prev + "\n " + more + + def readheaders(self): + """Read header lines. + + Read header lines up to the entirely blank line that terminates them. + The (normally blank) line that ends the headers is skipped, but not + included in the returned list. If a non-header line ends the headers, + (which is an error), an attempt is made to backspace over it; it is + never included in the returned list. + + The variable self.status is set to the empty string if all went well, + otherwise it is an error message. The variable self.headers is a + completely uninterpreted list of lines contained in the header (so + printing them will reproduce the header exactly as it appears in the + file). + + If multiple header fields with the same name occur, they are combined + according to the rules in RFC 2616 sec 4.2: + + Appending each subsequent field-value to the first, each separated + by a comma. The order in which header fields with the same field-name + are received is significant to the interpretation of the combined + field value. + """ + # XXX The implementation overrides the readheaders() method of + # rfc822.Message. The base class design isn't amenable to + # customized behavior here so the method here is a copy of the + # base class code with a few small changes. 
+ + self.dict = {} + self.unixfrom = '' + self.headers = hlist = [] + self.status = '' + headerseen = "" + firstline = 1 + startofline = unread = tell = None + if hasattr(self.fp, 'unread'): + unread = self.fp.unread + elif self.seekable: + tell = self.fp.tell + while True: + if tell: + try: + startofline = tell() + except IOError: + startofline = tell = None + self.seekable = 0 + line = self.fp.readline() + if not line: + self.status = 'EOF in headers' + break + # Skip unix From name time lines + if firstline and line.startswith('From '): + self.unixfrom = self.unixfrom + line + continue + firstline = 0 + if headerseen and line[0] in ' \t': + # XXX Not sure if continuation lines are handled properly + # for http and/or for repeating headers + # It's a continuation line. + hlist.append(line) + self.addcontinue(headerseen, line.strip()) + continue + elif self.iscomment(line): + # It's a comment. Ignore it. + continue + elif self.islast(line): + # Note! No pushback here! The delimiter line gets eaten. + break + headerseen = self.isheader(line) + if headerseen: + # It's a legal header line, save it. + hlist.append(line) + self.addheader(headerseen, line[len(headerseen)+1:].strip()) + continue + else: + # It's not a header line; throw it back and stop here. + if not self.dict: + self.status = 'No headers' + else: + self.status = 'Non-header line where header expected' + # Try to undo the read. + if unread: + unread(line) + elif tell: + self.fp.seek(startofline) + else: + self.status = self.status + '; bad seek' + break + +class HTTPResponse: + + # strict: If true, raise BadStatusLine if the status line can't be + # parsed as a valid HTTP/1.0 or 1.1 status line. By default it is + # false because it prevents clients from talking to HTTP/0.9 + # servers. Note that a response with a sufficiently corrupted + # status line will look like an HTTP/0.9 response. + + # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. 
+ + def __init__(self, sock, debuglevel=0, strict=0, method=None, buffering=False): + if buffering: + # The caller won't be using any sock.recv() calls, so buffering + # is fine and recommended for performance. + self.fp = sock.makefile('rb') + else: + # The buffer size is specified as zero, because the headers of + # the response are read with readline(). If the reads were + # buffered the readline() calls could consume some of the + # response, which may be read via a recv() on the underlying + # socket. + self.fp = sock.makefile('rb', 0) + self.debuglevel = debuglevel + self.strict = strict + self._method = method + + self.msg = None + + # from the Status-Line of the response + self.version = _UNKNOWN # HTTP-Version + self.status = _UNKNOWN # Status-Code + self.reason = _UNKNOWN # Reason-Phrase + + self.chunked = _UNKNOWN # is "chunked" being used? + self.chunk_left = _UNKNOWN # bytes left to read in current chunk + self.length = _UNKNOWN # number of bytes left in response + self.will_close = _UNKNOWN # conn will close at end of response + + def _read_status(self): + # Initialize with Simple-Response defaults + line = self.fp.readline() + if self.debuglevel > 0: + print "reply:", repr(line) + if not line: + # Presumably, the server closed the connection before + # sending a valid response. + raise BadStatusLine(line) + try: + [version, status, reason] = line.split(None, 2) + except ValueError: + try: + [version, status] = line.split(None, 1) + reason = "" + except ValueError: + # empty version will cause next test to fail and status + # will be treated as 0.9 response.
+ version = "" + if not version.startswith('HTTP/'): + if self.strict: + self.close() + raise BadStatusLine(line) + else: + # assume it's a Simple-Response from an 0.9 server + self.fp = LineAndFileWrapper(line, self.fp) + return "HTTP/0.9", 200, "" + + # The status code is a three-digit number + try: + status = int(status) + if status < 100 or status > 999: + raise BadStatusLine(line) + except ValueError: + raise BadStatusLine(line) + return version, status, reason + + def begin(self): + if self.msg is not None: + # we've already started reading the response + return + + # read until we get a non-100 response + while True: + version, status, reason = self._read_status() + if status != CONTINUE: + break + # skip the header from the 100 response + while True: + skip = self.fp.readline().strip() + if not skip: + break + if self.debuglevel > 0: + print "header:", skip + + self.status = status + self.reason = reason.strip() + if version == 'HTTP/1.0': + self.version = 10 + elif version.startswith('HTTP/1.'): + self.version = 11 # use HTTP/1.1 code for HTTP/1.x where x>=1 + elif version == 'HTTP/0.9': + self.version = 9 + else: + raise UnknownProtocol(version) + + if self.version == 9: + self.length = None + self.chunked = 0 + self.will_close = 1 + self.msg = HTTPMessage(StringIO()) + return + + self.msg = HTTPMessage(self.fp, 0) + if self.debuglevel > 0: + for hdr in self.msg.headers: + print "header:", hdr, + + # don't let the msg keep an fp + self.msg.fp = None + + # are we using the chunked-style of transfer encoding? + tr_enc = self.msg.getheader('transfer-encoding') + if tr_enc and tr_enc.lower() == "chunked": + self.chunked = 1 + self.chunk_left = None + else: + self.chunked = 0 + + # will the connection close at the end of the response? + self.will_close = self._check_close() + + # do we have a Content-Length? 
+ # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked" + length = self.msg.getheader('content-length') + if length and not self.chunked: + try: + self.length = int(length) + except ValueError: + self.length = None + else: + if self.length < 0: # ignore nonsensical negative lengths + self.length = None + else: + self.length = None + + # does the body have a fixed length? (of zero) + if (status == NO_CONTENT or status == NOT_MODIFIED or + 100 <= status < 200 or # 1xx codes + self._method == 'HEAD'): + self.length = 0 + + # if the connection remains open, and we aren't using chunked, and + # a content-length was not provided, then assume that the connection + # WILL close. + if not self.will_close and \ + not self.chunked and \ + self.length is None: + self.will_close = 1 + + def _check_close(self): + conn = self.msg.getheader('connection') + if self.version == 11: + # An HTTP/1.1 proxy is assumed to stay open unless + # explicitly closed. + conn = self.msg.getheader('connection') + if conn and "close" in conn.lower(): + return True + return False + + # Some HTTP/1.0 implementations have support for persistent + # connections, using rules different than HTTP/1.1. + + # For older HTTP, Keep-Alive indicates persistent connection. + if self.msg.getheader('keep-alive'): + return False + + # At least Akamai returns a "Connection: Keep-Alive" header, + # which was supposed to be sent by the client. + if conn and "keep-alive" in conn.lower(): + return False + + # Proxy-Connection is a netscape hack. + pconn = self.msg.getheader('proxy-connection') + if pconn and "keep-alive" in pconn.lower(): + return False + + # otherwise, assume it will close + return True + + def close(self): + if self.fp: + self.fp.close() + self.fp = None + + def isclosed(self): + # NOTE: it is possible that we will not ever call self.close(). This + # case occurs when will_close is TRUE, length is None, and we + # read up to the last byte, but NOT past it. 
+ # + # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be + # called, meaning self.isclosed() is meaningful. + return self.fp is None + + # XXX It would be nice to have readline and __iter__ for this, too. + + def read(self, amt=None): + if self.fp is None: + return '' + + if self._method == 'HEAD': + self.close() + return '' + + if self.chunked: + return self._read_chunked(amt) + + if amt is None: + # unbounded read + if self.length is None: + s = self.fp.read() + else: + s = self._safe_read(self.length) + self.length = 0 + self.close() # we read everything + return s + + if self.length is not None: + if amt > self.length: + # clip the read to the "end of response" + amt = self.length + + # we do not use _safe_read() here because this may be a .will_close + # connection, and the user is reading more bytes than will be provided + # (for example, reading in 1k chunks) + s = self.fp.read(amt) + if self.length is not None: + self.length -= len(s) + if not self.length: + self.close() + return s + + def _read_chunked(self, amt): + assert self.chunked != _UNKNOWN + chunk_left = self.chunk_left + value = [] + while True: + if chunk_left is None: + line = self.fp.readline() + i = line.find(';') + if i >= 0: + line = line[:i] # strip chunk-extensions + try: + chunk_left = int(line, 16) + except ValueError: + # close the connection as protocol synchronisation is + # probably lost + self.close() + raise IncompleteRead(''.join(value)) + if chunk_left == 0: + break + if amt is None: + value.append(self._safe_read(chunk_left)) + elif amt < chunk_left: + value.append(self._safe_read(amt)) + self.chunk_left = chunk_left - amt + return ''.join(value) + elif amt == chunk_left: + value.append(self._safe_read(amt)) + self._safe_read(2) # toss the CRLF at the end of the chunk + self.chunk_left = None + return ''.join(value) + else: + value.append(self._safe_read(chunk_left)) + amt -= chunk_left + + # we read the whole chunk, get another + self._safe_read(2) # toss the 
CRLF at the end of the chunk + chunk_left = None + + # read and discard trailer up to the CRLF terminator + ### note: we shouldn't have any trailers! + while True: + line = self.fp.readline() + if not line: + # a vanishingly small number of sites EOF without + # sending the trailer + break + if line == '\r\n': + break + + # we read everything; close the "file" + self.close() + + return ''.join(value) + + def _safe_read(self, amt): + """Read the number of bytes requested, compensating for partial reads. + + Normally, we have a blocking socket, but a read() can be interrupted + by a signal (resulting in a partial read). + + Note that we cannot distinguish between EOF and an interrupt when zero + bytes have been read. IncompleteRead() will be raised in this + situation. + + This function should be used when bytes "should" be present for + reading. If the bytes are truly not available (due to EOF), then the + IncompleteRead exception can be used to detect the problem. + """ + # NOTE(gps): As of svn r74426 socket._fileobject.read(x) will never + # return less than x bytes unless EOF is encountered. It now handles + # signal interruptions (socket.error EINTR) internally. This code + # never caught that exception anyways. It seems largely pointless. + # self.fp.read(amt) will work fine. 
+ s = [] + while amt > 0: + chunk = self.fp.read(min(amt, MAXAMOUNT)) + if not chunk: + raise IncompleteRead(''.join(s), amt) + s.append(chunk) + amt -= len(chunk) + return ''.join(s) + + def fileno(self): + return self.fp.fileno() + + def getheader(self, name, default=None): + if self.msg is None: + raise ResponseNotReady() + return self.msg.getheader(name, default) + + def getheaders(self): + """Return list of (header, value) tuples.""" + if self.msg is None: + raise ResponseNotReady() + return self.msg.items() + + +class HTTPConnection: + + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + response_class = HTTPResponse + default_port = HTTP_PORT + auto_open = 1 + debuglevel = 0 + strict = 0 + + def __init__(self, host, port=None, strict=None, + timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None): + self.timeout = timeout + self.source_address = source_address + self.sock = None + self._buffer = [] + self.__response = None + self.__state = _CS_IDLE + self._method = None + self._tunnel_host = None + self._tunnel_port = None + self._tunnel_headers = {} + + self._set_hostport(host, port) + if strict is not None: + self.strict = strict + + def set_tunnel(self, host, port=None, headers=None): + """ Sets up the host and the port for the HTTP CONNECT Tunnelling. + + The headers argument should be a mapping of extra HTTP headers + to send with the CONNECT request. + """ + self._tunnel_host = host + self._tunnel_port = port + if headers: + self._tunnel_headers = headers + else: + self._tunnel_headers.clear() + + def _set_hostport(self, host, port): + if port is None: + i = host.rfind(':') + j = host.rfind(']') # ipv6 addresses have [...] 
+            if i > j:
+                try:
+                    port = int(host[i+1:])
+                except ValueError:
+                    raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
+                host = host[:i]
+            else:
+                port = self.default_port
+            if host and host[0] == '[' and host[-1] == ']':
+                host = host[1:-1]
+        self.host = host
+        self.port = port
+
+    def set_debuglevel(self, level):
+        self.debuglevel = level
+
+    def _tunnel(self):
+        self._set_hostport(self._tunnel_host, self._tunnel_port)
+        self.send("CONNECT %s:%d HTTP/1.0\r\n" % (self.host, self.port))
+        for header, value in self._tunnel_headers.iteritems():
+            self.send("%s: %s\r\n" % (header, value))
+        self.send("\r\n")
+        response = self.response_class(self.sock, strict = self.strict,
+                                       method = self._method)
+        (version, code, message) = response._read_status()
+
+        if code != 200:
+            self.close()
+            raise socket.error("Tunnel connection failed: %d %s" % (code,
+                                                                    message.strip()))
+        while True:
+            line = response.fp.readline()
+            if line == '\r\n': break
+
+
+    def connect(self):
+        """Connect to the host and port specified in __init__."""
+        self.sock = socket.create_connection((self.host,self.port),
+                                             self.timeout, self.source_address)
+
+        if self._tunnel_host:
+            self._tunnel()
+
+    def close(self):
+        """Close the connection to the HTTP server."""
+        if self.sock:
+            self.sock.close()   # close it manually... there may be other refs
+        self.sock = None
+        if self.__response:
+            self.__response.close()
+            self.__response = None
+        self.__state = _CS_IDLE
+
+    def send(self, data):
+        """Send `data' to the server."""
+        if self.sock is None:
+            if self.auto_open:
+                self.connect()
+            else:
+                raise NotConnected()
+
+        if self.debuglevel > 0:
+            print "send:", repr(data)
+        blocksize = 8192
+        if hasattr(data,'read') and not isinstance(data, array):
+            if self.debuglevel > 0: print "sendIng a read()able"
+            datablock = data.read(blocksize)
+            while datablock:
+                self.sock.sendall(datablock)
+                datablock = data.read(blocksize)
+        else:
+            self.sock.sendall(data)
+
+    def _output(self, s):
+        """Add a line of output to the current request buffer.
+
+        Assumes that the line does *not* end with \\r\\n.
+        """
+        self._buffer.append(s)
+
+    def _send_output(self, message_body=None):
+        """Send the currently buffered request and clear the buffer.
+
+        Appends an extra \\r\\n to the buffer.
+        A message_body may be specified, to be appended to the request.
+        """
+        self._buffer.extend(("", ""))
+        msg = "\r\n".join(self._buffer)
+        del self._buffer[:]
+        # If msg and message_body are sent in a single send() call,
+        # it will avoid performance problems caused by the interaction
+        # between delayed ack and the Nagle algorithm.
+        if isinstance(message_body, str):
+            msg += message_body
+            message_body = None
+        self.send(msg)
+        if message_body is not None:
+            #message_body was not a string (i.e. it is a file) and
+            #we must run the risk of Nagle
+            self.send(message_body)
+
+    def putrequest(self, method, url, skip_host=0, skip_accept_encoding=0):
+        """Send a request to the server.
+
+        `method' specifies an HTTP request method, e.g. 'GET'.
+        `url' specifies the object being requested, e.g. '/index.html'.
+ `skip_host' if True does not add automatically a 'Host:' header + `skip_accept_encoding' if True does not add automatically an + 'Accept-Encoding:' header + """ + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + + # in certain cases, we cannot issue another request on this connection. + # this occurs when: + # 1) we are in the process of sending a request. (_CS_REQ_STARTED) + # 2) a response to a previous request has signalled that it is going + # to close the connection upon completion. + # 3) the headers for the previous response have not been read, thus + # we cannot determine whether point (2) is true. (_CS_REQ_SENT) + # + # if there is no prior response, then we can request at will. + # + # if point (2) is true, then we will have passed the socket to the + # response (effectively meaning, "there is no prior response"), and + # will open a new one when a new request is made. + # + # Note: if a prior response exists, then we *can* start a new request. + # We are not allowed to begin fetching the response to this new + # request, however, until that prior response is complete. + # + if self.__state == _CS_IDLE: + self.__state = _CS_REQ_STARTED + else: + raise CannotSendRequest() + + # Save the method we use, we need it later in the response phase + self._method = method + if not url: + url = '/' + hdr = '%s %s %s' % (method, url, self._http_vsn_str) + + self._output(hdr) + + if self._http_vsn == 11: + # Issue some standard headers for better HTTP/1.1 compliance + + if not skip_host: + # this header is issued *only* for HTTP/1.1 + # connections. more specifically, this means it is + # only issued when the client uses the new + # HTTPConnection() class. backwards-compat clients + # will be using HTTP/1.0 and those clients may be + # issuing this header themselves. 
we should NOT issue + # it twice; some web servers (such as Apache) barf + # when they see two Host: headers + + # If we need a non-standard port,include it in the + # header. If the request is going through a proxy, + # but the host of the actual URL, not the host of the + # proxy. + + netloc = '' + if url.startswith('http'): + nil, netloc, nil, nil, nil = urlsplit(url) + + if netloc: + try: + netloc_enc = netloc.encode("ascii") + except UnicodeEncodeError: + netloc_enc = netloc.encode("idna") + self.putheader('Host', netloc_enc) + else: + try: + host_enc = self.host.encode("ascii") + except UnicodeEncodeError: + host_enc = self.host.encode("idna") + # Wrap the IPv6 Host Header with [] (RFC 2732) + if host_enc.find(':') >= 0: + host_enc = "[" + host_enc + "]" + if self.port == self.default_port: + self.putheader('Host', host_enc) + else: + self.putheader('Host', "%s:%s" % (host_enc, self.port)) + + # note: we are assuming that clients will not attempt to set these + # headers since *this* library must deal with the + # consequences. this also means that when the supporting + # libraries are updated to recognize other forms, then this + # code should be changed (removed or updated). + + # we only want a Content-Encoding of "identity" since we don't + # support encodings such as x-gzip or x-deflate. + if not skip_accept_encoding: + self.putheader('Accept-Encoding', 'identity') + + # we can accept "chunked" Transfer-Encodings, but no others + # NOTE: no TE header implies *only* "chunked" + #self.putheader('TE', 'chunked') + + # if TE is supplied in the header, then it must appear in a + # Connection header. + #self.putheader('Connection', 'TE') + + else: + # For HTTP/1.0, the server will assume "not chunked" + pass + + def putheader(self, header, *values): + """Send a request header line to the server. 
+ + For example: h.putheader('Accept', 'text/html') + """ + if self.__state != _CS_REQ_STARTED: + raise CannotSendHeader() + + hdr = '%s: %s' % (header, '\r\n\t'.join([str(v) for v in values])) + self._output(hdr) + + def endheaders(self, message_body=None): + """Indicate that the last header line has been sent to the server. + + This method sends the request to the server. The optional + message_body argument can be used to pass message body + associated with the request. The message body will be sent in + the same packet as the message headers if possible. The + message_body should be a string. + """ + if self.__state == _CS_REQ_STARTED: + self.__state = _CS_REQ_SENT + else: + raise CannotSendHeader() + self._send_output(message_body) + + def request(self, method, url, body=None, headers={}): + """Send a complete request to the server.""" + self._send_request(method, url, body, headers) + + def _set_content_length(self, body): + # Set the content-length based on the body. + thelen = None + try: + thelen = str(len(body)) + except TypeError, te: + # If this is a file-like object, try to + # fstat its file descriptor + try: + thelen = str(os.fstat(body.fileno()).st_size) + except (AttributeError, OSError): + # Don't send a length if this failed + if self.debuglevel > 0: print "Cannot stat!!" + + if thelen is not None: + self.putheader('Content-Length', thelen) + + def _send_request(self, method, url, body, headers): + # Honor explicitly requested Host: and Accept-Encoding: headers. 
+ header_names = dict.fromkeys([k.lower() for k in headers]) + skips = {} + if 'host' in header_names: + skips['skip_host'] = 1 + if 'accept-encoding' in header_names: + skips['skip_accept_encoding'] = 1 + + self.putrequest(method, url, **skips) + + if body and ('content-length' not in header_names): + self._set_content_length(body) + for hdr, value in headers.iteritems(): + self.putheader(hdr, value) + self.endheaders(body) + + def getresponse(self, buffering=False): + "Get the response from the server." + + # if a prior response has been completed, then forget about it. + if self.__response and self.__response.isclosed(): + self.__response = None + + # + # if a prior response exists, then it must be completed (otherwise, we + # cannot read this response's header to determine the connection-close + # behavior) + # + # note: if a prior response existed, but was connection-close, then the + # socket and response were made independent of this HTTPConnection + # object since a new request requires that we open a whole new + # connection + # + # this means the prior response had one of two states: + # 1) will_close: this connection was reset and the prior socket and + # response operate independently + # 2) persistent: the response was retained and we await its + # isclosed() status to become true. + # + if self.__state != _CS_REQ_SENT or self.__response: + raise ResponseNotReady() + + args = (self.sock,) + kwds = {"strict":self.strict, "method":self._method} + if self.debuglevel > 0: + args += (self.debuglevel,) + if buffering: + #only add this keyword if non-default, for compatibility with + #other response_classes. 
+ kwds["buffering"] = True; + response = self.response_class(*args, **kwds) + + try: + response.begin() + except: + response.close() + raise + assert response.will_close != _UNKNOWN + self.__state = _CS_IDLE + + if response.will_close: + # this effectively passes the connection to the response + self.close() + else: + # remember this, so we can tell when it is complete + self.__response = response + + return response + + +class HTTP: + "Compatibility class with httplib.py from 1.5." + + _http_vsn = 10 + _http_vsn_str = 'HTTP/1.0' + + debuglevel = 0 + + _connection_class = HTTPConnection + + def __init__(self, host='', port=None, strict=None): + "Provide a default host, since the superclass requires one." + + # some joker passed 0 explicitly, meaning default port + if port == 0: + port = None + + # Note that we may pass an empty string as the host; this will throw + # an error when we attempt to connect. Presumably, the client code + # will call connect before then, with a proper host. + self._setup(self._connection_class(host, port, strict)) + + def _setup(self, conn): + self._conn = conn + + # set up delegation to flesh out interface + self.send = conn.send + self.putrequest = conn.putrequest + self.putheader = conn.putheader + self.endheaders = conn.endheaders + self.set_debuglevel = conn.set_debuglevel + + conn._http_vsn = self._http_vsn + conn._http_vsn_str = self._http_vsn_str + + self.file = None + + def connect(self, host=None, port=None): + "Accept arguments to set the host/port, since the superclass doesn't." + + if host is not None: + self._conn._set_hostport(host, port) + self._conn.connect() + + def getfile(self): + "Provide a getfile, since the superclass' does not use this concept." + return self.file + + def getreply(self, buffering=False): + """Compat definition since superclass does not define it. + + Returns a tuple consisting of: + - server status code (e.g. 
'200' if all goes well) + - server "reason" corresponding to status code + - any RFC822 headers in the response from the server + """ + try: + if not buffering: + response = self._conn.getresponse() + else: + #only add this keyword if non-default for compatibility + #with other connection classes + response = self._conn.getresponse(buffering) + except BadStatusLine, e: + ### hmm. if getresponse() ever closes the socket on a bad request, + ### then we are going to have problems with self.sock + + ### should we keep this behavior? do people use it? + # keep the socket open (as a file), and return it + self.file = self._conn.sock.makefile('rb', 0) + + # close our socket -- we want to restart after any protocol error + self.close() + + self.headers = None + return -1, e.line, None + + self.headers = response.msg + self.file = response.fp + return response.status, response.reason, response.msg + + def close(self): + self._conn.close() + + # note that self.file == response.fp, which gets closed by the + # superclass. just clear the object ref here. + ### hmm. messy. if status==-1, then self.file is owned by us. + ### well... we aren't explicitly closing, but losing this ref will + ### do it + self.file = None + +try: + import ssl +except ImportError: + pass +else: + class HTTPSConnection(HTTPConnection): + "This class allows communication via SSL." + + default_port = HTTPS_PORT + + def __init__(self, host, port=None, key_file=None, cert_file=None, + strict=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, + source_address=None): + HTTPConnection.__init__(self, host, port, strict, timeout, + source_address) + self.key_file = key_file + self.cert_file = cert_file + + def connect(self): + "Connect to a host on a given (SSL) port." 
+ + sock = socket.create_connection((self.host, self.port), + self.timeout, self.source_address) + if self._tunnel_host: + self.sock = sock + self._tunnel() + self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) + + __all__.append("HTTPSConnection") + + class HTTPS(HTTP): + """Compatibility with 1.5 httplib interface + + Python 1.5.2 did not have an HTTPS class, but it defined an + interface for sending http requests that is also useful for + https. + """ + + _connection_class = HTTPSConnection + + def __init__(self, host='', port=None, key_file=None, cert_file=None, + strict=None): + # provide a default host, pass the X509 cert info + + # urf. compensate for bad input. + if port == 0: + port = None + self._setup(self._connection_class(host, port, key_file, + cert_file, strict)) + + # we never actually use these for anything, but we keep them + # here for compatibility with post-1.5.2 CVS. + self.key_file = key_file + self.cert_file = cert_file + + + def FakeSocket (sock, sslobj): + warnings.warn("FakeSocket is deprecated, and won't be in 3.x. " + + "Use the result of ssl.wrap_socket() directly instead.", + DeprecationWarning, stacklevel=2) + return sslobj + + +class HTTPException(Exception): + # Subclasses that define an __init__ must call Exception.__init__ + # or define self.args. Otherwise, str() will fail. 
+    pass
+
+class NotConnected(HTTPException):
+    pass
+
+class InvalidURL(HTTPException):
+    pass
+
+class UnknownProtocol(HTTPException):
+    def __init__(self, version):
+        self.args = version,
+        self.version = version
+
+class UnknownTransferEncoding(HTTPException):
+    pass
+
+class UnimplementedFileMode(HTTPException):
+    pass
+
+class IncompleteRead(HTTPException):
+    def __init__(self, partial, expected=None):
+        self.args = partial,
+        self.partial = partial
+        self.expected = expected
+    def __repr__(self):
+        if self.expected is not None:
+            e = ', %i more expected' % self.expected
+        else:
+            e = ''
+        return 'IncompleteRead(%i bytes read%s)' % (len(self.partial), e)
+    def __str__(self):
+        return repr(self)
+
+class ImproperConnectionState(HTTPException):
+    pass
+
+class CannotSendRequest(ImproperConnectionState):
+    pass
+
+class CannotSendHeader(ImproperConnectionState):
+    pass
+
+class ResponseNotReady(ImproperConnectionState):
+    pass
+
+class BadStatusLine(HTTPException):
+    def __init__(self, line):
+        if not line:
+            line = repr(line)
+        self.args = line,
+        self.line = line
+
+# for backwards compatibility
+error = HTTPException
+
+class LineAndFileWrapper:
+    """A limited file-like object for HTTP/0.9 responses."""
+
+    # The status-line parsing code calls readline(), which normally
+    # get the HTTP status line.  For a 0.9 response, however, this is
+    # actually the first line of the body!  Clients need to get a
+    # readable file object that contains that line.
+
+    def __init__(self, line, file):
+        self._line = line
+        self._file = file
+        self._line_consumed = 0
+        self._line_offset = 0
+        self._line_left = len(line)
+
+    def __getattr__(self, attr):
+        return getattr(self._file, attr)
+
+    def _done(self):
+        # called when the last byte is read from the line.  After the
+        # call, all read methods are delegated to the underlying file
+        # object.
+        self._line_consumed = 1
+        self.read = self._file.read
+        self.readline = self._file.readline
+        self.readlines = self._file.readlines
+
+    def read(self, amt=None):
+        if self._line_consumed:
+            return self._file.read(amt)
+        assert self._line_left
+        if amt is None or amt > self._line_left:
+            s = self._line[self._line_offset:]
+            self._done()
+            if amt is None:
+                return s + self._file.read()
+            else:
+                return s + self._file.read(amt - len(s))
+        else:
+            assert amt <= self._line_left
+            i = self._line_offset
+            j = i + amt
+            s = self._line[i:j]
+            self._line_offset = j
+            self._line_left -= amt
+            if self._line_left == 0:
+                self._done()
+            return s
+
+    def readline(self):
+        if self._line_consumed:
+            return self._file.readline()
+        assert self._line_left
+        s = self._line[self._line_offset:]
+        self._done()
+        return s
+
+    def readlines(self, size=None):
+        if self._line_consumed:
+            return self._file.readlines(size)
+        assert self._line_left
+        L = [self._line[self._line_offset:]]
+        self._done()
+        if size is None:
+            return L + self._file.readlines()
+        else:
+            return L + self._file.readlines(size)
+
+def test():
+    """Test this module.
+
+    A hodge podge of tests collected here, because they have too many
+    external dependencies for the regular test suite.
+ """ + + import sys + import getopt + opts, args = getopt.getopt(sys.argv[1:], 'd') + dl = 0 + for o, a in opts: + if o == '-d': dl = dl + 1 + host = 'www.python.org' + selector = '/' + if args[0:]: host = args[0] + if args[1:]: selector = args[1] + h = HTTP() + h.set_debuglevel(dl) + h.connect(host) + h.putrequest('GET', selector) + h.endheaders() + status, reason, headers = h.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(h.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + + # minimal test that code to extract host from url works + class HTTP11(HTTP): + _http_vsn = 11 + _http_vsn_str = 'HTTP/1.1' + + h = HTTP11('www.python.org') + h.putrequest('GET', 'http://www.python.org/~jeremy/') + h.endheaders() + h.getreply() + h.close() + + try: + import ssl + except ImportError: + pass + else: + + for host, selector in (('sourceforge.net', '/projects/python'), + ): + print "https://%s%s" % (host, selector) + hs = HTTPS() + hs.set_debuglevel(dl) + hs.connect(host) + hs.putrequest('GET', selector) + hs.endheaders() + status, reason, headers = hs.getreply() + print 'status =', status + print 'reason =', reason + print "read", len(hs.getfile().read()) + print + if headers: + for header in headers.headers: print header.strip() + print + +if __name__ == '__main__': + test() diff --git a/lib-python/modified-2.7/json/encoder.py b/lib-python/modified-2.7/json/encoder.py --- a/lib-python/modified-2.7/json/encoder.py +++ b/lib-python/modified-2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,8 
+17,7 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') @@ -37,10 +29,9 @@ """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) - -def py_encode_basestring_ascii(s): +def encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,20 +44,18 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' - - -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +py_encode_basestring_ascii = lambda s: '"' + encode_basestring_ascii(s) + '"' +c_encode_basestring_ascii = None class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +136,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = encode_basestring_ascii + else: + self.encoder = encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +184,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. 
+ if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + 
separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. - chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. 
+ elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +319,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. 
+ if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - 
separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +374,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +384,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def _iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - 
item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +430,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +439,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +447,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +460,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * 
_current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +491,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): + self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/modified-2.7/json/tests/test_unicode.py b/lib-python/modified-2.7/json/tests/test_unicode.py --- a/lib-python/modified-2.7/json/tests/test_unicode.py +++ b/lib-python/modified-2.7/json/tests/test_unicode.py @@ -80,3 +80,9 @@ self.assertEqual(type(json.loads(u'["a"]')[0]), unicode) # Issue 10038. 
self.assertEqual(type(json.loads('"foo"')), unicode) + + def test_encode_not_utf_8(self): + self.assertEqual(json.dumps('\xb1\xe6', encoding='iso8859-2'), + '"\\u0105\\u0107"') + self.assertEqual(json.dumps(['\xb1\xe6'], encoding='iso8859-2'), + '["\\u0105\\u0107"]') diff --git a/lib-python/modified-2.7/opcode.py b/lib-python/modified-2.7/opcode.py --- a/lib-python/modified-2.7/opcode.py +++ b/lib-python/modified-2.7/opcode.py @@ -189,7 +189,6 @@ def_op('MAP_ADD', 147) # pypy modification, experimental bytecode -def_op('CALL_LIKELY_BUILTIN', 200) # #args + (#kwargs << 8) def_op('LOOKUP_METHOD', 201) # Index in name list hasname.append(201) def_op('CALL_METHOD', 202) # #args not including 'self' diff --git a/lib-python/modified-2.7/pickle.py b/lib-python/modified-2.7/pickle.py --- a/lib-python/modified-2.7/pickle.py +++ b/lib-python/modified-2.7/pickle.py @@ -168,7 +168,7 @@ # Pickling machinery -class Pickler: +class Pickler(object): def __init__(self, file, protocol=None): """This takes a file-like object for writing a pickle data stream. @@ -873,7 +873,7 @@ # Unpickling machinery -class Unpickler: +class Unpickler(object): def __init__(self, file): """This takes a file-like object for reading a pickle data stream. 
diff --git a/lib-python/2.7/pkgutil.py b/lib-python/modified-2.7/pkgutil.py copy from lib-python/2.7/pkgutil.py copy to lib-python/modified-2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/modified-2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/modified-2.7/site.py b/lib-python/modified-2.7/site.py --- a/lib-python/modified-2.7/site.py +++ b/lib-python/modified-2.7/site.py @@ -454,10 +454,10 @@ __builtin__.copyright = _Printer("copyright", sys.copyright) __builtin__.credits = _Printer( "credits", - "PyPy is maintained by the PyPy developers: http://codespeak.net/pypy") + "PyPy is maintained by the PyPy developers: http://pypy.org/") __builtin__.license = _Printer( "license", - "See http://codespeak.net/svn/pypy/dist/LICENSE") + "See https://bitbucket.org/pypy/pypy/src/default/LICENSE") diff --git a/lib-python/modified-2.7/sqlite3/test/regression.py b/lib-python/modified-2.7/sqlite3/test/regression.py --- a/lib-python/modified-2.7/sqlite3/test/regression.py +++ b/lib-python/modified-2.7/sqlite3/test/regression.py @@ -274,6 +274,18 @@ cur.execute("UPDATE foo SET id = 3 WHERE id = 1") self.assertEqual(cur.description, None) + def CheckStatementCache(self): + cur = self.con.cursor() + cur.execute("CREATE TABLE foo (id INTEGER)") + values = [(i,) for i in xrange(5)] + cur.executemany("INSERT INTO foo (id) VALUES (?)", values) + + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + self.con.commit() + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") return unittest.TestSuite((regression_suite,)) diff --git a/lib-python/modified-2.7/ssl.py b/lib-python/modified-2.7/ssl.py --- a/lib-python/modified-2.7/ssl.py +++ 
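The `CheckStatementCache` regression test added above runs the identical SELECT twice across a commit so the cursor's statement cache is exercised. A standalone Python 3 version of the same scenario (the `ORDER BY` is added here only to make the row order explicit; the original test relies on insertion order):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE foo (id INTEGER)")
values = [(i,) for i in range(5)]
cur.executemany("INSERT INTO foo (id) VALUES (?)", values)

cur.execute("SELECT id FROM foo ORDER BY id")
assert list(cur) == values
con.commit()
# Re-running the identical SQL text goes through the statement cache and
# must still return all rows -- the behavior the regression test pins down.
cur.execute("SELECT id FROM foo ORDER BY id")
assert list(cur) == values
con.close()
```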
b/lib-python/modified-2.7/ssl.py @@ -62,7 +62,6 @@ from _ssl import OPENSSL_VERSION_NUMBER, OPENSSL_VERSION_INFO, OPENSSL_VERSION from _ssl import SSLError from _ssl import CERT_NONE, CERT_OPTIONAL, CERT_REQUIRED -from _ssl import PROTOCOL_SSLv2, PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 from _ssl import RAND_status, RAND_egd, RAND_add from _ssl import \ SSL_ERROR_ZERO_RETURN, \ @@ -74,6 +73,18 @@ SSL_ERROR_WANT_CONNECT, \ SSL_ERROR_EOF, \ SSL_ERROR_INVALID_ERROR_CODE +from _ssl import PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1 +_PROTOCOL_NAMES = { + PROTOCOL_TLSv1: "TLSv1", + PROTOCOL_SSLv23: "SSLv23", + PROTOCOL_SSLv3: "SSLv3", +} +try: + from _ssl import PROTOCOL_SSLv2 +except ImportError: + pass +else: + _PROTOCOL_NAMES[PROTOCOL_SSLv2] = "SSLv2" from socket import socket, _fileobject, error as socket_error from socket import getnameinfo as _getnameinfo @@ -400,16 +411,7 @@ return DER_cert_to_PEM_cert(dercert) def get_protocol_name(protocol_code): - if protocol_code == PROTOCOL_TLSv1: - return "TLSv1" - elif protocol_code == PROTOCOL_SSLv23: - return "SSLv23" - elif protocol_code == PROTOCOL_SSLv2: - return "SSLv2" - elif protocol_code == PROTOCOL_SSLv3: - return "SSLv3" - else: - return "" + return _PROTOCOL_NAMES.get(protocol_code, '') # a replacement for the old socket.ssl function diff --git a/lib-python/modified-2.7/test/regrtest.py b/lib-python/modified-2.7/test/regrtest.py --- a/lib-python/modified-2.7/test/regrtest.py +++ b/lib-python/modified-2.7/test/regrtest.py @@ -1403,7 +1403,26 @@ test_zipimport test_zlib """, - 'openbsd3': + 'openbsd4': + """ + test_ascii_formatd + test_bsddb + test_bsddb3 + test_ctypes + test_dl + test_epoll + test_gdbm + test_locale + test_normalization + test_ossaudiodev + test_pep277 + test_tcl + test_tk + test_ttk_guionly + test_ttk_textonly + test_multiprocessing + """, + 'openbsd5': """ test_ascii_formatd test_bsddb diff --git a/lib-python/modified-2.7/test/test_array.py 
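The ssl.py hunk above replaces the if/elif chain in `get_protocol_name` with a module-level table, registering the SSLv2 entry only when `_ssl` still exports it. A sketch of that optional-constant pattern (the numeric codes below are placeholders, not the real `_ssl` values):

```python
_PROTOCOL_NAMES = {
    3: "TLSv1",     # placeholder code, not the real constant
    2: "SSLv23",    # placeholder code
    1: "SSLv3",     # placeholder code
}
try:
    # OpenSSL builds compiled without SSLv2 support do not export this.
    from _ssl import PROTOCOL_SSLv2
except ImportError:
    pass
else:
    _PROTOCOL_NAMES[PROTOCOL_SSLv2] = "SSLv2"

def get_protocol_name(protocol_code):
    # A dict lookup with a default replaces the old if/elif chain.
    return _PROTOCOL_NAMES.get(protocol_code, '')
```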
b/lib-python/modified-2.7/test/test_array.py --- a/lib-python/modified-2.7/test/test_array.py +++ b/lib-python/modified-2.7/test/test_array.py @@ -295,9 +295,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a + b") - - self.assertRaises(TypeError, "a + 'bad'") + with self.assertRaises(TypeError): + a + b + with self.assertRaises(TypeError): + a + 'bad' def test_iadd(self): a = array.array(self.typecode, self.example[::-1]) @@ -316,9 +317,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, "a += b") - - self.assertRaises(TypeError, "a += 'bad'") + with self.assertRaises(TypeError): + a += b + with self.assertRaises(TypeError): + a += 'bad' def test_mul(self): a = 5*array.array(self.typecode, self.example) @@ -345,7 +347,8 @@ array.array(self.typecode) ) - self.assertRaises(TypeError, "a * 'bad'") + with self.assertRaises(TypeError): + a * 'bad' def test_imul(self): a = array.array(self.typecode, self.example) @@ -374,7 +377,8 @@ a *= -1 self.assertEqual(a, array.array(self.typecode)) - self.assertRaises(TypeError, "a *= 'bad'") + with self.assertRaises(TypeError): + a *= 'bad' def test_getitem(self): a = array.array(self.typecode, self.example) diff --git a/lib-python/modified-2.7/test/test_bz2.py b/lib-python/modified-2.7/test/test_bz2.py --- a/lib-python/modified-2.7/test/test_bz2.py +++ b/lib-python/modified-2.7/test/test_bz2.py @@ -50,6 +50,7 @@ self.filename = TESTFN def tearDown(self): + test_support.gc_collect() if os.path.isfile(self.filename): os.unlink(self.filename) diff --git a/lib-python/modified-2.7/test/test_codecs.py b/lib-python/modified-2.7/test/test_codecs.py deleted file mode 100644 --- a/lib-python/modified-2.7/test/test_codecs.py +++ /dev/null @@ -1,1615 +0,0 @@ -from test import test_support -import unittest -import codecs -import sys, StringIO, _testcapi - -class Queue(object): - """ - queue: write bytes at one end, read bytes from the other end - """ - def __init__(self): - self._buffer = "" 
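The test_array changes above rewrite `self.assertRaises(TypeError, "a + b")` into the context-manager form. The string form only "passed" because unittest tried to call the string itself, which raises TypeError regardless of what the expression would do; the context manager actually evaluates the expression. A minimal runnable illustration:

```python
import array
import io
import unittest

class ArrayAddTest(unittest.TestCase):
    def test_add_rejects_str(self):
        a = array.array('i', [1, 2, 3])
        # Context-manager form: the body is really executed, and the test
        # fails unless it raises TypeError.
        with self.assertRaises(TypeError):
            a + 'bad'

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ArrayAddTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
assert result.wasSuccessful()
```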
- - def write(self, chars): - self._buffer += chars - - def read(self, size=-1): - if size<0: - s = self._buffer - self._buffer = "" - return s - else: - s = self._buffer[:size] - self._buffer = self._buffer[size:] - return s - -class ReadTest(unittest.TestCase): - def check_partial(self, input, partialresults): - # get a StreamReader for the encoding and feed the bytestring version - # of input to the reader byte by byte. Read everything available from - # the StreamReader and check that the results equal the appropriate - # entries from partialresults. - q = Queue() - r = codecs.getreader(self.encoding)(q) - result = u"" - for (c, partialresult) in zip(input.encode(self.encoding), partialresults): - q.write(c) - result += r.read() - self.assertEqual(result, partialresult) - # check that there's nothing left in the buffers - self.assertEqual(r.read(), u"") - self.assertEqual(r.bytebuffer, "") - self.assertEqual(r.charbuffer, u"") - - # do the check again, this time using a incremental decoder - d = codecs.getincrementaldecoder(self.encoding)() - result = u"" - for (c, partialresult) in zip(input.encode(self.encoding), partialresults): - result += d.decode(c) - self.assertEqual(result, partialresult) - # check that there's nothing left in the buffers - self.assertEqual(d.decode("", True), u"") - self.assertEqual(d.buffer, "") - - # Check whether the reset method works properly - d.reset() - result = u"" - for (c, partialresult) in zip(input.encode(self.encoding), partialresults): - result += d.decode(c) - self.assertEqual(result, partialresult) - # check that there's nothing left in the buffers - self.assertEqual(d.decode("", True), u"") - self.assertEqual(d.buffer, "") - - # check iterdecode() - encoded = input.encode(self.encoding) - self.assertEqual( - input, - u"".join(codecs.iterdecode(encoded, self.encoding)) - ) - - def test_readline(self): - def getreader(input): - stream = StringIO.StringIO(input.encode(self.encoding)) - return 
codecs.getreader(self.encoding)(stream) - - def readalllines(input, keepends=True, size=None): - reader = getreader(input) - lines = [] - while True: - line = reader.readline(size=size, keepends=keepends) - if not line: - break - lines.append(line) - return "|".join(lines) - - s = u"foo\nbar\r\nbaz\rspam\u2028eggs" - sexpected = u"foo\n|bar\r\n|baz\r|spam\u2028|eggs" - sexpectednoends = u"foo|bar|baz|spam|eggs" - self.assertEqual(readalllines(s, True), sexpected) - self.assertEqual(readalllines(s, False), sexpectednoends) - self.assertEqual(readalllines(s, True, 10), sexpected) - self.assertEqual(readalllines(s, False, 10), sexpectednoends) - - # Test long lines (multiple calls to read() in readline()) - vw = [] - vwo = [] - for (i, lineend) in enumerate(u"\n \r\n \r \u2028".split()): - vw.append((i*200)*u"\3042" + lineend) - vwo.append((i*200)*u"\3042") - self.assertEqual(readalllines("".join(vw), True), "".join(vw)) - self.assertEqual(readalllines("".join(vw), False),"".join(vwo)) - - # Test lines where the first read might end with \r, so the - # reader has to look ahead whether this is a lone \r or a \r\n - for size in xrange(80): - for lineend in u"\n \r\n \r \u2028".split(): - s = 10*(size*u"a" + lineend + u"xxx\n") - reader = getreader(s) - for i in xrange(10): - self.assertEqual( - reader.readline(keepends=True), - size*u"a" + lineend, - ) - reader = getreader(s) - for i in xrange(10): - self.assertEqual( - reader.readline(keepends=False), - size*u"a", - ) - - def test_bug1175396(self): - s = [ - '<%!--===================================================\r\n', - ' BLOG index page: show recent articles,\r\n', - ' today\'s articles, or articles of a specific date.\r\n', - '========================================================--%>\r\n', - '<%@inputencoding="ISO-8859-1"%>\r\n', - '<%@pagetemplate=TEMPLATE.y%>\r\n', - '<%@import=import frog.util, frog%>\r\n', - '<%@import=import frog.objects%>\r\n', - '<%@import=from frog.storageerrors import 
StorageError%>\r\n', - '<%\r\n', - '\r\n', - 'import logging\r\n', - 'log=logging.getLogger("Snakelets.logger")\r\n', - '\r\n', - '\r\n', - 'user=self.SessionCtx.user\r\n', - 'storageEngine=self.SessionCtx.storageEngine\r\n', - '\r\n', - '\r\n', - 'def readArticlesFromDate(date, count=None):\r\n', - ' entryids=storageEngine.listBlogEntries(date)\r\n', - ' entryids.reverse() # descending\r\n', - ' if count:\r\n', - ' entryids=entryids[:count]\r\n', - ' try:\r\n', - ' return [ frog.objects.BlogEntry.load(storageEngine, date, Id) for Id in entryids ]\r\n', - ' except StorageError,x:\r\n', - ' log.error("Error loading articles: "+str(x))\r\n', - ' self.abort("cannot load articles")\r\n', - '\r\n', - 'showdate=None\r\n', - '\r\n', - 'arg=self.Request.getArg()\r\n', - 'if arg=="today":\r\n', - ' #-------------------- TODAY\'S ARTICLES\r\n', - ' self.write("

Today\'s articles")\r\n', - ' showdate = frog.util.isodatestr() \r\n', - ' entries = readArticlesFromDate(showdate)\r\n', - 'elif arg=="active":\r\n', - ' #-------------------- ACTIVE ARTICLES redirect\r\n', - ' self.Yredirect("active.y")\r\n', - 'elif arg=="login":\r\n', - ' #-------------------- LOGIN PAGE redirect\r\n', - ' self.Yredirect("login.y")\r\n', - 'elif arg=="date":\r\n', - ' #-------------------- ARTICLES OF A SPECIFIC DATE\r\n', - ' showdate = self.Request.getParameter("date")\r\n', - ' self.write("

Articles written on %s"% frog.util.mediumdatestr(showdate))\r\n', - ' entries = readArticlesFromDate(showdate)\r\n', - 'else:\r\n', - ' #-------------------- RECENT ARTICLES\r\n', - ' self.write("

Recent articles")\r\n', - ' dates=storageEngine.listBlogEntryDates()\r\n', - ' if dates:\r\n', - ' entries=[]\r\n', - ' SHOWAMOUNT=10\r\n', - ' for showdate in dates:\r\n', - ' entries.extend( readArticlesFromDate(showdate, SHOWAMOUNT-len(entries)) )\r\n', - ' if len(entries)>=SHOWAMOUNT:\r\n', - ' break\r\n', - ' \r\n', - ] - stream = StringIO.StringIO("".join(s).encode(self.encoding)) - reader = codecs.getreader(self.encoding)(stream) - for (i, line) in enumerate(reader): - self.assertEqual(line, s[i]) - - def test_readlinequeue(self): - q = Queue() - writer = codecs.getwriter(self.encoding)(q) - reader = codecs.getreader(self.encoding)(q) - - # No lineends - writer.write(u"foo\r") - self.assertEqual(reader.readline(keepends=False), u"foo") - writer.write(u"\nbar\r") - self.assertEqual(reader.readline(keepends=False), u"") - self.assertEqual(reader.readline(keepends=False), u"bar") - writer.write(u"baz") - self.assertEqual(reader.readline(keepends=False), u"baz") - self.assertEqual(reader.readline(keepends=False), u"") - - # Lineends - writer.write(u"foo\r") - self.assertEqual(reader.readline(keepends=True), u"foo\r") - writer.write(u"\nbar\r") - self.assertEqual(reader.readline(keepends=True), u"\n") - self.assertEqual(reader.readline(keepends=True), u"bar\r") - writer.write(u"baz") - self.assertEqual(reader.readline(keepends=True), u"baz") - self.assertEqual(reader.readline(keepends=True), u"") - writer.write(u"foo\r\n") - self.assertEqual(reader.readline(keepends=True), u"foo\r\n") - - def test_bug1098990_a(self): - s1 = u"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\r\n" - s2 = u"offending line: ladfj askldfj klasdj fskla dfzaskdj fasklfj laskd fjasklfzzzzaa%whereisthis!!!\r\n" - s3 = u"next line.\r\n" - - s = (s1+s2+s3).encode(self.encoding) - stream = StringIO.StringIO(s) - reader = codecs.getreader(self.encoding)(stream) - self.assertEqual(reader.readline(), s1) - self.assertEqual(reader.readline(), s2) - self.assertEqual(reader.readline(), s3) - 
self.assertEqual(reader.readline(), u"") - - def test_bug1098990_b(self): - s1 = u"aaaaaaaaaaaaaaaaaaaaaaaa\r\n" - s2 = u"bbbbbbbbbbbbbbbbbbbbbbbb\r\n" - s3 = u"stillokay:bbbbxx\r\n" - s4 = u"broken!!!!badbad\r\n" - s5 = u"againokay.\r\n" - - s = (s1+s2+s3+s4+s5).encode(self.encoding) - stream = StringIO.StringIO(s) - reader = codecs.getreader(self.encoding)(stream) - self.assertEqual(reader.readline(), s1) - self.assertEqual(reader.readline(), s2) - self.assertEqual(reader.readline(), s3) - self.assertEqual(reader.readline(), s4) - self.assertEqual(reader.readline(), s5) - self.assertEqual(reader.readline(), u"") - -class UTF32Test(ReadTest): - encoding = "utf-32" - - spamle = ('\xff\xfe\x00\x00' - 's\x00\x00\x00p\x00\x00\x00a\x00\x00\x00m\x00\x00\x00' - 's\x00\x00\x00p\x00\x00\x00a\x00\x00\x00m\x00\x00\x00') - spambe = ('\x00\x00\xfe\xff' - '\x00\x00\x00s\x00\x00\x00p\x00\x00\x00a\x00\x00\x00m' - '\x00\x00\x00s\x00\x00\x00p\x00\x00\x00a\x00\x00\x00m') - - def test_only_one_bom(self): - _,_,reader,writer = codecs.lookup(self.encoding) - # encode some stream - s = StringIO.StringIO() - f = writer(s) - f.write(u"spam") - f.write(u"spam") - d = s.getvalue() - # check whether there is exactly one BOM in it - self.assertTrue(d == self.spamle or d == self.spambe) - # try to read it back - s = StringIO.StringIO(d) - f = reader(s) - self.assertEqual(f.read(), u"spamspam") - - def test_badbom(self): - s = StringIO.StringIO(4*"\xff") - f = codecs.getreader(self.encoding)(s) - self.assertRaises(UnicodeError, f.read) - - s = StringIO.StringIO(8*"\xff") - f = codecs.getreader(self.encoding)(s) - self.assertRaises(UnicodeError, f.read) - - def test_partial(self): - self.check_partial( - u"\x00\xff\u0100\uffff", - [ - u"", # first byte of BOM read - u"", # second byte of BOM read - u"", # third byte of BOM read - u"", # fourth byte of BOM read => byteorder known - u"", - u"", - u"", - u"\x00", - u"\x00", - u"\x00", - u"\x00", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff", - 
u"\x00\xff", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100\uffff", - ] - ) - - def test_handlers(self): - self.assertEqual((u'\ufffd', 1), - codecs.utf_32_decode('\x01', 'replace', True)) - self.assertEqual((u'', 1), - codecs.utf_32_decode('\x01', 'ignore', True)) - - def test_errors(self): - self.assertRaises(UnicodeDecodeError, codecs.utf_32_decode, - "\xff", "strict", True) - - def test_issue8941(self): - # Issue #8941: insufficient result allocation when decoding into - # surrogate pairs on UCS-2 builds. - encoded_le = '\xff\xfe\x00\x00' + '\x00\x00\x01\x00' * 1024 - self.assertEqual(u'\U00010000' * 1024, - codecs.utf_32_decode(encoded_le)[0]) - encoded_be = '\x00\x00\xfe\xff' + '\x00\x01\x00\x00' * 1024 - self.assertEqual(u'\U00010000' * 1024, - codecs.utf_32_decode(encoded_be)[0]) - -class UTF32LETest(ReadTest): - encoding = "utf-32-le" - - def test_partial(self): - self.check_partial( - u"\x00\xff\u0100\uffff", - [ - u"", - u"", - u"", - u"\x00", - u"\x00", - u"\x00", - u"\x00", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100\uffff", - ] - ) - - def test_simple(self): - self.assertEqual(u"\U00010203".encode(self.encoding), "\x03\x02\x01\x00") - - def test_errors(self): - self.assertRaises(UnicodeDecodeError, codecs.utf_32_le_decode, - "\xff", "strict", True) - - def test_issue8941(self): - # Issue #8941: insufficient result allocation when decoding into - # surrogate pairs on UCS-2 builds. 
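The `check_partial` lists in these deleted tests enumerate what a reader should have produced after each input byte. The same byte-at-a-time feeding can be shown with an incremental decoder (a small Python 3 sketch, not the deleted test itself):

```python
import codecs

decoder = codecs.getincrementaldecoder("utf-8")()
data = "\u20ac".encode("utf-8")        # three bytes for the euro sign
# Incomplete sequences are buffered and yield "" until the final byte
# completes the character.
partials = [decoder.decode(data[i:i + 1]) for i in range(len(data))]
assert partials == ["", "", "\u20ac"]
assert decoder.decode(b"", final=True) == ""   # nothing left buffered
```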
- encoded = '\x00\x00\x01\x00' * 1024 - self.assertEqual(u'\U00010000' * 1024, - codecs.utf_32_le_decode(encoded)[0]) - -class UTF32BETest(ReadTest): - encoding = "utf-32-be" - - def test_partial(self): - self.check_partial( - u"\x00\xff\u0100\uffff", - [ - u"", - u"", - u"", - u"\x00", - u"\x00", - u"\x00", - u"\x00", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100\uffff", - ] - ) - - def test_simple(self): - self.assertEqual(u"\U00010203".encode(self.encoding), "\x00\x01\x02\x03") - - def test_errors(self): - self.assertRaises(UnicodeDecodeError, codecs.utf_32_be_decode, - "\xff", "strict", True) - - def test_issue8941(self): - # Issue #8941: insufficient result allocation when decoding into - # surrogate pairs on UCS-2 builds. - encoded = '\x00\x01\x00\x00' * 1024 - self.assertEqual(u'\U00010000' * 1024, - codecs.utf_32_be_decode(encoded)[0]) - - -class UTF16Test(ReadTest): - encoding = "utf-16" - - spamle = '\xff\xfes\x00p\x00a\x00m\x00s\x00p\x00a\x00m\x00' - spambe = '\xfe\xff\x00s\x00p\x00a\x00m\x00s\x00p\x00a\x00m' - - def test_only_one_bom(self): - _,_,reader,writer = codecs.lookup(self.encoding) - # encode some stream - s = StringIO.StringIO() - f = writer(s) - f.write(u"spam") - f.write(u"spam") - d = s.getvalue() - # check whether there is exactly one BOM in it - self.assertTrue(d == self.spamle or d == self.spambe) - # try to read it back - s = StringIO.StringIO(d) - f = reader(s) - self.assertEqual(f.read(), u"spamspam") - - def test_badbom(self): - s = StringIO.StringIO("\xff\xff") - f = codecs.getreader(self.encoding)(s) - self.assertRaises(UnicodeError, f.read) - - s = StringIO.StringIO("\xff\xff\xff\xff") - f = codecs.getreader(self.encoding)(s) - self.assertRaises(UnicodeError, f.read) - - def test_partial(self): - self.check_partial( - u"\x00\xff\u0100\uffff", - [ - u"", # first byte of BOM read - u"", # second byte of BOM read => 
byteorder known - u"", - u"\x00", - u"\x00", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100\uffff", - ] - ) - - def test_handlers(self): - self.assertEqual((u'\ufffd', 1), - codecs.utf_16_decode('\x01', 'replace', True)) - self.assertEqual((u'', 1), - codecs.utf_16_decode('\x01', 'ignore', True)) - - def test_errors(self): - self.assertRaises(UnicodeDecodeError, codecs.utf_16_decode, "\xff", "strict", True) - - def test_bug691291(self): - # Files are always opened in binary mode, even if no binary mode was - # specified. This means that no automatic conversion of '\n' is done - # on reading and writing. - s1 = u'Hello\r\nworld\r\n' - - s = s1.encode(self.encoding) - try: - with open(test_support.TESTFN, 'wb') as fp: - fp.write(s) - with codecs.open(test_support.TESTFN, 'U', encoding=self.encoding) as reader: - self.assertEqual(reader.read(), s1) - finally: - test_support.unlink(test_support.TESTFN) - -class UTF16LETest(ReadTest): - encoding = "utf-16-le" - - def test_partial(self): - self.check_partial( - u"\x00\xff\u0100\uffff", - [ - u"", - u"\x00", - u"\x00", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100\uffff", - ] - ) - - def test_errors(self): - self.assertRaises(UnicodeDecodeError, codecs.utf_16_le_decode, "\xff", "strict", True) - -class UTF16BETest(ReadTest): - encoding = "utf-16-be" - - def test_partial(self): - self.check_partial( - u"\x00\xff\u0100\uffff", - [ - u"", - u"\x00", - u"\x00", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff\u0100", - u"\x00\xff\u0100", - u"\x00\xff\u0100\uffff", - ] - ) - - def test_errors(self): - self.assertRaises(UnicodeDecodeError, codecs.utf_16_be_decode, "\xff", "strict", True) - -class UTF8Test(ReadTest): - encoding = "utf-8" - - def test_partial(self): - self.check_partial( - u"\x00\xff\u07ff\u0800\uffff", - [ - u"\x00", - u"\x00", - u"\x00\xff", - u"\x00\xff", - u"\x00\xff\u07ff", - u"\x00\xff\u07ff", - u"\x00\xff\u07ff", - 
u"\x00\xff\u07ff\u0800", - u"\x00\xff\u07ff\u0800", - u"\x00\xff\u07ff\u0800", - u"\x00\xff\u07ff\u0800\uffff", - ] - ) - -class UTF7Test(ReadTest): - encoding = "utf-7" - - def test_partial(self): - self.check_partial( - u"a+-b", - [ - u"a", - u"a", - u"a+", - u"a+-", - u"a+-b", - ] - ) - -class UTF16ExTest(unittest.TestCase): - - def test_errors(self): - self.assertRaises(UnicodeDecodeError, codecs.utf_16_ex_decode, "\xff", "strict", 0, True) - - def test_bad_args(self): - self.assertRaises(TypeError, codecs.utf_16_ex_decode) - -class ReadBufferTest(unittest.TestCase): - - def test_array(self): - import array - self.assertEqual( - codecs.readbuffer_encode(array.array("c", "spam")), - ("spam", 4) - ) - - def test_empty(self): - self.assertEqual(codecs.readbuffer_encode(""), ("", 0)) - - def test_bad_args(self): - self.assertRaises(TypeError, codecs.readbuffer_encode) - self.assertRaises(TypeError, codecs.readbuffer_encode, 42) - -class CharBufferTest(unittest.TestCase): - - def test_string(self): - self.assertEqual(codecs.charbuffer_encode("spam"), ("spam", 4)) - - def test_empty(self): - self.assertEqual(codecs.charbuffer_encode(""), ("", 0)) - - def test_bad_args(self): - self.assertRaises(TypeError, codecs.charbuffer_encode) - self.assertRaises(TypeError, codecs.charbuffer_encode, 42) - -class UTF8SigTest(ReadTest): - encoding = "utf-8-sig" - - def test_partial(self): - self.check_partial( - u"\ufeff\x00\xff\u07ff\u0800\uffff", - [ - u"", - u"", - u"", # First BOM has been read and skipped - u"", - u"", - u"\ufeff", # Second BOM has been read and emitted - u"\ufeff\x00", # "\x00" read and emitted - u"\ufeff\x00", # First byte of encoded u"\xff" read - u"\ufeff\x00\xff", # Second byte of encoded u"\xff" read - u"\ufeff\x00\xff", # First byte of encoded u"\u07ff" read - u"\ufeff\x00\xff\u07ff", # Second byte of encoded u"\u07ff" read - u"\ufeff\x00\xff\u07ff", - u"\ufeff\x00\xff\u07ff", - u"\ufeff\x00\xff\u07ff\u0800", - u"\ufeff\x00\xff\u07ff\u0800", - 
u"\ufeff\x00\xff\u07ff\u0800", - u"\ufeff\x00\xff\u07ff\u0800\uffff", - ] - ) - - def test_bug1601501(self): - # SF bug #1601501: check that the codec works with a buffer - unicode("\xef\xbb\xbf", "utf-8-sig") - - def test_bom(self): - d = codecs.getincrementaldecoder("utf-8-sig")() - s = u"spam" - self.assertEqual(d.decode(s.encode("utf-8-sig")), s) - - def test_stream_bom(self): - unistring = u"ABC\u00A1\u2200XYZ" - bytestring = codecs.BOM_UTF8 + "ABC\xC2\xA1\xE2\x88\x80XYZ" - - reader = codecs.getreader("utf-8-sig") - for sizehint in [None] + range(1, 11) + \ - [64, 128, 256, 512, 1024]: - istream = reader(StringIO.StringIO(bytestring)) - ostream = StringIO.StringIO() - while 1: - if sizehint is not None: - data = istream.read(sizehint) - else: - data = istream.read() - - if not data: - break - ostream.write(data) - - got = ostream.getvalue() - self.assertEqual(got, unistring) - - def test_stream_bare(self): - unistring = u"ABC\u00A1\u2200XYZ" - bytestring = "ABC\xC2\xA1\xE2\x88\x80XYZ" - - reader = codecs.getreader("utf-8-sig") - for sizehint in [None] + range(1, 11) + \ - [64, 128, 256, 512, 1024]: - istream = reader(StringIO.StringIO(bytestring)) - ostream = StringIO.StringIO() - while 1: - if sizehint is not None: - data = istream.read(sizehint) - else: - data = istream.read() - - if not data: - break - ostream.write(data) - - got = ostream.getvalue() - self.assertEqual(got, unistring) - -class EscapeDecodeTest(unittest.TestCase): - def test_empty(self): - self.assertEqual(codecs.escape_decode(""), ("", 0)) - -class RecodingTest(unittest.TestCase): - def test_recoding(self): - f = StringIO.StringIO() - f2 = codecs.EncodedFile(f, "unicode_internal", "utf-8") - f2.write(u"a") - f2.close() - # Python used to crash on this at exit because of a refcount - # bug in _codecsmodule.c - -# From RFC 3492 -punycode_testcases = [ - # A Arabic (Egyptian): - (u"\u0644\u064A\u0647\u0645\u0627\u0628\u062A\u0643\u0644" - u"\u0645\u0648\u0634\u0639\u0631\u0628\u064A\u061F", - 
"egbpdaj6bu4bxfgehfvwxn"), - # B Chinese (simplified): - (u"\u4ED6\u4EEC\u4E3A\u4EC0\u4E48\u4E0D\u8BF4\u4E2D\u6587", - "ihqwcrb4cv8a8dqg056pqjye"), - # C Chinese (traditional): - (u"\u4ED6\u5011\u7232\u4EC0\u9EBD\u4E0D\u8AAA\u4E2D\u6587", - "ihqwctvzc91f659drss3x8bo0yb"), - # D Czech: Proprostnemluvesky - (u"\u0050\u0072\u006F\u010D\u0070\u0072\u006F\u0073\u0074" - u"\u011B\u006E\u0065\u006D\u006C\u0075\u0076\u00ED\u010D" - u"\u0065\u0073\u006B\u0079", - "Proprostnemluvesky-uyb24dma41a"), - # E Hebrew: - (u"\u05DC\u05DE\u05D4\u05D4\u05DD\u05E4\u05E9\u05D5\u05D8" - u"\u05DC\u05D0\u05DE\u05D3\u05D1\u05E8\u05D9\u05DD\u05E2" - u"\u05D1\u05E8\u05D9\u05EA", - "4dbcagdahymbxekheh6e0a7fei0b"), - # F Hindi (Devanagari): - (u"\u092F\u0939\u0932\u094B\u0917\u0939\u093F\u0928\u094D" - u"\u0926\u0940\u0915\u094D\u092F\u094B\u0902\u0928\u0939" - u"\u0940\u0902\u092C\u094B\u0932\u0938\u0915\u0924\u0947" - u"\u0939\u0948\u0902", - "i1baa7eci9glrd9b2ae1bj0hfcgg6iyaf8o0a1dig0cd"), - - #(G) Japanese (kanji and hiragana): - (u"\u306A\u305C\u307F\u3093\u306A\u65E5\u672C\u8A9E\u3092" - u"\u8A71\u3057\u3066\u304F\u308C\u306A\u3044\u306E\u304B", - "n8jok5ay5dzabd5bym9f0cm5685rrjetr6pdxa"), - - # (H) Korean (Hangul syllables): - (u"\uC138\uACC4\uC758\uBAA8\uB4E0\uC0AC\uB78C\uB4E4\uC774" - u"\uD55C\uAD6D\uC5B4\uB97C\uC774\uD574\uD55C\uB2E4\uBA74" - u"\uC5BC\uB9C8\uB098\uC88B\uC744\uAE4C", - "989aomsvi5e83db1d2a355cv1e0vak1dwrv93d5xbh15a0dt30a5j" - "psd879ccm6fea98c"), - - # (I) Russian (Cyrillic): - (u"\u043F\u043E\u0447\u0435\u043C\u0443\u0436\u0435\u043E" - u"\u043D\u0438\u043D\u0435\u0433\u043E\u0432\u043E\u0440" - u"\u044F\u0442\u043F\u043E\u0440\u0443\u0441\u0441\u043A" - u"\u0438", - "b1abfaaepdrnnbgefbaDotcwatmq2g4l"), - - # (J) Spanish: PorqunopuedensimplementehablarenEspaol - (u"\u0050\u006F\u0072\u0071\u0075\u00E9\u006E\u006F\u0070" - u"\u0075\u0065\u0064\u0065\u006E\u0073\u0069\u006D\u0070" - u"\u006C\u0065\u006D\u0065\u006E\u0074\u0065\u0068\u0061" - 
u"\u0062\u006C\u0061\u0072\u0065\u006E\u0045\u0073\u0070" - u"\u0061\u00F1\u006F\u006C", - "PorqunopuedensimplementehablarenEspaol-fmd56a"), - - # (K) Vietnamese: - # Tisaohkhngthch\ - # nitingVit - (u"\u0054\u1EA1\u0069\u0073\u0061\u006F\u0068\u1ECD\u006B" - u"\u0068\u00F4\u006E\u0067\u0074\u0068\u1EC3\u0063\u0068" - u"\u1EC9\u006E\u00F3\u0069\u0074\u0069\u1EBF\u006E\u0067" - u"\u0056\u0069\u1EC7\u0074", - "TisaohkhngthchnitingVit-kjcr8268qyxafd2f1b9g"), - - #(L) 3B - (u"\u0033\u5E74\u0042\u7D44\u91D1\u516B\u5148\u751F", - "3B-ww4c5e180e575a65lsy2b"), - - # (M) -with-SUPER-MONKEYS - (u"\u5B89\u5BA4\u5948\u7F8E\u6075\u002D\u0077\u0069\u0074" - u"\u0068\u002D\u0053\u0055\u0050\u0045\u0052\u002D\u004D" - u"\u004F\u004E\u004B\u0045\u0059\u0053", - "-with-SUPER-MONKEYS-pc58ag80a8qai00g7n9n"), - - # (N) Hello-Another-Way- - (u"\u0048\u0065\u006C\u006C\u006F\u002D\u0041\u006E\u006F" - u"\u0074\u0068\u0065\u0072\u002D\u0057\u0061\u0079\u002D" - u"\u305D\u308C\u305E\u308C\u306E\u5834\u6240", - "Hello-Another-Way--fc4qua05auwb3674vfr0b"), - - # (O) 2 - (u"\u3072\u3068\u3064\u5C4B\u6839\u306E\u4E0B\u0032", - "2-u9tlzr9756bt3uc0v"), - - # (P) MajiKoi5 - (u"\u004D\u0061\u006A\u0069\u3067\u004B\u006F\u0069\u3059" - u"\u308B\u0035\u79D2\u524D", - "MajiKoi5-783gue6qz075azm5e"), - - # (Q) de - (u"\u30D1\u30D5\u30A3\u30FC\u0064\u0065\u30EB\u30F3\u30D0", - "de-jg4avhby1noc0d"), - - # (R) - (u"\u305D\u306E\u30B9\u30D4\u30FC\u30C9\u3067", - "d9juau41awczczp"), - - # (S) -> $1.00 <- - (u"\u002D\u003E\u0020\u0024\u0031\u002E\u0030\u0030\u0020" - u"\u003C\u002D", - "-> $1.00 <--") - ] - -for i in punycode_testcases: - if len(i)!=2: - print repr(i) - -class PunycodeTest(unittest.TestCase): - def test_encode(self): - for uni, puny in punycode_testcases: - # Need to convert both strings to lower case, since - # some of the extended encodings use upper case, but our - # code produces only lower case. 
Converting just puny to - # lower is also insufficient, since some of the input characters - # are upper case. - self.assertEqual(uni.encode("punycode").lower(), puny.lower()) - - def test_decode(self): - for uni, puny in punycode_testcases: - self.assertEqual(uni, puny.decode("punycode")) - -class UnicodeInternalTest(unittest.TestCase): - def test_bug1251300(self): - # Decoding with unicode_internal used to not correctly handle "code - # points" above 0x10ffff on UCS-4 builds. - if sys.maxunicode > 0xffff: - ok = [ - ("\x00\x10\xff\xff", u"\U0010ffff"), - ("\x00\x00\x01\x01", u"\U00000101"), - ("", u""), - ] - not_ok = [ - "\x7f\xff\xff\xff", - "\x80\x00\x00\x00", - "\x81\x00\x00\x00", - "\x00", - "\x00\x00\x00\x00\x00", - ] - for internal, uni in ok: - if sys.byteorder == "little": - internal = "".join(reversed(internal)) - self.assertEqual(uni, internal.decode("unicode_internal")) - for internal in not_ok: - if sys.byteorder == "little": - internal = "".join(reversed(internal)) - self.assertRaises(UnicodeDecodeError, internal.decode, - "unicode_internal") - - def test_decode_error_attributes(self): - if sys.maxunicode > 0xffff: - try: - "\x00\x00\x00\x00\x00\x11\x11\x00".decode("unicode_internal") - except UnicodeDecodeError, ex: - self.assertEqual("unicode_internal", ex.encoding) - self.assertEqual("\x00\x00\x00\x00\x00\x11\x11\x00", ex.object) - self.assertEqual(4, ex.start) - self.assertEqual(8, ex.end) - else: - self.fail() - - def test_decode_callback(self): - if sys.maxunicode > 0xffff: - codecs.register_error("UnicodeInternalTest", codecs.ignore_errors) - decoder = codecs.getdecoder("unicode_internal") - ab = u"ab".encode("unicode_internal") - ignored = decoder("%s\x22\x22\x22\x22%s" % (ab[:4], ab[4:]), - "UnicodeInternalTest") - self.assertEqual((u"ab", 12), ignored) - - def test_encode_length(self): - # Issue 3739 - encoder = codecs.getencoder("unicode_internal") - self.assertEqual(encoder(u"a")[1], 1) - self.assertEqual(encoder(u"\xe9\u0142")[1], 2) - 
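The `PunycodeTest` methods above round-trip each sample through the codec, lower-casing before comparison because the encoder emits only lower case. A short Python 3 round-trip in the same spirit, using the classic IDNA sample word:

```python
# "bücher" encodes to the well-known Punycode form "bcher-kva"
# (the basic letters, a delimiter, then the encoded non-ASCII part).
encoded = "b\u00fccher".encode("punycode")
assert encoded == b"bcher-kva"
assert encoded.decode("punycode") == "b\u00fccher"
```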
- encoder = codecs.getencoder("string-escape") - self.assertEqual(encoder(r'\x00')[1], 4) - -# From http://www.gnu.org/software/libidn/draft-josefsson-idn-test-vectors.html -nameprep_tests = [ - # 3.1 Map to nothing. - ('foo\xc2\xad\xcd\x8f\xe1\xa0\x86\xe1\xa0\x8bbar' - '\xe2\x80\x8b\xe2\x81\xa0baz\xef\xb8\x80\xef\xb8\x88\xef' - '\xb8\x8f\xef\xbb\xbf', - 'foobarbaz'), - # 3.2 Case folding ASCII U+0043 U+0041 U+0046 U+0045. - ('CAFE', - 'cafe'), - # 3.3 Case folding 8bit U+00DF (german sharp s). - # The original test case is bogus; it says \xc3\xdf - ('\xc3\x9f', - 'ss'), - # 3.4 Case folding U+0130 (turkish capital I with dot). - ('\xc4\xb0', - 'i\xcc\x87'), - # 3.5 Case folding multibyte U+0143 U+037A. - ('\xc5\x83\xcd\xba', - '\xc5\x84 \xce\xb9'), - # 3.6 Case folding U+2121 U+33C6 U+1D7BB. - # XXX: skip this as it fails in UCS-2 mode - #('\xe2\x84\xa1\xe3\x8f\x86\xf0\x9d\x9e\xbb', - # 'telc\xe2\x88\x95kg\xcf\x83'), - (None, None), - # 3.7 Normalization of U+006a U+030c U+00A0 U+00AA. - ('j\xcc\x8c\xc2\xa0\xc2\xaa', - '\xc7\xb0 a'), - # 3.8 Case folding U+1FB7 and normalization. - ('\xe1\xbe\xb7', - '\xe1\xbe\xb6\xce\xb9'), - # 3.9 Self-reverting case folding U+01F0 and normalization. - # The original test case is bogus, it says `\xc7\xf0' - ('\xc7\xb0', - '\xc7\xb0'), - # 3.10 Self-reverting case folding U+0390 and normalization. - ('\xce\x90', - '\xce\x90'), - # 3.11 Self-reverting case folding U+03B0 and normalization. - ('\xce\xb0', - '\xce\xb0'), - # 3.12 Self-reverting case folding U+1E96 and normalization. - ('\xe1\xba\x96', - '\xe1\xba\x96'), - # 3.13 Self-reverting case folding U+1F56 and normalization. - ('\xe1\xbd\x96', - '\xe1\xbd\x96'), - # 3.14 ASCII space character U+0020. - (' ', - ' '), - # 3.15 Non-ASCII 8bit space character U+00A0. - ('\xc2\xa0', - ' '), - # 3.16 Non-ASCII multibyte space character U+1680. - ('\xe1\x9a\x80', - None), - # 3.17 Non-ASCII multibyte space character U+2000. 
- ('\xe2\x80\x80', - ' '), - # 3.18 Zero Width Space U+200b. - ('\xe2\x80\x8b', - ''), - # 3.19 Non-ASCII multibyte space character U+3000. - ('\xe3\x80\x80', - ' '), - # 3.20 ASCII control characters U+0010 U+007F. - ('\x10\x7f', - '\x10\x7f'), - # 3.21 Non-ASCII 8bit control character U+0085. - ('\xc2\x85', - None), - # 3.22 Non-ASCII multibyte control character U+180E. - ('\xe1\xa0\x8e', - None), - # 3.23 Zero Width No-Break Space U+FEFF. - ('\xef\xbb\xbf', - ''), - # 3.24 Non-ASCII control character U+1D175. - ('\xf0\x9d\x85\xb5', - None), - # 3.25 Plane 0 private use character U+F123. - ('\xef\x84\xa3', - None), - # 3.26 Plane 15 private use character U+F1234. - ('\xf3\xb1\x88\xb4', - None), - # 3.27 Plane 16 private use character U+10F234. - ('\xf4\x8f\x88\xb4', - None), - # 3.28 Non-character code point U+8FFFE. - ('\xf2\x8f\xbf\xbe', - None), - # 3.29 Non-character code point U+10FFFF. - ('\xf4\x8f\xbf\xbf', - None), - # 3.30 Surrogate code U+DF42. - ('\xed\xbd\x82', - None), - # 3.31 Non-plain text character U+FFFD. - ('\xef\xbf\xbd', - None), - # 3.32 Ideographic description character U+2FF5. - ('\xe2\xbf\xb5', - None), - # 3.33 Display property character U+0341. - ('\xcd\x81', - '\xcc\x81'), - # 3.34 Left-to-right mark U+200E. - ('\xe2\x80\x8e', - None), - # 3.35 Deprecated U+202A. - ('\xe2\x80\xaa', - None), - # 3.36 Language tagging character U+E0001. - ('\xf3\xa0\x80\x81', - None), - # 3.37 Language tagging character U+E0042. - ('\xf3\xa0\x81\x82', - None), - # 3.38 Bidi: RandALCat character U+05BE and LCat characters. - ('foo\xd6\xbebar', - None), - # 3.39 Bidi: RandALCat character U+FD50 and LCat characters. - ('foo\xef\xb5\x90bar', - None), - # 3.40 Bidi: RandALCat character U+FB38 and LCat characters. - ('foo\xef\xb9\xb6bar', - 'foo \xd9\x8ebar'), - # 3.41 Bidi: RandALCat without trailing RandALCat U+0627 U+0031. - ('\xd8\xa71', - None), - # 3.42 Bidi: RandALCat character U+0627 U+0031 U+0628. 
- ('\xd8\xa71\xd8\xa8', - '\xd8\xa71\xd8\xa8'), - # 3.43 Unassigned code point U+E0002. - # Skip this test as we allow unassigned - #('\xf3\xa0\x80\x82', - # None), - (None, None), - # 3.44 Larger test (shrinking). - # Original test case reads \xc3\xdf - ('X\xc2\xad\xc3\x9f\xc4\xb0\xe2\x84\xa1j\xcc\x8c\xc2\xa0\xc2' - '\xaa\xce\xb0\xe2\x80\x80', - 'xssi\xcc\x87tel\xc7\xb0 a\xce\xb0 '), - # 3.45 Larger test (expanding). - # Original test case reads \xc3\x9f - ('X\xc3\x9f\xe3\x8c\x96\xc4\xb0\xe2\x84\xa1\xe2\x92\x9f\xe3\x8c' - '\x80', - 'xss\xe3\x82\xad\xe3\x83\xad\xe3\x83\xa1\xe3\x83\xbc\xe3' - '\x83\x88\xe3\x83\xabi\xcc\x87tel\x28d\x29\xe3\x82' - '\xa2\xe3\x83\x91\xe3\x83\xbc\xe3\x83\x88') - ] - - -class NameprepTest(unittest.TestCase): - def test_nameprep(self): - from encodings.idna import nameprep - for pos, (orig, prepped) in enumerate(nameprep_tests): - if orig is None: - # Skipped - continue - # The Unicode strings are given in UTF-8 - orig = unicode(orig, "utf-8") - if prepped is None: - # Input contains prohibited characters - self.assertRaises(UnicodeError, nameprep, orig) - else: - prepped = unicode(prepped, "utf-8") - try: - self.assertEqual(nameprep(orig), prepped) - except Exception,e: - raise test_support.TestFailed("Test 3.%d: %s" % (pos+1, str(e))) - -class IDNACodecTest(unittest.TestCase): - def test_builtin_decode(self): - self.assertEqual(unicode("python.org", "idna"), u"python.org") - self.assertEqual(unicode("python.org.", "idna"), u"python.org.") - self.assertEqual(unicode("xn--pythn-mua.org", "idna"), u"pyth\xf6n.org") - self.assertEqual(unicode("xn--pythn-mua.org.", "idna"), u"pyth\xf6n.org.") - - def test_builtin_encode(self): - self.assertEqual(u"python.org".encode("idna"), "python.org") - self.assertEqual("python.org.".encode("idna"), "python.org.") - self.assertEqual(u"pyth\xf6n.org".encode("idna"), "xn--pythn-mua.org") - self.assertEqual(u"pyth\xf6n.org.".encode("idna"), "xn--pythn-mua.org.") - - def test_stream(self): - import StringIO - 
r = codecs.getreader("idna")(StringIO.StringIO("abc"))
-        r.read(3)
-        self.assertEqual(r.read(), u"")
-
-    def test_incremental_decode(self):
-        self.assertEqual(
-            "".join(codecs.iterdecode("python.org", "idna")),
-            u"python.org"
-        )
-        self.assertEqual(
-            "".join(codecs.iterdecode("python.org.", "idna")),
-            u"python.org."
-        )
-        self.assertEqual(
-            "".join(codecs.iterdecode("xn--pythn-mua.org.", "idna")),
-            u"pyth\xf6n.org."
-        )
-        self.assertEqual(
-            "".join(codecs.iterdecode("xn--pythn-mua.org.", "idna")),
-            u"pyth\xf6n.org."
-        )
-
-        decoder = codecs.getincrementaldecoder("idna")()
-        self.assertEqual(decoder.decode("xn--xam", ), u"")
-        self.assertEqual(decoder.decode("ple-9ta.o", ), u"\xe4xample.")
-        self.assertEqual(decoder.decode(u"rg"), u"")
-        self.assertEqual(decoder.decode(u"", True), u"org")
-
-        decoder.reset()
-        self.assertEqual(decoder.decode("xn--xam", ), u"")
-        self.assertEqual(decoder.decode("ple-9ta.o", ), u"\xe4xample.")
-        self.assertEqual(decoder.decode("rg."), u"org.")
-        self.assertEqual(decoder.decode("", True), u"")
-
-    def test_incremental_encode(self):
-        self.assertEqual(
-            "".join(codecs.iterencode(u"python.org", "idna")),
-            "python.org"
-        )
-        self.assertEqual(
-            "".join(codecs.iterencode(u"python.org.", "idna")),
-            "python.org."
-        )
-        self.assertEqual(
-            "".join(codecs.iterencode(u"pyth\xf6n.org.", "idna")),
-            "xn--pythn-mua.org."
-        )
-        self.assertEqual(
-            "".join(codecs.iterencode(u"pyth\xf6n.org.", "idna")),
-            "xn--pythn-mua.org."
-        )
-
-        encoder = codecs.getincrementalencoder("idna")()
-        self.assertEqual(encoder.encode(u"\xe4x"), "")
-        self.assertEqual(encoder.encode(u"ample.org"), "xn--xample-9ta.")
-        self.assertEqual(encoder.encode(u"", True), "org")
-
-        encoder.reset()
-        self.assertEqual(encoder.encode(u"\xe4x"), "")
-        self.assertEqual(encoder.encode(u"ample.org."), "xn--xample-9ta.org.")
-        self.assertEqual(encoder.encode(u"", True), "")
-
-class CodecsModuleTest(unittest.TestCase):
-
-    def test_decode(self):
-        self.assertEqual(codecs.decode('\xe4\xf6\xfc', 'latin-1'),
-                         u'\xe4\xf6\xfc')
-        self.assertRaises(TypeError, codecs.decode)
-        self.assertEqual(codecs.decode('abc'), u'abc')
-        self.assertRaises(UnicodeDecodeError, codecs.decode, '\xff', 'ascii')
-
-    def test_encode(self):
-        self.assertEqual(codecs.encode(u'\xe4\xf6\xfc', 'latin-1'),
-                         '\xe4\xf6\xfc')
-        self.assertRaises(TypeError, codecs.encode)
-        self.assertRaises(LookupError, codecs.encode, "foo", "__spam__")
-        self.assertEqual(codecs.encode(u'abc'), 'abc')
-        self.assertRaises(UnicodeEncodeError, codecs.encode, u'\xffff', 'ascii')
-
-    def test_register(self):
-        self.assertRaises(TypeError, codecs.register)
-        self.assertRaises(TypeError, codecs.register, 42)
-
-    def test_lookup(self):
-        self.assertRaises(TypeError, codecs.lookup)
-        self.assertRaises(LookupError, codecs.lookup, "__spam__")
-        self.assertRaises(LookupError, codecs.lookup, " ")
-
-    def test_getencoder(self):
-        self.assertRaises(TypeError, codecs.getencoder)
-        self.assertRaises(LookupError, codecs.getencoder, "__spam__")
-
-    def test_getdecoder(self):
-        self.assertRaises(TypeError, codecs.getdecoder)
-        self.assertRaises(LookupError, codecs.getdecoder, "__spam__")
-
-    def test_getreader(self):
-        self.assertRaises(TypeError, codecs.getreader)
-        self.assertRaises(LookupError, codecs.getreader, "__spam__")
-
-    def test_getwriter(self):
-        self.assertRaises(TypeError, codecs.getwriter)
-        self.assertRaises(LookupError, codecs.getwriter, "__spam__")
-
-class
StreamReaderTest(unittest.TestCase): - - def setUp(self): - self.reader = codecs.getreader('utf-8') - self.stream = StringIO.StringIO('\xed\x95\x9c\n\xea\xb8\x80') - - def test_readlines(self): - f = self.reader(self.stream) - self.assertEqual(f.readlines(), [u'\ud55c\n', u'\uae00']) - -class EncodedFileTest(unittest.TestCase): - - def test_basic(self): - f = StringIO.StringIO('\xed\x95\x9c\n\xea\xb8\x80') - ef = codecs.EncodedFile(f, 'utf-16-le', 'utf-8') - self.assertEqual(ef.read(), '\\\xd5\n\x00\x00\xae') - - f = StringIO.StringIO() - ef = codecs.EncodedFile(f, 'utf-8', 'latin1') - ef.write('\xc3\xbc') - self.assertEqual(f.getvalue(), '\xfc') - -class Str2StrTest(unittest.TestCase): - - def test_read(self): - sin = "\x80".encode("base64_codec") - reader = codecs.getreader("base64_codec")(StringIO.StringIO(sin)) - sout = reader.read() - self.assertEqual(sout, "\x80") - self.assertIsInstance(sout, str) - - def test_readline(self): - sin = "\x80".encode("base64_codec") - reader = codecs.getreader("base64_codec")(StringIO.StringIO(sin)) - sout = reader.readline() - self.assertEqual(sout, "\x80") - self.assertIsInstance(sout, str) - -all_unicode_encodings = [ - "ascii", - "base64_codec", - ## "big5", - ## "big5hkscs", - "charmap", - "cp037", - "cp1006", - "cp1026", - "cp1140", - "cp1250", - "cp1251", - "cp1252", - "cp1253", - "cp1254", - "cp1255", - "cp1256", - "cp1257", - "cp1258", - "cp424", - "cp437", - "cp500", - "cp720", - "cp737", - "cp775", - "cp850", - "cp852", - "cp855", - "cp856", - "cp857", - "cp858", - "cp860", - "cp861", - "cp862", - "cp863", - "cp864", - "cp865", - "cp866", - "cp869", - "cp874", - "cp875", - ## "cp932", - ## "cp949", - ## "cp950", - ## "euc_jis_2004", - ## "euc_jisx0213", - ## "euc_jp", - ## "euc_kr", - ## "gb18030", - ## "gb2312", - ## "gbk", - "hex_codec", - "hp_roman8", - ## "hz", - "idna", - ## "iso2022_jp", - ## "iso2022_jp_1", - ## "iso2022_jp_2", - ## "iso2022_jp_2004", - ## "iso2022_jp_3", - ## "iso2022_jp_ext", - ## 
"iso2022_kr", - "iso8859_1", - "iso8859_10", - "iso8859_11", - "iso8859_13", - "iso8859_14", - "iso8859_15", - "iso8859_16", - "iso8859_2", - "iso8859_3", - "iso8859_4", - "iso8859_5", - "iso8859_6", - "iso8859_7", - "iso8859_8", - "iso8859_9", - ## "johab", - "koi8_r", - "koi8_u", - "latin_1", - "mac_cyrillic", - "mac_greek", - "mac_iceland", - "mac_latin2", - "mac_roman", - "mac_turkish", - "palmos", - "ptcp154", - "punycode", - "raw_unicode_escape", - "rot_13", - ## "shift_jis", - ## "shift_jis_2004", - ## "shift_jisx0213", - "tis_620", - "unicode_escape", - "unicode_internal", - "utf_16", - "utf_16_be", - "utf_16_le", - "utf_7", - "utf_8", -] - -if hasattr(codecs, "mbcs_encode"): - all_unicode_encodings.append("mbcs") - -# The following encodings work only with str, not unicode -all_string_encodings = [ - "quopri_codec", - "string_escape", - "uu_codec", -] - -# The following encoding is not tested, because it's not supposed -# to work: -# "undefined" - -# The following encodings don't work in stateful mode -broken_unicode_with_streams = [ - "base64_codec", - "hex_codec", - "punycode", - "unicode_internal" -] -broken_incremental_coders = broken_unicode_with_streams[:] - -# The following encodings only support "strict" mode -only_strict_mode = [ - "idna", - "zlib_codec", - "bz2_codec", -] - -try: - import bz2 -except ImportError: - pass -else: - all_unicode_encodings.append("bz2_codec") - broken_unicode_with_streams.append("bz2_codec") - -try: - import zlib -except ImportError: - pass -else: - all_unicode_encodings.append("zlib_codec") - broken_unicode_with_streams.append("zlib_codec") - -class BasicUnicodeTest(unittest.TestCase): - def test_basics(self): - s = u"abc123" # all codecs should be able to encode these - for encoding in all_unicode_encodings: - name = codecs.lookup(encoding).name - if encoding.endswith("_codec"): - name += "_codec" - elif encoding == "latin_1": - name = "latin_1" - self.assertEqual(encoding.replace("_", "-"), name.replace("_", "-")) - 
(bytes, size) = codecs.getencoder(encoding)(s) - self.assertEqual(size, len(s), "%r != %r (encoding=%r)" % (size, len(s), encoding)) - (chars, size) = codecs.getdecoder(encoding)(bytes) - self.assertEqual(chars, s, "%r != %r (encoding=%r)" % (chars, s, encoding)) - - if encoding not in broken_unicode_with_streams: - # check stream reader/writer - q = Queue() - writer = codecs.getwriter(encoding)(q) - encodedresult = "" - for c in s: - writer.write(c) - encodedresult += q.read() - q = Queue() - reader = codecs.getreader(encoding)(q) - decodedresult = u"" - for c in encodedresult: - q.write(c) - decodedresult += reader.read() - self.assertEqual(decodedresult, s, "%r != %r (encoding=%r)" % (decodedresult, s, encoding)) - - if encoding not in broken_incremental_coders: - # check incremental decoder/encoder (fetched via the Python - # and C API) and iterencode()/iterdecode() - try: - encoder = codecs.getincrementalencoder(encoding)() - cencoder = _testcapi.codec_incrementalencoder(encoding) - except LookupError: # no IncrementalEncoder - pass - else: - # check incremental decoder/encoder - encodedresult = "" - for c in s: - encodedresult += encoder.encode(c) - encodedresult += encoder.encode(u"", True) - decoder = codecs.getincrementaldecoder(encoding)() - decodedresult = u"" - for c in encodedresult: - decodedresult += decoder.decode(c) - decodedresult += decoder.decode("", True) - self.assertEqual(decodedresult, s, "%r != %r (encoding=%r)" % (decodedresult, s, encoding)) - - # check C API - encodedresult = "" - for c in s: - encodedresult += cencoder.encode(c) - encodedresult += cencoder.encode(u"", True) - cdecoder = _testcapi.codec_incrementaldecoder(encoding) - decodedresult = u"" - for c in encodedresult: - decodedresult += cdecoder.decode(c) - decodedresult += cdecoder.decode("", True) - self.assertEqual(decodedresult, s, "%r != %r (encoding=%r)" % (decodedresult, s, encoding)) - - # check iterencode()/iterdecode() - result = 
u"".join(codecs.iterdecode(codecs.iterencode(s, encoding), encoding)) - self.assertEqual(result, s, "%r != %r (encoding=%r)" % (result, s, encoding)) - - # check iterencode()/iterdecode() with empty string - result = u"".join(codecs.iterdecode(codecs.iterencode(u"", encoding), encoding)) - self.assertEqual(result, u"") - - if encoding not in only_strict_mode: - # check incremental decoder/encoder with errors argument - try: - encoder = codecs.getincrementalencoder(encoding)("ignore") - cencoder = _testcapi.codec_incrementalencoder(encoding, "ignore") - except LookupError: # no IncrementalEncoder - pass - else: - encodedresult = "".join(encoder.encode(c) for c in s) - decoder = codecs.getincrementaldecoder(encoding)("ignore") - decodedresult = u"".join(decoder.decode(c) for c in encodedresult) - self.assertEqual(decodedresult, s, "%r != %r (encoding=%r)" % (decodedresult, s, encoding)) - - encodedresult = "".join(cencoder.encode(c) for c in s) - cdecoder = _testcapi.codec_incrementaldecoder(encoding, "ignore") - decodedresult = u"".join(cdecoder.decode(c) for c in encodedresult) - self.assertEqual(decodedresult, s, "%r != %r (encoding=%r)" % (decodedresult, s, encoding)) - - def test_seek(self): - # all codecs should be able to encode these - s = u"%s\n%s\n" % (100*u"abc123", 100*u"def456") - for encoding in all_unicode_encodings: - if encoding == "idna": # FIXME: See SF bug #1163178 - continue - if encoding in broken_unicode_with_streams: - continue - reader = codecs.getreader(encoding)(StringIO.StringIO(s.encode(encoding))) - for t in xrange(5): - # Test that calling seek resets the internal codec state and buffers - reader.seek(0, 0) - line = reader.readline() - self.assertEqual(s[:len(line)], line) - - def test_bad_decode_args(self): - for encoding in all_unicode_encodings: - decoder = codecs.getdecoder(encoding) - self.assertRaises(TypeError, decoder) - if encoding not in ("idna", "punycode"): - self.assertRaises(TypeError, decoder, 42) - - def 
test_bad_encode_args(self): - for encoding in all_unicode_encodings: - encoder = codecs.getencoder(encoding) - self.assertRaises(TypeError, encoder) - - def test_encoding_map_type_initialized(self): - from encodings import cp1140 - # This used to crash, we are only verifying there's no crash. - table_type = type(cp1140.encoding_table) - self.assertEqual(table_type, table_type) - -class BasicStrTest(unittest.TestCase): - def test_basics(self): - s = "abc123" - for encoding in all_string_encodings: - (bytes, size) = codecs.getencoder(encoding)(s) - self.assertEqual(size, len(s)) - (chars, size) = codecs.getdecoder(encoding)(bytes) - self.assertEqual(chars, s, "%r != %r (encoding=%r)" % (chars, s, encoding)) - -class CharmapTest(unittest.TestCase): - def test_decode_with_string_map(self): - self.assertEqual( - codecs.charmap_decode("\x00\x01\x02", "strict", u"abc"), - (u"abc", 3) - ) - - self.assertEqual( - codecs.charmap_decode("\x00\x01\x02", "replace", u"ab"), - (u"ab\ufffd", 3) - ) - - self.assertEqual( - codecs.charmap_decode("\x00\x01\x02", "replace", u"ab\ufffe"), - (u"ab\ufffd", 3) - ) - - self.assertEqual( - codecs.charmap_decode("\x00\x01\x02", "ignore", u"ab"), - (u"ab", 3) - ) - - self.assertEqual( - codecs.charmap_decode("\x00\x01\x02", "ignore", u"ab\ufffe"), - (u"ab", 3) - ) - - allbytes = "".join(chr(i) for i in xrange(256)) - self.assertEqual( - codecs.charmap_decode(allbytes, "ignore", u""), - (u"", len(allbytes)) - ) - -class WithStmtTest(unittest.TestCase): - def test_encodedfile(self): - f = StringIO.StringIO("\xc3\xbc") - with codecs.EncodedFile(f, "latin-1", "utf-8") as ef: - self.assertEqual(ef.read(), "\xfc") - - def test_streamreaderwriter(self): - f = StringIO.StringIO("\xc3\xbc") - info = codecs.lookup("utf-8") - with codecs.StreamReaderWriter(f, info.streamreader, - info.streamwriter, 'strict') as srw: - self.assertEqual(srw.read(), u"\xfc") - - -class BomTest(unittest.TestCase): - def test_seek0(self): - data = u"1234567890" - tests = 
("utf-16", - "utf-16-le", - "utf-16-be", - "utf-32", - "utf-32-le", - "utf-32-be") - for encoding in tests: - # Check if the BOM is written only once - with codecs.open(test_support.TESTFN, 'w+', encoding=encoding) as f: - f.write(data) - f.write(data) - f.seek(0) - self.assertEqual(f.read(), data * 2) - f.seek(0) - self.assertEqual(f.read(), data * 2) - - # Check that the BOM is written after a seek(0) - with codecs.open(test_support.TESTFN, 'w+', encoding=encoding) as f: - f.write(data[0]) - self.assertNotEqual(f.tell(), 0) - f.seek(0) - f.write(data) - f.seek(0) - self.assertEqual(f.read(), data) - - # (StreamWriter) Check that the BOM is written after a seek(0) - with codecs.open(test_support.TESTFN, 'w+', encoding=encoding) as f: - f.writer.write(data[0]) - self.assertNotEqual(f.writer.tell(), 0) - f.writer.seek(0) - f.writer.write(data) - f.seek(0) - self.assertEqual(f.read(), data) - - # Check that the BOM is not written after a seek() at a position - # different than the start - with codecs.open(test_support.TESTFN, 'w+', encoding=encoding) as f: - f.write(data) - f.seek(f.tell()) - f.write(data) - f.seek(0) - self.assertEqual(f.read(), data * 2) - - # (StreamWriter) Check that the BOM is not written after a seek() - # at a position different than the start - with codecs.open(test_support.TESTFN, 'w+', encoding=encoding) as f: - f.writer.write(data) - f.writer.seek(f.writer.tell()) - f.writer.write(data) - f.seek(0) - self.assertEqual(f.read(), data * 2) - - -def test_main(): - test_support.run_unittest( - UTF32Test, - UTF32LETest, - UTF32BETest, - UTF16Test, - UTF16LETest, - UTF16BETest, - UTF8Test, - UTF8SigTest, - UTF7Test, - UTF16ExTest, - ReadBufferTest, - CharBufferTest, - EscapeDecodeTest, - RecodingTest, - PunycodeTest, - UnicodeInternalTest, - NameprepTest, - IDNACodecTest, - CodecsModuleTest, - StreamReaderTest, - EncodedFileTest, - Str2StrTest, - BasicUnicodeTest, - BasicStrTest, - CharmapTest, - WithStmtTest, - BomTest, - ) - - -if __name__ == 
"__main__": - test_main() diff --git a/lib-python/modified-2.7/test/test_descr.py b/lib-python/modified-2.7/test/test_descr.py --- a/lib-python/modified-2.7/test/test_descr.py +++ b/lib-python/modified-2.7/test/test_descr.py @@ -4399,13 +4399,10 @@ self.assertTrue(l.__add__ != [5].__add__) self.assertTrue(l.__add__ != l.__mul__) self.assertTrue(l.__add__.__name__ == '__add__') - if hasattr(l.__add__, '__self__'): - # CPython - self.assertTrue(l.__add__.__self__ is l) + self.assertTrue(l.__add__.__self__ is l) + if hasattr(l.__add__, '__objclass__'): # CPython self.assertTrue(l.__add__.__objclass__ is list) - else: - # Python implementations where [].__add__ is a normal bound method - self.assertTrue(l.__add__.im_self is l) + else: # PyPy self.assertTrue(l.__add__.im_class is list) self.assertEqual(l.__add__.__doc__, list.__add__.__doc__) try: diff --git a/lib-python/modified-2.7/test/test_dis.py b/lib-python/modified-2.7/test/test_dis.py deleted file mode 100644 --- a/lib-python/modified-2.7/test/test_dis.py +++ /dev/null @@ -1,152 +0,0 @@ -# Minimal tests for dis module - -from test.test_support import run_unittest -import unittest -import sys -import dis -import StringIO - - -def _f(a): - print a - return 1 - -dis_f = """\ - %-4d 0 LOAD_FAST 0 (a) - 3 PRINT_ITEM - 4 PRINT_NEWLINE - - %-4d 5 LOAD_CONST 1 (1) - 8 RETURN_VALUE -"""%(_f.func_code.co_firstlineno + 1, - _f.func_code.co_firstlineno + 2) - - -# we "call" rangexxx() instead of range() to disable the -# pypy optimization that turns it into CALL_LIKELY_BUILTIN. 
-def bug708901(): - for res in rangexxx(1, - 10): - pass - -dis_bug708901 = """\ - %-4d 0 SETUP_LOOP 23 (to 26) - 3 LOAD_GLOBAL 0 (rangexxx) - 6 LOAD_CONST 1 (1) - - %-4d 9 LOAD_CONST 2 (10) - 12 CALL_FUNCTION 2 - 15 GET_ITER - >> 16 FOR_ITER 6 (to 25) - 19 STORE_FAST 0 (res) - - %-4d 22 JUMP_ABSOLUTE 16 - >> 25 POP_BLOCK - >> 26 LOAD_CONST 0 (None) - 29 RETURN_VALUE -"""%(bug708901.func_code.co_firstlineno + 1, - bug708901.func_code.co_firstlineno + 2, - bug708901.func_code.co_firstlineno + 3) - - -def bug1333982(x=[]): - assert 0, ([s for s in x] + - 1) - pass - -dis_bug1333982 = """\ - %-4d 0 LOAD_CONST 1 (0) - 3 POP_JUMP_IF_TRUE 38 - 6 LOAD_GLOBAL 0 (AssertionError) - 9 BUILD_LIST 0 - 12 LOAD_FAST 0 (x) - 15 GET_ITER - >> 16 FOR_ITER 12 (to 31) - 19 STORE_FAST 1 (s) - 22 LOAD_FAST 1 (s) - 25 LIST_APPEND 2 - 28 JUMP_ABSOLUTE 16 - - %-4d >> 31 LOAD_CONST 2 (1) - 34 BINARY_ADD - 35 RAISE_VARARGS 2 - - %-4d >> 38 LOAD_CONST 0 (None) - 41 RETURN_VALUE -"""%(bug1333982.func_code.co_firstlineno + 1, - bug1333982.func_code.co_firstlineno + 2, - bug1333982.func_code.co_firstlineno + 3) - -_BIG_LINENO_FORMAT = """\ -%3d 0 LOAD_GLOBAL 0 (spam) - 3 POP_TOP - 4 LOAD_CONST 0 (None) - 7 RETURN_VALUE -""" - -class DisTests(unittest.TestCase): - def do_disassembly_test(self, func, expected): - s = StringIO.StringIO() - save_stdout = sys.stdout - sys.stdout = s - dis.dis(func) - sys.stdout = save_stdout - got = s.getvalue() - # Trim trailing blanks (if any). 
- lines = got.split('\n') - lines = [line.rstrip() for line in lines] - expected = expected.split("\n") - import difflib - if expected != lines: - self.fail( - "events did not match expectation:\n" + - "\n".join(difflib.ndiff(expected, - lines))) - - def test_opmap(self): - self.assertEqual(dis.opmap["STOP_CODE"], 0) - self.assertIn(dis.opmap["LOAD_CONST"], dis.hasconst) - self.assertIn(dis.opmap["STORE_NAME"], dis.hasname) - - def test_opname(self): - self.assertEqual(dis.opname[dis.opmap["LOAD_FAST"]], "LOAD_FAST") - - def test_boundaries(self): - self.assertEqual(dis.opmap["EXTENDED_ARG"], dis.EXTENDED_ARG) - self.assertEqual(dis.opmap["STORE_NAME"], dis.HAVE_ARGUMENT) - - def test_dis(self): - self.do_disassembly_test(_f, dis_f) - - def test_bug_708901(self): - self.do_disassembly_test(bug708901, dis_bug708901) - - def test_bug_1333982(self): - # This one is checking bytecodes generated for an `assert` statement, - # so fails if the tests are run with -O. Skip this test then. - if __debug__: - self.do_disassembly_test(bug1333982, dis_bug1333982) - - def test_big_linenos(self): - def func(count): - namespace = {} - func = "def foo():\n " + "".join(["\n "] * count + ["spam\n"]) - exec func in namespace - return namespace['foo'] - - # Test all small ranges - for i in xrange(1, 300): - expected = _BIG_LINENO_FORMAT % (i + 2) - self.do_disassembly_test(func(i), expected) - - # Test some larger ranges too - for i in xrange(300, 5000, 10): - expected = _BIG_LINENO_FORMAT % (i + 2) - self.do_disassembly_test(func(i), expected) - -def test_main(): - run_unittest(DisTests) - - -if __name__ == "__main__": - test_main() diff --git a/lib-python/modified-2.7/test/test_extcall.py b/lib-python/modified-2.7/test/test_extcall.py --- a/lib-python/modified-2.7/test/test_extcall.py +++ b/lib-python/modified-2.7/test/test_extcall.py @@ -299,7 +299,7 @@ def f(a): return a self.assertEqual(f(**{u'a': 4}), 4) - self.assertRaises(TypeError, lambda: f(**{u'stören': 4})) + 
self.assertRaises(TypeError, f, **{u'stören': 4}) self.assertRaises(TypeError, f, **{u'someLongString':2}) try: f(a=4, **{u'a': 4}) diff --git a/lib-python/modified-2.7/test/test_fcntl.py b/lib-python/modified-2.7/test/test_fcntl.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/test/test_fcntl.py @@ -0,0 +1,108 @@ +"""Test program for the fcntl C module. + +OS/2+EMX doesn't support the file locking operations. + +""" +import os +import struct +import sys +import unittest +from test.test_support import (verbose, TESTFN, unlink, run_unittest, + import_module) + +# Skip test if no fnctl module. +fcntl = import_module('fcntl') + + +# TODO - Write tests for flock() and lockf(). + +def get_lockdata(): + if sys.platform.startswith('atheos'): + start_len = "qq" + else: + try: + os.O_LARGEFILE + except AttributeError: + start_len = "ll" + else: + start_len = "qq" + + if sys.platform in ('netbsd1', 'netbsd2', 'netbsd3', + 'Darwin1.2', 'darwin', + 'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', + 'freebsd6', 'freebsd7', 'freebsd8', + 'bsdos2', 'bsdos3', 'bsdos4', + 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4', 'openbsd5'): + if struct.calcsize('l') == 8: + off_t = 'l' + pid_t = 'i' + else: + off_t = 'lxxxx' + pid_t = 'l' + lockdata = struct.pack(off_t + off_t + pid_t + 'hh', 0, 0, 0, + fcntl.F_WRLCK, 0) + elif sys.platform in ['aix3', 'aix4', 'hp-uxB', 'unixware7']: + lockdata = struct.pack('hhlllii', fcntl.F_WRLCK, 0, 0, 0, 0, 0, 0) + elif sys.platform in ['os2emx']: + lockdata = None + else: + lockdata = struct.pack('hh'+start_len+'hh', fcntl.F_WRLCK, 0, 0, 0, 0, 0) + if lockdata: + if verbose: + print 'struct.pack: ', repr(lockdata) + return lockdata + +lockdata = get_lockdata() + + +class TestFcntl(unittest.TestCase): + + def setUp(self): + self.f = None + + def tearDown(self): + if self.f and not self.f.closed: + self.f.close() + unlink(TESTFN) + + def test_fcntl_fileno(self): + # the example from the library docs + self.f = open(TESTFN, 'w') + rv = 
fcntl.fcntl(self.f.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
+        if verbose:
+            print 'Status from fcntl with O_NONBLOCK: ', rv
+        if sys.platform not in ['os2emx']:
+            rv = fcntl.fcntl(self.f.fileno(), fcntl.F_SETLKW, lockdata)
+            if verbose:
+                print 'String from fcntl with F_SETLKW: ', repr(rv)
+        self.f.close()
+
+    def test_fcntl_file_descriptor(self):
+        # again, but pass the file rather than numeric descriptor
+        self.f = open(TESTFN, 'w')
+        rv = fcntl.fcntl(self.f, fcntl.F_SETFL, os.O_NONBLOCK)
+        if sys.platform not in ['os2emx']:
+            rv = fcntl.fcntl(self.f, fcntl.F_SETLKW, lockdata)
+        self.f.close()
+
+    def test_fcntl_64_bit(self):
+        # Issue #1309352: fcntl shouldn't fail when the third arg fits in a
+        # C 'long' but not in a C 'int'.
+        try:
+            cmd = fcntl.F_NOTIFY
+            # This flag is larger than 2**31 in 64-bit builds
+            flags = fcntl.DN_MULTISHOT
+        except AttributeError:
+            self.skipTest("F_NOTIFY or DN_MULTISHOT unavailable")
+        fd = os.open(os.path.dirname(os.path.abspath(TESTFN)), os.O_RDONLY)
+        try:
+            fcntl.fcntl(fd, cmd, flags)
+        finally:
+            os.close(fd)
+
+
+def test_main():
+    run_unittest(TestFcntl)
+
+if __name__ == '__main__':
+    test_main()
diff --git a/lib-python/modified-2.7/test/test_heapq.py b/lib-python/modified-2.7/test/test_heapq.py
--- a/lib-python/modified-2.7/test/test_heapq.py
+++ b/lib-python/modified-2.7/test/test_heapq.py
@@ -186,6 +186,11 @@
         self.assertFalse(sys.modules['heapq'] is self.module)
         self.assertTrue(hasattr(self.module.heapify, 'func_code'))
 
+    def test_islice_protection(self):
+        m = self.module
+        self.assertFalse(m.nsmallest(-1, [1]))
+        self.assertFalse(m.nlargest(-1, [1]))
+
 class TestHeapC(TestHeap):
     module = c_heapq
 
diff --git a/lib-python/modified-2.7/test/test_import.py b/lib-python/modified-2.7/test/test_import.py
--- a/lib-python/modified-2.7/test/test_import.py
+++ b/lib-python/modified-2.7/test/test_import.py
@@ -64,6 +64,7 @@
             except ImportError, err:
                 self.fail("import from %s failed: %s" % (ext, err))
             else:
+                # XXX importing .pyw
is missing on Windows self.assertEqual(mod.a, a, "module loaded (%s) but contents invalid" % mod) self.assertEqual(mod.b, b, diff --git a/lib-python/2.7/test/test_multibytecodec.py b/lib-python/modified-2.7/test/test_multibytecodec.py copy from lib-python/2.7/test/test_multibytecodec.py copy to lib-python/modified-2.7/test/test_multibytecodec.py --- a/lib-python/2.7/test/test_multibytecodec.py +++ b/lib-python/modified-2.7/test/test_multibytecodec.py @@ -42,7 +42,7 @@ dec = codecs.getdecoder('euc-kr') myreplace = lambda exc: (u'', sys.maxint+1) codecs.register_error('test.cjktest', myreplace) - self.assertRaises(IndexError, dec, + self.assertRaises((IndexError, OverflowError), dec, 'apple\x92ham\x93spam', 'test.cjktest') def test_codingspec(self): @@ -148,7 +148,8 @@ class Test_StreamReader(unittest.TestCase): def test_bug1728403(self): try: - open(TESTFN, 'w').write('\xa1') + with open(TESTFN, 'w') as f: + f.write('\xa1') f = codecs.open(TESTFN, encoding='cp949') self.assertRaises(UnicodeDecodeError, f.read, 2) finally: diff --git a/lib-python/2.7/test/test_multibytecodec_support.py b/lib-python/modified-2.7/test/test_multibytecodec_support.py copy from lib-python/2.7/test/test_multibytecodec_support.py copy to lib-python/modified-2.7/test/test_multibytecodec_support.py --- a/lib-python/2.7/test/test_multibytecodec_support.py +++ b/lib-python/modified-2.7/test/test_multibytecodec_support.py @@ -107,8 +107,8 @@ def myreplace(exc): return (u'x', sys.maxint + 1) codecs.register_error("test.cjktest", myreplace) - self.assertRaises(IndexError, self.encode, self.unmappedunicode, - 'test.cjktest') + self.assertRaises((IndexError, OverflowError), self.encode, + self.unmappedunicode, 'test.cjktest') def test_callback_None_index(self): def myreplace(exc): diff --git a/lib-python/modified-2.7/test/test_multiprocessing.py b/lib-python/modified-2.7/test/test_multiprocessing.py --- a/lib-python/modified-2.7/test/test_multiprocessing.py +++ 
b/lib-python/modified-2.7/test/test_multiprocessing.py @@ -510,7 +510,6 @@ p.join() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_qsize(self): q = self.Queue() try: @@ -532,7 +531,6 @@ time.sleep(DELTA) q.task_done() - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_task_done(self): queue = self.JoinableQueue() @@ -1091,7 +1089,6 @@ class _TestPoolWorkerLifetime(BaseTestCase): ALLOWED_TYPES = ('processes', ) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_pool_worker_lifetime(self): p = multiprocessing.Pool(3, maxtasksperchild=10) self.assertEqual(3, len(p._pool)) @@ -1280,7 +1277,6 @@ queue = manager.get_queue() queue.put('hello world') - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_rapid_restart(self): authkey = os.urandom(32) manager = QueueManager( @@ -1297,6 +1293,7 @@ queue = manager.get_queue() self.assertEqual(queue.get(), 'hello world') del queue + test_support.gc_collect() manager.shutdown() manager = QueueManager( address=addr, authkey=authkey, serializer=SERIALIZER) @@ -1573,7 +1570,6 @@ ALLOWED_TYPES = ('processes',) - @unittest.skipIf(os.name == 'posix', "PYPY: FIXME") def test_heap(self): iterations = 5000 maxblocks = 50 diff --git a/lib-python/modified-2.7/test/test_repr.py b/lib-python/modified-2.7/test/test_repr.py --- a/lib-python/modified-2.7/test/test_repr.py +++ b/lib-python/modified-2.7/test/test_repr.py @@ -254,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "<module '%s' from '%s'>" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. 
It seems to me that %r is safer. + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "<module %r from %r>" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "<module '%s' from '%s'>" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "<module 'sys' (built-in)>") def test_type(self): diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/modified-2.7/test/test_sets.py copy from lib-python/2.7/test/test_sets.py copy to lib-python/modified-2.7/test/test_sets.py --- a/lib-python/2.7/test/test_sets.py +++ b/lib-python/modified-2.7/test/test_sets.py @@ -686,7 +686,9 @@ set_list = sorted(self.set) self.assertEqual(len(dup_list), len(set_list)) for i, el in enumerate(dup_list): - self.assertIs(el, set_list[i]) + # Object identity is not guaranteed for immutable objects, so we + # can't use assertIs here. + self.assertEqual(el, set_list[i]) def test_deep_copy(self): dup = copy.deepcopy(self.set) diff --git a/lib-python/modified-2.7/test/test_ssl.py b/lib-python/modified-2.7/test/test_ssl.py --- a/lib-python/modified-2.7/test/test_ssl.py +++ b/lib-python/modified-2.7/test/test_ssl.py @@ -58,32 +58,35 @@ # Issue #9415: Ubuntu hijacks their OpenSSL and forcefully disables SSLv2 def skip_if_broken_ubuntu_ssl(func): - # We need to access the lower-level wrapper in order to create an - # implicit SSL context without trying to connect or listen. - try: - import _ssl - except ImportError: - # The returned function won't get executed, just ignore the error - pass - @functools.wraps(func) - def f(*args, **kwargs): + if hasattr(ssl, 'PROTOCOL_SSLv2'): + # We need to access the lower-level wrapper in order to create an + # implicit SSL context without trying to connect or listen. 
try: - s = socket.socket(socket.AF_INET) - _ssl.sslwrap(s._sock, 0, None, None, - ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) - except ssl.SSLError as e: - if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and - platform.linux_distribution() == ('debian', 'squeeze/sid', '') - and 'Invalid SSL protocol variant specified' in str(e)): - raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") - return func(*args, **kwargs) - return f + import _ssl + except ImportError: + # The returned function won't get executed, just ignore the error + pass + @functools.wraps(func) + def f(*args, **kwargs): + try: + s = socket.socket(socket.AF_INET) + _ssl.sslwrap(s._sock, 0, None, None, + ssl.CERT_NONE, ssl.PROTOCOL_SSLv2, None, None) + except ssl.SSLError as e: + if (ssl.OPENSSL_VERSION_INFO == (0, 9, 8, 15, 15) and + platform.linux_distribution() == ('debian', 'squeeze/sid', '') + and 'Invalid SSL protocol variant specified' in str(e)): + raise unittest.SkipTest("Patched Ubuntu OpenSSL breaks behaviour") + return func(*args, **kwargs) + return f + else: + return func class BasicSocketTests(unittest.TestCase): def test_constants(self): - ssl.PROTOCOL_SSLv2 + #ssl.PROTOCOL_SSLv2 ssl.PROTOCOL_SSLv23 ssl.PROTOCOL_SSLv3 ssl.PROTOCOL_TLSv1 @@ -105,7 +108,6 @@ print "didn't raise TypeError" ssl.RAND_add("this is a random string", 75.0) - @test_support.impl_detail("obscure test") def test_parse_cert(self): # note that this uses an 'unofficial' function in _ssl.c, # provided solely for this test, to exercise the certificate @@ -840,6 +842,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is it necessary? 
+ test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -965,7 +969,8 @@ try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv23, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @@ -977,7 +982,8 @@ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, True, ssl.CERT_REQUIRED) - try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) + if hasattr(ssl, 'PROTOCOL_SSLv2'): + try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv23, False) diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/modified-2.7/test/test_subprocess.py copy from lib-python/2.7/test/test_subprocess.py copy to lib-python/modified-2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/modified-2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + 
SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/modified-2.7/test/test_support.py b/lib-python/modified-2.7/test/test_support.py --- a/lib-python/modified-2.7/test/test_support.py +++ b/lib-python/modified-2.7/test/test_support.py @@ -1066,7 +1066,7 @@ if '--pdb' in sys.argv: import pdb, traceback traceback.print_tb(exc_info[2]) - pdb.post_mortem(exc_info[2], pdb.Pdb) + pdb.post_mortem(exc_info[2]) # ---------------------------------- diff --git a/lib-python/modified-2.7/test/test_sys_settrace.py b/lib-python/modified-2.7/test/test_sys_settrace.py --- a/lib-python/modified-2.7/test/test_sys_settrace.py +++ b/lib-python/modified-2.7/test/test_sys_settrace.py @@ -286,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/modified-2.7/test/test_tarfile.py copy from lib-python/2.7/test/test_tarfile.py copy to lib-python/modified-2.7/test/test_tarfile.py --- a/lib-python/2.7/test/test_tarfile.py +++ b/lib-python/modified-2.7/test/test_tarfile.py @@ -169,6 +169,7 @@ except tarfile.ReadError: self.fail("tarfile.open() failed on empty archive") 
self.assertListEqual(tar.getmembers(), []) + tar.close() def test_null_tarfile(self): # Test for issue6123: Allow opening empty archives. @@ -207,16 +208,21 @@ fobj = open(self.tarname, "rb") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, os.path.abspath(fobj.name)) + tar.close() def test_no_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) self.assertRaises(AttributeError, getattr, fobj, "name") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, None) def test_empty_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) fobj.name = "" tar = tarfile.open(fileobj=fobj, mode=self.mode) @@ -515,6 +521,7 @@ self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1") tarinfo = self.tar.getmember("pax/umlauts-ÄÖÜäöüß") self._test_member(tarinfo, size=7011, chksum=md5_regtype) + self.tar.close() class LongnameTest(ReadTest): @@ -675,6 +682,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.rmdir(path) @@ -692,6 +700,7 @@ tar.gettarinfo(target) tarinfo = tar.gettarinfo(link) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(target) os.remove(link) @@ -704,6 +713,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(path) @@ -722,6 +732,7 @@ tar.add(dstname) os.chdir(cwd) self.assertTrue(tar.getnames() == [], "added the archive to itself") + tar.close() def test_exclude(self): tempdir = os.path.join(TEMPDIR, "exclude") @@ -742,6 +753,7 @@ tar = tarfile.open(tmpname, "r") self.assertEqual(len(tar.getmembers()), 1) self.assertEqual(tar.getnames()[0], "empty_dir") + tar.close() finally: shutil.rmtree(tempdir) @@ -859,7 
+871,9 @@ fobj.close() elif self.mode.endswith("bz2"): dec = bz2.BZ2Decompressor() - data = open(tmpname, "rb").read() + f = open(tmpname, "rb") + data = f.read() + f.close() data = dec.decompress(data) self.assertTrue(len(dec.unused_data) == 0, "found trailing data") @@ -938,6 +952,7 @@ "unable to read longname member") self.assertEqual(tarinfo.linkname, member.linkname, "unable to read longname member") + tar.close() def test_longname_1023(self): self._test(("longnam/" * 127) + "longnam") @@ -1030,6 +1045,7 @@ else: n = tar.getmembers()[0].name self.assertTrue(name == n, "PAX longname creation failed") + tar.close() def test_pax_global_header(self): pax_headers = { @@ -1058,6 +1074,7 @@ tarfile.PAX_NUMBER_FIELDS[key](val) except (TypeError, ValueError): self.fail("unable to convert pax header field") + tar.close() def test_pax_extended_header(self): # The fields from the pax header have priority over the @@ -1077,6 +1094,7 @@ self.assertEqual(t.pax_headers, pax_headers) self.assertEqual(t.name, "foo") self.assertEqual(t.uid, 123) + tar.close() class UstarUnicodeTest(unittest.TestCase): @@ -1120,6 +1138,7 @@ tarinfo.name = "foo" tarinfo.uname = u"äöü" self.assertRaises(UnicodeError, tar.addfile, tarinfo) + tar.close() def test_unicode_argument(self): tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict") @@ -1174,6 +1193,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="ascii", errors=handler) self.assertEqual(tar.getnames()[0], name) + tar.close() self.assertRaises(UnicodeError, tarfile.open, tmpname, encoding="ascii", errors="strict") @@ -1186,6 +1206,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1", errors="utf-8") self.assertEqual(tar.getnames()[0], "äöü/" + u"ß".encode("utf8")) + tar.close() class AppendTest(unittest.TestCase): @@ -1213,6 +1234,7 @@ def _test(self, names=["bar"], fileobj=None): tar = tarfile.open(self.tarname, fileobj=fileobj) self.assertEqual(tar.getnames(), names) + tar.close() def 
test_non_existing(self): self._add_testfile() @@ -1231,7 +1253,9 @@ def test_fileobj(self): self._create_testtar() - data = open(self.tarname).read() + f = open(self.tarname) + data = f.read() + f.close() fobj = StringIO.StringIO(data) self._add_testfile(fobj) fobj.seek(0) @@ -1257,7 +1281,9 @@ # Append mode is supposed to fail if the tarfile to append to # does not end with a zero block. def _test_error(self, data): - open(self.tarname, "wb").write(data) + f = open(self.tarname, "wb") + f.write(data) + f.close() self.assertRaises(tarfile.ReadError, self._add_testfile) def test_null(self): diff --git a/lib-python/modified-2.7/test/test_tempfile.py b/lib-python/modified-2.7/test/test_tempfile.py --- a/lib-python/modified-2.7/test/test_tempfile.py +++ b/lib-python/modified-2.7/test/test_tempfile.py @@ -23,8 +23,8 @@ # TEST_FILES may need to be tweaked for systems depending on the maximum # number of files that can be opened at one time (see ulimit -n) -if sys.platform in ('openbsd3', 'openbsd4'): - TEST_FILES = 48 +if sys.platform.startswith("openbsd"): + TEST_FILES = 64 # ulimit -n defaults to 128 for normal users else: TEST_FILES = 100 diff --git a/lib-python/modified-2.7/test/test_urllib2.py b/lib-python/modified-2.7/test/test_urllib2.py --- a/lib-python/modified-2.7/test/test_urllib2.py +++ b/lib-python/modified-2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/modified-2.7/test/test_weakref.py b/lib-python/modified-2.7/test/test_weakref.py --- a/lib-python/modified-2.7/test/test_weakref.py +++ b/lib-python/modified-2.7/test/test_weakref.py @@ -993,13 +993,13 @@ self.assertTrue(len(weakdict) == 2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 1) - if k is key1: + if k == key1: self.assertTrue(v is value1) else: self.assertTrue(v is 
value2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 0) - if k is key1: + if k == key1: self.assertTrue(v is value1) else: self.assertTrue(v is value2) diff --git a/lib-python/modified-2.7/urllib2.py b/lib-python/modified-2.7/urllib2.py new file mode 100644 --- /dev/null +++ b/lib-python/modified-2.7/urllib2.py @@ -0,0 +1,1436 @@ +"""An extensible library for opening URLs using a variety of protocols + +The simplest way to use this module is to call the urlopen function, +which accepts a string containing a URL or a Request object (described +below). It opens the URL and returns the results as file-like +object; the returned object has some extra methods described below. + +The OpenerDirector manages a collection of Handler objects that do +all the actual work. Each Handler implements a particular protocol or +option. The OpenerDirector is a composite object that invokes the +Handlers needed to open the requested URL. For example, the +HTTPHandler performs HTTP GET and POST requests and deals with +non-error returns. The HTTPRedirectHandler automatically deals with +HTTP 301, 302, 303 and 307 redirect errors, and the HTTPDigestAuthHandler +deals with digest authentication. + +urlopen(url, data=None) -- Basic usage is the same as original +urllib. pass the url and optionally data to post to an HTTP URL, and +get a file-like object back. One difference is that you can also pass +a Request instance instead of URL. Raises a URLError (subclass of +IOError); for HTTP errors, raises an HTTPError, which can also be +treated as a valid response. + +build_opener -- Function that creates a new OpenerDirector instance. +Will install the default handlers. Accepts one or more Handlers as +arguments, either instances or Handler classes that it will +instantiate. If one of the argument is a subclass of the default +handler, the argument will be installed instead of the default. + +install_opener -- Installs a new opener as the default opener. 
+ +objects of interest: + +OpenerDirector -- Sets up the User Agent as the Python-urllib client and manages +the Handler classes, while dealing with requests and responses. + +Request -- An object that encapsulates the state of a request. The +state can be as simple as the URL. It can also include extra HTTP +headers, e.g. a User-Agent. + +BaseHandler -- + +exceptions: +URLError -- A subclass of IOError, individual protocols have their own +specific subclass. + +HTTPError -- Also a valid HTTP response, so you can treat an HTTP error +as an exceptional event or valid response. + +internals: +BaseHandler and parent +_call_chain conventions + +Example usage: + +import urllib2 + +# set up authentication info +authinfo = urllib2.HTTPBasicAuthHandler() +authinfo.add_password(realm='PDQ Application', + uri='https://mahler:8092/site-updates.py', + user='klem', + passwd='geheim$parole') + +proxy_support = urllib2.ProxyHandler({"http" : "http://ahad-haam:3128"}) + +# build a new opener that adds authentication and caching FTP handlers +opener = urllib2.build_opener(proxy_support, authinfo, urllib2.CacheFTPHandler) + +# install it +urllib2.install_opener(opener) + +f = urllib2.urlopen('http://www.python.org/') + + +""" + +# XXX issues: +# If an authentication error handler that tries to perform +# authentication for some reason but fails, how should the error be +# signalled? The client needs to know the HTTP error code. But if +# the handler knows that the problem was, e.g., that it didn't know +# that hash algo that requested in the challenge, it would be good to +# pass that information along to the client, too. +# ftp errors aren't handled cleanly +# check digest against correct (i.e. 
non-apache) implementation + +# Possible extensions: +# complex proxies XXX not sure what exactly was meant by this +# abstract factory for opener + +import base64 +import hashlib +import httplib +import mimetools +import os +import posixpath +import random +import re +import socket +import sys +import time +import urlparse +import bisect + +try: + from cStringIO import StringIO +except ImportError: + from StringIO import StringIO + +from urllib import (unwrap, unquote, splittype, splithost, quote, + addinfourl, splitport, splittag, + splitattr, ftpwrapper, splituser, splitpasswd, splitvalue) + +# support for FileHandler, proxies via environment variables +from urllib import localhost, url2pathname, getproxies, proxy_bypass + +# used in User-Agent header sent +__version__ = sys.version[:3] + +_opener = None +def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + global _opener + if _opener is None: + _opener = build_opener() + return _opener.open(url, data, timeout) + +def install_opener(opener): + global _opener + _opener = opener + +# do these error classes make sense? +# make sure all of the IOError stuff is overridden. we just want to be +# subtypes. + +class URLError(IOError): + # URLError is a sub-type of IOError, but it doesn't share any of + # the implementation. need to override __init__ and __str__. + # It sets self.args for compatibility with other EnvironmentError + # subclasses, but args doesn't have the typical format with errno in + # slot 0 and strerror in slot 1. This may be better than nothing. 
+ def __init__(self, reason): + self.args = reason, + self.reason = reason + + def __str__(self): + return '<urlopen error %s>' % self.reason + +class HTTPError(URLError, addinfourl): + """Raised when HTTP error occurs, but also acts like non-error return""" + __super_init = addinfourl.__init__ + + def __init__(self, url, code, msg, hdrs, fp): + self.code = code + self.msg = msg + self.hdrs = hdrs + self.fp = fp + self.filename = url + # The addinfourl classes depend on fp being a valid file + # object. In some cases, the HTTPError may not have a valid + # file object. If this happens, the simplest workaround is to + # not initialize the base classes. + if fp is not None: + self.__super_init(fp, hdrs, url, code) + + def __str__(self): + return 'HTTP Error %s: %s' % (self.code, self.msg) + +# copied from cookielib.py +_cut_port_re = re.compile(r":\d+$") +def request_host(request): + """Return request-host, as defined by RFC 2965. + + Variation from RFC: returned value is lowercased, for convenient + comparison. 
+ + """ + url = request.get_full_url() + host = urlparse.urlparse(url)[1] + if host == "": + host = request.get_header("Host", "") + + # remove port, if present + host = _cut_port_re.sub("", host, 1) + return host.lower() + +class Request: + + def __init__(self, url, data=None, headers={}, + origin_req_host=None, unverifiable=False): + # unwrap('') --> 'type://host/path' + self.__original = unwrap(url) + self.__original, fragment = splittag(self.__original) + self.type = None + # self.__r_type is what's left after doing the splittype + self.host = None + self.port = None + self._tunnel_host = None + self.data = data + self.headers = {} + for key, value in headers.items(): + self.add_header(key, value) + self.unredirected_hdrs = {} + if origin_req_host is None: + origin_req_host = request_host(self) + self.origin_req_host = origin_req_host + self.unverifiable = unverifiable + + def __getattr__(self, attr): + # XXX this is a fallback mechanism to guard against these + # methods getting called in a non-standard order. this may be + # too complicated and/or unnecessary. + # XXX should the __r_XXX attributes be public? 
+ if attr[:12] == '_Request__r_': + name = attr[12:] + if hasattr(Request, 'get_' + name): + getattr(self, 'get_' + name)() + return getattr(self, attr) + raise AttributeError, attr + + def get_method(self): + if self.has_data(): + return "POST" + else: + return "GET" + + # XXX these helper methods are lame + + def add_data(self, data): + self.data = data + + def has_data(self): + return self.data is not None + + def get_data(self): + return self.data + + def get_full_url(self): + return self.__original + + def get_type(self): + if self.type is None: + self.type, self.__r_type = splittype(self.__original) + if self.type is None: + raise ValueError, "unknown url type: %s" % self.__original + return self.type + + def get_host(self): + if self.host is None: + self.host, self.__r_host = splithost(self.__r_type) + if self.host: + self.host = unquote(self.host) + return self.host + + def get_selector(self): + return self.__r_host + + def set_proxy(self, host, type): + if self.type == 'https' and not self._tunnel_host: + self._tunnel_host = self.host + else: + self.type = type + self.__r_host = self.__original + + self.host = host + + def has_proxy(self): + return self.__r_host == self.__original + + def get_origin_req_host(self): + return self.origin_req_host + + def is_unverifiable(self): + return self.unverifiable + + def add_header(self, key, val): + # useful for something like authentication + self.headers[key.capitalize()] = val + + def add_unredirected_header(self, key, val): + # will not be added to a redirected request + self.unredirected_hdrs[key.capitalize()] = val + + def has_header(self, header_name): + return (header_name in self.headers or + header_name in self.unredirected_hdrs) + + def get_header(self, header_name, default=None): + return self.headers.get( + header_name, + self.unredirected_hdrs.get(header_name, default)) + + def header_items(self): + hdrs = self.unredirected_hdrs.copy() + hdrs.update(self.headers) + return hdrs.items() + +class 
OpenerDirector: + def __init__(self): + client_version = "Python-urllib/%s" % __version__ + self.addheaders = [('User-agent', client_version)] + # manage the individual handlers + self.handlers = [] + self.handle_open = {} + self.handle_error = {} + self.process_response = {} + self.process_request = {} + + def add_handler(self, handler): + if not hasattr(handler, "add_parent"): + raise TypeError("expected BaseHandler instance, got %r" % + type(handler)) + + added = False + for meth in dir(handler): + if meth in ["redirect_request", "do_open", "proxy_open"]: + # oops, coincidental match + continue + + i = meth.find("_") + protocol = meth[:i] + condition = meth[i+1:] + + if condition.startswith("error"): + j = condition.find("_") + i + 1 + kind = meth[j+1:] + try: + kind = int(kind) + except ValueError: + pass + lookup = self.handle_error.get(protocol, {}) + self.handle_error[protocol] = lookup + elif condition == "open": + kind = protocol + lookup = self.handle_open + elif condition == "response": + kind = protocol + lookup = self.process_response + elif condition == "request": + kind = protocol + lookup = self.process_request + else: + continue + + handlers = lookup.setdefault(kind, []) + if handlers: + bisect.insort(handlers, handler) + else: + handlers.append(handler) + added = True + + if added: + # the handlers must work in a specific order, the order + # is specified in a Handler attribute + bisect.insort(self.handlers, handler) + handler.add_parent(self) + + def close(self): + # Only exists for backwards compatibility. + pass + + def _call_chain(self, chain, kind, meth_name, *args): + # Handlers raise an exception if no one else should try to handle + # the request, or return None if they can't but another handler + # could. Otherwise, they return the response. 
+ handlers = chain.get(kind, ()) + for handler in handlers: + func = getattr(handler, meth_name) + + result = func(*args) + if result is not None: + return result + + def open(self, fullurl, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT): + # accept a URL or a Request object + if isinstance(fullurl, basestring): + req = Request(fullurl, data) + else: + req = fullurl + if data is not None: + req.add_data(data) + + req.timeout = timeout + protocol = req.get_type() + + # pre-process request + meth_name = protocol+"_request" + for processor in self.process_request.get(protocol, []): + meth = getattr(processor, meth_name) + req = meth(req) + + response = self._open(req, data) + + # post-process response + meth_name = protocol+"_response" + for processor in self.process_response.get(protocol, []): + meth = getattr(processor, meth_name) + response = meth(req, response) + + return response + + def _open(self, req, data=None): + result = self._call_chain(self.handle_open, 'default', + 'default_open', req) + if result: + return result + + protocol = req.get_type() + result = self._call_chain(self.handle_open, protocol, protocol + + '_open', req) + if result: + return result + + return self._call_chain(self.handle_open, 'unknown', + 'unknown_open', req) + + def error(self, proto, *args): + if proto in ('http', 'https'): + # XXX http[s] protocols are special-cased + dict = self.handle_error['http'] # https is not different than http + proto = args[2] # YUCK! 
+ meth_name = 'http_error_%s' % proto + http_err = 1 + orig_args = args + else: + dict = self.handle_error + meth_name = proto + '_error' + http_err = 0 + args = (dict, proto, meth_name) + args + result = self._call_chain(*args) + if result: + return result + + if http_err: + args = (dict, 'default', 'http_error_default') + orig_args + return self._call_chain(*args) + +# XXX probably also want an abstract factory that knows when it makes +# sense to skip a superclass in favor of a subclass and when it might +# make sense to include both + +def build_opener(*handlers): + """Create an opener object from a list of handlers. + + The opener will use several default handlers, including support + for HTTP, FTP and when applicable, HTTPS. + + If any of the handlers passed as arguments are subclasses of the + default handlers, the default handlers will not be used. + """ + import types + def isclass(obj): + return isinstance(obj, (types.ClassType, type)) + + opener = OpenerDirector() + default_classes = [ProxyHandler, UnknownHandler, HTTPHandler, + HTTPDefaultErrorHandler, HTTPRedirectHandler, + FTPHandler, FileHandler, HTTPErrorProcessor] + if hasattr(httplib, 'HTTPS'): + default_classes.append(HTTPSHandler) + skip = set() + for klass in default_classes: + for check in handlers: + if isclass(check): + if issubclass(check, klass): + skip.add(klass) + elif isinstance(check, klass): + skip.add(klass) + for klass in skip: + default_classes.remove(klass) + + for klass in default_classes: + opener.add_handler(klass()) + + for h in handlers: + if isclass(h): + h = h() + opener.add_handler(h) + return opener + +class BaseHandler: + handler_order = 500 + + def add_parent(self, parent): + self.parent = parent + + def close(self): + # Only exists for backwards compatibility + pass + + def __lt__(self, other): + if not hasattr(other, "handler_order"): + # Try to preserve the old behavior of having custom classes + # inserted after default ones (works only for custom user + # classes 
which are not aware of handler_order). + return True + return self.handler_order < other.handler_order + + +class HTTPErrorProcessor(BaseHandler): + """Process HTTP error responses.""" + handler_order = 1000 # after all other processing + + def http_response(self, request, response): + code, msg, hdrs = response.code, response.msg, response.info() + + # According to RFC 2616, "2xx" code indicates that the client's + # request was successfully received, understood, and accepted. + if not (200 <= code < 300): + response = self.parent.error( + 'http', request, response, code, msg, hdrs) + + return response + + https_response = http_response + +class HTTPDefaultErrorHandler(BaseHandler): + def http_error_default(self, req, fp, code, msg, hdrs): + raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) + +class HTTPRedirectHandler(BaseHandler): + # maximum number of redirections to any single URL + # this is needed because of the state that cookies introduce + max_repeats = 4 + # maximum total number of redirections (regardless of URL) before + # assuming we're in a loop + max_redirections = 10 + + def redirect_request(self, req, fp, code, msg, headers, newurl): + """Return a Request or None in response to a redirect. + + This is called by the http_error_30x methods when a + redirection response is received. If a redirection should + take place, return a new Request to allow http_error_30x to + perform the redirect. Otherwise, raise HTTPError if no-one + else should try to handle this url. Return None if you can't + but another Handler might. + """ + m = req.get_method() + if (code in (301, 302, 303, 307) and m in ("GET", "HEAD") + or code in (301, 302, 303) and m == "POST"): + # Strictly (according to RFC 2616), 301 or 302 in response + # to a POST MUST NOT cause a redirection without confirmation + # from the user (of urllib2, in this case). In practice, + # essentially all clients do redirect in this case, so we + # do the same. 
+ # be conciliant with URIs containing a space + newurl = newurl.replace(' ', '%20') + newheaders = dict((k,v) for k,v in req.headers.items() + if k.lower() not in ("content-length", "content-type") + ) + return Request(newurl, + headers=newheaders, + origin_req_host=req.get_origin_req_host(), + unverifiable=True) + else: + raise HTTPError(req.get_full_url(), code, msg, headers, fp) + + # Implementation note: To avoid the server sending us into an + # infinite loop, the request object needs to track what URLs we + # have already seen. Do this by adding a handler-specific + # attribute to the Request object. + def http_error_302(self, req, fp, code, msg, headers): + # Some servers (incorrectly) return multiple Location headers + # (so probably same goes for URI). Use first header. + if 'location' in headers: + newurl = headers.getheaders('location')[0] + elif 'uri' in headers: + newurl = headers.getheaders('uri')[0] + else: + return + + # fix a possible malformed URL + urlparts = urlparse.urlparse(newurl) + if not urlparts.path: + urlparts = list(urlparts) + urlparts[2] = "/" + newurl = urlparse.urlunparse(urlparts) + + newurl = urlparse.urljoin(req.get_full_url(), newurl) + + # XXX Probably want to forget about the state of the current + # request, although that might interact poorly with other + # handlers that also use handler-specific request attributes + new = self.redirect_request(req, fp, code, msg, headers, newurl) + if new is None: + return + + # loop detection + # .redirect_dict has a key url if url was previously visited. 
+ if hasattr(req, 'redirect_dict'): + visited = new.redirect_dict = req.redirect_dict + if (visited.get(newurl, 0) >= self.max_repeats or + len(visited) >= self.max_redirections): + raise HTTPError(req.get_full_url(), code, + self.inf_msg + msg, headers, fp) + else: + visited = new.redirect_dict = req.redirect_dict = {} + visited[newurl] = visited.get(newurl, 0) + 1 + + # Don't close the fp until we are sure that we won't use it + # with HTTPError. + fp.read() + fp.close() + + return self.parent.open(new, timeout=req.timeout) + + http_error_301 = http_error_303 = http_error_307 = http_error_302 + + inf_msg = "The HTTP server returned a redirect error that would " \ + "lead to an infinite loop.\n" \ + "The last 30x error message was:\n" + + +def _parse_proxy(proxy): + """Return (scheme, user, password, host/port) given a URL or an authority. + + If a URL is supplied, it must have an authority (host:port) component. + According to RFC 3986, having an authority component means the URL must + have two slashes after the scheme: + + >>> _parse_proxy('file:/ftp.example.com/') + Traceback (most recent call last): + ValueError: proxy URL with no authority: 'file:/ftp.example.com/' + + The first three items of the returned tuple may be None. 
+ + Examples of authority parsing: + + >>> _parse_proxy('proxy.example.com') + (None, None, None, 'proxy.example.com') + >>> _parse_proxy('proxy.example.com:3128') + (None, None, None, 'proxy.example.com:3128') + + The authority component may optionally include userinfo (assumed to be + username:password): + + >>> _parse_proxy('joe:password at proxy.example.com') + (None, 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('joe:password at proxy.example.com:3128') + (None, 'joe', 'password', 'proxy.example.com:3128') + + Same examples, but with URLs instead: + + >>> _parse_proxy('http://proxy.example.com/') + ('http', None, None, 'proxy.example.com') + >>> _parse_proxy('http://proxy.example.com:3128/') + ('http', None, None, 'proxy.example.com:3128') + >>> _parse_proxy('http://joe:password at proxy.example.com/') + ('http', 'joe', 'password', 'proxy.example.com') + >>> _parse_proxy('http://joe:password at proxy.example.com:3128') + ('http', 'joe', 'password', 'proxy.example.com:3128') + + Everything after the authority is ignored: + + >>> _parse_proxy('ftp://joe:password at proxy.example.com/rubbish:3128') + ('ftp', 'joe', 'password', 'proxy.example.com') + + Test for no trailing '/' case: + + >>> _parse_proxy('http://joe:password at proxy.example.com') + ('http', 'joe', 'password', 'proxy.example.com') + + """ + scheme, r_scheme = splittype(proxy) + if not r_scheme.startswith("/"): + # authority + scheme = None + authority = proxy + else: + # URL + if not r_scheme.startswith("//"): + raise ValueError("proxy URL with no authority: %r" % proxy) + # We have an authority, so for RFC 3986-compliant URLs (by ss 3. 
+ # and 3.3.), path is empty or starts with '/' + end = r_scheme.find("/", 2) + if end == -1: + end = None + authority = r_scheme[2:end] + userinfo, hostport = splituser(authority) + if userinfo is not None: + user, password = splitpasswd(userinfo) + else: + user = password = None + return scheme, user, password, hostport + +class ProxyHandler(BaseHandler): + # Proxies must be in front + handler_order = 100 + + def __init__(self, proxies=None): + if proxies is None: + proxies = getproxies() + assert hasattr(proxies, 'has_key'), "proxies must be a mapping" + self.proxies = proxies + for type, url in proxies.items(): + setattr(self, '%s_open' % type, + lambda r, proxy=url, type=type, meth=self.proxy_open: \ + meth(r, proxy, type)) + + def proxy_open(self, req, proxy, type): + orig_type = req.get_type() + proxy_type, user, password, hostport = _parse_proxy(proxy) + + if proxy_type is None: + proxy_type = orig_type + + if req.host and proxy_bypass(req.host): + return None + + if user and password: + user_pass = '%s:%s' % (unquote(user), unquote(password)) + creds = base64.b64encode(user_pass).strip() + req.add_header('Proxy-authorization', 'Basic ' + creds) + hostport = unquote(hostport) + req.set_proxy(hostport, proxy_type) + + if orig_type == proxy_type or orig_type == 'https': + # let other handlers take care of it + return None + else: + # need to start over, because the other handlers don't + # grok the proxy's URL type + # e.g. 
if we have a constructor arg proxies like so: + # {'http': 'ftp://proxy.example.com'}, we may end up turning + # a request for http://acme.example.com/a into one for + # ftp://proxy.example.com/a + return self.parent.open(req, timeout=req.timeout) + +class HTTPPasswordMgr: + + def __init__(self): + self.passwd = {} + + def add_password(self, realm, uri, user, passwd): + # uri could be a single URI or a sequence + if isinstance(uri, basestring): + uri = [uri] + if not realm in self.passwd: + self.passwd[realm] = {} + for default_port in True, False: + reduced_uri = tuple( + [self.reduce_uri(u, default_port) for u in uri]) + self.passwd[realm][reduced_uri] = (user, passwd) + + def find_user_password(self, realm, authuri): + domains = self.passwd.get(realm, {}) + for default_port in True, False: + reduced_authuri = self.reduce_uri(authuri, default_port) + for uris, authinfo in domains.iteritems(): + for uri in uris: + if self.is_suburi(uri, reduced_authuri): + return authinfo + return None, None + + def reduce_uri(self, uri, default_port=True): + """Accept authority or URI and extract only the authority and path.""" + # note HTTP URLs do not have a userinfo component + parts = urlparse.urlsplit(uri) + if parts[1]: + # URI + scheme = parts[0] + authority = parts[1] + path = parts[2] or '/' + else: + # host or host:port + scheme = None + authority = uri + path = '/' + host, port = splitport(authority) + if default_port and port is None and scheme is not None: + dport = {"http": 80, + "https": 443, + }.get(scheme) + if dport is not None: + authority = "%s:%d" % (host, dport) + return authority, path + + def is_suburi(self, base, test): + """Check if test is below base in a URI tree + + Both args must be URIs in reduced form. 
+ """ + if base == test: + return True + if base[0] != test[0]: + return False + common = posixpath.commonprefix((base[1], test[1])) + if len(common) == len(base[1]): + return True + return False + + +class HTTPPasswordMgrWithDefaultRealm(HTTPPasswordMgr): + + def find_user_password(self, realm, authuri): + user, password = HTTPPasswordMgr.find_user_password(self, realm, + authuri) + if user is not None: + return user, password + return HTTPPasswordMgr.find_user_password(self, None, authuri) + + +class AbstractBasicAuthHandler: + + # XXX this allows for multiple auth-schemes, but will stupidly pick + # the last one with a realm specified. + + # allow for double- and single-quoted realm values + # (single quotes are a violation of the RFC, but appear in the wild) + rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+' + 'realm=(["\'])(.*?)\\2', re.I) + + # XXX could pre-emptively send auth info already accepted (RFC 2617, + # end of section 2, and section 1.2 immediately after "credentials" + # production). + + def __init__(self, password_mgr=None): + if password_mgr is None: + password_mgr = HTTPPasswordMgr() + self.passwd = password_mgr + self.add_password = self.passwd.add_password + self.retried = 0 + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, authreq, host, req, headers): + # host may be an authority (without userinfo) or a URL with an + # authority + # XXX could be multiple headers + authreq = headers.get(authreq, None) + + if self.retried > 5: + # retry sending the username:password 5 times before failing. 
+ raise HTTPError(req.get_full_url(), 401, "basic auth failed", + headers, None) + else: + self.retried += 1 + + if authreq: + mo = AbstractBasicAuthHandler.rx.search(authreq) + if mo: + scheme, quote, realm = mo.groups() + if scheme.lower() == 'basic': + response = self.retry_http_basic_auth(host, req, realm) + if response and response.code != 401: + self.retried = 0 + return response + + def retry_http_basic_auth(self, host, req, realm): + user, pw = self.passwd.find_user_password(realm, host) + if pw is not None: + raw = "%s:%s" % (user, pw) + auth = 'Basic %s' % base64.b64encode(raw).strip() + if req.headers.get(self.auth_header, None) == auth: + return None + req.add_unredirected_header(self.auth_header, auth) + return self.parent.open(req, timeout=req.timeout) + else: + return None + + +class HTTPBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Authorization' + + def http_error_401(self, req, fp, code, msg, headers): + url = req.get_full_url() + response = self.http_error_auth_reqed('www-authenticate', + url, req, headers) + self.reset_retry_count() + return response + + +class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler): + + auth_header = 'Proxy-authorization' + + def http_error_407(self, req, fp, code, msg, headers): + # http_error_auth_reqed requires that there is no userinfo component in + # authority. Assume there isn't one, since urllib2 does not (and + # should not, RFC 3986 s. 3.2.1) support requests for URLs containing + # userinfo. + authority = req.get_host() + response = self.http_error_auth_reqed('proxy-authenticate', + authority, req, headers) + self.reset_retry_count() + return response + + +def randombytes(n): + """Return n random bytes.""" + # Use /dev/urandom if it is available. Fall back to random module + # if not. It might be worthwhile to extend this function to use + # other platform-specific mechanisms for getting random bytes. 
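For reference, the job of the `randombytes()` helper below is nowadays covered by `os.urandom(n)`, which reads the platform's randomness source (`/dev/urandom` on POSIX, the system CSPRNG on Windows) in C and has been available since Python 2.4. A minimal sketch of the equivalent call:

```python
import os

# os.urandom(n) returns n random bytes from the OS randomness source;
# it raises NotImplementedError only when no such source exists.
nonce_bytes = os.urandom(8)
print(len(nonce_bytes))  # 8
```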
+ if os.path.exists("/dev/urandom"): + f = open("/dev/urandom") + s = f.read(n) + f.close() + return s + else: + L = [chr(random.randrange(0, 256)) for i in range(n)] + return "".join(L) + +class AbstractDigestAuthHandler: + # Digest authentication is specified in RFC 2617. + + # XXX The client does not inspect the Authentication-Info header + # in a successful response. + + # XXX It should be possible to test this implementation against + # a mock server that just generates a static set of challenges. + + # XXX qop="auth-int" supports is shaky + + def __init__(self, passwd=None): + if passwd is None: + passwd = HTTPPasswordMgr() + self.passwd = passwd + self.add_password = self.passwd.add_password + self.retried = 0 + self.nonce_count = 0 + self.last_nonce = None + + def reset_retry_count(self): + self.retried = 0 + + def http_error_auth_reqed(self, auth_header, host, req, headers): + authreq = headers.get(auth_header, None) + if self.retried > 5: + # Don't fail endlessly - if we failed once, we'll probably + # fail a second time. Hm. Unless the Password Manager is + # prompting for the information. Crap. 
This isn't great + # but it's better than the current 'repeat until recursion + # depth exceeded' approach + raise HTTPError(req.get_full_url(), 401, "digest auth failed", + headers, None) + else: + self.retried += 1 + if authreq: + scheme = authreq.split()[0] + if scheme.lower() == 'digest': + return self.retry_http_digest_auth(req, authreq) + + def retry_http_digest_auth(self, req, auth): + token, challenge = auth.split(' ', 1) + chal = parse_keqv_list(parse_http_list(challenge)) + auth = self.get_authorization(req, chal) + if auth: + auth_val = 'Digest %s' % auth + if req.headers.get(self.auth_header, None) == auth_val: + return None + req.add_unredirected_header(self.auth_header, auth_val) + resp = self.parent.open(req, timeout=req.timeout) + return resp + + def get_cnonce(self, nonce): + # The cnonce-value is an opaque + # quoted string value provided by the client and used by both client + # and server to avoid chosen plaintext attacks, to provide mutual + # authentication, and to provide some message integrity protection. + # This isn't a fabulous effort, but it's probably Good Enough. 
+ dig = hashlib.sha1("%s:%s:%s:%s" % (self.nonce_count, nonce, time.ctime(), + randombytes(8))).hexdigest() + return dig[:16] + + def get_authorization(self, req, chal): + try: + realm = chal['realm'] + nonce = chal['nonce'] + qop = chal.get('qop') + algorithm = chal.get('algorithm', 'MD5') + # mod_digest doesn't send an opaque, even though it isn't + # supposed to be optional + opaque = chal.get('opaque', None) + except KeyError: + return None + + H, KD = self.get_algorithm_impls(algorithm) + if H is None: + return None + + user, pw = self.passwd.find_user_password(realm, req.get_full_url()) + if user is None: + return None + + # XXX not implemented yet + if req.has_data(): + entdig = self.get_entity_digest(req.get_data(), chal) + else: + entdig = None + + A1 = "%s:%s:%s" % (user, realm, pw) + A2 = "%s:%s" % (req.get_method(), + # XXX selector: what about proxies and full urls + req.get_selector()) + if qop == 'auth': + if nonce == self.last_nonce: + self.nonce_count += 1 + else: + self.nonce_count = 1 + self.last_nonce = nonce + + ncvalue = '%08x' % self.nonce_count + cnonce = self.get_cnonce(nonce) + noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2)) + respdig = KD(H(A1), noncebit) + elif qop is None: + respdig = KD(H(A1), "%s:%s" % (nonce, H(A2))) + else: + # XXX handle auth-int. + raise URLError("qop '%s' is not supported." % qop) + + # XXX should the partial digests be encoded too? 
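For reference, the `qop == 'auth'` branch above implements the RFC 2617 response computation `request-digest = KD(H(A1), nonce ":" nc ":" cnonce ":" qop ":" H(A2))`. A standalone sketch of that arithmetic, using hypothetical credentials and challenge values and assuming the default MD5 algorithm:

```python
import hashlib

def H(x):
    # H(data) = MD5(data) rendered as lowercase hex, per RFC 2617
    return hashlib.md5(x.encode("utf-8")).hexdigest()

def KD(secret, data):
    # KD(secret, data) = H(secret ":" data)
    return H("%s:%s" % (secret, data))

# Hypothetical values, for illustration only.
user, realm, password = "joe", "example", "secret"
nonce, cnonce, nc, qop = "dcd98b7102dd2f0e8b11d0f600bfb0c0", "0a4f113b", "00000001", "auth"
method, uri = "GET", "/dir/index.html"

A1 = "%s:%s:%s" % (user, realm, password)   # username:realm:password
A2 = "%s:%s" % (method, uri)                # method:request-uri
respdig = KD(H(A1), "%s:%s:%s:%s:%s" % (nonce, nc, cnonce, qop, H(A2)))
print(respdig)  # a 32-character lowercase hex digest
```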
+ + base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \ + 'response="%s"' % (user, realm, nonce, req.get_selector(), + respdig) + if opaque: + base += ', opaque="%s"' % opaque + if entdig: + base += ', digest="%s"' % entdig + base += ', algorithm="%s"' % algorithm + if qop: + base += ', qop=auth, nc=%s, cnonce="%s"' % (ncvalue, cnonce) + return base + + def get_algorithm_impls(self, algorithm): + # algorithm should be case-insensitive according to RFC2617 + algorithm = algorithm.upper() + # lambdas assume digest modules are imported at the top level + if algorithm == 'MD5': + H = lambda x: hashlib.md5(x).hexdigest() + elif algorithm == 'SHA': + H = lambda x: hashlib.sha1(x).hexdigest() + # XXX MD5-sess + KD = lambda s, d: H("%s:%s" % (s, d)) + return H, KD + + def get_entity_digest(self, data, chal): + # XXX not implemented yet + return None + + +class HTTPDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + """An authentication protocol defined by RFC 2069 + + Digest authentication improves on basic authentication because it + does not transmit passwords in the clear. 
+ """ + + auth_header = 'Authorization' + handler_order = 490 # before Basic auth + + def http_error_401(self, req, fp, code, msg, headers): + host = urlparse.urlparse(req.get_full_url())[1] + retry = self.http_error_auth_reqed('www-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + + +class ProxyDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler): + + auth_header = 'Proxy-Authorization' + handler_order = 490 # before Basic auth + + def http_error_407(self, req, fp, code, msg, headers): + host = req.get_host() + retry = self.http_error_auth_reqed('proxy-authenticate', + host, req, headers) + self.reset_retry_count() + return retry + +class AbstractHTTPHandler(BaseHandler): + + def __init__(self, debuglevel=0): + self._debuglevel = debuglevel + + def set_http_debuglevel(self, level): + self._debuglevel = level + + def do_request_(self, request): + host = request.get_host() + if not host: + raise URLError('no host given') + + if request.has_data(): # POST + data = request.get_data() + if not request.has_header('Content-type'): + request.add_unredirected_header( + 'Content-type', + 'application/x-www-form-urlencoded') + if not request.has_header('Content-length'): + request.add_unredirected_header( + 'Content-length', '%d' % len(data)) + + sel_host = host + if request.has_proxy(): + scheme, sel = splittype(request.get_selector()) + sel_host, sel_path = splithost(sel) + + if not request.has_header('Host'): + request.add_unredirected_header('Host', sel_host) + for name, value in self.parent.addheaders: + name = name.capitalize() + if not request.has_header(name): + request.add_unredirected_header(name, value) + + return request + + def do_open(self, http_class, req): + """Return an addinfourl object for the request, using http_class. + + http_class must implement the HTTPConnection API from httplib. + The addinfourl return value is a file-like object. 
It also + has methods and attributes including: + - info(): return a mimetools.Message object for the headers + - geturl(): return the original request URL + - code: HTTP status code + """ + host = req.get_host() + if not host: + raise URLError('no host given') + + h = http_class(host, timeout=req.timeout) # will parse host:port + h.set_debuglevel(self._debuglevel) + + headers = dict(req.unredirected_hdrs) + headers.update(dict((k, v) for k, v in req.headers.items() + if k not in headers)) + + # We want to make an HTTP/1.1 request, but the addinfourl + # class isn't prepared to deal with a persistent connection. + # It will try to read all remaining data from the socket, + # which will block while the server waits for the next request. + # So make sure the connection gets closed after the (only) + # request. + headers["Connection"] = "close" + headers = dict( + (name.title(), val) for name, val in headers.items()) + + if req._tunnel_host: + tunnel_headers = {} + proxy_auth_hdr = "Proxy-Authorization" + if proxy_auth_hdr in headers: + tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr] + # Proxy-Authorization should not be sent to origin + # server. + del headers[proxy_auth_hdr] + h.set_tunnel(req._tunnel_host, headers=tunnel_headers) + + try: + h.request(req.get_method(), req.get_selector(), req.data, headers) + try: + r = h.getresponse(buffering=True) + except TypeError: #buffering kw not supported + r = h.getresponse() + except socket.error, err: # XXX what error? + h.close() + raise URLError(err) + + # Pick apart the HTTPResponse object to get the addinfourl + # object initialized properly. + + # Wrap the HTTPResponse object in socket's file object adapter + # for Windows. That adapter calls recv(), so delegate recv() + # to read(). This weird wrapping allows the returned object to + # have readline() and readlines() methods. + + # XXX It might be better to extract the read buffering code + # out of socket._fileobject() and into a base class. 
+ + r.recv = r.read + fp = socket._fileobject(r, close=True) + + resp = addinfourl(fp, r.msg, req.get_full_url()) + resp.code = r.status + resp.msg = r.reason + return resp + + +class HTTPHandler(AbstractHTTPHandler): + + def http_open(self, req): + return self.do_open(httplib.HTTPConnection, req) + + http_request = AbstractHTTPHandler.do_request_ + +if hasattr(httplib, 'HTTPS'): + class HTTPSHandler(AbstractHTTPHandler): + + def https_open(self, req): + return self.do_open(httplib.HTTPSConnection, req) + + https_request = AbstractHTTPHandler.do_request_ + +class HTTPCookieProcessor(BaseHandler): + def __init__(self, cookiejar=None): + import cookielib + if cookiejar is None: + cookiejar = cookielib.CookieJar() + self.cookiejar = cookiejar + + def http_request(self, request): + self.cookiejar.add_cookie_header(request) + return request + + def http_response(self, request, response): + self.cookiejar.extract_cookies(response, request) + return response + + https_request = http_request + https_response = http_response + +class UnknownHandler(BaseHandler): + def unknown_open(self, req): + type = req.get_type() + raise URLError('unknown url type: %s' % type) + +def parse_keqv_list(l): + """Parse list of key=value strings where keys are not duplicated.""" + parsed = {} + for elt in l: + k, v = elt.split('=', 1) + if v[0] == '"' and v[-1] == '"': + v = v[1:-1] + parsed[k] = v + return parsed + +def parse_http_list(s): + """Parse lists as described by RFC 2068 Section 2. + + In particular, parse comma-separated lists where the elements of + the list may include quoted-strings. A quoted-string could + contain a comma. A non-quoted string could have quotes in the + middle. Neither commas nor quotes count if they are escaped. + Only double-quotes count, not single-quotes. 
+ """ + res = [] + part = '' + + escape = quote = False + for cur in s: + if escape: + part += cur + escape = False + continue + if quote: + if cur == '\\': + escape = True + continue + elif cur == '"': + quote = False + part += cur + continue + + if cur == ',': + res.append(part) + part = '' + continue + + if cur == '"': + quote = True + + part += cur + + # append last part + if part: + res.append(part) + + return [part.strip() for part in res] + +def _safe_gethostbyname(host): + try: + return socket.gethostbyname(host) + except socket.gaierror: + return None + +class FileHandler(BaseHandler): + # Use local file or FTP depending on form of URL + def file_open(self, req): + url = req.get_selector() + if url[:2] == '//' and url[2:3] != '/' and (req.host and + req.host != 'localhost'): + req.type = 'ftp' + return self.parent.open(req) + else: + return self.open_local_file(req) + + # names for the localhost + names = None + def get_names(self): + if FileHandler.names is None: + try: + FileHandler.names = tuple( + socket.gethostbyname_ex('localhost')[2] + + socket.gethostbyname_ex(socket.gethostname())[2]) + except socket.gaierror: + FileHandler.names = (socket.gethostbyname('localhost'),) + return FileHandler.names + + # not entirely sure what the rules are here + def open_local_file(self, req): + import email.utils + import mimetypes + host = req.get_host() + filename = req.get_selector() + localfile = url2pathname(filename) + try: + stats = os.stat(localfile) + size = stats.st_size + modified = email.utils.formatdate(stats.st_mtime, usegmt=True) + mtype = mimetypes.guess_type(filename)[0] + headers = mimetools.Message(StringIO( + 'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' % + (mtype or 'text/plain', size, modified))) + if host: + host, port = splitport(host) + if not host or \ + (not port and _safe_gethostbyname(host) in self.get_names()): + if host: + origurl = 'file://' + host + filename + else: + origurl = 'file://' + filename + return 
addinfourl(open(localfile, 'rb'), headers, origurl) + except OSError, msg: + # urllib2 users shouldn't expect OSErrors coming from urlopen() + raise URLError(msg) + raise URLError('file not on local host') + +class FTPHandler(BaseHandler): + def ftp_open(self, req): + import ftplib + import mimetypes + host = req.get_host() + if not host: + raise URLError('ftp error: no host given') + host, port = splitport(host) + if port is None: + port = ftplib.FTP_PORT + else: + port = int(port) + + # username/password handling + user, host = splituser(host) + if user: + user, passwd = splitpasswd(user) + else: + passwd = None + host = unquote(host) + user = user or '' + passwd = passwd or '' + + try: + host = socket.gethostbyname(host) + except socket.error, msg: + raise URLError(msg) + path, attrs = splitattr(req.get_selector()) + dirs = path.split('/') + dirs = map(unquote, dirs) + dirs, file = dirs[:-1], dirs[-1] + if dirs and not dirs[0]: + dirs = dirs[1:] + try: + fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout) + type = file and 'I' or 'D' + for attr in attrs: + attr, value = splitvalue(attr) + if attr.lower() == 'type' and \ + value in ('a', 'A', 'i', 'I', 'd', 'D'): + type = value.upper() + fp, retrlen = fw.retrfile(file, type) + headers = "" + mtype = mimetypes.guess_type(req.get_full_url())[0] + if mtype: + headers += "Content-type: %s\n" % mtype + if retrlen is not None and retrlen >= 0: + headers += "Content-length: %d\n" % retrlen + sf = StringIO(headers) + headers = mimetools.Message(sf) + return addinfourl(fp, headers, req.get_full_url()) + except ftplib.all_errors, msg: + raise URLError, ('ftp error: %s' % msg), sys.exc_info()[2] + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + fw = ftpwrapper(user, passwd, host, port, dirs, timeout) +## fw.ftp.set_debuglevel(1) + return fw + +class CacheFTPHandler(FTPHandler): + # XXX would be nice to have pluggable cache strategies + # XXX this stuff is definitely not thread safe + def 
__init__(self): + self.cache = {} + self.timeout = {} + self.soonest = 0 + self.delay = 60 + self.max_conns = 16 + + def setTimeout(self, t): + self.delay = t + + def setMaxConns(self, m): + self.max_conns = m + + def connect_ftp(self, user, passwd, host, port, dirs, timeout): + key = user, host, port, '/'.join(dirs), timeout + if key in self.cache: + self.timeout[key] = time.time() + self.delay + else: + self.cache[key] = ftpwrapper(user, passwd, host, port, dirs, timeout) + self.timeout[key] = time.time() + self.delay + self.check_cache() + return self.cache[key] + + def check_cache(self): + # first check for old ones + t = time.time() + if self.soonest <= t: + for k, v in self.timeout.items(): + if v < t: + self.cache[k].close() + del self.cache[k] + del self.timeout[k] + self.soonest = min(self.timeout.values()) + + # then check the size + if len(self.cache) == self.max_conns: + for k, v in self.timeout.items(): + if v == self.soonest: + del self.cache[k] + del self.timeout[k] + break + self.soonest = min(self.timeout.values()) diff --git a/lib-python/2.7/uuid.py b/lib-python/modified-2.7/uuid.py copy from lib-python/2.7/uuid.py copy to lib-python/modified-2.7/uuid.py --- a/lib-python/2.7/uuid.py +++ b/lib-python/modified-2.7/uuid.py @@ -406,8 +406,12 @@ continue if hasattr(lib, 'uuid_generate_random'): _uuid_generate_random = lib.uuid_generate_random + _uuid_generate_random.argtypes = [ctypes.c_char * 16] + _uuid_generate_random.restype = None if hasattr(lib, 'uuid_generate_time'): _uuid_generate_time = lib.uuid_generate_time + _uuid_generate_time.argtypes = [ctypes.c_char * 16] + _uuid_generate_time.restype = None # The uuid_generate_* functions are broken on MacOS X 10.5, as noted # in issue #8621 the function generates the same sequence of values @@ -436,6 +440,9 @@ lib = None _UuidCreate = getattr(lib, 'UuidCreateSequential', getattr(lib, 'UuidCreate', None)) + if _UuidCreate is not None: + _UuidCreate.argtypes = [ctypes.c_char * 16] + _UuidCreate.restype 
= ctypes.c_int except: pass diff --git a/lib_pypy/_codecs_cn.py b/lib_pypy/_codecs_cn.py new file mode 100644 --- /dev/null +++ b/lib_pypy/_codecs_cn.py @@ -0,0 +1,7 @@ +# this getcodec() function supports any multibyte codec, although +# for compatibility with CPython it should only be used for the +# codecs from this module, i.e.: +# +# 'gb2312', 'gbk', 'gb18030', 'hz' + +from _multibytecodec import __getcodec as getcodec diff --git a/lib_pypy/_codecs_hk.py b/lib_pypy/_codecs_hk.py new file mode 100644 --- /dev/null +++ b/lib_pypy/_codecs_hk.py @@ -0,0 +1,7 @@ +# this getcodec() function supports any multibyte codec, although +# for compatibility with CPython it should only be used for the +# codecs from this module, i.e.: +# +# 'big5hkscs' + +from _multibytecodec import __getcodec as getcodec diff --git a/lib_pypy/_codecs_iso2022.py b/lib_pypy/_codecs_iso2022.py new file mode 100644 --- /dev/null +++ b/lib_pypy/_codecs_iso2022.py @@ -0,0 +1,8 @@ +# this getcodec() function supports any multibyte codec, although +# for compatibility with CPython it should only be used for the +# codecs from this module, i.e.: +# +# 'iso2022_kr', 'iso2022_jp', 'iso2022_jp_1', 'iso2022_jp_2', +# 'iso2022_jp_2004', 'iso2022_jp_3', 'iso2022_jp_ext' + +from _multibytecodec import __getcodec as getcodec diff --git a/lib_pypy/_codecs_jp.py b/lib_pypy/_codecs_jp.py new file mode 100644 --- /dev/null +++ b/lib_pypy/_codecs_jp.py @@ -0,0 +1,8 @@ +# this getcodec() function supports any multibyte codec, although +# for compatibility with CPython it should only be used for the +# codecs from this module, i.e.: +# +# 'shift_jis', 'cp932', 'euc_jp', 'shift_jis_2004', +# 'euc_jis_2004', 'euc_jisx0213', 'shift_jisx0213' + +from _multibytecodec import __getcodec as getcodec diff --git a/lib_pypy/_codecs_kr.py b/lib_pypy/_codecs_kr.py new file mode 100644 --- /dev/null +++ b/lib_pypy/_codecs_kr.py @@ -0,0 +1,7 @@ +# this getcodec() function supports any multibyte codec, although +# for 
compatibility with CPython it should only be used for the +# codecs from this module, i.e.: +# +# 'euc_kr', 'cp949', 'johab' + +from _multibytecodec import __getcodec as getcodec diff --git a/lib_pypy/_codecs_tw.py b/lib_pypy/_codecs_tw.py new file mode 100644 --- /dev/null +++ b/lib_pypy/_codecs_tw.py @@ -0,0 +1,7 @@ +# this getcodec() function supports any multibyte codec, although +# for compatibility with CPython it should only be used for the +# codecs from this module, i.e.: +# +# 'big5', 'cp950' + +from _multibytecodec import __getcodec as getcodec diff --git a/lib_pypy/_collections.py b/lib_pypy/_collections.py --- a/lib_pypy/_collections.py +++ b/lib_pypy/_collections.py @@ -379,12 +379,14 @@ class defaultdict(dict): def __init__(self, *args, **kwds): - self.default_factory = None - if 'default_factory' in kwds: - self.default_factory = kwds.pop('default_factory') - elif len(args) > 0 and (callable(args[0]) or args[0] is None): - self.default_factory = args[0] + if len(args) > 0: + default_factory = args[0] args = args[1:] + if not callable(default_factory) and default_factory is not None: + raise TypeError("first argument must be callable") + else: + default_factory = None + self.default_factory = default_factory super(defaultdict, self).__init__(*args, **kwds) def __missing__(self, key): @@ -404,7 +406,7 @@ recurse.remove(id(self)) def copy(self): - return type(self)(self, default_factory=self.default_factory) + return type(self)(self.default_factory, self) def __copy__(self): return self.copy() diff --git a/lib_pypy/_ctypes/__init__.py b/lib_pypy/_ctypes/__init__.py --- a/lib_pypy/_ctypes/__init__.py +++ b/lib_pypy/_ctypes/__init__.py @@ -18,7 +18,16 @@ if _os.name in ("nt", "ce"): from _rawffi import FormatError from _rawffi import check_HRESULT as _check_HRESULT - CopyComPointer = None # XXX + + def CopyComPointer(src, dst): + from ctypes import c_void_p, cast + if src: + hr = src[0][0].AddRef(src) + if hr & 0x80000000: + return hr + dst[0] = 
cast(src, c_void_p).value + return 0 + LoadLibrary = dlopen from _rawffi import FUNCFLAG_STDCALL, FUNCFLAG_CDECL, FUNCFLAG_PYTHONAPI diff --git a/lib_pypy/_ctypes/array.py b/lib_pypy/_ctypes/array.py --- a/lib_pypy/_ctypes/array.py +++ b/lib_pypy/_ctypes/array.py @@ -208,6 +208,9 @@ def _get_buffer_value(self): return self._buffer.buffer + def _to_ffi_param(self): + return self._get_buffer_value() + ARRAY_CACHE = {} def create_array_type(base, length): diff --git a/lib_pypy/_ctypes/basics.py b/lib_pypy/_ctypes/basics.py --- a/lib_pypy/_ctypes/basics.py +++ b/lib_pypy/_ctypes/basics.py @@ -1,5 +1,6 @@ import _rawffi +import _ffi import sys keepalive_key = str # XXX fix this when provided with test @@ -46,6 +47,16 @@ else: return self.from_param(as_parameter) + def get_ffi_param(self, value): + cdata = self.from_param(value) + return cdata, cdata._to_ffi_param() + + def get_ffi_argtype(self): + if self._ffiargtype: + return self._ffiargtype + self._ffiargtype = _shape_to_ffi_type(self._ffiargshape) + return self._ffiargtype + def _CData_output(self, resbuffer, base=None, index=-1): #assert isinstance(resbuffer, _rawffi.ArrayInstance) """Used when data exits ctypes and goes into user code. 
@@ -99,6 +110,7 @@ """ __metaclass__ = _CDataMeta _objects = None + _ffiargtype = None def __init__(self, *args, **kwds): raise TypeError("%s has no type" % (type(self),)) @@ -119,11 +131,20 @@ def _get_buffer_value(self): return self._buffer[0] + def _to_ffi_param(self): + if self.__class__._is_pointer_like(): + return self._get_buffer_value() + else: + return self.value + def __buffer__(self): return buffer(self._buffer) def _get_b_base(self): - return self._base + try: + return self._base + except AttributeError: + return None _b_base_ = property(_get_b_base) _b_needsfree_ = False @@ -146,11 +167,12 @@ return tp._alignmentofinstances() def byref(cdata): - from ctypes import pointer + # "pointer" is imported at the end of this module to avoid circular + # imports return pointer(cdata) def cdata_from_address(self, address): - # fix the address, in case it's unsigned + # fix the address: turn it into as unsigned, in case it's a negative number address = address & (sys.maxint * 2 + 1) instance = self.__new__(self) lgt = getattr(self, '_length_', 1) @@ -159,3 +181,54 @@ def addressof(tp): return tp._buffer.buffer + + +# ---------------------------------------------------------------------- + +def is_struct_shape(shape): + # see the corresponding code to set the shape in + # _ctypes.structure._set_shape + return (isinstance(shape, tuple) and + len(shape) == 2 and + isinstance(shape[0], _rawffi.Structure) and + shape[1] == 1) + +def _shape_to_ffi_type(shape): + try: + return _shape_to_ffi_type.typemap[shape] + except KeyError: + pass + if is_struct_shape(shape): + return shape[0].get_ffi_type() + # + assert False, 'unknown shape %s' % (shape,) + + +_shape_to_ffi_type.typemap = { + 'c' : _ffi.types.char, + 'b' : _ffi.types.sbyte, + 'B' : _ffi.types.ubyte, + 'h' : _ffi.types.sshort, + 'u' : _ffi.types.unichar, + 'H' : _ffi.types.ushort, + 'i' : _ffi.types.sint, + 'I' : _ffi.types.uint, + 'l' : _ffi.types.slong, + 'L' : _ffi.types.ulong, + 'q' : _ffi.types.slonglong, + 
'Q' : _ffi.types.ulonglong, + 'f' : _ffi.types.float, + 'd' : _ffi.types.double, + 's' : _ffi.types.void_p, + 'P' : _ffi.types.void_p, + 'z' : _ffi.types.void_p, + 'O' : _ffi.types.void_p, + 'Z' : _ffi.types.void_p, + 'X' : _ffi.types.void_p, + 'v' : _ffi.types.sshort, + '?' : _ffi.types.ubyte, + } + + +# used by "byref" +from _ctypes.pointer import pointer diff --git a/lib_pypy/_ctypes/function.py b/lib_pypy/_ctypes/function.py --- a/lib_pypy/_ctypes/function.py +++ b/lib_pypy/_ctypes/function.py @@ -1,12 +1,15 @@ + +from _ctypes.basics import _CData, _CDataMeta, cdata_from_address +from _ctypes.primitive import SimpleType, _SimpleCData +from _ctypes.basics import ArgumentError, keepalive_key +from _ctypes.basics import is_struct_shape +from _ctypes.builtin import set_errno, set_last_error import _rawffi +import _ffi import sys import traceback import warnings -from _ctypes.basics import ArgumentError, keepalive_key -from _ctypes.basics import _CData, _CDataMeta, cdata_from_address -from _ctypes.builtin import set_errno, set_last_error -from _ctypes.primitive import SimpleType # XXX this file needs huge refactoring I fear @@ -24,6 +27,7 @@ WIN64 = sys.platform == 'win32' and sys.maxint == 2**63 - 1 + def get_com_error(errcode, riid, pIunk): "Win32 specific: build a COM Error exception" # XXX need C support code @@ -36,6 +40,7 @@ funcptr.restype = int return funcptr(*args) + class CFuncPtrType(_CDataMeta): # XXX write down here defaults and such things @@ -50,6 +55,7 @@ from_address = cdata_from_address + class CFuncPtr(_CData): __metaclass__ = CFuncPtrType @@ -65,12 +71,12 @@ callable = None _ptr = None _buffer = None + _address = None # win32 COM properties _paramflags = None _com_index = None _com_iid = None - - __restype_set = False + _is_fastpath = False def _getargtypes(self): return self._argtypes_ @@ -85,9 +91,14 @@ raise TypeError( "item %d in _argtypes_ has no from_param method" % ( i + 1,)) - self._argtypes_ = argtypes + self._argtypes_ = list(argtypes) 
+ self._check_argtypes_for_fastpath() + argtypes = property(_getargtypes, _setargtypes) - argtypes = property(_getargtypes, _setargtypes) + def _check_argtypes_for_fastpath(self): + if all([hasattr(argtype, '_ffiargshape') for argtype in self._argtypes_]): + fastpath_cls = make_fastpath_subclass(self.__class__) + fastpath_cls.enable_fastpath_maybe(self) def _getparamflags(self): return self._paramflags @@ -133,11 +144,11 @@ paramflags = property(_getparamflags, _setparamflags) + def _getrestype(self): return self._restype_ def _setrestype(self, restype): - self.__restype_set = True self._ptr = None if restype is int: from ctypes import c_int @@ -146,27 +157,24 @@ callable(restype)): raise TypeError("restype must be a type, a callable, or None") self._restype_ = restype - + def _delrestype(self): self._ptr = None del self._restype_ - + restype = property(_getrestype, _setrestype, _delrestype) def _geterrcheck(self): return getattr(self, '_errcheck_', None) - From noreply at buildbot.pypy.org Sun Jan 22 20:28:05 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 22 Jan 2012 20:28:05 +0100 (CET) Subject: [pypy-commit] pypy 32ptr-on-64bit: This part of the code lost during the merge belongs to rewrite.py. Message-ID: <20120122192805.B8233821FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: 32ptr-on-64bit Changeset: r51660:6687f847a628 Date: 2012-01-22 20:16 +0100 http://bitbucket.org/pypy/pypy/changeset/6687f847a628/ Log: This part of the code lost during the merge belongs to rewrite.py. Save it there for now, commented out. 
diff --git a/pypy/jit/backend/llsupport/rewrite.py b/pypy/jit/backend/llsupport/rewrite.py --- a/pypy/jit/backend/llsupport/rewrite.py +++ b/pypy/jit/backend/llsupport/rewrite.py @@ -326,3 +326,58 @@ # assume that "self.gc_ll_descr.minimal_size_in_nursery" is 2 WORDs size = max(size, 2 * WORD) return (size + WORD-1) & ~(WORD-1) # round up + + +if 0: + # ---------- compressptr support ---------- + descr = op.getdescr() + if (self.supports_compressed_ptrs and + (isinstance(descr, GcPtrHidden32FieldDescr) or + isinstance(descr, GcPtrHidden32ArrayDescr))): + num = op.getopnum() + if (num == rop.GETFIELD_GC or + num == rop.GETFIELD_GC_PURE or + num == rop.GETARRAYITEM_GC or + num == rop.GETARRAYITEM_GC_PURE): + v1 = BoxInt() + v2 = op.result + newops.append(op.copy_and_change(num, result=v1)) + op = ResOperation(rop.SHOW_FROM_PTR32, [v1], v2) + elif num == rop.SETFIELD_GC or num == rop.SETFIELD_RAW: + v1 = op.getarg(1) + v2 = BoxInt() + newops.append(ResOperation(rop.HIDE_INTO_PTR32, [v1], v2)) + op = op.copy_and_change(num, args=[op.getarg(0), v2]) + elif num == rop.SETARRAYITEM_GC or num == rop.SETARRAYITEM_RAW: + v1 = op.getarg(2) + v2 = BoxInt() + newops.append(ResOperation(rop.HIDE_INTO_PTR32, [v1], v2)) + op = op.copy_and_change(num, args=[op.getarg(0), + op.getarg(1), v2]) + elif num == rop.ARRAYLEN_GC or num == rop.NEW_ARRAY: + # although these operations operate on a + # GcArray(HiddenGcRef32), there is no actual + # HiddenGcRef32 argument or result + pass + else: + raise AssertionError(op) + elif (self.supports_compressed_ptrs and + isinstance(descr, BaseCallDescr)): + args = op.getarglist() + arg_classes = descr.get_arg_types() + fixed = 1 + assert len(args) == fixed + len(arg_classes) + for i in range(len(arg_classes)): + if arg_classes[i] == 'H': + v1 = args[fixed + i] + v2 = BoxInt() + newops.append(ResOperation(rop.HIDE_INTO_PTR32, + [v1], v2)) + args[fixed + i] = v2 + op = op.copy_and_change(op.getopnum(), args=args) + if descr.get_return_type() == 'H': 
+ v1 = BoxInt() + v2 = op.result + newops.append(op.copy_and_change(op.getopnum(), result=v1)) + op = ResOperation(rop.SHOW_FROM_PTR32, [v1], v2) + # ---------- From noreply at buildbot.pypy.org Sun Jan 22 20:28:06 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 22 Jan 2012 20:28:06 +0100 (CET) Subject: [pypy-commit] pypy 32ptr-on-64bit: fix. Message-ID: <20120122192806.EB188821FA@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: 32ptr-on-64bit Changeset: r51661:27de83186d13 Date: 2012-01-22 20:20 +0100 http://bitbucket.org/pypy/pypy/changeset/27de83186d13/ Log: fix. diff --git a/pypy/rpython/lltypesystem/opimpl.py b/pypy/rpython/lltypesystem/opimpl.py --- a/pypy/rpython/lltypesystem/opimpl.py +++ b/pypy/rpython/lltypesystem/opimpl.py @@ -622,14 +622,14 @@ return lltype.cast_opaque_ptr(llmemory.HiddenGcRef32, ptr) def op_show_from_ptr32(RESTYPE, ptr32): - if not ptr32: - return lltype.nullptr(RESTYPE.TO) if RESTYPE == llmemory.Address: if not ptr32: return llmemory.NULL PTRTYPE = lltype.Ptr(ptr32._obj.container._TYPE) ptr = lltype.cast_opaque_ptr(PTRTYPE, ptr32) return llmemory.cast_ptr_to_adr(ptr) + if not ptr32: + return lltype.nullptr(RESTYPE.TO) if isinstance(RESTYPE.TO, lltype.GcOpaqueType): try: ptr32 = ptr32._obj.container._as_ptr() From noreply at buildbot.pypy.org Sun Jan 22 20:57:13 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 20:57:13 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Implement CPython issue5057: do not const-fold a unicode.__getitem__ Message-ID: <20120122195713.336F4821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51662:693b08144e00 Date: 2012-01-22 20:24 +0100 http://bitbucket.org/pypy/pypy/changeset/693b08144e00/ Log: Implement CPython issue5057: do not const-fold a unicode.__getitem__ operation which returns a non-BMP character, this produces .pyc files which depends on the unicode width diff --git 
a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -5,6 +5,7 @@ from pypy.tool import stdlib_opcode as ops from pypy.interpreter.error import OperationError from pypy.rlib.unroll import unrolling_iterable +from pypy.rlib.runicode import MAXUNICODE def optimize_ast(space, tree, compile_info): @@ -289,8 +290,30 @@ w_idx = subs.slice.as_constant() if w_idx is not None: try: - return ast.Const(self.space.getitem(w_obj, w_idx), subs.lineno, subs.col_offset) + w_const = self.space.getitem(w_obj, w_idx) except OperationError: - # Let exceptions propgate at runtime. - pass + # Let exceptions propagate at runtime. + return subs + + # CPython issue5057: if v is unicode, there might + # be differences between wide and narrow builds in + # cases like u'\U00012345'[0]. + # Wide builds will return a non-BMP char, whereas + # narrow builds will return a surrogate. In both + # the cases skip the optimization in order to + # produce compatible pycs. 
+ if (self.space.isinstance_w(w_obj, self.space.w_unicode) + and + self.space.isinstance_w(w_const, self.space.w_unicode)): + unistr = self.space.unicode_w(w_const) + if len(unistr) == 1: + ch = ord(unistr[0]) + else: + ch = 0 + if (ch > 0xFFFF or + (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): + return subs + + return ast.Const(w_const, subs.lineno, subs.col_offset) + return subs diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -838,6 +838,30 @@ # Just checking this doesn't crash out self.count_instructions(source) + def test_const_fold_unicode_subscr(self): + source = """def f(): + return u"abc"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + + # getitem outside of the BMP should not be optimized + source = """def f(): + return u"\U00012345"[0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, + ops.RETURN_VALUE: 1} + + # getslice is not yet optimized. + # Still, check a case which yields the empty string. 
+ source = """def f(): + return u"abc"[:0] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 2, ops.SLICE+2: 1, + ops.RETURN_VALUE: 1} + def test_remove_dead_code(self): source = """def f(x): return 5 From noreply at buildbot.pypy.org Sun Jan 22 20:57:14 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 20:57:14 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: hg merge default Message-ID: <20120122195714.C0E18821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51663:6e17d519d090 Date: 2012-01-22 20:42 +0100 http://bitbucket.org/pypy/pypy/changeset/6e17d519d090/ Log: hg merge default diff --git a/.hgignore b/.hgignore --- a/.hgignore +++ b/.hgignore @@ -2,6 +2,9 @@ *.py[co] *~ .*.swp +.idea +.project +.pydevproject syntax: regexp ^testresult$ diff --git a/lib_pypy/datetime.py b/lib_pypy/datetime.py --- a/lib_pypy/datetime.py +++ b/lib_pypy/datetime.py @@ -1519,12 +1519,17 @@ @classmethod def utcfromtimestamp(cls, t): "Construct a UTC datetime from a POSIX timestamp (like time.time())." - if 1 - (t % 1.0) < 0.0000005: - t = float(int(t)) + 1 - if t < 0: - t -= 1 + t, frac = divmod(t, 1.0) + us = round(frac * 1e6) + + # If timestamp is less than one microsecond smaller than a + # full second, us can be rounded up to 1000000. In this case, + # roll over to seconds, otherwise, ValueError is raised + # by the constructor. 
+ if us == 1000000: + t += 1 + us = 0 y, m, d, hh, mm, ss, weekday, jday, dst = _time.gmtime(t) - us = int((t % 1.0) * 1000000) ss = min(ss, 59) # clamp out leap seconds if the platform has them return cls(y, m, d, hh, mm, ss, us) diff --git a/lib_pypy/numpypy/__init__.py b/lib_pypy/numpypy/__init__.py --- a/lib_pypy/numpypy/__init__.py +++ b/lib_pypy/numpypy/__init__.py @@ -1,2 +1,2 @@ from _numpypy import * -from .fromnumeric import * +from .core import * diff --git a/lib_pypy/numpypy/core/__init__.py b/lib_pypy/numpypy/core/__init__.py new file mode 100644 --- /dev/null +++ b/lib_pypy/numpypy/core/__init__.py @@ -0,0 +1,1 @@ +from .fromnumeric import * diff --git a/lib_pypy/numpypy/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py rename from lib_pypy/numpypy/fromnumeric.py rename to lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -85,7 +85,7 @@ array([4, 3, 6]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') # not deprecated --- copy if necessary, view otherwise @@ -149,6 +149,7 @@ [5, 6]]) """ + assert order == 'C' if not hasattr(a, 'reshape'): a = numpypy.array(a) return a.reshape(newshape) @@ -273,7 +274,7 @@ [-1, -2, -3, -4, -5]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def repeat(a, repeats, axis=None): @@ -315,7 +316,7 @@ [3, 4]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def put(a, ind, v, mode='raise'): @@ -366,7 +367,7 @@ array([ 0, 1, 2, 3, -5]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level method') def swapaxes(a, axis1, axis2): @@ -410,7 +411,7 @@ [3, 7]]]) """ - raise NotImplemented('Waiting on interp level method') + raise NotImplementedError('Waiting on interp level 
method') def transpose(a, axes=None): @@ -451,8 +452,11 @@ (2, 1, 3) """ - raise NotImplemented('Waiting on interp level method') - + if axes is not None: + raise NotImplementedError('No "axes" arg yet.') + if not hasattr(a, 'T'): + a = numpypy.array(a) + return a.T def sort(a, axis=-1, kind='quicksort', order=None): """ @@ -553,7 +557,7 @@ dtype=[('name', '|S10'), ('height', '= self.size @@ -157,6 +177,8 @@ offset += self.strides[i] break else: + if i == self.dim: + first_line = True indices[i] = 0 offset -= self.backstrides[i] else: diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -2,14 +2,15 @@ from pypy.interpreter.error import OperationError, operationerrfmt from pypy.interpreter.gateway import interp2app, NoneNotWrapped from pypy.interpreter.typedef import TypeDef, GetSetProperty -from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature +from pypy.module.micronumpy import interp_ufuncs, interp_dtype, signature,\ + interp_boxes from pypy.module.micronumpy.strides import calculate_slice_strides from pypy.rlib import jit from pypy.rpython.lltypesystem import lltype, rffi from pypy.tool.sourcetools import func_with_new_name from pypy.rlib.rstring import StringBuilder from pypy.module.micronumpy.interp_iter import ArrayIterator, OneDimIterator,\ - SkipLastAxisIterator + SkipLastAxisIterator, Chunk, ViewIterator numpy_driver = jit.JitDriver( greens=['shapelen', 'sig'], @@ -39,7 +40,24 @@ get_printable_location=signature.new_printable_location('slice'), name='numpy_slice', ) - +count_driver = jit.JitDriver( + greens=['shapelen'], + virtualizables=['frame'], + reds=['s', 'frame', 'iter', 'arr'], + name='numpy_count' +) +filter_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['concr', 'argi', 'ri', 'frame', 'v', 'res', 'self'], + name='numpy_filter', +) 
+filter_set_driver = jit.JitDriver( + greens=['shapelen', 'sig'], + virtualizables=['frame'], + reds=['idx', 'idxi', 'frame', 'arr'], + name='numpy_filterset', +) def _find_shape_and_elems(space, w_iterable): shape = [space.len_w(w_iterable)] @@ -157,9 +175,6 @@ # (meaning that the realignment of elements crosses from one step into another) # return None so that the caller can raise an exception. def calc_new_strides(new_shape, old_shape, old_strides): - # Return the proper strides for new_shape, or None if the mapping crosses - # stepping boundaries - # Assumes that prod(old_shape) == prod(new_shape), len(old_shape) > 1, and # len(new_shape) > 0 steps = [] @@ -167,6 +182,7 @@ oldI = 0 new_strides = [] if old_strides[0] < old_strides[-1]: + #Start at old_shape[0], old_stides[0] for i in range(len(old_shape)): steps.append(old_strides[i] / last_step) last_step *= old_shape[i] @@ -184,10 +200,11 @@ if n_new_elems_used == n_old_elems_to_use: oldI += 1 if oldI >= len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] else: + #Start at old_shape[-1], old_strides[-1] for i in range(len(old_shape) - 1, -1, -1): steps.insert(0, old_strides[i] / last_step) last_step *= old_shape[i] @@ -207,7 +224,7 @@ if n_new_elems_used == n_old_elems_to_use: oldI -= 1 if oldI < -len(old_shape): - break + continue cur_step = steps[oldI] n_old_elems_to_use *= old_shape[oldI] return new_strides @@ -271,6 +288,9 @@ descr_gt = _binop_impl("greater") descr_ge = _binop_impl("greater_equal") + descr_and = _binop_impl("bitwise_and") + descr_or = _binop_impl("bitwise_or") + def _binop_right_impl(ufunc_name): def impl(self, space, w_other): w_other = scalar_w(space, @@ -288,11 +308,11 @@ descr_rmod = _binop_right_impl("mod") def _reduce_ufunc_impl(ufunc_name, promote_to_largest=False): - def impl(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def impl(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): 
+ w_axis = space.wrap(-1) return getattr(interp_ufuncs.get(space), ufunc_name).reduce(space, - self, True, promote_to_largest, w_dim) + self, True, promote_to_largest, w_axis) return func_with_new_name(impl, "reduce_%s_impl" % ufunc_name) descr_sum = _reduce_ufunc_impl("add") @@ -410,9 +430,15 @@ def descr_copy(self, space): return self.copy(space) + def descr_flatten(self, space): + return self.flatten(space) + def copy(self, space): return self.get_concrete().copy(space) + def flatten(self, space): + return self.get_concrete().flatten(space) + def descr_len(self, space): if len(self.shape): return space.wrap(self.shape[0]) @@ -480,11 +506,69 @@ def _prepare_slice_args(self, space, w_idx): if (space.isinstance_w(w_idx, space.w_int) or space.isinstance_w(w_idx, space.w_slice)): - return [space.decode_index4(w_idx, self.shape[0])] - return [space.decode_index4(w_item, self.shape[i]) for i, w_item in + return [Chunk(*space.decode_index4(w_idx, self.shape[0]))] + return [Chunk(*space.decode_index4(w_item, self.shape[i])) for i, w_item in enumerate(space.fixedview(w_idx))] + def count_all_true(self, arr): + sig = arr.find_sig() + frame = sig.create_frame(self) + shapelen = len(arr.shape) + s = 0 + iter = None + while not frame.done(): + count_driver.jit_merge_point(arr=arr, frame=frame, iter=iter, s=s, + shapelen=shapelen) + iter = frame.get_final_iter() + s += arr.dtype.getitem_bool(arr.storage, iter.offset) + frame.next(shapelen) + return s + + def getitem_filter(self, space, arr): + concr = arr.get_concrete() + size = self.count_all_true(concr) + res = W_NDimArray(size, [size], self.find_dtype()) + ri = ArrayIterator(size) + shapelen = len(self.shape) + argi = concr.create_iter() + sig = self.find_sig() + frame = sig.create_frame(self) + v = None + while not frame.done(): + filter_driver.jit_merge_point(concr=concr, argi=argi, ri=ri, + frame=frame, v=v, res=res, sig=sig, + shapelen=shapelen, self=self) + if concr.dtype.getitem_bool(concr.storage, argi.offset): + v = 
sig.eval(frame, self) + res.setitem(ri.offset, v) + ri = ri.next(1) + else: + ri = ri.next_no_increase(1) + argi = argi.next(shapelen) + frame.next(shapelen) + return res + + def setitem_filter(self, space, idx, val): + size = self.count_all_true(idx) + arr = SliceArray([size], self.dtype, self, val) + sig = arr.find_sig() + shapelen = len(self.shape) + frame = sig.create_frame(arr) + idxi = idx.create_iter() + while not frame.done(): + filter_set_driver.jit_merge_point(idx=idx, idxi=idxi, sig=sig, + frame=frame, arr=arr, + shapelen=shapelen) + if idx.dtype.getitem_bool(idx.storage, idxi.offset): + sig.eval(frame, arr) + frame.next_from_second(1) + frame.next_first(shapelen) + idxi = idxi.next(shapelen) + def descr_getitem(self, space, w_idx): + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.getitem_filter(space, w_idx) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -494,6 +578,11 @@ def descr_setitem(self, space, w_idx, w_value): self.invalidated() + if (isinstance(w_idx, BaseArray) and w_idx.shape == self.shape and + w_idx.find_dtype().is_bool_type()): + return self.get_concrete().setitem_filter(space, + w_idx.get_concrete(), + convert_to_array(space, w_value)) if self._single_item_result(space, w_idx): concrete = self.get_concrete() item = concrete._index_of_single_item(space, w_idx) @@ -510,9 +599,8 @@ def create_slice(self, chunks): shape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - shape.append(lgt) + for i, chunk in enumerate(chunks): + chunk.extend_shape(shape) s = i + 1 assert s >= 0 shape += self.shape[s:] @@ -570,19 +658,19 @@ ) return w_result - def descr_mean(self, space, w_dim=None): - if space.is_w(w_dim, space.w_None): - w_dim = space.wrap(-1) + def descr_mean(self, space, w_axis=None): + if space.is_w(w_axis, space.w_None): + w_axis = space.wrap(-1) 
w_denom = space.wrap(self.size) else: - dim = space.int_w(w_dim) + dim = space.int_w(w_axis) w_denom = space.wrap(self.shape[dim]) - return space.div(self.descr_sum_promote(space, w_dim), w_denom) + return space.div(self.descr_sum_promote(space, w_axis), w_denom) def descr_var(self, space): # var = mean((values - mean(values)) ** 2) w_res = self.descr_sub(space, self.descr_mean(space, space.w_None)) - assert isinstance(w_res, BaseArray) + assert isinstance(w_res, BaseArray) w_res = w_res.descr_pow(space, space.wrap(2)) assert isinstance(w_res, BaseArray) return w_res.descr_mean(space, space.w_None) @@ -591,6 +679,10 @@ # std(v) = sqrt(var(v)) return interp_ufuncs.get(space).sqrt.call(space, [self.descr_var(space)]) + def descr_fill(self, space, w_value): + concr = self.get_concrete_or_scalar() + concr.fill(space, w_value) + def descr_nonzero(self, space): if self.size > 1: raise OperationError(space.w_ValueError, space.wrap( @@ -683,6 +775,14 @@ def copy(self, space): return Scalar(self.dtype, self.value) + def flatten(self, space): + array = W_NDimArray(self.size, [self.size], self.dtype) + array.setitem(0, self.value) + return array + + def fill(self, space, w_value): + self.value = self.dtype.coerce(space, w_value) + def create_sig(self): return signature.ScalarSignature(self.dtype) @@ -718,8 +818,7 @@ frame=frame, ri=ri, self=self, result=result) - result.dtype.setitem(result.storage, ri.offset, - sig.eval(frame, self)) + result.setitem(ri.offset, sig.eval(frame, self)) frame.next(shapelen) ri = ri.next(shapelen) return result @@ -789,7 +888,7 @@ Intermediate class for performing binary operations. 
""" _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, calc_dtype, res_dtype, left, right): VirtualArray.__init__(self, name, shape, res_dtype) self.ufunc = ufunc @@ -826,14 +925,15 @@ self.left.create_sig(), self.right.create_sig()) class SliceArray(Call2): - def __init__(self, shape, dtype, left, right): + def __init__(self, shape, dtype, left, right, no_broadcast=False): + self.no_broadcast = no_broadcast Call2.__init__(self, None, 'sliceloop', shape, dtype, dtype, left, right) - + def create_sig(self): lsig = self.left.create_sig() rsig = self.right.create_sig() - if self.shape != self.right.shape: + if not self.no_broadcast and self.shape != self.right.shape: return signature.SliceloopBroadcastSignature(self.ufunc, self.name, self.calc_dtype, @@ -848,7 +948,7 @@ when we'll make AxisReduce lazy """ _immutable_fields_ = ['left', 'right'] - + def __init__(self, ufunc, name, shape, dtype, left, right, dim): Call2.__init__(self, ufunc, name, shape, dtype, dtype, left, right) @@ -920,14 +1020,14 @@ if size < 1: builder.append('[]') return - elif size == 1: - builder.append(dtype.itemtype.str_format(self.getitem(0))) - return if size > 1000: # Once this goes True it does not go back to False for recursive # calls use_ellipsis = True ndims = len(self.shape) + if ndims == 0: + builder.append(dtype.itemtype.str_format(self.getitem(0))) + return i = 0 builder.append('[') if ndims > 1: @@ -939,7 +1039,7 @@ builder.append('\n' + indent) else: builder.append(indent) - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) if i < self.shape[0] - 1: @@ -956,7 +1056,7 @@ builder.append(indent) # create_slice requires len(chunks) > 1 in order to reduce # shape - view = self.create_slice([(i, 0, 0, 1)]).get_concrete() + view = self.create_slice([Chunk(i, 0, 0, 1)]).get_concrete() view.to_str(space, 
comma, builder, indent=indent + ' ', use_ellipsis=use_ellipsis) i += 1 @@ -1062,6 +1162,18 @@ array.setslice(space, self) return array + def flatten(self, space): + array = W_NDimArray(self.size, [self.size], self.dtype, self.order) + if self.supports_fast_slicing(): + array._fast_setslice(space, self) + else: + arr = SliceArray(array.shape, array.dtype, array, self, no_broadcast=True) + array._sliceloop(arr) + return array + + def fill(self, space, w_value): + self.setslice(space, scalar_w(space, self.dtype, w_value)) + class ViewArray(ConcreteArray): def create_sig(self): @@ -1082,6 +1194,10 @@ parent) self.start = start + def create_iter(self): + return ViewIterator(self.start, self.strides, self.backstrides, + self.shape) + def setshape(self, space, new_shape): if len(self.shape) < 1: return @@ -1128,6 +1244,9 @@ self.shape = new_shape self.calc_strides(new_shape) + def create_iter(self): + return ArrayIterator(self.size) + def create_sig(self): return signature.ArraySignature(self.dtype) @@ -1182,6 +1301,7 @@ arr = W_NDimArray(size, shape[:], dtype=dtype, order=order) shapelen = len(shape) arr_iter = ArrayIterator(arr.size) + # XXX we might want to have a jitdriver here for i in range(len(elems_w)): w_elem = elems_w[i] dtype.setitem(arr.storage, arr_iter.offset, @@ -1248,6 +1368,9 @@ __gt__ = interp2app(BaseArray.descr_gt), __ge__ = interp2app(BaseArray.descr_ge), + __and__ = interp2app(BaseArray.descr_and), + __or__ = interp2app(BaseArray.descr_or), + __repr__ = interp2app(BaseArray.descr_repr), __str__ = interp2app(BaseArray.descr_str), __array_interface__ = GetSetProperty(BaseArray.descr_array_iface), @@ -1274,7 +1397,10 @@ var = interp2app(BaseArray.descr_var), std = interp2app(BaseArray.descr_std), + fill = interp2app(BaseArray.descr_fill), + copy = interp2app(BaseArray.descr_copy), + flatten = interp2app(BaseArray.descr_flatten), reshape = interp2app(BaseArray.descr_reshape), tolist = interp2app(BaseArray.descr_tolist), ) diff --git 
a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py --- a/pypy/module/micronumpy/interp_ufuncs.py +++ b/pypy/module/micronumpy/interp_ufuncs.py @@ -249,15 +249,16 @@ class W_Ufunc2(W_Ufunc): - _immutable_fields_ = ["comparison_func", "func", "name"] + _immutable_fields_ = ["comparison_func", "func", "name", "int_only"] argcount = 2 def __init__(self, func, name, promote_to_float=False, promote_bools=False, - identity=None, comparison_func=False): + identity=None, comparison_func=False, int_only=False): W_Ufunc.__init__(self, name, promote_to_float, promote_bools, identity) self.func = func self.comparison_func = comparison_func + self.int_only = int_only def call(self, space, args_w): from pypy.module.micronumpy.interp_numarray import (Call2, @@ -268,6 +269,7 @@ w_rhs = convert_to_array(space, w_rhs) calc_dtype = find_binop_result_dtype(space, w_lhs.find_dtype(), w_rhs.find_dtype(), + int_only=self.int_only, promote_to_float=self.promote_to_float, promote_bools=self.promote_bools, ) @@ -304,10 +306,12 @@ def find_binop_result_dtype(space, dt1, dt2, promote_to_float=False, - promote_bools=False): + promote_bools=False, int_only=False): # dt1.num should be <= dt2.num if dt1.num > dt2.num: dt1, dt2 = dt2, dt1 + if int_only and (not dt1.is_int_type() or not dt2.is_int_type()): + raise OperationError(space.w_TypeError, space.wrap("Unsupported types")) # Some operations promote op(bool, bool) to return int8, rather than bool if promote_bools and (dt1.kind == dt2.kind == interp_dtype.BOOLLTR): return interp_dtype.get_dtype_cache(space).w_int8dtype @@ -425,6 +429,10 @@ ("add", "add", 2, {"identity": 0}), ("subtract", "sub", 2), ("multiply", "mul", 2, {"identity": 1}), + ("bitwise_and", "bitwise_and", 2, {"identity": 1, + 'int_only': True}), + ("bitwise_or", "bitwise_or", 2, {"identity": 0, + 'int_only': True}), ("divide", "div", 2, {"promote_bools": True}), ("mod", "mod", 2, {"promote_bools": True}), ("power", "pow", 2, {"promote_bools": 
True}), @@ -449,6 +457,7 @@ ("fabs", "fabs", 1, {"promote_to_float": True}), ("floor", "floor", 1, {"promote_to_float": True}), + ("ceil", "ceil", 1, {"promote_to_float": True}), ("exp", "exp", 1, {"promote_to_float": True}), ('sqrt', 'sqrt', 1, {'promote_to_float': True}), @@ -475,7 +484,7 @@ extra_kwargs["identity"] = identity func = ufunc_dtype_caller(space, ufunc_name, op_name, argcount, - comparison_func=extra_kwargs.get("comparison_func", False) + comparison_func=extra_kwargs.get("comparison_func", False), ) if argcount == 1: ufunc = W_Ufunc1(func, ufunc_name, **extra_kwargs) diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py --- a/pypy/module/micronumpy/signature.py +++ b/pypy/module/micronumpy/signature.py @@ -82,6 +82,16 @@ for i in range(len(self.iterators)): self.iterators[i] = self.iterators[i].next(shapelen) + @unroll_safe + def next_from_second(self, shapelen): + """ Don't increase the first iterator + """ + for i in range(1, len(self.iterators)): + self.iterators[i] = self.iterators[i].next(shapelen) + + def next_first(self, shapelen): + self.iterators[0] = self.iterators[0].next(shapelen) + def get_final_iter(self): final_iter = promote(self.final_iter) if final_iter < 0: diff --git a/pypy/module/micronumpy/strides.py b/pypy/module/micronumpy/strides.py --- a/pypy/module/micronumpy/strides.py +++ b/pypy/module/micronumpy/strides.py @@ -10,12 +10,12 @@ rstart = start rshape = [] i = -1 - for i, (start_, stop, step, lgt) in enumerate(chunks): - if step != 0: - rstrides.append(strides[i] * step) - rbackstrides.append(strides[i] * (lgt - 1) * step) - rshape.append(lgt) - rstart += strides[i] * start_ + for i, chunk in enumerate(chunks): + if chunk.step != 0: + rstrides.append(strides[i] * chunk.step) + rbackstrides.append(strides[i] * (chunk.lgt - 1) * chunk.step) + rshape.append(chunk.lgt) + rstart += strides[i] * chunk.start # add a reminder s = i + 1 assert s >= 0 diff --git 
a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -166,14 +166,11 @@ # You can't subclass dtype raises(TypeError, type, "Foo", (dtype,), {}) - def test_new(self): - import _numpypy as np - assert np.int_(4) == 4 - assert np.float_(3.4) == 3.4 + def test_aliases(self): + from _numpypy import dtype - def test_pow(self): - from _numpypy import int_ - assert int_(4) ** 2 == 16 + assert dtype("float") is dtype(float) + class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): @@ -189,6 +186,15 @@ raises(TypeError, numpy.floating, 0) raises(TypeError, numpy.inexact, 0) + def test_new(self): + import _numpypy as np + assert np.int_(4) == 4 + assert np.float_(3.4) == 3.4 + + def test_pow(self): + from _numpypy import int_ + assert int_(4) ** 2 == 16 + def test_bool(self): import _numpypy as numpy @@ -318,7 +324,7 @@ else: raises(OverflowError, numpy.int64, 9223372036854775807) raises(OverflowError, numpy.int64, '9223372036854775807') - + raises(OverflowError, numpy.int64, 9223372036854775808) raises(OverflowError, numpy.int64, '9223372036854775808') diff --git a/pypy/module/micronumpy/test/test_module.py b/pypy/module/micronumpy/test/test_module.py --- a/pypy/module/micronumpy/test/test_module.py +++ b/pypy/module/micronumpy/test/test_module.py @@ -2,16 +2,11 @@ class AppTestNumPyModule(BaseNumpyAppTest): - def test_mean(self): - from _numpypy import array, mean - assert mean(array(range(5))) == 2.0 - assert mean(range(5)) == 2.0 - def test_average(self): from _numpypy import array, average assert average(range(10)) == 4.5 assert average(array(range(10))) == 4.5 - + def test_sum(self): from _numpypy import array, sum assert sum(range(10)) == 45 @@ -21,7 +16,7 @@ from _numpypy import array, min assert min(range(10)) == 0 assert min(array(range(10))) == 0 - + def test_max(self): from _numpypy import array, max assert 
max(range(10)) == 9 diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -2,6 +2,7 @@ import py from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement +from pypy.module.micronumpy.interp_iter import Chunk from pypy.module.micronumpy import signature from pypy.interpreter.error import OperationError from pypy.conftest import gettestobjspace @@ -37,53 +38,54 @@ def test_create_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 3 assert s.strides == [10, 50] assert s.backstrides == [40, 100] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 1 assert s.strides == [2, 10, 50] assert s.backstrides == [6, 40, 100] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), Chunk(1, 0, 0, 1)]) assert s.shape == [2, 1] assert s.strides == [3, 10] assert s.backstrides == [3, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 20 assert s.shape == [10, 3] def test_create_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') - s = a.create_slice([(3, 0, 0, 1)]) + s = a.create_slice([Chunk(3, 0, 0, 1)]) assert s.start == 45 assert s.strides == [3, 1] assert s.backstrides == [12, 2] - s = a.create_slice([(1, 9, 2, 4)]) + s = a.create_slice([Chunk(1, 9, 2, 4)]) assert s.start == 15 assert s.strides == [30, 3, 1] assert s.backstrides == [90, 12, 2] - s = a.create_slice([(1, 5, 3, 2), (1, 2, 1, 1), (1, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2), Chunk(1, 2, 1, 1), + Chunk(1, 0, 0, 1)]) assert s.start 
== 19 assert s.shape == [2, 1] assert s.strides == [45, 3] assert s.backstrides == [45, 0] - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) assert s.start == 6 assert s.shape == [10, 3] def test_slice_of_slice_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [50] assert s2.parent is a assert s2.backstrides == [100] assert s2.start == 35 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [3, 50] assert s2.backstrides == [3, 100] @@ -91,16 +93,16 @@ def test_slice_of_slice_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), order='C') - s = a.create_slice([(5, 0, 0, 1)]) + s = a.create_slice([Chunk(5, 0, 0, 1)]) assert s.start == 15 * 5 - s2 = s.create_slice([(3, 0, 0, 1)]) + s2 = s.create_slice([Chunk(3, 0, 0, 1)]) assert s2.shape == [3] assert s2.strides == [1] assert s2.parent is a assert s2.backstrides == [2] assert s2.start == 5 * 15 + 3 * 3 - s = a.create_slice([(1, 5, 3, 2)]) - s2 = s.create_slice([(0, 2, 1, 2), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(1, 5, 3, 2)]) + s2 = s.create_slice([Chunk(0, 2, 1, 2), Chunk(2, 0, 0, 1)]) assert s2.shape == [2, 3] assert s2.strides == [45, 1] assert s2.backstrides == [45, 2] @@ -108,14 +110,14 @@ def test_negative_step_f(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 9 assert s.strides == [-2, 10, 50] assert s.backstrides == [-8, 40, 100] def test_negative_step_c(self): a = W_NDimArray(10 * 5 * 3, [10, 5, 
3], MockDtype(), order='C') - s = a.create_slice([(9, -1, -2, 5)]) + s = a.create_slice([Chunk(9, -1, -2, 5)]) assert s.start == 135 assert s.strides == [-30, 3, 1] assert s.backstrides == [-120, 12, 2] @@ -124,7 +126,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'F') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 + 2 * 10 + 2 * 50 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -134,7 +136,7 @@ a = W_NDimArray(10 * 5 * 3, [10, 5, 3], MockDtype(), 'C') r = a._index_of_single_item(self.space, self.newtuple(1, 2, 2)) assert r == 1 * 3 * 5 + 2 * 3 + 2 - s = a.create_slice([(0, 10, 1, 10), (2, 0, 0, 1)]) + s = a.create_slice([Chunk(0, 10, 1, 10), Chunk(2, 0, 0, 1)]) r = s._index_of_single_item(self.space, self.newtuple(1, 0)) assert r == a._index_of_single_item(self.space, self.newtuple(1, 2, 0)) r = s._index_of_single_item(self.space, self.newtuple(1, 1)) @@ -157,6 +159,8 @@ assert calc_new_strides([2, 3, 4], [8, 3], [1, 16]) is None assert calc_new_strides([24], [2, 4, 3], [48, 6, 1]) is None assert calc_new_strides([24], [2, 4, 3], [24, 6, 2]) == [2] + assert calc_new_strides([105, 1], [3, 5, 7], [35, 7, 1]) == [1, 1] + assert calc_new_strides([1, 105], [3, 5, 7], [35, 7, 1]) == [105, 1] class AppTestNumArray(BaseNumpyAppTest): @@ -724,15 +728,17 @@ assert d[1] == 12 def test_mean(self): - from _numpypy import array, mean + from _numpypy import array, arange a = array(range(5)) assert a.mean() == 2.0 assert a[:4].mean() == 1.5 a = array(range(105)).reshape(3, 5, 7) - b = mean(a, axis=0) - b[0,0]==35. + b = a.mean(axis=0) + b[0, 0]==35. 
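[Editorial sketch, not part of the patch.] The `test_mean`/`test_sum` changes above switch to method-style axis reductions (`a.mean(axis=0)`, `a.sum(axis=0)`). The semantics they assert can be sketched in plain Python for a 2-D nested list; `mean_axis` here is a hypothetical helper, not PyPy's micronumpy implementation:

```python
def mean_axis(rows, axis):
    # rows: a 2-D list of numbers.
    # axis=0 averages down each column; axis=1 averages across each row.
    if axis == 0:
        n = len(rows)
        return [sum(col) / float(n) for col in zip(*rows)]
    elif axis == 1:
        return [sum(row) / float(len(row)) for row in rows]
    raise ValueError("axis out of range")

# Same data as the test: arange(10).reshape(5, 2)
a = [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
assert mean_axis(a, 0) == [4.0, 5.0]
assert mean_axis(a, 1) == [0.5, 2.5, 4.5, 6.5, 8.5]
```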
+ assert a.mean(axis=0)[0, 0] == 35 assert (b == array(range(35, 70), dtype=float).reshape(5, 7)).all() - assert (mean(a, 2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() + assert (a.mean(2) == array(range(0, 15), dtype=float).reshape(3, 5) * 7 + 3).all() + assert (arange(10).reshape(5, 2).mean(axis=1) == [0.5, 2.5, 4.5, 6.5, 8.5]).all() def test_sum(self): from _numpypy import array @@ -753,6 +759,7 @@ assert array([]).sum() == 0.0 raises(ValueError, 'array([]).max()') assert (a.sum(0) == [30, 35, 40]).all() + assert (a.sum(axis=0) == [30, 35, 40]).all() assert (a.sum(1) == [3, 12, 21, 30, 39]).all() assert (a.max(0) == [12, 13, 14]).all() assert (a.max(1) == [2, 5, 8, 11, 14]).all() @@ -765,9 +772,10 @@ assert (a[:, 1, :].sum(1) == [70, 315, 560]).all() raises (ValueError, 'a[:, 1, :].sum(2)') assert ((a + a).T.sum(2).T == (a + a).sum(0)).all() - skip("Those are broken on reshape, fix!") assert (a.reshape(1,-1).sum(0) == range(105)).all() assert (a.reshape(1,-1).sum(1) == 5460) + assert (array([[1,2],[3,4]]).prod(0) == [3, 8]).all() + assert (array([[1,2],[3,4]]).prod(1) == [2, 12]).all() def test_identity(self): from _numpypy import identity, array @@ -1014,11 +1022,15 @@ assert a[0].tolist() == [17.1, 27.2] def test_var(self): - from _numpypy import array + from _numpypy import array, arange a = array(range(10)) assert a.var() == 8.25 a = array([5.0]) assert a.var() == 0.0 + a = arange(10).reshape(5, 2) + assert a.var() == 8.25 + #assert (a.var(0) == [8, 8]).all() + #assert (a.var(1) == [.25] * 5).all() def test_std(self): from _numpypy import array @@ -1027,6 +1039,22 @@ a = array([5.0]) assert a.std() == 0.0 + def test_flatten(self): + from _numpypy import array + + a = array([[1, 2, 3], [4, 5, 6]]) + assert (a.flatten() == [1, 2, 3, 4, 5, 6]).all() + a = array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) + assert (a.flatten() == [1, 2, 3, 4, 5, 6, 7, 8]).all() + a = array([1, 2, 3, 4, 5, 6, 7, 8]) + assert (a[::2].flatten() == [1, 3, 5, 7]).all() 
+ a = array([1, 2, 3]) + assert ((a + a).flatten() == [2, 4, 6]).all() + a = array(2) + assert (a.flatten() == [2]).all() + a = array([[1, 2], [3, 4]]) + assert (a.T.flatten() == [1, 3, 2, 4]).all() + class AppTestMultiDim(BaseNumpyAppTest): def test_init(self): @@ -1297,6 +1325,56 @@ assert isinstance(i['data'][0], int) raises(TypeError, getattr, array(3), '__array_interface__') + def test_array_indexing_one_elem(self): + skip("not yet") + from _numpypy import array, arange + raises(IndexError, 'arange(3)[array([3.5])]') + a = arange(3)[array([1])] + assert a == 1 + assert a[0] == 1 + raises(IndexError,'arange(3)[array([15])]') + assert arange(3)[array([-3])] == 0 + raises(IndexError,'arange(3)[array([-15])]') + assert arange(3)[array(1)] == 1 + + def test_fill(self): + from _numpypy import array + a = array([1, 2, 3]) + a.fill(10) + assert (a == [10, 10, 10]).all() + a.fill(False) + assert (a == [0, 0, 0]).all() + b = a[:1] + b.fill(4) + assert (b == [4]).all() + assert (a == [4, 0, 0]).all() + + c = b + b + c.fill(27) + assert (c == [27]).all() + + d = array(10) + d.fill(100) + assert d == 100 + + def test_array_indexing_bool(self): + from _numpypy import arange + a = arange(10) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + a = arange(10).reshape(5, 2) + assert (a[a > 3] == [4, 5, 6, 7, 8, 9]).all() + assert (a[a & 1 == 1] == [1, 3, 5, 7, 9]).all() + + def test_array_indexing_bool_setitem(self): + from _numpypy import arange, array + a = arange(6) + a[a > 3] = 15 + assert (a == [0, 1, 2, 3, 15, 15]).all() + a = arange(6).reshape(3, 2) + a[a & 1 == 1] = array([8, 9, 10]) + assert (a == [[0, 8], [2, 9], [4, 10]]).all() + + class AppTestSupport(BaseNumpyAppTest): def setup_class(cls): @@ -1436,9 +1514,11 @@ assert repr(a) == "array(0.0)" a = array(0.2) assert repr(a) == "array(0.2)" + a = array([2]) + assert repr(a) == "array([2])" def test_repr_multi(self): - from _numpypy import arange, zeros + from _numpypy import arange, zeros, array a = zeros((3, 4)) 
assert repr(a) == '''array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], @@ -1461,6 +1541,9 @@ [498, 999], [499, 1000], [500, 1001]])''' + a = arange(2).reshape((2,1)) + assert repr(a) == '''array([[0], + [1]])''' def test_repr_slice(self): from _numpypy import array, zeros @@ -1545,14 +1628,3 @@ a = arange(0, 0.8, 0.1) assert len(a) == 8 assert arange(False, True, True).dtype is dtype(int) - - -class AppTestRanges(BaseNumpyAppTest): - def test_app_reshape(self): - from _numpypy import arange, array, dtype, reshape - a = arange(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) - a = range(12) - b = reshape(a, (3, 4)) - assert b.shape == (3, 4) diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -190,14 +190,24 @@ for i in range(3): assert c[i] == a[i] - b[i] - def test_floor(self): - from _numpypy import array, floor - - reference = [-2.0, -1.0, 0.0, 1.0, 1.0] - a = array([-1.4, -1.0, 0.0, 1.0, 1.4]) + def test_floorceil(self): + from _numpypy import array, floor, ceil + import math + reference = [-2.0, -2.0, -1.0, 0.0, 1.0, 1.0, 0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) b = floor(a) for i in range(5): assert b[i] == reference[i] + reference = [-1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0] + a = array([-1.4, -1.5, -1.0, 0.0, 1.0, 1.4, 0.5]) + b = ceil(a) + assert (reference == b).all() + inf = float("inf") + data = [1.5, 2.9999, -1.999, inf] + results = [math.floor(x) for x in data] + assert (floor(data) == results).all() + results = [math.ceil(x) for x in data] + assert (ceil(data) == results).all() def test_copysign(self): from _numpypy import array, copysign @@ -238,7 +248,7 @@ assert b[i] == math.sin(a[i]) a = sin(array([True, False], dtype=bool)) - assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise + assert abs(a[0] - sin(1)) < 1e-7 # a[0] will be less precise assert a[1] == 0.0 def 
test_cos(self): @@ -259,7 +269,6 @@ for i in range(len(a)): assert b[i] == math.tan(a[i]) - def test_arcsin(self): import math from _numpypy import array, arcsin @@ -283,7 +292,6 @@ for i in range(len(a)): assert b[i] == math.acos(a[i]) - a = array([-10, -1.5, -1.01, 1.01, 1.5, 10, float('nan'), float('inf'), float('-inf')]) b = arccos(a) for f in b: @@ -347,11 +355,20 @@ raises(ValueError, maximum.reduce, []) def test_reduceND(self): - from numpypy import add, arange + from _numpypy import add, arange a = arange(12).reshape(3, 4) assert (add.reduce(a, 0) == [12, 15, 18, 21]).all() assert (add.reduce(a, 1) == [6.0, 22.0, 38.0]).all() + def test_bitwise(self): + from _numpypy import bitwise_and, bitwise_or, arange, array + a = arange(6).reshape(2, 3) + assert (a & 1 == [[0, 1, 0], [1, 0, 1]]).all() + assert (a & 1 == bitwise_and(a, 1)).all() + assert (a | 1 == [[1, 1, 3], [3, 5, 5]]).all() + assert (a | 1 == bitwise_or(a, 1)).all() + raises(TypeError, 'array([1.0]) & 1') + def test_comparisons(self): import operator from _numpypy import equal, not_equal, less, less_equal, greater, greater_equal diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py --- a/pypy/module/micronumpy/test/test_zjit.py +++ b/pypy/module/micronumpy/test/test_zjit.py @@ -217,6 +217,7 @@ # This is the sum of the ops for both loops, however if you remove the # optimization then you end up with 2 float_adds, so we can still be # sure it was optimized correctly. 
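[Editorial sketch, not part of the patch.] The `test_bitwise` additions above exercise the new `int_only` flag threaded through `find_binop_result_dtype`: bitwise ufuncs are defined for integer operands only and raise `TypeError` otherwise. A minimal pure-Python model of that guard (`bitwise_and_elemwise` is a hypothetical helper):

```python
def bitwise_and_elemwise(xs, ys, int_only=True):
    # Mirrors the int_only check added to find_binop_result_dtype:
    # bitwise ufuncs refuse non-integer operands with a TypeError.
    if int_only and not all(isinstance(v, int) for v in xs + ys):
        raise TypeError("Unsupported types")
    return [x & y for x, y in zip(xs, ys)]

# arange(6) & 1, as in the test (flattened): [0, 1, 0, 1, 0, 1]
assert bitwise_and_elemwise([0, 1, 2, 3, 4, 5], [1] * 6) == [0, 1, 0, 1, 0, 1]

# array([1.0]) & 1 must fail, matching raises(TypeError, ...) in the test.
try:
    bitwise_and_elemwise([1.0], [1])
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for float operands")
```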
+ py.test.skip("too fragile") self.check_resops({'setinteriorfield_raw': 4, 'getfield_gc': 22, 'getarrayitem_gc': 4, 'getarrayitem_gc_pure': 2, 'getfield_gc_pure': 8, @@ -349,7 +350,8 @@ self.check_trace_count(1) self.check_simple_loop({'getinteriorfield_raw': 2, 'float_add': 1, 'setinteriorfield_raw': 1, 'int_add': 2, - 'int_eq': 1, 'guard_false': 1, 'jump': 1}) + 'int_eq': 1, 'guard_false': 1, 'jump': 1, + 'arraylen_gc': 1}) def define_virtual_slice(): return """ diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py --- a/pypy/module/micronumpy/types.py +++ b/pypy/module/micronumpy/types.py @@ -94,6 +94,9 @@ width, storage, i, offset )) + def read_bool(self, storage, width, i, offset): + raise NotImplementedError + def store(self, storage, width, i, offset, box): value = self.unbox(box) libffi.array_setitem(clibffi.cast_type_to_ffitype(self.T), @@ -168,6 +171,7 @@ @simple_binary_op def min(self, v1, v2): return min(v1, v2) + class Bool(BaseType, Primitive): T = lltype.Bool @@ -185,6 +189,11 @@ else: return self.False + + def read_bool(self, storage, width, i, offset): + return libffi.array_getitem(clibffi.cast_type_to_ffitype(self.T), + width, storage, i, offset) + def coerce_subtype(self, space, w_subtype, w_item): # Doesn't return subclasses so it can return the constants. 
return self._coerce(space, w_item) @@ -253,6 +262,14 @@ assert v == 0 return 0 + @simple_binary_op + def bitwise_and(self, v1, v2): + return v1 & v2 + + @simple_binary_op + def bitwise_or(self, v1, v2): + return v1 | v2 + class Int8(BaseType, Integer): T = rffi.SIGNEDCHAR BoxType = interp_boxes.W_Int8Box @@ -374,6 +391,10 @@ return math.floor(v) @simple_unary_op + def ceil(self, v): + return math.ceil(v) + + @simple_unary_op def exp(self, v): try: return math.exp(v) @@ -436,4 +457,4 @@ class Float64(BaseType, Float): T = rffi.DOUBLE BoxType = interp_boxes.W_Float64Box - format_code = "d" \ No newline at end of file + format_code = "d" diff --git a/pypy/module/pypyjit/__init__.py b/pypy/module/pypyjit/__init__.py --- a/pypy/module/pypyjit/__init__.py +++ b/pypy/module/pypyjit/__init__.py @@ -11,6 +11,7 @@ 'set_optimize_hook': 'interp_resop.set_optimize_hook', 'set_abort_hook': 'interp_resop.set_abort_hook', 'ResOperation': 'interp_resop.WrappedOp', + 'DebugMergePoint': 'interp_resop.DebugMergePoint', 'Box': 'interp_resop.WrappedBox', } diff --git a/pypy/module/pypyjit/interp_resop.py b/pypy/module/pypyjit/interp_resop.py --- a/pypy/module/pypyjit/interp_resop.py +++ b/pypy/module/pypyjit/interp_resop.py @@ -1,5 +1,6 @@ -from pypy.interpreter.typedef import TypeDef, GetSetProperty +from pypy.interpreter.typedef import (TypeDef, GetSetProperty, + interp_attrproperty, interp_attrproperty_w) from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.gateway import unwrap_spec, interp2app, NoneNotWrapped from pypy.interpreter.pycode import PyCode @@ -10,6 +11,7 @@ from pypy.jit.metainterp.resoperation import rop, AbstractResOp from pypy.rlib.nonconst import NonConstant from pypy.rlib import jit_hooks +from pypy.module.pypyjit.interp_jit import pypyjitdriver class Cache(object): in_recursion = False @@ -111,13 +113,24 @@ def wrap_oplist(space, logops, operations, ops_offset=None): l_w = [] + jitdrivers_sd = logops.metainterp_sd.jitdrivers_sd for op in 
operations: if ops_offset is None: ofs = -1 else: ofs = ops_offset.get(op, 0) - l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, - logops.repr_of_resop(op))) + if op.opnum == rop.DEBUG_MERGE_POINT: + jd_sd = jitdrivers_sd[op.getarg(0).getint()] + greenkey = op.getarglist()[2:] + repr = jd_sd.warmstate.get_location_str(greenkey) + w_greenkey = wrap_greenkey(space, jd_sd.jitdriver, greenkey, repr) + l_w.append(DebugMergePoint(space, jit_hooks._cast_to_gcref(op), + logops.repr_of_resop(op), + jd_sd.jitdriver.name, + w_greenkey)) + else: + l_w.append(WrappedOp(jit_hooks._cast_to_gcref(op), ofs, + logops.repr_of_resop(op))) return l_w class WrappedBox(Wrappable): @@ -150,6 +163,15 @@ llres = res.llbox return WrappedOp(jit_hooks.resop_new(num, args, llres), offset, repr) + at unwrap_spec(repr=str, jd_name=str) +def descr_new_dmp(space, w_tp, w_args, repr, jd_name, w_greenkey): + args = [space.interp_w(WrappedBox, w_arg).llbox for w_arg in + space.listview(w_args)] + num = rop.DEBUG_MERGE_POINT + return DebugMergePoint(space, + jit_hooks.resop_new(num, args, jit_hooks.emptyval()), + repr, jd_name, w_greenkey) + class WrappedOp(Wrappable): """ A class representing a single ResOperation, wrapped nicely """ @@ -182,6 +204,25 @@ box = space.interp_w(WrappedBox, w_box) jit_hooks.resop_setresult(self.op, box.llbox) +class DebugMergePoint(WrappedOp): + def __init__(self, space, op, repr_of_resop, jd_name, w_greenkey): + WrappedOp.__init__(self, op, -1, repr_of_resop) + self.w_greenkey = w_greenkey + self.jd_name = jd_name + + def get_pycode(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(0)) + raise OperationError(space.w_AttributeError, space.wrap("This DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_bytecode_no(self, space): + if self.jd_name == pypyjitdriver.name: + return space.getitem(self.w_greenkey, space.wrap(1)) + raise OperationError(space.w_AttributeError, space.wrap("This 
DebugMergePoint doesn't belong to the main Python JitDriver")) + + def get_jitdriver_name(self, space): + return space.wrap(self.jd_name) + WrappedOp.typedef = TypeDef( 'ResOperation', __doc__ = WrappedOp.__doc__, @@ -195,3 +236,15 @@ WrappedOp.descr_setresult) ) WrappedOp.acceptable_as_base_class = False + +DebugMergePoint.typedef = TypeDef( + 'DebugMergePoint', WrappedOp.typedef, + __new__ = interp2app(descr_new_dmp), + greenkey = interp_attrproperty_w("w_greenkey", cls=DebugMergePoint), + pycode = GetSetProperty(DebugMergePoint.get_pycode), + bytecode_no = GetSetProperty(DebugMergePoint.get_bytecode_no), + jitdriver_name = GetSetProperty(DebugMergePoint.get_jitdriver_name), +) +DebugMergePoint.acceptable_as_base_class = False + + diff --git a/pypy/module/pypyjit/policy.py b/pypy/module/pypyjit/policy.py --- a/pypy/module/pypyjit/policy.py +++ b/pypy/module/pypyjit/policy.py @@ -127,7 +127,7 @@ 'imp', 'sys', 'array', '_ffi', 'itertools', 'operator', 'posix', '_socket', '_sre', '_lsprof', '_weakref', '__pypy__', 'cStringIO', '_collections', 'struct', - 'mmap', 'marshal']: + 'mmap', 'marshal', '_codecs']: if modname == 'pypyjit' and 'interp_resop' in rest: return False return True diff --git a/pypy/module/pypyjit/test/test_jit_hook.py b/pypy/module/pypyjit/test/test_jit_hook.py --- a/pypy/module/pypyjit/test/test_jit_hook.py +++ b/pypy/module/pypyjit/test/test_jit_hook.py @@ -92,6 +92,7 @@ cls.w_on_compile_bridge = space.wrap(interp2app(interp_on_compile_bridge)) cls.w_on_abort = space.wrap(interp2app(interp_on_abort)) cls.w_int_add_num = space.wrap(rop.INT_ADD) + cls.w_dmp_num = space.wrap(rop.DEBUG_MERGE_POINT) cls.w_on_optimize = space.wrap(interp2app(interp_on_optimize)) cls.orig_oplist = oplist @@ -117,6 +118,10 @@ assert elem[2][2] == False assert len(elem[3]) == 4 int_add = elem[3][0] + dmp = elem[3][1] + assert isinstance(dmp, pypyjit.DebugMergePoint) + assert dmp.pycode is self.f.func_code + assert dmp.greenkey == (self.f.func_code, 0, False) #assert 
int_add.name == 'int_add' assert int_add.num == self.int_add_num self.on_compile_bridge() @@ -211,3 +216,18 @@ assert op.getarg(0).getint() == 4 op.result = box assert op.result.getint() == 1 + + def test_creation_dmp(self): + from pypyjit import DebugMergePoint, Box + + def f(): + pass + + op = DebugMergePoint([Box(0)], 'repr', 'pypyjit', (f.func_code, 0, 0)) + assert op.bytecode_no == 0 + assert op.pycode is f.func_code + assert repr(op) == 'repr' + assert op.jitdriver_name == 'pypyjit' + assert op.num == self.dmp_num + op = DebugMergePoint([Box(0)], 'repr', 'notmain', ('str',)) + raises(AttributeError, 'op.pycode') diff --git a/pypy/module/pypyjit/test_pypy_c/test_math.py b/pypy/module/pypyjit/test_pypy_c/test_math.py --- a/pypy/module/pypyjit/test_pypy_c/test_math.py +++ b/pypy/module/pypyjit/test_pypy_c/test_math.py @@ -90,3 +90,19 @@ --TICK-- jump(..., descr=) """) + + def test_pow_two(self): + def main(n): + s = 0.123 + while n > 0: + s -= s ** 2 # ID: pow + n -= 1 + return s + log = self.run(main, [500]) + assert abs(log.result - main(500)) < 1e-9 + loop, = log.loops_by_filename(self.filepath) + assert loop.match_by_id("pow", """ + guard_not_invalidated(descr=...) 
+ f38 = float_mul(f30, f30) + f39 = float_sub(f30, f38) + """) diff --git a/pypy/module/sys/version.py b/pypy/module/sys/version.py --- a/pypy/module/sys/version.py +++ b/pypy/module/sys/version.py @@ -10,7 +10,7 @@ CPYTHON_VERSION = (2, 7, 1, "final", 42) #XXX # sync patchlevel.h CPYTHON_API_VERSION = 1013 #XXX # sync with include/modsupport.h -PYPY_VERSION = (1, 7, 1, "dev", 0) #XXX # sync patchlevel.h +PYPY_VERSION = (1, 8, 1, "dev", 0) #XXX # sync patchlevel.h if platform.name == 'msvc': COMPILER_INFO = 'MSC v.%d 32 bit' % (platform.version * 10 + 600) diff --git a/lib_pypy/numpypy/test/test_fromnumeric.py b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py rename from lib_pypy/numpypy/test/test_fromnumeric.py rename to pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py --- a/lib_pypy/numpypy/test/test_fromnumeric.py +++ b/pypy/module/test_lib_pypy/numpypy/core/test_fromnumeric.py @@ -1,7 +1,7 @@ - from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest -class AppTestFromNumeric(BaseNumpyAppTest): + +class AppTestFromNumeric(BaseNumpyAppTest): def test_argmax(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, argmax @@ -18,12 +18,12 @@ from numpypy import array, arange, argmin a = arange(6).reshape((2,3)) assert argmin(a) == 0 - # assert (argmax(a, axis=0) == array([0, 0, 0])).all() - # assert (argmax(a, axis=1) == array([0, 0])).all() + #assert (argmin(a, axis=0) == array([0, 0, 0])).all() + #assert (argmin(a, axis=1) == array([0, 0])).all() b = arange(6) b[1] = 0 assert argmin(b) == 0 - + def test_shape(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, identity, shape @@ -40,11 +40,11 @@ assert sum([0.5, 1.5])== 2.0 assert sum([[0, 1], [0, 5]]) == 6 # assert sum([0.5, 0.7, 0.2, 1.5], dtype=int32) == 1 - # assert (sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() - # assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() + assert 
(sum([[0, 1], [0, 5]], axis=0) == array([0, 6])).all() + assert (sum([[0, 1], [0, 5]], axis=1) == array([1, 5])).all() # If the accumulator is too small, overflow occurs: # assert ones(128, dtype=int8).sum(dtype=int8) == -128 - + def test_amin(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, arange, amin @@ -86,24 +86,54 @@ assert ndim([[1,2,3],[4,5,6]]) == 2 assert ndim(array([[1,2,3],[4,5,6]])) == 2 assert ndim(1) == 0 - + def test_rank(self): # tests taken from numpy/core/fromnumeric.py docstring from numpypy import array, rank assert rank([[1,2,3],[4,5,6]]) == 2 assert rank(array([[1,2,3],[4,5,6]])) == 2 assert rank(1) == 0 - + def test_var(self): from numpypy import array, var a = array([[1,2],[3,4]]) assert var(a) == 1.25 - # assert (np.var(a,0) == array([ 1., 1.])).all() - # assert (np.var(a,1) == array([ 0.25, 0.25])).all() + #assert (var(a,0) == array([ 1., 1.])).all() + #assert (var(a,1) == array([ 0.25, 0.25])).all() def test_std(self): from numpypy import array, std a = array([[1, 2], [3, 4]]) assert std(a) == 1.1180339887498949 - # assert (std(a, axis=0) == array([ 1., 1.])).all() - # assert (std(a, axis=1) == array([ 0.5, 0.5]).all() + #assert (std(a, axis=0) == array([ 1., 1.])).all() + #assert (std(a, axis=1) == array([ 0.5, 0.5])).all() + + def test_mean(self): + from numpypy import array, mean, arange + assert mean(array(range(5))) == 2.0 + assert mean(range(5)) == 2.0 + assert (mean(arange(10).reshape(5, 2), axis=0) == [4, 5]).all() + assert (mean(arange(10).reshape(5, 2), axis=1) == [0.5, 2.5, 4.5, 6.5, 8.5]).all() + + def test_reshape(self): + from numpypy import arange, array, dtype, reshape + a = arange(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = range(12) + b = reshape(a, (3, 4)) + assert b.shape == (3, 4) + a = array(range(105)).reshape(3, 5, 7) + assert reshape(a, (1, -1)).shape == (1, 105) + assert reshape(a, (1, 1, -1)).shape == (1, 1, 105) + assert reshape(a, (-1, 1, 1)).shape == 
(105, 1, 1) + + def test_transpose(self): + from numpypy import arange, array, transpose, ones + x = arange(4).reshape((2,2)) + assert (transpose(x) == array([[0, 2],[1, 3]])).all() + # Once axes argument is implemented, add more tests + raises(NotImplementedError, "transpose(x, axes=(1, 0, 2))") + # x = ones((1, 2, 3)) + # assert transpose(x, (1, 0, 2)).shape == (2, 1, 3) + diff --git a/pypy/module/test_lib_pypy/test_datetime.py b/pypy/module/test_lib_pypy/test_datetime.py new file mode 100644 --- /dev/null +++ b/pypy/module/test_lib_pypy/test_datetime.py @@ -0,0 +1,24 @@ +"""Additional tests for datetime.""" + +import time +import datetime +import os + +def test_utcfromtimestamp(): + """Confirm that utcfromtimestamp and fromtimestamp give consistent results. + + Based on danchr's test script in https://bugs.pypy.org/issue986 + """ + try: + prev_tz = os.environ.get("TZ") + os.environ["TZ"] = "GMT" + for unused in xrange(100): + now = time.time() + delta = (datetime.datetime.utcfromtimestamp(now) - + datetime.datetime.fromtimestamp(now)) + assert delta.days * 86400 + delta.seconds == 0 + finally: + if prev_tz is None: + del os.environ["TZ"] + else: + os.environ["TZ"] = prev_tz diff --git a/pypy/objspace/std/bytearrayobject.py b/pypy/objspace/std/bytearrayobject.py --- a/pypy/objspace/std/bytearrayobject.py +++ b/pypy/objspace/std/bytearrayobject.py @@ -624,8 +624,8 @@ if step == 1: assert start >= 0 - assert slicelength >= 0 - del items[start:start+slicelength] + if slicelength > 0: + del items[start:start+slicelength] else: n = len(items) i = start @@ -662,10 +662,11 @@ while i >= lim: items[i] = items[i-delta] i -= 1 - elif start >= 0: + elif delta == 0: + pass + else: + assert start >= 0 # start<0 is only possible with slicelength==0 del items[start:start+delta] - else: - assert delta==0 # start<0 is only possible with slicelength==0 elif len2 != slicelength: # No resize for extended slices raise operationerrfmt(space.w_ValueError, "attempt to " "assign sequence 
of size %d to extended slice of size %d", diff --git a/pypy/objspace/std/complexobject.py b/pypy/objspace/std/complexobject.py --- a/pypy/objspace/std/complexobject.py +++ b/pypy/objspace/std/complexobject.py @@ -8,6 +8,7 @@ from pypy.rlib.rbigint import rbigint from pypy.rlib.rfloat import ( formatd, DTSF_STR_PRECISION, isinf, isnan, copysign) +from pypy.rlib import jit import math @@ -129,10 +130,10 @@ ir = len * math.sin(phase) return W_ComplexObject(rr, ir) - def pow_int(self, n): - if n > 100 or n < -100: - return self.pow(W_ComplexObject(1.0 * n, 0.0)) - elif n > 0: + def pow_small_int(self, n): + if n >= 0: + if jit.isconstant(n) and n == 2: + return self.mul(self) return self.pow_positive_int(n) else: return w_one.div(self.pow_positive_int(-n)) @@ -217,10 +218,10 @@ def pow__Complex_Complex_ANY(space, w_complex, w_exponent, thirdArg): if not space.is_w(thirdArg, space.w_None): raise OperationError(space.w_ValueError, space.wrap('complex modulo')) - int_exponent = int(w_exponent.realval) try: - if w_exponent.imagval == 0.0 and w_exponent.realval == int_exponent: - w_p = w_complex.pow_int(int_exponent) + r = w_exponent.realval + if w_exponent.imagval == 0.0 and -100.0 <= r <= 100.0 and r == int(r): + w_p = w_complex.pow_small_int(int(r)) else: w_p = w_complex.pow(w_exponent) except ZeroDivisionError: diff --git a/pypy/objspace/std/floatobject.py b/pypy/objspace/std/floatobject.py --- a/pypy/objspace/std/floatobject.py +++ b/pypy/objspace/std/floatobject.py @@ -451,6 +451,8 @@ y = w_float2.floatval # Sort out special cases here instead of relying on pow() + if y == 2.0: # special case for performance: + return W_FloatObject(x * x) # x * x is always correct if y == 0.0: # x**0 is 1, even 0**0 return W_FloatObject(1.0) diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py --- a/pypy/objspace/std/listobject.py +++ b/pypy/objspace/std/listobject.py @@ -804,10 +804,11 @@ while i >= lim: items[i] = items[i-delta] i -= 1 - elif start >= 0: + 
elif delta == 0: + pass + else: + assert start >= 0 # start<0 is only possible with slicelength==0 del items[start:start+delta] - else: - assert delta==0 # start<0 is only possible with slicelength==0 elif len2 != slicelength: # No resize for extended slices raise operationerrfmt(self.space.w_ValueError, "attempt to " "assign sequence of size %d to extended slice of size %d", @@ -851,8 +852,8 @@ if step == 1: assert start >= 0 - assert slicelength >= 0 - del items[start:start+slicelength] + if slicelength > 0: + del items[start:start+slicelength] else: n = len(items) i = start diff --git a/pypy/objspace/std/test/test_bytearrayobject.py b/pypy/objspace/std/test/test_bytearrayobject.py --- a/pypy/objspace/std/test/test_bytearrayobject.py +++ b/pypy/objspace/std/test/test_bytearrayobject.py @@ -1,5 +1,9 @@ +from pypy import conftest class AppTestBytesArray: + def setup_class(cls): + cls.w_runappdirect = cls.space.wrap(conftest.option.runappdirect) + def test_basics(self): b = bytearray() assert type(b) is bytearray @@ -439,3 +443,15 @@ def test_reduce(self): assert bytearray('caf\xe9').__reduce__() == ( bytearray, (u'caf\xe9', 'latin-1'), None) + + def test_setitem_slice_performance(self): + # because of a complexity bug, this used to take forever on a + # translated pypy. On CPython2.6 -A, it takes around 8 seconds. 
+ if self.runappdirect: + count = 16*1024*1024 + else: + count = 1024 + b = bytearray(count) + for i in range(count): + b[i:i+1] = 'y' + assert str(b) == 'y' * count diff --git a/pypy/objspace/std/test/test_complexobject.py b/pypy/objspace/std/test/test_complexobject.py --- a/pypy/objspace/std/test/test_complexobject.py +++ b/pypy/objspace/std/test/test_complexobject.py @@ -71,7 +71,7 @@ assert _powu((0.0,1.0),2) == (-1.0,0.0) def _powi((r1, i1), n): - w_res = W_ComplexObject(r1, i1).pow_int(n) + w_res = W_ComplexObject(r1, i1).pow_small_int(n) return w_res.realval, w_res.imagval assert _powi((0.0,2.0),0) == (1.0,0.0) assert _powi((0.0,0.0),2) == (0.0,0.0) @@ -213,6 +213,7 @@ assert a ** 105 == a ** 105 assert a ** -105 == a ** -105 assert a ** -30 == a ** -30 + assert a ** 2 == a * a assert 0.0j ** 0 == 1 diff --git a/pypy/objspace/std/test/test_listobject.py b/pypy/objspace/std/test/test_listobject.py --- a/pypy/objspace/std/test/test_listobject.py +++ b/pypy/objspace/std/test/test_listobject.py @@ -397,6 +397,7 @@ on_cpython = (option.runappdirect and not hasattr(sys, 'pypy_translation_info')) cls.w_on_cpython = cls.space.wrap(on_cpython) + cls.w_runappdirect = cls.space.wrap(option.runappdirect) def test_getstrategyfromlist_w(self): l0 = ["a", "2", "a", True] @@ -898,6 +899,18 @@ l[::-1] = l assert l == [6,5,4,3,2,1] + def test_setitem_slice_performance(self): + # because of a complexity bug, this used to take forever on a + # translated pypy. On CPython2.6 -A, it takes around 5 seconds. 
+ if self.runappdirect: + count = 16*1024*1024 + else: + count = 1024 + b = [None] * count + for i in range(count): + b[i:i+1] = ['y'] + assert b == ['y'] * count + def test_recursive_repr(self): l = [] assert repr(l) == '[]' diff --git a/pypy/objspace/std/test/test_unicodeobject.py b/pypy/objspace/std/test/test_unicodeobject.py --- a/pypy/objspace/std/test/test_unicodeobject.py +++ b/pypy/objspace/std/test/test_unicodeobject.py @@ -64,6 +64,12 @@ check(', '.join([u'a']), u'a') check(', '.join(['a', u'b']), u'a, b') check(u', '.join(['a', 'b']), u'a, b') + try: + u''.join([u'a', 2, 3]) + except TypeError, e: + assert 'sequence item 1' in str(e) + else: + raise Exception("DID NOT RAISE") if sys.version_info >= (2,3): def test_contains_ex(self): diff --git a/pypy/objspace/std/unicodeobject.py b/pypy/objspace/std/unicodeobject.py --- a/pypy/objspace/std/unicodeobject.py +++ b/pypy/objspace/std/unicodeobject.py @@ -201,7 +201,7 @@ return space.newbool(container.find(item) != -1) def unicode_join__Unicode_ANY(space, w_self, w_list): - list_w = space.unpackiterable(w_list) + list_w = space.listview(w_list) size = len(list_w) if size == 0: @@ -216,22 +216,21 @@ def _unicode_join_many_items(space, w_self, list_w, size): self = w_self._value - sb = UnicodeBuilder() + prealloc_size = len(self) * (size - 1) + for i in range(size): + try: + prealloc_size += len(space.unicode_w(list_w[i])) + except OperationError, e: + if not e.match(space, space.w_TypeError): + raise + raise operationerrfmt(space.w_TypeError, + "sequence item %d: expected string or Unicode", i) + sb = UnicodeBuilder(prealloc_size) for i in range(size): if self and i != 0: sb.append(self) w_s = list_w[i] - if isinstance(w_s, W_UnicodeObject): - # shortcut for performance - sb.append(w_s._value) - else: - try: - sb.append(space.unicode_w(w_s)) - except OperationError, e: - if not e.match(space, space.w_TypeError): - raise - raise operationerrfmt(space.w_TypeError, - "sequence item %d: expected string or 
Unicode", i) + sb.append(space.unicode_w(w_s)) return space.wrap(sb.build()) def hash__Unicode(space, w_uni): diff --git a/pypy/rlib/jit_hooks.py b/pypy/rlib/jit_hooks.py --- a/pypy/rlib/jit_hooks.py +++ b/pypy/rlib/jit_hooks.py @@ -52,7 +52,10 @@ from pypy.jit.metainterp.history import ResOperation args = [_cast_to_box(llargs[i]) for i in range(len(llargs))] - res = _cast_to_box(llres) + if llres: + res = _cast_to_box(llres) + else: + res = None return _cast_to_gcref(ResOperation(no, args, res)) @register_helper(annmodel.SomePtr(llmemory.GCREF)) diff --git a/pypy/rlib/longlong2float.py b/pypy/rlib/longlong2float.py --- a/pypy/rlib/longlong2float.py +++ b/pypy/rlib/longlong2float.py @@ -79,19 +79,23 @@ longlong2float = rffi.llexternal( "pypy__longlong2float", [rffi.LONGLONG], rffi.DOUBLE, _callable=longlong2float_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__longlong2float") float2longlong = rffi.llexternal( "pypy__float2longlong", [rffi.DOUBLE], rffi.LONGLONG, _callable=float2longlong_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__float2longlong") uint2singlefloat = rffi.llexternal( "pypy__uint2singlefloat", [rffi.UINT], rffi.FLOAT, _callable=uint2singlefloat_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__uint2singlefloat") singlefloat2uint = rffi.llexternal( "pypy__singlefloat2uint", [rffi.FLOAT], rffi.UINT, _callable=singlefloat2uint_emulator, compilation_info=eci, - _nowrapper=True, elidable_function=True, sandboxsafe=True) + _nowrapper=True, elidable_function=True, sandboxsafe=True, + oo_primitive="pypy__singlefloat2uint") diff --git a/pypy/rlib/objectmodel.py 
b/pypy/rlib/objectmodel.py --- a/pypy/rlib/objectmodel.py +++ b/pypy/rlib/objectmodel.py @@ -420,7 +420,7 @@ vobj.concretetype.TO._gckind == 'gc') else: from pypy.rpython.ootypesystem import ootype - ok = isinstance(vobj.concretetype, ootype.Instance) + ok = isinstance(vobj.concretetype, (ootype.Instance, ootype.BuiltinType)) if not ok: from pypy.rpython.error import TyperError raise TyperError("compute_unique_id() cannot be applied to" diff --git a/pypy/rpython/lltypesystem/rclass.py b/pypy/rpython/lltypesystem/rclass.py --- a/pypy/rpython/lltypesystem/rclass.py +++ b/pypy/rpython/lltypesystem/rclass.py @@ -510,7 +510,13 @@ ctype = inputconst(Void, self.object_type) cflags = inputconst(Void, flags) vlist = [ctype, cflags] - vptr = llops.genop('malloc', vlist, + cnonmovable = self.classdef.classdesc.read_attribute( + '_alloc_nonmovable_', Constant(False)) + if cnonmovable.value: + opname = 'malloc_nonmovable' + else: + opname = 'malloc' + vptr = llops.genop(opname, vlist, resulttype = Ptr(self.object_type)) ctypeptr = inputconst(CLASSTYPE, self.rclass.getvtable()) self.setfield(vptr, '__class__', ctypeptr, llops) diff --git a/pypy/rpython/lltypesystem/rtuple.py b/pypy/rpython/lltypesystem/rtuple.py --- a/pypy/rpython/lltypesystem/rtuple.py +++ b/pypy/rpython/lltypesystem/rtuple.py @@ -27,6 +27,10 @@ def newtuple(cls, llops, r_tuple, items_v): # items_v should have the lowleveltype of the internal reprs + assert len(r_tuple.items_r) == len(items_v) + for r_item, v_item in zip(r_tuple.items_r, items_v): + assert r_item.lowleveltype == v_item.concretetype + # if len(r_tuple.items_r) == 0: return inputconst(Void, ()) # a Void empty tuple c1 = inputconst(Void, r_tuple.lowleveltype.TO) diff --git a/pypy/rpython/ootypesystem/ootype.py b/pypy/rpython/ootypesystem/ootype.py --- a/pypy/rpython/ootypesystem/ootype.py +++ b/pypy/rpython/ootypesystem/ootype.py @@ -512,6 +512,7 @@ "ll_append_char": Meth([CHARTP], Void), "ll_append": Meth([STRINGTP], Void), "ll_build": Meth([], 
STRINGTP), + "ll_getlength": Meth([], Signed), }) self._setup_methods({}) @@ -1376,6 +1377,9 @@ def _cast_to_object(self): return make_object(self) + def _identityhash(self): + return object.__hash__(self) + class _string(_builtin_type): def __init__(self, STRING, value = ''): @@ -1543,6 +1547,9 @@ else: return make_unicode(u''.join(self._buf)) + def ll_getlength(self): + return self.ll_build().ll_strlen() + class _null_string_builder(_null_mixin(_string_builder), _string_builder): def __init__(self, STRING_BUILDER): self.__dict__["_TYPE"] = STRING_BUILDER diff --git a/pypy/rpython/ootypesystem/rbuilder.py b/pypy/rpython/ootypesystem/rbuilder.py --- a/pypy/rpython/ootypesystem/rbuilder.py +++ b/pypy/rpython/ootypesystem/rbuilder.py @@ -21,6 +21,10 @@ builder.ll_append_char(char) @staticmethod + def ll_getlength(builder): + return builder.ll_getlength() + + @staticmethod def ll_append(builder, string): builder.ll_append(string) diff --git a/pypy/rpython/rrange.py b/pypy/rpython/rrange.py --- a/pypy/rpython/rrange.py +++ b/pypy/rpython/rrange.py @@ -204,7 +204,10 @@ v_index = hop.gendirectcall(self.ll_getnextindex, v_enumerate) hop2 = hop.copy() hop2.args_r = [self.r_baseiter] + r_item_src = self.r_baseiter.r_list.external_item_repr + r_item_dst = hop.r_result.items_r[1] v_item = self.r_baseiter.rtype_next(hop2) + v_item = hop.llops.convertvar(v_item, r_item_src, r_item_dst) return hop.r_result.newtuple(hop.llops, hop.r_result, [v_index, v_item]) diff --git a/pypy/rpython/test/test_rbuilder.py b/pypy/rpython/test/test_rbuilder.py --- a/pypy/rpython/test/test_rbuilder.py +++ b/pypy/rpython/test/test_rbuilder.py @@ -124,9 +124,5 @@ pass class TestOOtype(BaseTestStringBuilder, OORtypeMixin): - def test_string_getlength(self): - py.test.skip("getlength(): not implemented on ootype") - def test_unicode_getlength(self): - py.test.skip("getlength(): not implemented on ootype") def test_append_charpsize(self): py.test.skip("append_charpsize(): not implemented on ootype") 
diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -463,6 +463,31 @@ assert x1 == intmask(x0) assert x3 == intmask(x2) + def test_id_on_builtins(self): + from pypy.rlib.objectmodel import compute_unique_id + from pypy.rlib.rstring import StringBuilder, UnicodeBuilder + def fn(): + return (compute_unique_id("foo"), + compute_unique_id(u"bar"), + compute_unique_id([1]), + compute_unique_id({"foo": 3}), + compute_unique_id(StringBuilder()), + compute_unique_id(UnicodeBuilder())) + res = self.interpret(fn, []) + for id in self.ll_unpack_tuple(res, 6): + assert isinstance(id, (int, r_longlong)) + + def test_uniqueness_of_id_on_strings(self): + from pypy.rlib.objectmodel import compute_unique_id + def fn(s1, s2): + return (compute_unique_id(s1), compute_unique_id(s2)) + + s1 = "foo" + s2 = ''.join(['f','oo']) + res = self.interpret(fn, [self.string_to_ll(s1), self.string_to_ll(s2)]) + i1, i2 = self.ll_unpack_tuple(res, 2) + assert i1 != i2 + def test_cast_primitive(self): from pypy.rpython.annlowlevel import LowLevelAnnotatorPolicy def llf(u): diff --git a/pypy/rpython/test/test_rclass.py b/pypy/rpython/test/test_rclass.py --- a/pypy/rpython/test/test_rclass.py +++ b/pypy/rpython/test/test_rclass.py @@ -1130,6 +1130,18 @@ assert sorted([u]) == [6] # 32-bit types assert sorted([i, r, d, l]) == [2, 3, 4, 5] # 64-bit types + def test_nonmovable(self): + for (nonmovable, opname) in [(True, 'malloc_nonmovable'), + (False, 'malloc')]: + class A(object): + _alloc_nonmovable_ = nonmovable + def f(): + return A() + t, typer, graph = self.gengraph(f, []) + assert summary(graph) == {opname: 1, + 'cast_pointer': 1, + 'setfield': 1} + class TestOOtype(BaseTestRclass, OORtypeMixin): diff --git a/pypy/rpython/test/test_rrange.py b/pypy/rpython/test/test_rrange.py --- a/pypy/rpython/test/test_rrange.py +++ b/pypy/rpython/test/test_rrange.py @@ -169,6 +169,22 @@ res = 
self.interpret(fn, [2]) assert res == 789 + def test_enumerate_instances(self): + class A: + pass + def fn(n): + a = A() + b = A() + a.k = 10 + b.k = 20 + for i, x in enumerate([a, b]): + if i == n: + return x.k + return 5 + res = self.interpret(fn, [1]) + assert res == 20 + + class TestLLtype(BaseTestRrange, LLRtypeMixin): from pypy.rpython.lltypesystem import rrange diff --git a/pypy/tool/jitlogparser/parser.py b/pypy/tool/jitlogparser/parser.py --- a/pypy/tool/jitlogparser/parser.py +++ b/pypy/tool/jitlogparser/parser.py @@ -140,13 +140,15 @@ bytecode_name = None is_bytecode = True inline_level = None + has_dmp = False def parse_code_data(self, arg): m = re.search('\w]+)[\.,] file \'(.+?)\'[\.,] line (\d+)> #(\d+) (\w+)', arg) if m is None: # a non-code loop, like StrLiteralSearch or something - self.bytecode_name = arg + if arg: + self.bytecode_name = arg else: self.name, self.filename, lineno, bytecode_no, self.bytecode_name = m.groups() self.startlineno = int(lineno) @@ -218,7 +220,7 @@ self.inputargs = inputargs self.chunks = chunks for chunk in self.chunks: - if chunk.filename is not None: + if chunk.bytecode_name is not None: self.startlineno = chunk.startlineno self.filename = chunk.filename self.name = chunk.name diff --git a/pypy/tool/jitlogparser/test/test_parser.py b/pypy/tool/jitlogparser/test/test_parser.py --- a/pypy/tool/jitlogparser/test/test_parser.py +++ b/pypy/tool/jitlogparser/test/test_parser.py @@ -283,3 +283,13 @@ assert loops[-1].count == 1234 assert loops[1].count == 123 assert loops[2].count == 12 + +def test_parse_nonpython(): + loop = parse(""" + [] + debug_merge_point(0, 'random') + debug_merge_point(0, ' #15 COMPARE_OP') + """) + f = Function.from_operations(loop.operations, LoopStorage()) + assert f.chunks[-1].filename == 'x.py' + assert f.filename is None diff --git a/pypy/tool/test/test_version.py b/pypy/tool/test/test_version.py --- a/pypy/tool/test/test_version.py +++ b/pypy/tool/test/test_version.py @@ -1,6 +1,22 @@ import os, 
sys import py -from pypy.tool.version import get_repo_version_info +from pypy.tool.version import get_repo_version_info, _get_hg_archive_version + +def test_hg_archival_version(tmpdir): + def version_for(name, **kw): + path = tmpdir.join(name) + path.write('\n'.join('%s: %s' % x for x in kw.items())) + return _get_hg_archive_version(str(path)) + + assert version_for('release', + tag='release-123', + node='000', + ) == ('PyPy', 'release-123', '000') + assert version_for('somebranch', + node='000', + branch='something', + ) == ('PyPy', 'something', '000') + def test_get_repo_version_info(): assert get_repo_version_info(None) diff --git a/pypy/tool/version.py b/pypy/tool/version.py --- a/pypy/tool/version.py +++ b/pypy/tool/version.py @@ -3,111 +3,139 @@ from subprocess import Popen, PIPE import pypy pypydir = os.path.dirname(os.path.abspath(pypy.__file__)) +pypyroot = os.path.dirname(pypydir) +default_retval = 'PyPy', '?', '?' + +def maywarn(err, repo_type='Mercurial'): + if not err: + return + + from pypy.tool.ansi_print import ansi_log + log = py.log.Producer("version") + py.log.setconsumer("version", ansi_log) + log.WARNING('Errors getting %s information: %s' % (repo_type, err)) def get_repo_version_info(hgexe=None): '''Obtain version information by invoking the 'hg' or 'git' commands.''' - # TODO: support extracting from .hg_archival.txt - - default_retval = 'PyPy', '?', '?' - pypyroot = os.path.abspath(os.path.join(pypydir, '..')) - - def maywarn(err, repo_type='Mercurial'): - if not err: - return - - from pypy.tool.ansi_print import ansi_log - log = py.log.Producer("version") - py.log.setconsumer("version", ansi_log) - log.WARNING('Errors getting %s information: %s' % (repo_type, err)) # Try to see if we can get info from Git if hgexe is not specified. 
if not hgexe: if os.path.isdir(os.path.join(pypyroot, '.git')): - gitexe = py.path.local.sysfind('git') - if gitexe: - try: - p = Popen( - [str(gitexe), 'rev-parse', 'HEAD'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - except OSError, e: - maywarn(e, 'Git') - return default_retval - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return default_retval - revision_id = p.stdout.read().strip()[:12] - p = Popen( - [str(gitexe), 'describe', '--tags', '--exact-match'], - stdout=PIPE, stderr=PIPE, cwd=pypyroot - ) - if p.wait() != 0: - p = Popen( - [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, - cwd=pypyroot - ) - if p.wait() != 0: - maywarn(p.stderr.read(), 'Git') - return 'PyPy', '?', revision_id - branch = '?' - for line in p.stdout.read().strip().split('\n'): - if line.startswith('* '): - branch = line[1:].strip() - if branch == '(no branch)': - branch = '?' - break - return 'PyPy', branch, revision_id - return 'PyPy', p.stdout.read().strip(), revision_id + return _get_git_version() # Fallback to trying Mercurial. if hgexe is None: hgexe = py.path.local.sysfind('hg') - if not os.path.isdir(os.path.join(pypyroot, '.hg')): + if os.path.isfile(os.path.join(pypyroot, '.hg_archival.txt')): + return _get_hg_archive_version(os.path.join(pypyroot, '.hg_archival.txt')) + elif not os.path.isdir(os.path.join(pypyroot, '.hg')): maywarn('Not running from a Mercurial repository!') return default_retval elif not hgexe: maywarn('Cannot find Mercurial command!') return default_retval else: - env = dict(os.environ) - # get Mercurial into scripting mode - env['HGPLAIN'] = '1' - # disable user configuration, extensions, etc. 
- env['HGRCPATH'] = os.devnull + return _get_hg_version(hgexe) - try: - p = Popen([str(hgexe), 'version', '-q'], - stdout=PIPE, stderr=PIPE, env=env) - except OSError, e: - maywarn(e) - return default_retval - if not p.stdout.read().startswith('Mercurial Distributed SCM'): - maywarn('command does not identify itself as Mercurial') - return default_retval +def _get_hg_version(hgexe): + env = dict(os.environ) + # get Mercurial into scripting mode + env['HGPLAIN'] = '1' + # disable user configuration, extensions, etc. + env['HGRCPATH'] = os.devnull - p = Popen([str(hgexe), 'id', '-i', pypyroot], + try: + p = Popen([str(hgexe), 'version', '-q'], stdout=PIPE, stderr=PIPE, env=env) - hgid = p.stdout.read().strip() + except OSError, e: + maywarn(e) + return default_retval + + if not p.stdout.read().startswith('Mercurial Distributed SCM'): + maywarn('command does not identify itself as Mercurial') + return default_retval + + p = Popen([str(hgexe), 'id', '-i', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgid = p.stdout.read().strip() + maywarn(p.stderr.read()) + if p.wait() != 0: + hgid = '?' 
+ + p = Popen([str(hgexe), 'id', '-t', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] + maywarn(p.stderr.read()) + if p.wait() != 0: + hgtags = ['?'] + + if hgtags: + return 'PyPy', hgtags[0], hgid + else: + # use the branch instead + p = Popen([str(hgexe), 'id', '-b', pypyroot], + stdout=PIPE, stderr=PIPE, env=env) + hgbranch = p.stdout.read().strip() maywarn(p.stderr.read()) + + return 'PyPy', hgbranch, hgid + + +def _get_hg_archive_version(path): + fp = open(path) + try: + data = dict(x.split(': ', 1) for x in fp.read().splitlines()) + finally: + fp.close() + if 'tag' in data: + return 'PyPy', data['tag'], data['node'] + else: + return 'PyPy', data['branch'], data['node'] + + +def _get_git_version(): + #XXX: this function is a untested hack, + # so the git mirror tav made will work + gitexe = py.path.local.sysfind('git') + if not gitexe: + return default_retval + + try: + p = Popen( + [str(gitexe), 'rev-parse', 'HEAD'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + except OSError, e: + maywarn(e, 'Git') + return default_retval + if p.wait() != 0: + maywarn(p.stderr.read(), 'Git') + return default_retval + revision_id = p.stdout.read().strip()[:12] + p = Popen( + [str(gitexe), 'describe', '--tags', '--exact-match'], + stdout=PIPE, stderr=PIPE, cwd=pypyroot + ) + if p.wait() != 0: + p = Popen( + [str(gitexe), 'branch'], stdout=PIPE, stderr=PIPE, + cwd=pypyroot + ) if p.wait() != 0: - hgid = '?' + maywarn(p.stderr.read(), 'Git') + return 'PyPy', '?', revision_id + branch = '?' + for line in p.stdout.read().strip().split('\n'): + if line.startswith('* '): + branch = line[1:].strip() + if branch == '(no branch)': + branch = '?' 
+ break + return 'PyPy', branch, revision_id + return 'PyPy', p.stdout.read().strip(), revision_id - p = Popen([str(hgexe), 'id', '-t', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgtags = [t for t in p.stdout.read().strip().split() if t != 'tip'] - maywarn(p.stderr.read()) - if p.wait() != 0: - hgtags = ['?'] - if hgtags: - return 'PyPy', hgtags[0], hgid - else: - # use the branch instead - p = Popen([str(hgexe), 'id', '-b', pypyroot], - stdout=PIPE, stderr=PIPE, env=env) - hgbranch = p.stdout.read().strip() - maywarn(p.stderr.read()) - - return 'PyPy', hgbranch, hgid +if __name__ == '__main__': + print get_repo_version_info() diff --git a/pypy/translator/c/src/profiling.c b/pypy/translator/c/src/profiling.c --- a/pypy/translator/c/src/profiling.c +++ b/pypy/translator/c/src/profiling.c @@ -29,33 +29,33 @@ profiling_setup = 0; } } - -#elif defined(_WIN32) -#include - -DWORD_PTR base_affinity_mask; -int profiling_setup = 0; - -void pypy_setup_profiling() { - if (!profiling_setup) { - DWORD_PTR affinity_mask, system_affinity_mask; - GetProcessAffinityMask(GetCurrentProcess(), - &base_affinity_mask, &system_affinity_mask); - affinity_mask = 1; - /* Pick one cpu allowed by the system */ - if (system_affinity_mask) - while ((affinity_mask & system_affinity_mask) == 0) - affinity_mask <<= 1; - SetProcessAffinityMask(GetCurrentProcess(), affinity_mask); - profiling_setup = 1; - } -} - -void pypy_teardown_profiling() { - if (profiling_setup) { - SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask); - profiling_setup = 0; - } + +#elif defined(_WIN32) +#include + +DWORD_PTR base_affinity_mask; +int profiling_setup = 0; + +void pypy_setup_profiling() { + if (!profiling_setup) { + DWORD_PTR affinity_mask, system_affinity_mask; + GetProcessAffinityMask(GetCurrentProcess(), + &base_affinity_mask, &system_affinity_mask); + affinity_mask = 1; + /* Pick one cpu allowed by the system */ + if (system_affinity_mask) + while ((affinity_mask & system_affinity_mask) == 
0) + affinity_mask <<= 1; + SetProcessAffinityMask(GetCurrentProcess(), affinity_mask); + profiling_setup = 1; + } +} + +void pypy_teardown_profiling() { + if (profiling_setup) { + SetProcessAffinityMask(GetCurrentProcess(), base_affinity_mask); + profiling_setup = 0; + } } #else diff --git a/pypy/translator/goal/translate.py b/pypy/translator/goal/translate.py --- a/pypy/translator/goal/translate.py +++ b/pypy/translator/goal/translate.py @@ -293,17 +293,18 @@ drv.exe_name = targetspec_dic['__name__'] + '-%(backend)s' # Double check to ensure we are not overwriting the current interpreter - try: - this_exe = py.path.local(sys.executable).new(ext='') - exe_name = drv.compute_exe_name() - samefile = this_exe.samefile(exe_name) - assert not samefile, ( - 'Output file %s is the currently running ' - 'interpreter (use --output=...)'% exe_name) - except EnvironmentError: - pass + goals = translateconfig.goals + if not goals or 'compile' in goals: + try: + this_exe = py.path.local(sys.executable).new(ext='') + exe_name = drv.compute_exe_name() + samefile = this_exe.samefile(exe_name) + assert not samefile, ( + 'Output file %s is the currently running ' + 'interpreter (use --output=...)'% exe_name) + except EnvironmentError: + pass - goals = translateconfig.goals try: drv.proceed(goals) finally: diff --git a/pypy/translator/jvm/builtin.py b/pypy/translator/jvm/builtin.py --- a/pypy/translator/jvm/builtin.py +++ b/pypy/translator/jvm/builtin.py @@ -84,6 +84,9 @@ (ootype.StringBuilder.__class__, "ll_build"): jvm.Method.v(jStringBuilder, "toString", (), jString), + (ootype.StringBuilder.__class__, "ll_getlength"): + jvm.Method.v(jStringBuilder, "length", (), jInt), + (ootype.String.__class__, "ll_hash"): jvm.Method.v(jString, "hashCode", (), jInt), diff --git a/pypy/translator/jvm/database.py b/pypy/translator/jvm/database.py --- a/pypy/translator/jvm/database.py +++ b/pypy/translator/jvm/database.py @@ -358,7 +358,7 @@ ootype.Unsigned:jvm.PYPYSERIALIZEUINT, 
ootype.SignedLongLong:jvm.LONGTOSTRINGL, ootype.UnsignedLongLong: jvm.PYPYSERIALIZEULONG, - ootype.Float:jvm.DOUBLETOSTRINGD, + ootype.Float:jvm.PYPYSERIALIZEDOUBLE, ootype.Bool:jvm.PYPYSERIALIZEBOOLEAN, ootype.Void:jvm.PYPYSERIALIZEVOID, ootype.Char:jvm.PYPYESCAPEDCHAR, diff --git a/pypy/translator/jvm/metavm.py b/pypy/translator/jvm/metavm.py --- a/pypy/translator/jvm/metavm.py +++ b/pypy/translator/jvm/metavm.py @@ -92,6 +92,7 @@ CASTS = { # FROM TO (ootype.Signed, ootype.UnsignedLongLong): jvm.I2L, + (ootype.Unsigned, ootype.UnsignedLongLong): jvm.I2L, (ootype.SignedLongLong, ootype.Signed): jvm.L2I, (ootype.UnsignedLongLong, ootype.Unsigned): jvm.L2I, (ootype.UnsignedLongLong, ootype.Signed): jvm.L2I, diff --git a/pypy/translator/jvm/opcodes.py b/pypy/translator/jvm/opcodes.py --- a/pypy/translator/jvm/opcodes.py +++ b/pypy/translator/jvm/opcodes.py @@ -101,6 +101,7 @@ 'jit_force_virtualizable': Ignore, 'jit_force_virtual': DoNothing, 'jit_force_quasi_immutable': Ignore, + 'jit_is_virtual': PushPrimitive(ootype.Bool, False), 'debug_assert': [], # TODO: implement? 
'debug_start_traceback': Ignore, diff --git a/pypy/translator/jvm/src/pypy/PyPy.java b/pypy/translator/jvm/src/pypy/PyPy.java --- a/pypy/translator/jvm/src/pypy/PyPy.java +++ b/pypy/translator/jvm/src/pypy/PyPy.java @@ -283,6 +283,14 @@ } } + public double pypy__longlong2float(long l) { + return Double.longBitsToDouble(l); + } + + public long pypy__float2longlong(double d) { + return Double.doubleToRawLongBits(d); + } + public double ooparse_float(String s) { try { return Double.parseDouble(s); @@ -353,6 +361,19 @@ return "False"; } + public static String serialize_double(double d) { + if (Double.isNaN(d)) { + return "float(\"nan\")"; + } else if (Double.isInfinite(d)) { + if (d > 0) + return "float(\"inf\")"; + else + return "float(\"-inf\")"; + } else { + return Double.toString(d); + } + } + private static String format_char(char c) { String res = "\\x"; if (c <= 0x0F) res = res + "0"; diff --git a/pypy/translator/jvm/test/runtest.py b/pypy/translator/jvm/test/runtest.py --- a/pypy/translator/jvm/test/runtest.py +++ b/pypy/translator/jvm/test/runtest.py @@ -56,6 +56,7 @@ # CLI could-be duplicate class JvmGeneratedSourceWrapper(object): + def __init__(self, gensrc): """ gensrc is an instance of JvmGeneratedSource """ self.gensrc = gensrc diff --git a/pypy/translator/jvm/test/test_builder.py b/pypy/translator/jvm/test/test_builder.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_builder.py @@ -0,0 +1,7 @@ +from pypy.translator.jvm.test.runtest import JvmTest +from pypy.rpython.test.test_rbuilder import BaseTestStringBuilder +import py + +class TestJvmStringBuilder(JvmTest, BaseTestStringBuilder): + def test_append_charpsize(self): + py.test.skip("append_charpsize(): not implemented on ootype") diff --git a/pypy/translator/jvm/test/test_longlong2float.py b/pypy/translator/jvm/test/test_longlong2float.py new file mode 100644 --- /dev/null +++ b/pypy/translator/jvm/test/test_longlong2float.py @@ -0,0 +1,20 @@ +from 
pypy.translator.jvm.test.runtest import JvmTest +from pypy.rlib.longlong2float import * +from pypy.rlib.test.test_longlong2float import enum_floats +from pypy.rlib.test.test_longlong2float import fn as float2longlong2float +import py + +class TestLongLong2Float(JvmTest): + + def test_float2longlong_and_longlong2float(self): + def func(f): + return float2longlong2float(f) + + for f in enum_floats(): + assert repr(f) == repr(self.interpret(func, [f])) + + def test_uint2singlefloat(self): + py.test.skip("uint2singlefloat is not implemented in ootype") + + def test_singlefloat2uint(self): + py.test.skip("singlefloat2uint is not implemented in ootype") diff --git a/pypy/translator/jvm/typesystem.py b/pypy/translator/jvm/typesystem.py --- a/pypy/translator/jvm/typesystem.py +++ b/pypy/translator/jvm/typesystem.py @@ -955,6 +955,7 @@ PYPYSERIALIZEUINT = Method.s(jPyPy, 'serialize_uint', (jInt,), jString) PYPYSERIALIZEULONG = Method.s(jPyPy, 'serialize_ulonglong', (jLong,),jString) PYPYSERIALIZEVOID = Method.s(jPyPy, 'serialize_void', (), jString) +PYPYSERIALIZEDOUBLE = Method.s(jPyPy, 'serialize_double', (jDouble,), jString) PYPYESCAPEDCHAR = Method.s(jPyPy, 'escaped_char', (jChar,), jString) PYPYESCAPEDUNICHAR = Method.s(jPyPy, 'escaped_unichar', (jChar,), jString) PYPYESCAPEDSTRING = Method.s(jPyPy, 'escaped_string', (jString,), jString) diff --git a/pypy/translator/oosupport/test_template/cast.py b/pypy/translator/oosupport/test_template/cast.py --- a/pypy/translator/oosupport/test_template/cast.py +++ b/pypy/translator/oosupport/test_template/cast.py @@ -13,6 +13,9 @@ def to_longlong(x): return r_longlong(x) +def to_ulonglong(x): + return r_ulonglong(x) + def uint_to_int(x): return intmask(x) @@ -56,6 +59,9 @@ def test_unsignedlonglong_to_unsigned4(self): self.check(to_uint, [r_ulonglong(18446744073709551615l)]) # max 64 bit num + def test_unsigned_to_usignedlonglong(self): + self.check(to_ulonglong, [r_uint(42)]) + def test_uint_to_int(self): self.check(uint_to_int, 
[r_uint(sys.maxint+1)]) From noreply at buildbot.pypy.org Sun Jan 22 20:57:16 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 20:57:16 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Update _ctypes_test.c from CPython 2.7.2 Message-ID: <20120122195716.060E1821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51664:ead17cd9edbf Date: 2012-01-22 20:49 +0100 http://bitbucket.org/pypy/pypy/changeset/ead17cd9edbf/ Log: Update _ctypes_test.c from CPython 2.7.2 diff --git a/lib_pypy/_ctypes_test.c b/lib_pypy/_ctypes_test.c --- a/lib_pypy/_ctypes_test.c +++ b/lib_pypy/_ctypes_test.c @@ -26,6 +26,20 @@ /* some functions handy for testing */ +EXPORT(int) +_testfunc_cbk_reg_int(int a, int b, int c, int d, int e, + int (*func)(int, int, int, int, int)) +{ + return func(a*a, b*b, c*c, d*d, e*e); +} + +EXPORT(double) +_testfunc_cbk_reg_double(double a, double b, double c, double d, double e, + double (*func)(double, double, double, double, double)) +{ + return func(a*a, b*b, c*c, d*d, e*e); +} + EXPORT(void)testfunc_array(int values[4]) { printf("testfunc_array %d %d %d %d\n", From noreply at buildbot.pypy.org Sun Jan 22 20:57:17 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 20:57:17 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: json: Rename some functions in our pure-python implementation: Message-ID: <20120122195717.38E6E821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51665:c000c3332cb0 Date: 2012-01-22 20:56 +0100 http://bitbucket.org/pypy/pypy/changeset/c000c3332cb0/ Log: json: Rename some functions in our pure-python implementation: in CPython, encode_basestring() is supposed to add double quotes. 
diff --git a/lib-python/modified-2.7/json/encoder.py b/lib-python/modified-2.7/json/encoder.py --- a/lib-python/modified-2.7/json/encoder.py +++ b/lib-python/modified-2.7/json/encoder.py @@ -23,15 +23,16 @@ INFINITY = float('1e66666') FLOAT_REPR = repr -def encode_basestring(s): +def raw_encode_basestring(s): """Return a JSON representation of a Python string """ def replace(match): return ESCAPE_DCT[match.group(0)] return ESCAPE.sub(replace, s) +encode_basestring = lambda s: '"' + raw_encode_basestring(s) + '"' -def encode_basestring_ascii(s): +def raw_encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -54,8 +55,8 @@ if ESCAPE_ASCII.search(s): return str(ESCAPE_ASCII.sub(replace, s)) return s -py_encode_basestring_ascii = lambda s: '"' + encode_basestring_ascii(s) + '"' -c_encode_basestring_ascii = None +encode_basestring_ascii = lambda s: '"' + raw_encode_basestring_ascii(s) + '"' + class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -137,9 +138,9 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii if ensure_ascii: - self.encoder = encode_basestring_ascii + self.encoder = raw_encode_basestring_ascii else: - self.encoder = encode_basestring + self.encoder = raw_encode_basestring if encoding != 'utf-8': orig_encoder = self.encoder def encoder(o): From noreply at buildbot.pypy.org Sun Jan 22 21:54:07 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 21:54:07 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Allow both str.__add__ to raise TypeError or return NotImplemented. Message-ID: <20120122205407.9B85C821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51666:c84666d936d2 Date: 2012-01-22 21:10 +0100 http://bitbucket.org/pypy/pypy/changeset/c84666d936d2/ Log: Allow both str.__add__ to raise TypeError or return NotImplemented. 
IMO the point of the test is to check that str.__add__ does not crash or use str() instead. diff --git a/lib-python/modified-2.7/test/test_descr.py b/lib-python/modified-2.7/test/test_descr.py --- a/lib-python/modified-2.7/test/test_descr.py +++ b/lib-python/modified-2.7/test/test_descr.py @@ -4592,8 +4592,12 @@ str.split(fake_str) # call a slot wrapper descriptor - with self.assertRaises(TypeError): - str.__add__(fake_str, "abc") + try: + r = str.__add__(fake_str, "abc") + except TypeError: + pass + else: + self.assertEqual(r, NotImplemented) class DictProxyTests(unittest.TestCase): From noreply at buildbot.pypy.org Sun Jan 22 22:10:29 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 22 Jan 2012 22:10:29 +0100 (CET) Subject: [pypy-commit] buildbot default: Added the numready script. Message-ID: <20120122211029.4DC65821FA@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r608:4232c21cfab6 Date: 2012-01-22 15:10 -0600 http://bitbucket.org/pypy/buildbot/changeset/4232c21cfab6/ Log: Added the numready script. diff --git a/numready.py b/numready.py new file mode 100644 --- /dev/null +++ b/numready.py @@ -0,0 +1,205 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +""" +This should be run under PyPy. 
+""" + +import platform +import subprocess +import sys +import tempfile +import webbrowser +from collections import OrderedDict + +import jinja2 + + +MODULE_SEARCH_CODE = """ +import types +import {modname} as numpy + +for name in dir(numpy): + if name.startswith("_"): + continue + obj = getattr(numpy, name) + kind = "{kinds[UNKNOWN]}" + if isinstance(obj, types.TypeType): + kind = "{kinds[TYPE]}" + print kind, ":", name +""" + +ATTR_SEARCH_CODE = """ +import types +import {modname} as numpy + +obj = getattr(numpy, "{name}") +for name in dir(obj): + if name.startswith("_"): + continue + sub_obj = getattr(obj, name) + kind = "{kinds[UNKNOWN]}" + if isinstance(sub_obj, types.TypeType): + kind = "{kinds[TYPE]}" + print kind, ":", name +""" + +KINDS = { + "UNKNOWN": "U", + "TYPE": "T", +} + +PAGE_TEMPLATE = u""" + + + + NumPyPy Status + + + + +

+<h1>NumPyPy Status</h1>
+<table>
+  <tr>
+    <th></th>
+    <th>PyPy</th>
+  </tr>
+  {% for item in all_items %}
+  <tr>
+    <td>{{ item.name }}</td>
+    <td>{% if item.pypy_exists %}&#x2714;{% else %}&#x2716;{% endif %}</td>
+  </tr>
+  {% if item.subitems %}
+  {% for item in item.subitems %}
+  <tr>
+    <td>&nbsp;&nbsp;{{ item.name }}</td>
+    <td>{% if item.pypy_exists %}&#x2714;{% else %}&#x2716;{% endif %}</td>
+  </tr>
+  {% endfor %}
+  {% endif %}
+  {% endfor %}
+</table>
    + + +""" + +class SearchableSet(object): + def __init__(self, items=()): + self._items = {} + for item in items: + self.add(item) + + def __iter__(self): + return iter(self._items) + + def __contains__(self, other): + return other in self._items + + def __getitem__(self, idx): + return self._items[idx] + + def add(self, item): + self._items[item] = item + +class Item(object): + def __init__(self, name, kind, subitems=None): + self.name = name + self.kind = kind + self.subitems = subitems + + def __hash__(self): + return hash(self.name) + + def __eq__(self, other): + return self.name == other.name + + +class ItemStatus(object): + def __init__(self, name, pypy_exists, subitems=None): + self.name = name + self.pypy_exists = pypy_exists + self.subitems = subitems + + def __lt__(self, other): + return self.name < other.name + +def find_numpy_attrs(python, modname, name): + lines = subprocess.check_output( + [python, "-c", ATTR_SEARCH_CODE.format(modname=modname, kinds=KINDS, name=name)] + ).splitlines() + items = SearchableSet() + for line in lines: + kind, name = line.split(" : ", 1) + items.add(Item(name, kind)) + return items + +def find_numpy_items(python, modname="numpy"): + lines = subprocess.check_output( + [python, "-c", MODULE_SEARCH_CODE.format(modname=modname, kinds=KINDS)] + ).splitlines() + items = SearchableSet() + for line in lines: + kind, name = line.split(" : ", 1) + subitems = None + if kind == KINDS["TYPE"]: + subitems = find_numpy_attrs(python, modname, name) + items.add(Item(name, kind, subitems)) + return items + +def main(argv): + assert platform.python_implementation() == "PyPy" + + cpy_items = find_numpy_items("/usr/bin/python") + pypy_items = find_numpy_items(sys.executable, "numpypy") + all_items = [] + + for item in cpy_items: + pypy_exists = item in pypy_items + subitems = None + if item.subitems: + subitems = [] + for sub in item.subitems: + subitems.append( + ItemStatus(sub.name, pypy_exists=pypy_exists and pypy_items[item].subitems 
and sub in pypy_items[item].subitems) + ) + subitems = sorted(subitems) + all_items.append( + ItemStatus(item.name, pypy_exists=item in pypy_items, subitems=subitems) + ) + + html = jinja2.Template(PAGE_TEMPLATE).render(all_items=sorted(all_items)) + with tempfile.NamedTemporaryFile(delete=False) as f: + f.write(html.encode("utf-8")) + webbrowser.open_new_tab(f.name) + + +if __name__ == '__main__': + main(sys.argv) \ No newline at end of file From noreply at buildbot.pypy.org Sun Jan 22 22:26:52 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Sun, 22 Jan 2012 22:26:52 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: Fix for mmap when trying to specify an offset that's past the end of a file. Message-ID: <20120122212652.BF356821FA@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: merge-2.7.2 Changeset: r51667:b0b180ee94ea Date: 2012-01-22 15:26 -0600 http://bitbucket.org/pypy/pypy/changeset/b0b180ee94ea/ Log: Fix for mmap when trying to specify an offset that's past the end of a file. 
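The failure mode being fixed can be reproduced from application code (a sketch assuming a POSIX system; the offset must be a multiple of `mmap.ALLOCATIONGRANULARITY`, which 2147418112 is):

```python
import mmap
import os
import tempfile

# Create a file much smaller than the offset we will ask for.
fd, path = tempfile.mkstemp()
os.write(fd, b'm' * 115699)
os.close(fd)

err = None
with open(path, 'r+b') as f:
    try:
        # Length 0 means "map the whole file", but the offset points far
        # past EOF, so mmap must refuse with ValueError instead of mapping
        # a nonsensical negative-sized region.
        mmap.mmap(f.fileno(), 0, offset=2147418112)
    except ValueError as e:
        err = e
os.unlink(path)
```

Before the change, rmmap computed `map_size` from the file size without checking the offset against it; the new guard raises the same ValueError CPython does.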
diff --git a/pypy/module/mmap/test/test_mmap.py b/pypy/module/mmap/test/test_mmap.py --- a/pypy/module/mmap/test/test_mmap.py +++ b/pypy/module/mmap/test/test_mmap.py @@ -8,7 +8,7 @@ space = gettestobjspace(usemodules=('mmap',)) cls.space = space cls.w_tmpname = space.wrap(str(udir.join('mmap-'))) - + def test_page_size(self): import mmap assert mmap.PAGESIZE > 0 @@ -43,7 +43,7 @@ raises(TypeError, mmap, "foo") raises(TypeError, mmap, 0, "foo") - + if os.name == "posix": raises(ValueError, mmap, 0, 1, 2, 3, 4) raises(TypeError, mmap, 0, 1, 2, 3, "foo", 5) @@ -72,7 +72,7 @@ from mmap import mmap f = open(self.tmpname + "a", "w+") - + f.write("c") f.flush() raises(ValueError, mmap, f.fileno(), 123) @@ -81,18 +81,18 @@ def test_create(self): from mmap import mmap f = open(self.tmpname + "b", "w+") - + f.write("c") f.flush() m = mmap(f.fileno(), 1) assert m.read(99) == "c" - + f.close() def test_close(self): from mmap import mmap f = open(self.tmpname + "c", "w+") - + f.write("c") f.flush() m = mmap(f.fileno(), 1) @@ -131,7 +131,7 @@ def test_read(self): from mmap import mmap f = open(self.tmpname + "f", "w+") - + f.write("foobar") f.flush() m = mmap(f.fileno(), 6) @@ -217,7 +217,7 @@ def test_is_modifiable(self): import mmap f = open(self.tmpname + "h", "w+") - + f.write("foobar") f.flush() m = mmap.mmap(f.fileno(), 6, access=mmap.ACCESS_READ) @@ -229,7 +229,7 @@ def test_seek(self): from mmap import mmap f = open(self.tmpname + "i", "w+") - + f.write("foobar") f.flush() m = mmap(f.fileno(), 6) @@ -270,7 +270,7 @@ def test_write_byte(self): import mmap f = open(self.tmpname + "k", "w+") - + f.write("foobar") f.flush() m = mmap.mmap(f.fileno(), 6, access=mmap.ACCESS_READ) @@ -286,7 +286,7 @@ def test_size(self): from mmap import mmap f = open(self.tmpname + "l", "w+") - + f.write("foobar") f.flush() m = mmap(f.fileno(), 5) @@ -297,7 +297,7 @@ def test_tell(self): from mmap import mmap f = open(self.tmpname + "m", "w+") - + f.write("c") f.flush() m = mmap(f.fileno(), 1) 
@@ -308,7 +308,7 @@ def test_flush(self): from mmap import mmap f = open(self.tmpname + "n", "w+") - + f.write("foobar") f.flush() m = mmap(f.fileno(), 6) @@ -319,10 +319,18 @@ m.close() f.close() + def test_length_0_large_offset(self): + import mmap + + with open(self.tmpname, "wb") as f: + f.write(115699 * 'm') + with open(self.tmpname, "w+b") as f: + raises(ValueError, mmap.mmap, f.fileno(), 0, offset=2147418112) + def test_move(self): import mmap f = open(self.tmpname + "o", "w+") - + f.write("foobar") f.flush() m = mmap.mmap(f.fileno(), 6, access=mmap.ACCESS_READ) @@ -339,15 +347,15 @@ assert a == "frarar" m.close() f.close() - + def test_resize(self): import sys if ("darwin" in sys.platform) or ("freebsd" in sys.platform): skip("resize does not work under OSX or FreeBSD") - + import mmap import os - + f = open(self.tmpname + "p", "w+") f.write("foobar") f.flush() @@ -368,10 +376,10 @@ import sys if ("darwin" not in sys.platform) and ("freebsd" not in sys.platform): skip("resize works under not OSX or FreeBSD") - + import mmap import os - + f = open(self.tmpname + "p", "w+") f.write("foobar") f.flush() @@ -388,7 +396,7 @@ def test_len(self): from mmap import mmap - + f = open(self.tmpname + "q", "w+") f.write("foobar") f.flush() @@ -397,14 +405,14 @@ assert len(m) == 6 m.close() f.close() - + def test_get_item(self): from mmap import mmap - + f = open(self.tmpname + "r", "w+") f.write("foobar") f.flush() - + m = mmap(f.fileno(), 6) fn = lambda: m["foo"] raises(TypeError, fn) @@ -416,10 +424,10 @@ assert m[4:1:-2] == 'ao' m.close() f.close() - + def test_set_item(self): import mmap - + f = open(self.tmpname + "s", "w+") f.write("foobar") f.flush() @@ -455,14 +463,14 @@ assert data == "yxxBaR" m.close() f.close() - + def test_del_item(self): from mmap import mmap - + f = open(self.tmpname + "t", "w+") f.write("foobar") f.flush() - + m = mmap(f.fileno(), 6) def fn(): del m["foo"] raises(TypeError, fn) @@ -475,11 +483,11 @@ def test_concatenation(self): from mmap 
import mmap - + f = open(self.tmpname + "u", "w+") f.write("foobar") f.flush() - + m = mmap(f.fileno(), 6) def fn(): m + 1 raises((SystemError, TypeError), fn) # SystemError is in CPython, @@ -492,11 +500,11 @@ def test_repeatition(self): from mmap import mmap - + f = open(self.tmpname + "v", "w+") f.write("foobar") f.flush() - + m = mmap(f.fileno(), 6) def fn(): m * 1 raises((SystemError, TypeError), fn) # SystemError is in CPython, @@ -506,14 +514,14 @@ raises((SystemError, TypeError), fn) m.close() f.close() - + def test_slicing(self): from mmap import mmap f = open(self.tmpname + "v", "w+") f.write("foobar") f.flush() - + f.seek(0) m = mmap(f.fileno(), 6) assert m[-3:7] == "bar" @@ -589,11 +597,11 @@ from mmap import PAGESIZE import sys import os - + filename = self.tmpname + "w" - + f = open(filename, "w+") - + # write 2 pages worth of data to the file f.write('\0' * PAGESIZE) f.write('foo') @@ -601,24 +609,24 @@ f.flush() m = mmap.mmap(f.fileno(), 2 * PAGESIZE) f.close() - + # sanity checks assert m.find("foo") == PAGESIZE assert len(m) == 2 * PAGESIZE assert m[0] == '\0' assert m[0:3] == '\0\0\0' - + # modify the file's content m[0] = '3' m[PAGESIZE+3:PAGESIZE+3+3] = 'bar' - + # check that the modification worked assert m[0] == '3' assert m[0:3] == '3\0\0' assert m[PAGESIZE-1:PAGESIZE+7] == '\0foobar\0' m.flush() - + # test seeking around m.seek(0,0) assert m.tell() == 0 @@ -626,28 +634,28 @@ assert m.tell() == 42 m.seek(0, 2) assert m.tell() == len(m) - + raises(ValueError, m.seek, -1) raises(ValueError, m.seek, 1, 2) raises(ValueError, m.seek, -len(m) - 1, 2) - + # try resizing map if not (("darwin" in sys.platform) or ("freebsd" in sys.platform)): m.resize(512) - + assert len(m) == 512 raises(ValueError, m.seek, 513, 0) - + # check that the underlying file is truncated too f = open(filename) f.seek(0, 2) assert f.tell() == 512 f.close() assert m.size() == 512 - + m.close() f.close() - + # test access=ACCESS_READ mapsize = 10 f = open(filename, "wb") @@ 
-667,21 +675,21 @@ if not (("darwin" in sys.platform) or ("freebsd" in sys.platform)): raises(TypeError, m.resize, 2 * mapsize) assert open(filename, "rb").read() == 'a' * mapsize - + # opening with size too big f = open(filename, "r+b") if not os.name == "nt": # this should work under windows raises(ValueError, mmap.mmap, f.fileno(), mapsize + 1) f.close() - + # if _MS_WINDOWS: # # repair damage from the resizing test. # f = open(filename, 'r+b') # f.truncate(mapsize) # f.close() m.close() - + # test access=ACCESS_WRITE" f = open(filename, "r+b") m = mmap.mmap(f.fileno(), mapsize, access=mmap.ACCESS_WRITE) @@ -696,7 +704,7 @@ stuff = f.read() f.close() assert stuff == 'c' * mapsize - + # test access=ACCESS_COPY f = open(filename, "r+b") m = mmap.mmap(f.fileno(), mapsize, access=mmap.ACCESS_COPY) @@ -710,12 +718,12 @@ raises(TypeError, m.resize, 2 * mapsize) m.close() f.close() - + # test invalid access f = open(filename, "r+b") raises(ValueError, mmap.mmap, f.fileno(), mapsize, access=4) f.close() - + # test incompatible parameters if os.name == "posix": f = open(filename, "r+b") @@ -723,10 +731,10 @@ prot=mmap.PROT_READ, access=mmap.ACCESS_WRITE) f.close() - + # bad file descriptor raises(EnvironmentError, mmap.mmap, -2, 4096) - + # do a tougher .find() test. SF bug 515943 pointed out that, in 2.2, # searching for data with embedded \0 bytes didn't work. 
f = open(filename, 'w+') @@ -736,14 +744,14 @@ f.flush() m = mmap.mmap(f.fileno(), n) f.close() - + for start in range(n + 1): for finish in range(start, n + 1): sl = data[start:finish] assert m.find(sl) == data.find(sl) assert m.find(sl + 'x') == -1 m.close() - + # test mapping of entire file by passing 0 for map length f = open(filename, "w+") f.write(2**16 * 'm') @@ -754,7 +762,7 @@ assert m.read(2**16) == 2**16 * "m" m.close() f.close() - + # make move works everywhere (64-bit format problem earlier) f = open(filename, 'w+') f.write("ABCDEabcde") diff --git a/pypy/rlib/rmmap.py b/pypy/rlib/rmmap.py --- a/pypy/rlib/rmmap.py +++ b/pypy/rlib/rmmap.py @@ -43,7 +43,7 @@ constants = {} if _POSIX: # constants, look in sys/mman.h and platform docs for the meaning - # some constants are linux only so they will be correctly exposed outside + # some constants are linux only so they will be correctly exposed outside # depending on the OS constant_names = ['MAP_SHARED', 'MAP_PRIVATE', 'PROT_READ', 'PROT_WRITE', @@ -136,7 +136,7 @@ ) SYSINFO_UNION = rffi.CStruct( - 'union SYSINFO_UNION', + 'union SYSINFO_UNION', ("dwOemId", DWORD), ("_struct_", SYSINFO_STRUCT), ) @@ -250,7 +250,7 @@ elif _POSIX: self.fd = -1 self.closed = False - + def check_valid(self): if _MS_WINDOWS: to_close = self.map_handle == INVALID_HANDLE @@ -259,11 +259,11 @@ if to_close: raise RValueError("map closed or invalid") - + def check_writeable(self): if not (self.access != ACCESS_READ): raise RTypeError("mmap can't modify a readonly memory map.") - + def check_resizeable(self): if not (self.access == ACCESS_WRITE or self.access == _ACCESS_DEFAULT): raise RTypeError("mmap can't resize a readonly or copy-on-write memory map.") @@ -273,7 +273,7 @@ assert size >= 0 self.data = data self.size = size - + def close(self): if _MS_WINDOWS: if self.size > 0: @@ -302,7 +302,7 @@ def unmapview(self): UnmapViewOfFile(self.getptr(0)) - + def read_byte(self): self.check_valid() @@ -312,7 +312,7 @@ return value else: 
raise RValueError("read byte out of range") - + def readline(self): self.check_valid() @@ -327,7 +327,7 @@ res = "".join([data[i] for i in range(self.pos, eol)]) self.pos += len(res) return res - + def read(self, num=-1): self.check_valid() @@ -389,10 +389,10 @@ def seek(self, pos, whence=0): self.check_valid() - + dist = pos how = whence - + if how == 0: # relative to start where = dist elif how == 1: # relative to current position @@ -404,16 +404,16 @@ if not (0 <= where <= self.size): raise RValueError("seek out of range") - + self.pos = where - + def tell(self): self.check_valid() return self.pos - + def file_size(self): self.check_valid() - + size = self.size if _MS_WINDOWS: if self.file_handle != INVALID_HANDLE: @@ -433,11 +433,11 @@ else: size = int(size) return size - + def write(self, data): - self.check_valid() + self.check_valid() self.check_writeable() - + data_len = len(data) if self.pos + data_len > self.size: raise RValueError("data out of range") @@ -447,10 +447,10 @@ for i in range(data_len): internaldata[start+i] = data[i] self.pos = start + data_len - + def write_byte(self, byte): self.check_valid() - + if len(byte) != 1: raise RTypeError("write_byte() argument must be char") @@ -491,14 +491,14 @@ if res == -1: errno = rposix.get_errno() raise OSError(errno, os.strerror(errno)) - + return 0 - + def move(self, dest, src, count): self.check_valid() - + self.check_writeable() - + # check boundings if (src < 0 or dest < 0 or count < 0 or src + count > self.size or dest + count > self.size): @@ -507,19 +507,19 @@ datasrc = self.getptr(src) datadest = self.getptr(dest) c_memmove(datadest, datasrc, count) - + def resize(self, newsize): self.check_valid() - + self.check_resizeable() - + if _POSIX: if not has_mremap: raise RValueError("mmap: resizing not available--no mremap()") - + # resize the underlying file first os.ftruncate(self.fd, self.offset + newsize) - + # now resize the mmap newdata = c_mremap(self.getptr(0), self.size, newsize, MREMAP_MAYMOVE 
or 0) @@ -573,9 +573,9 @@ def len(self): self.check_valid() - + return self.size - + def getitem(self, index): self.check_valid() # simplified version, for rpython @@ -650,6 +650,8 @@ size = int(size) if stat.S_ISREG(mode): if map_size == 0: + if offset > st[stat.ST_SIZE]: + raise RValueError("mmap offset is greater than file size") map_size = size elif map_size > size: raise RValueError("mmap length is greater than file size") @@ -673,7 +675,7 @@ if res == rffi.cast(PTR, -1): errno = rposix.get_errno() raise OSError(errno, os.strerror(errno)) - + m.setdata(res, map_size) return m @@ -704,7 +706,7 @@ alloc._annenforceargs_ = (int,) free = c_munmap_safe - + elif _MS_WINDOWS: def mmap(fileno, length, tagname="", access=_ACCESS_DEFAULT, offset=0): # check size boundaries @@ -712,11 +714,11 @@ map_size = length if offset < 0: raise RValueError("negative offset") - + flProtect = 0 dwDesiredAccess = 0 fh = NULL_HANDLE - + if access == ACCESS_READ: flProtect = PAGE_READONLY dwDesiredAccess = FILE_MAP_READ @@ -728,7 +730,7 @@ dwDesiredAccess = FILE_MAP_COPY else: raise RValueError("mmap invalid access parameter.") - + # assume -1 and 0 both mean invalid file descriptor # to 'anonymously' map memory. if fileno != -1 and fileno != 0: @@ -739,7 +741,7 @@ # Win9x appears to need us seeked to zero # SEEK_SET = 0 # libc._lseek(fileno, 0, SEEK_SET) - + m = MMap(access, offset) m.file_handle = INVALID_HANDLE m.map_handle = INVALID_HANDLE @@ -752,7 +754,7 @@ res = DuplicateHandle(GetCurrentProcess(), # source process handle fh, # handle to be duplicated GetCurrentProcess(), # target process handle - handle_ref, # result + handle_ref, # result 0, # access - ignored due to options value False, # inherited by child procs? 
DUPLICATE_SAME_ACCESS) # options @@ -761,7 +763,7 @@ m.file_handle = handle_ref[0] finally: lltype.free(handle_ref, flavor='raw') - + if not map_size: low, high = _get_file_size(fh) if _64BIT: @@ -775,7 +777,7 @@ if tagname: m.tagname = tagname - + # DWORD is a 4-byte int. If int > 4-byte it must be divided if _64BIT: size_hi = (map_size + offset) >> 32 @@ -807,7 +809,7 @@ def alloc(map_size): """Allocate memory. This is intended to be used by the JIT, - so the memory has the executable bit set. + so the memory has the executable bit set. XXX implement me: it should get allocated internally in case of a sandboxed process """ @@ -825,5 +827,5 @@ def free(ptr, map_size): VirtualFree(ptr, 0, MEM_RELEASE) - + # register_external here? From noreply at buildbot.pypy.org Sun Jan 22 22:35:50 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Sun, 22 Jan 2012 22:35:50 +0100 (CET) Subject: [pypy-commit] pypy merge-2.7.2: CPython Issue #12100: Don't reset incremental encoders of CJK codecs at each call to encode(). Message-ID: <20120122213550.38D19821FA@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: merge-2.7.2 Changeset: r51668:0c697ef6b87f Date: 2012-01-22 22:32 +0100 http://bitbucket.org/pypy/pypy/changeset/0c697ef6b87f/ Log: CPython Issue #12100: Don't reset incremental encoders of CJK codecs at each call to encode(). 
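The intended behavior, encoder state carried across `encode()` calls, can be checked through the stdlib incremental codec API (a sketch; the expected bytes are the ones the test added in this changeset asserts):

```python
import codecs

text = (u'\u5df1\u6240\u4e0d\u6b32\uff0c\u52ff\u65bd\u65bc\u4eba\u3002'
        u'Bye.')

# Feed the text one character at a time. Without the fix, every call
# reset the HZ shift state, emitting a spurious ~{ ... ~} pair per
# character instead of one pair around the whole GB-encoded run.
enc = codecs.getincrementalencoder('hz')()
out = b''.join(enc.encode(ch) for ch in text)
out += enc.encode(u'', final=True)  # flush; a no-op here, already in ASCII mode

assert out == b'~{<:Ky2;S{#,NpJ)l6HK!#~}Bye.'
```

Only the final call (with `MBENC_RESET`) shifts the encoder back to its initial state, which is why `pypy_cjk_enc_reset` is now gated on that flag.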
diff --git a/pypy/module/_multibytecodec/c_codecs.py b/pypy/module/_multibytecodec/c_codecs.py --- a/pypy/module/_multibytecodec/c_codecs.py +++ b/pypy/module/_multibytecodec/c_codecs.py @@ -230,14 +230,14 @@ if ignore_error == 0: flags = MBENC_FLUSH | MBENC_RESET else: - flags = MBENC_RESET + flags = 0 while True: r = pypy_cjk_enc_chunk(encodebuf, flags) if r == 0 or r == ignore_error: break multibytecodec_encerror(encodebuf, r, errors, errorcb, namecb, unicodedata) - while True: + while flags & MBENC_RESET: r = pypy_cjk_enc_reset(encodebuf) if r == 0: break diff --git a/pypy/module/_multibytecodec/test/test_app_incremental.py b/pypy/module/_multibytecodec/test/test_app_incremental.py --- a/pypy/module/_multibytecodec/test/test_app_incremental.py +++ b/pypy/module/_multibytecodec/test/test_app_incremental.py @@ -129,6 +129,15 @@ r = e.encode(u"xyz\u5f95\u6c85") assert r == 'xyz~{abcd~}' + def test_encode_hz_noreset(self): + text = (u'\u5df1\u6240\u4e0d\u6b32\uff0c\u52ff\u65bd\u65bc\u4eba\u3002' + u'Bye.') + out = '' + e = self.IncrementalHzEncoder() + for c in text: + out += e.encode(c) + assert out == b'~{<:Ky2;S{#,NpJ)l6HK!#~}Bye.' 
+ def test_encode_hz_error(self): e = self.IncrementalHzEncoder() raises(UnicodeEncodeError, e.encode, u"\u4321", True) diff --git a/pypy/translator/c/src/cjkcodecs/multibytecodec.c b/pypy/translator/c/src/cjkcodecs/multibytecodec.c --- a/pypy/translator/c/src/cjkcodecs/multibytecodec.c +++ b/pypy/translator/c/src/cjkcodecs/multibytecodec.c @@ -187,7 +187,7 @@ Py_ssize_t r; Py_ssize_t inleft = (Py_ssize_t)(d->inbuf_end - d->inbuf); Py_ssize_t outleft = (Py_ssize_t)(d->outbuf_end - d->outbuf); - if (inleft == 0) + if (inleft == 0 && !(flags & MBENC_RESET)) return 0; r = d->codec->encode(&d->state, d->codec->config, &d->inbuf, inleft, &d->outbuf, outleft, flags); diff --git a/pypy/translator/c/src/cjkcodecs/multibytecodec.h b/pypy/translator/c/src/cjkcodecs/multibytecodec.h --- a/pypy/translator/c/src/cjkcodecs/multibytecodec.h +++ b/pypy/translator/c/src/cjkcodecs/multibytecodec.h @@ -84,6 +84,7 @@ #define MBERR_NOMEMORY (-4) /* out of memory */ #define MBENC_FLUSH 0x0001 /* encode all characters encodable */ +#define MBENC_RESET 0x0002 /* reset after an encoding session */ #define MBENC_MAX MBENC_FLUSH From noreply at buildbot.pypy.org Sun Jan 22 22:55:28 2012 From: noreply at buildbot.pypy.org (hodgestar) Date: Sun, 22 Jan 2012 22:55:28 +0100 (CET) Subject: [pypy-commit] benchmarks default: Add Genshi XML and text template benchmarks. Message-ID: <20120122215528.9F9C071027D@wyvern.cs.uni-duesseldorf.de> Author: Simon Cross Branch: Changeset: r161:fad651ead401 Date: 2012-01-22 23:50 +0200 http://bitbucket.org/pypy/benchmarks/changeset/fad651ead401/ Log: Add Genshi XML and text template benchmarks. 
diff too long, truncating to 10000 out of 32654 lines diff --git a/benchmarks.py b/benchmarks.py --- a/benchmarks.py +++ b/benchmarks.py @@ -52,6 +52,11 @@ extra_args=['--benchmark=' + name], iteration_scaling=0.1) +for name in ['xml', 'text']: + _register_new_bm('bm_genshi', 'genshi_' + name, + globals(), bm_env={'PYTHONPATH': relative('lib/genshi')}, + extra_args=['--benchmark=' + name]) + for name in ['float', 'nbody_modified', 'meteor-contest', 'fannkuch', 'spectral-norm', 'chaos', 'telco', 'go', 'pyflate-fast', 'raytrace-simple', 'crypto_pyaes', 'bm_mako', 'bm_chameleon', diff --git a/lib/genshi/COPYING b/lib/genshi/COPYING new file mode 100644 --- /dev/null +++ b/lib/genshi/COPYING @@ -0,0 +1,28 @@ +Copyright (C) 2006-2010 Edgewall Software +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions +are met: + + 1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. + 3. The name of the author may not be used to endorse or promote + products derived from this software without specific prior + written permission. + +THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS +OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY +DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE +GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER +IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR +OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN +IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/lib/genshi/ChangeLog b/lib/genshi/ChangeLog new file mode 100644 --- /dev/null +++ b/lib/genshi/ChangeLog @@ -0,0 +1,472 @@ +Version 0.6.1 +http://svn.edgewall.org/repos/genshi/tags/0.6.1/ +(???, from branches/stable/0.6.x) + + * Fix for error in how `HTMLFormFiller` would handle `textarea` elements if + no value was not supplied form them. + * The `HTMLFormFiller` now correctly handles check boxes and radio buttons + with an empty `value` attribute. + + +Version 0.6 +http://svn.edgewall.org/repos/genshi/tags/0.6.0/ +(Apr 22 2010, from branches/stable/0.6.x) + + * Support for Python 2.3 has been dropped. + * Rewrite of the XPath evaluation engine for better performance and improved + correctness. This is the result of integrating work done by Marcin Kurczych + during GSoC 2008. + * Updated the Python AST processing for template code evaluation to use the + `_ast` module instead of the deprecated `compiler` package, including an + adapter layer for Python 2.4. This, too, is the result of integrating work + done by Marcin Kurczych during GSoC 2008. + * Added caching in the serialization stage for improved performance in some + cases. + * Various improvements to the HTML sanitization filter. + * Fix problem with I18n filter that would get confused by expressions in + attribute values when inside an `i18n:msg` block (ticket #250). + * Fix problem with the transformation filter dropping events after the + selection (ticket #290). 
+ * `for` loops in template code blocks no longer establish their own locals + scope, meaning you can now access variables assigned in the loop outside + of the loop, just as you can in regular Python code (ticket #259). + * Import statements inside function definitions in template code blocks no + longer result in an UndefinedError when the imported name is accessed + (ticket #276). + * Fixed handling of relative URLs with fragment identifiers containing colons + in the `HTMLSanitizer` (ticket #274). + * Added an option to the `HTMLFiller` to also populate password fields. + * Match template processing no longer produces unwanted duplicate output in + some cases (ticket #254). + * Templates instantiated without a loader now get an implicit loader based on + their file path, or the current directory as a fallback (ticket #320). + * Added documentation for the `TemplateLoader`. + * Enhanced documentation for internationalization. + + +Version 0.5.1 +http://svn.edgewall.org/repos/genshi/tags/0.5.1/ +(Jul 9 2008, from branches/stable/0.5.x) + + * Fix problem with nested match templates not being applied when buffering + on the outer `py:match` is disabled. Thanks to Erik Bray for reporting the + problem and providing a test case! + * Fix problem in `Translator` filter that would cause the translation of + text nodes to fail if the translation function returned an object that was + not directly a string, but rather something like an instance of the + `LazyProxy` class in Babel (ticket #145). + * Fix problem with match templates incorrectly being applied multiple times. + * Includes from templates loaded via an absolute path now include the correct + file in nested directories as long if no search path has been configured + (ticket #240). + * Unbuffered match templates could result in parts of the matched content + being included in the output if the match template didn't actually consume + it via one or more calls to the `select()` function (ticket #243). 
+ + +Version 0.5 +http://svn.edgewall.org/repos/genshi/tags/0.5.0/ +(Jun 9 2008, from branches/stable/0.5.x) + + * Added #include directive for text templates (ticket #115). + * Added new markup transformation filter contributed by Alec Thomas. This + provides gorgeous jQuery-inspired stream transformation capabilities based + on XPath expressions. + * When using HTML or XHTML serialization, the `xml:lang` attribute is + automatically translated to the `lang` attribute which HTML user agents + understand. + * Added support for the XPath 2 `matches()` function in XPath expressions, + which allow matching against regular expressions. + * Support for Python code blocks in templates can now be disabled + (ticket #123). + * Includes are now processed when the template is parsed if possible, but + only if the template loader is not set to do automatic reloading. Included + templates are basically inlined into the including template, which can + speed up rendering of that template a bit. + * Added new syntax for text templates, which is more powerful and flexible + with respect to white-space and line breaks. It also supports Python code + blocks. The old syntax is still available and the default for now, but in a + future release the new syntax will become the default, and some time after + that the old syntax will be removed. + * Added support for passing optimization hints to `` directives, + which can speed up match templates in many cases, for example when a match + template should only be applied once to a stream, or when it should not be + applied recursively. + * Text templates now default to rendering as plain text; it is no longer + necessary to explicitly specify the "text" method to the `render()` or + `serialize()` method of the generated markup stream. 
+ * XInclude elements in markup templates now support the `parse` attribute; + when set to "xml" (the default), the include is processed as before, but + when set to "text", the included template is parsed as a text template using + the new syntax (ticket #101). + * Python code blocks inside match templates are now executed (ticket #155). + * The template engine plugin no longer adds the `default_doctype` when the + `fragment` parameter is `True`. + * The `striptags` function now also removes HTML/XML-style comments (ticket + #150). + * The `py:replace` directive can now also be used as an element, with an + attribute named `value` (ticket #144). + * The `TextSerializer` class no longer strips all markup in text by default, + so that it is still possible to use the Genshi `escape` function even with + text templates. The old behavior is available via the `strip_markup` option + of the serializer (ticket #146). + * Assigning to a variable named `data` in a Python code block no longer + breaks context lookup. + * The `Stream.render` now accepts an optional `out` parameter that can be + used to pass in a writable file-like object to use for assembling the + output, instead of building a big string and returning it. + * The XHTML serializer now strips `xml:space` attributes as they are only + allowed on very few tags. + * Match templates are now applied in a more controlled fashion: in the order + they are declared in the template source, all match templates up to (and + including) the matching template itself are applied to the matched content, + whereas the match templates declared after the matching template are only + applied to the generated content (ticket #186). + * The `TemplateLoader` class now provides an `_instantiate()` method that can + be overridden by subclasses to implement advanced template instantiation + logic (ticket #204). + * The search path of the `TemplateLoader` class can now contain ''load + functions'' in addition to path strings. 
A load function is passed the + name of the requested template file, and should return a file-like object + and some metadata. New load functions are supplied for loading from egg + package data, and loading from different loaders depending on the path + prefix of the requested filename (ticket #182). + * Match templates can now be processed without keeping the complete matched + content in memory, which could cause excessive memory use on long pages. + The buffering can be disabled using the new `buffer` optimization hint on + the `<py:match>` directive. + * Improve error reporting when accessing an attribute in a Python expression + raises an `AttributeError` (ticket #191). + * The `Markup` class now supports mappings for the right hand of the `%` (modulo) + operator in the same way the Python string classes do, except that the + substituted values are escaped. Also, the special constructor which took + positional arguments that would be substituted was removed. Thus the + `Markup` class now supports the same arguments as that of its `unicode` + base class (ticket #211). + * The `Template` class and its subclasses, as well as the interpolation API, + now take a `filepath` parameter instead of `basedir` (ticket #207). + * The `XHTMLSerializer` now has a `drop_xml_decl` option that defaults to + `True`. Setting it to `False` will cause any XML decl in the serialized + stream to be included in the output as it would for XML serialization. + * Add support for a protocol that would allow interoperability of different + Python packages that generate and/or consume markup, based on the special + `__html__()` method (ticket #202). + + +Version 0.4.4 +http://svn.edgewall.org/repos/genshi/tags/0.4.4/ +(Aug 14, 2007, from branches/stable/0.4.x) + + * Fixed augmented assignment to local variables in Python code blocks. + * Fixed handling of nested function and class definitions in Python code + blocks.
+ * Includes were not raising `TemplateNotFound` exceptions even when no + fallback had been specified. That has been corrected. + * The template loader now raises a `TemplateNotFound` error when a previously + cached template is removed or renamed, where it previously was passing up + an `OSError`. + * The Genshi I18n filter can be configured to only extract messages found in + `gettext` function calls, ignoring any text nodes and attribute values + (ticket #138). + + +Version 0.4.3 +http://svn.edgewall.org/repos/genshi/tags/0.4.3/ +(Jul 17 2007, from branches/stable/0.4.x) + + * The I18n filter no longer extracts or translates literal strings in + attribute values that also contain expressions. + * Added `loader_callback` option to plugin interface, which allows specifying + a callback function that the template loader should invoke whenever a new + template is loaded (ticket #130). Note that the value for this option can + not be specified as a string, only as an actual function object, which means + it is not available for use through configuration files. + * The I18n filter now extracts messages from gettext functions even inside + ignored tags (ticket #132). + * The HTML sanitizer now strips any CSS comments in style attributes, which + could previously be used to hide malicious property values. + * The HTML sanitizer now also removes any HTML comments encountered, as those + may be used to hide malicious payloads targeting a certain "innovative" + browser that goes and interprets the content of specially prepared comments. + * Attribute access in template expressions no longer silently ignores + exceptions other than `AttributeError` raised in the attribute accessor. + + +Version 0.4.2 +http://svn.edgewall.org/repos/genshi/tags/0.4.2/ +(Jun 20 2007, from branches/stable/0.4.x) + + * The `doctype` parameter of the markup serializers now also accepts the "name" + of the doctype as a string, in addition to the `(name, pubid, sysid)` tuple.
+ * The I18n filter was not replacing the original attributes with the + translation, but instead adding a second attribute with the same name. + * `TextTemplate` can now handle unicode source (ticket #125). + * A `<?python ?>` processing instruction containing trailing whitespace no + longer causes a syntax error (ticket #127). + * The I18n filter now skips the content of elements that have an `xml:lang` + attribute with a fixed string value. Basically, `xml:lang` can now be used + as a flag to mark specific sections as not needing localization. + * Added plugin for message extraction via Babel (http://babel.edgewall.org/). + + +Version 0.4.1 +http://svn.edgewall.org/repos/genshi/tags/0.4.1/ +(May 21 2007, from branches/stable/0.4.x) + + * Fix incorrect reference to translation function in the I18N filter. + * The `ET()` function now correctly handles attributes with a namespace. + * XML declarations are now processed internally, as well as written to the + output when XML serialization is used (ticket #111). + * Added the functions `encode()` and `get_serializer()` to the `genshi.output` + module, which provide a lower-level API to the functionality previously only + available through `Stream.render()` and `Stream.serialize()`. + * The `DocType` class now has a `get(name)` function that returns a `DOCTYPE` + tuple for a given string. + * Added frameset variants to the `DocType` constants for HTML 4.01 and XHTML + 1.0. + * Improved I18n extraction for pluralizable messages: for any translation + function with multiple string arguments (such as ``ngettext``), a single + item with a tuple of strings is yielded, instead of an item for each string + argument. + * The `HTMLFormFiller` stream filter no longer alters form elements for which + the data element contains no corresponding item. + * Code in `<?python ?>` processing instructions no longer gets the special + treatment as Python code in template expressions, i.e.
item and attribute + access are no longer interchangeable (which was broken in a number of ways + anyway, see ticket #113). This change does not affect expressions. + * Numerous fixes for the execution of Python code in `<?python ?>` processing + instructions (tickets #113 and #114). + * The `py:def` (and `#def`) directive now supports "star args" (i.e. `*args` + and `**kwargs`) in the function declaration (ticket #116). + + +Version 0.4 +http://svn.edgewall.org/repos/genshi/tags/0.4.0/ +(Apr 16 2007, from branches/stable/0.4.x) + + * New example applications for CherryPy and web.py. + * The template loader now uses an LRU cache to limit the number of cached + templates to a configurable maximum. Also, a race condition in the template + loader was fixed by adding locking. + * A new filter (genshi.filters.HTMLFormFiller) was added, which can populate + HTML forms based on a dictionary of values. + * The set of permitted tag and attribute names for the HTMLSanitizer can now + be configured per instance. + * The template engine plugin now supports a range of options for + configuration, for example to set the default serialization method, the + default output encoding, or the default DOCTYPE. + * The ElementTree adaptation function `ET()` has moved into the `genshi.input` + module. + * Allow `when` directives to omit the test expression as long as the + associated choose directive does have one. In that case, the when branch is + followed if the expression of the choose directive evaluates to a truth + value. + * Unsuccessful attribute or item lookups now return `Undefined` objects for + nicer error messages. + * Split up the `genshi.template` module into multiple modules inside the new + `genshi.template` package. + * Results of expression evaluation are no longer implicitly called if they + are callable. + * Instances of the `genshi.core.Attrs` class are now immutable (they are + subclasses of `tuple` instead of `list`).
+ * `MarkupTemplate`s can now be instantiated from markup streams, in addition + to strings and file-like objects (ticket #69). + * Improve handling of incorrectly nested tags in the HTML parser. + * Template includes can now be nested inside fallback content. + * Expressions can now contain dict literals (ticket #37). + * It is now possible to have one or more escaped dollar signs in front of a + full expression (ticket #92). + * The `Markup` class is now available by default in template expressions + (ticket #67). + * The handling of namespace declarations in XML/XHTML output has been improved. + * The `Attrs` class no longer automatically wraps all attribute names in + `QName` objects. This is now the responsibility of whoever is instantiating + `Attrs` objects (for example, stream filters and generators). + * Python code blocks are now supported using the `<?python ?>` processing + instruction (ticket #84). + * The way errors in template expressions are handled can now be configured. The + option `LenientLookup` provides the same forgiving mode used in previous + Genshi versions, while `StrictLookup` raises exceptions when undefined + variables or members are accessed. The lenient mode is still the default in + this version, but that may change in the future. (ticket #88) + * If a variable is not necessarily defined at the top level of the template + data, the new built-in functions `defined(key)` and `value_of(key, default)` + can be used so that the template also works in strict lookup mode. These + functions were previously only available when using Genshi via the template + engine plugin (for compatibility with Kid). + * `style` attributes are no longer allowed by the `HTMLSanitizer` by default. + If they are explicitly added to the set of safe attributes, any unicode + escapes in the attribute value are now handled properly.
+ * Namespace declarations on conditional elements (for example using a `py:if` + directive) are no longer moved to the following element when the element + originally carrying the declaration is removed from the stream (ticket #107). + * Added basic built-in support for internationalizing templates by providing + a new `Translator` class that can both extract localizable strings from a + stream, and replace those strings with their localizations at render time. + The code for this was largely taken from previous work done by Matt Good + and David Fraser. + + +Version 0.3.6 +http://svn.edgewall.org/repos/genshi/tags/0.3.6/ +(Dec 11 2006, from branches/stable/0.3.x) + + * The builder API now accepts streams as children of elements and fragments. + + +Version 0.3.5 +http://svn.edgewall.org/repos/genshi/tags/0.3.5/ +(Nov 22 2006, from branches/stable/0.3.x) + + * Fix XPath traversal in match templates. Previously, `div/p` would be treated + the same as `div//p`, i.e. it would match all descendants and not just the + immediate children. + * Preserve whitespace in HTML `<pre>` elements also when they contain child
+   elements.
    + * Match templates no longer match their own output (ticket #77).
    + * Blank lines before directives in text templates are now preserved as
    +   expected (ticket #62).
    +
    +
    +Version 0.3.4
    +http://svn.edgewall.org/repos/genshi/tags/0.3.4/
    +(Nov 2 2006, from branches/stable/0.3.x)
    +
    + * The encoding of HTML and XML files, as well as markup and text templates,
    +   can now be specified. Also, the encoding specified in XML declarations is
+   now respected unless an explicit encoding is requested.
    + * Expressions used as arguments for `py:with`, `py:def`, and `py:for`
    +   directives can now contain non-ASCII strings.
    +
    +
    +Version 0.3.3
    +http://svn.edgewall.org/repos/genshi/tags/0.3.3/
    +(Oct 16 2006, from branches/stable/0.3.x)
    +
    + * Fixed bug introduced in 0.3.2 that broke the parsing of templates which
    +   declare the same namespace more than once in a nested fashion.
    + * Fixed the parsing of HTML entity references inside attribute values, both
    +   in the `XMLParser` and the `HTMLParser` classes.
    + * Some changes to usage of absolute vs. relative template paths to ensure that
+   the filename-keyed cache employed by the TemplateLoader doesn't mix up
    +   templates with the same name, but from different subdirectories.
    +
    +
    +Version 0.3.2
    +http://svn.edgewall.org/repos/genshi/tags/0.3.2/
    +(Oct 12 2006, from branches/stable/0.3.x)
    +
    + * Exceptions from templates now contain the absolute path to the template file
    +   when a search path is used. This enables tracebacks to display the line in
    +   question.
    + * The template engine plugin now provides three different types: "genshi" and
    +   "genshi-markup" map to markup templates, while "genshi-text" maps to text
    +   templates.
    + * Fixed the namespace context used by XPath patterns in py:match templates.
+   They were erroneously using the namespace context of the elements being
    +   matched, where they should rather use the context in which they were
    +   defined.
+ * The contents of `<script>` and `<style>` elements are no longer escaped when using the HTML serializer.
+  ... </div>
+  """)
+  >>> sanitize = HTMLSanitizer()
+  >>> print(html | sanitize)
+  <div>
+    <p>Innocent looking text.</p>
+  </div>
+
+In this example, the ``<script>`` element was removed from the output. + + Also, it will normalize any boolean attribute values + that are minimized in HTML, so that for example ``<option selected>`` + becomes ``<option selected="selected">``. + + This serializer supports the use of namespaces for compound documents, for + example to use inline SVG inside an XHTML document. + +``html`` + The ``HTMLSerializer`` produces proper HTML markup. The main differences + compared to ``xhtml`` serialization are that boolean attributes are + minimized, empty tags are not self-closing (so it's ``<br>`` instead of + ``<br />``), and that the contents of ``<script>`` and ``<style>`` elements are not escaped. ') + >>> print(tmpl.generate()) + + +On the other hand, Genshi will always replace two dollar signs in text with a +single dollar sign, so you'll need to use three dollar signs to get two in the +output: + +.. code-block:: pycon + + >>> tmpl = MarkupTemplate('') + >>> print(tmpl.generate()) + + + +.. _`code blocks`: + +Code Blocks +=========== + +Templates also support full Python code blocks, using the ``<?python ?>`` +processing instruction in XML templates: + +.. code-block:: genshi + +
+  <div>
+    <?python
+        from genshi.builder import tag
+        def greeting(name):
+            return 'Hello, %s!' % name
+    ?>
+    ${greeting('world')}
+  </div>
+
+This will produce the following output:
+
+.. code-block:: xml
+
+  <div>
+    Hello, world!
+  </div>
+ +In text templates (although only those using the new syntax introduced in +Genshi 0.5), code blocks use the special ``{% python %}`` directive: + +.. code-block:: genshitext + + {% python + from genshi.builder import tag + def greeting(name): + return 'Hello, %s!' % name + %} + ${greeting('world')} + +This will produce the following output:: + + Hello, world! + + +Code blocks can import modules, define classes and functions, and basically do +anything you can do in normal Python code. What code blocks can *not* do is to +produce content that is emitted directly to the generated output. + +.. note:: Using the ``print`` statement will print to the standard output + stream, just as it does for other Python code in your application. + +Unlike expressions, Python code in ``<?python ?>`` processing instructions can +not use item and attribute access in an interchangeable manner. That means that +“dotted notation” is always attribute access, and vice-versa. + +The support for Python code blocks in templates is not supposed to encourage +mixing application code into templates, which is generally considered bad +design. If you're using many code blocks, that may be a sign that you should +move such code into separate Python modules. + +If you'd rather not allow the use of Python code blocks in templates, you can +simply set the ``allow_exec`` parameter (available on the ``Template`` and the +``TemplateLoader`` initializers) to ``False``. In that case Genshi will raise +a syntax error when a ``<?python ?>`` processing instruction is encountered. +But please note that disallowing code blocks in templates does not turn Genshi +into a sandboxable template engine; there are sufficient ways to do harm even +using plain expressions. + + +.. _`error handling`: + +Error Handling +============== + +By default, Genshi raises an ``UndefinedError`` if a template expression +attempts to access a variable that is not defined: + +..
code-block:: pycon + + >>> from genshi.template import MarkupTemplate + >>> tmpl = MarkupTemplate('<p>${doh}</p>') + >>> tmpl.generate().render('xhtml') + Traceback (most recent call last): + ... + UndefinedError: "doh" not defined + +You can change this behavior by setting the variable lookup mode to "lenient". +In that case, accessing undefined variables returns an `Undefined` object, +meaning that the expression does not fail immediately. See below for details. + +If you need to check whether a variable exists in the template context, use the +defined_ or the value_of_ function described below. To check for existence of +attributes on an object, or keys in a dictionary, use the ``hasattr()``, +``getattr()`` or ``get()`` functions, or the ``in`` operator, just as you would +in regular Python code: + + >>> from genshi.template import MarkupTemplate + >>> tmpl = MarkupTemplate('<p>${defined("doh")}</p>') + >>> print(tmpl.generate().render('xhtml')) + <p>False</p> + +.. note:: Lenient error handling was the default in Genshi prior to version 0.5. + Strict mode was introduced in version 0.4, and became the default in + 0.5. The reason for this change was that the lenient error handling + was masking actual errors in templates, thereby also making it harder + to debug some problems. + + +.. _`lenient`: + +Lenient Mode +------------ + +If you instruct Genshi to use the lenient variable lookup mode, it allows you +to access variables that are not defined, without raising an ``UndefinedError``. + +This mode can be chosen by passing the ``lookup='lenient'`` keyword argument to +the template initializer, or by passing the ``variable_lookup='lenient'`` +keyword argument to the ``TemplateLoader`` initializer: + +.. code-block:: pycon + + >>> from genshi.template import MarkupTemplate + >>> tmpl = MarkupTemplate('<p>${doh}</p>', lookup='lenient') + >>> print(tmpl.generate().render('xhtml')) + <p></p> + +You *will* however get an exception if you try to call an undefined variable, or +do anything else with it, such as accessing its attributes: + +.. code-block:: pycon + + >>> from genshi.template import MarkupTemplate + >>> tmpl = MarkupTemplate('<p>${doh.oops}</p>', lookup='lenient') + >>> print(tmpl.generate().render('xhtml')) + Traceback (most recent call last): + ... + UndefinedError: "doh" not defined + +If you need to know whether a variable is defined, you can check its type +against the ``Undefined`` class, for example in a conditional directive: + +.. code-block:: pycon + + >>> from genshi.template import MarkupTemplate + >>> tmpl = MarkupTemplate('<p>${type(doh) is not Undefined}</p>', + ... lookup='lenient') + >>> print(tmpl.generate().render('xhtml')) + <p>False</p> + +Alternatively, the built-in functions defined_ or value_of_ can be used in this +case. + +Custom Modes +------------ + +In addition to the built-in "lenient" and "strict" modes, it is also possible to +use a custom error handling mode. For example, you could use lenient error +handling in a production environment, while also logging a warning when an +undefined variable is referenced. + +See the API documentation of the ``genshi.template.eval`` module for details. + + +Built-in Functions & Types +========================== + +The following functions and types are available by default in template code, in +addition to the standard built-ins that are available to all Python code. + +.. _`defined`: + +``defined(name)`` +----------------- +This function determines whether a variable of the specified name exists in +the context data, and returns ``True`` if it does. + +.. _`value_of`: + +``value_of(name, default=None)`` +-------------------------------- +This function returns the value of the variable with the specified name if +such a variable is defined, and returns the value of the ``default`` +parameter if no such variable is defined. + +.. _`Markup`: + +``Markup(text)`` +---------------- +The ``Markup`` type marks a given string as being safe for inclusion in markup, +meaning it will *not* be escaped in the serialization stage. Use this with care, +as not escaping a user-provided string may allow malicious users to open your +web site to cross-site scripting attacks. + +.. _`Undefined`: + +``Undefined`` +---------------- +The ``Undefined`` type can be used to check whether a referenced variable is +defined, as explained in `error handling`_. + + +.. _`directives`: + +------------------- +Template Directives +------------------- + +Directives provide control flow functionality for templates, such as conditions +or iteration.
As the syntax for directives depends on whether you're +using markup or text templates, refer to the +`XML Template Language <xml-templates.html>`_ or +`Text Template Language <text-templates.html>`_ pages for information. diff --git a/lib/genshi/doc/text-templates.txt b/lib/genshi/doc/text-templates.txt new file mode 100644 --- /dev/null +++ b/lib/genshi/doc/text-templates.txt @@ -0,0 +1,360 @@ +.. -*- mode: rst; encoding: utf-8 -*- + +============================= +Genshi Text Template Language +============================= + +In addition to the XML-based template language, Genshi provides a simple +text-based template language, intended for basic plain text generation needs. +The language is similar to the Django_ template language. + +This document describes the template language and will be most useful as a +reference to those developing Genshi text templates. Templates are text files of +some kind that include processing directives_ that affect how the template is +rendered, and template expressions that are dynamically substituted by +variable data. + +See `Genshi Templating Basics <templates.html>`_ for general information on +embedding Python code in templates. + +.. note:: Actually, Genshi currently has two different syntaxes for text + templates: One implemented by the class ``OldTextTemplate`` + and another implemented by ``NewTextTemplate``. This documentation + concentrates on the latter, which is planned to completely replace the + older syntax. The older syntax is briefly described under legacy_. + +.. _django: http://www.djangoproject.com/ + +.. contents:: Contents + :depth: 3 +.. sectnum:: + + +.. _`directives`: + +------------------- +Template Directives +------------------- + +Directives are template commands enclosed by ``{% ... %}`` characters. They can +affect how the template is rendered in a number of ways: Genshi provides +directives for conditionals and looping, among others. + +Each directive must be terminated using an ``{% end %}`` marker.
You can add +a string inside the ``{% end %}`` marker, for example to document which +directive is being closed, or even the expression associated with that +directive. Any text after ``end`` inside the delimiters is ignored, and +effectively treated as a comment. + +If you want to include a literal delimiter in the output, you need to escape it +by prepending a backslash character (``\``). + + +Conditional Sections +==================== + +.. _`if`: + +``{% if %}`` +------------ + +The content is only rendered if the expression evaluates to a truth value: + +.. code-block:: genshitext + + {% if foo %} + ${bar} + {% end %} + +Given the data ``foo=True`` and ``bar='Hello'`` in the template context, this +would produce:: + + Hello + + +.. _`choose`: +.. _`when`: +.. _`otherwise`: + +``{% choose %}`` +---------------- + +The ``choose`` directive, in combination with the directives ``when`` and +``otherwise``, provides advanced conditional processing for rendering one of +several alternatives. The first matching ``when`` branch is rendered, or, if +no ``when`` branch matches, the ``otherwise`` branch is rendered. + +If the ``choose`` directive has no argument the nested ``when`` directives will +be tested for truth: + +.. code-block:: genshitext + + The answer is: + {% choose %} + {% when 0 == 1 %}0{% end %} + {% when 1 == 1 %}1{% end %} + {% otherwise %}2{% end %} + {% end %} + +This would produce the following output:: + + The answer is: + 1 + +If the ``choose`` does have an argument, the nested ``when`` directives will +be tested for equality to the parent ``choose`` value: + +.. code-block:: genshitext + + The answer is: + {% choose 1 %}\ + {% when 0 %}0{% end %}\ + {% when 1 %}1{% end %}\ + {% otherwise %}2{% end %}\ + {% end %} + +This would produce the following output:: + + The answer is: + 1 + + +Looping +======= + +.. _`for`: + +``{% for %}`` +------------- + +The content is repeated for every item in an iterable: + +..
code-block:: genshitext + + Your items: + {% for item in items %}\ + * ${item} + {% end %} + +Given ``items=[1, 2, 3]`` in the context data, this would produce:: + + Your items: + * 1 + * 2 + * 3 + + +Snippet Reuse +============= + +.. _`def`: +.. _`macros`: + +``{% def %}`` +------------- + +The ``def`` directive can be used to create macros, i.e. snippets of template +text that have a name and optionally some parameters, and that can be inserted +in other places: + +.. code-block:: genshitext + + {% def greeting(name) %} + Hello, ${name}! + {% end %} + ${greeting('world')} + ${greeting('everyone else')} + +The above would be rendered to:: + + Hello, world! + Hello, everyone else! + +If a macro doesn't require parameters, it can be defined without the +parentheses. For example: + +.. code-block:: genshitext + + {% def greeting %} + Hello, world! + {% end %} + ${greeting()} + +The above would be rendered to:: + + Hello, world! + + +.. _includes: +.. _`include`: + +``{% include %}`` +----------------- + +To reuse common parts of template text across template files, you can include +other files using the ``include`` directive: + +.. code-block:: genshitext + + {% include base.txt %} + +Any content included this way is inserted into the generated output. The +included template sees the context data as it exists at the point of the +include. `Macros`_ in the included template are also available to the including +template after the point it was included. + +Include paths are relative to the filename of the template currently being +processed. So if the example above was in the file "``myapp/mail.txt``" +(relative to the template search path), the include directive would look for +the included file at "``myapp/base.txt``". You can also use Unix-style +relative paths, for example "``../base.txt``" to look in the parent directory.
+ +Just like other directives, the argument to the ``include`` directive accepts +any Python expression, so the path to the included template can be determined +dynamically: + +.. code-block:: genshitext + + {% include ${'%s.txt' % filename} %} + +Note that a ``TemplateNotFound`` exception is raised if an included file can't +be found. + +.. note:: The include directive for text templates was added in Genshi 0.5. + + +Variable Binding +================ + +.. _`with`: + +``{% with %}`` +-------------- + +The ``{% with %}`` directive lets you assign expressions to variables, which can +be used to make expressions inside the directive less verbose and more +efficient. For example, if you need to use the expression ``author.posts`` more +than once, and that actually results in a database query, assigning the results +to a variable using this directive would probably help. + +For example: + +.. code-block:: genshitext + + Magic numbers! + {% with y=7; z=x+10 %} + $x $y $z + {% end %} + +Given ``x=42`` in the context data, this would produce:: + + Magic numbers! + 42 7 52 + +Note that if a variable of the same name already existed outside of the scope +of the ``with`` directive, it will **not** be overwritten. Instead, it will +have the same value it had prior to the ``with`` assignment. Effectively, +this means that variables are immutable in Genshi. + + +.. _whitespace: + +--------------------------- +White-space and Line Breaks +--------------------------- + +Note that space or line breaks around directives are never automatically removed. +Consider the following example: + +.. code-block:: genshitext + + {% for item in items %} + {% if item.visible %} + ${item} + {% end %} + {% end %} + +This will result in two empty lines above and beneath every item, plus the +spaces used for indentation. If you want to suppress a line break, simply end +the line with a backslash: + +..
code-block:: genshitext + + {% for item in items %}\ + {% if item.visible %}\ + ${item} + {% end %}\ + {% end %}\ + +Now there would be no empty lines between the items in the output. But you still +get the spaces used for indentation, and because the line breaks are removed, +they actually continue and add up between lines. There are numerous ways to +control white-space in the output while keeping the template readable, such as +moving the indentation into the delimiters, or moving the end delimiter onto the +next line, and so on. + + +.. _comments: + +-------- +Comments +-------- + +Parts in templates can be commented out using the delimiters ``{# ... #}``. +Any content in comments is removed from the output. + +.. code-block:: genshitext + + {# This won't end up in the output #} + This will. + +Just like directive delimiters, these can be escaped by prefixing with a +backslash. + +.. code-block:: genshitext + + \{# This *will* end up in the output, including delimiters #} + This too. + + +.. _legacy: + +--------------------------- +Legacy Text Template Syntax +--------------------------- + +The syntax for text templates was redesigned in version 0.5 of Genshi to make +the language more flexible and powerful. The older syntax is based on directives +placed on lines prefixed with ``#`` characters, similar to e.g. Cheetah_ or Velocity_. + +.. _cheetah: http://cheetahtemplate.org/ +.. _velocity: http://jakarta.apache.org/velocity/ + +A simple template using the old syntax looked like this: + +.. code-block:: genshitext + + Dear $name, + + We have the following items for you: + #for item in items + * $item + #end + + All the best, + Foobar + +Beyond the requirement of putting directives on separate lines prefixed with +``#`` characters, the language itself is very similar to the new one. Except that +comments are lines that start with two ``#`` characters, and a line-break at the +end of a directive is removed automatically. + +..
note:: If you're using this old syntax, it is strongly recommended to + migrate to the new syntax. Simply replace any references to + ``TextTemplate`` by ``NewTextTemplate`` (and also change the + text templates, of course). On the other hand, if you want to stick + with the old syntax for a while longer, replace references to + ``TextTemplate`` by ``OldTextTemplate``; while ``TextTemplate`` is + still an alias for the old language at this point, that will change + in a future release. But also note that the old syntax may be + dropped entirely in a future release. diff --git a/lib/genshi/doc/upgrade.txt b/lib/genshi/doc/upgrade.txt new file mode 100644 --- /dev/null +++ b/lib/genshi/doc/upgrade.txt @@ -0,0 +1,176 @@ +================ +Upgrading Genshi +================ + + +.. contents:: Contents + :depth: 2 +.. sectnum:: + +------------------------------------------------------ +Upgrading from Genshi 0.6.x to the development version +------------------------------------------------------ + +The Genshi development version now supports both Python 2 and Python 3. + +The most noticeable API change in the Genshi development version is that the +default encoding in numerous places is now None (i.e. unicode) instead +of UTF-8. This change was made in order to ease the transition to Python 3 +where strings are unicode strings by default.
+ +If your application relies on the default UTF-8 encoding, a simple way to +have it work with both Genshi 0.6.x and the development version is to specify the +encoding explicitly in calls to the following classes, methods and functions: + +* genshi.HTML +* genshi.Stream.render +* genshi.input.HTMLParser +* genshi.template.MarkupTemplate +* genshi.template.TemplateLoader +* genshi.template.TextTemplate (and genshi.template.NewTextTemplate) +* genshi.template.OldTextTemplate + +Whether you explicitly specify UTF-8 or explicitly specify None (unicode) is +a matter of personal taste, although working with unicode may make porting +your own application to Python 3 easier. + + +------------------------------------ +Upgrading from Genshi 0.5.x to 0.6.x +------------------------------------ + +Required Python Version +----------------------- + +Support for Python 2.3 has been dropped in this release. Python 2.4 is +now the minimum version of Python required to run Genshi. + +XPath +----- + +The XPath engine has been completely overhauled for this version. Some +patterns that previously matched incorrectly will no longer match, and +the other way around. In such cases, the XPath expressions need to be +fixed in your application and templates. + + +------------------------------------ +Upgrading from Genshi 0.4.x to 0.5.x +------------------------------------ + +Error Handling +-------------- + +The default error handling mode has been changed to "strict". This +means that accessing variables not defined in the template data will +now generate an immediate exception, as will accessing object +attributes or dictionary keys that don't exist. If your templates rely +on the old lenient behavior, you can configure Genshi to use that +instead. See the documentation for details on how to do that. But be +warned that lenient error handling may be removed completely in a +future release.
+
+Match Template Processing
+-------------------------
+
+There has also been a subtle change to how ``py:match`` templates are
+processed: in previous versions, all match templates would be applied
+to the content generated by the matching template, and only the
+matching template itself was applied recursively to the original
+content. This behavior resulted in problems with many kinds of
+recursive matching, and hence was changed for 0.5: now, all match
+templates declared before the matching template are applied to the
+original content, and match templates declared after the matching
+template are applied to the generated content. This change should not
+have any effect on most applications, but you may want to check your
+use of match templates to make sure.
+
+Text Templates
+--------------
+
+Genshi 0.5 introduces a new, alternative syntax for text templates,
+which is more flexible and powerful compared to the old syntax. For
+backwards compatibility, this new syntax is not used by default,
+though it will be in a future version. It is recommended that you
+migrate to using this new syntax. To do so, simply rename any
+references in your code to ``TextTemplate`` to ``NewTextTemplate``. To
+explicitly use the old syntax, use ``OldTextTemplate`` instead, so
+that you can be sure you'll be using the same language when the
+default in Genshi is changed (at least until the old implementation is
+completely removed).
+
+``Markup`` Constructor
+----------------------
+
+The ``Markup`` class no longer has a specialized constructor. The old
+(undocumented) constructor provided a shorthand for doing positional
+substitutions. If you have code like this:
+
+.. code-block:: python
+
+  Markup('<b>%s</b>', name)
+
+You must replace it by the more explicit:
+
+.. code-block:: python
+
+  Markup('<b>%s</b>') % name
+
+``Template`` Constructor
+------------------------
+
+The constructor of the ``Template`` class and its subclasses has changed
+slightly: instead of the optional ``basedir`` parameter, it now expects
+an (also optional) ``filepath`` parameter, which specifies the absolute
+path to the template. You probably aren't using those constructors
+directly, anyway, but using the ``TemplateLoader`` API instead.
+
+
+------------------------------------
+Upgrading from Genshi 0.3.x to 0.4.x
+------------------------------------
+
+The modules ``genshi.filters`` and ``genshi.template`` have been
+refactored into packages containing multiple modules. While code using
+the regular APIs should continue to work without problems, you should
+make sure to remove any leftover traces of the files ``filters.py``
+and ``template.py`` in the ``genshi`` package on the installation
+path (including the corresponding ``.pyc`` files). This is not
+necessary when Genshi was installed as a Python egg.
+
+Results of evaluating template expressions are no longer implicitly
+called if they are callable. If you have been using that feature, you
+will need to add the parentheses to actually call the function.
+
+Instances of ``genshi.core.Attrs`` are now immutable. Filters
+manipulating the attributes in a stream may need to be updated. Also,
+the ``Attrs`` class no longer automatically wraps all attribute names
+in ``QName`` objects, so users of the ``Attrs`` class need to do this
+themselves. See the documentation of the ``Attrs`` class for more
+information.
+
+
+---------------------
+Upgrading from Markup
+---------------------
+
+Prior to version 0.3, the name of the Genshi project was "Markup". The
+name change means that you will have to adjust your import statements
+and the namespace URI of XML templates, among other things:
+
+* The package name was changed from "markup" to "genshi".
Please
+  adjust any import statements referring to the old package name.
+* The namespace URI for directives in Genshi XML templates has changed
+  from ``http://markup.edgewall.org/`` to
+  ``http://genshi.edgewall.org/``. Please update the ``xmlns:py``
+  declaration in your template files accordingly.
+
+Furthermore, due to the inclusion of a text-based template language,
+the class::
+
+  markup.template.Template
+
+has been renamed to::
+
+  genshi.template.MarkupTemplate
+
+If you've been using the Template class directly, you'll need to
+update your code (a simple find/replace should do; the API itself
+did not change).
diff --git a/lib/genshi/doc/xml-templates.txt b/lib/genshi/doc/xml-templates.txt
new file mode 100644
--- /dev/null
+++ b/lib/genshi/doc/xml-templates.txt
@@ -0,0 +1,702 @@
+.. -*- mode: rst; encoding: utf-8 -*-
+
+============================
+Genshi XML Template Language
+============================
+
+Genshi provides an XML-based template language that is heavily inspired by
+Kid_, which in turn was inspired by a number of existing template languages,
+namely XSLT_, TAL_, and PHP_.
+
+.. _kid: http://kid-templating.org/
+.. _python: http://www.python.org/
+.. _xslt: http://www.w3.org/TR/xslt
+.. _tal: http://www.zope.org/Wikis/DevSite/Projects/ZPT/TAL
+.. _php: http://www.php.net/
+
+This document describes the template language and will be most useful as
+reference to those developing Genshi XML templates. Templates are XML files of
+some kind (such as XHTML) that include processing directives_ (elements or
+attributes identified by a separate namespace) that affect how the template is
+rendered, and template expressions that are dynamically substituted by
+variable data.
+
+See `Genshi Templating Basics `_ for general information on
+embedding Python code in templates.
+
+
+.. contents:: Contents
+   :depth: 3
+.. sectnum::
+
+
+.. _`directives`:
+
+-------------------
+Template Directives
+-------------------
+
+Directives are elements and/or attributes in the template that are identified
+by the namespace ``http://genshi.edgewall.org/``. They can affect how the
+template is rendered in a number of ways: Genshi provides directives for
+conditionals and looping, among others.
+
+To use directives in a template, the namespace must be declared, which is
+usually done on the root element:
+
+.. code-block:: genshi
+
+  <html xmlns="http://www.w3.org/1999/xhtml"
+        xmlns:py="http://genshi.edgewall.org/">
+    ...
+  </html>
+
+In this example, the default namespace is set to the XHTML namespace, and the
+namespace for Genshi directives is bound to the prefix “py”.
+
+All directives can be applied as attributes, and some can also be used as
+elements. The ``if`` directives for conditionals, for example, can be used in
+both ways:
+
+.. code-block:: genshi
+
+  <html xmlns:py="http://genshi.edgewall.org/">
+    ...
+    <div py:if="foo">
+      <p>Bar</p>
+    </div>
+    ...
+  </html>
+
+This is basically equivalent to the following:
+
+.. code-block:: genshi
+
+  <html xmlns:py="http://genshi.edgewall.org/">
+    ...
+    <py:if test="foo">
+      <div>
+        <p>Bar</p>
+      </div>
+    </py:if>
+    ...
+  </html>
+
+
+The rationale behind the second form is that directives do not always map
+naturally to elements in the template. In such cases, the ``py:strip``
+directive can be used to strip off the unwanted element, or the directive can
+simply be used as an element.
+
+
+Conditional Sections
+====================
+
+.. _`py:if`:
+
+``py:if``
+---------
+
+The element and its content is only rendered if the expression evaluates to a
+truth value:
+
+.. code-block:: genshi
+
+  <div py:if="foo">
+    ${bar}
+  </div>
    + +Given the data ``foo=True`` and ``bar='Hello'`` in the template context, this +would produce: + +.. code-block:: xml + +
+  <div>
+    Hello
+  </div>
    + +But setting ``foo=False`` would result in the following output: + +.. code-block:: xml + +
    +
    + +This directive can also be used as an element: + +.. code-block:: genshi + +
+  <div>
+    <py:if test="foo">
+      ${bar}
+    </py:if>
+  </div>
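Conceptually, ``py:if`` simply guards the serialization of its element: when the test is false, neither the element nor its content reaches the output. A rough plain-Python analogy (an illustration only, not Genshi's internals; `render_div` is an invented name):

```python
from html import escape


def render_div(foo, bar):
    # Rough analogy for: <div py:if="foo">${bar}</div>
    if not foo:
        return ''  # the whole element is skipped
    return '<div>%s</div>' % escape(str(bar))


print(render_div(True, 'Hello'))   # <div>Hello</div>
print(render_div(False, 'Hello'))  # prints an empty line
```

Note that, as in real templates, the substituted value is escaped, so markup in the data cannot break the document structure.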
    + +.. _`py:choose`: +.. _`py:when`: +.. _`py:otherwise`: + +``py:choose`` +------------- + +The ``py:choose`` directive, in combination with the directives ``py:when`` +and ``py:otherwise`` provides advanced conditional processing for rendering one +of several alternatives. The first matching ``py:when`` branch is rendered, or, +if no ``py:when`` branch matches, the ``py:otherwise`` branch is rendered. + +If the ``py:choose`` directive is empty the nested ``py:when`` directives will +be tested for truth: + +.. code-block:: genshi + +
+  <div py:choose="">
+    <span py:when="0 == 1">0</span>
+    <span py:when="1 == 1">1</span>
+    <span py:otherwise="">2</span>
+  </div>
    + +This would produce the following output: + +.. code-block:: xml + +
+  <div>
+    <span>1</span>
+  </div>
    + +If the ``py:choose`` directive contains an expression the nested ``py:when`` +directives will be tested for equality to the parent ``py:choose`` value: + +.. code-block:: genshi + +
+  <div py:choose="1">
+    <span py:when="0">0</span>
+    <span py:when="1">1</span>
+    <span py:otherwise="">2</span>
+  </div>
    + +This would produce the following output: + +.. code-block:: xml + +
+  <div>
+    <span>1</span>
+  </div>
+
+These directives can also be used as elements:
+
+.. code-block:: genshi
+
+  <div py:choose="1">
+    <py:when test="0">0</py:when>
+    <py:when test="1">1</py:when>
+    <py:otherwise>2</py:otherwise>
+  </div>
+
+
+Looping
+=======
+
+.. _`py:for`:
+
+``py:for``
+----------
+
+The element is repeated for every item in an iterable:
+
+.. code-block:: genshi
+
+  <ul>
+    <li py:for="item in items">${item}</li>
+  </ul>
+  <ul>
+    <li>1</li>
+    <li>2</li>
+    <li>3</li>
+  </ul>
    + +This directive can also be used as an element: + +.. code-block:: genshi + +
+  <ul>
+    <py:for each="item in items">
+      <li>${item}</li>
+    </py:for>
+  </ul>
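In plain Python terms, the repetition that ``py:for`` performs amounts to rendering one copy of the element per item and concatenating the results. A small stdlib-only sketch of that idea (an analogy, not Genshi's implementation; `render_list` is an invented helper):

```python
from html import escape


def render_list(items):
    # Rough analogy for: <ul><li py:for="item in items">${item}</li></ul>
    return '<ul>%s</ul>' % ''.join(
        '<li>%s</li>' % escape(str(item)) for item in items)


print(render_list([1, 2, 3]))  # <ul><li>1</li><li>2</li><li>3</li></ul>
```

An empty iterable simply produces an empty `<ul></ul>`, matching how the directive repeats its element zero times.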
    + + +Snippet Reuse +============= + +.. _`py:def`: +.. _`macros`: + +``py:def`` +---------- + +The ``py:def`` directive can be used to create macros, i.e. snippets of +template code that have a name and optionally some parameters, and that can be +inserted in other places: + +.. code-block:: genshi + +
+  <div>
+    <p py:def="greeting(name)">
+      Hello, ${name}!
+    </p>
+    ${greeting('world')}
+    ${greeting('everyone else')}
+  </div>
    + +The above would be rendered to: + +.. code-block:: xml + +
+  <div>
+    <p>
+      Hello, world!
+    </p>
+    <p>
+      Hello, everyone else!
+    </p>
+  </div>
+
+If a macro doesn't require parameters, it can be defined without the
+parentheses. For example:
+
+.. code-block:: genshi
+
+  <div>
+    <p py:def="greeting">
+      Hello, world!
+    </p>
+    ${greeting()}
+  </div>
    + +The above would be rendered to: + +.. code-block:: xml + +
+  <div>
+    <p>
+      Hello, world!
+    </p>
+  </div>
    + +This directive can also be used as an element: + +.. code-block:: genshi + +
+  <div>
+    <py:def function="greeting(name)">
+      <p>Hello, ${name}!</p>
+    </py:def>
+  </div>
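Macros defined with ``py:def`` behave much like ordinary functions that return a rendered snippet: they take arguments, substitute them into the body, and can be called any number of times. As a plain-Python analogy (illustration only; this is not how Genshi evaluates macros internally):

```python
from html import escape


def greeting(name):
    # Rough analogy for: <p py:def="greeting(name)">Hello, ${name}!</p>
    return '<p>Hello, %s!</p>' % escape(str(name))


print(greeting('world'))          # <p>Hello, world!</p>
print(greeting('everyone else'))  # <p>Hello, everyone else!</p>
```

Each call yields an independent copy of the snippet with the argument substituted, which is exactly the pattern shown in the rendered output above.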
    + + +.. _Match Templates: +.. _`py:match`: + +``py:match`` +------------ + +This directive defines a *match template*: given an XPath expression, it +replaces any element in the template that matches the expression with its own +content. + +For example, the match template defined in the following template matches any +element with the tag name “greeting”: + +.. code-block:: genshi + +
+  <div>
+    <span py:match="greeting">
+      Hello ${select('@name')}
+    </span>
+    <greeting name="Dude" />
+  </div>
    + +This would result in the following output: + +.. code-block:: xml + +
+  <div>
+    <span>
+      Hello Dude
+    </span>
+  </div>
    + +Inside the body of a ``py:match`` directive, the ``select(path)`` function is +made available so that parts or all of the original element can be incorporated +in the output of the match template. See `Using XPath`_ for more information +about this function. + +.. _`Using XPath`: streams.html#using-xpath + +Match templates are applied both to the original markup as well to the +generated markup. The order in which they are applied depends on the order +they are declared in the template source: a match template defined after +another match template is applied to the output generated by the first match +template. The match templates basically form a pipeline. + +This directive can also be used as an element: + +.. code-block:: genshi + +
+  <div>
+    <py:match path="greeting">
+      <span>Hello ${select('@name')}</span>
+    </py:match>
+    <greeting name="Dude" />
+  </div>
    + +When used this way, the ``py:match`` directive can also be annotated with a +couple of optimization hints. For example, the following informs the matching +engine that the match should only be applied once: + +.. code-block:: genshi + + + + + ${select("*|text()")} + + + + +The following optimization hints are recognized: + ++---------------+-----------+-----------------------------------------------+ +| Attribute | Default | Description | ++===============+===========+===============================================+ +| ``buffer`` | ``true`` | Whether the matched content should be | +| | | buffered in memory. Buffering can improve | +| | | performance a bit at the cost of needing more | +| | | memory during rendering. Buffering is | +| | | ''required'' for match templates that contain | +| | | more than one invocation of the ``select()`` | +| | | function. If there is only one call, and the | +| | | matched content can potentially be very long, | +| | | consider disabling buffering to avoid | +| | | excessive memory use. | ++---------------+-----------+-----------------------------------------------+ +| ``once`` | ``false`` | Whether the engine should stop looking for | +| | | more matching elements after the first match. | +| | | Use this on match templates that match | +| | | elements that can only occur once in the | +| | | stream, such as the ```` or ```` | +| | | elements in an HTML template, or elements | +| | | with a specific ID. | ++---------------+-----------+-----------------------------------------------+ +| ``recursive`` | ``true`` | Whether the match template should be applied | +| | | to its own output. Note that ``once`` implies | +| | | non-recursive behavior, so this attribute | +| | | only needs to be set for match templates that | +| | | don't also have ``once`` set. | ++---------------+-----------+-----------------------------------------------+ + +.. note:: The ``py:match`` optimization hints were added in the 0.5 release. 
In
+   earlier versions, the attributes have no effect.
+
+
+Variable Binding
+================
+
+.. _`py:with`:
+
+``py:with``
+-----------
+
+The ``py:with`` directive lets you assign expressions to variables, which can
+be used to make expressions inside the directive less verbose and more
+efficient. For example, if you need to use the expression ``author.posts``
+more than once, and that actually results in a database query, assigning the
+results to a variable using this directive would probably help.
+
+For example:
+
+.. code-block:: genshi
+
+  <div>
+    <span py:with="y=7; z=x+10">$x $y $z</span>
+  </div>
    + +Given ``x=42`` in the context data, this would produce: + +.. code-block:: xml + +
+  <div>
+    <span>42 7 52</span>
+  </div>
    + +This directive can also be used as an element: + +.. code-block:: genshi + +
+  <div>
+    <py:with vars="y=7; z=x+10">$x $y $z</py:with>
+  </div>
    + +Note that if a variable of the same name already existed outside of the scope +of the ``py:with`` directive, it will **not** be overwritten. Instead, it +will have the same value it had prior to the ``py:with`` assignment. +Effectively, this means that variables are immutable in Genshi. + + +Structure Manipulation +====================== + +.. _`py:attrs`: + +``py:attrs`` +------------ + +This directive adds, modifies or removes attributes from the element: + +.. code-block:: genshi + +
+  <ul>
+    <li py:attrs="foo">Bar</li>
+  </ul>
+  <ul>
+    <li class="collapse">Bar</li>
+  </ul>
+  <ul>
+    <li>Bar</li>
+  </ul>
    + +This directive can only be used as an attribute. + + +.. _`py:content`: + +``py:content`` +-------------- + +This directive replaces any nested content with the result of evaluating the +expression: + +.. code-block:: genshi + +
+  <ul>
+    <li py:content="bar">Hello</li>
+  </ul>
    + +Given ``bar='Bye'`` in the context data, this would produce: + +.. code-block:: xml + +
+  <ul>
+    <li>Bye</li>
+  </ul>
    + +This directive can only be used as an attribute. + + +.. _`py:replace`: + +``py:replace`` +-------------- + +This directive replaces the element itself with the result of evaluating the +expression: + +.. code-block:: genshi + +
+  <div>
+    <span py:replace="bar">Hello</span>
+  </div>
    + +Given ``bar='Bye'`` in the context data, this would produce: + +.. code-block:: xml + +
+  <div>
+    Bye
+  </div>
    + +This directive can also be used as an element (since version 0.5): + +.. code-block:: genshi + +
+  <div>
+    <py:replace value="bar">Placeholder</py:replace>
+  </div>
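The contrast between ``py:content`` and ``py:replace`` is easiest to see side by side: the former keeps the element and swaps its children, the latter discards the element entirely. A small plain-Python sketch of the distinction (an analogy with invented helper names, not Genshi's API):

```python
from html import escape


def py_content(tag, value):
    # Like py:content: keep the element, replace only its children.
    return '<%s>%s</%s>' % (tag, escape(str(value)), tag)


def py_replace(tag, value):
    # Like py:replace: the tag is discarded; only the value is emitted.
    return escape(str(value))


print(py_content('span', 'Bye'))  # <span>Bye</span>
print(py_replace('span', 'Bye'))  # Bye
```

The `tag` argument is intentionally unused in the second helper: that is the whole point of the directive.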
    + + + +.. _`py:strip`: + +``py:strip`` +------------ + +This directive conditionally strips the top-level element from the output. When +the value of the ``py:strip`` attribute evaluates to ``True``, the element is +stripped from the output: + +.. code-block:: genshi + +
+  <div py:strip="True">
+    <b>foo</b>
+  </div>
    + +This would be rendered as: + +.. code-block:: xml + +
+    <b>foo</b>
    + +As a shorthand, if the value of the ``py:strip`` attribute is empty, that has +the same effect as using a truth value (i.e. the element is stripped). + + +.. _order: + +Processing Order +================ + +It is possible to attach multiple directives to a single element, although not +all combinations make sense. When multiple directives are encountered, they are +processed in the following order: + +#. `py:def`_ +#. `py:match`_ +#. `py:when`_ +#. `py:otherwise`_ +#. `py:for`_ +#. `py:if`_ +#. `py:choose`_ +#. `py:with`_ +#. `py:replace`_ +#. `py:content`_ +#. `py:attrs`_ +#. `py:strip`_ + + +.. _includes: + +-------- +Includes +-------- + +To reuse common snippets of template code, you can include other files using +XInclude_. + +.. _xinclude: http://www.w3.org/TR/xinclude/ + +For this, you need to declare the XInclude namespace (commonly bound to the +prefix “xi”) and use the ```` element where you want the external +file to be pulled in: + +.. code-block:: genshi + + + + ... + + +Include paths are relative to the filename of the template currently being +processed. So if the example above was in the file "``myapp/index.html``" +(relative to the template search path), the XInclude processor would look for +the included file at "``myapp/base.html``". You can also use Unix-style +relative paths, for example "``../base.html``" to look in the parent directory. + +Any content included this way is inserted into the generated output instead of +the ```` element. The included template sees the same context data. +`Match templates`_ and `macros`_ in the included template are also available to +the including template after the point it was included. + +By default, an error will be raised if an included file is not found. If that's +not what you want, you can specify fallback content that should be used if the +include fails. For example, to to make the include above fail silently, you'd +write: + +.. 
code-block:: genshi + + + +See the `XInclude specification`_ for more about fallback content. Note though +that Genshi currently only supports a small subset of XInclude. + +.. _`xinclude specification`: http://www.w3.org/TR/xinclude/ + + +Dynamic Includes +================ + +Incudes in Genshi are fully dynamic: Just like normal attributes, the `href` +attribute accepts expressions, and directives_ can be used on the +```` element just as on any other element, meaning you can do +things like conditional includes: + +.. code-block:: genshi + + + + +Including Text Templates +======================== + +The ``parse`` attribute of the ```` element can be used to specify +whether the included template is an XML template or a text template (using the +new syntax added in Genshi 0.5): + +.. code-block:: genshi + + + +This example would load the ``myscript.js`` file as a ``NewTextTemplate``. See +`text templates`_ for details on the syntax of text templates. + +.. _`text templates`: text-templates.html + + +.. _comments: + +-------- +Comments +-------- + +Normal XML/HTML comment syntax can be used in templates: + +.. code-block:: genshi + + + +However, such comments get passed through the processing pipeline and are by +default included in the final output. If that's not desired, prefix the comment +text with an exclamation mark: + +.. code-block:: genshi + + + +Note that it does not matter whether there's whitespace before or after the +exclamation mark, so the above could also be written as follows: + +.. code-block:: genshi + + diff --git a/lib/genshi/doc/xpath.txt b/lib/genshi/doc/xpath.txt new file mode 100644 --- /dev/null +++ b/lib/genshi/doc/xpath.txt @@ -0,0 +1,101 @@ +.. -*- mode: rst; encoding: utf-8 -*- + +===================== +Using XPath in Genshi +===================== + +Genshi provides basic XPath_ support for matching and querying event streams. + +.. _xpath: http://www.w3.org/TR/xpath + + +.. contents:: Contents + :depth: 2 +.. 
sectnum:: + + +----------- +Limitations +----------- + +Due to the streaming nature of the processing model, Genshi uses only a subset +of the `XPath 1.0`_ language. + +.. _`XPath 1.0`: http://www.w3.org/TR/xpath + +In particular, only the following axes are supported: + +* ``attribute`` +* ``child`` +* ``descendant`` +* ``descendant-or-self`` +* ``self`` + +This means you can't use the ``parent``, ancestor, or sibling axes in Genshi +(the ``namespace`` axis isn't supported either, but what you'd ever need that +for I don't know). Basically, any path expression that would require buffering +of the stream is not supported. + +Predicates are of course supported, but path expressions *inside* predicates +are restricted to attribute lookups (again due to the lack of buffering). + +Most of the XPath functions and operators are supported, however they +(currently) only work inside predicates. The following functions are **not** +supported: + +* ``count()`` +* ``id()`` +* ``lang()`` +* ``last()`` +* ``position()`` +* ``string()`` +* ``sum()`` + +The mathematical operators (``+``, ``-``, ``*``, ``div``, and ``mod``) are not +yet supported, whereas sub-expressions and the various comparison and logical +operators should work as expected. + +You can also use XPath variable references (``$var``) inside predicates. + + +---------------- +Querying Streams +---------------- + +The ``Stream`` class provides a ``select(path)`` function that can be used to +retrieve subsets of the stream: + +.. code-block:: pycon + + >>> from genshi.input import XML + + >>> doc = XML(''' + ... + ... + ... Foo + ... + ... + ... Bar + ... + ... + ... Baz + ... + ... + ... Waz + ... + ... + ... ''') + + >>> print(doc.select('items/item[@status="closed" and ' + ... '(@resolution="invalid" or not(@resolution))]/summary/text()')) + BarBaz + + + +--------------------- +Matching in Templates +--------------------- + +See the directive ``py:match`` in the `XML Template Language Specification`_. + +.. 
_`XML Template Language Specification`: xml-templates.html diff --git a/lib/genshi/examples/basic/kidrun.py b/lib/genshi/examples/basic/kidrun.py new file mode 100755 --- /dev/null +++ b/lib/genshi/examples/basic/kidrun.py @@ -0,0 +1,48 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +import os +import sys +import time + +import kid + +def test(): + base_path = os.path.dirname(os.path.abspath(__file__)) + kid.path = kid.TemplatePath([base_path]) + + ctxt = dict(hello='', hey='ZYX', bozz=None, + items=['Number %d' % num for num in range(1, 15)], + prefix='#') + + start = time.clock() + template = kid.Template(file='test.kid', **ctxt) + print ' --> parse stage: %.4f ms' % ((time.clock() - start) * 1000) + + for output in template.generate(): + sys.stdout.write(output) + print + + times = [] + for i in range(1000): + start = time.clock() + list(template.generate()) + times.append(time.clock() - start) + sys.stdout.write('.') + sys.stdout.flush() + print + + print ' --> render stage: %s ms (average)' % ( + (sum(times) / len(times) * 1000)) + +if __name__ == '__main__': + if '-p' in sys.argv: + import hotshot, hotshot.stats + prof = hotshot.Profile("template.prof") + benchtime = prof.runcall(test) + stats = hotshot.stats.load("template.prof") + stats.strip_dirs() + stats.sort_stats('time', 'calls') + stats.print_stats() + else: + test() diff --git a/lib/genshi/examples/basic/layout.html b/lib/genshi/examples/basic/layout.html new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/basic/layout.html @@ -0,0 +1,15 @@ +
    + + Hello ${hello} + + +
    reference me, please
    +
    + Hello ${name.title()} +
    + + Hello ${select('@name')} + + +
    diff --git a/lib/genshi/examples/basic/layout.kid b/lib/genshi/examples/basic/layout.kid new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/basic/layout.kid @@ -0,0 +1,15 @@ +
    + + Hello ${hello} + + +
    reference me, please
    +
    + Hello ${name.title()} +
    + + Hello ${item.get('name')} + + +
    diff --git a/lib/genshi/examples/basic/run.py b/lib/genshi/examples/basic/run.py new file mode 100755 --- /dev/null +++ b/lib/genshi/examples/basic/run.py @@ -0,0 +1,46 @@ +#!/usr/bin/python +# -*- coding: utf-8 -*- + +import os +import sys +import time + +from genshi.template import TemplateLoader + +def test(): + base_path = os.path.dirname(os.path.abspath(__file__)) + loader = TemplateLoader([base_path], auto_reload=True) + + start = time.clock() + tmpl = loader.load('test.html') + print ' --> parse stage: %.4f ms' % ((time.clock() - start) * 1000) + + data = dict(hello='', skin='default', hey='ZYX', bozz=None, + items=['Number %d' % num for num in range(1, 15)], + prefix='#') + + print tmpl.generate(**data).render(method='html') + + times = [] + for i in range(1000): + start = time.clock() + list(tmpl.generate(**data)) + times.append(time.clock() - start) + sys.stdout.write('.') + sys.stdout.flush() + print + + print ' --> render stage: %s ms (average)' % ( + (sum(times) / len(times) * 1000)) + +if __name__ == '__main__': + if '-p' in sys.argv: + import hotshot, hotshot.stats + prof = hotshot.Profile("template.prof") + benchtime = prof.runcall(test) + stats = hotshot.stats.load("template.prof") + stats.strip_dirs() + stats.sort_stats('time', 'calls') + stats.print_stats() + else: + test() diff --git a/lib/genshi/examples/basic/test.html b/lib/genshi/examples/basic/test.html new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/basic/test.html @@ -0,0 +1,22 @@ + + + + + +
      +
    • Item $prefix${item.split()[-1]}
    • +
    + ${macro1()} ${macro1()} ${macro1()} + ${macro2('john')} + ${macro2('kate', classname='collapsed')} +
    Replace me
    + + + Hello Silicon + + diff --git a/lib/genshi/examples/basic/test.kid b/lib/genshi/examples/basic/test.kid new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/basic/test.kid @@ -0,0 +1,21 @@ + + + +
      +
    • Item $prefix${item.split()[-1]}
    • + XYZ ${hey} +
    + ${macro1()} ${macro1()} ${macro1()} + ${macro2('john')} + ${macro2('kate', classname='collapsed')} +
    Replace me
    + + + Hello Silicon + + diff --git a/lib/genshi/examples/bench/basic.py b/lib/genshi/examples/bench/basic.py new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/basic.py @@ -0,0 +1,204 @@ +# -*- encoding: utf-8 -*- +# Template language benchmarks +# +# Objective: Test general templating features using a small template + +from cgi import escape +import os +from StringIO import StringIO +import sys +import timeit + +__all__ = ['clearsilver', 'mako', 'django', 'kid', 'genshi', 'genshi_text', + 'simpletal'] + +def genshi(dirname, verbose=False): + from genshi.template import TemplateLoader + loader = TemplateLoader([dirname], auto_reload=False) + template = loader.load('template.html') + def render(): + data = dict(title='Just a test', user='joe', + items=['Number %d' % num for num in range(1, 15)]) + return template.generate(**data).render('xhtml') + + if verbose: + print render() + return render + +def genshi_text(dirname, verbose=False): + from genshi.core import escape + from genshi.template import TemplateLoader, NewTextTemplate + loader = TemplateLoader([dirname], auto_reload=False) + template = loader.load('template.txt', cls=NewTextTemplate) + def render(): + data = dict(escape=escape, title='Just a test', user='joe', + items=['Number %d' % num for num in range(1, 15)]) + return template.generate(**data).render('text') + + if verbose: + print render() + return render + +def mako(dirname, verbose=False): + from mako.lookup import TemplateLookup + lookup = TemplateLookup(directories=[dirname], filesystem_checks=False) + template = lookup.get_template('template.html') + def render(): + data = dict(title='Just a test', user='joe', + list_items=['Number %d' % num for num in range(1, 15)]) + return template.render(**data) + if verbose: + print render() + return render + +def cheetah(dirname, verbose=False): + # FIXME: infinite recursion somewhere... WTF? 
+ try: + from Cheetah.Template import Template + except ImportError: + print>>sys.stderr, 'Cheetah not installed, skipping' + return lambda: None + class MyTemplate(Template): + def serverSidePath(self, path): return os.path.join(dirname, path) + filename = os.path.join(dirname, 'template.tmpl') + template = MyTemplate(file=filename) + + def render(): + template = MyTemplate(file=filename, + searchList=[{'title': 'Just a test', 'user': 'joe', + 'items': [u'Number %d' % num for num in range(1, 15)]}]) + return template.respond() + + if verbose: + print render() + return render + +def clearsilver(dirname, verbose=False): + try: + import neo_cgi + except ImportError: + print>>sys.stderr, 'ClearSilver not installed, skipping' + return lambda: None + neo_cgi.update() + import neo_util + import neo_cs + def render(): + hdf = neo_util.HDF() + hdf.setValue('hdf.loadpaths.0', dirname) + hdf.setValue('title', escape('Just a test')) + hdf.setValue('user', escape('joe')) + for num in range(1, 15): + hdf.setValue('items.%d' % (num - 1), escape('Number %d' % num)) + cs = neo_cs.CS(hdf) + cs.parseFile('template.cs') + return cs.render() + + if verbose: + print render() + return render + +def django(dirname, verbose=False): + try: + from django.conf import settings + settings.configure(TEMPLATE_DIRS=[os.path.join(dirname, 'templates')]) + except ImportError: + print>>sys.stderr, 'Django not installed, skipping' + return lambda: None + from django import template, templatetags + from django.template import loader + templatetags.__path__.append(os.path.join(dirname, 'templatetags')) + tmpl = loader.get_template('template.html') + + def render(): + data = {'title': 'Just a test', 'user': 'joe', + 'items': ['Number %d' % num for num in range(1, 15)]} + return tmpl.render(template.Context(data)) + + if verbose: + print render() + return render + +def kid(dirname, verbose=False): + try: + import kid + except ImportError: + print>>sys.stderr, "Kid not installed, skipping" + return 
lambda: None + kid.path = kid.TemplatePath([dirname]) + template = kid.load_template('template.kid').Template + def render(): + return template( + title='Just a test', user='joe', + items=['Number %d' % num for num in range(1, 15)] + ).serialize(output='xhtml') + + if verbose: + print render() + return render + +def simpletal(dirname, verbose=False): + try: + from simpletal import simpleTAL, simpleTALES + except ImportError: + print>>sys.stderr, "SimpleTAL not installed, skipping" + return lambda: None + fileobj = open(os.path.join(dirname, 'base.html')) + base = simpleTAL.compileHTMLTemplate(fileobj) + fileobj.close() + fileobj = open(os.path.join(dirname, 'template.html')) + template = simpleTAL.compileHTMLTemplate(fileobj) + fileobj.close() + def render(): + ctxt = simpleTALES.Context(allowPythonPath=1) + ctxt.addGlobal('base', base) + ctxt.addGlobal('title', 'Just a test') + ctxt.addGlobal('user', 'joe') + ctxt.addGlobal('items', ['Number %d' % num for num in range(1, 15)]) + buf = StringIO() + template.expand(ctxt, buf) + return buf.getvalue() + + if verbose: + print render() + return render + +def run(engines, number=2000, verbose=False): + basepath = os.path.abspath(os.path.dirname(__file__)) + for engine in engines: + dirname = os.path.join(basepath, engine) + if verbose: + print '%s:' % engine.capitalize() + print '--------------------------------------------------------' + else: + print '%s:' % engine.capitalize(), + t = timeit.Timer(setup='from __main__ import %s; render = %s(r"%s", %s)' + % (engine, engine, dirname, verbose), + stmt='render()') + time = t.timeit(number=number) / number + if verbose: + print '--------------------------------------------------------' + print '%.2f ms' % (1000 * time) + if verbose: + print '--------------------------------------------------------' + + +if __name__ == '__main__': + engines = [arg for arg in sys.argv[1:] if arg[0] != '-'] + if not engines: + engines = __all__ + + verbose = '-v' in sys.argv + + if '-p' in 
sys.argv: + import cProfile, pstats + prof = cProfile.Profile() + prof.run('run(%r, number=200, verbose=%r)' % (engines, verbose)) + stats = pstats.Stats(prof) + stats.strip_dirs() + stats.sort_stats('calls') + stats.print_stats(25) + if verbose: + stats.print_callees() + stats.print_callers() + else: + run(engines, verbose=verbose) diff --git a/lib/genshi/examples/bench/bigtable.py b/lib/genshi/examples/bench/bigtable.py new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/bigtable.py @@ -0,0 +1,233 @@ +# -*- encoding: utf-8 -*- +# Template language benchmarks +# +# Objective: Generate a 1000x10 HTML table as fast as possible. +# +# Author: Jonas Borgström + +import cgi +import sys +import timeit +from StringIO import StringIO +from genshi.builder import tag +from genshi.template import MarkupTemplate, NewTextTemplate + +try: + from elementtree import ElementTree as et +except ImportError: + et = None + +try: + import cElementTree as cet +except ImportError: + cet = None + +try: + import neo_cgi, neo_cs, neo_util +except ImportError: + neo_cgi = None + +try: + import kid +except ImportError: + kid = None + +try: + from django.conf import settings + settings.configure() + from django.template import Context as DjangoContext + from django.template import Template as DjangoTemplate +except ImportError: + DjangoContext = DjangoTemplate = None + +try: + from mako.template import Template as MakoTemplate +except ImportError: + MakoTemplate = None + +table = [dict(a=1,b=2,c=3,d=4,e=5,f=6,g=7,h=8,i=9,j=10) + for x in range(1000)] + +genshi_tmpl = MarkupTemplate(""" + + + +
    +
    +""") + +genshi_tmpl2 = MarkupTemplate(""" +$table
    +""") + +genshi_text_tmpl = NewTextTemplate(""" + +{% for row in table %} +{% for c in row.values() %}{% end %} +{% end %} +
    $c
    +""") + +if DjangoTemplate: + django_tmpl = DjangoTemplate(""" + + {% for row in table %} + {% for col in row.values %}{{ col|escape }}{% endfor %} + {% endfor %} +
    + """) + + def test_django(): + """Django template""" + context = DjangoContext({'table': table}) + django_tmpl.render(context) + +if MakoTemplate: + mako_tmpl = MakoTemplate(""" + + % for row in table: + + % for col in row.values(): + + % endfor + + % endfor +
    ${ col | h }
    +""") + def test_mako(): + """Mako Template""" + mako_tmpl.render(table=table) + +def test_genshi(): + """Genshi template""" + stream = genshi_tmpl.generate(table=table) + stream.render('html', strip_whitespace=False) + +def test_genshi_text(): + """Genshi text template""" + stream = genshi_text_tmpl.generate(table=table) + stream.render('text') + +def test_genshi_builder(): + """Genshi template + tag builder""" + stream = tag.TABLE([ + tag.tr([tag.td(c) for c in row.values()]) + for row in table + ]).generate() + stream = genshi_tmpl2.generate(table=stream) + stream.render('html', strip_whitespace=False) + +def test_builder(): + """Genshi tag builder""" + stream = tag.TABLE([ + tag.tr([ + tag.td(c) for c in row.values() + ]) + for row in table + ]).generate() + stream.render('html', strip_whitespace=False) + +if kid: + kid_tmpl = kid.Template(""" + + + +
    +
    + """) + + kid_tmpl2 = kid.Template(""" + $table + """) + + def test_kid(): + """Kid template""" + kid_tmpl.table = table + kid_tmpl.serialize(output='html') + + if cet: + def test_kid_et(): + """Kid template + cElementTree""" + _table = cet.Element('table') + for row in table: + td = cet.SubElement(_table, 'tr') + for c in row.values(): + cet.SubElement(td, 'td').text=str(c) + kid_tmpl2.table = _table + kid_tmpl2.serialize(output='html') + +if et: + def test_et(): + """ElementTree""" + _table = et.Element('table') + for row in table: + tr = et.SubElement(_table, 'tr') + for c in row.values(): + et.SubElement(tr, 'td').text=str(c) + et.tostring(_table) + +if cet: + def test_cet(): + """cElementTree""" + _table = cet.Element('table') + for row in table: + tr = cet.SubElement(_table, 'tr') + for c in row.values(): + cet.SubElement(tr, 'td').text=str(c) + cet.tostring(_table) + +if neo_cgi: + def test_clearsilver(): + """ClearSilver""" + hdf = neo_util.HDF() + for i, row in enumerate(table): + for j, c in enumerate(row.values()): + hdf.setValue("rows.%d.cell.%d" % (i, j), cgi.escape(str(c))) + + cs = neo_cs.CS(hdf) + cs.parseStr(""" + +
    """) + cs.render() + + +def run(which=None, number=10): + tests = ['test_builder', 'test_genshi', 'test_genshi_text', + 'test_genshi_builder', 'test_mako', 'test_kid', 'test_kid_et', + 'test_et', 'test_cet', 'test_clearsilver', 'test_django'] + + if which: + tests = filter(lambda n: n[5:] in which, tests) + + for test in [t for t in tests if hasattr(sys.modules[__name__], t)]: + t = timeit.Timer(setup='from __main__ import %s;' % test, + stmt='%s()' % test) + time = t.timeit(number=number) / number + + if time < 0.00001: + result = ' (not installed?)' + else: + result = '%16.2f ms' % (1000 * time) + print '%-35s %s' % (getattr(sys.modules[__name__], test).__doc__, result) + + +if __name__ == '__main__': + which = [arg for arg in sys.argv[1:] if arg[0] != '-'] + + if '-p' in sys.argv: + import cProfile, pstats + prof = cProfile.Profile() + prof.run('run(%r, number=1)' % which) + stats = pstats.Stats(prof) + stats.strip_dirs() + stats.sort_stats('time', 'calls') + stats.print_stats(25) + if '-v' in sys.argv: + stats.print_callees() + stats.print_callers() + else: + run(which) diff --git a/lib/genshi/examples/bench/cheetah/footer.tmpl b/lib/genshi/examples/bench/cheetah/footer.tmpl new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/cheetah/footer.tmpl @@ -0,0 +1,2 @@ + diff --git a/lib/genshi/examples/bench/cheetah/header.tmpl b/lib/genshi/examples/bench/cheetah/header.tmpl new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/cheetah/header.tmpl @@ -0,0 +1,3 @@ + diff --git a/lib/genshi/examples/bench/cheetah/template.tmpl b/lib/genshi/examples/bench/cheetah/template.tmpl new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/cheetah/template.tmpl @@ -0,0 +1,22 @@ + + + + ${title} + + + #include "cheetah/header.tmpl" + +
    Loop
    + #if $items +
      + #for $item in $items +
    $item
    + #end for +
    + #end if + + #include "cheetah/footer.tmpl" + + diff --git a/lib/genshi/examples/bench/clearsilver/footer.cs b/lib/genshi/examples/bench/clearsilver/footer.cs new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/clearsilver/footer.cs @@ -0,0 +1,2 @@ + diff --git a/lib/genshi/examples/bench/clearsilver/header.cs b/lib/genshi/examples/bench/clearsilver/header.cs new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/clearsilver/header.cs @@ -0,0 +1,7 @@ + + + +
    Hello, !
    + diff --git a/lib/genshi/examples/bench/clearsilver/template.cs b/lib/genshi/examples/bench/clearsilver/template.cs new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/clearsilver/template.cs @@ -0,0 +1,25 @@ + + + + <?cs var:title ?> + + + + + + + +
    Loop
    + +
      + + class="last"> + +
    + + + + + diff --git a/lib/genshi/examples/bench/django/templates/base.html b/lib/genshi/examples/bench/django/templates/base.html new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/django/templates/base.html @@ -0,0 +1,18 @@ + + + + {% block content %} + {% block header %} + + {% endblock %} + + {% block footer %} + + {% endblock %} + {% endblock %} + + diff --git a/lib/genshi/examples/bench/django/templates/template.html b/lib/genshi/examples/bench/django/templates/template.html new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/django/templates/template.html @@ -0,0 +1,27 @@ +{% extends "base.html" %} +{% load bench %} + +{% block content %} + + {{title|escape}} + + + + {% block header %}{% endblock %} + +
    {% greeting user %}
    +
    {% greeting "me" %}
    +
    {% greeting "world" %}
    + +
    Loop
    + {% if items %} +
      + {% for item in items %} + {{ item|escape }} + {% endfor %} +
    + {% endif %} + + {% block footer %}{% endblock %} + +{% endblock %} diff --git a/lib/genshi/examples/bench/django/templatetags/__init__.py b/lib/genshi/examples/bench/django/templatetags/__init__.py new file mode 100644 diff --git a/lib/genshi/examples/bench/django/templatetags/bench.py b/lib/genshi/examples/bench/django/templatetags/bench.py new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/django/templatetags/bench.py @@ -0,0 +1,8 @@ +from django.template import Library, Node, resolve_variable +from django.utils.html import escape + +register = Library() + +def greeting(name): + return 'Hello, %s!' % escape(name) +greeting = register.simple_tag(greeting) diff --git a/lib/genshi/examples/bench/genshi/base.html b/lib/genshi/examples/bench/genshi/base.html new file mode 100644 --- /dev/null +++ b/lib/genshi/examples/bench/genshi/base.html @@ -0,0 +1,17 @@ + + +
    + Hello, ${name}! +
    + + + + ${select('*')} +
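Both `run()` drivers in the benchmark scripts above share the same timing pattern: build a zero-argument render callable, time it with `timeit`, and report the mean per-call time in milliseconds. A minimal standalone sketch of that pattern (the template string and the `make_render`/`bench` names are placeholders for illustration, not taken from the Genshi sources):

```python
import timeit

def make_render():
    # Stand-in for a compiled template plus its context, mirroring
    # how each engine-specific setup function returns a render callable.
    template = 'Number %d'
    items = range(1, 15)
    def render():
        return '\n'.join(template % num for num in items)
    return render

def bench(number=2000):
    render = make_render()
    # timeit.Timer also accepts a zero-argument callable directly,
    # avoiding the string-based setup/stmt used in the scripts above.
    t = timeit.Timer(stmt=render)
    per_call = t.timeit(number=number) / number
    return '%.2f ms' % (1000 * per_call)

print(bench())
```

Dividing `t.timeit(number=number)` by `number` is what turns the total wall-clock time into the per-render figure the benchmarks print.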